With the public key infrastructure in use today, our defense against man-in-the-middle attacks on SSL sessions is limited to certificates issued by trusted certificate authorities. If a session is initiated with a server that isn't using a certificate signed by a trusted CA, and the server's authenticity is not verified, then there is no guarantee that your transactions with that server are secure.
In a man-in-the-middle attack, an attacker routes all traffic between the server and client through their own system and captures all the data sent. If the connection is encrypted, the attacker must decrypt the data before being able to access it. But if authentication was compromised beforehand, or the certificate was never verified for authenticity, then the attacker can easily gain access to all of the encrypted data.
At Carnegie Mellon University, researchers have developed a possible way to ensure that transactions over the internet with unverified servers are secure. The method is called "Perspectives." It uses a system of notaries that maintains a history of the public keys each server has used. The client can then verify that the key it received is the same key the notaries have observed from that server. For a man-in-the-middle attack to succeed, the attacker would need to successfully intercept both the client's connection and the notary's connection to the server. This makes the attack very difficult to pull off.
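The notary cross-check described above can be sketched in a few lines. This is only an illustration, not the real Perspectives protocol: the data structures, hostnames, and fingerprints below are made up, and the real system signs and serves its key-history records over the network.

```python
# Each "notary" here is just a dict mapping a hostname to the key
# fingerprints it has observed over time (illustrative data only).
NOTARIES = [
    {"example.com": ["ab12cd34", "ab12cd34"]},
    {"example.com": ["ab12cd34"]},
    {"example.com": ["ff99ee88"]},  # this notary observed a different key
]

def key_consistent(host: str, observed_fp: str, notaries, quorum: float = 0.5) -> bool:
    """Accept the server's key only if a quorum of notaries most
    recently observed the same fingerprint for this host."""
    agreeing = sum(
        1 for notary in notaries
        if notary.get(host) and notary[host][-1] == observed_fp
    )
    return agreeing / len(notaries) > quorum

# The client compares the key the server just presented against the
# notaries' independent observations.
print(key_consistent("example.com", "ab12cd34", NOTARIES))  # 2 of 3 agree
print(key_consistent("example.com", "ff99ee88", NOTARIES))  # only 1 of 3
```

An attacker sitting between one client and the server can forge the key the client sees, but cannot easily change what a quorum of independently located notaries have recorded.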
This method, however, is not a replacement for the current PKI system. Perspectives assumes that the man in the middle (MITM) is close to one observer, either the client or a notary; for example, the MITM might be on the same subnet as the client. But if the MITM is close enough to the server that both the client and the notaries must pass through it to reach the server, then the attack may succeed without Perspectives being able to detect it.
The Perspectives system also needs to maintain a history of public keys in order to work. So while the chances are slim, if a MITM server were to go up at the same time as the notary server, the notary's record would be compromised from the start.
For the time being, SSL CAs and PKI are here to stay and remain the most efficient and effective way to secure trusted transactions between clients and servers. But Perspectives offers an intuitive way to verify otherwise untrusted certificates, and it will be interesting to see if this method takes off in the future.
In case you needed further confirmation that the internet is not a safe place, an exploit in the Border Gateway Protocol can be used to divert internet traffic to another location. This can be done from anywhere on the internet and does not require the attacker to be within the same subnet. This was demonstrated at the DEFCON security conference in August.
While this is not a new discovery, the recent demonstration helps show how unstable and insecure the core infrastructure of the internet really is. Not only are higher-level applications like DNS vulnerable, but the lower-level protocols also have design flaws that can be taken advantage of. Experts in the field are calling for changes to internet routing and have been issuing warnings for years. A newer secure protocol, S-BGP, is a possible solution that could be deployed, yet there are still issues that need to be worked out regarding its deployment and operation.
For now, however, the only way to get any privacy over the internet is to use point-to-point encryption, such as a VPN tunnel. Send data over the internet without encryption and you risk compromising it.
So I was reading more of the latest information about the number of security breaches this year. I came across an article by George Hulme over at Information Week about why additional laws are needed for data protection compliance, particularly in the health care industry. HIPAA policies are beginning to be enforced, but it will be a while before we start seeing accurate reports on the number of security breaches.
There has been better security compliance over the last few years, but there is still much more work to be done. Many industries need the same kind of attention the financial sector has received. Hulme mentions how far the health care industry is behind the financial industry in compliance. I believe part of the reason is that the health care industry is generally behind the financial industry when it comes to technology.
While there have been strides made in the health care industry with HIPAA policies, there is still a ways to go with enforcing and auditing those standards. It was only last year (2007) that the Department of Health and Human Services conducted its first audit. As always with audits, it will take a while until all the kinks are worked out and the final reports become truly accurate. This can clearly be seen in the financial sector's latest report of 2008's security breach numbers, and SOX auditing has been around longer than HIPAA policy auditing.
The health care industry also reaches sectors where network technology is underutilized. This makes it hard to give accurate numbers on data breaches due to malicious software and attacks. Many private doctors' offices don't invest much of their resources in technology, and because few offices are making a big push for it, other offices don't feel the need to make the push themselves. Often what causes a practice or company to upgrade its infrastructure is seeing competitors or partners improve theirs. With much of the industry lagging behind, a large portion still uses paper records as its main data repository.
Many private practices also outsource their patient management systems to third-party companies. This means that patient data is crossing more networks and is thus exposed to more hands, eyes, and network nodes. All this adds to the security risk while also making auditing across separate networks difficult. Many of these smaller practices and health care providers don't have a full-time IT staff, so security is not treated as the ongoing task it should be.
So while adjustments have been made to increase security and privacy, there is still a lazy approach to enforcement in the health care industry. More laws are needed to enforce HIPAA policies; otherwise, this trend will continue.
According to the Identity Theft Resource Center, the number of reported data breaches for 2008 has already passed the total reported for all of 2007. In 2007, the number of data breaches reported was 446. The number for 2008 has already surpassed the 450 mark.
The breaches the ITRC has tracked account for more than 22 million exposed records, and in more than 40% of data breaches the number of records exposed is not fully disclosed. This means that the total number of records exposed by these breaches is incomplete and can't be used for any accurate research.
Not only is the number of records exposed inaccurate, but the actual number of breaches is fairly inaccurate as well. One of the reasons the breach count for 2008 is four months ahead of 2007's is that the ITRC has gotten much better at tracking these data breaches. The actual number of breaches is most likely much higher, due in part to the fact that some breaches are never reported and some multiple-event incidents are recorded as single events. Because of all this, it is hard to say whether the number of data breaches is truly getting worse. A longer recording period is needed to iron out the inaccuracies.
For more information on this report, visit the ITRC’s website.
Multi-factor authentication has been around for a while now, but it has been gaining popularity dramatically in recent years. Multi-factor authentication is the practice of requiring multiple methods of authentication to access specific data on a network or website.
With so many ways to capture a user's password, multi-factor authentication is a way to secure data with more than just a single username and password. If that username and password are compromised, the attacker will still need additional information to access the confidential data or website. Multi-factor authentication doesn't necessarily mean just adding another username and password prompt. It can be a security question or code request as well. For example, once a user logs in with their username and password, the website also asks for the mother's maiden name they entered when originally setting up their account. This additional prompt can hinder an attacker's attempt at accessing the account. Even if they already know the username and password, they will also need to know the user's mother's maiden name.
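The password-plus-security-question flow above can be sketched as a small check where both factors must pass. This is a simplified illustration: the usernames, answers, and in-memory "database" are invented, and a real site would store salted hashes and rate-limit attempts.

```python
import hashlib
import hmac

def _h(secret: str) -> str:
    """Hash a secret so we never compare or store it in plaintext.
    (Real systems would also salt these hashes.)"""
    return hashlib.sha256(secret.encode()).hexdigest()

# Hypothetical stored record for one user.
USER_DB = {
    "alice": {
        "password_hash": _h("correct horse"),
        "question": "Mother's maiden name?",
        "answer_hash": _h("smith"),
    }
}

def authenticate(username: str, password: str, answer: str) -> bool:
    """Both factors must match: the password AND the security answer."""
    record = USER_DB.get(username)
    if record is None:
        return False
    ok_pw = hmac.compare_digest(record["password_hash"], _h(password))
    ok_ans = hmac.compare_digest(record["answer_hash"], _h(answer.lower()))
    return ok_pw and ok_ans

print(authenticate("alice", "correct horse", "Smith"))  # both factors match
print(authenticate("alice", "correct horse", "jones"))  # second factor fails
```

Note the use of `hmac.compare_digest` for the comparisons, which avoids leaking information through timing differences.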
To add further security, one-time passwords may be added to the mix. A one-time password scheme uses a token or other method that generates a new password every minute or so, with only the server knowing what the current password is. The user enters the password shown on the token as a second authentication method. Once used, the password is reset and is never used again. This is a very secure method that prevents many types of intrusion. If an attacker manages to capture the hash from the authentication transaction, by the time they finish decrypting it, the password will already have changed.
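The token-and-server agreement works because both sides derive the code from a shared secret and the current time window. Here is a simplified sketch of that idea (along the lines of the HOTP/TOTP algorithms); the secret and time values are made up for illustration.

```python
import hashlib
import hmac
import struct

def one_time_password(secret: bytes, at_time: int, step: int = 60, digits: int = 6) -> str:
    """Derive a short-lived numeric code from a shared secret and the
    current time window (a simplified HOTP/TOTP-style construction)."""
    counter = at_time // step                      # which 60-second window we are in
    msg = struct.pack(">Q", counter)               # counter as 8 big-endian bytes
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

secret = b"shared-token-secret"  # provisioned on both the token and the server

# Token and server compute the same code inside one 60-second window...
assert one_time_password(secret, 960) == one_time_password(secret, 1010)
# ...and once the window passes, the old code is no longer the current one.
print(one_time_password(secret, 960), one_time_password(secret, 1020))
```

Because the code is bound to the time window, a captured code stops being useful almost immediately, which is exactly the property described above.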
Another technique for multi-factor authentication is the use of biometrics. Biometrics is the use of a scan of some physical characteristic of the user, such as a fingerprint or retina. Making the second factor something specific to that user adds a much deeper, much more secure level of authentication. Combining "what the user knows," "who the user is," and "what the user has" makes it hard for anyone other than that user to access the data.
Multi-factor authentication is being used by more companies every day. It is especially popular with websites. Compliance requirements are another reason for its growing usage; regulatory guidance for online banking pushes financial institutions to utilize some sort of multi-factor authentication when offering online account access to their customers.
Expect multi-factor authentication to become the standard method of authentication in the future. Whether it is just an additional strong password policy, the use of a security question, or biometrics, additional security is a must when securing data that is confidential.
As a security IT professional for a company, one of your main concerns is protecting your company's data. Data loss prevention: really, that is what everything boils down to. We back up our data to protect it. We patch our systems so that data won't be compromised. We audit activity to prevent unauthorized access. Almost every security measure we take is to safeguard the data that our company uses.
But are you doing everything you can to protect data? You may be protecting your data from outside intruders, but what about protecting it from your own employees? For data loss prevention, you must look at every interaction the data has with end-users and also every interaction it has with other network nodes.
Taking the right safeguards for data loss prevention involves using the right tools to not only prevent data loss, but also audit incidents so that you can make adjustments to prevent them in the future. The first investment you should make is an auditing hardware appliance. These appliances gather syslog data, event log data, and other security auditing information from nodes everywhere on the network, then format and supply it to you in report form for review. This will help you analyze your network and point you in the right direction for data loss prevention. For more important security events, the device can send out alerts notifying you of any data leaks, unauthorized data access, or intrusions.
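At its core, the kind of log correlation such an appliance performs is pattern matching plus counting. Here is a minimal sketch of that idea; the log lines, hostnames, and threshold are invented for illustration, and a real appliance would correlate far richer data from many sources.

```python
import re
from collections import Counter

# Illustrative syslog-style lines; a real appliance would collect these
# from many hosts across the network.
LOG_LINES = [
    "Oct 12 03:11:01 fileserver sshd[411]: Failed password for admin from 10.0.0.9",
    "Oct 12 03:11:04 fileserver sshd[411]: Failed password for admin from 10.0.0.9",
    "Oct 12 03:11:08 fileserver sshd[411]: Failed password for admin from 10.0.0.9",
    "Oct 12 09:22:40 fileserver sshd[630]: Accepted password for jsmith from 10.0.0.22",
]

FAILED = re.compile(r"Failed password for (\S+) from (\S+)")

def failed_login_alerts(lines, threshold=3):
    """Return (user, source_ip) pairs with at least `threshold` failed logins."""
    counts = Counter()
    for line in lines:
        match = FAILED.search(line)
        if match:
            counts[match.groups()] += 1
    return [pair for pair, n in counts.items() if n >= threshold]

print(failed_login_alerts(LOG_LINES))  # [('admin', '10.0.0.9')]
```

Repeated failures against one account from one address cross the threshold and surface as an alert, while the single successful login does not.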
Once you know where the main weak areas on your network are, you can take action in locking them down. You will want to do a full permissions audit on your data and ensure that no one has access to data they shouldn't. This doesn't just mean modifying permissions on files on your storage drives; it requires thinking outside the box. For example, if you send backup tapes offsite to a remote location, making sure the data on those tapes is encrypted is important. Without encryption, whoever touches those tapes on their way to and from that remote location has access to all the data on them. If you send your data to a remote location via the network, is that data stream also encrypted? Are thumb drives and writable CDs allowed on the network? If so, are you able to tell if someone copied sensitive information to one?
These are all things you must look at for complete data leak prevention. It is all part of your encompassing network security policy. To prevent data loss, you must track that data wherever it goes. Start with an audit of your network and go from there. Data loss prevention is an ongoing process and should never be taken lightly.
On August 8th I posted about the security issues around Web 2.0 collaboration websites. Businesses are beginning to see many benefits in adding Web 2.0 tools that let employees share information more easily. But anytime information is shared over a network, security is a big concern.
I was reading Information Week and found a rolling review they are doing on several Web 2.0 platforms. They go over several points of interest for Web 2.0 tools: who has access to the data, how secure it is, what the costs are, and how good the vendor's support for the platform is.
Network security is becoming more complex each day, just as hackers are making their attacks more complex. Attacks today are more blended than ever, which means protecting your network requires many layers and different devices protecting each facet of it. For all companies, this requires a lot more money spent not only on the equipment itself, but also on the expertise to implement it. Enter security as a service.
Many companies, in particular small and medium-size businesses, rarely have enough full-time IT staff to manage their security services effectively. But even larger businesses can see benefits in security as a service. IT staff in larger companies are often tasked with multiple roles, which limits their ability to specialize; managing one task often comes at the expense of others. Sending these tasks to a third-party company that specializes in security will not only help increase the overall security of the network, but can also save the company a lot of money.
Managed service companies that offer security as a service provide many different security services for companies to choose from, ranging from complete comprehensive solutions to specific security applications covering antivirus, email protection, perimeter security, VPN solutions, and more. Having some of these services provided by a third party can bring benefits beyond outsourcing itself. For example, having email security come from an outside company often means that email is filtered on their network first and then forwarded to your mail server. This allows you to lock down your perimeter to specific connections, because you know where those connections are coming from.
Security as a service solutions also provide reporting features that encompass all areas of the solution in one place. Oftentimes when building security services in-house, reporting in each area comes from different devices and different applications, which can be an administrative nightmare to keep track of. With security as a service, reporting for each application is usually provided through a single, easy-to-manage web interface.
There are many more benefits to security as a service. It all depends on your business's needs and the resources you have available. Even if the resources are available to you, there are still benefits to be found in a managed security service.
On its enterprise blog on Tuesday, Google said it is going to release a report stating that it saw more spam virus messages in July than in any other month. At one point the volume reached more than 10 million infectious messages in a single day, on July 24th.
That number is 6 to 7 times what is considered normal. This means spam viruses are now finding ways to bypass the majority of spam filters. Not only that, but spam viruses are taking a new form. Rather than targeting software vulnerabilities to compromise a system, they simply exploit a user's curiosity. Subject lines touting false or wild news stories draw users in and get them to click a link that takes them to a website hosting malware.
With so many virus-laden spam messages prevalent today, it is important to remind users to use their common sense when reading email. If they don't know where an email came from, or its subject line is a news headline that sounds ridiculous, then the email is probably a spam virus.
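That common-sense rule, unknown sender plus a sensational subject, can be expressed as a toy filter. This is only an illustration of the heuristic; the phrase list is invented, and real spam filters combine many more signals.

```python
# Illustrative phrase list; a real filter would use far richer signals
# (sender reputation, link analysis, attachment scanning, and so on).
SENSATIONAL = ["shocking", "you won", "breaking news", "click here"]

def likely_spam(sender_known: bool, subject: str) -> bool:
    """Flag mail from unknown senders with sensational subject lines."""
    sensational = any(phrase in subject.lower() for phrase in SENSATIONAL)
    return (not sender_known) and sensational

print(likely_spam(False, "BREAKING NEWS: celebrity spotted on Mars"))  # flagged
print(likely_spam(True, "Meeting notes from Tuesday"))                 # not flagged
```

Either signal alone is weak; it is the combination, a wild headline from someone you don't know, that the advice above is really about.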
More and more companies today are starting to virtualize their server farms. The main drivers are easier deployments, saved rack space, better utilization of existing servers, and lower costs. IT engineers see all the benefits of virtualization, but oftentimes forget to look at some of the downsides and rush their virtualization infrastructure from testing to production way too quickly.
There have been a lot of technological leaps in the virtualization realm in recent years. One aspect of virtualization, however, still has some catching up to do. Security application virtualization is something IT engineers are not fully thinking through during the planning phase, and deployments often go through without the security application software being reviewed. While virtualization will help save on hardware costs in the long run, it can be costly in the short term if security application virtualization is not done correctly.
Security applications need to be installed on each virtual machine. This means that the more VMs you have on a physical server, the more security application licenses and installations you will need. This cost is often overlooked, or the security application is simply not installed on each VM, leaving several virtual servers unsecured. Not only is the monetary cost overlooked, but so is the processing power demanded of each physical server when you multiply the security application's requirements across every VM.
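The per-VM scaling is easy to quantify with back-of-the-envelope arithmetic. The figures below (license price, agent memory footprint) are made-up examples, not real vendor numbers; the point is that both costs grow linearly with VM count.

```python
def agent_overhead(vms_per_host: int, hosts: int, license_cost: int, agent_mem_mb: int) -> dict:
    """Estimate license and memory overhead of installing a security
    agent inside every VM (illustrative numbers only)."""
    licenses = vms_per_host * hosts
    return {
        "licenses_needed": licenses,
        "license_cost_total": licenses * license_cost,
        "memory_per_host_mb": vms_per_host * agent_mem_mb,
    }

# 10 VMs on each of 4 hosts, a hypothetical $40 per agent license,
# and 150 MB of memory per agent.
print(agent_overhead(10, 4, 40, 150))
# {'licenses_needed': 40, 'license_cost_total': 1600, 'memory_per_host_mb': 1500}
```

Four physical servers that once needed four agent licenses now need forty, and each host gives up well over a gigabyte of memory just to run its guests' agents.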
Hopefully there will be some strides made in this realm in the next few years. Security application software developers are working on ways to virtualize their software the same way operating systems are virtualized. This will hopefully lower licensing costs and make for more secure virtualization farms. But don't expect security software vendors to be quick to release virtualized applications when they are making a lot more money on the separate agent licenses needed today. In the meantime, secure your VMs properly with your security applications!