Please visit us at: http://securityblog.astaro.com/ It is the same great content, just in a new forum.
Last week we announced an exciting new offer for all businesses in the US: a free silent business audit with forensic analysis. This service helps network administrators understand how well their current security products are working, so they can improve network security and employee productivity.
The silent business audit and forensic analysis will accomplish this by sitting behind an organization’s normal firewall and monitoring spam, malware and Internet usage trends to determine what is getting past the firewall and spam filters. At the end of the 14-day audit period, Astaro will provide the organization with a report detailing what malware passed through the firewall. As an added bonus, the appliance will also block the transfer of any malware and spyware that makes it past the normal web filter, helping to avoid the spread of infections.
To register for a silent business audit and forensic analysis click here.
By: Tim Cronin
Dark Reading published an article titled “Booming Underground Economy Makes Spam A Hot Commodity, Expert Says” regarding the ease of using botnets for spam activity and how this makes spamming profitable. Some of the more startling statistics show that “For about $10, [a spammer] can send a million emails”. Even if only two people order a product selling for $10, that is $20 in revenue against the $10 cost of the botnet, a 100% return. Assuming the product itself is cheap enough to produce, that is a healthy margin.
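The arithmetic above can be sketched in a few lines. This is a toy model with illustrative numbers only; the function name and parameters are my own, and the $10-per-million figure is the one quoted in the article.

```python
# Toy model of spam campaign economics. All figures are illustrative
# assumptions drawn from the article's example, not real market data.

def spam_profit(cost_per_million, emails_sent, orders, price_per_sale, unit_cost):
    """Return net profit for a hypothetical spam campaign."""
    campaign_cost = cost_per_million * emails_sent / 1_000_000
    revenue = orders * price_per_sale
    return revenue - orders * unit_cost - campaign_cost

# The article's example: $10 sends a million emails; two $10 sales
# already double the money spent on the botnet.
profit = spam_profit(cost_per_million=10, emails_sent=1_000_000,
                     orders=2, price_per_sale=10, unit_cost=0)
print(profit)  # 10.0 -> $20 in revenue against $10 of botnet rental
```

Even at a vanishingly small response rate, the sender comes out ahead, which is exactly why cheap botnets make spam profitable.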
How are botnets so inexpensive, though? And why are there so many available? If you look at Commtouch’s Malware Outbreak Center, you will notice that the vast majority of detected malware seems to be botnet downloaders. Gone are the days when malware consisted of cute “look what I can do” code; we are now in the era of real revenue-generating malware. All a botnet “commander” needs to do is create the code, send it out and let it propagate through the Internet. Eventually, there will be enough zombie hosts to really make money.
The strategies in use now should provide a good enough deterrent to spammers, but there are simply not enough people using them. If host-based malware detection and network-based protections such as IDS/IPS, malware scanning and firewalling were in widespread use, the number of zombies on the Internet would be reduced enough that spamming would no longer be profitable. Then we could look at our inboxes with confidence. We haven’t reached that point yet, because there simply aren’t enough people applying adequate controls to network traffic. According to Commtouch again, zombies are not as common in the Western world as they are in developing nations. Unfortunately for the Western world, we feel the effects of others’ lack of controls.
Judging from all of this information, all the world needs to do to stop spam is make sure we are using the controls already available for our networks. This would make spamming unprofitable and force spammers to apply their tricks elsewhere. Until that day, the back-and-forth between spam and anti-spam will continue.
By: Tim Cronin
On Sunday, the Boston Globe printed a portion of a letter to the editor I sent regarding one of the paper’s articles. The letter discussed the mandating of electronic health records and the importance of security for such records. Below is the complete letter.
One of the hot-button issues facing the country today is healthcare reform. President Obama has identified widespread electronic medical records as a major benchmark towards achieving the goal of affordable health coverage for all. In his article “State helping to shape US efforts to digitize health records for all,” Scott Kirsner did an excellent job describing some of the technologies Massachusetts companies are creating to make universal electronic health records possible. The article neglected, however, to examine the network security concerns of such a system.
One may say, “Moving medical records online will mean less privacy for everybody.” In reality, privacy does not suffer if proper security is in place; moving medical records to electronic storage simply increases the need to secure networks. Records are no less secure when stored electronically, as long as the network is secure. In fact, there are gains in privacy. The biggest risk is that making all records electronic allows a person to attempt to gather information remotely by compromising a network. As long as medical facilities deploy network security technologies and maintain them, this should not be a widespread problem. With paper records, someone who wanted to steal medical information could be successful, but would need to get hold of a physical copy of the record. That means an attacker would need to take a risk and travel to the location where the records are stored. Paper records also pose a risk to patient privacy when medical staff bring records home so they can work outside the hospital. Recently, an employee at a Boston hospital accidentally left records on the “T”. If the records had been accessible electronically through a secure network connection, this wouldn’t have happened.
Electronic medical record keeping also provides for a more secure data backup process. Hospitals using electronic records will need redundant hard drives, servers, data storage and other important infrastructure to ensure medical information is never lost. With all those backups, many fear it will be easier to gain unauthorized access to patient information. In actuality, electronic backups are easier to secure than the current system of paper charts. Today, paper records are sent to storage vendors whose employees have access to the information in clear text. The best security you can provide without destroying the information is to ship the charts in a locked receptacle. In an electronic system, data can be encrypted and stored at vendors’ facilities without fear that the vendor will be able to read it. This improves on the locked receptacle: you can still lock the storage media in a case, and even if that case is compromised, the data remains illegible. You can also deploy hashing functions to ensure that no data is tampered with.
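The tamper-detection idea in the last sentence can be illustrated with a cryptographic hash: store a digest alongside each (already encrypted) record, and any modification at the storage vendor becomes detectable on retrieval. This is a minimal sketch; the record contents and function names are hypothetical, and a production system would also sign or key the digest.

```python
# Minimal sketch of hash-based tamper detection for stored records.
# The record bytes here stand in for an already-encrypted medical record.
import hashlib

def fingerprint(record_bytes: bytes) -> str:
    """Return a hex digest that changes if even one bit of the record changes."""
    return hashlib.sha256(record_bytes).hexdigest()

original = b"ciphertext-of-patient-record"
stored_digest = fingerprint(original)  # kept separately from the vendor's copy

# On retrieval, recompute and compare before trusting the record.
tampered = b"ciphertext-of-patient-recorD"
print(fingerprint(original) == stored_digest)   # True: record is intact
print(fingerprint(tampered) == stored_digest)   # False: tampering detected
```

Combined with encryption, this gives both confidentiality (the vendor cannot read the data) and integrity (any alteration is caught).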
To address one of the biggest fears, properly deployed medical networks will not send information in a manner that is easy for someone to simply capture. With electronic medical records, you will need to make sure there is no path for the records to travel over the open Internet. Instead, records should be sent over secured VPN networks specifically designed to protect this information. Nobody who does not need access should have access to the network. Congress has already acted to ensure that this guideline is followed, through the HIPAA and HITECH acts. However, these acts stop short of dictating security standards and focus instead on the penalties applied if a record is compromised. Creating an electronic medical records system will benefit the healthcare system in America in many ways, including increasing the security of medical records. However, if the country is to move towards mandating electronic medical records, then Congress should pass additional acts establishing security standards.
By: Tim Cronin
There are three measures network administrators can take to avoid the types of network attacks that plagued US and South Korean websites, including www.whitehouse.gov, NASDAQ, NYSE, Yahoo!’s financial page and the Washington Post. The three areas to focus on are network-based mitigation, host-based mitigation and proactive measures.
Network-based mitigation:
- Install an IDS/IPS with the ability to track floods (such as SYN floods, ICMP floods, etc.)
- Install a firewall that can drop packets rather than letting them reach the internal server. The nature of a web server is such that you must allow HTTP to the server from the Internet, so you will need to monitor your server to know where to block traffic.
- Have contact numbers for your ISP’s emergency management (or response) team. You will need to contact them to stop the attack before it reaches your network’s perimeter in the first place.
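The flood-tracking idea in the first bullet can be illustrated with a toy detector: count SYN packets per source inside a time window and flag sources that exceed a threshold. The packet tuples, threshold and window below are hypothetical; a real IDS/IPS taps live traffic and uses far more sophisticated state tracking.

```python
# Toy illustration of SYN-flood tracking: flag any source that sends more
# than SYN_THRESHOLD SYN packets within a WINDOW_SECONDS sliding window.
# Thresholds and packet format are illustrative assumptions.
from collections import defaultdict

WINDOW_SECONDS = 10
SYN_THRESHOLD = 100  # SYNs per source per window before raising an alert

def flag_flooders(packets):
    """packets: iterable of (timestamp, source_ip, is_syn) tuples.
    Returns the set of source IPs that exceeded the threshold."""
    recent_syns = defaultdict(list)
    flagged = set()
    for ts, src, is_syn in packets:
        if not is_syn:
            continue
        # Keep only SYNs still inside the sliding window for this source.
        window = [t for t in recent_syns[src] if ts - t < WINDOW_SECONDS]
        window.append(ts)
        recent_syns[src] = window
        if len(window) > SYN_THRESHOLD:
            flagged.add(src)
    return flagged

# Synthetic traffic: one flooding host, one normal host.
traffic = [(i * 0.01, "10.0.0.9", True) for i in range(500)]  # 500 SYNs in ~5s
traffic += [(i * 1.0, "10.0.0.5", True) for i in range(5)]    # 5 SYNs in 5s
print(flag_flooders(traffic))  # {'10.0.0.9'}
```

Once a source is flagged, the firewall from the second bullet is what actually drops its packets; the IDS/IPS supplies the detection.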
Host-based mitigation:
- Ensure that open HTTP sessions time out after a reasonable interval. When under attack, you will want to reduce this value.
- Ensure that TCP sessions also time out after a reasonable interval.
- Install a host-based firewall to prevent HTTP threads from spawning in response to attack packets.
- For those with the know-how, it is possible to “fight back” with programs that can neutralize the threat. This method is used mostly by networks under constant attack, such as government sites.
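The session-timeout advice above can be sketched with a few lines of socket code: a handler that drops any client that stays silent past an idle limit, which is what defeats connection-hoarding attacks. The timeout value and function names are illustrative; a real server would tune the limit down while under attack.

```python
# Minimal sketch of idle-session timeouts: drop clients that send nothing
# within IDLE_TIMEOUT seconds instead of letting them hold a connection open.
import socket

IDLE_TIMEOUT = 1.0  # seconds; illustrative -- lower it when under attack

def handle_client(conn):
    """Read one request, dropping clients that stay silent past IDLE_TIMEOUT."""
    conn.settimeout(IDLE_TIMEOUT)
    try:
        return conn.recv(4096)
    except socket.timeout:
        return None  # idle client: give up rather than hold the thread open
    finally:
        conn.close()

# Demo with a local socket pair standing in for a real client connection.
server_side, client_side = socket.socketpair()
client_side.sendall(b"GET / HTTP/1.0\r\n\r\n")
print(handle_client(server_side))  # the request arrives before the timeout

idle_server, idle_client = socket.socketpair()
print(handle_client(idle_server))  # None after ~1s: the idle client is dropped
```

The same principle applies at the TCP level (the second bullet): every held-open connection costs the host resources, so idle ones must be reclaimed quickly.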
Astaro earned multiple VMware Ready™ certifications for its security products. Astaro Security Gateway, Astaro Mail Gateway and Astaro Web Gateway have all been certified as VMware Ready, and Astaro is the only Unified Threat Management provider to have submitted to and passed VMware Ready validation.
For more information, check out the press release here.
By: Tim Cronin
PC World’s Jaikumar Vijayan recently reported on the attacks against US government public information infrastructure. In the article, Karen Evans, a Bush administration information systems executive, outlined what she thought should be fast-tracked, including the use of TICs (Trusted Internet Connections) for all public infrastructure. This would mean consolidating the Internet connections for public access so that they are served only by trusted parties. By my reckoning, this approach has many benefits and only one glaring weakness.
One quote from the story stuck out: “the most important lesson learned is that many federal agency security people did not know which network service provider connected their Web sites to the Internet,” said Alan Paller, director of research at the SANS Institute. “So they could not get the network service provider to filter traffic.” That quote takes my breath away. If it is accurate, then the preparedness of the government’s network security infrastructure is simply not up to par. There is not much else that can be said. What are we as a community to do?
Choose the battlefield.
Sun Tzu’s “The Art of War” is often used as a text of inspiration for security professionals, and two of its quotes are relevant here. “…And therefore those skilled in war bring the enemy to the field of battle and are not brought there by him.” And “The art of war teaches us to rely not on the likelihood of the enemy’s not coming, but on our own readiness to receive him; not on the chance of his not attacking, but rather on the fact that we have made our position unassailable.” The lesson is that we want to choose the battlefield and lie in wait for an attack, being wise about our ground and prepared for the enemy. The TIC approach is similar to how the Spartans chose the battlefield for the Battle of Thermopylae: they picked a narrow gorge that a small force could defend and then put up one of the most famous stands in history. This is the idea behind the TIC. Secure the path to the prize. When you secure the only way to get to the servers, you secure the servers. At the moment, the servers are too distributed to mount an effective defense.
The one glaring weakness I can see is that this can easily turn into a bureaucratic nightmare resulting in weak TICs. Weak TICs would mean a much wider path to the prize (what if the gorge at Thermopylae had been twice as wide?).
TICs will have to comply with some standard, and the TIC operator will likely be the lowest bidder on the project. So what are the standards? Will they be robust enough? Will the lowest bidder do just enough to win the grant? Will the lowest bidder have qualified personnel? Will there be a process the TIC and the government must follow that slows response time? These are just some of the questions that need answering.
By: Tim Cronin
With the announcement of the upcoming Google Chrome OS, Google is adding some hype to the mix. Google is boldly stating that they are “going back to the basics and completely redesigning the underlying security architecture of the OS so that users don’t have to deal with viruses, malware and security updates. It should just work.” That is a very lofty goal and a loaded statement.
In reality, Google is not too far off base here. It seems they are going to build a very small OS that is responsible only for basic input and output and for running a browser. That means all of the security holes that accompany the “extras” of modern operating systems will not be a factor: there can be no holes in code that does not exist. This will dramatically reduce the security footprint of the operating system.
Generally speaking, anything you develop will have errors. The errors can be limited, and any vulnerabilities can be mitigated. However, if you develop software that interacts with other people’s projects, then security is only as good as the weakest link. In Google’s case, they may be developing a lightweight, hardened OS that only runs a browser (for use with Google Docs and other web-based applications), but if you use that browser to view a vulnerable page, you are still just as insecure.
THE REAL DEAL
Here is a prediction. Google Chrome OS will set out to revolutionize the OS world. It will succeed overall in producing a shift in concepts, but not in the way Google intends on security. There will be exploits that take advantage of basic input and output, and there will be exploits that take advantage of cross-site malware, session hijacking and other browser-only tricks. For instance, Google intends for you to use Google Docs for productivity. What happens if you browse a site with a cross-site exploit that steals your Google Docs? That’s just one thought.
I also predict that there will be security updates. Any operating system is responsible for all input and output of the entire system; anything that can subvert this is malware and must be dealt with, so any OS is vulnerable simply by being an OS. The advantage of Google’s approach is that any holes will be found quickly, since the footprint is much smaller. You will also still need to install third-party drivers and the like for input and output, and vulnerabilities can quickly show up there (and although Google can’t be held responsible for those, neither can Microsoft, and we all know how we act when something *seems* to be Microsoft’s bug).
IF THE HYPE IS RIGHT
If Google fully succeeds in securing its code and building an OS that depends on software delivered over a network, then Internet security will inherently become much more important. IPS offerings will be in charge of securing your documents rather than client-based AV protection. Security will shift along with the new thinking on OS technology and application flow. Either way, this is an announcement that should live up to the hype.
By: Tim Cronin
Recently, NPR’s “All Tech Considered” posted a very good and concise article on securing WiFi technology. I would just like to add a few key points for those that concern themselves with network security.
First, when using a VPN on an untrusted hotspot, make sure it is a “full tunnel” VPN. Split tunnels work well for connecting with trusted networks (like your home network). On an untrusted hotspot, however, there is no guarantee of any security on the hotspot itself, and an attacker could use your PC as a path into your internal network.
Second, I would just like to point out that “secure your home network” is a huge point. Don’t just take advantage of encryption, MAC filtering and other ubiquitous measures. Also reduce the size of your network’s address pool to the minimum necessary for the number of expected systems, and change the default network name to something uncommon. These steps may not be effective alone, but they can certainly add to an overall secure environment.
SIDENOTE: MAC filtering and similar security features have been shown to be inadequate when a skilled attacker targets your network, but there is still no reason *not* to use them. The key is to make your network harder to get into than the ones around you: difficult enough that the attacker loses interest, or harder than his skill level to crack. An attacker will likely take the path of least resistance, after all. If your network proves difficult to hack, the attacker will move on.
Third, disable your wireless antenna when not in use. Most laptops have a button or switch that disables the antenna and makes it easy to see that it is disabled. This is especially important on airplanes; plenty of people find it fun to browse other passengers’ PCs while on board.
Fourth, if you connect to an access point that you don’t intend to connect with often, delete it from your automatic wireless network list. This was shown to be a very large hole by HD Moore (with his “Evil eeePC”). Instructions here: http://technet.microsoft.com/en-us/library/cc778180(WS.10).aspx
Last, never assume that you aren’t compromised. The chance always exists. Monitor your systems regularly for irregularities.
By: Bill Prout
There have actually been a few major disasters in the past 10 years that have shown the value of good disaster recovery plans. Though such plans are far from perfect, they do make a difference and can always be improved with newer techniques and technology. When Hurricane Katrina struck, I was working with the City of New York’s network design team, and we were tasked with creating an emergency refugee processing center for the thousands of hurricane victims the city had taken in. While we were able to throw this site together over a weekend with a lot of manpower and equipment, it could just as easily have been done with a few decent virtual servers hosting the applications we needed. All applications, including endpoint security, could have been hosted virtually, making design and deployment very simple, and there most likely would have been significant cost savings on manpower, space, power, etc. Though this is an extreme example, it does show how virtual environments can be used for disaster recovery.