BRUSSELS, Feb 20 (Reuters) - European Parliament lawmakers voted on Wednesday against mandatory fines of up to 2 percent of global turnover for companies caught breaching consumer privacy, potentially limiting the impact of new data protection rules.
The body's industry committee, including Liberal, Conservative and Socialist lawmakers, opted instead to allow national regulators to determine the size of fines.
In practice, maximum fines in the European Union currently only range from 300,000 to 600,000 euros.
The proposed data privacy legislation, first presented in January 2012, is due to go to a vote in the parliament later this year.
Consumer groups said Wednesday's committee vote would probably mean lower penalties for companies that rely on unfettered access to clients' data and are caught breaking the rules.
They said the committee also backed down from proposals which would have required customers to give their consent before companies can target them with advertising chosen according to their internet browsing habits.
"This consciously keeps consumers in the dark and affords - particularly US companies - a licence to collect and process personal data according to commercial interests," Monique Goyens from the pan-European consumer group BEUC said.
Data gleaned from monitoring which web sites people look at has proven a money spinner for tech start-ups, which earn money by allowing advertisers to target audiences more exactly. So if someone looks at web sites dealing with spa breaks, related advertising will soon appear in their browser.
The industry committee is one of several which will consider the proposed legislation.
An Irish lawmaker on the committee, whose job it is to consider the views of industry, said he wanted to water down sanctions for small and medium-sized companies.
"A warning as opposed to an immediate fine makes sense," Sean Kelly said. "The gravity of the offence needs to be taken into consideration."
Analysis: The near impossible battle against hackers everywhere
By Joseph Menn
SAN FRANCISCO, Feb 24 (Reuters) - Security officers and their consultants say they are overwhelmed. The attacks come not only from China, which Washington has long accused of spying on U.S. companies; many emanate from Russia, Eastern Europe, the Middle East, and Western countries. Perpetrators range from elite military units to organized criminal rings to activist teenagers.
"They outspend us and they outman us in almost every way," said Dell Inc's chief security officer, John McClurg. "I don't recall, in my adult life, a more challenging time."
The big fear is that one day a major company or government agency will face a severe and very costly disruption to their business when hackers steal or damage critical data, sabotage infrastructure or destroy consumers' confidence in the safety of their information.
Elite security firm Mandiant Corp on Monday published a 74-page report that accused a unit of the Chinese army of stealing data from more than 100 companies. While China immediately denied the allegations, Mandiant and other security experts say the hacker group is just one of more than 20 with origins in China.
Chinese hackers tend to take aim at the largest corporations and most innovative technology companies, using trick emails that appear to come from trusted colleagues but bear attachments tainted with viruses, spyware and other malicious software, according to Western cyber investigators.
Eastern European criminal rings, meanwhile, use "drive-by downloads" to corrupt popular websites, such as NBC.com last week, to infect visitors. Though the malicious programs vary, they often include software for recording keystrokes as computer users enter financial account passwords.
Others getting into the game include activists in the style of the loosely associated group known as Anonymous, who favor denial-of-service attacks that temporarily block websites from view and automated searches for common vulnerabilities that give them a way in to corporate information.
An increasing number of countries are sponsoring cyber weapons and electronic spying programs, law enforcement officials said. The reported involvement of the United States in the production of electronic worms including Stuxnet, which hurt Iran's uranium enrichment program, is viewed as among the most successful.
Iran has also been blamed for a series of unusually effective denial-of-service attacks against major U.S. banks in the past six months that blocked their online banking sites. Iran is suspected of penetrating at least one U.S. oil company, two people familiar with the ongoing investigation told Reuters.
"There is a battle looming in any direction you look," said Jeff Moss, the chief information security officer of ICANN, a group that manages some of the Internet's key infrastructure.
"Everybody's personal objectives go by the wayside when there is just fire after fire," said Moss, who also advises the U.S. Department of Homeland Security.
HUNDREDS OF CASES UNREPORTED
Industry veterans say the growth in the number of hackers, the software tools available to them, and the thriving economic underground serving them have made any computer network connected to the Internet impossible to defend flawlessly.
"Your average operational security engineer feels somewhat under siege," said Bruce Murphy, a Deloitte & Touche LLP principal who studies the security workforce. "It feels like Sisyphus rolling a rock up the hill, and the hill keeps getting steeper."
In the same month that President Barack Obama decried enemies "seeking the ability to sabotage our power grids, our financial institutions, our air traffic control systems," cyber attacks on some prominent U.S. companies were reported.
Three leading U.S. newspapers, Apple Inc, Facebook Inc, Twitter and Microsoft Corp all admitted in February they had been hacked. The malicious software inserted on employee computers at the technology companies has been detected at hundreds of other firms that have chosen to keep silent about the incidents, two people familiar with the case told Reuters.
"I don't remember a time when so many companies have been so visibly 'owned' and were so ill-equipped," said Adam O'Donnell, an executive at security firm Sourcefire Inc, using the hacker slang for unauthorized control.
Far from being hyped, cyber intrusions remain so under-disclosed — for fear leaks about the attacks will spook investors — that the new head of the FBI's cyber crime effort, Executive Assistant Director Richard McFeely, said the secrecy has become a major challenge.
"Our biggest issue right now is getting the private sector to a comfort level where they can report anomalies, malware, incidences within their networks," McFeely said. "It has been very difficult with a lot of major companies to get them to cooperate fully."
McFeely said the FBI plans to open a repository of malicious software to encourage information sharing among companies in the same industry. Obama also recently issued an executive order on cyber security that encourages cooperation.
The former head of the National Security Agency, Michael Hayden, supports the use of trade and diplomatic channels to pressure hacking nations, as called for under a new White House strategy that was announced on Wednesday.
"The Chinese, with some legitimacy, will say 'You spy on us.' And as former director of the NSA I'll say, 'Yeah, and we're better at it than you are," said Hayden, now a principal at security consultant Chertoff Group.
He said what worries him the most is Chinese presence on networks that have no espionage value, such as systems that run infrastructure like energy and water plants. "There's no intellectual property to be pilfered there, no trade secrets, no negotiating positions. So that makes you frightened because it seems to be attack preparation," Hayden said.
Amid the rising angst, many of the top professionals in the field will convene in San Francisco on Monday for the best-known U.S. security industry conference, named after host company and EMC Corp unit RSA.
Several experts said they were convinced that companies are spending money on the wrong stuff, such as antivirus subscriptions that cannot recognize new or targeted attacks.
RSA Executive Chairman Art Coviello and Francis deSouza, head of products at top vendor Symantec Corp, both said they will give keynote speeches calling for a focus on more sophisticated analytical tools that look for unusual behavior on the network — which sounds expensive.
Others urge a more basic approach of limiting users' computer privileges, rapidly installing software updates, and allowing only trusted programs to function.
Some security companies are starting over with new designs, such as forcing all of their customers' programs to run on walled-off virtual machines.
With such divergent views, so much money at stake, and so many problems, there are perhaps just two areas of agreement.
Most people in the industry and government believe things will get worse. Coviello, for his part, predicted that a first-of-its-kind - but relatively simple - virus that deleted all data on tens of thousands of PCs at Saudi Arabia's national oil company last year is a harbinger of what will come.
And most say that the increased mainstream attention on cyber security, even if it fixes uncomfortably on the industry's failings and tenacious adversaries, will help drive a desperately needed debate about what to do internationally and at home.
(Reporting by Joseph Menn in San Francisco; Additional reporting by Jim Finkle in Boston and Deborah Charles in Washington; Editing by Tiffany Wu and Jackie Frank)
European Space, Industrial Firms Breached in Cyber Attacks: Report
By Oliver Rochford on February 24, 2013
European Aeronautic Defence and Space Company and ThyssenKrupp Breached, Germany Calls for Mandatory Breach Disclosure
German news magazine Der Spiegel reported on Sunday that European Aeronautic Defence and Space Company (EADS) and German industrial multinational conglomerate ThyssenKrupp have fallen victim to recent cyber-exploitation attacks.
A few months ago, an “extraordinary attack” was launched against EADS, according to the report. The company has remained silent about any potential damage, but rated the incident severe enough that it alerted the German federal government.
ThyssenKrupp fell victim to an attack in mid-2012, described as “heavy” and of an “exceptional quality”. The company confirmed the incident to Der Spiegel. According to sources, the breach occurred at a U.S.-based subsidiary, and the corporation does not know whether anything at all, specific or otherwise, was copied and stolen by the attackers. The source addresses of the attacks appeared to be in China, according to the report. The report also states that the German Federal Office for the Protection of the Constitution (Verfassungsschutz) registered 1,100 exploitation attacks from foreign intelligence services, the majority of them targeting the Chancellery and the Foreign and Economic Ministries using spear-phishing.
German security agencies noticed a particular spike in the run-up to the G20 summit, targeting the German members of the delegation, according to Der Spiegel. The focus of interest appeared to be material related to financial and energy policy. The Bundesnachrichtendienst (BND, the German foreign intelligence agency) is now reportedly planning to create a cyberwar department.
Der Spiegel also reports that Germany's Minister of the Interior is planning an IT security bill that would require businesses to report breaches and incidents, joining the EU Commission, which is also planning to require mandatory reporting of hacking for about 44,000 companies.
Monitoring Your Unknown Network Traffic
By Danelle Au on February 11, 2013
The recent New York Times hack was yet another high-profile attack that demonstrated the evolution towards multi-vector, sophisticated attacks. In this case, the mission of the perpetrators was very specific -- retrieving editorial information and data related to a particular story -- but it could easily have been nastier. With the extent of access the attackers had, the personal information of millions of New York Times subscribers could have been compromised.
The attack clearly demonstrated that a security strategy centered primarily on signature-based endpoint security isn’t enough to prevent against sophisticated attacks that use a cocktail of tactics, including advanced malware. We’ve seen this before. There is clearly a new cyber war being staged by a very different set of actors (nation states, political groups and criminal organizations) and much has been written about how to tackle these new sets of challenges.
What was interesting about this attack was that it emphasized the need for a rapid monitoring and response system. Hand-in-hand with the deployment of a robust security architecture is the need for a monitoring and response process that allows enterprises to continuously monitor and process security data efficiently and proactively act upon this data if something suspicious is found. But, delivering the proper monitoring and response infrastructure starts with feeding it the right information from your security appliances. Selecting the right enforcement model also makes the process of finding unknown traffic and potential malware more manageable.
Garbage In, Garbage Out
The first step is of course actually having useful security data to observe. The analytics of data is only as useful as the data itself, so logging capabilities must be enabled so there is enough data to capture network attacks or anomalies. Routers, switches and network security appliances generate logging and netflow data that can provide information about anomalous behavior within the network.
The challenge with a monitoring and response system is twofold. First, it must process large and complex security data sets in real time. Second, it needs to ensure it is processing useful, intelligent data. Recall the growth of the managed security services business just to handle the volume of IPS alerts generated by enterprises that had no idea how to react to them. Having too much data or data that is not easily actionable just brings operational headaches. Useful data includes information about applications, users and content that can shed light about traffic and user behavior.
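As a deliberately simplified illustration of turning raw log volume into actionable data, the sketch below aggregates hypothetical per-user byte counts and surfaces only the sources whose unclassified traffic crosses a threshold. The record format, field names, and threshold are invented for the example, not any appliance's actual schema.

```python
from collections import defaultdict

# Hypothetical log records: (user, application, bytes_transferred).
# Real appliances emit far richer schemas; this is only a sketch.
RECORDS = [
    ("alice", "web-browsing", 120_000),
    ("alice", "dns", 4_000),
    ("bob", "web-browsing", 95_000),
    ("bob", "unknown-tcp", 2_400_000),   # anomalous spike
    ("carol", "ssl", 310_000),
]

def flag_heavy_unknown(records, threshold_bytes=1_000_000):
    """Return users whose unclassified traffic exceeds a byte threshold."""
    totals = defaultdict(int)
    for user, app, nbytes in records:
        if app.startswith("unknown"):
            totals[user] += nbytes
    return sorted(u for u, b in totals.items() if b > threshold_bytes)

print(flag_heavy_unknown(RECORDS))  # ['bob']
```

The point is not the ten lines of code but the reduction: thousands of raw records collapse into a short, reviewable list a security team can actually act on.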
A new breed of SIEM and big data vendors like Splunk understand this. They have the intelligence to process richer, premium feeds from next-generation firewalls that include information on applications, users and content for more flexible, analytic tools. Next-generation security appliances now also provide integrated reporting and logging tools that make the security administrator’s job a lot easier.
Looking for the Unknown Unknowns
With the richer data feed now driving more interesting and relevant dashboards and reports, how do you leverage this information to find the malware that may be in the network? What exactly are you looking for? Enter the Rumsfeldian theory applied to network security. In February 2002, Former Defense Secretary Donald Rumsfeld said this:
“There are known knowns; there are things we know we know. We also know there are known unknowns; that is to say, we know there are some things we do not know. But there are also unknown unknowns – the ones we don’t know we don’t know.”
Of course, his quote was referring to Iraqi weapons of mass destruction, but these categories apply to network security as well. In an ideal enterprise or data center network, you should know all your traffic, as any unknown traffic could include malware or a possible breach. But in order to easily categorize the unknown, you must start off with a positive enforcement model.
Positive enforcement means that you selectively allow what is required for day-to-day business operations and block everything else as opposed to a negative enforcement approach where you would selectively block everything that is not allowed. A negative enforcement approach requires you to keep track of any new applications and constantly adapt your policy to block them. This would be a never-ending, non-trivial task. The adoption of a positive enforcement model is therefore fundamental to identifying known applications so that unknown traffic becomes significant.
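A positive enforcement policy can be reduced to a small default-deny decision function. The sketch below is illustrative only: the application names and the three-way allow/inspect/block verdict are assumptions made for the example, not any vendor's API.

```python
# Minimal sketch of positive (default-deny) enforcement.
# The allow-list and verdict labels are hypothetical.
ALLOWED_APPS = {"web-browsing", "ssl", "dns", "smtp"}

def verdict(app_name):
    """Allow sanctioned applications, surface unclassified traffic for
    inspection, and block everything else by default."""
    if app_name in ALLOWED_APPS:
        return "allow"
    if app_name in ("unknown-tcp", "unknown-udp"):
        # Under a positive model, unknown traffic is significant by itself.
        return "inspect"
    return "block"

print(verdict("dns"))          # allow
print(verdict("bittorrent"))   # block
print(verdict("unknown-tcp"))  # inspect
```

Contrast this with a negative model, where every new application would need its own explicit block rule added to the policy.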
Here is how the categories apply to network security:
• Known knowns - If you apply a positive security enforcement model, then you are allowing approved traffic and blocking everything else. Next-generation firewalls do this by analyzing all traffic and decoding known applications and protocols using a combination of signatures, decoders and heuristics. The “known knowns” therefore are the applications that you are allowing in the network, safely enabled for specific users and network segments. The “known knowns” also means blocking traffic you know is bad. This includes controlling traffic sources and destinations based on risks such as blocking known bad URLs, blocking DNS to known bad hosts and domains, known exploits, malware and command and control traffic.
• Known unknowns – The known unknowns include unknown traffic and unknown files. Any traffic that is not categorized via any of the next-generation firewall application identification technologies is categorized as unknown TCP or unknown UDP. This may include custom applications, enterprise applications that have not yet been classified or malware. Within the enterprise network, there may also be unknown files that are downloaded by users that could be infected by malware. The strategy with the “known unknowns” is to eliminate or inspect. For example, by creating custom identifiers for internal applications, a certain amount of unknown traffic can be eliminated. All unknown files should be inspected to ensure that they do not have advanced malware, accomplished via advanced malware inspection technologies such as cloud-based processing of unknown files in a virtual sandbox to look for malware behaviors.
• Unknown unknowns – Assuming you’ve completed the first two steps above, you’re left with the unknown unknowns. This is where anomalous and malicious behavior can be observed. For example, look for concentrations of unknown traffic in one user or device, unknown traffic with a lot of bursty sessions (relative to bytes) across a lot of different ports or non-standard ports, or unknown traffic coming from certain countries. Encrypted unknown traffic that cannot be inspected can reasonably be blocked at this point. Further investigation can be launched on the rest of the “unknown unknowns” sessions such as drilling into the user or machine generating the traffic or whether the unknown application is transferring files.
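The heuristics above for the "unknown unknowns" - concentrations of unknown traffic, many small bursty sessions across many ports - can be sketched in a few lines. The session format and thresholds below are invented for illustration; real triage would tune them per network.

```python
from collections import defaultdict

# Hypothetical session records for traffic already classified "unknown":
# (source_host, destination_port, bytes_in_session).
SESSIONS = [
    ("10.0.0.5", 443, 90_000),
    ("10.0.0.9", 6667, 400),
    ("10.0.0.9", 31337, 350),
    ("10.0.0.9", 4444, 500),
    ("10.0.0.9", 8081, 420),
]

def suspicious_sources(sessions, min_sessions=3, max_bytes=1_000):
    """Flag hosts with many small, bursty unknown sessions spread across
    several distinct ports -- one of the patterns described above."""
    per_host = defaultdict(list)
    for host, port, nbytes in sessions:
        per_host[host].append((port, nbytes))
    flagged = []
    for host, conns in per_host.items():
        small = [(p, b) for p, b in conns if b <= max_bytes]
        ports = {p for p, _ in small}
        if len(small) >= min_sessions and len(ports) >= min_sessions:
            flagged.append(host)
    return flagged

print(suspicious_sources(SESSIONS))  # ['10.0.0.9']
```

Hosts surfaced this way are candidates for the deeper investigation the text describes: drilling into the user or machine and checking whether the unknown application is transferring files.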
In summary, just as building the right security architecture for your network requires the right security appliances, critical foundational elements like the right data feed and the right enforcement model are the building blocks of a robust monitoring and response system.
Is SSL Secure?
Lucky13 SSL exploit reveals hidden risk in the pervasive security technology.
Secure Sockets Layer/Transport Layer Security is the foundational technology that secures Web transactions and communications, but it is not infallible.
New research dubbed Lucky13 reveals that SSL/TLS is at risk from a theoretical timing attack that could expose encrypted data. TLS headers include 13 bytes of data used for the secure handshake protocol, said researchers at Royal Holloway, University of London, and they can be exploited in the Lucky13 attack.
The Lucky13 attack is not the first time in recent years that SSL/TLS has been found to be at risk. In September of 2011, the SSL BEAST attack was first reported. SSL BEAST was patched by Microsoft in January of 2012. The emergence of the new Lucky13 vulnerability is not a surprise to some SSL experts.
Ryan Hurst, CTO of GlobalSign, an SSL Certificate Authority, told eSecurity Planet that there has been some good research into SSL/TLS in recent years.
"Over the last two to three years we have started to see security researchers and universities taking research and seeing if it can be weaponized," Hurst said. "As a trend, I think we'll see more research into attacks that once were thought not to be possible, to actually be possible."
What Makes Lucky 13 Possible?
Why is an attack like Lucky13 theoretically possible today? It has a lot to do with increases in available computing and networking power. Though Lucky13 is a theoretically possible attack vector, hackers will likely not be interested in weaponizing it at the current time.
"Many people don't use HSTS, and there are plenty of opportunities to subvert SSL if you don't have a solid SSL configuration," Hurst said. "I don't want to trivialize the Lucky13 attack. It's cool research, but if I wanted to attack SSL I'd start with the initial connection."
HTTP Strict Transport Security (HSTS) is a recently ratified IETF standard to help ensure that browsers connect to a website over HTTPS. Without HSTS, it is possible for a user to insecurely log into a website that they should be logging into securely via HTTPS. At the Black Hat DC 2009 event, security researcher Moxie Marlinspike released a tool called SSLstrip that is able to deceive users and Web browsers into thinking they are on an SSL/HTTPS secured site when in fact they are not.
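For readers who want a feel for what HSTS actually conveys, here is a minimal parser for the Strict-Transport-Security header value, loosely following the RFC 6797 directives. It is a sketch for illustration; production clients should rely on the browser's vetted implementation rather than hand-rolled parsing.

```python
# Hedged sketch: extract the two key HSTS directives from a header value.
def parse_hsts(header_value):
    """Return (max_age_seconds, include_subdomains) from a
    Strict-Transport-Security header value."""
    max_age, include_sub = None, False
    for directive in header_value.split(";"):
        directive = directive.strip().lower()
        if directive.startswith("max-age="):
            max_age = int(directive.split("=", 1)[1].strip('"'))
        elif directive == "includesubdomains":
            include_sub = True
    return max_age, include_sub

print(parse_hsts("max-age=31536000; includeSubDomains"))  # (31536000, True)
```

A non-zero max-age tells the browser to refuse plain-HTTP connections to the site for that many seconds, which is precisely the window SSLstrip-style attacks depend on.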
CBC Ciphers
One of the enabling factors for the Lucky13 attack is the usage of a weak CBC (cipher-block chaining) cipher. One of the ways to avoid the Lucky13 attack is to not use a CBC cipher, though Hurst notes that CBC can be implemented properly to limit risk. Getting CBC implemented right is no easy task and involves a lot of technical subtleties.
"In the case of SSL, I don't recommend people deploy CBC-based ciphers today because of Lucky13," Hurst said. "But is CBC fundamentally flawed? No, it just needs to be used right."
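As one concrete way to follow Hurst's advice, Python's standard ssl module can be told to exclude CBC-mode suites via an OpenSSL cipher-selection string. This is a hedged sketch: which suites actually load depends on the OpenSSL build Python is linked against.

```python
import ssl

# Sketch: a client context whose TLS 1.2-and-below cipher list excludes
# CBC-mode suites. "ECDHE+AESGCM:ECDHE+CHACHA20" is an OpenSSL
# cipher-selection string selecting only AEAD (non-CBC) suites.
ctx = ssl.create_default_context()
ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20")

# get_ciphers() lists what the context will offer; none should be CBC-mode.
enabled = [c["name"] for c in ctx.get_ciphers()]
print(enabled)
```

Servers can apply the same idea from the other side by restricting their advertised cipher list in the web server configuration.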
SSL Recommendations
For the Lucky13 attack, one possible fix to mitigate risk is in server-side implementations of SSL/TLS. Hurst suggests that all SSL server vendors, including OpenSSL and Microsoft, are now likely in the process of providing patches to their SSL/TLS servers to provide a constant-time computation.
Hurst also suggests that users visit online SSL server validation tools, including one developed by his company. Security vendor Qualys also has an SSL checking tool. Those tools are able to look at a server configuration and make recommendations on proper deployment.
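The underlying principle of those patches, removing data-dependent timing, is broader than TLS. As a simple illustration (not the actual Lucky13 fix, which concerns MAC checking inside TLS record decryption), MAC tags should be compared with a constant-time function rather than `==`:

```python
import hashlib
import hmac

# Comparing tags with '==' can return early at the first mismatching
# byte, leaking timing information; hmac.compare_digest takes time
# independent of the contents being compared.
key = b"server-side-secret"            # hypothetical key for the example
record = b"example record payload"

expected_tag = hmac.new(key, record, hashlib.sha256).digest()
received_tag = hmac.new(key, record, hashlib.sha256).digest()

ok = hmac.compare_digest(expected_tag, received_tag)        # matching tags
tampered = hmac.compare_digest(expected_tag, b"\x00" * 32)  # forged tag
print(ok, tampered)
```

The same "no early exit on secret data" discipline is what the constant-time TLS patches enforce throughout record processing.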
Another route to limit the risk of potential SSL vulnerabilities is for enterprises to have a solid grasp of the SSL certificates they are using and where they are being deployed. Jeff Hudson, CEO of Venafi, told eSecurity Planet that in his view most people don't know what they are using when it comes to SSL.
Venafi is a vendor of key and certificate management solutions. In Hudson's view, the Lucky13 vulnerability is not the first and won't be the last time that risks in SSL are exposed. "Any system of any kind of complexity always has vulnerabilities," he said.
For Hudson, managing SSL is all about trust and the ability to manage sources of trust in a rapidly evolving threat landscape.
"We do know that encryption and SSL certificates are foundational and we cannot get away from them as they are the very fabric of our digital world," Hudson said. "We can't change that; what you can do is get better at managing them."
Sean Michael Kerner is a senior editor at InternetNews.com, the news service of the IT Business Edge Network. Follow him on Twitter @TechJournalist.
Friday, February 15, 2013
World’s Leading Certificate Authorities Come Together to Advance Internet Security and the Trusted SSL Ecosystem
Posted by admin on February 14, 2013
San Francisco, CA. – February 14, 2013 – Leading global certificate authorities announced the creation of the Certificate Authority Security Council (CASC), an advocacy group committed to the exploration and promotion of best practices that advance the security of websites and online transactions. Through public education, collaboration, and advocacy, the CASC strives to improve understanding of critical policies and their potential impact on the internet infrastructure. Members of the CASC include Comodo, DigiCert, Entrust, GlobalSign, Go Daddy, Symantec, and Trend Micro.
Amid increasing threats from sophisticated hacker networks, global cybercriminal organizations and state-sponsored espionage, the CASC is coming together to promote advanced security standards, encourage best practices, and ultimately improve the deployment of a continually trustworthy SSL ecosystem. In addition, the CASC supports the efforts of the CA/Browser Forum and other standards-setting bodies in their important work, and will continue to help develop reasonable and practical enhancements that improve trusted Secure Sockets Layer (SSL) and certificate authority (CA) operations.
Coinciding with its launch, the CASC is announcing the first of a planned series of educational and advocacy efforts related to best practices in SSL deployment with a focus on the importance of online certificate status checking and revocation. Specifically, the CASC will highlight the benefits of online certificate status protocol (OCSP) stapling for Web server administrators, software vendors, browser developers, and end-users through blog posts, conference presentations and other resources. For more information, visit: http://bit.ly/V01LhG
The backbone of Internet security for nearly two decades, the SSL protocol and certificates from publicly trusted CAs remain the most proven, reliable and scalable method to protect Internet transactions. The CASC is focused on promoting tightened global standards to mitigate high-profile security incidents, and improving research and collaboration that will continue to establish the security-robustness of the SSL ecosystem.
Quotes
- “SSL remains today the most widely deployed and successful cryptography system in the world,” said Dean Coclin, Steering Committee, Certificate Authority Security Council. “As a unified group of the world’s leading SSL providers, we’re collaborating on matters of highest priority, while also recognizing the value of previous and recent work to continually evolve the standards, and create an industry that understands the issues involved and is committed to making the necessary enhancements.”
- “The CASC is a group of global trust anchors, who understand the necessity for continually evolving security that meets the needs of web sites and their users,” said Yngve Pettersen, Independent Researcher. “The creation of the CASC is a step in the right direction to making practical, scalable improvements to the current SSL/TLS ecosystem while promoting security standards and better education among users.”
- “The CASC members are working actively with browsers and other parties to further improve existing methods that effectively balance performance and security while providing a trusted experience for all internet users,” said Ben Wilson, Chair, CA/Browser Forum. “These collaborative efforts have led to important steps forward that help improve security practices, self-regulation, and globalize the adoption and implementation of stricter, more universal standards. We look forward to working with the CASC to set quality standards that raise the bar for everyone.”
Additional Resources
- CASC Website
- CASC Blog
- CASC Blog Post: CAs Unite
- CASC Blog Post: OCSP Stapling Improved Performance & Security a Win-Win
- CASC Initiative: Certificate Revocation and OCSP Stapling
- SSL Basics
- OCSP and CRL Performance Report
- CA/Browser Forum Baseline Requirements
- CA/Browser Forum Network Security Guidelines
- AlwaysON SSL
- Advancing Internet Security SlideShare Presentation
- SSL Configuration Checker
About the CASC
The Certificate Authority Security Council is comprised of leading global Certificate Authorities that are committed to the exploration and promotion of best practices that advance trusted SSL deployment and CA operations as well as the security of the internet in general. While not a standards-setting organization, the CASC works collaboratively to improve understanding of critical policies and their potential impact on the internet infrastructure. More information is available at https://casecurity.org.
###
Wednesday, February 13, 2013
Locking the bad guys out with asymmetric encryption
It makes online shopping, banking, and secure communications possible.
by Peter Bright - Feb 12, 2013, 3:00pm
Encryption algorithms are the mathematical formulae for performing these transformations. You provide an encryption algorithm with a key and the data you want to protect (the plaintext), and it produces an encrypted output (the ciphertext). To read the output, you need to feed the key and the ciphertext into a decryption algorithm (sometimes these are identical to encryption algorithms; other times they are closely related but different).
Encryption algorithms are designed so that performing the decryption process is unfeasibly hard without knowing the key.
The algorithms can be categorized in many different ways, but perhaps the most fundamental is the distinction between symmetric and asymmetric encryption.
Before 1973, every known encryption algorithm was symmetric. What this means is that the key used to encrypt the plaintext is the same key used to decrypt the ciphertext; if you know the key, you can decrypt any data encrypted with it. This means that the key must be kept secret—only people authorized to read the messages must know it, and those people who do know it can read every single message that uses it.
This in turn makes symmetric algorithms tricky to use in practice, because those keys must be securely transported somehow. Key transport wasn't so difficult in antiquity, when early ciphers such as the Caesar cipher were in use. If you wanted to share a key with someone, you could just tattoo the key onto the shaved head of a slave, wait for his hair to grow back, and then send the recipient of your message the slave.
Unfortunately, that approach doesn't scale very well. If you want to buy something from Amazon or do some online banking, you probably don't have the time to wait for your slave's hair to grow back, and given the multitude of e-commerce sites out there, you may not even have enough slaves to go around.
It took 2,500 years of on-and-off cryptography invention and research for a solution to be found to this problem. That solution is asymmetric encryption. With asymmetric encryption there is not one key, but two related keys. Messages encrypted with one of the keys can't subsequently be decrypted with that same key. Instead, you have to use the other key for decryption. Typically, one key is designated as the public key and is published widely. The other is designated the private key and is kept secret.
The first public key encryption algorithm: RSA
The first algorithms using asymmetric keys were devised in secret by the British government's SIGINT agency, GCHQ, in 1973. That work was not made public, however, until 1997, when the British government declassified it. The first published, commercially available algorithm (which happened to work in much the same way as GCHQ's worked, albeit in a more generalized form) was invented in 1977, and it was called RSA after the names of its three inventors (Ron Rivest, Adi Shamir and Leonard Adleman).
Symmetric algorithms essentially depend on jumbling up the plaintext in complex ways and doing so lots of times; the key is what specifies the exact jumbling up pattern that was used. Asymmetric algorithms take a different approach. They depend on the existence of hard mathematical problems. The keys here are solutions to these mathematical problems.
The problem that RSA is built around is that of integer factorization. Every integer has a unique prime factorization; that is, it can be written as a multiplicative product of prime numbers. For example, 50 is 2 × 5². While factorization is something that we all learn at school, it's actually a hard problem, especially when dealing with large numbers.
The naive algorithm you learn in school is called "trial division"; you divide the number you are trying to factorize by each prime number in turn, starting at 2 and working your way up, and check to see if it's wholly divisible. If it's wholly divisible then you then factorize the number that's left over, starting at the same prime as you just divided by. If it isn't, you try the next prime number. You only stop when you've tried every prime number less than or equal to the square root of the number you're trying to factorize.
This is easy enough for small numbers, but for big numbers it's very time-consuming. There are various algorithms that are a bit quicker than trial division, but only incrementally so.
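As a rough sketch (toy code, not optimized), trial division as described above can be written in a few lines; dividing out each factor as it is found means only prime divisors ever succeed, even though every integer is tried:

```python
def trial_division(n):
    """Factorize n by trying successive divisors up to sqrt(n)."""
    factors = []
    d = 2
    while d * d <= n:          # only need divisors up to the square root
        while n % d == 0:      # divide out each factor as often as it occurs
            factors.append(d)
            n //= d
        d += 1
    if n > 1:                  # whatever remains is itself prime
        factors.append(n)
    return factors

print(trial_division(50))      # [2, 5, 5], i.e. 50 = 2 x 5^2
```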
RSA key pairs (that is, the pair of private and public keys) are generated in the following way:
- Choose two large, distinct prime numbers, called p and q.
- Compute p × q, and call this n. The size of this number in bits is the key length, and it gives an indication of how strong the key is: the longer, the better.
- Compute (p - 1) × (q - 1), and call this φ(n).
- Pick an integer e that is coprime with φ(n); that is to say, the prime factorizations of e and φ(n) should share no factors.
- Calculate d such that d × e = 1 (mod φ(n)). This multiplication uses modular arithmetic (also known as clock arithmetic), which counts normally from 0 to φ(n) - 1 and then wraps around back to 0 again, much as on a clock the number after 12 isn't 13; it's 1.
Encryption and decryption are both quite simple. A ciphertext c is generated from a plaintext m as follows:
c = m^e (mod n)
Decryption is similar:
m = c^d (mod n)
Here's where the difficulty of integer factorization is important: if n could easily be factorized then anyone could determine p and q, hence φ(n), and hence d. If d were known to everyone then they could decrypt the messages encrypted with e with ease.
Conventionally, e, the public exponent, is chosen to be a much smaller number than d; a typical value is 65,537.
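Putting the key-generation steps and the two formulas together, here is a deliberately tiny worked example. The primes p = 61 and q = 53 are the classic textbook numbers, far too small to be secure; Python's built-in pow handles both the modular inverse and the modular exponentiation:

```python
# Toy RSA with tiny primes -- illustration only, never secure at this size.
p, q = 61, 53                 # two (small) distinct primes
n = p * q                     # 3233; its length in bits is the key length
phi = (p - 1) * (q - 1)       # 3120
e = 17                        # public exponent, coprime with phi
d = pow(e, -1, phi)           # modular inverse: d * e = 1 (mod phi)

m = 65                        # plaintext, encoded as a number smaller than n
c = pow(m, e, n)              # encryption: c = m^e (mod n)
assert pow(c, d, n) == m      # decryption: m = c^d (mod n)
```

Anyone who could factorize n back into p and q could recompute phi and hence d, which is exactly why the difficulty of factorization matters.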
Since RSA's invention, other asymmetric encryption algorithms have been devised; like RSA, they're all built on the assumption that a particular mathematical problem is hard to solve, usually either integer factorization or the discrete logarithm. Surprisingly, the assumption of difficulty is not actually proven in the case of either integer factorization or the discrete logarithm, and there is a possibility that a "fast" algorithm will be devised, which could render some kinds of public key cryptography insecure.
Public key cryptography in practice
Given a public key algorithm, what's it actually good for? Perhaps surprisingly, one of the things it's not good for is general-purpose encryption. Those exponentiation operations used in RSA's encryption and decryption (or similar operations in other algorithms) are slow and expensive to compute; as a result, RSA isn't well-suited to encrypting large chunks of data. What it is good for, however, is encrypting small pieces of data—such as the encryption keys for symmetric algorithms, which are good for encrypting large amounts of data.
An example of this is the PGP program, first released in 1991. PGP, standing for "Pretty Good Privacy," is a program for sending secure messages using a combination of symmetric and asymmetric encryption algorithms. Everyone who wants to receive PGP-secured messages publishes their PGP public key. Traditionally, this is an RSA public key, though nowadays other algorithms are also supported.
To send a secure message, you first generate a random key for use with a symmetric algorithm. You use that key and the symmetric algorithm to encrypt your message. The original PGP implementation used an algorithm called IDEA for this purpose, though modern versions offer a variety of options for this, too. You then encrypt that random key with the RSA public key and bundle the two things—symmetrically encrypted message, asymmetrically encrypted random key.
To decrypt the message, the recipient uses his private key to decrypt the random key, and then uses the random key to decrypt the message itself.
Similar schemes are used for the S/MIME secure e-mail standard: public key cryptography is used to secure keys, and symmetric cryptography is used for the bulk encryption.
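The hybrid pattern used by PGP and S/MIME can be sketched as follows. Both pieces are stand-ins: the RSA numbers are the toy textbook pair (n = 61 × 53 = 3233, e = 17, d = 2753), and a XOR keystream plays the role of a real symmetric cipher like IDEA or AES; a real system would also use a cryptographically secure random source rather than the random module:

```python
import random

# Toy RSA key pair (n = 61 * 53; illustration only).
n, e, d = 3233, 17, 2753

def xor_cipher(data: bytes, key: int) -> bytes:
    """Stand-in 'symmetric algorithm': XOR with a key-seeded keystream.
    The same call both encrypts and decrypts."""
    rng = random.Random(key)
    return bytes(b ^ rng.randrange(256) for b in data)

# Sender: pick a random session key, encrypt the message with it,
# then encrypt the session key itself with the recipient's public key.
session_key = random.randrange(2, n)
ciphertext = xor_cipher(b"meet at noon", session_key)
wrapped_key = pow(session_key, e, n)

# Recipient: unwrap the session key with the private key, then decrypt.
recovered_key = pow(wrapped_key, d, n)
assert xor_cipher(ciphertext, recovered_key) == b"meet at noon"
```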
Disk encryption systems like Microsoft's BitLocker also use a similar approach on systems equipped with TPM (Trusted Platform Module) chips. These chips securely store encryption keys and in principle only allow authorized software to access them. The bulk encryption of the data on the disk uses the AES symmetric algorithm; that key is then encrypted using RSA.
The ssh network protocol widely used for remote connections to Unix-like operating systems can also use public key cryptography for logging in. Users connecting to systems by ssh generate their own key pairs and associate their public keys with their user accounts on every server they want to connect to. Each time a user connects to the server, the server verifies that they're in possession of the corresponding private key by exploiting the fact that only the private key can decrypt messages encrypted with the public key.
Signed, sealed, and delivered
Both RSA keys can be used to encrypt data. For secure communications, the public key is used to encrypt and the private key to decrypt, but the reverse is also useful: using the private key to encrypt and the public key to decrypt. This proves authenticity, because if a message can be decrypted using the public key, it must have been encrypted by the private key.
To create a digital signature, first the message is fed into a hash algorithm. Hash algorithms convert variable length data into fixed length numbers. Simple hash algorithms are widely used to provide probabilistic detection of changes to data due to things like transmission errors or disk corruption. Hash algorithms used in cryptographic applications are generally a bit more complex, because they're designed to ensure that it's all but impossible for someone to construct a file in order to have a specific hash value.
The output of this hash function, called the hash, is then encrypted using the private key. The encrypted hash and the message are then distributed.
To verify the signature, the encrypted hash is decrypted. The message is hashed using the same hash algorithm, and this hash is compared to the decrypted value. If they match, then it proves two things: first that the message was sent by the owner of the private key, second that the message has not been subsequently modified.
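The sign-then-verify flow can be sketched with the same toy RSA pair. One illustrative liberty: the tiny modulus forces reducing the SHA-256 hash mod n, which a real implementation would never do (real RSA signatures pad the full hash instead):

```python
import hashlib

# Toy RSA key pair (n = 61 * 53; illustration only).
n, e, d = 3233, 17, 2753

def sign(message: bytes) -> int:
    """Hash the message, then 'encrypt' the hash with the private key d."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    """Decrypt the signature with the public key e and compare hashes."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

sig = sign(b"pay Alice 5 dollars")
assert verify(b"pay Alice 5 dollars", sig)       # genuine message passes
assert not verify(b"pay Alice 500 dollars", sig) # tampered message fails
```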
Public Key Infrastructure
The biggest single use of public key cryptography, however, is building public key infrastructure (PKI). PKI uses public key cryptography to create digital certificates that allow for secure attestation of identity.
Digital certificates are little chunks of data that contain some information about an identity (so they could represent a company, a subdivision of a company, or an individual person, say), some usage information about the certificate itself (a certificate might be legitimate only for signing e-mail messages, or for signing software, or for encrypting files), and a public key.
These chunks of data are then all signed by a certificate authority; an organization that's in some sense trusted to have performed verification of the identity of the person the certificate belongs to. Each certificate also has a corresponding private key that is given to the certificate's owner.
Like normal public keys, certificates are freely published; the private keys, of course, are not.
PKI is something that most of us unwittingly use every single day, because these certificates are used for Web serving with secure HTTP (HTTPS). HTTPS is used for both encryption, to prevent people from eavesdropping on your connection to your bank, and also authentication, so that you can prove that it's really your bank you're talking to and not some hacker.
The authentication aspect is provided by the use of digital certificates: whenever a client connects to a server, the server sends its certificate for the client to inspect. For its encryption, HTTPS follows the same pattern that we've seen before: bulk encryption of the data using a symmetric algorithm and a random key, with the key transported using asymmetric encryption. The specifications underpinning HTTPS (namely, Secure Sockets Layer [SSL] and Transport Layer Security [TLS]) allow a variety of symmetric and asymmetric algorithms, but RSA is one of the more common. When RSA is used, the server certificate's public key itself is used to encrypt the secret key.
Private parts
Steal someone's private key and you can masquerade as them to any system depending on public key cryptography. Take someone's PGP key and you can read their encrypted mail. Break into a bank's Web server, take its HTTPS certificate's private key, and you can stick your own server on the Web and run the perfect phishing site, tricking the unsuspecting customers of the bank into handing over their account details.
As such, it's important to keep private keys private. The best approach for doing this depends on what the key is used for. Private keys can generally be password protected, which provides some protection should the key be lost. However, this can be awkward for many usage scenarios; you don't want a rebooted Web server to be unable to use HTTPS until someone has typed in a password, after all.
If passwords aren't suitable, the next best bet is simply to strive to ensure the integrity of all machines used to store private keys. This means ensuring that systems are patched and malware free, and that access to systems holding important private keys is as restricted as possible. To minimize exposure in the case of a break-in, it's also common for websites to separate the machines that perform HTTPS from the ones that run Web applications; this way, a flaw in the application that exposes the server to hackers doesn't automatically mean that the private key is exposed.
In extreme cases, for particularly sensitive keys, private keys are stored offline and used only when absolutely necessary.
Should they get stolen, the results can be pretty disastrous.
Two chip companies, Realtek and JMicron, had their private keys stolen, probably in 2009 or 2010. These private keys corresponded to certificates that were authorized for code signing. 64-bit Windows requires device drivers to have a digital signature signed by a code-signing certificate. Hardware companies buy the code-signing certificates for the drivers they produce.
In the case of Realtek and JMicron, the stolen private keys were used to sign drivers that were used by the infamous Stuxnet malware. Armed with these private keys, Stuxnet's authors could produce drivers that Windows trusted and that appeared to come from harmless hardware companies from East Asia.
PKI provides a partial solution to this kind of compromise. Certificate authorities don't just sign certificates; they also have the power to revoke them. If a certificate authority is notified that a private key has been compromised for some reason, the CA can add the certificate to its list of revoked certificates. In theory, any system depending on PKI should check these revocation lists and refuse to use certificates found on them. Unfortunately, practice doesn't always match up with theory.
For private keys used outside of PKI, there's no easy solution, as there's no formal revocation procedure, and the impact will vary depending on how the keys were used. Accidental, unexploited disclosure of ssh keys may require nothing more serious than generation of a new key pair, removal of the old public key from all systems it's stored on, and the uploading of the new key.
This is a problem recently faced by GitHub, the project hosting and source control service. GitHub uses ssh for remote access to its source repositories and uses RSA keys to authenticate that access. It was recently noticed that many GitHub users had uploaded their private keys to public GitHub repositories, giving anyone the ability to modify those repositories. GitHub has responded by removing all the corresponding public keys and hence preventing those private keys from being used for further access.
For keys that were only used for GitHub that's probably sufficient, but anyone who used their keys for other purposes will probably have some work to do.
There is, however, a class of computer that can efficiently solve both such problems. Quantum computers that encode data using quantum features such as superposition and entanglement do have effective algorithms for integer factorization and discrete logarithms. A big enough quantum computer could relatively rapidly break RSA and other mainstream public key algorithms.
Fortunately, it's proving rather difficult to make big quantum computers. The number n, calculated in RSA by multiplying the two large prime numbers p and q, is typically 1,024 or 2,048 bits long. To factorize that number with a quantum computer would require at a minimum as many quantum bits (qubits) as there are bits in n. In practice, to account for error correction and other concerns, it could require the square of the number of bits, so about a million qubits to factorize a 1,024 bit n.
The largest number factorized by a quantum computer, however, is perhaps 143, on a system with just 4 qubits. One company claims, contentiously, to have 128-qubit processors, but even this is a long way short of the million qubits needed to crack real RSA keys.
Even if realistic quantum computers should be developed, there will be new asymmetric encryption algorithms devised that will be deliberately resistant to quantum attacks. Public key cryptography is a hidden but fundamental part of modern life: even quantum leaps in computing technology aren't going to make it go away.
Encryption algorithms are designed so that performing the decryption process is unfeasibly hard without knowing the key.
The algorithms can be categorized in many different ways, but perhaps the most fundamental is the distinction between symmetric and asymmetric encryption.
Before 1973, every known encryption algorithm was symmetric. What this means is that the key used to encrypt the plaintext is the same key used to decrypt the ciphertext; if you know the key, you can decrypt any data encrypted with it. This means that the key must be kept secret—only people authorized to read the messages must know it, and those people who do know it can read every single message that uses it.
This in turn makes symmetric algorithms tricky to use in practice, because those keys must be securely transported somehow. Key transport wasn't so difficult in the 6th century BC, when the first encryption algorithm, called the Caesar cipher, was invented. If you wanted to share a key with someone, you could just tattoo the key onto the shaved head of a slave, wait for his hair to grow back, and then send the recipient of your message the slave.
Unfortunately, that approach doesn't scale very well. If you want to buy something from Amazon or do some online banking, you probably don't have the time to wait for your slave's hair to grow back, and given the multitude of e-commerce sites out there, you may not even have enough slaves to go around.
It took 2,500 years of on-and-off cryptography invention and research for a solution to be found to this problem. That solution is asymmetric encryption. With asymmetric encryption there is not one key, but two related keys. Messages encrypted with one of the keys can't subsequently be decrypted with that same key. Instead, you have to use the other key for decryption. Typically, one key is designated as the public key and is published widely. The other is designated the private key and is kept secret.
The first public key encryption algorithm: RSA
The first algorithms using asymmetric keys were devised in secret by the British government's SIGINT agency, GCHQ, in 1973. That work was not made public, however, until 1997, when the British government declassified it.The first published, commercially available algorithm (which happened to work in much the same way as GCHQ's worked, albeit in a more generalized form) was invented in 1977, and it was called RSA after the names of its three inventors (Ron Rivest, Adi Shamir and Leonard Adleman).
Symmetric algorithms essentially depend on jumbling up the plaintext in complex ways and doing so lots of times; the key is what specifies the exact jumbling up pattern that was used. Asymmetric algorithms take a different approach. They depend on the existence of hard mathematical problems. The keys here are solutions to these mathematical problems.
The problem that RSA is built around is that of integer factorization. Every integer has a unique prime factorization; that is, it can be written as a multiplicative product of prime numbers. For example, 50 is 2×52. While factorization is something that we all learn at school, it's actually a hard problem, especially when dealing with large numbers.
The naive algorithm you learn in school is called "trial division"; you divide the number you are trying to factorize by each prime number in turn, starting at 2 and working your way up, and check to see if it's wholly divisible. If it's wholly divisible then you then factorize the number that's left over, starting at the same prime as you just divided by. If it isn't, you try the next prime number. You only stop when you've tried every prime number less than or equal to the square root of the number you're trying to factorize.
This is easy enough for small numbers, but for big numbers it's very time-consuming. There are various algorithms that are a bit quicker than trial division, but only incrementally so.
RSA key pairs (that is, the pair of private and public keys) are generated in the following way:
- Choose two large, distinct prime numbers, called p and q.
- Compute p × q, and call this n. The size of this number in bits is the key length, and it gives an indication of how strong the key is: the longer, the better.
- Compute (p - 1) × (q - 1), and call this φ(n).
- Pick an integer e that is coprime with φ(n). That is to say, a prime factorization of e should have no factors shared by a prime factorization of φ(n).
- Calculate d such that d × e = 1 (mod φ(n)). This multiplication uses modulo arithmetic (also known as clock arithmetic). Modulo arithmetic wraps around. It counts normally from 0 to mod φ(n) - 1, then wraps around back to 0 again. This is much the same as the way that on a clock, the number after 12 isn't 13; it's 1.
Encryption and decryption are both quite simple. A ciphertext c is generated from a plaintext m as follows:
c = me (mod n)
Decryption is similar:
m = cd (mod n)
Here's where the difficulty of integer factorization is important: if n could easily be factorized then anyone could determine p and q, hence φ(n), and hence d. If d were known to everyone then they could decrypt the messages encrypted with e with ease.
Conventionally, e, the public key, is chosen to be a smaller number than d, typically 65,537.
Since RSA's invention, other asymmetric encryption algorithms have been devised; like RSA, they're all built on the assumption that a particular mathematical problem is hard to solve, usually either integer factorization or the discrete logarithm. Surprisingly, the assumption of difficulty is not actually proven in the case of either integer factorization or the discrete logarithm, and there is a possibility that a "fast" algorithm will be devised, which could render some kinds of public key cryptography insecure.
Public key cryptography in practice
Given a public key algorithm, what's it actually good for?Perhaps surprisingly, one of the things it's not good for is general-purpose encryption. Those exponentiation operations used in RSA's encryption and decryption (or similar operations in other algorithms) are slow and expensive to compute; as a result, RSA isn't well-suited to encrypting large chunks of data. What it is good for, however, is encrypting small pieces of data—such as the encryption keys for symmetric algorithms, which are good for encrypting large amounts of data.
An example of this is the PGP program, first released in 1991. PGP, standing for "Pretty Good Privacy," is a program for sending secure messages using a combination of symmetric and asymmetric encryption algorithms. Everyone who wants to receive PGP-secured messages publishes their PGP public key. Traditionally, this is an RSA public key, though nowadays other algorithms are also supported.
To send a secure message, you first generate a random key for use with a symmetric algorithm. You use that key and the symmetric algorithm to encrypt your message. The original PGP implementation used an algorithm called IDEA for this purpose, though modern versions offer a variety of options for this, too. You then encrypt that random key with the RSA public key and bundle the two things—symmetrically encrypted message, asymmetrically encrypted random key.
To decrypt the message, the recipient uses his private key to decrypt the random key, and then uses the random key to decrypt the message itself.
Similar schemes are used for the S/MIME secure e-mail standard: public key cryptography is used to secure keys, and symmetric cryptography is used for the bulk encryption.
Disk encryption systems like Microsoft's BitLocker also uses a similar approach on systems equipped with TPM (Trusted Platform Module) chips. These chips securely store encryption keys and in principle only allow authorized software to access them. The bulk encryption of the data on the disk uses the AES symmetric algorithm; that key is then encrypted using RSA.
The ssh network protocol widely used for remote connections to Unix-like operating systems can also use public key cryptography for logging in. Users connecting to systems by ssh generate their own key pairs and associate their public keys with their user accounts on every server they want to connect to. Each time a user connects to the server, the server verifies that they're in possession of the corresponding private key by exploiting the fact that only the private key can decrypt messages encrypted with the public key.
Signed, sealed, and delivered
Both RSA keys can be used to encrypt data. For secure communications, the public key is used to encrypt and the private key to decrypt, but the reverse is also useful; using the private key to encrypt and the public key to decrypt. This is useful because it proves authenticity. If a message can be decrypted using the public key then that proves that it was encrypted by the private key.To create a digital signature, first the message is fed into a hash algorithm. Hash algorithms convert variable length data into fixed length numbers. Simple hash algorithms are widely used to provide probabilistic detection of changes to data due to things like transmission errors or disk corruption. Hash algorithms used in cryptographic applications are generally a bit more complex, because they're designed to ensure that it's all but impossible for someone to construct a file in order to have a specific hash value.
The output of this hash function, called the hash, is then encrypted using the private key. The encrypted hash and the message are then distributed.
To verify the signature, the encrypted hash is decrypted. The message is hashed using the same hash algorithm, and this hash is compared to the decrypted value. If they match, then it proves two things: first that the message was sent by the owner of the private key, second that the message has not been subsequently modified.
Public Key Infrastructure
The biggest single use of public key cryptography, however, is building public key infrastructure (PKI). PKI uses public key cryptography to create digital certificates that allow for secure attestation of identity.Digital certificates are little chunks of data that contain some information about an identity (so they could represent a company, a subdivision of a company, or an individual person, say), some usage information about the certificate itself (a certificate might be legitimate only for signing e-mail messages, or for signing software, or for encrypting files), and a public key.
These chunks of data are then all signed by a certificate authority; an organization that's in some sense trusted to have performed verification of the identity of the person the certificate belongs to. Each certificate also has a corresponding private key that is given to the certificate's owner.
Like normal public keys, certificates are freely published; the private keys, of course, are not.
PKI is something that most of us unwittingly use every single day, because these certificates are used for Web serving with secure HTTP (HTTPS). HTTPS is used for both encryption, to prevent people from eavesdropping on your connection to your bank, and also authentication, so that you can prove that it's really your bank you're talking to and not some hacker.
The authentication aspect is provided by the use of digital certificates: whenever a client connects to a server, the server sends its certificate for the client to inspect. For its encryption, HTTPS follows the same pattern that we've seen before: bulk encryption of the data using a symmetric algorithm and a random key, with the key transported using asymmetric encryption. The specifications underpinning HTTPS (namely, Secure Sockets Layer [SSL] and Transport Layer Security [TLS]) allow a variety of symmetric and asymmetric algorithms, but RSA is one of the more common. When RSA is used, the server certificate's public key itself is used to encrypt the secret key.
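This hybrid pattern can be sketched in a few lines. The sketch below again uses a toy RSA key, and a hash-derived XOR keystream stands in for a real symmetric cipher such as AES; both choices are assumptions made purely for illustration, not anything a real TLS stack would do.

```python
import hashlib
import secrets

# Toy RSA key pair standing in for the server certificate's key (not secure).
p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))

def keystream(key: int, length: int) -> bytes:
    # Derive a pseudo-random keystream from the key by iterated hashing.
    out, block = b"", key.to_bytes(4, "big")
    while len(out) < length:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:length]

def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

# Client: pick a random session key, bulk-encrypt the data with it,
# then transport the session key under the server's RSA public key.
session_key = secrets.randbelow(n)
plaintext = b"GET /account HTTP/1.1"
ciphertext = xor(plaintext, keystream(session_key, len(plaintext)))
wrapped_key = pow(session_key, e, n)

# Server: unwrap the session key with the private key, then decrypt the bulk data.
recovered_key = pow(wrapped_key, d, n)
recovered = xor(ciphertext, keystream(recovered_key, len(ciphertext)))
print(recovered)  # b'GET /account HTTP/1.1'
```

The design point is that only the short session key ever passes through the slow asymmetric operation; everything else uses the fast symmetric cipher.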
Private parts
Steal someone's private key and you can masquerade as them to any system depending on public key cryptography. Take someone's PGP key and you can read their encrypted mail. Break into a bank's Web server and take its HTTPS certificate's private key and you can stick your own server on the Web and run the perfect phishing site, tricking the unsuspecting customers of the bank into handing over their account details.
As such, it's important to keep private keys private. The best approach for doing this depends on what the key is used for. Private keys can generally be password protected, which provides some protection should the key be lost. However, this can be awkward for many usage scenarios; you don't want a rebooted Web server to be unable to use HTTPS until someone has typed in a password, after all.
If passwords aren't suitable, the next best bet is simply to strive to ensure the integrity of all machines used to store private keys. This means ensuring that systems are patched and malware free, and that access to systems holding important private keys is as restricted as possible. To minimize exposure in the case of a break-in, it's also common for websites to separate the machines that perform HTTPS from the ones that run Web applications; this way, a flaw in the application that exposes the server to hackers doesn't automatically mean that the private key is exposed.
In extreme cases, for particularly sensitive keys, private keys are stored offline and used only when absolutely necessary.
Should they get stolen, the results can be pretty disastrous.
Two chip companies, Realtek and JMicron, had their private keys stolen, probably in 2009 or 2010. These private keys corresponded to certificates that were authorized for code signing. 64-bit Windows requires device drivers to have a digital signature signed by a code-signing certificate. Hardware companies buy the code-signing certificates for the drivers they produce.
In the case of Realtek and JMicron, the stolen private keys were used to sign drivers that were used by the infamous Stuxnet malware. Armed with these private keys, Stuxnet's authors could produce drivers that Windows trusted and that appeared to come from harmless hardware companies from East Asia.
PKI provides a partial solution to this kind of compromise. Certificate authorities don't just sign certificates; they also have the power to revoke them. If a certificate authority is notified that a private key has been compromised for some reason, the CA can add the certificate to its list of revoked certificates. In theory, any system depending on PKI should check these revocation lists and refuse to use certificates found on them. Unfortunately, practice doesn't always match up with theory.
For private keys used outside of PKI, there's no easy solution, as there's no formal revocation procedure, and the impact will vary depending on how the keys were used. Accidental, unexploited disclosure of ssh keys may require nothing more serious than generation of a new key pair, removal of the old public key from all systems it's stored on, and the uploading of the new key.
This is a problem recently faced by GitHub, the project hosting and source control service. GitHub uses ssh for remote access to its source repositories and uses RSA keys to authenticate that access. It was recently noticed that many GitHub users had uploaded their private keys to public GitHub repositories, giving anyone the ability to modify those repositories. GitHub has responded by removing all the corresponding public keys and hence preventing those private keys from being used for further access.
For keys that were only used for GitHub that's probably sufficient, but anyone who used their keys for other purposes will probably have some work to do.
Our quantum future?
Keeping the private key private only matters if integer factorization (or discrete logarithms, for the other major family of asymmetric encryption algorithms) remains a hard problem. It's still not proven that this is the case, leaving the door open to a fast algorithm at some point in the future. For the moment, conventional computers struggle at these problems.
There is, however, a class of computer that can efficiently solve both problems. Quantum computers, which encode data using quantum features such as superposition and entanglement, have effective algorithms for both integer factorization and discrete logarithms. A big enough quantum computer could relatively rapidly break RSA and other mainstream public key algorithms.
Fortunately, it's proving rather difficult to make big quantum computers. The number n, calculated in RSA by multiplying the two large prime numbers p and q, is typically 1,024 or 2,048 bits long. To factorize that number with a quantum computer would require at a minimum as many quantum bits (qubits) as there are bits in n. In practice, to account for error correction and other concerns, it could require the square of the number of bits, so about a million qubits to factorize a 1,024 bit n.
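The scale of the problem for classical machines is easy to demonstrate. Trial division, the naive classical approach sketched below, finds the factors of a toy modulus instantly but scales with the square root of n, which is astronomically large for a 1,024-bit modulus; the second line reproduces the rough qubit estimate from the text.

```python
# Naive trial-division factorization: instant for a toy n, hopeless for a
# 1,024-bit n, whose smallest factor is itself hundreds of digits long.
def factor(n):
    f = 3  # assumes n is odd and composite
    while n % f:
        f += 2
    return f, n // f

p, q = 61, 53
print(factor(p * q))  # (53, 61)

# The estimate from the text: with error correction, roughly the square of
# the bit length in qubits, so about a million for a 1,024-bit modulus.
print(1024 ** 2)  # 1048576
```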
The largest number factorized by a quantum computer, however, is perhaps 143, on a system with just 4 qubits. One company claims, contentiously, to have 128 qubit processors but even this is a long way short of the million qubits needed to crack real RSA keys.
Even if practical quantum computers are eventually developed, new asymmetric encryption algorithms will be devised that are deliberately resistant to quantum attacks. Public key cryptography is a hidden but fundamental part of modern life: even quantum leaps in computing technology aren't going to make it go away.
Monday, February 11, 2013
Securing your PC with TrueCrypt
By Martin Brinkmann on December 11, 2005 - Tags: encryption
Only a few days ago I wrote a first small article about TrueCrypt and recommended it. Back then I had bought a USB 2.0 hard drive with 300 GB capacity and encrypted its entire partition with TrueCrypt. This was done to test the program's functionality, but also to see if it would slow down my main computer (Athlon 64 3000+, 1 GB RAM).
To my great surprise it did not slow down the PC, and I decided to expand the encryption to cover all my hard drives. Let me tell you why and how I did this, and why you should consider it too.
Why?
The first question that comes to my mind, and probably yours as well, is: why would someone want to encrypt his hard drives, or part of them? (Note that you can also encrypt other storage devices, such as USB sticks.)
There are numerous reasons for this. It can be as profane as hiding your daily dose of naked ladies from your wife, hiding personal information from other people who might have access to your PC, or encrypting files on a removable storage device so that they cannot be accessed if the device is stolen.
Now what?
Now, why encrypt the whole drive(s) and not just a small part of them?
This is a good question, and I have to answer it at some length. Let me first tell you that TrueCrypt is not able to encrypt an operating system and boot from it at the same time. That means you either use a second, unencrypted operating system or move all sensitive user data to the encrypted partitions.
As I said earlier, I had only encrypted the removable USB hard drive. All the tools I use daily were still on the unencrypted internal drives. Guess what happens when I open OpenOffice and load a document from the encrypted drive?
It leaves traces. Recently used files are normally shown, and the document probably ends up in Windows' cache as well. That means that although the file itself is encrypted, it could still be accessed by other means. There are lots of scenarios like this: a browser caches the pages you visit, a media player keeps records of recently played files, and so on.
Wouldn't it be much more secure if those tools were also stored on an encrypted disk?
The setup:
I decided to do the following. I already have a partition for the operating system. All other partitions would be encrypted. The user data from the operating system would reside on an encrypted disk, as would the pagefile and all other caching.
As a side note, one could also install a clean operating system on that partition and use VMware to run another operating system on the encrypted drives. BartPE is another possibility; the operating system would then be stored on a read-only device.
All my tools reside on the encrypted drives, making it impossible for someone else to access them (unless the PC is left running unattended).
How to:
I suppose you are already using your drives. TrueCrypt will erase all data on a partition when it is applied to it, so you should move or back up your files before you start this process.
Download TrueCrypt and install the program. Download the TrueCrypt user manual as well. Then back up or move your files, if you have not done so already.
Start TrueCrypt and select Create Volume. You have the choice of creating a standard or a hidden TrueCrypt volume. The difference is this: a hidden volume has its own passphrase and always resides inside a standard volume. If someone forces you to reveal the passphrase, you provide the one for the standard volume. It is impossible to tell whether a hidden volume exists, even when the standard volume has been mounted, because TrueCrypt partitions are always filled with random data and the two cannot be distinguished.
Select a standard volume now. In the next window you have the option of storing the encrypted data in a file or encrypting a whole device. Since we want to encrypt a complete hard drive, select device and choose the hard drive you want encrypted.
Encryption Options:
You now have to select an encryption algorithm and a hash algorithm. I don't want to recommend one over another, but as of now none has been officially cracked. Some people discuss their choices on the official TrueCrypt forum; if you are unsure, you might want to go there. You can also use Wikipedia for more information (Blowfish information in this example).
Make sure that in the next step the whole hard disk space will be encrypted.
Selecting a password:
You will have to select a password, which you will be asked for every time you want to mount your encrypted drive. Recommendations are that it should be 20+ characters consisting of a mixture of upper- and lowercase letters, special characters, and numbers. It is hard to remember at first, but it becomes easier over time. It is suggested that you do not write it down, but that's up to you.
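If you'd rather not invent such a password by hand, a short script can generate one. The helper below is a hypothetical example, not part of TrueCrypt; it draws 24 characters from the full mix of letters, digits, and special characters recommended above, using a cryptographically secure random source.

```python
import secrets
import string

# Hypothetical helper: generate a 20+ character password drawn from
# upper- and lowercase letters, digits, and special characters.
def make_password(length: int = 24) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(make_password())  # e.g. a 24-character random string
```

The `secrets` module (rather than `random`) matters here: it is designed for security-sensitive randomness, so the password cannot be predicted from earlier outputs.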
Volume Format:
Move the mouse around for 30+ seconds, select a file system (NTFS is recommended for Windows XP), leave the cluster size at its default, and click Format afterwards. The whole partition will be formatted and encrypted, and any data left on the device will be lost forever, so make sure there is nothing on it that you still need.
Mounting:
You have to mount an encrypted partition to make it available in Windows. Choose Select Device in the main TrueCrypt window and pick the encrypted drive. Then click Mount and enter your passphrase. If it is correct, the drive will appear and you can fill it with data.
The drive letter remains the same as before, so there should not be any problems with broken program links or the like.
Final Words:
Depending on whether you chose an unencrypted operating system, BartPE, or VMware, you need to make sure that all personal data and caches are stored on the encrypted partition. I strongly suggest you use one of the latter two for the best security.
If you encounter errors, I suggest you visit the TrueCrypt forum, which is well visited and contains lots of valuable topics from users who have had problems with the tool.
I myself decided to give BartPE a go and forget about the idea of keeping the operating system on the unencrypted partition. This saves a lot of the hassle of moving all cache and personal data locations to ones on the encrypted drive.
Friday, February 8, 2013
Rob Cheng (Global) - Malware Storm: The US Department of Homeland Security advised last week that users disable Java
Posted by Rob Cheng, pcpitstop.com, 02/07/2013
The US Department of Homeland Security advised last week that users disable Java. This is unprecedented. The government felt this is a computing problem so severe that it must intervene. Java is a real and present threat not only to our national security but to our computers, privacy, and wallets. The DHS has no motivation to sow misinformation or fear, and it should be heeded.
The Evolution of Malware
Virus writers are having a field day. A new industry has blossomed around exploit kits. Talented programmers sell their exploit kits for $3,000 a pop to help their fellow malware writers deliver their payloads more effectively.
An exploit kit locates out-of-date software that allows the payload to be executed without the user's consent or knowledge. To be clear: just browse to a compromised website, and you are infected. Malware coders have become quite competent at infecting legitimate websites as well, so a good website today might be an infected one tomorrow.
Researchers estimate that over half of all infections come through a single kit, called the Black Hole Kit. It is not possible for a layperson to obtain the Black Hole Kit, but research indicates that Black Hole's primary target is Java.
In late 2012, the NY Times published a controversial piece questioning the effectiveness of modern antivirus software. The shocking conclusion was that, across an exhaustive analysis of over 40 antivirus products, there was only a 5% chance of detecting and defeating a new threat. That is, even a computer running 40+ antivirus products simultaneously would stand only a scant 5% chance of being safe from new threats.
The security industry's response was quick and critical. The motivation, methodology and veracity of the report were questioned. One particularly seething rebuttal discredited the piece and concluded that people should spend as much as they can afford on multiple security solutions.
Ironically, the security industry is doing fine financially. In fact it's a bonanza. As infections rise, we are spending more money on security software, as well as hiring technicians to remove the malware from our computers.
Malware Morphing
Security software detects and blocks malware using a technique called blacklisting. Once a virus is released into the wild, it begins infecting computers. During this infection period, the virus is captured by a security company, tested, confirmed, and then added to its blacklist. Once on one blacklist, the other security vendors' blacklists are updated too. As that happens, that particular virus declines in infections and eventually dies out.
Malware makers have found a hole in the blacklist methodology through a technique called morphing. Once a virus is written, it morphs, so that one virus appears to the antivirus software to be a thousand different ones. Polymorphic viruses have created an explosion in malware. There is now more bad software than good software!
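The weakness morphing exploits is easy to demonstrate. A blacklist of this kind typically keys on a fingerprint of the file, such as a cryptographic hash; the sketch below (an illustration, not any vendor's actual implementation) shows that changing even a single byte of a sample produces a completely different fingerprint, so the morphed copy sails past the list.

```python
import hashlib

# A minimal hash-based blacklist, as an illustration of the technique.
blacklist = set()

def fingerprint(sample: bytes) -> str:
    return hashlib.sha256(sample).hexdigest()

original = b"\x90\x90MALICIOUS-PAYLOAD"
blacklist.add(fingerprint(original))

# The morphing engine flips one byte; the payload's behavior is unchanged.
morphed = b"\x90\x91MALICIOUS-PAYLOAD"

print(fingerprint(original) in blacklist)  # True  -- known sample is blocked
print(fingerprint(morphed) in blacklist)   # False -- morphed copy slips through
```

This is why each morphed variant must be captured, analyzed, and listed all over again, which is exactly the treadmill the next paragraph describes.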
The morphing has created a headache for the security industry. The daily number of viruses to be analyzed has exploded. Analysis is a manual process, and many of the security heavyweights have created malware research centers in the Philippines to keep up with the spike. The problem, though, is that they have forgotten that these viruses are morphing: by the time a virus has been identified and the blacklist updated, that variant is no longer in the wild.
Held to Ransom
Private planes, luxury yachts, and all the trappings of wealth are the riches of the malware gold rush. Malware exists for financial reasons: its makers trick users into downloading the payload, then hold the computer hostage until a ransom is paid. This type of activity should be illegal, but the virus industry is thriving and awash in cash.
The nouveau-riche virus barons treat their business as a business. They have deadlines, program managers, product roadmaps, and all the workings of a modern software company. On their roadmaps are Macs, iPhones, tablets, and so on. It is just a matter of time.
About 10 years ago, we were on a similar path. Computers were infected with spyware that tracked activity and blanketed the screen with "contextual" popup advertising. The computer became useless, and trust in the wonders of the Internet was waning. Like today, the major security vendors dropped the ball; people were getting infected despite having the best security software money could buy. Like today, the software installed itself surreptitiously, without our consent. A decade ago, it was called drive-by downloads; today it is called exploits and vulnerabilities.
We survived the storm of 10 years ago. The antispyware industry was born and was ultimately consolidated into the antivirus industry. The most important event, however, was Microsoft's launch of XP Service Pack 3. XP SP3 eliminated drive-by downloads and added a host of new security features. In one fell swoop, Microsoft stopped the spyware storm. Windows XP SP3 was not bulletproof; it just made infection a lot more difficult. So difficult, in fact, that firms such as WhenU and Gator were no longer financially viable.
We are at a crossroads. As a decade ago, will the people prevail over the criminals who make viruses today? Unfortunately, Microsoft has lost its focus on making a great and secure operating system. A solution will arise, and it will be free, like XP Service Pack 3, for quick adoption. I hope the criminals won't know what hit them, and then the bankers can foreclose on those ill-gotten mansions.
By Rob Cheng, pcpitstop.com
Thursday, February 7, 2013
Wednesday, February 6, 2013
IBM security tool can catch insider threats, fraud
IBM package uses big data to watch for internal, external security threats
By Ellen Messmer, Network World
January 30, 2013 11:42 AM ET
IBM today rolled out a tool it says can cull massive terabytes of data -- including email -- to help customers detect external attacks aimed at stealing sensitive information, or insider threats that might reveal corporate secrets.
The tool, called IBM Security Intelligence with Big Data, is built on top of two core IBM products: the IBM enterprise version of open-source Hadoop database with analytics tools known as InfoSphere BigInsights, plus the IBM QRadar security event and information management (SIEM) product that IBM obtained when it acquired Q1 Labs back in 2011.
At its heart, IBM Security Intelligence with Big Data -- IBM suggests a 500-terabyte cluster would be a likely starting point -- would collect and analyze, at high speed, data that includes packet captures and security-event information from firewalls and other gear, along with a stream of content that might include anything from raw email to scraped SharePoint content, among other business information. The idea is to pull from this voluminous stream the clues that indicate a company is under attack or has been compromised, and how.
The tool, called IBM Security Intelligence with Big Data, is built on top of two core IBM products: the IBM enterprise version of open-source Hadoop database with analytics tools known as InfoSphere BigInsights, plus the IBM QRadar security event and information management (SIEM) product that IBM obtained when it acquired Q1 Labs back in 2011.
At its heart, IBM Security Intelligence with Big Data -- IBM thinks 500 terabytes cluster size would be a likely starting point -- would collect and analyze data at high speed data that would include packet-capture data, security-event information from firewalls and other gear, and analyze a stream of content that might include anything from raw email to scrapped SharePoint content, among other business information. The idea is to pull from this voluminous stream the clues that indicate a company is under attack or has been compromised and how.
IBM's CTO Sandy Bird said the technology is most likely to first be adopted by large companies with data scientists on staff. He acknowledged there's still a lot to be learned about which analytical models and patterns will be most successful in threat detection. IBM Security Intelligence with Big Data can in theory be applied to cloud-based services, but its starting point is likely to be deployment near the enterprise data center, where the massive amounts of data it needs are most easily accessed.
The tool is already being deployed in some large corporations and governments. Mark Clancy, chief information security officer at financial firm Depository Trust & Clearing Corporation, said the bank is using IBM's technology to get real-time security awareness. "We need to move from a world where we 'farm' security data and alerts with various prevention and detection tools to a situation where we actively 'hunt' for cyber-attackers in our networks."
IBM is not alone in talking up big data as a critical tool for security threat detection in the coming years. RSA, the security division of EMC, recently disclosed that it is getting into the field as well -- even betting the company's future on it -- with a product announcement anticipated soon.
Gartner analyst Neil MacDonald said players to watch include IBM, HP and RSA, which all have traditional SIEM technologies and are developing analytics to take on the big data challenges around advanced threat detection.
"Gartner believes the information-security problem can really only be solved with big data services," said MacDonald, noting that the term "big data" applies here to situations where combining large volumes or velocity of data, often contextual, requires a new approach for the purposes of advanced threat detection.
MacDonald said this data might be a combination of reputational analysis, firewall logs, network packet data and other contextual information used to determine whether an attack or compromise has occurred. Today, larger organizations such as big banks and the Defense Department are doing this mainly by building their own big-data security tools, he said. But buying complex tools like this rather than building them is likely to prove more attractive, and possibly more cost-effective, in the future.
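The combination MacDonald describes -- reputation, firewall logs, network data -- amounts to scoring each host against several contextual signals at once. The weights, thresholds and signal names below are made up for illustration; a real system would learn or tune them, but the shape of the calculation is similar:

```python
def risk_score(reputation, firewall_denies, outbound_mb):
    """Combine hypothetical contextual signals into one risk score.

    reputation      -- IP reputation in [0.0, 1.0], 1.0 = clean
    firewall_denies -- count of blocked connection attempts
    outbound_mb     -- outbound transfer volume in megabytes
    """
    score = 0.0
    score += (1.0 - reputation) * 40          # poor reputation raises risk
    score += min(firewall_denies, 50) * 0.8   # repeated blocked attempts, capped
    score += 20 if outbound_mb > 500 else 0   # unusually large outbound volume
    return round(score, 1)

# A clean host scores low; one with bad reputation, many blocked
# attempts and heavy outbound traffic scores much higher.
print(risk_score(1.0, 0, 10))      # 0.0
print(risk_score(0.5, 10, 600))    # 48.0
```

Hunting for attackers, in Clancy's phrase, then reduces to ranking hosts by such a score and investigating the outliers rather than waiting for a single tool to raise an alert.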
It's all still considered emerging technology, but big data put into service for security should evolve to be useful for small to midsize companies as well as large ones, MacDonald said. It's possible big data for security could one day be delivered as a service, he suggested. IBM's Bird said that may be possible eventually, but for now big data for security is seeing its initial deployments in large organizations with mountains of sensitive information at stake.
For a deployment of IBM Security Intelligence with Big Data, pricing looks like this: QRadar is priced per appliance and by the quantity of data collected (events and network flows per second), while BigInsights is priced by the total storage capacity of the cluster. QRadar pricing starts below $50,000; BigInsights pricing starts below $50,000 for a 5TB storage system.
Ellen Messmer is senior editor at Network World, an IDG publication and website, where she covers news and technology trends related to information security. Twitter: @MessmerE. Email: emessmer@nww.com.