Thursday, June 28, 2012

Report on DigiNotar affair published

Report on DigiNotar affair published. June 28, 2012, 07:54, ANP NIEUWS. The Dutch Safety Board (Onderzoeksraad voor Veiligheid, OVV) publishes today the results of the investigation it has conducted over the past months in response to the affair surrounding the now-bankrupt company DigiNotar. The investigation focused on the question of how the government, which used security certificates issued by the hacked company, safeguards the digital security of citizens. The DigiNotar affair began last summer when a hacker broke into the company. DigiNotar issued security certificates for (government) websites, certificates that thus turned out not to guarantee security. The case caused great commotion about the (in)security of citizens' digital dealings with the government. The OVV conducted the investigation at the request of the Ministry of the Interior (Binnenlandse Zaken). The investigators also examined how local governments handle digital security.

'Government digital security must improve'

'Government digital security must improve'. June 28, 2012, 12:19, ANP NIEUWS. Governments cannot vouch for the security of digital data on citizens, among others, because they do not have that security in order in all cases. According to the Dutch Safety Board (OVV), administrators are insufficiently aware of the risks. There are, however, no acute security problems. The Safety Board draws this conclusion today in a report. The OVV states that the Tax and Customs Administration (Belastingdienst) and the Social Insurance Bank (Sociale Verzekeringsbank) have their affairs better in order.

Government checks too little

In general, however, the government relies too much on other parties and does too little checking of its own on whether rules are being followed. "We do not want to claim that all risks can be ruled out, but the government must be aware of the risks," emphasizes chairman Tjibbe Joustra. "When things go wrong, you must be able to limit the damage."

Sharper oversight needed

The Safety Board therefore wants administrative oversight to be tightened and the government to ensure that the responsible administrators gain more knowledge about managing digital security. Agreements meant to guarantee security have existed for some time, but are 'observed only to a limited extent'. That must improve, the board stresses. The government should also render account publicly.

Tuesday, June 26, 2012

LinkedIn Password Breach: 9 Facts Key To Lawsuit

LinkedIn's privacy policy promised users "industry standard protocols and technology," but a class action lawsuit claims LinkedIn failed to deliver. Take a closer look at the security issues. By Mathew J. Schwartz, InformationWeek, June 26, 2012 11:36 AM

Did LinkedIn fail to follow "industry standard" information security practices? That's the charge leveled against the business-oriented social networking site in a class action lawsuit filed last week in U.S. District Court. Interestingly, the lawsuit doesn't reference any existing U.S. regulation or law that would have required LinkedIn to meet industry standards for security. Instead, it points to LinkedIn's privacy policy, which promises users that "personal information you provide will be secured in accordance with industry standards and technology." Another part of that policy likewise promises to use "industry standard protocols and technology."

1. Breach Facts Remain Scarce

Here's what's known about the breach: hashes of 6.5 million LinkedIn users' passwords were uploaded to a hacking forum earlier this month by a hacker who requested help with cracking them. Interestingly, no easy passwords appeared to be part of the upload, and there were no duplicates, suggesting that the attacker had already cracked those and edited down the list of uploaded passwords. In light of those facts, Tal Be'ery, the Web security research team leader at Imperva's Application Defense Center, thinks the number of breached accounts is at least 10 million.

2. Don't Expect the Class Action Lawsuit To Succeed

But did LinkedIn's customers suffer damages due to the data breach? Furthermore, can consumers sue a private business based on its privacy policy--which is policed by the Federal Trade Commission--and on questions of whether "industry standard" protocols were used?
"I think it might be a difficult legal case," said Sean Sullivan, security advisor at F-Secure Labs. "In the court of public opinion? It's a different story."

3. Data Breaches Can Be Difficult To Detect

At this point, LinkedIn has yet to provide any details about how many accounts were affected, or how the attacker managed to grab a password database--or databases--containing information on millions of accounts. It appears that LinkedIn didn't know it had been hacked until the passwords showed up on the password-cracking forum. That has led to charges that LinkedIn's security practices weren't sufficiently robust. For comparison's sake, however, FBI officials have said that in the course of cybercrime investigations they often turn up evidence that businesses have been breached but remained unaware of it until the bureau informed them.

4. "Standard" Security Approaches Are Often Weak

Of course, what that suggests is that many businesses' standard approaches to information security involve poor standards. Often lacking are specific processes for avoiding and dealing with data breaches, although a recent study did find that businesses in the United States are getting better at handling breaches.

5. No Business Is 100% Breach-Proof

Even with the most advanced security program, however, experts say that data breaches should always be treated as a "when, not if" proposition. "If an adversary wants to get into your network, they're going to do it--it doesn't matter how much technology you use. Eventually you're going to lose," said Jerry Johnson, CIO at Pacific Northwest National Laboratory, speaking via phone. Of course, the LinkedIn breach could also have been caused by a trusted insider, against whom many security defenses simply wouldn't work.

6. Password Best Practice: Salt

Of the information currently available about the LinkedIn security breach, one notable fact is that the business didn't salt its passwords.
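The "no salt" point is easy to demonstrate. Below is a minimal sketch of how a leaked table of unsalted SHA-1 hashes (the scheme LinkedIn was reported to be using) is cracked with a wordlist; every password and hash here is made up for illustration:

```python
import hashlib

# Hypothetical leaked table of unsalted SHA-1 password hashes.
# All values below are invented for this example.
leaked = {
    hashlib.sha1(b"linkedout").hexdigest(),
    hashlib.sha1(b"sunshine1").hexdigest(),
}

# With no salt, a cracker hashes each wordlist candidate once and
# checks it against every leaked hash simultaneously.
wordlist = [b"password", b"sunshine1", b"123456", b"linkedout"]
cracked = {
    hashlib.sha1(word).hexdigest(): word
    for word in wordlist
    if hashlib.sha1(word).hexdigest() in leaked
}

for digest, word in sorted(cracked.items()):
    print(digest[:12], "->", word.decode())
```

This is the core weakness of an unsalted scheme: identical passwords produce identical hashes, so a single pass over a wordlist tests the entire database at once.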
"Salting password hashes has been good practice for 20 years or more. LinkedIn wasn't salting its password hashes. As a result, in my opinion, LinkedIn failed to meet minimal standards that users would expect them to follow to secure their information," said Graham Cluley, senior technology consultant at Sophos, via email. "Of course, that doesn't mean that LinkedIn are the only ones who are failing to reach such a minimal standard. My expectation is that there are many other websites out there making similar mistakes--but we just don't know about them," said Cluley. Notably, two password breaches that came to light the same week as the LinkedIn breach, involving eHarmony and Last.fm, likewise revealed that neither site had salted its passwords.

7. Security: Where To Find Standards

Failing to salt passwords suggests a more widespread lack of effective security practices, and there are a number of not just standard practices but actual standards that all businesses should be pursuing. "In particular, the OWASP top 10 are commonly seen as industry standard, and referred to in other standards like PCI," said Johannes Ullrich, chief research officer at SANS Institute, via email. For example, here's what the OWASP top 10 section on "insecure cryptographic storage" has to say about passwords: "Ensure passwords are hashed with a strong standard algorithm and an appropriate salt is used."
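The OWASP guidance quoted above ("a strong standard algorithm and an appropriate salt") can be sketched with Python's standard library. The iteration count and salt length below are illustrative choices, not a vetted production configuration:

```python
import hashlib
import os

def hash_password(password, salt=None):
    """Salted, iterated hash (PBKDF2-HMAC-SHA256); returns (salt, digest)."""
    if salt is None:
        salt = os.urandom(16)  # a unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)
    return salt, digest

def verify(password, salt, digest):
    """Re-derive the digest from the stored salt and compare."""
    return hashlib.pbkdf2_hmac("sha256", password, salt, 100_000) == digest

# Two users with the same password get different stored records, so a
# single wordlist pass can no longer attack the whole database at once.
salt1, digest1 = hash_password(b"hunter2")
salt2, digest2 = hash_password(b"hunter2")
print("digests differ:", digest1 != digest2)
print("verification works:", verify(b"hunter2", salt1, digest1))
```

The salt is stored alongside the digest; it is not a secret. Its job is only to make every hash unique, forcing an attacker to crack each account separately.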
Ullrich also pointed to the Common Weakness Enumeration (CWE) system, which is billed as a "community-developed dictionary of software weakness types," and which specifically calls out the use of a one-way hash without a salt as one of the top 25 most dangerous software errors.

8. Security Involves More Than Hashing

When it comes to LinkedIn, however, take the related password discussion with, yes, a grain of salt. "No salting is indeed a bad practice, but I think the whole hashing and salting discussion is missing the main point," said Imperva's Be'ery. "It's very natural to focus on it, as the only thing we know for a fact is that 6.5 million of LinkedIn's hashed passwords were leaked. It's like having a bank robbery that was discovered by finding the bills in circulation, and [having] the press discussing whether and how the bills should be marked, while the real question is: How was the bank robbed in the first place?" Or as F-Secure's Sullivan said, when it comes to LinkedIn, "I'd be curious to know how the internal production systems were secured."

9. LinkedIn: Security Facts Still Outstanding

In other words, a few password facts aside, very big questions about LinkedIn's security practices have yet to be publicly detailed. "Hashing and salting, much like bill marking, is a secondary measure of protection," Be'ery said. "The main protection is supposed to keep the bad guys away from the data or the money." "So the real question here is, how the data was breached," he said. "Did LinkedIn use 'industry standard protocols and technology' with respect to breach protection? Did they pen test their app? Did they use a Web application firewall? Did the hackers use some super new '0 day' attack, or did they use some very common Web application attacks such as SQL injection or remote file inclusion?" Until those questions get answered, expect discussions of LinkedIn's security to remain largely academic.
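Since Be'ery names SQL injection as one of the "very common" candidate attack vectors, here is a minimal, self-contained illustration (SQLite in memory, made-up table and values) of a string-built query versus a parameterized one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pw_hash TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'deadbeef')")

evil = "' OR '1'='1"

# Vulnerable: attacker input is pasted directly into the SQL text,
# so the injected OR clause turns the WHERE filter into a tautology
# and every row is returned.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '%s'" % evil
).fetchall()

# Safe: a parameterized query treats the input strictly as data,
# never as SQL, so the malicious string matches nothing.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (evil,)
).fetchall()

print("string-built query leaks:", vulnerable)
print("parameterized query returns:", safe)
```

The fix costs nothing at runtime; parameterized queries (or an ORM that uses them) are the baseline defense every web application is expected to have.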

Saturday, June 23, 2012

Code crackers break 923-bit encryption record

In what was thought an impossibility, researchers break the longest code ever over a 148-day period using 21 computers. By Dara Kerr, June 20, 2012 5:41 PM PDT

Before today, no one thought it was possible to successfully break a 923-bit code. And even if it was possible, scientists estimated it would take thousands of years. However, over 148 days and a couple of hours, using 21 computers, the code was cracked. Working together, Fujitsu Laboratories, the National Institute of Information and Communications Technology, and Kyushu University in Japan announced today that they broke the world record for cryptanalysis using next-generation cryptography. "Despite numerous efforts to use and spread this cryptography at the development stage, it wasn't until this new way of approaching the problem was applied that it was proven that pairing-based cryptography of this length was fragile and could actually be broken in 148.2 days," Fujitsu Laboratories wrote in a press release. Using "pairing-based" cryptography on this code has led to the standardization of this type of code cracking, says Fujitsu Laboratories. Scientists say that breaking the 923-bit encryption, which is 278 digits long, would have been impossible using previous "public key" cryptography; but using pairing-based cryptography, scientists were able to apply identity-based encryption, keyword searchable encryption, and functional encryption. "The cryptanalysis is the equivalent to spoofing the authority of the information system administrator," Fujitsu Laboratories wrote. "As a result, for the first time in the world we proved that the cryptography of the parameter was vulnerable and could be broken in a realistic amount of time." Researchers from NICT and Hakodate Future University hold the previous world record for code cracking, which required far less computer power. They managed to figure out a 676-bit, or 204-digit, encryption in 2009.
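A rough feel for why 923 bits is so much harder than the previous 676-bit record comes from the generic subexponential cost model L_N[1/3, c] used for sieve-style discrete-logarithm algorithms. The constant c = 1.923 below is the classic number field sieve value and the model itself is an illustrative assumption, not the exact complexity of the function-field-sieve attack the researchers used:

```python
import math

def sieve_work(bits, c=1.923):
    """Illustrative subexponential cost L_N[1/3, c] for an N of `bits` bits."""
    ln_n = bits * math.log(2)  # natural log of a `bits`-bit number
    return math.exp(c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

ratio = sieve_work(923) / sieve_work(676)
print(f"modeled work ratio, 923-bit vs 676-bit: ~{ratio:.0f}x")
```

Note how slowly this grows compared with a symmetric cipher, where 247 extra key bits would multiply the work by 2^247; that subexponential scaling is what put a 923-bit record within reach of 21 computers and 148 days.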
About Dara Kerr Dara Kerr, a freelance journalist based in the Bay Area, is fascinated by robots, supercomputers and Internet memes. When not writing about technology and modernity, she likes to travel to far-off countries. She is a member of the CNET Blog Network and is not an employee of CNET.

Friday, June 22, 2012

LinkedIn sued for $5 million over data breach


The rising risk of electronic medical records

The rising risk of electronic medical records. By Jason Dearen | June 20, 2012, 7:44 AM PDT

It was a low-tech burglary. No one thought it would blossom into a high-tech security breach. All it took was a rock--a simple, inanimate, probably centuries-old rock. An enterprising thief picked it up, cocked his arm and tossed it through the window of a Sutter Health office building in Sacramento, Calif. It couldn't have been easier. Once inside, he found what he was looking for: laptops, monitors and desktop computers. Jackpot. The burglary could have ended there--until Sutter, a network of doctors and hospitals in northern California, realized that one of the purloined computers contained the electronic medical data of more than four million patients. Some of it dated back to 1995. Worse, the data were not encrypted. The only thing standing between the data and anyone interested in accessing and selling it was a computer password. Today, Sutter still doesn't know what happened to the data. The case remains open.

This kind of thing isn't supposed to happen. But it does--sometimes by accident. A year earlier, the health records of 20,000 Stanford Hospital patients made their way onto a public website after the data were accidentally used as part of a job skills test. The private medical data were exposed for nearly a year before officials ordered them taken down. A $20 million lawsuit was filed, but no one really knows whether the valuable information was copied.

The sensitive personal information contained in medical records is becoming more accessible than ever as the United States embarks on a fast and unprecedented shift to electronic health records. Today, many of these records are stored in databases called health information exchanges, or HIEs, which are linked together online--making a treasure trove of data accessible to myriad hospital workers, insurance companies and government employees.
Unsurprisingly, social security numbers, health histories and other personal data from breached or stolen electronic health records are routinely used by identity thieves. Criminals can buy social security numbers online for about $5 each, but medical profiles can fetch $50 or more because they give identity thieves a much more nuanced look into a victim's life, said Dr. Deborah Peel, founder of the advocacy group Patient Privacy Rights, which researches data breaches and works for tighter security on people's personal health records.

Some privacy experts worry that current federal law will allow pharmaceutical companies, law enforcement, insurance providers and others to exploit these data without a patient's knowledge or consent. The pharmaceutical industry already uses medical data--for example, pregnant women who use certain medications often will fill out a voluntary questionnaire asking for more information--to market new products as the child grows. Worse, when records contain errors, linked electronic systems only magnify them, privacy groups argue, giving insurance companies and employers inaccurate ammunition with which to deny employment to candidates.

Yet the number of patient records contained in electronic databases is ballooning, fueled by billions of federal stimulus dollars. Recent healthcare legislation championed by U.S. President Barack Obama furthers the cause, imposing fines beginning in 2015 on providers who do not make the shift. The effort is propelled by the belief that a more nimble and connected healthcare system will save billions of dollars and improve the overall standard of care. "The stimulus bill was like pouring gasoline on a fire," said Lee Tien, a privacy law attorney at the Electronic Frontier Foundation in San Francisco. "It was a slow-moving fire before, but then it got very big and a lot of people began chasing the money.
But there was very little [in the bill] that did much on the privacy and security side."

With funds, privacy concerns

The federal government's $19 billion investment in electronic medical record conversion has already created a massive market for HIEs, which share patient records held in physicians' offices with institutions large and small. Technology companies from IT industry heavyweights such as Google, IBM, General Electric and Dell to startups operate in the market. The demand for this data has indirectly fueled a criminal enterprise that seems to be growing: hospitals reported losses or thefts of electronic medical data 364 times from 2010 to 2011, in incidents that affected 18 million patients, according to Associated Press reports. The rapid adoption of networked electronic records has centralized massive amounts of valuable data faster than law and policy can evolve to protect people's privacy. Privacy lawyers and healthcare policy experts worry that the rapid transition could expose millions of medical records to profit-seeking companies and law-enforcement agencies without patients' consent. "We were always very happy from a privacy and security standpoint because things were moving slowly and we could look at the national and state standards," Tien said.

Medical data means big business

Today, there is no federal law in the U.S. requiring that patients be notified when their records are added to an exchange. Nor is there any way of knowing if and when thousands of people might gain access to your personal information: once a person's data is entered into an exchange, there is little control over who can access it among the thousands of employees who work in a hospital, from clerks to surgeons to third-party vendors hired to manage these new, complex systems. But the number of exchanges continues to grow.
There are at least 255 in operation or in the final stages of development, said Jason Goldwater of eHealth Initiative, a Washington, D.C.-based health-care technology research group that tracks HIEs. Since HIEs are intended to share data, it's no surprise that the number of entities with access to them--whether hospitals or insurance companies--doubled between 1997 and 2010, according to a study by the data privacy lab at Carnegie Mellon University. So, too, has the number of people willing to pay for this information grown. Pharmaceutical companies seek better information about their customers' behavior, while tabloid newspapers seek scoops on celebrities such as Britney Spears and George Clooney, both of whom had records leaked by hospital employees who had no business having access to them. "Our health records will have an enormous value in the future as genetic profiles are added," Tien said. "So whatever rules we have for privacy and security, they better be up to snuff to guard against the powerful incentives to get hold of that information."

Security loopholes

Patient data is protected in some ways in the U.S. by a federal law known as HIPAA, the Health Insurance Portability and Accountability Act. If data are encrypted, as happens in many exchanges, hospitals are not required by federal law to contact patients when their records are added to an exchange--even if doing so allows many more people to access that information. Still, there is one group exempt from HIPAA regulations: law enforcement. Police investigators and prosecutors already use health records in many kinds of cases, including health-care fraud allegations, crimes committed in hospitals and even some rape and assault cases. Health information exchanges could increase access, making the long arm of the law much longer by giving investigators access to a much larger pool of data. Under HIPAA, police investigators can access medical records when they deem them necessary for a case.
Further, the Patriot Act, passed in 2001 to combat terrorism, allows federal investigators to get access to medical records with a warrant. As patient data becomes more centralized, current laws will give police and federal agents much easier and deeper access to personal data, creating a host of unprecedented civil-liberties issues. "The electronic health records system soon may provide the cops with access in their station to a terminal with everyone's health records," said Bob Gellman, a Washington, D.C.-based privacy and information policy consultant. "If they have a list of wanted people and they marry their system to the healthcare electronic records, they can find out when a suspect's next doctor appointment is. Under [current law], that's probably allowable." Gellman said the police exemption is problematic, since that data could easily be sent from law enforcement to another party, like a business or government research institution. A chain is only as strong as its weakest link. "[The law] says hospitals can disclose records to [law enforcement] at will," Gellman said. "Cops can get records with no procedure at all. I think that's inadequate."

Behind the data, a stigma

But concern over medical privacy goes beyond privacy law, civil rights or even ethics. For many people, there is grave concern over the potential for exchanged digital records to turn personal problems public. When Peel opened her first psychiatric practice in Brownsville, Texas, in the 1970s, many of her patients in the U.S.-Mexico border town had a similar concern: could they pay to keep their medical records private? Word travels fast in Brownsville, a city of 175,000 people, and Peel's patients were worried that if their paper records somehow became public, they would be stigmatized for their medical diagnoses.
Schizophrenia, depression and other mental illnesses continue to be poorly understood by the public; at worst, those who suffer from them are stigmatized in their communities. "If the information leaked to an employer, it would have affected their jobs or reputations. All the time I've been practicing, it's been a very important and delicate issue," Peel said. "There are prejudices associated with psychiatric diagnoses. People have powerful reactions to the names of these things." Once genetic profiles are routinely added to the mix, access to electronic health data may predetermine who can get jobs or serve in public office, Peel warned. While genetic information may help physicians fend off severe diseases earlier than ever, it may also be used to stigmatize people, stripping them of opportunity based on some familial history of disease. "If the world looked like that," Peel said, "Lou Gehrig would never get a contract to be a ball player if the team knew he had a disease that would degenerate his muscles, or Ronald Reagan would never get elected president if they knew dementia ran in his family."

Tuesday, June 12, 2012

Next Generation Encryption Algorithms

Cisco Blog > Security. Panos Kampanakis | June 12, 2012 at 9:43 am PST

Over the years, numerous cryptographic algorithms have been developed and used in many different protocols and functions. Cryptography is by no means static. Steady advances in computing and in the science of cryptanalysis have made it necessary to continually adopt newer, stronger algorithms and larger key sizes. Older algorithms are supported in current products to ensure backward compatibility and interoperability. However, some older algorithms and key sizes no longer provide adequate protection from modern threats and should be replaced.

Over the years, some cryptographic algorithms have been deprecated, "broken," attacked, or proven to be insecure. Research publications have compromised or weakened the perceived security of almost all algorithms, whether through reduced-step attacks or through others (known plaintext, bit flipping, and more). Additionally, advances in computing reduce the cost of information processing and data storage every year, eroding the effective security of existing key sizes. Because of Moore's law, and a similar empirical law for storage costs, symmetric cryptographic keys must grow by 1 bit every 18 months. For an encryption system to have a useful shelf life and securely interoperate with other devices throughout its life span, the system should provide security for 10 or more years into the future. The use of good cryptography is more important now than ever before because of the very real threat of well-funded and knowledgeable attackers.

Next Generation Encryption (NGE) technologies satisfy the security requirements described above while using cryptographic algorithms that scale better. For more information on which Legacy, Acceptable, Recommended, and NGE algorithms should be avoided or used in your networks, refer to our latest whitepaper.
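The "1 bit every 18 months" rule of thumb translates directly into a provisioning margin. A quick sketch of the arithmetic, assuming (hypothetically) a design judged adequate at 112 bits of symmetric security today and the 10-year shelf life the post recommends:

```python
import math

MONTHS_PER_BIT = 18      # the Moore's-law rule of thumb cited above
SHELF_LIFE_YEARS = 10    # recommended useful life of the system
TODAY_BITS = 112         # hypothetical: security level judged adequate now

# Each 18-month interval within the shelf life costs one bit of margin.
extra_bits = math.ceil(SHELF_LIFE_YEARS * 12 / MONTHS_PER_BIT)
print(f"margin needed: {extra_bits} extra bits")
print(f"provision for roughly {TODAY_BITS + extra_bits} bits of security")
```

Under these assumptions a system shipping today should be provisioned several bits above its current requirement, which is one reason NGE favors algorithms and key sizes with comfortable headroom rather than ones sitting at the edge of feasibility.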