Thursday, June 28, 2012
Report on DigiNotar affair published 28 June 2012 07:54 ANP NIEUWS The Dutch Safety Board (Onderzoeksraad voor Veiligheid, OVV) today publishes the results of the investigation it conducted over the past months in response to the affair surrounding the now-bankrupt company DigiNotar. The investigation focused on the question of how the government, which used security certificates from the hacked company, safeguards the digital security of citizens. The DigiNotar affair began last summer, when a hacker broke into the company. DigiNotar issued security certificates for (government) websites, certificates that thus turned out not to guarantee security. The case caused great commotion about the (in)security of citizens' digital dealings with the government. The OVV conducted the investigation at the request of the Ministry of the Interior. The investigators also examined how local governments handle digital security.
'Digital security in government must improve' 28 June 2012 12:19 ANP NIEUWS Governments cannot vouch for the security of digital data on citizens, for example, because they do not have that security in order in all cases. According to the Dutch Safety Board (OVV), administrators are insufficiently aware of the risks. There are, however, no acute security problems. The Safety Board draws this conclusion today in a report. The OVV notes that the Tax Administration (Belastingdienst) and the Social Insurance Bank (Sociale Verzekeringsbank) have their affairs in better order. Government checks too little In general, however, the government relies too much on other parties and does too little itself to check whether rules are being observed. “We do not claim that all risks can be ruled out, but the government must be aware of the risks,” emphasizes chairman Tjibbe Joustra. “When things go wrong, you must be able to limit the damage.” Sharper oversight needed The Safety Board therefore wants administrative oversight to be tightened and the government to ensure that the responsible administrators gain more knowledge about managing digital security. Agreements intended to guarantee security have existed for some time, but are ‘only partially observed’. That must improve, the Board stresses. The government should also account for itself publicly.
Tuesday, June 26, 2012
Saturday, June 23, 2012
In what was thought an impossibility, researchers break the longest code ever over a 148-day period using 21 computers. by Dara Kerr June 20, 2012 5:41 PM PDT Before today, no one thought it was possible to successfully break a 923-bit code. And even if it were possible, scientists estimated it would take thousands of years. However, over 148 days and a couple of hours, using 21 computers, the code was cracked. Working together, Fujitsu Laboratories, the National Institute of Information and Communications Technology, and Kyushu University in Japan announced today that they broke the world record for cryptanalysis of next-generation cryptography. "Despite numerous efforts to use and spread this cryptography at the development stage, it wasn't until this new way of approaching the problem was applied that it was proven that pairing-based cryptography of this length was fragile and could actually be broken in 148.2 days," Fujitsu Laboratories wrote in a press release. Cracking a code of this length, Fujitsu Laboratories says, sets a benchmark for the standardization of pairing-based cryptography. Scientists had considered breaking the 923-bit encryption, which is 278 digits long, a practical impossibility; pairing-based cryptography goes beyond previous "public key" cryptography in enabling identity-based encryption, keyword-searchable encryption, and functional encryption. "The cryptanalysis is the equivalent to spoofing the authority of the information system administrator," Fujitsu Laboratories wrote. "As a result, for the first time in the world we proved that the cryptography of the parameter was vulnerable and could be broken in a realistic amount of time." Researchers from NICT and Hakodate Future University hold the previous world record for code cracking, which required far less computer power: they managed to crack a 676-bit, or 204-digit, encryption in 2009.
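As a sanity check on the figures quoted above: an n-bit number has roughly n × log10(2) ≈ 0.301n decimal digits, which is how 923 bits corresponds to 278 digits and 676 bits to 204. A short Python sketch (my own illustration, not part of the article) confirms both conversions:

```python
def decimal_digits(bits: int) -> int:
    """Number of decimal digits in the largest bits-bit integer, 2**bits - 1."""
    return len(str(2**bits - 1))

# The 2012 Fujitsu/NICT/Kyushu University record: a 923-bit key.
print(decimal_digits(923))  # 278
# The previous (2009) NICT/Hakodate Future University record: 676 bits.
print(decimal_digits(676))  # 204
```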
About Dara Kerr Dara Kerr, a freelance journalist based in the Bay Area, is fascinated by robots, supercomputers and Internet memes. When not writing about technology and modernity, she likes to travel to far-off countries. She is a member of the CNET Blog Network and is not an employee of CNET.
Friday, June 22, 2012
The rising risk of electronic medical records By Jason Dearen | June 20, 2012, 7:44 AM PDT It was a low-tech burglary. No one thought that it would blossom into a high-tech security breach. All it took was a rock — a simple, inanimate, probably centuries-old rock. An enterprising thief picked it up, cocked his arm and tossed it through the window of a Sutter Health office building in Sacramento, Calif. It couldn’t have been easier. Once inside, he found what he was looking for: laptops, monitors and desktop computers. Jackpot. The burglary could have ended there — until Sutter, a network of doctors and hospitals in northern California, realized that one of the purloined computers contained the electronic medical data for more than four million patients. Some of it dated back to 1995. Worse, the data were not encrypted. The only thing standing between that information and anyone interested in accessing and selling it was a computer password. Today, Sutter still doesn’t know what happened to the data. The case remains open. This kind of thing isn’t supposed to happen. But it does — sometimes by accident. A year earlier, the health records of 20,000 Stanford Hospital patients made their way onto a public website after the data were accidentally used as part of a job skills test. The private medical data were exposed for nearly a year before officials ordered them taken down. A $20 million lawsuit was filed, but no one really knows if the valuable information was copied. The sensitive personal information contained in medical records is becoming more accessible than ever as the United States embarks on a fast and unprecedented shift to electronic health records. Today, many of these records are stored in databases called health information exchanges, or HIEs, which are linked together online — making a treasure trove of data accessible to myriad hospital workers, insurance companies and government employees.
Unsurprisingly, social security numbers, health histories and other personal data from breached or stolen electronic health records are routinely used by identity thieves. Criminals can buy social security numbers online for about $5 each, but medical profiles can fetch $50 or more because they give identity thieves a much more nuanced look into a victim’s life, said Dr. Deborah Peel, founder of the advocacy group Patient Privacy Rights, which researches data breaches and works for tighter security on people’s personal health records. Some privacy experts worry that current federal law will allow pharmaceutical companies, law enforcement, insurance providers and others to exploit these data without a patient’s knowledge or consent. The pharmaceutical industry already uses medical data — for example, pregnant women who use certain medications often will fill out a voluntary questionnaire asking for more information — to market new products as the child grows. Worse, when records contain errors, linked electronic systems only magnify the errors, privacy groups argue, giving insurance companies and employers inaccurate ammunition to deny employment to candidates. Yet the number of patient records contained in electronic databases is ballooning, fueled by billions of federal stimulus dollars. Recent healthcare legislation championed by U.S. President Barack Obama furthers the cause, imposing fines beginning in 2015 for providers who do not make the shift. The effort is propelled by the belief that a more nimble and connected healthcare system will save billions of dollars and improve the overall standard of care. “The stimulus bill was like pouring gasoline on a fire,” said Lee Tien, a privacy law attorney at the Electronic Frontier Foundation in San Francisco. “It was a slow-moving fire before, but then it got very big and a lot of people began chasing the money.
But there was very little [in the bill] that did much on the privacy and security side.” With funds, privacy concerns The federal government’s $19 billion investment in electronic medical record conversion has already created a massive market for HIEs, which share patient records held in physicians’ offices with institutions large and small. Technology companies, from IT industry heavyweights such as Google, IBM, General Electric and Dell to startups, operate in the market. The demand for this data has indirectly fueled a criminal enterprise that seems to be growing: hospitals reported losses or thefts of electronic medical data 364 times from 2010 to 2011 in incidents that affected 18 million patients, according to Associated Press reports. The rapid adoption of networked electronic records has centralized massive amounts of valuable data faster than law and policy can evolve to protect people’s privacy. Privacy lawyers and healthcare policy experts worry that the rapid transition could expose millions of medical records to profit-seeking companies and law-enforcement agencies without patients’ consent. “We were always very happy from a privacy and security standpoint because things were moving slowly and we could look at the national and state standards,” Tien said. Medical data means big business Today, there is no federal law in the U.S. requiring that a patient be notified when their records are added to an exchange. There is no way of knowing if and when thousands of people might gain access to your personal information, either: once a person’s data is entered into an exchange, there is little control over who can access it among the thousands of employees who work in a hospital, from clerks to surgeons to third-party vendors hired to manage these new, complex systems. But the number of exchanges continues to grow.
There are at least 255 in operation or in the last stages of development, said Jason Goldwater of eHealth Initiative, a Washington, D.C.-based health-care technology research group that tracks HIEs. Since HIEs are intended to share data, it’s no surprise that the number of entities with access to them — whether hospitals or insurance companies — doubled between 1997 and 2010, according to a study by the data privacy lab at Carnegie Mellon University. So, too, has the number of people willing to pay for this information grown. Pharmaceutical companies seek better information about their customers’ behavior while tabloid newspapers seek scoops on celebrities such as Britney Spears and George Clooney, both of whom had records leaked by hospital employees who had no business having access to them. “Our health records will have an enormous value in the future as genetic profiles are added,” Tien said. “So whatever rules we have for privacy and security, they better be up to snuff to guard against the powerful incentives to get hold of that information.” Security loopholes Patient data is protected in some ways in the U.S. by a federal law known as HIPAA, the Health Insurance Portability and Accountability Act. If data are encrypted, as happens in many exchanges, hospitals are not required by federal law to contact patients when their records are added to the exchange — even if doing so allows many more people to access that information. Still, there is one group exempt from HIPAA regulations: law enforcement. Police investigators and prosecutors already use health records in many different kinds of cases, including health-care fraud allegations, crimes committed in hospitals and even some rape and assault cases. Health information exchanges could increase access, making the long arm of the law much longer by giving investigators access to a much larger pool of data. Under HIPAA, police investigators can access medical records when they deem them necessary for a case. 
Further, the Patriot Act, passed in 2001 to combat terrorism, allows federal investigators to get access to medical records with a warrant. As patient data becomes more centralized, current laws will give police and federal agents much easier and deeper access to personal data, creating a host of unprecedented civil liberties issues. “The electronic health records system soon may provide the cops with access in their station to a terminal with everyone’s health records,” said Bob Gellman, a Washington, D.C.-based privacy and information policy consultant. “If they have a list of wanted people and they marry their system to the healthcare electronic records, they can find out when a suspect’s next doctor appointment is. Under [current law], that’s probably allowable.” Gellman said the police exemption is problematic, since that data could easily be sent from law enforcement to another party, like a business or government research institution. A chain is only as strong as its weakest link. “[The law] says hospitals can disclose records to [law enforcement] at will,” Gellman said. “Cops can get records with no procedure at all. I think that’s inadequate.” Behind the data, a stigma But concern over medical privacy goes beyond privacy law, civil rights or even ethics. For many people, there is grave concern over the potential for exchanged digital records to turn personal problems public. When Peel opened her initial psychiatric practice in Brownsville, Texas in the 1970s, many of her first patients in the U.S.-Mexico border town had a similar concern: could they pay to keep their medical records private? Word travels fast in Brownsville, a city of 175,000 people, and Peel’s patients were worried that if their paper records somehow became public, they would be stigmatized for their medical diagnoses.
Schizophrenia, depression and other mental illnesses continue to be poorly understood by the public; at worst, those who suffer from them are stigmatized in their communities. “If the information leaked to an employer, it would have affected their jobs or reputations. All the time I’ve been practicing, it’s been a very important and delicate issue,” Peel said. “There are prejudices associated with psychiatric diagnoses. People have powerful reactions to the names of these things.” Once genetic profiles are routinely added to the mix, access to electronic health data may predetermine who can get jobs or serve in public office, Peel warned. While genetic information may help physicians fend off severe diseases earlier than ever, it may also be used to stigmatize people who will be stripped of opportunity based on some familial history of disease. “If the world looked like that,” Peel said, “Lou Gehrig would never get a contract to be a ball player if the team knew he had a disease that would degenerate his muscles, or Ronald Reagan would never get elected president if they knew dementia ran in his family.”
Sunday, June 17, 2012
Tuesday, June 12, 2012
Cisco Blog > Security Panos Kampanakis | June 12, 2012 at 9:43 am PST Over the years, numerous cryptographic algorithms have been developed and used in many different protocols and functions. Cryptography is by no means static. Steady advances in computing and in the science of cryptanalysis have made it necessary to continually adopt newer, stronger algorithms and larger key sizes. Older algorithms are supported in current products to ensure backward compatibility and interoperability. However, some older algorithms and key sizes no longer provide adequate protection from modern threats and should be replaced. Over the years, some cryptographic algorithms have been deprecated, “broken,” attacked, or proven insecure. Research publications have compromised, or affected the perceived security of, almost all algorithms, using reduced-step attacks or other techniques (known plaintext, bit flipping, and more). Additionally, advances in computing reduce the cost of information processing and data storage every year; to retain effective security, key sizes must grow accordingly. Because of Moore’s law, and a similar empirical law for storage costs, symmetric cryptographic keys must grow by 1 bit every 18 months. For an encryption system to have a useful shelf life and securely interoperate with other devices throughout its life span, the system should provide security for 10 or more years into the future. The use of good cryptography is more important now than ever before because of the very real threat of well-funded and knowledgeable attackers. Next Generation Encryption (NGE) technologies satisfy the security requirements described above while using cryptographic algorithms that scale better. For more information on which Legacy, Acceptable, Recommended, and NGE algorithms should be avoided or used in your networks, refer to our latest Whitepaper.
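The one-bit-every-18-months rule above can be turned into a rough sizing estimate. The sketch below (an illustration of the rule of thumb, not taken from the Cisco whitepaper) projects the symmetric key size needed in a future year to preserve the security margin a given key size provides today, assuming attacker capability doubles every 18 months:

```python
def projected_key_bits(base_bits: int, base_year: int, target_year: int) -> int:
    """Key size needed in target_year to preserve the margin that base_bits
    offered in base_year, assuming attacker computing power doubles every
    18 months (so keys must grow by 1 bit per 18 months)."""
    months = (target_year - base_year) * 12
    return base_bits + months // 18

# A 128-bit key considered adequate in 2012...
print(projected_key_bits(128, 2012, 2012))  # 128
# ...would need roughly 148 bits to offer the same margin 30 years later.
print(projected_key_bits(128, 2012, 2042))  # 148
```

Combined with the 10-year shelf-life requirement, this kind of estimate suggests provisioning devices with keys sized for the threat landscape at end of life rather than at time of shipment.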