Monday, May 28, 2012

Stuxnet, Duqu and Flame are all examples of cases where we - the antivirus industry - have failed.

Flame (aka Flamer, aka Skywiper) is a massive, complex piece of malware used for information gathering and espionage. It was most likely created by a Western intelligence agency or military, and it has infected computers in Iran, Lebanon, Syria, Sudan and elsewhere.

There seems to be a clear difference between how online espionage is done from China and how it's done from the West. Chinese actors prefer targeted attacks via spoofed emails with booby-trapped documents attached. Western actors seem to avoid email, instead using USB sticks or targeted break-ins to gain access.

The worst part of Flame? It has been spreading for years. All of these cases were spreading undetected for extended periods of time.

More information from:
•Budapest University of Technology and Economics' Laboratory of Cryptography and System Security (CrySyS)
•Securelist (Kaspersky)
•Iran National CERT (MAHER)

Tuesday, May 22, 2012

Security Priorities for Banks
Gartner's Chuvakin on Mobile, Cloud, Hacktivist Attacks
By Tracy Kitten, May 18, 2012

From mobile and the cloud to DDoS attacks and risks surrounding big data, what should banks and credit unions do now to mitigate exposure? Gartner's Anton Chuvakin offers his top recommendations.

Chuvakin, who joined Gartner in 2011, says that because most banking institutions have spent far too much time focusing on compliance instead of security, many have missed opportunities to exploit the full potential of the fraud-detection and prevention technologies in which they've invested. "Compliance is meant to drive security, not replace it," Chuvakin says. "Compliance is a motivator, not the end goal."

What does Chuvakin recommend? That banking institutions invest in technologies that offer more transactional visibility. Banks and credit unions need systems that can raise red flags when suspicious activity occurs. But for anomalous-behavior detection to be effective, institutions must have adequate data collection, data analysis and skilled people who can filter through all of it. "Having data that flows into the technologies and then having people smart enough to analyze the data is the key," he says. "I want to see more people focus on technology and skills in visibility."

But it's easy for banking institutions to get ahead of themselves, Chuvakin warns, by trying to master more data than they can handle. "In 90 percent of (breach) cases, the evidence of the intrusion was in the logs or other monitoring technologies," he says. "To me, this means we're not doing enough to get this visibility to collect data ... or nobody is looking at the data."
During this interview, Chuvakin discusses:
•The dangers of moving too much data to the cloud, and how some organizations may get too comfortable with public cloud environments;
•Why the industry must pay more attention to denial-of-service attacks;
•Four steps every banking institution should take now to ensure security.

Before Chuvakin joined Gartner, his job responsibilities included security product management, evangelism, research, competitive analysis, PCI-DSS compliance, and SIEM development and implementation. He is the author of "Security Warrior" and "PCI Compliance," and was a contributor to "Know Your Enemy II," "Information Security Management Handbook" and others. He has published dozens of papers on log management, SIEM, correlation, security data analysis, PCI-DSS, and security management. His blog, "Security Warrior," has grown to become one of the most popular in the industry.

Chuvakin has also taught classes and presented at security conferences across the world; he recently addressed audiences in the U.S., the U.K., Singapore, Spain, Russia and other countries. He has worked on emerging security standards and served on advisory boards of several security startup companies.
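Chuvakin's prescription, collect the data and then have people and tooling actually look at it, can be made concrete with a toy example. The sketch below is purely illustrative: the log data, function name and threshold are invented, not taken from the interview. It flags transaction amounts whose modified z-score, based on the median absolute deviation (MAD), marks them as outliers; unlike the plain standard deviation, the MAD is not inflated by the outlier itself.

```python
from statistics import median

# Hypothetical transaction log: (account, amount) pairs.
LOG = [
    ("acct-1", 120.0), ("acct-1", 95.0), ("acct-1", 110.0),
    ("acct-1", 105.0), ("acct-1", 4800.0),
]

def flag_anomalies(entries, threshold=3.5):
    """Flag entries whose modified z-score exceeds `threshold`.

    The score is 0.6745 * |x - median| / MAD, a standard robust
    outlier measure that tolerates skew from the outlier itself.
    """
    amounts = [amount for _, amount in entries]
    med = median(amounts)
    mad = median(abs(a - med) for a in amounts)
    if mad == 0:  # all values (nearly) identical: nothing to flag
        return []
    return [(acct, amount) for acct, amount in entries
            if 0.6745 * abs(amount - med) / mad > threshold]

print(flag_anomalies(LOG))  # → [('acct-1', 4800.0)]
```

The article's point stands either way: a check like this is worthless unless the institution is actually collecting the logs and someone is reading the output.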

Thursday, May 17, 2012

EU to impose compulsory cyber defence rules | EurActiv

Published 16 May 2012
The European Commission is planning to force energy, transport and financial companies to invest more in their cyber security and to report on breaches suffered, two EU officials said.
“The European Commission will propose by the end of the third quarter of 2012 a new obligation for security breach notifications for the energy, transport, banking and financial sectors,” said an official working at the Commission's digital agenda department.
The official said that companies have an interest in beefing up their protection against cyber attacks, but that they were not doing enough to defend their infrastructure.
“When they suffer a security breach, they usually do not report it,” the official explained, adding that the Commission was looking at ways of obliging companies to notify such breaches.
“The obligation to report would worsen the reputational damage suffered by companies which undergo security breaches. This should lead them to invest more in security to lower their vulnerability,” the official said.
Following the ICT model
A second official, from the Commission directorate in charge of Justice and Home Affairs, confirmed plans to extend security breach notifications to new industries, other than telecommunication companies and internet firms which in Europe are already subject to reporting obligations.
The EU directive on e-Privacy states that “in case of a particular risk of a breach of the security of the network, the provider of a publicly available electronic communications service must inform the subscribers concerning such risk.”
This e-Privacy directive is currently the reference text on cyber security, but it is likely to be complemented soon by more stringent rules. At the beginning of the year, the European Commission pushed forward a new legislative proposal to impose reporting obligations on data breaches for ICT firms, on top of the existing security breach notifications.
Viviane Reding, the EU Justice Commissioner who is also in charge of privacy issues, proposed in January a 24-hour reporting obligation for telecoms and Internet companies when they suffer data losses.
Cooperation needed
Involving the private sector in the pursuit of stronger cyber security is necessary as it owns 90% of critical infrastructure in the EU, according to Europol, the EU law enforcement agency.
National and European institutions will also have to increase their cooperation to fight cyber crime. The Commission has recently proposed the establishment of a European cyber crime centre which is expected to become operational in January 2013.
But cooperation among the myriad security agencies on the continent is far from guaranteed. “There is enough crime that we do not have to compete for it,” said Troels Ørting of Europol, the designated director of the European cyber crime centre.
EurActiv.com

Saturday, May 12, 2012

Everyone Has Been Hacked. Now What?

By Kim Zetter, May 4, 2012

Oak Ridge National Laboratory was hit by a targeted hacker attack in 2011 that forced the lab to take all its computers offline. Photo: Oak Ridge National Laboratory

The attackers chose their moment well.

On Apr. 7, 2011, five days before Microsoft patched a critical zero-day vulnerability in Internet Explorer that had been publicly disclosed three months earlier on a security mailing list, unknown attackers launched a spear-phishing attack against workers at the Oak Ridge National Laboratory in Tennessee. The lab, which is funded by the U.S. Department of Energy, conducts classified and unclassified energy and national security work for the federal government.

The e-mail, purporting to come from the lab’s human resources department, went to about 530 workers, or 11 percent of the lab’s workforce. The cleverly crafted missive included a link to a malicious webpage where workers could get information about employee benefits. But instead of getting facts about a health plan or retirement fund, workers who visited the site using Internet Explorer got hit with malicious code that downloaded silently to their machines.

Although the lab detected the spear-phishing attack soon after it began, administrators weren’t quick enough to stop 57 workers from clicking on the malicious link. Luckily, only two employee machines were infected with the code. But that was enough for the intruders to get onto the lab’s network and begin siphoning data. Four days after the e-mails arrived, administrators spotted suspicious traffic leaving a server. Only a few megabytes of stolen data got out, but other servers soon lit up with malicious activity. So administrators took the drastic step of severing all the lab’s computers from the internet while they investigated.
Oak Ridge had become the newest member of a club to which no one wants to belong – a nonexclusive society that includes Fortune 500 companies protecting invaluable intellectual property, law firms managing sensitive litigation and top security firms that everyone expected should have been shielded from such incursions. Even His Holiness the Dalai Lama has been the victim of an attack.

***

Last year, antivirus firm McAfee identified some 70 targets of an espionage hack dubbed Operation Shady RAT that hit defense contractors, government agencies and others in multiple countries. The intruders had source code, national secrets and legal contracts in their sights. Source code and other intellectual property was also the target of hackers who breached Google and 33 other firms in 2010. In a separate attack, online spies siphoned secrets for the Pentagon’s $300 billion Joint Strike Fighter project.

Then, last year, the myth of computer security was struck a fatal blow when intruders breached RSA Security, one of the world’s leading security companies, which also hosts the annual RSA security conference, an august and massive confab for security vendors. The hackers stole data related to the company’s SecurID two-factor authentication systems, RSA’s flagship product that is used by millions of corporate and government workers to securely log into their computers. Fortunately, the theft proved to be less effective for breaking into other systems than the intruders probably hoped, but the intrusion underscored the fact that even the keepers of the keys cannot keep attackers out.

Security researcher Dan Kaminsky in his Seattle apartment. Photo: John Keatley

Independent security researcher Dan Kaminsky says he’s glad the security bubble has finally burst and that people are realizing that no network is immune from attack. That, he says, means the security industry and its customers can finally face the uncomfortable fact that what they’ve been doing for years isn’t working.
“There’s been a deep conservatism around, ‘Do what everyone else is doing, whether or not it works.’ It’s not about surviving, it’s about claiming you did due diligence,” Kaminsky says. “That’s good if you’re trying to keep a job. It’s bad if you’re trying to solve a technical problem.”

In reality, Kaminsky says, “No one knows how to make a secure network right now. There’s no obvious answer that we’re just not doing because we’re lazy.”

Simply installing firewalls and intrusion detection systems and keeping anti-virus signatures up to date won’t cut it anymore — especially since most companies never know they’ve been hit until someone outside the firm tells them. “If someone walks up to you on the street and hits you with a lead pipe, you know you were hit in the head with a lead pipe,” Kaminsky says. “Computer security has none of that knowing you were hit in the head with a lead pipe.”

According to Richard Bejtlich, chief security officer for computer security firm Mandiant, which has helped Google and many other companies conduct forensics and clean up their networks after an attack, the average cyberespionage attack goes on for 458 days, well over a year, before a company discovers it’s been hacked. That’s actually an improvement over a few years ago, he says, when it was normal to find attackers had been in a network two or three years before being discovered.

Bejtlich credits the drop in time not to companies doing better internal monitoring, but to notifications by the FBI, the Naval Criminal Investigative Service and the Air Force Office of Special Investigations, which discover breaches through a range of tactics, including hanging out in hacker forums and turning hackers into confidential informants, as well as other tactics they decline to discuss publicly. These government agencies then notify companies that they’ve been hacked before they know it themselves.
Shawn Henry, the FBI's former top cyber-cop, gravely warns that corporate hacking is much worse than people think. Photo: DoJ

But even the FBI took a defeatist view of the situation recently, when Shawn Henry, former executive assistant director of the FBI, told The Wall Street Journal on the eve of his retirement from the Bureau that intruders were winning the hacker wars and network defenders were simply outgunned. The current approaches to fending off hackers are “unsustainable,” Henry said, and computer criminals are too wily and skilled to be stopped.

So if hackers are everywhere and everyone has been hacked, what’s a company to do?

Kaminsky says the advantage of the new state of affairs is that it opens the window for innovation. “The status quo is unacceptable. What do we do now? How do we change things? There really is room for innovation in defensive security. It’s not just the hackers that get to have all the fun.”

Companies and researchers are exploring ideas for addressing the problem, but until new solutions are found for defending against attacks, Henry and other experts say that learning to live with the threat, rather than trying to eradicate it, is the new normal. Just detecting attacks and mitigating them is the best that many companies can hope to do.

“I don’t think we can win the battle,” Henry told Wired.com. “I think it’s going to be a constant battle, and it’s something we’re going to be in for a long time…. We have to manage the way we assess the risk and we have to change the way we do business on the network. That’s going to be a fundamental change that we’ve got to make in order for people to be better secure.”

In most cases, the hacker will be a pedestrian intruder who is simply looking to harvest usernames and passwords, steal banking credentials or hijack computers for a botnet to send spam.
These attackers can be easier to root out than focused adversaries — nation states, economic competitors and others — who are looking to steal intellectual property or maintain a strategic foothold in a network for later use, such as to conduct sabotage in conjunction with a military strike or in some other kind of political operation.

Once a company’s network has been breached, Bejtlich says, his company focuses on finding all of the systems and credentials that have been compromised and getting rid of any backdoors the intruders have planted. But once the attackers have been kicked off the network, there is generally a flood of new attempts to get back in, often through a huge wave of phishing attacks. “For the most part, once you’ve been targeted by these guys, you’re now living with this for the rest of your security career,” Bejtlich said.

Many companies have resigned themselves to the fact that they’re never going to keep spies entirely out of their networks and have simply learned to live with the intruders by taking steps to segregate and secure important data and controls.

Henry, who is now president of CrowdStrike Services, a newly launched security firm, says that once companies accept that they’re never going to be able to keep intruders out for good, the next step is to determine how they can limit the damage. This comes down, in part, to realizing that “there are certain pieces of information that just don’t need to reside on the network.”

“It comes down to balancing the risks, and companies need to assess how important is it for me to secure the data versus how important is it to continue doing my business or to be effective in my business,” he says. “We have to assume that the adversary is on the network and if we assume that they’re on the network, then that should change the way we decide what we put on the network and how we transmit it. Do we transmit it in the clear, do we transmit it encrypted, do we keep it resident on the network, do we move it off the network?”

Bejtlich says that in addition to moving data off the network, the companies that have been most successful at dealing with intruders have redefined what’s trustworthy on their network and become vigilant about monitoring. He says there are some organizations that have been plagued by intruders for eight or nine years and have learned to live with them by investing in good detection systems. Other companies burn down their entire infrastructure and start from scratch, going dark for a week or so while they rebuild their network, using virtualization tools that allow workers to conduct business while protecting the network core from attackers.

Bejtlich, who used to work for General Electric, said one of the first things he did after being hired by GE was to establish a segmented network for his security operations, so that any intruders who might already have been on the corporate network wouldn’t have access to his security plans and other blueprints he developed for defending the network. “The first thing you’ve got to do is to establish something that you trust because nobody else can get access to it, and then you monitor the heck out of it to see if anybody else is trying to poke around,” he said. “So you go from a posture of putting up a bunch of tools and sitting back, to one of being very vigilant and hunting for the bad guys…. The goal is to find them so quickly that before they can really do anything to you to steal your data, you’ve kicked them out again.”

Kaminsky advocates shrinking perimeters to limit damage. “Rather than one large server farm, you want to create small islands, as small as is operationally feasible,” he says.
“When you shrink your perimeter you need to interact with people outside your perimeter and figure out how to do that securely,” using encryption and authentication between systems that once communicated freely. “It changes the rules of the game,” he says. “You can’t trust that your developers’ machines aren’t compromised. You can’t trust that your support machines aren’t compromised.” He acknowledges, however, that this is an expensive solution and one that not everyone will be able to adopt.

While all of these solutions are more work than simply making certain that every Windows system on a network has the latest patch, there’s at least some comfort in knowing that having a hacker in your network doesn’t have to mean it’s game over.

“There have been organizations that this has been like an eight- or nine-year problem,” Bejtlich says. “They’re still in business. You don’t see their names in the newspaper all the time [for being hacked], and they’ve learned to live with it and to have incident detection and response as a continuous business process.”

Update 5.7.12: Changed to reflect the number of days on average, rather than the median, that companies have been hacked before discovering a breach.
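Kaminsky's "small islands" model means that systems which once communicated freely must now authenticate each other's messages. The fragment below is a minimal sketch of the authentication half only; the shared key, message format and function names are assumptions invented for illustration, not anything Kaminsky specifies. Each island prefixes its messages with an HMAC-SHA256 tag, so a forged message from a compromised segment that lacks the key is rejected.

```python
import hashlib
import hmac

# Hypothetical key provisioned to exactly one pair of islands.
# In practice, per-pair keys would come from a secrets-management system.
SHARED_KEY = b"example-island-pair-key"

TAG_LEN = hashlib.sha256().digest_size  # 32 bytes

def seal(message: bytes, key: bytes = SHARED_KEY) -> bytes:
    """Prefix the message with an HMAC-SHA256 tag proving its origin."""
    return hmac.new(key, message, hashlib.sha256).digest() + message

def open_sealed(blob: bytes, key: bytes = SHARED_KEY):
    """Return the message if the tag verifies, otherwise None."""
    tag, message = blob[:TAG_LEN], blob[TAG_LEN:]
    expected = hmac.new(key, message, hashlib.sha256).digest()
    # compare_digest is constant-time, so the check itself leaks nothing
    return message if hmac.compare_digest(tag, expected) else None
```

A real deployment would also encrypt the payload, since Kaminsky names both encryption and authentication; authenticated-encryption modes such as AES-GCM provide both at once.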

Sunday, May 6, 2012

The history of encryption

http://visual.ly/history-encryption

Chain-Link Confidentiality: A HIPAA-Like Approach To Online Privacy

Frederic Lardinois, May 5, 2012

As we put more of our private information online and entrust it to web services, privacy breaches become almost inevitable. One major problem with online privacy is that there is really no enforceable chain of confidentiality. So when a third-party service makes your information available to another party, things can get complicated.

A new paper by Samford University law professor Woodrow Hartzog argues that traditional privacy laws aren’t the best way to protect private information online. Instead, he suggests an approach more like the U.S. HIPAA rules that currently govern how private health information can be shared between your health provider and third parties. The system he proposes would be based on established principles in confidentiality and contract law.

Confidentiality law, says Hartzog, typically binds only the first recipient of information. Online, that obviously isn’t enough to protect a user’s privacy, and most scholars have argued that confidentiality law is simply not suited to deal with online privacy issues. Hartzog, however, argues that a HIPAA-like “chain-link confidentiality” regime would be more effective in protecting users’ privacy than current regulations. This system would not just ensure confidentiality between the user and the first service where data is stored; the obligation of confidentiality would also flow downstream. Under this regime, he writes, “Internet users could then pursue a remedy against anyone in the chain who either failed to abide by her obligation of confidentiality or failed to require confidentiality of a third-party recipient.”

Hartzog argues that our current privacy regulations are “a patchwork of laws and remedies,” often in conflict with other laws and evolving technologies. It’s also often unclear how “privacy” is actually defined and what, for example, constitutes a “reasonable expectation of privacy.” In Hartzog’s view, “traditional privacy remedies are inadequate in the digital age.”

Here is what chain-link confidentiality on the Internet would look like in practice: a website that collects your personal information (and that you explicitly allow to share your information with other services) would also have to establish a confidentiality contract with any other company it discloses your information to – and those companies would be required to establish the same kind of contract with every subsequent recipient as well. These contracts, of course, could simply prohibit any further dissemination of your personal information, or limit it to certain companies or to companies that fulfill certain security requirements. Every web service could also tweak this contract depending on its needs.

In a way, this isn’t all that different from the Creative Commons “Share Alike” provision: depending on the Creative Commons license used, artists can allow others to remix their work, for example, as long as it is then shared under the same license terms as the original work. The chain-link confidentiality approach would then allow for the flow of information, says Hartzog, “by continually re-creating an environment for sharing that accommodates the sender, receiver, and the subject of the personal information.”

Even though this isn’t a cure-all – your information, after all, could still leak out or be scraped by others – it’s an interesting way of looking at privacy from a more contractual point of view, especially because it sets up a legal framework for sharing information between services. For the more lawyerly and in-depth discussion, take a look at Hartzog’s paper.
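The downstream-flowing obligation described above can be sketched as a record that refuses disclosure unless the next recipient accepts every upstream confidentiality term, while logging the chain of custody so a breach can later be traced to a specific link. This is a toy model; the class, method and obligation names are invented for illustration and appear nowhere in the paper.

```python
class ConfidentialRecord:
    """Toy model of chain-link confidentiality: obligations travel
    with the data, and every disclosure is recorded."""

    def __init__(self, data, obligations):
        self.data = data
        self.obligations = frozenset(obligations)
        self.chain = ["originating-service"]  # who has held the data

    def disclose_to(self, recipient, accepted):
        """Release the data only if `recipient` accepts every upstream
        obligation; otherwise refuse and leave the chain untouched."""
        if not self.obligations <= frozenset(accepted):
            raise PermissionError(
                f"{recipient} did not accept all obligations")
        self.chain.append(recipient)
        return self.data

record = ConfidentialRecord("user@example.com",
                            {"no-resale", "encrypt-at-rest"})
# A recipient that accepts the full set of terms gets the data...
record.disclose_to("analytics-vendor",
                   {"no-resale", "encrypt-at-rest", "audit-log"})
# ...and the chain shows whom to pursue if the data later leaks:
print(record.chain)  # → ['originating-service', 'analytics-vendor']
```

The interesting design point mirrors the Share Alike analogy: a recipient may accept extra terms, but never fewer than it inherited, so the original obligations propagate down the whole chain.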