Friday, June 28, 2013

The Top Five IT Security Cyber Threats Are...



26 June 2013

As cybercrime expands and evolves, a new study categorizes and describes the top five threats: data breaches, malware, DDoS, mobile threats and the industrialization of fraud – and they're all interrelated.


  1. Data Breach
  2. Malware
  3. DDoS
  4. Mobile Threats
  5. Industrialisation of Fraud 
Security firm 41st Parameter describes each threat in turn. The data breach threat is illustrated by the LivingSocial breach earlier this year, in which 50 million records were compromised in April. Although no financial records were stolen, financial data probably wasn’t the direct target anyway: “consumers don’t realize that the real concern behind the theft of personal data (such as email addresses, birthdates and encrypted passwords) is potential exposure to various forms of identity theft.”
The real problem with large data heists comes in the following months when the attackers use the data they have stolen to engineer compelling phishing attacks “to dupe unsuspecting victims into revealing sensitive data that can be used to open new accounts or take over existing ones.” In this instance there were two difficulties – firstly consumers still tend to reuse passwords over multiple accounts, and secondly LivingSocial’s business model sends out ‘daily deals’ emails to its subscribers. A forged email could look like a genuine LivingSocial mail but actually contain a disguised link to a malicious site.
That malicious site would contain the second of the major threats: malware. Malware delivery from a malicious URL, otherwise known as drive-by downloading, is one of the three top delivery mechanisms of 2012. The others are app repackaging for mobile devices, and smishing. The first takes a genuine app, alters it for bad intent, and then redistributes it via a different channel. Smishing is the use of “unsolicited text messages that prompt users to provide credentials.”
There is no single solution to malware, but the threat can be mitigated by the use of up-to-date anti-malware software, and improved visibility into the devices – especially mobile devices – that connect to the corporate network.
The third threat is DDoS. DDoS attacks are disruptive, driving costs up and reputations down; and there are more than 7000 DDoS attacks every day. But there is a growing issue “more prevalent now than it’s ever been,” when the target site is a bank. Possibly using account credentials stolen by the malware distributed after a data breach, it’s now “common for fraudsters to access a group of accounts, perform reconnaissance and money movement activities and then immediately launch a DDoS attack in order to create a diversion.”
The fourth threat is that posed by and to the mobile market – 700 million smartphones were sold in 2012 alone. “Since fraudsters typically attack the weakest point of ingress,” warns 41st Parameter, “and without the proper device recognition and detection systems in place, the mobile channel may soon emerge as their channel of choice.” Overall, 2012 saw a 163% increase in mobile threats, with 95% of mobile threats attacking the Android platform. In all, 32.8 million mobile devices were infected with malware.
Finally, the report discusses the industrialization of fraud. Since online transactions are by their nature ‘machine-to-machine’ they lend themselves to automation. But just as the banks automate their own processes, so too are criminals automating fraud. “Recently, 41st Parameter has seen the standardization of fraud software building blocks and data formats, which make it easier to collaborate and exchange information between fraud rings.” And there are more than 10,000 of these fraud rings in the US alone.
One of the problems that comes from this automation is that criminals can just as easily perpetrate hundreds or thousands of small frauds to gain the same financial return as a few large ones – but staying small they are more likely to slip under the banks’ fraud detection systems.
All of these threats could stem from that initial data breach: stolen personal data leading to phishing and the installation of malware that steals account data (although the mobile arena is increasingly used to do the same), in turn leading to financial fraud which is increasingly industrialized and disguised by DDoS attacks. In fact, “The increase in large-scale data breaches and high-volume, coordinated fraud attacks are byproducts of the industrialization of fraud driven by the movement of services online,” says Eli Katz, vice president of financial industry solutions at 41st Parameter. “Financial institutions and consumers must each take steps to adjust to this evolving threat landscape.”

Monday, June 24, 2013

Encryption would exempt ISPs from data breach notification to EU customers

 


Jennifer Baker, IDG News Service Brussels correspondent
From around the end of August, European Union telecom providers and ISPs will have to comply with new rules to ensure that customers in all E.U. countries receive the same information if their personal data is lost, stolen, or otherwise compromised.
The new “technical implementing measures,” published by the European Commission on Monday, are detailed practical rules on implementing existing legislation—the ePrivacy Directive.
Since 2011, telecom companies and ISPs have been under a general obligation to inform national authorities and subscribers about breaches of personal data. But a public consultation launched in May 2011 found that some national rules were widely divergent.
The U.K., for example, wanted only “serious” breaches notified, while France wanted notification by registered mail. Deadlines for data breach notification also ranged widely—from two days in Ireland to 10 days in Greece.
As a result the Commission decided to introduce the technical implementing measures. Providers will be obliged to notify authorities 24 hours after detection of the breach. The Commission interprets detection to have occurred “when the provider has enough information to make a meaningful notification—mere suspicion of a breach is not enough.” But providers also have three additional days to provide further information. If they need more time after that, then they have to justify it to national authorities.
Different rules apply when it comes to notifying subscribers. Companies must assess the type of data compromised and whether it is likely to include personal data. However, companies that encrypt personal data, using technological protection measures from an approved Commission list, will be exempt from having to notify the subscriber.
The measures are also clear about what is optional. Providers may, if they choose, notify the media of any breach, but there is no obligation to do so.
Wholesale telecoms providers that offer back-end services and have no direct relationship with the subscriber are not required to notify national authorities or the subscriber, although if data under their responsibility is compromised, they should notify “up the chain.”
The reform of the E.U.’s data protection rules, which is currently under way, proposes extending the ePrivacy data breach obligation to any entity that holds customers’ personal data.
The measures for telecom companies and ISPs will come into force two months after publication in the Official Journal of the European Union.

Wednesday, June 19, 2013

Risk I/O Integrates Real-Time Attack Data



Risk I/O, a vulnerability intelligence platform designed to help organizations efficiently report and mitigate security vulnerabilities, on Wednesday announced that it now analyzes real-time, global attack data alongside security vulnerabilities.
With the objective of helping organizations identify weaknesses most likely to be exploited by attackers, Risk I/O customers are now able to prioritize vulnerability remediation based on which attacks are most prevalent as well as which attacks are being used against their industry peers in real-time, the company said.
The processing engine behind Risk I/O’s platform cross-references an organization’s own vulnerability data and remediation priorities against data from other Risk I/O users and publicly available vulnerability repositories, now adding attack data to the mix.
Delivered via the cloud, Risk I/O’s platform aggregates and prioritizes vulnerability data from more than 20 popular security assessment technologies to provide enterprises with a single view of their security risk, the company said.  The platform currently processes more than 30 million vulnerabilities a day.
According to the company, the latest release of Risk I/O prioritizes vulnerabilities on four key factors (a rough scoring sketch follows the list):
Exploitation Risk – Using attack data from third-party contributors, Risk I/O identifies patterns that indicate the ongoing exploitation of particular vulnerabilities worldwide, prioritizing those vulnerabilities that have above average or increasing exploit traffic thus posing a higher risk to the organization. Exploitation risk is also assessed by industry, so an organization can understand where its peers have been attacked and adjust remediation activities accordingly.
Trending Vulnerabilities – Risk I/O cross-references vulnerability data against the most high-velocity vulnerabilities as observed across all 5,000 Risk I/O users and within publicly available vulnerability databases such as RiskDB, The National Vulnerability Database, The Web Application Security Consortium, The Exploit Database, SHODAN, and The Metasploit Project.
Remediation Impact – Vulnerabilities with identified patches are prioritized for remediation based on the overall reduction in risk posture that is realized by the remediation efforts.
Common Vulnerability Scoring System (CVSS) User-Set Priority Scores – Vulnerabilities surfaced in the Risk I/O vulnerability intelligence platform are automatically prioritized by high CVSS scores and the organization’s own priority scores.
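As a rough illustration of how factors like these might be combined into a single ranking, here is a minimal scoring sketch in Python. The weights, field names and feed formats are assumptions for illustration only, not Risk I/O's actual implementation:

    # Hypothetical prioritization sketch; all weights and field names are invented.
    def priority_score(vuln, exploit_traffic, avg_traffic, trending_ids):
        """Return a rough 0-100 priority for one vulnerability record (a dict)."""
        score = 0.0

        # 1. Exploitation risk: above-average exploit traffic for this CVE.
        if exploit_traffic.get(vuln["cve"], 0) > avg_traffic:
            score += 35

        # 2. Trending: recently observed across other users / public vuln databases.
        if vuln["cve"] in trending_ids:
            score += 25

        # 3. Remediation impact: a patchable finding on many assets buys more
        #    risk reduction per fix applied.
        if vuln.get("patch_available"):
            score += 15 * min(vuln.get("asset_count", 1), 10) / 10

        # 4. CVSS plus the organization's own priority multiplier.
        score += 2.5 * vuln.get("cvss", 0)
        score *= vuln.get("user_priority", 1.0)

        return min(score, 100.0)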
“CISOs can spend countless hours searching through thousands, if not millions, of vulnerabilities from web apps, servers, databases, network devices or even source code trying to understand what to fix first,” said Ed Bellis, Cofounder and CEO of Chicago-based Risk I/O. “Risk I/O now sources and correlates real-time attack data from a growing list of external threat feeds like AlienVault to help businesses hone in on their vulnerabilities most likely to be exploited,” Bellis said.
"At AlienVault, we strongly believe the only way the security industry can defend against attacks is by maintaining a steady exchange of intelligence and information," said Roger Thornton, CTO of AlienVault. "By integrating the AlienVault Open Threat Exchange™ (OTX) with the real-time correlation of Risk I/O's vulnerability intelligence platform, organizations not only have access to the broadest community sourced threat data but can better prioritize their vulnerabilities in real-time against industry peers and get unprecedented insight into where they are most likely to be attacked."
Additional details on Risk I/O’s vulnerability intelligence platform are available online.

Tuesday, June 18, 2013

Another Huge Cash-Out Scheme Revealed


Experts Say Banks Must Enhance Defenses, Monitoring

June 13, 2013
Federal authorities have charged eight suspects in yet another ATM cash-out and cybercrime scheme that involved online account takeovers and prepaid card compromises.
So far, four arrests have been made in the case, which also involved fraudulent ATM withdrawals and wire transfers to overseas accounts, as well as identity theft, in some cases, to perpetrate tax fraud, investigators say.
         
The government's ongoing investigation has so far identified attempts to defraud the victim companies and their customers of more than $15 million, federal prosecutors say.
This is the second major cash-out scheme revealed by federal authorities in recent weeks. In May, authorities announced the separate indictment of eight other individuals allegedly linked to a $45 million global ATM cash-out and money-laundering scheme.
Experts say these types of cash-out attacks are an emerging trend that card-issuing banks and credit unions need to take seriously.
"This trend is of grave concern," says financial fraud expert Shirley Inscoe, an analyst with Aite, an industry consultancy and research firm. "The risk-reward picture is very attractive to those who are inclined to steal from others for their own personal gain."

International Crime Ring

Those charged in the latest cash-out case have been linked to an international cybercrime ring that hacked customer accounts at more than a dozen banks, brokerage firms, payroll processing companies and government agencies, according to a statement from the U.S. Attorney's Office for the District of New Jersey.
"According to the complaint unsealed today, cybercriminals penetrated some of our most trusted financial institutions as part of a global scheme that stole money and identities from people in the United States," says U.S. Attorney Paul J. Fishman.
"Today's charges and arrests take out key members of the organization, including leaders of crews in three states that used those stolen identities to cash-out hacked accounts in a series of internationally coordinated modern-day bank robberies. We will continue to pursue our investigation into this scheme and our fight against the rising threat of criminals for whom computers are the weapon of choice."

Networks Breached

Hackers allegedly intercepted account and cardholder data after gaining unauthorized access to computer networks of global financial institutions and organizations, including Aon Hewitt; Automated Data Processing Inc.; Citibank N.A.; E-Trade; Electronic Payments Inc.; Fundtech Holdings LLC; iPayment Inc.; JPMorgan Chase Bank N.A.; Nordstrom Bank; PayPal; TD Ameritrade; the U.S. Department of Defense; the Defense Finance and Accounting Service; TIAA-CREF; USAA and Veracity Payment Solutions Inc.
Once inside the victim companies' computer networks, the defendants and conspirators allegedly diverted money from accounts of the companies' customers to bank accounts and prepaid debit cards controlled by the defendants. They then employed crews of individuals known as cashers to withdraw the stolen funds from ATMs and through fraudulent purchases in New York, Massachusetts, Illinois, Georgia and elsewhere, authorities say.
The defendants and their conspirators also allegedly laundered the proceeds from their scheme and made international wire transfers to the leaders of the conspiracy overseas, according to federal prosecutors.
The eight defendants have been charged with conspiracy to commit wire fraud, money laundering and ID theft. In addition to fraudulent funds transfers, ATM withdrawals and purchases, the ring also allegedly stole U.S. identities to file fraudulent tax returns with the Internal Revenue Service, authorities say.
According to the complaint, Oleksiy Sharapka of Kiev, Ukraine, allegedly directed the conspiracy with the help of Leonid Yanovitsky, also of Kiev. Oleg Pidtergerya of Brooklyn, Robert Dubuc of Malden, Mass., and Andrey Yarmolitskiy of Atlanta allegedly managed crews in their respective cities. Richard Gundersen of Brooklyn and Lamar Taylor of Salem, Mass., allegedly worked for Pidtergerya and Dubuc, respectively. Ilya Ostapyuk of Brooklyn allegedly facilitated the movement of the fraudulent proceeds.                                 

Pidtergerya, Ostapyuk and Dubuc were arrested June 12 at their homes, and Yarmolitskiy was arrested June 11. Taylor and Gundersen are being pursued by law enforcement, and Sharapka and Yanovitsky, Ukrainian nationals, remain at large.
If convicted, each of the defendants faces a maximum penalty of 20 years in prison for conspiracy to commit wire fraud, 20 years for conspiracy to commit money laundering and 15 years for conspiracy to commit identity theft. The wire fraud and identity theft counts also carry a maximum fine of $250,000, or twice the gross amount of pecuniary gain or loss resulting from the offenses. The money laundering conspiracy count carries a maximum fine of $500,000, or twice the value of the monetary instruments involved.

'A Huge Carrot'

While it's promising to see law enforcement quickly making arrests, Aite's Inscoe says authorities will always be chasing the next case because these types of attacks have proven too easy to pull off. "The amount of money available represents a huge carrot that is potentially theirs," she says.
This is why banking institutions, as card issuers and overseers of online accounts, have to take responsibility for ensuring stronger security across the payments chain, says Joe Rogalski, a security consultant and former fraud and compliance officer for First Niagara Bank, a $36 billion institution in New York state.
"It comes down to assessing their systems the way an attacker would," he says. "This would typically involve a red team trying to break in and perpetrate fraud. It is important that they are looking at the entire program, when evaluating controls, because a breakdown in process could be, and usually is, more devastating than technology."
Inscoe says banking institutions also should monitor for system and network intrusions as well as shore up their firewalls. "Banks' fraud departments should be monitoring the account activity of their customers, using behavioral analytics and other forms of fraud detection," she adds. "These criminal rings have been withdrawing large amounts of cash at ATMs, sending wires, etc. If this happens on a customer's account who typically never withdraws large sums, has never sent a wire or never has sent a wire to the designated bank or account before, those transactions should be verified prior to the funds leaving the bank."
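As an illustration of the kind of behavioral check Inscoe describes, here is a minimal rule sketch in Python. The thresholds, field names and transaction format are invented for illustration; real fraud engines use far richer behavioral models:

    # Illustrative behavioral rule, not a production fraud engine.
    # Flags a transaction for verification when it breaks the customer's own history.
    def needs_verification(txn, history):
        """history: list of the customer's past transactions (dicts)."""
        large_cash = txn["type"] == "atm_withdrawal" and txn["amount"] > 1000
        first_wire = txn["type"] == "wire" and not any(
            h["type"] == "wire" for h in history)
        new_wire_dest = txn["type"] == "wire" and txn.get("dest_account") not in {
            h.get("dest_account") for h in history if h["type"] == "wire"}

        typical_max = max((h["amount"] for h in history), default=0)
        unusually_big = txn["amount"] > 3 * typical_max if typical_max else True

        return (large_cash and unusually_big) or first_wire or new_wire_dest

    # Example: a first-ever wire to an overseas account gets held for verification.
    print(needs_verification({"type": "wire", "amount": 9500, "dest_account": "XX"},
                             [{"type": "atm_withdrawal", "amount": 200}]))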

Monday, June 17, 2013

Another victim comes forward in massive ticketing software company breach



Hackers accessed the credit card information of tens of thousands of customers of the University of Michigan's Union Ticket Office, the latest organization to fall victim to a breach affecting a third-party vendor.
How many victims? More than 33,000.

What type of personal information? Names, street addresses, email addresses, phone numbers, credit card numbers and expiration dates.

What happened? The database supplied by third-party ticketing solution provider Vendini was compromised by hackers who may have stolen the personal information of any U of M customer in the last two years.
What was the response? University officials have contacted all individuals affected by the breach.

Details: How the hackers were able to compromise the Vendini systems is currently unknown. According to a statement released by the company, the stolen information does not include credit card security access codes, account user names or passwords.
Other entities may also have been affected.
Quote: “It's not a U of M security breach, and it involves many other ticketing outlets across the United States and Canada,” Rick Fitzgerald, spokesman for the University of Michigan, said.
Source: www.cbslocal.com, CBS Local Detroit, “UM Warns Ticket Buyers of Security Breach,” June 13, 2013.

Sunday, June 16, 2013

FDA calls on manufacturers, hospitals to better protect medical devices

http://www.scmagazine.com//fda-calls-on-manufacturers-hospitals-to-better-protect-medical-devices/article/298842/



The U.S. Food and Drug Administration on Thursday warned medical professionals to implement practices that will safeguard computer-embedded health care devices from attack.
"Many medical devices contain configurable embedded computer systems that can be vulnerable to cyber security breaches," the advisory said. "In addition, as medical devices are increasingly interconnected, via the internet, hospital networks, other medical [devices] and smartphones, there is an increased risk of cyber security breaches, which could affect how a medical device operates."
The FDA recommended that device manufacturers "take appropriate steps to limit the opportunities for unauthorized access" to these endpoints. This includes evaluating their security practices and policies, and deploying designs, strategies and methods to both prevent against attack and respond in the event of a breach.
Meanwhile, health care entities must ensure their networks are built to repel unauthorized access and attacks by monitoring for anomalous behavior, patching regularly and conferring with device makers.
The FDA also noted that manufacturers are required under Medical Device Reporting requirements to alert the agency of any security issues associated with their products. Health care staff can voluntarily report security "events" related to a medical device through the MedWatch program.
The agency said it was not aware of any real-life attacks that have targeted these devices, nor any patient injuries or deaths associated with a compromise.
The government warning should come as no surprise to the security research community, which for several years has showcased how these devices are susceptible to malicious attack.
For example, at the 2011 Black Hat conference in Las Vegas, researcher and Type 1 diabetic Jay Radcliffe demonstrated how he could wirelessly send commands (from within about 150 feet) to disable the insulin pump he had been wearing since he was 22.
And as early as 2008, studies showed how pacemakers could be manipulated by remote attackers.

Saturday, June 15, 2013

NSA-proof encryption exists. Why doesn’t anyone use it?



Computer programmers believe they know how to build cryptographic systems that are impossible for anyone, even the U.S. government, to crack. So why can the NSA read your e-mail?
Last week, leaks revealed that the Web sites most people use every day are sharing users’ private information with the government. Companies participating in the National Security Agency’s program, code-named PRISM, include Google, Facebook, Apple and Microsoft.
It wasn’t supposed to be this way. During the 1990s, a “cypherpunk” movement predicted that ubiquitous, user-friendly cryptographic software would make it impossible for governments to spy on ordinary users’ private communications.
The government seemed to believe this story, too. “The ability of just about everybody to encrypt their messages is rapidly outrunning our ability to decode them,” a U.S. intelligence official told U.S. News & World Report in 1995. The government classified cryptographic software as a munition, banning its export outside the United States. And it proposed requiring that cryptographic systems have “back doors” for government interception.
The cypherpunks won that battle. By the end of the  Clinton administration, the government conceded that the Internet had made it impossible to control the spread of strong cryptographic software. But more than a decade later, the cypherpunks seem to have lost the war. Software capable of withstanding NSA snooping is widely available, but hardly anyone uses it. Instead, we use Gmail, Skype, Facebook, AOL Instant Messenger and other applications whose data is reportedly accessible through PRISM.
And that’s not a coincidence: Adding strong encryption to the most popular Internet products would make them less useful, less profitable and less fun.
“Security is very rarely free,” says J. Alex Halderman, a computer science professor at the University of Michigan. “There are trade-offs between convenience and usability and security.”
Most people’s priority: Convenience
Consumers have overwhelmingly chosen convenience and usability. Mainstream communications tools are more user-friendly than their cryptographically secure competitors and have features that would be difficult to implement in an NSA-proof fashion.
And while most types of software get more user-friendly over time, user-friendly cryptography seems to be intrinsically difficult. Experts are not much closer to solving the problem today than they were two decades ago.
Ordinarily, the way companies make sophisticated software accessible to regular users is by performing complex, technical tasks on their behalf. The complexity of Google, Microsoft and Apple’s vast infrastructure is hidden behind the simple, polished interfaces of their Web and mobile apps. But delegating basic security decisions to a third party means giving it the ability to access your private content and share it with others, including the government.
Most modern online services do make use of encryption. Popular Web services such as Gmail and Hotmail support an encryption standard called SSL. If you visit a Web site and see a “lock” icon in the corner of your browser window, that means SSL encryption is enabled. But while this kind of encryption will protect users against ordinary bad guys, it’s useless against governments.
That’s because SSL only protects data moving between your device and the servers operated by Google, Apple or Microsoft. Those service providers have access to unencrypted copies of your data. So if the government suspects criminal behavior, it can compel tech companies to turn over private e-mails or Facebook posts.
That problem can be avoided with “end-to-end” encryption. In this scheme, messages are encrypted on the sender’s computer and decrypted on the recipient’s device. Intermediaries such as Google or Microsoft only see the encrypted version of the message, making it impossible for them to turn over copies to the government.
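As a sketch of what end-to-end encryption means in practice, here is the idea expressed with the PyNaCl library (a Python binding to libsodium). The library choice here is mine, not the article's; the point is the general property that only ciphertext ever leaves the sender's machine, so an intermediary has nothing readable to hand over:

    # Sketch of end-to-end encryption with PyNaCl (pip install pynacl).
    # Each party generates a keypair; only public keys are ever shared.
    from nacl.public import PrivateKey, Box

    alice_sk = PrivateKey.generate()
    bob_sk = PrivateKey.generate()

    # Alice encrypts to Bob's *public* key on her own machine.
    ciphertext = Box(alice_sk, bob_sk.public_key).encrypt(b"meet at noon")

    # Whatever relays `ciphertext` (a mail server, a chat service) sees only
    # random-looking bytes. Only Bob, holding his private key, can decrypt it.
    plaintext = Box(bob_sk, alice_sk.public_key).decrypt(ciphertext)
    assert plaintext == b"meet at noon"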
Software like that exists. One of the oldest is PGP, e-mail encryption software released in 1991. Others include OTR (for “off the record”), which enables secure instant messaging, and the Internet telephony apps Silent Circle and Redphone.
  
But it’s difficult to add new features to applications with end-to-end encryption. Take Gmail, for example. “If you wanted to prevent government snooping, you’d have to prevent Google’s servers from having a copy of the text of your messages,” Halderman says. “But that would make it much harder for Google to provide features like search over your messages.” Filtering spam also becomes difficult. And end-to-end encryption would also make it difficult for Google to make money on the service, since it couldn’t use the content of messages to target ads.
A similar point applies to Facebook. The company doesn’t just transmit information from one user to another. It automatically resizes users’ photos and allows them to “tag” themselves and their friends. Facebook filters the avalanche of posts generated by your friends to display the ones you are most likely to find the most interesting. And it indexes the information users post to make it searchable.
These features depend on Facebook’s servers having access to a person’s private data, and it would be difficult to implement them in a system based on end-to-end encryption. While computer scientists are working on techniques for creating more secure social-media sites, these techniques aren’t yet mature enough to support all of Facebook’s features or efficient enough to serve hundreds of millions of users.
Other user headaches
End-to-end encryption creates other headaches for users. Conventional online services offer mechanisms for people to recover lost passwords. These mechanisms work because Apple, Microsoft and other online service providers have access to unencrypted data.
In contrast, when a system has end-to-end encryption, losing a password is catastrophic; it means losing all data in the user’s account.
Also, encryption is effective only if you’re communicating with the party you think you’re communicating with. This security relies on keys — large numbers associated with particular people that make it possible to scramble a message on one end and decode it on the other. In a maneuver cryptographers call a “man in the middle” attack, a malicious party impersonates a message’s intended recipient and tricks the sender into using the wrong encryption key. To thwart this kind of attack, sender and recipient need a way to securely exchange and verify each other’s encryption keys.
“A key is supposed to be associated closely with a person, which means you want a person to be involved in creating their own key, and in verifying the keys of people they communicate with,” says Ed Felten, a computer scientist at Princeton University. “Those steps tend to be awkward and confusing.”
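The usual mitigation is to compare key fingerprints over an independent channel, such as in person or by phone. A minimal sketch of what that comparison amounts to (the hash choice and formatting here are illustrative, not any particular tool's scheme):

    # Sketch: a fingerprint is just a short hash of the public key bytes,
    # easy enough to read aloud over the phone and compare.
    import hashlib

    def fingerprint(public_key_bytes):
        digest = hashlib.sha256(public_key_bytes).hexdigest().upper()
        return " ".join(digest[i:i + 4] for i in range(0, 40, 4))  # first 160 bits

    # Both parties run this on the key they *think* belongs to the other side;
    # a man in the middle who substituted his own key produces a different string.
    print(fingerprint(b"...bob's public key bytes..."))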
And even those who are willing to make the effort are likely to make mistakes that compromise security. The computer scientists Alma Whitten and J.D. Tygar explored these problems in a famous 1999 paper called “Why Johnny Can’t Encrypt.” They focused on PGP, which was (and still is) one of the most popular tools for users to send encrypted e-mail.
PGP “is not usable enough to provide effective security for most computer users,” the authors wrote.
Users expect software to “just work” without worrying too much about the technical details. But the researchers discovered that users tended to make mistakes that compromise their security. Users are supposed to send other people their “public key,” used to encode messages addressed to them, and to keep their private key a secret. Yet some users foolishly did the opposite, sending others the private key that allowed eavesdroppers to unscramble e-mail addressed to them. Others failed to make backup copies of their private encryption keys, so when their hard drives crashed, they lost access to their encrypted e-mail.
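For reference, the asymmetry those users struggled with looks roughly like this with the widely used Python cryptography package (a sketch, assuming a recent library version): the public half is what you hand out, the private half never leaves your machine and deserves a backup.

    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.hazmat.primitives import serialization

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # The *public* key: safe to send to anyone who wants to write to you.
    public_pem = key.public_key().public_bytes(
        serialization.Encoding.PEM, serialization.PublicFormat.SubjectPublicKeyInfo)

    # The *private* key: keep it secret, and keep a backup; losing it means
    # losing access to everything encrypted to the public key above.
    private_pem = key.private_bytes(
        serialization.Encoding.PEM, serialization.PrivateFormat.PKCS8,
        serialization.NoEncryption())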
Using PGP is such a hassle that even those with a strong need for secure communication resist its use. When Edward Snowden, the man who leaked the details of the PRISM program, first contacted Glenn Greenwald at the Guardian in February, he asked the journalist to set up PGP on his computer so the two could communicate securely. He even sent Greenwald a video with step-by-step directions for setting up the software. But Greenwald, who didn’t yet know the significance of Snowden’s leaks, dragged his feet. He did not set up the software until late March, after filmmaker Laura Poitras, who was also in contact with Snowden, met with Greenwald and alerted him to the significance of his disclosures.
Going with the flow
Felten argues that another barrier to adopting strong cryptography is a chicken-and-egg problem: It is only useful if you know other people are also using it. Even people who have gone to the trouble of setting up PGP still send most of their e-mail in plain text because most recipients don’t have the capability to receive encrypted e-mail. People tend to use what’s installed on their computer. So even those who have Redphone will make most of their calls with Skype because that’s what other people use.
Halderman isn’t optimistic that strong cryptography will catch on with ordinary users anytime soon. In recent years, the companies behind the most popular Web browsers have beefed up their cryptographic capabilities, which could make more secure online services possible. But the broader trend is that users are moving more and more data from their hard drives to cloud computing platforms, which makes data even more vulnerable to government snooping.
Strong cryptographic software is available to those who want to use it. Whistleblowers, dissidents, criminals and governments use it every day. But cryptographic software is too complex and confusing to reach a mass audience anytime soon. Most people simply aren’t willing to invest the time and effort required to ensure the NSA can’t read their e-mail or listen to their phone calls. And so for the masses, online privacy depends more on legal safeguards than technological wizardry.
The cypherpunks dreamed of a future where technology protected people from government spying. But end-to-end encryption doesn’t work well if people don’t understand it. And the glory of Google or Facebook, after all, is that anyone can use them without really knowing how they work.

Friday, June 14, 2013

Differentiate Encryption From Compression Using Math



When working with binary blobs such as firmware images, you’ll eventually encounter unknown data. Particularly with regards to firmware, unknown data is usually either compressed or encrypted. Analysis of these two types of data is typically approached in very different manners, so it is useful to be able to distinguish one from the other.
The entropy of data can tell us a lot about the data’s contents. Encrypted data is typically a flat line with no variation, while compressed data will often have at least some variation:
Entropy graph of an AES encrypted file
Entropy graph of a gzip compressed file
But not all compression algorithms are the same, and some compressed data can be very difficult to visually distinguish from encrypted data:
Entropy graph of an LZMA compressed file
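For reference, the entropy plotted in these graphs can be computed in a few lines of Python. This is a plain Shannon entropy over fixed-size windows, which is essentially what tools like ent and binwalk report (window size and implementation details vary by tool):

    # Shannon entropy of a byte string, in bits per byte (0.0 to 8.0).
    import math
    from collections import Counter

    def shannon_entropy(data):
        if not data:
            return 0.0
        counts = Counter(data)
        total = len(data)
        return -sum((n / total) * math.log2(n / total) for n in counts.values())

    # Sliding-window entropy gives the kind of curve shown in the graphs above.
    def entropy_profile(data, window=1024):
        return [shannon_entropy(data[i:i + window])
                for i in range(0, len(data) - window + 1, window)]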
However, there are a few tests that can be performed to quantify the randomness of data. The two that I have found most useful are chi square distribution and Monte Carlo pi approximation. These tests can be used to measure the randomness of data and are more sensitive to deviations in randomness than a visual entropy analysis.

Chi square distribution is used to determine the deviation of observed results from expected results; for example, determining if the outcomes of 10 coin tosses were acceptably random, or if there were potentially external factors that influenced the results. Substantial deviations from the expected values of truly random data indicate a lack of randomness.
Monte Carlo pi approximation is used to approximate the value of pi from a given set of random (x,y) coordinates; the more unique, well-distributed data points there are, the closer the approximation should be to the actual value of pi. Very accurate pi approximations indicate a very random set of data points.
Since each byte in a file can have one of 256 possible values, we would expect a file of random data to have a very even distribution of byte values between 0 and 255 inclusive. We can use the chi square distribution to compare the actual distribution of values to the expected distribution of values and use that comparison to draw conclusions regarding the randomness of data.
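A minimal sketch of that byte-level chi square test (implementation details differ from ent's, but the statistic is the same idea):

    # Chi square statistic of the observed byte distribution vs. the uniform
    # distribution expected of truly random data (expected count = len(data)/256).
    from collections import Counter

    def chi_square(data):
        expected = len(data) / 256.0
        counts = Counter(data)
        return sum((counts.get(b, 0) - expected) ** 2 / expected for b in range(256))

    # Truly random data lands near the degrees of freedom (roughly 255-256);
    # large values indicate a distinctly non-uniform, less random file.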
Likewise, by interpreting sets of bytes in a file as (x,y) coordinates, we can approximate the value of pi using Monte Carlo approximation. We can then measure the percentage of error in the approximated value of pi to draw conclusions regarding the randomness of data.
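And a matching sketch of the Monte Carlo pi test, here treating consecutive byte pairs as (x, y) points in the unit square; real tools typically use wider multi-byte coordinates, but the principle is identical:

    # Monte Carlo pi approximation from byte pairs interpreted as (x, y) points.
    import math

    def monte_carlo_pi_error(data):
        inside = total = 0
        for i in range(0, len(data) - 1, 2):
            x, y = data[i] / 255.0, data[i + 1] / 255.0
            total += 1
            if x * x + y * y <= 1.0:
                inside += 1
        pi_est = 4.0 * inside / total
        return abs(pi_est - math.pi) / math.pi * 100.0   # percent error

    # Random data approximates pi well (small error); structured data does not.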
Existing tools, such as ent, will perform these calculations for us. The real problem is how to interpret the results; how random is encrypted data vs compressed data? This will depend on both the encryption/compression used, as well as the size of your data set (more data generally means more accurate results). Applying these tests to an (admittedly small) sample of files of varying sizes that had been put through different compression/encryption algorithms showed the following correlations (a rough classifier based on these thresholds is sketched after the list):
  • Large deviations in the chi square distribution, or large percentages of error in the Monte Carlo approximation are sure signs of compression.
  • Very accurate pi calculations (< .01% error) are sure signs of encryption.
  • Lower chi values (< 300) with higher pi error (> .03%) are indicative of compression.
  • Higher chi values (> 300) with lower pi errors (< .03%) are indicative of encryption.
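Putting the two statistics together with the thresholds above gives a rough classifier sketch; the numeric cut-offs here are illustrative and should be tuned against your own samples:

    # Rough classifier using the empirical thresholds above; illustration only.
    def classify(chi, pi_err):
        """chi: chi square statistic; pi_err: Monte Carlo pi error in percent."""
        if pi_err < 0.01:
            return "encrypted (very accurate pi approximation)"
        if chi > 10000 or pi_err > 0.5:
            return "compressed (large statistical deviations)"
        if chi < 300 and pi_err > 0.03:
            return "likely compressed"
        if chi > 300 and pi_err < 0.03:
            return "likely encrypted"
        return "indeterminate"

    # Using the table below: classify(351.68, 0.022) -> "likely encrypted",
    # classify(11814.28, 0.765) -> "compressed (large statistical deviations)".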
For example, here is a comparison of the same 24MB file after being put through the AES, 3DES, gzip and lzma algorithms:
Algorithm    Chi Square Distribution    Pi Approximation Error
None         15048.60                   .920%
AES          351.68                     .022%
3DES         357.50                     .029%
LZMA         253.54                     .032%
Gzip         11814.28                   .765%
As you can see, gzip has extreme differences between expected and observed data randomness, making it easy to identify. LZMA is much closer to the AES and 3DES encryption results, but still shows significant variations, particularly on the chi square distribution.
Using these tests, we can usually determine if an unknown block of data is encrypted or compressed and proceed with any further analysis accordingly (identification of specific algorithms may also be possible, but much more work would need to be done to determine if such an endeavor is even feasible, and I have my doubts).
The problem with using a tool like ent against a firmware image (or any third-party data for that matter) is that the entire image may not be encrypted/compressed. In a real-world firmware image, there may be multiple blocks of high-entropy data surrounded by lower entropy data.
Here, we see that binwalk has identified one high entropy block of data as LZMA compressed based on a simple signature scan. The second block of high entropy data, however, remains unknown:
Binwalk signature scan
To prevent the low entropy data in this firmware image from skewing our results, we need to focus only on those blocks of high entropy data and ignore the rest. Since binwalk already identifies high entropy data blocks inside of files, it was a simple matter of adding the chi square and Monte Carlo tests to binwalk, as well as some logic to interpret the results:
Binwalk entropy scan, identifying unknown compression
DECIMAL    HEX        ENTROPY ANALYSIS
-------------------------------------------------------------------------------------------------------------------
13312      0x3400     Compressed data, chi square distribution: 35450024.660765 (ideal: 256), monte carlo pi approximation: 2.272365 (38.252134% error)
1441792    0x160000   Compressed data, chi square distribution: 6464693.329427 (ideal: 256), monte carlo pi approximation: 2.348486 (33.771003% error)
Here, binwalk has marked both high entropy blocks as compressed data, and the large deviations reported for both tests supports this conclusion. As it turns out, this is correct; after further analysis of the firmware, the second block of data was also found to be LZMA compressed. However, the normal LZMA header had been removed from the data thus thwarting binwalk’s signature scan.
If you want to play with it, grab the latest binwalk code from the trunk; this is still very preliminary work, but is showing promise. Of course, the usual cautions apply: don’t trust it blindly, errors will occur and false positives will be encountered. Also, since I’m not a math geek, any feedback from those who actually understand math is appreciated. :)

Thursday, June 13, 2013

New encryption method promises end-to-end cloud security

        
Encrypted data flows to the cloud and back


Researchers at the Massachusetts Institute of Technology have developed an encryption technique that, down the road, could make cloud computing more secure by ensuring that data remains encrypted while being processed.
The system combines three existing schemes — homomorphic encryption, garbled circuit and attribute-based encryption — into what the researchers call a functional-encryption scheme, according to a report in MIT News. The result is that a database in the cloud could handle a request and return a response without data being decrypted.
A scheme that keeps data secure every step of the way would likely appeal to public-sector agencies, which are increasingly moving applications and services to cloud systems, although for the foreseeable future they’ll have to rely on current security measures. The key barrier right now is computing power — the functional-encryption scheme requires more of it than would be practical.
But the researchers point out that the scheme is nascent and performance improvements, as in other areas of computing, are likely. “It’s so new, there are so many things that haven’t been explored — like, ‘How do you really implement this correctly?’ ‘What are the right mathematical constructions?’ ‘What are the right parameter settings?’” MIT associate professor Nickolai Zeldovich, one of the co-authors of a paper on the subject, told MIT News.
Homomorphic encryption has been researched for decades, but the first fully homomorphic scheme was developed four years ago by Craig Gentry of IBM. In 2011, he offered MIT Technology Review a very simple demonstration of the mathematical consistency required: A user sends a request to add the numbers 1 and 2, which are encrypted to become the numbers 33 and 54, respectively. The server in the cloud processes the sum as 87, which is downloaded from the cloud and decrypted to the final answer, 3.
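Gentry's numbers are a simplified stand-in, but the additive flavor of homomorphic encryption is easy to demonstrate with a toy Paillier cryptosystem, in which multiplying two ciphertexts yields a ciphertext of their sum. The sketch below uses deliberately tiny primes and no padding, strictly for illustration (this is Paillier, not the MIT functional-encryption scheme):

    # Toy Paillier cryptosystem (additively homomorphic); illustration only.
    # Requires Python 3.8+ for the modular inverse via pow(x, -1, n).
    import math, random

    p, q = 1789, 2003                                    # toy primes
    n, n2 = p * q, (p * q) ** 2
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)    # lcm(p-1, q-1)
    g = n + 1
    mu = pow(lam, -1, n)                                 # valid because g = n + 1

    def encrypt(m):
        r = random.randrange(1, n)
        while math.gcd(r, n) != 1:
            r = random.randrange(1, n)
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def decrypt(c):
        return (pow(c, lam, n2) - 1) // n * mu % n

    c1, c2 = encrypt(1), encrypt(2)       # encrypt 1 and 2 separately
    c_sum = (c1 * c2) % n2                # the server multiplies ciphertexts...
    assert decrypt(c_sum) == 3            # ...and the result decrypts to 1 + 2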

Monday, June 10, 2013

Hackers may have had access to resort's credit card system for eight months

       


The financial information of guests at Callaway Gardens was stolen by thieves who implanted malware on the Pine Mountain, Ga. resort's credit and debit card systems.
How many victims? Unknown.
What type of personal information? No specifics have been given in terms of what kind of financial data was accessed.
What happened? On Thursday, a credit card processing company notified Callaway Gardens, as well as a number of other companies that it services, of suspicious transactions involving guests. It's unclear if all of the companies were compromised by the same malware, but the victim organizations were identified because thieves made fraudulent purchases at common locations.
What was the response? Callaway Gardens notified guests via email and social networking sites. Additionally, the company alerted the major card brands and credit reporting agencies.
Details: According to a Callaway Gardens spokeswoman, the breach started in early September.
Quote: “In our team's immediate investigation, fraudulent malware was detected, contained and removed,” Barry Morgan, CFO of Callaway Gardens, said.
Source: www.ledger-enquirer.com, Ledger-Enquirer, “Callaway Gardens credit, debit card records breached,” May 25, 2013.

PRISM alone is reason enough to say no to the EPD



NSA taps into user data
Last weekend the spying scandal dominated the news. The American government is spying on everyone who is not a citizen or resident of the US. Our behavior is analyzed second by second, in a way that neither we ourselves nor a loved one ever could.
That this is possible is due to the Patriot Act, the American legislation meant to counter terror. Apparently Obama fears you and me more than his own residents. That the attacks in Boston, Oklahoma, Colorado and of course on September 11, 2001 were committed by people from home soil apparently does not matter.
If Obama's story is correct, it is not clear how he intends to separate residents from non-residents: the United States has no population register. Going by the source behind the spying scandal, profiles of his own countrymen are indeed being kept as well. So in practice, anyone with even a modest digital life ends up in the NSA's systems.
Complete picture
PRISM gives the Americans an eerily complete picture of your life and mine. It is now clear that detailed information is being supplied via Facebook, Google, Microsoft, Apple, Dropbox and various other companies. Whether it concerns a flirtation, a business dispute, the documents that city council members receive confidentially on their iPads, religious or political convictions, or a contact the US finds unacceptable: everything, absolutely everything, is stored.
And that is only the input from the PRISM program. Information is also demanded via National Security Letters, and there too we do not know which data or databases about us are involved. How that information can be put to use can be seen in the video in which Edward Snowden, the source of the PRISM story, explains how the system works and how it can be used. The picture is eerily reminiscent of Stasi practices, yet this is 2013, not 1983.
EPD
But where the Stasi could not simply get its fingers on our medical records, that is far less certain with the American government. The Dutch electronic patient record (Elektronisch Patiëntendossier, EPD) is still being stitched together by the American company CSC. That company – incidentally a former employer of mine, from a gray past long before the 2001 attacks – will simply have to deliver if the NSA wants it to.
But perhaps it will not even have to deliver, because the company turns out to have made very fundamental mistakes in securing other sensitive data. The database with Danish driving licence data was hacked, as were the database with Danish personal identification numbers, the IT system of the Schengen zone, and the email accounts plus passwords of 10,000 police officers and tax officials.
It does not inspire confidence when an outfit handles European citizens' information this way. Yet another example of why the current approach to the EPD is not a happy one. A nerd from The Pirate Bay seems less threatening to me than an American government that keeps my data longer than I will live. But with CSC you apparently get both threats. That is why I do not want to be in the EPD, though that opt-out will no doubt also end up stored in a PRISM system.

http://www.hpdetijd.nl/2013-06-10/alleen-al-vanwege-prism-geen-epd/

Saturday, June 8, 2013

Council of the European Union Discusses Progress on the Proposed EU Data Protection Regulation


          
On June 6, 2013, the European Union’s Justice and Home Affairs Council held legislative deliberations regarding key issues concerning the European Commission’s proposed General Data Protection Regulation (the “Proposed Regulation”). The discussions were based on the Irish Presidency’s draft compromise text on Chapters I to IV of the Proposed Regulation, containing the fundamentals of the proposal and reflecting the Presidency’s view of the state of play of negotiations. At the Council meeting, the Presidency was seeking general support for the conclusions drawn in their draft compromise text on the key issues in Chapters I to IV.
Alan Shatter, Minister for Justice, Equality and Defense of Ireland, who chaired the meeting, noted the extensive work completed and progress made since the previous Council meeting on March 7-8, 2013. He emphasized the theme, later repeated by the EU Member States, “Nothing is agreed until everything is agreed,” to reiterate that the Presidency is seeking broad support for the approach. He also acknowledged that nothing can be settled conclusively at this stage.
Although most Member States broadly supported the general approach of the Presidency, there is still a need for further discussions and additional work on several key issues, including the risk-based approach and flexibility for the public sector. Additionally, some Member States (e.g., Germany, Denmark and the UK) found it premature to endorse the document since they do not support all of its conclusions.
Some of the main issues discussed during the deliberations included:
Legal Form
Some Member States still have reservations regarding the use of a regulation as the legal mechanism (according to the Presidency’s Note, Belgium, Czech Republic, Denmark, Estonia, Hungary, Sweden, Slovenia and UK).
Consent
Although some Member States supported the Presidency’s suggestion that consent should be “unambiguous,” several ministers emphasized their support for the Commission’s “explicit” consent requirement (France, Poland, Italy, Romania and Greece).
Risk-based Approach
Member States expressed their broad support for the risk-based approach, but noted that more work was needed, including on the definition of “risk” and the relevant factors to be taken into account when assessing risk.
EU Institutions, Agencies, Bodies and Offices
There is strong support for ensuring that the EU institutions, agencies, bodies and offices are subject to equivalent data protection rules, and that those rules come into effect as soon as possible. The Council’s Legal Service noted that – for reasons of sound drafting and legislation – provisions applicable to EU institutions should be put in a separate instrument, not in the Proposed Regulation.
Vice-President of the European Commission and Commissioner for Justice, Fundamental Rights and Citizenship Viviane Reding also addressed the Council and highlighted that the current level of protection as laid down in the EU Data Protection Directive 95/46/EC should not be undermined. The Presidency concurred.
Shatter noted that the Proposed Regulation will remain a priority for the final three weeks of the Irish Presidency, with several working group meetings already scheduled for June.
For more information on the Proposed Regulation, visit our EU Data Protection Regulation Tracker.

Tags: Belgium, Council of the European Union, Estonia, EU Members States, EU Regulation, European Commission, European Union, France, Germany, Hungary, International, Italy, Legislation, Poland, Slovenia, Sweden, United Kingdom, Viviane Reding

 

Friday, June 7, 2013

Hacker charged with stealing from police databases


Peter Stanners
 
Pirate Bay co-founder Gottfrid Svartholm Warg is suspected as an accomplice of the 20-year-old who was arrested yesterday following a tip-off from the Swedish authorities
Gottfrid Svartholm Warg from the Pirate Bay is wanted as an accomplice (Photo: Scanpix)
A suspected hacker was charged today with breaking into the police’s IT system and stealing vast amounts of private information including social security (CPR) numbers.
The 20-year-old Dane was arrested yesterday following a police investigation that was started in January when Swedish authorities alerted the Danish national police, Rigspolitiet, that their IT system may have been broken into.
Gottfrid Svartholm Warg, the notorious Swedish hacker and co-founder of filesharing website the Pirate Bay, has also been charged by the Danish police, who are demanding his extradition.
Warg is currently in custody in Sweden after being arrested on suspicion of hacking into the bank Nordea and the Swedish equivalent of the CPR register, the Folkbokföringsregistret.
The Danish suspect is accused of hacking into Rigspolitiet’s IT system, which is run by CSC, a computer firm that protects a number of sensitive databases belonging to the police and other public authorities.
The IT professional, whose name was not released, is charged with stealing around 4,000,000 pieces of information from CSC's database last year and passing them on to Warg, who attempted to use them to earn money.
The data included the email addresses and passwords of 10,000 policemen as well as CPR numbers from the driving licence database and information about wanted persons in the Schengen region.
Rigspolitiet chief Jens Højberg stated in a press release that much of the stolen data was not legible and that CPR numbers stolen from the driving licence database were not connected to people's names.
Despite there being no evidence the hackers had abused the information, Højberg said that the incident was very serious.
“It’s a major attack on the IT system used for the police databases and which we expect CSC to protect for us,” Højberg stated. “That is why the police are treating the case very seriously. It is of course completely unacceptable that it was possible to access the police’s database despite the very high security standards that we demand and expect from our contractors.”
Rigspolitiet has called upon the domestic intelligence agency, PET, and the counter-cybercrime division of defence intelligence agency FE to help with the investigation.
“We will assess the extent of the security breach and whether other public IT systems have been affected, and also to ensure that the necessary security measures are taken to remedy the situation,” Jacob Scharf, the head of PET, wrote in a press release. “The police’s IT systems will be thoroughly examined.”

http://cphpost.dk/news/national/hacker-charged-stealing-police-databases

Thursday, June 6, 2013

A proposed new law would require EU countries to jail hackers for a minimum of two years

Mandatory 2-year jail sentence for EU hackers comes a step closer


126 billion files publicly visible on Amazon cloud, security firm finds


Penetration tester Rapid7 discovers sensitive information in exposed documents on Amazon's cloud storage service

            
A security company in the US has discovered thousands of publicly accessible files on Amazon's S3 cloud storage service, many of which contain sensitive information.
Rapid7 discovered the files by searching for storage 'buckets' - a logical pool of storage capacity - whose access setting has been changed to 'public' from the default setting of 'private'.
This means that a list of the contents of the bucket can be seen by anyone who knows or guesses the URL.
The company successfully guessed the URLs of 12,328 buckets on the S3 service by inserting the names of Fortune 500 companies into the standard URL format for S3.
Of those, 1,951 were set to 'public'. These buckets contained a combined 126 billion files, so Rapid7 analysed a cross section of 40,000 files.
These files were found to contain such sensitive information as sales records, employee personal information, unencrypted passwords and the source code for a video game. 
"Much of the data could be used to stage a network attack, compromise users accounts, or to sell on the black market," Rapid7 wrote in a blog post.
Rapid7 advises companies to check whether their S3 buckets are set to public. "If so, think about what you're keeping in that [those] buckets and whether you really want it exposed to the internet and anyone curious to take a look."
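For organizations that want to run that check themselves, here is a hedged sketch of what probing a bucket name looks like. The bucket names are invented, and interpreting the HTTP responses this way assumes the standard virtual-hosted S3 URL format:

    # Sketch: probe candidate S3 bucket names for public listings.
    # A 200 response containing <ListBucketResult> means the listing is
    # world-readable; 403 means the bucket exists but is private; 404 means
    # no such bucket. Bucket names below are made up.
    import requests

    for name in ("examplecorp", "examplecorp-backup", "examplecorp-media"):
        r = requests.get("https://%s.s3.amazonaws.com/" % name, timeout=10)
        if r.status_code == 200 and "<ListBucketResult" in r.text:
            print(name, "is publicly listable")
        elif r.status_code == 403:
            print(name, "exists but is private")
        else:
            print(name, "not found or inaccessible")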
As Rapid7 points out, not only does Amazon set S3 buckets to private by default, it also provides a walkthrough guide to keeping data stored on the service secure.
The research shows that lax security practices that companies may get away with on their own IT infrastructure become highly dangerous when data is stored in the cloud.

Sunday, June 2, 2013

Oracle: We’re investing in Java security

In a blog post yesterday, Oracle outlined the steps it’s been taking and the investments it’s been making to ensure the security of Java, its beleaguered open-source programming language.
Starting in October 2013, the company will release quarterly security patches. It also says it will respond more quickly to security issues in the future and will do better at ensuring vulnerabilities don’t make it into the codebase in the first place using automated security testing tools.
“The company has made a number of product enhancements to default security and provide more end user control over security,” writes Oracle Java software development lead Nandini Ramani on the company blog.
For the enterprise, which still relies heavily on Java, Ramani said, “The public coverage of the recently published vulnerabilities impacting Java in the browser has caused concern to organizations committed to Java applications running on servers,” implying the problem was more with PR than security on the server side. In response, Oracle introduced Server JRE as a separate distro.
The trouble started nearly a year ago, when Oracle fixed a gaping Java security hole it may have known about for months. Slow security is no security, and Oracle’s investment at that time wasn’t nearly good enough; that vulnerability set the stage for 2013, when a string of security issues popped up.
In January, the world found out about a Java vulnerability that would allow attackers to steal information or hook up a botnet to any user with a Java plugin-running browser. At that point, the Department of Homeland Security and Apple issued memos saying no one should use Java. Oracle issued a patch, but like a junkie with a $10 habit and a $5 stash, DHS said the fix was insufficient.
Then, after an attack in February, Facebook disabled Java in a high-profile vote of no confidence. (Microsoft and Apple underwent similar attacks.)
Finally, in March, Oracle issued a big emergency Java update. The company issued a further 42 security fixes in April.
“It is our belief that as a result of this ongoing security effort, we will decrease the exploitability and severity of potential Java vulnerabilities in the desktop environment and provide additional security protections for Java operating in the server environment,” Ramani said in conclusion.
“Oracle’s effort has already enabled the Java development team to deliver security fixes more quickly, resulting in fewer outstanding security bugs in Java.”

Read more at http://venturebeat.com/2013/06/01/oracle-java-security/#re37AqwkMpWIQqRV.99