Sunday, May 31, 2015

Average cost of computer breach is $3.79 million

Source: USA TODAY
SAN FRANCISCO - The average cost of a computer breach at large companies globally was $3.79 million, a survey released Wednesday found. For U.S.-based companies, the average cost was much higher, $6.5 million.
The survey was conducted by the Ponemon Institute, a security research center, in conjunction with IBM. It surveyed 350 companies in 11 countries that had experienced a data breach, mostly in 2014. In the United States, 62 companies participated in the survey.
"The cost of a data breach, both the total organization cost as well as the cost per compromised record, increased substantially," said Larry Ponemon, chair of the institute.
Globally it has risen 23% since 2013. In the United States it's up 11%.
The average cost per lost or stolen record in the United States was $217. Globally the cost was $154.
Those costs included abnormal turnover of customers, reputation loss, diminished goodwill and paying for credit reports and aid to customers whose information was breached, said Ponemon.
While that's what each record costs the company that lost it, that same record is worth far less on the open market, said Caleb Barlow, vice president of IBM Security.
"Out on the dark side of the Internet, a credit card's worth about $1 if you're lucky, though a health care record can easily be worth $50," he said.
That's because credit cards can readily be cancelled so their worth plummets quickly. Health-care information, especially if it includes a Social Security number, is fixed and can be used by criminals for a long time.
Simply investigating breaches in and of itself is expensive, costing global companies on average just shy of $1 million per breach, the survey found.
While the public tends to see hackers behind every breach, actually slightly less than half of breaches, 47%, are caused by malicious or criminal attacks. Twenty-nine percent involved system glitches while 25% were the result of human error or negligence, the survey found.

Cyber-Security Is a Top Priority in Corporate Boardrooms

By Sean Michael Kerner

Posted 2015-05-28    

A new survey from NYSE Governance Services and Veracode on current IT security attitudes and trends finds that boards are taking cyber-security very seriously.

Security vendor Veracode and NYSE (New York Stock Exchange) Governance Services released a study today that examines the role of cyber-security in the boardroom. Over the course of the last year, cyber-security has increasingly become top of mind for many, including corporate boardroom executives.
"We got some interesting results," Chris Wysopal, co-founder and CTO at Veracode, said about the survey, which included responses from 184 directors of public companies, including those in financial services, technology and health care. "One finding that was surprising is how seriously boards are taking security," he told eWEEK.
Forty-six percent of respondents said that cyber-security matters are discussed at most board meetings, while 35 percent noted that security is a topic at every meeting.
Board participation in cyber-security was noted in an IBM-sponsored 2015 Cost of Data Breach Study released May 27 as being a key factor in reducing costs. The report found that board-level involvement in security can reduce the costs associated with a data breach by approximately $5.50 per record.
Another indicator of the increased importance of cyber-security was found in the response to the question of who is responsible for cyber-security. Wysopal noted that the top response was the CEO, indicating the CEO is the one who is ultimately responsible.
"Interestingly enough, at No. 2 is the CIO, and No. 3 is the whole C-suite team, while the CISO [chief information security officer] came in fourth," he said. "The CISO isn't the punching bag anymore if there is a cyber-security incident, and boards are understanding that security is an enterprisewide risk issue and the whole senior executive team is responsible."

Wysopal noted that in his experience, the reporting structure for the CISO is also changing. For many organizations, the CISO reports to the CIO as a function of compliance. What is now happening in some organizations, however, is the CISO reports directly to the CEO and, in some organizations, to the chief financial officer (CFO). Overall, the role of the CISO is becoming more important in organizations as security has become a cross-enterprise risk function, he said.
While security is a topic of conversation at the board level, there isn't a great deal of confidence in how well organizations are protected against threats. Sixty-six percent of respondents to the study said they are "less than confident" that their companies are properly secured against cyber-attacks. Twenty-nine percent said they are confident in their organization's cyber-security efforts, while only 4 percent reported being very confident.
Another key topic addressed in the study is how cyber-security information should be presented to boards. Thirty-three percent reported that they want high-level security descriptions, while 31 percent prefer to see risk metrics. Only 11 percent of respondents noted that they want descriptions of security technologies.
"Boards want real numbers on the risk posture of the organization," Wysopal said. "They don't want to hear about technology, they don't care about 'firewall this' or 'encryption that'—they want high-level strategy."
One area that the study didn't specifically examine is how and where organizations are investing in cyber-security.  Wysopal said it's difficult to make a direct connection between dollars spent and security risk reduction.
"Security is not a science. The best we can do today is look at how breaches are happening and understand what investments the breached organizations didn't make," he said.
Sean Michael Kerner is a senior editor at eWEEK. Follow him on Twitter @TechJournalist.

Are artificial pancreas devices vulnerable to cyber attacks?

May 20

A team of researchers explains that millions of lives could depend on the resilience of a new generation of “artificial pancreas” devices to cyber attacks.
Medical devices are open to cyber attacks; many studies have demonstrated that a large number of devices in the new generation of medical equipment are affected by security flaws that hackers could exploit.
A few weeks ago, a group of researchers reported that drug infusion pumps are affected by numerous remotely exploitable vulnerabilities that could open the door to hackers. Now we turn to the “artificial pancreas” used to manage the administration of insulin to diabetics.
The artificial pancreas could be vulnerable to cyber attacks that can alter the insulin level transmitted from a glucose monitor to the insulin pump.
According to a paper published in the journal Diabetes Technology & Therapeutics, Dr. Yogish Kudva and colleagues analyzed the resilience of the artificial pancreas to cyber attacks. Kudva highlighted the need to carefully evaluate the security of these devices and their components.
“We wanted to make sure that this important aspect of the field was adequately addressed as we get ready at scaling up on our studies,” explained Dr. Kudva.
The mechanism behind the artificial pancreas is quite simple: the patient's blood sugar is measured by a glucose meter, which transmits the value to the insulin pump; the pump then adjusts the insulin dose accordingly.
Dr. Kudva explained that the data must be encrypted to prevent tampering that could allow attackers to change the insulin level, with serious repercussions for the health of the patient.
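Dr. Kudva's tamper-proofing point can be sketched with a message authentication code on the meter-to-pump link. This is a hypothetical illustration, not any device's actual protocol; the shared key, field names and helper functions are all assumptions:

```python
import hashlib
import hmac
import json

# Hypothetical sketch: meter and pump share a secret key, and the meter
# attaches an HMAC tag to each reading so the pump can detect tampering.
SHARED_KEY = b"meter-pump-shared-secret"  # assumed pre-provisioned at pairing

def send_reading(mg_dl: int) -> dict:
    payload = json.dumps({"glucose_mg_dl": mg_dl}).encode()
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def pump_accepts(msg: dict) -> bool:
    expected = hmac.new(SHARED_KEY, msg["payload"], hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, msg["tag"])

msg = send_reading(110)
assert pump_accepts(msg)  # authentic reading is accepted

# An attacker who alters the glucose value cannot forge a valid tag.
tampered = dict(msg, payload=json.dumps({"glucose_mg_dl": 40}).encode())
assert not pump_accepts(tampered)
```

Note that an HMAC provides integrity and authenticity rather than secrecy; a real design would combine it with encryption and replay protection, in line with the encryption point above.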
“I think the most important issue is to get security people more involved,” said Kudva. “I don’t think there is enough security expertise at this time.”
Although the results of the tests on the “artificial pancreas” aren’t yet available, security experts and medical staff agree on the need to implement security measures to protect these devices and to introduce backup or warning mechanisms for responding in case of attack.
In this specific case, an alarm could be triggered whenever the artificial pancreas attempts to inject an anomalous quantity of insulin.
Security is crucial for medical devices: the new generation of medical equipment is always online and manages a huge quantity of sensitive data, so security must be a pillar of its design.
“I think that’s the next step,” Kudva said of the closed-loop “artificial pancreas” development.
Let’s wait for the results of the test.
Pierluigi Paganini

CareFirst data breach affects about 1.1M people

May 22

CareFirst BlueCross BlueShield has fallen victim to a major data breach; personal information belonging to more than one million individuals could have been exposed.
Health insurer CareFirst BlueCross BlueShield is notifying more than one million individuals that it was the victim of a data breach in which attackers gained limited, unauthorized access to one of the company's databases. Investigators believe the attackers accessed personal information including names, birth dates, email addresses, subscriber identification numbers and the usernames used to access the CareFirst website.
“On May 20, 2015, CareFirst BlueCross BlueShield (CareFirst) announced that the company has been the target of a sophisticated cyberattack. The attackers gained limited, unauthorized access to a single CareFirst database.” states the advisory posted to the website.
“Approximately 1.1 million current and former CareFirst members and individuals who do business with CareFirst online who registered to use CareFirst’s websites prior to June 20, 2014 are affected by this event.”
CareFirst had hired the security firm Mandiant to perform an assessment of its internal IT systems, which revealed the data breach. On April 21, experts at Mandiant discovered evidence of unauthorized access to the database dating to June 19, 2014. They found no evidence of further attacks against CareFirst systems.
The advisory highlighted that the hackers accessed only usernames; the associated passwords were stored in encrypted form on a separate system that was not breached. A message from CareFirst President and CEO Chet Burrell confirmed that no member Social Security numbers, medical claims information or financial information was exposed.
All the individuals potentially affected by the data breach are being notified; the company urges them to change their credentials and is offering two years of free credit monitoring and identity theft protection services.
“All affected members will receive a letter from CareFirst offering two free years of credit monitoring and identity theft protection. The letters will contain an activation code and you must have the letter to enroll in the offered protections. Out of an abundance of caution, CareFirst has blocked member access to these accounts and will request that members create new user names and passwords.”
Beware of scammers who may try to exploit the incident; CareFirst has stressed that it will not be contacting people by email, phone or social media.
Unfortunately, health insurers are a privileged target for criminal organizations. In February, the nation's second-largest health insurer, Anthem, announced that hackers had broken into its servers and stolen personal information for about 80 million people.
Pierluigi Paganini

Study: Cyber Crime Prevention & Detection On the Rise

Laurence Guihard-Joly, General Manager, IBM Resiliency Services

By Laurence Guihard-Joly
Last August hackers stole personal photos of young actress Jennifer Lawrence and other celebrities from their smartphones and posted them online. Several months later, Sony Entertainment was hacked and the group responsible has routinely leaked troves of sensitive information, including everything from email threads to financial and salary details.
A similar intrusion at JPMorgan Chase late last year compromised the records of 76 million households and seven million business clients. The common thread between celebrity photo hacks and digital corporate invasions is that the tools and tactics thieves use to purloin private pics and to steal more lucrative loot are the same – from spreading malware that can damage systems and compromise data to distributing “phishing” emails designed to trick people into sharing passwords and other sensitive info.
But while swiped movie star snaps make for bigger headlines at the supermarket checkout, the danger is much greater from data breaches like the ones at Sony Entertainment and JPMorgan Chase – where the damage can result in lower revenue, lost business, legal exposures and irreparable damage to reputation.
These risks become bigger and guarding against them more difficult as people put their business and personal information “into the cloud” by using mobile devices to purchase consumer goods, monitor health and fitness, pay taxes and manage government benefits, check investment accounts, make bank transfers, and share business correspondence and documents.
For more than a decade, the Ponemon Institute has performed detailed global analyses on the threat and impact of data breaches, and the news this year is not good as hackers and the tools they use become more sophisticated and efficient. Ponemon’s 2015 survey (2015 Cost of Data Breach Study: Global Analysis) of more than 350 companies showed that the cost of such intrusions is now roughly $3.8 million per incident, up 23 percent over the past two years. Average cost per record of sensitive information compromised has risen by 6 percent to $154 over the past year.
The latest Ponemon study highlights three key new findings that illustrate how detecting and preventing such breaches is a rising priority for many companies as the frequency and severity of these intrusions increases:
  • Security starts at the top – The focus on data breaches is rising to the top of the business, with senior leaders becoming involved in decisions to increase investments in security and insurance out of concern not only for near-term costs but also long-term impacts on reputation and business prospects. The survey shows for the first time that board-level involvement decreases intrusion costs by $5.50 per compromised record;
  • Time is not on your side – For the first time, the survey showed a direct correlation between how quickly an organization detects and stops a breach and the overall cost, with the average time it takes to identify a malicious attack being 256 days. The more time intruders have to poke around in your system, the greater the damage and the more they can steal. Consider that the Sony Entertainment intrusion went on for more than a year before anyone even discovered it;
  • Be Prepared – Having a comprehensive security policy means having all the right people involved across the business. For example, the study said early involvement of the Business Continuity Management team in remediation of intrusions can lower the costs of a breach by an average of $7.10 per record. The BCM team focuses daily on the resiliency of IT systems, guarding against a range of threats including natural disasters, civil unrest and, yes, security breaches. If you have to take a server, a database, or data center down to fix a security breach, it’s the Business Continuity Management team who can keep the business up and running by using the data backup or moving the work to another location.
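As a back-of-the-envelope sketch of what those per-record savings mean at scale (the record count is hypothetical; the dollar figures are the study's reported averages):

```python
# Back-of-the-envelope sketch using the study's averages: $154 per record
# globally, minus the reported savings from board involvement ($5.50/record)
# and early BCM involvement ($7.10/record). Averages, not a predictive model.
RECORDS = 100_000                 # hypothetical breach size
BASE_PER_RECORD = 154.00
SAVINGS = {"board_involvement": 5.50, "bcm_early_involvement": 7.10}

baseline = RECORDS * BASE_PER_RECORD
mitigated = RECORDS * (BASE_PER_RECORD - sum(SAVINGS.values()))

print(f"baseline:  ${baseline:,.0f}")              # $15,400,000
print(f"mitigated: ${mitigated:,.0f}")             # $14,140,000
print(f"saved:     ${baseline - mitigated:,.0f}")  # $1,260,000
```

Even for a mid-sized breach, the two organizational measures the study identifies add up to a seven-figure difference.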
Stolen celebrity selfies may create a bigger media splash, but the trend lines clearly show that corporate security breaches are a bigger concern as hackers are far more interested in cash than flash. As the Ponemon study illustrates, top-level focus on the issue, acting with speed, and involving the right people – like your Business Continuity Management team – are all essential to combating these threats quickly and effectively.

IRS Attack Demonstrates How Breaches Beget More Breaches

Weak authentication validation assumed only taxpayers would know their Social Security Numbers and other information that criminals have been stealing for years.
As the IRS begins to dig into forensics around a breach in its online "Get Transcript" application that exposed 100,000 tax accounts to intruders, early information released this week to the public is offering security food for thought to both public and private sector organizations. According to security pundits, the breach offers ample evidence of authentication weaknesses prevalent today and also shows how interconnected unrelated data breaches can really be.
The IRS said in a statement yesterday that criminals used taxpayer-specific data from "non-IRS sources" to gain unauthorized access to the breached accounts.
"These third parties gained sufficient information from an outside source before trying to access the IRS site, which allowed them to clear a multi-step authentication process, including several personal verification questions that typically are only known by the taxpayer," the statement said, explaining that the Treasury Inspector General and the IRS Criminal Investigation unit are looking into it and have shut down the application in the interim.
According to Ken Westin, the way this breach went down illustrates how large scale breaches have transformed personal information into public information—or at least information publicly available on the black market.
"We live in a world where the Internet has become a database of ‘you’ and where one data breach can easily feed another. According to the IRS, the data came ‘from questionable email domains’ and at a high velocity of requests," he explains. "The information that was used to bypass the security screen, including Social Security numbers, dates of birth and street addresses, are all components of data that have recently been compromised in health insurance data breaches."
The authentication problems are two-fold. One is that agencies like the IRS, as well as private sector organizations, don't do enough to properly verify identity during enrollment for new accounts.
"Authentication relies on being able to properly identify people at least once. But how do you know who you’re dealing with before that first identification happens?" says Jeff Williams, CTO of Contrast Security. "Well, the IRS decided that if you know a person’s SSN, birthday, and street address, then you must be that person. For government agencies in particular, we can do better. We should have an official channel that can provide higher assurance authentication before granting access to our personal information."
The second authentication weakness is the age-old weakness of depending solely on the lowly password to keep intruders at bay.
"This data breach demonstrates the limitations of using static authentication credentials, especially information that cybercriminals are showing they can easily steal and then repurpose for data breaches such as this," says Tsion Gonen, vice president of strategy in identity and data protection at Gemalto.
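Gonen's contrast between static and dynamic credentials can be sketched by comparing a knowledge-based answer, which verifies forever once stolen, with an HOTP-style one-time code in the spirit of RFC 4226. This is a simplified illustration, not the IRS's or any vendor's actual mechanism; the secret and function names are assumptions:

```python
import hashlib
import hmac
import struct

# Assumed per-user secret, provisioned at enrollment (hypothetical).
SECRET = b"per-user-enrollment-secret"

def hotp(counter: int, digits: int = 6) -> str:
    """HOTP-style code: HMAC over a moving counter, RFC 4226 style."""
    mac = hmac.new(SECRET, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                 # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# A stolen static answer replays at any time...
stolen_answer = "mother's maiden name"
assert stolen_answer == "mother's maiden name"  # verifies forever once known

# ...but each counter value yields a different, short-lived code, so a
# captured code is useless for the next login.
assert hotp(1) != hotp(2)
```

The point is not this particular algorithm but the property: a dynamic credential cannot be harvested once from a health-insurer breach and replayed indefinitely.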
Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.

Thursday, May 28, 2015

IRS Data Breach: What to Do Right Now

IRS Data Breach: What to Do Right Now

Source: Tom's Guide US

If you find out you're among the 200,000 American taxpayers hit by the data breach disclosed yesterday (May 26) by the Internal Revenue Service, then you are in serious trouble. This is much worse than a typical data breach. Your identity is actively being used for fraud, and you need to take action as soon as you get a notice from the IRS.
First, some background: The thieves who hit the IRS between February and mid-May of this year didn't steal personal information from the tax agency. They already had it. That information was harvested in unspecified earlier data breaches — possibly the massive breaches at health insurers Anthem and Premera disclosed a few months ago, which contained full names, addresses, dates of birth and Social Security numbers of more than 90 million individuals.
With such information, the IRS "hackers" didn't have to hack the IRS website. Instead, they walked right in the front door, verifying stolen identities in half of roughly 200,000 attempts. (The other half were stymied by "security" questions, the answers to some of which could have been found on Facebook or Zillow.) Once in, they downloaded transcripts of previous tax filings, which were used to file fraudulent 2014 tax returns that paid out more than $50 million in refunds to crooks.

"These are not amateurs," IRS Commissioner John Koskinen told The New York Times. "These actually are organized crime syndicates that not only we, but everybody in the financial industry, are dealing with."
The IRS will mail out letters this week to all 200,000 individuals whose personal information was used. The 100,000 people whose transcripts were fraudulently obtained will be offered free credit monitoring for an unspecified period.
There's no harm in signing up for the free monitoring, but it won't be enough. If you get a letter from the IRS about this incident, whether or not a transcript was obtained, then your personally identifiable information is already being exploited by criminals. You're not at risk of identity theft; your identity has already been stolen.
Here's what you need to do.
1. Request a fraud alert, also known as a credit alert, on your file with one of the three main credit-reporting agencies. The agency you contact will inform the other two, you'll get a credit report from each, and it will cost you nothing.
For the next 90 days, you'll be informed whenever a credit report is run on you (a routine occurrence) and whenever someone tries to open an account in your name (not routine). You can renew the fraud alert every 90 days as many times as you like.
To contact Equifax, call 1-888-766-0008 or go to this Web page. To contact Experian, call 1-888-397-3742 or go here. For TransUnion, the phone number is 1-800-680-7289 and the link is here.
2. Sign up with a good credit-monitoring service, also known as an identity-protection service. "Protection" is a misleading term — what these services do is alert you if something is wrong, and, sometimes, help you resolve issues. Unfortunately, the services you get for free if you're the victim of a large-scale data breach are among the least impressive we've evaluated, doing the bare minimum to keep you informed of possible fraudulent activity.
Instead, it's worth paying for a solid service such as LifeLock, IdentityForce or Identity Guard, which we found to be much more useful and thorough. Each offers different tiers of pricing and coverage; the cost can add up, but we recommend signing up for at least six months if you're part of the IRS 200,000.
3. File a police report of identity theft with your local police precinct. This may seem pointless, but it's extremely important because it will establish a legal basis with which you can dispute any future fraudulent activity, and may make you eligible for a free credit freeze (see below).
4. File a formal complaint of identity theft with the Federal Trade Commission, which you can do here, for the same reasons.
5. Consider instituting a credit freeze, also known as a security freeze, with the credit-reporting agencies. You may have to pay a small fee to each agency to both begin and end a credit freeze, although in most states, the fee is waived for persons who have filed police reports of identity theft.
A credit freeze will stop anyone from opening an account in your name without your explicit approval.

The downside is that it won't let anyone run a credit report on you, either — which may snarl things up if you're trying to buy a house or a car or even to change your cellphone company. Still, because your identity has already been stolen and anything could happen with your data, the inconvenience may be worth the peace of mind.

Monday, May 25, 2015

Can't Touch This: New Encryption Scheme Targets Transaction Tampering

An Estonia-based cybersecurity firm adopts a “blockchain” public ledger system to verify online transfers of sensitive information

In August 1977 mathematics columnist Martin Gardner introduced the concept of RSA cryptography in the pages of Scientific American. Developed by three researchers at the Massachusetts Institute of Technology, the new algorithm would go on to dominate the securing of transactions over the Internet. Nearly four decades later, with cryptocurrencies and smart-device communications adding to a growing list of online transactions, the search is well underway for an even more secure and scalable replacement for RSA.
Conceived by Ron Rivest, Adi Shamir and Leonard Adleman, RSA cryptography enables Web users to conduct their business in relative privacy rather than having to send their sensitive information openly over the Internet. Enter your credit card into a Web site’s order form, for example, and that information is turned into a code that’s unreadable to anyone except for the vendor who processes your order.
A weakness with RSA, though, is that it was not designed to verify the identity of the person initiating the transaction. If someone were to intercept your online order and, say, change the information to have it shipped to a new address, it would be difficult for the vendor, or anyone, to know that the transaction had been tampered with until well after the fact. There is no way to authenticate you as the person who initiated the order, as opposed to the person who changed the shipping address. As Chris Christensen, an analyst at research firm IDC, put it in a 2006 paper (pdf) on the subject, “How does the receiver know that a message really came from the person who ‘signed’ it?”
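What Christensen's question points at is the job of a digital signature. The sketch below uses textbook RSA with deliberately tiny primes and a toy stand-in for a hash function, so it illustrates only the math and is in no way usable security:

```python
# Toy, textbook-RSA illustration of the missing piece: a signature that lets
# the receiver check who sent a message and whether it was altered in transit.
p, q = 61, 53
n = p * q          # 3233, the public modulus
e = 17             # public exponent
d = 2753           # private exponent (e*d = 46801 ≡ 1 mod 3120)

def digest(msg: bytes) -> int:
    return sum(msg) % n            # toy stand-in for a real hash function

def sign(msg: bytes) -> int:
    return pow(digest(msg), d, n)  # only the private-key holder can do this

def verify(msg: bytes, sig: int) -> bool:
    return pow(sig, e, n) == digest(msg)

order = b"ship to: 123 Main St"
sig = sign(order)
assert verify(order, sig)          # genuine order checks out

# An intercepted order with a rewritten shipping address fails verification.
assert not verify(b"ship to: attacker's address", sig)
```

With a signature bound to the message contents, the vendor can detect the altered shipping address at the moment of verification, rather than well after the fact.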
When looking at information stored in the cloud, transferred between smart devices—the basis for the “Internet of Things”—and managed by businesses, there is no way to know that data has not been changed, says Mike Gault, CEO of Guardtime. His Estonia–based cybersecurity firm aims to replace RSA’s signature algorithm with one that uses a different type of encryption as well as a public ledger—a so-called blockchain—that records all transactions.
Blockchains have gained notice of late for their role in securing transactions involving cryptocurrencies such as bitcoin. These digital public ledger systems record information—including time stamps and other data tags—for all transactions that have been deciphered and validated. Once a transaction is entered into the blockchain ledger, it cannot be deleted or changed. Blockchains would enable a vendor to verify that you were the person who sent an order or that a second alteration of an original communication was made, raising suspicion. They are also appealing from a security and privacy standpoint because they rely on information stored across a decentralized network of computers. There is no central repository for cyber attackers to target.
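The tamper-evidence property of such a ledger can be sketched in a few lines: each block stores the hash of its predecessor, so rewriting any past entry invalidates every later link. This is a minimal illustration, not Guardtime's BLT protocol:

```python
import hashlib
import json

# Minimal hash-chain sketch of a blockchain-style ledger.
def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append(chain: list, record: str) -> None:
    # Each new block commits to the hash of the previous block.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"record": record, "prev": prev})

def valid(chain: list) -> bool:
    # The chain is valid only if every link still matches.
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

ledger: list = []
for entry in ["order #1", "order #2", "order #3"]:
    append(ledger, entry)
assert valid(ledger)

ledger[0]["record"] = "order #1, new shipping address"  # rewrite history
assert not valid(ledger)                                # every later link breaks
```

Real blockchains add consensus across a decentralized network on top of this, which is what removes the single repository an attacker could target.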
Guardtime’s authentication and signature protocol is called BLT, after the company cryptographers—Ahto Buldas, Risto Laanoja and Ahto Truu—who invented it. The company claims that, unlike RSA, its cryptographic scheme “cannot be efficiently broken” even if an attacker uses quantum-computing algorithms.
Replacing a venerable technology such as RSA is no easy task, so Guardtime has partnered with Swedish wireless-network equipment maker Ericsson, whose new cybersecurity offerings are based on BLT. Estonia has served as a test bed for Guardtime’s technology over the past few years. The Baltic nation relies heavily on the Internet for banking and other crucial day-to-day functions and is loath to see a repeat of the crippling cyber attack that paralyzed the country in 2007.

Thursday, May 21, 2015

Security Think Tank: Lessons to be learned from Sony breach



While there is still some debate around how the attack on Sony was facilitated, what we do know is that an attack this successful and of this magnitude will have required significant preparation and planning.
It would appear that one of three things has transpired – either it was facilitated by the acts of a malicious insider or ex-insider; it was a non-malicious insider or human error; or it was successful because of poorly configured, patched and locked-down networks.
I cannot think of a successful hack of this nature that did not rely on some kind of failing, and although it is possible the attack was the work of a super hacker so bright that no vulnerability needed to be exploited, it seems unlikely.
Whether you think the Guardians of Peace, North Korea or a bunch of hacked-off hackers are responsible, it was still a hugely successful hack and one which only came to light after the hackers themselves – or those who knew about it – chose to announce it.
We have seen that before, so it is a far from isolated turn of events. Look at retail giant Target – it was the authorities knocking on the door to tell them they had been breached, not the other way around. So there are more questions than answers, which seems to be pretty much the way it goes with these mega-breaches.
So given we do not know what truly happened, is it right to speculate about what Sony might or might not have done wrong? Or indeed what facilitated the attack? Well possibly not, however we can question what happened once the breach had occurred and the hackers were on the inside.
This was a sustained attack involving repeated visits, and Sony was not aware of it until it was pointed out – and that is worth discussing.

Attack saw loss of highly sensitive information

This was a wholesale scouring of the Sony digital estate and resulted in some highly sensitive and personal information being removed or destroyed, not to mention the intellectual property theft. This attack saw the loss of personal medical information of employees, as well as other highly personal material.
It really is not shocking that many employees who have had details of their salaries, medical histories and human resources records stolen have decided to sue. They had a right to believe their employer would keep their personal information safe, segregated and protected. The information taken was perfect for ID theft and could therefore spawn a thousand further frauds or cyber crimes.
Once the attackers had found their way in, they took time to build a picture of the network architecture and then returned at a future point to attack specific servers – stealing information and then deleting the original files with sophisticated malware.
It is estimated that around 100TB of data was destroyed in total. Depending on the effectiveness of the Sony backup regime, the malware trashing of the Sony servers could have left information permanently deleted – forensic recovery may not be possible.
A bit like burglars breaking into your home then coming back and wiping down all the surfaces with bleach, no trace would remain. Sony may never know the full extent of what has been deleted.
There does not appear to have been effective segregation of data and this seems to be across the corporation as the hackers were able to easily move between areas, taking whatever they picked. If they were able to access all areas, was this failure to segregate also an internal failure? Were insiders allowed relative ease of access across areas that were not appropriate?
The lack of segregation of data is very poor security hygiene and given the details released by the hackers of usernames and passwords, this was not the only neglected area of security hygiene at Sony. 
Not only were individuals’ passwords revealed, but also administrators’. Some of the passwords were woeful to say the least and revealed a terribly low level of security awareness, with no enforced secure password regimen or parameters.
It seems astonishing to many security practitioners that given Sony’s history, they would allow such lax internal security posture. I reiterate, we do not know how the attackers actually breached Sony, but once inside, Sony certainly made it easy for them to move around and take what they wanted with impunity.
So any advice on how a business could prevent a Sony-style breach happening to them currently comes down to standard perimeter security that is appropriately and securely procured, installed, patched and maintained.
Organisations such as CESG have been talking about the same threats to information for years and the top ones rarely seem to change. Year after year we see similar breaches enabled by similar vulnerabilities being exploited.

Steps to prevent a cyber attack

Without knowing the full story of the initial breach it is difficult to advise what any other organisation should do to protect themselves from a similar type of attack. However, it is clear that certain steps could be taken to prevent the level of loss that Sony has experienced, including the following:
  • Installed, configured and proactively managed perimeter security controls which provide defence in depth.
  • Properly segregated network infrastructure with effectively hardened and locked-down components ensuring non-essential services are not left running.
  • Identification of particularly sensitive information assets enabling them to be afforded additional protection. The use of effective and properly trained information asset owners is invaluable here.
  • Have a fully documented and effective patching and configuration management regime in place.
  • Enforce an appropriate access management and user identification process ensuring quality passwords are selected – and not then left in plain text on the server – and regularly changed.
  • Implement a high-quality protective monitoring system and make sure you have the ability to react to the alerts it generates.
  • Incident management plans should be fully documented and regularly tested ensuring all staff involved in incident responses know exactly what they should be doing.
  • Carry out regular internal and external IT health checks/penetration testing. This should also be an activity that is intrinsically linked to good change and configuration management processes to ensure testing also occurs following any significant changes to the infrastructure.
  • Good recruitment processes utilising appropriate vetting and background checks, supported by ongoing pastoral care to minimise the risk posed by the insider threat.
  • High quality employee (including C-suite) security awareness training that is role-related, appropriately positioned and regularly updated to reflect the current threat landscape.
  • A well developed and regularly tested business continuity plan should be implemented and effectively communicated to all key personnel.
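The password-quality point in the list above can be made concrete with a minimal sketch. The length, character-class and blocklist rules below are illustrative assumptions, not any real corporate policy:

```python
import re

# Hypothetical minimum password parameters -- illustrative assumptions only.
MIN_LENGTH = 12
BLOCKLIST = {"password", "sony123", "letmein"}  # obviously weak choices

def password_acceptable(pw: str) -> bool:
    """Return True only if pw meets every basic quality parameter."""
    if len(pw) < MIN_LENGTH:
        return False
    if pw.lower() in BLOCKLIST:
        return False
    # Require at least three of four character classes.
    classes = [
        bool(re.search(r"[a-z]", pw)),
        bool(re.search(r"[A-Z]", pw)),
        bool(re.search(r"[0-9]", pw)),
        bool(re.search(r"[^a-zA-Z0-9]", pw)),
    ]
    return sum(classes) >= 3

print(password_acceptable("password"))         # False: blocklisted and short
print(password_acceptable("Tr4ck-Blue-Kite"))  # True: long, mixed classes
```

Enforcing even rules this simple at account creation would have ruled out many of the passwords the hackers later published.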

Mike Gillespie is director of cyber research and security at The Security Institute.

Wednesday, May 20, 2015

Tech giants don’t want Obama to give police access to encrypted phone data


FBI Director James B. Comey has expressed concern that the growing use of encrypted technologies is hindering the ability of law enforcement agencies to do their jobs. (Andrew Harnik/AP)
Tech behemoths including Apple and Google and leading cryptologists are urging President Obama to reject any government proposal that alters the security of smartphones and other communications devices so that law enforcement can view decrypted data.
In a letter to be sent Tuesday and obtained by The Washington Post, a coalition of tech firms, security experts and others appeal to the White House to protect privacy rights as it considers how to address law enforcement’s need to access data that is increasingly encrypted.
“Strong encryption is the cornerstone of the modern information economy’s security,” said the letter, signed by more than 140 tech companies, prominent technologists and civil society groups.
The letter comes as senior law enforcement officials warn about the threat to public safety from a loss of access to data and communications. Apple and Google last year announced they were offering forms of smartphone encryption so secure that even law enforcement agencies could not gain access — even with a warrant.
“There’s no doubt that all of us should care passionately about privacy, but we should also care passionately about protecting innocent people,” FBI Director James B. Comey said at a recent roundtable with reporters.
Encryption techniques and the access they give
Last fall, after the announcements by Apple and Google, Comey said he could not understand why companies would “market something expressly to allow people to place themselves beyond the law.”
FBI and Justice Department officials say they support the use of encryption but want a way for officials to get the lawful access they need.
Many technologists say there is no way to do so without building a separate key to unlock the data — often called a “backdoor,” which they say amounts to a vulnerability that can be exploited by hackers and foreign governments.
The letter is signed by three of the five members of a presidential review group appointed by Obama in 2013 to assess technology policies in the wake of leaks by former intelligence contractor Edward Snowden. The signatories urge Obama to follow the group’s unanimous recommendation that the government should “fully support and not undermine efforts to create encryption standards” and not “in any way subvert, undermine, weaken or make vulnerable” commercial software.
Richard A. Clarke, former cybersecurity adviser to President George W. Bush and one of three review group members to sign the letter, noted that a similar effort by the government in the 1990s to require phone companies to build a backdoor for encrypted voice calls was rebuffed. “If they couldn’t pull it off at the end of the Cold War, they sure as hell aren’t going to pull it off now,” he said.
Comey, he said, “is the best FBI director I’ve ever seen,” but “he’s wrong on this [issue].”
Congress, too, is unlikely to pass legislation that would require technology companies to develop keys or other modes of access to their products and services in the post-Snowden era.
Lawmakers on both sides of the aisle have expressed skepticism toward the pleas of law enforcement agencies. Rep. Ted Lieu, a California Democrat with a computer science degree, called backdoors in software “technologically stupid.”
Ronald L. Rivest, an inventor of the RSA encryption algorithm (his name is the “R” in “RSA”), said standards can be weakened to allow law enforcement officials access to encrypted data. “But,” he said, “you’ve done great damage to our security infrastructure if you do that.”
The issue is not simply national, said Rivest, a computer science professor at MIT who signed the letter. “Once you make exceptions for U.S. law enforcement, you’re also making exceptions for the British, the French, the Israelis and the Chinese, and eventually it’ll be the North Koreans.”
The signatories include policy experts who normally side with national-security hawks. Paul Rosenzweig, a former Bush administration senior policy official at the Department of Homeland Security, said: “If I actually thought there was a way to build a U.S.-government-only backdoor, then I might be persuaded. But that’s just not reality.”
Rosenzweig said that “there are other capabilities” that law enforcement can deploy. They will be “less satisfying,” he said, but “they will make do.”
Privacy activist Kevin Bankston organized the letter to maintain pressure on the White House. “Since last fall, the president has been letting his top law enforcement officials criticize companies for making their devices more secure and letting them suggest that Congress should pass pro-backdoor legislation,” said Bankston, policy director of the New America Foundation’s Open Technology Institute.
“It’s time for Obama to put an end to these dangerous suggestions that we should deliberately weaken the cybersecurity of Americans’ products and services,” he said. “It’s time for America to lead the world toward a more secure future rather than a digital ecosystem riddled with vulnerabilities of our own making.”

Tuesday, May 19, 2015

Why We Can't Afford To Give Up On Cybersecurity Defense

10:15 AM
Jeff Williams
There is no quick fix, but organizations can massively reduce the complexity of building secure applications by empowering developers with four basic practices.
Cybersecurity is in the news all the time these days. The leading cause of these breaches is, unsurprisingly, insecure software. As Yahoo’s CISO Alex Stamos put it, “application security is eating security.” Are you surprised to know that a mere 4% of security spending is allocated to improving in this area?
There are those who argue we should forget about cyberdefense and put all our effort into attack detection, or so-called “attack back” strategies. Nonsense. Anyone who has played even a few minutes of Plants vs. Zombies knows that you have to have a balanced approach. If your barn doors are open, your first priority is to put basic defenses in place.
Why can’t we build secure software? A better question might be why aren’t we spending all our resources getting better at writing secure code? The answer is that it’s not as easy as it seems. Many executives don’t fully understand the massive complexity of our critical software infrastructure and tend to assign blame to individuals rather than accepting that their culture doesn’t encourage security. So, many organizations go for a quick fix instead of doing the work to nurture security thinking in their culture.
Go After Attackers?
The knee-jerk reaction is to focus on the attacker. Every CEO has a press conference the day after their organization gets exploited and blames the attack on an advanced persistent threat. This PR maneuver is intended to assure the public that the organization’s defenses are sound and only a well-funded state-sponsored attack team could have exploited them successfully. Unfortunately, it’s much more likely that it was a relatively unskilled, lone attacker who will never be identified.
Sony’s breach demonstrated the inherent difficulty of knowing who the cyberattackers actually are. Was it Russia? Was it China? This challenge is known as the “attribution problem,” and we are nowhere close to solving it. So while cracking down on cyberattackers and even “hacking back” sound appealing in “get tough” political speeches, nothing will happen until we make progress on attribution.
Better Intrusion Detection?
Another popular way to avoid the work of actually securing things is to say we just need more effective intrusion detection and prevention. The argument is if we stop attacks from getting in, then it doesn’t matter if our code is riddled with vulnerabilities.
The problem is that detecting all but the simplest attacks requires knowledge of where the vulnerabilities are. For example, consider a URL carrying a query-string parameter such as “tgt”: an opaque string of characters is all that your IDS/IPS technology will see on the wire.
Is it an attack? Well it really depends on the application that’s being protected. That “tgt” parameter could reference an unauthorized account, or cause the application to crash, or delete all users, or kill the database connection pool, denying everyone access to the website. No IDS/IPS could possibly identify any of these as an attack because these tools lack sufficient context.
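To make the context problem concrete, here is a minimal Python sketch assuming a hypothetical application where a "tgt" parameter names an account ID; the authorization set is invented for illustration. The application can make a judgment that a network IDS/IPS, seeing only the raw URL, cannot:

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical application state a network IDS never sees:
# which account IDs the logged-in user is actually authorized to view.
AUTHORIZED_ACCOUNTS = {"1001", "1002"}

def is_suspicious(url: str) -> bool:
    """Application-side check: flag a 'tgt' value outside the user's own
    accounts. On the wire, both requests below look equally harmless."""
    qs = parse_qs(urlparse(url).query)
    tgt = qs.get("tgt", [""])[0]
    return tgt not in AUTHORIZED_ACCOUNTS

print(is_suspicious("https://example.com/view?tgt=1001"))  # False: own account
print(is_suspicious("https://example.com/view?tgt=9999"))  # True: someone else's
```

The two URLs are byte-for-byte identical in structure; only knowledge of the application's authorization state distinguishes them, which is exactly the context an IDS/IPS lacks.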
Blame the Developers?
Another response is to blame developers or accuse them of not caring about security. But it’s not true. Aspect Security has taught over 20,000 developers about security, and the vast majority were interested, even enthusiastic, about learning how to do it right. Nevertheless, developers are busy, and expecting them to also become application security experts isn’t reasonable.
So, instead of blaming attackers and developers, let’s focus on a few things organizations can do to enable developers to build secure software. It’s not a shortcut or a quick fix, but we can massively reduce the complexity of building secure applications with four simple practices:
Best Practice #1: Use Standard Defenses. One way to help simplify the problem is to provide developers with standard application defenses. Many organizations have libraries, frameworks, and products that defend against one threat or another. Encryption and logging libraries are very common. Validation and encoding libraries are also fairly popular. Authentication and access control are more often provided by an external gateway.
Would it surprise you to know that many organizations still haven’t standardized their security defenses? When every application has its own custom defenses, it virtually guarantees security vulnerabilities. Building strong security defenses is difficult, and requires more stringent testing than ordinary code. Many people say, “Don’t write custom encryption.” This mandate must be extended to ALL security defenses.
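As a small illustration of why the mandate extends beyond encryption, compare a contrived homemade output-encoding "defense" with the standard library's; the payload is an invented example:

```python
import html

def naive_encode(s: str) -> str:
    # A typical homemade "defense": handles < and > but forgets quotes,
    # leaving attribute injection wide open.
    return s.replace("<", "&lt;").replace(">", "&gt;")

payload = '" onmouseover="alert(1)'

# The homemade version passes the quotes through untouched...
print(naive_encode(payload))
# ...while the standard library defense encodes them as well.
print(html.escape(payload, quote=True))  # &quot; onmouseover=&quot;alert(1)
```

The homemade version looks plausible and passes casual testing, which is precisely why custom defenses need the stringent, direct scrutiny described in the next practice.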
Best Practice #2: Continuously Verify that Defenses Are “Correct.” Whether you write your own standard defenses, use open source implementations, or purchase products, these defenses need to be continuously verified for correctness. That means that they are properly implemented, and can’t be bypassed or tampered with. They should also be easy to understand and use properly.
Many organizations perform security testing on their applications, but don’t think to test their standard security defenses. The defenses get partially tested as part of each application, but are never subject to direct scrutiny. Given the critical nature of this code, it deserves rigorous testing.
Best Practice #3: Verify Applications Are Using Defenses Properly.  Simply having a bunch of strong defenses isn’t enough. Developers need to use these defenses properly and in all the right places. This means establishing a verification program that continuously ensures this is happening. Most application security programs try to solve the very hard problem of proving that there are no vulnerabilities in the software being produced. Verifying that the right defenses are in place and being used is easier and provides more assurance. This depends on having a great threat model so you know what defenses are supposed to be in place.
Best Practice #4: Provide Training and Support for Defenses. Lastly, you can nurture and encourage your security culture by providing training and support for your standard defenses. Training that specifically talks about your standard defenses connects security back to every developer’s job. Developers won’t spend time reinventing input validation for the 900th time, and can instead focus on their business requirements.
How Mature Are Your Standard Security Defenses?
Try filling out this chart for your organization. In the first column, list the exact components or modules that provide each security defense function (a few examples of common components are provided). Give yourself a check in the second column if you have evidence that each specific defense has been tested for security, in the third if you routinely check applications for use of the standard defense, and in the last if you provide training and support for the defense.

If you’re not providing your developers with a complete set of defenses, it’s likely that they will be forced to create their own and will make the same mistakes that others have made before them. It’s simply unreasonable to expect your developers to create secure code without giving them the building blocks they need to succeed. Investing in a strong set of defenses will make your development teams more agile and your enterprise more secure.

A pioneer in application security, Jeff Williams has more than 20 years of experience in software development and security. Jeff co-founded and is the CTO of Aspect Security, an application security consulting firm that provides verification, programmatic and training ... View Full Bio

Sunday, May 17, 2015

How technology solutions can help asset managers with regulation and compliance issues


by Peter Raffelsberger, IT Architect, Pioneer Investments Austria
Fast-growing demand for detailed, high-quality portfolio data
Demand for very detailed, high-quality portfolio data is growing continuously, mostly due to new regulatory reporting requirements for both asset managers and their clients. Typically, the consolidation of data from various sources is required as well.
As a consequence, asset managers end up maintaining dozens or even hundreds of different interfaces, a horrible situation for any IT system or process. When using CSV interfaces, taking care of the different date, number and line formats is crucial, and maintenance and time-consuming data quality processes constantly eat into revenues.
The Alternative: FundsXML
An alternative to the growing number of interfaces is the use of an open XML format such as FundsXML[1]. Asset management companies from several European countries have developed this XML format over the last decade. It offers a wide range of data fields in a very flexible way for different purposes:
- static fund data (name and identifiers of fund, management company, strategy, …)
- NAV data (NAV per share, NAV total, units outstanding, …)
- price and dividend data (for performance calculation)
- asset allocation data
- FPP data, recommended by the European Fund and Asset Management Association (EFAMA)
- KIID data
- holdings with optional security and OTC master data (including exposure and underlying information)
- transactions and corporate actions
- earnings data
- fees data
- country-specific data
Depending on requirements, one, some or all parts of the format can be used. Furthermore, adding new sections at a later stage does not require adaptations of existing solutions.
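As a rough illustration of how such a tree-structured document might be consumed, here is a Python sketch. The element names and values are invented for illustration and do not reproduce the official FundsXML schema:

```python
import xml.etree.ElementTree as ET

# A minimal FundsXML-style document. Element names are illustrative
# assumptions; the official schema defines the authoritative structure.
doc = """<Funds>
  <Fund>
    <Name>Example Equity Fund</Name>
    <ISIN>AT0000000000</ISIN>
    <NAV>
      <NAVPerShare currency="EUR">104.37</NAVPerShare>
      <UnitsOutstanding>250000</UnitsOutstanding>
    </NAV>
  </Fund>
</Funds>"""

root = ET.fromstring(doc)
for fund in root.findall("Fund"):
    name = fund.findtext("Name")
    nav = float(fund.findtext("NAV/NAVPerShare"))
    units = int(fund.findtext("NAV/UnitsOutstanding"))
    total = nav * units  # the tree keeps related fields grouped together
    print(f"{name}: total NAV {total:,.2f}")
```

Note how the nested structure keeps the NAV per share and the units outstanding together under one parent, something a flat CSV row cannot express without naming conventions.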
Single XML format with built-in format checking
The use of an XML schema has many advantages compared with a plain CSV format:
- native support for optional fields/sections, tree structures and built-in loops
- unified field format description
- automatic format checking
- built-in multilingual documentation
- clear meaning of each data field
- thousands of tools available

Reduction of costs for development and maintenance
A single-format interface is much easier to develop, test and maintain. All data feeds benefit from enhancements to the interface. Implementing a new feed is primarily a matter of configuration and takes a couple of hours instead of weeks of effort.
Easy cross-system, cross-company and cross-border communication
Using a standard format makes it much easier to replace or upgrade IT systems and to exchange data with other companies at home or abroad. One obvious benefit is the design work already done during the development of the FundsXML format, especially when data is exchanged between different countries.
Simple data quality process
A single data format enables a single set of quality checks. The names of the data fields do not differ from one interface to another, which makes daily life much easier and documentation shorter. One set of quality checks supports a simple data quality process, and using the FundsXML format ensures a minimum level of data quality. Even the quality checks themselves can be shared within the community, as long as they are based on original FundsXML fields.
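For example, a shared quality check might verify the internal consistency of the NAV fields once, for every feed. The field names and tolerance below are illustrative assumptions:

```python
# A shared quality check, expressed once against FundsXML-style fields.
# The 1% tolerance is an illustrative assumption, not a standard value.

def check_nav_consistency(nav_per_share: float, units: float,
                          total_nav: float, tol: float = 0.01) -> bool:
    """Total NAV should equal NAV per share times units outstanding,
    within a small rounding tolerance."""
    return abs(nav_per_share * units - total_nav) <= tol * total_nav

print(check_nav_consistency(104.37, 250_000, 26_092_500.0))  # consistent feed
print(check_nav_consistency(104.37, 250_000, 20_000_000.0))  # inconsistent feed
```

Because every feed uses the same field names, this one function can police all of them, instead of one bespoke check per interface.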
No more discussions about the definition of specific data fields
Sometimes it is hard to pin down the exact definition of a field. For example, what is meant by “NAV”: the NAV per share or the total NAV? The total NAV of the fund or of the share class? What is meant by “Market Value”: does it include accrued interest or not? Is it in fund currency or in local asset currency? With FundsXML, the definition of the respective field in the schema avoids unnecessary confusion and makes practical handling easier.
Vendor independent organization
The FundsXML Standards Committee (FSC) consists of representatives from investment companies and national fund associations from Austria, Denmark, France, Germany, Luxembourg and The Netherlands.  This ensures a broad representation of the industry as well as a complete independence from commercially driven vendors.
Already used for regulatory reporting in Austria and Germany
The FundsXML format is already used for the “Bundesbank-Meldung” in Germany and the “OeNB-Investmentfonds-Statistik” in Austria. In addition, the Austrian Control Bank (OeKB) runs a web-based data hub on which FundsXML documents can be uploaded and downloaded in a fully automated way, including a detailed authorization system.
We are facing more and more regulatory requirements in the investment fund industry. The use of FundsXML can help to improve the efficiency and flexibility of collecting and distributing the fund data needed.

Friday, May 15, 2015

‘Try to reduce the human factor in ICT’


Humans and machines do not always get along. We see this in the security of industrial environments, but also, still, in what is called office automation. The human factor is a formidable aspect of preserving the value of data. Much can be achieved on the technical front, but always in combination with awareness programmes, because something only works once people understand why it is necessary.
To get a picture of the vulnerabilities that lurk in everyday computer use, reading Verizon’s 2015 Data Breach Investigations Report is recommended. The company compiles this report together with 70 organisations in 61 countries, including Computer Security Incident Response Teams, government agencies, service providers and security firms.
Some striking figures for 2014: 79,790 security incidents; 2,122 confirmed cases of data loss; and also 700 articles in The New York Times about computer intrusions, compared with fewer than 125 articles in 2013. According to the report, 2014 was also the year in which ‘cyber’ reached the boardroom.
A shocking figure is that, according to the report, fifty percent of people open a link in a phishing message within an hour. And the pond is being fished ever more intensively: the number of phishing messages keeps growing. Often these are intended not only to gain direct access to data, but to make the PC part of a botnet, so that it can be used later in an orchestrated DDoS attack. But here, too, there is hope. Lance Spitzner, Training Director of the SANS Securing The Human program, believes that awareness campaigns pay off. “You not only reduce the number of victims to less than 5 percent, but at the same time you create a network of human sensors that recognise phishing attacks more effectively than any technology.”


No software that comes to market is flawless. Producers discover errors while their software is already in use worldwide. They distribute fixes, but not everyone applies these so-called patches immediately, giving people who exploit those flaws with bad intentions an open goal.
Another shocking figure from the Verizon report: 99.9 percent of exploited vulnerabilities were exploited more than a year after the corresponding CVE had been published. Common Vulnerabilities and Exposures (CVE) is the public catalogue of known vulnerabilities. In other words: patches are applied rarely, if at all; in any case far too late.
Verizon does note that not every vulnerability is equal, which applies equally to the patches, but that you simply should not take the risk: serious problems (such as Heartbleed, POODLE, Schannel and Sandworm) arose within a month after the patch was released. Verizon’s conclusion: ‘There is a need for all those stinking patches on all your stinking systems’.


Martijn van Tricht
Tackling matters centrally can help here. That applies, for example, to phishing attacks: filter email traffic for such suspicious messages before they reach employees’ mailboxes. A central approach is particularly convenient for system administrators, since it is far easier to place a single mail server under surveillance and inspect it thoroughly than the dozens (if not thousands) of mailboxes within an organisation.
Administrators do gain an extra task, though: conveying the message that not everything is permitted with PCs, tablets, smartphones and the like. IT staff have deep technical knowledge, but often lack the didactic skills to get employees to accept the restrictions cheerfully.

Technology to the rescue

Technology also helps in other areas, for example by centralising data processing and the provisioning of applications. In that case employees work with a so-called thin client: a small box that connects a user’s monitor, mouse and keyboard to a central server on which all applications run.
“The great advantage of a thin client is that the user has the same functionality, but the system runs an embedded operating system,” says Martijn van Tricht, system engineer at IGEL Technology Benelux, a manufacturer of thin clients. “Users cannot install anything themselves on a thin client, so no malware either.”
These boxes are suitable for most applications. Only for heavy 3D design software, for example, do they not work optimally, because the network transport between user and central server takes too much time. That can be mitigated somewhat by tweaking the WAN connection, but such a solution is no panacea. On the other hand, this concerns only a very small share of the application landscape; the vast majority of users can be served perfectly well with a thin client.

Linux preferred

Because of such advantages, thin clients are gaining ground. According to research firm IDC, the use of these devices grows steadily by almost ten percent each year. The share of zero clients (thin clients without an OS) is also growing: in 2014 they accounted for 24.6% of all thin clients, a rise of 13.6% compared with 2013. But the majority of clients in use are thin clients running the Windows Embedded OS: 44.5%.
Anyone who goes all-in on security, however, chooses a zero client or a Linux client, because most malware is written for the Windows platform, which is after all the largest. For attackers it is efficient to write their code for the most widely used operating system.

No USB sticks

Social engineering is still an important way for attackers to establish themselves in a system. Leave a USB stick lying around in a car park with a catchy label like ‘love scene Paris Hilton’ and success is guaranteed: there is always someone who picks it up and plugs it into a PC. Once inside, the virus knows exactly how to behave.
So you have to make sure that nobody can insert a USB stick into the device. Most thin clients block that possibility. Administrators, moreover, can often arrange centrally that certain people may use a USB stick, or that employees may only use USB sticks of a designated brand. The same applies to cameras and other peripherals.
For administrators, thin clients are rightly a blessing; for organisations, a financial advantage, because managing them is so much cheaper than managing PCs and because the security risks are far smaller.


Today, however, employees are increasingly mobile and use a whole range of apps, and the use of tablets within organisations is growing. Can we expect thin clients in this area too? Van Tricht considers that unlikely. “There will be no tablets from the thin client makers on the market,” he expects. “You cannot compete with the iPads, the Samsungs and the other big names.”
He does expect agents or apps that can turn an existing tablet into a mobile thin client. Such a solution already exists for laptops, in the form of thin client software. That way users remain free to work anywhere, anytime, while the security risks decrease and manageability increases.