Wednesday, April 30, 2014

63% of orgs believe they can’t stop data theft

Infosec 2014: Datacentre security key to cloud security, says Google


Warwick Ashford

Wednesday 30 April 2014

The security challenges of the cloud are fundamentally the same as those of any in-house datacentre, says Peter Dickman, engineering manager at Google.
This means securing data in both can be tackled in the same way, he told attendees of Infosecurity Europe 2014 in London.

“It is a question of adding as many layers of controls as possible without impairing usability,” said Dickman, describing the approach Google uses to continually evolve and improve its security.
Although cloud computing is at an unprecedented scale, he said there are really no new security challenges in the cloud.
“Security is still about balancing controls with usability and, while it is not necessarily easy, it is also not impossible to achieve,” said Dickman.
Security professionals know there is no such thing as perfect security, but he said there are many things that can be done to ensure data in the cloud is as secure as possible.
Google, like most other cloud service providers, has had the advantage of building infrastructure with scalability and security in mind from the start.
“We recognised that devices could be compromised, some applications could be malicious and that we could not assume that users were security savvy, so we planned accordingly,” said Dickman.
First, this means that the computers in cloud datacentres are largely homogenous, making it quick and easy for service providers to update application software and security controls whenever needed.

“This homogeneity enables us to treat each datacentre like a single computer, which makes it easier to do security and get it right,” said Dickman.

Google uses a single, custom-built and security-hardened Linux-based software stack for all its servers in a single datacentre.
The servers are designed without unnecessary hardware or software, reducing the number of potential vulnerabilities.
This is important for cloud service providers, he said, as their business relies on preserving the trust placed in them as stewards of data belonging to hundreds of millions of users.
Although cloud computing tends to raise concerns about data security, Dickman said this approach was developed in response to the demand for access to data everywhere.
“People attempted to achieve this by making copies of data on portable media and mobile devices, but that was a security risk, and cloud computing essentially meets the need without the risk,” he said.
The next step, said Dickman, is to ensure physical security at the cloud datacentres, using multiple layers of access control technologies and processes.
“It is also important to build defences against possible malicious insiders, which is why our security teams build systems to check each other,” he said.
Also within the datacentre, Dickman said it is important to follow the principles of isolation, segregation and sandboxing, and to deploy encryption wherever and whenever possible.
“Encryption is no panacea, but it is worth the cost and Google is continually working to ensure our encryption algorithms are as fast and as secure as possible,” he said.
Unfortunately, many organisations still fail to keep things separate, said Dickman. “This is not rocket science, just tricky engineering,” he said.
Availability is another important component of security, he said, and because cloud service providers take security seriously, they tend to build their datacentres to be fault tolerant.
“We test our fault tolerance by turning things off, which should work if systems have been designed and implemented correctly,” said Dickman.
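Dickman's "turning things off" test is essentially what is now called chaos testing. As a rough illustration only, a minimal version might look like the sketch below; the replica names, the container-based shutdown and the health endpoint are all assumptions for the example, not Google's actual tooling.

```python
# Hypothetical sketch of "testing fault tolerance by turning things off":
# stop one randomly chosen replica, then verify the service still answers.
import random
import subprocess
import urllib.request

REPLICAS = ["app-1", "app-2", "app-3"]        # assumed replica containers
HEALTH_URL = "http://localhost:8080/healthz"  # assumed health endpoint

def stop_replica(name: str) -> None:
    # Stand-in for powering off a server: stop a container by name.
    subprocess.run(["docker", "stop", name], check=True)

def service_healthy() -> bool:
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

victim = random.choice(REPLICAS)
stop_replica(victim)
print(f"stopped {victim}; service still healthy: {service_healthy()}")
```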
Google has robust disaster recovery measures in place due to its ability to shift data access to other datacentres in various parts of the world, selected for their relatively high political stability.
Google does not store each user's data on a single machine or set of machines. Instead, the company distributes all data, including its own, across many computers in different locations.
The data is then split into chunks and replicated across multiple systems to avoid a single point of failure, and the chunks are given random names that are readable only by computers, as an extra measure of security.
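A minimal sketch of that chunk-and-replicate idea follows; the chunk size, the location list and the replication factor are illustrative assumptions, not Google's actual parameters.

```python
# Toy version of chunk-and-replicate: split a blob into fixed-size chunks,
# give each chunk a random opaque name, and place copies in several
# locations so no single machine holds the whole file or a meaningful name.
import random
import secrets

CHUNK_SIZE = 8                             # bytes; real chunks are far larger
LOCATIONS = ["dc-eu", "dc-us", "dc-asia"]  # assumed datacentre sites
REPLICAS = 2

def chunk_and_replicate(blob: bytes) -> dict:
    placement = {}  # opaque chunk name -> (data, sites)
    for i in range(0, len(blob), CHUNK_SIZE):
        name = secrets.token_hex(8)  # random, machine-readable-only name
        placement[name] = (blob[i:i + CHUNK_SIZE],
                           random.sample(LOCATIONS, REPLICAS))
    return placement

for name, (chunk, sites) in chunk_and_replicate(b"example user mailbox data").items():
    print(name, chunk, sites)
```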
Google also rigorously tracks the location and status of each hard disk in its datacentres, and it destroys hard disks that have reached the end of their lives in a thorough, multi-step process.
“No one knows yet how to build perfect security, but Google is continually working to make it better,” said Dickman.
All companies are faced with the security challenge of finding the correct balance between what is needed and what can be afforded, he said.
But Google, like most other cloud service providers, argues that because of the economies of scale, it is able to build and maintain security to a higher level than most companies could achieve on-premise.


http://www.computerweekly.com/news/2240219821/Infosec-2014-Datacentre-security-key-to-cloud-security-says-Google

Sunday, April 27, 2014

Password Requirements Quick Guide


Password Requirements Quick Guide
Question: Which characters are required in my password?
Answer: That depends on how long it is. The shorter it is, the more restrictions there are. Here are the specific requirements:
Number of characters: Requirements
8-11: mixed case letters, numbers, and symbols
12-15: mixed case letters and numbers
16-19: mixed case letters
20 or more: any characters you like!
Stanford recommends a password 16 or more characters long.
Longer passwords are inherently more secure because it takes hackers longer to guess them when employing a brute force method. So make your password 16 characters or longer!
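To put rough numbers on that: an 8-character password drawn from all ~94 printable ASCII characters allows about 94^8, or roughly 6 × 10^15, combinations, while a 16-character password of mixed-case letters alone allows 52^16, roughly 3 × 10^27 combinations: hundreds of billions of times more work for a brute-force attacker.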
Because they only require upper and lower case letters, passwords that are 16 characters or longer are much easier to type on a mobile device.
You may be thinking “How on earth can I come up with a password that long?!” It’s easy! Just select 4 random words. For example: orange eagle key shoe. That’s 21 characters including the spaces.
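For illustration, here is a minimal passphrase generator along those lines; the ten-word list is just a stand-in for a real dictionary file, and the words should be chosen randomly rather than picked by hand.

```python
# Generate a passphrase of 4 random words, as suggested above.
import secrets

WORDS = ["orange", "eagle", "key", "shoe", "river", "cloud",
         "pencil", "harbor", "violet", "tiger"]  # illustrative word list

passphrase = " ".join(secrets.choice(WORDS) for _ in range(4))
print(passphrase)  # e.g. "orange eagle key shoe" (21 characters with spaces)
```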
Now go forth and create your own awesome passwords and keep your account secure!


Wednesday, April 23, 2014

A guide to cloud encryption and tokenization

A guide to cloud encryption and tokenization

by Bob West - Chief Trust Officer, CipherCloud - Tuesday, 22 April 2014.
Cloud adoption shows every sign of continuing to grow. The sharing of resources helps businesses achieve savings and agility based on economies of scale but there’s a problem: cloud computing can also be an attractive target for cyber thieves.

Businesses using the cloud are now increasingly looking to security experts for help on how to protect their data against unwanted intrusion. With Edward Snowden’s continuing revelations on government spying, and a string of headline-grabbing incidents like the recent Heartbleed security vulnerability, many are calling 2014 the year of encryption.

In order to achieve the best cloud information protection strategy, enterprises must understand what information they use to run their business and what sensitive data needs protection in the cloud. Businesses migrating to the cloud are being advised to lock down any sensitive data before it leaves their premises, which is why more companies are deploying encryption.

To encrypt or not to encrypt

U.S. cloud providers like Google and Microsoft have been upgrading their server encryption levels. This reinforces the relevance of encrypting sensitive data in the cloud for security and privacy compliance worldwide.

Another factor to consider is that only a small percentage of a company’s data needs this technology. A pragmatic approach is to encrypt the sensitive data, such as personally identifiable information or research and development materials that could damage the company’s or its customers’ reputations in the event of a breach. Nor does all data need to be encrypted in the same way. For functionality’s sake, information such as credit card numbers may need their formats preserved in ways that address information does not.

But is encryption enough to protect private data? To answer that question, it’s vital to understand the encryption methods and know how they work together to keep data protected against unwanted intrusion.

Symmetric and asymmetric encryption

Most secure online transactions rely on asymmetric encryption to encrypt the tunnels through which data moves between clients and servers. This is what online banking and shopping sites use to secure the credit card details entered on a transaction page. It relies on a pair of keys: a public one, used to encrypt the data, and a private one, used to decrypt it.
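As a rough sketch of the key-pair idea (using the open-source Python cryptography package, an assumption for the example rather than anything the author prescribes): anyone holding the public key can encrypt, but only the private key holder can decrypt.

```python
# Asymmetric (RSA-OAEP) encryption: public key encrypts, private key decrypts.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

ciphertext = public_key.encrypt(b"4111 1111 1111 1111", oaep)  # anyone can do this
plaintext = private_key.decrypt(ciphertext, oaep)              # only the key holder
assert plaintext == b"4111 1111 1111 1111"
```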

Yahoo! joined Google and Microsoft in upgrading the RSA keys that underpin HTTPS, the protocol used to protect these tunnels, from 1024-bit to 2048-bit. This upgrade fortifies the transport layer that protects the network environment.

As a complement, symmetric encryption, which relies on one key, provides data-centric protection and typically encrypts the information before it goes to the cloud. Using the industry standard of AES 256-bit, symmetric encryption scrambles the data and gives the keys to the enterprise. This enables enterprises to tighten control over access to the encrypted information.
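A small sketch of that data-centric, single-key model follows, again using the Python cryptography package as an assumed dependency; whoever holds the one key controls access to the data.

```python
# Symmetric AES-256 (GCM mode) encryption: one key both encrypts and decrypts.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # kept by the enterprise, not the cloud
aesgcm = AESGCM(key)
nonce = os.urandom(12)                     # must be unique per message

ciphertext = aesgcm.encrypt(nonce, b"record to store in the cloud", None)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)
assert plaintext == b"record to store in the cloud"
```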

One of the factors that influences a company’s decision on how to encrypt its data is the privacy regulations it must follow, along with the level of control needed to meet internal security and privacy policies. Faced with a valid legal order to decrypt and surrender internal data to a government, an enterprise must comply. But when the enterprise holds its own keys, that process is at least transparent to it and does not cede decision-making to a third party.

Cloud encryption best practices

As with any technology, there are common pitfalls and best practices to follow when securing data with encryption. The first concern is whether a business is using strong enough encryption, especially in light of recent security issues.

Encryption comes in different strengths, and choosing the right kind for each data field’s needs is vital to a successful cloud information protection strategy. As mentioned earlier, customers’ credit card numbers, for example, require stronger encryption than customer post codes because of their higher sensitivity. Failing to use a strong enough encryption method for protected data can result in compliance violations or data breaches: two costly consequences every enterprise wants to avoid.

Collaboration is generally a good thing, but any encryption method that gives a third party access to the encryption keys leaves the enterprise more vulnerable to a breach. That third party could itself suffer a security breach or fall victim to an insider threat, and should it ever receive a government request for data, customer information could be turned over without the enterprise’s knowledge or consent.

To ensure that only the business alone has the power to unlock data, keep exclusive control of the encryption keys. This way, even if data is leaked or stolen, it will remain illegible to unauthorised viewers. In the event of cloud surveillance, the intruder can’t decrypt the content without the key.

The beauty of encryption is that it can lock down data so that only authorised parties can read or use it. When implementing an encryption strategy, ensure the software retains data formats and uses methods that preserve the data’s searchability, sortability, reportability, and general functionality in the cloud.

Tokenization best practices

Instead of encrypting data, tokenization replaces the data itself with a placeholder. The data is securely stored within an enterprise’s perimeter, and only the token is transmitted. Like encryption, it plays a vital role in a company’s compliance strategy and reduces cloud-related PCI DSS and HIPAA scope by limiting the amount of data that is sent outside the data centre.
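A toy illustration of the idea follows; the "vault" here is an in-memory dictionary standing in for a real on-premises token store, and real products add access control, persistence and often format preservation.

```python
# Tokenization: the real value stays in an on-premises vault; only a random
# placeholder (the token) ever leaves the perimeter.
import secrets

vault = {}  # token -> real value, kept inside the enterprise perimeter

def tokenize(value: str) -> str:
    token = "tok_" + secrets.token_hex(8)
    vault[token] = value
    return token              # safe to send to the cloud

def detokenize(token: str) -> str:
    return vault[token]       # only possible on-premises

t = tokenize("4111 1111 1111 1111")
print(t)                      # e.g. tok_9f2c4e... goes to the cloud app
print(detokenize(t))          # the card number never left the data centre
```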

However, tokenization has its pitfalls, and enterprises should consider solutions that address them. The first issue is similar to letting a cloud service provider encrypt data on the enterprise’s behalf: allowing a third party to handle tokenization off-premises means handing over sensitive data and trusting them to secure it in their own data centres. If tokenization is part of an enterprise’s cloud information protection strategy, do it on premises to retain more control over the data.

Is there such a thing as tokenizing too much or not enough? Tokenization requires enterprises to store their data separately in a data centre, so overuse can result in excessive consumption of that storage resource. With this in mind, only tokenise what is needed.

A word on compliance

Before committing to the cloud, businesses need to understand exactly what cloud information protection measures must be taken to remain in regulatory compliance. Here are a few:

In the UK, the Information Commissioner's Office (ICO) can impose financial penalties of up to £500,000 for companies that breach the Data Protection Act. Its guidance clearly puts the onus on the companies owning the data.

The EU has enacted both the Data Protection Directive of 1995 (95/46/EC) and the ePrivacy Directive of 2002 (2002/58/EC), under which businesses are required to notify data owners if their personal data is being collected, secure data from potential abuses, and only share data with the subject’s consent.

The PCI DSS is a global information security standard every company must consider if they are to protect their credit card and customer account data from unauthorised access and misuse.

What now?
Adopting a well-thought-out cloud information protection strategy will give the enterprise full control when it comes to securing its sensitive data. Encryption and tokenization are vital to enabling regulatory compliance and the security and privacy of sensitive data. Used correctly, they can help enterprises stay safe in the cloud and conduct business without interruption.

Monday, April 21, 2014

Security experts at Mandiant uncovered attackers exploiting the Heartbleed vulnerability to circumvent Multi-factor Authentication on VPNs.

We have read practically everything about the Heartbleed bug, which affects the OpenSSL library; we have seen its effects on servers, on mobile devices and on Tor anonymity. Now let’s focus on the possibility of exploiting it to hijack VPN sessions.
Cyber criminals are trying to exploit the Heartbleed OpenSSL bug against organisations to spy on virtual private network connections and hijack multiple active web sessions.
Security experts at Mandiant discovered attackers exploiting the Heartbleed vulnerability to circumvent multi-factor authentication on VPNs. The investigators found evidence of the attack by analyzing IDS signatures and VPN logs.
Since a single Heartbleed request gives the attacker access to only a limited portion of memory (64KB per heartbeat request), fetching useful data requires sending a huge number of requests. This stream of requests was identified by the IDS once a signature written specifically for Heartbleed was in place.
During the intrusion observed by Mandiant, the IDS detected more than 17,000 requests matching the pattern written for Heartbleed.
Mandiant confirmed that an unnamed organization suffered a targeted attack that exploited the Heartbleed bug in OpenSSL, running in the client’s SSL VPN concentrator, to remotely access the organization’s internal network.
“This post focuses on a Mandiant investigation where a targeted threat actor leveraged the Heartbleed vulnerability in a SSL VPN concentrator to remotely access our client’s environment and steps to identify retroactively if this occurred to your organization.” reported the Mandiant official post.
By repeatedly sending malformed heartbeat requests to the HTTPS web server running on the VPN device, the attacker was able to obtain active session tokens for currently authenticated users. With an active session token, the attacker successfully hijacked multiple active user sessions and convinced the VPN concentrator that he/she was legitimately authenticated.
“The attack bypassed both the organization’s multifactor authentication and the VPN client software used to validate that systems connecting to the VPN were owned by the organization and running specific security software.” wrote Mandiant experts Christopher Glyer and Chris DiGiamo. 
The following evidence proved the attacker had stolen legitimate user session tokens (a toy log-correlation sketch of the “flip flopping” check follows the list):
  1. A malicious IP address triggered thousands of IDS alerts for the Heartbleed vulnerability destined for the victim organization’s SSL VPN.
  2. The VPN logs showed active VPN connections of multiple users rapidly changing back and forth, “flip flopping”, between the malicious IP address and the user’s original IP address.  In several cases the “flip flopping” activity lasted for multiple hours.
  3. The timestamps associated with the IP address changes were often within one to two seconds of each other.
  4. The legitimate IP addresses accessing the VPN were geographically distant from malicious IP address and belonged to different service providers.
  5. The timestamps for the VPN log anomalies could be correlated with the IDS alerts associated with the Heartbleed bug.
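For illustration, a minimal version of that "flip flopping" check over VPN logs might look like the following; the log format and the two-second threshold are assumptions for the example, not Mandiant's actual tooling.

```python
# Flag users whose VPN session source IP alternates between addresses
# within seconds (evidence items 2 and 3 above).
from collections import defaultdict
from datetime import datetime

# (timestamp, user, source_ip) -- assumed pre-parsed VPN log entries
log = [
    ("2014-04-08 10:00:01", "alice", "198.51.100.7"),
    ("2014-04-08 10:00:02", "alice", "203.0.113.99"),  # suspected attacker IP
    ("2014-04-08 10:00:03", "alice", "198.51.100.7"),
]

last_seen = {}             # user -> (timestamp, ip) of previous log entry
flips = defaultdict(int)   # user -> count of rapid IP changes

for ts_str, user, ip in log:
    ts = datetime.strptime(ts_str, "%Y-%m-%d %H:%M:%S")
    if user in last_seen:
        prev_ts, prev_ip = last_seen[user]
        if ip != prev_ip and (ts - prev_ts).total_seconds() <= 2:
            flips[user] += 1
    last_seen[user] = (ts, ip)

for user, n in flips.items():
    if n >= 2:
        print(f"possible hijacked session: {user} flip-flopped {n} times")
```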
Once they had gained access to the targeted organization’s internal network, the attackers attempted to move laterally and escalate their privileges.
Attacks like the one uncovered by Mandiant will increase in the coming weeks; organisations need to immediately identify and upgrade every component that makes use of the flawed library.
Pierluigi Paganini
(Security Affairs –  VPN, Mandiant)

Saturday, April 19, 2014

IT doesn't get Heartbleed

Opinion - The way IT departments have responded to the Heartbleed hole is almost as worrying as the hole itself.
Patching OpenSSL, installing new certificates and changing passwords is all fine, but the critical follow-up is missing. We should be thinking harder about how we handle the security of mission-critical software.
Viewed from the outside, Heartbleed is a godsend for IT: it is a major wake-up call about fundamental problems in how internet security is handled. If everyone wakes up now, security will improve considerably. But if this vulnerability is simply patched and otherwise ignored, we are doomed. (I have thought for years that we are all done for, but now I have more evidence in hand.)
Let's go back to how Heartbleed came about. It was apparently created by accident two years ago by the German software developer Robin Seggelmann. In an interview with the Australian Sydney Morning Herald, Seggelmann said: "I was working on improving OpenSSL, implemented several bug fixes and added some new features. For one of these new features I unfortunately missed the validation of a variable."
After Seggelmann submitted the code, the reviewer apparently failed to notice the missing validation, and the code made its way from development into the production version. Seggelmann says the error itself is "quite trivial", while its consequences are anything but: "It was a simple programming error for a new feature. Unfortunately it occurred in a security-relevant area."
What Seggelmann did is perfectly understandable and forgivable. The big problem is that there are not enough safeguards built into the system to prevent simple errors like this. If the checks and balances in the system are so fragile that a typo can undermine every security measure in place, there are fundamental issues that need to be addressed. Let's not forget that when Robert Tappan Morris released his Internet Worm in 1988, the whole internet crashed. That, too, was the result of a miscalculation. He never anticipated that servers would crash, yet they did.
CIO David Schoenberger of security vendor Transcertain argues that the real security weakness at play here is an excess of trust among IT professionals. Personally I think that is debatable, but Schoenberger makes an interesting point.
"This is going to make people rethink what we actually do. So many things are overlooked and taken for granted. In the IT world we have leaned on trust for too long," he says. "Just look at the billion-dollar companies relying on peer-reviewed open source. We no longer take the trouble to verify things for ourselves. Because it usually works and, in our perception, works well, it passes our tests. This time open source failed, but it could just as easily have happened to a commercial product. This could have happened anywhere."


Microsoft has relied on crowdsourcing for years. A software product is launched to millions of people, and Microsoft trusts the community to find the holes. I used to joke that Microsoft puts quality at version 1.1. The crazy thing is that Microsoft's strategy works. But how could Heartbleed haunt the internet for two years without a single security researcher noticing it?




Some are convinced that people and agencies must have known about it. The NSA has been accused of knowing about and abusing the hole, an accusation it has denied. In a statement, the agency said: "The federal government was not aware of the recently disclosed vulnerability in OpenSSL until it was made public in a private-sector cybersecurity report."
There are two parts of the full statement (read it here) where the NSA's credibility can be questioned. First, it is absurd to declare that nobody in the federal government (that is, the CIA, FBI, NSA, the military and 200 other government agencies) knew anything about the hole. How do you know for certain that some military security specialist didn't know? Not every hole that is discovered necessarily gets reported to senior management. Had they said "as far as we know" or "we cannot say with certainty, but...", it would have been plausible. It is like my teenage daughter telling me that nobody at her school drinks or uses drugs.
My second issue with the NSA statement concerns the last line: "Unless there is a clear national security or law enforcement need, we always disclose such vulnerabilities." Nothing in the statement indicates whether such a need existed in this case. It is like my daughter following her earlier claim with: "I always tell the truth about such things, unless it would get my friends in trouble, in which case I would lie."
I have never been a fan of adding extra bureaucracy, but it may be time to establish formal review procedures - preferably multi-layered - in which people actively and transparently hunt for potential holes. Peer review is good, but for something as crucial as internet security we are long past the point where flaws may simply turn up now and then. We need to search for them proactively.


Evan Schuman was previously editor-in-chief of the tech site StorefrontBacktalk and a columnist for CBSNews.com, RetailWeek and eWeek. He now writes columns for sister site Computerworld.com.

Wednesday, April 16, 2014

Reporting under Solvency II still a 'major challenge'

Risk management


Although nearly 80 percent of European insurers believe they will be able to meet the Solvency II requirements by the implementation date, now pushed back to 1 January 2016, the reporting requirements remain an 'enormous challenge'.

That picture emerges from EY's European Solvency II Survey, which covered 170 insurance companies in 20 European countries. Dutch, British and Scandinavian insurers are the best prepared, while French, German, Greek and Central and Eastern European insurers are less confident.

The survey shows that insurers are ready for the implementation of Pillar 1 and meet most of the requirements of Pillar 2, the governance system. Pillar 3, the reporting requirements, remains 'an enormous challenge' according to the researchers.

Less well prepared
According to EY, the postponement of the Solvency II implementation deadline to 2016 has made insurers more confident that they can meet the requirements in time. It has also become clear, however, that some of them are less prepared than they themselves had expected.

According to EY partner Paul de Beus, many companies have simply postponed their plans by at least a year, which could well get them into trouble. While insurers leave no doubt that they are trying to improve the effectiveness of their risk management, there is still work to be done on reporting, data and IT.

Pillars 1 and 2
Insurance companies generally appear well prepared for all elements of Pillar 1, with French, Dutch and Italian insurers on track to meet the requirements almost completely. Greek, Portuguese and Central and Eastern European insurers are less prepared. However, almost 85 percent of respondents say they still see room for improvement in the effectiveness and/or efficiency with which they meet the Pillar 2 requirements.

De Beus: 'Insurers know they need to take a more effective approach to embedding a risk culture in the first line. The top four improvements that insurers say would increase the effectiveness of their risk management all relate to interaction with the first line.'

Risk assessment
'However, these changes also scored high on how difficult they would be to realise. Although the Netherlands is furthest along in implementing the Own Risk & Solvency Assessment (ORSA), DNB's 2013 ORSA exercise still produced a number of findings that insurers can use in preparing the Own Risk Assessment that has been mandatory since this year. DNB's review also shows that still only half of insurers have been able to make their capital policy sufficiently transparent.'

Transitional reporting
De Beus says the extent to which companies are ready for implementation has barely improved since 2012. 'Uncertainty about implementation and repeated postponements of the timetable may explain this, but it is important to give these projects priority in 2014.'

'Given the current status, transitional reporting over 2015 will largely have to be done manually. That is not a desirable situation, given that Dutch insurers will already be required in 2015 to report various quarterly and annual statements to DNB on Solvency II principles and guidelines.'

Data and systems
According to EY, the readiness of data and systems for Pillar 3 still lags behind Pillars 1 and 2. About 25 percent of insurers have selected a system to meet the Pillar 3 requirements. In addition, two thirds of respondents say their data and systems are not designed to run ORSAs outside the normal reporting cycle.

Monday, April 14, 2014

CISOs Respond to Heartbleed Bug

Outline Steps Taken to Mitigate Risks in Wake of Vulnerability

April 14, 2014.




CISOs in all sectors are taking steps to mitigate the risks posed by the OpenSSL vulnerability known as the Heartbleed bug.
Christopher Paidhrin, security administration manager at PeaceHealth, a healthcare provider in the Pacific Northwest, says the entire security community has been "laser focused" on the Heartbleed bug.
"The scope and potential depth of compromise should remind all of us how interdependent we are on trust controls," he says.
Paidhrin says PeaceHealth was not exposed to the vulnerability because it does not use any of the vulnerable platforms. "Still, we checked to be sure. We have a checklist for this vulnerability. We do partner with many others, so we have been cautious to validate the exposure of our peers, partners, vendors and customers," he says.
"PeaceHealth is reaching out to our strategic partners to confirm our shared remediation status. Most of our partners share our concern and have taken steps to address this event."

Three Steps

Elayne Starkey, chief security officer for the State of Delaware, says her department responded in three steps. "Step one was to learn everything we could about it," she says. "Step two was to test our public-facing websites and identify what needed attention."
Step three, Starkey says, "was to alert our customer state agencies and begin the process of applying patches and replacing certificates."
Starkey says some of the state's systems and servers were exposed to the Heartbleed vulnerability, so security specialists are continuing to apply patches and replace certificates.
Organizations should remain vigilant regarding the OpenSSL vulnerability, Starkey says. "Monitor advisories closely [and] promptly assess the situation before taking action," she advises.

A Top Concern

The Heartbleed issue is a top concern at the University of Pittsburgh Medical Center, says CISO John Houston.
"It is an OpenSSL issue that is very broad in scope," he says. "We have been actively assessing the issue and have determined that many of our systems are not affected. For those systems that are affected, we are developing plans to remediate the issue."
Houston says his organization is also implementing a signature on its network traffic scanner to actively watch for malicious traffic.
A security leader at a major southeastern bank, who asked not to be identified, says the institution's first action upon learning about Heartbleed was to examine its Internet-facing services to determine if there was exposure. "Fortunately, there was not," he says. "We then began scanning our internal network for systems which were potentially vulnerable."
Based on its investigation, the institution found internal servers that were susceptible to the exploit, as well as additional low-level systems, such as printers. "We continue to work with the vendors to receive patches and replace the OpenSSL certificates which could potentially be compromised."
Kennet Westby, president at the risk management consulting firm Coalfire, says that a number of its internal platforms were affected by the bug. Additionally, two service providers and a remote access client were affected. "All of these have been addressed, patched and validated secure," he says.
Coalfire immediately initiated an internal alert as soon as information about the vulnerability was released. "Initial steps were to inventory any systems, applications or service providers where we could identify the use/integration of the vulnerable version of OpenSSL," Westby says. "We incorporated discovery and scanning tools to assist with this process as these checks were released."
Westby says the company will continue to focus on reducing the risk of any compromise by changing all account passwords in its internal systems, updating all SSL keys and certificates that could have been compromised and encouraging all users to change passwords with external service providers' services.

Heartbleed Updates

Technology companies Cisco and Juniper Networks, along with several other vendors, issued alerts about which of their products are vulnerable to the Heartbleed bug (see: Cisco, Juniper Issue Heartbleed Alerts).

Heartbleed exposes a flaw in OpenSSL, a cryptographic tool that provides communication security and privacy over the Internet for applications such as Web, e-mail, instant messaging and some virtual private networks (see: Heartbleed Bug: What You Need to Know).
"The Heartbleed bug allows anyone on the Internet to read the memory of the systems protected by the vulnerable versions of the OpenSSL software," says Codenomicon, the Finland-based security vendor that discovered the bug, along with a researcher at Google Security.
Codenomicon says a fixed version of OpenSSL has been released and now needs to be deployed across websites vulnerable to the bug. Additionally, organizations can use an online tool to see if their website is vulnerable.
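As a first triage step, administrators can also check installed OpenSSL versions against the vulnerable range (1.0.1 through 1.0.1f; 1.0.1g carries the fix). The sketch below is an illustration only; note that distribution-patched builds may report a vulnerable version string even after being fixed, so version checks are a starting point, not proof.

```python
# Check the local OpenSSL version string against the Heartbleed range.
import re
import subprocess

# Matches "OpenSSL 1.0.1" with an optional letter suffix a-f (1.0.1g is fixed).
VULNERABLE = re.compile(r"OpenSSL 1\.0\.1[a-f]?\b")

out = subprocess.run(["openssl", "version"],
                     capture_output=True, text=True).stdout
if VULNERABLE.search(out):
    print(f"Potentially vulnerable to Heartbleed: {out.strip()}")
else:
    print(f"Not in the vulnerable range: {out.strip()}")
```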
The Federal Financial Institutions Examination Council issued a statement April 10 stating that it expects financial institutions to incorporate patches on systems and services, applications and appliances using OpenSSL and upgrade systems as soon as possible to address the Heartbleed vulnerability (see: Heartbleed: Gov. Agencies Respond).
(Managing Editor Marianne Kolbasuk McGee contributed to this story).

Security still the biggest concern for cloud adoption

Posted on 14 April 2014.


IT companies say tighter security tops the list of their clients' concerns, ranking above mobile-device management, support for the cloud and data management, according to Autotask.


The survey also found that the expansion of cloud services is the biggest factor driving demand for IT services, surpassing the headline-grabbing but less critical areas of mobile connectivity and "always-on" environments.

Forrester forecasts that spending on cloud software, platform and infrastructure services will grow from $28 billion today to $258 billion in 2020, reaching 45% of total IT services spend.

"The expansion of the cloud is a golden opportunity for managed service providers," said Len DiCostanzo, SVP of Community and Business Development at Autotask. "To capitalize on the cloud's growth, IT service providers need to focus on expanding their managed service offerings to include cloud services, increasing security and evolving their business model to be more consultative, where they deliver integrated business solutions and trusted service, not just IT support."

IT companies are predicting continued growth this year, the Autotask survey found: 87% plan to do the same or more hiring this year compared to last year. In the southwestern U.S., 94% of companies plan the same or more hiring in 2014, the highest rate of any region in the country. Even in the U.S. regions least bullish about hiring – the East, West and Midwest – 87% of IT companies plan the same or more hiring this year.

The recovering economy isn't the only factor boosting the hiring of information technology workers. IT companies cited new client requests (78%), the need for new skills (45%) and geographic expansion (25%) along with the improved economy as the factors driving their hiring plans in 2014.

The Autotask survey was conducted by the research firm Decision Tree Labs and included responses from 1,300 IT service providers in North America, Europe and around the world.

Wednesday, April 2, 2014

HIPAA and cloud computing: What you need to know

March 31, 2014 3:44 pm by Allan Tate     



Many of my clients are in the healthcare field, so a common question is whether data can be managed on IBM cloud computing solutions in compliance with the Health Insurance Portability and Accountability Act (HIPAA). The relevant part of this law, enacted by the U.S. Congress in 1996, establishes rules for the storage and transmission of electronic health information. In summary, these rules are:
• Privacy Rule: regulates the use and disclosure of protected health information
• Security Rule: sets national standards for the security of electronic protected health information
• Breach Notification Rule: requires that entities and business associates notify affected individuals (and others) following a breach of unsecured protected health information
In 2009, the Health Information Technology for Economic and Clinical Health (HITECH) Act strengthened and clarified these rules. In 2010, the Omnibus rule refined the definitions of covered entities, such as health care providers, and business associates, such as IT service providers. A cloud service provider, such as SoftLayer, an IBM company, is considered a business associate and must demonstrate compliance with relevant provisions of HIPAA-HITECH rules.
Hosting an application in compliance with HIPAA-HITECH rules is a shared responsibility between the customer and SoftLayer. A Business Associate Agreement (BAA), which clearly defines the respective responsibilities of SoftLayer and the customer, must be signed. Sensitive workloads are best hosted on SoftLayer’s bare metal or private dedicated cloud offerings. Responsibility is divided as follows:
• SoftLayer is solely responsible for the security of the physical data center hosting the SoftLayer provided infrastructure
• SoftLayer is responsible for managing the environment and SoftLayer administrators according to the security best practices required by HIPAA controls
• Customer is responsible for managing the workloads, with the exception of the physical infrastructure, so as to comply with HIPAA-HITECH rules
A customer should work with subject matter experts and legal advisors to ensure that they have put in place the required controls. SoftLayer’s infrastructure as a service (IaaS) platform provides a number of offerings to help achieve HIPAA-HITECH compliance, including:
• Strict access control and physical security for data centers, including two-factor access authentication and CCTV monitoring
• Servers are labeled with a barcode only, obscuring their identity and ensuring only authorized and approved access
• Completely automated management of the environment: hands-on management of devices is done only when physical access is required and in response to a customer-raised ticket
• A complete history of all SoftLayer actions taken on any device
• Access to SoftLayer hosted storage is only through the private network, not the Internet-accessible public network
• Servers and storage are wiped when de-provisioned; if the wipe is unsuccessful or the server/storage fails, the device is decommissioned and physically destroyed.
• Flexible portal and application programming interface (API) that allows the design of comprehensive failover, disaster recovery and high availability solutions
In addition, SoftLayer provides services to assist customers in creating security and privacy solutions, including:
• Vulnerability scanning
• Host-based intrusion protection
• Anti-virus protection
• Firewall and network-based threat protection
• Two-factor authentication to the SoftLayer customer portal
• SSL certificates that enable confidentiality of data-in-transit
In summary, the security solution to achieve HIPAA-HITECH compliance is a shared responsibility. SoftLayer’s dedicated bare metal or private virtualized cloud offerings should be used for sensitive workloads. A Business Associate Agreement (BAA) needs to be signed as part of the sales agreement. Subject matter and legal experts should be consulted for expertise and guidance.
I’d be interested to hear about your experiences with hosting workloads that require HIPAA-HITECH compliance. Comment below or connect with me on Twitter @allanrtate to continue the discussion.