Friday, November 27, 2015

Why government and tech can't agree about encryption


Updated:   11/24/2015 04:09:31 PM PST

FILE - In this July 30, 2014, file photo, Silicon Valley pioneer and Silent Circle co-founder Jon Callas holds up Blackphone with encryption apps displayed on it at the Computer History Museum in Mountain View, Calif. The Paris terrorist attacks have renewed the debate between law-enforcement officials and privacy advocates over whether there should be limits to encryption technology. (AP Photo/Eric Risberg, File) ( Eric Risberg )
NEW YORK -- Your phone is getting better and better at protecting your privacy. But Uncle Sam isn't totally comfortable with that, because it's also complicating the work of tracking criminals and potential national-security threats.
For decades, tech companies have steadily expanded the use of encryption -- a data-scrambling technology that shields information from prying eyes, whether it's sent over the Internet or stored on phones and computers. For almost as long, police and intelligence agencies have sought to poke holes in the security technology, which can thwart investigators even when they have a legal warrant for, say, possibly incriminating text messages stored on a phone.
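To make "data-scrambling" concrete, here is a deliberately minimal sketch of symmetric encryption using a one-time pad. This is a toy, not what iMessage or Android phones actually use: the message is XORed with a random key of the same length, and only someone holding the key can reverse the operation.

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each byte of the data with the matching key byte."""
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet at the museum at noon"
key = secrets.token_bytes(len(message))  # random key, same length as the message

ciphertext = xor_bytes(message, key)     # scrambled; meaningless without the key
recovered = xor_bytes(ciphertext, key)   # the same XOR with the key undoes it

assert recovered == message
```

A warrant can compel a provider to hand over `ciphertext`, but without `key` it is indistinguishable from random bytes, which is the crux of the dispute.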
The authorities haven't fared well; strong encryption now keeps strangers out of everything from your iMessages to app data stored on the latest Android phones. But in the wake of the Paris attacks, U.S. officials are again pushing for limits on encryption, even though there's still no evidence the extremists used it to safeguard their communications.
While various experts are exploring ways of resolving the impasse, none are making much headway. For now, the status quo favors civil libertarians and the tech industry, although that could change quickly -- for instance, should another attack lead to mass U.S. casualties. Such a scenario could stampede Congress into passing hasty and potentially counterproductive restrictions on encryption.


"There are completely reasonable concerns on both sides," said Yeshiva University law professor Deborah Pearlstein. The aftermath of an attack, however, "is the least practical time to have a rational discussion about these issues."
Encryption plays a little heralded, yet crucial role in the modern economy and daily life. It protects everything from corporate secrets to the credit-card numbers of online shoppers to the communications of democracy advocates fighting totalitarian regimes.
At the same time, recent decisions by Apple and Google to encrypt smartphone data by default have rankled law enforcement officials, who complain of growing difficulty in getting access to the data they feel they need to build criminal cases and prevent attacks. For months, the Obama administration -- which has steered away from legislative restrictions on encryption -- has been in talks with technology companies to brainstorm ways of giving investigators legal access to encrypted information.
But technology experts and their allies say there's no way to grant law enforcement such access without making everyone more vulnerable to cybercriminals and identity thieves. "It would put American bank accounts and their health records, and their phones, at a huge risk to hackers and foreign criminals and spies, while at the same time doing little or nothing to stop terrorists," Sen. Ron Wyden, D-Ore., said in an interview Monday.
Lawmakers on the U.S. Senate Select Committee on Intelligence remain on what they call an "exploratory" search for options that might expand access for law enforcement, although they're not necessarily looking at new legislation.
The FBI and police have other options even if they can't read encrypted files and messages. So-called metadata -- basically, a record of everyone an individual contacts via phone, email or text message -- isn't encrypted, and service providers will make it available when served with subpoenas. Data stored on remote computers in the cloud -- for instance, on Apple's iCloud service or Google's Drive -- is also often available to investigators with search warrants. (Apple and Google encrypt that data, but also hold the keys.)
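The distinction the paragraph draws can be shown with a toy record of the kind a provider might hold; the field names and values here are invented. The message body is opaque ciphertext, but the metadata around it stays readable and subpoena-able.

```python
import secrets

# Hypothetical stored message record: encrypted body, plaintext metadata.
record = {
    "from": "+1-555-0100",
    "to": "+1-555-0199",
    "sent": "2015-11-24T16:09:31-08:00",
    "body": secrets.token_bytes(48).hex(),  # ciphertext: opaque without the key
}

# A metadata subpoena yields everything except the body.
metadata = {k: v for k, v in record.items() if k != "body"}
print(sorted(metadata))  # ['from', 'sent', 'to']
```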
Some security experts suggest that should be enough. Michael Moore, chief technology officer and co-founder of the Baltimore, Maryland-based data security firm Terbium Labs, noted that police have managed to take down online criminals even without shortcuts past encryption. He pointed to the 2013 takedown of Silk Road, a massive online drug bazaar that operated on the "dark Web," essentially the underworld of the Internet.
"The way they figured that out was through good old-fashioned police work, not by breaking cryptography," Moore said. "I don't think there's a shortcut to good police work in that regard."
Others argue that the very notion of "compromise" makes no sense where encryption is concerned. "Encryption fundamentally is about math," said Mike McNerney, a fellow on the Truman National Security Project and a former cyber policy adviser to the Secretary of Defense. "How do you compromise on math?" He calls the idea of backdoors "silly."
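McNerney's point can be illustrated with textbook RSA, using the standard worked example with deliberately tiny, insecure numbers (this is an illustration of "encryption is math", not production cryptography). Decryption works exactly when the arithmetic works; there is no "partial" private exponent that works a little.

```python
# Textbook RSA with tiny primes.
p, q = 61, 53
n = p * q                 # public modulus: 3233
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent, coprime with phi
d = pow(e, -1, phi)       # private exponent: the modular inverse of e (2753)

m = 65                    # the "message", a number smaller than n
c = pow(m, e, n)          # encrypt: c = m^e mod n  -> 2790
assert pow(c, d, n) == m  # decrypt: c^d mod n recovers m exactly
```

Any scheme that let a third party decrypt without `d` would, by the same math, let every third party who learns the trick do so; that is the backdoor objection in one line.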
Some in law enforcement have compromise ideas of their own. The Manhattan District Attorney's office, for instance, recently called for a federal law that would require smartphone companies to sell phones they could unlock for government searches -- in essence, forcing them to hold the keys to user data.
In a report on the subject, the office called its suggestion a "limited proposal" that would only apply to data stored on smartphones and restrict searches to devices that authorities had already seized. Privacy advocates and tech companies aren't sold, saying it would weaken security for phones that are already too vulnerable to attack.
Marcus Thomas, the chief technology officer at Subsentio and former assistant director of the FBI's operational technology division, argued that it's too late to turn back the clock on strong encryption, putting law enforcement in a "race against time" to obtain investigatory data whenever and wherever it can. But he urged security experts to find ways to help out investigators as they design next-generation encryption systems.
The idea of allowing law enforcement secure access to encrypted information doesn't faze Nathan Cardozo, a staff attorney for the San Francisco-based Electronic Frontier Foundation, provided a warrant is involved. Unfortunately, he says, cryptographers agree that the prospect is a "pure fantasy."

Dell computers again found to contain a dangerous certificate

A dangerous certificate has once again surfaced on Dell computers. The certificate, DSDTestProvider, is installed on machines by 'Dell System Detect' and, just like eDellRoot, installs a root certificate together with its private key.
The CERT Coordination Center (CERT/CC) at Carnegie Mellon University in the United States warns about DSDTestProvider. The certificate is installed by Dell System Detect, software that communicates with the Dell Support website. The software comes installed by default on some Dell systems, but Dell users can also install it themselves.

Signing certificates with DSDTestProvider

Like eDellRoot, DSDTestProvider is installed as a certificate authority. The root certificate is installed on machines together with its private key. An attacker can abuse that key to generate certificates signed by DSDTestProvider. Computers that trust DSDTestProvider therefore also trust certificates signed by this CA.
This gives attackers the ability to impersonate websites and other services, to sign software and e-mail messages, and to decrypt network traffic and other data. Attacks that can be mounted this way include phishing, man-in-the-middle attacks that decrypt HTTPS traffic, and the installation of malicious software.

Revoking the certificate

CERT/CC advises revoking the DSDTestProvider certificate. This can be done with the Windows certificate manager (certmgr.msc) by moving the DSDTestProvider certificate from the 'Trusted Root Certificate Store' to 'Untrusted Certificates'.

Thursday, November 26, 2015

Matthijs R. Koot: [Dutch] Cabinet position on encryption (pending)

Matthijs R. Koot's notebook

Matthijs R. Koot

Hobbies: IT-engineering, privacy, digital conflict, democracy.


In the postponement letter of 23 November about the promised cabinet position on encryption, minister Van der Steur writes that the government aims to send that position to the Tweede Kamer (House of Representatives) before the end of this year:
"On 8 October, during the general consultation on the JHA Council (AO JBZ), I promised you a cabinet position on encryption. Finalizing this position turns out to require more time than the month I had foreseen. The complexity of the issue, the dilemmas, and the coordination it involves demand careful treatment and weighing.
We aim to send you the cabinet position before the end of this year.
This will also fulfill the promise made by the Minister of Economic Affairs during the general consultation on the Telecom Council on 10 June to deliver a joint memorandum on the dilemmas surrounding encryption (Kamerstuk 21 501-33, nr. 552)."
That a cabinet position on encryption is being formulated follows from the topic being put on the agenda of the informal JHA Council in Riga on 31 January 2015, where the EU counter-terrorism coordinator brought it to the member states' attention. Although the annotated agenda (.pdf) for the JHA Council of 8 & 9 October 2015 does not refer to encryption, the accompanying letter does:
"Following the report of the informal Justice and Home Affairs Council held on 9 and 10 July 2015, the standing committee for Security and Justice of your House asked to be further informed about what is meant by the encryption problem mentioned in that report.
This concerns encryption of data, used and/or offered by industry, over the Internet and in telecommunications. That encryption hinders the work of police and intelligence services in obtaining legitimate access to the communications of terrorists. During the informal JHA Council in Riga on 29 and 30 January 2015, the EU counter-terrorism coordinator emphatically brought this problem to the member states' attention."
According to the report of the general consultation of 7 October 2015 between the standing committees for Security & Justice and for European Affairs and minister Van der Steur, Ms Berndsen-Jansen (D66) raised the following:
"(...) In an accompanying note to the annotated agenda, the minister writes that encryption of data, used and/or offered by industry, over the Internet and in telecommunications hinders the work of police and services in obtaining legitimate access to the communications of terrorists. That caught my parliamentary group's attention: what does the minister actually mean to say here? Does he mean that all encryption must have a backdoor for police and the judiciary? Is this not a repeat of the discussion about the telecom data retention obligation? And is it an explicit theme of the cybersecurity conference that the Netherlands will organize during its presidency?"
Minister Van der Steur replied:
"(...) encryption: is it part of the cyber conference or a priority during the Dutch presidency? Encryption will not be on the agenda of the cyber conference and is not a priority of the cabinet or the presidency, because nationally we have not yet taken a position on encryption. It is clear, however, that organized crime and terrorists increasingly make use of encryption. That is a current problem for the police and the intelligence and security services. We will debate it shortly, but no cabinet position has yet been formulated."
We look forward with interest to the cabinet position, bearing in mind the draft bill for the Wiv20xx and the yet-to-appear draft bill for the computer crime act (wet bestrijding cybercrime). On 9 December 2015, the standing committee for Security & Justice holds a general consultation on counter-terrorism. Reading tip for the meantime: Who's right on crypto: An American prosecutor or a Lebanese coder? (el Reg, 24 November 2015).

The Data Breach Notification & the guidelines of the Data Protection Authority


Privacy - more than meets the eye

On January 1st 2016, the Dutch Data Breach Notification obligation will come into effect. The new 'privacy law' can have serious consequences for organizations that fail to adequately protect the personal data they process. It will grant the Dutch DPA the right to impose a fine on organizations that fail to report a data breach, a fine that can amount to € 810,000.
Join us at Privacy With a View, November 6. 

More than meets the eye

For Deloitte, privacy is a key asset for organizations, regardless of fines and legislation. This is why, for the third time in a row, we are organizing Privacy with a View. The event is titled Privacy - More than meets the eye, and will take place on November 6. One of the keynote speakers will be the Head of the Supervision Private Sector Department of the Dutch Data Protection Authority (DPA), or the College bescherming persoonsgegevens. The Dutch DPA supervises processing of personal data in order to ensure compliance with the provisions of the law on personal data protection. It also advises on new regulations. Speaker Udo Oelen will be addressing different aspects of the Data Breach Notification.

Security measures

According to Oelen, companies need to take both organizational and technical measures to make sure personal data are not processed illegitimately or fall prey to hackers. Examples of security breaches are USB sticks getting lost, client data being hacked, or medical files ending up in the recycling bin. The DPA needs to be notified when there is a considerable chance of serious negative consequences as a result of such a data breach. These consequences can be material or immaterial, e.g. identity fraud.

Innovate? Remember the data!

Personal data represent a tremendous value. Therefore, companies need to secure them according to the latest technological insights. As the practical experience of the DPA confirms, that still doesn't happen often enough. For example, it's very easy to start a web shop, but anyone who handles personal data needs to think about a secure connection beforehand. When you innovate, you also need to take data into account from day one. The new legislation is meant to raise companies' privacy awareness, to increase transparency around data breaches, and to discourage organizations from sweeping incidents under the rug for fear of reputational damage.

Prepare and be alert

Organizations can prepare by getting their security in order, analyzing the kind of data they work with, and knowing the risks they face. They need to review regularly which data they still hold and what the newest security techniques are. They also need to prepare for the scenario in which a data breach does happen, and to answer the multitude of questions they will then face. How do you handle such an event internally? Have you appointed a specific individual to judge whether the breach needs to be reported? Have you made sure the incident will be registered? Have you thought about the way you will interact with the press? And how do you keep an open eye for signals from the outside world that might suggest a security breach?

The new green

Today, a number of technologies are being developed that will make all our lives a whole lot easier. But meanwhile, as with the Internet of Things, we will be gathering more and more personal data. People worry about what is being done with these data and what effect this will have on their freedom of choice. As an organization, you can probably distinguish yourself in the future by treating personal data with care and accuracy.
The importance of privacy will grow in the coming years. Oelen even calls privacy ‘the new green’. Would you like to hear more? Please join us at Privacy with a view on November 6.

Wednesday, November 25, 2015

Serious flaw found in Amazon cloud security

Software intended to protect data in transit to and from Amazon Web Services (AWS) turned out to contain an old vulnerability.
The vulnerable software, called s2n, had not yet been taken into production. It was, however, finished software that had already been reviewed by three external parties and subjected to hacking attempts. S2n, short for 'signal to noise', was meant to improve the speed of data transport while also increasing security. The standard protocol for secure data transport, TLS (Transport Layer Security), uses the OpenSSL library, which needs about 70,000 lines of code to implement TLS. Amazon's goal is for s2n to do the same in just 6,000 lines of code.
Amazon had only just announced s2n in June when professor Kenny Paterson and his research group at the University of London confronted the company with the flaw. The code contained errors that made the software vulnerable to 'Lucky 13', an attack on TLS that had already come to light in 2013, Ars Technica reports.
The s2n software was fixed immediately. Earlier this week, Colm MacCarthaigh, principal engineer at AWS, gave a detailed explanation of the problem in a blog post.
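Lucky 13 exploits a timing side channel in TLS's CBC-mode MAC processing. The sketch below does not reproduce that attack; with invented data, it only illustrates the underlying class of bug: a comparison that exits early leaks, through its running time, how much of a secret value matched, which is why constant-time primitives such as the stdlib's `hmac.compare_digest` exist.

```python
import hmac

def naive_compare(a: bytes, b: bytes) -> bool:
    """Early-exit comparison: running time depends on the matching prefix length."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False  # leaves the loop at the first mismatch
    return True

# Functionally the two comparisons agree...
assert naive_compare(b"mac-tag", b"mac-tag")
assert hmac.compare_digest(b"mac-tag", b"mac-tag")

# ...but only compare_digest takes the same time whether the first byte
# or the last byte differs, giving a remote attacker nothing to measure.
assert not hmac.compare_digest(b"mac-tag", b"mac-tax")
```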

Friday, November 20, 2015

Public, Private & Hybrid Cloud: Why Compliance (Done Right) is the Easy Part

Regardless of the provider, all cloud providers operate under the same model: the provider is responsible for the physical infrastructure, the shared networking, the compute, the storage, and the hypervisor. Everything that sits on top of the virtual machine and the guest instance is the responsibility of the customer. This includes securing the data, the application code, the application framework, and the operating system that sits on top of the infrastructure itself.
Depending on how an organization views this, the model provides the flexibility to enforce consistency and a similar level of controls as the organization applies in its other environments, including its data centers. However, it is extremely challenging to achieve this using traditional network and system security controls. And compliance with industry regulations, such as SOX 404, PCI DSS, and GLBA, remains the organization's responsibility.
All of this requires a new way of thinking.
In this webinar we will deliver practical advice on achieving and continually maintaining compliance with industry regulations when operating in any type of distributed computing environment, including private, public, and hybrid cloud.
Viewers will learn:
  • The compliance challenges organizations face integrating cloud services with their data centers
  • How to assess the compliance posture of your infrastructure, even if it's distributed across the data center, public cloud services, offsite facilities, IaaS and PaaS installs and hosted applications
  • How compliance automation works to integrate legacy infrastructures with cloud-based ones - and ensure compliance requirements aren't overlooked
  • Why focusing on security across your hybrid IT infrastructure is the best way to alleviate many compliance headaches

According to one of the largest cloud service providers, Amazon Web Services, "...the customer should assume responsibility and management of, but not limited to, the guest operating system...and associated application software..." It further adds that " is possible for customers to enhance security and/or meet more stringent compliance requirements with the addition of host-based firewalls, host-based intrusion detection/prevention, encryption and key management."
The security and compliance requirements in any form of cloud environment haven't changed. We still need strong access controls, privileged-account monitoring, multi-factor authentication, user auditing, device verification, file integrity monitoring, and so on. We need to reduce the attack surface on a continual basis and find ways to implement corporate policies and ensure compliance in a consistent manner. And all of this, basically anything that sits on top of the virtual machine and the guest instance, remains the responsibility of the customer.

Tuesday, November 17, 2015

Total war on encryption has broken out
The end of encryption is approaching fast. Because terrorism.

Encryption of Internet traffic is rapidly approaching its end. Governments, media and politicians have declared total war on encryption, above all on the encryption of messaging traffic.

The great framing has begun. According to American politicians, Snowden bears indirect blame for the attacks in Paris, because his revelations made it harder for security services to quietly eavesdrop on all our Internet traffic in good conscience, so that terrorists can now simply use their iPhones to send each other messages.
And not only that: all technology companies share the blame for the bloodbath, according to US senator Dianne Feinstein, the head of the intelligence committee there. "I have met with the heads of most of the technology companies and asked for their help, and I have gotten nothing," she said according to ABC7 News. The help she wanted was the decryption of message traffic on social media, among other channels.

The American broadcaster NBC News chips in as well: anonymous European officials tell NBC that the terrorists use "advanced communication techniques," by which it is meant that ISIS uses WhatsApp and Telegram, among other services, to message each other. Tor and the PlayStation network are also said to be in use.
In other words, technology that you and I use every day, so that is hardly strange. But unlike us, ISIS apparently needs a Jihadi Help Desk for it, according to NBC.
In any case, pressure is now mounting on technology companies to open up end-to-end encryption to governments one way or another, as the United Kingdom, among others, wants to force through a new law.

Monday, November 16, 2015

Crunch Network

The Hierarchy of IoT “Thing” Needs

I received a lot of feedback on an article I wrote a few months ago about changing the way we perceive the “Things” in the Internet of Things (IoT).
The gist of my argument was that we should start treating these “Things” more like people — not in the sense of giving them the right to vote and the responsibility of paying taxes, but in the sense of thinking about them the way you would think about an employee hired to fulfill a specific function. Our perception of smart “Things” needs to be “people-ified,” if you will.
I thought it would be nice to follow up on that notion and formalize what, exactly, one of these “Things” comprises. After all, there are a lot of things in the world, objects too numerous to count, but not all of these things are (or should be, or ever will be) IoT “Things.”
To define an IoT “Thing,” I’m going to employ Maslow’s hierarchy of needs — that well-known human psychology paradigm typically displayed in the shape of a pyramid, with the most fundamental human needs (physiological needs like air, food and water) at the bottom and rising to the most esoteric needs (self-actualization or expression of full potential) at the apex.
I’ve sketched out a derivative needs pyramid for IoT “Things,” charting their ascent to Thing-actualization.
[Figure: the IoT "Thing" hierarchy-of-needs pyramid]
Maslow’s theory suggests that if the basic lower levels are not met, a human being will have no desire for the higher levels. I suggest something similar applies to “Thing” needs. That is, a “Thing” has no use for the higher level if the lower-level building block is not first met.
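That gating rule can be encoded in a few lines; the level names below paraphrase the pyramid's tiers, and the sample device is invented.

```python
# The pyramid as an ordered checklist: a "Thing" only benefits from a level
# if every level below it is already satisfied.
LEVELS = ["physical", "security", "communication", "data", "smart"]

def highest_level_reached(met):
    """Walk up the pyramid; stop at the first unmet level."""
    reached = None
    for level in LEVELS:
        if level not in met:
            break
        reached = level
    return reached

# A device with connectivity and data handling but no security never gets
# past the base of the pyramid.
sensor = {"physical", "communication", "data"}
print(highest_level_reached(sensor))  # physical
```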
At the very base of the pyramid are the most basic-existence needs of a “Thing.” Obviously, it will need power, a physical mechanism for connecting and transmitting (such as a radio) and a material housing for its functionality.
But for IoT, we also need to account for its interaction with its environment — the physical conditions in which the “Thing” will be operating. Is it a sensor for an arctic ice-monitoring project that will be operating in subfreezing temperatures? Is it a wristband activity-and-exercise tracker that should be able to withstand sports impact and jostling, human sweat and rapid changes in body temperature? Should it be waterproof? Heat resistant? Encased in lightweight fabric or titanium?
Finally, since the “Thing” is a thing, it must meet a specific need or bring a value to be useful. As such, we must also account for its ability to meet functional expectations as a core existence requirement.
Once the core physical needs of “Things” are met, and before external connectivity is possible, security is needed. To be quite clear: Security is key for IoT adoption, and thus needs to be addressed for individual “Things” that can be externally accessible.
Accessibility does not mean just connectivity. It also applies to things that can be physically “cracked” open, where lack of security could put stored data at risk.
As we’ve all learned by now, when it comes to anything Internet-related, whatever can be exploited, will be exploited. This truth really has to be faced early on in the creation of each IoT device. Every “Thing” in IoT requires a means to encode, encrypt and authenticate its data.
With the security challenges met, the next layer of the pyramid concerns communication needs. Though we noted that a “Thing” needs a physical mechanism for connection as part of its physical needs, this layer addresses the self-expression realm of need.
In short, this layer pinpoints whatever it takes for a “Thing” to share its voice with the world. The specifics of interface and networking are addressed in this layer. What protocols will the “Thing” use for its transport and network layers? IEEE 802.15.4? 6LoWPAN? The protocol and language that the “Thing” will speak are addressed at this level, as well. The communication needs supply the “Thing” with its link to the “I” part of the IoT equation.
The next step up in our needs hierarchy pertains to data. Here is where you decide how the “Thing” will handle its collected information. What transactions does it perform? How does it log data? Does it perform diagnostics? What is the function of the data being collected and how is that expressed by the “Thing”?
At the very top of our pyramid, we find smart needs — the equivalent to Maslow’s self-actualization. Here is where the needs of the “Thing” become an expression, not just of a single sensor or communication gateway, but of those combined constructive properties that make the “Thing” useful for the Internet of Things.
Does it contribute to analytics and exhibit logic? Does it present learned and predictive behavior? Is it scalable and self-configuring, operating without the need for human intervention? While it’s not necessary that the “Thing” pass the Turing test, this realm is where it exhibits its true nature.
What I find interesting about this construct is that while its initial scope was that of an individual IoT “Thing,” it is not limited to a single thing.
Consider that individual people join to form committees and organizations, where the new group becomes its own individual entity. In much the same way, a single thing will combine with other things to create groups and networks of things that are regarded as other more “complex” things.
These combined things will have their own needs that can be defined by this construct, as well. I especially like this Seussian micro to macro perspective when thinking about both things and the data that they collect and evolve into information.
The reason to think this way is to enable the use of familiar paradigms when “Thing” architecture and interaction models are designed. For example, consider this simple question: “What should you consider when purchasing an IoT thing?” With this new thinking, the answer becomes: “The same stuff you consider when you hire a new employee.” Trustworthiness, reliability and ability to work well with others form a great basis for consideration in both cases.
As you “people-ify” things, notice how the perspective shift opens a world of paradigms to leverage.

Bitlocker encryption was insecure for years

One can only hope that nobody discovered earlier what Ian Haken showed last Friday: the Bitlocker encryption tool that Microsoft ships to business customers with Windows could be cracked from the very beginning.
The problem arose from the way Microsoft secured laptops that normally log in to a domain on their corporate network. With this domain-based authentication, the user's password is checked against the domain controller. But for cases where the laptop is used away from the network, the laptop also stores login credentials locally.
Haken managed to pry those local credentials from the PC. To do so, he needed to know the name of the domain controller the stolen laptop usually contacts. With that information, he could set up a fake domain controller on which he created the user account belonging to the laptop, with a password whose creation date lies far in the past. When the laptop then contacts that domain controller, the user is 'invited' to set a new password. That password replaces the locally stored one, so the new password can be used to log in as soon as the laptop is disconnected from the network again.
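The sequence Haken demonstrated can be caricatured in a few lines. The names here are invented and the real protocol is Windows-specific, but the pre-patch essence is that the client updates its cached credential on the word of a domain controller it never authenticated.

```python
import hashlib

def _verifier(password: str) -> str:
    """Stand-in for the locally cached password verifier."""
    return hashlib.sha256(password.encode()).hexdigest()

# The laptop's locally cached credential, used for offline logins.
cached = {"alice": _verifier("original-corporate-secret")}

def rogue_dc_forces_change(user: str, new_password: str):
    """A fake DC claims the password expired; the unpatched client
    replaces its cached verifier without authenticating the DC."""
    cached[user] = _verifier(new_password)

def offline_login(user: str, password: str) -> bool:
    return cached.get(user) == _verifier(password)

assert not offline_login("alice", "attacker-pick")   # the thief can't log in yet
rogue_dc_forces_change("alice", "attacker-pick")     # one chat with the rogue DC...
assert offline_login("alice", "attacker-pick")       # ...and the thief is in
```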

Bitlocker bypassed in its usual implementation

In this way, the thief can still gain access to the laptop. Using Bitlocker then normally does not help keep the contents of the hard disk confidential either: once the thief is authenticated, he also gains access to the Bitlocker key stored in the so-called Trusted Platform Module.
Haken shared his finding with Microsoft, which distributed a patch last Tuesday that cuts off this specific form of attack. It is therefore advisable to apply that patch without delay. Better still is to use the option of additionally protecting data on disk with a PIN code or a special key on a USB stick. Many companies forgo this so as not to burden their users. But with such an extra protective measure, a hack like the one Haken demonstrated at least faces one more hurdle.

Wednesday, November 11, 2015

Bad news for encryption security, PKI certificate revocation

One group of researchers reported that certificate revocation -- a fundamental feature of public key infrastructure security -- is almost completely dysfunctional. And just as the open source encryption project GnuPG announced two new features to make its personal encryption program safer and easier to use, another research group reported that modern personal encryption software remains virtually unusable by ordinary people.
And the one big cryptography success story this week was not good news: Dark Web vendors have been reported to be offering malware packages that include Stuxnet-style code signing certificates for signing malicious code, thus making it virtually undetectable.

Revoked certificates widely used, rarely verified

Researchers from Northeastern University in Boston, the University of Maryland, Duke University in Durham, N.C., and Stanford University in Stanford, Calif., reported that "a surprisingly large fraction (8%) of the certificates served" on live Internet Web servers have been revoked, which was bad enough, but they also found that most browsers don't check certificate revocation status, "including mobile browsers, which uniformly never check."
They wrote: "This uncertainty leads to a chicken-and-egg problem: Administrators argue that they need not revoke because clients rarely check, while clients argue they need not check because administrators rarely revoke."
Apple, Adobe and the U.S. government were found to be using revoked certificates. Of the browsers tested, Internet Explorer 11 was the best at checking certificate revocation status, while none of the mobile browsers tested checked the revocation status of TLS certificates at all.
The group also published research last year on the effects of the Heartbleed vulnerability, which it suggested was the reason many of the certificates found had been revoked.
The researchers found that certificate revocation lists (CRLs) "impose significant bandwidth and latency overhead on clients: The median certificate has a CRL of 51 KB, and some certificates have CRLs up to 76 MB in size."
One possible route to better distribution of certificate revocation is via the Online Certificate Status Protocol (OCSP), which allows clients to query the status of one digital certificate at a time, thus reducing the overhead of frequent CRL updates.
However, the researchers found that "OCSP Stapling, which addresses many of the difficulties of obtaining revocation information, is not widely deployed: Only 3% of certificates are served by hosts supporting OCSP Stapling." Even when OCSP is used, there is still a performance issue, as it "requires the client to delay accepting the connection until the OCSP responder can be contacted."
Another troubling result: They discovered 18 nonrevocable, top-level certificate authority (CA) certificates that can be used to issue trusted certificates for any domain. "Being unable to revoke a CA certificate is particularly worrisome, as possessing a CA certificate's private key allows one to generate certificates for any Internet domain -- and private keys for CA certificates have been inappropriately given out multiple times in the past."
They concluded: "Overall, our results paint a bleak picture of the ability to effectively revoke certificates today." However, they also note that there are ways to improve significantly in the near term, including by using an improved method for constructing certificate revocation lists to make them smaller, and thus, less of an issue for Web browsing performance.

GnuPG introduces a new trust model, DANE support for key distribution

Meanwhile, the open source encryption project GnuPG introduced a new trust model, Trust on First Use (TOFU), which does what its name says: the first time it encounters a key, it trusts that the key belongs to whoever claims to own it. The downside of this model is that there is no guarantee of the key holder's identity; the only thing a recipient can trust is that subsequent messages come from the same key. The new feature will ship in the next release of GnuPG and will be turned off by default.
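
The TOFU idea is simple enough to sketch in a few lines. The following is a minimal, hypothetical key store in Python (not GnuPG's actual implementation): remember the first key seen for each identity, and flag any later key change as a potential attack.

```python
import hashlib

class TofuStore:
    """Trust-on-first-use: remember the first key seen for each identity
    and flag any later key change as a potential attack."""

    def __init__(self):
        self._seen = {}  # identity -> key fingerprint

    @staticmethod
    def fingerprint(key_bytes):
        return hashlib.sha256(key_bytes).hexdigest()

    def check(self, identity, key_bytes):
        fp = self.fingerprint(key_bytes)
        if identity not in self._seen:
            self._seen[identity] = fp   # first contact: trust blindly
            return "trusted-on-first-use"
        if self._seen[identity] == fp:
            return "match"              # same key as before
        return "CONFLICT"               # key changed: possible MITM

store = TofuStore()
print(store.check("alice@example.org", b"key-A"))  # trusted-on-first-use
print(store.check("alice@example.org", b"key-A"))  # match
print(store.check("alice@example.org", b"key-B"))  # CONFLICT
```

The "CONFLICT" case is exactly where TOFU earns its keep: it cannot tell you who a key belongs to, but it can tell you when a previously seen correspondent suddenly presents a different one.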



Also newly implemented from GnuPG is support for DNS-Based Authentication of Named Entities (DANE), a protocol that uses "the DNSSEC infrastructure to store and sign keys and certificates that are used by TLS." According to GnuPG, "The basic idea is that users publish their keys in the Secure DNS. Then, when someone is looking up a key, they simply use DNS to find it."
DANE is implemented in the current release of GnuPG.
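
If memory of RFC 7929 serves (this is a sketch, not GnuPG's code), the DNS owner name for such an OPENPGPKEY record is derived by hashing the local part of the address with SHA2-256, truncating to 28 octets, and prefixing it to an `_openpgpkey` label under the domain:

```python
import hashlib

def openpgpkey_record_name(email):
    """Owner name for an OPENPGPKEY DNS record (per RFC 7929):
    SHA2-256 of the local part, truncated to 28 octets, hex-encoded."""
    local, _, domain = email.partition("@")
    digest = hashlib.sha256(local.encode("utf-8")).digest()[:28]
    return digest.hex() + "._openpgpkey." + domain

# 56 hex characters, then the _openpgpkey label under the mail domain.
print(openpgpkey_record_name("hugh@example.com"))
```

A resolving client would then fetch that record over DNSSEC and find the correspondent's public key inside it.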

But even modern PGP is practically unusable for most users

Researchers at Brigham Young University in Provo, Utah, published Why Johnny Still, Still Can't Encrypt: Evaluating the Usability of a Modern PGP Client, in which they reported that "modern [Pretty Good Privacy] PGP tools are still unusable for the masses."
The 20 participants, grouped into 10 pairs, attempted to exchange encrypted email using the highly rated Mailvelope browser extension front end for PGP; only one pair -- the one with the single PGP-experienced user in the group -- was able to exchange encrypted messages in the time allotted to the task.
None of the participant teams managed to encrypt without making some egregious error, such as revealing secrets or encrypting with the wrong key. Even the successful team needed the maximum time allotted, one hour, to complete the task -- and it stumbled before succeeding.

Yet, not entirely bad for everyone

One place the public key infrastructure is working: the dark side. InfoArmor Inc., based in Scottsdale, Ariz., identified a new trend: underground malware vendors selling digital certificates for code signing. InfoArmor reported that the GovRAT malware includes the ability to take malware binaries and sign them with apparently valid digital certificates to defeat antivirus and other malware detection methods.
InfoArmor wrote, "Signed binary code is interpreted as potentially trusted and verified software, which allows it to slide under the radar of antiviruses and proactive defense systems." This technique is similar to that used for the Stuxnet worm and in the Sony attack, and previously had been associated with "state-sponsored cyberattacks."
InfoArmor reported that Dark Web vendors were offering authentic code signing certificates from CAs, including Thawte and Comodo for as little as $600. For the budget-conscious attacker, Dark Web vendors offered code signing of individual malware files for as little as $60.

Sunday, November 8, 2015

Would you hire a lawyer who thinks cybercrime is an Act of God?


June 2, 2014
With cybercrime insurance, insurance companies charge money to cover the risk of cybercrime exposure and underwrite any losses up to an agreed level. In practice, there's lots of legal 'small print' to navigate, detailing the many exemptions and get-out clauses. Your car insurance is invalidated if you leave your doors unlocked, so it makes sense that your cybercrime insurance won't cover you if you don't have your security systems running properly.
Cybercrime is increasingly referred to in legal documentation. Most people will recognise force majeure (Act of God) from the small print of their airline ticket; the part where it says you can’t claim a refund in the event of your plane being sucked into a tornado or hit by an asteroid. Unbelievably, this common contract clause is also included in many legal documents in relation to cyber security.
I have a lot of respect for people’s religious beliefs, but – I’m sorry – you are crazy if you think that it’s the Almighty hammering out code to exploit the cyber security weaknesses of target organisations.
Providers of IT services have legitimate claims to use the force majeure exemption when it applies to their datacentre being flooded or demolished by an earthquake. However, I don't think these protections against corporate risk should be used as a get-out clause when hackers, viruses and security breaches strike!
Good cybercrime insurance is long overdue, especially if it stays away from defining cybercriminal activity in the same way as a bolt of lightning.
The earliest business insurance related to ships and their cargo. A shipping container full of plasma TVs undoubtedly has its contents insured. Consider for a moment just how much vital data cargo is being held and transported every day between organisations. Shouldn't this be insured too?
Having car insurance doesn’t stop you trying to avoid accidents. In fact, it stops working when you stop trying! Cybercrime insurance policies could be a great idea for business to adopt, but it won’t stay in force if you fail to keep your cybercrime protection updated.

FCC Fines Cox $595K Over Lizard Squad Hack

06 Nov 15


In September 2014, I penned a column called “We Take Your Privacy and Security. Seriously.” It recounted my experience receiving notice from my former Internet service provider — Cox Communications — that a customer service employee had been tricked into giving away my personal information to hackers. This week, the Federal Communications Commission (FCC) fined Cox $595,000 for the incident that affected me and 60 other customers.
I suspected, but couldn’t prove at the time, that the band of teenage cybercriminals known as the Lizard Squad was behind the attack. According to a press release issued Thursday by the FCC, the intrusion began after Lizard Squad member “Evil Jordie” phoned up Cox support pretending to be from the company’s IT department, and convinced both a Cox customer service representative and a Cox contractor to enter their account IDs and passwords into a fake, or “phishing,” website.
“With those credentials, the hacker gained unauthorized access to Cox customers’ personally identifiable information, which included names, addresses, email addresses, secret questions/answers, PIN, and in some cases partial Social Security and driver’s license numbers of Cox’s cable customers, as well as Customer Proprietary Network Information (CPNI) of the company’s telephone customers,” the FCC said. “The hacker then posted some customers’ information on social media sites, changed some customers’ account passwords, and shared the compromised account credentials with another alleged member of the Lizard Squad.”
My September 2014 column took Cox to task for not requiring two-step authentication for employees: had the company done so, this phishing attack probably would have failed. As a condition of the settlement, Cox has agreed to adopt a comprehensive compliance plan, which establishes an information security program that includes annual system audits, internal threat monitoring, penetration testing, and additional breach notification systems and processes to protect customers’ personal information. The FCC will monitor Cox’s compliance with the consent decree for seven years.
It’s too bad that it takes incidents like this to get more ISPs to up their game on security. It’s also too bad that most ISPs hold so much personal and sensitive information on their customers. But there is no reason to entrust your ISP with even more personal info about yourself — such as your email. If you need a primer on why using your ISP’s email service as your default or backup might not be the best idea, see this story from earlier this week.
If cable, wireless and DSL companies took customer email account security seriously, they would offer some type of two-step authentication so that if customer account credentials get phished, lost or stolen, the attackers still need that second factor — a one-time token sent to the customer’s mobile phone, for example. Unfortunately, very few if any of the nation’s largest ISPs support this basic level of added security, according to a site that tracks providers that offer it and shames those that do not.
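
The one-time tokens described here are standardized as TOTP (RFC 6238, built on RFC 4226's HOTP). A stdlib-only Python sketch shows how little machinery a second factor actually requires; the secret and parameters below come from the RFC's own test vectors:

```python
import hmac, hashlib, struct, time

def hotp(secret, counter, digits=6):
    """RFC 4226 HOTP: HMAC-SHA1 over an 8-byte counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret, at=None, step=30, digits=6):
    """RFC 6238 TOTP: HOTP keyed on the current 30-second time window."""
    now = time.time() if at is None else at
    return hotp(secret, int(now // step), digits)

# RFC 6238 test vector (SHA-1 mode): secret "12345678901234567890",
# time 59, 8 digits -> "94287082"
print(totp(b"12345678901234567890", at=59, digits=8))
```

Because the code changes every 30 seconds, a phished password alone is not enough: the attacker also needs the current token from the customer's phone.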
Then again, perhaps the FCC fines will push ISPs toward doing the right thing by their customers: According to The Washington Post‘s Brian Fung, the FCC is offering in this action another sign that it is looking to police data breaches and sloppy security more closely.

JPMorgan Chase CSO reportedly reassigned following data breach


JPMorgan Chase & Co.’s CSO Jim Cummings reportedly was reassigned to a new position within the bank following the company’s major data breach this past year.
Bloomberg reported that it obtained a memo indicating that Cummings would be moving to Texas to “work on military and veterans housing initiatives for the bank.” During his CSO tenure, Cummings supervised more than 1,000 people. He formerly served as the head of the U.S. Air Force's cyber-combat unit.
Greg Rattray formerly served as CISO at the bank and was reassigned in June to become the head of global cyber partnerships and government strategy.
Bloomberg reported that company insiders said both Cummings and Rattray brought military culture to the bank, which didn't always mesh with JPMorgan's Wall Street ways.
The duo reportedly blamed Russia for the major breach, which was later attributed to cybercriminals.

Saturday, November 7, 2015

15 reasons not to start using PGP


Because of popular demand, here's the collection of reasons to stop using PGP, or at least not to start.
Pretty Good Privacy is better than no encryption at all, and being end-to-end it is also better than relying on SMTP over TLS (that is, point-to-point between the mail servers while the message is unencrypted in-between), but is it still a good choice for the future? Is it something we should recommend to people who are asking for better privacy today?
The text concludes by mentioning some of the existing alternatives. So, again, this is NOT about abandoning encryption, as some critics like to presume.

1. Downgrade Attack: The risk of using it wrong.

With e-mail the risk always remains that somebody will send you sensitive information in cleartext - simply because they can, because it is easier, because they don't have your public key yet and don't bother to find out about it, or just by mistake. Maybe even because they know they can make you angry that way -- and excuse themselves by pleading incompetence. Some people even manage to reply unencrypted to an encrypted message, although PGP software should keep them from doing so.
The way you can simply not use encryption is also the number one problem with OTR, the off-the-record cryptography method for instant messaging.
This opens up for a great possibility for attack: It's enough to flip a bit in the communication between sender and recipient and they will experience decryption or verification errors. How high are the chances they will start to exchange the data in the clear rather than trying to hunt down the man in the middle?
The mere existence of an e-mail address in the process is a problem. Modern cryptographic communication tools simply provide no way to exchange messages unencrypted, so nothing can silently fall back to cleartext -- giving up on privacy becomes a very conscious choice.
Update: And it's not like it's a problem only for the less careful or less tech-savvy. A notable cryptographer recently sent out confidential mail unencrypted. People told him, but he didn't believe it. He wrote himself encrypted mail and indeed, there it was, the mail in the clear. Turned out that one specific version of Enigmail was in some strange way incompatible with a specific version of Thunderbird, sufficiently to present a completely normal user experience, yet the mails would go out unencrypted, leaving just a remark somewhere in the message log. There was no way even for the most experienced user to protect himself from a software failure of this kind. This can happen to you, too. Anytime you upgrade your operating system. But only with encryption-on-top systems like PGP.
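
The bit-flipping attack in this section works because broken crypto invites a cleartext retry. The defensive principle is to fail closed: a protected message with even one flipped bit must be rejected outright, never quietly passed along or resent in the clear. A stdlib sketch using HMAC (standing in for whatever integrity mechanism the real protocol uses):

```python
import hmac, hashlib

KEY = b"shared secret key"  # illustrative pre-shared key

def protect(message):
    """Prepend a SHA-256 HMAC tag to the message."""
    return hmac.new(KEY, message, hashlib.sha256).digest() + message

def receive(blob):
    """Verify the tag; fail closed instead of inviting a cleartext retry."""
    tag, message = blob[:32], blob[32:]
    expected = hmac.new(KEY, message, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed -- possible tampering")
    return message

blob = protect(b"meet at noon")
assert receive(blob) == b"meet at noon"

tampered = bytearray(blob)
tampered[40] ^= 0x01          # Mallory flips a single bit in transit
try:
    receive(bytes(tampered))
except ValueError as e:
    print(e)
```

The social-engineering question in the text remains, of course: software can refuse the tampered message, but only the users can refuse to "just send it unencrypted this once."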

2. The OpenPGP Format: You might as well run around the city naked.

Thanks to its easily detectable OpenPGP Message Format it is an easy exercise for any manufacturer of Deep Packet Inspection hardware to offer a detection capability for PGP-encrypted messages anywhere in the flow of Internet communications, not only within SMTP. So by using PGP you are making yourself visible. Stf has suggested using a non-detectable wrapping format.
Update: Gregory mentions that by using the --hidden-recipient flag you can tell PGP to at least hide who you are talking to. Hardly anyone does that: "PGP easily undoes the privacy that an anonymity network like Tor can provide" (by including the recipient's public key in the message).
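
To see how little a DPI box needs, here is a toy detector sketch. The armor banner is taken from the OpenPGP spec (RFC 4880); the binary heuristic relies on the fact that the first octet of an OpenPGP packet always has its high bit set (on its own that heuristic would also flag plenty of non-PGP traffic, as the comment notes):

```python
import re

# ASCII-armored OpenPGP messages carry an unmistakable banner (RFC 4880).
ARMOR = re.compile(rb"-----BEGIN PGP (MESSAGE|SIGNATURE|PUBLIC KEY BLOCK)-----")

def looks_like_openpgp(data):
    if ARMOR.search(data):
        return True
    # Crude binary heuristic: the packet format requires bit 7 of the
    # first octet to be set. Alone, this also matches much non-PGP data.
    return bool(data) and data[0] & 0x80 == 0x80

print(looks_like_openpgp(b"-----BEGIN PGP MESSAGE-----\n..."))  # True
print(looks_like_openpgp(b"plain old text"))                    # False
```

A real DPI product would parse the packet structure properly, but even this toy shows that PGP traffic announces itself.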

3. Transaction Data: Mallory knows who you are talking to.

Should Mallory not possess the private keys to your mail provider's TLS connection yet, he can simply intercept the communication by means of a man-in-the-middle attack, using a fake certificate that he can mint for himself on the fly. It's a bull run, you know?
Side note: Did you ever see a mail returned to you because of an invalid TLS certificate? And you can bet the net is full of invalid certificates. In most cases the mail will be delivered anyway, so Mallory doesn't even have to fake a valid certificate. He can use an invalid one, too.
Even if you employ PGP, Mallory can trace who you are talking to, when and how long. He can guess at what you are talking about, especially since some of you will put something meaningful in the unencrypted Subject header.
Should Mallory have been distracted, he can still recover your mails by visiting your provider's server. Something to do with a PRISM, I heard. On top of that, TLS itself is being recklessly deployed without forward secrecy most of the time.
Update: This so-called metadata about who is talking to whom is of constitutional importance. It is a founding requirement of democracy to be able to share critical thinking and organize as a political group outside the view of government and not give anyone the power to influence, manipulate or keep a new democratic movement from growing and developing its potential. See the update below for more on this kind of reasoning.

4. No Forward Secrecy: It makes sense to collect it all.

As Eddie has told us, Mallory is keeping a complete collection of all PGP mails being sent over the Internet, just in case the necessary private keys may one day fall into his hands. This makes sense because PGP lacks forward secrecy: the property that encryption keys are frequently refreshed, so that the private key matching any given message is soon destroyed. Technically PGP is capable of refreshing subkeys, but it is so tedious that it is not being practiced -- let alone practiced the way it should be: at least daily.
Update 2015: At least two new crypto schemes over SMTP have been invented that implement forward secrecy but aren't PGP-compatible. One is called opmsg. The other one I forgot.
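
Forward secrecy need not be exotic. A toy symmetric hash ratchet (a deliberately simplified sketch of the idea behind schemes like opmsg, not any real protocol) derives a fresh key per message and immediately destroys the state that produced it:

```python
import hashlib

class Ratchet:
    """Toy symmetric ratchet: each message gets a fresh key derived from a
    chain key, and the chain key is immediately replaced. Compromising the
    current state reveals nothing about earlier message keys."""

    def __init__(self, seed):
        self._chain = hashlib.sha256(b"chain" + seed).digest()

    def next_message_key(self):
        key = hashlib.sha256(b"msg" + self._chain).digest()
        # Step the chain forward and forget the old state (forward secrecy).
        self._chain = hashlib.sha256(b"chain" + self._chain).digest()
        return key

r = Ratchet(b"initial shared secret")
k1 = r.next_message_key()
k2 = r.next_message_key()
assert k1 != k2   # every message is encrypted under a fresh key
```

Since SHA-256 cannot be run backwards, seizing the current chain key tells Mallory nothing about the keys that encrypted yesterday's messages -- exactly the property whose absence makes "collect it all" worthwhile against PGP.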

5. Cryptogeddon: Time to upgrade cryptography itself?

Mallory may also be awaiting the day when RSA cryptography will be cracked and all encrypted messages will be retroactively readable. Anyone who recorded as much PGP traffic as possible will one day gain strategic advantages out of that. According to Mr Alex Stamos that day may be closer than PGP advocates think as RSA cryptography may soon be cracked.
This might be true, or it may be counter-intelligence to scare people away from RSA into the arms of elliptic curve cryptography (ECC). A motivation to do so would have been to get people to use the curves recommended by NIST, as they were created using magic numbers chosen without explanation by the NSA. No surprise they are suspected to be corrupted.
With both of these developments in mind, the alert cryptography activist scene now seems to converge on Curve25519, a variant of ECC whose parameters were derived mathematically. "They are the smallest numbers that satisfy all mathematical criteria that were set forth," explains Christian Grothoff of GNUnet.
ECC also happens to be a faster and more compact encryption technique, which you should take as an incentive to increase the size of your encryption keys.
Unfortunately, thanks to RFC 6637, GnuPG will soon support ECC with the suspect NIST curves. Would it be better to break with OpenPGP and support Curve25519 instead?
Nadia Heninger tells us some more on the topic, and concludes that there is no proof that mathematical discoveries cannot cause a cryptographic meltdown anytime: "Just because nothing has happened for two decades doesn't mean that something cannot happen." It is up to you to worry if it's more likely that RSA or ECC could be cracked in future. Should a mathematical breakthrough drop from the sky, probably both would be affected.
As a side note, OpenPGP requires the use of SHA1 for its fingerprinting. That means the way most people are authenticated in PGP may someday fall apart.
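
That SHA1 dependency is concrete: if I read RFC 4880 correctly, a V4 key fingerprint is the SHA-1 digest of the octet 0x99, a two-octet length, and the public key packet body. A sketch (the packet body below is a dummy placeholder, not a real key):

```python
import hashlib

def v4_fingerprint(public_key_packet_body):
    """OpenPGP V4 fingerprint (per RFC 4880, section 12.2): SHA-1 over
    the octet 0x99, a two-octet big-endian length, and the packet body."""
    body = public_key_packet_body
    prefix = b"\x99" + len(body).to_bytes(2, "big")
    return hashlib.sha1(prefix + body).hexdigest().upper()

# Illustrative only -- a real packet body encodes version, timestamp,
# algorithm and key material.
print(v4_fingerprint(b"\x04" + b"\x00" * 20))
```

So every business card, key-signing party and keyserver lookup that relies on a fingerprint ultimately rests on SHA-1's collision resistance.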

6. Federation: Get off the inter-server super-highway.

NSA officials have reportedly said that the NSA does not keep track of all peer-to-peer traffic, as it is just large amounts of mostly irrelevant copyright infringement. It is thus a very good idea to develop a communications tool that embeds its ECC-encrypted information in plenty of P2P cover traffic.
Although this information is only hearsay, it is a reasonable consideration. By travelling the well-established and surveilled paths of e-mail, PGP is unnecessarily overexposed. It would be much better if the same PGP messages were handed from computer to computer directly -- maybe even embedded into a picture, movie or piece of music using steganography.
Also, there are several issues about Federation itself…

7. Discovery: A Web of Trust you can't trust.

Mike Perry has made a nice collection of reasons why the PGP Web of Trust is suboptimal. It is in many ways specific to the PGP approach and not applicable to other social graphs like secushare's. Let's summarize: The PGP WoT
  1. is publicly available for data mining,
  2. has many single points of failure (social hubs with compromised keys) and
  3. doesn't scale well to global use.
So these are actually three more reasons not to use PGP, but since you can use PGP without WoT we'll count them as one.
Update: Just found out that when you look up a key your amazing PGP client will by default do a cleartext HTTP request to the key server. Thus anyone can see who your conversation partners are. Maximum total privacy failure!

8. PGP conflates non-repudiation and authentication.

"I send Bob an encrypted message that we should meet to discuss the suppression of free speech in our country. Bob obviously wants to be sure that the message is coming from me, but maybe Bob is a spy … and with PGP the only way the message can easily be authenticated as being from me is if I cryptographically sign the message, creating persistent evidence of my words not just to Bob but to Everyone!" (Thanks, Gregory, for contributing this reason ;-)).

9. Statistical Analysis: Guessing on the size of messages.

Especially for chats and remote computer administration it is known that the size and frequency of small encrypted snippets can be observed long enough to guess the contents. This is a problem with SSH and OTR more than with PGP, but also PGP would be smarter if the messages were padded to certain standard sizes, making them look all uniform.
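
Padding to standard sizes is straightforward in principle. A hedged sketch (bucket sizes chosen arbitrarily for illustration; a real protocol would also pad with random bytes and bind the length cryptographically):

```python
# Pad every message up to the next fixed bucket size so an eavesdropper
# watching ciphertext lengths learns as little as possible.
BUCKETS = (256, 1024, 4096, 16384)

def pad(message):
    size = next(b for b in BUCKETS if b >= len(message) + 2)
    # Two-byte length prefix, then the message, then zero filler.
    return len(message).to_bytes(2, "big") + message + b"\x00" * (size - len(message) - 2)

def unpad(padded):
    n = int.from_bytes(padded[:2], "big")
    return padded[2:2 + n]

for text in (b"hi", b"x" * 300):
    p = pad(text)
    assert len(p) in BUCKETS and unpad(p) == text

print([len(pad(b"a" * n)) for n in (5, 300, 3000)])  # [256, 1024, 4096]
```

After padding, a two-byte chat message and a 250-byte one are indistinguishable on the wire; only the bucket boundaries leak.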

10. Workflow: Group messaging with PGP is impractical.

Have you tried making a mailing list with people sharing private messages? It's a cumbersome configuration procedure and inefficient since each copy is re-encrypted. You can alternatively all share the same key, but that's a different cumbersome configuration procedure.
Modern communication tools automate the generation and distribution of group session keys so you don't need to worry. You just open up a working group and invite the people to work with.
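
The structural trick behind such group session keys can be sketched in a few lines. The "wrap" below is a deliberate toy (XOR against a hash of each recipient's key-encryption key); real systems would use an AEAD, but the shape is the same: the payload is encrypted once under a session key, and only that small key is wrapped per recipient.

```python
import os, hashlib

def xor_wrap(key, kek):
    """Toy key wrap: XOR against a hash of the key-encryption key.
    Illustrative only -- real systems would use an AEAD here."""
    pad = hashlib.sha256(kek).digest()
    return bytes(a ^ b for a, b in zip(key, pad))

recipients = {name: os.urandom(32) for name in ("alice", "bob", "carol")}

# One session key encrypts the (possibly large) payload exactly once...
session_key = os.urandom(32)
# ...and only the 32-byte session key is wrapped per recipient.
wrapped = {name: xor_wrap(session_key, kek) for name, kek in recipients.items()}

# Each recipient unwraps with their own key; XOR wrap is its own inverse.
for name, kek in recipients.items():
    assert xor_wrap(wrapped[name], kek) == session_key

print(f"1 payload encryption + {len(wrapped)} small key wraps")
```

Contrast this with the PGP mailing-list situation above, where either the whole message is re-encrypted per member or everyone must share one long-term key by hand.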

11. Complexity: Storing a draft in clear text on the server

Update: These days mail tools are too complicated. Here comes Enigmail, which is in charge of encrypting mails before they leave Thunderbird. But wait, didn't Thunderbird just store a draft? Yes, and since I happen to have IMAP configured, it stored the draft on my server. Did it care that I had checked the flag saying I intend to encrypt the mail? No, the draft sits on the server in the clear. I look around and find out that Claws has had the same bug. I'm not surprised; after all, it's the most natural way of doing things: one person implements IMAP, another implements PGP support, and they never bump into each other and realise that the default behaviour of a mail agent supporting both is to do the one thing it should never do: send the unencrypted mail to the server. This makes the entire effort to use PGP useless. I looked around for warnings, but even the best manuals for doing PGP correctly, while aware of a lot of problems, don't mention this one. I am only on day three of really using PGP, and I have already discovered a security flaw that hardly anyone has talked about before. Is this normal? I have Thunderbird 17.0.8. And you?
P.S. I recommend you to turn off saving mail drafts to the server.

12. Overhead: DNS and X.509 require so much work.

This may seem unrelated, but PGP builds upon e-mail, and e-mail unnecessarily forces a dependency on DNS and X.509 on us (the TLS and HTTPS certification standard that makes us need certificates signed by an authority, and that can be fooled and broken anyway). Both cost money to participate in and have to be meticulously administered. Anyone who has tried it knows: mail (and also Jabber) server administration is annoying and expensive.
All the modern alternatives are either based on DHT technology, social graph discovery or opportunistic broadcast. All of them are powered by the mere fact that you are using the software. Frequently there will be sponsored servers providing for faster service, as it has become the standard for Tor, but the administration of such servers is trivial: Just unpack the software and run it (exit nodes are a special case).
Why are you accepting being enslaved by e-mail?

13. Targeted attacks against PGP key ids are possible

PGP has a bad habit of using truncated fingerprints as key ids, organizing keys in its database by short key id and treating keys with the same short key id as probably being the same, although it isn't so hard to make a new key pair that resolves to the same key id as an existing one. This seems to be a problem even with long key ids. Now people say you should use the full fingerprint, but I remember a time when it was said that the purpose of fingerprints was just to simplify the comparison of keys among human beings. Computers should always ensure the identity of a public key by comparing nothing less than the public key itself. By using short ids for maintaining keys, the PGP software implementations are doing it wrong.
One possible consequence of this is that users could be tricked into accepting a false replacement key from a key server or in some other way confuse their key management to the point of corrupting a communication path that used to be safe and allowing a man in the middle into the game. People who have just their short key id printed on their business card could suffer targeted man in the middle attacks: The MITM just needs to intercept the keyserver look-up, which as we know is unencrypted by default, and produce the false recipient data. The MITM must then also intercept in- and outgoing SMTP traffic in order to re-encrypt the mail conversation on the fly to the actual key the recipient expects and vice versa. This can in fact be automated to undermine the PGP infrastructure on a large scale, but it would not go unnoticed whereas a targeted attack most likely would.
You can make the attack slightly more difficult by using encrypted key server look-ups (= learn to configure gpg to use sane defaults), but since the key servers do not use PGP to authenticate themselves you can still suffer a MITM attack on the TLS certification level (see X.509 above). And of course there is also the possibility of the key server itself being used in a targeted operation against you. In practice the only currently secure way to communicate a key on a business card is to print its entire fingerprint along with the look-up id – and not forget to actually check it (happened to me, so I bet it happens to you).
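
Some back-of-the-envelope arithmetic shows why short key ids are so weak. A traditional short key id is only 32 bits wide, so a targeted collision costs about 2^32 key generations, and the birthday bound makes accidental collisions likely at well under 100,000 keys:

```python
import math

BITS = 32  # width of the traditional short key id
space = 2 ** BITS

# Targeted (preimage-style) collision against one specific short id:
# about 2^32 key generations -- feasible on commodity hardware.
print(f"targeted collision: about {space:,} key generations")

# Birthday bound: number of random keys before two share a short id
# with even odds.
birthday = math.sqrt(2 * space * math.log(2))
print(f"some collision among ~{birthday:,.0f} keys is already even odds")
```

With roughly 77,000 keys already enough for an even-odds accidental collision, a keyserver holding millions of keys is guaranteed to contain short-id clashes, quite apart from deliberately crafted ones.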

14. TL;DR: I don't care. I've got nothing to hide.

So you think PGP is enough for you since you aren't saying anything reeaally confidential? Nobody actually cares how much you like to lie to yourself stating you have nothing to hide. If that was the case, why don't you do it on the street, as John Lennon used to ask?
It's not about you, it's about your civic duty not to be a member of a predictable populace. If somebody is able to know all your preferences, habits and political views, you are causing damage to democratic society. That's why it is not enough that you are covering naughty parts of yourself with a bit of PGP, if all the rest of it is still in the nude. Start feeling guilty. Now.
Update: There's a mistaken assumption the generation of 1968 made that has repercussions to this day: the idea that all political work should be open and transparent. During the period of the Enlightenment, with all the suffered experience of absolutist rule, the idea of Secrecy of Correspondence may not only have been about giving a "right" to the citizen but about creating a platform where alternative democratic thinking has a chance to form and grow before the government in power can inspect and influence the process. Given such a perspective, in a world of XKEYSCORE and KARMA POLICE it is maybe not so surprising that we hardly ever see alternative democratic thinking actually manifesting itself in government. From this perspective, most government politics should be open and transparent, whereas all NGO and innovation thinking should happen in a place safe from government scrutiny - and in these days of globalization that includes all governments plus their cloud-based helpers. Whoever says they have nothing to hide may, seen with the eyes of the 18th century, be walking all over the founding principles of democracy with their dirty feet - because they are impeding others from renewing democracy the way it is meant to be renewed.

15. The Bootstrap Fallacy: But my friends already have e-mail!

But everyone I know already has e-mail, so it is much easier to teach them to use PGP. Why would I want to teach them a new software!?
That's a fallacy. Truth is, all people that want to start improving their privacy have to install new software. Be it on top of super-surveilled e-mail or safely independent from it. In any case you will have to make a safe exchange of the public keys, and e-mail won't be very helpful at that. In fact you make it easy for Mallory to connect your identity to your public key for all future times.
So installing a brand new software that only provides for safe encrypted communications is actually an easier challenge than learning how to use PGP without messing it up.
If you really think your e-mail set-up is so amazing and you absolutely don't want to start over with a completely different kind of software, look out for upcoming tools that let you run a mail client on top of them - not the other way around.

But what should I do then!??

So now that we know n reasons not to use e-mail and PGP, let's first acknowledge that there is no obvious alternative. Electronic privacy is a crime zone with blood freshly spilled all over it. None of the existing tools are fully good enough. We have to get used to the fact that relevant new tools will come out all the time, and that you will want to switch to new software twice a year. Mallory has an interest in making us believe encryption isn't going to work anyway - but internal data leaked by Mr Snowden confirms that encryption actually works. We just have to take care to use it the right way.

There is no one magic bullet you can learn about.

You have to get used to learning new software frequently, and to teaching the basics of encryption independently of any particular software.
In the comparison we have listed a few currently existing technologies that provide a safer messaging experience than PGP. The frequent problem with these is that they haven't been peer-reviewed. You may want to invest time or money in getting such projects reviewed for safety.
Pond is currently among the most interesting projects for mail privacy, hiding its padded, undetectable crypto in the general noise of Tor. Tor is a good place to hide private communication, since the bulk of Tor traffic seems to be anonymized transactions with Facebook and the like. An even better source of cover traffic is file sharing; that's why RetroShare and GNUnet both ship solid file-sharing functionality to hide your communications in. Bitmessage even tries to make it work on top of a Bitcoin-like architecture. Very daring. Other interesting developments are Briar and our own secushare, but they aren't ready yet.
Mallory will try to adapt and keep track of our communications as we dive into cover traffic, but it will be a very hard challenge for him, not least because all of these technologies are working on switching to Curve25519. GNUnet intends to support only Curve25519, to impede downgrade attacks - until the next best practice comes out. It's an arms race. Time to lay down your old bayonet while Mallory is pointing a nuclear missile at you.
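For the curious, the Curve25519 operation these projects are converging on is small enough to sketch in pure Python. The following toy X25519 scalar multiplication follows the Montgomery-ladder pseudocode of RFC 7748; it is for illustration only - the branching swap is not constant-time, so it leaks timing information a real implementation must not:

```python
P = 2**255 - 19   # the curve25519 field prime
A24 = 121665      # (486662 - 2) / 4, the curve constant used by the ladder

def _clamp(scalar: bytes) -> int:
    """Decode and clamp a 32-byte X25519 secret scalar (RFC 7748)."""
    k = bytearray(scalar)
    k[0] &= 248
    k[31] &= 127
    k[31] |= 64
    return int.from_bytes(k, "little")

def x25519(scalar: bytes, u_bytes: bytes) -> bytes:
    """Toy X25519: Montgomery-ladder scalar multiplication, not constant-time."""
    k = _clamp(scalar)
    x1 = int.from_bytes(u_bytes, "little") & ((1 << 255) - 1)
    x2, z2, x3, z3, swap = 1, 0, x1, 1, 0
    for t in reversed(range(255)):
        kt = (k >> t) & 1
        swap ^= kt
        if swap:                      # branching swap: toy code only
            x2, x3, z2, z3 = x3, x2, z3, z2
        swap = kt
        a, b = (x2 + z2) % P, (x2 - z2) % P
        aa, bb = a * a % P, b * b % P
        e = (aa - bb) % P
        c, d = (x3 + z3) % P, (x3 - z3) % P
        da, cb = d * a % P, c * b % P
        x3 = (da + cb) * (da + cb) % P
        z3 = x1 * (da - cb) % P * (da - cb) % P
        x2 = aa * bb % P
        z2 = e * (aa + A24 * e) % P
    if swap:
        x2, z2 = x3, z3
    return (x2 * pow(z2, P - 2, P) % P).to_bytes(32, "little")

# Diffie-Hellman over the curve: both sides derive the same shared secret
# (the secret-key bytes here are arbitrary placeholders, not real keys).
BASE = (9).to_bytes(32, "little")
alice_secret, bob_secret = bytes(range(32)), bytes(range(32, 64))
alice_public = x25519(alice_secret, BASE)
bob_public = x25519(bob_secret, BASE)
assert x25519(alice_secret, bob_public) == x25519(bob_secret, alice_public)
```

The final assertion is the whole point of the exchange: each side combines its own secret with the other side's public value and arrives at the same 32-byte shared secret, without that secret ever crossing the wire.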

Thank you, PGP.

Thank you, Mr Zimmermann, for bringing encryption technology to ordinary people back in 1991. It has been an invaluable tool for twenty years; we will never forget it. But it is long overdue to move on.

No wait, let's use PGP just a little bit longer.

Jacob Appelbaum recommends using PGP over Pond instead of over e-mail. Indeed, in that case most of the weaknesses listed above are no longer a problem. Also, you no longer depend entirely on the safety of Tor and Pond, so it doesn't matter so much that Pond hasn't been peer-reviewed yet, as long as it works. You can even use PGP in a repudiable way, since Pond takes care of authentication. Actually, this should work with any of the P2P alternatives to SMTP.

Questions and Answers

Some questions were posed on libtech which deserve an answer:

What's the threat model here?

What if Mallory isn't a well-funded governmental organization but is the admin who runs your employer's email servers?
That's a good point. The reason I don't pay attention to lesser threat models is that the loss in quality of democracy we are currently experiencing is large enough that I don't see much use in distinguishing threat models - especially since alternatives that work better than PGP exist, so they are obviously also better for lesser threat models.
For example, I don't think a dissident in Irya (a fictitious country) is better off if no one but Google Mail knows that he is a dissident. Should anyone with access to that data find it useful to use against him at any later time in his life, he will. And who knows what the world will look like twenty years from now?
I'm not saying give up and die. I'm saying: if you can opt for better security, don't postpone learning about it. If you can invest money in making it a safe option, don't waste time on yet another PGP GUI project or the crowdfunding hype of the day.
If employers, schools, parents, or script kiddies can find out who you are exchanging encrypted messages with, that can be a very real threat to you. Using a tool that looks like it does something totally different - on your screen, over the network, and even on your hard disk - can save your physical integrity.

Is this about PGP or rather about e-mail?

I don't think it makes much difference to the end user whether it's SMTP federation or PGP itself that is failing her. It's slightly more about SMTP.

What about S/MIME?

"S/MIME unfortunately suffers from many of the same issues as OpenPGP, and then some more." I don't find S/MIME worth mentioning anymore. It has so failed us.

We need a new open standard first!

Open standards are part of the problem, not the solution. It is a VERY BAD development that it has become en vogue to demand standardization from projects that haven't even started functioning. It has been detrimental to the social-tool scene: none of those tools work well enough to actually scale and replace Facebook, yet their scalability problems are already being cemented into "open standards," ensuring that they never will function. The same thing happened with Jabber as it turned into XMPP.
You must ALWAYS have a working pioneer tool FIRST, then dissect the way it works and derive a standard from it. BitTorrent is a good example of that: it's one of the few things that actually works. Imagine if Napster and Soulseek had developed an open standard. It would only have delayed the introduction of BitTorrent, promoting an inferior technology through standardization.

Why don't we fix all of these problems with PGP and e-mail?

Even if all the effort a project like LEAP is striving for pays off, you will still receive spam and unencrypted mail, simply because you have a mail address. You will still have a multitude of hosts that remain "unfixed" because they don't care to upgrade. You will still carry a dependency on DNS and X.509 around your neck, just to stay backwards compatible with an e-mail system you hope never to send or receive messages through, since they would damage your privacy. And I still don't see by which criteria a dissident should pick a trustworthy server. I know I can rent one, but even a root shell on my "own" server doesn't mean it is safe. It's better not to need any!
So what is this terrific effort to stay backwards compatible good for? I don't see it as a worthwhile goal. So much about it is broken, while a fresh start, where every participant is safe by definition, is so much more useful. In particular, you avoid the usability challenge of having to explain to your users that some addresses are super-duper safe while others lack a solid degree of privacy.
One major problem with the new generation of privacy tools is that they are so simple that people have a hard time believing they actually work.