Friday, May 31, 2013

LinkedIn gets a little safer with two-step verification

The new security measure makes it harder for hackers to access your account because it requires both your password and access to your mobile phone.

Following Twitter's lead, LinkedIn introduced two-step verification as an optional security feature members can use to protect their accounts.
LinkedIn's new security measure emulates the two-step verification process of other sites and requires members to input a code, sent via SMS, when logging in from an unrecognized device for the first time.
"Most internet accounts that become compromised are illegitimately accessed from a new or unknown computer," LinkedIn Director Vicente Silveira wrote on the company blog. "When enabled, two-step verification makes it more difficult for unauthorized users to access your account, requiring them to have both your password and access to your mobile phone."
The update follows a similar addition from Twitter, where high-profile members are frequent targets of hacking exploits. But LinkedIn has been far from a safe haven. Last year, the company was publicly embarrassed when it fell victim to hackers who gained access to millions of passwords that were then posted online.
Enabling two-step authentication should make it more difficult for hackers to access your LinkedIn account, but it's not an impenetrable system, as CNET senior reporter Seth Rosenblatt explains in his FAQ on the login system.

Tuesday, May 28, 2013

How secure is quantum cryptography?

California law would require breach notice if online account information is stolen


The state Senate in California has unanimously passed a bill that would require breached organizations to alert victims when intruders access online account information belonging to consumers.
Existing state law only requires notification when unauthorized individuals obtain "unencrypted Social Security numbers, driver's license numbers, medical information, health insurance information and specific financial account information, such as credit card numbers with security codes," according to Senate Majority Leader Ellen Corbett, who introduced the measure.
The new legislation, passed last week, would amend the definition of "personal information" under the state's breach notification law to also include "a username or email address, in combination with a password or security question and answer that would permit access to an online account."
Many consumers use the same login information across several websites, so theft of this data from one entity could allow fraudsters to potentially raid other accounts, such as online banking. According to documents chronicling the bill's history, it appears the flurry of mega password breaches this year, affecting companies like Yahoo and LinkedIn, prompted the update to the breach notification law.
“Cyber criminals are becoming increasingly savvy, particularly now that more individuals are using laptops, smartphones and even tablets to conduct personal business and shop online," Corbett said in a statement. "It is critical that consumers are informed whenever their information is accessed or stolen to minimize potential theft and damages."
The bill, dubbed SB-46, now makes its way to the state Assembly.
In 2003, California was the first state to enact a data breach notification law. Since then, nearly all other states have followed suit. There is no federal law, though there are national notification guidelines related to health care breaches.

KPMG In Germany Selects SunGard’s Adaptiv Analytics For CVA’s Under Basel III

KPMG AG Wirtschaftsprüfungsgesellschaft (KPMG) has chosen to use SunGard’s Adaptiv Analytics to help provide its clients with assessments of credit valuation adjustments (CVA) and to support their use of simulation-based approaches to compute derivatives exposures for internal risk steering and regulatory capital calculations.
For the latter purpose, KPMG has integrated Adaptiv Analytics with its proprietary risk-weighted asset (RWA) calculator and related tools to create “IMM-2-Go,” a framework to help firms rapidly implement an internal models method (IMM) to calculate Basel III default risk and CVA risk charges with respect to derivatives exposures. The use of an IMM can help alleviate a firm’s regulatory capital constraints, which can provide an important competitive advantage under Basel III.
Cubillas Ding, research director at Celent, said, “In today’s increasingly competitive and unpredictable global markets, it is critical to efficiently use and manage capital, with Basel III RWA and CVA as areas of particular concern for many financial institutions. It is important for firms to prepare and equip themselves to perceive and discern risk more clearly to develop competitive advantage in our rapidly changing environment as new risks and regulatory realities emerge on the horizon.”
Juerg Hunziker, president, trading and risk, SunGard’s capital markets business, said, “Basel III is pressuring firms to rapidly adopt advanced simulation approaches to calculate measures like RWA and CVA for internal steering of credit risk. SunGard’s Adaptiv Analytics can help firms and their clients to quickly respond to these requirements and support efficient management of counterparty exposure and CVA.”

Monday, May 27, 2013

Proxy research firm settles charges with SEC over client breach

Institutional Shareholder Services (ISS), a research firm that advises clients on voting in proxy fights, must pay $300,000 to the U.S. Securities and Exchange Commission (SEC) to settle charges that it failed to protect client information due to access control shortfalls.
The breach happened after an ISS employee allegedly divulged sensitive information to a proxy solicitor, a person hired to gather shareholder votes in proxy contests, in exchange for gifts.
"An SEC investigation found that an employee at ISS provided a proxy solicitor with material, nonpublic information revealing how more than 100 ISS institutional shareholder advisory clients were voting their proxy ballots," the SEC said in a Thursday news release. "In exchange for voting information, the proxy solicitor provided the ISS employee with meals, expensive tickets to concerts and sporting events, and an airline ticket."
ISS is an SEC-registered investment adviser.
The breach, which had been ongoing from 2007 to 2012, was enabled by ISS failing "to establish or enforce written policies and procedures reasonably designed to prevent the misuse of material, nonpublic information by ISS employees. Specifically, ISS lacked sufficient controls over employee access to databases of confidential client vote information."

Sunday, May 26, 2013

9 Biggest Data Encryption Myths Busted

  • By David Tishgart, Gazzang

Rarely a day goes by that you don’t hear about a data breach. Hospital records stolen. Social media accounts hacked. Education transcripts revealed. Every industry is susceptible and every company is at risk. The result can be embarrassing and expensive at best and absolutely crippling at worst, with potential fines, time-consuming lawsuits, and subsequent loss of customer trust.
The steady pace of breaches reinforces the need for encryption as a last line of defense. Recently however, one of the oldest and most effective security tactics has been largely relegated to an afterthought in today’s new cloud and big data environments.
This is the result of some common misperceptions about encryption and key management related to cost, performance and ease of use.
Today we set the record straight, breaking down the nine biggest encryption myths.
MYTH 1: Encryption is only for organizations that have compliance requirements.
Certainly any company in a regulated industry that mandates data security and privacy should encrypt. That’s a no-brainer. But a better way to think about encryption is this: if you’ve got data about your products, customers, employees or market that you believe is sensitive or competitive, then you should ALWAYS encrypt it, whether there’s a legal obligation or not.
MYTH 2: SSL encrypts data everywhere.
SSL only encrypts data in motion; it does not cover data at rest. As data is written to disk, whether it’s stored for one minute or several years, it should be encrypted.
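To illustrate the at-rest half of that equation, here is a minimal sketch using the third-party Python `cryptography` package (an assumption on my part; any authenticated-encryption library would do the same job): the record is encrypted before it ever touches disk, while SSL/TLS would separately protect it in motion.

```python
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()   # store this key separately from the data
f = Fernet(key)

record = b"patient-id=1234; diagnosis=..."
ciphertext = f.encrypt(record)   # this, not the plaintext, is written to disk

# Only a holder of the key can recover the plaintext
assert f.decrypt(ciphertext) == record
```

Fernet bundles AES encryption with integrity checking, so tampered ciphertext is rejected rather than silently decrypted.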
MYTH 3: Encryption is too complicated and requires too many resources.
Data encryption can be as complicated or as easy as you want to make it. The key is to understand the type of data that needs to be encrypted, where it lives and who should have access to it. There are plenty of readily available, easy to use and affordable encryption tools on the market. If application performance is important, look for a transparent data encryption solution that sits beneath the application layer and does not require modifications to your operating system, application, data or storage.
MYTH 4: Encryption will kill database performance.
There are a number of factors that impact database performance, and encryption is just one. Application-level encryption tends to pack the greatest performance hit, while the file-level encryption penalty is much lower. For maximum application performance, run block-level encryption on a system that supports the Intel AES-NI instruction set.
MYTH 5: Encryption doesn’t make the cloud more secure.
On the contrary: in many cases, storing encrypted data in the cloud is more secure than keeping it on premises, where insiders may have easier access. To ensure the safekeeping of encrypted data in the cloud, make sure you, not your cloud provider, maintain control of the encryption keys. If your provider requires you to hand over your keys, find another cloud service.
MYTH 6: Encrypted data is secure data.
Too many organizations fail to effectively manage their encryption keys, either storing them on the same server as the encrypted data or allowing a cloud provider to manage them. Either practice is akin to locking your car and leaving the keys in the door. Good key management, with strong policy enforcement, makes all the difference.
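The separation this myth calls for can be sketched with a toy stand-in for a dedicated key server. The class and its policy model are illustrative assumptions, not a real KMS API; the point is simply that keys live apart from the data and every fetch is policy-checked.

```python
import secrets

class ToyKeyServer:
    """Stand-in for a dedicated key manager: keys live here, never
    alongside the encrypted data, and every access is policy-checked."""
    def __init__(self):
        self._keys = {}      # key_id -> key bytes
        self._policy = {}    # key_id -> set of principals allowed to fetch

    def create_key(self, key_id, allowed):
        self._keys[key_id] = secrets.token_bytes(32)   # 256-bit key
        self._policy[key_id] = set(allowed)

    def fetch_key(self, key_id, principal):
        if principal not in self._policy.get(key_id, set()):
            raise PermissionError(f"{principal} may not use {key_id}")
        return self._keys[key_id]

ks = ToyKeyServer()
ks.create_key("db-backup", allowed={"backup-service"})
```

In this model a cloud provider simply isn't on the allowed list, so even with the ciphertext in hand it cannot obtain the key, which is the property the myth-busting above is after.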
MYTH 7: Key management requires expensive, cloud-averse hardware.
While this was once true, today there are effective software-based solutions that enable organizations to deploy key management in the cloud or on premises. These solutions can typically be provisioned far faster than hardware security modules (HSMs), are very cloud friendly and meet most compliance statutes.
MYTH 8: If your data is encrypted, it can’t be stolen.
There is no security solution that will protect your data 100 percent. In fact, companies should operate with the mindset that their data can, and likely will, be compromised at some point. Data encryption can make the breach aftermath much more palatable, though, since encrypted data cannot be decrypted without the key.
MYTH 9: Encryption is old school. I need a newer security technology to protect big data.
Data encryption is a proven security technique that works very well in modern NoSQL environments. As big data projects move from pilot to production, sensitive data such as protected health information (PHI), financial records, and other forms of personally identifiable information (PII) will likely be captured, processed, analyzed and stored.  Encryption is just as integral to securing data in NoSQL as it is in traditional relational database systems.
Firewalls and VPNs can provide some protection against data breaches and theft, but there is no substitute for strong encryption and effective key management, especially in big data and cloud environments. Now that the biggest myths have been busted, there’s no longer an excuse not to encrypt.
David Tishgart is director of product marketing and strategic alliances at Gazzang.

Wednesday, May 22, 2013

Hold Merchants Accountable for Breaches?

Banking Group Asks Congress to Take Action

May 22, 2013. Follow Tracy @FraudBlogger

Banking institutions rarely recover the financial losses they suffer after cards are exposed as the result of a retail breach. Losses have increased in the last year as a result of targeted malware attacks specifically designed to capture card data.
Card issuers say they don't hold their breath for much to change, at least not near-term. But the National Association of Federal Credit Unions is asking Congress to step in and hold breached retailers and processors accountable when their lax security practices result in the leakage of card data.
Will Congress take notice of the recommendations? And is the NAFCU the right group to press for legislation? I don't know if the NAFCU on its own has the muscle to push Congress to take notice, but the group's advocacy is commendable.
I'm hopeful other banking organizations, such as the American Bankers Association and the Independent Community Bankers of America, join the cause.
Retail card compromises have for too long been pain points for banks and credit unions that card brands have failed to address. Retailers need to take on more responsibility for the breaches they suffer. Regulatory reform, which calls for more scrutiny of their networks and systems, is a viable solution.

A 5-Point Plan

The NAFCU's Five-Point Plan for Regulatory Relief recommends establishing national standards for the protection of all financial information, including payment card data. It also recommends holding merchants accountable for expenses, such as costs associated with card re-issuance, if card numbers and details are exposed during a breach. It calls for creating uniform federal enforcement standards for data security, which would prevent merchants from storing card and other financial information. And it asks that merchants be required to share their data security policies with customers.
The five-point plan also recommends that the burden of proof after data breaches fall back onto the merchant and/or processor that is attacked, rather than, as is the current practice, relying on card issuers to trace the fraud back to a common point of suspected compromise.
David Carrier, NAFCU's chief economist, says the average annual cost to a credit union after a retail breach involving card numbers is $86,000, based on a recent survey of the association's 800 institution members. Those expenses include the issuance of new cards and covering losses - such as account losses - when fraudulent transactions occur. "That was much higher than expected," he says. "We think merchants need to be held accountable for breaches due to their own negligence. As it is right now, credit unions end up paying."
Complying with the Payment Card Industry Data Security Standard should mean that processing networks and POS devices and systems are not storing or exposing card data. But it doesn't, as recent retail attacks prove.

Retailers' Role for Better Security

Ensuring point-of-sale devices and systems are secure isn't easy. Nick Percoco, senior vice president at security vendor Trustwave, says legacy POS terminals, for example, often inadvertently store data.
"Today we see malware that is much more advanced," he says. "There is a population of merchants in the U.S. that still have point-of-sale systems that are ripe for these types of attacks. Right now, not all merchants are secure."
The PCI Security Standards Council, the card brands and others are pushing merchants to get all of their outdated devices and systems upgraded to avoid these types of security vulnerabilities. But that effort will take time.
And while the PCI-DSS clearly prohibits the storing of card data, it does not require full, point-to-point data encryption.
"PCI does not require encryption of data if it's being transmitted over a private network," Percoco says. "So if you have a merchant with a corporate office and 1,000 locations, and the data is being transmitted to other locations over a VPN, it can be sent in the clear."
Criminals know if they hack a corporate environment, they likely will have access to clear text data, he adds.
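One way to close the gap Percoco describes is to wrap internal connections in TLS regardless of how "private" the underlying network is. A minimal sketch using Python's standard `ssl` module; the host and port are whatever your internal endpoint happens to be, and this is an illustration of the principle, not a PCI-mandated configuration.

```python
import socket
import ssl

def make_tls_context():
    """Default context: certificate verification and hostname checks on."""
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse legacy protocols
    return ctx

def connect_encrypted(host, port):
    """Card data sent over this socket is encrypted end to end,
    even across a 'trusted' corporate VPN."""
    raw = socket.create_connection((host, port))
    return make_tls_context().wrap_socket(raw, server_hostname=host)
```

With point-to-point encryption like this, an attacker who compromises the corporate environment sees ciphertext rather than the clear-text card data Percoco warns about.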

And then there's the issue of enforcement. The PCI SSC oversees the PCI-DSS, but it has no authority to enforce compliance. Visa and MasterCard require merchants and processors to attain PCI compliance in order to transact on their networks. But there's no uniformity to PCI audits, nor is there uniformity to how the qualified security assessors who perform the audits carry out their reviews.
And for banking institutions, as issuers, the costs associated with protecting card data after it's exposed are tough to recoup. Tracing card compromises to their source is becoming increasingly difficult as well.
Card issuers have to ensure they detect compromises as quickly as possible to limit their losses. As it is, issuing institutions are typically the first to identify an attack and link it to a breach.
But merchants and processors should be investing in systems and technologies that help them better detect the attacks their networks suffer. The problem is, they have little incentive to do so.
Until retailers and processors are held more accountable for losses and insufficient security practices, not much will change.
Legislation could really make a difference, and the NAFCU deserves praise for its five-point plan. I hope other groups will lend their support to the effort as well.

Monday, May 20, 2013

SunGard Introduces New Consolidated Protegent Compliance Platform To Help Manage Regulatory Compliance

SunGard has developed the Protegent Compliance Platform, a new consolidated technology platform to help financial firms achieve competitive advantages by streamlining their regulatory compliance management.
A common platform can help firms use their compliance technology budgets more efficiently, enabling them to keep up with industry changes while continuing to effectively mitigate increasingly complex regulatory risks. SunGard’s Protegent Compliance Platform’s integrated core components help firms reduce the time and resources required for implementation, support and training while increasing compliance visibility of key data through a single, consolidated interface. The web-based Protegent Compliance Platform addresses the needs of global organizations by providing multi-language and multi-time zone capabilities while also positioning the applications for use on mobile and tablet devices.
SunGard’s Protegent solutions for compliance help firms manage:
  • Market abuse and insider trading
  • Personal trading and conflicts of interest
  • Sales practice review
  • Best execution
  • Trade cost analysis
  • Policy management
  • Social media surveillance
  • Customer onboarding
Sang Lee, managing partner, Aite Group, said, “The nature of new regulations is leading firms to rethink their organizational structures, business processes, and underlying technology architectures. Investing in a consolidated compliance technology framework could greatly enhance firms’ ability to keep pace with rapidly changing regulatory demands, thereby helping enable these firms to re-focus on identifying new revenue opportunities and staying ahead of the competitive curve.”
Steve Sabin, chief operating officer, SunGard’s Protegent business, said, “SunGard’s consolidated Protegent Compliance Platform helps our customers achieve a competitive advantage by being able to respond more quickly to regulatory inquiries and audits such as providing evidence of review and other requested data without causing a costly and disruptive drain in resources.”      

DDoS attack trends highlight increasing sophistication, larger size

The CSO perspective on healthcare security and compliance

by Mirko Zorz - Editor in Chief - Monday, 20 May 2013.


Friday, May 17, 2013

Configuration Management for Virtual and Cloud Infrastructures

The top seven things to consider

May 17, 2013.
By Ronni J. Colville and George Spafford

Configuration management is a key process for any IT endeavor — including legacy IT systems, as well as private and public clouds. Without visibility to the configuration of the relevant IT service, IT will not be able to manage the multisourced cloud infrastructure and software.

Organisations adopting virtualisation and cloud delivery services need to review their configuration management processes to ensure that they are optimised to support these services.

A review of the configuration management process should focus on, and alter as required, the process design, including inputs and outputs, workflows, controls, roles and responsibilities, data models, reporting and opportunities for process improvement.


Through 2015, 80% of outages impacting mission-critical services will be caused by people and process issues, and more than 50% of those outages will be caused by change/configuration/release integration and hand-off issues.

As IT adopts technologies such as virtualisation and cloud services, new dynamics will be introduced (e.g., mobility and offline/online), as well as opening its doors to external providers (e.g., infrastructure as a service [IaaS]). This complexity will require IT to add more rigor (not less) to its configuration management process.

As the number of internal and external service providers increases, the need for timely, accurate and secure information flows also increases. With any delivery method, configuration plays a vital role in providing logical views of IT services, including changes to configurations.

Consider the following questions and responses to right-size your configuration management process for virtual and cloud infrastructures:

1. How well are standards defined and followed? Standard implementations bring predictability and speed in deployment, but the mobility of virtualisation adds unpredictability in performance, because changes can be done in real time without an impact assessment. Add a shared infrastructure (e.g., multiple VMs per host and cluster) and what was standard and predictable for one IT service will potentially be affected by other IT services. These new dynamics will affect how standards are assessed and maintained, and will require closer inspection of how dynamic (versus standard and static) the IT service blueprint should be. Standards will need to be reassessed on an ongoing basis to ensure scalability and predictable availability.

2. How well are IT services documented or tracked in systems such as the configuration management database (CMDB)/configuration management system (CMS)? The CMDB/CMS will maintain a trusted view using integration and federation to bring in configuration data from a wide variety of sources. Some discovery sources can take triggers from virtual infrastructures and become closer to a "real-time view." This view, coupled with a runtime view for application performance, will enable better predictive planning. Because visibility to public cloud infrastructures can be limited with today's discovery tools, it is critical for IT organisations to understand the service or application, and how it is manifested (internally and externally).

3. How well is automation used to discover and execute changes? While IT resources are often experts, they are still prone to human errors. Using automation to discover and better target changes will significantly reduce outages. Automating provisioning without understanding the impact of the single change to a system or software on the broader IT service or application may have a negative effect (e.g., outage) systemwide. In addition, given the frequency of changes to virtual and cloud infrastructures, coupled with new agile development and deployment, automation will improve the speed of changes and accommodate the increased volume of changes without a corresponding increase in errors, a scale humans cannot match on their own.
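One common form of the automation described above is drift detection: fingerprint each configuration, then diff the current state against a recorded baseline before and after changes. A minimal sketch; the function names and the string-keyed data shapes are illustrative, not any particular tool's API.

```python
import hashlib

def fingerprint(configs):
    """Map each config name to a SHA-256 digest of its content."""
    return {name: hashlib.sha256(text.encode()).hexdigest()
            for name, text in configs.items()}

def detect_drift(baseline, current):
    """Report items that changed, appeared, or disappeared since the
    baseline was recorded: candidates for review before they cause an outage."""
    drift = {}
    for name in baseline.keys() | current.keys():
        if baseline.get(name) != current.get(name):
            drift[name] = ("missing" if name not in current else
                           "new" if name not in baseline else "changed")
    return drift
```

Run against a CMDB-style baseline, a report like this narrows impact assessment to exactly the items that moved, rather than relying on humans to notice.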

4. How well are audit requirements for contractual and regulatory compliance addressed? Enterprises can no longer exist without mechanisms that prove sufficient control is in place. Virtualisation enables the swift and real-time movement of servers and applications from one place to another. Due to this type of movement, organisations could fail to comply with restrictions, which could subject the enterprise to significant consequences. This applies not just to country- or industry-specific regulations (e.g., payment card industry) or security-based regulations (e.g., Center for Internet Security [CIS]), but broader regulations (e.g., the USA Patriot Act) that will impact or support global operations.


5. How well are software licenses tracked and are they accurate? Virtual infrastructures add mobility and offline dynamics that can present a challenge for tracking application and software usage. IT organisations will have to be prepared with documentation and discovery methods that can prove license compliance.


6. How well does IT already manage multisourced or multivendor operating environments? The public cloud is not necessarily new; in many respects, it's another flavor of outsourcing or software as a service (SaaS). IT organisations are still responsible for their data, their application availability, etc., but now there is a middleman. IT organisations that have best practices in place for multisourced or SaaS infrastructures likely will have less of a challenge adapting their configuration strategies to the public cloud. IT should seek out lessons learned from traditional outsourcing vendors and incorporate them for the broader use cases in the public cloud.

7. What is the degree of business risk that IT organisations will tolerate, associated with specific types of changes (e.g., to business-critical systems, preapproved changes, emergency changes, etc.)? Today, changes are controlled within the IT infrastructure, but cloud infrastructures will take change-impact assessment beyond the corporate firewall into more "opaque" environments (public clouds). As the scope of control alters with public cloud scenarios, business risk factors will need to be re-examined, and existing policies will need to change to enable a 90% success rate or better.

This report is based on independent technology advisory research from Gartner, Inc. Gartner delivers the technology-related insight necessary for IT leaders to make the right decisions every day.

Thursday, May 16, 2013

The State of UK Cybersecurity - PKWARE Blog

Cybercriminals Net Big Payday Targeting Banks - PKWARE Blog

Important Considerations for Seamless McAfee E-Business Server Replacement - PKWARE Blog

List of mandatory documents required by ISO 27001

By Dejan Kosutic on April 09, 2013
It’s actually funny, but it is rather difficult to find a list of all mandatory documents required by ISO 27001 anywhere on the Internet – this problem came to my attention when one of the readers of my blog told me he had to read several of my articles to assemble this list.
Anyway, a complete list of mandatory documents has two parts: the first part is related to documents which are required in the main part of the standard (clauses 4 to 8), and the second part is related to Annex A.
Mandatory documents required in the main part of ISO 27001
The first part is rather straightforward – most of the required documents are listed in clause 4.3.1:
  • ISMS scope
  • ISMS policy and objectives
  • Risk assessment methodology
  • Risk assessment report
  • Statement of Applicability
  • Risk treatment plan
  • Description of how to measure effectiveness of controls
  • Procedure for document management
  • Controls for record management
  • Procedure for internal audit
  • Procedure for corrective action
  • Procedure for preventive action
Records required by the main part of the standard are as follows:
  • Records related to effectiveness and/or performance of the ISMS
  • Records of management decisions
  • Records of significant security incidents
  • Records of training, skills, experience and qualifications
  • Results of internal audit
  • Results of management review
  • Results of corrective actions
  • Results of preventive actions
Documents for Annex A
This is where it gets confusing – ISO 27001 doesn’t require all the controls from Annex A to be implemented, and it doesn’t clearly indicate how each control should be documented. To learn how to determine which controls to implement, read this article: ISO 27001 risk assessment & treatment – 6 basic steps.
The documents that are mandatory in Annex A (provided that the control is applicable) are the following:
  • Information security policy
  • Inventory of assets
  • Rules for acceptable use of assets
  • Definition of roles and responsibilities
  • Operating procedures for information technology and communications management
  • Access control policy
  • List of relevant statutory, regulatory and contractual requirements
  • Records provided by third parties
  • Logs recording user activities, exceptions, events, etc.
And, here are the documents that are quite commonly used when implementing controls from Annex A, although they are not mandatory:
  • Classification policy
  • Change management policy
  • Backup policy
  • Disposal and destruction policy
  • Information exchange policy
  • Password policy
  • Clear desk and clear screen policy
  • Policy on use of network services
  • Mobile computing and teleworking policy
  • BYOD – Bring your own device policy
  • Incident management procedure
Which documents do you think should be used in ISO 27001 implementation?
Click here to download a white paper Checklist of ISO 27001 Mandatory Documentation with more detailed information on the most common ways for structuring and implementing mandatory documents and records.

Wednesday, May 15, 2013

Cloud-service contracts and data protection: Unintended consequences

May 13, 2013, 11:52 AM PDT
Takeaway: There are things your cloud-service (Facebook, Amazon, Google, Dropbox, etc.) contracts aren’t telling you. Michael P. Kassner interviews an attorney concerned about what’s not being said.
“If it’s not private, it’s not protected.”
When I heard Tyler Pitchford mention the above quote in his ShmooCon 2013 talk, “The Cloud, Storms on the Horizon,” I thought he was stating the obvious. I mean, duh: if it’s public, of course it’s not protected. Fortunately for me, I kept watching the video, eventually learning that’s not what Tyler was trying to say.
What’s more, by the end of the video it became apparent that I needed to rethink how and why I use cloud services. Using cloud services could lead to significant legal implications, and ultimately, financial hardships.
If you’re thinking this is yet more chastising to get everyone to read End User License Agreements (EULAs), it’s not. I’m taking aim at what’s not being said in EULAs and privacy policies.
First things first: who is this guy Tyler Pitchford? And, why does an attorney know so much about IT, especially software? Well, Tyler followed a different drummer prior to seeing the judicial light. He graduated with a B.A. in Software Systems Design. After which, Tyler put his expertise to use. If you ever used the file-sharing protocol BitTorrent, you are probably familiar with his BitTorrent client — Azureus.
I don’t know what more a “non-legalese speaking” guy writing about the legal implications of cloud services could ask for.

The cloud legally is?

Like all good attorneys, Tyler first defined the terms under discussion, in this case — the cloud:
The cloud is loosely defined as services (think Google, Facebook, Amazon, LinkedIn, and a whole host of others) delivered over a network. For our purposes: market-speak for resource and cost sharing.
Tyler added one caveat:
The cloud is an excellent way to maximize your resources, but filled with potential legal pitfalls. The larger your operation, the more hassles you’ll face.

Third-party legal issues

Now to the crux of what I wanted to talk about. It may not be correct legalese, but I call it third-party legal issues — something unfortunate happens that is outside our control. In the legal realm, third party refers to:
An individual or group who does not have a direct connection with a legal action, but is affected by it.
Third-party legal issues are particularly important to those of us who use or provide cloud services. Third-party eDiscovery can affect our personal or company’s ability to function. Tyler provided two real-world examples to explain how serious it can be.
First example: A small web-hosting service rented space to a business for its website. The business came under government investigation. Although the web-hosting service itself was not under investigation, it received a third-party subpoena and had to hire an attorney, produce documents, and shut down servers for eDiscovery, ultimately spending $50,000 to meet the conditions of the subpoena.
Second example: A mid-sized business located its servers at a colocation facility. The government began investigating the owners of the colocation facility, issued warrants, and seized everything in the building, including the servers of the mid-sized business, even though that business was not part of the investigation. The exact figure is unknown, but at a minimum the mid-sized business was unable to function until the government returned its servers.
As you can see, through no fault of our own, we can suffer some serious digital and financial trauma. Tyler had several suggestions to reduce the fallout from being an innocent participant in a third-party legal action:
  • Encrypt, encrypt, encrypt!
  • Implement data-retention policies, and follow them religiously.
  • Delete redundant copies.
  • Quarantine data as much as possible.
Each bullet helps isolate your data or your company’s data from other third-party data stored on the cloud service, lowering the interest level of the civil, criminal, or governmental entity investigating the cloud service or another third party using the same cloud service.
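Tyler’s first bullet, encryption, is the one that actually keeps your data out of anyone else’s view. As a minimal illustration of the idea (not production crypto), here is a Python sketch using a one-time pad: the provider stores only ciphertext, and the key never leaves your machine. A real deployment would use a vetted library and an authenticated cipher such as AES-GCM rather than anything hand-rolled.

```python
import secrets

def encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    # One-time pad: XOR the data with a random key of equal length.
    # Illustrative only; real systems should use a vetted crypto library.
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key

def decrypt(ciphertext: bytes, key: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, key))

# Only `ciphertext` is uploaded; a subpoena served on the provider
# yields unreadable bytes without the locally held key.
ct, key = encrypt(b"quarterly financials")
assert decrypt(ct, key) == b"quarterly financials"
```

The point of the sketch is the separation of duties: the cloud service holds ciphertext it cannot read, and the key stays under your control, outside the reach of a third-party subpoena served on the provider.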
Now that we are up to legalese speed, let’s get to some questions.
Kassner: Everyone mentions we should retain an attorney if we do not understand contracts related to cloud services. What kind of attorney is that? What is your specialty?

Pitchford: Sadly, like all things legal, it depends. Generally, you should be able to talk to any good business litigation or contract attorney to handle a general review of cloud-service contracts. If you’re worried about a specific question (privacy, intellectual-property rights, etc.) then you’d want to speak to a specialist.
As for me, I’m an appellate attorney, which means I deal with cases spanning the entire legal field. That said, the areas where I focus most of my time are mass-torts, complex commercial litigation, constitutional law, cyber law, and intellectual property.

Kassner: If you were tasked with setting up a cloud service for a company, what specifically would you want in the agreement?

Pitchford: I’d want the venue, forum-selection, and choice-of-law provisions (clauses that determine the location of the suit, the forum of the suit, court vs. arbitration; and what laws the court will apply) to match the location of the company headquarters, the main location of their legal offices, or anywhere I know that has laws favorable to the company’s expected battles. Depending on the company’s resources, and various other factors, I’d also consider an arbitration clause.
Specifically related to cloud computing, I’d want a guaranteed uptime with a defined penalty provision even though damages resulting from an outage can be difficult to quantify. I would also want some assurance as to whom I’d be sharing servers with.

Kassner: In your talk, you emphasize the need for companies to create a “data-retention policy.” What is it? And, why is it important?

Pitchford: A data-retention policy defines how long an entity stores data. For example, a company might issue a policy stating employees are only to keep emails for 180 days, or back-up servers should only retain two weeks’ worth of information.
A proper policy needs to balance how much of a data archive the corporation really requires to function versus the risk of a complete failure and an inability to recoup the data. The policy must keep the company functional, but should prevent data hoarding. And here’s why: the more data you have, the more data you’ll have to protect and search through if you’re ever involved in litigation.
A retention policy becomes even more important when you realize that you can be required to provide information as part of a lawsuit against a third party.
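A retention policy is only useful if it is enforced mechanically rather than by memory. As a rough sketch (the 180-day window, function name, and directory layout are assumptions for illustration), an automated purge might look like this in Python:

```python
import time
from pathlib import Path

RETENTION_DAYS = 180  # hypothetical policy window

def purge_expired(archive_dir: str, retention_days: int = RETENTION_DAYS) -> list[str]:
    # Remove files whose last-modified time falls outside the
    # retention window; return the paths that were deleted.
    cutoff = time.time() - retention_days * 86400
    removed = []
    for path in sorted(Path(archive_dir).rglob("*")):
        if path.is_file() and path.stat().st_mtime < cutoff:
            path.unlink()
            removed.append(str(path))
    return removed
```

Such a job would run on a schedule, and, per the preservation duty Tyler describes, it must be suspended the moment litigation becomes likely.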
Kassner: That’s interesting, Tyler. I was under the assumption that a business or person served with a subpoena would be in trouble if they did not have the requested data.

Pitchford: As with all things legal, there’s a catch, and it varies by jurisdiction. The general rule is that if you’re aware litigation is likely, you must preserve relevant information within your control. Put simply, you can’t intentionally delete information relevant to a lawsuit, whether the company is sued directly or drawn in as a third party; that’s illegal. But if there is no threat of litigation, eliminate the data; then there’s nothing to hand over.
It’s less expensive to explain that all potentially relevant information has been destroyed as part of the company’s retention policy, than it is to sort through umpteen years’ worth of archives.
Kassner: You talked about something rather scary, “plain-view doctrine.” If I understand correctly, the government can charge a person based solely on evidence found while looking for something else. Is that right?

Pitchford: That’s correct. Coolidge v. New Hampshire, 403 U.S. 443 (1971), established the parameters of the plain-view doctrine, but they have since been massaged by the more recent Horton v. California, 496 U.S. 128 (1990).
A common example is the traffic stop where the officer notices drugs sitting on the passenger seat. The doctrine, however, also applies to electronic information. If an officer were to lawfully seize and search a server as part of a raid on a cloud-service provider, immediately incriminating data located while executing the warrant would, arguably, be subject to the plain-view doctrine.
I should note there’s a split between jurisdictions on exactly what the limits of the doctrine are as applied to electronic searches, but a full explanation would require an article of its own.

Kassner: Could the government take on a whole service like Dropbox, using a third-party subpoena, and then gather evidence using the plain-view doctrine?

Pitchford: Well, no. If a party turned over information in response to a subpoena, the plain-view doctrine wouldn’t apply; because the information was handed over voluntarily, the government could do as it pleased with it.
If, however, we tweak your question a little to seizing an entire cloud service by warrant (think Megaupload), then it’s possible the government could utilize the plain-view doctrine to justify locating any incriminating information seized outside the scope of the warrant. But there are certainly limits.

Waiving privacy

Remember, “If it’s not private, it’s not protected.”
I thought I had better explain what Tyler was trying to get at. Most cloud-service contracts are agreements made between a person or company and a third-party service provider. What’s interesting is that they can include clauses which define, and even waive, any expectation of privacy.
When the agreements contain these types of clauses, data residing on a cloud-service provider’s servers is neither considered private, nor protected under the Fourth Amendment. And even if the agreement contains no explicit waivers, the government can still argue a waiver of privacy simply because you have provided your data to a third party.
The government has used these arguments successfully to get data turned over if a warrant could not be obtained; so, those private comments on Facebook — not so private. Now you also understand why Tyler earlier emphasized “encrypt, encrypt, encrypt.” It is the only way data stored in the cloud is truly private.

Final thoughts

It’s been a long, challenging piece. I’ll end by asking Tyler for his “big picture” view.
I think cloud services are valuable tools, but they’re not the answer to everyone’s problems. When a company is deciding whether to adopt cloud services or not, it’s important they evaluate the full picture, not just how much money it can save by slashing IT budgets. And while there are plenty of discussions about the danger of service outages, there simply aren’t enough discussions going on about the possible legal ramifications.
I definitely want to thank Tyler, and his mother, for allowing me time today — Mother’s Day — to ask a few last-minute questions. As an extra bonus, here are a few “bits of legal wisdom” from Tyler:
Reasonable Searches
  • Ideal: Probable cause required, vetted by the courts, and limited in scope to only what’s required.
  • Reality: Government will get the benefit of the doubt, and they’ll take everything. If you balk, they may give some back.
Due Process
  • Ideal: You’ll be given equal footing in court to present your case; if the government deprives you of property, you’ll be paid.
  • Reality: Courts will typically defer to the government, and there are many exceptions to takings.

  • Ideal: To strike a balance between your rights, and the ability for the civil and criminal systems to function in a meaningful manner.
  • Reality: The laws are outdated, and don’t offer much protection. If you have the means, you may be able to put up a fight, but by that point you’ll already have suffered major losses.

Hacking charge stations for electric cars

by Mirko Zorz - Wednesday, 15 May 2013.
The vision of electric cars calls for charge stations to perform smart charging as part of a global smart grid. As a result, a charge station is a sophisticated computer that communicates with the electric grid on one side and the car on the other. To make matters worse, it’s installed outside, on street corners and in parking lots.

Electric vehicle charging stations bring with them new security challenges that resemble those found in SCADA systems, even though they use different technologies.

In this video, recorded at Hack In The Box 2013 Amsterdam, Ofer Shezaf, founder of OWASP Israel, talks about what charge stations really are, why they have to be ‘smart’, and the risks they create for the grid, the car and, most importantly, the owner’s privacy and safety.


    Tips for validating DDoS defenses

    Dodd-Frank Regulatory Framework: What Questions Remain Unanswered

    CFTC Commissioner Scott D. O’Malia, speaking at Energy Risk USA 2013

    As I was preparing my speech and reflecting on the most important answers that the Commission must give to market participants, I thought of the famous statement made by former Defense Secretary Donald Rumsfeld, “[t]here are known knowns; there are things we know that we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns – the ones we don’t know we don’t know.”1 It has occurred to me that these three categories could just as easily be applied to the Commission’s implementation of Dodd-Frank.
    Today, I will focus on three topics: (1) the Commission’s rule implementation process, (2) the impact of the Commission’s rules on end users and (3) risk management.
    Specifically, when discussing my first topic, the Commission’s rule implementation process, I will go over some of the Commission’s final rules that have now become “known knowns.” Then I will transition to the “known unknowns”: the Commission’s numerous no-action letters that supersede some of its final rules, the rules now being litigated in court, and some of the Commission’s proposed rules, such as the proposed Cross-Border Guidance. I will also touch upon some of the “unknown unknown” market developments, such as futurization, which was hard to predict at the time the Commission drafted its rules and which the Swap Execution Facility (SEF) rule may accelerate.
    I. The Commission’s Rule Implementation Process
    Known Knowns: Final Rules
    I will begin with the Commission’s rule implementation process. With almost two thirds of the Dodd-Frank rulemakings complete, you would think the industry should have a pretty good sense of the “known knowns.”
    For example, market participants are now familiar with the Commission’s definition of a swap. The industry also knows that certain classes of interest rate swaps and credit default swaps are subject to mandatory clearing. Additionally, the rules for Designated Contract Markets (DCMs) and business conduct standards are fairly well understood, as is the Commission process to register as a Swap Data Repository (SDR). And I think we generally understand what Derivatives Clearing Organizations (DCOs) are, though the margin methodology is uncertain.
    Let me move now to the “known unknowns.” This list is unfortunately a little longer and seems to be growing. Let me provide you with three examples of known unknowns, which I believe could have been avoided and which contribute to regulatory uncertainty and compliance challenges.
    Known Unknown #1: No-Action and Exemptive Relief
    In a rush to implement all the rules, the Commission has finalized some rules that are either unworkable or simply make no sense.
    Instead of amending these rules, the Commission has issued an unprecedented number of no-action and exemption letters. So far, the Commission has provided 90 exemptions, with a number of these exemptive loopholes providing indefinite relief.
    Clearly, this process is at odds with the basic principles of the Administrative Procedure Act (APA). It also violates President Obama’s directive committing the government to create “an unprecedented level of openness in Government” and to working “to establish a system of transparency, public participation, and collaboration,”2 and more specifically his executive order directing independent agencies to consider how best to promote retrospective analysis of existing rules and “to modify, streamline, expand, or repeal them in accordance with what has been learned.”3
    Virtually every no-action letter that the CFTC staff issues contains some complicated and needless restriction that results in the relief being available to some entities, but not all. I have yet to see a no-action letter that provides a clear justification for different regulatory treatment of these entities. And those firms that do benefit from the relief are subject to numerous conditions, needless restrictions and arbitrary compliance timelines.
    Another negative consequence of this parallel track of Commission actions is that the industry can no longer simply rely on the Commission’s regulations to determine their compliance obligations. Instead, market participants must review all the no-action and exemption letters in addition to the regulations in order to determine the full scope of what they are required to do.
    Now that the Commission has started the rule implementation phase, it simply cannot carry out its regulatory oversight in this fashion. Instead, it must re-visit the rules that have proved to be unworkable, incorporate indefinite permanent relief into the amended rules, make the necessary adjustments, and consistently and fairly apply such amended rules to all regulated entities.
    Known Unknown #2: Legal Challenges to the Commission’s Final Rules
    To force the Commission to provide some clarity and to reconsider some of its rules, the industry resorted to litigation. There have been an unprecedented four lawsuits filed against the Commission regarding Dodd-Frank rules. Let me briefly address three of these.
    One example is the Commission’s position limits rule. As you know, last year, the Commission’s position limits rule was struck down by the federal district court in Washington DC. The court held that before setting such position limits, the Commission is required to determine whether position limits were necessary and appropriate to prevent excessive speculation in the commodity markets. Unfortunately, the Commission ignored the court’s order to undertake the necessary analysis; instead, it is gearing up to defend its rule in a court of appeals and at the same time, is drafting a new rule.
    Despite the loss in district court, the Commission has failed to learn from its mistakes. We have courted litigation by failing to take the appropriate steps in changing our Part 45 Data Rules, which has resulted in a case filed by the DTCC against the Commission for approving the CME Group’s Rule 1001 amendment regarding the trade reporting obligations of a clearing house. This litigation concerns who must report cleared swap trades, which SDR they must be reported to, and whether the Commission followed the Administrative Procedure Act in approving CME’s amendment. Again, rulemaking shortcuts and inconsistencies within the rule and among other rules contributed to DTCC’s legal arguments.
    Finally, it seems like everyone predicted the Bloomberg lawsuit. And Bloomberg gave the Commission plenty of warning signs of the problem with the Commission’s margin calculation for swaps. But we did not want to listen. Earlier this month, Bloomberg brought a lawsuit, contending that the Commission has created an uneven playing field by making it far more expensive to trade and clear swaps than economically equivalent futures. This is because the Commission’s regulations mandate a minimum liquidation time for financial swaps that is five times the minimum liquidation time for the equivalent exchange-traded swap future.
    Known Unknown #3: Proposed Cross-Border Guidance
    Aside from the litigation matters, I must mention the Commission’s proposed Cross Border Guidance. Last July, the Commission published a proposed Cross-Border Guidance that has been generally characterized as over-reaching and overly prescriptive. I have noted with interest the strong and vocal opposition to the Commission’s proposed cross-border application of its Dodd-Frank rules. While the goal has been to harmonize our rules, the over-reaching and prescriptive outcomes in the proposed guidance would likely have the opposite effect and create market fragmentation, regulatory uncertainty and a disincentive to trade with U.S. persons.
    It remains to be seen what the global derivatives market will look like once Europe and Asia implement their derivatives rules and once the Commission defines substituted compliance in such key areas as clearing and trading.
    But I’d like to note that one of the many lessons learned from the financial crisis was that the regulators for major economic markets around the world must work together to ensure that similar principles are enforced around the globe. The issues we need to resolve require international coordination, and I encourage the Commission to work with our counterparts around the globe to bring our goals to fruition.
    This brings me to the unknown unknowns in the Commission’s rule implementation process. These, of course, by definition are impossible to identify, but in hindsight, they are the result of the Commission’s inconsistent rules.
    Unknown Unknown #1: Futurization
    The first unknown unknown that comes to mind is “futurization.” It came about as a direct result of two rules that led market participants to make unanticipated changes in their trading behaviors. One is the Commission’s margin calculation rule for swaps that is in litigation now and the second one is the Commission’s vague and complex swap dealer rule. These rules drove energy market participants away from the swaps space to the well-defined futures market.
    End User State of the Market Meeting: One Year Anniversary of Futurization
    To understand the impact of this move to futures in energy, I have been working with staff to look at data before the October 15, 2012 move and after. At this point we don’t have enough data to form any conclusions, but I would like to hold an End Users State of the Market meeting near the one-year anniversary of the October futurization move. I would like to know how futurization is working for end users and whether end users are able to take advantage of the hedging exemption, and I will report to you on the status of the Commission’s data reporting issues. I invite everyone in the audience to participate in these discussions.
    Unknown Unknown #2: SEFs
    There is another rule that has the potential to accelerate the move to futures: the SEF rule. To avoid regulatory arbitrage, it is important for the Commission to come up with flexible SEF rules that provide for an efficient and clear registration process and that allow for various execution methods. I am concerned that the Commission’s overly restrictive proposal will impede the innovative capacity and flexibility of these new platforms.
    After much delay and great anticipation, the Commission is scheduled to vote on all the transaction rules, including SEFs, swap blocks and the made available to trade rules, this Thursday. I am heading back to Washington DC tonight to continue the final negotiations on these rules.
    I remain optimistic that we can find an agreement to develop flexible swap execution platforms that will encourage on-screen trading, as provided in the statute, and that we will do away with the overly prescriptive rules that were included in the original draft.
    Minimum RFQ: Let the Data Tell Us the Right Level
    One issue that has caused a high level of debate and negotiation is the transition on the minimum number of market participants that are sent a request for quote (RFQ). The big question is: will the transition be automatic, or will we actually look at the data to inform a Commission decision? It seems obvious to me that we would use data and facts to inform our decision. After all, what is the point of collecting all of this data if we aren’t going to effectively utilize it?
    Unknown Unknown #3: Volcker Rule
    While on the subject of the unknown unknowns, I would like to mention the Volcker rule. The rule was proposed in October 2011. Almost a year and a half and over 18,000 comment letters later, the main issues remain unresolved. For this rule to get done and get done right, the Commission must ensure that the final rule accomplishes the overarching Dodd-Frank goal of reducing systemic risk by providing the Commission with real means to enforce violations of the rules.
    Now, turning to my second topic: end users.
    II. End users
    Known Unknown #1: Swap Dealer Rule
    Although many unknowns remain regarding how the Commission will come out on the position limits rule, the SEF rule, and the capital and margin rule, it is nevertheless clear that the over-the-counter (OTC) derivatives world is changing for end users. I agree with Sean Owens, an economist with Woodbine Associates, who stated that under the Dodd-Frank rules, “end-users face a tradeoff between efficient, cost-effective risk transfer and the need for hedge customization. The costs implicit in this tradeoff include: regulatory capital, funding initial margin, market liquidity and structural factors.”4
    So far, the Commission has not done its best to protect end users.
    The swap dealer rule makes it difficult to determine whether an entity is a swap dealer. Instead of coming up with a specific bright line test, the rule lists numerous factors that should be considered in determining whether an entity is a swap dealer. If an entity determines that it is a swap dealer, it must then determine if its swaps activity is large enough to require registration.
    Under our rules, a swap dealer does not need to register with the Commission if its aggregate swap dealing activity on a yearly basis is below the arbitrary $8 billion threshold. By the way, the threshold is reduced to $3 billion following the five-year phase-in period.5 According to the rule, market participants are allowed to exclude from this calculation trades executed to hedge physical positions.6
    This part sounds good. But who would have guessed that the Commission would decide this rule needed a definition of “hedging activity” different from the definition used elsewhere in the Commission’s regulations? For purposes of both simplicity and consistency, the Commission should adopt one uniform definition of hedging applicable across all Commission regulations. And we should certainly not define hedging one way for dealers and another way for major swap participants, as the rule currently provides.
    By the way, hedging is not the only category of transactions that should be removed from the $8 billion de minimis calculation. Swap transactions that are cleared through a DCO ensure that both parties deal at arm’s length and are able to mitigate risk and should be excluded from the de minimis calculation as well.
    I would also like to correct the Special Entities definition. When trading with municipal energy companies, such as state, city and county municipalities, the $8 billion threshold drops to $25 million. The reasoning behind this distinction was that Special Entities need special protection because any loss incurred by a Special Entity would result in the public bearing the brunt of the damage.7
    Sounds like a noble intention. But, as they say, the road to hell is paved with good intentions. By reducing the threshold to $25 million, the end result has been a reduction in the number of market participants that are willing to do business with Special Entities. Many counterparties that would fall well below the $8 billion de minimis threshold are not willing to trade with Special Entities out of fear of exceeding the $25 million threshold.
    Now, in response to repeated requests from various Special Entities, the Commission issued no-action relief allowing the de minimis threshold to be increased to $800 million for utility commodity swaps.8
    In trying to protect Special Entities from the perils of trading in the swaps market, we have forced them to trade with large Wall Street banks since no other entity is willing to trade with them for fear of becoming a swap dealer. Instead of providing them greater protection, we have limited the pool of counterparties with which Special Entities can trade, concentrating risk to fewer market participants.
    This surely goes against the goal of reducing systemic risk.
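To make the thresholds concrete, here is a deliberately simplified sketch of the de minimis arithmetic (the function and field names are my own, and the actual rule turns on many more factors than notional sums):

```python
GENERAL_THRESHOLD = 8_000_000_000      # $8 billion ($3 billion after the phase-in)
SPECIAL_ENTITY_THRESHOLD = 25_000_000  # $25 million when facing Special Entities

def must_register(trades, special_entity: bool = False) -> bool:
    # trades: iterable of (notional_usd, is_physical_hedge) pairs.
    # Hedges of physical positions are excluded from the calculation.
    dealing = sum(notional for notional, is_hedge in trades if not is_hedge)
    limit = SPECIAL_ENTITY_THRESHOLD if special_entity else GENERAL_THRESHOLD
    return dealing > limit

# A $5B dealing book plus $4B of physical hedges stays under the
# general threshold...
assert not must_register([(5e9, False), (4e9, True)])
# ...but a mere $30M of dealing with a municipal utility does not.
assert must_register([(30e6, False)], special_entity=True)
```

The two assertions illustrate the asymmetry the speech criticizes: an entity comfortably below the general threshold can still be pushed over the Special Entity limit by a single modest trade.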
    Known Unknown #2: Capital and Margin Calculation
    Another rule that impacts end users’ activity is the capital and margin rule. Just before the 2010 passage of Dodd-Frank, Senators Dodd and Lincoln, who authored the Senate Dodd-Frank legislation, wrote a letter to their House counterparts stating that “Congress clearly stated that the margin and capital requirements are not to be imposed on end-users.”9
    On April 13, 2011, the Commission proposed rules regarding capital and margin requirements for uncleared swaps. I supported the proposal and the exemptive relief that rule proposes to provide to end users. Under the proposal, when swap dealers trade with end users, the end user is not required to post margin. This is consistent with the Congressional intent.
    While the margin rules as proposed would provide relief to end users, the capital rules will impact end users as their bank counterparties are forced to apply this charge to all uncleared swap positions.
    It is imperative that the Commission’s regulations do not divert working capital into margin accounts in a way that would discourage hedging by end users or impair economic growth. Whether swaps are used by an airline hedging its fuel costs or a manufacturing company hedging its interest rates, derivatives are an important tool that companies use to manage costs and market volatility.
    As you may know, the Basel Committee on Banking Supervision (“BCBS”) and the International Organization of Securities Commissions (“IOSCO”) are working to finalize harmonized standards for margin requirements on uncleared derivatives. This effort appears largely consistent with the Commission’s proposal. The decision to harmonize margin requirements was a good step and I support the Commission’s effort to engage in international coordination with foreign regulators.
    Finally, turning to my last topic and the main topic of this conference: risk.
    III. Managing Risk: Known Unknowns or Unknown Unknowns?
    The Commission is struggling to ramp up and meet its new goals and objectives: developing the capacity to understand and monitor risk at every level, from an individual fund manager to systemically important entities like clearing houses. Identifying risk can only be done effectively if we are able to manage our data. Under Dodd-Frank we require a wide variety of new forms and data reporting on every aspect of the markets we oversee. Ingesting, harmonizing, aggregating, and analyzing this data to identify risk patterns is a new skill set we haven’t mastered yet. Recently, staff pinpointed our challenge when they stated in a presentation that they couldn’t see the London Whale in our data. That obviously set off alarm bells.
    Before I describe what we need to do to get our arms around our data dilemma and develop our essential risk tools, let me share with you some numbers regarding the relative size of the margin requirements in the clearing houses today. Keep in mind, the only mandate so far to clear swaps has been on the dealers; funds are not required to clear until mid-June.
    In the clearing area alone, the Commission conducts surveillance of about $150 billion in aggregate margin requirements across all DCOs. Today, the futures market makes up 69 percent of the aggregate margin requirement ($101.7 billion), while the swaps market holds $45.2 billion. The two largest cleared swap categories, which I expect to grow significantly, are the interest rate market ($23.6 billion) and the CDS market ($22.6 billion).
    Among the largest entities, CME’s margin requirements make up $87 billion of the $146.9 billion total. Another interesting fact is that customer funds make up $87 billion or 59% of the total margin, with clearing member house funds at $60 billion.
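The figures hang together, as a quick arithmetic check shows (amounts in billions of dollars, taken from the two paragraphs above):

```python
futures = 101.7   # futures margin requirement, $B
swaps = 45.2      # cleared-swaps margin requirement, $B
customer = 87.0   # customer funds, $B
total = futures + swaps

assert round(total, 1) == 146.9             # the $146.9B total cited above
assert round(futures / total * 100) == 69   # futures' ~69% share
assert round(customer / total * 100) == 59  # customer funds' ~59% share
```

The check confirms that the 69 percent futures share and the 59 percent customer-funds share are both computed against the same $146.9 billion aggregate.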
    To help put these numbers in perspective, the cost to taxpayers for the government’s bailout of Freddie Mac and Fannie Mae peaked at $187.5 billion in 2011. And the Troubled Asset Relief Program provided $419 billion in assistance, now down to less than $23 billion outstanding.10
    So obviously these margin amounts are a lot of money, and we are consolidating risk in clearing houses and among the largest clearing houses and clearing members. By driving more trades from the bilateral space we are extending the interconnection and systemic nature to all parties. The obvious benefits of central clearing are that additional funds such as margin and guarantee funds will mitigate the systemic shock to the system or to one or more entities.
    However, a recent study by Mark Roe, a professor at Harvard Law School, questions whether the clearing mandate will make the system safer.11 In his study, he states: “clearinghouses are weaker bulwarks against financial contagion, financial panic, and systemic risk than is commonly thought.”12
    He goes on to say: “[t]he stakes are high in correctly assessing the value of clearinghouses in containing systemic risk, because the reigning over-confidence in clearinghouses lulls regulators to be satisfied that they have done much to arrest problems of contagion and systemic risk, when they have not.”13 Needless to say, that caught my attention.
    Mr. Roe does highlight the fact that clearinghouses are very good at managing the risk of failure of a single firm, but aren’t well suited to manage contagion and system-wide risk. He points out that the financial crisis of 2008-2009 “suffered deeply from both a downward asset price spiral of collateral value and from information contagion.”14 He goes on to say, “Clearinghouses may well reduce counterparty risk. But when they do so, how much systemic risk are they reducing?”15
    This is a great question and something that policy makers are struggling with right now. How do you contain Too Big to Fail and where does it reside? Without a doubt, clearinghouses are systemically important and their interconnections spread both to and from a clearinghouse. So what are we going to do about it?
    This is where our data and technology missions intersect. And it highlights the importance of the Commission being able to find the London Whale, or any other source of systemic risk, and to better understand its connections.
    I have found that we are very good at prescribing appropriate behavior in the market, but not so good at developing a strategy for building our own tools to prepare for the massive amounts of new data: to integrate, harmonize, aggregate and, most importantly, analyze this data.
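To make the integrate/harmonize/aggregate/analyze chain concrete, here is a toy sketch of the kind of tool I have in mind; the repository field names, records, and threshold are invented for illustration:

```python
from collections import defaultdict

# Toy records: two repositories reporting the same fields under different names.
sdr_a = [{"cpty": "FUND1", "asset": "CDS", "notional_usd": 4_000_000_000}]
sdr_b = [{"counterparty": "FUND1", "class": "CDS", "notional": 2_500_000_000}]

def harmonize(record):
    """Map each repository's field names onto one common schema."""
    return {
        "counterparty": record.get("cpty") or record.get("counterparty"),
        "asset_class": record.get("asset") or record.get("class"),
        "notional_usd": record.get("notional_usd") or record.get("notional"),
    }

# Aggregate notional per counterparty and asset class across repositories.
exposure = defaultdict(int)
for rec in map(harmonize, sdr_a + sdr_b):
    exposure[(rec["counterparty"], rec["asset_class"])] += rec["notional_usd"]

# Analyze: flag positions above a (hypothetical) concentration threshold.
THRESHOLD = 5_000_000_000
flagged = {k: v for k, v in exposure.items() if v > THRESHOLD}
print(flagged)   # {('FUND1', 'CDS'): 6500000000}
```

Without the harmonization step, the same fund's position sits in two incompatible records and the concentration never surfaces, which is exactly how a London Whale stays invisible in the data.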
    Attacking the Data Dilemma to Identify Risk and Oversee Derivatives Markets
    So, what are we doing to monitor and manage data? Right now, the Commission receives data that is unusable for its oversight mission. It will take some time for the Commission to sort out its data challenges, but I am confident that by working closely with all swap data repositories (SDRs), we will resolve these challenges and be able to utilize transaction data as directed by Congress. There are a few things to note in terms of reaching that objective.
    First, each division of the Commission needs to come up with a business and operational plan to support its mission needs with the necessary technology plan and the staffing they need to complete the job. Until we identify our priority needs, we are likely to remain in the known unknown category of government bureaucracy.
    Second, as Chairman of the Commission’s Technology Advisory Committee (TAC), I have committed the TAC to help solve our data challenges. At our most recent TAC meeting on April 30, we had very open and productive discussions about the data reporting challenges, and we are now working with the SDRs and market participants to quickly define critical data standards to improve the harmonization of data that is so essential to performing critical analysis.
    The TAC meeting was the first step towards bringing market participants, SDRs and the Commission together to find a solution for harmonization of data across SDRs. Additional meetings will be scheduled to organize and standardize regulatory data as soon as possible.
    Third, I have advocated for the creation of a cross-divisional data unit with staff dedicated to organizing and analyzing the data, and most importantly developing the necessary analytical tools to identify market risk.
    Under the current structure, although the Division of Market Oversight wrote the data reporting rules, it has very little involvement with their implementation. The Office of Data and Technology is now working with the SDRs to implement the rules. I believe robust data reporting can only be achieved if all the Commission’s divisions, including the Division of Clearing and Risk, the Division of Swap Dealer and Intermediary Oversight and the Division of Enforcement, work closely together to integrate, aggregate and analyze data.
    Finally, I have called on the Commission to identify in its budget plan how it is spending its technology resources and how the technology budget meets the objectives of Dodd-Frank and the realities of today’s global markets. The ticket to preventing and deterring market abuses is technology. The technology of yesterday will not be able to keep up with the market realities of today. For the Commission to stay on top of its oversight mission, it must have systems that can process and analyze massive volumes of data.
    Derivatives markets are becoming increasingly complex in the new post-Dodd-Frank era, as more and more transactions move to exchanges and clearing houses and market participants are trying to respond to new regulatory demands. In this changing environment, it is crucially important for the Commission to provide regulatory clarity and reduce unknowns for all market participants.
    So, what questions should the Commission answer to provide regulatory certainty to the market and to accomplish Dodd-Frank objectives? First and foremost, to conduct its market oversight mission effectively, the Commission must be able to understand and analyze swaps data. Second, the Commission must fix broken rules and not wait for market participants to drag us into litigation. And last but not least, the Commission must ensure that its rules, while implementing the congressional mandate to regulate the derivatives markets, do not harm this country’s vital economic forces: the manufacturers, energy companies, real estate developers, and other businesses that provide important services to the American people.
    1. U.S. Secretary of Defense Donald Rumsfeld, February 22, 2002.
    2. Transparency and Open Government, Memorandum for the Heads of Executive Departments and Agencies.
    3. Executive Order 13579 – Regulation and Independent Regulatory Agencies, July 14, 2011.
    4. See Sean Owens, Optimizing the Cost of Customization, Review of Futures Market (July 2012).
    5. See Further Definition of “Swap Dealer,” “Security-Based Swap Dealer,” “Major Swap Participant,” “Major Security-Based Swap Participant” and “Eligible Contract Participant,” 77 FR 30595 at 30744.
    6. 17 C.F.R. §1.3(ggg)(6)(iii) of the Commission’s regulations, excluding swap transactions entered into to hedge physical positions from the de minimis swap dealer calculation.
    7. See 77 FR at 30628 (referring to documented cases of municipalities losing millions of dollars on swaps transactions because they did not fully understand the underlying risks of the instrument).
    8. Staff No-Action Relief: Temporary Relief from the De Minimis Threshold for Certain Swaps with Special Entities, October 12, 2012.
    9. June 30, 2010 letter from Senators Lincoln and Dodd to Congressmen Frank and Peterson.
    11. Mark Roe, The Dodd-Frank Act’s Maginot Line: Clearinghouse Construction, March 5, 2013.
    12. The Dodd-Frank Act’s Maginot Line, abstract of article.
    13. The Dodd-Frank Act’s Maginot Line, abstract of article.
    14. The Dodd-Frank Act’s Maginot Line, page 8.
    15. The Dodd-Frank Act’s Maginot Line, page 9.

    Tuesday, May 14, 2013

    Cyber Threat Top Concern for NATO
