Wednesday, December 1, 2010

Improving security vulnerabilities in open source Web applications

By Dustin Larmeir, Contributor
29 Nov 2010


Some would argue that online security has changed for the worse. As open source Web applications become popular within businesses, they have also become appealing to hackers.

As more company websites run on open source applications like Drupal and with corporate blogs powered by WordPress, more victims may suffer from hacks and costly exploits. Both 960 Grid System (960.gs) and Learning jQuery learned this lesson the hard way. Before these companies took a serious look at hardening open source platforms, embarrassing and costly attacks wrought havoc. Other companies that haven’t taken proper precautions to insulate themselves against such threats could face the same fate.

If you’ve considered making open source applications part of your business, we’ll highlight some security issues that open source Web applications pose and propose solutions.

Common vulnerabilities in open source Web applications
Like you, hackers love that open source Web applications are free and provide easy access given their “open” source code. If, for example, a hacker can deploy a script to steal information or take control of a Web application on a single piece of hardware, he can easily reproduce these devastating results to affect multiple users or multiple websites that share the same code base. Here’s why:

• Many open source applications depend on older versions of scripting languages that remain subject to exploitation.
• Modules plugged into open source applications must be maintained separately from the parent project. Left unpatched, these modules can create problems for the entire application.
• Smaller open source projects often go unpatched for long periods of time. This extended window puts your files at high risk of exploitation.
• Hackers create bots that specifically target application vulnerabilities. When a tireless army of “workers” tries to penetrate code around the clock, exploits are easy to achieve.
• Failing to lock down administrative privileges is a common oversight that enables cyber-thieves to easily compromise code.
• Procedure calls such as XML-RPC are frequently exploited, and cross-site scripting hacks and SQL injections commonly cause trouble for open source platforms.
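The SQL injection pattern mentioned above is worth seeing concretely. The sketch below is not from the article; the table and input are hypothetical. It uses Python’s built-in sqlite3 module to show how a string-built query falls to injection while a parameterized query treats the same input as inert data:

```python
import sqlite3

# Hypothetical illustration: an in-memory database with one user row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

malicious = "alice' OR '1'='1"

# Vulnerable: attacker-controlled input concatenated into the statement.
unsafe_sql = "SELECT * FROM users WHERE name = '%s'" % malicious
print(len(conn.execute(unsafe_sql).fetchall()))  # 1: the injected clause matches anyway

# Safe: the driver binds the input as data, not as SQL.
safe_rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (malicious,)
).fetchall()
print(len(safe_rows))  # 0: no user literally named "alice' OR '1'='1"
```

Parameterized queries (or the equivalent in an application’s database layer) are the standard defense against this class of attack.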
Locking down open source Web applications
Knowing is half the battle, and there are many tactics to lock down open source Web applications. To succeed in your online business and gain the trust of end users, proper protection is paramount.

Let’s use the two company examples as a backdrop for discussing common breaches to open source and what can be done to achieve better protection for the rest of us.

960.gs experienced a hack that compromised the operating system while running Textpattern CMS. The breach gave the bad guys full server and FTP access, and once inside, hackers uploaded malicious, embarrassing images to the site with the aim of damaging its search engine rankings. This type of hack was difficult to detect because, to public visitors, the site appeared to run smoothly and correctly. A number of techniques could have prevented 960.gs from falling victim to these problems while running an open source Web application:

• Application hardening (includes OS and databases). Operating system and database installations should be completed carefully. Avoid default settings and maintain strict permissions controls. Rewrite file extensions to mask the application type, and remove all unnecessary functions and features to close as many virtual “holes” as possible. Additionally, patch, patch, patch. Particularly in an open source environment, updates go far in preventing compromises. The same rules also apply to scripting languages that may be used on your server.
• Server hardening. Remove information (such as response headers) that could help a bot or hacker identify the version and type of application running on a server. Patch and perform frequent manual checks of server logs to help identify unusual occurrences.
• Strong passwords and access control. Implement passwords containing alphanumeric, uppercase, lowercase and special characters, and never use dictionary terms. Additionally, reset them regularly. Control access to administrative passwords and grant database credentials only on an as-needed basis. Never use an SA or root account for the database user, block all public and port access to site administrator areas, and refrain from opening any server ports except 80/443, which are required to serve web pages over HTTP/HTTPS, respectively.
• System log monitoring. Watch your system logs closely and ensure that no unauthorized login attempts are successful. Run vulnerability audits and scans on your application regularly (quarterly at minimum) to help identify threats, breaches and suspect activity quickly.
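As a rough illustration of the log-monitoring advice above, the following Python sketch counts failed login attempts per source address and flags repeat offenders. The log lines, pattern, and threshold are invented for the example; real monitoring would read actual system logs:

```python
from collections import Counter
import re

# Hypothetical sshd-style log excerpts for illustration.
LOG_LINES = [
    "sshd: Failed password for root from 203.0.113.9",
    "sshd: Failed password for admin from 203.0.113.9",
    "sshd: Failed password for root from 203.0.113.9",
    "sshd: Accepted password for deploy from 198.51.100.4",
]
FAIL_RE = re.compile(r"Failed password for \S+ from (\S+)")
THRESHOLD = 3  # flag any source with this many failures

# Tally failed attempts per source IP.
failures = Counter(
    m.group(1) for line in LOG_LINES if (m := FAIL_RE.search(line))
)
suspects = [ip for ip, count in failures.items() if count >= THRESHOLD]
print(suspects)  # ['203.0.113.9']
```

A script like this, run on a schedule against real auth logs, is a simple way to surface the “unusual occurrences” the article recommends watching for.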
Learning jQuery, a customer of FireHost, experienced a completely different type of attack: a SQL injection that exploited an open security vulnerability in the database layer of WordPress. WordPress and other content management system (CMS) providers work hard to stay ahead of SQL injection vulnerabilities by addressing them proactively via patches. Unfortunately, Learning jQuery’s site was an early victim of this particular problem.

Cyclically, hackers innovate and adapt while CMS providers just try to keep up. Web application firewalls (WAFs) help bridge the gap between hackers’ innovation and CMS providers’ patching. WAFs inspect Web traffic before it can reach the code and block suspect visitors from reaching your services. The ability to block an attack increases exponentially when WAFs team up with intrusion prevention and intrusion detection systems, and other network-level barriers. Had this type of network-layer protection been in place, Learning jQuery’s site might have never experienced an onslaught of malicious attacks.
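The WAF behavior described above, inspecting traffic before it reaches application code, can be sketched in a few lines. This toy filter blocks requests matching obvious injection signatures; the two patterns are illustrative only, as real WAFs carry far larger rule sets:

```python
import re

# Illustrative signatures only; a production WAF has thousands of rules.
BLOCK_PATTERNS = [
    re.compile(r"(?i)<script\b"),           # reflected XSS attempt
    re.compile(r"(?i)\bunion\s+select\b"),  # SQL injection probe
]

def allow_request(query_string: str) -> bool:
    """Return True if no blocklist pattern matches the request input."""
    return not any(p.search(query_string) for p in BLOCK_PATTERNS)

print(allow_request("page=2&sort=date"))                      # True
print(allow_request("q=1 UNION SELECT user, pw FROM users"))  # False
```

The point of the sketch is the architecture, not the rules: suspicious input is rejected at the network edge, before a vulnerable CMS ever parses it.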

Keeping open source Web application breaches at bay
The growth and popularity of open source content management systems have changed the security landscape and made traversing it more perilous. But with the help of a developer or technical engineer experienced in securing Web applications (and their hosting environment), you can implement these methods and keep cyber-thieves at bay. With proper precautions, attention to detail and commitment to maintaining your open source websites, companies that use (or plan to use) open source Web applications can have a successful and fruitful run.

Dustin Larmeir is a Linux security specialist at FireHost. He has an extensive background in system administration and Web hosting working with Linux and open source technologies.


Sunday, September 19, 2010

IT Audit and IT Security Audits: Is There a Difference?

Posted by David Hoelzer on November 10, 2009 – 5:37 pm
Filed under Compliance, Security, Standards
Last week I had an interesting conversation with some principals in one of the Big Four. We were discussing some upcoming plans that we have for creating a course to assist non-IT folks to transition into IT Audit in addition to assisting non-Audit folks to take on more of an audit role.

During the conversation, we were asked by one person, “Well, are you teaching IT Audit or are you teaching IT Security Audit?” What an interesting question, we thought. We went on to explain our point of view.

The purpose of IT Audit is to ensure that all of the controls are functioning correctly to meet the objectives of the business. This includes operational matters like the user creation process, Active Directory management, group policy settings, firewall configurations, router infrastructure configurations, etc. Almost all of the controls in IT today include security settings. In our view, there is no sense auditing these items to verify that the settings match the policies unless you are also validating that the processes governing the policies are correct.

In other words, if your IT Audit isn’t validating that, in addition to operating correctly, your organization is correctly applying security principles and controls, what exactly are you auditing??? The folks we were speaking with, fortunately, seemed to agree that this was the correct view even though they had posed the original question. It does give us pause to wonder, however.

For example, consider the recent findings regarding FISMA, specifically the notion that FISMA has failed because the IT auditors who are doing the evaluations have been tasked with verifying that everyone is doing what NIST says in terms of procedures without any consideration for where the actual risks are to the business!

This is also precisely the reason that Sarbanes-Oxley has language requiring that the IT systems support the accuracy of the financial results. In the past I have railed against the lack of specificity in Sarbanes-Oxley, but given what’s happened with FISMA it makes me wonder if it might be better in some respects.

In the end, determining the best strategy or standard to use to ensure security will always be a task best done as a retrospective, but it seems safe to say that the “right” answer falls somewhere between too much and too little. Like Goldilocks, we’re all looking for the “Just Right” level of detail in standards, forcing organizations to develop well thought out controls that connect to business and security objectives!

For a comprehensive course on how to identify critical controls, validate that the correct controls are in place and validate processes, consider the SANS 6-day course, “Advanced System & Network Auditing”. David Hoelzer is the SANS IT Audit Curriculum Lead and the author of several SANS IT Audit related courses.

Friday, August 20, 2010

The Massachusetts Data Protection Law

sponsored by SearchSecurity.com & SearchCompliance.com


Regulatory compliance can be a challenging task for any corporation, but it can be particularly onerous if the regulation is a moving target. This is the case with Massachusetts data protection regulation 201 CMR 17.00, which seemed ready to go into effect Jan. 1, 2010 (already delayed once from a May 2009 enforcement date). Just a few months ago, this state regulation was positioned as a game changer. It framed data privacy in a way that forced organizations to take steps to protect personal data.
Today most state privacy laws focus on notifying people of a data breach rather than protecting the information in the first place. MA 201 CMR 17.00 was proactive, rather than reactive, security. But due to the uncertain economy, costs associated with meeting the regulations and complaints from the public, businesses and organizations, the Massachusetts Senate is now considering weakening the scope and specifics of the regulation.

But in a legislatively aggressive climate such as we are in now, with new security exploits being discovered every day and data breach disclosures such as those from The TJX Cos. and Heartland Payment Systems Inc., strict privacy and data protection laws from the state and federal levels are inevitable. Simply stated, MA 201 CMR 17.00 is good security practice. So despite the near-term uncertainty about the particulars, the prudent move by corporate IT is to take steps now to be ready for tough encryption and policy statements later.

Massachusetts businesses facing down MA 201 CMR 17.00 can meet the challenge with preparation and execution. The first step to preparation is education. Read this e-book to learn more about important topics such as identity theft, prevention of breaches, mandatory encryption, and getting ahead of the game where Massachusetts data protection law is concerned.

Sponsored By: BeCrypt, GuardianEdge, Lumension, Razorpoint Security Technologies, Sophos, and CDW

Sunday, July 4, 2010

What is Cryptography?

Cryptography is an important part of preventing private data from being stolen. Even if an attacker were to break into your computer or intercept your messages, they still will not be able to read the data if it is protected by cryptography, or encrypted. In addition to concealing the meaning of data, cryptography satisfies other critical security requirements for data, including authentication, non-repudiation, confidentiality, and integrity.

Cryptography can be used to authenticate that the sender of a message is the actual sender and not an imposter. Encryption also provides for non-repudiation, which is related to authentication and is used to prove that someone actually sent a message or performed an action. For instance, it can be used to prove that a criminal performed a specific financial transaction.

Cryptography ensures confidentiality because only a reader with the correct deciphering algorithm or key can read the encrypted message. Finally, cryptography can protect the integrity of information by ensuring that messages have not been altered.

Cryptography comes from Greek words meaning “hidden writing”. It converts readable data, or cleartext, into encoded data called ciphertext. By definition, cryptography is the science of hiding information so that unauthorized users cannot read it.

Secret writing is an ancient practice that dates back to ancient Egypt, but it is still critical to securing data today. In fact, encryption is absolutely necessary when transmitting sensitive data over insecure media like the Internet. The three types of algorithms used for encryption are:

• Hashing
• Symmetric, also called private or secret key
• Asymmetric, also called public key
A hashing algorithm is used to create an irreversible code from a piece of information. This code, called a hash or digest, is unique to the information and can serve as a signature for the data. A hash is used for comparison purposes to make sure data has not been changed; thus it ensures the integrity of a message.
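The integrity check just described can be demonstrated with Python’s standard hashlib module. The message below is a made-up example; the point is that any alteration changes the digest:

```python
import hashlib

# Compute a digest (the "signature") over the original message.
message = b"Transfer $100 to account 12345"
digest = hashlib.sha256(message).hexdigest()

# An attacker alters one character of the message in transit.
tampered = b"Transfer $900 to account 12345"

print(hashlib.sha256(message).hexdigest() == digest)   # True: unchanged
print(hashlib.sha256(tampered).hexdigest() == digest)  # False: altered
```

Because the hash is irreversible and unique to its input, a recipient who recomputes it can detect tampering without needing to decrypt anything.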

A symmetric cryptographic algorithm can be decrypted, as opposed to being irreversible like hashing. There are several types of symmetric algorithms. Some of the most popular are:

• Data Encryption Standard (DES)
• Advanced Encryption Standard (AES)
• Rivest Cipher (RC)
• International Data Encryption Algorithm (IDEA)
• Blowfish
DES was one of the first widely used algorithms; however, it has been cracked and is no longer considered secure. AES has not been cracked and is used by the US government, while IDEA is favored by European nations.

RC stands for “Ron’s Code” and is a family of algorithms written by Ron Rivest in 1987. Blowfish is a strong open-source symmetric algorithm created in 1993.
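The defining property of the symmetric ciphers above is that one shared key both encrypts and decrypts. The toy sketch below illustrates that property only; it is NOT a real cipher like AES or Blowfish and must not be used to protect anything. It derives a keystream by hashing the key with a counter and XORs it against the data:

```python
import hashlib
from itertools import count

def keystream(key: bytes, length: int) -> bytes:
    """Derive `length` pseudo-random bytes from the key (toy construction)."""
    out = b""
    for i in count():
        if len(out) >= length:
            return out[:length]
        out += hashlib.sha256(key + i.to_bytes(8, "big")).digest()

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """XOR with the keystream: the same call encrypts and decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = b"shared-secret"
ciphertext = xor_cipher(key, b"attack at dawn")
print(xor_cipher(key, ciphertext))  # b'attack at dawn'
```

Applying the same function twice with the same key recovers the plaintext, which is exactly the single-key symmetry that distinguishes this family from the asymmetric algorithms discussed next.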

Asymmetric cryptographic algorithms differ from symmetric algorithms in that they require two “keys” to encrypt and decrypt data, as opposed to the symmetric algorithm’s single key. Asymmetric, or public key, encryption uses two mathematically related keys: a public key, known by everyone, to encrypt messages, and a private key, known only by the receiver of the message, to decrypt the information.

Asymmetric cryptography is widely used and underlies the Transport Layer Security (TLS) and Pretty Good Privacy (PGP) protocols. Common asymmetric algorithms include RSA and Diffie-Hellman.
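Diffie-Hellman, named above, can be illustrated end to end with Python’s built-in pow. The prime and exponents here are tiny textbook values chosen for readability; real deployments use primes of 2048 bits or more:

```python
# Public parameters, agreed in the open (toy sizes for illustration).
p, g = 23, 5

# Each party keeps a private exponent secret.
a_private, b_private = 6, 15

# Each party publishes g^private mod p; these values travel in the clear.
a_public = pow(g, a_private, p)  # Alice sends this to Bob
b_public = pow(g, b_private, p)  # Bob sends this to Alice

# Each side combines the other's public value with its own secret.
a_shared = pow(b_public, a_private, p)
b_shared = pow(a_public, b_private, p)

print(a_shared == b_shared)  # True: both derive the same shared secret
```

Both sides arrive at the same secret without it ever crossing the wire, which is what lets protocols like TLS establish a symmetric session key over an untrusted network.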


Sunday, June 13, 2010

CYBERSECURITY BILL TO MODERNIZE, STRENGTHEN, AND COORDINATE CYBER DEFENSES

Press Contact: Leslie Phillips
(202) 224-2627
June 10, 2010

LIEBERMAN, COLLINS, CARPER UNVEIL MAJOR CYBERSECURITY BILL TO MODERNIZE, STRENGTHEN, AND COORDINATE CYBER DEFENSES
WASHINGTON – Homeland Security and Governmental Affairs Committee Chairman Joe Lieberman, ID-Conn., Ranking Member Susan Collins, R-Maine, and Federal Financial Management Subcommittee Chairman Tom Carper, D-Del., Thursday introduced comprehensive legislation to modernize, strengthen, and coordinate the security of federal civilian and select private sector critical infrastructure cyber networks.
The Protecting Cyberspace as a National Asset Act of 2010, S.3480, would create an Office of Cyber Policy in the White House with a director accountable to the public who would lead all federal cyberspace efforts and devise national cyberspace strategy. A National Center for Cybersecurity and Communications within the Department of Homeland Security, also led by a director accountable to the public, would enforce cybersecurity policies throughout the government and the private sector. The bill would also establish a public/private partnership to set national cyber security priorities and improve national cyber security defenses.
The Committee will hold a hearing on the legislation June 15, 2010.
“The Internet may have started out as a communications oddity some 40 years ago but it is now a necessity of modern life, and sadly one that is under constant attack,” said Lieberman. “It must be secured – and today, Senators Collins, Carper, and I have introduced a bill which we believe will do just that. The Protecting Cyberspace as a National Asset Act of 2010 is designed to bring together the disjointed efforts of multiple federal agencies and departments to prevent cyber theft, intrusions, and attacks across the federal government and the private sector. The bill would establish a clear organizational structure to lead federal efforts in safeguarding cyber networks. And it would build a public/private partnership to increase the preparedness and resiliency of those private critical infrastructure cyber networks upon which our way of life depends.
“For all of its ‘user-friendly’ allure, the Internet can also be a dangerous place with electronic pipelines that run directly into everything from our personal bank accounts to key infrastructure to government and industrial secrets. Our economic security, national security and public safety are now all at risk from new kinds of enemies -- cyber-warriors, cyber-spies, cyber-terrorists and cyber-criminals.
“The need for this legislation is obvious and urgent.”
Collins said: “As our national and global economies become ever more intertwined, cyber terrorists have greater potential to attack high-value targets. From anywhere in the world, they could disrupt telecommunications systems, shut down electric power grids, and freeze financial markets. With sufficient know-how, they could cause billions of dollars in damage and put thousands of lives in jeopardy. We cannot afford to wait for a “cyber 9/11” before our government finally realizes the importance of protecting our digital resources, limiting our vulnerabilities, and mitigating the consequences of penetrations of our networks.
“Yet, for too long, our approach to cyber security has been disjointed and uncoordinated. Our vital legislation would fortify the government’s efforts to safeguard America’s cyber networks from attack. This bill would build a public/private partnership to promote national cyber security priorities and help prevent and respond to cyber attacks.”
Carper said: “Over the past few decades, our society has become increasingly dependent on the internet, including our military, government, and businesses of all kinds. While we have reaped enormous benefits from this powerful technology, unfortunately our enemies have identified cyber space as an ideal 21st century battlefield. We have to take steps now to modernize our approach to protecting this valuable, but vulnerable, resource. This legislation is a vital tool that America needs to better protect cyber space. It encourages the government and the private sector to work together to address this growing threat and provides the tools and resources for America to be successful in this critical effort.”
Key elements of the legislation include:


1. Creation of an Office of Cyberspace Policy in the Executive Office of the President run by a Senate-confirmed Director, who will advise the President on all cybersecurity matters. The Director will lead and harmonize federal efforts to secure cyberspace and will develop a national strategy that incorporates all elements of cyberspace policy, including military, law enforcement, intelligence, and diplomatic. The Director will oversee all related federal cyberspace activities to ensure efficiency and coordination.
2. Creation of a National Center for Cybersecurity and Communications (NCCC) at the Department of Homeland Security (DHS) to elevate and strengthen the Department’s cyber security capabilities and authorities. The Director will regularly advise the President on efforts to secure federal networks. The NCCC will be led by a Senate-confirmed Director, who will report to the Secretary. The NCCC will include the United States Computer Emergency Response Team (US-CERT), and will lead federal efforts to protect public and private sector cyber and communications networks.
3. Updates the Federal Information Security Management Act (FISMA) to modernize federal agencies’ practices of protecting their internal networks and systems. With strong leadership from DHS, these reforms will allow agencies to move away from a system of after-the-fact paperwork compliance to real-time monitoring to secure critical systems.
4. Requiring the NCCC to work with the private sector to establish risk-based security requirements that strengthen cyber security for the nation’s most critical infrastructure that, if disrupted, would result in a national or regional catastrophe.
5. Requiring covered critical infrastructure to report significant breaches to the NCCC to ensure the federal government has a complete picture of the security of these sensitive networks. The NCCC must share information, including threat analysis, with owners and operators regarding risks to their networks. The Act will provide specified liability protections to owners/operators that comply with the new risk-based security requirements.
6. Creation of a responsible framework, developed in coordination with the private sector, for the President to authorize emergency measures to protect the nation’s most critical infrastructure if a cyber vulnerability is being exploited or is about to be exploited. The President must notify Congress in advance before exercising these emergency powers. Any emergency measures imposed must be the least disruptive necessary to respond to the threat and will expire after 30 days unless the President extends them. The bill authorizes no new surveillance authorities and does not authorize the government to “take over” private networks.
7. Development of a comprehensive supply chain risk management strategy to address risks and threats to the information technology products and services the federal government relies upon. This strategy will allow agencies to make informed decisions when purchasing IT products and services.
8. Requiring the Office of Personnel Management to reform the way cyber security personnel are recruited, hired, and trained to ensure that the federal government has the talent necessary to lead the national cyber security effort and protect its own networks.


Among the bill’s supporters are: anti-virus software companies McAfee and Symantec; Karen Evans, former Administrator for E-Government and IT, Office of Management and Budget; Stewart Baker, former Assistant Secretary for Policy at DHS; the Intelligence and National Security Alliance; the Professional Services Council; and the Coalition for Government Procurement.

Tuesday, June 1, 2010

Heartland settles with MasterCard over data breach

21 May 2010

Heartland Payment Systems, the fifth-largest payment card processor in the US, has made a third settlement deal in what was one of the largest data breach incidents in history. This time, MasterCard has agreed to take a $41.4m payout for its card issuers.
MasterCard and Heartland agreed to the payment arrangement in which the card processor will fund up to $41.4m in reimbursement expenses claimed by MasterCard’s issuers affected by the data breach. The deal is contingent on acceptance by at least 80% of the card issuers.

Bob Carr, chairman and CEO of Heartland, said: “We are pleased to have reached an equitable settlement agreement that helps issuers of MasterCard-branded cards obtain a recovery with respect to losses they may have incurred from the intrusion.” The Heartland statement also suggested that MasterCard would recommend its card issuers accept payment offers from the agreement.

Heartland had previously settled similar cases with American Express and Visa.

“We feel that this settlement represents an appropriate and fair resolution for our issuing financial institution customers and will enable them to avoid uncertainties and delays associated with potentially protracted litigation," said Wendy Murdock, chief franchise officer for MasterCard Worldwide, in a press release statement. "The agreement underscores MasterCard's continuing efforts to maintain the integrity of payment card industry standards and mitigate the impact of account data compromise events."

The well publicized data breach occurred in 2008 when hackers compromised Heartland’s systems and made off with more than 100 million credit and debit card numbers processed by the company. One of the hackers, Albert Gonzalez, is currently serving a 20-year sentence related to the event, the stiffest such penalty ever handed out by a US court for a hacking-related incident.

Wednesday, May 26, 2010

Shavlik offers 'cloud patching' with free service

SMBs with 10 or fewer PCs go free
By John E. Dunn | Techworld
Published: 17:24 GMT, 25 May 10

Patch management company Shavlik is offering small networks of 10 or fewer PCs access to a new online patch management service at no cost.

The new service, IT.Shavlik.com, is designed to scan for missing patches on a machine-by-machine basis, or using an IP address range or domain, reporting the results through the web portal. Missing patches across Windows versions are rated for severity and can be downloaded using links to the appropriate vendor website or using the ‘FixIT’ button. The service also supports VMware ESX and ESXi hypervisors.
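At its core, a scan like this compares what is installed against what is current. The sketch below is a hypothetical reduction of that idea, not Shavlik’s implementation; the package names and version numbers are invented for illustration:

```python
# Invented inventory data: installed versions vs. latest known releases.
installed = {"example-app": (1, 0, 1), "example-agent": (2, 2, 15)}
latest    = {"example-app": (1, 0, 2), "example-agent": (2, 2, 15)}

# A package is missing a patch when a newer release exists.
# Tuple comparison gives natural version ordering here.
missing = sorted(
    name for name, version in installed.items()
    if version < latest.get(name, version)
)
print(missing)  # ['example-app']
```

A hosted service layers discovery, severity ratings, and download links on top of this comparison, which is the complexity the online model hides from small networks.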

The only setup requirements are that users download Shavlik’s ClickOnce application and already have Microsoft’s .NET Framework 3.5 installed.

Longer term, the company looks likely to introduce a degree of automation to future versions, which could allow specified patches to be fixed on an ongoing basis without the need for manual intervention.

There are a number of standalone patch discovery tools available at no cost, but IT.Shavlik is unusual in offering to manage this for up to 10 PCs and 100 separate scans per month, more than enough for a network of this size. It is also pioneering in delivering what can be a tricky technology as an online service rather than a standalone app, an approach the company believes hides more of the underlying complexity.

Separately, the company has re-purposed its established distributed patch management system as ‘PatchCloud’, which morphs existing technology for enterprises into a more ‘cloud-like’ form if they happen to want that.

Given that the announcements come as the company has adopted a new logo, the embracing of the cloud could be interpreted as a low-key re-launch of sorts.

Shavlik still sells software licenses but the future will be dominated by platforms run by giants such as Google, Amazon and Microsoft on which third-party services will integrate technology, including patch management, from specialists such as Shavlik.

Sunday, May 23, 2010

Heartland, MasterCard Settle (http://www.bankinfosecurity.com/articles.php?art_id=2552)

Issuers Face June 25 Deadline to Accept $41.4 Million Offer
May 19, 2010 - Linda McGlasson, Managing Editor

MasterCard and Heartland Payment Systems have settled on a $41.4 million payment to recover losses from the processor's card data breach that was made public in January 2009.

Heartland has now settled with all three major card brands, Visa ($60 million), American Express ($3.6 million) and MasterCard.

This proposed settlement resolves the claims from MasterCard and its issuers from the 2008 data breach. Under the agreement, alternative recovery offers totaling $41.4 million will be made to eligible MasterCard issuers with respect to losses incurred by them as a result of the criminal intrusion. MasterCard will recommend eligible MasterCard issuers accept the offer.

"We feel that this settlement represents an appropriate and fair resolution for our issuing financial institution customers and will enable them to avoid uncertainties and delays associated with potentially protracted litigation," says Wendy Murdock, chief franchise officer for MasterCard Worldwide in the press announcement.

Bob Carr, Heartland's chairman and chief executive officer, states in a release about the settlement: "We are pleased to have reached an equitable settlement agreement that helps issuers of MasterCard-branded cards obtain a recovery with respect to losses they may have incurred from the intrusion."

The settlement will be contingent upon financial institutions representing 80 percent of the claimed-on MasterCard accounts accepting their alternative recovery offers by June 25, 2010. The settlement also includes mutual releases between Heartland and its sponsoring bank acquirers on the one hand - and MasterCard and the accepting issuers on the other.

The settlement states that issuers who accept their alternative recovery offers must waive rights to any other recovery of alleged intrusion-related losses from Heartland and its sponsoring bank acquirers through litigation or other remedies and release MasterCard, Heartland and its sponsoring bank acquirers from all legal and financial responsibility related to the intrusion.

MasterCard says all eligible issuers will soon receive notification with full details of the settlement agreement and how to accept their alternative recovery offers before the offers expire.

A consumer-related class action suit against the payments processor was proposed to the judge and got preliminary approval in late April.






PCI Issues New POS Standard (http://www.bankinfosecurity.com/articles.php?art_id=2519)

PIN Transaction Security Update is Effective Immediately

May 12, 2010 - Linda McGlasson, Managing Editor


A new measure to strengthen credit card data protection was released by the PCI Security Standards Council today.

Version 3.0 of the PIN Transaction Security (PTS) Point of Interaction (POI) standard is designed to streamline and simplify testing and implementation by providing a single set of modular evaluation requirements for all Personal Identification Number (PIN) acceptance Point of Interaction terminals. The standard is meant to enhance security and prevent payment card fraud on devices that accept payment transactions, covering everything from retail point-of-sale card readers to unattended payment terminals at gas stations and parking lots.

The new standard's rollout comes after several years of notable credit card breaches, such as those at retailer TJX and payment processor Heartland Payment Systems. The most recent card-related breach was at Hancock Fabrics, where point-of-sale devices were swapped out for bogus equipment containing skimming devices that collected card data.

The PCI Council says the new standard is effective immediately. Version 3.0 also includes three new modules for device vendors and their customers to secure sensitive card data.

Up to now there were three separate sets of requirements for Point of Sale PIN Entry Devices (PED), Encrypting PIN Pads (EPP), and Unattended Payment Terminals (UPT). This version of the standard simplifies the testing process and eliminates overlap of documentation by providing one modular security evaluation program for all terminals and a single reference listing of approved products.

Bob Russo, general manager of the PCI Security Standards Council, says that to help everyone better understand the new standards and how they should be applied, the council will host two webinars next week. Registration information is available at the PCI website.

"By combining all of the requirements into one program, we have simplified one-stop shopping when it comes to secure devices," says Russo in a statement. This new approach and additional modules make it easier for manufacturers and merchants to make sure that at any point in a transaction, account data is being protected, he adds.

The updated standard and detailed listing of approved devices are available on the PCI Council's website.

Monday, May 10, 2010

Basel Committee to Tighten Banking Supervision

Based on: Bank- en Effectenbedrijf (April 2010)
The Basel Committee, the global standard-setting body for banking supervision, has announced sweeping tightenings of the existing supervisory rules in order to strengthen the foundations of the system. Capital requirements for trading-book activities, for example, will be raised considerably.

With this the Committee aims to remove the incentive for so-called regulatory arbitrage: the tendency of banks to seek yield wherever supervisory or capital requirements are lowest. The Committee also wants to apply heavier charges to complex activities in the banking book, such as re-securitisations, and banks must keep closer watch over the underlying positions of their activities. A bank may no longer blindly rely on the judgment of the rating agencies.

Finally, banks must be more transparent about the complex products on their balance sheets. These specific measures are expected to be introduced as early as the end of 2010. The plans enjoy broad international support and will be implemented in Europe through an amendment to the European banking directive.

Saturday, May 8, 2010

FISMA Reform Bill Clears House Panel (from Government Information Security)

Measure Would Require Real-Time Monitoring of IT Systems
May 5, 2010 - Eric Chabrow, Executive Editor, GovInfoSecurity.com


A bill to require federal agencies to employ real-time security monitoring of their information systems to replace the current paper process cleared its first hurdle Wednesday, receiving approval by the House Oversight and Government Reform Subcommittee on Government Management, Organization and Procurement.

The measure, the Federal Information Security Amendment Act, or H.R. 4900, goes to the full committee.

The bill would require that the president's top cybersecurity adviser and the federal chief technology officer be confirmed by the Senate. The measure also would establish a panel of government IT security specialists to direct agencies on the steps they must take to secure federal digital assets.

The subcommittee accepted an amendment offered by Rep. Gerald Connolly, D.-Va., to require the CTO be confirmed by the Senate. Last year, under existing authority, President Obama named Aneesh Chopra to the newly created job of federal chief technology officer, a post that didn't require Senate confirmation. Chopra serves as a presidential adviser, but reports to John Holdren, director of the White House Office of Science and Technology Policy.

"To ensure that the chief technology officer can continue to improve federal use of technology in the future, we need to make this a statutory position," Connolly (pictured above) said in a statement. "My amendment does that, and gives the chief technology officer the authority he needs by enabling him to report directly to the president."

Using similar authority, Obama tapped Howard Schmidt last December to be White House cybersecurity coordinator, a post that did not require Senate confirmation. In seeking to codify these positions, and by requiring Senate confirmation, Congress would provide some oversight over their performance.

The bill, sponsored by committee chair Diane Watson, D.-Calif., primarily is aimed at updating the 8-year-old Federal Information Security Management Act, the primary law regulating federal information security.

The measure would:

•Create a National Office for Cyberspace within the Executive Office of the President to coordinate and oversee the IT security of agency information systems and infrastructure, headed by a presidentially nominated director who would be confirmed by the Senate.

•Institute a Federal Cybersecurity Practice Board within the National Office for Cyberspace - chaired by the director - charged with developing the processes agencies would follow to defend their IT systems. Board members would come from the Office of Management and Budget, Department of Defense and select members from civilian and law enforcement agencies. The policies the board would develop include minimum security controls, measures of effectiveness for determining cyber risk and remedies for security deficiencies.

•Establish requirements for agencies to undertake automated and continuous system monitoring to identify system compliance, deficiencies and potential risks. These activities would move agencies away from manually intensive periodic assessments that fail to incorporate emerging trends or information about an agency's current security posture.

•Require agencies to conduct regular evaluations of their systems, including so-called red-team penetration tests.

•Oblige agencies and contractors managing government systems to obtain an annual, independent audit of their IT programs to determine their overall effectiveness and compliance with FISMA requirements.

•Authorize the National Office of Cyberspace director to approve policies for the operation of a central federal information security incident center.

•Establish requirements for the purchase of secure commercial, off-the-shelf IT products and services as well as policies for mitigating supply chain risks associated with those products.
The House bill is similar to a FISMA reform measure in the Senate, the United States Information and Communications Enhancement Act, or U.S. ICE, sponsored by Sen. Tom Carper, D.-Del., which also would replace so-called FISMA paper compliance with real-time monitoring of government IT systems. The major difference between the two bills is that the House version places cybersecurity authority in the White House, whereas the Senate measure - as redrafted last summer - would place much of the cybersecurity governance authority in the Department of Homeland Security. Several other cybersecurity bills are at various stages in Congress.
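The continuous-monitoring requirement above boils down to replacing periodic paper assessments with small automated checks that run on a schedule and flag deficiencies as they arise. Here is a minimal sketch of that idea; the check functions and host fields are hypothetical stand-ins for real scanners and agents:

```python
# Sketch of continuous monitoring: each control is an automated check
# run against a host's current state, so deficiencies surface immediately
# instead of in an annual paper report. Checks here are toy stand-ins.
from datetime import datetime

def check_patch_level(host):      # stand-in for a real patch scanner
    return host.get("patched", False)

def check_av_signatures(host):    # stand-in for an AV status agent
    return host.get("av_current", False)

CHECKS = [("patch level", check_patch_level),
          ("anti-virus signatures", check_av_signatures)]

def assess(host):
    """One monitoring pass; returns timestamped deficiencies."""
    now = datetime.utcnow().isoformat()
    return [(now, name) for name, check in CHECKS if not check(host)]

# A host with stale AV signatures produces one timestamped finding.
print(assess({"patched": True, "av_current": False}))
```

In a real deployment the checks would query agents or scanners and the pass would run on a timer; the structural point is that compliance becomes a function of live state, not of documents.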

Friday, April 30, 2010

Insider Steals $2 Million from 4 Credit Unions

April 30, 2010 - Linda McGlasson, Managing Editor
Bank Information Security Articles

In a case of insider fraud, a Utah computer consultant was sentenced to five years in prison for stealing nearly $2 million from four Utah credit unions by programming extra deposits for himself.

On April 27, a judge sentenced 43-year-old Zeldon Thomas Morris to 63 months in prison and ordered he pay back over $1.8 million.

Morris pleaded guilty to taking the funds from Deseret First Credit Union, First Credit Union, Alpine Credit Union and Family First Credit Union in 2008. The FBI says it discovered that Morris had been hired to help the credit unions with computer upgrades; instead, he used the passwords he was given to create accounts for himself.

Morris admitted to transferring the money to his joint business account, Lee and Morris Enterprises LLC. He remodeled his home and paid for two cars with the money. He begins his prison sentence June 18. Morris was investigated after a business partner saw something suspicious and reported it.

Friday, April 23, 2010

FISMA Compliance

FISMA Background

The Federal Information Security Management Act (FISMA) requires each federal agency to develop, document, and implement an agency-wide program to provide information security for the information and information systems that support the operations and assets of the agency, including those provided or managed by another agency, contractor, or other source.

Security Compliance in a Cloud

Sunday, April 18, 2010

Sunbelt warns on game console security risks

16 April 2010

Sunbelt Software has warned businesses to be aware of the growing security risks posed by network-connected game consoles in the work environment.
The problem, Sunbelt says, stems from the increased use of network-connected consoles in break and waiting areas, which heightens the chances of distributed denial of service (DDoS) and phishing attacks.

Sunbelt has issued its warning after a study of more than 200 senior IT figures in the public and private sector, which reveals that 39% had no idea about any of the documented threats that relate to online console gaming, including DDoS attacks, phishing and social engineering.

The study also found that 80% of those questioned said their organisations keep no record of who uses the game consoles within the workplace, making it almost impossible to track down the source of any data leaks or brand-damaging in-game behaviour that might take place via services such as Xbox Live and Sony PlayStation.

According to Sunbelt, console users participating in online play risk exposing their IP address, increasing the risk of that address being targeted for DDoS attacks designed to cripple the target's internet connection.

These types of attacks, which can render the organisation's connection unusable, are frequently used by opportunistic criminals and disgruntled players, the company says.

And, the IT security vendor adds, innocent players in the workplace are also potential targets for social engineering and phishing scams intent on extracting usernames, passwords and other sensitive data from users via chat forums, in-game speech and email.

Chris Boyd, a senior threat researcher with the firm, who recently joined Sunbelt from Facetime Communications, said that there are benefits to having game consoles in the workplace, as they can boost morale by providing staff with a fun diversion during lunch and other break periods.

"Consoles, meanwhile, in the lobby and waiting areas help convey a sense of a modern, fun and tech-savvy organisation", he said.

"However, these benefits must be weighed against the business implications of a threat, such as a DDoS attack, which can harm productivity significantly", he added.

"In most cases, the most practical option for an organisation is to disconnect consoles from the internet and use them for offline play only."




Tuesday, April 13, 2010

Missouri's Breach Notification Law

Posted by Stephen Wu, Esq. on Apr 13, 2010 11:06:23 AM

Missouri became the 45th state to enact a breach notification law. Mo. Rev. Stat. §§ 407.1500.1-407.1500.4. Missouri’s governor signed the enabling legislation, H.B. 62, into law last July, and it went into effect last August 28.



H.B. 62 covers “personal information” consisting of a name in combination with a driver’s license number, Social Security number, or account number together with an access code. Id. § 407.1500.1(9). These are the usual elements of “personal information” seen in California’s SB 1386. In addition, however, the Missouri law also covers personal information in the form of medical information, health insurance information, and identifier and access codes permitting a person to access a financial account. Id.



Businesses must notify Missouri residents if there is unauthorized access to residents’ personal information that the businesses are maintaining. Id. § 407.1500.2(1). No notification is necessary if, following an investigation and consultation with law enforcement, the business “determines that a risk of identity theft or other fraud to any consumer is not reasonably likely to occur as a result of the breach.” Id. § 407.1500.2(5). A business making such a determination must record it in writing and preserve the writing for five years. Id. In addition, a business may delay notification if law enforcement informs the person that notification may impede a criminal investigation. Id. § 407.1500.2(3).
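The notification duty described above can be sketched as a small decision helper. This is one reading of the article's summary of the statute, offered purely as an illustration; it is not legal advice and the input names are hypothetical:

```python
# Illustrative decision helper for the Missouri notification duty as
# summarized above: notify unless a documented written determination
# finds fraud not reasonably likely, and delay (not excuse) notification
# when law enforcement requests it.
def must_notify_now(risk_reasonably_likely: bool,
                    determination_in_writing: bool,
                    law_enforcement_delay: bool) -> bool:
    if not risk_reasonably_likely and determination_in_writing:
        return False      # exemption; the writing must be kept five years
    if law_enforcement_delay:
        return False      # notification delayed, not excused
    return True
```

Note that the no-risk exemption only applies when the determination is recorded in writing; an undocumented determination does not excuse notification.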


The Missouri law states that the Attorney General has the “exclusive authority” to bring an action for damages or a civil money penalty. The “exclusive authority” phrase implies that there is no private right of action. The maximum penalty the A.G. may seek is $150,000 for one breach or a “series of breaches of a similar nature that are discovered in a single investigation.” Id. § 407.1500.4.


Stephen S. Wu

Partner, Cooke Kobrick & Wu LLP

http://www.ckwlaw.com

swu@ckwlaw.com

Friday, April 9, 2010

Real-world PCI-DSS: identity is key

March 3, 2010 - 10:08 A.M.

Amir Lev
Security Levity


In this week's Security Levity, I'm interviewing Abhilash V. Sonwane, vice president of product management at Cyberoam. Abhilash has extensive experience building credit card data loss-prevention solutions that help organizations achieve regulatory compliance. I'm sure you'll agree that in this interview, he brings some thoughtful insights into real-world Payment Card Industry Data Security Standard (PCI-DSS) compliance and the importance of user identity.


Abhilash, give us a quick backgrounder on PCI, as a starting point...

Here's how we describe it ... our elevator pitch, if you will.

PCI-DSS aims to give cardholders the assurance that their card details are safe and secure when their debit or credit card is offered at the point-of-sale. To be compliant with the standards, merchants and other service providers holding cardholder data need to do 12 things:

1. Use a firewall.

2. Change default passwords and other vendor-supplied security parameters.

3. Protect stored cardholder data.

4. Encrypt data in transmission.

5. Use anti-virus software and keep it updated.

6. Develop and maintain secure systems and applications.

7. Keep access to cardholder data on a need-to-know basis.

8. Assign a unique ID to each internal user.

9. Restrict physical access to the data.

10. Track and monitor all access.

11. Regularly test security systems and processes.

12. Maintain an information security policy.
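As a rough illustration, the 12 requirements lend themselves to a simple self-assessment checklist. The sketch below is illustrative only; the requirement wording paraphrases the list above and is not the official PCI-DSS text:

```python
# Illustrative sketch: the 12 PCI-DSS requirements as a self-assessment
# checklist. The gap report lists anything not yet marked satisfied.
PCI_DSS_REQUIREMENTS = [
    "Install and maintain a firewall",
    "Change vendor-supplied default passwords",
    "Protect stored cardholder data",
    "Encrypt cardholder data in transmission",
    "Use and update anti-virus software",
    "Develop and maintain secure systems and applications",
    "Restrict cardholder data access to need-to-know",
    "Assign a unique ID to each internal user",
    "Restrict physical access to cardholder data",
    "Track and monitor all access",
    "Regularly test security systems and processes",
    "Maintain an information security policy",
]

def gap_report(completed: set) -> list:
    """Return the requirements (1-based numbering) not yet satisfied."""
    return [f"{i}. {req}" for i, req in enumerate(PCI_DSS_REQUIREMENTS, 1)
            if i not in completed]

# Example: a merchant that has done everything except logging and testing.
print(gap_report({1, 2, 3, 4, 5, 6, 7, 8, 9, 12}))
```

A real assessment would, of course, attach evidence and testing procedures to each item rather than a bare yes/no flag.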


Those seem like sensible precautions.


[Laughs] You'd be surprised how many merchants weren't implementing some of those best practices, or were implementing them in a half-hearted way. That's why the industry got together to form the PCI Security Standards Council.


OK, so any thoughts on which of those requirements are most important?

From the perspective of actual compliance with the regulations, they're all important. But it's too easy to forget the need to link true user identity to network security. It's a fundamental part of PCI compliance, but too often overlooked.

You really need complete visibility into who is doing what in the network, as well as access controls based on the user’s work profile. Identity becomes particularly significant in dynamic environments like retail stores, e-stores, hospitality, banking and other service provider industries -- anywhere that multiple users and customer service executives work in shifts over shared machines.


What does that mean for technologies that help enterprises comply with PCI?

A chosen technology should allow user-identity-based access policies -- based, for example, on work profile, department and individual user. It shouldn't depend on IP addresses to identify internal users. The point is that you need to have protection even in often-changing environments that incorporate DHCP, Wi-Fi and shared machines.

In other words, you need to bind user identity to security features.
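A minimal sketch of what "bind user identity to security features" can look like in code, assuming a hypothetical set of work profiles and resources: policy is keyed to the authenticated user's profile, never to the machine's IP address, so it survives DHCP leases, Wi-Fi roaming and shared terminals.

```python
# Identity-based access control sketch: access decisions depend on the
# authenticated user's work profile, not on which machine (IP) they use.
# Profile and resource names are hypothetical examples.
POLICIES = {
    "cashier":    {"pos_app"},
    "supervisor": {"pos_app", "refund_app"},
    "auditor":    {"reports", "access_logs"},
}

def is_allowed(user_profile: str, resource: str) -> bool:
    return resource in POLICIES.get(user_profile, set())

print(is_allowed("cashier", "refund_app"))     # cashiers cannot issue refunds
print(is_allowed("supervisor", "refund_app"))  # supervisors can
```

With policies structured this way, two people sharing the same terminal in different shifts get different access, which is exactly the shared-machine scenario Sonwane describes.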


OK, that sounds reasonable, but can you tell us why you think user identity is so important?

It allows granular policy creation in enterprises. You're able to create schedule-based or temporary web access policies, or policies allowing certain people access to applications like IM and P2P but which restrict file transfer over these functions to ensure data security.

You also need to be able to easily make dynamic changes to security policies -- while accounting for user movement in the network -- and maintain visibility into network access by individual users. This enables enterprises to modify the user access policies for tighter security controls and to prevent probable security breaches.


But this all sounds like a lot of extra work for IT administrators. How can they be expected to track users' movements?

In the real world, identity isn't an island. You need to integrate with Active Directory, LDAP, RADIUS or an enterprise's custom internal database.

Centralized authentication, authorization and single sign-on maximizes security, employee productivity and convenience -- particularly in shared workstation spaces.
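As a sketch of the centralized-authentication idea, the toy directory below stands in for Active Directory/LDAP/RADIUS: every application delegates credential checks to one service and reuses its session tokens, which is what gives shared workstations single-sign-on semantics. The `Directory` class is an illustrative stand-in, not a real LDAP client:

```python
# Centralized authentication sketch: one directory service holds salted
# password hashes and issues session tokens that all apps can validate,
# instead of each app keeping its own password store.
import hashlib
import secrets

class Directory:
    def __init__(self):
        self._users = {}      # username -> (salt, password hash)
        self._sessions = {}   # session token -> username

    def add_user(self, username, password):
        salt = secrets.token_bytes(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        self._users[username] = (salt, digest)

    def sign_on(self, username, password):
        """Authenticate once; return a token every application can reuse."""
        salt, digest = self._users.get(username, (b"", b""))
        attempt = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        if secrets.compare_digest(attempt, digest):
            token = secrets.token_hex(16)
            self._sessions[token] = username
            return token
        return None

    def whoami(self, token):
        """Any application resolves a token back to the signed-on user."""
        return self._sessions.get(token)

d = Directory()
d.add_user("alice", "s3cret")
token = d.sign_on("alice", "s3cret")
print(d.whoami(token))  # alice
```

The design choice worth noting is that applications never see passwords at all; they only ever exchange and validate tokens with the central service.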


So, in summary, focusing on user identity is a Good Idea, especially in enterprises where multiple users share the same machine.


In fact, identifying users and taking proactive action isn't just a good idea -- it's a key part of PCI-DSS compliance.


I want to make this an interactive place: where I can answer questions and cover topics that you suggest. Feel free to add comments and ask Amir!


When he's not interviewing PCI-DSS experts, Amir Lev is the CTO, President, and co-founder of Commtouch (NASDAQ:CTCH), an e-mail and Web defense technology provider. MORE...


Disclosure: Cyberoam, a division of Elitecore Technologies, has been a partner of Commtouch since 2007, when the company licensed the Commtouch RPD anti-spam engine as part of its identity-based UTM appliance. However, no consideration has been exchanged in respect of this interview, and Amir Lev retained full editorial control.

Tuesday, April 6, 2010

'Free Webinar from SkyView Partners - Implementing Object Level Security'

Award-winning webinar by Mrs. Carol Woodbury, President of SkyView Partners.

Date: April 20, 2010, 10:00 a.m. Pacific.

More Heartland-Related Fraud Detected (source: BankInfoSecurity.com)

A Florida credit union must issue 12,000 new debit cards after new fraud attempts traced back to the Heartland Payment Systems data breach.

MidFlorida Federal Credit Union is taking this action, according to chief operating officer Kathy Britt, because of the continued risk of fraud.

Britt says the $1 billion-asset, Lakeland, FL-based credit union already reissued new cards to about 5,000 of its members in 2009, after the breach was made public. Britt says the new replacements follow recent fraud attempts on cards involved in the Heartland breach.

The credit union has about 80,000 debit card holders. The credit union sent notices out to affected cardholders March 26, telling them they will receive new cards. Britt says customers are being asked to review their accounts for possible suspicious activity.

Heartland, a New Jersey-based payment processing company, announced a major data breach in January 2009. The largest such breach on record, it involved 130 million credit and debit card transactions. The breach affected MidFlorida customers who used their debit cards at retailers on Heartland's network.

Albert Gonzalez, the mastermind behind the Heartland breach and similar incidents, was sentenced to concurrent prison terms on March 25 and 26.

MidFlorida FCU is not the first institution that has reported new fraudulent activity related to the Heartland breach. In March, First National Bank of Durango in Colorado came forward, saying it was forced to replace 5,000 debit cards because of fraudulent transactions.

Monday, April 5, 2010

How the Credit CARD Act Will Affect Types of Credit Cards

By Allie Johnson
Thursday, April 1, 2010
Changes abound for rewards, low interest and student cards.

How will the CARD act affect you? That depends in part on which type of credit card you've got in your wallet.

The combined impact of the economic downturn and the restrictions placed on credit card companies by the Credit CARD Act means card issuers will be changing how they do business in ways that will affect every credit card -- but the impact will vary depending on the type.

"I think we'll see a reverting back to the model of the 1980s -- annual fees and higher interest rates," says Dennis Moroney, research director for TowerGroup, a financial services industry research and consulting firm. "But in those days, everything was pretty plain vanilla -- there will be much more creativity now."

One by one for each of 10 types of cards, here's how experts see the CARD Act's impact:

Bad Credit Credit Cards

The CARD Act's crackdown on extremely high fees will severely curtail the ability of issuers to offer so-called "fee harvesting" credit cards -- cards with hefty upfront fees and extremely low credit limits -- geared toward people with bad credit, experts say.

"I think the more reputable issuers, if they were issuing these cards in the past, are going to be much more reluctant to do so now," Moroney says. Issuers that do market cards geared toward consumers in the subprime market will have to strike a balance between charging enough to cover the increased risk and following the new law, according to Ken Paterson, vice president, research operations/director credit advisory service for Mercator Advisory Group, a consumer payments industry research and consulting firm. One immediate impact was the introduction of high-rate cards to replace high-fee cards: One card issuer, First Premier, experimented with. To date, no one has followed its lead.

"One of the silver linings of the CARD Act is that it has built in more protections against some of the more egregious pricing that sometimes creeps into that market," Paterson says. In the future, Moroney predicts customers with shaky credit will gravitate toward prepaid cards and secured cards.

Balance Transfer Cards

For most consumers, being able to get a balance transfer card that offers a 0 percent, 1 percent or 2 percent interest rate on a transferred balance for much more than a year will become a thing of the past.

"Teaser rates aren't going to go away, but they're probably not going to be as lucrative for the consumer as they were -- you're going to see a higher rate and a shorter introductory term," says Jerry Straessle, president and CEO of JLS Associates, a consulting firm specializing in the credit and debit card industry. Even before the act's passage, card issuers were retreatingn from one-year introductory periods and toward the minimum of six months mandated by the CARD Act. Expect introductory rates of 7 percent to 9 percent or higher, Straessle predicts.

"The CARD Act is going to have upward pressure on rates simply because the ability to adjust rates on outstanding balances is severely limited now," Straessle says. Issuers "can't do anything about accounts that have protected balances, so they will book new accounts at higher rates of interest to make up for lost revenue from penalty fees and penalty interest."

However, there will always be issuers bucking the latest trend that make it worth shopping around. Citi, for example, just extended one of its 0 percent balance transfer card offers from a maximum of 12 months to a maximum of 15 months.

Business Cards

None of the provisions in the CARD Act apply to business credit cards. "So far, small business cards are unaffected by the Act -- only consumer cards were included," says Mercator's Paterson. "But it wouldn't surprise me if some of the improved disclosure that was legislated on the consumer side eventually found its way to the small business side too."

Though business owners should keep personal and business expenses separate, Paterson says the increased protections on the consumer side might push very small business owners away from business cards. "I haven't seen data evidence of this, but a one or two-person business -- a freelance programmer, artist or Web designer -- might say, 'My personal card works just fine for business purposes. I don't need a small business product.'"

Debit Cards

Debit cards have never been all that profitable for banks, but new rules on overdraft charges mean banks will make even less. Starting in July 2010, new customers will not be allowed to overdraft using their debit cards unless they opt in ahead of time. Overdraft fee income had been a big profit center for banks.

To help make up the lost revenue, many banks may start charging annual fees for debit cards, probably in the $20 to $30 range, Moroney says. Or, banks might charge for other services, such as financial planning or linking accounts to help customers avoid the embarrassment of having their card declined at a store, Robertson says.

Banks probably will get innovative; for example, providing more rewards debit cards and more hybrid credit/debit cards, as well as cards geared toward students who now cannot get credit cards because of the new law, experts say. Also, banks will reinforce responsible management of personal finances -- maybe with more programs similar to Bank of America's Keep the Change, in which the bank automatically rounds up each check card purchase to the nearest dollar and transfers the difference to the cardholder's savings account. "We'll see more products that tap into consumer appeal," Moroney says.
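The round-up mechanic described above is simple arithmetic; a sketch using integer cents (to avoid floating-point money errors) might look like this:

```python
# Round-up savings sketch: each purchase is rounded up to the next whole
# dollar and the difference is transferred to savings. Amounts are in
# integer cents so no floating-point rounding errors creep in.
def round_up_transfer(purchase_cents: int) -> int:
    """Cents to transfer to savings for one purchase."""
    return (100 - purchase_cents % 100) % 100

purchases = [350, 1299, 400]   # $3.50, $12.99, $4.00
total = sum(round_up_transfer(p) for p in purchases)
print(total)  # 50 + 1 + 0 = 51 cents saved
```

Note the outer `% 100`: it keeps an exact-dollar purchase from triggering a full one-dollar transfer.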

Gas Cards

The CARD Act will indirectly influence the most popular type of gas card -- the co-branded card, which typically is issued by a bank in partnership with an oil company, and offers perks and rewards to the customer, experts say.

"If there's a revolving feature, it's going to be more expensive," Straessle says, noting that there has been a lot of talk in the industry about controlling costs by paring down rewards. "If you get 5 cents in fuel credits per gallon of gas now, you can probably expect in the future it's going to be a lesser amount -- maybe a penny or two pennies less," Straessle says.

Low Interest Cards

In the near future, interest rates on fixed rate low interest cards, as well as cards with low introductory rates, likely will go up several points, and issuers will be even more selective about who gets these cards, experts say.

"Low interest is a lot less desirable for most card companies because they don't have the ability to change rates as readily as they did in the past" because of the CARD Act, says Beth Robertson, director of payments research for consulting firm Javelin Strategy & Research. "So low interest cards will be more for very valuable and very creditworthy transactors -- people who carry high balances, pay regularly, have good credit scores and have a high volume of transactions, probably more than $1,000 a month. Often someone in that category is someone who travels a lot on business and is purchasing airfare, hotel rooms and meals out, but it could also be someone who is especially wealthy and is spending money on higher-ticket items."

Prepaid and Gift Cards

The Credit CARD Act imposes prepurchase disclosure of certain fees, such as inactivity fees, associated with prepaid cards -- and mandates that the cards not expire before five years. The new rules for prepaid cards -- including gift certificates, reloadable prepaid cards and gift cards -- go into effect Aug. 22, 2010.

"In the past, some expired after a year -- if you still had money on it, you lost it," Straessle says. He predicts that, to make up for this lost revenue, issuers will start charging a higher upfront fee to get a prepaid card and also a higher fee to reload the card -- as high as the market will bear. "It will depend what they think they can do competitively," he says. "It's the logical place for additional revenue to happen because there are not many revenue sources in a prepaid cards program."

Reward Cards

Rewards card issuers already have started to move away from a mass-market mentality in which the goal is to create buzz around a rewards program and get as many people as possible to apply, according to John Bartold, vice president, Loyalty Solutions for Epsilon, a marketing services firm. "Issuers already have tightened up requirements for who gets into a loyalty or rewards program," Bartold says. "The recession and the indirect impact of the CARD Act are making issuers look at these things a little differently and a little more smartly."

Credit card companies have reams of data on their customers and probably will start using that data they've collected to target their customers in a more relevant way, Bartold says. "It's not going to happen right away, but I think we're going to start seeing cards more focused for certain types of lifestyles -- where consumers can find a card that matches them rather than a generic spend-a-dollar, get-a-point," Bartold says.

Card issuers might do that by creating a general program customers can tailor to their own preferences -- similar to the Discover CardBuilder approach -- or by creating a card targeted toward a specific group of consumers such as sports fans, eco-conscious consumers or music lovers. "For example, with music and entertainment, you could have a site where customers could download music, you could have a newsletter that reviews artists by genres, you could look at sponsoring a concert," Bartold says.

Student Cards

The days of the big credit card issuers setting up tables on college campuses and offering free pizza to entice throngs of students to sign up for easy credit are over. The CARD Act prohibits that type of marketing and requires anyone under 21 to prove a source of income or have a parent co-sign to get a card.

"We probably will see fewer student cards out there because the CARD Act restricts a lot of it," says Greg Meyer, community relations manager for Meriwest Credit Union in San Jose, Calif., who predicts more issuers will offer debit and prepaid cards geared toward teens and young adults.

New student cards probably will have higher interest rates and lower credit limits and will be treated more as a vehicle for building financial responsibility, according to TowerGroup's Moroney. "They might offer little things that will reinforce responsible behavior," he says. "'You paid your bill on time this month, Bob or Sally -- let us treat you to half off your next latte at Starbucks.' Or, it could be a discount on textbooks. To a college kid, that's a big deal. They could send the merchant promotions directly to a PDA with a bar code and the student could spend it immediately."

Tuesday, March 30, 2010

JC Penney tried to block publication of data breach

By Jeremy Kirk
March 30, 2010 07:00 AM ET

IDG News Service - Retailer JC Penney fought to keep its name secret during court proceedings related to the largest breach of credit card data on record, according to documents unsealed on Monday.

JC Penney was among the retailers targeted by Albert Gonzalez's ring of hackers, which managed to steal more than 130 million credit card numbers from payment processor Heartland Payment Systems and others. Gonzalez was sentenced to 20 years in prison on Friday in U.S. District Court for the District of Massachusetts.

In December, JC Penney -- referred to as "Company A" in court documents -- argued in a filing that the attacks occurred more than two years ago, and that disclosure would cause "confusion and alarm."

However, JC Penney's involvement was already suspected: the Web site StorefrontBacktalk had accurately reported in August 2009 that JC Penney was among the retailers targeted by Gonzalez's group.

Authorities in New Jersey, where the Gonzalez case started, agreed to keep JC Penney's identity secret, but the case was moved to Massachusetts, where authorities decided otherwise, prompting JC Penney's motion.

Disclosing Company A's identity "may discourage other victims of cybercrimes to report the criminal activity or cooperate with enforcement officials for fear of the retribution and reputational damage that may arise from a policy of disclosure as espoused by the government in this case," wrote JC Penney attorney Michael D. Ricciuti.

In a Jan. 12 filing, U.S. prosecutors argued for disclosure. "Most people want to know when their credit or debit card numbers have been put at risk, not simply if, and after, they have clearly been stolen," the government wrote. "The presumption of disclosure has an additional significant benefit, though, besides the right of the card holder to know when he has been exposed to risk."

The U.S. Secret Service had told JC Penney that its computer system had been broken into. The retailer's system had "unquestionably failed," but the Secret Service did not have evidence that payment card numbers were stolen, U.S. prosecutors wrote.

Another retailer, The Wet Seal, said in a statement issued Monday that it had also been targeted by Gonzalez's gang around May 2008. The Wet Seal has been referred to as "Company B" in court documents.

"We found no evidence to indicate that any customer credit or debit card data or other personally identifiable information was taken," the company said.

Other retailers affected by the breach included TJX, 7-Eleven, Hannaford Brothers, Dave & Busters, BJ's Wholesale Club, OfficeMax, Boston Market, Barnes & Noble, Sports Authority, Forever 21 and DSW.

Monday, March 29, 2010

Company says 3.3M student loan records stolen

By Jeremy Kirk
March 29, 2010 06:59 AM ET

IDG News Service - Data on 3.3 million borrowers was stolen from a nonprofit company that helps with student loan financing.

The theft occurred on March 20 or 21 from the headquarters of Educational Credit Management Corp. (ECMC), which services loans when student borrowers enter bankruptcy. The data was contained on portable media, said the organization, which is a dedicated guaranty agency for Virginia, Oregon and Connecticut.

The data included names, addresses, birth dates and Social Security numbers but no financial information such as credit card numbers or bank account data, ECMC said in a news release.

Law enforcement has been notified. "ECMC is cooperating fully with local, state and federal law enforcement agencies conducting the investigation," it said in a statement.

ECMC will send a written notification to affected borrowers "as soon as possible" and offer them free services from Experian, a credit monitoring agency.

Data loss can occur in a variety of ways, including by remote hacking or employee theft. ECMC didn't say whether the data taken was encrypted.

The information could be useful for data thieves, who use personal information to apply for loans and credit cards or to assemble portfolios for larger identity theft schemes.

ECMC's data loss is significant but far short of some of the largest incidents.

More than 130 million credit card numbers were stolen around 2008 from Heartland Payment Systems, an attack ranked as the largest to date by DataLossDB, which tracks incidents. One of the hackers, Albert Gonzalez, was sentenced to 20 years in prison on Friday in U.S. District Court for the District of Massachusetts.

In 2006, a laptop and hard drive containing personal information of 26.5 million military veterans and their spouses were stolen from the home of a U.S. Department of Veterans Affairs employee.

Tuesday, March 23, 2010

Online corporate banking channels are under threat from sophisticated and sustained attacks by malicious sources

23 March 2010

Online corporate banking channels are under threat from sophisticated and sustained attacks by malicious sources. According to annual figures released by the UK Cards Association, 'phishing' attacks in the UK rose by 16% in 2009, resulting in the total amount of online banking losses hitting £59.7m, up 14% year-on-year.

One particularly prevalent type of fraud boosting these numbers is the so-called 'man-in-the-browser' attack. Basic Trojan viruses have existed for many years; building on them, online banking became susceptible to 'man-in-the-middle' attacks, in which the hacker places himself between the corporate and its bank, intercepting and modifying the corporate's online instructions for his own ends. Banks were able to tackle this fraud, however, because the hacker's messages came from a different IP address than the corporate's, making the fraud detectable. Unfortunately this is not the case with man-in-the-browser attacks - here the Trojan embeds itself in an internet browser application on the user's computer. When the user logs on to specific online banking sites, the Trojan is activated and intercepts and manipulates data as it is communicated from the legitimate user's PC to the online banking system. All the while, the traffic appears to come from the user's legitimate IP address.

So how can banks guard against this type of fraud? One method is to profile a user's account, keeping a record of the typical funds that flow in and out and comparing any suspicious activity against these regular trends. Authentication could also be enhanced to confirm that the user transferring funds is the genuine bank client rather than a malicious source - banks may in future look to 'multiband' authentication, requiring a secondary device (such as a smartphone) to confirm online banking transactions. As with all fraud, the perpetrators are inventive and cunning. Banks and their clients have to respond to these challenges in kind, while ensuring they do not adversely affect the online banking experience.
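The account-profiling approach described above can be sketched as a simple statistical check. This is an illustrative toy, not how any bank actually implements it: the `is_suspicious` function and the cutoff of three standard deviations are demonstration choices.

```python
from statistics import mean, stdev

def is_suspicious(history: list[float], amount: float, z_cutoff: float = 3.0) -> bool:
    """Flag a transfer whose amount deviates sharply from the
    account's historical pattern (a crude z-score profile)."""
    if len(history) < 2:
        return False  # not enough history to build a profile
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > z_cutoff

# Typical payroll-sized transfers, then a sudden large outlier:
history = [1200.0, 1150.0, 1300.0, 1250.0, 1180.0]
print(is_suspicious(history, 1225.0))   # in line with the profile
print(is_suspicious(history, 25000.0))  # far outside it
```

A real profiling engine would also weigh payee history, timing and velocity of transfers, which is precisely why man-in-the-browser manipulation of an otherwise normal-looking session is hard to catch with amounts alone.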

Source:Ben Poole, Editor GTNews

Sunday, March 7, 2010

Argos allegedly emails out embedded HTML payment card credentials

04 March 2010 (source: infosecurity.com (UK))

Reports are coming in that discount retailer Argos, which allows customers to buy from its website as well as order goods online for pickup from one of its many stores, has allegedly been mailing out customer payment card details – including the three- and four-digit CVV codes normally found on the signature strip or the front of the card – in its confirmatory emails.

According to Barry Collins of PC Pro, emails sent out to Argos customers have – embedded in the HTML code of the message, and so invisible in a normal message frame – complete details of the customer's payment card.

The card verification value (CVV), Infosecurity notes, is normally only found physically printed on the payment card, and is not stored in the magnetic stripe or smart card chip data. In theory, since the CVV is not printed on a retailer receipt, the only person who should know the CVV is – quite literally – the holder of the card.

As Collins said when reporting the apparent security faux pas, "customers clicking on that web link would therefore leave plain text details of their card numbers in their browser web history, which could be particularly problematic on shared or public PCs, such as those used by web cafes."

"It would also leave the customers' details stored in the server logs that are maintained by employers and ISPs, as well as Argos' own web analytics software, which logs the URLs used to access its website", he explained.

The flaw was apparently spotted by Paul Lomax, PC Pro's chief technology officer, who ordered goods from Argos' website and later had his card details compromised.

"PC Pro reader Tony Graham, who alerted us to the flawed emails in the first place, also had his card details stolen after placing an order with Argos, although there's no evidence to tie Argos to the credit-card thefts," said Collins in his report on the saga.

Commenting on the apparent security problem, Ed Rowley, M86 Security's product manager, said that organisations who trade online need to be extra careful about what and how information – especially financial data – is exchanged.

"It is incomprehensible that this credit card data was sent out in an unencrypted format; even if the sensitive information was not visible in the main body it should have been protected from being sent out. A good email content filtering product could have enforced encryption or blocked this data from being sent out at all by Argos, using standard or default email security rules", he said.

"This case highlights the need to filter both inbound and outbound email in order to guard against malware coming in but also to block sensitive information from leaking out", he added.

"It's astonishing that larger companies are not using these well established security tools and procedures."
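The outbound content filtering Rowley describes rests on the fact that card numbers have recognisable structure. A minimal sketch of such a check, assuming a simple digit-run regex plus the standard Luhn checksum; the function names are illustrative, and commercial DLP products do far more (context, encodings, attachments):

```python
import re

def luhn_valid(digits: str) -> bool:
    """Luhn checksum, used to weed out random digit runs
    that are not actual card numbers."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:   # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# 13-16 digits, optionally separated by spaces or hyphens:
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def find_card_numbers(text: str) -> list[str]:
    """Return candidate card numbers found in outbound message text."""
    hits = []
    for m in CARD_RE.finditer(text):
        digits = re.sub(r"[ -]", "", m.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            hits.append(digits)
    return hits

# '4111111111111111' is a well-known Luhn-valid test number:
print(find_card_numbers("order ref 4111 1111 1111 1111"))  # → ['4111111111111111']
```

A gateway applying a check like this to outgoing mail bodies, including the raw HTML, could have blocked or forced encryption of the Argos emails before they left the network.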

The value of vulnerability checks

Vulnerability scanning got its start as a tool for the bad guys; now it's helping companies find exposed network ports and at-risk applications.
Brian Robinson of IT Security

For something that can be such an effective weapon against those who want to damage a network, it's ironic that vulnerability scanning got its start as a tool for the bad guys. Before they can get into networks, hackers need to know where the most vulnerable spots are in an enterprise's security. That means using scanning tools to trawl for such things as open network ports or poorly secured applications and operating systems.
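The port trawling mentioned above boils down to attempting TCP connections and seeing which succeed. A minimal connect-scan sketch; the function name is illustrative, and real scanners are far more careful about timing, stealth and protocol fingerprinting:

```python
import socket

def open_ports(host: str, ports: list[int], timeout: float = 0.5) -> list[int]:
    """Report ports where a TCP three-way handshake completes."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means connection succeeded
                found.append(port)
    return found

if __name__ == "__main__":
    # Demo against a throwaway local listener -- no real network needed.
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))   # the OS picks a free port
    srv.listen(1)
    port = srv.getsockname()[1]
    print(open_ports("127.0.0.1", [port]))  # the listener's port is reported open
    srv.close()
```

The same handful of lines, pointed at someone else's address space, is exactly what attackers automate; pointed at your own, it becomes the white-hat inventory tool the article goes on to describe.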

In the past few years these intentions have been turned around, to where scanning tools now give the guys in the white hats a good idea of where the vulnerabilities are and a chance to repair them before the hackers get there.

At least they provide the potential for that. The fact is, many companies don't seem to be taking advantage of these tools or, if they do have them, are not making much use of them. Gartner Research believes as many as 85% of the network attacks that successfully penetrate network defenses exploit vulnerabilities for which patches and fixes have already been released.


Endless Exploits

Now there is the rapidly expanding universe of Web-based applications for hackers to exploit. A recent study by security vendor Acunetix claimed that as many as 70% of the 3,200 corporate and non-commercial organization Web sites its free Web-based scanner had examined since January 2006 contained serious vulnerabilities and were at immediate risk of being hacked.

A total of 210,000 vulnerabilities were found, the company said, for an average of some 66 vulnerabilities per Web site, ranging from potentially serious ones such as SQL injections and cross-site scripting to relatively minor ones such as easily available directory listings.

“Companies, governments and universities are bound by law to protect our data,” said Kevin Vella, vice president of sales and operations at Acunetix. “Yet web application security is, at best, overlooked as a fad.”


Patch Patrol

Vulnerability scanners seek out known weaknesses, using databases that are constantly updated by vendors to track down devices and systems on the network that are open to attack. They look for such things as unsafe code, misconfigured systems, malware and patches and updates that should be there but aren’t.
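The database-driven matching described above amounts to comparing what is installed against known-vulnerable version ranges. A toy sketch, assuming a hypothetical software inventory and made-up advisory entries (none of these package names or versions correspond to real advisories):

```python
# Toy vulnerability "database": package -> highest known-vulnerable version.
# These entries are invented for illustration, not real advisories.
VULN_DB = {
    "exampled": (2, 4, 1),   # versions <= 2.4.1 have a known flaw
    "webappx": (1, 0, 9),
}

def parse_version(v: str) -> tuple[int, ...]:
    """'2.3.0' -> (2, 3, 0), so versions compare numerically."""
    return tuple(int(p) for p in v.split("."))

def unpatched(installed: dict[str, str]) -> list[str]:
    """Return packages whose installed version falls inside a
    known-vulnerable range."""
    return [name for name, ver in installed.items()
            if name in VULN_DB and parse_version(ver) <= VULN_DB[name]]

inventory = {"exampled": "2.3.0", "webappx": "1.1.0", "other": "5.0"}
print(unpatched(inventory))  # → ['exampled']
```

Real scanners do the same thing at scale, which is why the constantly updated vendor database matters as much as the scanning engine itself: an out-of-date database silently passes vulnerable systems.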

They also have several plus factors. They can be used to do a "pre-scan" scan, for example, to determine what devices and systems are on the network. There's nothing so vulnerable as something no one knew was there in the first place, and it's surprising how often such devices turn up in large and sprawling enterprises.

Many scanners can also be set to scan the network after patches have been installed to make sure the patches do what they are supposed to do. What vulnerability scanners can't do is the kind of active blocking defense carried out by firewalls, intrusion prevention systems and anti-malware products. By working in combination with those tools, though, vulnerability scanners can make what they do more accurate and precise.


Passive Aggressive

Vulnerability scanners come as either passive or active devices, each of which has its advantages and disadvantages. Passive scanners are monitoring devices that work by sniffing the traffic that passes over the network between systems, looking for anything out of the ordinary. Their advantage is that they have no impact on the operation of the network and so can work 24 x 7 if necessary, but they can miss vulnerabilities, particularly on quieter parts of a network.

Active scanners probe systems in much the way hackers would, looking for weaknesses through the responses devices make to the traffic the scanners send to them. They are more aggressive and in some ways more thorough than passive scanners, but they can cause service disruptions and crash servers.

Many people see the two as complementary and recommend using passive and active scanners alongside each other. The passive scanners can provide the more continuous monitoring, while active scanners can be used periodically to flush out the cannier vulnerabilities.


Software vs. Hardware

The scanners can also come as either software-based agents placed directly on servers or workstations, or as hardware devices. Host-based scanners can use up processor cycles on the system, but are generally considered more flexible in the kinds of vulnerabilities they can scan. The network-based scanners are plug-and-play hardware devices that are self-contained and need less maintenance than software agents.

The focus of vulnerabilities has been changing over the past several years. On the one hand, organizations have become savvier about protecting their networks and systems, and hackers have had a harder time penetrating those defenses. At the same time, as Web-based services have become the lifeblood of many businesses, hackers have found a goldmine of potential exploits.

That's because Web traffic flows back and forth primarily through Port 80 on a network, which has to be kept open if those Web-based services are to be available to a company's customers and business partners.

It's a hard-to-defend weak spot in enterprise defenses. Once hackers gain access to Web applications, they can use them to get information from databases, retrieve files from root directories, or use a Web server to send malicious content in a Web page to unsuspecting users.


Interpreting the Results

Vulnerability scanning works with Web applications by launching simulated attacks against those applications and then reporting the vulnerabilities it finds, with recommendations on how to fix or eliminate them.
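A simulated attack can be as simple as submitting a marker payload and checking how it comes back. Here is a minimal sketch of one such probe for reflected cross-site scripting; the probe string and function name are illustrative, and real scanners test many payloads, encodings and injection contexts:

```python
import html

# Hypothetical marker a scanner might submit in a form field or URL parameter.
PROBE = "<xss-probe-1234>"

def reflected_unescaped(payload: str, response_body: str) -> bool:
    """Flag a potential XSS issue: the probe came back verbatim
    rather than HTML-escaped (or not at all)."""
    return payload in response_body and html.escape(payload) not in response_body

# Simulated responses from a page that echoes a search term:
vulnerable = f"<p>You searched for {PROBE}</p>"          # echoed raw
safe = f"<p>You searched for {html.escape(PROBE)}</p>"   # properly escaped

print(reflected_unescaped(PROBE, vulnerable))  # flagged
print(reflected_unescaped(PROBE, safe))        # passes
```

The caution that follows applies directly here: a hit from a check like this is a lead to investigate in context, not proof of an exploitable flaw.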

However, as powerful an addition as vulnerability scanning can be to the overall security of an enterprise, some observers advise caution in interpreting those results.

Kevin Beaver, an independent security consultant with Atlanta-based Principal Logic, LLC, says it takes a combination of the vulnerability scanner and a human knowledge of the network and context in which the scans were carried out to accurately interpret the results.

Left to themselves, he says, scanners will tend to spit out information that their vendors think is important. What's also needed is an understanding of what was being tested at the time, how it was being tested, why the vulnerability is exploitable, and so on. That will show whether vulnerabilities flagged as high priority actually are important in a particular user's environment, and therefore whether it's worthwhile putting in the effort to remediate them.

You absolutely need vulnerability scanners, Beaver said, because they take a lot of the pain out of security assessments.

“But you cannot rely on them completely,” he said. “A good tool plus the human context is the best equation for success.”

Friday, March 5, 2010

RSA: Schmidt announces transparent national US cybersecurity strategy

02 March 2010

Howard Schmidt, cybersecurity advisor to President Obama, announced the launch of www.whitehouse.gov/cybersecurity - a new web page intended to demonstrate the US government's commitment to a transparent cybersecurity strategy - during his keynote at RSA Conference 2010 in San Francisco.

Ultimately, the National cybersecurity strategy has two main objectives, explained Schmidt: “to improve our resilience to cyber incidents, and to reduce the cyber threat”.


The commendable web page, now live, details the President's Cyberspace Policy Review and identifies 10 near-term actions to support the cybersecurity strategy. These are:

1. Appoint a cybersecurity policy official responsible for coordinating the Nation's cybersecurity policies and activities.
2. Prepare for the President's approval an updated national strategy to secure the information and communications infrastructure.
3. Designate cybersecurity as one of the President's key management priorities and establish performance metrics.
4. Designate a privacy and civil liberties official to the NSC cybersecurity directorate.
5. Conduct interagency-cleared legal analyses of priority cybersecurity-related issues.
6. Initiate a national awareness and education campaign to promote cybersecurity.
7. Develop an international cybersecurity policy framework and strengthen our international partnerships.
8. Prepare a cybersecurity incident response plan and initiate a dialog to enhance public-private partnerships.
9. Develop a framework for research and development strategies that focus on game-changing technologies with the potential to enhance the security, reliability, resilience, and trustworthiness of digital infrastructure.
10. Build a cybersecurity-based identity management vision and strategy, leveraging privacy-enhancing technologies for the Nation.
Schmidt, who announced his pride “at representing President Obama”, admitted that there is “still a long way to go” to achieving the actions outlined above. “Over the past year, thousands of hours have gone into this policy”, Schmidt said. “We all know that collaboration is important, and we recognise that the government and industry need to work together. These vulnerabilities are shared, so we need to work together.”

Schmidt acknowledged the importance of transparency when asking the industry for help. "In order to be successful, we must seek new and innovative partnerships, with government, industry, academia, and the public. Working together is the most powerful tool we have."

In reference to the ten initiatives outlined in the cybersecurity strategy, Schmidt informed the audience that they are certainly making headway. "We're making sure that President Obama and Congress are thinking about cybersecurity in everything they do. Leadership at the top is very important", he said.

“Appointing a cybersecurity policy official and designating a privacy and civil liberties official has been done. Here I am”, exclaimed Schmidt. “Updating the strategy is a work in progress. While there are a lot of things that will remain the same, it has to be updated.

"There is a working group that has been divided into four tracks dedicated to the international awareness campaign. There have been meetings, there are plans, and there are milestones. We're making sure that the policy and framework address the international threat, and we're ensuring that the cybersecurity response plan looks not only at how we co-ordinate, but how we get it right".

In regard to action 9, Schmidt assured the audience that they have been looking at specific projects and economics.

In conclusion, Schmidt declared cybersecurity “a shared responsibility for all of us. We can only do what we can do, and that’s all we’re asking”.


Thursday, February 18, 2010

Simplifying Patch Management | IBM i | IBM Systems Magazine

For patch and asset management of Windows servers and workstations, SRC Secure Solutions is a reseller of Shavlik NetChk Protect, which is in use at more than 12,500 companies worldwide.

Monday, February 15, 2010

CMS zips up sensitive data for security in transit (source: GCN, Government Computer News)

DATA SECURITY
Jun 25, 2009
“We could send them encrypted [data], but the recipients might not have any way to decrypt it,” said Ray Pfeifer, senior technical adviser at CMS’ Office of the Chief Information Security Officer.

As a result, tapes full of data that other users needed were piling up at the data center. “I said there has to be a way to encrypt data on any type of media and provide a capability at the other end to decrypt it,” Boughn said.

CMS officials eventually settled on PKware’s SecureZIP PartnerLink for the task. The company does not provide an engine or algorithms for encrypting data, but it offers a platform for encrypting data and a virtual container in which to ship it.

“We do not consider ourselves an encryption company,” said George Haddix, PKware’s chairman and chief executive officer. “We deliver products that allow people to use encryption in managing their data. The Zip archive, I think, is an ideal container for data-centric security.”

PKware maintains the Zip data compression and archiving format, which works with most computing platforms. Many organizations have used it for years to transmit and store files. The Zip archive container includes metadata about the files so users can identify and individually retrieve them.

“We developed the concept about four years ago that this could be done in a secure manner by encrypting, signing or otherwise authenticating the files contained in the container,” Haddix said.

The company implemented the concept by extending the Zip archive definition to include the ability to use a strong password or pass phrase for access control and digital certificates for signing or encrypting the data being compressed.

The PartnerLink package requires customers to license the enterprise version of SecureZIP. Customer-sponsored partners can then download free software to decrypt and encrypt files using digital certificates that the customer provides. The 50 states are also preparing to send their data securely to CMS via PartnerLink.

SecureZIP can also be used with existing engines, and many mainframe customers, including CMS, use the hardware engines on their existing equipment’s co-processors.

“We’re taking advantage of that because it speeds things up,” Pfeifer said. The encryption engines are certified to meet Federal Information Processing Standard 140-2 for government encryption modules. CMS uses the Triple Data Encryption Standard.

Every year, the Centers for Medicare and Medicaid Services process about 1.25 billion Medicare claims, 1 billion prescription claims and 1 billion Medicaid files, along with claims for Children’s Health Insurance Programs — all on behalf of more than 90 million Americans.

“We’re a very large health insurer with a public health mission,” said Julie Boughn, director of the Office of Information Services and chief information officer at CMS. “We send and receive vast quantities of health care data.”

The data is not only voluminous but also a valuable public asset, she said. “Since we have so much of it, there is a lot of interest in using our information” for activities such as medical research and fraud investigations.

Some of the data is scrubbed of personally identifiable information and made available publicly for statistical purposes. Medical researchers often do not need to know the identity of the people whose information they are using, but some details are necessary to correlate data. On the other hand, law enforcement investigations typically require full information.

CMS has strict requirements for how partner organizations use data, and officials spell out the required levels of protection. But it is not enough for CMS’ partners to handle the data securely — CMS has to ensure that the information is secure while in transit.

That is not a simple task. The list of partners sending and receiving CMS information includes all 50 states and numerous health care providers and research institutions, each of which use different information technology systems.

When the Office of Management and Budget mandated that agencies secure data on mobile and portable devices in 2006, “we took that very seriously,” Boughn said. “We had already started down that path” by encrypting laptop PCs and using Pointsec to encrypt e-mail messages and data on USB drives.

As for data shared with partners, “I said I was not shipping any data out of our data center that is not encrypted,” Boughn said.

But that approach had unintended consequences. CMS is a mainframe shop, with IBM hardware that can encrypt data transferred to other media, such as the tape cartridges the agency sends to its partners. However, those recipients use everything from mainframes to servers and desktop PCs to access that data.

Another advantage of using SecureZIP to encrypt files for transit is that it also compresses the files by about 80 percent, an important consideration when dealing with the huge volumes of data CMS routinely sends. Using SecureZIP, a shipment of 25 tape cartridges can be compressed to five, Pfeifer said.
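A reduction on the order of 80 percent is plausible for record-oriented mainframe extracts, which are highly repetitive. A quick illustration using the standard Deflate algorithm that underlies the Zip format; the record layout below is invented purely for the demo:

```python
import zlib

# Fixed-width, record-like data with heavily repeated field values,
# similar in spirit to a mainframe claims extract (layout is made up):
record = b"000123456|SMITH     |VA|ACTIVE  |000000.00\n"
payload = record * 50_000   # a couple of megabytes of records

compressed = zlib.compress(payload, level=9)
ratio = 1 - len(compressed) / len(payload)
print(f"compressed {len(payload)} -> {len(compressed)} bytes "
      f"({ratio:.0%} reduction)")
```

Real claims data compresses less dramatically than this artificial example, but the structural redundancy of fixed-width records is exactly why compress-then-encrypt shrinks tape shipments so effectively.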

One issue recipients of encrypted data need to keep in mind is the format in which it will be used. Many recipients convert the mainframe data for use on other platforms, such as servers and desktop PCs. SecureZIP supports conversion from one format to another, “but you have to be aware you are doing the conversion of encrypted data at the other end,” Pfeifer said.

Overall, the tool is user-friendly, and customer support from PKware has been good, he said. “The time it takes for installation [ranges] from less than an hour for users of Windows to a few hours for a mainframe,” he added.

William Jackson is a senior writer for GCN and the author of the CyberEye column.