By Jeremy Kirk
March 30, 2010 07:00 AM ET
IDG News Service - Retailer JC Penney fought to keep its name secret during court proceedings related to the largest breach of credit card data on record, according to documents unsealed on Monday.
JC Penney was among the retailers targeted by Albert Gonzalez's ring of hackers, which managed to steal more than 130 million credit card numbers from payment processor Heartland Payment Systems and others. Gonzalez was sentenced to 20 years in prison on Friday in U.S. District Court for the District of Massachusetts.
In December, JC Penney -- referred to as "Company A" in court documents -- argued in a filing that the attacks occurred more than two years ago, and that disclosure would cause "confusion and alarm."
However, JC Penney's involvement was already widely suspected: the Web site StorefrontBacktalk had been the first outlet to accurately report, in August 2009, that JC Penney was among the retailers targeted by Gonzalez's group.
Prosecutors in New Jersey, where the Gonzalez case started, agreed to keep JC Penney's identity secret, but when the case was moved to Massachusetts, authorities there decided otherwise, prompting JC Penney's motion.
Disclosing Company A's identity "may discourage other victims of cybercrimes to report the criminal activity or cooperate with enforcement officials for fear of the retribution and reputational damage that may arise from a policy of disclosure as espoused by the government in this case," wrote JC Penney attorney Michael D. Ricciuti.
In a Jan. 12 filing, U.S. prosecutors argued for disclosure. "Most people want to know when their credit or debit card numbers have been put at risk, not simply if, and after, they have clearly been stolen," the government wrote. "The presumption of disclosure has an additional significant benefit, though, besides the right of the card holder to know when he has been exposed to risk."
The U.S. Secret Service had told JC Penney that its computer system had been broken into. The retailer's system had "unquestionably failed," but the Secret Service did not have evidence that payment card numbers were actually stolen, U.S. prosecutors wrote.
Another retailer, The Wet Seal, said in a statement issued Monday that it had also been targeted by Gonzalez's gang around May 2008. The Wet Seal has been referred to as "Company B" in court documents.
"We found no evidence to indicate that any customer credit or debit card data or other personally identifiable information was taken," the company said.
Other retailers affected by the breach included TJX, 7-Eleven, Hannaford Brothers, Dave & Busters, BJ's Wholesale Club, OfficeMax, Boston Market, Barnes & Noble, Sports Authority, Forever 21 and DSW.
Monday, March 29, 2010
Company says 3.3M student loan records stolen
By Jeremy Kirk
March 29, 2010 06:59 AM ET
IDG News Service - Data on 3.3 million borrowers was stolen from a nonprofit company that helps with student loan financing.
The theft occurred on March 20 or 21 from the headquarters of Educational Credit Management Corp. (ECMC), which services loans when student borrowers enter bankruptcy. The data was contained on portable media, said the organization, which is a dedicated guaranty agency for Virginia, Oregon and Connecticut.
The data included names, addresses, birth dates and Social Security numbers but no financial information such as credit card numbers or bank account data, ECMC said in a news release.
Law enforcement has been notified. "ECMC is cooperating fully with local, state and federal law enforcement agencies conducting the investigation," it said in a statement.
ECMC will send a written notification to affected borrowers "as soon as possible" and offer them free services from Experian, a credit monitoring agency.
Data loss can occur in a variety of ways, including by remote hacking or employee theft. ECMC didn't say whether the data taken was encrypted.
The information could be useful for data thieves, who use personal information to apply for loans and credit cards or to assemble portfolios for larger identity theft schemes.
ECMC's data loss is significant but far short of some of the largest incidents.
More than 130 million credit card numbers were stolen around 2008 from Heartland Payment Systems, an attack ranked as the largest to date by DataLossDB, which tracks incidents. One of the hackers, Albert Gonzalez, was sentenced to 20 years in prison on Friday in U.S. District Court for the District of Massachusetts.
In 2006, a laptop and hard drive containing personal information on 26.5 million military veterans and their spouses were stolen from the home of a U.S. Department of Veterans Affairs employee.
Tuesday, March 23, 2010
Online corporate banking channels are under threat from sophisticated and sustained attacks by malicious sources
23 March 2010
Online corporate banking channels are under threat from sophisticated and sustained attacks by malicious sources. According to annual figures released by the UK Cards Association, 'phishing' attacks in the UK rose by 16% in 2009, resulting in the total amount of online banking losses hitting £59.7m, up 14% year-on-year.
One particularly prevalent type of fraud boosting these numbers is the so-called 'man-in-the-browser' attack. Following on from the basic Trojan viruses that have existed for many years, online banking became susceptible to 'man-in-the-middle' attacks, in which the hacker sits between the corporate and its bank, intercepting and modifying the corporate's online instructions for their own ends. Banks were able to tackle this fraud, however, because the hacker's messages came from a different IP address to the corporate's, making the fraud detectable. Unfortunately this is not the case with man-in-the-browser attacks - here the Trojan embeds itself in the internet browser on a user's computer. When the user logs on to specific online banking sites, the Trojan activates, intercepting and manipulating data as it travels from the legitimate user's PC to the online banking system. All the while, the traffic appears to come from the user's legitimate IP address.
So how can banks guard against this type of fraud? One method is to profile a user's account, keeping a record of the funds that typically flow in and out and comparing any suspicious activity against these regular patterns. Authentication could also be strengthened to confirm that the user transferring funds is the genuine bank client rather than a malicious source - banks may in future turn to 'multiband' authentication, requiring a secondary device (such as a smartphone) to confirm online banking transactions. As with all fraud, the perpetrators are inventive and cunning. Banks and their clients have to respond to these challenges in kind, while ensuring they do not degrade the online banking experience.
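To make the account-profiling approach concrete, here is a minimal sketch (our illustration, not part of the GTNews piece) of how a bank might flag an outgoing payment that deviates sharply from an account's history. The sample payment amounts, the z-score threshold and the flag_suspicious helper are all hypothetical assumptions.

```python
from statistics import mean, stdev

def flag_suspicious(history, new_amount, z_threshold=3.0):
    """Flag a payment whose amount deviates sharply from the account's history.

    history     -- past outgoing payment amounts (hypothetical data)
    new_amount  -- the payment currently being screened
    z_threshold -- how many standard deviations count as suspicious
    """
    if len(history) < 2:
        return True  # too little history to build a profile; escalate for review
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_amount != mu
    return abs(new_amount - mu) / sigma > z_threshold

# Example: an account that normally pays suppliers a few thousand pounds
past_payments = [2100, 1950, 2300, 2050, 2200]
print(flag_suspicious(past_payments, 2150))   # False: within the normal range
print(flag_suspicious(past_payments, 48000))  # True: hold for confirmation
```

A payment flagged this way would be routed to a stronger authentication step, such as the secondary-device confirmation described above, rather than processed automatically.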
Source:Ben Poole, Editor GTNews
Sunday, March 7, 2010
Argos allegedly emails out embedded HTML payment card credentials
04 March 2010 (source: infosecurity.com (UK))
Reports are coming in that discount retailer Argos, which allows customers to buy from its website as well as order goods online for pickup from one of its many stores, has allegedly been mailing out customer payment card details – including the three- or four-digit CVV codes normally found on the signature strip or the front of the card – in its confirmation emails.
According to Barry Collins of PC Pro, emails sent out to Argos customers have – embedded in the HTML code of the message, and so invisible in the normal message view – complete details of the customer's payment card.
The card verification value (CVV), Infosecurity notes, is normally only found physically printed on the payment card, and is not stored in the magnetic stripe or smart card chip data. In theory, since the CVV is not printed on a retailer receipt, the only person who should know the CVV is – quite literally – the holder of the card.
As Collins said when reporting the apparent security faux pas, "customers clicking on that web link would therefore leave plain text details of their card numbers in their browser web history, which could be particularly problematic on shared or public PCs, such as those used by web cafes."
"It would also leave the customers' details stored in the server logs that are maintained by employers and ISPs, as well as Argos' own web analytics software, which logs the URLs used to access its website", he explained.
The flaw was apparently spotted by Paul Lomax, PC Pro's chief technology officer, who ordered goods from Argos' website and later had his card details compromised.
"PC Pro reader Tony Graham, who alerted us to the flawed emails in the first place, also had his card details stolen after placing an order with Argos, although there's no evidence to tie Argos to the credit-card thefts," said Collins in his report on the saga.
Commenting on the apparent security problem, Ed Rowley, M86 Security's product manager, said that organisations that trade online need to be extra careful about what information – especially financial data – is exchanged, and how.
"It is incomprehensible that this credit card data was sent out in an unencrypted format; even if the sensitive information was not visible in the main body it should have been protected from being sent out. A good email content filtering product could have enforced encryption or blocked this data from being sent out at all by Argos, using standard or default email security rules", he said.
"This case highlights the need to filter both inbound and outbound email in order to guard against malware coming in but also to block sensitive information from leaking out", he added.
"It's astonishing that larger companies are not using these well established security tools and procedures."
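The outbound filtering Rowley describes can be approximated with a simple content check. The sketch below is illustrative only, not any vendor's product: it scans an outgoing message body for digit runs that look like payment card numbers, using a regular expression plus the Luhn checksum to reduce false positives. The sample message and function names are hypothetical.

```python
import re

# Runs of 13-19 digits, optionally separated by spaces or dashes
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,19}\b")

def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum used by payment cards."""
    total = 0
    for i, d in enumerate(int(ch) for ch in reversed(number)):
        if i % 2 == 1:      # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def contains_card_number(message_body: str) -> bool:
    """Flag an outgoing email that appears to contain a valid payment card number."""
    for match in CARD_PATTERN.finditer(message_body):
        digits = re.sub(r"[ -]", "", match.group())
        if luhn_valid(digits):
            return True
    return False

# Hypothetical confirmation email with card data embedded in its HTML
body = '<input type="hidden" name="card" value="4111 1111 1111 1111">'
print(contains_card_number(body))  # True: block the message or force encryption
```

A gateway applying a rule like this could have blocked or encrypted the offending emails before they left the company's servers.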
The value of vulnerability checks
Vulnerability scanning got its start as a tool for the bad guys; now it's helping companies find exposed network ports and at-risk applications.
Brian Robinson of IT Security
For something that can be such an effective weapon against those who want to do damage to a network, it’s ironic that vulnerability scanning got its start as a tool for the bad guys. Before they can get into networks, hackers need to know where the most vulnerable spots are in an enterprise’s security. That means using scanning tools to trawl for such things as open network ports or poorly secured applications and operating systems.
In the past few years these intentions have been turned around, to where scanning tools now give the guys in the white hats a good idea of where the vulnerabilities are and a chance to repair them before the hackers get there.
At least they provide the potential for that. The fact is, many companies don’t seem to be taking advantage of these tools, or, if they do have them, they are not making much use of them. Gartner Research believes as many as 85% of the network attacks that successfully penetrate network defenses exploit vulnerabilities for which patches and fixes have already been released.
Endless Exploits
Now there is the rapidly expanding universe of Web-based applications for hackers to exploit. A recent study by security vendor Acunetix claimed that as many as 70% of the 3,200 corporate and non-commercial organization Web sites its free Web-based scanner had examined since January 2006 contained serious vulnerabilities and were at immediate risk of being hacked.
A total of 210,000 vulnerabilities were found, the company said, an average of some 66 vulnerabilities per Web site, ranging from potentially serious ones such as SQL injection and cross-site scripting to relatively minor ones such as easily accessible directory listings.
“Companies, governments and universities are bound by law to protect our data,” said Kevin Vella, vice president of sales and operations at Acunetix. “Yet web application security is, at best, overlooked as a fad.”
Patch Patrol
Vulnerability scanners seek out known weaknesses, using databases that are constantly updated by vendors to track down devices and systems on the network that are open to attack. They look for such things as unsafe code, misconfigured systems, malware, and patches and updates that should be there but aren’t.
They also have several plus factors. They can be used to do a “pre-scan” scan, for example, to determine what devices and systems there are on the network. There’s nothing so vulnerable as something no-one knew was there in the first place, and it’s surprising how often those turn up in large and sprawling enterprises.
Many scanners can also be set to scan the network after patches have been installed to make sure the patches do what they are supposed to do. What vulnerability scanners can’t do is the kind of active blocking defense carried out by firewalls, intrusion prevention systems and anti-malware products, though by working in combination with those tools, vulnerability scanners can make what they do more accurate and precise.
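As a rough illustration of the “pre-scan” discovery step mentioned above, the sketch below (our own example, not a production scanner) checks which of a handful of common TCP ports a host is listening on. The host address and the port list are placeholder assumptions, and such probes should only be run against systems you are authorised to test.

```python
import socket

COMMON_PORTS = [22, 25, 80, 443, 3389, 8080]  # placeholder list of ports worth checking

def open_ports(host, ports=COMMON_PORTS, timeout=0.5):
    """Return the subset of ports on which the host accepts a TCP connection."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                found.append(port)
    return found

if __name__ == "__main__":
    print(open_ports("127.0.0.1"))  # inventory the local machine as a harmless example
```

Real discovery scans sweep address ranges and far more ports, but the principle is the same: anything that answers is an asset that needs to be tracked and patched.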
Passive Aggressive
Vulnerability scanners come as either passive or active devices, each of which has its advantages and disadvantages. Passive scanners are monitoring devices that work by sniffing the traffic that passes over the network between systems, looking for anything out of the ordinary. Their advantage is that they have no impact on the operation of the network and so can run 24x7 if necessary, but they can miss vulnerabilities, particularly on quieter parts of a network.
Active scanners probe systems in much the way hackers would, looking for weaknesses through the responses devices make to the traffic the scanners send to them. They are more aggressive and in some ways more thorough than passive scanners, but they can cause service disruptions and crash servers.
Many people see the two as complementary and recommend using passive and active scanners alongside each other. The passive scanners can provide the more continuous monitoring, while active scanners can be used periodically to flush out the cannier vulnerabilities.
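To give a flavour of the passive approach, the following hedged sketch watches passing TCP traffic and records which server addresses and ports are answering connections. It assumes the third-party scapy library, needs administrative privileges to capture packets, and should only be run on networks you are authorised to monitor.

```python
from scapy.all import sniff, IP, TCP  # third-party: pip install scapy

seen_services = set()

def record(pkt):
    """Note each (server address, port) pair observed answering in passing TCP traffic."""
    if pkt.haslayer(IP) and pkt.haslayer(TCP):
        flags = int(pkt[TCP].flags)
        if flags & 0x12 == 0x12:  # SYN+ACK: a server accepting a connection on this port
            seen_services.add((pkt[IP].src, pkt[TCP].sport))

# Capture for 30 seconds without storing packets in memory, then print the inventory
sniff(filter="tcp", prn=record, store=False, timeout=30)
print(sorted(seen_services))
```

Run continuously, an inventory like this builds up a picture of what is actually in use on the network without sending a single probe packet.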
Software vs. Hardware
The scanners can also come as either software-based agents placed directly on servers or workstations, or as hardware devices. Host-based scanners can use up processor cycles on the system, but are generally considered more flexible in the kinds of vulnerabilities they can scan. The network-based scanners are plug-and-play hardware devices that are self-contained and need less maintenance than software agents.
The focus of vulnerabilities has been shifting over the past several years. On the one hand, organizations have become savvier about protecting their networks and systems, and hackers have had a harder time penetrating those defenses. At the same time, as Web-based services have become the lifeblood of many businesses, hackers have found a goldmine of potential exploits.
That’s because Web traffic flows back and forth primarily through port 80 on a network, which has to be kept open if those Web-based services are to be available to a company’s customers and business partners.
It’s a hard-to-defend weak spot in enterprise defenses, and once hackers gain access to Web applications they can use them to pull information from databases, retrieve files from root directories, or use a Web server to send malicious content in a Web page to unsuspecting users.
Interpreting the Results
Vulnerability scanning works with Web applications by launching simulated attacks against those applications and then reporting the vulnerabilities it finds, with recommendations on how to fix or eliminate them.
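As a simplified example of the simulated-attack approach, the sketch below (an illustration only; real Web application scanners perform far richer checks) injects a harmless marker into a query parameter and reports whether it comes back unescaped, a rough indicator of a cross-site scripting weakness. The target URL and parameter name are placeholders, and the third-party requests library is assumed.

```python
import requests  # third-party: pip install requests

MARKER = "<xss-probe-12345>"  # harmless, distinctive string to look for in the response

def reflects_payload(url, param):
    """Return True if the marker is reflected unescaped in the response body."""
    resp = requests.get(url, params={param: MARKER}, timeout=5)
    return MARKER in resp.text

if __name__ == "__main__":
    # Placeholder target; only probe applications you are authorised to scan.
    print(reflects_payload("http://localhost:8000/search", "q"))
```

A full scanner would try many payloads and parameters, follow redirects and forms, and map each finding to a remediation recommendation, which is exactly the output that then needs the human interpretation discussed below.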
However, as powerful an addition as vulnerability scanning can be to the overall security of an enterprise, some observers advise caution in interpreting those results.
Kevin Beaver, an independent security consultant with Atlanta-based Principal Logic, LLC, says it takes a combination of the vulnerability scanner and a human knowledge of the network and context in which the scans were carried out to accurately interpret the results.
Left to themselves, he says, scanners will tend to spit out the information that their vendors think is important. What’s also needed is an understanding of what was being tested at the time, how it was being tested, why the vulnerability is exploitable and so on. That will show whether vulnerabilities flagged as high priority actually matter in a particular user’s environment, and therefore whether it’s worthwhile putting in the effort to remediate them.
You absolutely need vulnerability scanners, Beaver said, because they take a lot of the pain out of security assessments.
“But you cannot rely on them completely,” he said. “A good tool plus the human context is the best equation for success.”
Friday, March 5, 2010
RSA: Schmidt announces transparent national US cybersecurity strategy
02 March 2010
Howard Schmidt, cybersecurity advisor to President Obama, announced the launch of www.whitehouse.gov/cybersecurity - a new web page intended to demonstrate the US government's commitment to a transparent cybersecurity strategy - during his keynote at RSA Conference 2010 in San Francisco.
Ultimately, the national cybersecurity strategy has two main objectives, explained Schmidt: “to improve our resilience to cyber incidents, and to reduce the cyber threat”.
The commendable web page, now live, details the President’s Cyberspace Policy Review and identifies 10 near term actions to support the cybersecurity strategy. These are:
Appoint a cybersecurity policy official responsible for coordinating the Nation’s cybersecurity policies and activities.
Prepare for the President’s approval an updated national strategy to secure the information and communications infrastructure.
Designate cybersecurity as one of the President’s key management priorities and establish performance metrics.
Designate a privacy and civil liberties official to the NSC cybersecurity directorate.
Conduct interagency-cleared legal analyses of priority cybersecurity-related issues.
Initiate a national awareness and education campaign to promote cybersecurity.
Develop an international cybersecurity policy framework and strengthen our international partnerships.
Prepare a cybersecurity incident response plan and initiate a dialog to enhance public-private partnerships.
Develop a framework for research and development strategies that focus on game-changing technologies that have the potential to enhance the security, reliability, resilience, and trustworthiness of digital infrastructure.
Build a cybersecurity-based identity management vision and strategy, leveraging privacy-enhancing technologies for the Nation.
Schmidt, who expressed his pride “at representing President Obama”, admitted that there is “still a long way to go” to achieve the actions outlined above. “Over the past year, thousands of hours have gone into this policy”, Schmidt said. “We all know that collaboration is important, and we recognise that the government and industry need to work together. These vulnerabilities are shared, so we need to work together.”
Schmidt acknowledged the importance of transparency when asking the industry for help. “In order to be successful, we must seek new and innovative partnerships, with government, industry, academia, and the public. Working together is the most powerful tool we have.”
Referring to the ten initiatives outlined in the cybersecurity strategy, Schmidt assured the audience that headway is being made. “We’re making sure that President Obama and Congress are thinking about cybersecurity in everything they do. Leadership at the top is very important”, he said.
“Appointing a cybersecurity policy official and designating a privacy and civil liberties official has been done. Here I am”, exclaimed Schmidt. “Updating the strategy is a work in progress. While there are a lot of things that will remain the same, it has to be updated.
“There is a working group that has been divided into four tracks dedicated to the international awareness campaign. There have been meetings, there are plans, and there are milestones. We’re making sure that the policy and framework address the international threat, and we’re ensuring that the cybersecurity response plan looks not only at how we co-ordinate, but how we get it right”.
Regarding action 9, Schmidt assured the audience that they have been looking at specific projects and the economics involved.
In conclusion, Schmidt declared cybersecurity “a shared responsibility for all of us. We can only do what we can do, and that’s all we’re asking”.