Archive for March, 2011

NASA network holes may jeopardize missions

Wednesday, March 30th, 2011

Weak security practices and critical holes in NASA’s agency-wide network could allow an attack over the Internet that would disrupt missions and expose sensitive data, according to a government report.

“Until NASA addresses these critical deficiencies and improves its IT security practices, the Agency is vulnerable to computer incidents that could have a severe to catastrophic effect on Agency assets, operations, and personnel,” said the Inspector General’s report, titled “Inadequate Security Practices Expose Key NASA Network to Cyber Attack (PDF),” released yesterday.

NASA uses a series of networks to carry out its various missions, which include controlling spacecraft like the International Space Station and conducting science missions like the Hubble Telescope.

The Office of Inspector General (OIG) found that servers on the NASA network had “high-risk” vulnerabilities that were exploitable from the Internet. Six of those servers, which contained critical data and were used to control spacecraft, had holes that would allow a remote attacker to take control of them or render them inaccessible. Once inside the network, an attacker could exploit other weaknesses auditors identified, which could “severely degrade or cripple NASA’s operations,” the report said.

Poorly configured network servers revealed encryption keys and encrypted passwords, and one server disclosed sensitive account data for all of its authorized users. That information could be used to target NASA personnel with phishing attacks and e-mails containing malicious code designed to compromise the recipient’s computer.

The OIG recommended last May that NASA immediately establish an IT (information technology) security oversight program for the key network. As of last month, such a program was not implemented despite the fact that NASA agreed with the recommendation, the report said.

The problems are not just theoretical; NASA’s network has been breached. In January 2009, attackers stole 22 gigabytes of export-restricted data from a Jet Propulsion Laboratory computer system, according to the report. Later that year, a computer system that supports one of NASA’s mission networks was infected and made more than 3,000 unauthorized connections to domestic and international Internet Protocol addresses, including addresses in China, the Netherlands, Saudi Arabia and Estonia, the OIG said.

“The sophistication of both of these Internet-based intrusions confirms that they were focused and sustained efforts to target assets on NASA’s mission computer networks,” the report said.

NASA representatives could not be reached for comment late today.

Source:  cnet.com

U.S. spy agency is said to investigate Nasdaq hacker attack

Wednesday, March 30th, 2011

The National Security Agency, the top U.S. electronic intelligence service, has joined a probe of the October cyber attack on Nasdaq OMX Group Inc. amid evidence the intrusion by hackers was more severe than first disclosed, according to people familiar with the investigation.

The involvement of the NSA, which uses some of the world’s most powerful computers for electronic surveillance and decryption, may help the initial investigators — Nasdaq and the FBI — determine more easily who attacked and what was taken. It may also show the attack endangered the security of the nation’s financial infrastructure.

“By bringing in the NSA, that means they think they’re either dealing with a state-sponsored attack or it’s an extraordinarily capable criminal organization,” said Joel Brenner, former head of U.S. counterintelligence in the Bush and Obama administrations, now at the Washington offices of the law firm Cooley LLP.

The NSA’s most important contribution to the probe may be its ability to unscramble encrypted messages that hackers use to extract data, said Ira Winkler, a former NSA analyst and chief security strategist at Technodyne LLC, a Wayne, New Jersey-based information technology consulting firm.

The probe of the attack on the second biggest U.S. stock exchange operator, disclosed last month, is also being assisted by foreign intelligence agencies, said one of the people, who, like the others, declined to be identified because the investigation is confidential and in some cases classified. One of the people said the attack was more extensive than Nasdaq previously disclosed.

Motive Undetermined

Investigators have yet to determine which Nasdaq systems were breached and why, and it may take months for them to finish their work, two of the people familiar with the matter said.

Disclosure of the attack prompted the House Financial Services Committee in February to begin a review of the safety of the country’s financial infrastructure, according to the committee’s chairman, Spencer Bachus, an Alabama Republican.

The widening investigation may also complicate Nasdaq’s ability to strike deals to buy or merge with other exchanges at a time when several competitors have announced such moves, according to Alexander Tabb, a partner at Tabb Group LLC, a financial-markets research firm based in Westborough, Massachusetts.

“For an organization like Nasdaq, it does have an impact on the overall perception of their security, their resiliency and their value,” Tabb said. “For potential partners of the company, that has to be a concern.”

Exchange Acquisitions

More than $20 billion of exchange acquisitions have been announced in the past five months, including Singapore Exchange Ltd.’s $8.3 billion offer for ASX Ltd., London Stock Exchange Group Plc’s agreement to acquire TMX Group Inc. for $3.1 billion, and Deutsche Boerse AG’s $9.5 billion deal for NYSE Euronext.

Nasdaq operators will be hard pressed to assure potential partners that they have resolved the matter, Tabb said.

“Uncertainty in the functioning of the market is the biggest blow-back to this event,” Tabb said.

Nasdaq reported in February that the breach of its computers was limited to a single system known as Directors Desk, a product used by board members of companies to exchange confidential information. The company said that as far as investigators could determine, no data or documents on that system were taken.

Other Systems

The NSA-assisted probe is now focused on how far the attack may have reached, including the breach of other systems, said one of the people familiar with the probe.

Frank De Maria, a Nasdaq spokesman, declined to comment on the effect the security breach might have on the company’s future strategic moves. He said Nasdaq is pursuing its probe and has no new information about the scope of the attack.

“With every company now, searching the networks for break-ins and ensuring they’re secure has got to be a full-time job,” De Maria said in an interview.

NSA spokeswoman Vanee Vines declined to comment and referred all questions to the Federal Bureau of Investigation, the lead agency in the investigation. Jenny Shearer, a spokeswoman for the FBI, declined to comment.

Directors Desk, where the break-in was discovered, is designed to allow directors and executives of Nasdaq client companies to share private files, nonpublic information that cyber criminals could trade on. Nasdaq bought Directors Desk in 2007 as part of its effort to diversify into corporate services.

Sophisticated hackers often enter computer networks through a single system, like Directors Desk, then hop to other secure parts of a computer network, the people familiar with the investigation said.

Network Vulnerabilities

Tabb said investigators are likely trying to chart which parts of Nasdaq’s network might have been accessible through Directors Desk and to ensure those vulnerabilities weren’t exploited — a time-consuming process, he said.

Brenner, the former counterintelligence chief, said he couldn’t independently confirm the NSA’s role in the probe. He said the agency rarely gets involved in investigating cyber attacks against companies.

Brenner said the NSA played a part in probing the 2009 attack against Google Inc., a role that represented “a major change” for an agency that monitors the electronic communications of foreign entities and helps secure the networks of U.S. government agencies.

“It’s part of an increasing awareness that the distinction between economic and national security is rapidly breaking down,” he said.

Unique Tools

The NSA, based at Fort Meade, Maryland, has the government’s most detailed knowledge of cyber attackers and their methods, Brenner said. A 2008 executive order signed by President George W. Bush expanded the NSA’s responsibilities to include monitoring U.S. government computer networks to detect cyber attacks.

The NSA could help identify and analyze electronic clues left behind by the hackers, including communication between the malicious software used in the attack and the outside computers that controlled it, Winkler said.

One challenge in analyzing the scope of cyber attacks is that the information captured by intruders is often sent out in an encrypted form, making it difficult to tell what was taken, according to the FBI.

Stealthy Software

Another obstacle, Brenner said, is that the most sophisticated cyber attacks employ stealthy software that’s programmed to go dormant for months and can be altered by hackers in response to changing security measures. That makes it difficult for investigators to be sure they’ve found all the malicious software and removed it from the network.

“In theory, the NSA should have the ability to reconstruct the data that is being obfuscated,” said Winkler, the former NSA analyst.

One line of inquiry pursued by investigators is whether the attack is linked to state-based cyber espionage or sabotage, which would raise national security concerns, one of the people familiar with the probe said.

De Maria, the Nasdaq spokesman, said in February, in response to a Wall Street Journal article reporting that the exchange had been hacked, that there was no evidence the trading platform the company runs was breached.

Security dangers include the potential for intruders to alter trading algorithms and cause a market crash, according to Larry Dignan, who writes for ZDNet, a technology publication that’s a unit of CBS Interactive.

Doubts on Trades

Brenner said intruders might do just as much damage by manipulating trading to create doubt about the validity of trades. More than 93 billion shares were traded on the Nasdaq exchange in the fourth quarter of 2010, equal to almost 20 percent of the U.S. equities market, according to the company’s final quarterly report to the Securities and Exchange Commission last year.

Initial reports that the computers used in the attack were based in Russia weren’t correct, the people familiar with the probe said. The investigation has yet to determine the origin of the attack, they said.

The attack’s sophistication doesn’t rule out that an organized crime group was responsible, Brenner said. Criminal enterprises have narrowed the skills gap with state-sponsored hackers, launching attacks that can penetrate even the best-guarded computer networks, he said.

Source:  businessweek.com

Iran accused in ‘dire’ net security attack

Thursday, March 24th, 2011

Hackers in Iran have been accused of trying to subvert one of the net’s key security systems.

Analysis in the wake of the thwarted attack suggests it originated and was co-ordinated via servers in Iran.

If it had succeeded, the attackers would have been able to pass themselves off as web giants Google, Yahoo, Skype, Mozilla and Microsoft.

The impersonation would have let attackers trick web users into thinking they were accessing the real service.

Fake identity

The attack was mounted on the widely used online security system known as the Secure Sockets Layer or SSL.

This acts as a guarantee of identity so users can be confident that the site they are visiting is who it claims to be. The guarantee of identity is in the form of a digital passport known as a certificate.
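
To make the idea of a certificate concrete, the short Python sketch below connects to a site over SSL/TLS and prints the identity fields of the certificate it presents, the same “passport” a browser checks before trusting a site. This is purely illustrative and not part of the original report; the hostname is an arbitrary example.

    import socket
    import ssl

    def show_certificate(hostname, port=443):
        # Validate the certificate against the system's trusted certificate
        # authorities, the same kind of check a browser performs.
        context = ssl.create_default_context()
        with socket.create_connection((hostname, port)) as sock:
            with context.wrap_socket(sock, server_hostname=hostname) as tls:
                cert = tls.getpeercert()  # parsed certificate of the peer
        print("Subject:", cert["subject"])
        print("Issuer: ", cert["issuer"])
        print("Expires:", cert["notAfter"])

    # Example call; the hostname is illustrative.
    show_certificate("www.google.com")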

Analysis of the attack reveals that someone got access to the computer systems of one firm that issues certificates. This allowed them to issue bogus certificates that, if they had been used, would have let them impersonate any one of several big net firms.

It appears that the attackers targeted the SSL certificates of several specific net communication services such as Gmail and Skype as well as other popular sites such as Microsoft Live, Yahoo and the Firefox browser.

SSL certificate issuer Comodo published an analysis of the attack which was carried out via the computer systems of one of its regional affiliates.

It said the attack exhibited “clinical accuracy” and that this, along with other facets of the attack, led it to one conclusion: “this was likely to be a state-driven attack.”

It is thought the attack was carried out by the Iranian authorities to step up scrutiny of opposition groups in the country that use the web to co-ordinate their activities.

The bogus certificates have now been revoked and Comodo said it was looking into ways of improving security at its affiliates.

Browsers have also been updated so anyone visiting a site whose credentials are guaranteed by the bogus certificates will be warned.

Writing on the blog of digital rights lobby group the Electronic Frontier Foundation, Peter Eckersley said the attack posed a “dire risk to internet security”.

“The incident got close to — but was not quite — an internet-wide security meltdown,” he said.

“We urgently need to start reinforcing the system that is currently used to authenticate and identify secure websites and e-mail systems,” said Mr Eckersley.

Source:  BBC News

Spectrum debate likely hot topic for CTIA

Tuesday, March 22nd, 2011

ORLANDO FL – While many gadget fans will be looking for the latest smartphones and cool services coming out at this week’s CTIA 2011 trade show here, policy wonks will be looking for news in the heated battle between the wireless industry and TV broadcasters over spectrum reallocation.

In recent weeks, the National Association of Broadcasters has called into question the Federal Communications Commission’s plan to reallocate spectrum, much of which will come from unused broadcast licenses that have been voluntarily given up. The NAB has called many current spectrum holders, which have participated in previous spectrum auctions, hoarders. The group claims these companies are not efficiently using the spectrum they have already bought.

For example, satellite TV provider DirecTV, as well as cable operators Comcast, Time Warner Cable, and Brighthouse have all bought spectrum in recent FCC auctions and have not yet used that spectrum nor have they disclosed how they plan to use it. Even large carriers such as Verizon Wireless and AT&T have not used all the spectrum licenses they have purchased in recent auctions.

“Maybe you should develop that spectrum before you come to broadcasters asking for 40 percent more of their spectrum,” Dennis Wharton, NAB’s executive vice president for media relations, told the IDG News Service in a recent interview. “Why is it taking so long, if there really is a national spectrum crisis?”

The CTIA, which represents the wireless industry, and the Federal Communications Commission say spectrum reallocation is necessary because there’s a looming spectrum crisis. Without additional spectrum allocated, wireless operators will not have enough airwaves available to meet the rapidly growing demand for wireless data services, these groups say.

While it’s clear which side the CTIA is on in this debate, the topic will likely be a hot one at the group’s biannual trade show this week, where the industry gathers not only to announce and view cool new products but also to discuss policies essential to the industry.

On Monday, I will be helping Chris Guttman-McCabe, vice president of regulatory affairs for the CTIA, host a panel discussion that will include several officials from the FCC, as well as representatives from AT&T and Verizon, who will be talking about wireless spectrum issues and policy.

The spectrum debate
The FCC’s National Broadband Plan, released last year, recommended that the FCC make 500MHz of new wireless spectrum available within 10 years for licensed and unlicensed use. The plan recommends that 300MHz of that spectrum should become available within the next five years.

One of the most controversial issues to come out of that plan is the reallocation of wireless spectrum. While the report recommends that the FCC reallocate about 20MHz of underutilized government spectrum, it also recommends that the agency get about 120MHz of spectrum from TV broadcasters.

The FCC is currently studying a plan for reallocating spectrum. The commission has said it doesn’t plan to force broadcasters to give it up. Instead, it said it would create incentive auctions that would let broadcasters who aren’t using some of their spectrum voluntarily give it up in exchange for some kind of compensation.

While the NAB is open to a voluntary approach, the lobby group has been questioning the FCC’s premise for even asking for this spectrum, given that spectrum sold in recent auctions hasn’t been used yet.

The NAB also says it may be difficult for broadcasters to give up spectrum, since the areas where spectrum is most needed are cities, where many broadcasters are already using spectrum to provide free TV programming as well as mobile digital TV.

Broadcasters also believe that as an industry, they’ve given up plenty of spectrum already. For example, the government forced the TV broadcast industry to move to broadcasting signals in digital rather than analog form, which freed up spectrum in the 700MHz band. That spectrum was auctioned in 2008. Verizon Wireless is using its 700MHz wireless licenses to build its “4G” LTE network.

But the wireless industry and the FCC believe that TV broadcasters, which were given their spectrum licenses for free during the dawn of TV, need to give more of it back for reallocation. In a column published Friday on CNET, CTIA CEO Steve Largent said that TV broadcasters have 294MHz of spectrum in each market, much of which is currently unused.

He claims that the CTIA estimates that revenue “from auctions of broadcast spectrum reclaimed through a voluntary mechanism would gross at least $36 billion for the federal government. This process would [retain] free over-the-air broadcast service while the industry would pay billions to the U.S. Treasury and billions more to the U.S. economy to deploy new technologies. Ultimately, consumers continue to get the world’s best products and services. Everyone wins.”

Jim Cicconi, AT&T’s senior executive vice president for external and legislative affairs, said in a blog post Friday that the NAB itself is guilty of underusing its spectrum resources and should not be pointing fingers at the wireless industry, which has paid millions for its spectrum licenses.

“NAB (insinuates that) the problem isn’t their own massive warehousing and underuse of precious spectrum resources,” Cicconi wrote. “Instead, the problem is everyone else. It’s not their 1950s transmission method that’s inefficient; the fault is with modern devices that receive their signals. And somehow those companies making the largest capital investments in the U.S., and perhaps the largest private capital investments in American history, aren’t investing fast enough to suit the broadcasters.”

The CTIA and the Consumer Electronics Association say that the NAB is simply trying to deflect attention from the spectrum crisis. In a letter to Congressional leaders this week, the two groups said that the “NAB has once again endeavored to search for any hint of outlier instances where spectrum allegedly is not being put to productive use–a point that has been consistently refuted.”

FCC Chairman Julius Genachowski has also downplayed the NAB’s claims. In a speech this week at the Mobile Future Forum, he said that the FCC’s recently completed “baseline” spectrum inventory provides enough data to conclude that incentive auctions are needed.

“The spectrum crunch will not be solved by the build-out of already allocated spectrum,” Genachowski said. “That spectrum was already built into the FCC’s analysis of the spectrum shortage and does not detract from the desirability and necessity of adding the incentive auction tool to the FCC’s arsenal.”

He said there were “no hidden vacant lots of commercial airwaves.” But he said that there are a few “areas well-suited to mobile broadband, such as the TV and [mobile satellite services] bands.”

Meanwhile, the NAB says it wants the government to do a full inventory of spectrum to see how efficiently all spectrum holders are using their licenses. Such a broad inventory of spectrum that includes usage by wireless companies and other auction license holders has not been done.

It will be interesting to see how the debate plays out and what the wireless industry will say at this week’s conference to fan the political flames. Stay tuned.

Source:  cnet.com

Taking Down Botnets: Microsoft and the Rustock Botnet

Friday, March 18th, 2011

Just over a year ago, we announced that the Microsoft Digital Crimes Unit (DCU), in cooperation with industry and academic experts, had successfully taken down the botnet Waledac in an operation known as “Operation b49”. Today, I’m happy to announce that based on the knowledge gained in that effort, we have successfully taken down a larger, more notorious and complex botnet known as Rustock. This botnet is estimated to have approximately a million infected computers operating under its control and has been known to be capable of sending billions of spam mails every day, including fake Microsoft lottery scams and offers for fake – and potentially dangerous – prescription drugs.

This operation, known as Operation b107, is the second high-profile takedown in Microsoft’s joint effort between DCU, Microsoft Malware Protection Center and Trustworthy Computing – known as Project MARS (Microsoft Active Response for Security) – to disrupt botnets and begin to undo the damage the botnets have caused by helping victims regain control of their infected computers. Like the Waledac takedown, this action relied on legal and technical measures to sever the connection between the command and control structure of the botnet and the malware-infected computers operating under its control to stop the ongoing harm caused by the Rustock botnet. As you may have read, the Rustock botnet was officially taken offline yesterday, after a months-long investigation by DCU and our partners, successful pleading before the U.S. District Court for the Western District of Washington and a coordinated seizure of command and control servers in multiple hosting locations escorted by the U.S. Marshals Service.

As in the legal and technical measures that enabled us to take down the Waledac botnet, Microsoft filed suit against the anonymous operators of the Rustock botnet, based in part on the abuse of Microsoft trademarks in the bot’s spam. However, Rustock’s infrastructure was much more complicated than Waledac’s, relying on hard-coded Internet Protocol addresses rather than domain names and peer-to-peer command and control servers to control the botnet. To be confident that the bot could not be quickly shifted to new infrastructure, we sought and obtained a court order allowing us to work with the U.S. Marshals Service to physically capture evidence onsite and, in some cases, take the affected servers from hosting providers for analysis. Specifically, servers were seized from five hosting providers operating in seven U.S. cities (Kansas City, Scranton, Denver, Dallas, Chicago, Seattle and Columbus), and, with help from the upstream providers, we successfully severed the IP addresses that controlled the botnet, cutting off communication and disabling it. This case and this operation are ongoing, and our investigators are now inspecting the evidence gathered from the seizures to learn what we can about the botnet’s operations.

Bots are versatile, limited only by the imagination of the bot-herder. That’s why Microsoft and our partners are working so aggressively on innovative approaches to quickly take out the entire infrastructure of a botnet, so that it stays inactive as we assist in cleaning the malware off of infected computers. This is how we approached the Waledac takedown and how we are approaching the Rustock takedown. We will continue to invest in similar operations in the future as part of our mission to annihilate botnets and make the Internet a safer place for everyone.

However, no single company or group can accomplish this lofty goal alone. It requires collaboration between industry, academic researchers, law enforcement agencies and governments worldwide. In this case, Microsoft worked with Pfizer, the network security provider FireEye and security experts at the University of Washington. All three provided declarations to the court on the dangers posed by the Rustock botnet and its impact on the Internet community. Microsoft also worked with the Dutch High Tech Crime Unit within the Netherlands Police Agency to help dismantle part of the command structure for the botnet operating outside of the United States. Additionally, Microsoft worked with CN-CERT in blocking the registration of domains in China that Rustock could have used for future command and control servers.

We are also now working with Internet service providers and Community Emergency Response Teams (CERTs) around the world to reach out to affected computer owners and help them clean the Rustock malware off their computers. Without multi-party public and private collaboration efforts like these, successful takedowns would not be possible. The central lesson we’ve learned from all our efforts to fight botnets has been that cooperation is the key to success.

Botnets are known to be the tool of choice for cybercriminals to conduct a variety of online attacks, using the power of thousands of malware-infected computers around the world to send spam, conduct denial-of-service attacks on websites, spread malware, facilitate click fraud in online advertising and much more. This particular botnet is no exception.

Although its behavior has fluctuated over time, Rustock has been reported to be among the world’s largest spambots, at times capable of sending 30 billion spam e-mails per day. DCU researchers watched a single Rustock-infected computer send 7,500 spam emails in just 45 minutes – a rate of 240,000 spam mails per day. Moreover, much of the spam observed coming from Rustock posed a danger to public health, advertising counterfeit or unapproved knock-off versions of pharmaceuticals.

As mentioned previously, because Rustock propagated a market for these fake drugs, drug-maker Pfizer served as a declarant in this case. Pfizer’s declaration provides evidence that the kind of drugs advertised through this kind of spam can often contain wrong active ingredients, incorrect dosages or worse, due to the unsafe conditions fake pharmaceuticals are often produced in. Fake drugs are often contaminated with substances including pesticides, lead-based highway paint and floor wax, just to name a few examples.

Spam is annoying and it can advertise potentially dangerous or illegal products. It is also significant as a symptom of greater threats to Internet health. Although Rustock’s primary use appears to have been to send spam, it’s important to note that a large botnet can be used for almost any cybercrime a bot-herder can dream up. Botnets are powerful and, with a simple command, can be switched from a spambot to a password thief or DDOS attacker.

Again, DCU’s research shows there may be close to 1 million computers infected with Rustock malware, all under the control of the person or people operating the network like a remote army, usually without the computer’s owner even being aware that the machine has been hijacked. Bot-herders infect computers with malware in a number of ways, such as when a computer owner visits a website booby-trapped with malware and clicks on a malicious advertisement or opens an infected e-mail attachment. Bot-herders do this so discreetly that owners often never suspect their PC is living a double life.

It’s like a gang setting up a drug den in someone’s home while they’re on vacation and coming back to do so every time the owner leaves the house, without the owner ever knowing anything is happening. Home owners can better protect themselves with good locks on their doors and security systems for their homes. Similarly, computer owners can be better protected from malware if they run up-to-date software – including up-to-date antivirus and antimalware software – on their computers.

Finally, we encourage every computer owner to make sure their machine isn’t doing a criminal’s dirty work. If you believe your computer may be infected by Rustock or another type of malware, we encourage you to visit support.microsoft.com/botnets for free information and resources to clean your computer.

With your help, and the continued public and private cooperation of industry, academia and law enforcement such as Operation b107, we can stop criminals from using botnets to wreak havoc on the Internet.

To follow the Microsoft Digital Crimes Unit for news and information on proactive work to combat botnets and other digital threats, visit www.facebook.com/MicrosoftDCU or twitter.com/MicrosoftDCU.

Source:  microsoft.com

The security limitations of solid state drives (SSDs)

Tuesday, March 8th, 2011

Taking flash-based SSDs as the standard implementation for the near future, the most obvious security disadvantages relative to HDDs revolve around encryption and secure deletion. Under the hood, these turn out to be effectively the same category of problem.

Magnetic storage media rely on the alignment of magnetism in ferromagnetic materials on the surfaces of the platters. Because of this, passing a read/write head over a platter to apply a magnetic field to that ferromagnetic material can change the data currently recorded there in one simple operation. The overwriting process need not account for whatever data was previously recorded.

By contrast, flash media use transistors to store data. A group of these transistors has an “empty” or “erased” state and a “programmed” or “written” state, and each group must be returned to the “erased” state before it can be programmed with new data. As a result, while writing to empty storage space requires only a single operation, as with magnetic media, “overwriting” is effectively impossible: any data in the space where new data is to be written must be erased first, in a separate operation.
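
As a concrete illustration of that constraint, here is a small toy model in Python (not from the original article): a single flash page can only be programmed from its erased state, so replacing data always takes two separate operations, an erase and then a write.

    class FlashPage:
        """Toy model of one flash page: erase-before-write, no in-place overwrite."""
        ERASED = b"\xff" * 16  # blank state of a (tiny) 16-byte page

        def __init__(self):
            self.data = self.ERASED

        def program(self, new_data):
            # Programming is only legal from the erased state.
            if self.data != self.ERASED:
                raise RuntimeError("page must be erased before it can be programmed")
            self.data = new_data.ljust(16, b"\xff")[:16]

        def erase(self):
            # Erasing is a separate, whole-page operation.
            self.data = self.ERASED

    page = FlashPage()
    page.program(b"old data")
    page.erase()                   # must happen first ...
    page.program(b"new data")      # ... before the replacement can be written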

Encrypting data already on disk

This leads to the first problem with data security on SSDs: encrypting data already stored on the media. Because of the way filesystems interact with storage media, encrypting a file on magnetic media such that your data is secure involves merely writing the newly encrypted data over the old data. This leaves the HDD only storing an encrypted copy of the data, because the unencrypted copy was destroyed by the process of writing the encrypted copy to disk.

By contrast, this operation is effectively impossible on an SSD: in the general case, writing the encrypted copy of the file puts the data in a currently empty region of the media, leaving the unencrypted copy where it is. The plaintext copy is “erased” only in that it is eliminated from the filesystem’s mechanism for tracking files, such as a file allocation table or inode. Bypassing the filesystem to directly scan the media can reveal “deleted” data that is still there to be found. The controller built into an SSD abstracts the management of data on the device so that implementation-specific drivers are not needed by the computer, but this abstraction also creates a problem: there is currently no standard means of ensuring that unencrypted data is truly erased from flash media.

Secure data deletion

This leads directly to the second major security issue afflicting SSDs: secure deletion. Standard secure deletion software such as the Unix utility shred is sufficient for secure deletion on modern HDDs, but largely ineffective for consumer flash media storage devices.
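
To make that limitation concrete, the sketch below (illustrative Python, not part of the article) does roughly what shred does: it repeatedly overwrites a file in place with random data and then deletes it. On a magnetic disk those writes land on the same sectors as the original data; on an SSD the drive’s flash translation layer typically redirects each pass to fresh cells, so the old data may survive elsewhere on the device.

    import os

    def shred_like(path, passes=3):
        """Overwrite a file in place with random data, then delete it.

        Effective on magnetic disks, but on SSDs the controller usually
        writes each pass to different flash cells, leaving the original
        data recoverable below the filesystem.
        """
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                f.write(os.urandom(size))
                f.flush()
                os.fsync(f.fileno())  # force each pass out to the device
        os.remove(path)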

UCSD researchers Michael Wei, Laura M. Grupp, Frederik E. Spada, and Steven Swanson have published the results (PDF download) of practical testing, showing the dismal state of secure deletion on SSDs. The results of their tests led them to three conclusions:

  • First, built-in commands are effective, but manufacturers sometimes implement them incorrectly.
  • Second, overwriting the entire visible address space of an SSD twice is usually, but not always, sufficient to sanitize the drive.
  • Third, none of the existing hard drive oriented techniques for individual file sanitization are effective on SSDs.

The third conclusion should come as no surprise to those who understand the rudiments of flash media storage technology. The second is disappointing, but not entirely surprising. This leaves users with a single recourse for reliably secure deletion: functionality built into the storage device itself. Having to rely on “black box” implementations of secure deletion technology that ship with the hardware may raise warnings in the minds of the world’s practical paranoids, simply because without extensive testing we do not really know what the devices actually do under the hood. One is implicitly required to trust the manufacturer’s good intentions for reliably secure deletion of data.
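
For readers who want a sense of what “functionality built into the storage device” looks like in practice, here is a hedged sketch that issues the ATA Secure Erase commands through the common Linux hdparm utility from Python. The device path and password are placeholders, the operation erases the entire drive, and whether the firmware implements the erase correctly is exactly the open question the researchers raise.

    import subprocess

    DEVICE = "/dev/sdX"   # placeholder; the ENTIRE device is erased
    PASSWORD = "temp"     # temporary ATA security password, cleared by the erase

    def ata_secure_erase(device=DEVICE, password=PASSWORD):
        # Set a temporary security password, then ask the drive's firmware to
        # run its internal secure-erase routine over the whole user-accessible
        # address space.
        subprocess.run(["hdparm", "--user-master", "u",
                        "--security-set-pass", password, device], check=True)
        subprocess.run(["hdparm", "--user-master", "u",
                        "--security-erase", password, device], check=True)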

Worse, the research shows that regardless of good intentions, the secure deletion capabilities of SSDs may not even be correctly implemented. In short, the secure deletion functionality of your SSD may simply not work correctly, resulting in false confidence in the secure deletion of data that is still sitting on the device, waiting to be discovered.

In addition to these problems, there is the inconvenience of the fact that there is no effective mechanism for secure deletion of a single file. If the user wishes to securely delete anything on the media, he or she must securely delete everything in the media’s user-accessible storage space.

Not all hope is lost. The researchers who worked on this project have developed some techniques for both continuous data sanitization (which impacts performance substantially) and on-demand sanitization. There is still much that can be done to improve the interfaces and integrated functionality of SSDs to accommodate secure deletion operations in the future. Potential solutions using today’s implementations may undo many of the media lifespan optimizations currently in place that minimize the number of writes to any individual parts of the complete SSD’s user accessible address space, however.

The end result is that for security-critical uses, SSDs are often not the best choice of storage technology. The greater maturity of HDD storage technologies allows for greater reliability and flexibility of secure data management without damaging the expected lifespan of the storage devices, at least for now.

Source:  techrepublic.com

Wallaby: Convert Adobe Flash FLA files into HTML and reach more devices

Tuesday, March 8th, 2011

“Wallaby” is the codename for an experimental technology that converts the artwork and animation contained in Adobe® Flash® Professional (FLA) files into HTML. This allows you to reuse and extend the reach of your content to devices that do not support the Flash runtimes. Once these files are converted to HTML, you can edit them with an HTML editing tool, such as Adobe Dreamweaver®, or by hand if desired. You can view the output in one of the supported browsers or on an iOS device.

Please note that not all Flash Professional features are supported in the HTML5 format. The Wallaby Release Notes describe what features are supported, what differences we have already discovered between the various browsers, what device variations have been found, and any currently known issues.

Download the Wallaby application

Features and Support

Feature | Implementation | Status | Notes
3D transforms | | Unsupported |
ActionScript 1,2 | | Unsupported |
ActionScript 3 | | Unsupported |
Blend Modes | | Unsupported | Unsupported by HTML or CSS
Button – visuals (normal, hover, active) | SVG, CSS | Complete | Buttons inside of a button are not supported
Button – events | | Unsupported | Requires JavaScript for actions
Compiled Clips | | Unsupported | Requires ActionScript support
Components | | Unsupported | Most require ActionScript and Compiled Clip support
Fills – gradients or images | SVG | Complete |
Fills – solid colors | SVG | Complete |
Filters (DropShadow, Glow, Blur, ColorMatrix) | SVG | Unsupported | Supported by SVG but not Safari and Safari Mobile
Filters – Advanced (Bevel, Gradient Bevel/Glow) | | Unsupported | No SVG, HTML, or CSS support
FrameSets | HTML and JavaScript | Complete |
Gradients | SVG | Complete |
Images | HTML or SVG | Complete | A few formats have issues with alpha
Inverse Kinematics | | Unsupported | Right-click on the Armature and select ‘Convert to Frame by Frame Animation’
Layers – Art/Normal | SVG | Complete |
Layers – Folder | | Complete |
Layers – Guide | | Complete |
Layers – Mask | SVG Mask Artwork, Webkit Clip | Partial | No support for multiple framesets in Mask layer; several Webkit bugs
Scale 9 graphics | | Unsupported | Need to dynamically scale
Paths – Cubic & Quadratic | SVG | Complete |
Scenes | | Complete |
Sound – Stream, Event | | Unsupported |
Strokes – gradients or images | | Unsupported |
Strokes – solid colors | SVG | Partial | No advanced dashing (i.e. Dotted, Hatched, Stippled, Ragged)
Text – Classic Static | SVG | Partial | Text Limitations
Text – Classic Dynamic, Input | SVG | Partial | Text Limitations
Text – Font Embedding | SVG | Complete | Text Limitations
Text – TLF | SVG | Partial | Text Limitations
Timelines | | Partial | Nested timelines are difficult and there are a few known bugs
Tweening – Shape | | Complete | One SVG file is created for each frame of a Shape Tween. This can cause a large number of SVG files for complex animations using Shape Tweens, leading to playback performance issues on iOS devices. Some of the complex cases may not convert correctly.
Tweening – Classic | CSS3 animation | Partial | No Filters or ColorMatrix support
Tweening – Motion | CSS3 animation | Partial | No Filters, Brightness, Tint or Advanced Color support
Video – Embedded/External | | Unsupported |

Source:  labs.adobe.com

Microsoft security updates for March 2011

Tuesday, March 8th, 2011

Several critical security updates are pending for Microsoft products as of 3/8/11.  Be sure to update your server and workstation operating systems and MS Office products to repair vulnerabilities.  Details provided by Microsoft are as follows:

Windows XP

Affected software | Bulletin 1 | Bulletin 2
Aggregate severity rating | Critical | Important
Windows XP Service Pack 3 | Critical | Important
Windows XP Professional x64 Edition Service Pack 2 | Critical | Important

Windows Server 2003

Affected software | Bulletin 1 | Bulletin 2
Aggregate severity rating | None | Important
Windows Server 2003 Service Pack 2 | Not applicable | Important
Windows Server 2003 x64 Edition Service Pack 2 | Not applicable | Important

Windows Vista

Affected software | Bulletin 1 | Bulletin 2
Aggregate severity rating | Critical | Important
Windows Vista Service Pack 1 and Service Pack 2 | Critical | Important
Windows Vista x64 Edition Service Pack 1 and Service Pack 2 | Critical | Important

Windows Server 2008

Affected software | Bulletin 1 | Bulletin 2
Aggregate severity rating | None | Important
Windows Server 2008 for 32-bit Systems and Service Pack 2** | Not applicable | Important
Windows Server 2008 for x64-based Systems and Service Pack 2** | Not applicable | Important
Windows Server 2008 for Itanium-based Systems and Service Pack 2 | Not applicable | Important

Windows 7

Affected software | Bulletin 1 | Bulletin 2
Aggregate severity rating | Critical | Important
Windows 7 for 32-bit Systems and Service Pack 1 | Critical | Important (Windows 7 for 32-bit Systems)
Windows 7 for x64-based Systems and Service Pack 1 | Critical | Important (Windows 7 for x64-based Systems)

Windows Server 2008 R2

Affected software | Bulletin 1 | Bulletin 2
Aggregate severity rating | Important | Important
Windows Server 2008 R2 for x64-based Systems and Service Pack 1** | Important | Important (Windows Server 2008 R2 for x64-based Systems**)
Windows Server 2008 R2 for Itanium-based Systems | Not applicable | Important

Microsoft Office Programs

Affected software | Bulletin 3
Aggregate severity rating | Important
Microsoft Groove 2007 | Important (Microsoft Groove 2007 Service Pack 2)

Use Windows Update to apply all recommended patches or visit the Microsoft Download Center to select updates a la carte.

Source:  microsoft.com