Archive for July, 2012

Researcher creates proof-of-concept malware that infects BIOS, network cards

Tuesday, July 31st, 2012

New Rakshasa hardware backdoor is persistent and hard to detect, researcher says

Security researcher Jonathan Brossard created a proof-of-concept hardware backdoor called Rakshasa that replaces a computer’s BIOS (Basic Input Output System) and can compromise the operating system at boot time without leaving traces on the hard drive.

Brossard, who is CEO and security research engineer at French security company Toucan System, demonstrated how the malware works at the Defcon hacker conference on Saturday, after also presenting it at the Black Hat security conference on Thursday.

Rakshasa, named after a demon from Hindu mythology, is not the first malware to target the BIOS — the low-level motherboard firmware that initializes other hardware components. However, it differentiates itself from similar threats by using new tricks to achieve persistence and evade detection.

Rakshasa replaces the motherboard BIOS, but can also infect the PCI firmware of other peripheral devices like network cards or CD-ROMs, in order to achieve a high degree of redundancy.

Rakshasa was built with open source software. It replaces the vendor-supplied BIOS with a combination of Coreboot and SeaBIOS, alternatives that work on a variety of motherboards from different manufacturers, and also writes an open source network boot firmware called iPXE to the computer’s network card.

All of these components have been modified so they don’t display anything that could give away their presence during the boot process. Coreboot even supports custom splash screens that can mimic those of the replaced BIOSes.

Existing computer architecture gives every peripheral device equal access to RAM (random access memory), Brossard said. “The CD-ROM drive can very well control the network card.”

This means that even if someone were to restore the original BIOS, rogue firmware located on the network card or the CD-ROM drive could be used to reflash the rogue BIOS, Brossard said.

The only way to get rid of the malware is to shut down the computer and manually reflash every peripheral, a method that is impractical for most users because it requires specialized equipment and advanced knowledge.

Brossard created Rakshasa to prove that hardware backdooring is practical and can be done somewhere in the supply chain, before a computer is delivered to the end user. He pointed out that most computers, including Macs, come from China.

However, if an attacker were to gain system privileges on a computer through a different malware infection or an exploit, they could also, in theory, flash the BIOS in order to deploy Rakshasa.

The remote attack method wouldn’t work in all cases, because some PCI devices have a physical switch that needs to be moved in order to flash a new firmware and some BIOSes have digital signatures, Brossard said.

However, Coreboot has the ability to load a PCI extension firmware that takes precedence over the one written on the network card, thereby bypassing the physical-switch problem.

The attack “totally works when you have physical access, but remotely it only works 99 percent of the time,” Brossard said.

The iPXE firmware that runs on the network card is configured to load a bootkit — malicious code that gets executed prior to the operating system and can infect it before any security products start.

Some known malware programs store their bootkit code inside the Master Boot Record (MBR) of the hard disk drive. This makes it easy for computer forensics specialists and antivirus products to find and remove them.
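The MBR is easy to inspect precisely because it lives at a fixed, well-documented location: the first 512-byte sector of the disk, with boot code in the first 446 bytes, a 64-byte partition table, and a two-byte 0x55AA signature at the end. A minimal sketch of the parsing a forensics tool might perform (the field names are our own):

```python
def parse_mbr(sector: bytes) -> dict:
    """Split the first disk sector into the classic MBR fields."""
    if len(sector) != 512:
        raise ValueError("the MBR is exactly one 512-byte sector")
    return {
        "boot_code": sector[0:446],          # where MBR bootkits hide their loader
        "partition_table": sector[446:510],  # four 16-byte partition entries
        "valid_signature": sector[510:512] == b"\x55\xaa",
    }

# A blank sector with only the boot signature set parses cleanly:
sector = bytearray(512)
sector[510:512] = b"\x55\xaa"
info = parse_mbr(bytes(sector))
```

Because these offsets never change, a scanner can diff the boot-code region against a known-good copy — exactly the kind of inspection a disk-resident bootkit is exposed to.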

Rakshasa is different because it uses the iPXE firmware to download the bootkit from a remote location and load it into RAM every time the computer boots.

“We never touch the file system,” Brossard said. If you send the hard drive to a company and ask them to analyze it for malware they won’t be able to find it, he said.

In addition, after the bootkit has done its job, which is to perform malicious modifications of the kernel — the highest-privileged part of the operating system — it can be unloaded from memory. This means that a live analysis of the computer’s RAM won’t be able to find it either.

Detecting this type of compromise is very hard because programs that run inside the operating system get their information from the kernel. The bootkit could very well fake this information, Brossard said.

The iPXE firmware is capable of communicating over Ethernet, Wi-Fi or Wimax and supports a variety of protocols including HTTP, HTTPS and FTP. This gives potential attackers a lot of options.

For example, Rakshasa can download the bootkit from a random blog as a file with a .pdf extension. It can also send the IP addresses and other network information of the infected computers to a predefined email address.

The attacker can push configuration updates or a new version of the malware over an encrypted HTTPS connection by communicating directly with the network card firmware. The command-and-control server can also be rotated among different websites to make it harder for law enforcement or security researchers to take it down.

Brossard did not release Rakshasa publicly. However, since most of its components are open source, someone with sufficient knowledge and resources could replicate it. A research paper that explains the malware’s implementation in more detail is available online.


Tools released at Defcon can crack widely used PPTP encryption in under a day

Monday, July 30th, 2012

New tool and service can decrypt any PPTP and WPA2 wireless sessions using MS-CHAPv2 authentication

Security researchers released two tools at the Defcon security conference that can be used to crack the encryption of any PPTP (Point-to-Point Tunneling Protocol) and WPA2-Enterprise (Wi-Fi Protected Access) sessions that use MS-CHAPv2 for authentication.

MS-CHAPv2 is an authentication protocol created by Microsoft and introduced in Windows NT 4.0 SP4. Despite its age, it is still used as the primary authentication mechanism by most PPTP virtual private network (VPN) clients.

MS-CHAPv2 has been known to be vulnerable to dictionary-based brute force attacks since 1999, when a cryptanalysis of the protocol was published by cryptographer Bruce Schneier and other researchers.

However, the common belief on the Internet is that if you have a strong password then it’s OK, said Moxie Marlinspike, the security researcher who developed ChapCrack, one of the tools released at Defcon. “What we demonstrated is that it doesn’t matter. There’s nothing you can do.”

ChapCrack can take captured network traffic that contains an MS-CHAPv2 network handshake (PPTP VPN or WPA2-Enterprise handshake) and reduce the handshake’s security to a single DES (Data Encryption Standard) key.
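That reduction follows from how MS-CHAPv2 builds its response, a weakness spelled out in the 1999 cryptanalysis: the 16-byte NT password hash is padded with five zero bytes to 21 bytes and split into three 7-byte DES keys, each of which encrypts the same 8-byte challenge. The third key holds only two bytes of real hash material, so it falls to a 2^16 search, and the first two keys can then be recovered in a single sweep of the 2^56 DES keyspace. A sketch of just the key-splitting step (pure Python; no DES implementation is needed to see the flaw):

```python
def split_nt_hash(nt_hash: bytes) -> tuple:
    """MS-CHAPv2 pads the 16-byte NT hash to 21 bytes with zeros and
    splits it into three 7-byte DES keys; each key encrypts the same
    8-byte challenge to form the 24-byte response."""
    if len(nt_hash) != 16:
        raise ValueError("NT hash is 16 bytes")
    padded = nt_hash + b"\x00" * 5  # 21 bytes total
    return padded[0:7], padded[7:14], padded[14:21]

k1, k2, k3 = split_nt_hash(bytes(range(16)))
# Five of k3's seven bytes are the known zero padding, so k3 can be
# brute-forced with at most 2**16 trial encryptions of the challenge.
k3_search_space = 2 ** (8 * 2)  # 65,536 candidates
```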

This DES key can then be submitted to CloudCracker — a commercial online password cracking service that runs on a special FPGA cracking box developed by David Hulton of Pico Computing — where it will be cracked in under a day.

The CloudCracker output can then be used with ChapCrack to decrypt an entire session captured with Wireshark or other similar network sniffing tools.

PPTP is commonly used by small and medium-size businesses — large corporations use other VPN technologies like those provided by Cisco — and it’s also widely used by personal VPN service providers, Marlinspike said.

The researcher gave the example of IPredator, a VPN service from the creators of The Pirate Bay, which is marketed as a solution to evade ISP tracking, but only supports PPTP.

Marlinspike’s advice to businesses and VPN providers was to stop using PPTP and switch to other technologies like IPsec or OpenVPN. Companies with wireless network deployments that use WPA2 Enterprise security with MS-CHAPv2 authentication should also switch to an alternative.


Spoofing a Microsoft Exchange server: a new how-to

Friday, July 27th, 2012

The smartphone-based attack wreaks havoc on Android and iOS smartphones. If you use an Android or iOS device to connect to a Microsoft Exchange server over WiFi, security researcher Peter Hannay may be able to compromise your account and wreak havoc on your handset.

At the Black Hat security conference in Las Vegas, the researcher at Edith Cowan University’s Security Research Institute in Australia described an attack he said works against many Exchange servers operated by smaller businesses. Android and iOS devices that connect to servers secured with a self-signed secure sockets layer (SSL) certificate will continue to connect even when those certificates have been falsified.

“The primary weakness is in the way that the client devices handle encryption and do certificate handling, so it’s a weakness in SSL handling routines of the client devices,” Hannay told Ars ahead of his presentation on Thursday.  “These clients should be saying that the SSL certificate really doesn’t match, none of the details are correct.  I won’t connect to it.”

Hannay has developed an attack that uses a WiFi network to implement a rogue server with a self-signed certificate, rather than one issued by a trusted certificate authority. Vulnerable devices on the same network that try to connect to their regular Exchange server won’t reach that intended destination. Instead, they will initiate communications with Hannay’s impostor machine.

The use of an SSL certificate to protect an Exchange server is designed to preclude precisely this kind of man-in-the-middle attack. Devices are supposed to connect only if the certificate bears a valid cryptographic signature certifying the server’s identity. But that’s not what always happens, the researcher said.

Android devices that connect to an Exchange server with a self-signed certificate will connect to any server at its designated address, even when its SSL credential has been spoofed or contains invalid data. iOS devices fared only slightly better in Hannay’s tests: They issued a warning, but allowed users to connect anyway.  Microsoft Windows Phone handsets, by contrast, issued an error and refused to allow the end user to connect.
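The behavior Hannay describes amounts to skipping certificate-chain and hostname verification. Python’s standard ssl module makes the contrast concrete; the permissive context below approximates, for illustration only, what the vulnerable clients effectively do:

```python
import ssl

# What a client should do: verify the certificate chain against trusted
# CAs and check that the certificate matches the server's hostname.
strict = ssl.create_default_context()

# Roughly what the vulnerable Exchange clients do in effect: accept any
# certificate, self-signed or spoofed, without validating it at all.
permissive = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
permissive.check_hostname = False      # must be disabled before CERT_NONE
permissive.verify_mode = ssl.CERT_NONE
```

A connection opened with the permissive context completes the TLS handshake with a rogue server just as readily as with the real one, which is the whole attack.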

Once a phone connects to a rogue server used in Hannay’s experiments, a script he wrote issues a command to remotely wipe its contents and to restore all factory settings. He said it’s also possible to retrieve the login credentials users need to sign in to their accounts. Hannay said a malicious hacker could then use that information to log in to the legitimate account.

“It’s really simple and that’s what’s disturbing to me,” Hannay said. “The whole attack is just 40 lines of Python, and most of that is just connection handling.”

As stated earlier, the attack works only against phones that have connected to an Exchange server secured by a self-signed SSL certificate.  Hannay said most organizations with fewer than 50 people use such credentials, rather than paying to have a certificate signed by a recognized certificate authority.

Google and Apple didn’t respond to an e-mail seeking comment for this article. A Microsoft representative said members of the company’s Exchange team are looking into the report.


Android phones hijacked via wallet tech

Friday, July 27th, 2012

A skilled hacker has shown how to hijack a smartphone via a short-range radio technology known as Near Field Communication (NFC).

Charlie Miller created tools that forced phones to visit websites seeded with attack software.

The software on the booby-trapped websites helped Mr Miller look at and steal data held on a handset.

NFC is becoming increasingly common in smartphones as the gadgets are used as electronic tickets and digital wallets.

Beam guide

Mr Miller, a research consultant at security firm Accuvant, demonstrated the work at the Black Hat hacker conference in Las Vegas.

During his presentation, Mr Miller showed how to attack three separate phones – the Samsung Nexus S, the Google Galaxy Nexus and the Nokia N9.

To attack the phones Mr Miller wrote software to control a reader tag that works in conjunction with NFC. As its name implies, NFC works when devices are brought close together or are placed near a reader chip.

In one demo Mr Miller piped commands through his custom-built chip that abused a feature of the smartphones known as Android Beam. This allows phone owners to send links and information over short distances to other handsets.

He discovered that the default setting in Android Beam forces a handset to visit any weblink or open any file sent to it. Via this route he forced handsets to visit websites that ran code written to exploit known vulnerabilities in Android.
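Android Beam carries such links as NDEF (NFC Data Exchange Format) messages: a URL travels in a well-known “U” record whose first payload byte is an abbreviation code for the URI scheme. A sketch of how compact a booby-trapped tag payload can be (prefix codes per the NFC Forum’s URI record type definition; the URL is a made-up example):

```python
# A few of the published URI prefix abbreviation codes.
URI_PREFIXES = {0x01: "http://www.", 0x02: "https://www.",
                0x03: "http://", 0x04: "https://"}

def ndef_uri_record(uri: str) -> bytes:
    """Build one short NDEF record of well-known type 'U' carrying a URI,
    the kind of payload Android Beam hands straight to the browser."""
    code, prefix = 0x00, ""  # 0x00 means "no abbreviation"
    for c, p in URI_PREFIXES.items():
        if uri.startswith(p) and len(p) > len(prefix):
            code, prefix = c, p
    payload = bytes([code]) + uri[len(prefix):].encode("utf-8")
    header = bytes([
        0xD1,          # flags: MB, ME, SR set; TNF=0x01 (well-known type)
        0x01,          # type length: 1 byte
        len(payload),  # payload length (short-record form)
    ])
    return header + b"U" + payload
```

A record this small fits on the cheapest passive tags, which is why a sticker on a poster or payment terminal is enough to redirect a vulnerable handset.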

“The fact that, without you doing anything, all of a sudden your browser is going to my website, is not ideal,” Mr Miller told tech news website Ars Technica.

In one demonstration using this attack Mr Miller was able to view files on a target handset.

On the Nokia phone, Mr Miller demonstrated how to abuse NFC and take complete control of a target handset, making it send texts or make calls, via the weaknesses exploited by his customised radio tag.

Mr Miller said that to successfully attack phones they must be running a particular version of the Android operating system, be unlocked and have their screen active.

Nokia said it was aware of Mr Miller’s research and said it was “actively investigating” his claims of success against its N9 phone. It said it was not aware of anyone else abusing loopholes in Android via NFC.

Google has yet to comment on the research.

Source:  BBC

Researcher: New air traffic control system is hackable

Friday, July 27th, 2012

Air traffic control technology is getting a major upgrade in the United States that is scheduled to be completed in 2014, but the new systems are susceptible to potentially dangerous manipulation, according to a security researcher.

The actual flaws might seem mild compared to everyone’s worst fears and common Hollywood plot lines. Planes cannot be forced from the sky or dangerously redirected. But the researcher says the system can be tricked into seeing aircraft that are not actually there. Messages sent using the system are not encrypted or authenticated, meaning anyone with the basic technology and know-how could identify a plane and see its location.

Computer scientist Andrei Costin, a Ph.D. student at Eurecom, gave a talk on the weaknesses of the new air traffic system at the Black Hat security conference in Las Vegas on Wednesday. He did not mention any known hacks of the system, but did demonstrate the potential negative scenarios.

Old radar systems are being replaced with a new technology called the Automatic Dependent Surveillance – Broadcast system, or ADS-B. Traditional radar works by sending a signal that triggers an aircraft’s transponder to send back its position. The new system uses the global satellite navigation system to continuously broadcast the locations of planes. The information is sent to other aircraft and ground stations; the ground stations relay the locations to air traffic controllers.

The new system will open up this flight information to a new player: the general public.

“There are various applications which you can go to and basically see, online, in real time, all the airplanes which broadcast their information,” said Costin.

According to Costin, the chance of these security holes being exploited for terrorism is unlikely, but he says they still have the potential to be used by pranksters, paparazzi and military intelligence organizations interested in tracking private aircraft or confusing air traffic control systems on the ground. Intercepting the messages, jamming the system or attacking it by adding false information does not require advanced technology; the necessary software-defined radio retails for under $800.
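The lack of authentication is concrete: the only integrity check on a 1090ES ADS-B frame is a 24-bit CRC with a published generator polynomial, which catches transmission errors but, unlike a cryptographic signature, proves nothing about who transmitted it. A sketch of the Mode S checksum, assuming the standard polynomial 0xFFF409:

```python
MODE_S_POLY = 0xFFF409  # published Mode S / ADS-B CRC-24 generator polynomial

def crc24(data: bytes) -> int:
    """Bitwise Mode S CRC-24: MSB first, zero initial value, no final XOR."""
    reg = 0
    for byte in data:
        reg ^= byte << 16
        for _ in range(8):
            reg <<= 1
            if reg & 0x1000000:
                reg ^= MODE_S_POLY
            reg &= 0xFFFFFF
    return reg

# Anyone can mint a frame a receiver will accept: take 88 bits of message
# body, append the CRC, and transmit; there is no key to forge.
body = bytes(11)  # placeholder; a real frame carries DF, ICAO address, position
frame = body + crc24(body).to_bytes(3, "big")
```

Appending the checksum makes the whole 112-bit frame check out to zero, which is the only test a receiver applies before trusting the contents.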

One of the technology’s makers downplayed the threat.

“We are quite familiar with the theory that ADS-B could be ‘spoofed,’ or barrage jammed by false targets. There’s little new here. In fact, just about any radio frequency device can be interfered with somewhat,” said Skip Nelson, the president of ADS-B Technologies, which is one of many companies making these components. “I obviously can’t comment on countermeasures, but you should know that this issue has been thoroughly investigated and international aviation does have a plan.”

In a statement, the Federal Aviation Administration said it already has a process in place for addressing potential threats to the system, and it does conduct ongoing assessments of vulnerabilities: “An FAA ADS-B security action plan identified and mitigated risks and monitors the progress of corrective action. These risks are security sensitive and are not publicly available.”

The FAA has sunk millions of dollars into the system. The benefits of the ADS-B are that it will show more precise locations of aircraft and pilots will have access to more information about surrounding aircraft while in the air. The FAA also says it is more environmentally friendly by making flight routes more direct and saving on fuel.

Given the large time and financial investment, the FAA is not going to abandon the new technology. However, it isn’t throwing out the old system completely, just in case.

“The FAA plans to maintain about half of the current network of secondary radars as a backup to ADS-B in the unlikely event it is needed,” the FAA said in its statement.

Source:  CNN

Web-connected industrial controls stoke security fears

Tuesday, July 24th, 2012

Until a few days ago, anyone who had done a bit of digging into the security of industrial control systems could have reached into the website of a Kansas agricultural concern and turned off all its windmills.

The owner had left the system connected to the open Internet without any password protections, despite warnings from Canadian manufacturer Endurance Wind Power. A cyber researcher found the vulnerability along with thousands of other exposed industrial controls, many of them in critical facilities.

“I advise people that it’s the digital equivalent of sending your 12-year-old daughter to school without pants, but farmers aren’t big on our required security,” said Mike Meehan, an engineer at Endurance contacted by Reuters. He said he would contact the customer and use the discovery to urge other Endurance clients to limit connections to secure networks.

The research that found the lapse came from one of two new studies on the security of industrial controls that were provided to Reuters in advance of their public release at the Black Hat security conference being held this week in Las Vegas.

The research buttresses concerns that critical national infrastructure in the West is more vulnerable to hacking attacks now than two years ago — despite its status as a top cybersecurity priority for the White House and other parts of the federal government.

Eireann Leverett, the researcher who found the Endurance customer, wrote in a master’s thesis last year that he had found 7,500 control devices connected to the Net, more than 80 percent of which did not require a password or other authentication before allowing a visitor to interact with the machines.

In his more recent work to be presented at Black Hat, Leverett said he found 36,000 such connected devices, including some in power plants. He said he wanted to “demolish the myth” that control systems are generally safe because of an “air gap” between them and the Net.

Ruben Santamarta, who is also presenting a paper at Black Hat, focused on smart meters, which measure and control electricity use. Smart meters are supported by utilities and governments worldwide because they can improve efficiency in consumption and report patterns back to energy providers.

Working from instruction manuals in a lab, Santamarta found a “back door” in one of the most popular types of smart meter, the ION product line made by Schneider Electric (SCHN.PA) of France. It was a reserved factory login account that enabled the company to change billing records and update the software.

After some more digging, Santamarta discovered that the passwords are computed from the serial number of the devices, which can be discovered by attempting to connect to them.
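The structural problem is worth spelling out: any password derived from a publicly readable identifier by a fixed, secret-free function is effectively public once that function is reverse-engineered. A hypothetical sketch (the derivation and the serial format below are invented for illustration; Schneider’s actual scheme was not disclosed):

```python
import hashlib

def derive_password(serial: str) -> str:
    """Hypothetical serial-to-password derivation. Any function like this,
    however convoluted, offers no security: the serial is readable by
    whoever connects, and the function ships inside every device."""
    return hashlib.sha256(serial.encode("utf-8")).hexdigest()[:8]

# The attacker reads the serial off the wire, runs the same function,
# and obtains the same "secret" the vendor's tools use.
attacker_password = derive_password("ION-0012345")  # made-up serial
```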

“Once you have access to the smart meter you can do anything,” Santamarta said. “You could sabotage it to disrupt the power in the facility” or install a spying program.

Santamarta contacted Department of Homeland Security (DHS) officials and Schneider, which has already made some patches available. Schneider did not respond to a request for comment.


The annual Black Hat conference, which runs through Thursday, comes as U.S. President Obama renews his push for a comprehensive cybersecurity bill with infrastructure protection as its centerpiece.

Reports of cyber-intrusions into water, energy and other infrastructure facilities leaped to 198 in 2011 from 41 the previous year, according to the DHS. Those incidents include common criminal infections and email-based attacks aimed at stealing corporate information through malicious attachments.

Though deliberate manipulation of industrial controls by outsiders remains rare, experts and officials are concerned that it is only a matter of time before other countries, criminal gangs, terrorists or pranksters wreak havoc on dams, power plants or water treatment facilities.

The vulnerabilities are much harder to correct than those in standard software used by consumers and ordinary businesses, they say. Software for controlling industrial activity including generators, pumps and valves can remain in place for a decade or longer, making the improvement cycle slow.

In addition, many control devices and software were designed without a thought that they would ever be connected to the Internet, so they were built with minimal security.

A number of researchers have known for more than a decade about the pervasive problems. Black Hat founder Jeff Moss said that as a teenager, he discovered a dam that had never changed the default username and password on its control software, so that anyone who connected and ran a password-cracking program could have opened the gates.

Moss said he and others warned some facilities and manufacturers, and kept quiet to the public. But after the Stuxnet worm used industrial control vulnerabilities to disable Iranian centrifuges two years ago, there has been a rush to publish such findings.

“That’s going to be the pain that forces the sector to reform,” Moss said.

A convoluted regulatory structure makes it difficult for authorities to insist that owners of critical infrastructure meet basic standards.

On Friday, for example, the Department of Energy released a security “self-evaluation tool” for utilities, in hopes that private companies will want to investigate their own defenses and anonymously let regulators know how well they are doing. Officials said the voluntary tool was part of a “roadmap” for getting to energy sector cybersecurity that was published in September by a joint industry-government group.

That roadmap calls for solid defenses to be in place against critical attacks by 2020, nine years after publication.

Department of Energy spokeswoman Keri Fulton said the new tool was a sign that the government is not content to wait until 2020. “This is something that we are taking very seriously. We are trying to take concrete steps right now.”

The DHS, the lead U.S. authority for cybersecurity, declined interview requests.


Though many owners of control devices do not realize that their systems are hooked up to the Net — and their regulators can be likewise oblivious — a new generation of tools is making it easy for researchers and adversaries to find them.

Chief among them is Shodan, a specialized search engine that checks for connections and can be asked to look for just one brand of software at a time, such as one known to operate dams.

A majority of the devices Leverett found, like the Kansas windmills, would not be considered “critical” to national safety or the economy — some 26,000 were heating, cooling and ventilation controls, which would only be vital if they were needed to do such things as prevent a key mine from overheating or keep a nuclear plant cooled. But these network weaknesses could give an attacker a foothold to reach more core processes.

Leverett, who now works for security firm IOActive, said that internationally he identified four devices inside power plants, two in hydropower plants, and one in a geothermal plant.

Both Santamarta and Leverett said that the core problem isn’t any one company but the prevailing architecture in the control-software industry, which will take a long time to change without concerted demands by governments or private customers.

“It’s not the only back door I’ve found,” Santamarta said. “The attack surface is massive.”


Bold plan: opening 1,000 MHz of federal spectrum to WiFi-style sharing

Monday, July 23rd, 2012

Government spectrum should be shared, not hoarded, Presidential council says.

An advisory council to President Obama today said the US should identify 1,000 MHz of government-controlled spectrum and share it with private industry to meet the country’s growing need for wireless broadband.

The report from the President’s Council of Advisors on Science and Technology (PCAST) says the “traditional practice of clearing and reallocating portions of the spectrum used by Federal agencies is not a sustainable model.” Instead, spectrum should be shared. For example, the government might need a chunk of spectrum for communications and radar systems in certain places and at certain times—but “that spectrum can be freed up for commercial purposes at other times and places while respecting the paramount needs of the Federal system.”

If put into place, the plan would represent a major change in how the US government uses and distributes spectrum, one that will help power our future filled with 4G phones and tablets. Spectrum sharing is a concept already embraced by TV white spaces technology, which is sometimes called “Super WiFi” and uses empty TV channels instead of traditional WiFi frequencies. The PCAST recommendation is that sharing shouldn’t be the exception—it should be the norm.

“Technology innovations of recent years make this transformation eminently achievable,” the council explained. “Two trends are especially important. First, instead of just the tall cell towers that provide coverage for very large geographic areas, many wireless services are already moving to ‘small cell’ operations that provide services for very small geographic areas, reducing the potential for interference so that other services may operate much closer to them. The huge explosion of WiFi services is one example of this evolution. Second, improvements in performance make it possible for devices to deliver services seamlessly even in the presence of signals from other systems, so that they do not need exclusive frequency assignments, only an assurance that potentially interfering signals will not rise above a certain level.”

Given these capabilities, it’s time to stop fragmenting spectrum “into ever more finely divided exclusivity frequency assignments,” and start specifying large frequency bands that can accommodate a wide variety of uses and technologies in a much more efficient manner.

PCAST said it has already identified more than 200 MHz of federal spectrum that can be freed for sharing. Another 195 MHz will be identified in a report coming later this year, and the Federal Communications Commission will use incentive auctions “to free up substantially more prime spectrum,” the council noted.

Some public interest groups are already applauding the PCAST report.

“The path to sustainable spectrum growth must take advantage of our power to innovate and our leadership in open spectrum technologies such as WiFi and Super WiFi,” Public Knowledge Senior VP Harold Feld said in a statement. “For too long, policymakers and industry lobbyists have quarrelled over whether to embrace more exclusive licensing or spectrum sharing as if a gain for one means a loss for the other. We are happy the PCAST report rejects this false choice that has deadlocked our spectrum policy for too long. By embracing sharing while continuing to find clearable spectrum for auction, we can not only ensure an endless supply of cat videos for our smart phones, but also provide enough open spectrum for technological innovation, job creation, and lower connection prices for consumers.”

PCAST recommended allowing “general authorized access” devices to operate in the 3550-3650 MHz band (used for radar). It also identified a number of other federal bands as potentially suitable for shared use (the original table of candidate bands is not reproduced here).
The Wireless Innovation Alliance weighed in on the plan, saying, “We agree with PCAST that expanding on the TV White Space database approach holds immediate promise for opening the underutilized 3550-3650 MHz band for unlicensed devices and encourage the FCC and NTIA to make implementation a priority.”

Today’s report was issued in response to a 2010 memorandum from Obama that required 500 MHz of spectrum to be made available for commercial use over the next ten years. In recommending 1,000 MHz of spectrum, PCAST noted that “in just two years, the astonishing growth of mobile information technology—exemplified by smartphones, tablets, and many other devices—has only made the demands on access to spectrum more urgent.”


Tri-band WiFi chips for 7Gbps speed coming from Marvell, Wilocity

Monday, July 23rd, 2012

Chips merge 802.11n WiFi with 60GHz to fuel everything from phones to routers.

One of the biggest changes ever made to WiFi is coming in the next year with a new standard supporting the 60GHz band, powering much faster transmissions than are possible in the existing 2.4GHz and 5GHz bands. All that’s needed are some chips, and products to put them in.

Slowly but surely, the chipmakers embracing 60GHz technology are making their plans known. The latest is Marvell, which today announced a partnership with startup Wilocity to make tri-band chips that will use all three bands. That will allow consumer devices to connect to existing WiFi networks while also taking advantage of the super-fast 60GHz band for high-speed data transfer and high-quality media streaming. Under the developing 802.11ad standard, 60GHz transmissions can hit 7Gbps.

Wilocity already has a partnership with Qualcomm Atheros, Qualcomm’s networking subsidiary, to build tri-band chips. Those are expected to come out by the end of this year and focus on the PC notebook market—for example a laptop bundled with a remote docking station. The partnership with Marvell won’t result in shipping products until 2013, but Wilocity’s VP of Marketing, Mark Grodzinsky, told us that the Marvell/Wilocity chips will focus on a broader range of products including tablets, Ultrabooks, and phones. The two companies are also targeting access points, residential gateways, and media center devices.

The first tri-band chips will support the existing 802.11n standard for 2.4GHz and 5GHz transmissions, as well as the forthcoming 802.11ad for 60GHz transmissions. Unfortunately, those first chips won’t support 802.11ac, the other forthcoming WiFi standard that will dramatically speed up the 5GHz band.

Eventually, you can expect to see chips supporting 11n, 11ac, and 11ad all in one package. Although some 11ac products are already on the market, both 11ac and 11ad are still awaiting ratification by the IEEE (Institute of Electrical and Electronics Engineers). Real-world applications of 11ac might move along more quickly than 11ad because it’s based on the familiar 5GHz band. Grodzinsky said he doesn’t expect mass adoption of 60GHz technologies until 2014.


Four tech trends in IT disaster recovery

Friday, July 20th, 2012

As we’ve seen in recent years, natural disasters can lead to long-term downtime for organizations. Because earthquakes, hurricanes, snow storms or other events can put data centers and other corporate facilities out of commission for a while, it’s vital that companies have in place a comprehensive disaster recovery plan.

Disaster recovery (DR) is a subset of business continuity (BC), and like BC, it’s being influenced by some of the key trends in the IT industry. Foremost among these are cloud services, server and desktop virtualization, the proliferation of mobile devices in the workforce, and the growing popularity of social networking as a business tool.

These trends are forcing many organizations to rethink how they plan, test and execute their DR strategies. CSO previously looked at how these trends are specifically affecting IT business continuity; as with BC, much of the impact they are having on DR is for the better. Still, IT and security executives need to consider how these developments can best be leveraged so that they improve, rather than complicate, DR efforts.

Here’s a look at how these four trends are having an impact on IT disaster recovery.

Cloud Services

As organizations use more internal and external cloud services, they’re finding that these resources can become part of a disaster recovery strategy.

Marist College in Poughkeepsie, N.Y., provides numerous private cloud services to internal users and customers. It also hosts services for 17 school districts and large enterprise clients.

“The cloud configuration allows us to perform software upgrades across the multiple tenant systems quickly, easily and without disruptions,” says Bill Thirsk, vice president of IT and CIO at the college.

“Because our storage is virtualized, we can replicate data across SANs [storage-area networks] that we have placed strategically on our campus in numerous locations and in our data center [in Syracuse, N.Y.]. A loss of a SAN means only that production operations switch over to another.”

Because Marist can perform server-level backups across partitions, it can move data from one server platform to another should an event occur, Thirsk says.

There’s big potential value in cloud-based DR services, says Rachel Dines, senior analyst, Infrastructure & Operations, at Forrester Research in Cambridge, Mass.

To date, adoption of these offerings has been low, Dines says, “but there is a huge amount of interest and planning going on at end-user companies. Instead of buying resources in case of a disaster, cloud computing and its pay-per-use pricing model allows companies to pay for long-term data storage while only paying for servers if they have a need to spin them up for a disaster or test.”

Cloud-based DR has the potential to give companies lower costs yet faster recovery, with easier testing and more flexible contracts, Dines says.
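Dines’ pay-per-use point can be made concrete with some back-of-the-envelope arithmetic. The figures below are hypothetical assumptions for illustration, not vendor pricing:

```python
# Hypothetical comparison of DR cost models: a traditional standby contract
# vs. cloud pay-per-use, where servers are billed only while spun up.

def standby_site_cost(monthly_fee, months=12):
    """Traditional DR: pay for a standby site and idle servers year-round."""
    return monthly_fee * months

def cloud_dr_cost(storage_gb, gb_month_rate, server_hourly, hours_used, months=12):
    """Cloud DR: pay continuously for long-term data storage, but for
    servers only while they run during a disaster or a test."""
    return storage_gb * gb_month_rate * months + server_hourly * hours_used

traditional = standby_site_cost(monthly_fee=5_000)            # $60,000/year
cloud = cloud_dr_cost(storage_gb=2_000, gb_month_rate=0.10,   # $2,400/year storage
                      server_hourly=4.00, hours_used=200)     # + $800 for two DR tests
print(traditional, cloud)  # 60000 3200.0
```

The gap narrows if recovery servers must run continuously, which is why the savings are framed around testing and rare activation rather than steady-state use.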

In a 2012 report, Forrester says cloud-based DR threatens to shake up legacy approaches and offers a viable alternative to organizations that previously couldn’t afford to implement disaster recovery or found it to be a burdensome task.

Perhaps the biggest downside to the cloud from the standpoint of DR is the set of concerns surrounding security and privacy management.

“You still see with some major events, such as the lightning strike in Dublin [in 2011] that took out the cloud services of Amazon and Microsoft, that there can be some temporary loss of service,” says John Morency, research vice president at research firm Gartner Inc. in Stamford, Conn. “The cloud shouldn’t be considered 100% foolproof. If organizations do need that 100% availability guaranteed they need to put some serious thought into what they need to develop for contingencies.”

A growing number of larger companies with complex IT infrastructures are putting in private clouds and using these as part of their disaster recovery strategies, rather than relying on public cloud services, Morency says. “They worry about being left out in the cold during a disaster” if service providers are not able to provide service, he says.

Morency notes that this is only true in the case of DR subscription services that provide floor space and actual equipment at a specific geographical location. “Given the more distributed and virtual nature of public clouds, this is much less of an issue,” he says.

What the cloud has done for traditional disaster recovery service providers is make testing of their backup capabilities more flexible and less costly, Morency says.


Server Virtualization

For many organizations, server virtualization has become a key component of the DR strategy, because it enables greater flexibility with computing resources.

“Virtualization has the potential to speed up the implementation of a disaster recovery strategy and the actual recovery in case of a disaster,” says Ariel Silverstone, an independent information security consultant and former CISO of Expedia.

“It also has the ability to make disaster recovery more of an IT function rather than a corporate audit-type function,” Silverstone says. “If you have the right policies and processes in place, [with virtualization] disaster recovery can become part of automatically deploying any server.”

Virtualization enables companies to create an image of an entire data center that can be quickly activated–in part or in whole–when needed, at a relatively low cost, Silverstone says.

For Teradyne Inc., a North Reading, Mass., supplier of test equipment for electronic systems, virtualization has been an enabler for a much improved DR capability, says Chuck Ciali, CIO.

“We have leveraged virtualization for DR significantly,” Ciali says. Using virtualization technology from VMware, Teradyne can seamlessly fail over to redundant blade servers in the case of hardware problems. It can also use the technology to move workloads from its commercial data center to its research and development data center in case of disasters.

“This has taken our recovery time from weeks [or] days under our former tape-based model to hours for critical workloads,” and saves $300,000 per year in DR contract services, Ciali says.

Marist College has deployed virtualization, and one of the benefits is avoiding systems unavailability. “We do all we can to avoid any event that would cause users dissatisfaction, loss of access or loss of functionality,” Thirsk says. “To do so, we utilize massive virtualization of our processors, our network topology and our storage.”

Because Marist IT can now provide a virtual server, virtual network and spin out storage, “our systems assurance activities move along at a very rapid rate,” Thirsk says.

“If at any point of testing something goes horribly wrong, we can decide to trash it and start over or continue forward, all without much trouble at all on the system side.”

On the whole, server virtualization has made DR a lot easier, Dines says. “Because virtual machines are much more portable than physical machines and they can be easily booted on disparate hardware, a lot of companies are using virtualization as a critical piece of their recovery efforts,” she says.

There are lots of offerings in the market that can perform tasks such as automating rapid virtual machine rebooting, replicating virtual machines at the hypervisor layer with heterogeneous storage, and turning backups of physical or virtual machines into bootable virtual machines, Dines says.

“Ultimately, virtualization means companies can get a faster RTO [recovery time objective] for less money,” she says.

On the downside, the popularity of virtualization has led to virtual machine sprawl at many organizations, which can make DR more complex. “Companies have the [virtualization] structure in place that gives them the ability to create many more images, including some they do not even know about or plan for,” Silverstone says. “And they can do so very quickly.”

Another potential negative is that virtualization might give organizations a false sense of security. “People may fail to plan properly for disaster recovery, assuming that everything will be handled by virtualization,” Silverstone says. “There are certain machines that for various reasons are not likely to be virtualized, so using virtualization does not replace the need for proper disaster recovery planning and testing.”

Mobile Devices in the Workforce

From a disaster recovery standpoint, the growing use of mobile devices such as smartphones and tablets facilitates the continuation of IT operations and business processes even after a disaster strikes.

“People will carry their mobile devices with them,” says George Muller, vice president of sales planning, supply chain and IT at Imperial Sugar Co., a Sugar Land, Texas-based processor and marketer of refined sugar.

“I might not carry my laptop wherever I go, but if all of a sudden we’ve got a disaster I’ve probably got my Blackberry in my shirt pocket. Anything that facilitates connectivity in a ubiquitous way is a plus.”

One of the positive impacts of the prevalence of mobile devices is that it gives people a greater ability to work remotely and communicate using their devices in an emergency, says Malcolm Harkins, vice president of the IT group and CISO at microprocessor manufacturer Intel Corp. in Santa Clara, Calif.

But mobile device proliferation has also made disaster recovery slightly more complex, Dines says. “Along with mobile devices comes more data center infrastructure, such as mobile device management and [products] such as the BlackBerry Enterprise Server, which are often very critical,” she says. “This becomes one more system that must be planned for and properly protected.”

Another possible negative with mobility in a disaster recovery scenario is that some critical enterprise applications, such as payroll, might not be available for mobile devices, Silverstone says.

Harkins notes that there are potential security risks, such as unencrypted mobile devices being lost or stolen, and unauthorized access to corporate networks from these devices. But these risks can be mitigated by the ability to remotely wipe data from devices over the Internet.

Social Networking

Like mobile devices, social networking gives people another way to stay in contact during or after a disaster.

“We’ve seen instances such as a couple of years ago when we had major snow storms on the east coast and a lot of businesses shut down and employees kept in touch with each other via Facebook and Twitter vs. email,” Morency says.

In some cases it might take days or weeks for a corporate data center to recover after a disaster. And if the company relies on internal email systems, that could put email service out of commission as well, Morency says.

“Assuming that either public or wireless networks are still available you can now be using social media to communicate, as an alternative to in-house email which may not be available,” Morency says.

“If you’re using a service like Gmail, then it’s less of an issue. But if you’re using an Exchange-based internal email or directory services, then social media may be a more available alternative.”

During a recent disaster test that Marist College performed, “we were curious to see how social networking would be used in case of an actual event,” Thirsk says. Early one morning the IT department launched an unannounced disaster drill. “While we had warned staff we would be doing this, they had no idea how real we were going to make it,” he says.

First, Thirsk sent a message that the college was experiencing a massive system failure. Due to building conditions, staffers could not report to their work place or to the data center. “We shut down our enterprise communications systems and then watched how the staff responded,” Thirsk says.

Managers quickly began communicating to their staff via outside email accounts, chat rooms, Facebook and Twitter. “They even found my personal email account off campus and began messaging me,” Thirsk says.

In a matter of 20 minutes, all staff had reported to a command center in the campus library, where they were tasked with performing a number of system checks, verifications and processes. “All of this activity occurred using alternate communications methods,” Thirsk says. “We documented this exercise and now use it as part of our plan.”

Forrester says there are several reasons why social networking should play a role in an emergency communications strategy. For one thing, social technology adoption is increasing, and a greater portion of employees and customers have continuous access to social sites such as Twitter and Facebook.

In addition, social channels are essentially free. It costs very little to set up a Facebook, Twitter or Yammer profile, recruit followers, and send out status updates.

Social media sites can also facilitate mass communication with external parties, the firm says. Typically, during a crisis immediate communication is limited to internal staff. However, companies should also plan for situations that call for communication with partners, customers, public officials and the public at large. Social media sites make it easy to establish these external connections.

Finally, social discussions enable mass mobilization and situational awareness, giving social networking sites unique advantages in the crisis communications arena, Forrester says.

One downside of social networks for disaster recovery is that social networking “by its very nature has the ability to increase FUD–fear, uncertainty and doubt,” Silverstone says. “So I would advise companies that they need to have a policy in place [on how to use social networks] long before a disaster happens, and think about many different possibilities and manage all access and data sharing on social networks like any other communication effort.”


Senate introduces revised version of the Cybersecurity Act of 2012

Friday, July 20th, 2012

Five senators, including Senator Joe Lieberman, introduced a modified version of the Cybersecurity Act of 2012 (PDF) today, hoping to revitalize lagging support for the bill, especially among Republicans. The act, which was first introduced in February 2012, calls for the creation of a council chaired by the Secretary of Homeland Security, and aims to promote the hardening of infrastructure critical to the US (it’s not to be confused with SOPA, CISPA, PIPA, or ACTA, each of which made a claim to “enhancing cybersecurity” in its own way).

The revised version of the act makes the originally mandatory, government-dictated security standards optional, but still establishes a “National Cybersecurity Council” to “coordinate with owners and operators of critical infrastructure.” If the measure is enacted, the Council would take an inventory of high-risk infrastructure, and would ask the owners of that infrastructure to come up with voluntary measures that could mitigate risks.

“A federal agency with responsibilities for regulating a critical infrastructure sector may adopt the practices as mandatory,” a summary of the bill (PDF) noted.

The measure goes on to imply that enforcement will be loose: “Owners of critical infrastructure may apply for certification in the program by self-certifying to the Council that the owner is satisfying the cybersecurity practices developed under section 103 or submitting to the Council a third party assessment verifying that the owner is satisfying the cybersecurity practices.”

But owners of critical infrastructure that self-certify with the council will be granted benefits for their participation, including liability protection if the infrastructure sustains damage while the voluntary risk-management measures were in place, expedited security clearances for employees, priority assistance on “cyber issues,” and warnings based on relevant threat information that other companies report.

The new language also includes a number of rules that have been applauded by the ACLU, including prohibiting the Federal government, “from compelling the disclosure of information from a private entity relating to an incident unless otherwise authorized by law and from intercepting a wire, oral, or electronic communication relating to an incident unless otherwise authorized by law.” The authors of the Cybersecurity Act went out of their way in the original document to avoid new regulation over individuals and networks, some say to stay away from the backlash created by SOPA and CISPA, so the additions seem like a bid to find support among privacy experts.


XP and Vista users: No Office 2013 for you

Thursday, July 19th, 2012

Still running XP or Vista and eyeing Office 2013? Sorry, you’re out of luck.

Unveiled on Monday, the upcoming new Office suite won’t support Windows XP or Vista, meaning users who need or want Office 2013 will have to upgrade to Windows 7 or Windows 8.

Microsoft confirmed the tighter requirements on its Office 2013 Preview Technet page. Only Windows 7, Windows 8, Windows Server 2008 R2, and Windows Server 2012 will be able to run the new suite.

Users will also need a PC with at least a 1GHz processor, 1GB of RAM for the 32-bit version (2GB for the 64-bit version), at least 3GB of free hard disk space, and a graphics card that can provide at least a 1024 x 576 resolution.

The PC specs shouldn’t be a challenge for most users. But the OS requirement may prove problematic.

Vista users have been dropping like flies, most of them likely upgrading to Windows 7 by this point. Recent stats from Net Applications showed Vista’s market share at less than 7 percent in June, and steadily dropping.

But Windows XP is hanging on after more than 10 years.

Though Windows 7 is likely to claim the top spot this month, XP still holds more than 40 percent of the market, according to Net Applications.

That figure certainly covers many businesses, large and small, that rely on XP as a standard and stable environment that supports all their applications and is familiar to their users.

Microsoft may be hoping that the appeal of Office 2013 will prompt more users and businesses to move away from XP. The company may even be looking at the combination of Windows 8 and Office 2013 to convince more people to upgrade both their OS and Office suite around the same time.

Extended technical support for Windows XP will also end in April 2014, which means no more patches, bug fixes, or other updates. Microsoft has revealed no release date for Office 2013, but let’s assume it debuts by the end of the year or early 2013. Why support an operating system that’s due to expire the following year, especially when you’re trying to push users to upgrade?

Still, it’s a gamble. The number of XP installations will certainly continue to fall as more companies make the move to Windows 7. But even by the time Office 2013 launches, XP will still hold a healthy chunk of the market, leaving a lot of people unable to run the new suite.

Windows and Office are Microsoft’s two bread-and-butter products, accounting for a major chunk of the company’s business. To continue to generate revenue, the company needs its customers to constantly migrate to the latest versions of both products.

And while individual users can easily upgrade a single machine, businesses face the time, expense, and effort of migrating hundreds, thousands, or tens of thousands of machines. So despite Microsoft’s best efforts, many companies will continue to hold on as long as they can with their current versions of Windows and Office.

Source:  CNET

Experts take down Grum spam botnet, world’s third largest

Wednesday, July 18th, 2012

Botnet was responsible for 18 billion spam messages a day — about 18 percent of the world’s spam — experts tell The New York Times.

Computer-security experts took down the world’s third-largest botnet, which they say was responsible for 18 percent of the world’s spam.

Command-and-control servers in Panama and the Netherlands pumping out up to 18 billion spam messages a day for the Grum botnet were taken down Tuesday, but the botnet’s architects set up new servers in Russia later in the day, according to a New York Times report. California-based security firm FireEye and U.K.-based spam-tracking service SpamHaus traced the spam back to servers in Russia and worked with local ISPs to shut down the servers, which ran networks of infected machines called botnets.

The tech community has stepped up its efforts of late to take these botnets offline. Microsoft in particular has been quite active, using court orders to seize command-and-control servers and cripple the operations of the Waledac, Rustock, and Kelihos botnets.

The takedown of the Rustock botnet cut the volume of spam across the world by one-third, Symantec reported in March 2011. At its peak, the notorious botnet was responsible for sending out 44 billion spam messages per day, or more than 47 percent of the world’s total output, making it the leading purveyor of spam.

Security experts are confident they have stopped the Grum botnet in its tracks.

“It’s not about creating a new server. They’d have to start an entirely new campaign and infect hundreds of thousands of new machines to get something like Grum started again,” Atif Mushtaq, a computer security specialist at FireEye, told the Times. “They’d have to build from scratch. Because of how the malware was written for Grum, when the master server is dead, the infected machines can no longer send spam or communicate with a new server.”

Source:  CNET

Microsoft unveils Office 2013, download the preview now

Tuesday, July 17th, 2012

Microsoft has today taken the lid off its all-new version of Microsoft Office, dubbed Office 2013. And if you’ve used Office before, expect to be working with a suite of applications that look very different when you finally make the jump to Windows 8.

Microsoft has redesigned Office with touch in mind, so as to allow the suite to work across desktops, laptops, tablets, and smartphone devices, as well as putting a new focus on using the cloud. In order to do that the interface required a radical overhaul, and the result is Office 2013.

As well as creating a brand new interface that is touch-friendly, Microsoft wants to ensure you can access your documents everywhere and from any Windows 8 device. The solution to that problem is integration with cloud-based storage through SkyDrive, which Office uses by default to save all documents. Saving in the cloud also extends to your Office preferences and most recent files.

There will be two versions of Office on offer. Office 2013 is the desktop version we’re all used to, with a one-time fee for a license. Then there’s Office 365, the subscription version. Essentially, Office 365 allows you to use Office 2013 across multiple machines (how many will depend on the version you choose) and sync your data between them using SkyDrive.

Microsoft also intends to make some extra cash by selling extra SkyDrive storage to heavy users, and by integrating Skype, for which you can buy credit for calls that aren’t deemed free.

As for pricing, Microsoft has yet to announce the final details. If you purchase a tablet running Windows RT it will ship with Word, Excel, PowerPoint, and OneNote by default. It’s likely pricing for the standard single-license Office 2013 will be similar to what has gone before, but the subscription pricing will be key, especially if Microsoft intends to compete with Google in the cloud-office space.

One thing Microsoft has promised is that all subscription levels will include all applications (Word, Excel, PowerPoint, OneNote, Outlook, Publisher, and Access). And subscribing guarantees that future updates and new versions will roll out to you without disruption.

As for the levels of subscription, there will be Office 365 Home Premium for consumers, Office 365 Small Business Premium for business users, and Office 365 ProPlus for enterprises. The Home Premium edition will include 20GB of SkyDrive storage, 60 minutes of Skype world minutes to use every month, and 5 licenses for installing Office on multiple machines and devices.

Final pricing for Office 2013 is expected to be announced in the fall, but you don’t have to wait until then to try out the new suite. Microsoft is offering a Customer Preview version you can sign up for and try right now.


Obama signs order outlining emergency Internet control

Thursday, July 12th, 2012

A new executive order addresses how the country deals with the Internet during natural disasters and security emergencies, but it also puts a lot of power in the government’s hands.

President Barack Obama signed an executive order last week that could give the U.S. government control over the Internet.

With the wordy title “Assignment of National Security and Emergency Preparedness Communications Functions,” this order was designed to empower certain governmental agencies with control over telecommunications and the Web during natural disasters and security emergencies.

Here’s the rationale behind the order:

The Federal Government must have the ability to communicate at all times and under all circumstances to carry out its most critical and time sensitive missions. Survivable, resilient, enduring, and effective communications, both domestic and international, are essential to enable the executive branch to communicate within itself and with: the legislative and judicial branches; State, local, territorial, and tribal governments; private sector entities; and the public, allies, and other nations. Such communications must be possible under all circumstances to ensure national security, effectively manage emergencies, and improve national resilience.

According to The Verge, critics of the order are concerned with Section 5.2, a lengthy section outlining how telecommunications and the Internet are controlled. It states that the Secretary of Homeland Security will “oversee the development, testing, implementation, and sustainment” of national security and emergency preparedness measures on all systems, including private “non-military communications networks.” Critics say this effectively gives Obama an on/off switch for the Web.

Presidential powers over the Internet and telecommunications were laid out in a U.S. Senate bill in 2009, which proposed handing the White House the power to disconnect private-sector computers from the Internet. But that legislation was not included in the Cybersecurity Act of 2012 earlier this year.

After being published by the Federal Register, executive orders take 30 days to become law. However, the president can amend, withdraw, or issue an overriding order at any time.

Source:  CNET

Hackers post 450K credentials pilfered from Yahoo

Wednesday, July 11th, 2012

Credentials posted in plain text appear to have originated from the Web company’s Yahoo Voices platform. The hackers say they intended the data dump as a “wake-up call.”

Yahoo has been the victim of a security breach that yielded hundreds of thousands of login credentials stored in plain text.

The hacked data, posted to the hacker site D33D Company, contained more than 453,000 login credentials and appears to have originated from the Web pioneer’s network. The hackers, who said they used a union-based SQL injection technique to penetrate the Yahoo subdomain, intended the data dump to be a “wake-up call.”
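The union-based technique the hackers describe can be sketched in a few lines. This is an illustrative example only: the table names, columns, and vulnerable query below are hypothetical, as the actual Yahoo flaw was not disclosed.

```python
import sqlite3

# Hypothetical schema standing in for the vulnerable subdomain's database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE articles (id INTEGER, title TEXT)")
db.execute("CREATE TABLE users (login TEXT, password TEXT)")
db.execute("INSERT INTO articles VALUES (1, 'Hello')")
db.execute("INSERT INTO users VALUES ('alice', 'hunter2')")  # stored in plain text

# Vulnerable: user input is concatenated straight into the SQL string.
def get_article(user_input):
    return db.execute(
        "SELECT id, title FROM articles WHERE id = " + user_input).fetchall()

# A UNION clause grafts rows from an unrelated table onto the result set,
# leaking credentials through a page that was only meant to show articles.
payload = "1 UNION ALL SELECT login, password FROM users"
print(get_article(payload))  # [(1, 'Hello'), ('alice', 'hunter2')]

# Safe: a parameterized query treats the input as data, never as SQL.
def get_article_safe(user_input):
    return db.execute(
        "SELECT id, title FROM articles WHERE id = ?", (user_input,)).fetchall()

print(get_article_safe(payload))  # [] -- the attack string matches nothing
```

Parameterized queries (or an ORM that uses them) are the standard defense, since the database then never interprets attacker input as SQL.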

“We hope that the parties responsible for managing the security of this subdomain will take this as a wake-up call, and not as a threat,” the hackers said in a comment at the bottom of the data. “There have been many security holes exploited in webservers belonging to Yahoo! Inc. that have caused far greater damage than our disclosure. Please do not take them lightly. The subdomain and vulnerable parameters have not been posted to avoid further damage.”

The hacked subdomain appears to belong to Yahoo Voices, according to a TrustedSec report. Hackers apparently neglected to remove the host name from the data. That host name — — appears to be associated with the Yahoo Voices platform, which was formerly known as Associated Content.

Yahoo confirmed that it is looking into the matter. “We are currently investigating the claims of a compromise of Yahoo! user IDs,” it said in a statement, according to the BBC. The company also told the BBC that it was unclear which portion of its network was affected, after first having said the problem originated at Yahoo Voice.

CNET has contacted Yahoo for comment independently and will update this report when we learn more.

Because the data is quite sensitive and displayed in plain text, CNET has elected not to link to the page, although it is not hard to find. However, the page size is very large and takes a while to load.

The disclosure comes at a time of heightened awareness over password security. Recent high-profile password thefts at LinkedIn, eHarmony, and other sites contributed to approximately 8 million passwords being posted in two separate lists to hacker sites in early June. Yesterday, Formspring announced it had disabled the passwords of its entire user base after discovering that about 420,000 hashed passwords, which appeared to come from the question-and-answer site, had been posted to a security forum.

Source:  CNET

Building Windows 8: Protecting user files with File History

Tuesday, July 10th, 2012

Setting it up

Before you start using File History to back up your files, you’ll need to set up a drive to save files to. We recommend that you use an external drive or network location to help protect your files against a crash or other PC problem.

File History only saves copies of files that are in your libraries, contacts, favorites, and on your desktop. If you have folders elsewhere that you want backed up, you can add them to one of your existing libraries or create a new library.

To set up File History

  1. Open the File History Control Panel applet.
  2. Connect an external drive, refresh the page, and then tap or click Turn on.

Screenshot of the File History Control Panel applet showing an external hard drive

You can also set up a drive in AutoPlay by connecting the drive to your PC, tapping or clicking the notification that appears and then tapping or clicking Configure this drive for backup.

Screenshot of AutoPlay options, including speed up my system, configure for backup, open folder and take no action

That’s it. From that moment, every hour, File History will check your libraries, desktop, favorites and contacts for any changes. If it finds changed files, it will automatically copy them to the File History drive.
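That hourly copy step can be sketched roughly as below. This is a minimal sketch under assumptions: the folder layout is illustrative, and the real File History also tracks deletions and finds changes via the NTFS change journal rather than walking the tree. (The "name (2012_07_10 16_00_00 UTC).ext" version-naming style mirrors what File History actually produces.)

```python
import shutil
import time
from pathlib import Path

def back_up_changes(watched: Path, history: Path, last_run: float) -> int:
    """Copy every file modified since the last run to the history drive,
    stamping each copy with the time it was captured."""
    copied = 0
    for f in watched.rglob("*"):
        if f.is_file() and f.stat().st_mtime > last_run:
            # Timestamp with no colons, so the name is valid on NTFS/FAT.
            stamp = time.strftime("%Y_%m_%d %H_%M_%S UTC",
                                  time.gmtime(f.stat().st_mtime))
            # Mirror the folder structure under the history drive.
            dest_dir = history / f.relative_to(watched).parent
            dest_dir.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, dest_dir / f"{f.stem} ({stamp}){f.suffix}")
            copied += 1
    return copied
```

Running this once an hour with `last_run` set to the previous pass gives the behavior described above: unchanged files are skipped, and each changed file gains one more timestamped version on the history drive.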

Restoring files

When something bad happens and one or more personal files are lost, the restore application makes it very easy to:

  • Browse personal libraries, folders and files in a way very similar to Windows Explorer.
  • Search for specific versions using keywords, file names and date ranges.
  • Preview versions of a selected file.
  • Restore a file or a selection of files with one tap or a click of a mouse.

We designed the restore application for wide screen displays and to offer a unique, engaging and convenient way of finding a specific version of a file by looking at its preview.

With other backup applications, you would have to select a backup set that was created on a specific date. Then you would have to browse to find a specific folder, and then find the one file you need. However, at this point it is impossible to open the file or preview its content to determine whether it is the right one. You would have to restore the file, and if it is not the right version, start over.

With File History, the search starts right in Windows Explorer. You can browse to a specific location and click or tap on the History button in the explorer ribbon in order to see all versions of the selected library, folder or an individual file.

For example, when you select a Pictures library and click or tap on the History button…

Screenshot of pictures library with History button called out

… you will see the entire history of this library.

Screenshot of pictures library in File History view

When you click on a specific file, you can see the entire history of the selected picture.

Screenshot of the file history for one picture

In this example, the selected picture has 4 versions. You can easily navigate to the desired version by clicking on the Previous/Next buttons or by swiping the screen. Once you have found the version you were looking for, you can click the Restore button to bring it back. The selected version will be restored to its original location.

Continuous, reliable protection

Instead of using the old backup model, File History takes a different approach to data protection.

Protect only what is most important

Instead of protecting the entire system (operating system, applications, settings and user files), File History focuses only on users’ personal files. That’s what is most precious and hardest to recreate in case of an accident.

Optimized for performance

In the past, most backup applications used a brute-force method of checking for changes in directories or files: scanning the entire volume. This approach could significantly affect system performance and required an extended period of time to complete. File History, on the other hand, takes advantage of the NTFS change journal. The NTFS change journal records any changes made to any files stored on an NTFS volume. Instead of scanning the volume, which involves opening and reading directories, File History opens the NTFS change journal and quickly scans it for any changes. Based on this information, it creates a list of files that have changed and need to be copied. The process is very quick and efficient.
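The journal-driven scan described above can be sketched as follows. This is a simplified illustration, not the actual implementation: the journal is simulated as a plain list rather than read from the volume, and the record fields are only loosely modeled on NTFS USN journal concepts.

```python
from dataclasses import dataclass

@dataclass
class JournalRecord:
    usn: int          # update sequence number, monotonically increasing
    path: str         # file affected by the change
    reason: str       # e.g. "DATA_EXTEND", "RENAME", "DELETE" (illustrative)

def changed_files_since(journal, last_usn):
    """Return the set of files changed after the last backup cycle,
    without scanning the volume itself."""
    changed = set()
    for rec in journal:
        if rec.usn > last_usn and rec.reason != "DELETE":
            changed.add(rec.path)
    return changed

# Simulated journal contents; in reality these records come from NTFS.
journal = [
    JournalRecord(101, r"C:\Users\jane\Documents\resume.docx", "DATA_EXTEND"),
    JournalRecord(102, r"C:\Users\jane\Pictures\beach.jpg", "DATA_EXTEND"),
    JournalRecord(103, r"C:\Users\jane\Documents\old.txt", "DELETE"),
]

print(changed_files_since(journal, last_usn=101))
```

Because the journal is an append-only log keyed by sequence number, the backup service only needs to remember the last USN it processed; everything after that number is new work.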

File History was designed to be easily interrupted and to quickly resume. This way, File History can resume its operation without needing to start over when a system goes into sleep mode, a user logs off, the system gets too busy and needs more CPU cycles for foreground operations, or the network connection is lost or saturated.

File History was designed to work well on any PC, including small form factor PCs with limited resources and tablets. It uses system resources in a way that minimizes the impact on system performance, battery life and the overall experience.

File History takes into account:

  • If the user is present, i.e. logged on and actively using the system.
  • If the machine is on AC or battery power.
  • When the last backup cycle was completed.
  • How many changes have been made since the last cycle.
  • Activity of foreground processes.

Based on all of these factors, which are re-checked every 10 seconds, it determines the optimal way to back up your data. If any of those conditions change, the service decides whether to reduce or increase its resource quota, or to suspend or terminate the backup cycle.
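As a rough sketch of this kind of decision logic, the following function weighs similar factors. The factor names, thresholds and action labels are invented for illustration; they are not File History’s actual rules.

```python
def next_action(user_active, on_ac_power, minutes_since_last_cycle,
                pending_changes, foreground_cpu_percent):
    """Decide whether to run, throttle, or defer a backup cycle.
    All thresholds below are hypothetical."""
    if minutes_since_last_cycle < 60 or pending_changes == 0:
        return "idle"        # nothing is due yet
    if not on_ac_power:
        return "defer"       # preserve battery life
    if user_active and foreground_cpu_percent > 80:
        return "throttle"    # yield CPU to foreground work
    return "run"

print(next_action(user_active=False, on_ac_power=True,
                  minutes_since_last_cycle=65, pending_changes=12,
                  foreground_cpu_percent=5))
```

A loop re-evaluating a function like this every 10 seconds lets the service react quickly when conditions change, instead of committing to a long-running cycle up front.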

Optimized for mobile users

When File History is running, it gracefully handles state transitions. For example, when you close the lid of your laptop, disconnect an external drive or leave home and take your laptop out of the range of the home wireless network, File History takes the right action:

  • Lid closed – When a PC goes into sleep mode, File History detects the power mode transition and suspends its operation.
  • Lid opened – File History resumes its operation at a priority that makes sure files are protected without impacting overall system performance, even for gamers. It also waits for all post “lid open” activities to complete so that we do not affect the system while it is coming back out of sleep.
  • Dedicated storage device disconnected – File History detects that the storage device is not present and starts caching versions of changed files on a system drive.
  • Dedicated storage device re-connected – in the next cycle, File History detects that the storage device was reconnected, flushes all versions from the local cache to the external drive and resumes normal operation.
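The disconnect/reconnect behavior in the last two bullets can be sketched roughly like this; the class and method names are hypothetical, and versions are represented as plain strings.

```python
class FileHistoryCache:
    """Sketch of the offline cache: hold versions locally while the
    File History drive is absent, flush them when it reappears."""

    def __init__(self):
        self.local_cache = []    # versions saved while the drive is offline
        self.target = []         # versions on the external File History drive
        self.drive_present = False

    def save_version(self, version):
        # Write directly to the target when available, otherwise cache locally.
        (self.target if self.drive_present else self.local_cache).append(version)

    def on_drive_reconnected(self):
        # Flush cached versions to the external drive, then resume normally.
        self.drive_present = True
        self.target.extend(self.local_cache)
        self.local_cache.clear()

cache = FileHistoryCache()
cache.save_version("resume.docx@10:00")   # drive disconnected: goes to cache
cache.on_drive_reconnected()
cache.save_version("resume.docx@11:00")   # drive back: goes straight to target
print(cache.target)
```

The key property is that the user-visible behavior never changes: versions are always being captured, and only their destination differs depending on whether the drive is present.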

Simplicity and peace of mind

We designed File History with two objectives in mind: 1) offer the best possible protection of users’ personal files, and 2) offer ease, simplicity and peace of mind.

If you want to take advantage of File History, you have to make only a few simple decisions. In most cases it will be limited to just one: which external drive to use. The rest is taken care of by Windows. The operation of File History is transparent and doesn’t affect the user experience, reliability or performance of Windows in any way.

Full control

Most backup applications, including the Backup and Restore feature that shipped in Windows 7, require administrator privileges to set up and use. This means that standard users have to ask an administrator to set it up and to help every time they need to restore a file, or be granted administrative privileges themselves. Not so with File History, which offers full control to each individual user. Users can decide if and when to turn File History on and which external drive to use. In fact, each user can select a different location to store their file history. And they do not have to ask for the administrator’s help to restore a file.

Enthusiasts and experienced PC users can use advanced File History features to control many aspects of its operation, like:

  • How often you want to save copies of your files: The frequency of backups can be changed from 10 minutes to 24 hours. Higher frequency offers better protection but consumes more disk space.
  • How long you want to keep saved versions: Versions can be stored forever or for as little as one month. This setting is useful when the File History drive fills up too fast. You can slow down this rate by reducing the time versions are stored.
  • Changing the size of the local cache: File History uses a small amount of space on the local drive to store versions of files while the File History target drive is not available. If you create a lot of versions of files while disconnected or stay disconnected for longer periods of time, you may need to reserve more space on the local drive to keep all versions. Note that the versions stored in the local cache are flushed to the external drive when it becomes available again.
  • Excluding folders that you do not want to back up: Some folders may contain very large files that do not have to be protected because they can be easily recreated (like downloaded high definition movies or podcasts). These files would quickly consume all of the File History drive capacity. This setting allows you to exclude such folders.
  • Recommend a drive to other HomeGroup members on your home network: This setting is covered in more detail in the File History and HomeGroup section below.
  • Accessing the File History event log: The event log contains records of events that may be useful while troubleshooting File History. It may be particularly useful if you want to identify files that File History could not access for any reason.

Advanced settings can be accessed from the File History control panel applet.

Screenshot of portion of control panel applet showing the Exclude folders and Advanced settings links

To exclude a folder, select Exclude folders. Next, click on the Add button, browse to the folder you want to exclude and select it. Files in this folder will not be backed up starting with the next backup cycle. To start backing it up again, simply remove the folder from the list.

Screenshot of Exclude folders page
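Conceptually, applying an exclusion list when building the backup set might look like the following sketch. The paths and the simple prefix-matching rule are illustrative assumptions, not the actual matching logic.

```python
def apply_exclusions(changed_files, excluded_folders):
    """Drop any file that lives under an excluded folder (case-insensitive)."""
    def is_excluded(path):
        return any(path.lower().startswith(folder.lower().rstrip("\\") + "\\")
                   for folder in excluded_folders)
    return [f for f in changed_files if not is_excluded(f)]

files = [
    r"C:\Users\jane\Documents\resume.docx",
    r"C:\Users\jane\Videos\Podcasts\episode42.mp4",
]
print(apply_exclusions(files, [r"C:\Users\jane\Videos\Podcasts"]))
```

Filtering at this stage, before any copying starts, is what keeps large but easily recreated files from ever consuming space on the File History drive.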

Other advanced settings are available on the Advanced Settings page.

Screenshot of Advanced Settings page, including how often to save copies, size of offline cache, and how long to keep saved versions

File History also supports new storage features introduced in Windows 8. Users who have lots of data to back up can use Storage Spaces to create a resilient storage pool using off-the-shelf USB drives. When the pool fills up, they can easily add more drives and extra storage capacity to the pool. You can find more about Storage Spaces in this blog post.

Users who use BitLocker to protect the content of their personal files can also use File History as it seamlessly supports BitLocker on both source and destination drives.

File History was designed for consumers but could also be used by enterprise customers. In some cases, File History may conflict with the enterprise policies (like retention policy). To prevent such conflicts, we added a group policy that gives enterprise IT administrators the ability to turn off File History on managed client PCs.

You will find the File History policy setting in the Group Policy Object Editor under Computer Configuration, Administrative Templates, Windows Components, File History.

Screenshot of File History policy setting page, for enterprise IT administrators to turn off File History

Minimal setup

File History is part of Windows so you don’t need to install any additional software. However, File History has to be turned on, which typically requires only one click.

As described above, to start protecting your libraries, you need to attach an external drive or select a network location. File History will store versions of your files on this device.

File History automatically selects an external drive if one is available. If more than one drive is available, the one with the most free storage capacity is selected.
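That selection rule can be sketched in a few lines; the drive data here is simulated rather than queried from the system.

```python
def pick_file_history_drive(drives):
    """drives: list of (name, free_bytes) tuples.
    Returns the name of the drive with the most free space,
    or None if no external drive is available."""
    if not drives:
        return None
    return max(drives, key=lambda d: d[1])[0]

drives = [("E:", 250_000_000_000), ("F:", 750_000_000_000)]
print(pick_file_history_drive(drives))
```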

No schedule

File History wakes up once an hour and looks for personal files that have changed. Versions of all files that have changed are replicated to a dedicated storage device. This approach eliminates the need to set up a schedule and leave a computer idle for an extended period of time. The default one-hour frequency offers a good balance between the level of protection and the amount of storage space consumed by file versions. Enthusiasts can change the frequency from 10 minutes to 1 day in order to increase the level of protection or reduce storage consumption.

No maintenance

File History runs silently in the background and doesn’t require any ongoing maintenance. The only time when it will ask you to intervene is when the external drive is full. At this point you will be asked to either replace the drive with a bigger one or change a setting that tells File History how long to keep file versions around. By default, we keep versions of user personal files forever, but if storage is an issue, it can be reduced to a period of time that best suits your needs.
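Retention cleanup of this kind can be sketched as follows. The version representation and the rule of simply dropping everything older than the configured window are simplifications for illustration.

```python
from datetime import datetime, timedelta

def prune_versions(versions, now, keep_for_days):
    """Keep only versions newer than the retention window.
    versions: list of (filename, saved_at) tuples."""
    cutoff = now - timedelta(days=keep_for_days)
    return [(name, ts) for name, ts in versions if ts >= cutoff]

now = datetime(2012, 7, 31)
versions = [
    ("resume.docx", datetime(2012, 1, 15)),
    ("resume.docx", datetime(2012, 7, 20)),
]
print(prune_versions(versions, now, keep_for_days=30))
```

Shortening the retention period trades history depth for space: the same drive holds fewer versions per file, so it takes longer to fill up.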

File History and HomeGroup

File History was also integrated with HomeGroup to make it easier for someone to set up backup for all members of a home network. Here is how it works.

  1. Jane wants her entire family to have their personal data automatically protected. She knows she can do this with File History.
  2. Jane creates a HomeGroup on the family’s home network.
  3. Jane turns on File History on a computer that has a large external drive.
  4. File History control panel detects the HomeGroup and asks if Jane wants to recommend this backup destination to other HomeGroup members.
  5. Jane selects this option and File History uses HomeGroup to broadcast the recommendation to all HomeGroup members.
  6. Each HomeGroup member can now accept the recommendation. If they do, their libraries, desktop, favorites and contacts are automatically backed up to a network share on Jane’s computer.

File History and SkyDrive

File History doesn’t back up your files to the cloud. While the cloud is great for storing files you’d like to access on-the-go, or for sharing files with others, backing up terabytes of data to the cloud requires a specialized service. Many cloud services today support local synchronization, where the data in the cloud is mirrored in your local file system. Sync solutions by their very nature copy changes immediately to all locations, which means accidental deletes or inadvertent changes or corruption to files will be synchronized as well. The best way to address this problem is to couple your sync service with a point-in-time backup solution like File History.

In the blog post, Connecting your apps, files, PCs and devices to the cloud with SkyDrive and Windows 8 we discussed how SkyDrive will integrate with Windows Explorer and the file system. File History takes advantage of that integration. If your SkyDrive is synced to your file system, File History will automatically start protecting the files stored in your local SkyDrive folder. This is a great example of local backup plus reliable anytime, anywhere access. You can access your files in SkyDrive through your PC, your phone, or the web and you’ll also know that File History is providing fast local backup and instantaneous access to all versions of those files.

Full system backup

Usually a full system backup is used to protect your PC against complete system loss, for example when a PC is stolen or lost or the internal hard drive stops working. Our research showed that only a small number of users are concerned about losing the operating system, applications or settings. They are far more concerned about losing their personal files. For these reasons, File History was designed specifically to protect users’ personal files.

File History doesn’t offer the ability to do a full system backup, but for users who may need one it offers a good compromise: together with other features introduced in Windows 8, it provides protection against such disasters.

If you want to prepare for a disaster, we recommend the following strategy:

  1. Create a recovery drive to be used when you need to refresh or restore your PC. You can find more about it in this blog post.
  2. Connect to your Microsoft account
  3. Configure your PC to sync your settings
  4. Load apps from the Store
  5. Turn on File History

When your PC is replaced or needs to be reinstalled:

  1. Use the recovery drive to restore the operating system
  2. Connect to your Microsoft account
  3. Configure your PC to sync your settings – this will bring your settings back
  4. Go to the Store and reinstall your modern apps
  5. Reinstall legacy apps
  6. Connect your old File History drive and restore everything – this will restore your personal files

It may require more steps than a file or image restore but has some clear benefits:

  • You do not restore any no-longer-wanted software or settings that were on your system
  • You do not restore sources of some problems that you might have (or create new problems if you restore to different hardware)
  • You do not restore settings that may cause your system to perform badly or fail

Those who need a full system backup can still use Windows Backup to create a system image.


File History requires:

  • Windows 8 Client operating system
  • An external storage device with enough storage capacity to store a copy of all user libraries, such as a USB drive, Network Attached Storage device, or share on another PC in the home network.


What happens when you upgrade to Windows 8 from Windows 7?
If Windows 7 Backup was active, i.e., a backup schedule was configured and enabled, it will continue running as scheduled after the upgrade. File History will be disabled by default, and users will not be able to turn it on as long as the Windows 7 Backup schedule is active. To turn on File History, you will first have to disable the Windows 7 Backup schedule.

Can Windows 7 users use File History?
Windows 7 users cannot use File History. However, they can restore files from a drive used by File History by browsing the volume in Windows Explorer and selecting a specific file. Files on the File History drive are stored in the same relative location and use the same name. A specific version can be identified by the time stamp appended to the file name.
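Identifying a version from such a file name could be sketched like this. The exact "name (YYYY_MM_DD HH_MM_SS UTC).ext" pattern used here is an assumption about the on-disk naming convention, shown purely for illustration.

```python
import re
from datetime import datetime

# Assumed naming pattern: "resume (2012_07_31 14_30_00 UTC).docx"
PATTERN = re.compile(
    r"^(?P<name>.+) \((?P<stamp>\d{4}_\d{2}_\d{2} \d{2}_\d{2}_\d{2}) UTC\)"
    r"(?P<ext>\.[^.]+)$"
)

def parse_version(filename):
    """Return (original_name, timestamp) for a versioned file name,
    or None if the name doesn't match the assumed pattern."""
    m = PATTERN.match(filename)
    if not m:
        return None
    stamp = datetime.strptime(m.group("stamp"), "%Y_%m_%d %H_%M_%S")
    return (m.group("name") + m.group("ext"), stamp)

print(parse_version("resume (2012_07_31 14_30_00 UTC).docx"))
```

Because the original name and time stamp are both recoverable from the file name alone, a Windows 7 machine can browse the drive and pick a version without any File History software installed.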

Does File History protect the operating system and applications?
File History only protects user libraries, desktop, favorites and contacts. Other files, such as operating system files, applications, and settings, are not backed up.

Can File History be used with cloud storage?
No. File History is designed specifically for consumers and does not support cloud storage in this release. Windows 8 Server offers a backup feature that can back up files to a cloud. This feature is available on the Server version of Windows and is designed for small and medium businesses.

Can File History be used by enterprise customers?
Yes. However, enterprise customers should be aware that File History may not comply with their company security, access, and retention policies. For that reason, we offer a group policy setting that allows enterprise administrators to disable the feature for an entire organization.

Will File History protect files stored on a file share?
No. File History only protects files stored on a local drive.

  • If you use offline folders and folder redirection, your folders (like My Documents or My Pictures) are redirected to a network share and will not be protected.
  • If you add a network location to any of your libraries, this location will not be protected.

In closing

File History silently protects all of your important files stored in Libraries, Desktop, Favorites and Contacts. Once turned on, it requires no effort at all to protect your data. When you lose a file or just need to find an original version of a picture or a specific version of a resume, all versions of your files are available. With the File History restore application you can find it quickly and effortlessly.

Excerpt from MSDN

Worldwide IT spending to edge past $3.6 trillion this year

Tuesday, July 10th, 2012

Corporate spending on technology goods and services will rise just 3 percent this year, says Gartner.

Organizations around the world will collectively spend more than $3.6 trillion on IT products and services, Gartner said today.

That sounds like a healthy chunk of change, but it marks only a 3 percent increase from last year when spending totaled around $3.5 trillion. Still, Gartner’s latest forecast is a bit more optimistic than the 2.5 percent rise it projected last quarter.

“While the challenges facing global economic growth persist — the eurozone crisis, weaker U.S. recovery, a slowdown in China — the outlook has at least stabilized,” Richard Gordon, a research vice president at Gartner, said in a statement. “There has been little change in either business confidence or consumer sentiment in the past quarter, so the short-term outlook is for continued caution in IT spending.”

In the face of a sluggish year, the cloud may be one of the few bright spots, according to Gartner. Companies are expected to shell out around $109 billion on cloud services this year, up from $91 billion last year. Spending on the cloud could reach as high as $207 billion by 2016.

As the largest market for IT spending, telecommunications is also due for a promising year.

Enterprises will spend around $377 billion on telecom equipment and $1.7 trillion on telecom services, Gartner noted. The growth is expected to come from new network connections set up in emerging markets and more connected mobile and electronic devices popping up in mature markets.

Source:  CNET

Spam-happy iOS trojan slips into App Store, gets pulled in rapid fashion

Friday, July 6th, 2012

You could call it technological baptism of sorts… just not the kind Apple would want.  A Russian scam app known as Find and Call managed to hit the App Store and create havoc for those who dared a download, making it the first non-experimental malware to hit iOS without first needing a jailbreak.

As Kaspersky found out, it wasn’t just scamware, but a trojan: the title would swipe the contacts after asking permission, send them to a remote server behind the scenes and text spam the daylights out of any phone number in that list.

Thankfully, Apple has already yanked the app quickly and explained to The Loop that the app was pulled for violating App Store policies.  We’d still like to know just why the app got there in the first place, but we’d also caution against delighting in any schadenfreude if you’re of the Android persuasion. The app snuck through to Google Play as well, and Kaspersky is keen to remind us that Android trojans are “nothing new;” the real solution to malware is to watch out for fishy-looking apps, no matter what platform you’re using.


Android smartphones ‘used for botnet,’ researchers say

Thursday, July 5th, 2012

Smartphones running Google’s Android software have been hijacked by an illegal botnet, according to a Microsoft researcher.

Botnets are large illegal networks of infected machines – usually desktop or laptop computers – typically used to send out masses of spam email.

Researcher Terry Zink said there was evidence of spam being sent from Yahoo mail servers by Android devices.

Microsoft’s own platform, Windows Phone, is a key competitor to Android.

The Google platform has suffered from several high-profile issues with malware affected apps in recent months.

The official store – Google Play – has had issues with fake apps, often pirated free versions of popular paid products like Angry Birds Space or Fruit Ninja.

This latest discovery has been seen as a change of direction for attackers.

“We’ve all heard the rumours,” Mr Zink wrote in a blog post.

“But this is the first time I have seen it – a spammer has control of a botnet that lives on Android devices.

“These devices login to the user’s Yahoo Mail account and send spam.”

Bad guys

He said analysis of the IP addresses used to send the email revealed the spam had originated from Android devices being used in Chile, Indonesia, Lebanon, Oman, Philippines, Russia, Saudi Arabia, Thailand, Ukraine, and Venezuela.

As is typical, the spam email looks to tempt people into buying products like prescription drugs.

Security expert Graham Cluley, from anti-virus firm Sophos, said it was highly likely the attacks originated from Android devices, given all available information, but this could not be proven.

This was the first time smartphones had been exploited in this way, he said.

“We’ve seen it done experimentally to prove that it’s possible by researchers, but not done by the bad guys,” he told the BBC.

“We are seeing a lot of activity from cybercriminals on the Android platform.

“The best thing you can do right now is upgrade your operating system, if that’s possible.

“And before you install apps onto your device, look at the reviews, because there are many bogus apps out there.”

Google told the BBC it did not respond to queries about specific apps but was working to improve security on the Android platform.

“We are committed to providing a secure experience for consumers in Google Play, and in fact our data shows between the first and second halves of 2011, we saw a 40% decrease in the number of potentially malicious downloads from Google Play,” a spokesman said.

“Last year we also introduced a new service into Google Play that provides automated scanning for potentially malicious software without disrupting the user experience or requiring developers to go through an application approval process.”

Source:  BBC

Web users beware: DNSChanger victims lose Web access July 9

Thursday, July 5th, 2012

On that day, the FBI will be shutting down the temporary DNS servers it used to assist DNSChanger victims

If you’re one of thousands of people infected with the DNSChanger malware, get rid of it before Monday.

On July 9, the FBI will be switching off servers it used to keep those infected with the malware on the Internet. The agency says maintaining the servers is costly, and therefore it won’t extend its support.

DNSChanger was first discovered in 2007 and was found to have infected millions of computers worldwide. The payload effectively modified a computer’s DNS settings to redirect traffic through its rogue servers. When users typed in a domain name in a browser, the servers would direct them to other sites for the creators’ financial gain.

Late last year, the FBI disrupted the crime ring and converted the rogue servers to clean servers to give infected users time to fix their systems. A host of tools and techniques have surfaced for removing the malware, but thousands of machines are still affected. If DNSChanger is not removed from those computers, users won’t be able to connect to the Internet.

So, before that happens, Web users are encouraged to head over to a special DNSChanger Web site to see how to fix the problem. Several security firms, including McAfee and Trend Micro, also have free tools available to remove DNSChanger.

Source:  CNET