Archive for May, 2012

Flame: Massive cyber-attack discovered, researchers say

Tuesday, May 29th, 2012

A complex targeted cyber-attack that collected private data from countries such as Israel and Iran has been uncovered, researchers have said.

Russian security firm Kaspersky Labs told the BBC they believed the malware, known as Flame, had been operating since August 2010.

The company said it believed the attack was state-sponsored, but could not be sure of its exact origins.

They described Flame as “one of the most complex threats ever discovered”.

Research into the attack was carried out in conjunction with the UN’s International Telecommunication Union.

They had been investigating another malware threat, known as Wiper, which was reportedly deleting data on machines in western Asia.

In the past, targeted malware – such as Stuxnet – has targeted nuclear infrastructure in Iran.

Others like Duqu have sought to infiltrate networks in order to steal data.

This new threat appears not to cause physical damage, but to collect huge amounts of sensitive information, said Kaspersky’s chief malware expert Vitaly Kamluk.

“Once a system is infected, Flame begins a complex set of operations, including sniffing the network traffic, taking screenshots, recording audio conversations, intercepting the keyboard, and so on,” he said.

More than 600 specific targets were hit, Mr Kamluk said, ranging from individuals and businesses to academic institutions and government systems.

Iran’s National Computer Emergency Response Team posted a security alert stating that it believed Flame was responsible for “recent incidents of mass data loss” in the country.

The malware code itself is 20MB in size – making it some 20 times larger than the Stuxnet virus. The researchers said it could take several years to analyse.

Iran and Israel

Mr Kamluk said the size and sophistication of Flame suggested it was not the work of independent cybercriminals, and more likely to be government-backed.

He explained: “Currently there are three known classes of players who develop malware and spyware: hacktivists, cybercriminals and nation states.

“Flame is not designed to steal money from bank accounts. It is also different from rather simple hack tools and malware used by the hacktivists. So by excluding cybercriminals and hacktivists, we come to the conclusion that it most likely belongs to the third group.”

Among the countries affected by the attack are Iran, Israel, Sudan, Syria, Lebanon, Saudi Arabia and Egypt.

“The geography of the targets and also the complexity of the threat leaves no doubt about it being a nation-state that sponsored the research that went into it,” Mr Kamluk said.

The malware is capable of recording audio via a microphone, before compressing it and sending it back to the attacker.

It is also able to take screenshots of on-screen activity, automatically detecting when “interesting” programs – such as email or instant messaging – are open.

‘Industrial vacuum cleaner’

Kaspersky’s first recorded instance of Flame dates from August 2010, although the firm said the malware is highly likely to have been operating earlier.

Prof Alan Woodward, from the Department of Computing at the University of Surrey, said the attack was very significant.

“This is basically an industrial vacuum cleaner for sensitive information,” he told the BBC.

He explained that unlike Stuxnet, which was designed with one specific task in mind, Flame was much more sophisticated.

“Whereas Stuxnet just had one purpose in life, Flame is a toolkit, so they can go after just about everything they can get their hands on.”

Once the initial Flame malware has infected a machine, additional modules can be added to perform specific tasks – almost in the same manner as adding apps to a smartphone.

Source:  BBC

Official version of Office for iPad, Android now rumored for November

Thursday, May 24th, 2012

The mobile version will reportedly look similar to a version leaked in February.

A new rumor suggests iPad and Android tablet users will be able to use a native, tablet-optimized version of Microsoft Office this fall. According to a source speaking to BGR, Microsoft will have a version of Office for both platforms ready in November.

A purported iPad version of Office was leaked in February, though Microsoft denied that what was published was “an actual Microsoft product.” Despite this, the company wouldn’t say whether it was in fact working on a version of Office for Apple’s popular tablet.

BGR’s source claimed to have seen Office running on an iPad, and confirmed that it looked “almost identical” to the previously leaked version. Additionally, Microsoft will reportedly release the software for Android-based tablets in the same November timeframe.

Microsoft would neither confirm nor deny the information in BGR’s report. “We have nothing to share at this time as we do not comment on rumors or speculation,” a Microsoft spokesperson told Ars.

With the increasing uptake of tablets at home, work, and school, there has been a growing demand to use Microsoft’s popular word processing, spreadsheet, and presentation applications on mobile devices. There are a number of apps that offer varying compatibility with existing Office documents, and a few solutions have popped up which allow running Office on virtualized Windows environments running on remote servers. Such solutions do work, but aren’t optimized for tablet interfaces.

Source:  arstechnica.com

Berkeley Lab scientists generate electricity from viruses

Wednesday, May 23rd, 2012

New approach is a promising first step toward the development of tiny devices that harvest electrical energy from everyday tasks


Imagine charging your phone as you walk, thanks to a paper-thin generator embedded in the sole of your shoe. This futuristic scenario is now a little closer to reality. Scientists from the U.S. Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) have developed a way to generate power using harmless viruses that convert mechanical energy into electricity.

The scientists tested their approach by creating a generator that produces enough current to operate a small liquid-crystal display. It works by tapping a finger on a postage stamp-sized electrode coated with specially engineered viruses. The viruses convert the force of the tap into an electric charge.

Their generator is the first to produce electricity by harnessing the piezoelectric properties of a biological material. Piezoelectricity is the accumulation of a charge in a solid in response to mechanical stress.

The milestone could lead to tiny devices that harvest electrical energy from the vibrations of everyday tasks such as shutting a door or climbing stairs.

It also points to a simpler way to make microelectronic devices. That’s because the viruses arrange themselves into an orderly film that enables the generator to work. Self-assembly is a much sought-after goal in the finicky world of nanotechnology.

The scientists describe their work in a May 13 advance online publication of the journal Nature Nanotechnology.

“More research is needed, but our work is a promising first step toward the development of personal power generators, actuators for use in nano-devices, and other devices based on viral electronics,” says Seung-Wuk Lee, a faculty scientist in Berkeley Lab’s Physical Biosciences Division and a UC Berkeley associate professor of bioengineering.

He conducted the research with a team that includes Ramamoorthy Ramesh, a scientist in Berkeley Lab’s Materials Sciences Division and a professor of materials sciences, engineering, and physics at UC Berkeley; and Byung Yang Lee of Berkeley Lab’s Physical Biosciences Division.

The piezoelectric effect was discovered in 1880 and has since been found in crystals, ceramics, bone, proteins, and DNA. It’s also been put to use. Electric cigarette lighters and scanning probe microscopes couldn’t work without it, to name a few applications.

But the materials used to make piezoelectric devices are toxic and very difficult to work with, which limits the widespread use of the technology.

Lee and colleagues wondered if a virus studied in labs worldwide offered a better way. The M13 bacteriophage only attacks bacteria and is benign to people. Being a virus, it replicates itself by the millions within hours, so there’s always a steady supply. It’s easy to genetically engineer. And large numbers of the rod-shaped viruses naturally orient themselves into well-ordered films, much the way that chopsticks align themselves in a box.

These are the traits that scientists look for in a nano building block. But the Berkeley Lab researchers first had to determine if the M13 virus is piezoelectric. Lee turned to Ramesh, an expert in studying the electrical properties of thin films at the nanoscale. They applied an electrical field to a film of M13 viruses and watched what happened using a special microscope. Helical proteins that coat the viruses twisted and turned in response—a sure sign of the piezoelectric effect at work.

Next, the scientists increased the virus’s piezoelectric strength. They used genetic engineering to add four negatively charged amino acid residues to one end of the helical proteins that coat the virus. These residues increase the charge difference between the proteins’ positive and negative ends, which boosts the voltage of the virus.

The scientists further enhanced the system by stacking films composed of single layers of the virus on top of each other. They found that a stack about 20 layers thick exhibited the strongest piezoelectric effect.

The only thing remaining to do was a demonstration test, so the scientists fabricated a virus-based piezoelectric energy generator. They created the conditions for genetically engineered viruses to spontaneously organize into a multilayered film that measures about one square centimeter. This film was then sandwiched between two gold-plated electrodes, which were connected by wires to a liquid-crystal display.

When pressure is applied to the generator, it produces up to six nanoamperes of current and 400 millivolts of potential. That’s enough current to flash the number “1” on the display, and about a quarter the voltage of a AAA battery.
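As a quick sanity check on the figures quoted above, a few lines of back-of-envelope arithmetic show the scale of the output (this sketch assumes the peak current and peak voltage coincide, which the article does not state):

```python
# Back-of-envelope check of the virus generator's quoted peak output:
# 6 nanoamperes of current at 400 millivolts of potential.
current = 6e-9   # amperes (6 nA)
voltage = 0.4    # volts (400 mV)

power = current * voltage  # instantaneous power, assuming peaks coincide
print(f"Peak power: {power * 1e9:.1f} nW")                    # 2.4 nW
print(f"Fraction of a 1.5 V AAA cell: {voltage / 1.5:.2f}")   # 0.27
```

A couple of nanowatts is tiny, which is why the demonstration drives a single liquid-crystal digit rather than anything larger.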

“We’re now working on ways to improve on this proof-of-principle demonstration,” says Lee. “Because the tools of biotechnology enable large-scale production of genetically modified viruses, piezoelectric materials based on viruses could offer a simple route to novel microelectronics in the future.”

Source:  lbl.gov

FBI quietly forms secretive Net-surveillance unit

Wednesday, May 23rd, 2012


The FBI has recently formed a secretive surveillance unit with an ambitious goal: to invent technology that will let police more readily eavesdrop on Internet and wireless communications.

The establishment of the Quantico, Va.-based unit, which is also staffed by agents from the U.S. Marshals Service and the Drug Enforcement Administration, is a response to technological developments that FBI officials believe outpace law enforcement’s ability to listen in on private communications.

While the FBI has been tight-lipped about the creation of its Domestic Communications Assistance Center, or DCAC — it declined to respond to requests made two days ago about who’s running it, for instance — CNET has pieced together information about its operations through interviews and a review of internal government documents.

DCAC’s mandate is broad, covering everything from trying to intercept and decode Skype conversations to building custom wiretap hardware or analyzing the gigabytes of data that a wireless provider or social network might turn over in response to a court order. It’s also designed to serve as a kind of surveillance help desk for state, local, and other federal police.

The center represents the technological component of the bureau’s “Going Dark” Internet wiretapping push, which was allocated $54 million by a Senate committee last month. The legal component is no less important: as CNET reported on May 4, the FBI wants Internet companies not to oppose a proposed law that would require social-networks and providers of VoIP, instant messaging, and Web e-mail to build in backdoors for government surveillance.

During an appearance last year on Capitol Hill, then-FBI general counsel Valerie Caproni referred in passing, without elaboration, to “individually tailored” surveillance solutions and “very sophisticated criminals.” Caproni said that new laws targeting social networks and voice over Internet Protocol conversations were required because “individually tailored solutions have to be the exception and not the rule.”

Caproni was referring to the DCAC’s charge of creating customized surveillance technologies aimed at a specific individual or company, according to a person familiar with the FBI’s efforts in this area.

An FBI job announcement for the DCAC that had an application deadline of May 2 provides additional details. It asks applicants to list their experience with “electronic surveillance standards” including PacketCable (used in cable modems); QChat (used in push-to-talk mobile phones); and T1.678 (VoIP communications). One required skill for the position, which pays up to $136,771 a year, is evaluating “electronic surveillance solutions” for “emerging” technologies.

“We would expect that capabilities like CIPAV would be an example” of what the DCAC will create, says Steve Bock, president of Colorado-based Subsentio, referring to the FBI’s remotely-installed spyware that it has used to identify extortionists, database-deleting hackers, child molesters, and hitmen.

Bock, whose company helps companies comply with the 1994 Communications Assistance for Law Enforcement Act (CALEA) and has consulted for the Justice Department, says he anticipates “that Internet and wireless will be two key focus areas” for the DCAC. VoIP will be a third, he says.

For its part, the FBI responded to queries this week with a statement about the center, which it also refers to as the National Domestic Communications Assistance Center (even Caproni has used both names interchangeably), saying:

The NDCAC will have the functionality to leverage the research and development efforts of federal, state, and local law enforcement with respect to electronic surveillance capabilities and facilitate the sharing of technology among law enforcement agencies. Technical personnel from other federal, state, and local law enforcement agencies will be able to obtain advice and guidance if they have difficulty in attempting to implement lawful electronic surveillance court orders. It is important to point out that the NDCAC will not be responsible for the actual execution of any electronic surveillance court orders and will not have any direct operational or investigative role in investigations. It will provide the technical knowledge and referrals in response to law enforcement’s requests for technical assistance.

Here’s the full text of the FBI’s statement in a Google+ post.

One person familiar with the FBI’s procedures told CNET that the DCAC is in the process of being launched but is not yet operational. A public Justice Department document, however, refers to the DCAC as “recently established.”

“They’re doing the best they can to avoid being transparent”

The FBI has disclosed little information about the DCAC, and what has been previously made public about the center was primarily through budget requests sent to congressional committees. The DCAC doesn’t even have a Web page.

“The big question for me is why there isn’t more transparency about what’s going on?” asks Jennifer Lynch, a staff attorney at the Electronic Frontier Foundation, a civil liberties group in San Francisco. “We should know more about the program and what the FBI is doing. Which carriers they’re working with — which carriers they’re having problems with. They’re doing the best they can to avoid being transparent.”

The DCAC concept dates back at least four years. FBI director Robert Mueller was briefed on it in early 2008, internal FBI documents show. In January 2008, Charles Smith, a supervisory special agent and section chief in the FBI’s Operational Technology Division, sent e-mail to other division officials asking for proposals for the DCAC’s budget.

When it comes to developing new surveillance technologies, Quantico is the U.S. government’s equivalent of a Silicon Valley incubator. In addition to housing the FBI’s Operational Technology Division, which boasts of developing the “latest and greatest investigative technologies to catch terrorists and criminals” and took the lead in creating the DCAC, it’s also home to the FBI’s Engineering Research Facility, the DEA’s Office of Investigative Technology, and the U.S. Marshals’ Technical Operations Group. In 2008, Wired.com reported that the FBI has “direct, high-speed access to a major wireless carrier’s systems” through a high-speed DS-3 link to Quantico.

The Senate appropriations committee said in a report last month that, for electronic surveillance capabilities, it authorizes “$54,178,000, which is equal to both the request and the fiscal year 2012 enacted level. These funds will support the Domestic Communications Assistance Center, providing for increased coordination regarding lawful electronic surveillance amongst the law enforcement community and with the communications industry.” (It’s unclear whether all of those funds will go to the DCAC.)

In trying to convince Congress to spend taxpayers’ dollars on the DCAC, the FBI has received help from local law enforcement agencies that like the idea of electronic surveillance aid. A Justice Department funding request for the 2013 fiscal year predicts DCAC will “facilitate the sharing of solutions and know-how among federal, state, and local law enforcement agencies” and will be welcomed by telecommunications companies who “prefer to standardize and centralize electronic surveillance.”

A 2010 resolution from the International Association of Chiefs of Police — a reliable FBI ally on these topics — requests that “Congress and the White House support the National Domestic Communications Assistance Center Business Plan.”

The FBI has also had help from the Drug Enforcement Administration, which last year requested $1.5 million to fund eight additional DCAC positions. DEA administrator Michele Leonhart has said (PDF) the funds will go to “develop these new electronic surveillance capabilities.” The DEA did not respond to CNET’s request for comment.

An intriguing hint of where the DCAC might collaborate with the National Security Agency appeared in author James Bamford’s article in the April issue of Wired magazine. Bamford said, citing an unidentified senior NSA official, that the agency has “made an enormous breakthrough several years ago in its ability to cryptanalyze, or break, unfathomably complex encryption systems” — an obstacle that law enforcement has encountered in investigations.

Eventually, the FBI may be forced to lift the cloak of secrecy that has surrounded the DCAC’s creation. On May 2, a House of Representatives committee directed the bureau to disclose “participation by other agencies and the accomplishments of the center to date” three months after the legislation is enacted.

Source:  CNET

Microsoft Research sets new record by sorting 1,401GB of data in 60 seconds

Wednesday, May 23rd, 2012

While everyone fawns over the hot new phone or tablet coming out every other day, Microsoft Research is always plugging away doing some real computer science. The big news around Redmond this week is that a team of Microsoft researchers has broken the record for data-sorting speed by a huge margin. Granted, Yahoo! held the record previously, but a win is a win.

The nine-person team at Microsoft Research sorted 1,401GB of data in just 60 seconds. This was done with the MinuteSort benchmark, which, as its name suggests, measures how much data a system can sort in one minute. Microsoft used a new distributed computing system dubbed Flat Datacenter Storage to accelerate data handling.

Because Microsoft tied together multiple scalable systems for its record run, with each of its 250 machines processing data at 2GB per second, the result was nearly a three-fold improvement over Yahoo’s old record of around 500GB sorted in a minute. The system also has a further 2GB of bandwidth free for output.
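The headline figures above (1,401GB in 60 seconds, against Yahoo!'s previous record of roughly 500GB) work out as follows; this is simple arithmetic on the numbers the article quotes, nothing more:

```python
# Back-of-envelope arithmetic on the MinuteSort figures quoted above.
data_sorted_gb = 1401   # GB sorted in the benchmark minute
duration_s = 60
old_record_gb = 500     # Yahoo!'s previous record, approximately

aggregate_rate = data_sorted_gb / duration_s  # cluster-wide sort rate
print(f"Aggregate sort rate: {aggregate_rate:.2f} GB/s")          # 23.35 GB/s
print(f"Improvement over old record: {data_sorted_gb / old_record_gb:.2f}x")  # 2.80x
```

The 2.8x ratio is why the improvement is best described as nearly, rather than more than, three-fold.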

Flat Datacenter Storage isn’t just for making Yahoo! feel bad, though. Microsoft believes that Bing could benefit from the technology to improve its performance. In a more far-off future, Microsoft believes that Flat Datacenter Storage could accelerate machine learning to make software more personal to you. Looking for patterns in data, image recognition, and almost any other task that deals with large data sets could be improved with Flat Datacenter Storage.

These “big data” problems have become easier to tackle over time with technologies like Google’s MapReduce or Hadoop, but this new Microsoft breakthrough is even more advanced.

Source:  geek.com

Malware threat level hits four-year high

Wednesday, May 23rd, 2012

Surfing the Internet is becoming more dangerous than ever, according to a report released Wednesday by cyber security software maker McAfee.

In the first three months of the year, malware circulating in cyberspace reached a four-year high and is on pace to reach 100 million samples by year’s end, McAfee says in its quarterly threats report.

“In the first quarter of 2012, we have already detected 8 million new malware samples, showing that malware authors are continuing their unrelenting development of new malware,” Vincent Weafer, senior vice president of McAfee Labs, said in a statement.

“The same skills and techniques that were sharpened on the PC platform are increasingly being extended to other platforms, such as mobile and Mac,” he added.

Contributing to the proliferation of malware is the arrival of new kits for creating malicious software, Adam Wosotowsky, a messaging data architect at McAfee Labs, said in an interview with PCWorld.

The most common kits for creating malware have been based on the Zeus and SpyEye packages, but crackdowns on botnets built on those models have prodded cyber criminals to seek alternatives.

“As the authors of Zeus and SpyEye have started to be located by authorities and are starting to have issues putting out their stuff, we’re starting to see more new botnet-building SDKs [Software Development Kits] being released into the black market,” he said.

That has increased the number of campaigns, the number of strains and the number of mutations, which increases the number of samples McAfee collects, he added.

Mobile malware continues to grow, McAfee reported, with more than 7,000 Android threats being collected and identified during the quarter. That’s more than a 1,200 percent increase over the previous quarter.

Most Android infections are spread by software distributed by third-party retailers, Wosotowsky observed. “The official Google apps store [Google Play] doesn’t have very many malicious applications on it,” he said.

Most mobile malware is designed to surreptitiously send text messages to premium SMS services. Cyber bandits get a cut of the charges for each message and disappear before a victim can protest to his or her wireless provider.

McAfee also reported that spam levels continue to decline. Global spam during the quarter dropped to slightly more than one trillion messages. “A lot of that is because spam is a lot more accurate nowadays than it used to be,” Wosotowsky explained.

When spam volumes were at their all-time highs, he continued, messages were being sent to lots of random addresses. “Now you can purchase these lists that contain legitimate e-mail addresses,” he said. “For that reason, spam has become more accurate, so you don’t have as much chaff in the spam world.”

Source:  pcworld.com

Designing for PCs that boot faster than ever before

Wednesday, May 23rd, 2012

Windows 8 has a problem – it really can boot up too quickly.

So quickly, in fact, that there is no longer time for anything to interrupt boot. When you turn on a Windows 8 PC, there’s no longer enough time to detect keystrokes like F2 or F8, much less time to read a message such as “Press F2 for Setup.” For the first time in decades, you will no longer be able to interrupt boot and tell your PC to do anything different than what it was already expecting to do.

Fast booting is something we definitely want to preserve. Certainly no one would imagine intentionally slowing down boot to allow these functions to work as they did in the past. In this blog I’ll walk through how we’re addressing this “problem” with new solutions that will keep your PC booting as quickly as possible, while still letting you do all the things you expect.

Too fast to interrupt

It’s worth taking a moment to watch (again, if you’ve already seen it) the fast boot video posted by Gabe Aul in his previous post about delivering fast boot times in Windows 8. In this video you can see a laptop with a solid state drive (SSD) fully booting in less than 7 seconds. Booting this fast doesn’t require special hardware, but it is a feature of new PCs. You’ll still see much improved boot times in existing hardware, but in many PCs, the BIOS itself (the BIOS logo and set of messages you see as you boot up) does take significant time. An SSD contributes to the fast boot time as well, as you can imagine.

If the entire length of boot passes in just seven seconds, the individual portions that comprise the boot sequence go by almost too quickly to notice (much less interrupt). Most of the decisions about what will happen in boot are over in the first 2-3 seconds – after that, booting is just about getting to Windows as quickly as possible. These 2-3 seconds include the time allowed for firmware initialization and POST (< 2 seconds), and the time allowed for the Windows boot manager to detect an alternate boot path (< 200 milliseconds on some systems). These times will continue to shrink, and even now they no longer allow enough time to interrupt boot as you could in the past.

On the Windows team, we felt the impact of this change first, and perhaps most painfully, with our own F8 behavior. In previous versions of Windows (as far back as Windows 95), you could press F8 at the beginning of boot to access an advanced boot options menu. This is where you’d find useful options such as Safe Mode and “Disable driver signing.” I personally remember using them when I upgraded my first PC from Windows 3.1 to Windows 95. F8 helped me quickly resolve an upgrade issue and get started using Windows 95.

However, the hardware and software improvements in Windows 8 have collapsed the slice of time that remains for Windows to read and respond to the F8 keystroke. We have SSD-based UEFI systems where the “F8 window” is always less than 200 milliseconds. No matter how fast your fingers are, there is no way to reliably catch a 200 millisecond event. So you tap. I remember walking the halls and hearing people frantically trying to catch the F8 window – “tap-tap-tap-tap-tap-tap-tap” – only to watch them reboot several times until they managed to finally get a tap inside the F8 window. We did an informal study and determined that top performers could, at best, sustain repeated tapping at a rate of about one tap every 250ms. Even in this best case, catching a 200 millisecond window still depends somewhat on randomness. And even if you eventually manage to catch this short window of time, you still have to contend with sore fingers, wasted time, and just how ridiculous people look when they are frantically jamming on their keyboard.
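The odds described above can be sketched with a small simulation. This is an illustrative model, not anything from the Windows team: it assumes the tapping phase relative to the boot window is uniformly random, and uses the two timing figures from the text (a 200ms window, one tap per 250ms).

```python
import random

# Monte Carlo sketch of the F8 problem: a 200ms boot window versus a
# user who can tap no faster than once every 250ms, with the tapping
# phase assumed uniformly random relative to the window.
WINDOW_MS = 200
TAP_PERIOD_MS = 250

def tap_lands_in_window():
    # Time of the first tap after the window opens; the next tap is a
    # full period later, so only this one can land inside the window.
    first_tap = random.uniform(0, TAP_PERIOD_MS)
    return first_tap < WINDOW_MS

random.seed(1)
trials = 100_000
hits = sum(tap_lands_in_window() for _ in range(trials))
# Analytically the success rate is 200/250 = 80% per boot, so even the
# fastest tappers miss the window roughly one boot in five.
print(f"Caught the F8 window in {hits / trials:.0%} of simulated boots")
```

Under this model a top performer still needs multiple reboots now and then, which matches the hallway anecdote.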

The problem we saw with our F8 key extends to any other key you may want to press during boot. For example, in the Windows 8 Developer Preview release, the F8 key led to a full set of repair, recovery, and advanced boot options. A different key allowed developer-focused options, such as enabling debugging or disabling driver signing. And on most PCs, there are additional keystrokes used by the firmware and advertised by messages during POST: “Press F2 for Setup” or “Press F12 for Network Boot.” Now, POST is almost over by the time these instructions could be displayed. And in many cases, the keyboard wouldn’t be functional until so late in POST that it’s almost not worth the time it would take the firmware to look for these keystrokes. Some devices won’t even try.

Even so, every one of these keystrokes plays an important role, and we have historically counted on them to provide important interrupt functions in boot. However, now, there is no longer time to do any of them.

Defining the problem space

We looked at these problems from many angles, and took a holistic approach to solving them. This effort spanned across developers, testers, and program managers, examining everything from the deepest parts of the kernel to the overall user experience. Approaching this first as an engineering problem, we identified the situations and scenarios that depended on keystrokes in boot and considered literally dozens of ways to restore functionality to each scenario in Windows 8.

Here are some of the key scenarios pulled from this list:

  • Even when Windows is booting up correctly, you may want to do something different – for example, you may want to boot from an alternate device such as a USB drive, go to the firmware’s BIOS setup options, or run tools from within the protected Windows Recovery Environment image on a separate partition. In general, these scenarios were accomplished in the past mainly without the involvement of Windows, using firmware-specific keys such as F2 or F12 (or some other key that you couldn’t quite remember!).
  • You may need to troubleshoot a problem after something goes wrong, or want to undo something that just happened. Windows has many tools that assist with situations like these, such as allowing you to refresh or reset your PC, go back to a restore point using System Restore, or perform manual troubleshooting via the always-popular Command Prompt. In the past, these troubleshooting options were accessed primarily via the Windows boot manager, by pressing F8 at the beginning of boot.
  • Some error cases in startup are difficult to automatically detect. For example, the Windows boot process may have succeeded, but errors in components that are loaded later actually make Windows unusable. These cases are rare, but an example of where this might happen is a corrupt driver installation causing the login screen to crash whenever it loads. On previous-era hardware, you could interrupt boot with a keystroke (F8, for example) and reach a suitable repair option before the crashing component was even loaded. Over time, it has gotten harder to interrupt boot in this way, and in Windows 8, it’s virtually impossible.
  • We needed to enable certain startup options that are mainly used by developers – both inside and outside of Windows. Previously you could access these by pressing a key like F8 at the beginning of boot. These developer-targeted options are still important and include disabling driver signature enforcement, turning off “early launch anti-malware,” as well as other options.

One key design principle we focused on was how our solutions would fit in with the rest of Windows 8. We believed that these various boot options were more alike than they were different, and shouldn’t be located in different places within Windows. To look at this from the opposite direction, no one should need to learn how Windows is built, under the hood, to know where to go for a certain task. In the purest sense, we wanted it to “just work.”

Three solutions – one experience

We ultimately solved these problems with a combination of three different solutions. Together they create a unified experience and solve the scenarios without needing to interrupt boot with a keystroke:

  1. We pulled together all the options into a single menu – the boot options menu – that has all the troubleshooting tools, the developer-focused options for Windows startup, methods for accessing the firmware’s BIOS setup, and a straightforward method for booting to alternate devices such as USB drives.
  2. We created failover behaviors that automatically bring up the boot options menu (in a highly robust and validated environment) whenever there is a problem that would keep the PC from booting successfully into Windows.
  3. Finally, we created several straightforward methods to easily reach the boot options menu, even when nothing is wrong with Windows or boot. Instead of these menus and options being “interrupt-driven,” they are triggered in an intentional way that is much easier to accomplish successfully.

Each of these solutions addresses a different aspect of the core problem, and together they create a single, cohesive end-to-end experience.

A single menu for every boot option

The core vision behind the boot options menu is to create a single place for every option that affects the startup behavior of the Windows 8 PC. Portions of this menu were discussed in detail in our previous blog post titled Reengineering the Windows boot experience. That post has the complete details and describes the fundamental changes made within the boot menus to enable touch interaction, Windows 8 visuals, and a cohesive user experience across the many surfaces that make up boot. Here is a screenshot of the boot options menu on one of my UEFI-based PCs:

Booting to an alternate device (such as a USB drive or network) is one of the most common scenarios that previously required interrupting boot with a keystroke. With Windows 8 UEFI-based firmware, we can now use software to trigger this. On these devices, you’ll now see the “Use a device” button in the boot options menu, which provides this functionality directly. As you can see in the above image, this functionality sits side-by-side with the other boot options. Windows no longer requires a keystroke interruption to boot from an alternate device (assuming, for the moment, that you can reach the boot options menu itself without a keystroke during boot; more on this in a minute).

Into this same menu, we’ve added new functionality that allows you to reboot directly into the UEFI firmware’s BIOS setup (on Windows 8 UEFI hardware that supports this). On previous-era hardware, instructions for entering BIOS setup appeared at POST in messages like “Press F2 for setup.” (These messages have been around on PCs longer than perhaps any other type of UI.) They will still occur on systems that were made prior to Windows 8, where they will continue to work (primarily because these devices take several seconds to POST.) However, a Windows 8 UEFI-based PC won’t stay in POST long enough for keystrokes like this to be used, so the new UEFI-based functionality allows this option to live on in the boot options menu. After looking at the other items in this menu, we decided to place the button that reboots the PC into the UEFI firmware’s BIOS setup under the “Troubleshooting” node, within the “Advanced options” group:

 

A quick note about older, non-UEFI devices: legacy hardware that was made before Windows 8 will not have these new UEFI-provided menu features (booting to firmware settings and booting directly to a device). The firmware on these devices will continue to support this functionality from the POST screen as it did in the past (using messages such as “Press F2 for Setup”). There is still time for keystrokes like this to work in POST on these legacy devices, since they won’t have the improvements that enable a Windows 8 PC to POST in less than 2 seconds.

The next item appears on all Windows 8 devices – UEFI and non-UEFI alike. In the image above, you can see that we’ve added Windows Startup Settings. This new addition brings the entry point for the developer-focused Windows startup options into the unified boot options menu, and allows us to satisfy the scenarios that previously required the separate key during boot. These include items such as “disable driver signing” and “debugging mode,” as well as Safe Mode and several other options. Here is a close-up view of the informational page for these options:


Excerpt from:  blogs.msdn.com

Windows Vista infection rates climb, says Microsoft

Tuesday, May 22nd, 2012

End of support last year for SP1 responsible for spike in successful attacks

Microsoft said last week that a skew toward more exploits on Windows Vista can be attributed to the demise of support for the operating system’s first service pack.

Data from the company’s newest security intelligence report showed that in the second half of 2011, Vista Service Pack 1 (SP1) was 17% more likely to be infected by malware than Windows XP SP3, the final upgrade to the nearly-11-year-old operating system.

That’s counter to the usual trend, which holds that newer editions of Windows are more secure, and thus exploited at a lower rate, than older versions like XP. Some editions of Windows 7, for example, boast an infection rate half that of XP.

Tim Rains, the director of Microsoft’s Trustworthy Computing group, attributed the rise of successful attacks on Vista SP1 to the edition’s retirement from security support.

“This means that Windows Vista SP1-based systems no longer automatically receive security updates and helps explain why there [was] a sudden and sharp increase in the malware infection rate on that specific platform,” said Rains in a blog post last week.

Microsoft stopped delivering patches for Vista SP1 in July 2011. For the bulk of the reporting period, then, Vista SP1 users did not receive fixes to flaws, including some that were later exploited by criminals.

Vista SP2 will continue to be patched until mid-April 2017.

Rains also noted that the infection rates of both Windows XP SP3 and Vista dropped dramatically last year after Microsoft automatically pushed a “backport” update which disabled AutoRun, a Windows feature that major worms, including Conficker and Stuxnet, abused to infect millions of machines.

Rains seemed to intimate that the AutoRun disabling had more impact on XP than on Vista, and by Microsoft’s data, he may have been on to something: While XP’s infection rate continued to drop throughout the year, Vista SP2’s climbed from the second quarter to the third, and again from the third to the fourth.

Windows 7’s infection rate also increased each quarter of 2011.

Andrew Storms, director of security operations at nCircle Security, had a different theory for XP’s infection rate decline and the rise of Vista’s and Windows 7’s.

“As Microsoft’s intelligence gets better in [the Malicious Software Removal Tool] and fewer attackers focus on the older OS, then fewer infections should be found on the older OS,” said Storms, talking about Windows XP.

Most of Microsoft’s infection rate data is derived from the Malicious Software Removal Tool (MSRT), a free utility it distributes to all Windows users each month that detects, then deletes selected malware families.

And the rise of infection rates in Vista and Windows 7?

“It would be expected that all the SKUs should go up slightly over time simply because new vulnerabilities are found, more attacks always happening, and so on,” Storms added.

Rains urged XP and Vista users to upgrade to the supported service packs — SP3 for XP, SP2 for Vista — to continue to receive patches.

The 126-page Security Intelligence Report that Rains referenced can be found on Microsoft’s website (download PDF).

Source:  computerworld.com

Google will alert users to DNSChanger malware infection

Tuesday, May 22nd, 2012

Google is using a clever Domain Name System hack to let people infected with the DNSChanger malware know that they have only a few weeks left before their Internet connection goes dead.


The warning that will appear at the top of search results for people whose computers are infected

Google is about to begin an ambitious project to notify some half a million people that their computers are infected with the DNSChanger malware.

The effort, scheduled to begin this afternoon, is designed to let those people know that their Internet connections will stop working on July 9, when temporary servers set up by the FBI to help DNSChanger victims are due to be disconnected.

“The warning will be at the top of the search results page for regular searches and image searches and news searches,” Google security engineer Damian Menscher told CNET this morning. “The text will say, ‘Your computer appears to be infected,’ and it will give additional detail warning them that they may not be able to connect to the Internet in the future.”

The malware, also known as “RSPlug,” “Puper,” and “Jahlav,” was active until an FBI investigation called Operation Ghost Click resulted in six arrests last November.

DNSChanger worked by pointing infected computers to rogue Domain Name System servers that could, for instance, direct someone trying to connect to BankOfAmerica.com to a scam Web site.

Computers became infected with DNSChanger when they visited certain Web sites or downloaded particular software to view videos online. In addition to altering the DNS server settings, the malware also prevented antivirus updates from happening.
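Detection of an infection like this comes down to checking whether the machine's configured DNS resolver falls inside one of the rogue address ranges. Here is a minimal sketch, assuming the rogue-resolver ranges attributed to DNSChanger by the FBI and the DNSChanger Working Group at the time (the list below is illustrative; verify against the official advisories before relying on it):

```python
import ipaddress

# Rogue DNS server ranges attributed to DNSChanger (as published by the
# DNSChanger Working Group; treat this list as illustrative, not canonical).
ROGUE_RANGES = [
    "85.255.112.0/20",
    "67.210.0.0/20",
    "93.188.160.0/21",
    "77.67.83.0/24",
    "213.109.64.0/20",
    "64.28.176.0/20",
]
NETS = [ipaddress.ip_network(r) for r in ROGUE_RANGES]

def is_rogue_resolver(ip):
    """True if the given DNS server address falls in a known rogue range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in NETS)

print(is_rogue_resolver("85.255.113.7"))  # True: inside 85.255.112.0/20
print(is_rogue_resolver("8.8.8.8"))       # False: a legitimate public resolver
```

Google's search-page warning worked on the same principle, except that the check happened server-side: queries arriving via the FBI's replacement servers (which took over the rogue addresses) could be flagged without inspecting the user's machine at all.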

Source:  CNET

Smartphone hijacking vulnerability affects AT&T, 47 other carriers

Monday, May 21st, 2012
http://cdn.arstechnica.net/wp-content/uploads/2012/05/cisco_asa_55001-640x512.png

Cisco Systems' ASA 5500 series is one of many firewalls that drop data packets containing invalid TCP sequence numbers. The feature leaks data that can be used to hijack connections.

Malicious data is injected by tricking firewalls into leaking sensitive data

Computer scientists have identified a vulnerability in the network of AT&T and at least 47 other cellular carriers that allows attackers to surreptitiously hijack the Internet connections of smartphone users and inject malicious content into the traffic passing between them and trusted websites.

The attack, which doesn’t require an adversary to have any man-in-the-middle capability over the network, can be used to lace unencrypted Facebook and Twitter pages with code that causes victims to take unintended actions, such as posting messages or following new users. It can also be used to direct people to fraudulent banking websites and to inject fraudulent messages into chat sessions in some Windows Live Messenger apps. Ironically, the vulnerability is introduced by a class of firewalls that cellular carriers use. While intended to make the networks safer, these firewall middleboxes allow hackers to infer the TCP sequence number appended to each data packet, a disclosure that can be used to tamper with Internet connections.

“The TCP sequence number inference attack opens up a whole new set of attack venues,” the researchers from the University of Michigan’s Computer Science and Engineering Department wrote in a research paper scheduled to be presented at this week’s IEEE Symposium on Security and Privacy. “It breaks the common assumption that communication is relatively safe on encrypted/protected WiFi or cellular networks that encrypt the wireless traffic. In fact, since our attack does not rely on sniffing traffic, it works regardless of the access technology as long as no application-layer protection is enabled.”

The researchers tested their attack on Android-powered smartphones manufactured by HTC, Samsung, and Motorola. When the devices were connected to a “nation-wide carrier” that used sequence number-checking, the researchers were able to hijack connections to online services including Facebook, Twitter, Windows Live Messenger, and the AdMob advertising network. They could also spoof traffic from four unidentified banks and an unnamed Android app that gives real-time stock quotes. Zhiyun Qian, a recent PhD recipient and one of the coauthors of the paper, told Ars the attack will also work against computers connected to networks using cellular cards or smartphone tethers. He said there’s no reason to believe iOS devices from Apple can’t be hijacked as well.

This week’s paper reports that of 150 worldwide carriers tested, 48 were found to use firewalls that allowed the researchers to deduce the TCP sequence numbers needed to hijack end-user connections. Using an Android app the researchers released, Ars was able to identify AT&T as the US carrier referred to in the paper. Representatives of the carrier didn’t immediately answer an e-mail requesting comment. This article will be updated if they respond later.

Playing outside of the sandbox

Qian and fellow co-author Professor Z. Morley Mao have devised a buffet of possible attacks, depending on which required elements are satisfied in a given exploit scenario. They labeled the most potent of the attacks on-site TCP hijacking. It relies on a lightweight piece of malware that must first be installed on a victim’s phone, with Internet access as its sole privilege. With help from the malicious app, the attacker probes the firewalls that AT&T and other carriers use, which drop all data packets containing sequence numbers outside the range considered valid for the current connection. By testing which packets are permitted to go through and which are blocked, attackers can quickly deduce an acceptable number and append it to their malicious data to camouflage its fraudulent source.
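The probing logic above can be illustrated with a toy simulation (not an exploit: the firewall here is just a local function standing in for the accept/drop oracle the researchers describe, and the window size and secret value are invented for the example). The point is that a drop-outside-the-window policy turns the firewall into an oracle that gives up the secret in a modest number of probes:

```python
# Illustrative simulation: a middlebox that drops packets whose TCP
# sequence number falls outside the connection's valid window leaks
# that window to anyone who can observe accept/drop behavior.

WINDOW = 65535             # hypothetical size of the accepted window
SECRET_BASE = 0x1A2B3C4D   # the sequence number the prober wants to learn

def firewall_accepts(seq):
    """Oracle: True if the middlebox would forward a packet with this seq."""
    return SECRET_BASE <= seq < SECRET_BASE + WINDOW

def infer_base(lo=0, hi=2**32):
    # Step 1: find any accepted probe by stepping in window-sized strides
    # (exactly one stride lands inside the half-open window).
    hit = next(s for s in range(lo, hi, WINDOW) if firewall_accepts(s))
    # Step 2: binary-search for the exact lower edge of the window.
    lo, hi = hit - WINDOW, hit
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if firewall_accepts(mid):
            hi = mid
        else:
            lo = mid
    return hi

print(hex(infer_base()))  # recovers 0x1a2b3c4d
```

In the real attack the "oracle" answers arrive indirectly, for example via the IPID side channel described below, but the search strategy is the same.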

“What that means is that we’re able to completely hijack the connection, so that the original server, say the Facebook server, will be completely cut off from the communication, and we can inject whatever malicious content we want,” Qian told Ars. While Android apps are contained in a security sandbox that prevents them from accessing code and data used by other apps, he said the circuitous route taken by the lightweight malware effectively breaks out of this barrier, allowing the attackers to tamper with the phone’s web browser and other protected apps.

 

An off-path TCP sequence number inference attack

 

Another variant of the attack relies on intermediate routers that help funnel data through a carrier network. The monotonically incrementing 16-bit IP header fields known as IPIDs act as side channels for inferring how many packets a target system has sent. By examining the values in the headers before and after sending spoofed probing packets, attackers can deduce sequence numbers by inferring whether the probes successfully passed through the firewall.

Still another variation of the attack doesn’t rely on any malware at all. An off-site TCP injection/hijacking exploit, for instance, relies on a technique known as URL phishing, which lures a user to a malicious intermediate webpage before sending him to what appears to be a legitimate target website. When certain conditions are met, the attack can replace the content of the site with arbitrary traffic, or if the user is logged in to the targeted site, can inject JavaScript into the pages that steals authentication cookies or performs actions on behalf of the victim.

The required ingredient

The required ingredient in all the attacks is a firewall on the carrier network that keeps track of sequence numbers for connections the end user has made with other addresses on the Internet. Firewalls that perform this kind of sequence-number checking are manufactured by a variety of companies, including Cisco Systems, Juniper, and Check Point.

“They all build on top of the sequence number inference,” Qian said of the attacks. “Without the sequence number, all of these attacks would not be possible, so you can think of sequence number inference as a building block for all of these attacks.”

TCP sequence numbers were designed to help computers to reassemble packets that arrive out-of-sequence into their proper order. As researchers devised attacks in the late 1990s that used sequence numbers to hijack connections, the scheme was revised to give the numbers pseudo-random characteristics so they’d be hard for attackers to predict. Qian and Mao said they are the first known researchers to devise a TCP sequence number inference attack using the state kept on middleboxes.

Qian said online services can go a long way towards repelling the attacks by encrypting sessions using the secure sockets layer (SSL) or transport layer security (TLS) protocols, since almost all of the exploits he and Mao devised work against pages and apps that transmit content in plaintext. But even when web traffic is encrypted, sequence number inference can be used to mount denial-of-service attacks. Of course, the attacks could be more effectively prevented if carriers removed sequence number-checking functions from the firewalls they use. Qian said he’s not sure that move is feasible because the carriers rely on the features to conserve resources by summarily dropping arbitrary packets that enter their networks.

“In my opinion, they should be turned off,” Qian said of the sequence number checks. “However, the carriers may have their own reasons not to.”

Source:  arstechnica.com

Samsung mass-produces 4-gigabit LPDDR2 memory, aims to make 2GB a common sight in smartphones

Friday, May 18th, 2012

Samsung started making 2GB low-power mobile memory last year, but as the 1GB-equipped phone you likely have in your hand shows, the chips weren’t built on a wide enough scale to get much use. The Korean company is hoping to fix that now that it’s mass-producing 20-nanometer, 4-gigabit LPDDR2 RAM. Going to a smaller process than the 30-nanometer chips of old will not just slim the memory down by a fifth, helping your smartphone stay skinny: it should help 2GB of RAM become the “mainstream product” by the end of 2013, if Samsung gets its way. New chips should run at 1,066Mbps without chewing up any more power than the earlier parts, too, so there’s no penalty for using the denser parts. It’s hard to say whether or not the 20nm design is what’s leading to the 2GB of RAM in the Japanese Galaxy S III; we just know that the upgraded NTT DoCoMo phone is now just the start of a rapidly approaching trend for smartphones and tablets.
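The density arithmetic behind the headline is simple (a sketch; the four-die stack is an assumption about how a 2GB package would be built from these chips, not a Samsung figure):

```python
# One 4-gigabit die is 4 * 1024 / 8 = 512 MB, so a 2GB smartphone
# package can be built by stacking four of them.
DIE_GBIT = 4
die_mb = DIE_GBIT * 1024 // 8       # megabytes per die
dies_needed = (2 * 1024) // die_mb  # dies for a 2GB package
print(die_mb, dies_needed)          # 512 4
```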

Source:  engadget.com

New DDR4 memory to boost tablet, server performance

Thursday, May 17th, 2012

Expect big performance gains in data centers and on consumer devices

The upcoming shift from Double Data Rate 3 (DDR3) RAM to its successor, DDR4, will usher in a significant boost in both memory performance and capacity for data center hardware and consumer products alike.

The DDR4 memory standard, which the Joint Electronic Devices Engineering Council (JEDEC) expects to approve this summer, represents a doubling of performance over its predecessor and a 20% to 40% reduction in power use, based on a maximum of 1.2 volts.

“It’s a fantastic product,” said Mike Howard, an analyst with market research firm IHS iSuppli. “Increasing the amount of memory and the bandwidth of that memory is going to have huge implications.”

DDR4’s significant reduction in power needs means that relatively low-priced DDR memory will, for the first time, be used in mobile products such as ultrabooks and tablets, according to Howard.

Today, mobile devices use low-power DDR (LPDDR) memory, the current iteration of which runs at 1.2V. The next generation of mobile memory, LPDDR3, will further reduce power consumption (probably by 35% to 40%), but it will likely cost 40% more than DDR4 memory, said Howard. (LPDDR memory is more expensive to manufacture.)

Designed for servers

The impact that DDR4 will have on the server market could be even greater.

Intel, for example, is planning to start using DDR4 in 2014, but only in server platforms, according to Howard. “Server platforms are the ones really screaming for this stuff, because they need the bandwidth and the lower voltage to reduce their power consumption.

“So while Intel is only supporting DDR4 on their server platforms in 2014, I have a feeling they’re going to push it to their compute platforms as well in 2014,” Howard continued.

The draft of the DDR4 specification and its key attributes were released last August.

“With DDR4, we’re certainly … seeing some larger power savings advantages with the performance increase,” said Todd Farrell, director of technical marketing for Micron’s DRAM Solutions Group.

Both Samsung and Micron have announced they’re preparing to ship memory modules based on the DDR4 standard. Samsung’s memory modules, expected to ship later this year, purport to reduce power use by up to 40%. Both companies are using 30-nanometer circuitry to build their products, their smallest to date.

By employing a new circuit architecture, Samsung said its DDR4 modules will be able to perform operations at speeds of up to 3.2Gbps, compared with today’s DDR3 speeds of 1.6Gbps and DDR2’s speeds of up to 800Mbps.
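Those per-pin transfer rates translate into module bandwidth in a straightforward way: multiply by the 64-bit width of a standard DIMM data bus and divide by eight to get bytes. A back-of-envelope sketch (the bus width is the standard non-ECC DIMM assumption, not a figure from the article):

```python
# Peak module bandwidth = per-pin transfer rate * data bus width.
BUS_BITS = 64  # standard (non-ECC) DIMM data bus

def peak_gb_per_s(transfers_gbps):
    """Peak bandwidth in GB/s for a given per-pin rate in Gbps."""
    return transfers_gbps * BUS_BITS / 8  # bits -> bytes

for name, rate in [("DDR2", 0.8), ("DDR3", 1.6), ("DDR4", 3.2)]:
    print(f"{name}: {peak_gb_per_s(rate):.1f} GB/s")
# DDR2: 6.4 GB/s, DDR3: 12.8 GB/s, DDR4: 25.6 GB/s
```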

Another benefit from the arrival of DDR4 will be greater density and the ability to stack more chips atop one another. Micron’s DDR4 memory module is expected to ship next year, but test modules have already shipped to system manufacturers.

“For DDR3, we see stacking going up to four chips (4H), [but] for DDR4 this clearly will go up to eight chips stacked on top of each other (8H), which means that, using a 16Gbit memory [chip], manufacturers will be able to produce 128Gbit memory boards,” Farrell said.

Farrell described the jump from DDR3 to DDR4 as greater than any other past DDR memory evolution.

“It’s hard to pick just one [attribute]. DDR4 is one of these devices where you’re getting a lot of benefits at once. Power reduction is key. But at the same time we’re reducing power, we’re getting a substantial increase in performance. They kind of go hand in hand,” Farrell said.

For example, if you run DDR4 at the same bandwidth as DDR3, you can achieve a 30% to 40% power savings. Running at its maximum bandwidth, which represents a doubling of performance, DDR4 will use the same power as its predecessor.

Does power improvement matter?

Historically, memory power consumption has not been considered a big issue because at the motherboard level, processors were responsible for most of the power use in a system.

“Moving forward, as we see a tremendous amount of power reduction — especially in tablets — at that point, if the memory power doesn’t reduce with it … all of a sudden the memory is setting your battery life,” Farrell said.

I/O signaling has been improved for added power savings. The I/O uses an “open drain” driver, meaning it only uses power when it writes a zero and not a one at the data bit level. Previous DDR memory used power when writing both zeros and ones.

“Our DRAM controller doesn’t drive current to a one,” Farrell said.

Another power-saving feature with the DDR4 standard will be a reduction in refreshes. In DDR3 memory boards, refreshes occur periodically — and more frequently as the temperature of a device rises. DDR4 memory is being tuned to take advantage of mobile device cooling capabilities. For example, as mobile devices like tablets and laptops go into sleep mode, they cool off. As they cool, DDR4 memory modules will refresh less often, thus using less power.

Additionally, DDR4 can be optimized for server use. For example, higher reliability can be configured using a Cyclic Redundancy Check for the data bus to verify the integrity of the memory. The command address bus also has parity built directly into the DRAM module. Traditionally, parity was achieved through the use of a separate register or another chip on a buffer DIMM.

Memory prices plummet, then stabilize

Even as the arrival of DDR4 memory nears, prices for DRAM remain soft, though the market is expected to pick up steam this year.

Last year, IHS iSuppli reported there was an oversupply in the DRAM market as demand came in lower than expected.

ISuppli has released figures showing that DRAM pricing declined to its lowest point at the end of 2011, the latest period for which it has released data. In December 2011, the contract price for a 2GB DDR3 DRAM module stood at $21, less than half the $44.40 the same module cost in June 2010.

The price dip isn’t restricted to DDR3. Pricing for a DDR2 DRAM module dropped to $21.50 in December 2011, down from $38.80 in June 2010, according to iSuppli.

This year, iSuppli said it has a much more optimistic outlook for DRAM prices. “DRAM prices have stabilized (and look to stay firm), and the dynamic of the world economy looks much more positive in 2012,” it stated in a report last month.

After seeing major price declines in 2011, memory manufacturers cut output, bringing supply more in line with demand.

“Prices have been essentially flat in the commodity memory market since December, specifically DDR3. It is really weird,” Howard said, adding that market consolidation should help firm up memory prices this year.

For example, Japan’s Elpida Memory filed for bankruptcy in February. This week reports circulated that Micron is in talks to acquire Elpida.

“So it looks like there is going to be some really meaningful consolidation in the industry, and that’s pointing to a much better balance between supply and demand,” Howard said. “We’re anticipating prices for commodity products increasing in the second half of the year.”

Source:  computerworld.com

Terahertz frequencies bring Japanese researchers 3Gbps in a WiFi prototype

Thursday, May 17th, 2012

The tiny wireless radio transmits on spectrum between 300GHz and 3THz


A team of researchers at the Tokyo Institute of Technology has transmitted data in the terahertz range of the spectrum using a wireless radio no bigger than a 10-yen coin (roughly the size of a penny). The tiny contraption can access spectrum between 300GHz and 3THz (a band otherwise known as T-rays, for terahertz) and was able to transfer data at a speed of 3Gbps. But this was only a test run; the researchers suspect that using terahertz spectrum could push data transfer rates up to 100Gbps.

The newest WiFi standard available to consumers (but not yet ratified by the IEEE), 802.11ac, transmits on a 5GHz band and can theoretically achieve 1.3Gbps. There’s an even-further-out standard in the works as well; 802.11ad (otherwise known as WiGig) will transmit on the 60GHz range for a theoretical 10Gbps, although this will generally only work within line-of-sight range.

A T-ray based WiFi is certainly far off, and the greatly increased frequency of the transmission will undoubtedly require devices using terahertz spectrum to be quite close to each other. As Extreme Tech points out, the short distance of transmission for this technology would be better for server farms than anything else, permitting servers to share data between each other wirelessly rather than through a web of wiring.
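The short range follows from basic link-budget arithmetic: free-space path loss grows with the square of frequency, so moving from 5GHz Wi-Fi to 300GHz costs roughly 36dB before terahertz-specific atmospheric absorption (which is severe) is even counted. A quick sketch of the standard formula:

```python
import math

# Free-space path loss in dB at distance d (meters) and frequency f (Hz):
#   FSPL = 20 * log10(4 * pi * d * f / c)
def fspl_db(d_m, f_hz):
    c = 299_792_458.0  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * d_m * f_hz / c)

# Same 1-meter link at Wi-Fi vs. terahertz frequencies:
print(round(fspl_db(1, 5e9)))    # ~46 dB at 5 GHz
print(round(fspl_db(1, 300e9)))  # ~82 dB at 300 GHz
```

That 36dB gap is a factor of about 4,000 in received power, which is why the researchers see server racks, not living rooms, as the natural habitat for T-ray links.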

Aside from the potentially huge bandwidth of T-ray networking, there’s another reason the spectrum is so attractive. Terahertz waves are unregulated, and present an untouched frontier away from currently crowded bands of spectrum.

Source:  arstechnica.com

Microsoft bolsters parental controls with Windows 8

Thursday, May 17th, 2012


Aiming to give parents the option of keeping an eagle eye on their kids’ computer use, Microsoft revamps its parental controls with a “monitor first” approach that includes weekly reports.

Microsoft aims to give parents more control over their children’s computer use on Windows 8 with a new feature announced this week.

“With Windows 8, you can monitor what your kids are doing, no matter where they use their PC,” Microsoft’s senior program manager for Family Safety Phil Sohn wrote in a blog post. “All you have to do is create a Windows user account for each child, check the box to turn on Family Safety, and then review weekly reports that describe your children’s PC use.”

With these controls and weekly reports, parents will be able to keep tabs on whether their kids are playing violent online video games, looking at bikini models, or actually doing their homework. They’ll also be able to make sure their children aren’t associating with online predators.

Most previous parental controls focused on complex filtering options or on software that blocks children from Web sites; however, Microsoft says that with Windows 8, it’s now taking a “monitor first” approach.

The company says this new system is much easier. How it works: parents sign into Windows 8 with a Microsoft account, create a separate user account for each child, and then check the box to turn on Family Safety.

From there, parents can make the controls more or less restrictive and see what their kids are doing via the weekly e-mail reports.

Microsoft says Windows 8 will have all the same restrictions as Windows 7 along with some new ones. Here’s the list of additional restrictions:

  • Web filtering: You can choose between several web filtering levels.
  • SafeSearch: When web filtering is active, SafeSearch is locked into the “Strict” setting for popular search engines such as Bing, Google, and Yahoo. This will filter out adult text, images, and videos from your search results.
  • Time limits: With Windows 8, you now can restrict the number of hours per day your child can use their PC. For example, you might set a limit of one hour on school nights and two hours on weekends. This is in addition to the bedtime limits currently available in Windows 7.
  • Windows Store: Activity reports list the most recent Windows Store downloads, and you can set a game-rating level, which prevents your children from seeing apps in the Windows Store above a particular age rating.
  • Application and game restrictions: As in Windows 7, you can block specific applications and games or set an appropriate game rating level.

Source:  CNET

Companies slow to react to mobile security threat

Monday, May 14th, 2012

Nearly a third of IT managers have reported a security threat as a result of personal devices accessing company data, Juniper finds

Nearly nine in 10 executives and employees are using their personal smartphones or tablets for business and about half are doing so without the permission of their companies, a new study shows.

Making the situation even more precarious, less than half of the more than 4,000 mobile device users surveyed by Juniper Networks in the U.S., U.K., Germany, China and Japan took even the most basic precautions in using mobile applications.

The findings, released this week, point to the need for all C-level executives to start taking mobile security seriously to avoid giving hackers an open door to the corporate network.

“You’re extremely hard pressed to find an enterprise that says, ‘Yes, we understand what’s going on with mobility, we did our research and we put together and have implemented a comprehensive solution to address our mobility concerns,'” Dan Hoffman, chief mobile security evangelist for Juniper, said Friday. “They’re just not there right now.”

As a security vendor, Juniper has a vested interest in scaring the bejeezus out of execs to get them to spend their company’s money on expensive security technology to lock down mobile devices. Nevertheless, based on the study, there are some troubling trends within the enterprise.

Juniper found that 89 percent of business users, often called prosumers, are using their personal devices to access what the vendor says is “critical work information.” More than 40 percent of that group is using their tablets and smartphones without asking their companies for permission.

This risky behavior has already had some consequences. Nearly a third of IT managers have reported a security threat as a result of personal devices accessing company data, Juniper said. In China, that number doubles.

The fact that breaches have occurred is unsurprising, given the lack of common sense in the use of mobile apps. Less than half of the respondents said they read the terms and conditions before downloading an app, manually set data security features and settings, or researched applications to ensure they are trustworthy.

In the background to all this risky behavior is a growing malware threat. In 2011, the amount of malware targeting mobile devices grew 155 percent year over year, according to Juniper. In the first three months of this year, it has grown by an additional 30 percent.

Most troubling about the increase this year is the rise in spyware capable of stealing personal, financial and work information. Juniper found the amount of spyware doubled in the first quarter.

The report had a bright side. Many people are willing to have their devices supported by IT staff, which would give their companies the needed control to secure the devices. The study found that more than four in 10 employees and execs are actually pressuring IT staff for support. Hoffman recommends CSOs give these employees and execs what they want.

“Providing security to the bring-your-own-device (BYOD) user has to be about protecting the enterprise, but I think it also has to be about protecting the end user because fundamentally, they’re the same,” Hoffman said.

Source:  infoworld.com

Windows 8 upgrade program kicks off June 2

Monday, May 14th, 2012

http://www.geek.com/wp-content/uploads/2012/05/windows-8-upgrade1.pngAs it always does, Microsoft will be offering customers free (or discounted) upgrades to Windows 8 in the run-up to its retail launch later this year. Those upgrades, of course, will be offered to shoppers who purchase a new Windows 7 PC from a participating vendor after a specific date.

The date specified in the fine print above is June 2nd. While that might seem early at first glance, it’s yet another date on the Windows 8 timeline that closely lines up with the one for Windows 7. The offer is set to end on January 31, 2013, which will help retail stores sell off any remaining Windows 7 stock once Windows 8 hardware begins arriving. It also conveniently covers the back-to-school and holiday shopping rushes.

So why is the upgrade offer pictured above a paid one? Likely because it’s for Windows 8 Pro. The upgrade offer for the more consumer-oriented Windows 8 SKU will probably be free, as was the Windows Vista Home Premium to Windows 7 program.

As before, it’ll be a simple process. Find the upgrade code in the box your new Windows 7 PC arrived in, head over to Microsoft’s redemption site, and enter your code. After that, you’ll be hooked up with your Windows 8 upgrade. Based on how well the new installer handled upgrades from Windows 7 to the Windows 8 Consumer Preview on my test machines, the process should be a breeze even for less experienced users.

Is the upgrade for you? You’ve got some time to make up your mind, obviously, and you’ll probably want to hold off until after the Windows 8 Release Preview arrives. That’s coming next month, too, and it’s worth giving the hands-on treatment before you make up your mind. Of course, if you buy a Windows 7 machine you can always stick with it until Windows 9 comes out if you’d rather give the whole Metro thing a pass for now.

Source:  geek.com

Get smart: Charge your phone while walking in this shoe

Monday, May 14th, 2012

Anthony Mutua’s modified Nike sneaker can recharge your phone as you walk. Just don’t expect more air in your jumps.

Love walking and texting? Still haven’t done a faceplant on a streetlight? Well, this sneaker from Kenya can power your phone so you’ll never have to look up from that screen again.

Inventor Anthony Mutua, 24, has been showing off his recharging sneaker at the first-ever Kenyan Science Technology and Innovation Week, held in Nairobi. It’s another way of using your body’s own energy to fuel electronics.

The shoe apparently has a very thin “crystal chip,” perhaps a piezoelectric device, that generates power when the sole bends. It can charge phones via a long cable to a pocket while the user walks, or store power for later charging.
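For context, here is a rough, hedged estimate of what pressure harvesting of this kind can yield. Every figure below is an assumption chosen for illustration; published piezoelectric harvest rates vary by orders of magnitude, and the actual output of Mutua's device has not been stated.

```python
# Back-of-envelope estimate of piezoelectric shoe charging.
# All numbers are illustrative assumptions, not measurements of the device.

ENERGY_PER_STEP_J = 0.002        # assumed ~2 mJ harvested per heel strike
STEPS_PER_HOUR = 5000            # assumed brisk walking pace
BATTERY_J = 5.0 * 3600           # a ~5 Wh phone battery, in joules

harvested_j = ENERGY_PER_STEP_J * STEPS_PER_HOUR   # energy per hour walked
percent = harvested_j / BATTERY_J * 100            # share of a full charge

print(f"{harvested_j:.1f} J per hour walked")
print(f"{percent:.3f}% of a full battery per hour")
```

Under these assumed figures the harvest is a slow trickle rather than a full charge, which fits the device's described use: accumulate power while walking, then top up the phone afterward.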

“This charger works using pressure, as you walk you generate pressure that in turn generates energy, once you have arrived where you were going you can now sit down and charge your mobile phone,” Mutua told CNC World.

The technology apparently works with any shoe except bathroom slippers, and the chip can be transferred to a new pair once a shoe gets worn out.

It can also power several phones at the same time. Good news for those who like to tote more than one handset.

The project was apparently sponsored to the tune of some $6,000 by Kenya’s National Council of Science and Technology (NCST). It has been patented and the device could enter mass production soon.

It could sell for the equivalent of $46, which would include a 2.5-year warranty.

Isn’t that just about the time a pair of new shoes will last?

Source:  CNET

Verizon to offer 100G links, resilient mesh on optical networks

Saturday, May 12th, 2012

The carrier will upgrade metro networks in the U.S. and other countries to 100-gigabit and use mesh designs for recovery from breaks

Looking ahead to growing demand for bandwidth to feed large companies and computing clouds, Verizon Communications announced steps on Friday to increase the speed of the links its enterprise customers can buy and to make its network connections more resilient.

Verizon already has 100Gbps (gigabit-per-second) connections over its optical core networks across continents. Now the carrier is bringing that speed to its metro networks, which enterprises tap into for high-speed data connections. The metro networks so far have been limited to 10Gbps or 40Gbps, so that’s all enterprises have been able to buy, according to Glenn Wellbrock, Verizon’s director of Optical Transport Network Architecture and Design.

Though the carrier doesn’t expect many customers to start ordering 100Gbps connections soon, it is preparing for the future, Wellbrock said. For example, many large organizations are looking for 10-gigabit links, and where Verizon has 100-gigabit capability, it can quickly divide those fat pipes into narrower connections, he said.

Verizon’s 100-gigabit U.S. backbone technology forms the basis of a high-speed, low-latency network for financial trades that was inaugurated between Chicago and New York last month. It can complete a stock trade in as little as 14.5 milliseconds, according to the carrier. The carrier doesn’t yet have 100-gigabit capability across the Atlantic or Pacific but is working on it, Wellbrock said.
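That 14.5 ms figure is close to the physical floor set by the speed of light in glass. The quick check below uses assumed numbers (a roughly 1,300 km fiber route and light traveling at about two-thirds of c in fiber; neither is a published Verizon spec):

```python
# Sanity check: how fast can a Chicago <-> New York round trip be over fiber?
# The route length and fiber slowdown factor are assumptions for illustration.

C_VACUUM_KM_S = 299_792.458      # speed of light in vacuum, km/s
FIBER_FACTOR = 2 / 3             # light in glass travels at roughly 2/3 c
ROUTE_KM = 1_300                 # assumed one-way fiber route length

one_way_ms = ROUTE_KM / (C_VACUUM_KM_S * FIBER_FACTOR) * 1000
round_trip_ms = 2 * one_way_ms

print(f"one-way:    {one_way_ms:.1f} ms")
print(f"round trip: {round_trip_ms:.1f} ms")
```

With these assumptions the propagation delay alone is about 13 ms round trip, so a 14.5 ms trade leaves only a millisecond or so for switching and matching, which is why such routes are marketed as low-latency.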

Also on Friday, Verizon said it has begun to use the same general architecture for high-speed land-based networks, such as those in North America and Europe, that it already uses for its connections across oceans. That architecture, based on a mesh of cables, gives traffic across its core network more alternate routes to take if one cable breaks. This is a step up from a ring architecture, in which the network recovers by sending bits the other way around the ring if one spot is damaged. That approach has limitations.

“If you had two outages at the same time, you were out of business,” Wellbrock said.

Verizon already has mesh networks across the Pacific and across the Atlantic, each with eight alternate paths. Traffic can be rerouted over the quickest path across the mesh in the event of a disaster. Because of the trans-Pacific mesh, Verizon’s network recovered after its cables were damaged in the earthquake, aftershocks and tsunami that hit Japan last year, according to the carrier. “It’s gotten us out of a lot of jams,” Wellbrock said.

Now, Verizon is building that capability into high-capacity land-based networks in a global initiative, upgrading not just the carrier’s domestic U.S. system but also networks in key markets elsewhere in the world. The move is designed to bring greater reliability to enterprises’ high-speed links. This will be a gradual process, Verizon said.

Source:  computerworld.com

Anti-WiFi wallpaper lets cellular and radio through

Friday, May 11th, 2012

No Faraday cage or tinfoil hat required.

Better WiFi security could soon be just a few rolls of wallpaper away. French researchers at Institut Polytechnique de Grenoble, in cooperation with the Centre Technique du Papier, have developed a wallpaper that can block WiFi signals, preventing them from being broadcast beyond the confines of an office or apartment. But unlike other signal-blocking technologies based on the Faraday cage (which block all electromagnetic radiation), the wallpaper only blocks a select set of frequencies used by wireless LANs, and allows cellular phones and other radio waves through. L’Informaticien reports that researchers claim the price of the wallpaper, which is being licensed to a Finnish manufacturer for production, would be “equivalent to a traditional mid-range wallpaper.” It should be available for sale in 2013.

Pierre Lemaitre-Auger, the director of studies at Grenoble INP’s ESISAR (School of Advanced Systems and Networks) said during a demonstration of the wallpaper that in addition to preventing WiFi snooping, it could also be used in areas where there is concern about interference from WiFi or to block external WiFi sources—such as in hospitals, hotels, or theaters. (It could also be used to prevent guests from trying to get out of paying for WiFi and picking up an outside network for free.) He also said that the paper could be marketed to people concerned about sensitivity to electromagnetic waves, such as “people who want the opportunity to protect themselves and to have very low levels of radio waves in their apartment.”

Source:  arstechnica.com

Apple patches 36 bugs in OS X, fixes encryption password goof

Thursday, May 10th, 2012

Update includes fixes to FileVault in Lion and Snow Leopard, as well as QuickTime bugs

Apple yesterday patched 36 vulnerabilities in Mac OS X, most of them critical, plugging a hole that revealed passwords used to encrypt folders with an older version of FileVault.

Both Mac OS X 10.7, aka Lion, and 10.6, better known as Snow Leopard, were updated with fixes. The two operating systems were last updated in February.

High on the fix list was one specific to Lion that put FileVault passwords in plain text, where they could easily be read — and thus encrypted folders deciphered — if a Mac was stolen or lost. The software consultant who publicly reported the bug attributed it to a programming error on Apple’s part.

“The login process recorded sensitive information in the system log, where other users of the system could read it,” Apple’s advisory stated. Apple also acknowledged that the plain-text passwords may persist in the Mac’s logs after users update to 10.7.4 and urged them to review a support document that walks through the steps to eradicate any that remain.

Among the other patches were four Snow Leopard-only fixes quashing bugs that could be exploited via malicious image files; another four in QuickTime, Apple’s media player and browser plug-in; and one in FileVault 2, the full-disk encryption technology used by Lion.

The FileVault 2 flaw caused some data to be left unencrypted when a Mac went into “sleep” mode.

Twenty-one of the 36 vulnerabilities were tagged with Apple’s phrase of “arbitrary code execution,” indicating they were critical flaws that, if exploited by attackers, could result in a Mac malware infection.

Eight of the bugs affected only Snow Leopard.

On Lion, Apple also included a number of nonsecurity fixes it categorized as stability and compatibility improvements. Many of them were related to connecting to network services, such as Microsoft’s Active Directory and that company’s Server Message Block (SMB) file-sharing protocol. Both are used by Macs in enterprises to access corporate resources held on servers running Windows.

Snow Leopard’s update, dubbed “Security Update 2012-002,” received no feature improvements.

Yesterday’s update may be the last for Snow Leopard, as Apple seems to be on the fast track for OS X 10.8, aka Mountain Lion, which may ship as soon as late June. Apple typically stops serving security updates to the oldest edition in its support rotation when it finalizes a major operating system upgrade.

Last year, OS X 10.5, or Leopard, received its final security update in late June, about a month before Apple launched Lion. Leopard’s versions of iTunes, QuickTime, and Java, however, were updated after June 2011.

As usual, some users reported problems with the update.

On the Lion support forum, complaints ranged from kernel errors and difficulty reaching a Wi-Fi network to numerous reports of bricked MacBook Pros.

No one problem was dominant in those reports, but the MacBook Pro-not-booting thread was heavily trafficked, with more than 1,500 views since its inception Wednesday afternoon.

Mac OS X 10.7.4 and the separate 2012-002 security update for Snow Leopard can be downloaded from Apple’s support site or installed using the operating system’s built-in update service.

Source:  infoworld.com