Archive for May, 2011

Google Cloud Connect for Microsoft Office Unites Office With Google Docs

Wednesday, May 25th, 2011

Wouldn’t it be great if you could combine the best of Microsoft Office and Google Docs? Imagine the feature set and usability of Office with the ability of Google Docs to store documents in the cloud and share them. With Google Cloud Connect for Microsoft Office, that’s exactly what you get. This free add-in for Microsoft Office, which hails from Google, lets you save your Office documents to Google Docs, where you can use them as you would normally.

After you install the software, it runs as its own toolbar at the top of Office. When you want to save a document to Google Docs, simply click Sync; Google Cloud Connect saves the file, then syncs it automatically every time you save. If you want, you can change that syncing behavior and have Google Cloud Connect sync only when you manually tell it to.

You can also use the Google Cloud Connect toolbar to share a document that you’ve saved to Google Docs. Click the Share button and a dialog box opens that lets you share your document. You can use Google Cloud Connect with multiple Google Doc accounts; simply switch from one to the other. There are a few limitations, though. You can’t use multiple accounts simultaneously, and you can’t open a document stored on Google Docs from directly within Office.

There’s no simpler way to combine the power of Microsoft Office and Google Docs than Google Cloud Connect for Microsoft Office–and you can get it for free.

Source:  PCWorld

Microsoft: One in 14 downloads is malicious

Monday, May 23rd, 2011

The next time a website says to download new software to view a movie or fix a problem, think twice. There’s a pretty good chance that the program is malicious.

In fact, about one out of every 14 programs downloaded by Windows users turns out to be malicious, Microsoft said Tuesday. And even though Microsoft has a feature in its Internet Explorer browser designed to steer users away from unknown and potentially untrustworthy software, about 5 percent of users ignore the warnings and download malicious Trojan horse programs anyway.

Five years ago, it was pretty easy for criminals to sneak their code onto computers. There were plenty of browser bugs, and many users weren’t very good at patching. But since then, the cat-and-mouse game of Internet security has evolved: Browsers have become more secure, and software makers can quickly and automatically push out patches when there’s a known problem.

So increasingly, instead of hacking the browsers themselves, the bad guys try to hack the people using them. It’s called social engineering, and it’s a big problem these days. “The attackers have figured out that it’s not that hard to get users to download Trojans,” said Alex Stamos, a founding partner with Isec Partners, a security consultancy that’s often called in to clean up the mess after companies have been hacked.

Social engineering is how the Koobface virus spreads on Facebook. Users get a message from a friend telling them to go and view a video. When they click on the link, they’re then told that they need to download some sort of video playing software in order to watch. That software is actually a malicious program.

Social-engineering hackers also try to infect victims by hacking into Web pages and popping up fake antivirus warnings designed to look like messages from the operating system. Download these and you’re infected. The criminals also use spam to send Trojans, and they will trick search engines into linking to malicious websites that look like they have interesting stories or video about hot news such as the royal wedding or the death of Osama bin Laden.

“The attackers are very opportunistic, and they latch onto any event that might be used to lure people,” said Joshua Talbot, a manager with Symantec Security Response. When Symantec tracked the 50 most common malicious programs last year, it found that 56 percent of all attacks included Trojan horse programs.

In enterprises, a social-engineering technique called spearphishing is a serious problem. In spearphishing, the criminals take the time to figure out who they’re attacking, and then they create a specially crafted program or a maliciously encoded document that the victim is likely to want to open — materials from a conference they’ve attended or a planning document from an organization that they do business with.

With its new SmartScreen Filter Application Reputation screening, introduced in IE 9, Internet Explorer provides a first line of defense against Trojan horse programs, including Trojans sent in spearphishing attacks.

IE also warns users when they’re being tricked into visiting malicious websites, another way that social-engineering hackers can infect computer users. In the past two years, IE’s SmartScreen has blocked more than 1.5 billion Web and download attacks, according to Jeb Haber, program manager lead for SmartScreen.

Haber agreed that better browser protection is pushing the criminals into social engineering, especially over the past two years. “You’re just seeing an explosion in direct attacks on users with social engineering,” he said. “We were really surprised by the volumes. The volumes have been crazy.”

When the SmartScreen warning pops up to tell users that they’re about to run a potentially harmful program, the odds are between 25 percent and 70 percent that the program will actually be malicious, Haber said. A typical user will only see a couple of these warnings each year, so it’s best to take them very seriously.

Source:  networkworld.com

Laser puts record data rate through fiber

Monday, May 23rd, 2011

Researchers have set a new record for the rate of data transfer using a single laser: 26 terabits per second.

At those speeds, the entire Library of Congress collections could be sent down an optical fibre in 10 seconds.
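
As a rough sanity check (taking the commonly quoted, and much debated, estimate of a few tens of terabytes for the Library's digitised collections):

    26\ \text{Tbit/s} \times 10\ \text{s} = 260\ \text{Tbit} \approx 32.5\ \text{terabytes}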

The trick is to use what is known as a “fast Fourier transform” to unpick more than 300 separate colours of light in a laser beam, each encoded with its own string of information.

The technique is described in the journal Nature Photonics.

The push for higher data rates in light-based telecommunications technologies has seen a number of significant leaps in recent years.

While the earliest optical fibre technologies encoded a string of data as “wiggles” within a single colour of light sent down a fibre, newer approaches have used a number of tricks to increase data rates.

Among them is what is known as “orthogonal frequency division multiplexing”, which uses a number of lasers to encode different strings of data on different colours of light, all sent through the fibre together.

At the receiving end, another set of laser oscillators can be used to pick up these light signals, reversing the process.

Check the pulse

While the total data rate possible using such schemes is limited only by the number of lasers available, there are costs, says Wolfgang Freude, a co-author of the current paper from the Karlsruhe Institute of Technology in Germany.

“Already a 100 terabits per second experiment has been demonstrated,” he told BBC News.

“The problem was they didn’t have just one laser, they had something like 370 lasers, which is an incredibly expensive thing. If you can imagine 370 lasers, they fill racks and consume several kilowatts of power.”

Professor Freude and his colleagues have instead worked out how to create comparable data rates using just one laser with exceedingly short pulses.

Within these pulses are a number of discrete colours of light in what is known as a “frequency comb”.

When these pulses are sent into an optical fibre, the different colours can add or subtract, mixing together and creating about 325 different colours in total, each of which can be encoded with its own data stream.

Last year, Professor Freude and his collaborators first demonstrated how to use a smaller number of these colours to transmit over 10 terabits per second.

At the receiving end, traditional methods to separate the different colours will not work. In the current experiment, the team sent their signals down 50km of optical fibre and then implemented what is known as an optical fast Fourier transform to unpick the data streams.

Colours everywhere

The Fourier transform is a well-known mathematical trick that can in essence extract the different colours from an input beam, based solely on the times that the different parts of the beam arrive.

The team does this optically – rather than mathematically, which at these data rates would be impossible – by splitting the incoming beam into different paths that arrive at different times, then recombining them on a detector.

In this way, stringing together all the data in the different colours turns into the simpler problem of organising data that essentially arrive at different times.
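
For the curious, the transform pair at the heart of this scheme is easy to demonstrate numerically. The Python sketch below (with toy parameters; the real system works on light, not arrays) uses an inverse FFT to place independent data streams on separate frequencies and a forward FFT to unpick them again, the step the optical setup performs with path delays:

    # Toy frequency-division multiplexing demo: illustrative parameters only.
    import numpy as np

    n_channels = 8          # the experiment used about 325 optical "colours"
    n_symbols = 4           # QPSK symbols carried per channel
    rng = np.random.default_rng(0)

    # Each channel carries its own data stream, encoded as QPSK phases.
    bits = rng.integers(0, 4, size=(n_symbols, n_channels))
    symbols = np.exp(1j * np.pi / 2 * bits)   # phases 0, 90, 180, 270 degrees

    # Transmitter: an inverse FFT places each stream on its own subcarrier,
    # so all channels travel together as one time-domain waveform.
    waveform = np.fft.ifft(symbols, axis=1)

    # Receiver: a forward FFT "unpicks" the subcarriers again.
    recovered = np.fft.fft(waveform, axis=1)

    assert np.allclose(recovered, symbols)    # streams recovered exactly
    print("all", n_channels, "channels recovered without crosstalk")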

Professor Freude said that the current design outperforms earlier approaches simply by moving all the time delays further apart, and that it is a technology that could be integrated onto a silicon chip – making it a better candidate for scaling up to commercial use.

He concedes that the idea is a complex one, but is convinced that it will come into its own as the demand for ever-higher data rates drives innovation.

“Think of all the tremendous progress in silicon photonics,” he said. “Nobody could have imagined 10 years ago that nowadays it would be so common to integrate relatively complicated optical circuits on to a silicon chip.”

Source:  BBC

Cisco debuts ‘private cloud’ controller

Sunday, May 22nd, 2011

Retailers, other distributed enterprises get controller-less Wi-Fi branch option

Cisco shops can finally run a slew of branch office WLANs without having to put a controller at each site.

At Interop last week, the company announced the Flex 7500 Series Cloud Controller, which centralizes control and management functions in the “private cloud” data center but allows for distributed data forwarding in local branch APs.

Connections and inter-AP fast-roaming capabilities stay up if there’s a WAN failure between the branch site and the controller, and users can also authenticate locally, according to Greg Beach, director of product management in Cisco’s wireless networking business unit.

The Flex 7500 is a “1 [rack-unit] appliance that supports 2,000 APs [across distributed sites], local authentication and fast roaming. If a WAN link goes down, already connected devices survive,” he said.

The architecture alleviates the high cost associated with having a controller in every site in enterprises that are highly distributed, such as retail organizations.

However, the control and management functions are inaccessible if the WAN is unavailable. Cisco wireless VP and general manager Ray Smets said at the show that “retailers want CleanAir,” Cisco’s well-received spectrum analysis capabilities for identifying and mitigating sources of interference. “To get CleanAir, they need a controller. And they want that controller in the data center.”

In other words, while a WAN failure would not impact local connectivity and data forwarding, continued operation of the bells-and-whistles RF management features such as CleanAir remains dependent on a live WAN.

That’s because while CleanAir uses purpose-built ASICs in Cisco APs for monitoring, you need a correlation and analysis engine to crunch the data collected. According to Cisco CleanAir documents: “You can deploy Cisco CleanAir technology effectively with just Cisco Aironet 3500 Access Points and the Cisco WLC for simple detection and mitigation of RF interference. For added benefits such as location, zone of impact, policy enforcement, and visualization of air quality, you should also consider including the Cisco Mobility Services Engine (MSE) and the Cisco WCS [Wireless Control System].”

Source:  networkworld.com

NXP and Cohda teach cars to communicate with 802.11p, hope to commercialize tech by 2014

Thursday, May 19th, 2011

Ford promised to give our cars X-ray vision, and this little blue box might be the key — it’s apparently the first standardized hardware platform for peer-to-peer automobile communications. Called C2X (for “car-to-x”), the module inside is the product of Cohda Wireless and near-field communications (NFC) gurus at NXP, and it uses 802.11p WiFi to let equipped cars see one another around blind corners, through other vehicles, or even chat with traffic signals up to a mile away.

Pocket-lint got a look at the technology during Automotive Week and came away with a good idea of when we can expect it: NXP says it should begin rolling out in 2014, and hopes to have 10 percent of the cars on the road gleefully gabbing by 2020.

Source:  engadget

Microsoft gives away free Xbox 360s to students purchasing new PCs

Thursday, May 19th, 2011

No gimmicks.  Just be a student with a verifiable .edu email account (for online purchases) or a student ID (for retail store purchases) looking to purchase a new Windows 7 PC priced at $699 or more, and get a free Xbox 360 4GB console.  The offer is detailed here, and is valid from 5/23/11 to 9/3/11 or while supplies last at Dell.com, HP.com, Best Buy, and other Microsoft resellers.

The timing of this offer probably isn’t coincidental.  Is Microsoft trying to one-up Google’s recent announcement of a $20/month Chromebook offer for students by tipping the Chromebook-PC decision scale with a multimedia gaming console?

Could be.  In any event, it’s enough to make me wish I was a student again.

AT&T plans consumer security service for 2012

Thursday, May 19th, 2011

AT&T Inc plans to launch a wireless security service for consumers next year to help combat a big rise in cyber attacks on mobile devices, a top executive said.

As more people use smartphones like Apple Inc’s iPhone and Google Inc’s Android-based devices to download Web applications, John Stankey, the head of AT&T’s enterprise business, said he had seen a big spike in security attacks on cellphones.

“Hackers always go to where there’s a base of people to attack,” Stankey said in an interview ahead of the Reuters Technology Media and Telecommunications Summit.

The company already sells security services to businesses, helping them protect their workers’ cellphones. But it has yet to offer consumer services such as anti-virus software as it has had a tough time trying to sell them.

“I do believe it’ll become as relevant in the mobile space as it is today in the desktop,” he said, referring to subscription anti-virus software services currently available for PCs. “You’ll see that occur in the wireless world.”

Stankey said AT&T would probably launch such services in 2012.

Consumers have been reluctant to pay for these services as most feel there is little risk.

“When you start asking them what’s your willingness to pay for a solution, if they’re not a little frightened, their willingness to pay is nothing,” Stankey said. “It’ll take a little time for this in the mass market.”

But the reluctance to pay will probably change in the coming year as consumers become more aware of security threats, he said.

Source:  reuters.com

The line between phone and PC has disappeared entirely

Tuesday, May 17th, 2011

Well, a cellphone maker has finally crossed the line between mobility and insanity – and even though it’s a Japanese carrier offering the device first, business users in the US shouldn’t feel any safer.

The Fujitsu LOOX F-07C is a dual-boot phone that can operate in either Symbian or – no kidding – 32-bit Windows 7 Home Premium.  Not Windows Phone 7; you read it right the first time…

Hardware specs are right up there with other top-of-the-line devices, with a 1.2 GHz Atom CPU, 1 GB RAM, and 32 GB onboard memory.  A physical keyboard and trackball mouse assist in navigation and input.  Battery life in Windows mode tops out at a mere two hours; after all, it is just a phone.  When the battery runs low, it kicks you back to Symbian.

The peripheral options and native Windows hardware/software compatibility, however, are what could open the door to conceptual acceptance here in the US.  When the phone is connected to the available dock in Windows mode, USB and HDMI ports let you hook up your TV or monitor, mouse, keyboard, and even printers (without the third-party apps that Android devices like the Motorola Atrix require), making your phone a fully functional PC – not a webtop running a mobile OS.

Oh, and it comes with a full version of Microsoft Office as well.

We’re getting into dangerous territory here.  Soon there will be absolutely no escape from work.  Be afraid, business users; be very afraid….

Graphene Breakthrough Could Lead to Fastest Downloads Ever

Monday, May 16th, 2011

Graphene keeps boosting its reputation as the next wonder substance in computing. Researchers at the University of California-Berkeley have created an optical modulator that uses the carbon-based material to crank speeds ten times beyond current switching technology. The process could lead to devices capable of downloading an entire high-definition 3D movie in seconds.

The team, led by Dr. Xiang Zhang, created an optical data modulator out of a piece of graphene that’s just one atom thick. It’s capable of switching from on to off at a speed of 1 GHz, though the technology is theoretically scalable up to 500 GHz, the researchers said.

On top of that, the modulator is tiny. Optical modulators are typically bulkier than their electronic counterparts since they’re made to manipulate light pulses instead of electrons. But in the case of the graphene modulator, the researchers were able to shrink it to a size of 25 square microns. At that size, 400 graphene modulators could fit in the cross-section of a human hair.

“This is the world’s smallest optical modulator, and the modulator in data communications is the heart of speed control,” said Zhang. “Graphene enables us to make modulators that are incredibly compact and that potentially perform at speeds up to ten times faster than current technology allows. This new technology will significantly enhance our capabilities in ultrafast optical communication and computing.”

Scientists have continually found new and impressive applications for graphene since the carbon variant was first isolated in 2004. Graphene is the thinnest crystalline substance known, but it’s still very strong. It stretches like elastic, and it also conducts heat and electricity. This combination of useful properties has led to it being built into a variety of applications, from lighting to transistors.

To apply graphene to optics, the researchers exploited a curious property: graphene becomes transparent when a voltage is applied. With a negative voltage, electrons are drawn out of the material and can’t absorb passing photons; with a positive voltage, the electrons become so tightly packed that they can’t absorb photons either, so light still passes through.
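
In textbook terms (a hedged gloss of the effect, often described as Pauli blocking, rather than the paper’s own equations), interband absorption of a photon of energy \hbar\omega is suppressed whenever the gate voltage pushes graphene’s Fermi level E_F past half the photon energy:

    \text{transparent when } |E_F| > \frac{\hbar\omega}{2}, \qquad \text{absorbing when } |E_F| < \frac{\hbar\omega}{2}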

The researchers found the right voltage to turn graphene opaque, which turns the switch “off.” It turns out that graphene can absorb a broad spectrum of light, which means a graphene modulator can carry much more data than current modulators.

“Graphene-based modulators not only offer an increase in modulation speed, they can enable greater amounts of data packed into each pulse,” said Zhang. “Instead of broadband, we will have ‘extremeband.'”

With video technologies like 4K coming down the pike, the potential increased bandwidth couldn’t be coming at a better time. So how long before we’re downloading at “extremeband” speeds? Zhang says he hopes to see applications in the “next few years,” though with any emerging tech, it’s a long road from concept to production.

Source:  pcmag.com

Microsoft Security Intelligence Report reveals trends from Q3-Q4 2010

Monday, May 16th, 2011

The Microsoft Security Intelligence Report compiled from data gleaned in the second half of 2010 has been released.  Here are some notable excerpts you may find interesting:

  • Windows XP users are about four times more likely to become infected with malware than Windows 7 users
  • Infection rates for 64-bit OS users are much lower than for 32-bit OS users on the same version of Windows (Microsoft is unsure whether to attribute this to the fact that “64-bit versions of Windows still appeal to a more technically savvy audience than their 32-bit counterparts,” or to take a bow for the creation of “Kernel Patch Protection (KPP), a feature of 64-bit versions of Windows that protects the kernel from unauthorized modification”)
  • Product advertisements accounted for 54% of spam in 2010, down from 69% a year ago
  • Image-only spam is on the rise, up over 30% from 2009
  • Malicious HTML inline frames (IFrames) topped the list of web exploits
  • Document exploits involving Adobe Acrobat and Reader fell from 50% to 10% of all document exploits over the last two quarters (apparently they patched something…)
  • It is twice as likely that a company will lose control of your personal information due to negligence (i.e., lost, stolen, or missing equipment, accidental disclosure, or improper disposal) as due to malfeasance (i.e., hackers, malware, or fraud)
  • You are most likely to be a victim of a trojan attack if you live in:  USA, 43% of global incidents (with Russia a close second at 40%)
  • You are most likely to have your password stolen if you live in:  Brazil, 27% of global incidents (due to a particular malware strain that targets customers of Brazilian banks)
  • You are most likely to be a victim of a worm attack if you live in:  Spain or Korea, both at 40% of global incidents
  • You are most likely to be pestered by annoying adware if you live in:  France, 33% of global incidents
  • You are least likely to be a victim of malware if you live in:  Mongolia, at less than 2% of global incidents
  • Also registering a surprisingly low incidence of attacks:  Pakistan, right around 2% (clarification: these figures reflect malware attacks, not Navy Seal attacks)
  • China maintains a stranglehold on the unique category of malware called “Misc. potentially unwanted software” at 52% of global incidents
  • “Worms accounted for five of the top 10 families detected on domain-joined computers. Several of these worms, including Conficker, Win32/Autorun, and Win32/Taterf, are designed to propagate via network shares, which are common in domain environments.”
  • Phishing attacks, traditionally targeting financial sites for obvious reasons, increasingly targeted social media and gaming sites in 2010

So what lessons can we take away from a careful scrutiny of this data and the underlying patterns therein?  The answer is:  absolutely nothing.  If you didn’t have updated antivirus and antimalware protection on your computer before now, you’re likely reading this on a public computer at the library because yours is infected and out of commission.

But at least now you don’t have to sift through the 88-page Microsoft report for the interesting parts.  You’re welcome.

Memristors’ current carves protected channels

Monday, May 16th, 2011

A circuit component touted as the “missing link” of electronics is starting to give up the secrets of how it works.

Memristors resist the passage of electric current, “remembering” how much current passed previously.

Researchers reporting in the journal Nanotechnology have now studied their nanoscale makeup using X-rays.

They show for the first time where the current switching process happens in the devices, and how heat affects it.

First predicted theoretically in the early 1970s, the memristor was finally realised in prototype form by researchers at Hewlett-Packard in 2008.

They are considered to be the fourth fundamental component of electronics, joining the well-established resistor, capacitor, and inductor.

Because their resistance at any time is a function of the amount of current that has passed before, they are particularly attractive as potential memory devices.

What is more, this history-dependent resistance is reminiscent of the function of the brain cells called neurons, whose propensity to pass electrical signals depends crucially on the signals that have recently passed.

The earliest implementations of the idea have been materially quite simple – a piece of titanium dioxide between two electrodes, for example.
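
To get a feel for that history dependence, here is a toy numerical sketch of the “linear drift” model HP published alongside its 2008 prototype. The parameter values are illustrative assumptions (the mobility in particular is exaggerated so the drift is visible), not measurements from the Nanotechnology paper:

    # Toy "linear drift" memristor: a TiO2 film of thickness D whose doped
    # region of width w grows or shrinks as current flows through it.
    import numpy as np

    R_on, R_off = 100.0, 16e3   # resistance when fully doped / undoped (ohms)
    D = 10e-9                   # device thickness (m)
    mu = 1e-12                  # dopant mobility (illustrative, exaggerated)
    w = 0.5 * D                 # initial width of the doped region

    dt = 1e-5
    t = np.arange(0, 2e-2, dt)
    v = np.sin(2 * np.pi * 100 * t)   # sinusoidal drive voltage

    resistance = []
    for vk in v:
        M = R_on * (w / D) + R_off * (1 - w / D)   # memristance right now
        i = vk / M
        # The doped/undoped boundary drifts in proportion to the current,
        # so the device "remembers" the charge that has already flowed.
        w = np.clip(w + mu * (R_on / D) * i * dt, 0.0, D)
        resistance.append(M)

    print(f"resistance swings between {min(resistance):.0f} and "
          f"{max(resistance):.0f} ohms as the drive cycles")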

What is going on at the microscopic and nanoscopic level, in terms of the movement of electric charges and the structure of the material, has remained something of a mystery.

Now, researchers at Hewlett-Packard, including the memristor’s discoverer Stan Williams, have analysed the devices using X-rays and tracked how heat builds up in them as current passes through.

The team discovered that the current in the devices flowed in a 100-nanometre channel within the device. The passage of current caused heat deposition, such that the titanium dioxide surrounding the conducting channel actually changed its structure to a non-conducting state.

A number of different theories had been posited to explain the switching behaviour, and the team was able to use the results of their X-ray experiments to determine which was correct.

Electronics’ ‘missing link’

The detailed knowledge of the nanometre-scale structure of memristors and precisely where heat is deposited will help to inform future engineering efforts, said Dr Williams.

He recounted the story of Thomas Edison, who said that it took him over 1,000 attempts before arriving at a working light bulb.

“Without this key information [about memristors], we are in ‘Edison mode’, where we just guess and modify the device at random,” he told BBC News.

“With key information, we can be much more efficient in designing devices and planning experiments to improve them – as well as understand the behavior that we see.”

Once these precise engineering details are used to optimise memristors’ performance, they can be integrated – as memory storage components, computational devices, or even “computer neurons” – into the existing large-scale manufacturing base that currently provides computer chips.

“With the information that we gained from the present study, we now know that we can design memristors that can be used for multi-level storage – that is, instead of just storing one bit in one device, we may be able to store as many as four bits,” Dr Williams said.

“The bottom line is that this is still a very young technology, but we are making very rapid progress.”

Source:  bbc.com

Google to rebuild Chrome on secure foundation

Friday, May 13th, 2011

Native Client, an obscure security project at Google, is about to get much more important as the foundation for Chrome, CNET has learned.

Native Client–NaCl for short–got its start as a part of Chrome as a way to run software modules downloaded over the Net safely and quickly. With the move, though, the tables will turn, and Chrome will itself become a NaCl module.

“We want to move more and more of Chrome to Native Client,” Linus Upson, vice president of engineering for the Chrome team, said in an interview at the Google I/O show here. “Over time we want to move the entire browser into Native Client.”

The move is a bold bet on a project that hasn’t yet even been enabled by default in Chrome, much less tested widely in the real world. But if it works, Google will get a new level of security for Chrome–and for its new browser-based operating system, Chrome OS.

Inevitably, programmers introduce bugs into their products. But if Chrome is running as Native Client, those bugs aren’t as big of a security problem: “It becomes extremely difficult for a bad guy to compromise your computer,” Upson said.

Google is starting small, not with the whole browser, Upson said; the first part of Chrome to run within a Native Client framework will be the PDF reader. And that move is coming soon.

“It’ll happen this year,” Upson said.

Native Client innards
To understand Native Client, it’s best to understand its chief alternative today. Web-based software today runs within the browser in JavaScript, a language that’s much slower to run than native software that runs directly on an operating system.

JavaScript performance has grown by leaps and bounds in recent years, helping Google expand what can be done with Web apps such as Google Docs. But JavaScript programs aren’t prepackaged to run on a particular processor the way native software is. Instead, they’re written in higher-level instructions that are compiled on the fly into machine-comprehensible code, which runs not on the hardware but in a virtual environment called a JavaScript engine.

There’s a good reason for that approach. Running native software you just downloaded over the Web, with the full privileges of native software such as Microsoft Word or Adobe Photoshop, poses a huge security risk. It’s the reason today’s operating systems ask if you really want to run that installer you just downloaded: do you really trust the source? If attackers could run whatever software they wanted on your machine just because you happened to visit a particular Web site, it would be a golden age for malware.

Native Client, though, is intended to make that high-risk behavior safe with two main types of protection.

First, it confines running software to a sandbox–in fact, to two levels of sandboxes–that restricts the privileges of the software. Second, it scrutinizes the machine code instructions in advance to make sure the code doesn’t perform any of a set of restricted operations that could enable an attack–for example, writing data to the hard disk or launching new computing processes.
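
As a rough illustration of that second protection (and emphatically not Google’s actual x86 validator), here is a toy Python sketch of the allowlist idea: scan every instruction before execution and refuse to run the module if anything outside the permitted set appears.

    # Toy static verifier: vet untrusted code before running any of it.
    # Real NaCl checks x86 machine code; this applies the same allowlist
    # principle to a made-up little stack language.
    ALLOWED_OPS = {"push", "add", "mul", "print"}   # no disk or process ops

    def verify(program):
        """Reject the program if any instruction is off the allowlist."""
        for lineno, instr in enumerate(program, 1):
            op = instr.split()[0]
            if op not in ALLOWED_OPS:
                raise ValueError(f"line {lineno}: forbidden operation {op!r}")
        return program   # safe to hand to the interpreter

    untrusted = ["push 2", "push 3", "add", "write_disk /etc/passwd"]
    try:
        verify(untrusted)
    except ValueError as err:
        print("rejected before execution:", err)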

Special programming tools
That means not just any old machine-readable binary code will run on NaCl. Instead, a specially crafted compiler must be used to build the NaCl module without any of the offending instructions.

With Native Client, “you can run untrusted machine code, verify it doesn’t do anything bad, move at the full speed of the hardware, and maintain the security model of the Web,” Upson said. “Full speed” is a pretty bold claim, but Google thinks it can reach performance just a few percent shy of an ordinary native program.

Running Chrome within Native Client is one idea. Google has plenty more: decoding video, encrypting corporate data, and running the calculations of video game physics engines. With planned improvements later encompassing 3D graphics, NaCl could be better for more game technology, too.

One big problem for early versions of NaCl was compatibility: because they used native code compiled for a specific processor, Web programmers would have had to produce different versions for different types of chips. Initially only 32-bit x86 chips were supported, and 64-bit ones arrived later; ARM processors–the lineage used in virtually all smartphones today–were not supported at all.

Thus, Google created a variation called PNaCl, short for Portable Native Client. It uses software modules compiled not all the way to native instructions but to an intermediate and universal form called Low Level Virtual Machine (LLVM) bitcode. The browser itself handles the translation the rest of the way into the native language of the processor.

Native Client has passed at least early stages of security scrutiny, and Google is exquisitely sensitive to security issues. The fact that the company is willing to base its entire browser on NaCl is a tremendous vote of confidence for the technology.

Skeptics
But not too many others have voted publicly for NaCl. Unity, a start-up with a cross-platform engine that can be used to build video games on everything from browsers to mobile phones, is one fan. Upson insists there are other interested developers as well, but there haven’t been the public declarations of support that even modestly successful new Web technologies such as WebGL have enjoyed.

Indeed, one major potential ally, Mozilla, isn’t interested. To refresh your memory, Mozilla’s Firefox is the lineal descendant of the Netscape Navigator browser from the 1990s that rattled Microsoft with its promise of new Web-based applications. Mozilla programmers took pleasure in producing a JavaScript version of an image-editing app that Google had earlier produced to show off chores that seemingly were too taxing to run as anything but native code.

Much of the browser world today is focused on Web performance through other means. It’s not just JavaScript that’s getting faster: hardware acceleration is speeding up many graphics tasks, including Cascading Style Sheets (CSS) for formatting and animated transitions; Canvas and Scalable Vector Graphics (SVG) for 2D graphics; and WebGL for 3D graphics. Work on faster processing of Web page elements, faster loading of pages, more intelligent caching, Web page preloading, multithreaded JavaScript work, and other improvements are speeding up other aspects.

So with all these other high-speed Web programming options, might NaCl be left by the wayside? No, said Upson.

“It’s tremendously important,” he said. “The high-level [interfaces of Web browsers] keep getting faster, but they don’t give you the full performance of the hardware.”

In addition, there’s a programming reason for NaCl, he argued. A lot of people write software such as games in C or C++ that’s relatively easy to port to NaCl. “You don’t have to rewrite it in JavaScript,” he said.

For now, though, a lot of that is vision. Upson is working hard to make it reality, though, and hopes to enable Native Client by default in Chrome.

“It’s a hard problem,” Upson said of NaCl. “We don’t want to ship it until it’s really good.”

Source:  CNET

Google Chrome Hacked, But Security Firm Won’t Share Details

Wednesday, May 11th, 2011

A French security firm has discovered the first-known exploit for Google’s Chrome browser, but the firm is prompting concern because it is selling the details to its government clients instead of sharing them with Google.

The exploit was discovered by Vupen Security, which managed to bypass Chrome’s sandbox to execute arbitrary code (see video below). But controversially, “for security reasons,” Vupen said in a blog post that it was only sharing the exploit code and technical details with its government customers, which do not include Google.

“We did not send the technical details to Google, and they did not ask us to do so,” Vupen CEO Chaouki Bekar wrote in an email. “All users of Chrome should be aware now that this browser can be hacked despite its famous sandbox and despite all the marketing that Google has been doing around its security.”

A Google spokesperson said, “We’re unable to verify Vupen’s claims at this time as we have not received any details from them. Should any modifications become necessary, users will be automatically updated to the latest version of Chrome.”

Earlier, Bekar told tech reporter Brian Krebs that the company had no plans of sharing its findings with Google.

“It seems odd that Vupen would brag about a flaw that it plans to sell to government clients for offensive purposes, since doing so might tip off potential targets to be extra cautious,” Krebs wrote at Krebs On Security.

“This also raises the question of how long it will be before hackers figure out a way to defeat the sandbox technology surrounding Adobe’s Reader X, which the company said was based in part on Google’s research,” Krebs wrote. “Currently, there are several zero-day vulnerabilities that Adobe has put off patching in Reader X, out of an abundance of confidence in the ability of its sandbox technology to thwart these attacks.”

Bekar wrote that the Chrome exploit takes advantage of two distinct vulnerabilities: the first is a memory corruption that leads to execution of the first payload at a Low integrity level (inside the sandbox); the second payload then exploits another vulnerability that allows it to bypass the sandbox and execute the final payload at a Medium integrity level (outside the sandbox).

Google Chrome is the only browser that has emerged unharmed from the annual Pwn2Own hacker competition all three years. At Pwn2Own 2011, the one hacker scheduled to attack it, the infamous PS3 hacker “Geohot,” dropped out at the last minute. Meanwhile Microsoft’s Internet Explorer 8 and Apple’s Safari 5.0.3 fell to hackers at the event.

Chrome sandboxes its HTML rendering and JavaScript execution, making it one of the most secure browsers available.

Source:  pcmag.com

Skype Is Microsoft’s Missing Lync

Wednesday, May 11th, 2011

Microsoft, in one of its biggest acquisitions ever, is set to purchase the cloud VoIP service Skype for $8.5 billion. Of course, the question on everyone’s mind is not whether Skype is worth that much (with over 170 million users, no one can dispute it is a valuable service) but just what Microsoft’s plans are for its latest acquisition. My guess: this purchase is largely about enterprise Unified Communications.

Most analysts are weighing in that this acquisition is about Microsoft competing against Apple’s FaceTime and integrating Skype into Windows Phone 7. Yes, the consumer market and Windows Phone 7 are set to gain additional communications capabilities with this acquisition. Yet there are bigger fish to fry as far as revenue goes, and they are in the enterprise space—all of the enterprise, from small and mid-sized businesses to huge corporations. The Skype acquisition will make Microsoft most formidable against its major adversary in Unified Communications, Cisco Systems.

Cisco has been pushing TelePresence and its unified communications offerings, hard. Cisco is so deeply rooted in the switching and router markets (which make up 70 to 90 percent of its core business) that it has had to expand into new markets to avoid stagnation. That venture into the new has not always worked out so well for Cisco. Just this year, Cisco CEO John Chambers spoke about overhauling Cisco’s strategy after some less-than-stellar ventures into the consumer market with set-top boxes, the Flip video camera, and home video conferencing products. Chambers told investors that he was adamant about retaining Cisco’s Unified Communications portfolio and TelePresence video conferencing products—all targeted mainly at the enterprise.

Chambers’ commitment to Unified Communications and TelePresence is no surprise. Microsoft and Cisco are the most widely deployed unified communications suppliers according to a 2010 report from Infonetics, with AT&T not far behind. Unified Communications, the convergence of messaging, VoIP, and conferencing into one platform, is a market expected to top $1 billion by 2013.

Although Microsoft is at the forefront of the UC market, it has always had a challenge with the VoIP aspect of its UC solutions. Integrating VoIP has been too complicated and cost-prohibitive for many businesses, particularly smaller ones. Office Communications Server (OCS) was Microsoft’s UC offering, now replaced by Lync Server 2010. In a survey conducted by Osterman Research, enterprise voice was the feature that organizations with OCS deployed were least likely to offer users, surpassed only by group chat.

To deploy full-featured enterprise VoIP, businesses still needed to deploy and maintain VoIP PBX systems—especially for more advanced tasks like call queueing and IVR—along with compatible handsets, SIP trunks, or hybrid gateways, all administration-intensive and relatively expensive. Lync Server 2010 promises to cut some of the costs of incorporating voice into a UC solution by serving as a full-on PBX replacement. That is a big benefit to businesses weary of managing VoIP. With the Lync client part of Microsoft’s new Office 365 offering, Lync Server available as a hosted UC platform, and the hassle-free rich VoIP that the Skype acquisition could provide, Microsoft can make a strong and enticing case for businesses to move to its new cloud offerings like Office 365.

If Microsoft can leverage the talent that made Skype what it is today alongside its rich cloud offerings for businesses already familiar with Microsoft products, Cisco may have to worry about holding its market share in UC and TelePresence very soon.

Source:   pcmag.com

Microsoft tool helps devs port iOS apps to WP7

Wednesday, May 4th, 2011

Microsoft is trying to make it easier for iOS developers to bring their creations to its Windows Phone 7 platform.

A newly announced service called the iOS to Windows Phone 7 API mapping tool acts as an interchange for developers to take applications they’ve already written for Apple’s platform and figure out how to get the code working with Microsoft’s standards.

“With this tool, iPhone developers can grab their apps, pick out the iOS API calls, and quickly look up the equivalent classes, methods and notification events in WP7,” said Jean-Christophe Cimetiere, Microsoft’s senior technical evangelist, in a blog post announcing the tool. The database can also direct users to a directory of code samples, where they can learn to do some of the same things using Microsoft technologies.

“The code samples allow developers to quickly migrate short blobs of iOS code to the equivalent C# code. All WP7 API documentations are pulled in from the Silverlight, C# and XNA sources on MSDN,” Cimetiere said.

Right now, Cimetiere says the translation tool is designed to work with just a handful of iOS APIs (application programming interfaces), with more to be added in the future. Even so, the two platforms will not line up “one to one” because of basic differences in user interface and architecture, the company said.
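
To picture the kind of lookup such a database performs, here is a toy Python sketch. The entries are illustrative guesses at plausible equivalences, not rows from Microsoft’s actual tool:

    # Toy API mapping table: hypothetical iOS-to-WP7 equivalences.
    IOS_TO_WP7 = {
        "UIAlertView": "System.Windows.MessageBox",
        "NSUserDefaults": "IsolatedStorageSettings",
        "CLLocationManager": "GeoCoordinateWatcher",
    }

    def equivalent(ios_api):
        """Return the closest WP7 counterpart of an iOS API, if known."""
        return IOS_TO_WP7.get(ios_api, "no one-to-one equivalent")

    print(equivalent("UIAlertView"))          # System.Windows.MessageBox
    print(equivalent("UIGestureRecognizer"))  # no one-to-one equivalent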

Along with this new service, Cimetiere mentioned that the company is working on a similar offering for Google’s Android, though he did not provide a date for its arrival.

Porting games and applications from one platform to another is nothing new, though providing first-party documentation to help get the job done is a tactical gesture on Microsoft’s part. It’s clearly in Microsoft’s interest as the company nears the launch of the first wave of devices it’s collaborated on with Nokia as part of the two companies’ strategic partnership. A report released by market research firm Distimo earlier this week noted that Microsoft’s application marketplace is on track to be larger than Nokia’s Ovi store and BlackBerry App World in less than a year since its launch, based on current growth rates.

Source:  cnet.com

Failure Cascading Through the Cloud

Wednesday, May 4th, 2011

Two major outages illustrate how complicated it is to keep a cloud system up and running

Recently two major cloud computing services, Amazon’s Elastic Compute Cloud and Sony’s PlayStation Network, have suffered extended outages. Though the circumstances of each were different, details that the companies have released about their causes show how delicate complex cloud systems can be.

Cloud computing services have grown in popularity over the past few years; they’re flexible, and often less expensive than owning physical systems and software. Amazon’s service attracts business customers who want the power of a modern, distributed system without having to build and maintain the infrastructure themselves. The PlayStation Network offers an enhanced experience for gamers, such as multi-player gameplay or an easy way to find and download new titles. But the outages illustrate how customers are at the mercy of the cloud provider, both in terms of fixing the problem, and in terms of finding out what went wrong.

The Elastic Compute Cloud—one of Amazon’s most popular Web services—was down from Thursday, April 21, to Sunday, April 24. Popular among startups, the service is used by Foursquare, Quora, Reddit, and others. Users can rent virtual computing resources and scale up or down as their needs fluctuate.

Amazon’s outage was caused by a feature called Elastic Block Store, which provides a way to store data so that it works optimally with the Elastic Compute Cloud’s virtual machines. Elastic Block Store is designed to protect data from being lost by automatically creating replicas of storage units, or “nodes,” within Amazon’s network.

The problem occurred when Amazon engineers attempting to upgrade the primary Elastic Block Store network accidentally routed some traffic onto a backup network that didn’t have enough capacity. Though this individual mistake was small, it had far-reaching effects that were amplified by the systems put in place to protect data.

A large number of Elastic Block Store nodes lost their connection to the replicas they had created, causing them to immediately look for somewhere to create a new replica. The result was what Amazon calls “a re-mirroring storm” as the nodes created new replicas. The outage worsened as other nodes began to fail under the traffic onslaught, creating even more orphans hunting for storage space in which to create replicas.
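
A toy simulation makes that amplification easy to see. The Python sketch below uses invented parameters, not Amazon’s real capacities: each round, orphaned replicas generate re-replication demand; demand beyond the surviving nodes’ total capacity knocks out more nodes; and every failed node orphans replicas of its own.

    # Toy "re-mirroring storm": illustrative numbers only.
    nodes = 1000             # healthy storage nodes
    per_node_capacity = 10   # re-replication ops a node can serve per round
    replicas_per_node = 100  # replicas hosted on each node
    orphans = 20_000         # replicas orphaned by the initial network error

    for round_no in range(1, 8):
        capacity = nodes * per_node_capacity
        excess = max(0, orphans - capacity)
        # Nodes buried under the excess traffic fail (the divisor is an
        # arbitrary resilience factor for this toy model)...
        failed = min(nodes, excess // (per_node_capacity * 5))
        nodes -= failed
        # ...and every failed node orphans its own replicas.
        orphans = excess + failed * replicas_per_node
        print(f"round {round_no}: {failed:4d} nodes failed, "
              f"{nodes:4d} remain, {orphans:6d} replicas still orphaned")
        if failed == 0:
            break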

Amazon’s attempts to fix the problem were stymied by the need to avoid interference with other systems. For example, Elastic Block Store doesn’t reuse failed nodes, since the engineers who built it assumed they would contain data that might need to be recovered.

Amazon says the problem has led to better understanding of its network. “We now understand the amount of capacity needed for large recovery events and will be modifying our capacity planning and alarming so that we carry the additional safety capacity that is needed for large scale failures,” the team responsible for fixing the network wrote in a statement.

However, some experts question whether this will really help prevent future outages. “It’s not just individual systems that can fail,” says Neil Conway, a PhD student at the University of California, Berkeley, who works on a research project involving large-scale and complex computing platforms. “One failure event can have all of these cascading effects.” A similar problem led to a temporary failure of Amazon’s Simple Storage Service in 2008.

One of the biggest challenges, Conway says, is that “testing is almost impossible, because by definition these are unusual situations.” He adds that it’s difficult to simulate the behavior of a system as large and complex as Amazon Web Services, or even to know what to simulate.

Conway expects companies and researchers to look into new ways of testing abnormal situations for cloud computing systems. “The severity of the outage and the time it took [Amazon] to recover will draw a lot of people’s attention,” he says.

Sony’s PlayStation Network, an online gaming platform linked to the PlayStation 3, has yet to be fully restored after its outage on April 20. The company took it down in response to a security breach and has been frantically reworking the system to keep it better protected in the future. In a press release, Sony offered some details of its progress to date. The company has added enhanced levels of data protection and encryption, additional firewalls, and better methods for detecting intrusions and unusual activity.

For both Sony and Amazon, these struggles are happening in public, under pressure, and under the scrutiny of millions. Systems as complex as cloud services are going to fail, and it’s impossible to anticipate all the conditions that could lead to trouble. But as cloud computing matures, companies will build more extensive testing, monitoring, and backup systems to prevent outages resulting in public embarrassment and financial loss.

Source:  MIT Technology Review

Malicious software features Usama bin Laden links to ensnare unsuspecting computer users

Wednesday, May 4th, 2011

The FBI today warns computer users to exercise caution when they receive e-mails that purport to show photos or videos of Usama bin Laden’s recent death. This content may actually be a virus that can damage your computer. This malicious software, or “malware,” can embed itself in computers and spread to users’ contact lists, thereby infecting the systems of associates, friends, and family members. These viruses are often programmed to steal your personally identifiable information.

The Internet Crime Complaint Center (IC3) urges computer users not to open unsolicited (spam) e-mails or click links contained within those messages. Even if the sender is familiar, the public should exercise due diligence. Computer owners must ensure they have up-to-date firewall and anti-virus software running on their machines to detect and deflect malicious software.

The IC3 recommends the public do the following:

  • Adjust the privacy settings on social networking sites you frequent to make it more difficult for people you know and do not know to post content to your page. Even a “friend” can unknowingly pass on multimedia that’s actually malicious software.
  • Do not agree to download software to view videos. These applications can infect your computer.
  • Read e-mails you receive carefully. Fraudulent messages often feature misspellings, poor grammar, and nonstandard English.
  • Report e-mails you receive that purport to be from the FBI. Criminals often use the FBI’s name and seal to add legitimacy to their fraudulent schemes. In fact, the FBI does not send unsolicited e-mails to the public. Should you receive unsolicited messages that feature the FBI’s name, seal, or that reference a division or unit within the FBI or an individual employee, report it to the Internet Crime Complaint Center at www.ic3.gov.

Source:  fbi.gov

New MACDefender malware discovered for OS X

Monday, May 2nd, 2011

Mac antivirus and security developer Intego has issued a blog report on a new malware threat for OS X systems called “MACDefender” that has surfaced. The threat is a trojan horse that is being targeted to Mac systems through “Search Engine Optimization (SEO) poisoning” efforts, and uses Safari’s “Open Safe Files” feature to run the installer for the malware.

SEO poisoning takes advantage of common search terms that Google, Yahoo, Bing, and other search engines use to present results, and forces a malicious web page to the top of the search provider’s results page. If you then click the link to the malicious Web page, harmful scripts and routines are run against your system.

In this case, the malware sites are taking advantage of Safari’s “Open Safe Files” feature to download a zip file containing the MACDefender malware installer, which is then launched automatically by Safari.

It is unknown what the MACDefender malware does, but in this case it appears the attackers are attempting to further trick users by disguising the malware as a legitimate anti-malware scanner.

MACDefender malware installer: the installer will open automatically if you visit a malicious site hosting the software while Safari’s “Open Safe Files” feature is enabled. (Credit: Intego)

Be sure to never install software that automatically downloads from the Internet. If the installer screen for MACDefender, or any other installer window, shows up without your prior intent to install the software, be sure to quit the installer. Force-quit it if you have to by pressing Option-Command-Escape to bring up the force-quit window. This will ensure you do not interact with the installer’s interface, which in itself may be suspect.

If you have installed the MACDefender software, you should be able to uninstall the software by searching for and removing any references to “MACDefender” on your system. You may want to check the following locations for files that MACDefender may have installed:

  1. Applications Folder — Go to the Applications folder (and subfolders like “Utilities”) and remove any folder or application that is associated with MACDefender. List folder contents by date modified or created to see if any files have been put there recently, and remove them.

  2. Login Items — Go to the “Login Items” section of the Accounts system preferences and remove any reference to MACDefender in there. Do this for all accounts on the system.

  3. Activity Monitor — Open Activity Monitor and sort the list of running processes by name. Then locate any that you suspect are associated with MACDefender and force-quit them. Unfortunately this may be more difficult to do if the name of the running process is different from MACDefender, but it is worth a shot.

  4. Launch Agents and Daemons — Go to the following folders and see if any launch daemon or agent property list files reference MACDefender (open them and search through them if necessary). Do this for all files located in the directories below, but be sure you only remove files that are clearly associated with MACDefender; if you remove others, you may disable OS X features and destabilize your system. A small script like the one sketched after this list can automate the check:

    /System/Library/LaunchDaemons/
    /System/Library/LaunchAgents/
    /Library/LaunchDaemons/
    /Library/LaunchAgents/
    ~/Library/LaunchDaemons/
    ~/Library/LaunchAgents/
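
Here is the sort of minimal script referred to above; it is a sketch that assumes the standard launchd locations and only reports matches, so review every hit yourself before deleting anything:

    # Scan the standard launchd folders for items mentioning MACDefender.
    import glob
    import os

    LAUNCHD_DIRS = [
        "/System/Library/LaunchDaemons",
        "/System/Library/LaunchAgents",
        "/Library/LaunchDaemons",
        "/Library/LaunchAgents",
        os.path.expanduser("~/Library/LaunchDaemons"),
        os.path.expanduser("~/Library/LaunchAgents"),
    ]

    for directory in LAUNCHD_DIRS:
        for plist in glob.glob(os.path.join(directory, "*.plist")):
            try:
                with open(plist, "rb") as fh:
                    if b"MACDefender" in fh.read():
                        # Report only; decide for yourself what to delete.
                        print("suspicious launch item:", plist)
            except OSError:
                pass   # unreadable file; skip it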

Intego is updating the antivirus definitions for its VirusBarrier X6 software to address this threat, and other legitimate antivirus companies are likely doing the same for their programs. Therefore, if you run VirusBarrier or another antivirus utility, be sure to check for an update soon and run a full scan on your system to remove the MACDefender malware.

Safari Preferences: disable Safari’s “Open Safe Files” feature to help avoid these types of threats.

While this threat is a new attack attempt on OS X users, its threat level is relatively low because it requires a fair amount of user interaction to install the malware. You have to search for one of the poisoned terms, click through to the malicious result, and then proceed with the installation by manually clicking the buttons in the installer window. As long as you avoid doing this for software you have not purposefully downloaded, you should be good to go.

An additional security point is that threats like this will have a more difficult time affecting your system if you run it under a Standard or Managed account instead of an administrator account. That way, even if threats are installed, they will have a more difficult time accessing vital or private components of your system.

Finally, if you are concerned about this and similar threats, be sure to uncheck Safari’s “Open safe files after downloading” option, available in the “General” section of Safari’s preferences. Doing this will prevent Safari from automatically launching malware files that have been disguised as legitimate documents, disk images, or archives.

Source:  cnet.com