Archive for October, 2011

ARM welcomes Windows with 64-bit chips for desktops and servers

Friday, October 28th, 2011

On Thursday, ARM took the wraps off its long-awaited 64-bit version of the ARM instruction set architecture (ISA). Called ARMv8, the new extensions will put ARM squarely in competition with Intel in the server and desktop markets. It’s important to note that ARM’s move to 64 bits isn’t about performance — rather, it’s strictly about giving ARM-based platforms the ability to cleanly and efficiently address more than 4 GB of usable memory. In exchange for 64-bit support, ARM will be giving up a bit of power efficiency, which indicates that the initial batch of chips based on ARMv8 will be targeted not at mobiles but at the aforementioned servers and desktops.

The main mobile application for 64-bit ARM will be the forthcoming ARM-based Windows tablets. Neither Microsoft nor its third-party developers will want a 32-bit/64-bit split between the desktop and tablet versions of the OS, so the Windows ARM port will be 64-bit from the start. Indeed, it’s likely that ARM’s commitment to a 64-bit version of its architecture was a precondition for Microsoft agreeing to the Windows port at all.

Given that this 64-bit move was made in large part with Microsoft in mind, it’s no surprise that Microsoft was on hand to provide supporting quotes for ARM’s v8 announcement. NVIDIA was also there, and the chipmaker hopes to position its forthcoming ARM-based CPU family as the premier way to run Windows on ARM. The GPU powerhouse makes no secret that it is going after Intel in the desktop, high-performance CPU space with its ARM family of general-purpose microprocessors.

64-bit costs and benefits

As was the case with AMD’s (and later Intel’s) 64-bit extensions to x86, ARMv8 will effectively be a superset of ARMv7. All ARMv8 chips will run legacy 32-bit ARMv7 code in the AArch32 execution state, while 64-bit code will be run in the AArch64 state.

The present ARM architecture has a maximum integer size of 32 bits, meaning ARM chips to date can store and operate on memory addresses that are at most 32 bits wide. Because computers use the base-2 binary number system, the largest number that can fit into one of ARM’s 32-bit registers (a register is a small bit of memory on the chip) is 2^32, or a little over 4 billion. So a computer with a 32-bit register file can “see” and manipulate no more than 4 billion bytes, or 4 GB, of memory.

In 2010 ARM announced a 40-bit virtual memory extension for ARMv7, which uses a two-stage address translation to get around the 4 GB memory limitation. This kind of sleight of hand is far from ideal, however, compared to a clean 64-bit implementation of the type that ARM is announcing with v8.

By widening the integer registers in its register file to 64 bits, ARM v8 can store and operate on memory addresses in the range of millions of terabytes. Of course, in the near and medium term it won’t be feasible to build machines with such large memory capacities, and in reality even current 64-bit systems from Intel and AMD don’t support such large amounts of memory as a practical matter. But the move to 64 bits will put ARM on a more equal footing with x86 by removing future barriers to growth.
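
To sanity-check both figures, a couple of lines of Python will do (the arithmetic here is purely illustrative, nothing ARM-specific):

  gb, tb = 2**30, 2**40
  print(2**32 / gb)   # 4.0 -> the 4 GB ceiling of a 32-bit address space
  print(2**64 / tb)   # 16777216.0 -> roughly 16.8 million TB with 64-bit addresses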

ARM’s 64-bit parts will pay a price in power efficiency for the boost to memory capacity and forward compatibility. The wider register file uses more logic and power on the die, and the wider integers and addresses will increase internal and external bus traffic. At the level of a desktop or server microprocessor, the power cost of these two items will be completely negligible. But in the sub-milliwatt regime where many of ARM’s mobile chips operate, there would be no reason to pay a power penalty for an increased memory capacity that won’t be used anyway. (Mobiles don’t need more than 4 GB of memory now, and won’t for the foreseeable future.) So except for chips aimed at tablets and laptops, ARM’s mobile offerings will likely stay 32-bit for quite a long time.

Source: wired.com

Linux Foundation wades into Windows 8 secure boot controversy

Friday, October 28th, 2011

The Linux Foundation wants OEMs to give control of the PC to its owner

The Linux Foundation today released technical guidance to PC makers on how to implement secure UEFI without locking Linux and other free software out of new Windows 8 machines. The guidance included a subtle tsk-tsk at Microsoft’s Steven Sinofsky for suggesting that PC owners won’t want to mess with control of their hardware and would happily concede it to operating system makers and hardware manufacturers.

Hey, why should the Free Software Foundation get the last word with its anti-secure-boot petition?

To recap: the next-generation boot specification is known as the Unified Extensible Firmware Interface. Microsoft is requiring Windows 8 PC makers to use UEFI’s secure boot protocol to qualify for Microsoft’s Windows 8 logo program. Secure UEFI is intended to thwart rootkit infections by using a key infrastructure before allowing executables or drivers to be loaded onto the device. Problem is, such keys can also be used to keep a PC’s owner from wiping out the current OS and installing another one, such as Linux. They can also prevent the owner from loading their own device drivers.

It is possible for OEMs to implement secure UEFI in a way that lets users simply disable it. Sinofsky, who is president of Microsoft’s Windows division, pointed this out in a blog post last month. He also noted that the Samsung Windows 8 developer tablet given away to BUILD attendees could disable secure boot. But Microsoft is not mandating the disable option. Matthew Garrett, a developer who works for Red Hat and has been involved in the UEFI specification process, has said that Red Hat is aware of some Windows 8 PCs that do not give users a way to disable secure UEFI.

The issue becomes even trickier if PC owners don’t want to disable secure UEFI and still want to be able to load Linux or to dual-boot Windows and Linux. In that case, they need access to the master platform key. Only the owner of the platform key can authorize new firmware or operating systems to be loaded onto the device. Then they will need a way to manage the signature database that validates the firmware, drivers and operating system.
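
As a rough illustration of the trust relationships involved (a toy model only: real UEFI uses X.509 certificates and signed variable updates, not the HMAC stand-in below), the PK-gated signature database might be sketched in Python like this:

  import hashlib, hmac

  class ToySecureBoot:
      """Toy model: the platform key (PK) gates changes to the
      signature database; the boot check consults that database."""

      def __init__(self, platform_key: bytes):
          self.pk = platform_key
          self.db = set()  # stand-in for the firmware signature database

      def enroll(self, image: bytes, proof: bytes):
          # Only the PK holder may authorize new entries, modeled here
          # as an HMAC proof computed over the loader image.
          want = hmac.new(self.pk, image, hashlib.sha256).digest()
          if not hmac.compare_digest(want, proof):
              raise PermissionError("update not authorized by platform key")
          self.db.add(hashlib.sha256(image).hexdigest())

      def boot(self, image: bytes) -> bool:
          # The firmware refuses to load anything not in the database.
          return hashlib.sha256(image).hexdigest() in self.db

In this model, an owner who holds the PK can enroll a Linux bootloader alongside Windows; an owner who has ceded the PK to a vendor cannot.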

Many free software advocates fear Microsoft is pushing an approach in which the key does not wind up in the hands of the device’s owner. “Steven Sinofsky has suggested in his blog posting … that the average platform owner might wish to give up control of the PK [platform key] (and with it control of the signature database) to Microsoft and the OEM suppliers of the platform. This mode of operation runs counter to the UEFI recommendation that the platform owner be the PK controller,” the authors say in their paper, “Making UEFI Secure Boot Work With Open Platforms.” The paper was written by James Bottomley, CTO at Parallels, and Jonathan Corbet, editor at LWN.net, both of whom are on the Linux Foundation Technical Advisory Board.

The paper’s authors concede that some PC owners may have no desire to manage a PK infrastructure just to use their PCs and would just as soon hand it over to Microsoft, even if that means they will not be able to load drivers or operating systems unless Microsoft first approves them.

But for those who want control and want the extra security secure UEFI affords, the Linux Foundation is proposing several guidelines. It wants:

1) all platforms that enable UEFI secure boot to ship “in setup mode,” where the PC owner is the one who initially controls the platform key. The owner can choose a key controlled by Microsoft at that time. The device owner should also be able to return to setup mode and change that choice, which is particularly important if the owner sells the machine.

2) an operating system to detect when the PC is in setup mode and install keys appropriately at that time and then activate secure boot mode.

3) a firmware-based mechanism used to allow a platform owner to add new keys for validating software while running in secure mode so that dual-boot systems can be set up.

4) a firmware-based mechanism for easy booting off of removable media.

5) an operating-system- and vendor-neutral certificate authority to be established, at some future time, to issue keys for third-party hardware and software vendors. However, the paper notes that while this would make using secure UEFI easier, a new CA isn’t mandatory.

The authors emphasize that secure UEFI doesn’t have to be a technology that drives stakes between Microsoft and free software.

“Some observers have expressed concerns that secure boot could be used to exclude open systems from the market, but, as we have shown, there is no need for things to be that way,” they write. “If vendors ship their systems in the setup mode and provide a means to add new [keys] to the firmware, those systems will fully support open operating systems while maintaining compliance with the Windows 8 logo requirements.”

Still, how much burden will the average Windows 8 consumer want to take on to manage secure UEFI? How much will the typical enterprise want to do? Can PC makers find a balance?

Source:  networkworld.com

Cisco rolls out military-strength encryption for ISR router

Friday, October 28th, 2011

NSA-formulated point-to-point encryption module announced for ISR G2 router

Cisco has announced a hardware encryption module for its ISR G2 router that allows point-to-point encryption of IP traffic based on what’s called “Suite B,” the set of encryption algorithms designated by the National Security Agency for Department of Defense communications.

According to Sarah Vanier, security solutions marketing at Cisco, the VPN Internal Service Module for the Cisco ISR G2 router lets information technology managers select how to use any of the main encryption algorithms as well as the SHA-2 hash algorithm to protect sensitive information traveling between any two routing points equipped with the module.

“The module allows you to offload the encryption process on to the card,” says Vanier, with the hardware doing the hard work of encrypting and decrypting traffic at the originating and terminating points.

The selection of encryption and hash algorithms in the Cisco card includes the Advanced Encryption Standard, standards-based elliptic-curve cryptography and Triple-DES, to satisfy encryption requirements that might range from unclassified to Top Secret in military networks, she said.

The card, which is said to support up to 3,000 concurrent tunnels with throughput of up to 1.2Gbps, can make use of the SHA-2 hash algorithm to assure data integrity between the two router points.
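
Cisco hasn’t published the exact construction used on the card, but hash-based integrity checks generally follow the pattern in this short Python sketch, in which the key and payload are invented:

  import hashlib, hmac

  shared_key = b"negotiated-session-key"   # hypothetical key shared by both routers
  packet = b"traffic between the two router points"

  # The sender attaches a keyed SHA-256 digest to the packet...
  tag = hmac.new(shared_key, packet, hashlib.sha256).digest()

  # ...and the receiver recomputes it. Any modification in transit
  # changes the digest, so a tampered packet is rejected.
  recomputed = hmac.new(shared_key, packet, hashlib.sha256).digest()
  assert hmac.compare_digest(tag, recomputed)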

Nelson Chao, Cisco product manager, said the Cisco encryption card does not currently support multi-cast encryption, but that is anticipated to be supported by Cisco in the future, perhaps late next year.

Cisco also points out that the encryption module is still undergoing official encryption testing to achieve the government’s FIPS-level certification, but the module is shipping now.

The Cisco VPN Internal Service Module for the ISR G2 starts at $2,000.

Source:  networkworld.com

Microsoft claims Hyper-V will leapfrog VMware

Friday, October 28th, 2011

The next version of Hyper-V promises management and storage features that VMware can’t touch. VMware disagrees.

After years of playing catch-up to VMware, the upcoming version of Hyper-V is wowing the Microsoft faithful with unique new features — and gaining the attention of VMware users, too, one consultant says.

Hyper-V will get an overhaul as part of the release of Microsoft’s Windows Server 8. Microsoft has not announced a ship date for Windows Server 8, although roadmap documents released some time ago pegged it for 2012 (earning it the nickname of Windows Server 2012). A developer’s preview of Windows Server 8, including Hyper-V, was made available during Microsoft’s BUILD conference in Anaheim, Calif., in September.

The new Hyper-V is “at least on par and in some ways better than VMware,” says Aidan Finn, a Microsoft Most Valuable Professional working as an IT consultant in Dublin, Ireland. The MVP program honors individuals who share knowledge about Microsoft products but are independent of the company. Finn is the author of “Mastering Hyper-V Deployment” [Sybex, November 2010].

“Certainly, at the moment, Hyper-V is a better value for the money. When you start looking at some of the new features, it catches up with vSphere, and users are also getting some stuff that vSphere only does at top end,” Finn says.

Finn says Hyper-V exceeds VMware in three areas: support for cheap server-attached storage and Just A Bunch Of Disks (JBOD) with features such as Share Nothing Live Migration; site-to-site failover for disaster recovery with a feature called Hyper-V Replica; and virtual networking with a feature called Hyper-V Extensible Switch. In addition, Hyper-V now scales to massive sizes, supporting more logical processors and allowing each virtual machine access to more virtual CPUs.

Mike Schutz, Microsoft’s senior director of Windows Server and virtualization, contends that Hyper-V adds features that none of its competitors have.

Most of Hyper-V’s gains over VMware start with storage, Finn believes. Hyper-V no longer requires a NAS, SAN or cluster. “VMware has had vMotion and high availability features, [but] they’ve been treated as the same thing in the Hyper-V world,” he says.

No more.

Right now, “you have to store virtual machines on a SAN,” Finn says. Ergo, if you want to give Hyper-V a try, you have to have a SAN or be willing to buy one. That’s a very expensive thing to do, even on the low end. This changes in Windows Server 8. Hyper-V will be able to store virtual machines on a file server. Microsoft invested in remote direct memory access (RDMA) and built a new version of the Server Message Block file server protocol, dubbed SMB 2.2, which uses RDMA. This lets Hyper-V access files on a remote file server, and allows users to build an active/active cluster between server-attached storage devices. So if a file server fails, it automatically fails over to another one, Finn says.

Live Migration will be supported between the server-attached storage devices, too, a feature Microsoft calls Share Nothing Live Migration. This is something “no one else in the market is really able to do today,” Schutz says. Share Nothing allows a virtual hard drive and a virtual machine to be transferred between server-attached disks over a network connection.

Continue to source article:  networkworld.com

Intel chips let web sites check your computer’s ID

Friday, October 28th, 2011

Passwords can be phished, and carrying an extra key fob security device for accessing sensitive sites can be inconvenient. So Intel is putting authentication technology into its chips that will allow Web sites to verify that it’s your PC logging into your online account and not an imposter or thief.

Intel Identity Protection Technology is being added to the chipsets of some Core and Core vPro processor-based PCs from HP, Lenovo, Sony and others that began shipping to consumers this summer, according to Jennifer Gilburg, marketing director for Intel’s authentication technology unit.

This is two-factor authentication, which adds an extra layer of security so that even if your password gets stolen, whoever knows your secret code can’t get into your account without offering more identification or proof of account ownership. In two-factor systems, the first factor is what you know–password and username. The second factor is what you have–usually a hardware token, but in this case it’s a token that’s embedded in the chip.

“My three brothers have had e-mail accounts hijacked. My younger brother gets his Facebook account hijacked like once a month,” she said in a recent interview with CNET. “This is a frictionless login that can’t be hijacked or phished or compromised.”

Here’s how it works. When you visit a Web site that offers this two-factor authentication service, you will be asked if you want to use the Identity Protection Technology. If you opt in, you log in with your username and password, and a unique number is assigned to that PC so the site will know it is associated with your account. Thereafter, when you visit that site and type in your username and password, an algorithm running on the embedded processor generates a six-digit code that changes every 30 seconds, which is then validated by the site.
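
Intel hasn’t published the exact algorithm, but the behavior described matches the standard time-based one-time password construction (TOTP, RFC 6238), which a few lines of Python can reproduce; the secret below is a hypothetical per-PC token:

  import hashlib, hmac, struct, time

  def totp(secret: bytes, interval: int = 30, digits: int = 6) -> str:
      # Hash a counter derived from the current time with the shared
      # secret; site and chipset compute the same code in each window.
      counter = struct.pack(">Q", int(time.time()) // interval)
      digest = hmac.new(secret, counter, hashlib.sha1).digest()
      offset = digest[-1] & 0x0F
      code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
      return str(code % 10 ** digits).zfill(digits)

  print(totp(b"per-pc-embedded-secret"))  # e.g. "492039", valid for ~30 seconds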

“It’s seamless to the user after set up,” Gilburg said.

The Web site needs to be using technology that works with the Intel chip to enable this two-factor authentication. For example, VeriSign sites use Symantec’s VIP (Validation and Identity Protection) Service technology on their end to communicate with Intel’s chip-level technology on the customer’s computer. Symantec acquired VeriSign’s authentication services unit last year.

Some sites will be rolling the service out over the next few months, and they will be using Java-based software, according to Gilburg. She couldn’t say how many sites are now offering the authentication support, but according to a list on Intel’s site they include eBay and PayPal.

“They need to get Amazon, Google, whoever does authentication (on sites) and sells you stuff” on board, said Jack Gold, founder of tech analyst firm J. Gold Associates.

The technology could also be used for activities like downloading songs, he said, adding “It’s basically a way of protecting the user and telling the site at the other end that this really is the legitimate user.”

If you want to use the authentication but you aren’t at your regular computer, some Web sites offer an SMS option in which a code can be sent to a customer’s phone.

The new Intel technology comes at a good time, when stolen passwords and hijacked accounts are becoming commonplace and traditional hardware token-based systems are running into problems. Earlier this year, there was a serious hacker break-in at RSA that prompted some corporations, government agencies and other organizations to replace their SecurID tokens.

“The RSA breach showed the vulnerability of hardware tokens from a disaster recovery perspective,” Gilburg said. “It took months to remanufacture, reseed (pair codes with tokens and accounts) and reship out the tokens. Here you can revoke and reprovision in minutes.”

The Intel solution is a good one for now, said Charlie Miller, principal research consultant at security firm Accuvant.

“It seems like a pretty natural migration as many security related things are moving from software to hardware to protect them from prying eyes,” he said. “As for drawbacks, there might be a privacy issue, but it’s hard to think how it would be significantly worse than tying a computer to a website via cookies and other current software mechanisms.”

Source:  CNET

Spotted in Iran, trojan Duqu may not be “son of Stuxnet” after all

Friday, October 28th, 2011

A year after the Stuxnet worm targeted industrial systems in Iran and surprised security researchers with its sophistication, a new Trojan called Duqu has spread in the wild and is being called the “Son of Stuxnet” and a “precursor to a future Stuxnet-like attack.” Researchers from Symantec say Duqu and Stuxnet were likely written by the same authors and based on the same code.

But further analyses by security researchers from Dell suggest Duqu and Stuxnet may not be closely related after all. That’s not to say Duqu isn’t serious, as attacks have been reported in Sudan and Iran. But Duqu may be an entirely new breed, with an ultimate objective that is still unknown.

A report yesterday from Dell SecureWorks analyzing the relationship between the two casts doubt on the idea that Duqu is related to Stuxnet. For example, Dell says:

  • Duqu and Stuxnet both use a kernel driver to decrypt and load encrypted DLL (Dynamic-Link Library) files. The kernel drivers serve as an “injection” engine to load these DLLs into a specific process. This technique is not unique to either Duqu or Stuxnet and has been observed in other unrelated threats.
  • The kernel drivers for both Stuxnet and Duqu use many similar techniques for encryption and stealth, such as a rootkit for hiding files. Again, these techniques are not unique to either Duqu or Stuxnet and have been observed in other unrelated threats.

And while Stuxnet and Duqu each “have variants where the kernel driver file is digitally signed using a software signing certificate,” Dell says this commonality is insufficient evidence of a connection “because compromised signing certificates can be obtained from a number of sources.”

While Stuxnet spread through USB sticks and PDF files, the Duqu infection method is still unknown, Dell said. Unlike Stuxnet, Duqu doesn’t have specific code targeting SCADA (supervisory control and data acquisition) components. Duqu provides attackers with remote access to compromised computers with the ability to run arbitrary programs, and can theoretically be used to target any organization, Dell said.

“Both Duqu and Stuxnet are highly complex programs with multiple components,” Dell says. “All of the similarities from a software point of view are in the ‘injection’ component implemented by the kernel driver. The ultimate payloads of Duqu and Stuxnet are significantly different and unrelated. One could speculate the injection components share a common source, but supporting evidence is circumstantial at best and insufficient to confirm a direct relationship. The facts observed through software analysis are inconclusive at publication time in terms of proving a direct relationship between Duqu and Stuxnet at any other level.”

The security vendor Bitdefender has also cast doubt on the supposed Duqu/Stuxnet link in its Malwarecity blog. “We believe that the team behind the Duqu incident are not related to the ones that released Stuxnet in 2010, for a number of reasons,” Bitdefender’s Bogdan Botezatu writes. While a rootkit driver used in Duqu is similar to one identified in Stuxnet, that doesn’t mean it’s based on the Stuxnet source code.

“A less known aspect is that the Stuxnet rootkit has been reverse-engineered and posted on the Internet,” Botezatu writes. “It’s true that the open-sourced code still needs some tweaking, but an experienced malware writer could use it as inspiration for their own projects.” The fact that Stuxnet and Duqu seem to be targeting different systems and the fact that reusing code would not be a smart move for attackers argue against a link, he continues.

“Code reuse is a bad practice in the industry, especially when this code has been initially seen in legendary e-threats such as Stuxnet,” he writes. “By now, all antivirus vendors have developed strong heuristics and other detection routines against industry heavy-weights such as Stuxnet or Downadup. Any variant of a known e-threat would likely end up caught by generic routines, so the general approach is ‘hit once, then dispose of the code.’”

Symantec, however, seems convinced of the link to Stuxnet. “Duqu is essentially the precursor to a future Stuxnet-like attack,” Symantec writes. “The threat was written by the same authors (or those that have access to the Stuxnet source code) and appears to have been created since the last Stuxnet file was recovered. Duqu’s purpose is to gather intelligence data and assets from entities, such as industrial control system manufacturers, in order to more easily conduct a future attack against another third party. The attackers are looking for information such as design documents that could help them mount a future attack on an industrial control facility.” Symantec bolsters its case by noting that executables designed to capture keystrokes and system information “using the Stuxnet source code” have been discovered.

Kaspersky Lab agrees that Stuxnet and Duqu are similar and in fact says the main distinguishing factor between the two is the “detection of only a very few [Duqu] infections.” A handful of infections have been found, including one in Sudan and three in Iran. But the Duqu end game is unknown.

Kaspersky Lab Chief Security Expert Alexander Gostev says in a statement, “Despite the fact that the location of the systems attacked by Duqu are located in Iran, to date there is no evidence of there being industrial or nuclear program-related systems. As such, it is impossible to confirm that the target of the new malicious program is the same as that of Stuxnet. Nevertheless, it is clear that every infection by Duqu is unique. This information allows one to say with certainty that Duqu is being used for targeted attacks on pre-determined objects.”

Researchers will no doubt uncover more information about Duqu in the coming weeks and come up with methods of thwarting Duqu-based attacks. Microsoft, among many others, has released antivirus signature updates covering variants of the Duqu Trojan.

Source:  arstechnica.com

Citrix: Virtual desktops about to become cheaper than physical ones

Thursday, October 27th, 2011

Citrix says it’s driven down costs to the point that next year it will be cheaper to deploy virtual desktops than traditional desktops.

Helping to drive down costs is a new system-on-a-chip the company has designed, called HDX, which it says will be produced by Texas Instruments and put into commercial devices by NComputing as well as other partners to be named.

The price of the zero-client device will be less than $100, says Wes Wasson, the company’s chief marketing officer, who was speaking in advance of the formal announcement today at the Citrix Synergy conference in Barcelona.

He says that the reference design Citrix has come up with for the chip could be implemented in a variety of devices including zero-client boxes the size of cigarette packages, monitors, keyboards and cable TV set-top boxes.

These smaller devices could find places in hospitals where they could be deployed on carts moved room-to-room or on factory floors.

The chip would offer hardware assistance optimized for virtual desktops, making optimal use of bandwidth and boosting performance. It would replace individual chips that otherwise must be combined on a motherboard at greater cost, Wasson says.

He says the reduced costs apply to the up-front, first-year capital costs of virtual desktops, which will now be lower than the cost of buying physical desktop hardware. “In the next six months the cost of setting up virtual desktops will be less than setting up physical desktops across the board,” he says. “It takes cost off the table” as a hindrance.

The company is also announcing that it is adding data sharing to its GoToMeeting collaboration service, built on its ShareFile technology. This includes what Citrix calls the Follow-me Data Fabric, in which the service enables a range of devices, including mobile devices, to search, share, sync, send, encrypt and remote-wipe data.

The same technology will also be available later this quarter with Citrix Receiver client software.

The company is also announcing the addition of WAN optimization to its CloudBridge gear that connects data centers to cloud-based infrastructure-as-a-service networks. So when data centers tap public clouds for more computing resources, the traffic between them and the cloud is optimized to use less bandwidth and be more responsive.

Source:  networkworld.com

Ten years later, Microsoft celebrates Windows XP by asking everyone to move on

Thursday, October 27th, 2011

Ten years ago today, on Oct. 25, 2001, Microsoft released Windows XP to the world. A decade later, much of the technology world is completely different, but you wouldn’t know it from looking at many PCs today. Almost half of personal computers around the world are still running Windows XP — like a threadbare winter coat so comfortable that it’s difficult to take to the Goodwill.

Microsoft is marking the anniversary by acknowledging the significance of Windows XP in the evolution of its flagship product line. But for business customers, in particular, the company is also using the occasion to make the case that it’s finally time to move on.

The company has come up with a fun infographic to support its argument, pointing out everything that has changed in the world since 2001.

“If you think back to XP, obviously it introduced a bunch of different things,” said Rich Reynolds, Windows Commercial general manager. “The user interface made it easier, faster and fun. It mainstreamed photography. It mainstreamed wireless, it mainstreamed plug-and-play. There was a bunch of great things that it did in its time … but the nature of work has changed.”

To illustrate how much things have changed, Reynolds told the story of recently connecting over WiFi on an Alaska Airlines flight somewhere over the Rockies and using the DirectAccess feature of Windows 7 Enterprise to access the corporate network and collaborate on a document in real time with someone in India using the Lync collaboration tools — not doable on a Windows XP machine.

One reason that so many businesses are still using Windows XP is that so many of them were reluctant to shift to its successor, Windows Vista, which was plagued by glitches and compatibility problems with drivers and software. Microsoft has seen much faster adoption of Windows 7, which already accounts for more than 30 percent of the market.

Apart from touting the virtues of Windows 7 and Office 2010, Microsoft points to the upcoming April 8, 2014, end of support for Windows XP.

“In most enterprise customers it takes anywhere from 12 to 18 to 24 months just to plan the deployment,” Reynolds said. “We want to make sure they are starting to move because they may be at risk of [missing] the April 8, 2014 date, because of the time it takes to plan and deploy applications.”

Source:  geekwire.com

Microsoft’s vision on the future of productivity

Thursday, October 27th, 2011

Via a post on the Official Microsoft Blog today, Kurt DelBene announced a new video that shows our vision of the future of productivity. You may have seen my post earlier this week that explored how far Microsoft and the industry have come in achieving the vision laid out in an earlier vision video. This video carries forward many of the themes we laid out there, as well as introducing some new ones. Natural user interfaces, displays everywhere, an ecosystem of devices and cloud computing are consistent themes. New themes also emerge, such as technology working on our behalf, and context plays a much bigger part in many scenes.

As Kurt notes in his post, this vision is no flight of fancy – all of the ideas contained in the video are based on real technology from Microsoft and others. Some of the capabilities in the video are here and now (such as real-time collaboration), while others are some way off. You can find more info on the video at office.com/vision.

Over the next few weeks, with the help of the team behind the video, I’ll dig in to the scenes in some more detail. We’ll explain what’s going on and what technology is at play – joining the dots to show you how this future may come to pass.

Source:  microsoft

Old flaw turns unpatched JBoss servers into botnet

Thursday, October 27th, 2011

A new worm exploiting a JBoss vulnerability that was patched in April 2010 is targeting unsecured servers and adding them to a botnet, security researchers are reporting. “The problem has been fixed last year, but there are apparently still a number of vulnerable installs out there,” Johannes Ullrich of the SANS Technology Institute writes. Older configurations of JBoss authenticated only GET and POST requests, leaving other HTTP request types and interfaces unprotected, so attackers could “use other methods to execute arbitrary code without authentication.”
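
An administrator can check for this verb-based gap directly: if a request using a method other than GET or POST gets through where GET is challenged, the console is exposed. A minimal probe in Python (the hostname is a placeholder):

  import http.client

  def probe(host: str, path: str = "/jmx-console/", port: int = 8080):
      # An unauthenticated 200 on HEAD where GET draws a 401 challenge
      # indicates the verb-based authentication gap described above.
      for verb in ("GET", "HEAD"):
          conn = http.client.HTTPConnection(host, port, timeout=5)
          conn.request(verb, path)
          print(verb, conn.getresponse().status)
          conn.close()

  # probe("jboss.example.internal")  # hypothetical host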

“The worm affects users of JBoss Application Server who have not correctly secured their JMX consoles as well as users of older, unpatched versions of JBoss enterprise products,” Red Hat security response director Mark Cox writes in a blog, which points to both the April 2010 patch and instructions for securing the JMX console. “This worm propagates by connecting to unprotected JMX consoles, then uses the ability of the JMX console to execute arbitrary code in the context of the JBoss user.”

In addition to adding servers to a botnet, the worm can install a remote access tool giving the attacker control over the infected server, Kaspersky Lab reports. One user who set up a honeypot on a deliberately insecure JBoss server reports having explored the contents of the malicious payload and discovered that it “contained Perl Scripts to automatically connect the compromised host to an IRC Server and be part of a BOTNET.”

The new worm taking advantage of a long-fixed flaw points to the need for users to update their systems, both servers and PCs. A recent report by Microsoft found that 3.2 percent of malware was from exploits for which security updates had been available for at least a year, and another 2.4 percent were related to exploits for which an update was available for less than a year.

Source:  arstechnica.com

Tsunami backdoor trojan ported from Linux to take control of Macs too

Thursday, October 27th, 2011

The Linux-based Tsunami backdoor trojan has made its way over to the Mac, according to security firm ESET. The company posted to its blog (hat tip to Macworld) that a Mac-specific variant, OSX/Tsunami.A has made an appearance on the trojan scene, though ESET made no mention of whether it was gaining any traction among users.

ESET’s Robert Lipovsky wrote on Wednesday that the code for OSX/Tsunami.A was ported from the Linux version of the trojan, which the company has been tracking since 2002. The trojan carries a hard-coded list of IRC servers and channels, which it tries to connect to in order to listen for malicious commands sent over those channels.

Lipovsky published a list of the commands pulled from the Linux variant of Tsunami, but the general gist is that the trojan can open a backdoor to perform DDoS attacks, download files, or execute shell commands. Tsunami has “the ability to essentially take control of the affected machine.”
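
The command channel itself is ordinary IRC plumbing. Below is a stripped-down sketch of the connect-and-listen loop just described (Python; the server and channel names are invented, and where a real bot would act on commands, this sketch merely prints them):

  import socket

  def listen(server: str, port: int = 6667, channel: str = "#control"):
      sock = socket.create_connection((server, port))
      sock.sendall(b"NICK bot\r\nUSER bot 0 * :bot\r\n")
      sock.sendall(("JOIN %s\r\n" % channel).encode())
      buf = b""
      while True:
          buf += sock.recv(4096)
          while b"\r\n" in buf:
              line, buf = buf.split(b"\r\n", 1)
              if line.startswith(b"PING"):        # keep the connection alive
                  sock.sendall(b"PONG" + line[4:] + b"\r\n")
              elif b"PRIVMSG" in line:            # a command arriving on the channel
                  print("received:", line.decode(errors="replace"))

  # listen("irc.example.net")  # hypothetical server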

Security firm Sophos also acknowledged the appearance of the Mac-targeted Tsunami backdoor, but reminded users that there is still “far less malware [in] existence for Mac OS X than for Windows.” Still, the company says the problem is real and that users should protect themselves with anti-malware software. “We fully expect to see cybercriminals continuing to target poorly protected Mac computers in the future,” Sophos’ Graham Cluley wrote. “If the bad guys think they can make money out of infecting and compromising Macs, they will keep trying.”

Source:  arstechnica.com

Microsoft settles suit against alleged botnet hoster

Thursday, October 27th, 2011

Microsoft said today that a Czech Republic-based provider of free domains has agreed to pull the plug on botnet activities using his subdomains, as part of a settlement of a lawsuit the software giant filed in September to shut down the Kelihos botnet.

The suit, filed in federal court in Virginia, named Dominique Alexander Piatti and his domain company, Dotfree Group SRO, as defendants, alleging that they were involved in hosting the Kelihos botnet. Infected computers in that operation, also known as “Waledac 2.0” after a previous botnet that Microsoft shut down last year, were used to send unregulated pharmaceutical and other spam, to harvest e-mails and passwords, to conduct fraudulent stock scams and, in some cases, to promote sites dealing with sexual exploitation of children. Subdomains also were allegedly used to spread the MacDefender scareware.

“Since the Kelihos takedown, we have been in talks with Mr. Piatti and dotFREE Group s.r.o. and, after reviewing the evidence voluntarily provided by Mr. Piatti, we believe that neither he nor his business were involved in controlling the subdomains used to host the Kelihos botnet. Rather, the controllers of the Kelihos botnet leveraged the subdomain services offered by Mr. Piatti’s cz.cc domain,” Richard Domingues Boscovich, senior attorney for Microsoft’s Digital Crimes Unit, wrote in a blog post.

As part of the settlement, Piatti has agreed to delete or transfer to Microsoft all the subdomains that were used to operate the botnet or for other illegitimate purposes, according to Boscovich. Piatti and his company also have agreed to work with Microsoft to prevent abuse of free subdomains and to establish a secure free top level domain going forward, he said.

“By gaining control of the subdomains, we are afforded an inside look at the Kelihos botnet, giving us the opportunity to learn which unique IP addresses are infected with the botnet’s malware,” Boscovich wrote.

Meanwhile, the lawsuit against the 22 other unnamed defendants is pending, Microsoft said.

The Kelihos botnet comprised about 41,000 infected computers worldwide and was capable of sending 3.8 billion spam e-mails per day, according to Microsoft.

Microsoft has been aggressive in moving to put botnets out of business. Kelihos is the third botnet–following Waledac and, earlier this year, Rustock–that Microsoft has taken down using legal and technical measures.

Source:  CNET

Adobe fixes Flash privacy panel so hackers can’t check you out

Sunday, October 23rd, 2011

Yesterday, Adobe made changes to a page on an Adobe website that controls Flash users’ security settings—or more specifically, to the Flash .SWF file embedded in the page that opens the Flash website privacy settings panel. The changes are intended to prevent a clickjacking attack that uses the file to activate and access users’ webcams and microphones to spy on them.

The change comes a few days after a Stanford student revealed the vulnerability on his website. Feross Aboukhadijeh posted the exploit, along with a demo and a video demonstration, on October 18. He said in a blog post that he had notified Adobe weeks earlier of the problem, reporting the vulnerability to Adobe through the Stanford Security lab.

The exploit demonstrated by Aboukhadijeh uses an elaborate clickjack “game” that overlays the SWF panel over buttons in a transparent IFRAME. (The original article includes a screenshot of the panel before Adobe’s changes.)

Through a series of clicks, the exploit was able to clear the privacy settings for Flash’s web camera controls and then authorize a new site to activate and access the camera video. The changes did not prompt any pop-ups or other user notifications.

The changes made by Adobe are to the behavior of the widgets in the privacy settings panel. (The original article includes a screenshot of the new panel, after the exploit was attempted.)

While my test of the exploit still added feross.com to my list of sites in the privacy panel, it was added only with an “always ask” setting for establishing a video link.

Source:  arstechnica.com

File group permissions constantly displaying “Fetching…” in OS X

Tuesday, October 18th, 2011

If you get information on files and folders in the OS X Finder, you will see the access permissions for the items listed at the bottom of the information window.

The items in this list are generally the username of the file’s owner, the primary group associated with the owner, and then an “everyone” group; however, there may be situations where the system will not display a group, and instead will show a persistent “Fetching…” notification.

This situation may happen because the system cannot properly identify the group that is associated with the file. In OS X, permissions work by associating user and group identification numbers with files in the filesystem index; when you access a file, the system looks up these identification numbers in the system directory (the user and group database). There may also be situations where a user-specific group (i.e., one with the same name as the current user account) is being used as the default group for a file.
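
You can reproduce the lookup the Finder is doing with a few lines of Python (the file path below is a placeholder):

  import grp, os, pwd

  st = os.stat("/path/to/somefile")   # placeholder path
  print("uid:", st.st_uid, "gid:", st.st_gid)       # numeric IDs stored with the file
  print("owner:", pwd.getpwuid(st.st_uid).pw_name)  # name lookup in the user database
  try:
      print("group:", grp.getgrgid(st.st_gid).gr_name)
  except KeyError:
      # No matching entry in the directory service -- the situation in
      # which the Finder shows "Fetching..." instead of a group name.
      print("group: unknown")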

If a username or group is missing, then the system should display something like “unknown” for the respective permissions, but may also continually search for a match and display “Fetching…” while this is under way.

This mismatching may happen after a system has been upgraded, or if you have restored one from backup or migrated it from another system; the cause generally lies in how the permissions are stored in the filesystem rather than in a problem with the system’s directory setup.

If this is happening to you, then your best bet would be to ensure that your account is associated with the proper group, followed by resetting permissions on your home folder, which can be done with the OS X installation DVD or the OS X Lion recovery partition.

In OS X, local user accounts are members of the “staff” group, with system administrator accounts being members of the “admin” group. To make sure that your account is associated with the proper group, when logged in to your account run the following in the Terminal:

sudo dscl . -append /Groups/GROUPNAME GroupMembership `whoami`

Be sure to change the “GROUPNAME” text to the proper group of either “staff” or “admin,” and also note that the “whoami” is encompassed in grave accents (the symbol under the tilde key on U.S. English keyboards) instead of single quotes. When this is done, reset the home folder permissions on your system, the procedure for which will depend on what system you are using:

In OS X Prior to Lion:

  1. Insert the OS X Installation DVD and reboot with the “C” key held down.
  2. After selecting your language, choose “Reset Password” from the “Utilities” menu.
  3. Select your hard drive and then select your user account from the drop-down menu.
  4. Click the “Reset” button next to “Reset Home Directory Permissions and ACLs.”
  5. Select “Restart” from the Apple menu to reboot normally.

In OS X Lion:

  1. Reboot and hold “Command-R” to get to the recovery partition.
  2. Choose your language and select “Terminal” from the Utilities menu.
  3. Enter “resetpassword” in the Terminal to open the same password reset utility.
  4. Continue with step three in the instructions above.

Doing this should make sure that the permissions and user/group associations for files in your home directory are based on usernames and groups that are in the user account. Do keep in mind that this will only affect the files and folders in your home directory and not any of those that you have placed elsewhere, such as on external hard drives or within system directories.

Lastly, in addition to ensuring user accounts are set up properly, use Disk Utility to run a permissions fix routine on the boot drive, which should make certain that system folder permissions are also set up so files and folders can be properly accessed. When performing a permissions fix, do not worry about repeated errors in Disk Utility’s log window. Just run the fix routine once and then quit Disk Utility.

Some people may find that after fixing account and system permissions, battery life also increases significantly and the system becomes more responsive, as it spends less time resolving group conflicts and can look up group associations more freely.

Source:  CNET

Microsoft shows ‘touch screen’ for any surface

Tuesday, October 18th, 2011

Microsoft Research is unveiling technology that turns any surface into a touch screen at a user interface symposium this week in Santa Barbara, Calif.

Dubbed OmniTouch, it is a wearable system that allows multitouch input on “arbitrary, everyday surfaces,” according to a description on a Microsoft Research Web page.

“We wanted to capitalize on the tremendous surface area the real world provides,” said Hrvoje Benko of the Natural Interaction Research group at Microsoft.

The technology combines a laser-based pico projector and depth-sensing camera, the latter not unlike Microsoft’s Kinect camera for the Xbox 360. But it is modified to work at short range.

The camera is a prototype provided by PrimeSense. When the camera and projector are calibrated to each other, the user can don the system and begin using it, Microsoft said.

Key research challenges included teaching the system what fingers look like; treating any surface as a potential projection surface for touch interaction; and detecting touch when the surface being touched contains no sensors, according to Chris Harrison, a Ph.D. student at Carnegie Mellon University who participated in the project and wrote about the research.

Presumably a consumer-friendly system wouldn’t require the bulky apparatus that only a card-carrying propeller-head would be brazen enough to wear in public.

The project is being unveiled during UIST 2011, the Association for Computing Machinery’s 24th Symposium on User Interface Software and Technology, being held October 16-19 in Santa Barbara, Calif.

Source:  CNET

The future of malware

Tuesday, October 4th, 2011

Watch out for whaling, smartphone worms, social media scams, not to mention attacks targeting your car and house

Personal information belonging to a full third of Massachusetts residents has been compromised in one way or another, according to the state’s attorney general, citing statistics gleaned from a tough new data breach reporting law.

RSA recently announced that security of its two-factor SecurID tokens could be at risk following a sophisticated cyber-attack on the company. And Sony suffered a massive breach in its video game online network that led to the theft of names, addresses and possibly credit card data belonging to 77 million user accounts. The cost to Sony and credit card issuers could hit $2 billion.

Of course, that’s just a sampling of recent breaches, and if you think it’s bad now, just wait. It’s only going to get worse as more information gets dumped online by mischievous hacker groups like Anonymous, and as for-profit hackers widen their horizons to include smartphones and social media.

For example, in August AntiSec (a collaboration between Anonymous and the disbanded LulzSec group) released more than 10GB of information from 70 U.S. law enforcement agencies.

According to Todd Feinman, CEO of DLP vendor Identity Finder, AntiSec wasn’t motivated by money.

“Apparently, they don’t like how various law enforcement agencies operate and they’re trying to embarrass and discredit them,” he said.

But, he adds, what they don’t realize is that when they publish sensitive personal information, they are helping low-skilled cyber-criminals commit identity theft. Every week, another university, government agency or business has records breached. Feinman estimates that 250,000 to 500,000 records are breached each year. Few details from those breaches are published on the Internet for everyone to see, however.

While certain high-profile attacks, like the one on Sony, may be intended to embarrass and spark change, the U.S. law enforcement breach could represent a shift in hacker thinking. AntiSec’s motivations appear to have a key difference, with the attackers consciously considering collateral damage a strategic weapon.

“In one online post, AntiSec came right out and said ‘we don’t care about collateral damage. It will happen and so be it,'” Feinman says.

Social networking

Experts say the future of malware isn’t so much about how malware itself will be engineered as it is about how potential victims will be targeted. And collateral damage won’t be limited to innocents compromised through no fault of their own.

Have you ever accepted a friend invite on Facebook or connected to someone on LinkedIn you didn’t know? Maybe you thought this was someone from high school you had forgotten about, or a former business partner whose name had slipped your mind. Not wanting to seem like an arrogant jerk, you accept the friend request and quickly forget about it.

“When people make trust decisions with social networks, they don’t always understand the ramifications. Today, you are far more knowable by someone who doesn’t know you than ever before in the past,” says Dr. Hugh Thompson, program chair of RSA Conferences.

We all know people who discuss every single thing they do on social networks and blogs – from their breakfast choices to their ingrown toenails. While most of us simply consider these people nuisances, cyber-criminals love them.

“Password reset questions are so easy to guess now, and tools like Ancestry.com, while not created for this purpose, provide hackers with a war chest of useful information,” Thompson says.

Thompson believes there are two areas the IT security industry desperately needs to innovate around: 1) security for social media, along with ways to manage the information shared about you on social networks and 2) better methods for measuring evolving risks in a more concrete way.

Thar she blows

Chris Larsen, head of Blue Coat Systems’ research lab, says the most common social engineering attack their lab catches is for fake security products. He also explained that social networks aren’t just being used to target individuals.

Larsen outlined a recent attack attempt where the bad guys targeted executives of a major corporation through their spouses. The logic was that at least one executive would have a poorly secured PC at home shared with a non-tech savvy spouse, which would then provide the backdoor needed to compromise the executive and gain access into the target company.

“Whaling is definitely on the rise,” says Paul Wood, senior intelligence analyst for Symantec.cloud. “Just a couple years ago, we saw one or two of these sorts of attacks per day. Today, we catch as many as 80 daily.”

According to Wood, social engineering is by far the most potent weapon in the cyber-criminal’s toolbox (automated, widely available malware and hacking toolkits are No.2). Combine that with the fact that many senior executives circumvent IT security because they want the latest and trendiest devices, and cyber-crooks have many valuable, easy-to-hit targets in their sights.

Fortune 500 companies aren’t the only ripe targets. “Attacks on SMBs are increasing dramatically because they are usually the weakest link in a larger supply chain,” Wood says.

Today, there’s no sure way to defend against this. Until Fortune 500 companies start scrutinizing the cyber-security of their partners and suppliers, they can’t say with any certainty whether or not they themselves are secure. While it’s common for, say, General Electric to run parts suppliers through the wringer with factory visits that result in the implementation of an array of best practices, companies aren’t doing this when it comes to cyber-security.

Watch your e-wallet

While smartphone threats are clearly on the rise, we’ve yet to see a major incident. Part of the reason is platform fragmentation. Malware creators still get more bang for their buck by targeting Windows PCs or websites.

Larsen of Blue Coat believes that platform-agnostic, web-based worms represent the new frontier of malware. Platform-agnostic malware lets legitimate developers do some of the heavy lifting for malware writers. As developers re-engineer websites and apps to work on a variety of devices, hackers can then target the commonalities, such as HTML, XML, JPEGs, etc., that render on any device, anywhere.

Smartphones are also poised to become e-wallets, and if there’s one trait you can count on in cyber-criminals, it’s that they’re eager to follow the money.

“The forthcoming ubiquity of near-field communication payment technology in smartphones is especially worrisome,” says Marc Maiffret, CTO of eEye Digital Security. Europe and Asia are already deep into the shift to m-commerce, but the U.S. isn’t far behind. “Once the U.S. adopts mobile payments in significant numbers, more hackers will focus on these targets,” he adds.

Over time, smartphones might replace other forms of identification. Your driver’s license and passport could be on your phone instead of in your pocket. In the business world, this shift is already occurring.

Mobile phones are serving as a second identity factor for all sorts of corporate authentication schemes. Businesses that used to rely on hard tokens, such as RSA SecurID, are moving to soft tokens, which can reside on mobile phones roaming beyond the corporation as easily as on PCs ensconced within corporate walls.

“Two-factor authentication originally emerged because people couldn’t trust computers. Using mobile phones as an identity factor defeats two-factor authentication,” Maiffret says.

For consumers, mobile payments aren’t necessarily all that troubling, especially if m-commerce is tied to credit card accounts and surrounded with the same consumer protections. Banks have been aggressively pushing consumers towards e-banking for years. Obviously, even with the risks involved, e-banking generates better ROI than traditional banking. Otherwise, they wouldn’t do it.

Moreover, m-commerce should have all of the behind-the-scenes security benefits wrapped around it, such as advanced fraud detection. You can’t say that for cash.

Today, Android is the big smartphone target, but don’t be surprised if attackers turn their attention to the iPhone, especially if third-party antivirus programs become more or less standard on Android. iPhone demographics are appealing to attackers, and when you talk to security pros, they’ll tell you that Apple products are notoriously insecure.

Apple is extremely reluctant to provide third-party security entities with the kind of platform access they need to improve the security of iPhones, iPads, MacBook Airs, etc. “Apple is very much on its own with security,” Maiffret says. “It almost mirrors late-’90s Microsoft, and it’ll probably take a major incident or two to incite change.”

If we’ve learned anything about digital security in the last 20 years, it’s that another major incident is always looming just over the horizon. And then there are the new threats to cars and homes.

During the Black Hat and Defcon conferences in early August, researchers demonstrated a number of disturbing attack scenarios. One particularly scary hack showcased the possibility of hijacking a car. Hackers could disable the alarm, unlock its doors and remotely start it through text messages sent over cell phone links to wireless devices in the vehicle.

Other at-risk embedded devices include airbags, radios, power seats, anti-lock braking systems, electronic stability controls, autonomous cruise controls and communication systems. Another type of attack could compromise a driver’s privacy by tracking RFID tags used to monitor tire pressure via powerful long-distance readers.

“As more and more functions get embedded in the digital technology of automobiles, the threat of attack and malicious manipulation increases,” says Stuart McClure, senior vice president and general manager, McAfee. “Many examples of research-based hacks show the potential threats and depth of compromise that expose the consumer. It’s one thing to have your email or laptop compromised but having your car hacked could translate to dire risks to your personal safety.”

Of course, cars represent just one example of hackable embedded systems. With the number of IP-connected devices climbing to anywhere from 50 billion to a trillion in the next five to 10 years, according to the likes of IBM, Ericsson and Cisco, tomorrow’s hackers could target anything from home alarm systems to air traffic control systems to flood control in dams.

Source:  networkworld.com

Microsoft falsely labels Chrome as malware

Tuesday, October 4th, 2011

Google has released a new version of Chrome after Microsoft’s antivirus software flagged the browser as malware and removed it from about 3,000 people’s computers on Friday.

Microsoft apologized for the problem and updated its virus definition file to correct the false-positive problem, according to a post from Ryan Naraine at ZDNet.

But not before the damage was done. Even though the problem directly affected only a relatively tiny fraction of Chrome users, Google decided to spin up and distribute updated beta and stable versions of Chrome.

“Earlier today, we learned that the Microsoft Security Essentials tool began falsely identifying Google Chrome as a piece of malware (“PWS:Win32/Zbot”) and removing it from people’s computers,” said Mark Larson, Chrome engineering manager, in a blog post Friday. “We are releasing an update that will automatically repair Chrome for affected users over the course of the next 24 hours.”

Win32/Zbot is a Trojan horse that lets attackers steal passwords and gain access to a victim’s computer–not the sort of product anyone would want associated with their Web browser.

Microsoft had this statement about the mistake:

Information about incorrect detection of Google Chrome as PWS:Win32/Zbot

On September 30th, 2011, an incorrect detection for PWS:Win32/Zbot was identified and as a result, Google Chrome was inadvertently blocked and in some cases removed. Within a few hours, Microsoft released an update that addresses the issue. Signature versions 1.113.672.0 and higher include this update. Affected customers should manually update Microsoft Security Essentials with the latest signatures. After updating the definitions, reinstall Google Chrome. We apologize for the inconvenience this may have caused our customers.

To get the latest definitions, simply launch Microsoft Security Essentials, go to the update tab and click the Update button. The definitions can be updated manually by visiting the following Microsoft Knowledge Base article:

http://support.microsoft.com/kb/971606

PWS:Win32/Zbot is a password-stealing trojan that monitors for visits to certain websites. It allows limited backdoor access and control and may terminate certain security-related processes.

Google also provided detailed instructions on how to update the Microsoft Security Essential virus definition file and to reinstall Chrome. It’s good that both companies worked to tidy this problem up swiftly, but perhaps Microsoft should have included Google, not just its customers, in its apology.

Source:  CNET