Archive for November, 2013

This new worm targets Linux PCs and embedded devices

Wednesday, November 27th, 2013

A new worm is targeting x86 computers running Linux and PHP, and variants may also pose a threat to devices such as home routers and set-top boxes based on other chip architectures.

According to security researchers from Symantec, the malware spreads by exploiting a vulnerability in php-cgi, a component that allows PHP to run in the Common Gateway Interface (CGI) configuration. The vulnerability is tracked as CVE-2012-1823 and was patched in PHP 5.4.3 and PHP 5.3.13 in May 2012.

The new worm, which was named Linux.Darlloz, is based on proof-of-concept code released in late October, the Symantec researchers said Wednesday in a blog post.

“Upon execution, the worm generates IP [Internet Protocol] addresses randomly, accesses a specific path on the machine with well-known ID and passwords, and sends HTTP POST requests, which exploit the vulnerability,” the Symantec researchers explained. “If the target is unpatched, it downloads the worm from a malicious server and starts searching for its next target.”

The only variant seen to be spreading so far targets x86 systems, because the malicious binary downloaded from the attacker’s server is an ELF (Executable and Linkable Format) file compiled for Intel architectures.

However, the Symantec researchers claim the attacker also hosts variants of the worm for other architectures including ARM, PPC, MIPS and MIPSEL.

These architectures are used in embedded devices like home routers, IP cameras, set-top boxes and many others.

“The attacker is apparently trying to maximize the infection opportunity by expanding coverage to any devices running on Linux,” the Symantec researchers said. “However, we have not confirmed attacks against non-PC devices yet.”

The firmware of many embedded devices is based on some type of Linux and includes a Web server with PHP for the Web-based administration interface. These kinds of devices might be easier to compromise than Linux PCs or servers because they don’t receive updates very often.

Patching vulnerabilities in embedded devices has never been an easy task. Many vendors don’t issue regular updates and when they do, users are often not properly informed about the security issues fixed in those updates.

In addition, installing an update on embedded devices requires more work and technical knowledge than updating regular software installed on a computer. Users have to know where the updates are published, download them manually and then upload them to their devices through a Web-based administration interface.

“Many users may not be aware that they are using vulnerable devices in their homes or offices,” the Symantec researchers said. “Another issue we could face is that even if users notice vulnerable devices, no updates have been provided to some products by the vendor, because of outdated technology or hardware limitations, such as not having enough memory or a CPU that is too slow to support new versions of the software.”

To protect their devices from the worm, users are advised to verify that those devices run the latest available firmware version, update the firmware if needed, set up strong administration passwords and block HTTP POST requests to /cgi-bin/php, /cgi-bin/php5, /cgi-bin/php-cgi, /cgi-bin/php.cgi and /cgi-bin/php4, either at the gateway firewall or on each individual device if possible, the Symantec researchers said.
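The path-blocking advice boils down to a single rule: drop POST requests aimed at the listed php-cgi paths. Purely as an illustration, the sketch below expresses that rule as a small Python WSGI filter; in practice the same rule would be written into a gateway firewall or web-server configuration, and the class name here is just a placeholder.

    # Illustrative sketch only: reject HTTP POST requests aimed at the php-cgi
    # paths abused by Linux.Darlloz. Real deployments would enforce this rule
    # at the gateway firewall or in the web server's configuration instead.
    BLOCKED_PATHS = {
        "/cgi-bin/php", "/cgi-bin/php5", "/cgi-bin/php-cgi",
        "/cgi-bin/php.cgi", "/cgi-bin/php4",
    }

    class BlockPhpCgiPosts:
        def __init__(self, app):
            self.app = app  # the wrapped WSGI application

        def __call__(self, environ, start_response):
            # Refuse POSTs to the known-abused paths; pass everything else through.
            if (environ.get("REQUEST_METHOD") == "POST"
                    and environ.get("PATH_INFO", "") in BLOCKED_PATHS):
                start_response("403 Forbidden", [("Content-Type", "text/plain")])
                return [b"Blocked\n"]
            return self.app(environ, start_response)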

Source:  computerworld.com

N.S.A. may have hit Internet companies at a weak spot

Tuesday, November 26th, 2013

The recent revelation that the National Security Agency was able to eavesdrop on the communications of Google and Yahoo users without breaking into either company’s data centers sounded like something pulled from a Robert Ludlum spy thriller.

How on earth, the companies asked, did the N.S.A. get their data without them knowing about it?

The most likely answer is a modern spin on a century-old eavesdropping tradition.

People knowledgeable about Google and Yahoo’s infrastructure say they believe that government spies bypassed the big Internet companies and hit them at a weak spot: the fiber-optic cables, owned by companies like Verizon Communications, the BT Group, the Vodafone Group and Level 3 Communications, that connect data centers around the world. In particular, fingers have been pointed at Level 3, the world’s largest so-called Internet backbone provider, whose cables are used by Google and Yahoo.

The Internet companies’ data centers are locked down with full-time security and state-of-the-art surveillance, including heat sensors and iris scanners. But between the data centers — on Level 3’s fiber-optic cables that connected those massive computer farms — information was unencrypted and an easier target for government intercept efforts, according to three people with knowledge of Google’s and Yahoo’s systems who spoke on the condition of anonymity.

It is impossible to say for certain how the N.S.A. managed to get Google and Yahoo’s data without the companies’ knowledge. But both companies, in response to concerns over those vulnerabilities, recently said they were now encrypting data that runs on the cables between their data centers. Microsoft is considering a similar move.
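Encrypting traffic between data centers amounts to wrapping those links in TLS, so that anyone tapping the fiber sees only ciphertext. A minimal sketch of the idea in Python, using only the standard library and a hypothetical host name:

    # Minimal sketch: wrap a TCP connection in TLS so the data crossing the
    # link is encrypted in transit. The host name is a placeholder, not a real
    # replication endpoint.
    import socket
    import ssl

    context = ssl.create_default_context()  # verifies the server certificate

    with socket.create_connection(("replica.example.net", 443)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname="replica.example.net") as tls:
            tls.sendall(b"replication payload")   # only ciphertext appears on the wire
            print("negotiated", tls.version())    # e.g. TLSv1.2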

“Everyone was so focused on the N.S.A. secretly getting access to the front door that there was an assumption they weren’t going behind the companies’ backs and tapping data through the back door, too,” said Kevin Werbach, an associate professor at the Wharton School.

Data transmission lines have a long history of being tapped.

As far back as the days of the telegraph, spy agencies have located their operations in proximity to communications companies. Indeed, before the advent of the Internet, the N.S.A. and its predecessors for decades operated listening posts next to the long-distance lines of phone companies to monitor all international voice traffic.

Beginning in the 1960s, a spy operation code-named Echelon targeted the Soviet Union and its allies’ voice, fax and data traffic via satellite, microwave and fiber-optic cables.

In the 1990s, the emergence of the Internet both complicated the task of the intelligence agencies and presented powerful new spying opportunities based on the ability to process vast amounts of computer data.

In 2002, John M. Poindexter, former national security adviser under President Ronald Reagan, proposed the Total Information Awareness plan, an effort to scan the world’s electronic information — including phone calls, emails and financial and travel records. That effort was scrapped in 2003 after a public outcry over potential privacy violations.

The technologies Mr. Poindexter proposed are similar to what became reality years later in N.S.A. surveillance programs like Prism and Bullrun.

The Internet effectively mingled domestic and international communications, erasing the bright line that had been erected to protect against domestic surveillance. Although the Internet is designed to be a highly decentralized system, in practice a small group of backbone providers carry almost all of the network’s data.

The consequences of that centralization, and its value for surveillance, were revealed in 2006 by Mark Klein, an AT&T technician who described an N.S.A. listening post inside a room at an AT&T switching facility.

The agency was capturing a copy of all the data passing over the telecommunications links and then filtering it in AT&T facilities that housed systems capable of sifting through data packets at high speed.

Documents taken by Edward J. Snowden and reported by The Washington Post indicate that, seven years after Mr. Klein first described the N.S.A.’s surveillance technologies, they have been refined and modernized.

“From Echelon to Total Information Awareness to Prism, all these programs have gone under different names, but in essence do the same thing,” said Chip Pitts, a law lecturer at Stanford University School of Law.

Based in the Denver suburbs, Level 3 is not a household name like Verizon or AT&T, but in terms of its ability to carry traffic, it is bigger than the other two carriers combined. Its networking equipment is found in 200 data centers in the United States, more than 100 centers in Europe and 14 in Latin America.

Level 3 did not directly respond to an inquiry about whether it had given the N.S.A., or the agency’s foreign intelligence partners, access to Google and Yahoo’s data. In a statement, Level 3 said: “It is our policy and our practice to comply with laws in every country where we operate, and to provide government agencies access to customer data only when we are compelled to do so by the laws in the country where the data is located.”

Also, in a financial filing, Level 3 noted that, “We are party to an agreement with the U.S. Departments of Homeland Security, Justice and Defense addressing the U.S. government’s national security and law enforcement concerns. This agreement imposes significant requirements on us related to information storage and management; traffic management; physical, logical and network security arrangements; personnel screening and training; and other matters.”

Security experts say that regardless of whether Level 3’s participation is voluntary or not, recent N.S.A. disclosures make clear that even when Internet giants like Google and Yahoo do not hand over data, the N.S.A. and its intelligence partners can simply gather their data downstream.

That much was true last summer when United States authorities first began tracking Mr. Snowden’s movements after he left Hawaii for Hong Kong with thousands of classified documents. In May, authorities contacted Ladar Levison, who ran Lavabit, Mr. Snowden’s email provider, to install a tap on Mr. Snowden’s email account. When Mr. Levison did not move quickly enough to facilitate the tap on Lavabit’s network, the Federal Bureau of Investigation did so without him.

Mr. Levison said it was unclear how that tap was installed, whether through Level 3, which sold bandwidth to Lavabit, or at the Dallas facility where his servers and networking equipment are stored. When Mr. Levison asked the facility’s manager about the tap, he was told the manager could not speak with him. A spokesman for TierPoint, which owns the Dallas facility, did not return a call seeking a comment.

Mr. Pitts said that while working as the chief legal officer at Nokia in the 1990s, he successfully fended off an effort by intelligence agencies to get backdoor access into Nokia’s computer networking equipment.

Nearly 20 years later, Verizon has said that it and other carriers are forced to comply with government requests in every country in which they operate, and are limited in what they can say about their arrangements.

“At the end of the day, if the Justice Department shows up at your door, you have to comply,” Lowell C. McAdam, Verizon’s chief executive, said in an interview in September. “We have gag orders on what we can say and can’t defend ourselves, but we were told they do this with every carrier.”

Source:  nytimes.com

U.S. government rarely uses best cybersecurity steps: advisers

Friday, November 22nd, 2013

The U.S. government itself seldom follows the best cybersecurity practices and must drop its old operating systems and unsecured browsers as it tries to push the private sector to tighten its practices, technology advisers told President Barack Obama.

“The federal government rarely follows accepted best practices,” the President’s Council of Advisors on Science and Technology said in a report released on Friday. “It needs to lead by example and accelerate its efforts to make routine cyberattacks more difficult by implementing best practices for its own systems.”

PCAST is a group of top U.S. scientists and engineers who make policy recommendations to the administration. William Press, computer science professor at the University of Texas at Austin, and Craig Mundie, senior adviser to the CEO at Microsoft Corp, made up the cybersecurity working group.

The Obama administration this year stepped up its push for critical industries to bolster their cyber defenses, and Obama in February issued an executive order aimed at countering the lack of progress on cybersecurity legislation in Congress.

As part of the order, a non-regulatory federal standard-setting board last month released a draft of voluntary standards that companies can adopt, which it compiled through industry workshops.

But while the government urges the private sector to adopt such minimum standards, technology advisers say it must raise its own standards.

The advisers said the government should rely more on automatic updates of software, require better proof of identities of people, devices and software, and more widely use the Trusted Platform Module, an embedded security chip.

The advisers also said that, for swifter response to cyber threats, private companies should share more data among themselves and, “in appropriate circumstances,” with the government. Press said the government should promote such private sector partnerships, but that sensitive information exchanged in these partnerships “should not be and would not be accessible to the government.”

The advisers steered the administration away from “government-mandated, static lists of security measures” and toward standards reached by industry consensus, but audited by third parties.

The report also pointed to Internet service providers as well-positioned to spur rapid improvements by, for instance, voluntarily alerting users when their devices are compromised.

Source: reuters.com

Encrypt everything: Google’s answer to government surveillance is catching on

Thursday, November 21st, 2013

While Microsoft’s busy selling t-shirts and mugs about how Google’s “Scroogling” you, the search giant’s chairman is busy tackling a much bigger problem: How to keep your information secure in a world full of prying eyes and governments willing to drag in data by the bucket load. And according to Google’s Eric Schmidt, the answer is fairly straightforward.

“We can end government censorship in a decade,” Schmidt said Wednesday during a speech in Washington, according to Bloomberg. “The solution to government surveillance is to encrypt everything.”

Google’s certainly putting its SSL certificates where Schmidt’s mouth is, too: In the “Encrypt the Web” scorecard released by the Electronic Frontier Foundation earlier this month, Google was one of the few Internet giants to receive a perfect five out of five score for its encryption efforts. Even basic Google.com searches default to HTTPS encryption these days, and Google goes so far as to encrypt data traveling in-network between its data centers.

Spying eyes

That last move didn’t occur in a vacuum, however. Earlier this month, one of the latest Snowden leaks revealed that the National Security Agency’s MUSCULAR program taps the links flowing between Google’s and Yahoo’s internal data centers.

“We have strengthened our systems remarkably as a result of the most recent events,” Schmidt said during the speech. “It’s reasonable to expect that the industry as a whole will continue to strengthen these systems.”

Indeed, Yahoo recently announced plans to encrypt, well, everything in the wake of the recent NSA surveillance revelations. Dropbox, Facebook, Sonic.net, and the SpiderOak cloud storage service also received flawless marks in the EFF’s report.

And the push for ubiquitous encryption recently gained an even more formidable proponent. The Internet Engineering Task Force working on HTTP 2.0 announced last week that the next-gen version of the crucial protocol will only work with HTTPS encrypted URLs.

Yes, all encryption, all the time could very well become the norm on the ‘Net before long. But while that will certainly raise the general level of security and privacy web-wide, don’t think for a minute that HTTPS is a silver bullet against pervasive government surveillance. Yet another Snowden-supplied document, released in September, revealed that the NSA spends more than $250 million year-in and year-out on its efforts to break online encryption techniques.

Source:  csoonline.com

Repeated attacks hijack huge chunks of Internet traffic, researchers warn

Thursday, November 21st, 2013

Man-in-the-middle attacks divert data on scale never before seen in the wild.

Huge chunks of Internet traffic belonging to financial institutions, government agencies, and network service providers have repeatedly been diverted to distant locations under unexplained circumstances that are stoking suspicions the traffic may be surreptitiously monitored or modified before being passed along to its final destination.

Researchers from network intelligence firm Renesys made that sobering assessment in a blog post published Tuesday. Since February, they have observed 38 distinct events in which large blocks of traffic have been improperly redirected to routers at Belarusian or Icelandic service providers. The hacks, which exploit implicit trust placed in the border gateway protocol used to exchange data between large service providers, affected “major financial institutions, governments, and network service providers” in the US, South Korea, Germany, the Czech Republic, Lithuania, Libya, and Iran.

The ease of altering or deleting authorized BGP routes, or of creating new ones, has long been considered a potential Achilles’ heel for the Internet. Indeed, in 2008, YouTube became unreachable for virtually all Internet users after a Pakistani ISP altered a route in a ham-fisted attempt to block the service in just that country. Later that year, researchers at the Defcon hacker conference showed how BGP routes could be manipulated to redirect huge swaths of Internet traffic. By diverting it to unauthorized routers under control of hackers, they were then free to monitor or tamper with any data that was unencrypted before sending it to its intended recipient with little sign of what had just taken place.

“This year, that potential has become reality,” Renesys researcher Jim Cowie wrote. “We have actually observed live man-in-the-middle (MitM) hijacks on more than 60 days so far this year. About 1,500 individual IP blocks have been hijacked, in events lasting from minutes to days, by attackers working from various countries.”

At least one unidentified voice-over-IP provider has also been targeted. In all, data destined for 150 cities have been intercepted. The attacks are serious because they affect the Internet equivalents of a US interstate that can carry data for hundreds of thousands or even millions of people. And unlike the typical BGP glitches that arise from time to time, the attacks observed by Renesys provide few outward signs to users that anything is amiss.

“The recipient, perhaps sitting at home in a pleasant Virginia suburb drinking his morning coffee, has no idea that someone in Minsk has the ability to watch him surf the Web,” Cowie wrote. “Even if he ran his own traceroute to verify connectivity to the world, the paths he’d see would be the usual ones. The reverse path, carrying content back to him from all over the world, has been invisibly tampered with.”
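Detecting this kind of hijack comes down to watching which autonomous system originates each monitored prefix and flagging announcements from anywhere unexpected, which is essentially what a route-monitoring firm like Renesys does at global scale. A toy sketch of that check, with made-up prefixes and AS numbers:

    # Toy illustration of an origin-AS check: compare observed BGP announcements
    # against the autonomous system each prefix is expected to come from, and
    # flag anything announced from elsewhere. Prefixes and AS numbers are made up.
    EXPECTED_ORIGIN = {
        "203.0.113.0/24": 64500,   # our own network
        "198.51.100.0/24": 64501,  # a partner network we monitor
    }

    observed_announcements = [
        ("203.0.113.0/24", 64500),   # normal announcement
        ("198.51.100.0/24", 64999),  # suspicious: unexpected origin AS
    ]

    for prefix, origin_as in observed_announcements:
        expected = EXPECTED_ORIGIN.get(prefix)
        if expected is not None and origin_as != expected:
            print(f"ALERT: {prefix} announced by AS{origin_as}, expected AS{expected}")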

Guadalajara to Washington via Belarus

Renesys observed the first route hijacking in February when various routes across the globe were mysteriously funneled through Belarusian ISP GlobalOneBel before being delivered to their final destination. One trace, traveling from Guadalajara, Mexico, to Washington, DC, normally would have been handed from Mexican provider Alestra to US provider PCCW in Laredo, Texas, and from there to the DC metro area and then, finally, delivered to users through the Qwest/Centurylink service provider. According to Cowie:

Instead, however, PCCW gives it to Level3 (previously Global Crossing), who is advertising a false Belarus route, having heard it from Russia’s TransTelecom, who heard it from their customer, Belarus Telecom. Level3 carries the traffic to London, where it delivers it to Transtelecom, who takes it to Moscow and on to Belarus. Beltelecom has a chance to examine the traffic and then sends it back out on the “clean path” through Russian provider ReTN (recently acquired by Rostelecom). ReTN delivers it to Frankfurt and hands it to NTT, who takes it to New York. Finally, NTT hands it off to Qwest/Centurylink in Washington DC, and the traffic is delivered.

Such redirections occurred on an almost daily basis throughout February, with the set of affected networks changing every 24 hours or so. The diversions stopped in March. When they resumed in May, they used a different customer of Bel Telecom as the source. In all, Renesys researchers saw 21 redirections. Then, also during May, they saw something completely new: a hijack lasting only five minutes diverting traffic to Nyherji hf (also known as AS29689, short for autonomous system 29689), a small provider based in Iceland.

Renesys didn’t see anything more until July 31 when redirections through Iceland began in earnest. When they first resumed, the source was provider Opin Kerfi (AS48685).

Cowie continued:

In fact, this was one of seventeen Icelandic events, spread over the period July 31 to August 19. And Opin Kerfi was not the only Icelandic company that appeared to announce international IP address space: in all, we saw traffic redirections from nine different Icelandic autonomous systems, all customers of (or belonging to) the national incumbent Síminn. Hijacks affected victims in several different countries during these events, following the same pattern: false routes sent to Síminn’s peers in London, leaving ‘clean paths’ to North America to carry the redirected traffic back to its intended destination.

In all, Renesys observed 17 redirections to Iceland. To appreciate how circuitous some of the routes were, consider the case of traffic passing between two locations in Denver. As the graphic below traces, it traveled all the way to Iceland through a series of hops before finally reaching its intended destination.

Cowie said Renesys’ researchers still don’t know who is carrying out the attacks, what their motivation is, or exactly how they’re pulling them off. Members of Icelandic telecommunications company Síminn, which provides Internet backbone services in that country, told Renesys the redirections to Iceland were the result of a software bug and that the problem had gone away once it was patched. They told the researchers they didn’t believe the diversions had a malicious origin.

Cowie said that explanation is “unlikely.” He went on to say that even if it does prove correct, it’s nonetheless highly troubling.

“If this is a bug, it’s a dangerous one, capable of simulating an extremely subtle traffic redirection/interception attack that plays out in multiple episodes, with varying targets, over a period of weeks,” he wrote. “If it’s a bug that can be exploited remotely, it needs to be discussed more widely within the global networking community and eradicated.”

Source:  arstechnica.com

HP: 90 percent of Apple iOS mobile apps show security vulnerabilities

Tuesday, November 19th, 2013

HP today said security testing it conducted on more than 2,000 Apple iOS mobile apps developed for commercial use by some 600 large companies in 50 countries showed that nine out of 10 had serious vulnerabilities.

Mike Armistead, HP vice president and general manager, said testing was done on apps from 22 iTunes App Store categories that are used for business-to-consumer or business-to-business purposes, such as banking or retailing. HP said 97 percent of these apps inappropriately accessed private information sources within a device, and 86 percent proved to be vulnerable to attacks such as SQL injection.
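The SQL injection finding is the classic consequence of building queries by pasting user input straight into SQL text. The contrast is sketched below in Python with sqlite3 purely for brevity; the same parameterized-query principle applies to the data stores mobile apps use.

    # Demonstration of the injection class HP describes, using an in-memory
    # SQLite database. The "unsafe" query splices user input into the SQL text;
    # the "safe" query lets the driver bind the value as a parameter.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (user TEXT, balance REAL)")
    conn.execute("INSERT INTO accounts VALUES ('alice', 100.0)")

    user_input = "alice' OR '1'='1"  # hostile input

    unsafe = f"SELECT balance FROM accounts WHERE user = '{user_input}'"
    print(conn.execute(unsafe).fetchall())               # returns rows it should not

    safe = "SELECT balance FROM accounts WHERE user = ?"
    print(conn.execute(safe, (user_input,)).fetchall())  # returns []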

The Apple guidelines for developing iOS apps help developers, but they don’t go far enough in terms of security, says Armistead. Mobile apps are being used to extend the corporate website to mobile devices, but in the process companies “are opening up their attack surfaces,” he says.

In its summary of the testing, HP said 86 percent of the apps tested lacked the means to protect themselves from common exploits, such as misuse of encrypted data, cross-site scripting and insecure transmission of data.

The same number did not have security optimized early in the development process, according to HP. Three quarters “did not use proper encryption techniques when storing data on mobile devices, which leaves unencrypted data accessible to an attacker.” A large number of the apps didn’t implement SSL/HTTPS correctly.

To discover weaknesses in apps, developers need to adopt practices such as security scanning of apps, penetration testing and a secure coding development life-cycle approach, HP advises.

The need to develop mobile apps quickly for business purposes is one of the main contributing factors leading to weaknesses in these apps made available for public download, according to HP. And the weakness on the mobile side is impacting the server side as well.

“It is our earnest belief that the pace and cost of development in the mobile space has hampered security efforts,” HP says in its report, adding that “mobile application security is still in its infancy.”

Source:  infoworld.com

Hackers exploit JBoss vulnerability to compromise servers

Tuesday, November 19th, 2013

Attackers are actively exploiting a known vulnerability to compromise JBoss Java EE application servers that expose the HTTP Invoker service to the Internet in an insecure manner.

At the beginning of October security researcher Andrea Micalizzi released an exploit for a vulnerability he identified in products from multiple vendors including Hewlett-Packard, McAfee, Symantec and IBM that use 4.x and 5.x versions of JBoss. That vulnerability, tracked as CVE-2013-4810, allows unauthenticated attackers to install an arbitrary application on JBoss deployments that expose the EJBInvokerServlet or JMXInvokerServlet.

Micalizzi’s exploit installs a Web shell application called pwn.jsp that can be used to execute shell commands on the operating system via HTTP requests. The commands are executed with the privileges of the OS user running JBoss, which in the case of some JBoss deployments can be a high privileged, administrative user.

Researchers from security firm Imperva have recently detected an increase in attacks against JBoss servers that used Micalizzi’s exploit to install the original pwn.jsp shell, but also a more complex Web shell called JspSpy.

Over 200 sites running on JBoss servers, including some that belong to governments and universities, have been hacked and infected with these Web shell applications, said Barry Shteiman, director of security strategy at Imperva.

The problem is actually bigger, because the vulnerability described by Micalizzi stems from insecure default configurations that leave JBoss management interfaces and invokers exposed to unauthenticated attacks, an issue that has been known for years.
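Administrators can get a rough sense of their own exposure by checking whether the invoker servlets answer requests that carry no credentials. A simple standard-library sketch, with a placeholder host name; a 200 response suggests the insecure default configuration described above, while a 401/403 or a connection failure suggests access is restricted.

    # Rough exposure check for a JBoss instance you administer. Host and port
    # are placeholders; run this only against systems you are authorized to test.
    import urllib.request
    import urllib.error

    HOST = "http://jboss.example.internal:8080"
    PATHS = ["/invoker/JMXInvokerServlet", "/invoker/EJBInvokerServlet"]

    for path in PATHS:
        try:
            with urllib.request.urlopen(HOST + path, timeout=5) as resp:
                print(f"{path}: HTTP {resp.status} (answers without authentication)")
        except urllib.error.HTTPError as err:
            print(f"{path}: HTTP {err.code}")
        except OSError as err:
            print(f"{path}: unreachable ({err})")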

In a 2011 presentation about the multiple ways in which unsecured JBoss installations can be attacked, security researchers from Matasano Security estimated, based on a Google search for certain strings, that there were around 7,300 potentially vulnerable servers.

According to Shteiman, the number of JBoss servers with management interfaces exposed to the Internet has more than tripled since then, reaching over 23,000.

One reason for this increase is probably that people have not fully understood the risks associated with this issue when it was discussed in the past and continue to deploy insecure JBoss installations, Shteiman said. Also, some vendors ship products with insecure JBoss configurations, like the products vulnerable to Micalizzi’s exploit, he said.

Products vulnerable to CVE-2013-4810 include McAfee Web Reporter 5.2.1, HP ProCurve Manager (PCM) 3.20 and 4.0, HP PCM+ 3.20 and 4.0, HP Identity Driven Manager (IDM) 4.0, Symantec Workspace Streaming 7.5.0.493 and IBM TRIRIGA. However, products from other vendors that have not yet been identified could also be vulnerable.

JBoss is developed by Red Hat and was recently renamed to WildFly. Its latest stable version is 7.1.1, but according to Shteiman many organizations still use JBoss 4.x and 5.x for compatibility reasons as they need to run old applications developed for those versions.

Those organizations should follow the instructions for securing their JBoss installations that are available on the JBoss Community website, he said.

IBM also provided information on securing the JMX Console and the EJBInvoker in response to Micalizzi’s exploit.

The Red Hat Security Response Team said that while CVE-2013-4810 refers to the exposure of unauthenticated JMXInvokerServlet and EJBInvokerServlet interfaces on HP ProCurve Manager, “These servlets are also exposed without authentication by default on older unsupported community releases of JBoss AS (WildFly) 4.x and 5.x. All supported Red Hat JBoss products that include the JMXInvokerServlet and EJBInvokerServlet interfaces apply authentication by default, and are not affected by this issue. Newer community releases of JBoss AS (WildFly) 7.x are also not affected by this issue.”

Like Shteiman, Red Hat advised users of older JBoss AS releases to follow the instructions available on the JBoss website in order to apply authentication to the invoker servlet interfaces.

The Red Hat security team has also been aware of this issue affecting certain versions of the JBoss Enterprise Application Platform, Web Platform and BRMS Platform since 2012 when it tracked the vulnerability as CVE-2012-0874. The issue has been addressed and current versions of JBoss Enterprise Platforms based on JBoss AS 4.x and 5.x are no longer vulnerable, the team said.

Source:  computerworld.com

Internet architects propose encrypting all the world’s Web traffic

Thursday, November 14th, 2013

A vastly larger percentage of the world’s Web traffic will be encrypted under a near-final recommendation to revise the Hypertext Transfer Protocol (HTTP) that serves as the foundation for all communications between websites and end users.

The proposal, announced in a letter published Wednesday by an official with the Internet Engineering Task Force (IETF), comes after documents leaked by former National Security Agency contractor Edward Snowden heightened concerns about government surveillance of Internet communications. Despite those concerns, websites operated by Yahoo, the federal government, the site running this article, and others continue to publish the majority of their pages in a “plaintext” format that can be read by government spies or anyone else who has access to the network the traffic passes over. Last week, cryptographer and security expert Bruce Schneier urged people to “make surveillance expensive again” by encrypting as much Internet data as possible.

The HTTPbis Working Group, the IETF body charged with designing the next-generation HTTP 2.0 specification, is proposing that encryption be the default way data is transferred over the “open Internet.” A growing number of groups participating in the standards-making process—particularly those who develop Web browsers—support the move, although as is typical in technical deliberations, there’s debate about how best to implement the changes.

“There seems to be strong consensus to increase the use of encryption on the Web, but there is less agreement about how to go about this,” Mark Nottingham, chair of the HTTPbis working group, wrote in Wednesday’s letter. (HTTPbis roughly translates to “HTTP again.”)

He went on to lay out three implementation proposals and describe their pros and cons:

A. Opportunistic encryption for http:// URIs without server authentication—aka “TLS Relaxed” as per draft-nottingham-http2-encryption.

B. Opportunistic encryption for http:// URIs with server authentication—the same mechanism, but not “relaxed,” along with some form of downgrade protection.

C. HTTP/2 to only be used with https:// URIs on the “open” Internet. http:// URIs would continue to use HTTP/1 (and of course it would still be possible for older HTTP/1 clients to still interoperate with https:// URIs).

In subsequent discussion, there seems to be agreement that (C) is preferable to (B), since it is more straightforward; no new mechanism needs to be specified, and HSTS can be used for downgrade protection.

(C) also has this advantage over (A) and furthermore provides stronger protection against active attacks.

The strongest objections against (A) seemed to be about creating confusion about security and discouraging use of “full” TLS, whereas those against (C) were about limiting deployment of better security.

Keen observers have noted that we can deploy (C) and judge adoption of the new protocol, later adding (A) if necessary. The reverse is not necessarily true.

Furthermore, in discussions with browser vendors (who have been among those most strongly advocating more use of encryption), there seems to be good support for (C), whereas there’s still a fair amount of doubt/disagreement regarding (A).

Pros, cons, and carrots

As Nottingham acknowledged, there are major advantages and disadvantages for each option. Proposal A would be easier for websites to implement because it wouldn’t require them to authenticate their servers using a digital certificate that is recognized by all the major browsers. This relaxation of current HTTPS requirements would eliminate a hurdle that stops many websites from encrypting traffic now, but it also comes at a cost. The lack of authentication could make it trivial for the person at an Internet cafe or the spy monitoring Internet backbones to create a fraudulent digital certificate that impersonates websites using this form of relaxed transport layer security (TLS). That risk calls into question whether the weakened measure is worth the hassle of implementing.

Proposal B, by contrast, would make it much harder for attackers, since HTTP 2.0 traffic by default would be both encrypted and authenticated. But the increased cost and effort required by millions of websites may stymie the adoption of the new specification, which in addition to encryption offers improvements such as increased header compression and asynchronous connection multiplexing.

Proposal C seems to resolve the tension between the other two options by moving in a different direction altogether—that is, by implementing HTTP 2.0 only in full-blown HTTPS traffic. This approach attempts to use the many improvements of the new standard as a carrot that gives websites an incentive to protect their traffic with traditional HTTPS encryption.

The options that the working group is considering do a fair job of mapping the current debate over Web-based encryption. A common argument is that more sites can and should encrypt all or at least most of their traffic. Even better is when sites provide this encryption while at the same time providing strong cryptographic assurances that the server hosting the website is the one operated by the domain-name holder listed in the address bar—rather than by an attacker who is tampering with the connection.

Unfortunately, the proposals are passing over an important position in the debate over Web encryption, involving the viability of the current TLS and secure sockets layer (SSL) protocols that underpin all HTTPS traffic. With more than 500 certificate authorities located all over the world recognized by major browsers, all it takes is the compromise of one of them for the entire system to fail (although certificate pinning in some cases helps contain the damage). There’s nothing in Nottingham’s letter indicating that this single point of failure will be addressed. The current HTTPS system has serious privacy implications for end users, since certificate authorities can log huge numbers of requests for SSL-protected websites and map them to individual IP addresses. This is also unaddressed.
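Certificate pinning, mentioned above as a partial mitigation, works by checking the server’s certificate against a fingerprint the client already knows, so a certificate minted by a compromised authority is rejected even though it chains to a trusted root. A minimal Python sketch, with a placeholder host and fingerprint:

    # Minimal pinning sketch: after normal TLS validation, also compare the
    # server certificate's SHA-256 fingerprint to a value stored in advance.
    # The host and pinned fingerprint below are placeholders.
    import hashlib
    import socket
    import ssl

    HOST = "www.example.com"
    PINNED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

    context = ssl.create_default_context()
    with socket.create_connection((HOST, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            der_cert = tls.getpeercert(binary_form=True)
            fingerprint = hashlib.sha256(der_cert).hexdigest()
            if fingerprint != PINNED_SHA256:
                raise ssl.SSLError(f"certificate fingerprint mismatch: {fingerprint}")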

It’s unfortunate that the letter didn’t propose alternatives to the largely broken TLS system, such as the one dubbed Trust Assertions for Certificate Keys, which was conceived by researchers Moxie Marlinspike and Trevor Perrin. Then again, as things are now, the engineers in the HTTPbis Working Group are likely managing as much controversy as they can. Adding an entirely new way to encrypt Web traffic to an already sprawling list of considerations would probably prove to be too much.

Source:  arstechnica.com

Researchers find way to increase range of wireless frequencies in smartphones

Friday, November 8th, 2013

Researchers have found a new way to tune the radio frequency in smartphones and other wireless devices that promises to reduce costs and improve performance of semiconductors used in defense, satellite and commercial communications.

Semiconductor Research Corp. (SRC) and Northeastern University in Boston presented the research findings at the 58th Magnetism and Magnetic Materials Conference in Denver this week.

Nian Sun, associate professor of electrical and computer engineering at Northeastern, said he’s been working on the process since 2006, when he received National Science Foundation grants for the research.

“In September, we had a breakthrough,” he said in a telephone interview. “We didn’t celebrate with champagne exactly, but we were happy.”

The research progressed through a series of about 20 stages over the past seven years. It wasn’t like the hundreds of failures that the Wright brothers faced in coming up with a working wing design, but there were gradual improvements at each stage, he said.

Today, state-of-the-art radio frequency circuits in smartphones rely on tuning done with radio frequency (RF) varactors, a kind of capacitor. But the new process allows tuning in inductors as well, which could expand a smartphone’s tunable frequency range by 50% to 200%, Sun said. Tuning is how a device finds an available frequency to complete a wireless transmission. It’s not very different from turning a dial on an FM radio receiver to bring in a signal.

Capacitors and inductors both shape the flow of electrons in electronic circuits: inductors oppose changes in current, while capacitors store charge and oppose changes in voltage.

Most smartphones use 15 to 20 frequency channels to make connections, but the new inductors made possible by the research could potentially more than double the number of channels available on a smartphone or other device. The new inductors are a long-sought missing link in efforts to expand the RF tunable frequency range of a tuned circuit.
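The arithmetic behind that claim is the resonance formula for a simple LC tank, f = 1 / (2π√(LC)): making the inductance adjustable gives a second knob, alongside the varactor’s capacitance, for moving the tuned frequency. A quick back-of-envelope calculation with arbitrary example values:

    # Back-of-envelope illustration of why a tunable inductor widens the tuning
    # range: in a simple LC tank, f = 1 / (2 * pi * sqrt(L * C)), so changing L
    # shifts the resonant frequency just as changing C does. Values are examples.
    import math

    def resonant_frequency_hz(L_henries, C_farads):
        return 1.0 / (2.0 * math.pi * math.sqrt(L_henries * C_farads))

    C = 1e-12                         # 1 pF fixed capacitance
    for L in (5e-9, 10e-9, 20e-9):    # 5, 10 and 20 nH inductances
        f = resonant_frequency_hz(L, C)
        print(f"L = {L * 1e9:.0f} nH -> f = {f / 1e9:.2f} GHz")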

“Researchers have been trying a while to make inductors tunable — to change the inductance value — and haven’t been very successful,” said Kwok Ng, senior director of device sciences at SRC. He said SRC has worked with Northeastern since 2011 on the project, investing up to $300,000 in the research work.

How it worked: Researchers at the Northeastern lab used a thin magnetic piezoelectric film deposit in an experimental inductor about a centimeter square, using microelectromechanical systems (MEMS) processes. Piezoelectricity is an electromechanical interaction between the mechanical and electric states in a crystalline material. A crystal can acquire a charge when subjected to AC voltage.

What the researchers found is they could apply the right amount of voltage on a layer of metal going around a core of piezoelectric film to change its permeability. As the film changes permeability, its electrons can move at different frequencies.

Ng said the research means future inductors can be used to improve radio signal performance, which could reduce the number of modules needed in a smartphone, with the potential to lower the cost of materials.

Intel and Texas Instruments cooperated in the work, and the new inductor technology will be available for further industrial development by the middle of next year, followed by use in consumer applications as early as late 2014.

Source:  networkworld.com

FCC crowdsources mobile broadband research with Android app

Friday, November 8th, 2013

Most smartphone users know data speeds can vary widely. But how do the different carriers stack up against each other? The Federal Communications Commission is hoping the public can help figure that out, using a new app it will preview next week.

The FCC on Friday said that the agenda for next Thursday’s open meeting, the first under new Chairman Tom Wheeler, will feature a presentation on a new Android smartphone app that will be used to crowdsource measurements of mobile broadband speeds. 

The FCC announced it would start measuring the performance of mobile networks last September. All four major wireless carriers, as well as CTIA-The Wireless Association, have already agreed to participate in the app, which is called “FCC Speed Test.” It works only on Android for now; no word on when an iPhone version might be available.

While the app has been in the works for a long time, its elevation to this month’s agenda reaffirms something Wheeler told the Journal this week. During that conversation, the Chairman repeatedly emphasized his desire to “make decisions based on facts.” Given the paucity of information on mobile broadband availability and prices, this type of data collection seems like the first step toward evaluating whether Americans are getting what they pay for from their carriers in terms of mobile data speeds.

The FCC unveiled its first survey of traditional land-based broadband providers in August 2011, which showed that most companies provide access that comes close to or exceeds advertised speeds. (Those results prompted at least one Internet service provider to increase its performance during peak hours.) Expanding the data collection effort to mobile broadband is a natural step; smartphone sales outpace laptop sales, and a significant portion of Americans (particularly minorities and low-income households) rely on a smartphone as their primary connection to the Internet.

Wheeler has said ensuring there is adequate competition in the broadband and wireless markets is among his top priorities. But first the FCC must know what level of service Americans are getting from their current providers. If mobile broadband speeds perform much as advertised, it would bolster the case of those who argue the wireless market is sufficiently competitive. But if any of the major carriers were to seriously under-perform, it would raise questions about the need for intervention from federal regulators.

Source:  wsj.com

High-gain patch antennas boost Wi-Fi capacity for Georgia Tech

Tuesday, November 5th, 2013

To boost its Wi-Fi capacity in packed lecture halls, Georgia Institute of Technology gave up trying to cram in more access points, with conventional omni-directional antennas, and juggle power settings and channel plans. Instead, it turned to new high-gain directional antennas, from Tessco’s Ventev division.

Ventev’s new TerraWave High-Density Ceiling Mount Antenna, which looks almost exactly like the bottom half of a small pizza box, focuses the Wi-Fi signal from the ceiling-mounted Cisco access point in a precise cone-shaped pattern, covering part of the lecture hall floor. Instead of the flaky, laggy connections professors had been complaining about, users now consistently get up to 144Mbps (if they have 802.11n client radios).

“Overall, the system performed much better” with the Ventev antennas, says William Lawrence, IT project manager principal with the university’s academic and research technologies group. “And there was a much more even distribution of clients across the room’s access points.”

Initially, these 802.11n access points were running 40-MHz channels, but Lawrence’s team eventually switched to the narrower 20 MHz. “We saw more consistent performance for clients in the 20-MHz channel, and I really don’t know why,” he says. “It seems like the clients were doing a lot of shifting between using 40 MHz and 20 MHz. With the narrower channel, it was very smooth and consistent: we got great video playback.”

With the narrower channel, 11n clients can’t achieve their maximum 11n throughput. But that doesn’t seem to have been a problem in these select locations, Lawrence says. “We’ve not seen that to be an issue, but we’re continuing to monitor it,” he says.

The Atlanta main campus has a fully deployed Cisco WLAN, with about 3,900 access points, nearly all supporting 11n, and 17 wireless controllers. Virtually all of the access points use a conventional, omni-directional antenna, which radiates energy in a globe-shaped pattern with the access point at the center. But in high-density classrooms, faculty and students began complaining of flaky connections and slow speeds.

The problem, Lawrence says, was the surging number of Wi-Fi devices actively being used in big classrooms and lectures halls, coupled with Wi-Fi signals, especially in the 2.4-GHz band, stepping on each other over wide sections of the hall, creating co-channel interference.

One Georgia Tech network engineer spent a lot of time monitoring the problem areas and working with students and faculty. In a few cases, the problems could be traced to a client-side configuration problem. But “with 120 clients on one access point, performance really goes downhill,” Lawrence says. “With the omni-directional antenna, you can only pack the access points so close.”

Shifting users to the cleaner 5 GHz was an obvious step but in practice was rarely feasible: many mobile devices still support only 2.4-GHz connections; and client radios often showed a stubborn willfulness in sticking with a 2.4-GHz connection on a distant access point even when another was available much closer.

Consulting with Cisco, Georgia Tech decided to try some newer access points, with external antenna mounts, and selected one of Cisco’s certified partners, Tessco’s Ventev Wireless Infrastructure division, to supply the directional antennas. The TerraWave products also are compatible with access points from Aruba, Juniper, Meru, Motorola and others.

Patch antennas focus the radio beam within a specific area. (A couple of vendors, Ruckus Wireless and Xirrus, have developed their own built-in “smart” antennas that adjust and focus Wi-Fi signals on clients.) Depending on the beamwidth, the effect can be that of a floodlight or a spotlight, says Jeff Lime, Ventev’s vice president. Ventev’s newest TerraWave High-Density products focus the radio beam within narrower ranges than some competing products, and offer higher gain (in effect putting more oomph into the signal to drive it further), he says.

One model, with a maximum power rating of 20 watts, can have beam widths of 18 or 28 degrees vertically, and 24 or 40 degrees horizontally, with a gain of 10 or 11 dBi, depending on the frequency range. The second model, with a 50-watt maximum power rating, has a beamwidth in both dimensions of 35 degrees, at a still higher gain of 14 dBi to drive the spotlighted signal further in really big areas like a stadium.
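The floor coverage of such a ceiling-mounted “spotlight” follows from simple trigonometry: an antenna with beamwidth θ mounted h meters above the seats illuminates a circle of radius roughly h · tan(θ/2). A rough sketch with an assumed ceiling height:

    # Rough geometry of the coverage cone: a patch antenna with beamwidth theta,
    # mounted height meters above the seats, covers a floor circle of radius
    # about height * tan(theta / 2). Mounting height and beamwidths are examples.
    import math

    def footprint_radius_m(height_m, beamwidth_deg):
        return height_m * math.tan(math.radians(beamwidth_deg) / 2.0)

    height = 6.0  # assumed lecture-hall ceiling height in meters
    for beamwidth in (18, 28, 35, 40):
        r = footprint_radius_m(height, beamwidth)
        print(f"{beamwidth:2d} degree beam -> ~{r:.1f} m coverage radius")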

At Georgia Tech, each antenna focused the Wi-Fi signal from a specific overhead access point to cover a section of seats below it. Fewer users associate with each access point. The result is a kind of virtuous circle. “It gives more capacity per user, so more bandwidth, so a better user experience,” says Lime.

The antennas come with a quartet of 36-inch cables to connect to the access points. The idea is to give IT groups maximum flexibility. But the cables initially were awkward for the IT team installing the antennas. Lawrence says they experimented with different ways of neatly and quickly wrapping up the excess cable to keep it out of the way between the access point proper and the antenna panel. They also had to modify mounting clips to get them to hold in the metal grid that forms the dropped ceiling in some of the rooms. “Little things like that can cause you some unexpected issues,” Lawrence says.

The IT staff worked with Cisco engineers to reset a dedicated controller to handle the new “high density group” of access points; and the controller automatically handled configuration tasks like setting access point power levels and selecting channels.

Another issue is that when the patch antennas were ceiling-mounted in second- or third-story rooms, their downward-shooting signal cone reached into the radio space of access points on the floor below. Lawrence says they tweaked the position of the antennas in some cases to send the spotlight signal beaming at an angle. “I look at each room and ask ‘how am I going to deploy these antennas to minimize signal bleed-through into other areas,’” he says. “Adding a high-gain antenna can have unintended consequences outside the space it’s intended for.”

But based on improved throughput and consistent signals, Lawrence says it’s likely the antennas will be used in a growing number of lecture halls and other spaces on the main and satellite campuses. “This is the best solution we’ve got for now,” he says.

Source:  networkworld.com

New malware variant suggests cybercriminals targeting SAP users

Tuesday, November 5th, 2013

The malware checks if infected systems have a SAP client application installed, ERPScan researchers said

A new variant of a Trojan program that targets online banking accounts also contains code to check whether infected computers have SAP client applications installed, suggesting that attackers might target SAP systems in the future.

The malware was discovered a few weeks ago by Russian antivirus company Doctor Web, which shared it with researchers from ERPScan, a developer of security monitoring products for SAP systems.

“We’ve analyzed the malware and all it does right now is to check which systems have SAP applications installed,” said Alexander Polyakov, chief technology officer at ERPScan. “However, this might be the beginning for future attacks.”

When malware does this type of reconnaissance to see if particular software is installed, the attackers either plan to sell access to those infected computers to other cybercriminals interested in exploiting that software or they intend to exploit it themselves at a later time, the researcher said.

Polyakov presented the risks of such attacks and others against SAP systems at the RSA Europe security conference in Amsterdam on Thursday.

To his knowledge, this is the first piece of malware targeting SAP client software that wasn’t created as a proof-of-concept by researchers, but by real cybercriminals.

SAP client applications running on workstations have configuration files that can be easily read and contain the IP addresses of the SAP servers they connect to. Attackers can also hook into the application processes and sniff SAP user passwords, or read them from configuration files and GUI automation scripts, Polyakov said.
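Defenders can run much the same reconnaissance against themselves: check which workstations carry readable SAP connection files and what server addresses those files expose in plain text. A rough sketch follows; the file locations are assumptions for illustration, not authoritative SAP paths.

    # Sketch of a self-assessment: look for readable SAP connection files on a
    # workstation and list any server IP addresses stored in plain text.
    # The candidate file locations below are assumed, not authoritative.
    import re
    from pathlib import Path

    CANDIDATE_FILES = [
        Path.home() / "AppData/Roaming/SAP/Common/saplogon.ini",  # assumed location
        Path("/etc/sap/connections.conf"),                        # assumed location
    ]
    IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

    for path in CANDIDATE_FILES:
        if path.is_file():
            text = path.read_text(errors="ignore")
            addresses = sorted(set(IPV4.findall(text)))
            print(f"{path}: readable; server addresses found: {addresses or 'none'}")
        else:
            print(f"{path}: not present")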

There’s a lot that attackers can do with access to SAP servers. Depending on what permissions the stolen credentials have, they can steal customer information and trade secrets or they can steal money from the company by setting up and approving rogue payments or changing the bank account of existing customers to redirect future payments to their account, he added.

There are efforts in some enterprise environments to limit permissions for SAP users based on their duties, but those are big and complex projects. In practice most companies allow their SAP users to do almost everything or more than what they’re supposed to, Polyakov said.

Even if some stolen user credentials don’t give attackers the access they want, there are default administrative credentials that many companies never change or forget to change on some instances of their development systems that have snapshots of the company data, the researcher said.

With access to SAP client software, attackers could steal sensitive data like financial information, corporate secrets, customer lists or human resources information and sell it to competitors. They could also launch denial-of-service attacks against a company’s SAP servers to disrupt its business operations and cause financial damage, Polyakov said.

SAP customers are usually very large enterprises. There are almost 250,000 companies using SAP products in the world, including over 80 percent of those on the Forbes 500 list, according to Polyakov.

If timed correctly, some attacks could even influence the company’s stock and would allow the attackers to profit on the stock market, according to Polyakov.

Dr. Web detects the new malware variant as part of the Trojan.Ibank family, but this is likely a generic alias, he said. “My colleagues said that this is a new modification of a known banking Trojan, but it’s not one of the very popular ones like ZeuS or SpyEye.”

However, malware is not the only threat to SAP customers. ERPScan discovered a critical unauthenticated remote code execution vulnerability in SAProuter, an application that acts as a proxy between internal SAP systems and the Internet.

A patch for this vulnerability was released six months ago, but ERPScan found that out of 5,000 SAProuters accessible from the Internet, only 15 percent currently have the patch, Polyakov said. If you get access to a company’s SAProuter, you’re inside the network and you can do the same things you can when you have access to a SAP workstation, he said.

Source:  csoonline.com

Enterprise defenses lag despite rising cybersecurity awareness

Tuesday, November 5th, 2013

Increased executive involvement and higher spending not enough, says study

Organizations are showing more interest in cybersecurity through executive involvement and higher spending. Nevertheless, the added attention is new and more resources need to be directed at defending against cyberattacks, a study shows.

Last year, no information security professionals said they reported to senior executives. Today, 35 percent report quarterly on the state of information security to the company board and the chief executive and about 10 percent report monthly, according to this year’s Global Information Security Survey from consultancy Ernst & Young.

While the upper echelon is paying more attention, they are still not spending enough to defend against cyberattackers, who are increasingly more sophisticated, according to the survey of senior executives in more than 1,900 companies and government organizations.

Half of the respondents planned to increase their cybersecurity budget by 5 percent or more over the next 12 months, yet 65 percent cited insufficient funds as their number one challenge to operating at a security level expected by their companies. For businesses with revenues of $10 million or less, the number dissatisfied with funding rose to 71 percent.

A larger percentage of budgets need to be directed at security innovation and emerging technologies within the enterprise, such as the use of mobile devices and social media, the survey found. Over the next 12 months, 14 percent of security budgets are being allocated to new technologies, yet respondents said they were unsure whether they were ready to handle the risks posed by corporate use of social media.

“Organizations need to be more forward-looking,” Ken Allan, EY global information security leader, said in a statement.

Data protection is being taken much more seriously within organizations. Rather than being treated as a line item in a contract or something left to third parties, as seen in previous surveys, three quarters of respondents were mandating self-assessments or commissioning independent external assessments.

As the attention given to cybersecurity grows, so does the need for skilled professionals. Unfortunately, the available pool of talent is insufficient. Half of the respondents cited a lack of skilled workers as a barrier to meeting all security priorities.

The scarcity of talent is compounded by a shortfall in executive attention, the survey found. The percentage of respondents citing a lack of executive awareness or support rose to 31 percent this year, from 20 percent in 2012.

“A lack of skilled talent is a global issue,” Allan said. “It is particularly acute in Europe, where governments and companies are fiercely competing to recruit the brightest talent to their teams from a very small pool.”

To become more efficient in cybersecurity, EY is recommending that businesses take time to understand the attackers targeting them and then decide on the defense strategies and technology.

“Look for the trophies that they (attackers) would be interested in and organize your defenses around that,” Chip Tsantes, a principal in EY’s cybersecurity practice, told CSOonline Friday.

Tsantes finds that the digital assets being targeted within an organization often do not correlate with where organizations are spending their money.

Gathering and sharing intelligence on cyberattackers threatening data, networks and business processes is an emerging information security discipline.

A recent survey of security decision-makers found that three quarters of them rated establishing or improving threat intelligence as a top priority for their organizations, according to Forrester Research.

In addition, a recent Ponemon Institute report found that enterprises could reduce the annual costs associated with cyberattacks by 40 percent if they had intelligence they could use to bolster defenses.

The need for improved cybersecurity is well established. Forrester Research found that 45 percent of respondents had experienced a breach at least once in the last 12 months.

EY found that 31 percent of the participants in its survey had seen at least a 5 percent increase in the number of security incidents in their organizations in the same timeframe.

Source:  csoonline.com

Meet “badBIOS,” the mysterious Mac and PC malware that jumps airgaps

Friday, November 1st, 2013

Three years ago, security consultant Dragos Ruiu was in his lab when he noticed something highly unusual: his MacBook Air, on which he had just installed a fresh copy of OS X, spontaneously updated the firmware that helps it boot. Stranger still, when Ruiu then tried to boot the machine off a CD-ROM, it refused. He also found that the machine could delete data and undo configuration changes with no prompting. He didn’t know it then, but that odd firmware update would become a high-stakes malware mystery that would consume most of his waking hours.

In the following months, Ruiu observed more odd phenomena that seemed straight out of a science-fiction thriller. A computer running the OpenBSD operating system also began to modify its settings and delete its data without explanation or prompting. His network transmitted data specific to the Internet’s next-generation IPv6 networking protocol, even from computers that were supposed to have IPv6 completely disabled. Strangest of all was the ability of infected machines to exchange small amounts of network data with other infected machines even when their power cords and Ethernet cables were unplugged and their Wi-Fi and Bluetooth cards were removed. Further investigation soon showed that the list of affected operating systems also included multiple variants of Windows and Linux.

“We were like, ‘Okay, we’re totally owned,'” Ruiu told Ars. “‘We have to erase all our systems and start from scratch,’ which we did. It was a very painful exercise. I’ve been suspicious of stuff around here ever since.”

In the intervening three years, Ruiu said, the infections have persisted, almost like a strain of bacteria that’s able to survive extreme antibiotic therapies. Within hours or weeks of wiping an infected computer clean, the odd behavior would return. The most visible sign of contamination is a machine’s inability to boot off a CD, but other, more subtle behaviors can be observed when using tools such as Process Monitor, which is designed for troubleshooting and forensic investigations.

Another intriguing characteristic: in addition to jumping “airgaps” designed to isolate infected or sensitive machines from all other networked computers, the malware seems to have self-healing capabilities.

“We had an air-gapped computer that just had its [firmware] BIOS reflashed, a fresh disk drive installed, and zero data on it, installed from a Windows system CD,” Ruiu said. “At one point, we were editing some of the components and our registry editor got disabled. It was like: wait a minute, how can that happen? How can the machine react and attack the software that we’re using to attack it? This is an air-gapped machine and all of a sudden the search function in the registry editor stopped working when we were using it to search for their keys.”

Over the past two weeks, Ruiu has taken to Twitter, Facebook, and Google Plus to document his investigative odyssey and share a theory that has captured the attention of some of the world’s foremost security experts. The malware, Ruiu believes, is transmitted through USB drives to infect the lowest levels of computer hardware. With the ability to target a computer’s Basic Input/Output System (BIOS), Unified Extensible Firmware Interface (UEFI), and possibly other firmware standards, the malware can attack a wide variety of platforms, escape common forms of detection, and survive most attempts to eradicate it.

But the story gets stranger still. In a series of online posts, Ruiu posited another theory that sounds like something from the screenplay of a post-apocalyptic movie: “badBIOS,” as Ruiu dubbed the malware, has the ability to use high-frequency transmissions passed between computer speakers and microphones to bridge airgaps.

Bigfoot in the age of the advanced persistent threat

At times as I’ve reported this story, its outline has struck me as the stuff of urban legend, the advanced persistent threat equivalent of a Bigfoot sighting. Indeed, Ruiu has conceded that while several fellow security experts have assisted his investigation, none has peer reviewed his process or the tentative findings that he’s beginning to draw. (Ruiu has published a compilation of his observations online.)

Also unexplained is why Ruiu would be on the receiving end of such an advanced and exotic attack. As a security professional, the organizer of the internationally renowned CanSecWest and PacSec conferences, and the founder of the Pwn2Own hacking competition, he is no doubt an attractive target to state-sponsored spies and financially motivated hackers. But he’s no more attractive a target than hundreds or thousands of his peers, who have so far not reported the kind of odd phenomena that has afflicted Ruiu’s computers and networks.

In contrast to the skepticism that’s common in the security and hacking cultures, Ruiu’s peers have mostly responded with deep-seated concern and even fascination to his dispatches about badBIOS.

“Everybody in security needs to follow @dragosr and watch his analysis of #badBIOS,” Alex Stamos, one of the more trusted and sober security researchers, wrote in a tweet last week. Jeff Moss—the founder of the Defcon and Black Hat security conferences who in 2009 began advising Department of Homeland Security Secretary Janet Napolitano on matters of computer security—retweeted the statement and added: “No joke it’s really serious.” Plenty of others agree.

“Dragos is definitely one of the good reliable guys, and I have never ever even remotely thought him dishonest,” security researcher Arrigo Triulzi told Ars. “Nothing of what he describes is science fiction taken individually, but we have not seen it in the wild ever.”

Been there, done that

Triulzi said he’s seen plenty of firmware-targeting malware in the laboratory. A client of his once infected the UEFI-based BIOS of his Mac laptop as part of an experiment. Five years ago, Triulzi himself developed proof-of-concept malware that stealthily infected the network interface controllers that sit on a computer motherboard and provide the Ethernet jack that connects the machine to a network. His research built off of work by John Heasman that demonstrated how to plant hard-to-detect malware known as a rootkit in a computer’s peripheral component interconnect, the Intel-developed connection that attaches hardware devices to a CPU.

It’s also possible to use high-frequency sounds broadcast over speakers to send network packets. Early networking standards used the technique, said security expert Rob Graham. Ultrasonic-based networking is also the subject of a great deal of research, including a project by scientists at MIT.
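
To make the idea concrete, the following is a minimal Python sketch of how data could be modulated onto near-ultrasonic tones using binary frequency-shift keying. The tone frequencies, bit rate, and lack of framing or error correction are illustrative assumptions only; this is not a description of badBIOS or of the MIT project, just the general technique Graham is referring to.

# A minimal sketch of near-ultrasonic data transfer using binary
# frequency-shift keying (FSK). All parameters are illustrative
# assumptions, not taken from badBIOS or any real implementation.
import wave
import numpy as np

SAMPLE_RATE = 44100      # standard sound-card sample rate (Hz)
BIT_DURATION = 0.05      # 50 ms per bit, i.e. 20 bits per second
FREQ_ZERO = 18000        # tone used for a 0 bit (Hz), near the edge of hearing
FREQ_ONE = 19000         # tone used for a 1 bit (Hz)

def bytes_to_bits(data: bytes):
    """Yield the bits of each byte, most significant bit first."""
    for byte in data:
        for shift in range(7, -1, -1):
            yield (byte >> shift) & 1

def modulate(data: bytes) -> np.ndarray:
    """Turn a byte string into an FSK audio signal (one tone per bit)."""
    samples_per_bit = int(SAMPLE_RATE * BIT_DURATION)
    t = np.arange(samples_per_bit) / SAMPLE_RATE
    tones = [np.sin(2 * np.pi * (FREQ_ONE if bit else FREQ_ZERO) * t)
             for bit in bytes_to_bits(data)]
    return np.concatenate(tones)

def write_wav(path: str, signal: np.ndarray) -> None:
    """Write the signal as a 16-bit mono WAV file that can be played back."""
    pcm = (signal * 32767).astype(np.int16)
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)          # 2 bytes = 16-bit samples
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(pcm.tobytes())

if __name__ == "__main__":
    # Encode a short message and write it out for playback over speakers.
    write_wav("beacon.wav", modulate(b"HELLO"))

On the receiving side, a program would sample the microphone and measure the energy at the two tone frequencies in each 50 ms window (for example with an FFT or a Goertzel filter) to recover the bits, which is why the technique is considered straightforward in principle even though it offers very little bandwidth.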

Of course, it’s one thing for researchers in the lab to demonstrate viable firmware-infecting rootkits and ultra high-frequency networking techniques. But as Triulzi suggested, it’s another thing entirely to seamlessly fuse the two together and use the weapon in the real world against a seasoned security consultant. What’s more, use of a USB stick to infect an array of computer platforms at the BIOS level rivals the payload delivery system found in the state-sponsored Stuxnet worm unleashed to disrupt Iran’s nuclear program. And the reported ability of badBIOS to bridge airgaps also has parallels to Flame, another state-sponsored piece of malware that used Bluetooth radio signals to communicate with devices not connected to the Internet.

“Really, everything Dragos reports is something that’s easily within the capabilities of a lot of people,” said Graham, who is CEO of penetration testing firm Errata Security. “I could, if I spent a year, write a BIOS that does everything Dragos said badBIOS is doing. To communicate over ultrahigh frequency sound waves between computers is really, really easy.”

Coincidentally, Italian newspapers this week reported that Russian spies attempted to monitor attendees of last month’s G20 economic summit by giving them memory sticks and recharging cables programmed to intercept their communications.

Eureka

For most of the three years that Ruiu has been wrestling with badBIOS, its infection mechanism remained a mystery. A month or two ago, after buying a new computer, he noticed that it was infected almost immediately after he plugged one of his USB drives into it. He soon theorized that infected computers have the ability to contaminate USB devices and vice versa.

“The suspicion right now is there’s some kind of buffer overflow in the way the BIOS is reading the drive itself, and they’re reprogramming the flash controller to overflow the BIOS and then adding a section to the BIOS table,” he explained.

He still doesn’t know if a USB stick was the initial infection trigger for his MacBook Air three years ago, or if the USB devices were infected only after they came into contact with his compromised machines, which he said now number between one and two dozen. He said he has been able to identify a variety of USB sticks that infect any computer they are plugged into. At next month’s PacSec conference, Ruiu said he plans to get access to expensive USB analysis hardware that he hopes will provide new clues about the infection mechanism.

He said he suspects badBIOS is only the initial module of a multi-staged payload that has the ability to infect the Windows, Mac OS X, BSD, and Linux operating systems.

“It’s going out over the network to get something or it’s going out to the USB key that it was infected from,” he theorized. “That’s also the conjecture of why it’s not booting CDs. It’s trying to keep its claws, as it were, on the machine. It doesn’t want you to boot another OS it might not have code for.”

To put it another way, he said, badBIOS “is the tip of the warhead, as it were.”

“Things kept getting fixed”

Ruiu said he arrived at the theory about badBIOS’s high-frequency networking capability after observing encrypted data packets being sent to and from an infected laptop that had no obvious network connection with—but was in close proximity to—another badBIOS-infected computer. The packets were transmitted even when the laptop had its Wi-Fi and Bluetooth cards removed. Ruiu also disconnected the machine’s power cord so it ran only on battery to rule out the possibility that it was receiving signals over the electrical connection. Even then, forensic tools showed that packets continued to flow to and from the airgapped machine. Then, when Ruiu removed the internal speaker and microphone connected to the airgapped machine, the packets suddenly stopped.

With the speakers and mic intact, Ruiu said, the isolated computer seemed to be using the high-frequency connection to maintain the integrity of the badBIOS infection as he worked to dismantle software components the malware relied on.

“The airgapped machine is acting like it’s connected to the Internet,” he said. “Most of the problems we were having is we were slightly disabling bits of the components of the system. It would not let us disable some things. Things kept getting fixed automatically as soon as we tried to break them. It was weird.”

It’s too early to say with confidence that what Ruiu has been observing is a USB-transmitted rootkit that can burrow into a computer’s lowest levels and use it as a jumping off point to infect a variety of operating systems with malware that can’t be detected. It’s even harder to know for sure that infected systems are using high-frequency sounds to communicate with isolated machines. But after almost two weeks of online discussion, no one has been able to rule out these troubling scenarios, either.

“It looks like the state of the art in intrusion stuff is a lot more advanced than we assumed it was,” Ruiu concluded in an interview. “The take-away from this is a lot of our forensic procedures are weak when faced with challenges like this. A lot of companies have to take a lot more care when they use forensic data if they’re faced with sophisticated attackers.”

Source:  arstechnica.com