Archive for December, 2013

SaaS predictions for 2014

Friday, December 27th, 2013

While the bulk of enterprise software is still deployed on-premises, SaaS (software as a service) continues to undergo rapid growth. Gartner has said the total market will top $22 billion through 2015, up from more than $14 billion in 2012.

The SaaS market will likely see significant changes and new trends in 2014 as vendors jockey for competitive position and customers continue shifting their IT strategies toward the deployment model. Here’s a look at some of the possibilities.

The matter of multitenancy: SaaS vendors such as Salesforce.com have long touted the benefits of multitenancy, a software architecture where many customers share a single application instance, with their information kept separate. Multitenancy allows vendors to patch and update many customers at once and get more mileage out of the underlying infrastructure, thereby cutting costs and easing management.
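In practice, application-level multitenancy usually comes down to tagging every row with a tenant identifier and scoping every query to the tenant making the request. The sketch below illustrates the idea in a few lines of Python; it is a generic illustration, not Salesforce.com's actual design.

    # Minimal sketch of application-level multitenancy (illustrative only):
    # one shared schema, every row tagged with a tenant_id, and every query
    # scoped to the requesting tenant.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE accounts (tenant_id TEXT, name TEXT)")
    db.executemany("INSERT INTO accounts VALUES (?, ?)",
                   [("acme", "Big Deal Inc."), ("globex", "Widget Co.")])

    def accounts_for(tenant_id):
        # The tenant_id filter is what keeps customers' data separate even
        # though they share one application instance and one database.
        rows = db.execute("SELECT name FROM accounts WHERE tenant_id = ?",
                          (tenant_id,))
        return [name for (name,) in rows]

    print(accounts_for("acme"))    # ['Big Deal Inc.']
    print(accounts_for("globex"))  # ['Widget Co.']

Because one patch or schema change reaches every tenant at once, the vendor gets the management and cost benefits described above; the trade-off is that tenants share fate on performance and availability.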

This year, however, other variations on multitenancy emerged, such as one offered by Oracle’s new 12c database. An option for the release allows customers to host many “pluggable” databases within a single host database, an approach that Oracle says is more secure than the application-level multitenancy used by Salesforce.com and others.

Salesforce.com itself has made a shift away from its original definition of multitenancy. During November’s Dreamforce conference, CEO Marc Benioff announced a partnership with Hewlett-Packard around a new “Superpod” option for large enterprises, wherein companies can have their own dedicated infrastructure inside Salesforce.com data centers based on HP’s Converged Infrastructure hardware.

Some might say this approach differs little from traditional application hosting. Overall, in 2014 expect multitenancy to fade away as a major talking point for SaaS.

Hybrid SaaS: Oracle has made much of the fact that its Fusion Applications can be deployed either on-premises or from its cloud, but due to the apparent complexity involved with the first option, most initial Fusion customers have chosen SaaS.

Still, the concept of application code bases that can move between the two deployment models could become more popular in 2014.

While there’s no indication Salesforce.com will offer an on-premises option — and indeed, such a thing seems almost inconceivable considering the company’s “No Software” logo and marketing campaign around the convenience of SaaS — the HP partnership is clearly meant to give big companies that still have jitters about traditional SaaS a happy medium.

As in all cases, customer demand will dictate SaaS vendors’ next moves.

Geographic depth: It was no accident that Oracle co-President Mark Hurd mentioned during the company’s recent earnings call that it now has 17 data centers around the world. Vendors want enterprise customers to know their SaaS offerings are built for disaster recovery and are broadly available.

Expect “a flurry of announcements” in 2014 from SaaS vendors regarding data center openings around the world, said China Martens, an independent business applications analyst, via email. “This is another move likely to benefit end-user firms. Some firms at present may not be able to proceed with a regional or global rollout of SaaS apps because of a lack of local data center support, which may be mandated by national data storage or privacy laws.”

Keeping customers happy: On-premises software vendors such as Oracle and SAP are now honing their knowledge of something SaaS vendors such as NetSuite and Salesforce.com had to learn years earlier: How to run a software business based on annual subscriptions, not perpetual software licenses and annual maintenance.

The latter model provides companies with big one-time payments followed by highly profitable support fees. With SaaS, the money flows into a vendor’s coffers in a much different manner, and it’s arguably also easier for dissatisfied customers to move to a rival product compared to an on-premises deployment.

As a result, SaaS vendors have suffered from “churn,” or customer turnover. In 2014, there will be increased focus on ways to keep customers happy and in the fold, according to Karan Mehandru, general partner at venture capital firm Trinity Ventures.

Next year “will further awareness that the purchase of software by a customer is not the end of the transaction but rather the beginning of a relationship that lasts for years,” he wrote in a recent blog post. “Customer service and success will be at the forefront of the customer relationship management process where terms like retention, upsells and churn reduction get more air time in board meetings and management sessions than ever before.”

Consolidation in marketing, HCM: Expect a higher pace of merger and acquisition activity in the SaaS market “as vendors buy up their competitors and partners,” Martens said.

HCM (human capital management) and marketing software companies may particularly find themselves being courted. Oracle, SAP and Salesforce.com have all invested heavily in these areas already, but the likes of IBM and HP may also feel the need to get in the game.

A less likely scenario would be a major merger between SaaS vendors, such as Salesforce.com and Workday.

SaaS goes vertical: “There will be more stratification of SaaS apps as vendors build or buy with the aim of appealing to particular types of end-user firms,” Martens said. “In particular, vendors will either continue to build on early industry versions of their apps and/or launch SaaS apps specifically tailored to particular verticals, e.g., healthcare, manufacturing, retail.”

However, customers will be burdened with figuring out just how deep the industry-specific features in these applications are, as well as gauging how committed the vendor is to the particular market, Martens added.

Can’t have SaaS without a PaaS: Salesforce.com threw down the gauntlet to its rivals in November, announcing Salesforce1, a revamped version of its PaaS (platform as a service) that couples its original Force.com offering with tools from its Heroku and ExactTarget acquisitions, a new mobile application, and 10 times as many APIs (application programming interfaces) as before.

A PaaS serves as a multiplying force for SaaS companies, creating a pool of developers and systems integrators who create add-on applications and provide services to customers while sharing an interest in the vendor’s success.

Oracle, SAP and other SaaS vendors have been building out their PaaS offerings and will make plenty of noise about them next year.

Source:  cio.com

Target’s nightmare goes on: Encrypted PIN data stolen

Friday, December 27th, 2013

After hackers stole credit and debit card records for 40 million Target store customers, the retailer said customers’ personal identification numbers, or PINs, had not been breached.

Not so.

On Friday, a Target spokeswoman backtracked from previous statements and said criminals had made off with customers’ encrypted PIN information as well. But Target said the company stored the keys to decrypt its PIN data on separate systems from the ones that were hacked.

“We remain confident that PIN numbers are safe and secure,” Molly Snyder, Target’s spokeswoman said in a statement. “The PIN information was fully encrypted at the keypad, remained encrypted within our system, and remained encrypted when it was removed from our systems.”

The problem is that when it comes to security, experts say the general rule of thumb is: where there is a will, there is a way. Criminals have already been selling Target customers’ credit and debit card data on the black market, where a single card is selling for as much as $100. Criminals can use that card data to create counterfeit cards. But PIN data is the most coveted of all. With PIN data, cybercriminals can make withdrawals from a customer’s account through an automatic teller machine. And even if the key to unlock the encryption is stored on separate systems, security experts say there have been cases where hackers managed to get the keys and successfully decrypt scrambled data.

Even before Friday’s revelations about the PIN data, two major banks, JPMorgan Chase and Santander Bank, both placed caps on customer purchases and withdrawals made with compromised credit and debit cards. That move, which security experts say is unprecedented, brought complaints from customers trying to do last-minute shopping in the days leading to Christmas.

Chase said it is in the process of replacing all of its customers’ debit cards — about 2 million of them — that were used at Target during the breach.

The Target breach, from Nov. 27 to Dec. 15, is officially the second-largest breach of a retailer in history. The biggest was a 2005 breach at TJMaxx that compromised records for 90 million customers.

The Secret Service and Justice Department continue to investigate.

Source:  nytimes.com

Cyber criminals offer malware for Nginx, Apache Web servers

Thursday, December 26th, 2013

A new malware program that functions as a module for the Apache and Nginx Web servers is being sold on cybercrime forums, according to researchers from security firm IntelCrawler.

The malware is called Effusion and according to the sales pitch seen by IntelCrawler, a start-up firm based in Los Angeles that specializes in cybercrime intelligence, it can inject code in real time into websites hosted on the compromised Web servers. By injecting content into a website, attackers can redirect visitors to exploits or launch social engineering attacks.

The Effusion module works with Nginx from version 0.7 up to the latest stable version, 1.4.4, and with Apache running on 32- and 64-bit versions of Linux and FreeBSD. Modules extend Apache’s and Nginx’s core functionality.

The malware can inject rogue code into static content of certain MIME types, including JavaScript and HTML, and in PHP templates at the start, end or after a specific tag. Attackers can push configuration updates and control code modifications remotely.

Filters can also be used to restrict when the injection happens. Effusion supports filtering by referrer header, which can be used to target only visitors who come from specific websites; by User-Agent header, which can be used to target users of specific browsers; and by IP address or address range.

The malware can check whether it has root access, something that could allow the attackers greater control over the underlying system. It can also delete the injected content when suspicious processes are detected in order to hide itself, Andrey Komarov, IntelCrawler’s CEO, said via email.

The Effusion authors offer precompiled builds for $2,500 per build and plan to vet buyers, Komarov said. This suggests they’re interested in selling it only to a limited number of people so they can continue to offer support and develop the malware at the same time, he said.

While this is not the first malware to function as an Apache module, it is one of the very few so far to also target Nginx, a high-performance Web server that has grown considerably in popularity in recent years.

According to a December Web server survey by Internet services firm Netcraft, Nginx is the third most widely used Web server software after Apache and Microsoft IIS, and has a market share of over 14%. Because it’s built to handle high numbers of concurrent connections, it is used to host heavily trafficked websites including Netflix, Hulu, Pinterest, CloudFlare, Airbnb, WordPress.com, GitHub and SoundCloud.

Source: computerworld.com

Huawei sends 400Gbps over next-generation optical network

Thursday, December 26th, 2013

Huawei Technologies and Polish operator Exatel have tested a next-generation optical network based on WDM (Wavelength Division Multiplexing) technology and capable of 400Gbps throughput.

More data traffic and the need for greater transmission speed in both fixed and wireless networks have consequences for all parts of operator networks. While faster versions of technologies such as LTE are being rolled out at the edge of networks, vendors are working on improving WDM (Wavelength-Division Multiplexing) to help them keep up at the core.

WDM sends large amounts of data using a number of different wavelengths, or channels, over a single optical fiber.

However, the test conducted by Huawei and Exatel only used one channel to send the data, which has its advantages, according to Huawei. It means the system only needs one optical transceiver, which is used to both send and receive data. That, in turn, results in lower power consumption and a smaller chance that something may go wrong, it said.

Huawei didn’t say when it expects to include the technology in commercial products.

Currently operators are upgrading their networks to include 100Gbps links. That increased third quarter spending on optical networks in North America by 13.4 percent year-over-year, following an 11.1 percent increase in the previous quarter, according to Infonetics Research. Huawei, Ciena, and Alcatel-Lucent were the WDM market share leaders, it said.

Source:  networkworld.com

Critics: NSA agent co-chairing key crypto standards body should be removed

Monday, December 23rd, 2013

There’s an elephant in the room at the Internet Engineering Task Force.

Security experts are calling for the removal of a National Security Agency employee who co-chairs an influential cryptography panel, which advises a host of groups that forge widely used standards for the Internet Engineering Task Force (IETF).

Kevin Igoe, who in a 2011 e-mail announcing his appointment was listed as a senior cryptographer with the NSA’s Commercial Solutions Center, is one of two co-chairs of the IETF’s Crypto Forum Research Group (CFRG). The CFRG provides cryptographic guidance to IETF working groups that develop standards for a variety of crucial technologies that run and help secure the Internet. The transport layer security (TLS) protocol that underpins Web encryption and standards for secure shell connections used to securely access servers are two examples. Igoe has been CFRG co-chair for about two years, along with David A. McGrew of Cisco Systems.

Igoe’s leadership had largely gone unnoticed until reports surfaced in September that exposed the role NSA agents have played in “deliberately weakening the international encryption standards adopted by developers.” Until now, most of the resulting attention has focused on cryptographic protocols endorsed by the separate National Institute of Standards and Technology. More specifically, scrutiny has centered on a random number generator that The New York Times, citing a document leaked by former NSA contractor Edward Snowden, reported may contain a backdoor engineered by the spy agency.

Enter Dragonfly

Less visibly, the revelations about the NSA’s influence on crypto standards have also renewed suspicions about the agency’s role in the IETF. To wit: they have brought new urgency to long-simmering criticism claiming that the CFRG was advocating the addition of a highly unproven technology dubbed “Dragonfly” to the TLS technology websites use to provide HTTPS encryption. Despite a lack of consensus about the security of Dragonfly, Igoe continued to champion it, critics said, citing several e-mails Igoe sent in the past two years. Combined with his ties to the NSA, Igoe’s continued adherence to Dragonfly is creating a lack of confidence in his leadership, critics said.

“Kevin’s NSA affiliation raises unpleasant but unavoidable questions regarding these actions,” Trevor Perrin, a crypto expert and one of the most vocal critics, wrote Friday in an e-mail to the CFRG mailing list. “It’s entirely possible these are just mistakes by a novice chair who lacks experience in a particular sort of protocol and is being pressured by IETF participants to endorse something. But it’s hard to escape an impression of carelessness and unseriousness in Kevin’s work. One wonders whether the NSA is happy to preside over this sort of sloppy crypto design.”

Igoe and McGrew didn’t respond to an e-mail seeking comment. This article will be updated if they respond later.

Like the Dual_EC_DRBG standard adopted by NIST and now widely suspected to contain a backdoor, Dragonfly came with no security proof. And unlike several other better-known candidates for “password-authenticated key exchange” (PAKE), most people participating in the CFRG or TLS working group knew little or nothing about it. TLS already has an existing PAKE called SRP, which critics say makes Dragonfly particularly redundant. PAKEs are complex and still not widely understood by crypto novices, but in essence, they involve the use of passwords to negotiate cryptographic keys used in encrypted TLS communications between servers and end users.

Update: Dragonfly developer Dan Harkins strongly defended the security of the PAKE.

“There are no known security vulnerabilities with dragonfly,” he wrote in an e-mail after this article was first published. “But it does not have a formal security proof to accompany it, unlike some other PAKE schemes. So the TLS working group asked the CFRG to look at it. They were not asked to ‘approve’ it, and they weren’t asked to ‘bless’ it. Just take a look and see if there’s any problems that would make it unsuitable for TLS. There were comments received on the protocol and they were addressed. There were no issues found that make it unsuitable for TLS.”

Harkins also took issue with characterizations by critics and this Ars article that Dragonfly is “untested” and “highly unproven.” He said it’s used in the 802.11 Wi-Fi standard as a secure, drop-in replacement for WPA-PSK security protocol. It’s also found as a method in the extensible authentication protocol and as an alternative to pre-shared keys in the Internet key exchange protocol.

“Do you know of another PAKE scheme that has been so widely applied?” he wrote in his response.

Perrin is a programmer who primarily develops cryptographic applications. He is the developer or co-developer of several proposed Internet standards, including trust assertions for certificate keys and the asynchronous protocol for secure e-mail. In Friday’s e-mail, he provided a raft of reasons why he said Igoe should step down:

1) Kevin has provided the *ONLY* positive feedback for Dragonfly that can be found on the CFRG mailing list or meeting minutes. The contrast between Kevin’s enthusiasm and the group’s skepticism is striking [CFRG_SUMMARY]. It’s unclear what this enthusiasm is based on. There’s no record of Kevin making any effort to understand Dragonfly’s unusual structure, compare it to alternatives, consider possible use cases, or construct a formal security analysis.

2) Twice Kevin suggested a technique for deriving the Dragonfly password-based element which would make the protocol easy to break [IGOE_1, IGOE_2]. He also endorsed an ineffective attempt to avoid timing attacks by adding extra iterations to one of the loops [IGOE_3, IGOE_4]. These are surprising mistakes from an experienced cryptographer.

3) Kevin’s approval of Dragonfly to the TLS WG misrepresented CFRG consensus, which was skeptical of Dragonfly [CFRG_SUMMARY].

Perrin’s motion has been seconded by several other participants, including cryptographer William Whyte. Another critic supporting Igoe’s removal called on security expert Bruce Schneier to replace Igoe. In an e-mail to Ars, Schneier said he is unsure if he is a suitable candidate. “I’m probably too busy to chair, and I’m not really good at the whole ‘organizing a bunch of people’ thing,” he wrote.

In Harkins’ 1,117-word response, he wrote:

The opposition to it in TLS is not “long-simmering” as alleged in the article. It is very recent and the most vocal critic actually didn’t say anything until _after_ the close of Working Group Last Call(a state of draft development on the way to RFC status). As part of his critique, Trevor Perrin has noted that dragonfly has no security proof. That’s true and it’s certainly not new. Having a formal proof has never been a requirement in the past and it is not a requirement today. He has continued to refer to the comments received about the draft as if they are signs of flaws. This is especially shocking given he is referred to in the article as “the developer or co-developer of several proposed Internet standards.” Someone who develops, or co-develops Internet Standards knows how the sausage making works. Comments are made, comments are addressed. There has, to my knowledge, never been an Internet Draft that’s perfect in it’s -00 revision and went straight to publication as an RFC. His criticism is particularly mendacious.

Trevor Perrin has also pointed out the technique in which dragonfly generates a password-based element as being flawed. The technique was the result of a 2 year old thread on the TLS list on how to address a possible side-channel attack. Trevor doesn’t like it, which is fair, but on the TLS mailing list he has also said that even if it was changed to a way he wants he would still be against dragonfly.

Anyone who has spent any time at all watching how standards bodies churn out the sausage knows that suspicions and vast conspiracy theories are almost always a part of the proceedings. But in a post-Snowden world, there’s new legitimacy to criticism about NSA involvement, particularly when employees of the agency are the ones actively shepherding untested proposals.

Source:  arstechnica.com

Computers share their secrets if you listen

Friday, December 20th, 2013

Be afraid, friends, for science has given us a new way in which to circumvent some of the strongest encryption algorithms used to protect our data — and no, it’s not some super secret government method, either. Researchers from Tel Aviv University and the Weizmann Institute of Science discovered that they could steal even the largest, most secure RSA 4096-bit encryption keys simply by listening to a laptop as it decrypts data.

To accomplish the trick, the researchers used a microphone to record the noises made by the computer, then ran that audio through filters to isolate the vibrations made by the electronic internals during the decryption process. With that accomplished, some cryptanalysis revealed the encryption key in around an hour. Because the vibrations in question are so small, however, you need to have a high powered mic or be recording them from close proximity. The researchers found that by using a highly sensitive parabolic microphone, they could record what they needed from around 13 feet away, but could also get the required audio by placing a regular smartphone within a foot of the laptop. Additionally, it turns out they could get the same information from certain computers by recording their electrical ground potential as it fluctuates during the decryption process.

Of course, the researchers only cracked one kind of RSA encryption, but they said that there’s no reason why the same method wouldn’t work on others — they’d just have to start all over to identify the specific sounds produced by each new encryption software. Guess this just goes to prove that while digital security is great, it can be rendered useless without its physical counterpart. So, should you be among the tin-foil hat crowd convinced that everyone around you is a potential spy, waiting to steal your data, you’re welcome for this newest bit of food for your paranoid thoughts.

Source:  engadget.com

New modulation scheme said to be ‘breakthrough’ in network performance

Friday, December 20th, 2013

A startup plans to demonstrate next month a new digital modulation scheme that promises to dramatically boost bandwidth, capacity, and range, with less power and less distortion, on both wireless and wired networks.

MagnaCom, a privately held company based in Israel, now has more than 70 global patent applications, and 15 issued patents in the U.S., for what it calls and has trademarked Wave Modulation (or WAM), which is designed to replace the long-dominant quadrature amplitude modulation (QAM) used in almost every wired or wireless product today on cellular, microwave radio, Wi-Fi, satellite and cable TV, and optical fiber networks. The company revealed today that it plans to demonstrate WAM at the Consumer Electronics Show, Jan. 7-10, in Las Vegas.

The vendor, which has released few specifics about WAM, promises extravagant benefits: up to 10 decibels of additional gain compared to the most advanced QAM schemes today; up to 50 percent less power; up to 400 percent more distance; up to 50 percent spectrum savings. WAM tolerates noise or interference better, has lower costs, is 100 percent backward compatible with existing QAM-based systems; and can simply be swapped in for QAM technology without additional changes to other components, the company says.

Modulation is a way of conveying data by changing some aspect of a carrier signal (sometimes called a carrier wave). A very imperfect analogy is covering a lamp with your hand to change the light beam into a series of long and short pulses, conveying information based on Morse code.

QAM, which is both an analog and a digital modulation scheme, “conveys two analog message signals, or two digital bit streams, by changing the amplitudes of two carrier waves,” as the Wikipedia entry explains. It’s used in Wi-Fi, microwave backhaul, optical fiber systems, digital cable television and many other communications systems. Without going into the technical details, you can make QAM more efficient or denser. For example, nearly all Wi-Fi radios today use 64-QAM. But 802.11ac radios can use 256-QAM. In practical terms, that change boosts the data rate by about 33 percent.
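That 33 percent figure is simple symbol arithmetic: an M-QAM symbol carries log2(M) bits, so moving from 64-QAM to 256-QAM raises the payload from 6 to 8 bits per symbol. A quick back-of-the-envelope check (illustrative only; real-world gains are lower once coding overhead and the tougher signal-to-noise requirements are factored in):

    # Raw gain from denser QAM: an M-QAM symbol carries log2(M) bits.
    from math import log2

    bits_64qam = log2(64)     # 6 bits per symbol
    bits_256qam = log2(256)   # 8 bits per symbol
    gain = (bits_256qam / bits_64qam - 1) * 100
    print(f"{gain:.0f}% more bits per symbol")  # prints: 33% more bits per symbol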

But there are tradeoffs. The denser the QAM scheme, the more vulnerable it is to electronic “noise.” And amplifying a denser QAM signal requires bigger, more powerful amplifiers: when they run at higher power, which is another drawback, they also introduce more distortion.

MagnaCom claims that WAM modulation delivers vastly greater performance and efficiencies than current QAM technology, while minimizing if not eliminating the drawbacks. But so far, it’s not saying how WAM actually does that.

“It could be a breakthrough, but the company has not revealed all that’s needed to assure the world of that,” says Will Straus, president of Forward Concepts, a market research firm that focuses on digital signal processing, cell phone chips, wireless communications and related markets. “Even if the technology proves in, it will take many years to displace QAM that’s already in all digital communications. That’s why only bounded applications — where WAM can be [installed] at both ends – will be the initial market.”

“There are some huge claims here,” says Earl Lum, founder of EJL Wireless, a market research firm that focuses on microwave backhaul, cellular base station, and related markets. “They’re not going into exactly how they’re doing this, so it’s really tough to say that this technology is really working.”

Lum, who originally worked as an RF design engineer before switching to wireless industry equities research on Wall Street, elaborated on two of those claims: WAM’s greater distance and its improved spectral efficiency.

“Usually as you go higher in modulation, the distance shrinks: it’s inversely proportional,” he explains. “So the 400 percent increase in distance is significant. If they can compensate and still get high spectral efficiency and keep the distance long, that’s what everyone is trying to have.”

The spectrum savings of up to 50 percent is important, too. “You might be able to double the amount of channels compared to what you have now,” Lum says. “If you can cram more channels into that same spectrum, you don’t have to buy more [spectrum] licenses. That’s significant in terms of how many bits-per-hertz you can realize. But, again, they haven’t specified how they do this.”

According to MagnaCom, WAM uses some kind of spectral compression to improve spectral efficiency. WAM can simply be substituted for existing QAM technology in any product design. Some of WAM’s features should result in simpler transmitter designs that are less expensive and use less power.

For the CES demonstration next month, MagnaCom has partnered with Altera Corp., which provides custom field programmable gate arrays, ASICs and other custom logic solutions.

Source:  networkworld.com

Unique malware evades sandboxes

Thursday, December 19th, 2013

Malware used in attack on PHP.net last month dubbed DGA.Changer

Malware utilized in the attack last month on the developers’ site PHP.net used a unique approach to avoid detection, a security expert says.

On Wednesday, security vendor Seculert reported finding that one of five malware types used in the attack had a unique cloaking property for evading sandboxes. The company called the malware DGA.Changer.

DGA.Changer’s only purpose was to download other malware onto infected computers, Aviv Raff, chief technology officer for Seculert, said on the company’s blog. Seculert identified 6,500 compromised computers communicating with the malware’s command and control server. Almost 60 percent were in the United States.

What Seculert found unique was how the malware could receive a command from a C&C server to change the seed of the software’s domain generation algorithm. The DGA periodically generates a large number of domain names as potential communication points to the C&C server, thereby making it difficult for researchers and law enforcement to find the right domain and possibly shut down the botnet.

“What the attackers behind DGA did is basically change the algorithm on the fly, so they can tell the malware to create a new stream of domains automatically,” Raff told CSOonline.

As long as the malware generates the same list of domains, it can be detected in a sandbox, where security technology will isolate suspicious files. Changing the algorithm on demand, however, means that the malware won’t be identified.
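To see why a configurable seed defeats list-based detection, consider a toy domain generation algorithm (purely illustrative, not DGA.Changer's actual code). The same date with a different seed yields a completely different set of candidate domains, so any blocklist or sandbox signature built from the old list misses the new one.

    # Toy DGA sketch (illustrative only). Defenders who reverse-engineer a
    # fixed algorithm can pre-compute and block its domains; changing the
    # seed on command invalidates that list without changing the code.
    import hashlib
    from datetime import date

    def generate_domains(seed, day=date(2013, 12, 19), count=5):
        domains = []
        for i in range(count):
            data = f"{seed}-{day.isoformat()}-{i}".encode()
            label = hashlib.sha256(data).hexdigest()[:12]
            domains.append(label + ".com")
        return domains

    print(generate_domains(seed=1))  # the list a sandbox might have on file
    print(generate_domains(seed=2))  # same date, entirely different domains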

“This is a new capability that didn’t exist before,” Raff said. “This capability allows the attacker to bypass sandbox technology.”

Hackers working for a nation-state targeting specific entities, such as government agencies, think tanks or international corporations, would use this type of malware, according to Raff. Called advanced persistent threats, these hackers tend to use sophisticated attack tools.

An exploit kit that served five different malware types was used in compromising two servers of PHP.net, a site for downloads and documentation related to the PHP general-purpose scripting language used in Web development. Google spotted four pages on the site serving malicious JavaScript that targeted personal computers, but ignored mobile devices.

The attack was noteworthy because of the number of visitors to PHP.net, which is in the top 250 domains on the Internet, according to Alexa rankings.

To defend against DGA.Changer, companies would need a tool that looks for abnormal behavior in network traffic. The malware tends to generate unusual traffic by querying lots of domains in search of the one leading to the C&C server.

“Because this malware will try to go to different domains, it will generate suspicious traffic,” Raff said.
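A defender-side sketch of that idea (hypothetical thresholds; real tools also weigh failed-lookup rates, domain entropy and other signals) is to count how many nonexistent domains each internal host tries to resolve in a given window:

    # Sketch of DGA-style traffic detection from DNS query logs.
    from collections import defaultdict

    def flag_suspicious_hosts(dns_log, threshold=100):
        # dns_log: iterable of (client_ip, queried_domain, rcode) tuples
        failed_lookups = defaultdict(set)
        for client_ip, domain, rcode in dns_log:
            if rcode == "NXDOMAIN":      # lookup failed: domain does not exist
                failed_lookups[client_ip].add(domain)
        return [ip for ip, domains in failed_lookups.items()
                if len(domains) >= threshold]

    # A host that fails to resolve hundreds of unique domains in a short
    # window is likely iterating through a generated domain list.
    sample = [("10.0.0.5", f"{i}.example.net", "NXDOMAIN") for i in range(150)]
    sample += [("10.0.0.9", "intranet.local", "NOERROR")]
    print(flag_suspicious_hosts(sample))  # ['10.0.0.5']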

Seculert did not find any evidence that would indicate who was behind the PHP.net attack.

“This is a group that’s continuously updating this malicious software, so this is a work in progress,” Raff said.

Source:  csoonline.com

Windows 7 given a reprieve of sorts to extend OEM sales

Friday, December 13th, 2013

October 30, 2014 is no longer the cut-off date—well, at least for now.

Microsoft updated its Windows lifecycle table last week, quietly announcing that OEMs would have to cease preinstalling Windows 7 on new systems by October 30, 2014. Retail sales of boxed copies of the operating system have already ceased, ending on October 30 of this year.

But the company has now removed that 2014 date, claiming that it was a mistake. The date is now “to be determined.” The issued statement about the mistake reads:

“We have yet to determine the end of sales date for PCs with Windows 7 preinstalled. The October 30, 2014 date that posted to the Windows Lifecycle page globally last week was done so in error. We have since updated the website to note the correct information; however, some non-English language pages may take longer to revert to correctly reflect that the end of sales date is ‘to be determined.’ We apologize for any confusion this may have caused our customers. We’ll have more details to share about the Windows 7 lifecycle once they become available.”

This of course leaves open the possibility that the October 30, 2014 date could be the cut-off.

As things stand, Windows 7 is still due to leave mainstream support on January 13, 2015, giving Windows 7 systems bought near the end of OEM availability just a few months of full support. Extended support—which for the most part means “security fixes”—is due to run until January 14, 2020.

More pressing is the end of Windows XP’s extended support, which is still due to terminate on April 8, 2014.

Source:  arstechnica.com

Google’s Dart language heads for standardization with new Ecma committee

Friday, December 13th, 2013

Ecma, the same organization that governs the standardization and development of JavaScript (or “EcmaScript” as it’s known in standardese), has created a committee to oversee the publication of a standard for Google’s alternative Web language, Dart.

Technical Committee 52 will develop standards for Dart language and libraries, create test suites to verify conformance with the standards, and oversee Dart’s future development. Other technical committees within Ecma perform similar work for EcmaScript, C#, and the Eiffel language.

Google released version 1.0 of the Dart SDK last month and believes that the language is sufficiently stable and mature to be both used in a production capacity and put on the track toward creating a formal standard. The company asserts that this will be an important step toward embedding native Dart support within browsers.

Source:  arstechnica.com

Microsoft exec hints at separate Windows release trains for consumers, business

Monday, December 9th, 2013

Resistance from enterprises, and Ballmer’s departure, may be changing Microsoft’s mind

Microsoft may revert to separate release schedules for consumer and business versions of Windows, the company’s top operating system executive hinted this week.

At a technology symposium hosted by financial services giant Credit Suisse, Terry Myerson acknowledged the operating system adoption chasm between consumers and more conservative corporations. Myerson, who formerly led the Windows Phone team, was promoted in July to head all client-based OS development, including that for smartphones, tablets, PCs and the Xbox game console.

“The world has shown that these two different customers really have divergent needs,” said Myerson Wednesday, according to a transcript of his time on stage. “And there may be different cadences, or different ways in which we talk to those two customers. And so [while Windows] 8.1 and [Windows] 8.1 Pro both came at the same time, it’s not clear to me that’s the right way to serve the consumer market. [But] it may be the right way to continue serving the enterprise market.”

Myerson’s comment hinted at a return to a practice last used in the early years of this century, when Microsoft delivered new operating systems to the company’s consumer and commercial customers on different schedules.

Before 2001’s arrival of Windows XP — when Microsoft shipped consumer and business versions simultaneously — Microsoft aimed different products, with different names, at each category. In 2000, for example, Microsoft delivered Windows ME, for “Millennium Edition,” to consumers and Windows 2000 to businesses. Prior to that, Windows 95, although widely used in businesses, was the consumer-oriented edition, while Windows NT 4.0, which launched in 1996, targeted business PCs and servers.

The update/upgrade-acceptance gap between consumers and businesses reappeared after Microsoft last year said it would accelerate its development and release schedule for Windows, then delivered on the first example of that tempo, Windows 8.1, just a year after the launch of its predecessor.

Enterprises have become nervous about the cadence, say analysts. Businesses as a rule are much more conservative about upgrading their machines’ operating systems than are consumers: The former must spend thousands, even millions, to migrate from one version to another, and must test the compatibility of in-house and mission-critical applications, then rewrite them if they don’t work.

That conservative approach to upgrades was a major reason why Windows XP retained a stranglehold on business PCs for more than a decade, and why Windows 7, not Windows 8 or 8.1, has replaced it.

It’s extremely difficult to serve both masters — consumer and commercial — equally well, said Patrick Moorhead, principal analyst at Moor Insights & Strategy. “No one has yet mastered being good on enterprise and good on consumer,” said Moorhead in an interview. “[The two] are on completely different cycles.”

In October, outgoing CEO Steve Ballmer dismissed concerns over the faster pace. At a Gartner Research-sponsored conference, when analyst David Cearley noted, “Enterprises are concerned about that accelerated delivery cycle,” Ballmer simply shook his head.

“Let me push back,” said Ballmer, “and say, ‘Not really.’ If our customers have to take DVDs from us, install them, and do customer-premise software, you’re saying to us ‘Don’t upgrade that software very often … two to three years is perfect.’ But if we deliver something to you that’s a service, as we do with Office 365, our customers are telling us, ‘We want to be up to date at all times.'”

Another Gartner analyst, Michael Silver, countered Ballmer’s claim. “Organizations need to be afraid of what’s to come,” Silver said at the time. “If [companies] get on this release train, Microsoft will take them where [Microsoft] wants to go, or [Microsoft] will run them over.”

Myerson’s hint of separate release trains, to use Silver’s terminology, may be a repudiation of Ballmer’s contention. Or not.

His statement, “It may be the right way to continue serving the enterprise market,” could be interpreted to mean that Microsoft will maintain an accelerated tempo for business versions of Windows — one faster than the three years between upgrades that the company has used in the past — and speed up Windows updates to consumers even more.

“The consumer really is ready for things to be upgraded on their own,” Myerson said.

“Microsoft’s biggest strategic question is, ‘Am I an enterprise company or a consumer company, or both?’” said Moorhead. “Something has to break here.”

And one crack might be, according to Myerson, a separation of consumer and commercial on Windows.

Source:  infoworld.com

FCC postpones spectrum auction until mid 2015

Monday, December 9th, 2013

In a blog post on Friday, Federal Communications Commission Chairman Tom Wheeler said that he would postpone a June 2014 spectrum auction to mid-2015. In his post, Wheeler called for more extensive testing of “the operating systems and the software necessary to conduct the world’s first-of-a-kind incentive auction.”

“Only when our software and systems are technically ready, user friendly, and thoroughly tested, will we start the auction,” wrote Wheeler. The chairman also said that he wanted to develop procedures for how the auction will be conducted, after seeking public comment on those details in the second half of next year.

A separate auction for 10MHz of space will take place in January 2014. In 2012, Congress passed the Middle Class Tax Relief and Job Creation Act, which required the FCC to auction off 65MHz of spectrum by 2015. Revenue from the auction will go toward developing FirstNet, an LTE network for first responders. Two months ago, acting FCC chair Mignon Clyburn announced that the commission would start that sell-off by placing 10MHz on the auction block in January 2014. The other 55MHz would be auctioned off at a later date, before the end of 2015.

The forthcoming auction aims to pay TV broadcasters to give up lower frequencies, which will be bid on by wireless cell phone carriers like AT&T and Verizon, but also by smaller carriers who are eager to expand their spectrum property. Wheeler gave no hint as to whether he would push for restrictions on big carriers during the auction process, but he wrote, “I am mindful of the important national interest in making available additional spectrum for flexible use.”

Source:  arstechnica.com

Crackdown successfully reduces spam

Friday, December 6th, 2013

Efforts to put an end to e-mail phishing scams are working, thanks to the development of e-mail authentication standards, according to a pair of Google security researchers.

Internet industry and standards groups have been working since 2004 to get e-mail providers to use authentication to put a halt to e-mail address impersonation. The challenge was both in creating the standards that the e-mail’s sending and receiving domains would use, and getting domains to use them.

Elie Bursztein, Google’s anti-abuse research lead, and Vijay Eranti, Gmail’s anti-abuse technical lead, wrote that these standards — called DomainKeys Identified Mail (DKIM) and Sender Policy Framework (SPF) — are now in widespread use.

“91.4 percent of nonspam e-mails sent to Gmail users come from authenticated senders,” they said. By ensuring that the e-mail has been authenticated, the standards have made it easier to block the billions of annual spam and phishing attempts.

While social media gets all the buzz, the statistics they shared tell the story of the enormous use of e-mail and the challenges in preventing e-mail address fraud.

More than 3.5 million domains that are active on a weekly basis use the SPF standard when sending e-mail via SMTP servers, which accounts for 89.1 percent of e-mail sent to Gmail.

More than half a million e-mail sending and receiving domains that are active weekly have adopted the DKIM standard, which accounts for 76.9 percent of e-mails received by Gmail.

In all, 74.7 percent of incoming e-mail to Gmail accounts is authenticated using both the DKIM and SPF standards, and more than 80,000 domains use e-mail policies that allow Google to use the Domain-based Message Authentication, Reporting and Conformance (DMARC) standard to reject “hundreds of millions” of unauthenticated e-mails per week.

The pair cautioned domain owners to make sure that their DKIM cryptographic keys were 1024 bits, as opposed to the weaker 512-bit keys. They added that owners of domains that never send e-mail should use DMARC to create a policy that identifies the domain as a “non-sender.”
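For a domain that never sends mail, that advice boils down to two DNS TXT records, shown here for a hypothetical example.com: an SPF record that authorizes no senders, and a DMARC record that tells receiving servers to reject anything claiming to come from the domain.

    example.com.         IN TXT  "v=spf1 -all"
    _dmarc.example.com.  IN TXT  "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"

The rua address is optional; it is where receivers send aggregate reports about mail that claims to come from the domain.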

Google did not immediately respond to questions about the origins of the unauthenticated e-mails.

Source:  CNET

Study finds zero-day vulnerabilities abound in popular software

Friday, December 6th, 2013

Subscribers to organizations that sell exploits for vulnerabilities not yet known to software developers gain daily access to scores of flaws in the world’s most popular technology, a study shows.

NSS Labs, which is in the business of testing security products for corporate subscribers, found that over the last three years, subscribers of two major vulnerability programs had access on any given day to at least 58 exploitable flaws in Microsoft, Apple, Oracle or Adobe products.

In addition, NSS Labs found that an average of 151 days passed between the time the programs purchased a vulnerability from a researcher and the time the affected vendor released a patch.

The findings, released Thursday, were based on an analysis of 10 years of data from TippingPoint, a network security maker Hewlett-Packard acquired in 2010, and iDefense, a security intelligence service owned by VeriSign. Both organizations buy vulnerabilities, inform subscribers and work with vendors in producing patches.

Stefan Frei, NSS research director and author of the report, said the actual number of secret vulnerabilities available to cybercriminals, government agencies and corporations is much larger, because of the amount of money they are willing to pay.

Cybercriminals will buy so-called zero-day vulnerabilities in the black market, while government agencies and corporations purchase them from brokers and exploit clearinghouses, such as VUPEN Security, ReVuln, Endgame Systems, Exodus Intelligence and Netragard.

These vendors collectively can provide at least 100 exploits per year to subscribers, Frei said. According to a February 2010 price list, Endgame sold 25 zero-day exploits a year for $2.5 million.

In July, Netragard founder Adriel Desautels told The New York Times that the average vulnerability sells from around $35,000 to $160,000.

Vulnerabilities are always present partly because of developer errors and partly because software makers are in the business of selling product, experts say. The latter means meeting deadlines for shipping software often trumps spending additional time and money on security.

Because of the number of vulnerabilities bought and sold, companies that believe their intellectual property makes them prime targets for well-financed hackers should assume their computer systems have already been breached, Frei said.

“One hundred percent prevention is not possible,” he said.

Therefore, companies need to have the experts and security tools in place to detect compromises, Frei said. Once a breach is discovered, then there should be a well-defined plan in place for dealing with it.

That plan should include gathering forensic evidence to determine how the breach occurred. In addition, all software on the infected systems should be removed and reinstalled.

Steps taken following a breach should be reviewed regularly to make sure they are up to date.

Source:  csoonline.com

Microsoft ends Windows 7 retail sales

Friday, December 6th, 2013

Sets October 2014 cut-off for sales to OEMs

Microsoft has quietly ended retail sales of Windows 7, according to a notice on its website.

The company’s policies for shutting off sales to retailers and shipping licenses to OEMs (original equipment manufacturers) are posted on its site, which was recently updated to show that Windows 7’s “retail end of sales” date was Oct. 30.

The next deadline, marked as “End of sales for PCs with Windows preinstalled,” will be Oct. 30, 2014, less than a year away.

Microsoft’s practice, first defined in 2010, is to stop selling an older operating system in retail one year after the launch of its successor, and halt delivery of the previous Windows edition to OEMs two years after a new version launches. The company shipped Windows 8, Windows 7’s replacement, in October 2012.

As recently as late September, the last time Computerworld cited the online resource, Microsoft had not filled in the deadlines for Windows 7. At the time, Computerworld said that the end-of-October dates were the most likely.

A check of Microsoft’s own online store showed that the company has pulled Windows 7 from those virtual shelves.

In practical terms, the end-of-retail-sales date has been an artificial and largely meaningless deadline, as online retailers have continued to sell packaged copies, sometimes for years, by restocking through distributors which squirreled away older editions.

Today, for example, Amazon.com had a plentiful supply of various versions of Windows 7 available to ship, as did technology specialist Newegg.com. The former also listed copies of Windows Vista and even Windows XP for sale through partners.

Microsoft also makes a special exception for retail sales, telling customers that between the first and second end-of-sale deadlines they can purchase Windows 7 from computer makers. “When the retail software product reaches its end of sales date, it can still be purchased through OEMs (the company that made your PC) until it reaches the end of sales date for PCs with Windows preinstalled,” the company’s website stated.

The firmer deadline is the second, the one for offering licenses to OEMs. According to Microsoft, it “will continue to allow OEMs to sell PCs preinstalled with the previous version for up to two years after the launch date of the new version” (emphasis added).

After that date, Microsoft shuts off the spigot, more or less, although OEMs, especially smaller “white box” builders, can and often do stockpile licenses prior to the cut-off.

But officially, the major PC vendors — like Dell, Hewlett-Packard and Lenovo — will discontinue most Windows 7 PC sales in October 2014, making Windows 8 and its follow-ups, including Windows 8.1, the default.

Even then, however, there are ways to circumvent the shut-down. Windows 8 Pro, the more expensive of the two public editions, includes “downgrade” rights that allow PC owners to legally install an older OS. OEMs and system builders can also use downgrade rights to sell a Windows 8- or Windows 8.1-licensed system, but factory-downgrade it to Windows 7 Professional before it ships.

Enterprises with volume license agreements are not at risk of losing access to Windows 7, as they are granted downgrade rights as part of those agreements. In other words, while Microsoft may try to stymie Windows 7 sales, the 2009 operating system will long remain a standard.

As of the end of November, approximately 46.6% of all personal computers ran Windows 7, according to Web measurement vendor Net Applications, a number that represented 51.3% of all the systems running Windows.

Source:  computerworld.com

IT managers are increasingly replacing servers with SaaS

Friday, December 6th, 2013

IT managers want to cut the number of servers they manage, or at least slow the growth, and they may be succeeding, according to new data.

IDC expects that anywhere from 25% to 30% of all the servers shipped next year will be delivered to cloud services providers.

Within three years, by 2017, nearly 45% of all the servers leaving manufacturers will be bought by cloud providers.

“What that means is a lot of people are buying SaaS,” said Frank Gens, referring to software-as-a-service. “A lot of capacity is shifting out of the enterprise into cloud service providers.”

The increased use of SaaS is a major reason for the market shift, but so is virtualization, which increases the capacity of each server. Data center consolidations are eliminating servers as well, along with the purchase of denser servers capable of handling larger loads.

For sure, IT managers are going to be managing physical servers for years to come. But the number will be declining, based on market direction and the experience of IT managers.

Two years ago, when Mark Endry became the CIO and SVP of U.S. operations for Arcadis, a global consulting, design and engineering company, the firm was running its IT in-house.

“We really put a stop to that,” said Endry. Arcadis is moving to SaaS, either to add new services or substitute existing ones. An in-house system is no longer the default, he added.

“Our standard RFP for services says it must be SaaS,” said Endry.

Arcadis has added Workday, a SaaS-based HR management system; replaced an in-house training management system with a SaaS offering; and replaced an in-house ADP HR system with a service. The company is also planning a move to Office 365, and will stop running its in-house Exchange and SharePoint servers.

As a result, in the last two years, Endry has kept the server count steady at 1,006 spread across three data centers. He estimates that without the efforts at virtualization, SaaS and other consolidations, they would have 200 more physical servers.

Endry would like to consolidate the three data centers into one and continue shifting to SaaS, avoiding future maintenance costs as well as the need to customize and maintain software. SaaS can’t yet be used for everything, particularly ERP, but “my goal would be to really minimize the footprint of servers,” he said.

Similarly, Gerry McCartney, CIO of Purdue University, is working to cut server use and switch more to SaaS.

The university’s West Lafayette, Ind., campus had some 65 data centers two years ago, many of them small. Data centers at Purdue are defined as any room with additional power and specialized heavy-duty cooling equipment. The university has closed at least 28 of them in the last 18 months.

The Purdue consolidation is the result of several broad directions: increased virtualization, use of higher-density systems, and increased use of SaaS.

McCartney wants to limit the university’s server management role. “The only things that we are going to retain on campus is research and strategic support,” he said. That means that most, if not all, of the administrative functions may be moved off campus.

This shift to cloud-based providers is roiling the server market, and is expected to help send server revenue down 3.5% this year, according to IDC.

Gens says that one trend among users who buy servers is increasing interest in converged or integrated systems that combine server, storage, networking and software. They now account for about 10% of the market, and are expected to make up 20% by 2020.

Meanwhile, the big cloud providers are heading in the opposite direction, and are increasingly looking for componentized systems they can assemble, Velcro-like, in their data centers. This has given rise to contract manufacturers, or original design manufacturers (ODMs), mostly overseas, which make these systems for the cloud providers.

Source:  computerworld.com

Microsoft disrupts ZeroAccess web fraud botnet

Friday, December 6th, 2013

ZeroAccess, one of the world’s largest botnets – a network of computers infected with malware to trigger online fraud – has been disrupted by Microsoft and law enforcement agencies.

ZeroAccess hijacks web search results and redirects users to potentially dangerous sites to steal their details.

It also generates fraudulent ad clicks on infected computers then claims payouts from duped advertisers.

Also called the Sirefef botnet, ZeroAccess has infected two million computers.

The botnet targets search results on Google, Bing and Yahoo search engines and is estimated to cost online advertisers $2.7m (£1.7m) per month.

Microsoft said it had been authorised by US regulators to “block incoming and outgoing communications between computers located in the US and the 18 identified Internet Protocol (IP) addresses being used to commit the fraudulent schemes”.

In addition, the firm has also taken control of 49 domains associated with ZeroAccess.

David Finn, executive director of Microsoft Digital Crimes Unit, said the disruption “will stop victims’ computers from being used for fraud and help us identify the computers that need to be cleaned of the infection”.

‘Most robust’

The ZeroAccess botnet relies on waves of communication between groups of infected computers, instead of being controlled by a few servers.

This allows cyber criminals to control the botnet remotely from a range of computers, making it difficult to tackle.

According to Microsoft, more than 800,000 ZeroAccess-infected computers were active on the internet on any given day as of October this year.

“Due to its botnet architecture, ZeroAccess is one of the most robust and durable botnets in operation today and was built to be resilient to disruption efforts,” Microsoft said.

However, the firm said its latest action is “expected to significantly disrupt the botnet’s operation, increasing the cost and risk for cyber criminals to continue doing business and preventing victims’ computers from committing fraudulent schemes”.

Microsoft said its Digital Crimes Unit collaborated with the US Federal Bureau of Investigation (FBI) and Europol’s European Cybercrime Centre (EC3) to disrupt the operations.

Earlier this year, security firm Symantec said it had disabled nearly 500,000 computers infected by ZeroAccess and taken them out of the botnet.

Source: BBC

Case Studies: Point-to-point wireless bridge – Campus

Friday, December 6th, 2013


Gyver Networks recently completed a point-to-point (PTP) bridge installation to provide wireless backhaul for a Boston college.

Challenge:  The only connectivity to local network or Internet resources from this school’s otherwise modern athletic center was via a T1 line topping out at 1.5 Mbps bandwidth.  This was unacceptable not only to the faculty onsite attempting to connect to the school’s network, but to the attendees, faculty, and media outlets attempting to connect to the Internet during the high-profile events and press conferences routinely held inside.

Another vendor’s design for a 150 Mbps unlicensed wireless backhaul link failed during a VIP visit, necessitating a redesign by Gyver Networks.

Resolution:  After performing a spectrum analysis of the surrounding environment, Gyver Networks determined that the wireless solution originally proposed to the school was not viable due to RF spectrum interference.

For a price point close to that of the failed unlicensed design, Gyver Networks engineered a secure, 700 Mbps point-to-point wireless bridge in the licensed 80 GHz band to link the main campus with the athletic center, providing ample bandwidth for both local network and Internet connectivity at the remote site.  Faculty are now able to work without restriction, and event attendees can blog, post to social media, and upload photos and videos without constraint.
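For readers curious about the engineering behind links in this band, the following is a rough, generic E-band link-budget sketch in Python. The path length, transmit power, and antenna gains are illustrative assumptions, not the parameters of this installation, which were not published:

# Rough link-budget sketch for an 80 GHz (E-band) point-to-point bridge.
# All figures below are illustrative assumptions, not the actual design.
from math import log10

freq_hz          = 80e9   # assumed carrier frequency, 80 GHz band
distance_m       = 1000   # assumed path length of 1 km
tx_power_dbm     = 15     # assumed transmit power
antenna_gain_dbi = 43     # assumed gain of each dish antenna

# Free-space path loss: FSPL(dB) = 20*log10(d) + 20*log10(f) - 147.55
fspl_db = 20 * log10(distance_m) + 20 * log10(freq_hz) - 147.55

# Received signal level = Tx power + both antenna gains - path loss
rsl_dbm = tx_power_dbm + 2 * antenna_gain_dbi - fspl_db

print(f"Free-space path loss: {fspl_db:.1f} dB")    # ~130.5 dB
print(f"Received signal level: {rsl_dbm:.1f} dBm")  # ~-29.5 dBm
# A real E-band design also budgets a large margin for rain fade,
# which dominates availability calculations at these frequencies.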

RBS admits decades of IT neglect after systems crash

Tuesday, December 3rd, 2013

Royal Bank of Scotland has neglected its technology for decades, the state-backed bank’s boss admitted on Tuesday after a system crash left more than 1 million customers unable to withdraw cash or pay for goods.

The three-hour outage on Monday, one of the busiest online shopping days of the year, raised questions about the resilience of RBS’s technology, which analysts and banking industry sources regard as outdated and made up of a complex patchwork of systems after dozens of acquisitions.

“For decades, RBS failed to invest properly in its systems,” Ross McEwan, who became chief executive in October, said.

“Last night’s systems failure was unacceptable … I’m sorry for the inconvenience we caused our customers,” he said, adding he would outline plans in the New Year to improve the bank and increase investment.

The latest crash could cost RBS millions of pounds in compensation and follows a more serious crash in its payments system last year that Britain’s regulator is still investigating.

The regulator has been scrutinising the resilience of all banks’ technology to address concerns that outdated systems and a lack of investment could cause more crashes.

The technology glitch is another setback for the bank’s efforts to recover from the financial crisis when it had to be rescued in a taxpayer-funded bailout. The government still owns 82 percent of RBS.

RBS’s cash machines did not work from 1830 to 2130 GMT on Monday, and customers trying to pay for goods with debit cards at supermarkets and petrol stations, shop online, or use online or mobile banking were also unable to complete transactions.

The bank said the problem had been fixed and it would compensate anyone who had been left out of pocket as a result.

About 250,000 people an hour would typically use RBS’s cash machines on a Monday night, and tens of thousands more customers would have used the other affected services across its RBS, NatWest and Ulster Bank operations. RBS has 24 million customers in the UK.

Twitter lit up with customer complaints.

“RBS a joke of a bank. Card declined last night and almost 1,000 pounds vanished from balance this morning! What is going on?” tweeted David MacLeod from Edinburgh, echoing widely-felt frustration with the bank.

Some people tweeted on Tuesday they were still experiencing problems and accounts were showing incorrect balances.

RUN OF PROBLEMS

Millions of RBS customers were affected in June 2012 by problems with online banking and payments after a software upgrade went wrong.

That cost the bank 175 million pounds ($286 million) in compensation for customers and extra payments to staff after the bank opened branches for longer in response. Stephen Hester, chief executive at the time, waived his 2012 bonus following the problem. Britain’s financial watchdog is still investigating and could fine the bank.

The latest crash occurred on so-called Cyber Monday, one of the busiest days for online shopping before Christmas.

RBS said the problem was not related to volume, but gave no details on what had caused the system crash.

McEwan has vowed to improve customer service and has said technology in British banking lags behind Australia, where he previously worked. He has pledged to spend 700 million pounds in the next three years on UK branches, with much of that earmarked for improving systems.

RBS’s former CEO Fred Goodwin has been blamed for under-investing in technology and for not building robust enough systems following its takeover of NatWest in 2000.

Andy Haldane, director for financial stability at the Bank of England, told lawmakers last year banks needed to transform their IT because they had not invested enough during the boom years. Haldane said 70-80 percent of big banks’ IT spending was on maintaining legacy systems rather than investing in improvements.

“It appears to be another example of the lack of sufficient investment in technology by a bank that is still hurting. They are trying to do it on a shoestring, because they don’t have any extra money,” said Ralph Silva at research firm SRN.

“They need to do more, they need to allocate a greater portion of their spend to IT.”

Source:  reuters.com

Scientist-developed malware covertly jumps air gaps using inaudible sound

Tuesday, December 3rd, 2013

Malware communicates at a distance of 65 feet using built-in mics and speakers.

Computer scientists have developed a malware prototype that uses inaudible audio signals to communicate, a capability that allows the malware to covertly transmit keystrokes and other sensitive data even when infected machines have no network connection.

The proof-of-concept software—or malicious trojans that adopt the same high-frequency communication methods—could prove especially adept in penetrating highly sensitive environments that routinely place an “air gap” between computers and the outside world. Using nothing more than the built-in microphones and speakers of standard computers, the researchers were able to transmit passwords and other small amounts of data from distances of almost 65 feet. The software can transfer data at much greater distances by employing an acoustical mesh network made up of attacker-controlled devices that repeat the audio signals.

The researchers, from Germany’s Fraunhofer Institute for Communication, Information Processing, and Ergonomics, recently disclosed their findings in a paper published in the Journal of Communications. It came a few weeks after a security researcher said his computers were infected with a mysterious piece of malware that used high-frequency transmissions to jump air gaps. The new research neither confirms nor disproves Dragos Ruiu’s claims of the so-called badBIOS infections, but it does show that high-frequency networking is easily within the grasp of today’s malware.

“In our article, we describe how the complete concept of air gaps can be considered obsolete as commonly available laptops can communicate over their internal speakers and microphones and even form a covert acoustical mesh network,” one of the authors, Michael Hanspach, wrote in an e-mail. “Over this covert network, information can travel over multiple hops of infected nodes, connecting completely isolated computing systems and networks (e.g. the internet) to each other. We also propose some countermeasures against participation in a covert network.”

The researchers developed several ways to use inaudible sounds to transmit data between two Lenovo T400 laptops using only their built-in microphones and speakers. The most effective technique relied on software originally developed to acoustically transmit data under water. Created by the Research Department for Underwater Acoustics and Geophysics in Germany, the so-called adaptive communication system (ACS) modem was able to transmit data between laptops as much as 19.7 meters (64.6 feet) apart. By chaining additional devices that pick up the signal and repeat it to other nearby devices, the mesh network can overcome much greater distances.
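As a rough illustration of the underlying idea (this is not the ACS modem described in the paper), the sketch below encodes bits as near-ultrasonic tones using simple frequency-shift keying; the 18 kHz and 19 kHz carriers and the 20-bits-per-second rate are assumptions chosen for the example:

# Toy near-ultrasonic FSK sketch: each bit becomes a short tone just below
# the limit of human hearing. Illustrative only, not the researchers' modem.
import numpy as np

SAMPLE_RATE = 48000             # Hz, typical laptop sound card rate
BIT_DURATION = 0.05             # seconds per bit -> 20 bits per second
FREQ_0, FREQ_1 = 18000, 19000   # assumed carrier tones for 0 and 1

def encode(bits):
    """Return a float32 waveform encoding the bit string as FSK tones."""
    samples_per_bit = int(SAMPLE_RATE * BIT_DURATION)
    t = np.arange(samples_per_bit) / SAMPLE_RATE
    tones = {"0": np.sin(2 * np.pi * FREQ_0 * t),
             "1": np.sin(2 * np.pi * FREQ_1 * t)}
    return np.concatenate([tones[b] for b in bits]).astype(np.float32)

def decode(waveform):
    """Recover bits by comparing spectral energy at the two carriers."""
    samples_per_bit = int(SAMPLE_RATE * BIT_DURATION)
    bits = []
    for i in range(0, len(waveform), samples_per_bit):
        chunk = waveform[i:i + samples_per_bit]
        spectrum = np.abs(np.fft.rfft(chunk))
        freqs = np.fft.rfftfreq(len(chunk), 1 / SAMPLE_RATE)
        e0 = spectrum[np.argmin(np.abs(freqs - FREQ_0))]
        e1 = spectrum[np.argmin(np.abs(freqs - FREQ_1))]
        bits.append("1" if e1 > e0 else "0")
    return "".join(bits)

payload = "0100100001101001"     # the ASCII bits for "Hi"
assert decode(encode(payload)) == payload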

The ACS modem provided better reliability than other techniques that were also able to use only the laptops’ speakers and microphones to communicate. Still, it came with one significant drawback—a transmission rate of about 20 bits per second, a tiny fraction of standard network connections. The paltry bandwidth forecloses the ability of transmitting video or any other kinds of data with large file sizes. The researchers said attackers could overcome that shortcoming by equipping the trojan with functions that transmit only certain types of data, such as login credentials captured from a keylogger or a memory dumper.

“This small bandwidth might actually be enough to transfer critical information (such as keystrokes),” Hanspach wrote. “You don’t even have to think about all keystrokes. If you have a keylogger that is able to recognize authentication materials, it may only occasionally forward these detected passwords over the network, leading to a very stealthy state of the network. And you could forward any small-sized information such as private encryption keys or maybe malicious commands to an infected piece of construction.”
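Hanspach’s point about small payloads is easy to quantify. A quick back-of-the-envelope calculation (payload sizes assumed for illustration) shows why 20 bits per second rules out bulk exfiltration but is ample for credentials:

# How long would various payloads take at the reported ~20 bits per second?
# Payload sizes are rough illustrative assumptions.
RATE_BPS = 20

payloads_bits = {
    "8-character password (ASCII)": 8 * 8,
    "2048-bit RSA private key": 2048,
    "one minute of keystrokes (~300 chars)": 300 * 8,
    "1 MB document": 8 * 1024 * 1024,
}

for name, bits in payloads_bits.items():
    seconds = bits / RATE_BPS
    print(f"{name}: {seconds / 3600:.1f} h" if seconds > 3600
          else f"{name}: {seconds:.0f} s")
# password: ~3 s, RSA key: ~102 s, keystrokes: ~120 s, 1 MB: ~116 h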

Remember Flame?

The hurdles of implementing covert acoustical networking are high enough that few malware developers are likely to add it to their offerings anytime soon. Still, the requirements are modest when measured against the capabilities of Stuxnet, Flame, and other state-sponsored malware discovered in the past 18 months. And that means that engineers in military organizations, nuclear power plants, and other truly high-security environments should no longer assume that computers isolated from an Ethernet or Wi-Fi connection are off limits.

The research paper suggests several countermeasures that potential targets can adopt. One approach is simply switching off audio input and output devices, although few hardware designs available today make this most obvious countermeasure easy. A second approach is to employ audio filtering that blocks the high-frequency ranges used to covertly transmit data. Devices running Linux can do this by using the Advanced Linux Sound Architecture (ALSA) in combination with the Linux Audio Developer’s Simple Plugin API (LADSPA). Similar approaches are probably available for Windows and Mac OS X computers as well. The researchers also proposed the use of an audio intrusion detection guard, a device that would “forward audio input and output signals to their destination and simultaneously store them inside the guard’s internal state, where they are subject to further analyses.”
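To illustrate the filtering idea in a platform-neutral way (a generic NumPy/SciPy sketch, not the ALSA/LADSPA setup the researchers describe), a low-pass filter with an assumed 16 kHz cutoff strips near-ultrasonic carriers while leaving audible content essentially intact:

# Sketch of the filtering countermeasure: pass audio through a low-pass
# filter so near-ultrasonic carriers never reach the speaker or microphone.
import numpy as np
from scipy.signal import butter, sosfilt

SAMPLE_RATE = 48000
CUTOFF_HZ = 16000   # assumed cutoff: keep audible sound, drop covert carriers

# 8th-order Butterworth low-pass filter in second-order-sections form
sos = butter(8, CUTOFF_HZ / (SAMPLE_RATE / 2), btype="low", output="sos")

def scrub(audio_buffer: np.ndarray) -> np.ndarray:
    """Return the buffer with content above the cutoff strongly attenuated."""
    return sosfilt(sos, audio_buffer)

# Example: a covert 18.5 kHz carrier is knocked down to a few percent of its
# original amplitude, while an audible 1 kHz tone passes essentially unchanged.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
covert = np.sin(2 * np.pi * 18500 * t)
audible = np.sin(2 * np.pi * 1000 * t)
print(np.abs(scrub(covert)).max())   # small residual (a few percent)
print(np.abs(scrub(audible)).max())  # ~1.0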

Source:  arstechnica.com