Archive for March, 2012

Engineers rebuild HTTP as a faster Web foundation

Friday, March 30th, 2012

The formal process of speeding up Hypertext Transfer Protocol is under way with proposals from Google, Microsoft, and others. There are differences — but common ground, too.

PARIS – Engineers have begun taking the first big steps in overhauling Hypertext Transfer Protocol, a seminal standard at the most foundational level of the Web.

At a meeting of the Internet Engineering Task Force (IETF) here yesterday, the working group overseeing HTTP formally opened a discussion about how to make the technology faster. That discussion included presentations about four specific proposals for HTTP 2.0, including SPDY, developed at Google and already used in the real world, and HTTP Speed+Mobility, developed at Microsoft and revealed Wednesday.

There are some differences in the HTTP 2.0 proposals that have emerged so far — for example, Google’s preference for required encryption contrasting with Microsoft’s preference for it to be optional — and there’s another two-and-a-half months for people to submit new proposals. But notably, there also are similarities, in particular Microsoft’s support for some SPDY features.

“There’s a lot of overlap,” said Greenbytes consultant Julian Reschke, who attended the meeting and is involved in Web standards matters. “There’s a lot of agreement about what needs to be fixed.”

SPDY has a big head start in the market. It’s built into two browsers, Google Chrome and Amazon Silk, with Firefox adopting it in coming weeks. On the other side of the Internet connection, Google, Amazon, and Twitter are among those using SPDY on their servers. And Google has hard data showing the technology’s speed benefits.

Mark Nottingham, chairman of the HTTP Working Group, acknowledged SPDY’s position with a presentation slide (PDF) titled “Elephant, meet Room.” But he was careful to note that SPDY hasn’t carried the day.

“We’ll discuss SPDY because it’s here, but other proposals will be discussed too,” Nottingham said in his presentation, and added, “If we do choose SPDY as a starting point, that doesn’t mean it won’t change.”

Why change HTTP?
Rebuilding standards that touch every device on the Web is complicated, but there’s one simple word at the heart of the work: speed.

Web pages that respond faster are of course nice for anybody using the Web, but there are business reasons that matter, too. Better performance turns out to lead to more time spent on pages, more e-commerce transactions, more searches, more participation.

HTTP was the product of Tim Berners-Lee and fellow developers of the earliest incarnation of the World Wide Web more than 20 years ago. Its job is simple: a browser uses HTTP to request a Web page, and a Web server answers that request by transmitting the data to the browser. That data consists of the actual Web page, constructed using technologies such as HTML (Hypertext Markup Language) for describing the page, CSS (Cascading Style Sheets) for formatting and some visual effects, and the JavaScript programming language.

Web developers can do a lot to improve performance by carefully optimizing their Web page code. But improving HTTP itself gives a free speed boost to everybody on top of that.

It’s no coincidence, therefore, that the first item on the HTTP working group’s new charter is “improved perceived performance.”

SPDY’s technologies for faster HTTP include “multiplexing,” in which multiple streams of data can be sent over a single network connection; the ability to assign high or low priorities to Web page resources being requested from a server; and compression of “header” information that accompanies communications for resource requests and responses.
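The three SPDY techniques described above can be sketched in miniature. This is an illustrative model only, not SPDY’s actual wire format: frames carry a stream ID and a priority so many requests can share one connection, and header text is compressed (SPDY uses zlib for this) before it goes on the wire.

```python
import zlib

def make_frame(stream_id, priority, headers):
    """Bundle one request's headers into a tagged, compressed frame."""
    raw = "\r\n".join(f"{k}: {v}" for k, v in headers.items()).encode()
    return {"stream": stream_id, "priority": priority,
            "payload": zlib.compress(raw)}

def multiplex(frames):
    """Interleave frames from many streams over one connection,
    sending higher-priority (lower number) frames first."""
    return sorted(frames, key=lambda f: f["priority"])

frames = [
    make_frame(1, priority=3, headers={"host": "example.com", "path": "/style.css"}),
    make_frame(3, priority=0, headers={"host": "example.com", "path": "/index.html"}),
]
wire_order = multiplex(frames)
print([f["stream"] for f in wire_order])  # → [3, 1]: the HTML stream goes first
```

The point of the sketch is the combination: because every frame is tagged with its stream ID, the page’s HTML can jump ahead of a stylesheet on the same connection instead of waiting behind it.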

New proposals
Gabriel Montenegro, who presented and helped develop Microsoft’s proposal, pointed out in an interview that two of his proposal’s four points adopted SPDY’s approach.

Added SPDY co-creator Mike Belshe, “The Microsoft and Google proposals are almost the same.” Belshe helped develop SPDY at Google but now works at the startup Twist, where he continues to work on the technology for mobile app purposes.

One difference between the Google and Microsoft proposals is in syntax, but, Belshe said, SPDY developers are flexible on that point and the choice of compression technology.

A bigger difference is that SPDY calls for encrypted connections all the way from a Web server to the browser it’s communicating with. Microsoft believes otherwise. According to its proposal:

Encryption must be optional to allow HTTP 2.0 to meet certain scenarios and regulations. HTTP 2.0 is a universal replacement for HTTP 1.X, and there are some instances in which imposing TLS is not required (or allowed). For example, a “random thought of the day” Web service has very little need for it, nor does a sensor spewing out a temperature reading every few seconds.

Belshe, though, said users care about encryption, and the fact that modern mobile phones can handle encryption means that it’s feasible for other devices to use it, too. And although an encrypted channel all the way from a browser to a Web server can damage the businesses of content delivery networks, which cache data on intermediate servers to speed up Web performance, the user should come first, he said.

“Users care about privacy and security more than whether some guy can cache something in the middle,” Belshe said. “Security is not free, but we can make it so it’s free to users.”

A third proposal, called Network-Friendly HTTP Upgrade and presented by Willy Tarreau, is designed with those intermediate network devices in mind. But that proposal, too, calls for network connection multiplexing.

It’s possible the group could implement the elements where there is agreement and leave other areas aside, Reschke said. “Deploying new HTTP is expensive, but incremental improvements are better than no improvements.”

And improvements will come, he expects.

“We want to standardize this,” Reschke said. “It’s time. It needs to happen.”

Source:  CNET

Surveillance spyware migrates from Windows to Mac OS X

Friday, March 30th, 2012

Researchers have uncovered a malware-based espionage campaign that subjects Mac users to the same techniques that have been used for years to surreptitiously siphon confidential data out of Windows machines.

The recently discovered campaign targets Mac-using employees of several pro-Tibetan non-governmental organizations, and employs attacks exploiting already patched vulnerabilities in Microsoft Office and Oracle’s Java framework, Jaime Blasco, a security researcher with AlienVault, told Ars. Over the past two weeks, he has identified two separate backdoor trojans that get installed when users open booby-trapped Word documents or website links included in e-mails sent to them. Once installed, the trojans send the computer, user, and domain name associated with the Mac to a server under the control of the attackers and then await further instructions.

“This particular backdoor has a lot of functionalities,” he said of the most recent trojan he found. Victims, he said, “won’t see almost anything.”

Blasco’s findings, which are documented in blog posts here and here, are among the first to show that Macs are being subjected to the same types of advanced persistent threats (APTs)  that have plagued Windows users for years—not that the shift is particularly unexpected. As companies such as Google increasingly adopt Macs to limit their exposure to Windows-dependent exploits, it was inevitable that the spooks conducting espionage on them would make the switch, too.

“What [attackers] have been installing via APT-style, targeted attack campaigns for Windows, they’re now starting to do for Macs, too,” said Ivan Macalintal, a security researcher at antivirus provider Trend Micro. Macalintal has documented some of the same exploits and trojans Blasco found.

Another researcher who has confirmed the findings is Alexis Dorais-Joncas, Security Intelligence Team Leader at ESET. In his own blog post, he documented the encryption one of the trojans uses to conceal communications between infected Macs and a command and control server. He also described a series of queries sent to a test machine he infected that he believes were manually typed by a live human at the other end of the server. They invoked Unix commands to rummage through Mac folders that typically store browser cookies, passwords, and software downloads.


Commands monitored by ESET researcher Alexis Dorais-Joncas. They appear to have been manually typed in real time by someone at the other end of a command and control server.


“The purpose here clearly is information stealing,” he wrote.

He noted that the backdoor he observed was unable to survive a reboot on Macs that weren’t running with administrator privileges. That’s because the /Library/Audio/Plug-Ins/AudioServer folder used to stash one of the underlying malware files didn’t allow unprivileged users to save data there. A more recent trojan analyzed by AlienVault’s Blasco has overcome that shortcoming, by saving the file in the less-restricted /Users/{User}/Library/LaunchAgents/ folder, ensuring it gets launched each time the user’s account starts.
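The privilege difference described above can be modeled in a few lines. This is a simplified sketch using plain permission flags rather than a real filesystem: the system-wide folder needs administrator rights to write into, while each user’s own LaunchAgents folder is writable by that user, so an item dropped there launches at every login without elevation.

```python
# Who may write into each folder; modeled, not read from a real Mac.
FOLDERS = {
    "/Library/Audio/Plug-Ins/AudioServer": "admin",
    "~/Library/LaunchAgents": "user",
}

def can_persist(folder, privilege):
    """Can an account with this privilege level drop a file here?"""
    needed = FOLDERS[folder]
    return privilege == "admin" or needed == "user"

# An unprivileged account fails at the system path but succeeds per-user,
# which is exactly the shortcoming the newer trojan overcame.
print(can_persist("/Library/Audio/Plug-Ins/AudioServer", "user"))  # → False
print(can_persist("~/Library/LaunchAgents", "user"))               # → True
```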

The backdoors are installed by exploiting critical holes in two pieces of software that are widely used by Mac users. One of the vulnerabilities, a buffer overflow flaw in Microsoft Office for the Mac, was patched in 2009, while the other, an unspecified bug in Java, was fixed in October. The Java exploit is advanced enough that it reads the user agent of the intended victim’s browser, and based on the results unloads a payload that’s unique to machines running either Windows or OS X.
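The user-agent check described above amounts to simple routing logic on the attack server. This is a hypothetical sketch of that selection step; the payload names are invented for illustration.

```python
def pick_payload(user_agent):
    """Serve a payload matched to the visitor's operating system,
    as reported by the browser's User-Agent string."""
    ua = user_agent.lower()
    if "mac os x" in ua:
        return "backdoor.macho"   # Mach-O executable for OS X victims
    if "windows" in ua:
        return "backdoor.exe"     # PE executable for Windows victims
    return None                   # unrecognized platforms get nothing

print(pick_payload("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8)"))
# → backdoor.macho
```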

Reports of malware that target Macs have risen steadily over the past 36 months. Most of the reported infections rely on the gullibility of users, tricking them into believing their systems are already compromised and can be disinfected by downloading and installing a piece of rogue antivirus software. Others have exploited software weaknesses to install data-stealing trojans, often requiring little interaction on the part of users. While such reports are rarer, they date back to at least July 2010.

In his blog post, Trend Micro’s Macalintal said the Word exploit he observed “dropped a Gh0stRat payload,” a reference to a huge malware-based spy network uncovered three years ago that infiltrated government and private offices in 103 countries. The Word exploit works by embedding Mac-executable files known as “Mach-Os” into the booby-trapped document file, Macalintal added.

Seth Hardy, a senior security analyst who has been monitoring espionage attacks on pro-Tibetan groups for an organization called Citizen Lab, said it’s too early to know if the recent campaign is related to Gh0stRat. Hardy—whose Citizen Lab, based at the Munk School of Global Affairs, was a principal organization behind the research and publication of the Tracking GhostNet and Shadows in the Cloud cyber-espionage reports—went on to say that Macs are likely to play a growing role in future attacks.

“While APT-for-Mac (iAPT?) isn’t exactly new, it does seem like the attackers are catching on that many of these organizations use Macs more than the general public,” he wrote in an e-mail. “It’s also interesting that the attackers are developing multi-platform attacks: we’ve seen the Mac malware bundled with similar Windows malware, and the delivery system will identify the user’s operating system and run the appropriate program.”


Security firms disable the second Kelihos botnet

Thursday, March 29th, 2012

A group of malware experts from security companies Kaspersky Lab, CrowdStrike, Dell SecureWorks and the Honeynet Project, have worked together to disable the second version of the Kelihos botnet, which is significantly bigger than the one shut down by Microsoft and its partners in September 2011.

The Kelihos botnet, also known as Hlux, is considered the successor of the Waledac and Storm botnets. Like its predecessors, it has a peer-to-peer-like architecture and was primarily used for spam and launching DDoS (distributed denial-of-service) attacks.

In September 2011, a coalition of companies that included Microsoft, Kaspersky Lab, SurfNET and Kyrus Tech, managed to take control of the original Kelihos botnet and disable its command-and-control infrastructure.

However, back in January, Kaspersky Lab researchers discovered a new version of the botnet, which had an improved communication protocol and the ability to mine and steal Bitcoins, a type of virtual currency.

Last week, after analyzing the new botnet for the past several months, the new group of experts decided to launch a new takedown operation, said Stefan Ortloff of Kaspersky Lab in a blog post on Wednesday.

Disabling botnets with a decentralized architecture like Kelihos is more complicated than simply taking over a few command-and-control servers, because the botnet clients are also able to exchange instructions among themselves.

In order to prevent the botnet’s authors from updating the botnet through the peer-to-peer infrastructure, the security companies had to set up rogue botnet clients around the world and use special techniques to trick all other infected machines into connecting only to servers operated by Kaspersky Lab. This is known as sinkholing, said CrowdStrike researcher Tillmann Werner during a press conference Wednesday.
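The sinkholing step can be modeled simply. This is a toy sketch, not the Kelihos protocol: rogue clients join the peer-to-peer network and flood each bot’s peer list with the sinkhole’s address until it is the only server the bot will contact. The hostname is a stand-in.

```python
SINKHOLE = "sinkhole.example.net"  # stand-in for the researchers' servers

class Bot:
    """Minimal model of a P2P bot that accepts peer lists from other nodes."""
    def __init__(self, peers):
        self.peers = list(peers)

    def update_peers(self, advertised):
        # Bots with this architecture trust peer lists advertised by
        # neighbors; the takedown abuses that to crowd out every
        # legitimate peer.
        self.peers = advertised[: len(self.peers)]

bot = Bot(peers=["evil-node-1", "evil-node-2", "evil-node-3"])
bot.update_peers([SINKHOLE] * 10)
print(bot.peers)  # every entry now points at the sinkhole
```

Once every reachable peer entry points at the sinkhole, the operators have no remaining path through which to push an update, which is why Werner notes the fallback channel never comes into play.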

Once the majority of the botnet clients connected to the sinkhole servers, the researchers realized that the second Kelihos botnet was significantly larger than the one taken down in September 2011. It has almost 110,000 infected hosts compared to the first botnet’s 40,000, said Kaspersky Lab’s Marco Preuss during the same press conference.

Twenty-five percent of the new Kelihos bots were located in Poland and 10 percent were in the U.S. The high concentration of bots in Poland suggests that the cybercriminal gang behind Kelihos paid other botnet operators to have their malware distributed on computers from a country with cheaper pay-per-install prices, Werner said.

The vast majority of Kelihos-infected computers — over 90,000 — run Windows XP. Around 10,000 run Windows 7 and 5,000 run Windows 7 with Service Pack 1.

Microsoft was not involved in the new takedown operation, but was informed about it, Werner said. During the September 2011 operation, the company’s role was to disable the domain names the Kelihos gang could have used to take back control of the botnet.

However, this type of action was no longer necessary, because this fallback communication channel is only used by the Kelihos bots if the primary peer-to-peer-based channel is disrupted, which doesn’t happen with sinkholing, Werner said.

Kaspersky will notify Internet service providers about the Internet Protocol addresses on their networks that display Kelihos activity, so that they can contact the subscribers who own the infected machines. The sinkhole will be kept operational for as long as it is necessary, Preuss said.

Various signs suggest that the Kelihos gang gave up on the botnet soon after it was sinkholed. However, given that this was their fifth botnet — including the Storm and Waledac variants — they’re unlikely to give up and will most likely create a new one, Werner said.


Is application security the glaring hole in your defense?

Tuesday, March 27th, 2012

When it comes to security, a large number of organizations have a glaring hole in their defenses: their applications.

A recent study of more than 800 IT security and development professionals reports that most organizations don’t prioritize application security as a discipline, despite the fact that SQL injection attacks are the leading root cause of data breaches. The second-most-common root cause is exploited vulnerable code in Web 2.0/social media applications.
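The SQL injection root cause named above can be shown in a few lines. This sketch uses Python’s built-in sqlite3 module and an invented users table: building SQL by string concatenation lets attacker-supplied input rewrite the query, while a parameterized query keeps the same input inert.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attacker_input = "nobody' OR '1'='1"

# Vulnerable: the smuggled OR clause matches every row in the table.
leaked = conn.execute(
    f"SELECT secret FROM users WHERE name = '{attacker_input}'").fetchall()

# Safe: the same input is bound as a plain value and matches nothing.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (attacker_input,)).fetchall()

print(leaked, safe)  # → [('s3cret',)] []
```

The fix costs nothing at runtime, which echoes Dr. Ponemon’s point later in the piece: a secure line of code takes no longer to write than an insecure one.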

Sixty-eight percent of developers’ organizations and 47 percent of security practitioners’ organizations suffered one or more data breaches in the past 24 months due to hacked or compromised applications. A further 19 percent of security practitioners and 16 percent of developers were uncertain if their organization had suffered a data breach due to a compromised or hacked application. Additionally, only 12 percent of security practitioners and 11 percent of developers say all their organizations’ applications meet regulations for privacy, data protection and information security.

Despite the data breaches resulting from hacked or compromised applications and the lack of compliance with regulations, 38 percent of security practitioners and 39 percent of developers say less than 10 percent of the IT security budget is dedicated to application security.

“We set out to measure the tolerance to risk across the established phases of application security, and define what works and what hasn’t worked, how industries are organizing themselves and what gaps exist,” says Dr. Larry Ponemon, CEO of the Ponemon Institute, the research firm that conducted the study on the behalf of security firm Security Innovation. “We accomplished that, but what we also found was a drastic divide between the IT security and development organizations that is caused by a major skills shortage and a fundamental misunderstanding of how an application security process should be developed. This lack of alignment seems to hurt their business based on not prioritizing secure software, but also not understanding what to do about it.”

The study found that security practitioners and developers were far apart in their perception of the issue. While one might expect that security practitioners held the more cynical views with regard to application security, in fact the opposite was true. Dr. Ponemon says 71 percent of developers say application security was not adequately emphasized during the application development lifecycle, compared with 49 percent of security practitioners who felt the same way. Additionally, 46 percent of developers say their organization had no process for ensuring security is built into new applications, while only 21 percent of security practitioners believed that to be the case.

Developers and security practitioners are also divided on the issue of remediating vulnerable code. Nearly half (47 percent) of developers say their organization has no formal mandate to remediate vulnerable code, while 29 percent of security practitioners say the same.

“What emerged in this study was that companies don’t seem to be looking at the root causes of data breaches, and they aren’t moving very fast to bridge the existing gaps to fix the myriad of problems,” says Ed Adams, CEO of Security Innovation. “The threat landscape has grown substantially in scope, most notably as our survey respondents stated that Web 2.0 and mobile attacks are the targets of the next wave of threats beyond just web applications.”

The survey also found that nearly half of developers say there is no collaboration between their development organization and the security organization when it comes to application security. That’s a stark contrast from the 19 percent of security practitioners that say there is no collaboration.

Lack of Collaboration in Application Security

“We basically found that developers were much more likely to think there was a lack of collaboration,” Dr. Ponemon says. “The security folks, on the whole, thought the collaboration was OK. I think that one of the biggest problems is that the security folks think they’re getting the word out on collaborating or helping, but they’re not doing so effectively.”

In other words, Dr. Ponemon says, the security organization writes its security policy and gives it to developers, but the developers, by and large, don’t understand how to implement that policy. The security organizations think they’ve done their job, but they haven’t managed to make their policy contextual for developers.

“We find that process has no bearing whatsoever on the ability of an organization to write secure code,” Dr. Ponemon says. “It doesn’t take any longer to write a line of secure code than it does to write a line of insecure code. You just have to know which one to write.”

Education Is Key to Application Security

But knowing which line of code to write seems to be a large part of the problem. The study found that only 22 percent of security practitioners and 11 percent of developers say their organization has a fully deployed application security training program. Fully 36 percent of security practitioners and 37 percent of developers say their organization had no application security training program and no plans to deploy one.

Adams believes providing that education will go a long way toward helping organizations secure their applications and minimize the risk.

“This is more of an education problem than anything else,” Adams says. “In the late 90s, everybody was putting their applications on the web. But they kept on crashing. It was really a performance problem: The developers didn’t know how to code for performance. Amazingly, that’s what’s happening in the world today. Organizations are buying application security tools before they get application security training. You have to get trained on the technique first.”


Open source code libraries seen as rife with vulnerabilities

Tuesday, March 27th, 2012

Google Web Toolkit, Apache Xerces among most downloaded vulnerable libraries, study says

A study of how 31 popular open-source code libraries were downloaded over the past 12 months found that more than a third of the 1,261 versions of these libraries had a known vulnerability and about a quarter of the downloads were tainted.

The study was undertaken by Aspect Security, which evaluates software for vulnerabilities, with Sonatype, a firm that provides a Central Repository housing more than 300,000 libraries for downloading open-source components and gets 4 billion requests per year.

“Increasingly over the past few years, applications are being constructed out of libraries,” says Jeff Williams, CEO of Aspect Security, referring to “The Unfortunate Reality of Insecure Libraries” study. Open-source communities have done little to provide a clear way to spotlight code found to have vulnerabilities or identify how to remedy it when a fix is even made available, he says.

“There’s no notification infrastructure at all,” says Williams. “We want to shed light on this problem.”

He adds that Aspect and Sonatype are mulling how it might be possible to improve the situation overall.

According to the study, researchers at Aspect analyzed 113 million software downloads made over 12 months from the Central Repository of 31 popular Java frameworks and security libraries (Aspect says one basis for the selection of libraries were those being used by its customers). Researchers found:

– 19.8 million (26%) of the library downloads have known vulnerabilities.

– The most downloaded vulnerable libraries were Google Web Toolkit (GWT); Apache Xerces; Spring MVC; and Struts 1.x. (The other libraries examined were: Apache CXF; Hibernate; Java Servlet; Log4j; Apache Velocity; Spring Security; Apache Axis; BouncyCastle; Apache Commons; Tiles; Struts2; Wicket; Java Server Pages; Lift; Hibernate Validator; Java Server Faces; Tapestry; Apache Santuario; JAX-WS; Grails; Jasypt; Apache Shiro; Stripes; AntiSamy; ESAPI; HDIV and JBoss Seam.)

Security libraries are slightly more likely to have a known vulnerability than frameworks, the study says. “Today’s applications commonly use 30 or more libraries, which can comprise up to 80% of the code in an application,” according to the study.

The types of vulnerabilities found in open source code libraries vary widely.

“While some vulnerabilities allow the complete takeover of the host using them, others might result in data loss or corruption, and still others might provide a bit of useful information to attackers,” the study says. “In most cases, the impact of a vulnerability depends greatly on how the library is used by the application.”

The study noted some well-publicized vulnerabilities:

– Spring, the popular application development framework for Java, was downloaded more than 18 million times by over 43,000 organizations in the last year. However, a discovery last year showed a new class of vulnerabilities in Spring’s use of Expression Language that could be exploited through HTTP parameter submissions that would allow attackers to get sensitive system data, application and user cookies.

– In 2010, Google’s research team discovered a weakness in Struts2 that allowed attackers to execute arbitrary code on any Struts2 Web application.

– In Apache CXF, a framework for Web services that was downloaded 4.2 million times by more than 16,000 organizations in the last 12 months, two major vulnerabilities discovered since 2010 (CVE-2010-2076 and CVE-2012-0803) allowed attackers to trick any service using CXF into disclosing arbitrary system files and to bypass authentication.

Vulnerabilities are discovered by researchers, who disclose them as they choose; some disclosures are coordinated, while “others simply write blog posts or emails in mailing lists,” the study notes. “Currently, developers have no way to know that the library versions they are using have known vulnerabilities. They would have to monitor dozens of mailing lists, blogs, and forums to stay abreast of information. Further, development teams are unlikely to find their own vulnerabilities, as it requires extensive security experience and automated tools are largely ineffective at analyzing libraries.”
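The check the study says is missing amounts to comparing the library versions a project declares against a list of known-vulnerable releases. A minimal sketch follows; the advisory entries and version numbers are invented for illustration (only the CVE identifier for CXF comes from the article).

```python
# Hypothetical advisory feed: (library, version) -> known issue.
KNOWN_VULNERABLE = {
    ("apache-cxf", "2.2.8"): "CVE-2010-2076",
    ("struts2", "2.1.8"): "remote code execution (2010)",
}

def audit(dependencies):
    """Return the advisories matching any (name, version) pair in use."""
    return {dep: KNOWN_VULNERABLE[dep]
            for dep in dependencies if dep in KNOWN_VULNERABLE}

project = [("apache-cxf", "2.2.8"), ("log4j", "1.2.16")]
print(audit(project))  # flags the vulnerable CXF release only
```

The hard part, as the study argues, isn’t the comparison but the feed: no central, machine-readable source of advisories for open source libraries existed for developers to audit against.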

Although some open source groups, such as OpenBSD, are “quite good” in how they manage vulnerability disclosures, says Williams, the vast majority handle these kinds of security issues in haphazard fashion and with uncertain disclosure methods. The study says organizations should strengthen their security processes, and that OpenBSD can serve as an encouraging model in that respect.

Williams adds that use of open source libraries also raises the question of “dependency management.” This is the security process that developers would use to identify what libraries their project really directly depends on. Often, developers end up using code that goes beyond the functionality that’s really needed, using libraries that may also be dependent on other libraries. This sweeps in a lot of out-of-date code that brings risk and no added value, but swells the application in size. “Find out what libraries you’re using and which are out of date,” says Williams. “We suggest minimizing the use of libraries.”

The report points out, “While organizations typically have strong patch management processes for software products, open source libraries are typically not part of these processes. In virtually all development organizations, updates to libraries are handled on an ad hoc basis, by development teams.”


Microsoft leads two raids targeting Zeus botnet servers

Tuesday, March 27th, 2012

When you’re fighting against the cybercriminals behind the world’s biggest botnets, sometimes you have to get creative with your battle plans. That’s what Microsoft figured, anyways, sweeping through the offices of two hosting providers with U.S. Marshals to take down computer systems that were helping operators control the Zeus botnet from somewhere in eastern Europe.

Zeus remains one of the largest botnets in existence — and one of the most lucrative, having stolen more than $100 million from its victims over the past five years. As many as 13 million computers have been infected with some variant of Zeus, and they’re all Windows machines, of course. That’s not something Microsoft wants to allow to continue, and they weren’t going to sit idly by any longer. As they did with three other botnets — Kelihos, Rustock, and Waledac — Microsoft formed a team and filed a civil suit against several John Does in order to secure permission to seize domain names and computer equipment that were connected to Zeus.

Microsoft’s goal wasn’t to wipe out Zeus altogether. Not this time around, anyway. Rather, Microsoft wanted to let those operating the botnet know that they’re being watched and that operations are going to be disrupted whenever possible. The domains Microsoft seized — more than 800 in total — will now be used to monitor activity from infected computers.

It’s amazing that computers can still be infected by Zeus in 2012. Microsoft added it to the Malicious Software Removal Tool way back in 2010, though constant fiddling with the original code makes it a bit more difficult to detect and uproot the many variants now floating around the Web. If everyone with a Windows computer was diligent about updating their OS and plug-ins like Java — and using a good quality anti-malware app with heuristic detection abilities — we probably could’ve relegated Zeus to the botnet scrapheap already.

The reality, however, is that there are more than 350 other Zeus servers scattered around the globe and still online. Still, you’ve got to start somewhere, and at least 357 is a few less than there were last week.


HTML5 roundup: access a virtualized desktop from your browser with VMware

Monday, March 19th, 2012

VMware is developing an impressive new feature called WSX that will allow users to access virtualized desktops remotely through any modern Web browser. VMware developer Christian Hammond, who worked on the implementation, demonstrated a prototype this week in a blog post.

According to Hammond, WSX is built with standards-based Web technologies, including the HTML5 Canvas element and Web Sockets. The user installs and runs a lightweight Web server that acts as a relay between the Web-based client and the virtualized desktop instance. It is compatible with VMware Workstation and ESXi/vSphere.

WSX, which doesn’t require any browser plugins, is compatible out of the box with Firefox, Chrome, and Safari on the desktop. It will also work with mobile Safari on iPads that are running iOS 5 or later. Hammond says that Android compatibility is still a work in progress.

The performance is said to be good enough to provide “near-native quality and framerates” when viewing a 720p YouTube video on the virtualized desktop through WSX in Chrome or Firefox. Users who want to test the feature today can see it in action by downloading the Linux version of the VMware Workstation Technology Preview.

Although it’s still somewhat experimental, WSX is a compelling demonstration of how far the Web has evolved as a platform. It also shows how the ubiquity of Web standards makes it possible to deliver complex applications across a wide range of platforms and device form factors.


DARPA dreams of authentication using the way you type

Monday, March 19th, 2012

In DARPA’s vision of the future, you won’t be typing passwords anymore—because typing is the password.

The Defense Advanced Research Projects Agency is investigating the feasibility of developing software that can identify a user based purely on the style and speed of his or her typing.

“What I’d like to do,” explained Richard Guidorizzi, DARPA program manager, in a talk last year, “is move to a world where you sit down at a console, you identify yourself, and you just start working, and the authentication happens in the background, invisible to you, while you continue to do your work without interruptions.”

The problem with traditional passwords, explained Mr. Guidorizzi, is that we tend to prefer patterns that make remembering passwords more manageable. These passwords are good for humans, but bad for security. It’s hard to strike a balance between memorable and secure.

That’s part of the reason why Roy Maxion, a research professor of computer science at Carnegie Mellon University, believes it could be possible to simply do away with passwords altogether. By studying a user’s unique keystroke dynamics—the length of time a key is pressed, for example, or the speed with which a user types—Professor Maxion has had considerable success identifying test subjects based purely on the way they type.
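The general shape of keystroke-dynamics matching can be sketched in a few lines. The Python below is purely illustrative — the features, timings, distance metric, and threshold are invented for demonstration and are not Maxion’s actual method: enroll a user from a few timed typings of a passphrase, then accept or reject new samples by their distance from the stored profile.

```python
# Purely illustrative sketch -- not Maxion's actual method. Features,
# numbers, and the threshold below are invented for demonstration.
# Each sample: a list of (dwell_ms, flight_ms) pairs for one passphrase.

def feature_vector(sample):
    """Flatten per-key dwell/flight timings into one vector."""
    return [t for pair in sample for t in pair]

def manhattan_distance(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def build_profile(training_samples):
    """Mean timing vector over a user's enrollment samples."""
    vectors = [feature_vector(s) for s in training_samples]
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def matches(profile, sample, threshold=60.0):
    """Accept the typist if the new sample is close to the stored profile."""
    return manhattan_distance(profile, feature_vector(sample)) < threshold

# Enrollment: three typings of the same passphrase by the legitimate user.
training = [
    [(95, 120), (88, 140), (102, 110)],
    [(90, 125), (85, 150), (98, 115)],
    [(93, 118), (91, 138), (100, 112)],
]
profile = build_profile(training)

genuine = [(94, 122), (87, 142), (101, 111)]    # similar rhythm
impostor = [(60, 300), (150, 80), (40, 260)]    # very different rhythm

print(matches(profile, genuine))    # True  (distance ~6, well under 60)
print(matches(profile, impostor))   # False (distance ~545)
```

A production system would use far more enrollment samples, per-user thresholds, and continuous re-scoring rather than a single gate at login.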

In fact, similar software being developed at Pace University can apparently identify a user based on keyboard pressure with 99.5 percent accuracy (PDF).

And because typing is an act of motor control—something we don’t do consciously—“mimicking keystroke dynamics is physiologically improbable,” explains Professor Maxion, making impersonation or fraud nigh impossible. A similar identity model could potentially be constructed from a user’s mouse movements too.

The downside, of course, is that unlike traditional password-based logins which only require initial authentication, a behavioral system based on typing style would require constant monitoring. Otherwise, there would be no way to verify that the same user remained in control of a given machine, DARPA says.

A small price to pay, perhaps, for never having to worry about password strength or security again.


Suspicions aroused as exploit for critical Windows bug is leaked

Sunday, March 18th, 2012

Attack code privately submitted to Microsoft to demonstrate the severity of a critical Windows vulnerability is circulating on the ‘Net, prompting the researcher who discovered it to say it was leaked by the software maker or one of its trusted partners.

The precompiled executable surfaced on Chinese-language websites on Thursday, two days after Microsoft released a patch for the hole, which affects all supported versions of the Windows operating system. The company warned users to install the fix as soon as possible because the vulnerability allows attackers to hit high-value targets with self-replicating exploits that remotely install malicious software. Microsoft security personnel have predicted exploit code will be independently developed within the next month.

Luigi Auriemma, the Italian security researcher who discovered the vulnerability and submitted proof-of-concept code to Microsoft and one of its partners in November, wrote in an email that he’s “100% sure” the rdpclient.exe binary was taken from the exploit he wrote. In a later blog post, he said evidence his code was copied included an internal tracking number the Microsoft Security Response Center assigned to the vulnerability. He also cited other striking similarities in the packet that triggers the vulnerability.

“So yes, the pre-built packet stored in ‘rdpclient.exe’ IS mine,” he wrote. “No doubts.”

He went on to speculate that the code was leaked by someone from Microsoft or one of its trusted partners. He specifically named ZDI, or the Zero Day Initiative, which is a program sponsored by HP-owned Tipping Point, a maker of intrusion prevention systems that pays researchers cash for technical details about critical software vulnerabilities. He also speculated the leak could have come from any one of the 30 or so partners who participate in the Microsoft Active Protections Program. The program gives antivirus and IPS makers technical details of Microsoft vulnerabilities in advance so they can release signatures that prevent them from being exploited.

Update: Aaron Portnoy, ZDI’s Manager of Security Research, told Ars he’s sure the leak didn’t come from anyone at his company.

“I’ve actually gotten confirmation from Microsoft that they’re also confident that the leak wasn’t from us,” he said in an interview. “I can’t comment further on the issue other than [to say] they seem to have some knowledge as to what happened and they are confident it was not from us.”

He said exploit details have never leaked out of his company, and he added he was unaware of leaks involving other Microsoft partners, either.

Update 2: Yunsun Wee, Director of Microsoft’s Trustworthy Computing division, confirmed that the code appeared to match vulnerability information shared with MAPP partners.

“Microsoft is actively investigating the disclosure of these details and will take the necessary actions to protect customers and ensure that confidential information we share is protected pursuant to our contracts and program requirements,” Wee wrote in a blog post. The statement made no reference to Portnoy’s comments.

Members of the Metasploit framework, an open-source package that hackers and penetration testers use to exploit known security bugs, have confirmed that rdpclient.exe triggers the vulnerability Microsoft reported Tuesday. It resides in the Remote Desktop Protocol and allows attackers to execute code of their choosing on any machine that has it turned on. HD Moore, CSO of Rapid7 and chief architect of the Metasploit project, told Ars the code caused a machine running Windows Server 2008 R2 to display a blue screen of death, but there were no signs it executed any code.

He said Metasploit personnel have been able to replicate the crash, but are still several weeks away from being able to exploit the bug to execute code.

“It’s still a huge vector for knocking servers offline right now if you can figure out how to DOS the RDP service,” he said in an interview.

There are unconfirmed claims that code is already circulating in the wild that does far more than cause machines to crash. This screen shot, for example, purports to show a Windows machine receiving a remote payload after the vulnerability is exploited. Security consultants have said there’s no proof such attacks are real.

“On the other hand, if they’ve had this since November internally, it’s not impossible that someone would have had time to actually develop what that screen shot is showing,” Alex Ionescu, who is chief architect of security firm CrowdStrike, told Ars.


Duqu Trojan contains unknown programming language

Monday, March 12th, 2012

The Duqu Trojan has researchers puzzled: it contains a programming language they’ve never seen before.

In June 2010, the world became aware of Stuxnet, largely considered to be the most advanced and dangerous piece of malware ever created. But before you run to check that your antivirus software is up to date, note that Stuxnet, widely believed to be state-created, was built with a single purpose in mind – to cripple Iran’s ability to develop nuclear weapons.

When security researchers began studying Stuxnet more closely, they were astonished at its level of sophistication. Stuxnet’s ultimate aim, researchers found, was to target specialized Siemens industrial software and equipment employed in Iran’s nuclear research facilities. The original Stuxnet virus was able to deftly inject code into the programmable logic controllers (PLCs) of the aforementioned Siemens industrial control systems.

The end result, according to foreign reports, is that Stuxnet was able to infiltrate an Iranian uranium enrichment facility and subsequently destroy over 1,000 centrifuges, albeit in a manner subtle enough to avoid detection by Iranian nuclear scientists.

In the wake of Stuxnet, researchers weren’t shy about proclaiming that a new era of sophisticated malware was upon us.

This past September, a new variant of Stuxnet was discovered. It’s called Duqu and security experts believe it was developed in conjunction with Stuxnet by the same development team. After studying the software, security firm Symantec said that the Duqu virus was almost identical to Stuxnet, yet with a “completely different purpose.”

The reported goal of the Duqu virus wasn’t to sabotage but rather to acquire information.

A research report from Symantec this past October explained,

Duqu is essentially the precursor to a future Stuxnet-like attack. The threat was written by the same authors (or those that have access to the Stuxnet source code) and appears to have been created since the last Stuxnet file was recovered. Duqu’s purpose is to gather intelligence data and assets from entities, such as industrial control system manufacturers, in order to more easily conduct a future attack against another third party. The attackers are looking for information such as design documents that could help them mount a future attack on an industrial control facility.

And just when you thought the whole Stuxnet/Duqu trojan saga couldn’t get any crazier, a security firm that has been analyzing Duqu writes that it employs a programming language they’ve never seen before.

Security researchers at Kaspersky Lab found that the “payload DLL” of Duqu contains code written in an unrecognizable programming language. While many parts of the Trojan are written in C++, other portions contain syntax that researchers can’t pin to any known programming language.

After analyzing the code, researchers at Kaspersky were able to conclude the following:

  • The Duqu Framework appears to have been written in an unknown programming language.
  • Unlike the rest of the Duqu body, it’s not C++ and it’s not compiled with Microsoft’s Visual C++ 2008.
  • The highly event-driven architecture points to code designed to be used in pretty much any kind of conditions, including asynchronous communications.
  • Given the size of the Duqu project, it is possible that the framework was created by a different team than the one that created the drivers and wrote the system infection and exploits.
  • The mysterious programming language is definitively NOT C++, Objective C, Java, Python, Ada, Lua and many other languages we have checked.
  • Compared to Stuxnet (entirely written in MSVC++), this is one of the defining particularities of the Duqu framework.

Consequently, Kaspersky decided to reach out to the programming community for help figuring out which programming language the Duqu Framework employs. As of Sunday evening, nothing conclusive has been found, but a comment on Kaspersky’s blog post might prove useful.

The code your referring to .. the unknown c++ looks like the older IBM compilers found in OS400 SYS38 and the oldest sys36.The C++ code was used to write the tcp/ip stack for the operating system and all of the communications. The protocols used were the following x.21(async) all modes, Sync SDLC, x.25 Vbiss5 10 15 and 25. CICS. RSR232. This was a very small and powerful communications framework. The IBM system 36 had only 300MB hard drive and one megabyte of memory,the operating system came on diskettes.This would be very useful in this virus. It can track and monitor all types of communications. It can connect to everything and anything.

While many other suggestions in the comment section were dismissed by Kaspersky Lab expert Igor Soumenkov, the one above netted a “Thank you!”

Another tip that Soumenkov seemed excited about identifies the unknown language as Simple Object Orientation (for C), but not without some reservations.

SOO may be the correct answer! But there are still two things to figure out:
1) When was SOO C created? I see Oct 2010 in git – that’s too late, Duqu was already out there.
2) If SOO is the toolkit, then event driven model was created by the authors of Duqu. Given the size of framework-based code, they should have spent 1+ year making all things work correctly.

It turns out that almost the same code can be produced by the MSVC compiler for a “hand-made” C class. This means that a custom OO C framework is the most probable answer to our question.
We kept this (OO C) version as a “worst-case” explanation – because that would mean that the amount of time and effort invested in development of the Framework is enormous compared to other languages/toolkits.

Note that work on Duqu, according to researchers, began sometime in 2007. And as for the enormous amount of work Soumenkov refers to, remember that most researchers believe Stuxnet and its brethren were created by state actors. Many believe Israel and the United States may have worked together on the project to stymie Iran’s nuclear weapons plans. Others believe Stuxnet may be the handiwork of China.


Hacker commandeers GitHub to prove Rails vulnerability

Tuesday, March 6th, 2012

A Russian hacker dramatically demonstrated one of the most common security weaknesses in the Ruby on Rails web application framework. By doing so, he took full control of the databases GitHub uses to distribute Linux and thousands of other open-source software packages.

Egor Homakov exploited what’s known as a mass assignment vulnerability in GitHub to gain administrator access to the Ruby on Rails repository hosted on the popular website. The weekend hack allowed him to post an entry in the framework’s bug tracker dated 1,001 years into the future. It also allowed him to gain write privileges to the code repository. He carried out the attack by replacing a cryptographic key of a known developer with one he created. While the hack was innocuous, it sparked alarm among open-source advocates because it could have been used to plant malicious code in repositories millions of people use to download trusted software.

Homakov launched the attack two days after he posted a vulnerability report to the Rails bug list warning that mass assignments in Rails made websites relying on the framework susceptible to compromise. A variety of developers replied with posts saying the vulnerability is already well known and responsibility for preventing exploits rests with those who use the framework. Homakov responded by saying even developers for large sites such as GitHub, Posterous, Speakerdeck, and Scribd were failing to adequately protect against the vulnerability.

In the following hours, participants in the online discussion continued to debate the issue. The mass assignment vulnerability is to Rails what SQL injection weaknesses are to other web applications: a bug so common that many users have grown impatient with warnings about it. Maintainers of Rails have largely argued that individual developers should single out and “blacklist” attributes that are too security-sensitive to be externally modified. Others, such as Homakov, have said Rails maintainers should turn on whitelist protection by default. Currently, applications must explicitly enable such protections.
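The actual bug is Rails-specific, but the pattern translates to any language. Here is a plain-Python illustration — class and field names invented for demonstration — of why blind mass assignment is dangerous and how a whitelist closes the hole:

```python
# Hedged illustration of the mass-assignment pattern in plain Python
# (the real vulnerability was in Ruby on Rails; names here are invented).

class User:
    ALLOWED = {"name", "email"}  # whitelist of externally writable fields

    def __init__(self):
        self.name = ""
        self.email = ""
        self.admin = False      # sensitive attribute
        self.public_key = ""    # sensitive: used to authorize pushes

    def update_unsafe(self, params):
        """Mass assignment: every submitted field is trusted."""
        for key, value in params.items():
            setattr(self, key, value)

    def update_safe(self, params):
        """Whitelisted assignment: unknown or sensitive fields are ignored."""
        for key, value in params.items():
            if key in self.ALLOWED:
                setattr(self, key, value)

# An attacker adds extra fields to an ordinary profile-update request.
evil_params = {"name": "egor", "admin": True, "public_key": "ATTACKER-KEY"}

victim = User()
victim.update_unsafe(evil_params)
print(victim.admin)     # True  -- attacker is now an administrator

victim2 = User()
victim2.update_safe(evil_params)
print(victim2.admin)    # False -- sensitive fields untouched
```

The substitution of a developer’s public key in the GitHub attack is exactly the `public_key` case above: a field the form never exposes, but which the unsafe update path happily writes.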

A couple days into the debate, Homakov responded by exploiting mass assignment bugs in GitHub to take control of the site. Less than an hour after discovering the attack, GitHub administrators deployed a fix for the underlying vulnerability and initiated an investigation to see if other parts of the site suffered from similar weaknesses. The site also temporarily suspended Homakov, later reinstating him.

“Now that we’ve had a chance to review his activity, and have determined that no malicious intent was present, @homakov’s account has been reinstated,” a blog post published on Monday said. It went on to encourage developers to practice “responsible disclosure.”


Adobe rushes out critical Flash update

Tuesday, March 6th, 2012

Adobe rushed out a security update to its Flash Player on March 6, ahead of the company’s normal release on the second Tuesday of the month. The fix is for a critical flaw in Flash that could “cause a crash and potentially allow an attacker to take control of the system.”

The vulnerability, discovered by Tavis Ormandy and Fermin Serna of Google’s security team, affects Flash players on Windows, Mac OS X, Linux, and Solaris operating systems, as well as Google Chrome and Android. It takes advantage of a bug in Flash’s Matrix3D class, which could allow an attacker to corrupt system memory. That could allow the attacker to inject and execute code on a targeted system, gaining control of it.


‘Twisted’ waves could boost capacity of wi-fi and TV

Friday, March 2nd, 2012

A striking demonstration of a means to boost the information-carrying capacity of radio waves has taken place across the lagoon in Venice, Italy.

The technique exploits what is called the “orbital angular momentum” of the waves – imparting them with a “twist”.

Varying this twist permits many data streams to fit in the frequency spread currently used for just one.

The approach, described in the New Journal of Physics, could be applied to radio, wi-fi, and television.

The parts of the electromagnetic spectrum that are used for all three are split up in roughly the same way, with a spread of frequencies allotted to each channel. Each one contains a certain, limited amount of information-carrying capacity: its bandwidth.

As telecommunications have proliferated through the years, the spectrum has become incredibly crowded, with little room left for new means of signal transmission, or for existing means to expand their bandwidths.

But Bo Thide of Swedish Institute of Space Physics and a team of colleagues in Italy hope to change that by exploiting an entirely new physical mechanism to fit more capacity onto the same bandwidth.

Galilean connection

The key lies in the distinction between the orbital and spin angular momentum of electromagnetic waves.

A perfect analogy is the Earth-Sun system. The Earth spins on its axis, manifesting spin angular momentum; at the same time, it orbits the Sun, manifesting orbital angular momentum.

The “particles” of light known as photons can carry both types; the spin angular momentum of photons is better known through the idea of polarisation, which some sunglasses and 3-D glasses exploit.

Just as the “signals” for the left and right eye in 3-D glasses can be encoded on light with two different polarisations, extra signals can be set up with different amounts of orbital angular momentum.

Prof Thide and his colleagues have been thinking about the idea for many years; last year, they published an article in Nature Physics showing that spinning black holes could produce such “twisted” light.

But the implications for exploiting the effect closer to home prompted the team to carry out their experiment in Venice, sending a signal 442m from San Giorgio island to the Palazzo Ducale in St Mark’s square.

“It’s exactly the same place that Galileo first demonstrated his telescope to the authorities in Venice, 400 years ago,” Prof Thide told BBC News.

“They were not convinced at all; they could see the moons of Jupiter but they said, ‘They must be inside the telescope, it can’t possibly be like that.’

“To some extent we have felt the same (disbelief from the community), so we said, ‘Let’s do it, let’s demonstrate it for the public.'”

Marconi style

In the simplest case, putting a twist on the waves is as easy as putting a twist into the dish that sends the signal. The team split one side of a standard satellite-type dish and separated the two resulting edges.

In this way, different points around the circumference of the beam have a different amount of “head start” relative to other points – if one could freeze and visualise the beam, it would look like a corkscrew.
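The reason different twists can share one frequency band is that modes with different integer twist numbers are mutually orthogonal around the beam’s circumference, so a receiver can pick each one out independently. A small numerical check in Python (an illustrative toy, not the team’s actual signal processing):

```python
# Numerical toy, not the team's signal processing: modes exp(i*l*phi) with
# different integer twist l are orthogonal over one turn of the beam.
import cmath
from math import pi

N = 4096  # sample points around the beam's circumference

def mode(l, k):
    """Value of the twist-l mode at angular sample k."""
    phi = 2 * pi * k / N
    return cmath.exp(1j * l * phi)

def overlap(l1, l2):
    """Discrete inner product of two modes over a full turn."""
    return sum(mode(l1, k) * mode(l2, k).conjugate() for k in range(N)) / N

print(abs(overlap(1, 1)))    # ~1.0: a mode fully overlaps itself
print(abs(overlap(0, 1)))    # ~0.0: different twists don't mix
print(abs(overlap(2, -1)))   # ~0.0: holds for any two distinct twists
```

Because the inner product vanishes for any two distinct twist values, each value of l effectively becomes its own channel inside a single frequency allocation.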

In a highly publicised event in 2011, the team used a normal antenna and their modified antenna, both transmitting at 2.4 GHz – a band used by wi-fi – to send two audio signals within the bandwidth normally required by one. They repeated the experiment later with two television signals.

Crowds were treated to projections beamed onto the Palazzo Ducale explaining the experiment, and then a display of the message “signal received” when the experiment worked.

Prof Thide said that the public display – “in the style of (radio pioneer) Guglielmo Marconi… involving ordinary people in the experiment”, as the authors put it – was just putting into practice what he had believed since first publishing the idea in a 2007 Physical Review Letters article.

“For me it was obvious this would work,” he said. “Maxwell’s equations that govern electromagnetic fields are… the most well tested laws of physics that we have.

“We did this because other people wanted us to demonstrate it.”

Prof Thide and his colleagues are already in discussions with industry to develop a system that can transmit many more than two bands of different orbital angular momentum.

The results could radically change just how much information and speed can be squeezed out of the crowded electromagnetic spectrum, applied to radio and television as well as wi-fi and perhaps even mobile phones.

Source:  BBC

NASA admits stolen laptop can control ISS

Thursday, March 1st, 2012

You’d expect a group like NASA to be pretty proficient when it comes to securing sensitive information. You’d also expect them to keep portable assets like laptops tightly controlled. Unfortunately, you’d be wrong on both counts.

A NASA laptop stolen last year contained control codes for the International Space Station, and its contents had never been encrypted. Even more alarming is the fact that this wasn’t an isolated incident: between 2009 and 2011, 48 NASA laptops were reported lost or stolen. Even if you overlook the type of information these computers would be carrying, that’s a terrible stat from an inventory-control standpoint alone.

Part of the problem stems from NASA’s reliance on staff to report lost or stolen assets. They’re also supposed to specify what information was on the machine when it disappeared. Some employees may not sound the alarm out of embarrassment or fear of being reprimanded; either way, it’s clear that NASA needs to batten down the hatches.

A good place to start would be implementing an agency-wide encryption policy — and ensuring that such a policy is strictly adhered to on portable devices. Asset tracking software would be an excellent idea too — something like LoJack, so that systems can be located and disabled remotely should they go missing.

NASA Inspector General Paul Martin makes it clear just how dangerous the situation is. In the past two years, he wrote, more than 5,000 computer security incidents resulted in unauthorized access to NASA systems or malware infections. Those breaches cost NASA approximately $7 million — money that could have gone a long way towards improving security.