Archive for the ‘Web’ Category

IE 10 zero-day attack targets US military

Friday, February 14th, 2014

Fireeye, a security research firm, has identified a targeted and sophisticated attack which they believe to be aimed at US military personnel. Fireeye calls this specific attack Operation SnowMan. The attack was staged from the web site of the U.S. Veterans of Foreign Wars, which the attackers had compromised. Pages from the site were modified to include code (in an IFRAME) which exploited an unpatched vulnerability in Internet Explorer 10 on systems that also have Adobe Flash Player.

The actual vulnerability is in Internet Explorer 10, but it relies on a malicious Flash object and a callback from that Flash object to the vulnerability trigger in JavaScript. Fireeye says they are in touch with Microsoft about the vulnerability.

The attack checks to make sure it is running on IE10 and that the user is not running the Microsoft Enhanced Mitigation Experience Toolkit (EMET), a tool which can help to harden applications against attack. So running another version of IE, including IE11, or installing EMET would protect against this attack.
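
The gating on browser version described above is, at its core, a test against the User-Agent header. The sketch below is a purely hypothetical Python illustration of that kind of check; it is not code from the actual attack, and the EMET detection Fireeye describes is a separate, more involved step.

def is_internet_explorer_10(user_agent: str) -> bool:
    # IE 10 identifies itself with "MSIE 10.0" and the Trident/6.0 engine token.
    return "MSIE 10.0" in user_agent and "Trident/6.0" in user_agent

if __name__ == "__main__":
    ua = "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; Trident/6.0)"
    print(is_internet_explorer_10(ua))  # True -> this visitor would be served the exploit IFRAME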

The attack was first identified on February 11. Fireeye believes that it was placed on the VFW site in order to be found by US military personnel, and that the attack was timed to coincide with a long holiday weekend and the major snowstorm which struck the eastern United States this week, including the Washington DC region.

Fireeye also presents evidence that the attack comes from the same group of attackers they have identified in previous sophisticated, high-value attacks, specifically Operation DeputyDog and Operation Ephemeral Hydra. They reach this conclusion by analyzing the techniques used. They say that this group has, in the past, attacked U.S. government entities, Japanese firms, defense industrial base (DIB) companies, law firms, information technology (IT) companies, mining companies and non-governmental organizations (NGOs).

Source:  zdnet.com

Building control systems can be pathway to Target-like attack

Tuesday, February 11th, 2014

Credentials stolen from automation and control providers were used in Target hack

Companies should carefully review the network access given to third-party engineers monitoring building control systems to avoid a Target-like attack, experts say.

Security related to providers of building automation and control systems was in the spotlight this week after the security blog KrebsonSecurity reported that credentials stolen from Fazio Mechanical Services, based in Sharpsburg, Pa., were used by hackers who late last year snatched 40 million debit- and credit-card numbers from Target’s electronic cash registers, known as point-of-sale (POS) systems.

The blog initially identified Fazio as a provider of refrigeration and heating, ventilation and air conditioning (HVAC) systems. The report sparked a discussion in security circles on how such a subcontractor’s credentials could provide access to areas of the retailer’s network Fazio would not need.

On Thursday, Fazio released a statement saying it does not monitor or control Target’s HVAC systems, according to KrebsonSecurity. Instead it remotely handles “electronic billing, contract submission and project management,” for the retailer.

Given the nature of its work, it is certainly possible that Fazio had access to Target business applications that could be tied to POS systems. However, interviews with experts before Fazio’s clarification found that subcontractors monitoring and maintaining HVAC and other building systems remotely often have too much access to corporate networks.

“Generally what happens is some new business service needs network access, so, if there’s time pressure, it may be placed on an existing network, (without) thinking through all the security implications,” Dwayne Melancon, chief technology officer for data security company Tripwire, said.

Most building systems, such as HVAC, are Internet-enabled so maintenance companies can monitor them remotely. Use of the Shodan search engine for Internet-enabled devices can reveal thousands of systems ranging from building automation to crematoriums with weak login credentials, researchers have found.

Using homegrown technology, Billy Rios, director of threat intelligence for vulnerability management company Qualys, found on the Internet a building control system for Target’s Minneapolis-based headquarters.

While the system is connected to an internal network, Rios could not determine whether it’s a corporate network without hacking the system, which would be illegal.

“We know that we could probably exploit it, but what we don’t know is what purpose it’s serving,” he said. “It could control energy, it could control HVAC, it could control lighting or it could be for access control. We’re not sure.”

If the Web interface of such systems is on a corporate network, then some important security measures need to be taken.

All data traffic moving to and from the server should be closely monitored. To do their job, building engineers need to access only a few systems. Monitoring software should flag traffic going anywhere else immediately.

“Workstations in your HR (human resources) department should probably not be talking to your refrigeration devices,” Rios said. “Seeing high spikes in traffic from embedded devices on your corporate network is also an indication that something is wrong.”

In addition, companies should know the IP addresses used by subcontractors in accessing systems. Unrecognized addresses should be automatically blocked.
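
As a minimal sketch of that kind of allowlisting — the subcontractor network below is a made-up example, not a real Fazio or Target address range — the check itself is only a few lines of Python:

import ipaddress

# Hypothetical allowlist of subcontractor networks; real entries would come from
# the vendor contract and change-management records.
ALLOWED_NETWORKS = [ipaddress.ip_network("203.0.113.0/28")]

def is_allowed(source_ip: str) -> bool:
    """Permit a remote session only if it originates from an approved network."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in ALLOWED_NETWORKS)

print(is_allowed("203.0.113.5"))   # True  -> allow the engineer's session
print(is_allowed("198.51.100.7"))  # False -> block automatically and raise an alert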

Better password management is also a way to prevent a cyberattack. In general, a subcontractor’s employees will share the same credentials to access a customer’s systems. Those credentials are seldom changed, even when an employee leaves the company.

“That’s why it’s doubly important to make sure those accounts and systems have very restricted access, so you can’t use that technician login to do other things on the network,” Melancon said.

Every company should do a thorough review of their networks to identify every building system. “Understanding where these systems are is the first step,” Rios said.

Discovery should be followed by an evaluation of the security around those systems that are on the Internet.

Source:  csoonline.com

Huge hack ‘ugly sign of future’ for internet threats

Tuesday, February 11th, 2014

A massive attack that exploited a key vulnerability in the infrastructure of the internet is the “start of ugly things to come”, it has been warned.

Online security specialists Cloudflare said it recorded the “biggest” attack of its kind on Monday.

Hackers used weaknesses in the Network Time Protocol (NTP), a system used to synchronise computer clocks, to flood servers with huge amounts of data.

The technique could potentially be used to force popular services offline.

Several experts had predicted that the NTP would be used for malicious purposes.

The target of this latest onslaught is unknown, but it was directed at servers in Europe, Cloudflare said.

Attackers used a well-known method of bringing down a system, known as denial of service (DoS), in which huge amounts of data are forced on a target, causing it to fall over.

Cloudflare chief executive Matthew Prince said his firm had measured the “very big” attack at about 400 gigabits per second (Gbps), 100Gbps larger than an attack on anti-spam service Spamhaus last year.

Predicted attack

In a report published three months ago, Cloudflare warned that attacks on the NTP were on the horizon and gave details of how web hosts could best try to protect their customers.

NTP servers, of which there are thousands around the world, are designed to keep computers synchronised to the same time.

The fundamentals of the NTP began operating in 1985. While there have been changes to the system since then, it still operates in much the same way.

A computer needing to synchronise time with the NTP will send a small amount of data to make the request. The NTP will then reply by sending data back.

The vulnerability lies with two weaknesses. Firstly, the amount of data the NTP sends back is bigger than the amount it receives, meaning an attack is instantly amplified.

Secondly, the original computer’s location can be “spoofed”, tricking the NTP into sending the information back to somewhere else.

In this attack, it is likely that many machines were used to make requests to the NTP. Hackers spoofed their location so that the massive amounts of data from the NTP were diverted to a single target.
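
A toy sketch of the reflection step helps make the second weakness concrete: the server simply answers whatever address the request claims to come from. The addresses below are illustrative documentation-range values, and no packets are actually sent.

def ntp_reply_destination(claimed_source_ip: str) -> str:
    """An NTP server replies to the address in the request's source field, spoofed or not."""
    return claimed_source_ip

victim = "198.51.100.10"       # hypothetical target
spoofed_source = victim        # attacker forges the source field of its request
print(ntp_reply_destination(spoofed_source))  # the (much larger) reply lands on the victim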

“Amplification attacks like that result in an attacker turning a small amount of bandwidth coming from a small number of machines into a massive traffic load hitting a victim from around the internet,” Cloudflare explained in a blog outlining the vulnerability, posted last month.

‘Ugly future’

The NTP is one of several protocols used within the infrastructure of the internet to keep things running smoothly.

Unfortunately, despite being vital components, most of these protocols were designed and implemented at a time when the prospect of malicious activity was not considered.

“A lot of these protocols are essential, but they’re not secure,” explained Prof Alan Woodward, an independent cyber-security consultant, who had also raised concerns over NTP last year.

“All you can really do is try and mitigate the denial of service attacks. There are technologies around to do it.”

Most effective, Prof Woodward suggested, was technology able to spot when a large amount of data was heading for one destination – and shut off the connection.

Cloudflare’s Mr Prince said that while his firm had been able to mitigate the attack, it was a worrying sign for the future.

“Someone’s got a big, new cannon,” he tweeted. “Start of ugly things to come.”

Source:  BBC

Change your passwords: Comcast hushes, minimizes serious hack

Tuesday, February 11th, 2014

Are you a Comcast customer? Please change your password.

On February 6, NullCrew FTS hacked into at least 34 of Comcast’s servers and published a list of the company’s mail servers and a link to the root file with the vulnerability it used to penetrate the system on Pastebin.

Comcast, the largest internet service provider in the United States, ignored press and media reports of the serious breach for over 24 hours — only when the Pastebin page was removed did the company issue a statement, and even then, it only spoke to a sympathetic B2B outlet.

During that 24 hours, Comcast stayed silent, and the veritable “keys to the kingdom” sat out in the open internet, ripe for the taking by any malicious entity with a little know-how around mail servers and selling or exploiting customer data.

Comcast customers have not been told to reset their passwords. But they should.

Once NullCrew FTS openly hacked at least 34 Comcast mail servers, and the recipe was publicly posted, the servers began to take a beating. Customers in Comcast’s janky, hard-to-find, 1996-style forums knew something was wrong, and forum posts reflected the slowness, the up-and-down servers, and the eventual crashing.

The telecom giant ignored press requests for comment and released a limited statement on February 7 — to Comcast-friendly outlet, broadband and B2B website Multichannel News.

The day-late statement failed to impress the few who saw it, and was criticized for its minimizing language and weak attempt to suggest that the breach had been unsuccessful.

From Comcast’s statement on Multichannel’s post No Evidence That Personal Sub Info Obtained By Mail Server Hack:

Comcast said it is investigating a claim by a hacker group that claims to have broken into a batch of the MSO email servers, but believes that no personal subscriber data was obtained as a result.

“We’re aware of the situation and are aggressively investigating it,” a Comcast spokesman said. “We take our customers’ privacy and security very seriously, and we currently have no evidence to suggest any personal customer information was obtained in this incident.”

Not only is there a high probability that customer information was exposed — because direct access was provided to the public for 24 hours — but the vulnerability exploited by the attackers was disclosed and fixed in December 2013.

Just not by Comcast, apparently.

Vulnerability reported December 2013, not patched by Comcast

NullCrew FTS used the unpatched security vulnerability CVE-2013-7091 to open what was essentially an unlocked door, giving anyone access to usernames, passwords, and other sensitive details from Comcast’s servers.

NullCrew FTS used a Local File Inclusion (LFI) exploit to gain access to the Zimbra LDAP and MySQL database — which houses the usernames and passwords of Comcast ISP users.
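
A Local File Inclusion bug lets an attacker feed path-traversal sequences into a parameter that is supposed to name a template or resource, so the server returns an arbitrary file instead. The sketch below is a generic, illustrative log filter for spotting that pattern; it is not the specific CVE-2013-7091 request used against Zimbra.

import re
from urllib.parse import unquote

# Crude indicator of path-traversal / LFI probing in web access logs (illustrative only).
TRAVERSAL = re.compile(r"\.\./|/etc/passwd", re.IGNORECASE)

def looks_like_lfi(request_line: str) -> bool:
    """Flag request lines containing traversal sequences, even when URL-encoded."""
    return bool(TRAVERSAL.search(unquote(request_line)))

print(looks_like_lfi("GET /webmail/?skin=../../../../etc/passwd HTTP/1.1"))  # True
print(looks_like_lfi("GET /index.php?page=home HTTP/1.1"))                   # False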

“Fun Fact: 34 Comcast mail servers are victims to one exploit,” tweeted NullCrew FTS.

If you are a Comcast customer, you are at risk: All Comcast internet service includes a master email address.

Even if a customer doesn’t use Comcast’s Xfinity mail service, every Comcast ISP user has a master email account with which to manage their services, and it is accessible through a “Zimbra” webmail site.

This account is used to access payment information, email settings, user account creation and settings, and any purchases from Comcast’s store or among its services.

With access to this master email address, someone can give up to six “household members” access to the Comcast account.

NullCrew taunted Comcast on Twitter, then posted the data on Pastebin and taunted the company a little bit more.

Because there were “no passwords” on the Pastebin, some observers believed — incorrectly — that there was no serious risk for exploitation of sensitive customer information.

NullCrew FTS: 2 — big telecoms: 0

On the first weekend of February 2014, NullCrew FTS took credit for a valid hack against telecom provider Bell Canada.

In the first strike of what looks like it’ll be a very successful campaign to cause pain and humiliation to big telecoms, NullCrew FTS accessed and exposed more than 22,000 usernames and passwords, and some credit card numbers belonging to the phone company’s small business customers.

Establishing a signature game of cat and mouse with clueless support staff, NullCrew FTS contacted Bell customer support two weeks before its disclosure.

Like Comcast’s robotic customer service responses to NullCrew FTS on Twitter, Bell’s support staff either didn’t know how to report the security incident upstream, had no idea what a hacking event was, or didn’t take the threat seriously.

Bell also tried to play fast and loose with its accountability in the security smash and grab; it acknowledged the breach soon after, but blamed it on an Ottawa-based third-party supplier.

However, NullCrew FTS announced the company’s insecurities in mid January with a public warning that the hackers had issued to a company support representative about the vulnerabilities.

NullCrew FTS followed up with Bell by posting a Pastebin link on Twitter with unredacted data.

Excerpt from zdnet.com

Feds to dump CGI from Healthcare.gov project

Monday, January 13th, 2014

The Obama Administration is set to fire CGI Federal as prime IT contractor of the problem-plagued Healthcare.gov website, a report says.

The government now plans to hire IT consulting firm Accenture to fix the Affordable Care Act (ACA) website’s lingering performance problems, the Washington Post reported today. Accenture will get a 12-month, $90 million contract to update the website, the newspaper reported.

The Healthcare.gov site is the main portal for consumers to sign up for new insurance plans under the Affordable Care Act.

CGI’s Healthcare.gov contract is due for renewal in February. The terms of the agreement included options for the U.S. to renew it for one more year and then another two more after that.

The decision not to renew comes as frustration grows among officials of the Centers for Medicare and Medicaid Services (CMS), which oversees the ACA, about the pace and quality of CGI’s work, the Post said, quoting unnamed sources. About half of the software fixes written by CGI engineers in recent months have failed on first attempt to use them, CMS officials told the Post.

The government awarded the contract to Accenture on a sole-source, or no-bid, basis because the CGI contract expires at the end of next month. That gives Accenture less than two months to familiarize itself with the project before it takes over the complex task of fixing numerous remaining glitches.

CGI did not immediately respond to Computerworld’s request for comment.

In an email, an Accenture spokesman declined to confirm or deny the report.

“Accenture Federal Services is in discussions with clients and prospective clients all the time, but it is not appropriate to discuss new business opportunities we may or may not be pursuing,” the spokesman said.

The decision to replace CGI comes as performance of the Healthcare.gov website appears to be steadily improving after its spectacularly rocky Oct. 1 launch.

A later post mortem of the debacle showed that servers did not have the right production data, third party systems weren’t connecting as required, dashboards didn’t have data and there simply wasn’t enough server capacity to handle traffic.

Though CGI had promised to have the site ready and fully functional by Oct. 1, between 30% and 40% of the site had yet to be completed at the time. The company has taken a lot of the heat since.

Ironically, CGI has impressive credentials. It is nowhere near as big as the largest government IT contractors, yet it is one of only 10 companies in the U.S. to have achieved the highest Capability Maturity Model Integration (CMMI) level for software development.

CGI Federal is a subsidiary of Montreal-based CGI Group. CMS hired the company as the main IT contractor for Healthcare.gov in 2011 under an $88 million contract. So far, the firm has received about $113 million for its work on the site.

Source:  pcadvisor.com

Cisco promises to fix admin backdoor in some routers

Monday, January 13th, 2014

Cisco Systems promised to issue firmware updates removing a backdoor from a wireless access point and two of its routers later this month. The undocumented feature could allow unauthenticated remote attackers to gain administrative access to the devices.

The vulnerability was discovered over the Christmas holiday on a Linksys WAG200G router by a security researcher named Eloi Vanderbeken. He found that the device had a service listening on port 32764 TCP, and that connecting to it allowed a remote user to send unauthenticated commands to the device and reset the administrative password.

It was later reported by other users that the same backdoor was present in multiple devices from Cisco, Netgear, Belkin and other manufacturers. On many devices this undocumented interface can only be accessed from the local or wireless network, but on some devices it is also accessible from the Internet.

Cisco identified the vulnerability in its WAP4410N Wireless-N Access Point, WRVS4400N Wireless-N Gigabit Security Router and RVS4000 4-port Gigabit Security Router. The company is no longer responsible for Linksys routers, as it sold that consumer division to Belkin early last year.

The vulnerability is caused by a testing interface that can be accessed from the LAN side on the WRVS4400N and RVS4000 routers and also the wireless network on the WAP4410N wireless access point device.

“An attacker could exploit this vulnerability by accessing the affected device from the LAN-side interface and issuing arbitrary commands in the underlying operating system,” Cisco said in an advisory published Friday. “An exploit could allow the attacker to access user credentials for the administrator account of the device, and read the device configuration. The exploit can also allow the attacker to issue arbitrary commands on the device with escalated privileges.”

The company noted that there are no known workarounds that could mitigate this vulnerability in the absence of a firmware update.
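
One way to see whether a particular device exposes the undocumented service is simply to test whether TCP port 32764 accepts connections. The sketch below is a minimal, generic check in Python (192.168.1.1 is just an assumed default gateway address); run it only against equipment you own or are authorized to test.

import socket
import sys

def port_32764_open(host: str, timeout: float = 3.0) -> bool:
    """Return True if the host accepts a TCP connection on port 32764."""
    try:
        with socket.create_connection((host, 32764), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    router = sys.argv[1] if len(sys.argv) > 1 else "192.168.1.1"  # assumed default gateway
    state = "OPEN" if port_32764_open(router) else "closed or filtered"
    print(f"{router}: port 32764 {state}")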

The SANS Internet Storm Center, a cyber threat monitoring organization, warned at the beginning of the month that it detected probes for port 32764 TCP on the Internet, most likely targeting this vulnerability.

Source:  networkworld.com

Hackers use Amazon cloud to scrape mass number of LinkedIn member profiles

Friday, January 10th, 2014

EC2 service helps hackers bypass measures designed to protect LinkedIn users

LinkedIn is suing a gang of hackers who used Amazon’s cloud computing service to circumvent security measures and copy data from hundreds of thousands of member profiles each day.

“Since May 2013, unknown persons and/or entities employing various automated software programs (often referred to as ‘bots’) have registered thousands of fake LinkedIn member accounts and have extracted and copied data from many member profile pages,” company attorneys alleged in a complaint filed this week in US District Court in Northern California. “This practice, known as ‘scraping,’ is explicitly barred by LinkedIn’s User Agreement, which prohibits access to LinkedIn ‘through scraping, spidering, crawling, or other technology or software used to access data without the express written consent of LinkedIn or its Members.’”

With more than 259 million members—many of whom are highly paid professionals in technology, finance, and medical industries—LinkedIn holds a wealth of personal data that can prove highly valuable to people conducting phishing attacks, identity theft, and similar scams. The allegations in the lawsuit highlight the unending tug-of-war between hackers who work to obtain that data and the defenders who use technical measures to prevent the data from falling into the wrong hands.

The unnamed “Doe” hackers employed a raft of techniques designed to bypass anti-scraping measures built in to the business network. Chief among them was the creation of huge numbers of fake accounts. That made it possible to circumvent restrictions dubbed FUSE, which limit the activity any single account can perform.

“In May and June 2013, the Doe defendants circumvented FUSE—which limits the volume of activity for each individual account—by creating thousands of different new member accounts through the use of various automated technologies,” the complaint stated. “Registering so many unique new accounts allowed the Doe defendants to view hundreds of thousands of member profiles per day.”
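
LinkedIn has not published how FUSE works; the sketch below is only a generic illustration of per-account activity capping and of why spreading requests across thousands of fake accounts defeats it (the limit value is invented for the example).

from collections import defaultdict

DAILY_PROFILE_VIEW_LIMIT = 500   # invented cap for illustration
views_today = defaultdict(int)

def allow_profile_view(account_id: str) -> bool:
    """Permit a view only while the account stays under its daily cap."""
    if views_today[account_id] >= DAILY_PROFILE_VIEW_LIMIT:
        return False
    views_today[account_id] += 1
    return True

# One account is throttled quickly...
single = sum(allow_profile_view("acct-0") for _ in range(1000))
# ...but the same 1,000 requests spread across 1,000 fake accounts all succeed.
many = sum(allow_profile_view(f"acct-{i}") for i in range(1000))
print(single, many)  # 500 1000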

The hackers also circumvented a separate security measure that is supposed to require end users to complete bot-defeating CAPTCHA dialogues when potentially abusive activities are detected. They also managed to bypass restrictions that LinkedIn intended to impose through a robots.txt file, which websites use to make clear which content may be indexed by automated Web crawling programs employed by Google and other sites.
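
robots.txt is purely advisory: a compliant crawler consults it before fetching, as the standard-library sketch below shows, but nothing in the file itself stops a scraper that simply skips the check (the URLs are examples).

from urllib.robotparser import RobotFileParser

# A well-behaved crawler asks robots.txt for permission before fetching a page.
robots = RobotFileParser()
robots.set_url("https://www.linkedin.com/robots.txt")
robots.read()
print(robots.can_fetch("ExampleCrawler/1.0", "https://www.linkedin.com/in/some-member"))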

LinkedIn engineers have disabled the fake member profiles and implemented additional technological safeguards to prevent further scraping. They also conducted an extensive investigation into the bot-powered methods employed by the hackers.

“As a result of this investigation, LinkedIn determined that the Doe defendants accessed LinkedIn using a cloud computing platform offered by Amazon Web Services (‘AWS’),” the complaint alleged. “This platform—called Amazon Elastic Compute Cloud or Amazon EC2—allows users like the Doe defendants to rent virtual computers on which to run their own computer programs and applications. Amazon EC2 provides resizable computing capacity. This feature allows users to quickly scale capacity, both up and down. Amazon EC2 users may temporarily run hundreds or thousands of virtual computing machines. The Doe defendants used Amazon EC2 to create virtual machines to run automated bots to scrape data from LinkedIn’s website.”

It’s not the first time hackers have used EC2 to conduct nefarious deeds. In 2011, the Amazon service was used to control a nasty bank fraud trojan. (EC2 has also been a valuable tool to whitehat password crackers.) Plenty of other popular Web services have been abused by online crooks as well. In 2009, for instance, researchers uncovered a Twitter account that had been transformed into a command and control channel for infected computers.

The goal of LinkedIn’s lawsuit is to give lawyers the legal means to carry out “expedited discovery to learn the identity of the Doe defendants.” The success will depend, among other things, on whether the people who subscribed to the Amazon service used payment methods or IP addresses that can be traced.

Source:  arstechnica.com

DoS attacks that took down big game sites abused Web’s time-sync protocol

Friday, January 10th, 2014

Miscreants who earlier this week took down servers for League of Legends, EA.com, and other online game services used a never-before-seen technique that vastly amplified the amount of junk traffic directed at denial-of-service targets.

Rather than directly flooding the targeted services with torrents of data, an attack group calling itself DERP Trolling sent much smaller data requests to time-synchronization servers running the Network Time Protocol (NTP). By manipulating the requests to make them appear as if they originated from one of the gaming sites, the attackers were able to vastly amplify the firepower at their disposal. A spoofed request containing eight bytes will typically result in a 468-byte response to a victim, a more than 58-fold increase.

“Prior to December, an NTP attack was almost unheard of because if there was one it wasn’t worth talking about,” Shawn Marck, CEO of DoS-mitigation service Black Lotus, told Ars. “It was so tiny it never showed up in the major reports. What we’re witnessing is a shift in methodology.”

The technique is in many ways similar to the DNS-amplification attacks waged on servers for years. That older DoS technique sends falsified requests to open domain name system servers requesting the IP address for a particular site. DNS-reflection attacks help aggravate the crippling effects of a DoS campaign since the responses sent to the targeted site are about 50 times bigger than the request sent by the attacker.

During the first week of the year, NTP reflection accounted for about 69 percent of all DoS attack traffic by bit volume, Marck said. The average size of each NTP attack was about 7.3 gigabits per second, a more than three-fold increase over the average DoS attack observed in December. Correlating claims DERP Trolling made on Twitter with attacks Black Lotus researchers were able to observe, they estimated the attack gang had a maximum capacity of about 28Gbps.
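
A quick back-of-the-envelope calculation using the figures in this article — the 8-byte request, the roughly 468-byte reply, and Black Lotus’s 28Gbps estimate — shows how little request bandwidth the attackers actually needed. The final step assumes the full amplification factor applies to all of the traffic, which is a simplification.

request_bytes = 8
response_bytes = 468
amplification = response_bytes / request_bytes
print(f"amplification factor: ~{amplification:.1f}x")                  # ~58.5x

peak_attack_gbps = 28                                                   # Black Lotus estimate
required_request_gbps = peak_attack_gbps / amplification
print(f"request bandwidth needed: ~{required_request_gbps:.2f} Gbps")   # roughly 0.5 Gbps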

NTP servers help people synchronize their servers to very precise time increments. Recently, the protocol was found to suffer from a condition that could be exploited by DoS attackers. Fortunately, NTP-amplification attacks are relatively easy to repel. Since virtually all the NTP traffic can be blocked with few if any negative consequences, engineers can simply filter out the packets. Other types of DoS attacks are harder to mitigate, since engineers must first work to distinguish legitimate data from traffic designed to bring down the site.

Black Lotus recommends network operators follow several practices to blunt the effects of NTP attacks. They include using traffic policers to limit the amount of NTP traffic that can enter a network, implementing large-scale DDoS mitigation systems, or opting for service-based approaches that provide several gigabits of standby capacity for use during DDoS attacks.

Source:  arstechnica.com

Unencrypted Windows crash reports give ‘significant advantage’ to hackers, spies

Wednesday, January 1st, 2014

Microsoft transmits a wealth of information from Windows PCs to its servers in the clear, claims security researcher

Windows’ error- and crash-reporting system sends a wealth of data unencrypted and in the clear, information that eavesdropping hackers or state security agencies can use to refine and pinpoint their attacks, a researcher said today.

Not coincidentally, over the weekend the popular German newsmagazine Der Spiegel reported that the U.S. National Security Agency (NSA) collects Windows crash reports from its global wiretaps to sniff out details of targeted PCs, including the installed software and operating systems, down to the version numbers and whether the programs or OSes have been patched; application and operating system crashes that signal vulnerabilities that could be exploited with malware; and even the devices and peripherals that have been plugged into the computers.

“This information would definitely give an attacker a significant advantage. It would give them a blueprint of the [targeted] network,” said Alex Watson, director of threat research at Websense, which on Sunday published preliminary findings of its Windows error-reporting investigation. Watson will present Websense’s discovery in more detail at the RSA Conference in San Francisco on Feb. 24.

Sniffing crash reports using low-volume “man-in-the-middle” methods — the classic is a rogue Wi-Fi hotspot in a public place — wouldn’t deliver enough information to be valuable, said Watson, but a wiretap at the ISP level, the kind the NSA is alleged to have in place around the world, would.

“At the [intelligence] agency level, where they can spend the time to collect information on billions of PCs, this is an incredible tool,” said Watson.

And it’s not difficult to obtain the information.

Microsoft does not encrypt the initial crash reports, said Watson, which include both those that prompt the user before they’re sent and those that do not. Instead, they’re transmitted to Microsoft’s servers “in the clear,” or over standard HTTP connections.

If a hacker or intelligence agency can insert themselves into the traffic stream, they can pluck out the crash reports for analysis without worrying about having to crack encryption.

And the reports from what Microsoft calls “Windows Error Reporting” (ERS), but which is also known as “Dr. Watson,” contain a wealth of information on the specific PC.

When a device is plugged into a Windows PC’s USB port, for example — say an iPhone to sync it with iTunes — an automatic report is sent to Microsoft that contains the device identifier and manufacturer, the Windows version, the maker and model of the PC, the version of the system’s BIOS and a unique machine identifier.

By comparing the data with publicly-available databases of device and PC IDs, Websense was able to establish that an iPhone 5 had been plugged into a Sony Vaio notebook, and even nail the latter’s machine ID.

If hackers are looking for systems running outdated, and thus, vulnerable versions of Windows — XP SP2, for example — the in-the-clear reports will show which ones have not been updated.

Windows Error Reporting is installed and activated by default on all PCs running Windows XP, Vista, Windows 7, Windows 8 and Windows 8.1, Watson said, confirming that the Websense techniques of deciphering the reports worked on all those editions.

Watson characterized the chore of turning the cryptic reports into easily-understandable terms as “trivial” for accomplished attackers.

More thorough crash reports, including ones that Microsoft silently triggers from its end of the telemetry chain, contain personal information and so are encrypted and transmitted via HTTPS. “If Microsoft is curious about the report or wants to know more, they can ask your computer to send a mini core dump,” explained Watson. “Personal identifiable information in that core dump is encrypted.”

Microsoft uses the error and crash reports to spot problems in its software as well as that crafted by other developers. Widespread reports typically lead to reliability fixes deployed in non-security updates.

The Redmond, Wash. company also monitors the crash reports for evidence of as-yet-unknown malware: Unexplained and suddenly-increasing crashes may be a sign that a new exploit is in circulation, Watson said.

Microsoft often boasts of the value of the telemetry to its designers, developers and security engineers, and with good reason: An estimated 80% of the world’s billion-plus Windows PCs regularly send crash and error reports to the company.

But the unencrypted information fed to Microsoft by the initial and lowest-level reports — which Watson labeled “Stage 1” reports — constitutes a dangerous leak, Watson contended.

“We’ve substantiated that this is a major risk to organizations,” said Watson.

Error reporting can be disabled manually on a machine-by-machine basis, or in large sets by IT administrators using Group Policy settings.
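
The Group Policy setting corresponds to a registry value, so on a single machine the same change can be scripted. The sketch below assumes the documented “Disabled” value under the Windows Error Reporting key; verify the key and value against Microsoft’s documentation for your Windows version before relying on it.

# Windows only; requires administrative rights. The key/value used here is an
# assumption to check against Microsoft's documentation for your Windows version.
import winreg

WER_KEY = r"SOFTWARE\Microsoft\Windows\Windows Error Reporting"

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, WER_KEY, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "Disabled", 0, winreg.REG_DWORD, 1)  # 1 = reporting off
print("Windows Error Reporting disabled on this machine.")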

Websense recommended that businesses and other organizations redirect the report traffic on their network to an internal server, where it can be encrypted before being forwarded to Microsoft.

But to turn it off entirely would be to throw away a solid diagnostic tool, Watson argued. ERS can provide insights not only to hackers and spying eavesdroppers, but also to IT departments.

“[ERS] does the legwork, and can let [IT] see where vulnerabilities might exist, or whether rogue software or malware is on the network,” Watson said. “It can also show the uptake on BYOD [bring your own device] policies,” he added, referring to the automatic USB device reports.

Microsoft should encrypt all ERS data that’s sent from customer PCs to its servers, Watson asserted.

A Microsoft spokesperson asked to comment on the Websense and Der Spiegel reports said, “Microsoft does not provide any government with direct or unfettered access to our customer’s data. We would have significant concerns if the allegations about government actions are true.”

The spokesperson added that, “Secure Socket Layer connections are regularly established to communicate details contained in Windows error reports,” which is only partially true, as Stage 1 reports are not encrypted, a fact that Microsoft’s own documentation makes clear.

“The software ‘parameters’ information, which includes such information as the application name and version, module name and version, and exception code, is not encrypted,” Microsoft acknowledged in a document about ERS.

Source:  computerworld.com

Cyber criminals offer malware for Nginx, Apache Web servers

Thursday, December 26th, 2013

A new malware program that functions as a module for the Apache and Nginx Web servers is being sold on cybercrime forums, according to researchers from security firm IntelCrawler.

The malware is called Effusion and according to the sales pitch seen by IntelCrawler, a start-up firm based in Los Angeles that specializes in cybercrime intelligence, it can inject code in real time into websites hosted on the compromised Web servers. By injecting content into a website, attackers can redirect visitors to exploits or launch social engineering attacks.

The Effusion module works with Nginx from version 0.7 up to the latest stable version, 1.4.4, and with Apache running on 32- and 64-bit versions of Linux and FreeBSD. Modules extend Apache’s and Nginx’s core functionality.

The malware can inject rogue code into static content of certain MIME types, including JavaScript and HTML, and in PHP templates at the start, end or after a specific tag. Attackers can push configuration updates and control code modifications remotely.

Filters can also be used to restrict when the injection happens. Effusion supports filtering by referrer header, which can be used to target only visitors that come from specific websites; by User-Agent header, which can be used to target users of specific browsers; and by IP address or address range.
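
Those filters are, in effect, simple predicates over request headers and client addresses. The sketch below is a hypothetical illustration of that shape — it is not Effusion’s code, and every target value in it is made up.

import ipaddress

TARGET_REFERRERS = ("google.", "bing.")                  # inject only for search-engine visitors
TARGET_BROWSERS = ("MSIE", "Trident")                    # inject only for a chosen browser family
EXCLUDED_NET = ipaddress.ip_network("203.0.113.0/24")    # e.g. skip the site operator's own range

def should_inject(referrer: str, user_agent: str, client_ip: str) -> bool:
    """Decide whether a response would be tampered with, per the filter rules above."""
    if ipaddress.ip_address(client_ip) in EXCLUDED_NET:
        return False
    return any(r in referrer for r in TARGET_REFERRERS) and \
           any(b in user_agent for b in TARGET_BROWSERS)

print(should_inject("https://www.google.com/search?q=x",
                    "Mozilla/5.0 (compatible; MSIE 10.0; Trident/6.0)",
                    "198.51.100.9"))  # True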

The malware can check whether it has root access, something that could allow the attackers greater control over the underlying system. It can also delete the injected content when suspicious processes are detected in order to hide itself, Andrey Komarov, IntelCrawler’s CEO, said via email.

The Effusion authors offer precompiled builds for $2,500 per build and plan to vet buyers, Komarov said. This suggests they’re interested in selling it only to a limited number of people so they can continue to offer support and develop the malware at the same time, he said.

While this is not the first malware to function as an Apache module, it is one of the very few so far to also target Nginx, a high-performance Web server that has grown considerably in popularity in recent years.

According to a December Web server survey by Internet services firm Netcraft, Nginx is the third most widely used Web server software after Apache and Microsoft IIS, and has a market share of over 14%. Because it’s built to handle high numbers of concurrent connections, it is used to host heavily trafficked websites including Netflix, Hulu, Pinterest, CloudFlare, Airbnb, WordPress.com, GitHub and SoundCloud.

Source: computerworld.com

Critics: NSA agent co-chairing key crypto standards body should be removed

Monday, December 23rd, 2013

There’s an elephant in the room at the Internet Engineering Task Force.

Security experts are calling for the removal of a National Security Agency employee who co-chairs an influential cryptography panel, which advises a host of groups that forge widely used standards for the Internet Engineering Task Force (IETF).

Kevin Igoe, who in a 2011 e-mail announcing his appointment was listed as a senior cryptographer with the NSA’s Commercial Solutions Center, is one of two co-chairs of the IETF’s Crypto Forum Research Group (CFRG). The CFRG provides cryptographic guidance to IETF working groups that develop standards for a variety of crucial technologies that run and help secure the Internet. The transport layer security (TLS) protocol that underpins Web encryption and standards for secure shell connections used to securely access servers are two examples. Igoe has been CFRG co-chair for about two years, along with David A. McGrew of Cisco Systems.

Igoe’s leadership had largely gone unnoticed until reports surfaced in September that exposed the role NSA agents have played in “deliberately weakening the international encryption standards adopted by developers.” Until now, most of the resulting attention has focused on cryptographic protocols endorsed by the separate National Institute of Standards and Technology. More specifically, scrutiny has centered on a random number generator that The New York Times, citing a document leaked by former NSA contractor Edward Snowden, reported may contain a backdoor engineered by the spy agency.

Enter Dragonfly

Less visibly, the revelations about the NSA’s influence over crypto standards have also renewed suspicions about the agency’s role in the IETF. To wit: they have brought new urgency to long-simmering criticism claiming that the CFRG was advocating the addition of a highly unproven technology dubbed “Dragonfly” to the TLS technology websites use to provide HTTPS encryption. Despite a lack of consensus about the security of Dragonfly, Igoe continued to champion it, critics said, citing several e-mails Igoe sent in the past two years. Combined with his ties to the NSA, Igoe’s continued adherence to Dragonfly is creating a lack of confidence in his leadership, critics said.

“Kevin’s NSA affiliation raises unpleasant but unavoidable questions regarding these actions,” Trevor Perrin, a crypto expert and one of the most vocal critics, wrote Friday in an e-mail to the CFRG list serve. “It’s entirely possible these are just mistakes by a novice chair who lacks experience in a particular sort of protocol and is being pressured by IETF participants to endorse something. But it’s hard to escape an impression of carelessness and unseriousness in Kevin’s work. One wonders whether the NSA is happy to preside over this sort of sloppy crypto design.”

Igoe and McGrew didn’t respond to an e-mail seeking comment. This article will be updated if they respond later.

Like the Dual_EC_DRBG standard adopted by NIST and now widely suspected to contain a backdoor, Dragonfly came with no security proof. And unlike several other better-known candidates for “password-authenticated key exchange” (PAKE), most people participating in the CFRG or TLS working group knew little or nothing about it. TLS already has an existing PAKE called SRP, which critics say makes Dragonfly particularly redundant. PAKEs are complex and still not widely understood by crypto novices, but in essence, they involve the use of passwords to negotiate cryptographic keys used in encrypted TLS communications between servers and end users.

Update: Dragonfly developer Dan Harkins strongly defended the security of the PAKE.

“There are no known security vulnerabilities with dragonfly,” he wrote in an e-mail after this article was first published. “But it does not have a formal security proof to accompany it, unlike some other PAKE schemes. So the TLS working group asked the CFRG to look at it. They were not asked to ‘approve’ it, and they weren’t asked to ‘bless’ it. Just take a look and see if there’s any problems that would make it unsuitable for TLS. There were comments received on the protocol and they were addressed. There were no issues found that make it unsuitable for TLS.”

Harkins also took issue with characterizations by critics and this Ars article that Dragonfly is “untested” and “highly unproven.” He said it’s used in the 802.11 Wi-Fi standard as a secure, drop-in replacement for WPA-PSK security protocol. It’s also found as a method in the extensible authentication protocol and as an alternative to pre-shared keys in the Internet key exchange protocol.

“Do you know of another PAKE scheme that has been so widely applied?” he wrote in his response.

Perrin is a programmer who primarily develops cryptographic applications. He is the developer or co-developer of several proposed Internet standards, including trust assertions for certificate keys and the asynchronous protocol for secure e-mail. In Friday’s e-mail, he provided a raft of reasons why he said Igoe should step down:

1) Kevin has provided the *ONLY* positive feedback for Dragonfly that can be found on the CFRG mailing list or meeting minutes. The contrast between Kevin’s enthusiasm and the group’s skepticism is striking [CFRG_SUMMARY]. It’s unclear what this enthusiasm is based on. There’s no record of Kevin making any effort to understand Dragonfly’s unusual structure, compare it to alternatives, consider possible use cases, or construct a formal security analysis.

2) Twice Kevin suggested a technique for deriving the Dragonfly password-based element which would make the protocol easy to break [IGOE_1, IGOE_2]. He also endorsed an ineffective attempt to avoid timing attacks by adding extra iterations to one of the loops [IGOE_3, IGOE_4]. These are surprising mistakes from an experienced cryptographer.

3) Kevin’s approval of Dragonfly to the TLS WG misrepresented CFRG consensus, which was skeptical of Dragonfly [CFRG_SUMMARY].

Perrin’s motion has been seconded by several other participants, including cryptographer William Whyte. Another critic supporting Igoe’s removal called on security expert Bruce Schneier to replace Igoe. In an e-mail to Ars, Schneier said he is unsure if he is a suitable candidate. “I’m probably too busy to chair, and I’m not really good at the whole ‘organizing a bunch of people’ thing,” he wrote.

In Harkins’ 1,117-word response, he wrote:

The opposition to it in TLS is not “long-simmering” as alleged in the article. It is very recent and the most vocal critic actually didn’t say anything until _after_ the close of Working Group Last Call(a state of draft development on the way to RFC status). As part of his critique, Trevor Perrin has noted that dragonfly has no security proof. That’s true and it’s certainly not new. Having a formal proof has never been a requirement in the past and it is not a requirement today. He has continued to refer to the comments received about the draft as if they are signs of flaws. This is especially shocking given he is referred to in the article as “the developer or co-developer of several proposed Internet standards.” Someone who develops, or co-develops Internet Standards knows how the sausage making works. Comments are made, comments are addressed. There has, to my knowledge, never been an Internet Draft that’s perfect in it’s -00 revision and went straight to publication as an RFC. His criticism is particularly mendacious.

Trevor Perrin has also points out the technique in which dragonfly generates a password-based element as being flawed. The technique was the result of a 2 year old thread on the TLS list on how to address a possible side-channel attack. Trevor doesn’t like it, which is fair, but on the TLS mailing list he has also said that even if it was changed to a way he wants he would still be against dragonfly.

Anyone who has spent any time at all watching how standards bodies churn out the sausage knows that suspicions and vast conspiracy theories are almost always a part of the proceedings. But in a post-Snowden world, there’s new legitimacy to criticism about NSA involvement, particularly when employees of the agency are the ones actively shepherding untested proposals.

Source:  arstechnica.com

Unique malware evades sandboxes

Thursday, December 19th, 2013

Malware used in attack on PHP last month dubbed DGA.Changer

Malware utilized in the attack last month on the developers’ site PHP.net used a unique approach to avoid detection, a security expert says.

On Wednesday, security vendor Seculert reported finding that one of five malware types used in the attack had a unique cloaking property for evading sandboxes. The company called the malware DGA.Changer.

DGA.Changer’s only purpose was to download other malware onto infected computers, Aviv Raff, chief technology officer for Seculert, said on the company’s blog. Seculert identified 6,500 compromised computers communicating with the malware’s command and control server. Almost 60 percent were in the United States.

What Seculert found unique was how the malware could receive a command from a C&C server to change the seed of the software’s domain generation algorithm. The DGA periodically generates a large number of domain names as potential communication points to the C&C server, thereby making it difficult for researchers and law enforcement to find the right domain and possibly shut down the botnet.

“What the attackers behind DGA did is basically change the algorithm on the fly, so they can tell the malware to create a new stream of domains automatically,” Raff told CSOonline.

When the malware always generates the same list of domains, it can be detected in a sandbox, where security technology will isolate suspicious files. However, changing the algorithm on demand means that the malware won’t be identified.

“This is a new capability that didn’t exist before,” Raff said. “This capability allows the attacker to bypass sandbox technology.”
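
A domain generation algorithm is essentially a deterministic pseudo-random stream driven by a seed. The minimal sketch below is not DGA.Changer’s actual algorithm; it only illustrates why a remotely changed seed produces an entirely new list of domains and invalidates any blocklist or sandbox signature built from the old one.

import hashlib

def generate_domains(seed, day, count=5):
    """Derive a deterministic list of candidate C&C domains from a seed and a date."""
    domains = []
    state = f"{seed}:{day}".encode()
    for _ in range(count):
        state = hashlib.md5(state).digest()
        domains.append(state.hex()[:12] + ".com")
    return domains

print(generate_domains("seed-A", "2013-11-20"))  # the stream a sandbox would record
print(generate_domains("seed-B", "2013-11-20"))  # after a "change seed" command: all-new domains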

Hackers working for a nation-state targeting specific entities, such as government agencies, think tanks or international corporations, would use this type of malware, according to Raff. Called advanced persistent threats, these hackers tend to use sophisticated attack tools.

An exploit kit that served five different malware types was used in compromising two servers of PHP.net, a site for downloads and documentation related to the PHP general-purpose scripting language used in Web development. Google spotted four pages on the site serving malicious JavaScript that targeted personal computers, but ignored mobile devices.

The attack was noteworthy because of the number of visitors to PHP.net, which is in the top 250 domains on the Internet, according to Alexa rankings.

To defend against DGA.Changer, companies would need a tool that looks for abnormal behavior in network traffic. The malware tends to generate unusual traffic by querying lots of domains in search of the one leading to the C&C server.

“Because this malware will try to go to different domains, it will generate suspicious traffic,” Raff said.

Seculert did not find any evidence that would indicate who was behind the PHP.net attack.

“This is a group that’s continuously updating this malicious software, so this is a work in progress,” Raff said.

Source:  csoonline.com

Google’s Dart language heads for standardization with new Ecma committee

Friday, December 13th, 2013

Ecma, the same organization that governs the standardization and development of JavaScript (or “EcmaScript” as it’s known in standardese), has created a committee to oversee the publication of a standard for Google’s alternative Web language, Dart.

Technical Committee 52 will develop standards for the Dart language and libraries, create test suites to verify conformance with the standards, and oversee Dart’s future development. Other technical committees within Ecma perform similar work for EcmaScript, C#, and the Eiffel language.

Google released version 1.0 of the Dart SDK last month and believes that the language is sufficiently stable and mature to be both used in a production capacity and put on the track toward creating a formal standard. The company asserts that this will be an important step toward embedding native Dart support within browsers.

Source:  arstechnica.com

Microsoft disrupts ZeroAccess web fraud botnet

Friday, December 6th, 2013

ZeroAccess, one of the world’s largest botnets – a network of computers infected with malware to trigger online fraud – has been disrupted by Microsoft and law enforcement agencies.

ZeroAccess hijacks web search results and redirects users to potentially dangerous sites to steal their details.

It also generates fraudulent ad clicks on infected computers then claims payouts from duped advertisers.

Also called the Sirefef botnet, ZeroAccess has infected two million computers.

The botnet targets search results on Google, Bing and Yahoo search engines and is estimated to cost online advertisers $2.7m (£1.7m) per month.

Microsoft said it had been authorised by US regulators to “block incoming and outgoing communications between computers located in the US and the 18 identified Internet Protocol (IP) addresses being used to commit the fraudulent schemes”.

In addition, the firm has also taken control of 49 domains associated with ZeroAccess.

David Finn, executive director of Microsoft Digital Crimes Unit, said the disruption “will stop victims’ computers from being used for fraud and help us identify the computers that need to be cleaned of the infection”.

‘Most robust’

The ZeroAccess botnet relies on waves of communication between groups of infected computers, instead of being controlled by a few servers.

This allows cyber criminals to control the botnet remotely from a range of computers, making it difficult to tackle.

According to Microsoft, more than 800,000 ZeroAccess-infected computers were active on the internet on any given day as of October this year.

“Due to its botnet architecture, ZeroAccess is one of the most robust and durable botnets in operation today and was built to be resilient to disruption efforts,” Microsoft said.

However, the firm said its latest action is “expected to significantly disrupt the botnet’s operation, increasing the cost and risk for cyber criminals to continue doing business and preventing victims’ computers from committing fraudulent schemes”.

Microsoft said its Digital Crimes Unit collaborated with the US Federal Bureau of Investigation (FBI) and Europol’s European Cybercrime Centre (EC3) to disrupt the operations.

Earlier this year, security firm Symantec said it had disabled nearly 500,000 computers infected by ZeroAccess and taken them out of the botnet.

Source: BBC

Scientist-developed malware covertly jumps air gaps using inaudible sound

Tuesday, December 3rd, 2013

Malware communicates at a distance of 65 feet using built-in mics and speakers.

Computer scientists have developed a malware prototype that uses inaudible audio signals to communicate, a capability that allows the malware to covertly transmit keystrokes and other sensitive data even when infected machines have no network connection.

The proof-of-concept software—or malicious trojans that adopt the same high-frequency communication methods—could prove especially adept in penetrating highly sensitive environments that routinely place an “air gap” between computers and the outside world. Using nothing more than the built-in microphones and speakers of standard computers, the researchers were able to transmit passwords and other small amounts of data from distances of almost 65 feet. The software can transfer data at much greater distances by employing an acoustical mesh network made up of attacker-controlled devices that repeat the audio signals.

The researchers, from Germany’s Fraunhofer Institute for Communication, Information Processing, and Ergonomics, recently disclosed their findings in a paper published in the Journal of Communications. It came a few weeks after a security researcher said his computers were infected with a mysterious piece of malware that used high-frequency transmissions to jump air gaps. The new research neither confirms nor disproves Dragos Ruiu’s claims of the so-called badBIOS infections, but it does show that high-frequency networking is easily within the grasp of today’s malware.

“In our article, we describe how the complete concept of air gaps can be considered obsolete as commonly available laptops can communicate over their internal speakers and microphones and even form a covert acoustical mesh network,” one of the authors, Michael Hanspach, wrote in an e-mail. “Over this covert network, information can travel over multiple hops of infected nodes, connecting completely isolated computing systems and networks (e.g. the internet) to each other. We also propose some countermeasures against participation in a covert network.”

The researchers developed several ways to use inaudible sounds to transmit data between two Lenovo T400 laptops using only their built-in microphones and speakers. The most effective technique relied on software originally developed to acoustically transmit data under water. Created by the Research Department for Underwater Acoustics and Geophysics in Germany, the so-called adaptive communication system (ACS) modem was able to transmit data between laptops as much as 19.7 meters (64.6 feet) apart. By chaining additional devices that pick up the signal and repeat it to other nearby devices, the mesh network can overcome much greater distances.

The ACS modem provided better reliability than other techniques that were also able to use only the laptops’ speakers and microphones to communicate. Still, it came with one significant drawback—a transmission rate of about 20 bits per second, a tiny fraction of standard network connections. The paltry bandwidth forecloses the ability of transmitting video or any other kinds of data with large file sizes. The researchers said attackers could overcome that shortcoming by equipping the trojan with functions that transmit only certain types of data, such as login credentials captured from a keylogger or a memory dumper.

“This small bandwidth might actually be enough to transfer critical information (such as keystrokes),” Hanspach wrote. “You don’t even have to think about all keystrokes. If you have a keylogger that is able to recognize authentication materials, it may only occasionally forward these detected passwords over the network, leading to a very stealthy state of the network. And you could forward any small-sized information such as private encryption keys or maybe malicious commands to an infected piece of construction.”
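
To make the idea concrete, the toy sketch below encodes a handful of bits as short bursts of two near-ultrasonic tones and writes them to a WAV file. It is not the Fraunhofer ACS modem — the carrier frequencies and the crude two-tone keying are assumptions for illustration — but at 20 bits per second it matches the order of magnitude the researchers report.

import math
import struct
import wave

SAMPLE_RATE = 48000
BIT_SECONDS = 0.05                        # 20 bits per second
FREQ = {"0": 17500.0, "1": 18500.0}       # assumed carriers near the edge of human hearing

def tone(freq, seconds):
    n = int(SAMPLE_RATE * seconds)
    return [math.sin(2 * math.pi * freq * i / SAMPLE_RATE) for i in range(n)]

def encode(bits, path="covert.wav"):
    """Write each bit as a burst of one of two high-frequency tones."""
    samples = [s for b in bits for s in tone(FREQ[b], BIT_SECONDS)]
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)                 # 16-bit samples
        w.setframerate(SAMPLE_RATE)
        w.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))

encode("1010011010")                      # ten bits -> half a second of (mostly inaudible) audio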

Remember Flame?

The hurdles of implementing covert acoustical networking are high enough that few malware developers are likely to add it to their offerings anytime soon. Still, the requirements are modest when measured against the capabilities of Stuxnet, Flame, and other state-sponsored malware discovered in the past 18 months. And that means that engineers in military organizations, nuclear power plants, and other truly high-security environments should no longer assume that computers isolated from an Ethernet or Wi-Fi connection are off limits.

The research paper suggests several countermeasures that potential targets can adopt. One approach is simply switching off audio input and output devices, although few hardware designs available today make this most obvious countermeasure easy. A second approach is to employ audio filtering that blocks high-frequency ranges used to covertly transmit data. Devices running Linux can do this by using the advanced Linux Sound Architecture in combination with the Linux Audio Developer’s Simple Plugin API. Similar approaches are probably available for Windows and Mac OS X computers as well. The researchers also proposed the use of an audio intrusion detection guard, a device that would “forward audio input and output signals to their destination and simultaneously store them inside the guard’s internal state, where they are subject to further analyses.”

Source:  arstechnica.com

N.S.A. may have hit Internet companies at a weak spot

Tuesday, November 26th, 2013

The recent revelation that the National Security Agency was able to eavesdrop on the communications of Google and Yahoo users without breaking into either company’s data centers sounded like something pulled from a Robert Ludlum spy thriller.

How on earth, the companies asked, did the N.S.A. get their data without them knowing about it?

The most likely answer is a modern spin on a century-old eavesdropping tradition.

People knowledgeable about Google and Yahoo’s infrastructure say they believe that government spies bypassed the big Internet companies and hit them at a weak spot — the fiber-optic cables that connect data centers around the world that are owned by companies like Verizon Communications, the BT Group, the Vodafone Group and Level 3 Communications. In particular, fingers have been pointed at Level 3, the world’s largest so-called Internet backbone provider, whose cables are used by Google and Yahoo.

The Internet companies’ data centers are locked down with full-time security and state-of-the-art surveillance, including heat sensors and iris scanners. But between the data centers — on Level 3’s fiber-optic cables that connected those massive computer farms — information was unencrypted and an easier target for government intercept efforts, according to three people with knowledge of Google’s and Yahoo’s systems who spoke on the condition of anonymity.

It is impossible to say for certain how the N.S.A. managed to get Google and Yahoo’s data without the companies’ knowledge. But both companies, in response to concerns over those vulnerabilities, recently said they were now encrypting data that runs on the cables between their data centers. Microsoft is considering a similar move.

“Everyone was so focused on the N.S.A. secretly getting access to the front door that there was an assumption they weren’t going behind the companies’ backs and tapping data through the back door, too,” said Kevin Werbach, an associate professor at the Wharton School.

Data transmission lines have a long history of being tapped.

As far back as the days of the telegraph, spy agencies have located their operations in proximity to communications companies. Indeed, before the advent of the Internet, the N.S.A. and its predecessors for decades operated listening posts next to the long-distance lines of phone companies to monitor all international voice traffic.

Beginning in the 1960s, a spy operation code-named Echelon targeted the Soviet Union and its allies’ voice, fax and data traffic via satellite, microwave and fiber-optic cables.

In the 1990s, the emergence of the Internet both complicated the task of the intelligence agencies and presented powerful new spying opportunities based on the ability to process vast amounts of computer data.

In 2002, John M. Poindexter, former national security adviser under President Ronald Reagan, proposed the Total Information Awareness plan, an effort to scan the world’s electronic information — including phone calls, emails and financial and travel records. That effort was scrapped in 2003 after a public outcry over potential privacy violations.

The technologies Mr. Poindexter proposed are similar to what became reality years later in N.S.A. surveillance programs like Prism and Bullrun.

The Internet effectively mingled domestic and international communications, erasing the bright line that had been erected to protect against domestic surveillance. Although the Internet is designed to be a highly decentralized system, in practice a small group of backbone providers carry almost all of the network’s data.

The consequences of that centralization, and its value for surveillance, were revealed in 2006 by Mark Klein, an AT&T technician who described an N.S.A. listening post inside a room at an AT&T switching facility.

The agency was capturing a copy of all the data passing over the telecommunications links and then filtering it in AT&T facilities that housed systems capable of processing data packets at high speed.

Documents taken by Edward J. Snowden and reported by The Washington Post indicate that, seven years after Mr. Klein first described the N.S.A.’s surveillance technologies, they have been refined and modernized.

“From Echelon to Total Information Awareness to Prism, all these programs have gone under different names, but in essence do the same thing,” said Chip Pitts, a law lecturer at Stanford University School of Law.

Based in the Denver suburbs, Level 3 is not a household name like Verizon or AT&T, but in terms of its ability to carry traffic, it is bigger than the other two carriers combined. Its networking equipment is found in 200 data centers in the United States, more than 100 centers in Europe and 14 in Latin America.

Level 3 did not directly respond to an inquiry about whether it had given the N.S.A., or the agency’s foreign intelligence partners, access to Google and Yahoo’s data. In a statement, Level 3 said: “It is our policy and our practice to comply with laws in every country where we operate, and to provide government agencies access to customer data only when we are compelled to do so by the laws in the country where the data is located.”

Also, in a financial filing, Level 3 noted that, “We are party to an agreement with the U.S. Departments of Homeland Security, Justice and Defense addressing the U.S. government’s national security and law enforcement concerns. This agreement imposes significant requirements on us related to information storage and management; traffic management; physical, logical and network security arrangements; personnel screening and training; and other matters.”

Security experts say that regardless of whether Level 3’s participation is voluntary or not, recent N.S.A. disclosures make clear that even when Internet giants like Google and Yahoo do not hand over data, the N.S.A. and its intelligence partners can simply gather their data downstream.

That much was true last summer when United States authorities first began tracking Mr. Snowden’s movements after he left Hawaii for Hong Kong with thousands of classified documents. In May, authorities contacted Ladar Levison, who ran Lavabit, Mr. Snowden’s email provider, to install a tap on Mr. Snowden’s email account. When Mr. Levison did not move quickly enough to facilitate the tap on Lavabit’s network, the Federal Bureau of Investigation did so without him.

Mr. Levison said it was unclear how that tap was installed, whether through Level 3, which sold bandwidth to Lavabit, or at the Dallas facility where his servers and networking equipment are stored. When Mr. Levison asked the facility’s manager about the tap, he was told the manager could not speak with him. A spokesman for TierPoint, which owns the Dallas facility, did not return a call seeking a comment.

Mr. Pitts said that while working as the chief legal officer at Nokia in the 1990s, he successfully fended off an effort by intelligence agencies to get backdoor access into Nokia’s computer networking equipment.

Nearly 20 years later, Verizon has said that it and other carriers are forced to comply with government requests in every country in which they operate, and are limited in what they can say about their arrangements.

“At the end of the day, if the Justice Department shows up at your door, you have to comply,” Lowell C. McAdam, Verizon’s chief executive, said in an interview in September. “We have gag orders on what we can say and can’t defend ourselves, but we were told they do this with every carrier.”

Source:  nytimes.com

Encrypt everything: Google’s answer to government surveillance is catching on

Thursday, November 21st, 2013

While Microsoft’s busy selling t-shirts and mugs about how Google’s “Scroogling” you, the search giant’s chairman is busy tackling a much bigger problem: How to keep your information secure in a world full of prying eyes and governments willing to drag in data by the bucket load. And according to Google’s Eric Schmidt, the answer is fairly straightforward.

“We can end government censorship in a decade,” Schmidt said Wednesday during a speech in Washington, according to Bloomberg. “The solution to government surveillance is to encrypt everything.”

Google’s certainly putting its SSL certificates where Schmidt’s mouth is, too: In the “Encrypt the Web” scorecard released by the Electronic Frontier Foundation earlier this month, Google was one of the few Internet giants to receive a perfect five out of five score for its encryption efforts. Even basic Google.com searches default to HTTPS encryption these days, and Google goes so far as to encrypt data traveling in-network between its data centers.
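
Whether a given site actually lives up to that sort of scorecard is easy to spot-check. A minimal sketch, assuming nothing more than the Python standard library: request the plain-HTTP URL, see whether it lands on HTTPS, and look for an HTTP Strict Transport Security header (the host name is just an example; results depend entirely on the site’s current configuration).

    # Minimal sketch: does a site push plain-HTTP visitors to HTTPS and
    # advertise HTTP Strict Transport Security? The host is an example only.
    import urllib.request

    def check_https_posture(host):
        # urlopen follows redirects, so geturl() shows where we ended up.
        response = urllib.request.urlopen("http://" + host + "/", timeout=10)
        final_url = response.geturl()
        hsts = response.headers.get("Strict-Transport-Security")
        print(host, "final URL:", final_url)
        print(host, "HSTS header:", hsts if hsts else "absent")

    check_https_posture("www.google.com")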

Spying eyes

That last move didn’t occur in a vacuum, however. Earlier this month, one of the latest Snowden disclosures showed that the National Security Agency’s MUSCULAR program taps the links flowing between Google’s and Yahoo’s internal data centers.

“We have strengthened our systems remarkably as a result of the most recent events,” Schmidt said during the speech. “It’s reasonable to expect that the industry as a whole will continue to strengthen these systems.”

Indeed, Yahoo recently announced plans to encrypt, well, everything in the wake of the recent NSA surveillance revelations. Dropbox, Facebook, Sonic.net, and the SpiderOak cloud storage service also received flawless marks in the EFF’s report.

And the push for ubiquitous encryption recently gained an even more formidable proponent. The Internet Engineering Task Force working group developing HTTP 2.0 announced last week that the next-generation version of the crucial protocol will only work with HTTPS-encrypted URLs.

Yes, all encryption, all the time could very well become the norm on the ‘Net before long. But while that will certainly raise the general level of security and privacy web-wide, don’t think for a minute that HTTPS is a silver bullet against pervasive government surveillance. Yet another Snowden-supplied document, released in September, revealed that the NSA spends more than $250 million a year on efforts to break online encryption techniques.

Source:  csoonline.com

Repeated attacks hijack huge chunks of Internet traffic, researchers warn

Thursday, November 21st, 2013

Man-in-the-middle attacks divert data on scale never before seen in the wild.

Huge chunks of Internet traffic belonging to financial institutions, government agencies, and network service providers have repeatedly been diverted to distant locations under unexplained circumstances that are stoking suspicions the traffic may be surreptitiously monitored or modified before being passed along to its final destination.

Researchers from network intelligence firm Renesys made that sobering assessment in a blog post published Tuesday. Since February, they have observed 38 distinct events in which large blocks of traffic have been improperly redirected to routers at Belarusian or Icelandic service providers. The hacks, which exploit implicit trust placed in the border gateway protocol used to exchange data between large service providers, affected “major financial institutions, governments, and network service providers” in the US, South Korea, Germany, the Czech Republic, Lithuania, Libya, and Iran.

The ease of altering or deleting authorized BGP routes, or of creating new ones, has long been considered a potential Achilles’ heel for the Internet. Indeed, in 2008, YouTube became unreachable for virtually all Internet users after a Pakistani ISP altered a route in a ham-fisted attempt to block the service in just that country. Later that year, researchers at the Defcon hacker conference showed how BGP routes could be manipulated to redirect huge swaths of Internet traffic. By diverting it to unauthorized routers under the control of hackers, they were then free to monitor or tamper with any unencrypted data before sending it on to its intended recipient, with little sign of what had just taken place.

“This year, that potential has become reality,” Renesys researcher Jim Cowie wrote. “We have actually observed live man-in-the-middle (MitM) hijacks on more than 60 days so far this year. About 1,500 individual IP blocks have been hijacked, in events lasting from minutes to days, by attackers working from various countries.”
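
BGP best-path selection weighs many attributes, and the Renesys events involved false routes propagated selectively to peers rather than a single trick, but the core reason a bogus announcement attracts traffic is simple: routers forward along the most specific matching prefix they have heard. The sketch below models only that longest-prefix-match step; the prefixes and addresses are documentation examples, not routes involved in the attacks.

    # Minimal sketch of longest-prefix-match route selection: an attacker who
    # gets a more-specific announcement for someone else's address space
    # accepted will attract that traffic. Prefixes below are examples only.
    import ipaddress

    routing_table = {
        ipaddress.ip_network("203.0.113.0/24"): "legitimate origin",
        ipaddress.ip_network("203.0.113.0/25"): "hijacker's more-specific route",
    }

    def best_route(destination):
        dest = ipaddress.ip_address(destination)
        matches = [net for net in routing_table if dest in net]
        # The most specific (longest) matching prefix wins.
        return routing_table[max(matches, key=lambda net: net.prefixlen)]

    print(best_route("203.0.113.42"))   # -> hijacker's more-specific route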

At least one unidentified voice-over-IP provider has also been targeted. In all, data destined for 150 cities has been intercepted. The attacks are serious because they affect the Internet equivalent of a US interstate highway, carrying data for hundreds of thousands or even millions of people. And unlike the typical BGP glitches that arise from time to time, the attacks observed by Renesys provide few outward signs to users that anything is amiss.

“The recipient, perhaps sitting at home in a pleasant Virginia suburb drinking his morning coffee, has no idea that someone in Minsk has the ability to watch him surf the Web,” Cowie wrote. “Even if he ran his own traceroute to verify connectivity to the world, the paths he’d see would be the usual ones. The reverse path, carrying content back to him from all over the world, has been invisibly tampered with.”

Guadalajara to Washington via Belarus

Renesys observed the first route hijacking in February when various routes across the globe were mysteriously funneled through Belarusian ISP GlobalOneBel before being delivered to their final destination. One trace, traveling from Guadalajara, Mexico, to Washington, DC, normally would have been handed from Mexican provider Alestra to US provider PCCW in Laredo, Texas, and from there to the DC metro area and then, finally, delivered to users through the Qwest/Centurylink service provider. According to Cowie:

Instead, however, PCCW gives it to Level3 (previously Global Crossing), who is advertising a false Belarus route, having heard it from Russia’s TransTelecom, who heard it from their customer, Belarus Telecom. Level3 carries the traffic to London, where it delivers it to Transtelecom, who takes it to Moscow and on to Belarus. Beltelecom has a chance to examine the traffic and then sends it back out on the “clean path” through Russian provider ReTN (recently acquired by Rostelecom). ReTN delivers it to Frankfurt and hands it to NTT, who takes it to New York. Finally, NTT hands it off to Qwest/Centurylink in Washington DC, and the traffic is delivered.

Such redirections occurred on an almost daily basis throughout February, with the set of affected networks changing every 24 hours or so. The diversions stopped in March. When they resumed in May, they used a different customer of Bel Telecom as the source. In all, Renesys researchers saw 21 redirections. Then, also during May, they saw something completely new: a hijack lasting only five minutes diverting traffic to Nyherji hf (also known as AS29689, short for autonomous system 29689), a small provider based in Iceland.

Renesys didn’t see anything more until July 31 when redirections through Iceland began in earnest. When they first resumed, the source was provider Opin Kerfi (AS48685).

Cowie continued:

In fact, this was one of seventeen Icelandic events, spread over the period July 31 to August 19. And Opin Kerfi was not the only Icelandic company that appeared to announce international IP address space: in all, we saw traffic redirections from nine different Icelandic autonomous systems, all customers of (or belonging to) the national incumbent Síminn. Hijacks affected victims in several different countries during these events, following the same pattern: false routes sent to Síminn’s peers in London, leaving ‘clean paths’ to North America to carry the redirected traffic back to its intended destination.

In all, Renesys observed 17 redirections to Iceland. To appreciate how circuitous some of the routes were, consider the case of traffic passing between two locations in Denver: it traveled all the way to Iceland through a series of hops before finally reaching its intended destination.

Cowie said Renesys’ researchers still don’t know who is carrying out the attacks, what their motivation is, or exactly how they’re pulling them off. Members of Icelandic telecommunications company Síminn, which provides Internet backbone services in that country, told Renesys the redirections to Iceland were the result of a software bug and that the problem had gone away once it was patched. They told the researchers they didn’t believe the diversions had a malicious origin.

Cowie said that explanation is “unlikely.” He went on to say that even if it does prove correct, it’s nonetheless highly troubling.

“If this is a bug, it’s a dangerous one, capable of simulating an extremely subtle traffic redirection/interception attack that plays out in multiple episodes, with varying targets, over a period of weeks,” he wrote. “If it’s a bug that can be exploited remotely, it needs to be discussed more widely within the global networking community and eradicated.”

Source:  arstechnica.com

Hackers exploit JBoss vulnerability to compromise servers

Tuesday, November 19th, 2013

Attackers are actively exploiting a known vulnerability to compromise JBoss Java EE application servers that expose the HTTP Invoker service to the Internet in an insecure manner.

At the beginning of October, security researcher Andrea Micalizzi released an exploit for a vulnerability he identified in products from multiple vendors, including Hewlett-Packard, McAfee, Symantec and IBM, that use 4.x and 5.x versions of JBoss. That vulnerability, tracked as CVE-2013-4810, allows unauthenticated attackers to install an arbitrary application on JBoss deployments that expose the EJBInvokerServlet or JMXInvokerServlet.

Micalizzi’s exploit installs a Web shell application called pwn.jsp that can be used to execute shell commands on the operating system via HTTP requests. The commands are executed with the privileges of the OS user running JBoss, which in the case of some JBoss deployments can be a high privileged, administrative user.

Researchers from security firm Imperva have recently detected an increase in attacks against JBoss servers that used Micalizzi’s exploit to install the original pwn.jsp shell, but also a more complex Web shell called JspSpy.

Over 200 sites running on JBoss servers, including some that belong to governments and universities, have been hacked and infected with these Web shell applications, said Barry Shteiman, director of security strategy at Imperva.

The problem is actually bigger because the vulnerability described by Micalizzi stems from insecure default configurations that leave JBoss management interfaces and invokers exposed to unauthenticated attacks, an issue that has been known for years.

In a 2011 presentation about the multiple ways in which unsecured JBoss installations can be attacked, security researchers from Matasano Security estimated, based on a Google search for certain strings, that there were around 7,300 potentially vulnerable servers.

According to Shteiman, the number of JBoss servers with management interfaces exposed to the Internet has more than tripled since then, reaching over 23,000.
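
Checking whether a given deployment is part of that population is straightforward. The sketch below, assuming only the Python standard library, requests the two invoker paths that JBoss 4.x and 5.x typically expose and reports whether they answer without authentication; it should only be pointed at servers you are authorized to test.

    # Minimal sketch: are the JBoss invoker servlets reachable without
    # authentication? Run only against servers you are authorized to test.
    import urllib.error
    import urllib.request

    INVOKER_PATHS = ["/invoker/JMXInvokerServlet", "/invoker/EJBInvokerServlet"]

    def check_jboss_invokers(base_url):
        for path in INVOKER_PATHS:
            url = base_url.rstrip("/") + path
            try:
                response = urllib.request.urlopen(url, timeout=10)
                print(url, "-> HTTP", response.status, "(exposed, no authentication)")
            except urllib.error.HTTPError as err:
                # 401/403 suggest authentication is enforced; 404 means not deployed.
                print(url, "-> HTTP", err.code)
            except urllib.error.URLError as err:
                print(url, "-> unreachable:", err.reason)

    check_jboss_invokers("http://jboss.example.com:8080")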

One reason for this increase is probably that people have not fully understood the risks associated with this issue when it was discussed in the past and continue to deploy insecure JBoss installations, Shteiman said. Also, some vendors ship products with insecure JBoss configurations, like the products vulnerable to Micalizzi’s exploit, he said.

Products vulnerable to CVE-2013-4810 include McAfee Web Reporter 5.2.1, HP ProCurve Manager (PCM) 3.20 and 4.0, HP PCM+ 3.20 and 4.0, HP Identity Driven Manager (IDM) 4.0, Symantec Workspace Streaming 7.5.0.493 and IBM TRIRIGA. However, products from other vendors that have not yet been identified could also be vulnerable.

JBoss is developed by Red Hat and was recently renamed to WildFly. Its latest stable version is 7.1.1, but according to Shteiman many organizations still use JBoss 4.x and 5.x for compatibility reasons as they need to run old applications developed for those versions.

Those organizations should follow the instructions for securing their JBoss installations that are available on the JBoss Community website, he said.

IBM also provided information on securing the JMX Console and the EJBInvoker in response to Micalizzi’s exploit.

The Red Hat Security Response Team said that while CVE-2013-4810 refers to the exposure of unauthenticated JMXInvokerServlet and EJBInvokerServlet interfaces on HP ProCurve Manager, “These servlets are also exposed without authentication by default on older unsupported community releases of JBoss AS (WildFly) 4.x and 5.x. All supported Red Hat JBoss products that include the JMXInvokerServlet and EJBInvokerServlet interfaces apply authentication by default, and are not affected by this issue. Newer community releases of JBoss AS (WildFly) 7.x are also not affected by this issue.”

Like Shteiman, Red Hat advised users of older JBoss AS releases to follow the instructions available on the JBoss website in order to apply authentication to the invoker servlet interfaces.

The Red Hat security team has also been aware of this issue affecting certain versions of the JBoss Enterprise Application Platform, Web Platform and BRMS Platform since 2012 when it tracked the vulnerability as CVE-2012-0874. The issue has been addressed and current versions of JBoss Enterprise Platforms based on JBoss AS 4.x and 5.x are no longer vulnerable, the team said.

Source:  computerworld.com

Internet architects propose encrypting all the world’s Web traffic

Thursday, November 14th, 2013

A vastly larger percentage of the world’s Web traffic will be encrypted under a near-final recommendation to revise the Hypertext Transfer Protocol (HTTP) that serves as the foundation for all communications between websites and end users.

The proposal, announced in a letter published Wednesday by an official with the Internet Engineering Task Force (IETF), comes after documents leaked by former National Security Agency contractor Edward Snowden heightened concerns about government surveillance of Internet communications. Despite those concerns, websites operated by Yahoo, the federal government, the site running this article, and others continue to publish the majority of their pages in a “plaintext” format that can be read by government spies or anyone else who has access to the network the traffic passes over. Last week, cryptographer and security expert Bruce Schneier urged people to “make surveillance expensive again” by encrypting as much Internet data as possible.

The HTTPbis Working Group, the IETF body charged with designing the next-generation HTTP 2.0 specification, is proposing that encryption be the default way data is transferred over the “open Internet.” A growing number of groups participating in the standards-making process—particularly those who develop Web browsers—support the move, although as is typical in technical deliberations, there’s debate about how best to implement the changes.

“There seems to be strong consensus to increase the use of encryption on the Web, but there is less agreement about how to go about this,” Mark Nottingham, chair of the HTTPbis working group, wrote in Wednesday’s letter. (HTTPbis roughly translates to “HTTP again.”)

He went on to lay out three implementation proposals and describe their pros and cons:

A. Opportunistic encryption for http:// URIs without server authentication—aka “TLS Relaxed” as per draft-nottingham-http2-encryption.

B. Opportunistic encryption for http:// URIs with server authentication—the same mechanism, but not “relaxed,” along with some form of downgrade protection.

C. HTTP/2 to only be used with https:// URIs on the “open” Internet. http:// URIs would continue to use HTTP/1 (and of course it would still be possible for older HTTP/1 clients to still interoperate with https:// URIs).

In subsequent discussion, there seems to be agreement that (C) is preferable to (B), since it is more straightforward; no new mechanism needs to be specified, and HSTS can be used for downgrade protection.

(C) also has this advantage over (A) and furthermore provides stronger protection against active attacks.

The strongest objections against (A) seemed to be about creating confusion about security and discouraging use of “full” TLS, whereas those against (C) were about limiting deployment of better security.

Keen observers have noted that we can deploy (C) and judge adoption of the new protocol, later adding (A) if necessary. The reverse is not necessarily true.

Furthermore, in discussions with browser vendors (who have been among those most strongly advocating more use of encryption), there seems to be good support for (C), whereas there’s still a fair amount of doubt/disagreement regarding (A).

Pros, cons, and carrots

As Nottingham acknowledged, there are major advantages and disadvantages for each option. Proposal A would be easier for websites to implement because it wouldn’t require them to authenticate their servers using a digital certificate that is recognized by all the major browsers. This relaxation of current HTTPS requirements would eliminate a hurdle that stops many websites from encrypting traffic now, but it also comes at a cost. The lack of authentication could make it trivial for the person at an Internet cafe or the spy monitoring Internet backbones to create a fraudulent digital certificate that impersonates websites using this form of relaxed transport layer security (TLS). That risk calls into question whether the weakened measure is worth the hassle of implementing.

Proposal B, by contrast, would make it much harder for attackers, since HTTP 2.0 traffic by default would be both encrypted and authenticated. But the increased cost and effort required by millions of websites may stymie the adoption of the new specification, which in addition to encryption offers improvements such as increased header compression and asynchronous connection multiplexing.

Proposal C seems to resolve the tension between the other two options by moving in a different direction altogether—that is, by implementing HTTP 2.0 only in full-blown HTTPS traffic. This approach attempts to use the many improvements of the new standard as a carrot that gives websites an incentive to protect their traffic with traditional HTTPS encryption.

The options that the working group is considering do a fair job of mapping the current debate over Web-based encryption. A common argument is that more sites can and should encrypt all or at least most of their traffic. Even better is when sites provide this encryption while at the same time providing strong cryptographic assurances that the server hosting the website is the one operated by the domain-name holder listed in the address bar—rather than by an attacker who is tampering with the connection.

Unfortunately, the proposals pass over an important position in the debate over Web encryption: the viability of the current TLS and secure sockets layer (SSL) protocols that underpin all HTTPS traffic. With major browsers recognizing more than 500 certificate authorities located all over the world, the compromise of just one of them is enough for the entire system to fail (although certificate pinning in some cases helps contain the damage). There’s nothing in Nottingham’s letter indicating that this single point of failure will be addressed. The current HTTPS system also has serious privacy implications for end users, since certificate authorities can log huge numbers of requests for SSL-protected websites and map them to individual IP addresses; that, too, is unaddressed.
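
Certificate pinning, the damage-containment measure mentioned above, is worth illustrating because it sidesteps the crowd of certificate authorities entirely: a client compares the certificate a server presents against a fingerprint it recorded earlier. The sketch below pins the hash of the whole certificate for simplicity (real deployments more often pin the public key), and the pinned value and host name are placeholders.

    # Minimal sketch of certificate pinning: compare the presented certificate's
    # SHA-256 fingerprint against a previously recorded value instead of trusting
    # any certificate authority. The pinned value and host are placeholders.
    import hashlib
    import ssl

    PINNED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

    def certificate_fingerprint(host, port=443):
        pem = ssl.get_server_certificate((host, port))
        der = ssl.PEM_cert_to_DER_cert(pem)
        return hashlib.sha256(der).hexdigest()

    fingerprint = certificate_fingerprint("www.example.com")
    print("presented:", fingerprint)
    print("match" if fingerprint == PINNED_SHA256 else "MISMATCH - possible interception")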

It’s unfortunate that the letter didn’t propose alternatives to the largely broken TLS system, such as the one dubbed Trust Assertions for Certificate Keys, which was conceived by researchers Moxie Marlinspike and Trevor Perrin. Then again, as things are now, the engineers in the HTTPbis Working Group are likely managing as much controversy as they can. Adding an entirely new way to encrypt Web traffic to an already sprawling list of considerations would probably prove to be too much.

Source:  arstechnica.com