Archive for the ‘Security’ Category

Heartbleed Affecting Wireless Users

Monday, June 2nd, 2014

New Vulnerability in Wireless Devices

[Image: Cupid Heartbleed logo]

Per Ars Technica, a new vulnerability has been detected in many wireless vendors’ implementations of enterprise-grade wireless security, and it affects wireless users. We are working with all of our vendors to find out (1) which are affected, and (2) what needs to be done to mitigate this risk. Please stay tuned!

IT Consulting Case Studies: Microsoft SharePoint Server for CMS

Friday, February 14th, 2014

Gyver Networks recently designed and deployed a Microsoft SharePoint Server infrastructure for a financial consulting firm servicing banks and depository institutions with assets in excess of $200 billion.

Challenge:  A company specializing in regulatory compliance audits for financial institutions found themselves inundated by documents submitted via inconsistent workflow processes, raising concerns regarding security and content management as they continued to expand.

With many such projects running concurrently, keeping up with the back-and-forth flow of multiple versions of the same documents became increasingly difficult.  Further complicating matters, the submission process consisted of clients sending email attachments or uploading files to a company FTP server, then emailing to let staff know something was sent.  Other areas of concern included:

  • Security of submitted financial data in transit and at rest, as defined in SSAE 16 and 201 CMR 17.00, among other standards and regulations
  • Secure, customized, compartmentalized client access
  • Advanced user management
  • Internal and external collaboration (multiple users working on the same documents simultaneously)
  • Change and version tracking
  • Comprehensive search capabilities
  • Client alerts, access to project updates and timelines, and feedback

Resolution: Gyver Networks proposed a Microsoft SharePoint Server environment as the ideal enterprise content management system (CMS) to replace their existing processes.  Once the environment was deployed, existing archives and client profiles were migrated into the SharePoint infrastructure designed for each respective client; the transition was seamless, and the company was fully operational and ready to go live.

Now, instead of an insecure and confusing combination of emails, FTP submissions, and cloud-hosted, third-party management software, they are able to host their own secure, all-in-one CMS on premises, including:

  • 256-bit encryption of data in transit and at rest
  • Distinct SharePoint sites and logins for each client, with customizable access permissions and retention policies for subsites and libraries
  • Advanced collaboration features, with document checkout, change review and approval, and workflows
  • Metadata options so users can find what they’re searching for instantly
  • Client-customized email alerts, views, reporting, timelines, and the ability to submit requests and feedback directly through the SharePoint portal

The end result?  Clients of this company are thrilled to have a comprehensive content management system that not only saves them time and provides secure submission and archiving, but also offers enhanced project oversight and advanced-metric reporting capabilities.

The consulting firm itself experienced an immediate increase in productivity, efficiency, and client retention rates; they are in full compliance with all regulations and standards governing security and privacy; and they are now prepared for future expansion with a scalable enterprise CMS solution that can grow as they do.

Contact Gyver Networks today to learn more about what Microsoft SharePoint Server can do for your organization.  Whether you require a simple standalone installation or a more complex hybrid SharePoint Server farm, we can assist you with planning, deployment, administration, and troubleshooting to ensure you get the most out of your investment.

IE 10 zero-day attack targets US military

Friday, February 14th, 2014

FireEye, a security research firm, has identified a targeted and sophisticated attack which they believe to be aimed at US military personnel. FireEye calls this specific attack Operation SnowMan. The attack was staged from the web site of the U.S. Veterans of Foreign Wars, which the attackers had compromised. Pages from the site were modified to include code (in an IFRAME) which exploited an unpatched vulnerability in Internet Explorer 10 on systems which also have Adobe Flash Player.

The actual vulnerability is in Internet Explorer 10, but it relies on a malicious Flash object and a callback from that Flash object to the vulnerability trigger in JavaScript. FireEye says they are in touch with Microsoft about the vulnerability.

The attack checks to make sure it is running on IE10 and that the user is not running the Microsoft Enhanced Mitigation Experience Toolkit (EMET), a tool which can help to harden applications against attack. So running another version of IE, including IE11, or installing EMET would protect against this attack.

The attack was first identified on February 11. FireEye believes that it was placed on the VFW site in order to be found by US military personnel, and that the attack was timed to coincide with a long holiday weekend and the major snowstorm which struck the eastern United States this week, including the Washington DC region.

FireEye also presents evidence that the attack comes from the same group of attackers they have identified in previous sophisticated, high-value attacks, specifically Operation DeputyDog and Operation Ephemeral Hydra. They reach this conclusion by analyzing the techniques used. They say that this group has, in the past, attacked U.S. government entities, Japanese firms, defense industrial base (DIB) companies, law firms, information technology (IT) companies, mining companies and non-governmental organizations (NGOs).

Source:  zdnet.com

Building control systems can be pathway to Target-like attack

Tuesday, February 11th, 2014

Credentials stolen from automation and control providers were used in Target hack

Companies should review carefully the network access given to third-party engineers monitoring building control systems to avoid a Target-like attack, experts say.

Security related to providers of building automation and control systems was in the spotlight this week after the security blog KrebsOnSecurity reported that credentials stolen from Fazio Mechanical Services, based in Sharpsburg, Penn., were used by hackers who late last year snatched 40 million debit- and credit-card numbers from Target’s electronic cash registers, called point-of-sale (POS) systems.

The blog initially identified Fazio as a provider of refrigeration and heating, ventilation and air conditioning (HVAC) systems. The report sparked a discussion in security circles on how such a subcontractor’s credentials could provide access to areas of the retailer’s network Fazio would not need.

On Thursday, Fazio released a statement saying it does not monitor or control Target’s HVAC systems, according to KrebsOnSecurity. Instead, it remotely handles “electronic billing, contract submission and project management” for the retailer.

Given the nature of its work, it is certainly possible that Fazio had access to Target business applications that could be tied to POS systems. However, interviews with experts before Fazio’s clarification found that subcontractors monitoring and maintaining HVAC and other building systems remotely often have too much access to corporate networks.

“Generally what happens is some new business service needs network access, so, if there’s time pressure, it may be placed on an existing network, (without) thinking through all the security implications,” Dwayne Melancon, chief technology officer for data security company Tripwire, said.

Most building systems, such as HVAC, are Internet-enabled so maintenance companies can monitor them remotely. Use of the Shodan search engine for Internet-enabled devices can reveal thousands of systems ranging from building automation to crematoriums with weak login credentials, researchers have found.

Using homegrown technology, Billy Rios, director of threat intelligence for vulnerability management company Qualys, found on the Internet a building control system for Target’s Minneapolis-based headquarters.

While the system is connected to an internal network, Rios could not determine whether it’s a corporate network without hacking the system, which would be illegal.

“We know that we could probably exploit it, but what we don’t know is what purpose it’s serving,” he said. “It could control energy, it could control HVAC, it could control lighting or it could be for access control. We’re not sure.”

If the Web interface of such systems is on a corporate network, then some important security measures need to be taken.

All data traffic moving to and from the server should be closely monitored. To do their job, building engineers need to access only a few systems. Monitoring software should flag traffic going anywhere else immediately.

“Workstations in your HR (human resources) department should probably not be talking to your refrigeration devices,” Rios said. “Seeing high spikes in traffic from embedded devices on your corporate network is also an indication that something is wrong.”

In addition, companies should know the IP addresses used by subcontractors in accessing systems. Unrecognized addresses should be automatically blocked.
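
As a rough illustration of that last recommendation, the sketch below (Python; the address ranges, hostnames, and log entries are all hypothetical) checks remote-access sources against an allowlist of known subcontractor networks and flags anything else for blocking:

    # Minimal sketch: flag remote-access source IPs that are not on a
    # subcontractor allowlist. Address ranges and log entries are made up.
    import ipaddress

    # Networks the building-systems vendor is known to connect from (hypothetical).
    ALLOWED_NETWORKS = [
        ipaddress.ip_network("203.0.113.0/28"),
        ipaddress.ip_network("198.51.100.32/27"),
    ]

    def is_allowed(source_ip):
        """Return True if the source address falls inside an allowed network."""
        addr = ipaddress.ip_address(source_ip)
        return any(addr in net for net in ALLOWED_NETWORKS)

    # Example access log: (source IP, target system) pairs, also hypothetical.
    access_log = [
        ("203.0.113.5", "hvac-controller-01"),
        ("192.0.2.77", "hvac-controller-01"),   # not on the allowlist
    ]

    for source, target in access_log:
        if not is_allowed(source):
            print("ALERT: block and investigate", source, "->", target)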

Better password management is also a way to prevent a cyberattack. In general, a subcontractor’s employees will share the same credentials to access a customer’s systems. Those credentials are seldom changed, even when an employee leaves the company.

“That’s why it’s doubly important to make sure those accounts and systems have very restricted access, so you can’t use that technician login to do other things on the network,” Melancon said.

Every company should do a thorough review of their networks to identify every building system. “Understanding where these systems are is the first step,” Rios said.

Discovery should be followed by an evaluation of the security around those systems that are on the Internet.

Source:  csoonline.com

Huge hack ‘ugly sign of future’ for internet threats

Tuesday, February 11th, 2014

A massive attack that exploited a key vulnerability in the infrastructure of the internet is the “start of ugly things to come”, it has been warned.

Online security specialists Cloudflare said it recorded the “biggest” attack of its kind on Monday.

Hackers used weaknesses in the Network Time Protocol (NTP), a system used to synchronise computer clocks, to flood servers with huge amounts of data.

The technique could potentially be used to force popular services offline.

Several experts had predicted that the NTP would be used for malicious purposes.

The target of this latest onslaught is unknown, but it was directed at servers in Europe, Cloudflare said.

Attackers used a well-known method for bringing down a system, known as a Denial of Service (DoS) attack, in which huge amounts of data are forced on a target, causing it to fall over.

Cloudflare chief executive Matthew Prince said his firm had measured the “very big” attack at about 400 gigabits per second (Gbps), 100Gbps larger than an attack on anti-spam service Spamhaus last year.

Predicted attack

In a report published three months ago, Cloudflare warned that attacks on the NTP were on the horizon and gave details of how web hosts could best try to protect their customers.

NTP servers, of which there are thousands around the world, are designed to keep computers synchronised to the same time.

NTP has been in operation since 1985. While there have been changes to the system since then, it still operates in much the same way.

A computer needing to synchronise time with the NTP will send a small amount of data to make the request. The NTP will then reply by sending data back.

The vulnerability lies with two weaknesses. Firstly, the amount of data the NTP sends back is bigger than the amount it receives, meaning an attack is instantly amplified.

Secondly, the original computer’s location can be “spoofed”, tricking the NTP into sending the information back to somewhere else.

In this attack, it is likely that many machines were used to make requests to the NTP. Hackers spoofed their location so that the massive amounts of data from the NTP were diverted to a single target.
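
The arithmetic behind such amplification is simple. Here is a minimal back-of-the-envelope sketch in Python; the bandwidth and amplification figures are illustrative assumptions, not measurements from this incident:

    # Back-of-the-envelope sketch of reflection/amplification arithmetic.
    # The numbers below are illustrative assumptions, not measured values.

    def amplified_traffic_gbps(attacker_bandwidth_gbps, amplification_factor):
        """Traffic arriving at the victim, assuming every spoofed request is
        answered and every response reaches the target."""
        return attacker_bandwidth_gbps * amplification_factor

    attacker_bw = 7.0      # Gbps of spoofed requests (assumed)
    amplification = 58.0   # response bytes / request bytes (assumed)

    print("~%.0f Gbps at the victim" % amplified_traffic_gbps(attacker_bw, amplification))
    # ~406 Gbps -- the same order of magnitude as the 400 Gbps attack described above.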

“Amplification attacks like that result in an attacker turning a small amount of bandwidth coming from a small number of machines into a massive traffic load hitting a victim from around the internet,” Cloudflare explained in a blog outlining the vulnerability, posted last month.

‘Ugly future’

The NTP is one of several protocols used within the infrastructure of the internet to keep things running smoothly.

Unfortunately, despite being vital components, most of these protocols were designed and implemented at a time when the prospect of malicious activity was not considered.

“A lot of these protocols are essential, but they’re not secure,” explained Prof Alan Woodward, an independent cyber-security consultant, who had also raised concerns over NTP last year.

“All you can really do is try and mitigate the denial of service attacks. There are technologies around to do it.”

Most effective, Prof Woodward suggested, was technology that was able to spot when a large amount of data was heading for one destination – and shutting off the connection.

Cloudflare’s Mr Prince said that while his firm had been able to mitigate the attack, it was a worrying sign for the future.

“Someone’s got a big, new cannon,” he tweeted. “Start of ugly things to come.”

Source:  BBC

Change your passwords: Comcast hushes, minimizes serious hack

Tuesday, February 11th, 2014

Are you a Comcast customer? Please change your password.

On February 6, NullCrew FTS hacked into at least 34 of Comcast’s servers and published a list of the company’s mail servers and a link to the root file with the vulnerability it used to penetrate the system on Pastebin.

Comcast, the largest internet service provider in the United States, ignored news of the serious breach in press and media for over 24 hours — only when the Pastebin page was removed did the company issue a statement, and even then, it only spoke to a sympathetic B2B outlet.

During that 24 hours, Comcast stayed silent, and the veritable “keys to the kingdom” sat out in the open internet, ripe for the taking by any malicious entity with a little know-how around mail servers and selling or exploiting customer data.

Comcast customers have not been told to reset their passwords. But they should.

Once NullCrew FTS openly hacked at least 34 Comcast mail servers, and the recipe was publicly posted, the servers began to take a beating. Customers in Comcast’s janky, hard-to-find, 1996-style forums knew something was wrong, and forum posts reflected the slowness, the up-and-down servers, and the eventual crashing.

The telecom giant ignored press requests for comment and released a limited statement on February 7 — to Comcast-friendly outlet, broadband and B2B website Multichannel News.

The day-late statement failed to impress the few who saw it, and was criticized for its minimizing language and weak attempt to suggest that the breach had been unsuccessful.

From Comcast’s statement on Multichannel’s post No Evidence That Personal Sub Info Obtained By Mail Server Hack:

Comcast said it is investigating a claim by a hacker group that claims to have broken into a batch of the MSO email servers, but believes that no personal subscriber data was obtained as a result.

“We’re aware of the situation and are aggressively investigating it,” a Comcast spokesman said. “We take our customers’ privacy and security very seriously, and we currently have no evidence to suggest any personal customer information was obtained in this incident.”

Not only is there a high probability that customer information was exposed — because direct access was provided to the public for 24 hours — but the vulnerability exploited by the attackers was disclosed and fixed in December 2013.

Just not by Comcast, apparently.

Vulnerability reported December 2013, not patched by Comcast

NullCrew FTS used the unpatched security vulnerability CVE-2013-7091 to open what was essentially an unlocked door, giving anyone access to usernames, passwords, and other sensitive details from Comcast’s servers.

NullCrew FTS used a Local File Inclusion (LFI) exploit to gain access to the Zimbra LDAP and MySQL database — which houses the usernames and passwords of Comcast ISP users.

“Fun Fact: 34 Comcast mail servers are victims to one exploit,” tweeted NullCrew FTS.

If you are a Comcast customer, you are at risk: All Comcast internet service includes a master email address.

Even if a customer doesn’t use Comcast’s Xfinity mail service, every Comcast ISP user has a master email account with which to manage their services, and it is accessible through a “Zimbra” webmail site.

This account is used to access payment information, email settings, user account creation and settings, and any purchases from Comcast’s store or among its services.

With access to this master email address, someone can give up to six “household members” access to the Comcast account.

NullCrew taunted Comcast on Twitter, then posted the data on Pastebin and taunted the company a little bit more.

Because there were “no passwords” on the Pastebin, some observers believed — incorrectly — that there was no serious risk for exploitation of sensitive customer information.

NullCrew FTS: 2 — big telecoms: 0

On the first weekend of February 2014, NullCrew FTS took credit for a valid hack against telecom provider Bell Canada.

In the first strike of what looks like it’ll be a very successful campaign to cause pain and humiliation to big telecoms, NullCrew FTS accessed and exposed more than 22,000 usernames and passwords, and some credit card numbers belonging to the phone company’s small business customers.

Establishing a signature game of cat and mouse with clueless support staff, NullCrew FTS contacted Bell customer support two weeks before its disclosure.

Like Comcast’s robotic customer service responses to NullCrew FTS on Twitter, Bell’s support staff either didn’t know how to report the security incident upstream, had no idea what a hacking event was, or didn’t take the threat seriously.

Bell also tried to play fast and loose with its accountability in the security smash and grab; it acknowledged the breach soon after, but blamed it on an Ottawa-based third-party supplier.

However, NullCrew FTS announced the company’s insecurities in mid January with a public warning that the hackers had issued to a company support representative about the vulnerabilities.

NullCrew FTS followed up with Bell by posting a Pastebin link on Twitter with unredacted data.

Excerpt from zdnet.com

Cisco promises to fix admin backdoor in some routers

Monday, January 13th, 2014

Cisco Systems promised to issue firmware updates removing a backdoor from a wireless access point and two of its routers later this month. The undocumented feature could allow unauthenticated remote attackers to gain administrative access to the devices.

The vulnerability was discovered over the Christmas holiday on a Linksys WAG200G router by a security researcher named Eloi Vanderbeken. He found that the device had a service listening on port 32764 TCP, and that connecting to it allowed a remote user to send unauthenticated commands to the device and reset the administrative password.

It was later reported by other users that the same backdoor was present in multiple devices from Cisco, Netgear, Belkin and other manufacturers. On many devices this undocumented interface can only be accessed from the local or wireless network, but on some devices it is also accessible from the Internet.
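
Until firmware updates arrive, one practical step is simply to check whether anything on your own network answers on that port. The sketch below (Python; the subnet is a placeholder for your own LAN) attempts a plain TCP connection to port 32764 on each host and reports any that accept it. It only detects an open port; it does not confirm or exercise the backdoor.

    # Minimal sketch: find hosts on your own LAN with TCP port 32764 open.
    # The subnet below is a placeholder; substitute your actual network.
    import ipaddress
    import socket

    SUBNET = "192.168.1.0/24"   # assumption: a typical home/SOHO LAN
    PORT = 32764                # port used by the undocumented test interface
    TIMEOUT = 0.5               # seconds per host

    for host in ipaddress.ip_network(SUBNET).hosts():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(TIMEOUT)
            if sock.connect_ex((str(host), PORT)) == 0:
                print(host, "is listening on port", PORT, "-- check vendor advisories")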

Cisco identified the vulnerability in its WAP4410N Wireless-N Access Point, WRVS4400N Wireless-N Gigabit Security Router and RVS4000 4-port Gigabit Security Router. The company is no longer responsible for Linksys routers, as it sold that consumer division to Belkin early last year.

The vulnerability is caused by a testing interface that can be accessed from the LAN side on the WRVS4400N and RVS4000 routers and also the wireless network on the WAP4410N wireless access point device.

“An attacker could exploit this vulnerability by accessing the affected device from the LAN-side interface and issuing arbitrary commands in the underlying operating system,” Cisco said in an advisory published Friday. “An exploit could allow the attacker to access user credentials for the administrator account of the device, and read the device configuration. The exploit can also allow the attacker to issue arbitrary commands on the device with escalated privileges.”

The company noted that there are no known workarounds that could mitigate this vulnerability in the absence of a firmware update.

The SANS Internet Storm Center, a cyber threat monitoring organization, warned at the beginning of the month that it detected probes for port 32764 TCP on the Internet, most likely targeting this vulnerability.

Source:  networkworld.com

Hackers use Amazon cloud to scrape mass number of LinkedIn member profiles

Friday, January 10th, 2014

EC2 service helps hackers bypass measures designed to protect LinkedIn users

LinkedIn is suing a gang of hackers who used Amazon’s cloud computing service to circumvent security measures and copy data from hundreds of thousands of member profiles each day.

“Since May 2013, unknown persons and/or entities employing various automated software programs (often referred to as ‘bots’) have registered thousands of fake LinkedIn member accounts and have extracted and copied data from many member profile pages,” company attorneys alleged in a complaint filed this week in US District Court in Northern California. “This practice, known as ‘scraping,’ is explicitly barred by LinkedIn’s User Agreement, which prohibits access to LinkedIn ‘through scraping, spidering, crawling, or other technology or software used to access data without the express written consent of LinkedIn or its Members.’”

With more than 259 million members—many of whom are highly paid professionals in technology, finance, and medical industries—LinkedIn holds a wealth of personal data that can prove highly valuable to people conducting phishing attacks, identity theft, and similar scams. The allegations in the lawsuit highlight the unending tug-of-war between hackers who work to obtain that data and the defenders who use technical measures to prevent the data from falling into the wrong hands.

The unnamed “Doe” hackers employed a raft of techniques designed to bypass anti-scraping measures built in to the business network. Chief among them was the creation of huge numbers of fake accounts. That made it possible to circumvent restrictions dubbed FUSE, which limit the activity any single account can perform.

“In May and June 2013, the Doe defendants circumvented FUSE—which limits the volume of activity for each individual account—by creating thousands of different new member accounts through the use of various automated technologies,” the complaint stated. “Registering so many unique new accounts allowed the Doe defendants to view hundreds of thousands of member profiles per day.”

The hackers also circumvented a separate security measure that is supposed to require end users to complete bot-defeating CAPTCHA dialogues when potentially abusive activities are detected. They also managed to bypass restrictions that LinkedIn intended to impose through a robots.txt file, which websites use to make clear which content may be indexed by automated Web crawling programs employed by Google and other sites.
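
It is worth remembering that robots.txt is purely advisory: well-behaved crawlers ask it what not to fetch, but nothing technically prevents a bot from ignoring it, which is why LinkedIn also relies on measures such as FUSE and CAPTCHAs. A minimal sketch of how a compliant crawler consults the file (the profile URL is an example only):

    # Minimal sketch: how a well-behaved crawler consults robots.txt before
    # fetching a page. A scraper can simply skip this step -- the file has
    # no enforcement power of its own.
    from urllib import robotparser

    parser = robotparser.RobotFileParser()
    parser.set_url("https://www.linkedin.com/robots.txt")
    parser.read()   # requires network access; fetches the live robots.txt

    url = "https://www.linkedin.com/in/some-profile"   # example path only
    if parser.can_fetch("my-crawler", url):
        print("robots.txt permits fetching", url)
    else:
        print("robots.txt asks crawlers not to fetch", url)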

LinkedIn engineers have disabled the fake member profiles and implemented additional technological safeguards to prevent further scraping. They also conducted an extensive investigation into the bot-powered methods employed by the hackers.

“As a result of this investigation, LinkedIn determined that the Doe defendants accessed LinkedIn using a cloud computing platform offered by Amazon Web Services (‘AWS’),” the complaint alleged. “This platform—called Amazon Elastic Compute Cloud or Amazon EC2—allows users like the Doe defendants to rent virtual computers on which to run their own computer programs and applications. Amazon EC2 provides resizable computing capacity. This feature allows users to quickly scale capacity, both up and down. Amazon EC2 users may temporarily run hundreds or thousands of virtual computing machines. The Doe defendants used Amazon EC2 to create virtual machines to run automated bots to scrape data from LinkedIn’s website.”

It’s not the first time hackers have used EC2 to conduct nefarious deeds. In 2011, the Amazon service was used to control a nasty bank fraud trojan. (EC2 has also been a valuable tool to whitehat password crackers.) Plenty of other popular Web services have been abused by online crooks as well. In 2009, for instance, researchers uncovered a Twitter account that had been transformed into a command and control channel for infected computers.

The goal of LinkedIn’s lawsuit is to give lawyers the legal means to carry out “expedited discovery to learn the identity of the Doe defendants.” Its success will depend, among other things, on whether the people who subscribed to the Amazon service used payment methods or IP addresses that can be traced.

Source:  arstechnica.com

DoS attacks that took down big game sites abused Web’s time-sync protocol

Friday, January 10th, 2014

Miscreants who earlier this week took down servers for League of Legends, EA.com, and other online game services used a never-before-seen technique that vastly amplified the amount of junk traffic directed at denial-of-service targets.

Rather than directly flooding the targeted services with torrents of data, an attack group calling itself DERP Trolling sent much smaller data requests to time-synchronization servers running the Network Time Protocol (NTP). By manipulating the requests to make them appear as if they originated from one of the gaming sites, the attackers were able to vastly amplify the firepower at their disposal. A spoofed request containing eight bytes will typically result in a 468-byte response to a victim, a more than 58-fold increase.

“Prior to December, an NTP attack was almost unheard of because if there was one it wasn’t worth talking about,” Shawn Marck, CEO of DoS-mitigation service Black Lotus, told Ars. “It was so tiny it never showed up in the major reports. What we’re witnessing is a shift in methodology.”

The technique is in many ways similar to the DNS-amplification attacks waged on servers for years. That older DoS technique sends falsified requests to open domain name system servers requesting the IP address for a particular site. DNS-reflection attacks help aggravate the crippling effects of a DoS campaign since the responses sent to the targeted site are about 50 times bigger than the request sent by the attacker.

During the first week of the year, NTP reflection accounted for about 69 percent of all DoS attack traffic by bit volume, Marck said. The average size of each NTP attack was about 7.3 gigabits per second, a more than three-fold increase over the average DoS attack observed in December. Correlating claims DERP Trolling made on Twitter with attacks Black Lotus researchers were able to observe, they estimated the attack gang had a maximum capacity of about 28Gbps.

NTP servers help people synchronize their servers to very precise time increments. Recently, the protocol was found to suffer from a condition that could be exploited by DoS attackers. Fortunately, NTP-amplification attacks are relatively easy to repel. Since virtually all the NTP traffic can be blocked with few if any negative consequences, engineers can simply filter out the packets. Other types of DoS attacks are harder to mitigate, since engineers must first work to distinguish legitimate data from traffic designed to bring down the site.

Black Lotus recommends network operators follow several practices to blunt the effects of NTP attacks. They include using traffic policers to limit the amount of NTP traffic that can enter a network, implementing large-scale DDoS mitigation systems, or opting for service-based approaches that provide several gigabits of standby capacity for use during DDoS attacks.
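
The "traffic policer" mentioned above is normally a router or appliance feature, but the underlying mechanism is plain rate limiting. A minimal token-bucket sketch, with made-up rates and packet sizes, shows the behaviour: NTP packets pass until the configured budget is exhausted, and the excess is dropped.

    # Minimal token-bucket sketch of the kind of policing a router applies to
    # inbound NTP traffic. Rates and packet sizes below are illustrative only.
    class TokenBucket:
        def __init__(self, rate_bytes_per_sec, burst_bytes):
            self.rate = rate_bytes_per_sec
            self.capacity = burst_bytes
            self.tokens = burst_bytes
            self.last_time = 0.0

        def allow(self, packet_bytes, now):
            """Refill tokens for elapsed time, then spend them if possible."""
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last_time) * self.rate)
            self.last_time = now
            if self.tokens >= packet_bytes:
                self.tokens -= packet_bytes
                return True
            return False

    # Police inbound NTP to ~10 KB/s with a 5 KB burst (assumed numbers).
    policer = TokenBucket(rate_bytes_per_sec=10_000, burst_bytes=5_000)

    # Simulate a burst of 468-byte NTP responses arriving 1 ms apart.
    dropped = sum(
        0 if policer.allow(468, now=i * 0.001) else 1
        for i in range(1000)
    )
    print("dropped", dropped, "of 1000 packets during the burst")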

Source:  arstechnica.com

Unencrypted Windows crash reports give ‘significant advantage’ to hackers, spies

Wednesday, January 1st, 2014

Microsoft transmits a wealth of information from Windows PCs to its servers in the clear, claims security researcher

Windows’ error- and crash-reporting system sends a wealth of data unencrypted and in the clear, information that eavesdropping hackers or state security agencies can use to refine and pinpoint their attacks, a researcher said today.

Not coincidentally, over the weekend the popular German newsmagazine Der Spiegel reported that the U.S. National Security Agency (NSA) collects Windows crash reports from its global wiretaps to sniff out details of targeted PCs, including the installed software and operating systems, down to the version numbers and whether the programs or OSes have been patched; application and operating system crashes that signal vulnerabilities that could be exploited with malware; and even the devices and peripherals that have been plugged into the computers.

“This information would definitely give an attacker a significant advantage. It would give them a blueprint of the [targeted] network,” said Alex Watson, director of threat research at Websense, which on Sunday published preliminary findings of its Windows error-reporting investigation. Watson will present Websense’s discovery in more detail at the RSA Conference in San Francisco on Feb. 24.

Sniffing crash reports using low-volume “man-in-the-middle” methods — the classic is a rogue Wi-Fi hotspot in a public place — wouldn’t deliver enough information to be valuable, said Watson, but a wiretap at the ISP level, the kind the NSA is alleged to have in place around the world, would.

“At the [intelligence] agency level, where they can spend the time to collect information on billions of PCs, this is an incredible tool,” said Watson.

And it’s not difficult to obtain the information.

Microsoft does not encrypt the initial crash reports, said Watson, including both those that prompt the user before they’re sent and those that do not. Instead, they’re transmitted to Microsoft’s servers “in the clear,” or over standard HTTP connections.

If a hacker or intelligence agency can insert themselves into the traffic stream, they can pluck out the crash reports for analysis without worrying about having to crack encryption.

And the reports from what Microsoft calls “Windows Error Reporting” (ERS), but which is also known as “Dr. Watson,” contain a wealth of information on the specific PC.

When a device is plugged into a Windows PC’s USB port, for example — say an iPhone to sync it with iTunes — an automatic report is sent to Microsoft that contains the device identifier and manufacturer, the Windows version, the maker and model of the PC, the version of the system’s BIOS and a unique machine identifier.

By comparing the data with publicly-available databases of device and PC IDs, Websense was able to establish that an iPhone 5 had been plugged into a Sony Vaio notebook, and even nail the latter’s machine ID.

If hackers are looking for systems running outdated, and thus, vulnerable versions of Windows — XP SP2, for example — the in-the-clear reports will show which ones have not been updated.

Windows Error Reporting is installed and activated by default on all PCs running Windows XP, Vista, Windows 7, Windows 8 and Windows 8.1, Watson said, confirming that the Websense techniques of deciphering the reports worked on all those editions.

Watson characterized the chore of turning the cryptic reports into easily-understandable terms as “trivial” for accomplished attackers.

More thorough crash reports, including ones that Microsoft silently triggers from its end of the telemetry chain, contain personal information and so are encrypted and transmitted via HTTPS. “If Microsoft is curious about the report or wants to know more, they can ask your computer to send a mini core dump,” explained Watson. “Personal identifiable information in that core dump is encrypted.”

Microsoft uses the error and crash reports to spot problems in its software as well as that crafted by other developers. Widespread reports typically lead to reliability fixes deployed in non-security updates.

The Redmond, Wash. company also monitors the crash reports for evidence of as-yet-unknown malware: Unexplained and suddenly-increasing crashes may be a sign that a new exploit is in circulation, Watson said.

Microsoft often boasts of the value of the telemetry to its designers, developers and security engineers, and with good reason: An estimated 80% of the world’s billion-plus Windows PCs regularly send crash and error reports to the company.

But the unencrypted information fed to Microsoft by the initial and lowest-level reports — which Watson labeled “Stage 1” reports — comprises a dangerous leak, Watson contended.

“We’ve substantiated that this is a major risk to organizations,” said Watson.

Error reporting can be disabled manually on a machine-by-machine basis, or in large sets by IT administrators using Group Policy settings.
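
On a single machine, disabling the reports comes down to one registry value. The sketch below uses Python's winreg module to set the "Disabled" value under HKLM\SOFTWARE\Microsoft\Windows\Windows Error Reporting; treat the exact key as something to verify against Microsoft's documentation for your Windows version, and note that it must run with administrative rights.

    # Minimal sketch (Windows only, run as administrator): disable Windows
    # Error Reporting by setting a single registry value. Verify the key and
    # value against Microsoft's documentation for your Windows version.
    import winreg

    KEY_PATH = r"SOFTWARE\Microsoft\Windows\Windows Error Reporting"

    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                            winreg.KEY_SET_VALUE) as key:
        # "Disabled" = 1 turns off error reporting for the machine.
        winreg.SetValueEx(key, "Disabled", 0, winreg.REG_DWORD, 1)

    print("Windows Error Reporting disabled (applies to new reports).")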

Websense recommended that businesses and other organizations redirect the report traffic on their network to an internal server, where it can be encrypted before being forwarded to Microsoft.

But to turn it off entirely would be to throw away a solid diagnostic tool, Watson argued. ERS can provide insights not only to hackers and spying eavesdroppers, but also to IT departments.

“[ERS] does the legwork, and can let [IT] see where vulnerabilities might exist, or whether rogue software or malware is on the network,” Watson said. “It can also show the uptake on BYOD [bring your own device] policies,” he added, referring to the automatic USB device reports.

Microsoft should encrypt all ERS data that’s sent from customer PCs to its servers, Watson asserted.

A Microsoft spokesperson asked to comment on the Websense and Der Spiegel reports said, “Microsoft does not provide any government with direct or unfettered access to our customer’s data. We would have significant concerns if the allegations about government actions are true.”

The spokesperson added that, “Secure Socket Layer connections are regularly established to communicate details contained in Windows error reports,” which is only partially true, as Stage 1 reports are not encrypted, a fact that Microsoft’s own documentation makes clear.

“The software ‘parameters’ information, which includes such information as the application name and version, module name and version, and exception code, is not encrypted,” Microsoft acknowledged in a document about ERS.

Source:  computerworld.com

Target’s nightmare goes on: Encrypted PIN data stolen

Friday, December 27th, 2013

After hackers stole credit and debit card records for 40 million Target store customers, the retailer said customers’ personal identification numbers, or PINs, had not been breached.

Not so.

On Friday, a Target spokeswoman backtracked from previous statements and said criminals had made off with customers’ encrypted PIN information as well. But Target said the company stored the keys to decrypt its PIN data on separate systems from the ones that were hacked.

“We remain confident that PIN numbers are safe and secure,” Molly Snyder, Target’s spokeswoman said in a statement. “The PIN information was fully encrypted at the keypad, remained encrypted within our system, and remained encrypted when it was removed from our systems.”

The problem is that when it comes to security, experts say the general rule of thumb is: where there is a will, there is a way. Criminals have already been selling Target customers’ credit and debit card data on the black market, where a single card is selling for as much as $100. Criminals can use that card data to create counterfeit cards. But PIN data is the most coveted of all. With PIN data, cybercriminals can make withdrawals from a customer’s account through an automated teller machine. And even if the key to unlock the encryption is stored on separate systems, security experts say there have been cases where hackers managed to get the keys and successfully decrypt scrambled data.

Even before Friday’s revelations about the PIN data, two major banks, JPMorgan Chase and Santander Bank, both placed caps on customer purchases and withdrawals made with compromised credit and debit cards. That move, which security experts say is unprecedented, brought complaints from customers trying to do last-minute shopping in the days leading up to Christmas.

Chase said it is in the process of replacing all of its customers’ debit cards — about 2 million of them — that were used at Target during the breach.

The Target breach, from Nov. 27 to Dec. 15, is officially the second-largest breach of a retailer in history. The biggest was a 2005 breach at TJ Maxx that compromised records for 90 million customers.

The Secret Service and Justice Department continue to investigate.

Source:  nytimes.com

Cyber criminals offer malware for Nginx, Apache Web servers

Thursday, December 26th, 2013

A new malware program that functions as a module for the Apache and Nginx Web servers is being sold on cybercrime forums, according to researchers from security firm IntelCrawler.

The malware is called Effusion and according to the sales pitch seen by IntelCrawler, a start-up firm based in Los Angeles that specializes in cybercrime intelligence, it can inject code in real time into websites hosted on the compromised Web servers. By injecting content into a website, attackers can redirect visitors to exploits or launch social engineering attacks.

The Effusion module works with Nginx from version 0.7 up to the latest stable version, 1.4.4, and with Apache running on 32- and 64-bit versions of Linux and FreeBSD. Modules extend Apache’s and Nginx’s core functionality.

The malware can inject rogue code into static content of certain MIME types, including JavaScript and HTML, and in PHP templates at the start, end or after a specific tag. Attackers can push configuration updates and control code modifications remotely.

Filters can also be used to restrict when the injection happens. Effusion supports filtering by referrer header, which can be used to target only visitors that come from specific websites; by User-Agent header, which can be used to target users of specific browsers; and by IP address or address range.
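
Those filters also suggest a crude check a site operator can run: request the same page with different Referer and User-Agent headers and compare the responses. The sketch below (Python standard library; the URL is a placeholder) does exactly that. Differences can have entirely legitimate causes, so treat a mismatch as a reason to inspect the served HTML, not as proof of infection.

    # Minimal sketch: fetch one page under different Referer/User-Agent headers
    # and flag differences, which *may* indicate conditional content injection.
    # The URL is a placeholder; differing responses can have innocent causes.
    import hashlib
    import urllib.request

    URL = "http://www.example.com/"   # replace with a page on your own server

    HEADER_SETS = {
        "search-referrer": {"Referer": "https://www.google.com/",
                            "User-Agent": "Mozilla/5.0 (Windows NT 6.1)"},
        "direct-curl":     {"User-Agent": "curl/7.33.0"},
    }

    digests = {}
    for label, headers in HEADER_SETS.items():
        req = urllib.request.Request(URL, headers=headers)
        body = urllib.request.urlopen(req, timeout=10).read()
        digests[label] = hashlib.sha256(body).hexdigest()
        print(label, digests[label])

    if len(set(digests.values())) > 1:
        print("Responses differ -- inspect the page source served in each case.")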

The malware can check whether it has root access, something that could allow the attackers greater control over the underlying system. It can also delete the injected content when suspicious processes are detected in order to hide itself, Andrey Komarov, IntelCrawler’s CEO, said via email.

The Effusion authors offer precompiled builds for $2,500 per build and plan to vet buyers, Komarov said. This suggests they’re interested in selling it only to a limited number of people so they can continue to offer support and develop the malware at the same time, he said.

While this is not the first malware to function as an Apache module, it is one of the very few so far to also target Nginx, a high-performance Web server that has grown considerably in popularity in recent years.

According to a December Web server survey by Internet services firm Netcraft, Nginx is the third most widely used Web server software after Apache and Microsoft IIS, and has a market share of over 14%. Because it’s built to handle high numbers of concurrent connections, it is used to host heavily trafficked websites including Netflix, Hulu, Pinterest, CloudFlare, Airbnb, WordPress.com, GitHub and SoundCloud.

Source: computerworld.com

Critics: NSA agent co-chairing key crypto standards body should be removed

Monday, December 23rd, 2013

There’s an elephant in the room at the Internet Engineering Task Force.

Security experts are calling for the removal of a National Security Agency employee who co-chairs an influential cryptography panel, which advises a host of groups that forge widely used standards for the Internet Engineering Task Force (IETF).

Kevin Igoe, who in a 2011 e-mail announcing his appointment was listed as a senior cryptographer with the NSA’s Commercial Solutions Center, is one of two co-chairs of the IETF’s Crypto Forum Research Group (CFRG). The CFRG provides cryptographic guidance to IETF working groups that develop standards for a variety of crucial technologies that run and help secure the Internet. The transport layer security (TLS) protocol that underpins Web encryption and standards for secure shell connections used to securely access servers are two examples. Igoe has been CFRG co-chair for about two years, along with David A. McGrew of Cisco Systems.

Igoe’s leadership had largely gone unnoticed until reports surfaced in September that exposed the role NSA agents have played in “deliberately weakening the international encryption standards adopted by developers.” Until now, most of the resulting attention has focused on cryptographic protocols endorsed by the separate National Institute of Standards and Technology. More specifically, scrutiny has centered on a random number generator that The New York Times, citing a document leaked by former NSA contractor Edward Snowden, reported may contain a backdoor engineered by the spy agency.

Enter Dragonfly

Less visibly, the revelations about the NSA’s influence on crypto standards have also renewed suspicions about the agency’s role in the IETF. To wit: they have brought new urgency to long-simmering criticism claiming that the CFRG was advocating the addition of a highly unproven technology dubbed “Dragonfly” to the TLS technology websites use to provide HTTPS encryption. Despite a lack of consensus about the security of Dragonfly, Igoe continued to champion it, critics said, citing several e-mails Igoe sent in the past two years. Combined with his ties to the NSA, Igoe’s continued adherence to Dragonfly is creating a lack of confidence in his leadership, critics said.

“Kevin’s NSA affiliation raises unpleasant but unavoidable questions regarding these actions,” Trevor Perrin, a crypto expert and one of the most vocal critics, wrote Friday in an e-mail to the CFRG list serve. “It’s entirely possible these are just mistakes by a novice chair who lacks experience in a particular sort of protocol and is being pressured by IETF participants to endorse something. But it’s hard to escape an impression of carelessness and unseriousness in Kevin’s work. One wonders whether the NSA is happy to preside over this sort of sloppy crypto design.”

Igoe and McGrew didn’t respond to an e-mail seeking comment. This article will be updated if they respond later.

Like the Dual_EC_DRBG standard adopted by NIST and now widely suspected to contain a backdoor, Dragonfly came with no security proof. And unlike several other better known candidates for “password-authenticated key exchange” (PAKE), most people participating in the CFRG or TLS working group knew little or nothing about it. TLS already has an existing PAKE called SRP, which critics say makes Dragonfly particularly redundant. PAKEs are complex and still not widely understood by crypto novices, but in essence, they involve the use of passwords to negotiate cryptographic keys used in encrypted TLS communications between servers and end users.

Update: Dragonfly developer Dan Harkins strongly defended the security of the PAKE.

“There are no known security vulnerabilities with dragonfly,” he wrote in an e-mail after this article was first published. “But it does not have a formal security proof to accompany it, unlike some other PAKE schemes. So the TLS working group asked the CFRG to look at it. They were not asked to ‘approve’ it, and they weren’t asked to ‘bless’ it. Just take a look and see if there’s any problems that would make it unsuitable for TLS. There were comments received on the protocol and they were addressed. There were no issues found that make it unsuitable for TLS.”

Harkins also took issue with characterizations by critics and this Ars article that Dragonfly is “untested” and “highly unproven.” He said it’s used in the 802.11 Wi-Fi standard as a secure, drop-in replacement for WPA-PSK security protocol. It’s also found as a method in the extensible authentication protocol and as an alternative to pre-shared keys in the Internet key exchange protocol.

“Do you know of another PAKE scheme that has been so widely applied?” he wrote in his response.

Perrin is a programmer who primarily develops cryptographic applications. He is the developer or co-developer of several proposed Internet standards, including trust assertions for certificate keys and the asynchronous protocol for secure e-mail. In Friday’s e-mail, he provided a raft of reasons why he said Igoe should step down:

1) Kevin has provided the *ONLY* positive feedback for Dragonfly that can be found on the CFRG mailing list or meeting minutes. The contrast between Kevin’s enthusiasm and the group’s skepticism is striking [CFRG_SUMMARY]. It’s unclear what this enthusiasm is based on. There’s no record of Kevin making any effort to understand Dragonfly’s unusual structure, compare it to alternatives, consider possible use cases, or construct a formal security analysis.

2) Twice Kevin suggested a technique for deriving the Dragonfly password-based element which would make the protocol easy to break [IGOE_1, IGOE_2]. He also endorsed an ineffective attempt to avoid timing attacks by adding extra iterations to one of the loops [IGOE_3, IGOE_4]. These are surprising mistakes from an experienced cryptographer.

3) Kevin’s approval of Dragonfly to the TLS WG misrepresented CFRG consensus, which was skeptical of Dragonfly [CFRG_SUMMARY].

Perrin’s motion has been seconded by several other participants, including cryptographer William Whyte. Another critic supporting Igoe’s removal called on security expert Bruce Schneier to replace Igoe. In an e-mail to Ars, Schneier said he is unsure if he is a suitable candidate. “I’m probably too busy to chair, and I’m not really good at the whole ‘organizing a bunch of people’ thing,” he wrote.

In Harkins’ 1,117-word response, he wrote:

The opposition to it in TLS is not “long-simmering” as alleged in the article. It is very recent and the most vocal critic actually didn’t say anything until _after_ the close of Working Group Last Call(a state of draft development on the way to RFC status). As part of his critique, Trevor Perrin has noted that dragonfly has no security proof. That’s true and it’s certainly not new. Having a formal proof has never been a requirement in the past and it is not a requirement today. He has continued to refer to the comments received about the draft as if they are signs of flaws. This is especially shocking given he is referred to in the article as “the developer or co-developer of several proposed Internet standards.” Someone who develops, or co-develops Internet Standards knows how the sausage making works. Comments are made, comments are addressed. There has, to my knowledge, never been an Internet Draft that’s perfect in it’s -00 revision and went straight to publication as an RFC. His criticism is particularly mendacious.

Trevor Perrin has also points out the technique in which dragonfly generates a password-based element as being flawed. The technique was the result of a 2 year old thread on the TLS list on how to address a possible side-channel attack. Trevor doesn’t like it, which is fair, but on the TLS mailing list he has also said that even if it was changed to a way he wants he would still be against dragonfly.

Anyone who has spent any time at all watching how standards bodies churn out the sausage knows that suspicions and vast conspiracy theories are almost always a part of the proceedings. But in a post-Snowden world, there’s new legitimacy to criticism about NSA involvement, particularly when employees of the agency are the ones actively shepherding untested proposals.

Source:  arstechnica.com

Computers share their secrets if you listen

Friday, December 20th, 2013

Be afraid, friends, for science has given us a new way in which to circumvent some of the strongest encryption algorithms used to protect our data — and no, it’s not some super secret government method, either. Researchers from Tel Aviv University and the Weizmann Institute of Science discovered that they could steal even the largest, most secure RSA 4096-bit encryption keys simply by listening to a laptop as it decrypts data.

To accomplish the trick, the researchers used a microphone to record the noises made by the computer, then ran that audio through filters to isolate the vibrations made by the electronic internals during the decryption process. With that accomplished, some cryptanalysis revealed the encryption key in around an hour. Because the vibrations in question are so small, however, you need to have a high powered mic or be recording them from close proximity. The researchers found that by using a highly sensitive parabolic microphone, they could record what they needed from around 13 feet away, but could also get the required audio by placing a regular smartphone within a foot of the laptop. Additionally, it turns out they could get the same information from certain computers by recording their electrical ground potential as it fluctuates during the decryption process.
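
The filtering step, at least, is conceptually simple even though the full attack is not: isolate a narrow frequency band of interest from the recording and discard everything else. A minimal FFT-based band-pass sketch in Python with NumPy follows; the band and the synthetic test signal are arbitrary choices for illustration and are not the parameters used by the researchers.

    # Minimal FFT band-pass sketch: keep only a chosen frequency band from a
    # recording. The band and the synthetic test signal are arbitrary; they
    # are not the parameters from the RSA key-extraction research.
    import numpy as np

    def bandpass(signal, sample_rate, low_hz, high_hz):
        spectrum = np.fft.rfft(signal)
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
        spectrum[(freqs < low_hz) | (freqs > high_hz)] = 0
        return np.fft.irfft(spectrum, n=len(signal))

    # Synthetic "recording": a 35 kHz component buried in broadband noise.
    rate = 96_000
    t = np.arange(rate) / rate
    recording = 0.1 * np.sin(2 * np.pi * 35_000 * t) + np.random.normal(0, 1, rate)

    filtered = bandpass(recording, rate, low_hz=30_000, high_hz=40_000)
    print("RMS before:", np.sqrt(np.mean(recording**2)))
    print("RMS after: ", np.sqrt(np.mean(filtered**2)))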

Of course, the researchers only cracked one kind of RSA encryption, but they said that there’s no reason why the same method wouldn’t work on others — they’d just have to start all over to identify the specific sounds produced by each new encryption software. Guess this just goes to prove that while digital security is great, it can be rendered useless without its physical counterpart. So, should you be among the tin-foil hat crowd convinced that everyone around you is a potential spy, waiting to steal your data, you’re welcome for this newest bit of food for your paranoid thoughts.

Source:  engadget.com

Unique malware evades sandboxes

Thursday, December 19th, 2013

Malware used in attack on PHP last month dubbed DGA.Changer

Malware utilized in the attack last month on the developers’ site PHP.net used a unique approach to avoid detection, a security expert says.

On Wednesday, security vendor Seculert reported finding that one of five malware types used in the attack had a unique cloaking property for evading sandboxes. The company called the malware DGA.Changer.

DGA.Changer’s only purpose was to download other malware onto infected computers, Aviv Raff, chief technology officer for Seculert, said on the company’s blog. Seculert identified 6,500 compromised computers communicating with the malware’s command and control server. Almost 60 percent were in the United States.

What Seculert found unique was how the malware could receive a command from a C&C server to change the seed of the software’s domain generation algorithm. The DGA periodically generates a large number of domain names as potential communication points to the C&C server, thereby making it difficult for researchers and law enforcement to find the right domain and possibly shut down the botnet.

“What the attackers behind DGA did is basically change the algorithm on the fly, so they can tell the malware to create a new stream of domains automatically,” Raff told CSOonline.

When the malware generates the same list of domains, it can be detected in the sandbox where security technology will isolate suspicious files. However, changing the algorithm on demand means that the malware won’t be identified.

“This is a new capability that didn’t exist before,” Raff said. “This capability allows the attacker to bypass sandbox technology.”
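
To make the seed point concrete, here is a minimal, generic DGA sketch in Python. It is not DGA.Changer's actual algorithm, which is not published here; it simply shows that the same seed and date always yield the same domain list, while a new seed pushed from the C&C server yields an entirely different one:

    # Generic illustration of a seeded domain generation algorithm (DGA).
    # This is NOT DGA.Changer's real algorithm -- just a sketch of the idea:
    # same seed + date => same domain list; change the seed => a new list.
    import hashlib
    from datetime import date

    def generate_domains(seed, day, count=5):
        domains = []
        for i in range(count):
            material = ("%s-%s-%d" % (seed, day.isoformat(), i)).encode()
            digest = hashlib.sha256(material).hexdigest()
            domains.append(digest[:12] + ".com")
        return domains

    today = date(2013, 12, 19)
    print(generate_domains("seed-A", today))   # what a sandbox would observe
    print(generate_domains("seed-B", today))   # after a C&C "change seed" command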

Hackers working for a nation-state targeting specific entities, such as government agencies, think tanks or international corporations, would use this type of malware, according to Raff. Called advanced persistent threats, these hackers tend to use sophisticated attack tools.

An exploit kit that served five different malware types was used in compromising two servers of PHP.net, a site for downloads and documentation related to the PHP general-purpose scripting language used in Web development. Google spotted four pages on the site serving malicious JavaScript that targeted personal computers, but ignored mobile devices.

The attack was noteworthy because of the number of visitors to PHP.net, which is in the top 250 domains on the Internet, according to Alexa rankings.

To defend against DGA.Changer, companies would need a tool that looks for abnormal behavior in network traffic. The malware tends to generate unusual traffic by querying lots of domains in search of the one leading to the C&C server.

“Because this malware will try to go to different domains, it will generate suspicious traffic,” Raff said.

Seculert did not find any evidence that would indicate who was behind the PHP.net attack.

“This is a group that’s continuously updating this malicious software, so this is a work in progress,” Raff said.

Source:  csoonline.com

Study finds zero-day vulnerabilities abound in popular software

Friday, December 6th, 2013

Subscribers to organizations that sell exploits for vulnerabilities not yet known to software developers gain daily access to scores of flaws in the world’s most popular technology, a study shows.

NSS Labs, which is in the business of testing security products for corporate subscribers, found that over the last three years, subscribers of two major vulnerability programs had access on any given day to at least 58 exploitable flaws in Microsoft, Apple, Oracle or Adobe products.

In addition, NSS Labs found that an average of 151 days passed from the time when the programs purchased a vulnerability from a researcher to the time when the affected vendor released a patch.

The findings, released Thursday, were based on an analysis of 10 years of data from TippingPoint, a network security maker Hewlett-Packard acquired in 2010, and iDefense, a security intelligence service owned by VeriSign. Both organizations buy vulnerabilities, inform subscribers and work with vendors in producing patches.

Stefan Frei, NSS research director and author of the report, said the actual number of secret vulnerabilities available to cybercriminals, government agencies and corporations is much larger, because of the amount of money they are willing to pay.

Cybercriminals will buy so-called zero-day vulnerabilities in the black market, while government agencies and corporations purchase them from brokers and exploit clearinghouses, such as VUPEN Security, ReVuln, Endgame Systems, Exodus Intelligence and Netragard.

The six vendors collectively can provide at least 100 exploits per year to subscribers, Frei said. According to a February 2010 price list, Endgame sold 25 zero-day exploits a year for $2.5 million.

In July, Netragard founder Adriel Desautels told The New York Times that the average vulnerability sells from around $35,000 to $160,000.

Part of the reason vulnerabilities are always present is because of developer errors and also because software makers are in the business of selling product, experts say. The latter means meeting deadlines for shipping software often trumps spending additional time and money on security.

Because of the number of vulnerabilities bought and sold, companies that believe their intellectual property makes them prime targets for well-financed hackers should assume their computer systems have already been breached, Frei said.

“One hundred percent prevention is not possible,” he said.

Therefore, companies need to have the experts and security tools in place to detect compromises, Frei said. Once a breach is discovered, then there should be a well-defined plan in place for dealing with it.

That plan should include gathering forensic evidence to determine how the breach occurred. In addition, all software on the infected systems should be removed and reinstalled.

Steps taken following a breach should be reviewed regularly to make sure they are up to date.

Source:  csoonline.com

Microsoft disrupts ZeroAccess web fraud botnet

Friday, December 6th, 2013

ZeroAccess, one of the world’s largest botnets – a network of computers infected with malware to trigger online fraud – has been disrupted by Microsoft and law enforcement agencies.

ZeroAccess hijacks web search results and redirects users to potentially dangerous sites to steal their details.

It also generates fraudulent ad clicks on infected computers and then claims payouts from duped advertisers.

Also known as Sirefef, the ZeroAccess botnet has infected two million computers.

The botnet targets search results on Google, Bing and Yahoo search engines and is estimated to cost online advertisers $2.7m (£1.7m) per month.

Microsoft said it had been authorised by US regulators to “block incoming and outgoing communications between computers located in the US and the 18 identified Internet Protocol (IP) addresses being used to commit the fraudulent schemes”.

In addition, the firm has also taken control of 49 domains associated with ZeroAccess.

David Finn, executive director of Microsoft Digital Crimes Unit, said the disruption “will stop victims’ computers from being used for fraud and help us identify the computers that need to be cleaned of the infection”.

‘Most robust’

The ZeroAccess botnet relies on waves of communication between groups of infected computers, instead of being controlled by a few servers.

This allows cyber criminals to control the botnet remotely from a range of computers, making it difficult to tackle.

According to Microsoft, more than 800,000 ZeroAccess-infected computers were active on the internet on any given day as of October this year.

“Due to its botnet architecture, ZeroAccess is one of the most robust and durable botnets in operation today and was built to be resilient to disruption efforts,” Microsoft said.

However, the firm said its latest action is “expected to significantly disrupt the botnet’s operation, increasing the cost and risk for cyber criminals to continue doing business and preventing victims’ computers from committing fraudulent schemes”.

Microsoft said its Digital Crimes Unit collaborated with the US Federal Bureau of Investigation (FBI) and Europol’s European Cybercrime Centre (EC3) to disrupt the operations.

Earlier this year, security firm Symantec said it had disabled nearly 500,000 computers infected by ZeroAccess and taken them out of the botnet.

Source: BBC

Scientist-developed malware covertly jumps air gaps using inaudible sound

Tuesday, December 3rd, 2013

Malware communicates at a distance of 65 feet using built-in mics and speakers.

Computer scientists have developed a malware prototype that uses inaudible audio signals to communicate, a capability that allows the malware to covertly transmit keystrokes and other sensitive data even when infected machines have no network connection.

The proof-of-concept software—or malicious trojans that adopt the same high-frequency communication methods—could prove especially adept in penetrating highly sensitive environments that routinely place an “air gap” between computers and the outside world. Using nothing more than the built-in microphones and speakers of standard computers, the researchers were able to transmit passwords and other small amounts of data from distances of almost 65 feet. The software can transfer data at much greater distances by employing an acoustical mesh network made up of attacker-controlled devices that repeat the audio signals.

The researchers, from Germany’s Fraunhofer Institute for Communication, Information Processing, and Ergonomics, recently disclosed their findings in a paper published in the Journal of Communications. It came a few weeks after a security researcher said his computers were infected with a mysterious piece of malware that used high-frequency transmissions to jump air gaps. The new research neither confirms nor disproves Dragos Ruiu’s claims of the so-called badBIOS infections, but it does show that high-frequency networking is easily within the grasp of today’s malware.

“In our article, we describe how the complete concept of air gaps can be considered obsolete as commonly available laptops can communicate over their internal speakers and microphones and even form a covert acoustical mesh network,” one of the authors, Michael Hanspach, wrote in an e-mail. “Over this covert network, information can travel over multiple hops of infected nodes, connecting completely isolated computing systems and networks (e.g. the internet) to each other. We also propose some countermeasures against participation in a covert network.”

The researchers developed several ways to use inaudible sounds to transmit data between two Lenovo T400 laptops using only their built-in microphones and speakers. The most effective technique relied on software originally developed to acoustically transmit data under water. Created by the Research Department for Underwater Acoustics and Geophysics in Germany, the so-called adaptive communication system (ACS) modem was able to transmit data between laptops as much as 19.7 meters (64.6 feet) apart. By chaining additional devices that pick up the signal and repeat it to other nearby devices, the mesh network can overcome much greater distances.

The ACS modem provided better reliability than other techniques that were also able to use only the laptops’ speakers and microphones to communicate. Still, it came with one significant drawback: a transmission rate of about 20 bits per second, a tiny fraction of a standard network connection. That paltry bandwidth rules out transferring video or any other large files. The researchers said attackers could overcome the shortcoming by equipping the trojan with functions that transmit only certain types of data, such as login credentials captured from a keylogger or a memory dumper.
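Some quick arithmetic shows why 20 bits per second is still dangerous for small secrets even though it is useless for bulk data; the payload sizes below are rough assumptions chosen only for illustration.

```python
# Back-of-the-envelope arithmetic for a ~20 bit/s covert channel:
# hopeless for bulk data, but enough for keystrokes, passwords or keys.

BITS_PER_SECOND = 20  # approximate rate reported for the acoustic modem

def transfer_time_seconds(num_bytes, bps=BITS_PER_SECOND):
    """Seconds needed to move num_bytes at bps, ignoring protocol overhead."""
    return num_bytes * 8 / bps

payloads = {
    "16-character password": 16,        # bytes
    "2048-bit private key (raw)": 256,  # bytes
    "1 MB document": 1_000_000,         # bytes
}

for name, size in payloads.items():
    print(f"{name}: ~{transfer_time_seconds(size):,.0f} seconds")

# Expected results, rounded: ~6 s for the password, ~102 s for the key,
# and ~400,000 s (more than four days) for the 1 MB document.
```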

“This small bandwidth might actually be enough to transfer critical information (such as keystrokes),” Hanspach wrote. “You don’t even have to think about all keystrokes. If you have a keylogger that is able to recognize authentication materials, it may only occasionally forward these detected passwords over the network, leading to a very stealthy state of the network. And you could forward any small-sized information such as private encryption keys or maybe malicious commands to an infected piece of construction.”

Remember Flame?

The hurdles of implementing covert acoustical networking are high enough that few malware developers are likely to add it to their offerings anytime soon. Still, the requirements are modest when measured against the capabilities of Stuxnet, Flame, and other state-sponsored malware discovered in the past 18 months. And that means that engineers in military organizations, nuclear power plants, and other truly high-security environments should no longer assume that computers isolated from an Ethernet or Wi-Fi connection are off limits.

The research paper suggests several countermeasures that potential targets can adopt. One approach is simply switching off audio input and output devices, although few hardware designs available today make this most obvious countermeasure easy. A second approach is to employ audio filtering that blocks high-frequency ranges used to covertly transmit data. Devices running Linux can do this by using the advanced Linux Sound Architecture in combination with the Linux Audio Developer’s Simple Plugin API. Similar approaches are probably available for Windows and Mac OS X computers as well. The researchers also proposed the use of an audio intrusion detection guard, a device that would “forward audio input and output signals to their destination and simultaneously store them inside the guard’s internal state, where they are subject to further analyses.”
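For the filtering and detection countermeasures, one rough way to approximate the “audio guard” idea is to watch microphone input for sustained energy in the near-ultrasonic band such transmissions would occupy. The sketch below analyzes audio frames with an FFT using numpy; the band limits and energy threshold are illustrative assumptions, and a real guard would read frames from the sound card rather than generate synthetic test signals.

```python
# Rough sketch of an "audio guard": flag frames with unusual energy in
# the near-ultrasonic band (here assumed to be roughly 18-20 kHz).
# Thresholds and the synthetic test signals are illustrative only.

import numpy as np

SAMPLE_RATE = 48_000       # Hz
BAND = (18_000, 20_000)    # frequency band to watch, in Hz
ENERGY_RATIO_LIMIT = 0.05  # fraction of total spectral energy (hypothetical)

def high_band_ratio(frame, sample_rate=SAMPLE_RATE, band=BAND):
    """Fraction of the frame's spectral energy that falls inside `band`."""
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[in_band].sum() / spectrum.sum()

def looks_like_covert_audio(frame):
    return high_band_ratio(frame) > ENERGY_RATIO_LIMIT

if __name__ == "__main__":
    t = np.arange(SAMPLE_RATE) / SAMPLE_RATE            # one second of audio
    audible_tone = np.sin(2 * np.pi * 440 * t)          # ordinary audible signal
    covert_like = audible_tone + 0.5 * np.sin(2 * np.pi * 19_000 * t)
    print(looks_like_covert_audio(audible_tone))  # False
    print(looks_like_covert_audio(covert_like))   # True
```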

Source:  arstechnica.com

Why security benefits boost mid-market adoption of virtualization

Monday, December 2nd, 2013

While virtualization has undoubtedly found its footing in larger businesses and data centers, the technology is still catching on in the middle market. A recent study conducted by a group of Cisco Partner Firms, titled “Virtualization on the Rise,” indicates that it is doing just that: the prevalence of virtualization continues to expand and has so far proven a success for many small- and medium-sized businesses.

At firms where virtualization has yet to catch on, however, security is often the sticking point.

Cisco’s study found that adoption rates for virtualization are already quite high at small- to medium-sized businesses, with 77 percent of respondents indicating that they already had some type of virtualization in place around their offices. These solutions included server virtualization, virtual desktop infrastructure, storage virtualization, network virtualization, and remote desktop access, among others. Server virtualization was the most common, with 59 percent of the respondents who had adopted virtualization in some form naming it as their solution of choice.

That said, some businesses still have yet to adopt virtualization, and a healthy chunk of respondents – 51 percent – cited security as a reason. Larger companies with more than 100 employees appeared more concerned about the security of virtualization, with 60 percent of that demographic citing it as their barrier to entry, compared with only 33 percent of smaller firms.

But because Cisco’s study offers no further detail on why exactly respondents were concerned about the security of virtualization, one can’t help but wonder: is that concern well founded? Craig Jeske, the business development manager for virtualization and cloud at Global Technology Resources, shed some light on the subject.

“I think [virtualization] gives a much easier, more efficient, and agile response to changing demands, and that includes responding to security threats,” said Jeske. “It allows for a faster response than if you had to deploy new physical tools.”

He went on to explain that given how virtualization enhances portability and makes it easier to back up data, it subsequently makes it easier for companies to get back to a known state in the event of some sort of compromise. This kind of flexibility limits attackers’ options.

“Thanks to the agility provided by virtualization, it changes the attack vectors that people can come at us from,” he said.

As for the 33 percent of smaller firms that cited security as a barrier to entry – thereby suggesting that the smaller companies were more willing to take the perceived “risk” of adopting the technology – Jeske said that was simply because virtualization makes more sense for businesses of that size.

“When you have a small budget, the cost savings [from virtualization] are more dramatic, since it saves space and calls for a lower upfront investment,” he said. On the flip side, the upfront cost for any new IT direction is higher for a larger business. It’s easier to make a shift when a company has 20 servers versus 20 million servers; while the return on virtualization is higher for a larger company, so is the upfront investment.

Of course, there is also the obvious fact that with smaller firms, the potential loss as a result of taking such a risk isn’t as great.

“With any type of change, the risk is lower for a smaller business than for a multimillion dollar firm,” he said. “With bigger businesses, any change needs to be looked at carefully. Because if something goes wrong, regardless of what the cause was, someone’s losing their job.”

Jeske also addressed the fact that some of the security concerns indicated by the study results may have stemmed from some teams recognizing that they weren’t familiar with the technology. That lack of comfort with virtualization – for example, not knowing how to properly implement or deploy it – could make virtualization less secure, but it’s not inherently insecure. Security officers, he stressed, are always most comfortable with what they know.

“When you know how to handle virtualization, it’s not a security detriment,” he said. “I’m hesitant to make a change until I see the validity and justification behind that change. You can understand peoples’ aversion from a security standpoint and first just from the standpoint of needing to understand it before jumping in.”

But the technology itself, Jeske reiterated, has plenty of security benefits.

“Since everything is virtualized, it’s easier to respond to a threat because it’s all available from everywhere. You don’t have to have the box,” he said. “The more we’re tied to these servers and our offices, the easier it is to respond.”

And with every element being all-encompassed in a software package, he said, businesses might be able to do more to each virtual server than they could in the physical world. Virtual firewalls, intrusion detection, etc. can all be put in as an application and put closer to the machine itself so firms don’t have to bring things back out into the physical environment.

This also allows for easier, faster changes in security environments. One change can be propagated across the entire virtual environment automatically, rather than having to push it out to each physical device individually that’s protecting a company’s systems.

Jeske noted that there are benefits from a physical security standpoint, as well, namely because somebody else takes care of it for you. The servers hosting the virtualized solutions are somewhere far away, and the protection of those servers is somebody else’s responsibility.

But with the rapid proliferation of virtualization, Jeske warned that security teams need to stay ahead of the game; otherwise, it will be harder to adopt the technology properly when they no longer have a choice.

“With virtualization, speed of deployment and speed of reaction are the biggest things,” said Jeske. “The servers and desktops are going to continue to get virtualized whether officers like it or not. So they need to be proactive and stay in front of it, otherwise they can find themselves in a bad position further on down the road.”

Source:  csoonline.com

This new worm targets Linux PCs and embedded devices

Wednesday, November 27th, 2013

A new worm is targeting x86 computers running Linux and PHP, and variants may also pose a threat to devices such as home routers and set-top boxes based on other chip architectures.

According to security researchers from Symantec, the malware spreads by exploiting a vulnerability in php-cgi, a component that allows PHP to run in the Common Gateway Interface (CGI) configuration. The vulnerability is tracked as CVE-2012-1823 and was patched in PHP 5.4.3 and PHP 5.3.13 in May 2012.

The new worm, which was named Linux.Darlloz, is based on proof-of-concept code released in late October, the Symantec researchers said Wednesday in a blog post.

“Upon execution, the worm generates IP [Internet Protocol] addresses randomly, accesses a specific path on the machine with well-known ID and passwords, and sends HTTP POST requests, which exploit the vulnerability,” the Symantec researchers explained. “If the target is unpatched, it downloads the worm from a malicious server and starts searching for its next target.”

The only variant seen spreading so far targets x86 systems, because the malicious binary downloaded from the attacker’s server is an ELF (Executable and Linkable Format) binary compiled for Intel architectures.

However, the Symantec researchers claim the attacker also hosts variants of the worm for other architectures including ARM, PPC, MIPS and MIPSEL.

These architectures are used in embedded devices like home routers, IP cameras, set-top boxes and many others.

“The attacker is apparently trying to maximize the infection opportunity by expanding coverage to any devices running on Linux,” the Symantec researchers said. “However, we have not confirmed attacks against non-PC devices yet.”

The firmware of many embedded devices is based on some type of Linux and includes a Web server with PHP for the Web-based administration interface. These kinds of devices might be easier to compromise than Linux PCs or servers because they don’t receive updates very often.

Patching vulnerabilities in embedded devices has never been an easy task. Many vendors don’t issue regular updates and when they do, users are often not properly informed about the security issues fixed in those updates.

In addition, installing an update on embedded devices requires more work and technical knowledge than updating regular software installed on a computer. Users have to know where the updates are published, download them manually and then upload them to their devices through a Web-based administration interface.

“Many users may not be aware that they are using vulnerable devices in their homes or offices,” the Symantec researchers said. “Another issue we could face is that even if users notice vulnerable devices, no updates have been provided to some products by the vendor, because of outdated technology or hardware limitations, such as not having enough memory or a CPU that is too slow to support new versions of the software.”

To protect their devices from the worm, users are advised to verify whether those devices are running the latest available firmware, update the firmware if needed, set up strong administration passwords, and block HTTP POST requests to /cgi-bin/php, /cgi-bin/php5, /cgi-bin/php-cgi, /cgi-bin/php.cgi and /cgi-bin/php4, either at the gateway firewall or on each individual device if possible, the Symantec researchers said.
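For administrators who want to know which of their own devices still expose those paths, a small, purely defensive check is sketched below. It sends only plain GET requests to the php-cgi paths named above and reports which ones answer; the host list and timeout are illustrative assumptions, this is not Symantec’s tooling, and it should only be run against devices you administer.

```python
# Defensive sketch: report which of your own devices answer on the
# php-cgi paths probed by Linux.Darlloz, so they can be patched or
# firewalled. Sends only plain GET requests; nothing is exploited.

from urllib.request import urlopen
from urllib.error import HTTPError, URLError

CGI_PATHS = [
    "/cgi-bin/php", "/cgi-bin/php5", "/cgi-bin/php-cgi",
    "/cgi-bin/php.cgi", "/cgi-bin/php4",
]

def exposed_paths(host, timeout=3):
    """Return (path, status) pairs for php-cgi paths that respond on host."""
    found = []
    for path in CGI_PATHS:
        url = f"http://{host}{path}"
        try:
            with urlopen(url, timeout=timeout) as resp:
                found.append((path, resp.status))
        except HTTPError as err:
            # Anything other than 404 means something answered on that path.
            if err.code != 404:
                found.append((path, err.code))
        except (URLError, OSError):
            pass  # host unreachable, timed out, or path closed off
    return found

if __name__ == "__main__":
    for device in ["192.168.1.1", "192.168.1.20"]:  # hosts you administer
        hits = exposed_paths(device)
        if hits:
            print(f"{device}: exposed php-cgi endpoints -> {hits}")
```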

Source:  computerworld.com