Archive for December, 2011

WiFi Protected Setup PIN brute force vulnerability

Friday, December 30th, 2011

Vulnerability Note VU#723755

Overview

The WiFi Protected Setup (WPS) PIN is susceptible to a brute force attack. A design flaw in the WPS specification's PIN authentication significantly reduces the time required to brute force the entire PIN, because it allows an attacker to know when the first half of the 8-digit PIN is correct. The lack of a proper lockout policy after a certain number of failed attempts to guess the PIN on some wireless routers makes this brute force attack that much more feasible.

I. Description

WiFi Protected Setup (WPS) is a computing standard created by the WiFi Alliance to ease the setup and securing of a wireless home network. WPS contains an authentication method called “external registrar” that only requires the router’s PIN. By design this method is susceptible to brute force attacks against the PIN.

When PIN authentication fails, the access point sends an EAP-NACK message back to the client. The EAP-NACK messages are sent in a way that allows an attacker to determine whether the first half of the PIN is correct. In addition, the last digit of the PIN is known because it is a checksum of the other digits. This design greatly reduces the number of attempts needed to brute force the PIN: the search space drops from 10^8 possibilities to 10^4 + 10^3, which is 11,000 attempts in total.
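The arithmetic, and the checksum that gives away the eighth digit, can be sketched in Python. The checksum routine below follows the weighted digit sum commonly attributed to the WPS specification; treat it as an illustrative sketch rather than a reference implementation:

```python
def wps_pin_checksum(first7: int) -> int:
    """Derive the 8th (checksum) digit of a WPS PIN from its first
    7 digits, using a weighted digit sum (weights 3 and 1 alternating)."""
    accum, t = 0, first7
    while t:
        accum += 3 * (t % 10)  # weight 3 on alternating digits
        t //= 10
        accum += t % 10        # weight 1 on the others
        t //= 10
    return (10 - accum % 10) % 10

# 12345670 is a syntactically valid WPS PIN: checksum(1234567) == 0
full_pin = 1234567 * 10 + wps_pin_checksum(1234567)

# Search-space arithmetic from the advisory:
attempts_naive = 10 ** 8            # all 8 digits guessed blindly
attempts_split = 10 ** 4 + 10 ** 3  # first half, then last 3 free digits
print(full_pin, attempts_split)  # 12345670 11000
```

The split works because the access point's EAP-NACK behaviour confirms the first four digits independently; once they are known, only three digits of the second half remain free.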

It has been reported that some wireless routers do not implement any kind of lockout policy for brute force attempts, which greatly reduces the time required to perform a successful attack. It has also been reported that some wireless routers entered a denial-of-service condition as a result of the brute force attempts and required a reboot.

II. Impact

An attacker within range of the wireless access point may be able to brute force the WPS PIN and retrieve the password for the wireless network, change the configuration of the access point, or cause a denial of service.

III. Solution

We are currently unaware of a practical solution to this problem.

Workarounds
Disable WPS.

Although the following will not mitigate this specific vulnerability, best practices also recommend only using WPA2 encryption with a strong password, disabling UPnP, and enabling MAC address filtering so only trusted computers and devices can connect to the wireless network.

Vendor Information

Vendor                                 Status    Date Notified   Date Updated
Belkin, Inc.                           Affected  -               2011-12-27
Buffalo Inc                            Affected  -               2011-12-27
D-Link Systems, Inc.                   Affected  2011-12-05      2011-12-27
Linksys (A division of Cisco Systems)  Affected  2011-12-05      2011-12-27
Netgear, Inc.                          Affected  2011-12-05      2011-12-27
Technicolor                            Affected  -               2011-12-27
TP-Link                                Affected  -               2011-12-27
ZyXEL                                  Affected  -               2011-12-27

Source:  US-CERT

Networking 2012: Vendors predict what’s ahead for the industry

Friday, December 30th, 2011

As the new year approaches, a series of new challenges will arrive to confront the networking industry. SearchNetworkingUK asked vendors for their networking 2012 predictions, and here they address the transition from wireline to wireless networks, an increasingly mobile workforce, cloud computing and consumerisation and the effects that all of these changes will have on existing networks.

Trevor Dearing, head of enterprise marketing EMEA, Juniper Networks

Over the past few weeks, it has certainly been interesting to see the number of 2012 predictions emerge that focus on how cloud computing will be the ultimate game changer next year, why companies will be unlocking the value of their unstructured data and how the consumerisation of IT will become a reality. It’s almost predictable.


I’m usually all for this season of goodwill, but I can’t sit back as everyone ignores the key ingredient that makes all these innovations work — the network.  For CIOs, a challenge for 2012 will be the unknowns caused by the adoption of new technologies and business processes, and how to plan for them. We know that the volume of information that flows through organisations is greater, as are the points of access, and the speed at which it is consumed is faster. What is unknown is where the weak point will be and how it will manifest itself.

For CIOs who want to have a robust IT infrastructure that can cope with changing user demands and new technologies, upgrading legacy networks can’t be ignored. Otherwise, organisations in the next few years are at serious risk of the network becoming a bottleneck, halting innovation and stalling deployments. The network should instead be viewed as a vital enabler for technology innovation rather than just a side consideration.

Marcus Jewell, head of UK and Ireland, Brocade

2011 has been an interesting period, and business-wise it has changed dramatically since the beginning of the year. In the early part of 2011 there was a great deal of confidence in the market, with organisations looking to increase IT investments after what had been a difficult few years. More recently, the macroeconomic environment has worsened again, and we now face new headwinds that were not around six months ago.


I believe this presents us, Brocade, and the general IT vendor community, with opportunities and challenges in equal measure. Customers need to do more with less, and vendors need to continually illustrate how they favourably impact a user’s business. They also need to demonstrate how they are ahead of the innovation curve in meeting future demands — vendors that fail to do these things will struggle in 2012.

Users who buy into vendor marketing campaigns and buy solutions blindly will also fail. Instead, they should intelligently assess offerings and deploy what is best for their business. So, what does 2012 have in store for the market?  I see a few key trends driving the industry in the coming 12 months:

  1. BYOD (Bring Your Own Device) changes IT procurement: The company PC is becoming a thing of the past, as businesses increasingly allow, and even encourage, employees to bring their own consumer devices into the workplace and access corporate applications.
  2. Campus LAN gets smart: With BYOD, the growth of smartphone/tablet usage among consumers and the unified communications market set to triple by 2015, the campus LAN will have to step up to the plate to meet demand — 2012 will be the year the campus gets smart.
  3. Rise of ‘Cloud Service Revenue’:  2011 saw organisations slowly moving toward the cloud, and this pragmatic adoption will continue in 2012 but will also see the rise of a new form of revenue generation as enterprises from outside the technology sector move toward cloud service provision.
  4. Greater commoditisation:  The maturity of server virtualisation means that hardware is less important; as real estate and energy costs spiral and companies look to reduce capital outlay (CapEx), virtualisation strategies will permeate all companies, and the CXO will become more vocal about whether or not the organisation has a plan in place.
  5. Data consumption continues to skyrocket:  Businesses will need to look at innovative solutions to increase network stability and performance while driving down costs to remain competitive.  Those who ignore this trend will face major problems.
  6. And finally… the year of the fabric: Holistic data centre fabrics — from the storage environment through to the Ethernet network — are going to be the big trend in 2012.  All my previous predictions will rely on this.

John Ansell, UK country manager, HP Networking


Organisations will need to simplify their networking infrastructure to keep pace with the dynamic needs of business applications:

  • There are considerable changes stressing the network: Enterprises are moving to cloud; the [rise of] consumerisation of IT; increasingly mobile users demanding rapid services; and users expecting their business network to behave like the Internet. Historically, the network was static, and it expected users and applications to be static. In today’s IT world, the network needs to be more agile so applications can be rolled out quickly and more easily adapt to business changes.
  • Fabrics are the future: they simplify data centre infrastructure with converged network, compute and storage resources across both virtual and physical environments to accommodate hybrid cloud computing models. These fabrics must be open, scalable, secure and agile, and rely on a common OS for configuration and management consistency.

Jim Morin, product line director, managed services & enterprise at Ciena


If 2011 was the year that enterprises started truly adopting the cloud, then 2012 will be the year they start realising the need for more intelligent networking connectivity to cloud-based resources and work to implement such a model.

Several factors are driving requirements for more robust enterprise-to-cloud networks, including the need to accommodate the rapidly increasing data transfer workloads between enterprise data centres and cloud data centres. With a more flexible and intelligent network to the cloud, enterprises will finally be able to realise the promise of a data centre without walls —  or in other words, a completely virtual data centre.

This industry model will consist of an active/active architecture that replaces passive backup architectures to enable greater mobility, collaboration and availability, as well as the operation of both private and cloud infrastructures and services independent of physical location. Specifically, this new technical architecture will feature a server, storage and intelligent networking infrastructure.

Ian Foddering, Cisco UK & Ireland CTO


The main challenge the industry will face is balancing users’ demands for empowerment with the limitations of the infrastructure they must work within. The apparent lack of funding could hit hard, and proof that the network is delivering a suitable return on investment will become imperative before any future decisions can be made on boosting capabilities or speeds.

Networkers will have to become more adept at working flexibly and maintaining legacy systems whilst simultaneously protecting the infrastructure from external threats that could come at a heavy cost at any time. With proper planning and long-term technology investment through traditional capex models or through alternative investment initiatives, it is possible to balance these seemingly contradictory goals. There is a strong business case to be made for doing so, in that IT can become a strategic advantage for companies that align their spending with commercial productivity needs.

Our own work with BT as the Official Olympic Network Infrastructure Provider will be a big technological challenge for us, and all eyes will be on us. Security, reliability and performance of the network infrastructure will be critical, with stability taking precedence over creativity.

George Humphrey, director and line of business owner, Avaya


In many companies, voice and data support teams will be converged with the advent of Internet Protocol (IP) telephony; with the deployment of unified communications applications, more companies will blend their applications teams as well. IT departments will be compelled by business units and enterprise users to adopt more user-centric applications and devices, and as IT departments better understand industry best practices around infrastructure management, they will become more discriminating about the services they purchase, their expectations for transparency into those services and how they hold service providers accountable.

Suke Jawanda, chief marketing officer, Bluetooth Special Interest Group (SIG)

The new network challenges will be outweighed by the significance of the advanced capabilities provided by version 4.0 in 2012.

As more devices use Bluetooth technology, both the ecosystem and each device in it increase in value and usefulness, with ever increasing numbers of Bluetooth devices able to connect wirelessly. With 5 million new Bluetooth-powered products shipping daily, each will be able to interoperate and connect with the vast Bluetooth ecosystem of billions of devices worldwide.

I envisage more enterprises making use of this new technology and finding new and more effective ways of seamlessly sharing data over secure networks, which will greatly benefit users and administrators alike. The new challenges will be outweighed by the significance of the advanced capabilities provided by this version.

Lee Ealey-Newman, business development director, Cryptocard


We see a big opportunity in the cloud computing sphere, particularly buoyed by the huge number of applications coming out around SAML and the interest this is creating. Clearly, some of the security concerns here still need to be addressed before the cloud can become truly mainstream, but interest in it is set to soar in the next year, and finding ways to make it even more effective will be a huge area of priority for anyone involved in networking.

Carolyn Carter, portable network tools product manager, Fluke Networks


Network analysis and wireless management are going to become important to executives as they come under more pressure to have visibility into their networks in a form factor they can take from the data centre to the production floor to the office desktop. Being able to solve network and application problems faster and improve overall IT efficiency will be pivotal in 2012.

Indeed, the ability to solve both network and application problems is crucial for today’s network engineers. That need, combined with critical staff levels in many organisations, means tools that integrate multiple functions and automate the collection of performance data will be key to greater efficiency and less downtime.

Websites, apps vulnerable to low-bandwidth, bot-free takedown, say researchers

Friday, December 30th, 2011

Microsoft rushes out emergency update for ASP .Net, first “out-of-band” in 2011

Hackers armed with a single machine and a minimal broadband connection can cripple Web servers, researchers disclosed Wednesday, putting uncounted websites and Web apps at risk from denial-of-service attacks.

In a security advisory issued the same day, Microsoft, whose ASP .Net Web framework is one of several technologies affected by the flaw, promised to patch the vulnerability and offered customers ways to protect their servers until it releases an update.

In a follow-up message, Microsoft announced it was shipping an “out-of-band,” or emergency, update today. The update was released at 1 p.m. ET. Designated MS11-100, it also fixed three other bugs in ASP .Net, one tagged “critical.” None of those three had been disclosed publicly prior to today.

The problem that caused a stir in the security community exists in many of the Web’s most popular application and site development technologies, including ASP .Net, the open-source PHP and Ruby languages, Oracle’s Java and Google’s V8 JavaScript engine, according to two German researchers, Alexander Klink and Julian Walde.

Klink and Walde, who presented their findings at the Chaos Communication Congress (CCC) conference in Berlin on Wednesday, traced the flaw to those languages’ — and others’ — handling of hash tables, a programming structure used to quickly store and retrieve data.

Unless a language randomizes hash functions or takes into account “hash collisions” — when multiple inputs generate the same hash value — attackers can calculate the data that will trigger large numbers of collisions, then send that data as a simple HTTP request. Because each collision chews up processing cycles on the targeted server, a hacker using relatively small attack packets could consume all the processing power of even well-equipped servers, effectively knocking them offline.
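The collision mechanics can be sketched with a toy chained hash table in Python (purely illustrative; not any vendor's implementation). When every key hashes to the same bucket, the i-th insert must compare against all i existing entries, so n inserts cost on the order of n²/2 comparisons instead of n:

```python
class ToyHashTable:
    """Chained hash table with a deliberately degenerate hash function,
    to show how colliding keys degrade inserts from O(1) to O(n)."""

    def __init__(self, nbuckets: int = 64):
        self.buckets = [[] for _ in range(nbuckets)]
        self.probes = 0  # count key comparisons to expose the slowdown

    def _hash(self, key) -> int:
        return 0  # every key collides; a real attack crafts keys to do this

    def insert(self, key, value) -> None:
        bucket = self.buckets[self._hash(key) % len(self.buckets)]
        for pair in bucket:          # walk the whole chain looking for the key
            self.probes += 1
            if pair[0] == key:
                pair[1] = value
                return
        bucket.append([key, value])

t = ToyHashTable()
for i in range(1000):
    t.insert(f"key{i}", i)
# n*(n-1)/2 comparisons for n colliding inserts
print(t.probes)  # 499500
```

A request body containing such crafted keys forces the server-side hash table into exactly this worst case, which is why a small payload can pin a CPU core for minutes.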

Microsoft confirmed that a single 100K specially-crafted HTTP request sent to a server running ASP .Net would consume 100% of one CPU core for 90-110 seconds.

“An attacker could potentially repeatedly issue such requests, causing performance to degrade significantly enough to cause a denial of service condition for even multi-core servers or clusters of servers,” company engineers Suha Can and Jonathan Ness said in a post to the Security Research & Defense blog yesterday.

Klink and Walde estimated that packets as small as 6K would keep a single-core processor busy on a Java server.

The implications are significant for Web apps and sites that run on those servers.

“An attacker with little resources can effectively take out a site fairly easily,” said Andrew Storms, director of security operations at nCircle Security, today. “No botnet required to create havoc here.”

Microsoft’s rush to patch the flaw in ASP .Net hinted at the seriousness of the bug.

“Microsoft will be the one to watch and see if they go out of band and if so, when,” Storms said Wednesday night, before Microsoft announced today’s patch. “If they do, I sense it will be soon.”

Can and Ness of Microsoft said that the company “anticipate[s] the imminent public release of exploit code,” and urged ASP .Net customers to apply the patch or the workarounds described in the advisory.

Other programming language developers have already offered fixes to their software.

Ruby, for instance, has issued an update that includes a new randomized hash function, while PHP has shipped a release candidate for version 5.4.0.

Some, however, will take their time implementing a fix, said Klink and Walde. Oracle told them there wasn’t anything to patch in Java itself, but said it would update the GlassFish Java server software with a future fix.

Klink and Walde credited another pair of researchers — Scott Crosby and Dan Wallach — for outlining the attack vector in 2003, and applauded the Perl programming language for patching its flaw then.

During their presentation at CCC, Klink and Walde chastised other vendors for not tackling the problem years ago.

“I’d have to agree that we all expected vendors to have fixed this by now,” said Storms. “On the other hand, there is a lot of research out there and it’s not always possible to be on top of everything. It’s not as though this kind of attack has been ongoing in the wild since 2003 and everyone refused to fix it.”

Klink and Walde reported their research to oCERT — the Open Source Computer Security Incident Response Team — last September. The organization then contacted the various vendors responsible for the affected languages.

oCERT issued its own advisory Wednesday.

Today’s patch from Microsoft is its first out-of-band update during 2011. Last year, the company pushed out four emergency updates.

Storms, who had praised Microsoft earlier this month for not having to go out-of-band, noted today that he had issued a caveat even then. “I did say at the December Patch Tuesday that they had a few weeks to go before the year was over,” Storms said in an instant message.

Microsoft delivered MS11-100 via its usual Windows Update and Windows Server Update Service (WSUS) channels.

More information about the hash collision flaw can be found in the advisory Klink published on his company’s website, and in the notes from their presentation (download PDF). Although videos of the Klink and Walde CCC talk were available on YouTube for a time Wednesday, they have since been pulled from the site.

Source:  networkworld.com

HP firmware to ‘mitigate’ LaserJet vulnerability

Friday, December 23rd, 2011

Hewlett-Packard said today that it has taken steps to prevent a “certain type of unauthorized access” to LaserJet printers.

The company didn’t describe its new firmware as a fix for the potential printer problem. Instead, it rather delicately used the word “mitigate,” the dictionary definition of which is “to make less severe or painful.” Here’s HP’s full statement on the matter:

HP has built a firmware update to mitigate this issue and is communicating this proactively to customers and partners. No customer has reported unauthorized access to HP. HP reiterates its recommendation to follow best practices for securing devices by placing printers behind a firewall and, where possible, disabling remote firmware upload on exposed printers.

Then again, HP has steadfastly declared that no customers have reported unauthorized access and that the issue was overblown from the start, as in late November when it said “there has been sensational and inaccurate reporting regarding a potential security vulnerability with some HP LaserJet printers.”

At that time, it described the nature of the problem and promised a firmware update to address the issues:

The specific vulnerability exists for some HP LaserJet devices if placed on a public internet without a firewall. In a private network, some printers may be vulnerable if a malicious effort is made to modify the firmware of the device by a trusted party on the network. In some Linux or Mac environments, it may be possible for a specially formatted corrupt print job to trigger a firmware upgrade.

HP also at that time decried “speculation” that the LaserJets in question could catch fire because of a firmware update or “this proposed vulnerability.”

Despite those assurances, HP became the target of a lawsuit in early December alleging that the company sold those printers even though it knew of those alleged vulnerabilities. The lawsuit charges that software on the printers that allows for updates over the Internet does not use digital signatures to verify the authenticity of any software upgrades or downloaded modifications.

Source:  CNET

FCC approves first white spaces database, device

Friday, December 23rd, 2011

The spectrum database will be available first in the Wilmington, N.C., area

The U.S. Federal Communications Commission has approved the first database of unlicensed wireless spectrum that can be used by new so-called white spaces devices.

The FCC’s Office of Engineering and Technology on Thursday also approved a device from KTS (Koos Technical Services) that can operate in the white spaces, which are unused bands in the area of spectrum used by television stations. The KTS device will operate in conjunction with the approved white spaces database, from Spectrum Bridge.

KTS makes a broadband transmitter device designed to operate in the white spaces.

“With today’s approval of the first TV white spaces database and device, we are taking an important step towards enabling a new wave of wireless innovation,” FCC Chairman Julius Genachowski said in a statement. “Unleashing white spaces spectrum has the potential to exceed even the many billions of dollars in economic benefit from WiFi, the last significant release of unlicensed spectrum, and drive private investment and job creation.”

Operations under the approval will be limited initially to the Wilmington, N.C., area, where the FCC has conducted white spaces tests. The operation of white spaces devices will expand nationwide as the FCC begins to approve requests for protection of wireless microphones at event venues.

In recent years, several tech vendors and consumer groups have pushed for the FCC to open up the white spaces, sometimes called super Wi-Fi spectrum, to mobile broadband devices. TV stations and wireless microphone makers raised concerns about interference.

Early device tests had mixed results, leading the FCC to pursue a spectrum database approach. The new KTS device will contact the Spectrum Bridge database to identify channels that are available for operation at its location and can provide high-speed Internet connectivity, the FCC said.

Source:  infoworld.com

Hackers abuse PHP setting to inject malicious code into websites

Friday, December 23rd, 2011

Hackers modify php.ini files on compromised Web servers to hide their malicious activity from webmasters

Attackers have begun to abuse a special PHP configuration directive to insert malicious code into websites hosted on compromised dedicated and virtual private servers (VPS).

The technique was identified by Web security firm Sucuri Security while investigating several infected websites that had a particular malicious iframe injected into their pages.

“We’re finding that entire servers are being compromised, and the main server php.ini file (/etc/php/php.ini) has the following setting added: ;auto_append_file = “0ff”,” Sucuri security researcher David Dede said in a blog post on Thursday.

According to the PHP manual, the auto_append_file directive specifies the name of a file that is automatically parsed after the main file. This is the server-wide equivalent of the PHP require() function.

The “0ff” string from the rogue php.ini directive is actually the path to a file, namely /tmp/0ff, which is created by the attackers on the compromised servers and contains the malicious iframe.

This malicious trick makes it hard for webmasters to pinpoint the source of the unauthorized code, since none of the files in their Web directory are actually altered.
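Because the change lives in php.ini rather than in the site's files, the place to look is the server configuration itself. A minimal detection sketch in Python (the directive name and example value come from the article; the parsing logic is illustrative):

```python
import re

def find_auto_append(php_ini_text: str) -> list:
    """Return any active auto_append_file values found in php.ini text.
    Lines starting with ';' are comments in php.ini and are ignored."""
    hits = []
    for line in php_ini_text.splitlines():
        stripped = line.strip()
        if stripped.startswith(";"):
            continue  # commented out, not active
        m = re.match(r'auto_append_file\s*=\s*"?([^"\s]+)"?', stripped)
        if m:
            hits.append(m.group(1))
    return hits

sample = 'engine = On\nauto_append_file = "0ff"\n'
print(find_auto_append(sample))  # ['0ff']
```

Any value returned here that the administrator did not configure deliberately is worth investigating, along with the file it points to.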

“We only got access to a few dozen servers with this type of malware, but doing our crawling we identified a few thousand sites with a similar malware, so we assume they are all hacked the same way,” Dede said.

Even though Sucuri has only inspected VPS and dedicated servers so far, the researcher doesn’t dismiss the possibility that some shared servers, like those used for low-cost hosting, might have been compromised in the same manner.

Attacks using this technique have already been running for several months, said Elad Sharf, a security researcher at Web security firm Websense. “This is one of many mass injection campaigns that we know about and follow.”

Sharf recommended that webmasters remove the file name from the auto_append_file setting and scan their servers for other infections using security software. Patching all software that runs on their servers and performing regular backups is fundamentally important, he said.

Denis Sinegubko, an independent security researcher and creator of the Unmask Parasites website scanner, couldn’t confirm the “auto_append_file” attacks, but said that he has seen other rogue php.ini modifications in the past.

“All critical configuration files should be under version control. Not only does it help to spot unwanted changes, but also easily restore files to their clean state,” Sinegubko said. Scanning the Web server, ftp and other available logs for suspicious activity is also something that server administrators should do on a regular basis, he added.

Sinegubko’s advice for owners of infected websites who use shared hosting servers and can’t find anything suspicious under their own account is to check whether other sites hosted on the same server were also compromised.

Another method is to create an empty .php file in the topmost directory and scan its corresponding URL with one of the several free online website scanners. If any of these checks return a positive result, webmasters should contact their hosting provider and inform them about the problem, Sinegubko said.
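The empty-file check described above works because a blank .php file should yield an empty HTTP response, so any content in the response body must have been appended by the server. A sketch of that reasoning in Python (the probe helper and its use of urllib are illustrative assumptions, not how any particular scanner works):

```python
from urllib.request import urlopen

def response_looks_injected(body: str) -> bool:
    """An empty probe .php file should produce an empty response body;
    any content that comes back was appended server-side (for example
    by a rogue auto_append_file directive)."""
    return bool(body.strip())

def probe(url: str) -> bool:
    """Fetch the probe file's URL and apply the check (network call)."""
    return response_looks_injected(
        urlopen(url).read().decode(errors="replace"))

print(response_looks_injected(""))                                    # False
print(response_looks_injected('<iframe src="http://bad.example/">'))  # True
```

If the probe returns True, the injected content is coming from the server configuration rather than the site's own files, which is the webmaster's cue to involve the hosting provider.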

Source:  infoworld.com

Self-healing electronic chips trialled

Friday, December 23rd, 2011

Self-repairing electronic chips are one step closer, according to a team of US researchers.

The group has created a circuit that heals itself when cracked, thanks to the release of a liquid metal that restores conductivity.

The process takes less than an eye blink to bring the circuit back to use.

The researchers said that their work could eventually lead to longer-lasting gadgets as well as solving one of the big problems of interplanetary travel.

The work was carried out by a team of scientists and engineers at the University of Illinois at Urbana-Champaign and is published in the journal Advanced Materials.

The process works by exploiting the stress that causes the initial damage in the chips to break open tiny reservoirs of a healing material that fills in the resulting gaps, restoring electrical flow.

Cracked circuits

To test their theory the team patterned lines of gold onto glass to form a circuit.

They then either placed microcapsules 0.01mm wide directly onto the lines or added a thin laminate into which they embedded larger 0.2mm microcapsules.

In both cases the microcapsules contained eutectic gallium-indium – a metallic material chosen for its high conductivity and low melting point.

This device was then sandwiched between another layer of glass and acrylic and connected to electricity.

The researchers then bent the circuit until it cracked, causing the monitored voltage to fall to zero.

They said the ruptured microcapsules then healed most of the test circuits within one millisecond and restored nearly all of the measured voltage.

The smaller capsules healed the device every time but were a little less conductive than the larger ones, which had a slightly lower success rate. The team suggested that a mix of differently sized capsules would therefore give the best result.

The devices were then monitored for four months during which time the researchers said there was no loss of conductivity.

Safe space travel

The leader of the group said the theory could prove a boon to the space industry.

“The only avenue one has right now is to simply remove that circuitry when it fails and replace it; there is no way to manually go in and fix something like this,” aerospace engineering professor Scott White told the BBC.

“I think the real application area that you’ll see for something like this is in electronics which are incredibly difficult to repair or replace – think about satellites or interplanetary travel where it’s physically impossible to swap out something.”

The research is an offshoot of the university’s research into extending the lifetime of rechargeable batteries.

The reason current systems fail after repeated use is often because microdamage inside the devices has disrupted the conductive flow of electrons from one end of the batteries to the other.

The team said that if they could solve the problem electric car batteries might last years longer than they do at present, making the vehicles much cheaper to maintain.

Greener gadgets

The group also claimed that the technique had the potential to offer more sustainable consumer electronic devices.

Professor White gave the example of mobile phone buttons that stopped working if repeated use had caused cracks in the circuitry below. He said self-healing systems would extend handsets’ lifespans.

When asked whether profit-driven electronics makers would want this he replied: “I believe any company would want to provide their customer with the best performing product and if they don’t, then other companies will step into the market to provide it.

“Basically what you see is that electronics are cycled now to give you added functionality.

“Maybe the way to do this is not to physically build new circuits and packages every time, but let’s have longer lasting ones.

“Then the redesigns can be more software based or functionality driven, saving us from using up our precious resources by building millions of cellphones every year.”

Source:  BBC