Archive for October, 2013

FCC lays down spectrum rules for national first-responder network

Tuesday, October 29th, 2013

The agency will also start processing applications for equipment certification

The U.S. moved one step closer to having a unified public safety network on Monday when the Federal Communications Commission approved rules for using spectrum set aside for the system.

Also on Monday, the agency directed its Office of Engineering and Technology to start processing applications from vendors to have their equipment certified to operate in that spectrum.

The national network, which will operate in the prized 700MHz band, is intended to replace a patchwork of systems used by about 60,000 public safety agencies around the country. The First Responder Network Authority (FirstNet) would operate the system and deliver services on it to those agencies. The move is intended to enable better coordination among first responders and give them more bandwidth for transmitting video and other rich data types.

The rules approved by the FCC include power limits and other technical parameters for operating in the band. Locking them down should help prevent harmful interference with users in adjacent bands and drive the availability of equipment for FirstNet’s network, the agency said.

A national public safety network was recommended by a task force that reviewed the Sept. 11, 2001, terror attacks on the U.S. The Middle Class Tax Relief and Job Creation Act of 2012 called for auctions of other spectrum to cover the cost of the network, which was estimated last year at US$7 billion.

The public safety network is required to cover 95 percent of the U.S., including all 50 states, the District of Columbia and U.S. territories. It must reach 98 percent of the country’s population.

Source:  computerworld.com

Seven essentials for VM management and security

Tuesday, October 29th, 2013

Virtualization isn’t a new trend; these days it’s an essential element of infrastructure design and management. Even so, while the technology itself is now commonplace, organizations are still learning as they go when it comes to cloud-based initiatives.

CSO recently spoke with Shawn Willson, the Vice President of Sales at Next IT, a Michigan-based firm that focuses on managed services for small to medium-sized organizations. Willson discussed his list of essentials when it comes to VM deployment, management, and security.

Preparing for time drift on virtual servers. “Guest OSs should, and need to be synced with the host OS…Failure to do so will lead to time drift on virtual servers — resulting in significant slowdowns and errors in an Active Directory environment,” Willson said.

Despite the impact this could have on work productivity and daily operations, he added, very few IT managers or security officers think to do this until after they’ve experienced a time drift. Unfortunately, this usually happens while attempting to recover from a security incident. Time drift can lead to a loss of accuracy when it comes to logs, making forensic investigations next to impossible.
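
A minimal sketch of that detection idea, assuming the third-party ntplib package is installed and using Active Directory's default five-minute Kerberos clock-skew tolerance as the alert threshold (both are assumptions, and a check like this complements rather than replaces proper host/guest time synchronization):

```python
# Compare this machine's clock against an NTP server and warn when the
# offset exceeds a tolerance. The server name and threshold are placeholders.
import ntplib

THRESHOLD_SECONDS = 300  # Active Directory's default maximum clock skew

client = ntplib.NTPClient()
response = client.request("pool.ntp.org", version=3)

print(f"clock offset: {response.offset:+.3f} s")
if abs(response.offset) > THRESHOLD_SECONDS:
    print("WARNING: time drift exceeds the assumed Active Directory tolerance")
```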

Establish policies for managing snapshots and images. Virtualization allows for quick copies of the guest OS, but policies need to be put in place to dictate who can make these copies, whether copies will (or can) be archived, and, if so, where (and under what security settings) these images will be stored.

“Many times when companies move to virtual servers they don’t take the time to upgrade their security policy for specific items like this, simply because of the time it requires,” Willson said.

Creating and maintaining disaster recovery images. “Spinning up an unpatched, legacy image in the case of disaster recovery can cause more issues than the original problem,” Willson explained.

To fix this, administrators should develop a process for maintaining a patched, “known good” image.

Update disaster recovery policy and procedures to include virtual drives. “Very few organizations take the time to upgrade their various IT policies to accommodate virtualization. This is simply because of the amount of time it takes and the little value they see it bringing to the organization,” Willson said.

But failing to update IT policies to include virtualization, “will only result in the firm incurring more costs and damages whenever a breach or disaster occurs,” Willson added.

Maintaining and monitoring the hypervisor. “All software platforms will offer updates to the hypervisor software, making it necessary that a strategy for this be put in place. If the platform doesn’t provide monitoring features for the hypervisor, a third party application should be used,” Willson said.

Consider disabling clipboard sharing between guest OSs. By default, most VM platforms have copy and paste between guest OSs turned on after initial deployment. In some cases, this is a required feature for specific applications.

“However, it also poses a security threat, providing a direct path of access and the ability to unknowingly [move] malware from one guest OS to another,” Willson said.

Thus, if copy and paste isn’t essential, it should be disabled as a rule.
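
On VMware platforms, for example, this is typically enforced through per-VM .vmx isolation settings. The short sketch below only audits whether the commonly recommended keys are set; the datastore path is a made-up example, and other hypervisors expose different controls:

```python
# Check a VMware .vmx file for the clipboard-isolation keys recommended in
# common hardening guidance. The path below is a hypothetical placeholder.
from pathlib import Path

VMX = Path("/vmfs/volumes/datastore1/guest01/guest01.vmx")

WANTED = {
    "isolation.tools.copy.disable": "true",
    "isolation.tools.paste.disable": "true",
}

settings = {}
for line in VMX.read_text().splitlines():
    if "=" in line:
        key, _, value = line.partition("=")
        settings[key.strip().lower()] = value.strip().strip('"').lower()

for key, expected in WANTED.items():
    actual = settings.get(key, "<unset>")
    status = "ok" if actual == expected else "REVIEW"
    print(f"{status}: {key} = {actual}")
```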

Limiting unused virtual hardware. “Most IT professionals understand the need to manage unused hardware (drives, ports, network adapters), as these can be considered soft targets from a security standpoint,” Willson said.

However, he adds, “with virtualization technology we now have to take inventory of virtual hardware (CD drives, virtual NICs, virtual ports). Many of these are created by default upon creating new guest OSs under the guise of being a convenience, but these can offer the same danger or point of entry as unused physical hardware can.”

Again, just as it was with copy and paste, if the virtualized hardware isn’t essential, it should be disabled.

Source:  csoonline.com

Adobe hack attack affected 38 million accounts

Tuesday, October 29th, 2013

The recent security breach that hit Adobe exposed customer IDs, passwords, and credit and debit card information.

A cyberattack launched against Adobe affected more than 10 times the number of users initially estimated.

On October 3, Adobe revealed that it had been the victim of an attack that exposed Adobe customer IDs and encrypted passwords. At the time, the company said that hackers gained access to credit card records and login information for around 3 million users. But the number of affected accounts has turned out to be much higher.

The attack actually involved 38 million active accounts, according to security blog Krebs on Security. Adobe confirmed that number in an e-mail to Krebs.

“So far, our investigation has confirmed that the attackers obtained access to Adobe IDs and (what were at the time valid), encrypted passwords for approximately 38 million active users,” Adobe spokeswoman Heather Edell said. “We have completed e-mail notification of these users. We also have reset the passwords for all Adobe IDs with valid, encrypted passwords that we believe were involved in the incident — regardless of whether those users are active or not.”

The attack also gained access to many invalid or inactive Adobe accounts — those with invalid encrypted passwords and those used as test accounts.

“We are still in the process of investigating the number of inactive, invalid, and test accounts involved in the incident,” Edell added. “Our notification to inactive users is ongoing.”

CNET contacted Adobe for comment and will update the story with any further details.

Following the initial report of the attack, Adobe reset the passwords on compromised customer accounts and sent e-mails to those whose accounts were breached and whose credit card or debit card information was exposed.

Adobe has posted a customer security alert page with more information on the breach and an option whereby users can change their passwords.

Source:  CNET

Hackers compromise official PHP website, infect visitors with malware

Friday, October 25th, 2013

Maintainers of the open-source PHP programming language have locked down the php.net website after discovering two of its servers were hacked to host malicious code designed to surreptitiously install malware on visitors’ computers.

The compromise was discovered Thursday morning by Google’s safe browsing service, which helps the Chrome, Firefox, and Safari browsers automatically block sites that serve drive-by exploits. Traces of the malicious JavaScript code served to some php.net visitors were captured and posted to Hacker News here and, in the form of a pcap file, to a Barracuda Networks blog post here. The attacks started Tuesday and lasted through Thursday morning, PHP officials wrote in a statement posted late that evening.

Eventually, the site was moved to a new set of servers, PHP officials wrote in an earlier statement. There’s no evidence that any of the code they maintain has been altered, they added. Encrypted HTTPS access to php.net websites is temporarily unavailable until a new secure sockets layer certificate is issued and installed. The old certificate was revoked out of concern the intruders may have accessed the private encryption key. User passwords will be reset in the coming days. At time of writing, there was no indication of any further compromise.

“The php.net systems team have audited every server operated by php.net, and have found that two servers were compromised: the server which hosted the www.php.net, static.php.net and git.php.net domains and was previously suspected based on the JavaScript malware, and the server hosting bugs.php.net,” Thursday night’s statement read. “The method by which these servers were compromised is unknown at this time.”

According to a security researcher at Kaspersky Lab, Thursday’s compromise caused some php.net visitors to download “Tepfer,” a trojan spawned by the Magnitude Exploit Kit. At the time of the php.net attacks, the malware was detected by only five of 47 antivirus programs. An analysis of the pcap file suggests the malware attack worked by exploiting a vulnerability in Adobe Flash, although it’s possible that some victims were targeted by attacks that exploited Java, Internet Explorer, or other applications, Martijn Grooten, a security researcher for Virus Bulletin, told Ars.

Grooten said the malicious JavaScript was served from a file known as userprefs.js hosted directly on one of the php.net servers. While the userprefs.js code was served to all visitors, only some of those people received an additional payload that contained malicious iframe tags. The HTML code caused visitors’ browsers to connect to a series of third-party websites and eventually download malicious code. At least some of the sites the malicious iframes were pointing to were UK domains such as nkhere.reviewhdtv.co.uk, which appeared to have their domain name system server settings compromised so they resolved to IP addresses located in Moldova.
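
As a rough illustration of the kind of check a site operator could run for this pattern, the sketch below fetches a page and flags iframe tags whose src points to a different domain; the URL is a placeholder, and a real check would also need an allow-list for legitimate third-party embeds:

```python
# Fetch a page and flag <iframe> tags whose src points off-domain.
import re
import urllib.request
from urllib.parse import urlparse

PAGE = "http://www.example.com/"                 # hypothetical page to inspect
OWN_DOMAIN = urlparse(PAGE).hostname

html = urllib.request.urlopen(PAGE, timeout=10).read().decode("utf-8", "replace")

for src in re.findall(r'<iframe[^>]+src=["\']([^"\']+)', html, re.IGNORECASE):
    host = urlparse(src).hostname
    if host and host != OWN_DOMAIN:
        print("suspicious off-site iframe:", src)
```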

“Given what Hacker News reported (a site serving malicious JS) to some, this doesn’t look like someone manually changing the file,” Grooten said, calling into question an account php.net officials gave in their initial brief statement posted to the site. The attackers “somehow compromised the Web server. It might be that php.net has yet to discover that (it’s not trivial—some webserver malware runs entirely in memory and hides itself pretty well.)”

Ars has covered several varieties of malware that target webservers and are extremely hard to detect.

In an e-mail, PHP maintainer Adam Harvey said PHP officials first learned of the attacks at 6:15am UTC. By 8:00 UTC, they had provisioned a new server. In the interim, some visitors may have been exposed.

“We have no numbers on the number of visitors affected, due to the transient nature of the malicious JS,” Harvey wrote. “As the news post on php.net said, it was only visible intermittently due to interactions with an rsync job that refreshed the code from the Git repository that houses www.php.net. The investigation is ongoing. Right now we have nothing specific to share, but a full post mortem will be posted on php.net once the dust has settled.”

Source:  arstechnica.com

Top three indicators of compromised web servers

Thursday, October 24th, 2013

You slowly push open your unusually unlocked door only to find that your home is ransacked. A broken window, missing cash, all signs that someone has broken in and you have been robbed.

In the physical world it is very easy to understand what an indicator of compromise would mean for a robbery. It would simply be all the things that clue you in to the event’s occurrence. In the digital world, however, things are another story.

My area of expertise is breaking into web applications. I’ve spent many years as a penetration tester attempting to gain access to internal networks through web applications connected to the Internet. I developed this expertise because of the prevalence of exploitable vulnerabilities that made it simple to achieve my goal. In a world of phishing and drive-by downloads, the web layer is often a complicated, overlooked compromise domain.

A perimeter web server is a gem of a host to control for any would-be attacker. It often enjoys full Internet connectivity with minimal downtime while also providing an internal connection to the target network. These servers are routinely expected to experience attacks, heavy user traffic, bad login attempts, and many other characteristics that allow a real compromise to blend in with “normal” behavior. The nature of many web applications running on these servers is such that encoding, obfuscation, file write operations, and even interaction with the underlying operating system are all natively supported, providing much of the functionality an attacker needs to do their bidding. Perimeter web servers can also be used after a compromise has occurred elsewhere in the network to retain remote access so that pesky two-factor VPNs can be avoided.

With all the reasons an attacker has to go after a web server, it’s a wonder that there isn’t a wealth of information available for detecting a server compromise by way of the application layer. Perhaps the sheer number of web servers, application frameworks, components, and web applications culminates in a situation that is difficult for any analyst to approach with a common set of indicators. While this is certainly no easy task, there are a few common areas that can be evaluated to detect a compromise with a high degree of success.

#1 Web shells

Often the product of vulnerable image uploaders and other poorly controlled file write operations, a web shell is simply a file that has been written to the web server’s file system for the purpose of executing commands. Web shells are most commonly text files with the appropriate extension to allow execution by the underlying application framework, obvious examples being commandshell.php or cmd.aspx. Viewing the text file generally reveals code that allows an attacker to interact with the underlying operating system via built-in calls such as the ProcessStartInfo() constructor in .NET or the system() call in PHP. The presence of a web shell on any web server is a clear indicator of compromise in virtually every situation.

Web Shell IOCs (Indicators of Compromise)

  • Scan all files in the web root for operating system calls, given the installed application frameworks (see the sketch after this list)
  • Check for the existence of executable files or web application code in upload directories or non-standard locations
  • Parse web server logs to detect commands being passed as GET requests or successive POST requests to suspicious web scripts
  • Flag new processes created by the web server process; when should it ever really launch cmd.exe?
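
A minimal sketch of the first two checks, assuming a Unix-style web root path and a hand-picked pattern list that would need tuning to the frameworks actually installed:

```python
# Walk a web root and flag files containing OS-call patterns, plus any
# executable code sitting in upload directories. The path and patterns are
# illustrative assumptions, not a complete signature set.
import os
import re

WEB_ROOT = "/var/www"
SUSPICIOUS = re.compile(
    rb"(ProcessStartInfo|Runtime\.getRuntime|shell_exec|passthru|\bsystem\s*\(|\beval\s*\()",
    re.IGNORECASE,
)
CODE_EXTS = (".php", ".aspx", ".asp", ".jsp")

for dirpath, _, filenames in os.walk(WEB_ROOT):
    for name in filenames:
        if not name.lower().endswith(CODE_EXTS):
            continue
        path = os.path.join(dirpath, name)
        try:
            data = open(path, "rb").read()
        except OSError:
            continue
        if SUSPICIOUS.search(data):
            print("possible web shell:", path)
        if "upload" in dirpath.lower():
            print("executable code in an upload directory:", path)
```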

#2 Administrative interfaces

Many web application frameworks and custom web applications have some form of administrative interface. These interfaces often suffer from password issues and other vulnerabilities that allow an attacker to gain access to this component. Once inside, an attacker can utilize all of the built-in functionality to further compromise the host or its users. While each application will have its own unique logging and available functionality, there are some common IOCs that should be investigated.

Admin interface IOCs

  • Unplanned deployment events, such as pushing out a .war file in a Java-based application
  • Modification of user accounts
  • Creation or editing of scheduled tasks or maintenance events
  • Unplanned configuration updates or backup operations
  • Failed or non-standard login events

#3 General attack activity

The typical web hacker will not fire up their favorite commercial security scanner to try to find ways into your web application; they tend to prefer a more manual approach. The ability for an attacker to quietly test your web application for exploitable vulnerabilities makes this a high-reward, low-risk activity. During this investigation the intruder will focus on the exploits that lead them to their goal of obtaining access. A keen eye can detect some of this activity and isolate it to a source.

General attack IOCs

  • Scan web server logs for HTTP 500 errors or errors handled within the application itself. Database errors for SQL injection, path errors for file write or read operations, and permission errors are some prime candidates to indicate an issue (a crude log-scanning sketch follows this list)
  • Known sensitive file access via the web server process. Investigate whether web configuration files like WEB-INF/web.xml, sensitive operating system files like /etc/passwd, or static-location operating system files like C:\WINDOWS\system.ini have been accessed via the web server process.
  • Advanced search engine operators in referrer headers. It is not common for a web visitor to access your site directly from an inurl:foo ext:bar Google search
  • Large quantities of 404 page-not-found errors with suspicious file names may indicate an attempt to access un-linked areas of an application
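
Here is a crude pass over an Apache-style combined access log illustrating several of those signals; the log path, the log-format assumption, and the thresholds are all placeholders to adapt:

```python
# Scan an access log for HTTP 500s, sensitive-file requests, search-operator
# referrers, and per-IP 404 bursts. Assumes the common "combined" log format.
import re
from collections import Counter

LOG = "/var/log/apache2/access.log"
LINE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(?:\S+) (\S+) [^"]*" (\d{3}) \S+ "([^"]*)"')
SENSITIVE = ("/etc/passwd", "web-inf/web.xml", "system.ini")

not_found = Counter()
with open(LOG, encoding="utf-8", errors="replace") as fh:
    for raw in fh:
        m = LINE.match(raw)
        if not m:
            continue
        ip, path, status, referrer = m.groups()
        if status == "500":
            print("server error:", ip, path)
        if any(s in path.lower() for s in SENSITIVE):
            print("sensitive file request:", ip, path)
        if "inurl:" in referrer or "ext:" in referrer:
            print("search-operator referrer:", ip, referrer)
        if status == "404":
            not_found[ip] += 1

for ip, count in not_found.most_common(5):
    if count > 100:  # arbitrary threshold for forced-browsing attempts
        print("possible forced browsing:", ip, count, "404s")
```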

Web application IOCs still suffer from the same issues as their more traditional counterparts in that the behavior of an attacker must be highly predictable to detect their activity. If we’re honest with ourselves, an attacker’s ability to avoid detection is only limited by their creativity and skill set. An advanced attacker could easily avoid creating most, if not all, of the indicators in this article. That said, many attackers are not as advanced as the media makes them out to be; even better, some of them are just plain lazy. Armed with the web-specific IOCs above, the next time you walk up to the unlocked front door of your ransacked web server, you might actually get to see who has their hand in your cookie jar.

Source:  techrepublic.com

Thinking outside the IT audit (check)box

Thursday, October 24th, 2013

More enterprises fight to move their programs from compliance management to security risk management

After years of security teams reaching into the regulatory compliance budget bucket to find the funding they need for their security efforts, some organizations are noticing that while the practice won short-term capital, it has come back to haunt them in the long run. And while it does sound cliché to hear that compliance does not equal security, many enterprises are taking steps to make sure their focus is on building resilient IT and not merely on passing an audit.

A recent report from the IT expert professional community Wisegate, Moving From Compliance to Risk-Based Security, found that the top driver for implementing a risk management program is to meet regulatory compliance requirements. Fewer than half of respondents cited the general threat landscape or an interest in getting in front of attackers.

That troubling attitude could explain why so many organizations remain in firefighting mode—jumping from one breach or security emergency to the next without any chance of getting in front of the risk.

While it can certainly be argued, and strongly so, that security wasn’t taken seriously in the days prior to regulatory mandates such as Sarbanes-Oxley, PCI DSS, and the myriad other regulations and data breach disclosure laws that followed, it’s also certainly tougher to make the strong case that, long term, organizations are better off today for their efforts. Disappointingly, many organizations are doing only the minimum of what needs to be done in order to pass the next audit and to be able to show management that their IT systems are compliant.

“The entire reason why these regulations were instituted was to try to make sure that organizations are more secure, but sadly what is often happening is checklist compliance,” says Candy Alexander, former CISO at Long Term Care Partners, LLC, and currently a member of the board of directors at the Information Systems Security Association.

Why is this? Because compliance is an easier sell to executives, say experts and CISOs. “If you actually look at the best business use of capital, for many executives it’s debatable if spending large amounts of capital on security makes sense, just from a pure return on investment perspective,” says Martin Sandren, enterprise architect, security at Blue Cross Blue Shield of Massachusetts.

There are a few companies that really “get it,” explains Alexander. “They know they are compliant, but they also know that they may, or may not, also be secure.”

These sentiments align with the findings in our eleventh annual Global Information Security Survey, conducted by PricewaterhouseCoopers, CSO, and CIO magazine. The survey of more than 9,600 organizations found that only 17% of respondents had what would be considered a mature risk management program. Such a program would consist of an organization having an overall information security strategy, employing a CISO or equivalent who reports to executive leadership, having measured and reviewed the effectiveness of security within the past year, and understanding exactly what type of security events have occurred in the past year within the organization.

So how do organizations move away from the compliance and checklist mentality to more comprehensive risk management? “It’s a big jump,” says Tim McCreight, CISO for the Government of Alberta, Canada. And it’s a jump that includes leaping from reacting to incidents when they occur and trying to force security controls onto the business to enabling the business to understand the risks and make the appropriate risk-based business decisions. “That requires the business to understand its risk tolerance levels,” says McCreight.

Sounds simple, but it’s anything but. How do security professionals get the business to not only care about IT security risks, but also understand the business consequences of accepting too much IT risk?

Sandren explains how it takes good security metrics. “We have a governing structure at Blue Cross Blue Shield that is based on a risk assessment that is completed and then signed off by the business,” he says. “It’s not an easy thing to do, and it’s always going to be hard for an organization to accept the cost of risk, and officials often either don’t want to accept any risk, or they want to just ignore the risk entirely.”

Mike Rothman, analyst and president at IT security firm Securosis, says that regardless of the difficulty, one of the best persuaders is data. “You are going to want to collect as much data and metrics as you can to present to the business. How you are reducing risk by responding more quickly and how investments in security are protecting business-critical assets will get their attention,” he says. “Executives love charts and numbers, and the more accurate and believable the better.”

McCreight agrees on the importance of winning the hearts and minds of the business as a way to move from a compliance-driven to an IT risk management-driven program. He adds that taking small steps to integrate security into business operations can go a long way as well. “Is the network security team aware of new projects as they arise? Is security brought in during the design phases of new IT initiatives? They need to be an integral part of the process,” he says.

What it comes down to is a significant shift in objectives: from compliance to making systems more resilient to attack. And, just like IT security itself, there’s no simple checklist on how to get there. “There’s no right or wrong way to get there. Each and every organization is going to be different because all have different risk profiles; they have different risk tolerance levels,” Alexander says. “The important thing is to work on getting there.”

Source:  csoonline.com

Cisco fixes serious security flaws in networking, communications products

Thursday, October 24th, 2013

Cisco Systems released software security updates Wednesday to address denial-of-service and arbitrary command execution vulnerabilities in several products, including a known flaw in the Apache Struts development framework used by some of them.

The company released new versions of Cisco IOS XR Software to fix an issue with handling fragmented packets that can be exploited to trigger a denial-of-service condition on various Cisco CRS Route Processor cards. The affected cards and the patched software versions available for them are listed in a Cisco advisory.

The company also released security updates for Cisco Identity Services Engine (ISE), a security policy management platform for wired, wireless, and VPN connections. The updates fix a vulnerability that could be exploited by authenticated remote attackers to execute arbitrary commands on the underlying operating system and a separate vulnerability that could allow attackers to bypass authentication and download the product’s configuration or other sensitive information, including administrative credentials.

Cisco also released updates that fix a known Apache Struts vulnerability in several of its products, including ISE. Apache Struts is a popular open-source framework for developing Java-based Web applications.

The vulnerability, identified as CVE-2013-2251, is located in Struts’ DefaultActionMapper component and was patched by Apache in Struts version 2.3.15.1, which was released in July.

The new Cisco updates integrate that patch into the Struts version used by Cisco Business Edition 3000, Cisco Identity Services Engine, Cisco Media Experience Engine (MXE) 3500 Series and Cisco Unified SIP Proxy.

“The impact of this vulnerability on Cisco products varies depending on the affected product,” Cisco said in an advisory. “Successful exploitation on Cisco ISE, Cisco Unified SIP Proxy, and Cisco Business Edition 3000 could result in an arbitrary command executed on the affected system.”

No authentication is needed to execute the attack on Cisco ISE and Cisco Unified SIP Proxy, but the flaw’s successful exploitation on Cisco Business Edition 3000 requires the attacker to have valid credentials or trick a user with valid credentials into executing a malicious URL, the company said.

“Successful exploitation on the Cisco MXE 3500 Series could allow the attacker to redirect the user to a different and possibly malicious website, however arbitrary command execution is not possible on this product,” Cisco said.

Security researchers from Trend Micro reported in August that Chinese hackers are attacking servers running Apache Struts applications by using an automated tool that exploits several Apache Struts remote command execution vulnerabilities, including CVE-2013-2251.

The existence of an attack tool in the cybercriminal underground for exploiting Struts vulnerabilities increases the risk for organizations using the affected Cisco products.

In addition, since patching CVE-2013-2251 the Apache Struts developers have further hardened the DefaultActionMapper component in more recent releases.

Struts version 2.3.15.2, which was released in September, made some changes to the DefaultActionMapper “action:” prefix that’s used to attach navigational information to buttons within forms in order to mitigate an issue that could be exploited to circumvent security constraints. The issue has been assigned the CVE-2013-4310 identifier.

Struts 2.3.15.3, released on Oct. 17, turned off support for the “action:” prefix by default and added two new settings called “struts.mapper.action.prefix.enabled” and “struts.mapper.action.prefix.crossNamespaces” that can be used to better control the behavior of DefaultActionMapper.

The Struts developers said that upgrading to Struts 2.3.15.3 is strongly recommended, but held back on releasing more details about CVE-2013-4310 until the patch is widely adopted.

It’s not clear when or if Cisco will patch CVE-2013-4310 in its products, given that the fix appears to involve disabling support for the “action:” prefix. If the Struts applications in those products use the “action:” prefix, the company might need to rework some of its code.

Source:  computerworld.com

US government releases draft cybersecurity framework

Thursday, October 24th, 2013


NIST comes out with its proposed cybersecurity standards, which outline how private companies can protect themselves against hacks, cyberattacks, and security breaches.

The National Institute of Standards and Technology released its draft cybersecurity framework for private companies and infrastructure networks on Tuesday. These standards are part of an executive order that President Obama proposed in February.

The aim of NIST’s framework (PDF) is to create guidelines that companies can use to beef up their networks and guard against hackers and cybersecurity threats. Adopting this framework would be voluntary for companies. NIST is a non-regulatory agency within the Department of Commerce.

The framework was written with the involvement of roughly 3,000 industry and academic experts, according to Reuters. It outlines ways that companies could protect their networks and act fast if and when they experience security breaches.

“The framework provides a common language for expressing, understanding, and managing cybersecurity risk, both internally and externally,” reads the draft standards. “The framework can be used to help identify and prioritize actions for reducing cybersecurity risk and is a tool for aligning policy, business, and technological approaches to managing that risk.”

Obama’s executive order in February was part of a government effort to get cybersecurity legislation in place, but the bill was put on hold after the National Security Agency’s surveillance program was revealed.

Some of the components in Obama’s order included: expanding “real time sharing of cyber threat information” to companies that operate critical infrastructure, asking NIST to devise cybersecurity standards, and proposing a “review of existing cybersecurity regulation.”

Critical infrastructure networks, banks, and private companies have increasingly been hit by cyberattacks over the past couple of years. For example, weeks after the former head of Homeland Security, Janet Napolitano, announced that she believed a “cyber 9/11” could happen “imminently” — crippling the country’s power grid, water infrastructure, and transportation networks — hackers hit the US Department of Energy. While no data was compromised, it did show that hackers were able to breach the computer system.

In May, Congress released a survey that claimed power utilities in the U.S. are under “daily” cyberattacks. Of about 160 utilities interviewed for the survey, more than a dozen reported “daily,” “constant,” or “frequent” attempted cyberattacks on their computer systems. While the data in the survey sounded alarming, none of the utilities reported any damage to their facilities or actual breaches of their systems — but rather attempts to hack their networks.

While companies are well aware that they need to secure their networks, many are wary of signing onto this voluntary framework. According to Reuters, some companies are worried that the standards could turn into requirements.

In an effort to get companies to adopt the framework, the government has been offering a slew of incentives, including cybersecurity insurance, priority consideration for grants, and streamlined regulations. These proposed incentives are a preliminary step for the government’s cybersecurity policy and have not yet been finalized.

NIST will now take public comments for 45 days and plans to issue the final cybersecurity framework in February 2014.

Source:  CNET

Healthcare.gov code allegedly two times larger than Facebook, Windows, and OS X combined

Thursday, October 24th, 2013

Healthcare.gov is in shambles. Republicans, on the heels of the House’s failure to gut the Affordable Care Act (ACA), are whipping up a firestorm of criticism. Health Secretary Kathleen Sebelius is facing calls for resignation, and critics — and satirists — are asking everyone from ex-fugitive John McAfee to Edward Snowden to weigh in on the issues.

The latest controversy revolves around The New York Times’ reporting that roughly 1 percent of Healthcare.gov — or 5 million lines of code — would need to be rewritten, putting the Web site’s total size at a mind-boggling 500 million lines of code — a scale that suggests months upon months of work.

Some are naturally skeptical of that ridiculous-sounding number — as well as the credibility of The New York Times’ source, who remains unnamed. Forums of programmers on sites like Reddit have postulated that, if true, it would have to involve mounds of bloated legacy code from past systems — making it one of the largest Web systems ever built. One developer, Alex Marchant of Orange, Calif., decided to draw an interesting comparison to point that out.

Marchant’s chart included Facebook.com, which he says nears 75 million lines of code, but it’s likely larger due to his source’s exclusion of back-end components; OS X 10.4 Tiger; and Windows XP. Still, Healthcare.gov at 500 million lines is more than two times larger than all three combined.

[Chart comparing the reported size of Healthcare.gov with Facebook.com, OS X 10.4 Tiger, and Windows XP. Credit: Alex Marchant]

For further perspective, makers of the multiplayer online game World of Warcraft regularly maintain 5.5 million lines of code for the game’s more than 7 million subscribers. How about the code that runs a gigantic, multinational bank? The Bank of New York Mellon, the oldest banking corporation in the US and the largest deposit bank in the world with close to $30 trillion in total assets, has a system built upon 112,500 Cobol programs, which amounts to 343 million lines of code.

Those examples are enough to make you think something is amiss in the 500 million figure. Still, it would come as no surprise if Healthcare.gov — plugged into thousands of outdated systems, containing countless redundancies, and rushed out the door with little technical oversight — were, in fact, the most bloated piece of software to ever hit the Web.

Source:  CNET

 

Network Solutions reports more DNS problems

Wednesday, October 23rd, 2013

Network Solutions said Tuesday it was trying to restore services after another DNS (Domain Name System) problem.

The latest issue comes two weeks after a pro-Palestinian hacking group redirected websites belonging to several companies whose records were held by Network Solutions, owned by the company Web.com.

Efforts to reach a company spokesperson were not immediately successful.

“We apologize for the issues our customers have experienced as a result of an incident on the Network Solutions DNS,” the company wrote on Facebook. “We’re in the process of restoring services, and we appreciate your patience as we work toward resolution.”

The DNS is a distributed address book for websites, translating domain names such as idg.com into the IP addresses that Web browsers use to reach them. In the past few months, hackers have targeted companies that register domain names and their partners.

A successful DNS hijacking attack can cause thousands of visitors to a high-profile website to be redirected to another site even though they’ve typed in or browsed to the correct domain name.
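
One simple self-check against this scenario: resolve your own domains on a schedule and alert when the answers leave a known-good set. In the sketch below the domain and addresses are placeholders, and a production check would also query several independent resolvers:

```python
# Resolve each monitored domain and alert on unexpected answers.
import socket

EXPECTED = {
    "www.example.com": {"93.184.216.34"},   # hypothetical known-good addresses
}

for domain, known_good in EXPECTED.items():
    answers = {info[4][0] for info in socket.getaddrinfo(domain, 80, proto=socket.IPPROTO_TCP)}
    unexpected = answers - known_good
    if unexpected:
        print(f"ALERT {domain}: resolving to unexpected address(es) {sorted(unexpected)}")
    else:
        print(f"ok {domain}: {sorted(answers)}")
```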

Avira, a security company affected by the attacks two weeks ago, said hackers gained access to its Network Solutions account via a fake password-reset request. Claiming responsibility was a group calling itself the “Kdms Team,” which also attacked the hosting provider LeaseWeb about two days before.

In a separate problem, Network Solutions said Monday some customers could not send email after it was blacklisted by a security company, Trend Micro, and other anti-spam services.

In July, Network Solutions fought off a distributed denial-of-service (DDoS) attack that knocked websites offline and caused problems with MySQL databases.

Source:  infoworld.com

Microsoft and Symantec push to combat key, code-signed malware

Wednesday, October 23rd, 2013

Code-signed malware hot spots said to be China, Brazil, South Korea

An alarming growth in malware signed with fraudulently obtained keys and code-signing certificates in order to trick users into downloading harmful code is prompting Microsoft and Symantec to push for tighter controls on the way the world’s certificate authorities issue the keys used in code-signing.

It’s not just stolen keys that are the problem in code-signed malware but “keys issued to people who aren’t who they say they are,” says Dean Coclin, senior director of business development in the trust services division at Symantec.

Coclin says China, Brazil, and South Korea are currently the hot spots where the problem of malware signed with certificates and keys obtained from certificate authorities is worst. “We need a uniform way to vet companies and individuals around the world,” says Coclin. He says that doesn’t really exist today for certificates used in code-signing, but Microsoft and Symantec are about to float a plan that might change that.

Code-signed malware appears to be aimed mostly at Microsoft Windows and Java, maintained by Oracle, says Coclin, adding that malicious code-signing of Android apps has also quickly become a lawless “Wild West.”

Under the auspices of the Certificate Authority/Browser Forum, an industry group in which Microsoft and Symantec are members, the two companies next month plan to put forward what Coclin describes as proposed new “baseline requirements and audit guidelines” that certificate authorities would have to follow to verify the identity of purchasers of code-signing certificates. Microsoft is keenly interested in this effort because “Microsoft is out to protect Windows,” says Coclin.

These new identity-proofing requirements will be detailed next month in the upcoming CAB Forum document from its Code-Signing Group. The underlying concept is that certificate authorities would have to follow more stringent practices related to proofing identity, Coclin says.

The CAB Forum includes the main Internet browser software makers, Microsoft, Google, Opera Software and The Mozilla Foundation, combined with many of the major certificate authorities, including Symantec’s  own certificate authority units Thawte and VeriSign, which earlier acquired GeoTrust.

Several other certificate authorities, including Comodo, GoDaddy, GlobalSign, Trustwave and Network Solutions, are also CAB Forum members, plus a number of certificate authorities based abroad, such as Chunghwa Telecom Co. Ltd., Swisscom, TURKTRUST and TAIWAN-CA, Inc. It’s part of a vast and larger commercial certificate authority global infrastructure with numerous sub-authorities operating in a root-based chain of trust. Outside this commercial certificate authority structure, governments and enterprises also use their own controlled certificate authority systems to issue and manage digital certificates for code-signing purposes.

Use of digital certificates for code-signing isn’t as widespread as that for SSL, for example, but as detailed in the new White Paper on the topic from the industry group called the CA Security Council, code-signing is intended to assure the identity of software publishers and ensure that the signed code has not been tampered with.
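
The mechanics behind that assurance are ordinary digital signatures. A minimal sketch using the third-party Python cryptography package is shown below; a freshly generated RSA keypair stands in for the publisher's CA-issued certificate chain, the code bytes are signed, and any tampering makes verification fail:

```python
# Toy illustration of code signing: sign some "code" bytes, then verify both
# the original and a tampered copy. In practice the public key arrives inside
# a CA-issued certificate rather than being generated on the spot.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

code = b"pretend these are the bytes of an installer"   # placeholder artifact
signature = private_key.sign(code, padding.PKCS1v15(), hashes.SHA256())

tampered = code + b" plus injected bytes"
for label, blob in (("original", code), ("tampered", tampered)):
    try:
        public_key.verify(signature, blob, padding.PKCS1v15(), hashes.SHA256())
        print(label, "-> signature valid")
    except InvalidSignature:
        print(label, "-> signature INVALID")
```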

Coclin, who is co-chair of the CAB Forum, says precise details about new anti-fraud measures for proofing the identity of those buying code-signing certificates from certificate authorities will be unveiled next month and subject to a 60-day comment period. These new proposed identity-proofing requirements will be discussed at a meeting planned in February at Google before any adoption of them.

The CAB Forum’s code-signing group is expected to espouse changes related to security that may impact software vendors and enterprises that use code-signing in their software development efforts so the CAB Forum wants maximum feedback before going ahead with its ideas on improving security in certificate issuance.

Coclin points out that commercial certificate authorities today must pass certain audits done by KPMG or PricewaterhouseCoopers, for example. In the future, if new requirements say certificate authorities have to verify the identity of customers in a certain way and they don’t do it properly, that information could be shared with an Internet browser maker like Microsoft, which makes the Internet Explorer browser. Because browsers play a central role in the certificate-based code-signing process, Microsoft, for example, could take action to ensure its browser and OS do not recognize certificates issued by certificate authorities that violate any new identity-proofing procedures. But how any of this shakes out remains to be seen.

McAfee, which unlike Symantec doesn’t have a certificate authority business unit and is not a member of the CAB Forum, last month at its annual user conference presented its own research about how legitimate certificates are increasingly being used to sign malware in order to trick victims into downloading malicious code.

“The certificates aren’t actually malicious — they’re not forged or stolen, they’re abused,” said McAfee researcher Dave Marcus. He said in many instances, according to McAfee’s research on code-signed malware, the attacker has gone out and obtained legitimate certificates from a company associated with top-root certificate authorities such as Comodo, Thawte or VeriSign. McAfee has taken to calling this the problem of “abused certificates,” an expression that’s not yet widespread in the industry as a term to describe the threat.

Coclin notes that one idea that would advance security would be a “code-signing portal” where a certificate authority could scan submitted code for signs of malware before signing it. He also said that using hardware-based keys and security modules to better protect the private keys used in the code-signing process is good practice.

Source:  networkworld.com

Google unveils an anti-DDoS platform for human rights organizations and media, but will it work?

Tuesday, October 22nd, 2013

Project Shield uses company’s infrastructure to absorb attacks

On Monday, Google announced a beta service that will offer DDoS protection to human rights organizations and media, in an effort to reduce the censorship that such attacks cause.

The announcement of Project Shield, the name given to the anti-DDoS platform, came during a presentation in New York at the Conflict in a Connected World summit. The gathering brought together security experts, hacktivists, dissidents, and technologists to explore the nature of conflict and how online tools can be both a source of protection and a source of harm when it comes to expression and information sharing.

“As long as people have expressed ideas, others have tried to silence them. Today one out of every three people lives in a society that is severely censored. Online barriers can include everything from filters that block content to targeted attacks designed to take down websites. For many people, these obstacles are more than an inconvenience — they represent full-scale repression,” the company explained in a blog post.

Project Shield uses Google’s massive infrastructure to absorb DDoS attacks. Enrollment in the service is invite-only at the moment, but it could be expanded considerably in the future. The service is free, but will follow page speed pricing should Google open enrollment and charge for it down the line.

However, while the service is sure to help smaller websites, such as those run by dissidents exposing corrupt regimes or media speaking out against those in power, Google makes no promises.

“No guarantees are made in regards to uptime or protection levels. Google has designed its infrastructure to defend itself from quite large attacks and this initiative is aimed at providing a similar level of protection to third-party websites,” the company explains in a Project Shield outline.

One problem Project Shield may inadvertently create is a change in tactics. If the common forms of DDoS attacks are blocked, then more advanced forms of attack will be used. Such an escalation has already happened for high value targets, such as banks and other financial services websites.

“Using Google’s infrastructure to absorb DDoS attacks is structurally like using a CDN (Content Delivery Network) and has the same pros and cons,” Shuman Ghosemajumder, VP of strategy at Shape Security, told CSO during an interview.

The types of attacks a CDN would solve, he explained, are network-based DoS and DDoS attacks. These are the most common, and the most well-known attack types, as they’ve been around the longest.

In 2000, flood attacks were in the 400Mb/sec range, but today’s attacks scale to regularly exceed 100Gb/sec, according to anti-DDoS vendor Arbor Networks. In 2010, Arbor started to see a trend led by attackers who were advancing DDoS campaigns, by developing new tactics, tools, and targets. What that has led to is a threat that mixes flood, application and infrastructure attacks in a single, blended attack.

“It is unclear how effective [Project Shield] would be against Application Layer DoS attacks, where web servers are flooded with HTTP requests. These represent more leveraged DoS attacks, requiring less infrastructure on the part of the attacker, but are still fairly simplistic. If the DDoS protection provided operates at the application layer, then it could help,” Ghosemajumder said.

“What it would not protect against is Advanced Denial of Service attacks, where the attacker uses knowledge of the application to directly attack the origin server, databases, and other backend systems which cannot be protected against by a CDN and similar means.”

Google hasn’t mentioned directly the number of sites currently being protected by Project Shield, so there is no way to measure the effectiveness of the program from the outside.

In related news, Google also released a second DDoS-related tool on Monday, made possible by data collected by Arbor Networks. The Digital Attack Map, as the tool is called, is a monitoring system that allows users to see historical DDoS attack trends and connect them to related news events on any given day. The data is also shown live, and can be granularly sorted by location, time, and attack type.

Source:  csoonline.com

Windows RT 8.1 update temporarily pulled due to a “situation”

Monday, October 21st, 2013

Some devices left unbootable after installing the update.

The Windows RT 8.1 update for devices such as Microsoft’s Surface RT has been removed from the Windows Store temporarily, after a “situation” prevented a “limited number of users” from being able to upgrade successfully.

The problem appears to be that the update is damaging certain boot data, causing affected machines to blue screen on startup. The issue is recoverable if you’ve created a recovery USB key (or have access to a machine that can create one), but Microsoft currently appears to have no easy way to create a suitable USB key from non-ARM machines.

To call this embarrassing for Microsoft is something of an understatement. While x86 PCs have extraordinary diversity in terms of hardware, software, and drivers—all things that can prevent straightforward upgrading—the Windows RT devices are extremely limited in this regard. Upgrading Windows RT tablets should be absolutely bulletproof. It’s very disappointing that it isn’t.

Update: Partially alleviating the problem, Microsoft has released a system image for Windows RT 8.1, so as long as you have another PC and a USB key, it should now be relatively easy to recover from broken upgrades.

Source:  arstechnica.com

VMware identifies vulnerabilities for ESX, vCenter, vSphere, issues patches

Friday, October 18th, 2013

VMware today said that its popular virtualization and cloud management products have security vulnerabilities that could lead to denials of service for customers using ESX and ESXi hypervisors and management platforms including vCenter Server Appliance and vSphere Update Manager.

To exploit the vulnerability an attacker would have to intercept and modify management traffic. If successful, the hacker would compromise the hostd-VMDBs, which would lead to a denial of service for parts of the program.

VMware released a series of patches that resolve the issue. More information about the vulnerability and links to download the patches can be found here.

The vulnerability exists in vCenter 5.0 for versions before update 3; and ESX versions 4.0, 4.1 and 5.0 and ESXi versions 4.0 and 4.1, unless they have the latest patches.

Users can also reduce the likelihood of the vulnerability causing a problem by running vSphere components on an isolated management network to ensure that traffic does not get intercepted.

Source:  networkworld.com

Motorola looking to exit wireless LAN business – sources

Friday, October 18th, 2013

Motorola Solutions Inc is exploring the sale of its underperforming wireless LAN business, which has grappled with declining share in a market dominated by rivals such as Cisco Systems Inc, people familiar with the matter said.

An exit from the wireless LAN market would come as Motorola, the provider of data communications and telecommunications equipment, seeks to focus on its core government and public safety division.

Motorola Solutions, which succeeded Motorola Inc following the spin-off of the mobile phones business into Motorola Mobility in 2011, provides communication services for the U.S. government and other enterprise customers. Motorola Mobility was later sold to Google Inc for $12.5 billion.

The wireless local area network unit, which is under Motorola Solutions’ enterprise division, has struggled amid competition from top players including Aruba Networks Inc and Hewlett-Packard Co, as well as smaller players such as Ubiquiti Networks Inc.

“It’s a tough market. It’s being squeezed from the top by Cisco and from the bottom by Ubiquiti,” said one of the people familiar with the matter, adding that the talks are at an early stage.

The people asked not to be named because the matter is confidential. Motorola declined to comment.

The Motorola unit’s revenues declined by a mid-single digit percentage point in the second quarter, following a massive 30 percent decline in the first quarter. The business had $216.7 million in 2012 revenues, roughly 8 percent of the $2.71 billion enterprise business.

The global enterprise wireless LAN business is expected to be a $4 billion market for 2013, according to research firm Dell’Oro group. Cisco is the market leader, with nearly 55 percent of the market segment revenue, followed by Aruba at just over 12 percent.

The value of Motorola’s wireless LAN business could not be determined.

Motorola Solutions, which has a market value of just over $16 billion, dominates the two-way radio market with its land-mobile-radio systems and public-safety products, and the U.S. government is its largest customer. The company’s government business brought in nearly 70 percent of its total revenue last year.

The attempt to shed the wireless LAN unit follows other divestitures Motorola Solutions has undertaken since 2011.

The company sold its wireless network assets to Nokia Siemens Networks for $975 million in 2011, and later that year sold small broadband networks units to buyout firm Vector Capital for an undisclosed sum.

Motorola Solutions cut its revenue forecast in July for the second time in three months because of declines in its enterprise business.

Earlier this year, the company’s chief executive said that revenue declines in the wireless LAN business were due to a “failure to execute” rather than market forces, and said the company had not done a “good enough job” selling wireless LAN products as part of its managed service offering, rather than as stand-alone products.

“We have consistently lost share in WLAN and we just haven’t executed that well. Now it is a little bit more embryonic in terms of our strategic change from product to managed services but we’ve got to do a better job,” Chief Executive Greg Brown told analysts in April.

Meanwhile, Motorola Solutions is offering a voluntary buyout program to select employees in North America, or less than 20 percent of the company’s global workforce, a spokesperson for the company said on Friday. A few hundred employees are expected to participate in the separation program, the spokesperson added.

Source:  Reuters

Cisco says controversial NIST crypto, a potential NSA backdoor, ‘not invoked’ in products

Thursday, October 17th, 2013

Controversial crypto technology known as Dual EC DRBG, thought to be a backdoor for the National Security Agency, ended up in some Cisco products as part of their code libraries. But Cisco says they cannot be used because it chose other crypto as an operational default which can’t be changed.

Dual EC DRBG, or Dual Elliptic Curve Deterministic Random Bit Generator, is a random-number standard from the National Institute of Standards and Technology; a crypto toolkit from RSA that implemented it is thought to have been one main way the crypto ended up in hundreds of vendors’ products.

Because Cisco is known to have used the BSAFE crypto toolkit, the company has faced questions about where Dual EC DRBG may have ended up in the Cisco product line. In a Cisco blog post today, Anthony Grieco, principal engineer at Cisco, tackled this topic in a notice about how Cisco chooses crypto.

“Before we go any further, I’ll go ahead and get it out there: we don’t use the Dual_EC_DRBG in our products. While it is true that some of the libraries in our products can support the DUAL_EC_DRBG, it is not invoked in our products.”

Grieco wrote that Cisco, like most tech companies, uses cryptography in nearly all its products, if only for secure remote management.

“Looking back at our DRBG decisions in the context of these guiding principles, we looked at all four DRBG options available in NIST SP 800-90. As none had compelling interoperability or legal implementation implications, we ultimately selected the Advanced Encryption Standard Counter mode (AES-CTR) DRBG as our default.”

Grieco stated this was “because of our comfort with the underlying implementation, the absence of any general security concerns, and its acceptable performance. Dual_EC_DRBG was implemented but wasn’t seriously considered as the default given the other good choices available.”
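
For readers unfamiliar with the construction Cisco chose, the core of a counter-mode DRBG is simply AES encryption of an incrementing counter under a secret key. The sketch below (using the third-party Python cryptography package) is a deliberately simplified toy, not an SP 800-90A-compliant CTR_DRBG; it omits the derivation function, reseeding, and prediction resistance:

```python
# Toy counter-mode generator: derive a key and counter from seed material,
# then produce output by encrypting successive counter values with AES.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

class ToyCtrDrbg:
    def __init__(self, seed: bytes):
        assert len(seed) >= 48, "need 32 bytes of key + 16 bytes of counter"
        self._key = seed[:32]                      # AES-256 key
        self._counter = int.from_bytes(seed[32:48], "big")

    def generate(self, n: int) -> bytes:
        out = bytearray()
        while len(out) < n:
            self._counter = (self._counter + 1) % (1 << 128)
            block = self._counter.to_bytes(16, "big")
            enc = Cipher(algorithms.AES(self._key), modes.ECB()).encryptor()
            out += enc.update(block) + enc.finalize()
        return bytes(out[:n])

drbg = ToyCtrDrbg(os.urandom(48))      # seed from the OS entropy source
print(drbg.generate(32).hex())
```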

Grieco said the DRBG choice that Cisco made “cannot be changed by the customer.”

Faced with the Dual EC DRBG controversy, which was triggered by the revelations about the NSA by former NSA contractor Edward Snowden, NIST itself has re-opened comments about this older crypto standard.

“The DRBG controversy has brought renewed focus on the crypto industry and the need to constantly evaluate cryptographic algorithm choices,” Grieco wrote in the blog today. “We welcome this conversation as an opportunity to improve security of the communications infrastructure. We’re open to serious discussions about the industry’s cryptographic needs, what’s next for our products, and how to collectively move forward.” Cisco invited comment on that online.

Grieco concluded, “We will continue working to ensure our products offer secure algorithms, and if they don’t, we’ll fix them.”

Source:  computerworld.com

Ransomware comes of age with unbreakable crypto, anonymous payments

Thursday, October 17th, 2013

[Screenshot of the CryptoLocker ransom demand: http://cdn.arstechnica.net/wp-content/uploads/2013/10/ScreenShot1-640x498.jpg]

Malware that takes computers hostage until users pay a ransom is getting meaner, and thanks to the growing prevalence of Bitcoin and other digital payment systems, it’s easier than ever for online crooks to capitalize on these “ransomware” schemes. If this wasn’t already abundantly clear, consider the experience of Nic, an Ars reader who fixes PCs for a living and recently helped a client repair the damage inflicted by a particularly nasty title known as CryptoLocker.

It started when an end user in the client’s accounting department received an e-mail purporting to come from Intuit. Yes, the attached zip archive with an executable inside should have been a dead giveaway that this message was malicious and was in no way affiliated with Intuit. But accounting employees are used to receiving e-mails from financial companies. When the recipient clicked on it, he saw a white box flash briefly on his screen but didn’t notice anything else out of the ordinary. He then locked his computer and attended several meetings.

Within a few hours, the company’s IT department received word of a corrupt file stored on a network drive that was available to multiple employees, including the one who received the malicious e-mail. A quick investigation soon uncovered other corrupted files, most or all of which had been accessed by the accounting employee. By the time CryptoLocker had run its course, hundreds of gigabytes worth of company data was no longer available.

“After reading about the ransomware on reddit earlier this week, we guessed [that it was] what we were dealing with, as all the symptoms seemed to be popping up,” Nic, who asked that his last name not be published, wrote in an e-mail to Ars. “We went ahead and killed the local network connection on the machine in question and we were immediately presented with a screenshot letting us know exactly what we were dealing with.”

According to multiple participants in the month-long discussion, CryptoLocker is true to its name. It uses strong cryptography to lock all files that a user has permission to modify, including those on secondary hard drives and network storage systems. Until recently, few antivirus products detected the ransomware until it was too late. By then, victims were presented with a screen like the one displayed on the computer of the accounting employee, which is pictured above. It warns that the files are locked using a 2048-bit version of the RSA cryptographic algorithm and that the data will be forever lost unless the private key is obtained from the malware operators within three days of the infection.
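To see why victims cannot simply recover the data, consider the hybrid public-key pattern that the ransom note’s description of a 2048-bit RSA lock implies. The Python sketch below is an illustration of that general pattern only, based on assumptions rather than CryptoLocker’s actual code: files are encrypted with a random symmetric key, and that key is in turn encrypted with the attackers’ RSA public key, so only the holder of the private key can undo it.

    # Illustration only (assumed hybrid RSA + AES pattern, not CryptoLocker's code):
    # a random symmetric key encrypts the data, and an RSA public key encrypts that
    # symmetric key. Without the matching private key, neither can be recovered.
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.fernet import Fernet

    # Key pair held by the operators; only the public half is ever distributed.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    file_key = Fernet.generate_key()                  # per-victim symmetric key
    ciphertext = Fernet(file_key).encrypt(b"spreadsheet contents")

    wrapped_key = public_key.encrypt(                 # recoverable only with the private key
        file_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )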

“Nobody and never will be able to restore files”

“The server will destroy the key after a time specified in this window,” the screen warns, displaying a clock that starts with 72:00:00 and counts down with each passing second. “After that, nobody and never will be able to restore files. To obtain the private key for this computer, which will automatically decrypt files, you need to pay 300 USD / 300 EUR / similar amount in another currency.”

None of the reddit posters reported any success in breaking the encryption. Several also said they had paid the ransom and received a key that worked as promised. Full backup files belonging to Nic’s clients were about a week old at the time that CryptoLocker first took hold of the network. Nic advised them to comply with the demand. The ransomware operators delivered a key, and about 24 hours later, some 400 gigabytes of data was restored.

CryptoLocker accepts payment in Bitcoins or through MoneyPak payment cards, according to screenshots posted by victims.

The outcome hasn’t been as happy for other CryptoLocker victims. Whitehats who tracked the ransomware eventually took down some of the command and control servers that the operators relied on. As a result, people on reddit reported, some victims who paid the ransom were unable to receive the unique key needed to unlock files on their computer. The inability to undo the damage hit some victims particularly hard. Because CryptoLocker encrypted all files that an infected computer had access to, the ransomware in many cases locked the contents of backup disks that were expected to be relied upon in the event that the main disks failed. (The threat is a graphic example of the importance of “cold,” or offline backup, a backup arrangement that prevents data from being inadvertently overwritten.)

Several people have reported that the 72-hour deadline is real and that the only way it can be extended is by setting a computer’s BIOS clock back in time. Once the clock runs out, the malware uninstalls itself. Reinfecting a machine does nothing to bring back the timer or restore the old encrypted session.

Earlier this year, researchers from Symantec who infiltrated the servers of one ransomware syndicate conservatively estimated that its operators were easily able to clear $5 million per year. No wonder CryptoLocker has had such a long run. As of last week, more than three weeks after the reddit thread was first posted, it was still generating five to 10 new posts per day. In another testament to the prevalence and staying power of CryptoLocker, researchers from security firms TrendMicro and Emsisoft have published technical analyses of it.

“This bug is super scary and could really wipe the floor with lots of small businesses that don’t have the best backup practices,” Nic observed. “Given the easy money available to scam operators, it’s not hard to see why.”

Source:  arstechnica.com

Wireless networks that follow you around a room, optimize themselves and even talk to each other out loud

Tuesday, October 8th, 2013

Graduate students at the MIT Computer Science and Artificial Intelligence Laboratory showed off their latest research at the university’s Wireless retreat on Monday, outlining software-defined MIMO, machine-generated TCP optimization, and a localized wireless networking technique that works through sound.

Swarun Kumar’s presentation on OpenRF – a Wi-Fi architecture designed to allow multiple access points to avoid mutual interference and focus signals on active clients – detailed how commodity hardware can be used to take advantage of features otherwise restricted to more specialized devices.

There were several constraints in the 802.11n wireless standard that had to be overcome, Kumar said, including a limit on the total number of bits per subcarrier signal that could be manipulated, as well as a restriction confining that manipulation to one out of every two such signals.

Simply disabling the Carrier Sense restrictions, however, proved an incomplete solution.

“Access points often send these beacon packets, which are meant for all clients in a network … you cannot null them at any point if you’re a client. Unfortunately, these packets will now collide” in the absence of Carrier Sense, he said.

The solution – which involved two separate transmit queues – enabled OpenRF to automatically apply its optimal settings across multiple access points, distributing the computational workload across the access points, rather than having to rely on a beefy central controller.

Kumar said the system can boost TCP throughput by a factor of 1.6 compared to bare-bones 802.11n.

*

Keith Winstein attacked the problem of TCP throughput slightly differently, however. Using a specialized algorithm called Remy – into which users can simply input network parameters and desired performance standards – he said that networks can essentially determine the best ways to configure themselves on their own.

“So these are the inputs, and the output is a congestion control algorithm,” he said. “Now this is not an easy process – this is replacing a human protocol designer. Now, it costs like $10 to get a new protocol on Amazon EC2.”

Remy works via the heuristic principle of concentrating its efforts on the use cases where a small change in the rules results in a major change in the outcome, allowing it to optimize effectively and to shift gears quickly if network conditions change.

“Computer generated end-to-end algorithms can actually outperform human generated in-network algorithms, and in addition, human generated end-to-end algorithms,” said Winstein.

Even though Remy wasn’t designed or optimized to handle wireless networks, it still handily outperforms human-generated competition, he added.
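As a rough intuition for what “replacing a human protocol designer” means in practice, the sketch below searches over the parameters of a trivial rate-control rule against a crude one-link model and keeps whichever candidate scores best. The link model, the rule, and the utility function are all invented for illustration; Remy’s actual algorithm and objectives are far more sophisticated.

    # Toy parameter search for a congestion-control rule (assumption: this is an
    # illustration of the idea of machine-generated protocols, not Remy itself).
    import itertools

    def simulate(increase, decrease, capacity=10.0, steps=2000):
        """Score one candidate rule on a single bottleneck link.

        The sender raises its rate by `increase` each step and multiplies it by
        `decrease` when the queue overflows. Score rewards throughput, penalizes delay.
        """
        rate, queue, delivered, delay = 1.0, 0.0, 0.0, 0.0
        for _ in range(steps):
            queue += rate
            served = min(queue, capacity)
            delivered += served
            queue -= served
            delay += queue / capacity
            if queue > 2 * capacity:      # crude congestion signal
                rate *= decrease
            else:
                rate += increase
        return delivered - 0.5 * delay    # simple utility: throughput minus delay penalty

    best = max(
        itertools.product([0.1, 0.5, 1.0, 2.0], [0.3, 0.5, 0.7, 0.9]),
        key=lambda params: simulate(*params),
    )
    print("best (increase, decrease):", best)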

*

Peter Iannucci is a researcher looking into highly localized ways of providing wireless Internet, which he refers to as room area networks. Having dismissed a number of technologies as insufficient – Bluetooth was too clunky, NFC had limited uptake – he eventually settled on sound.

Iannucci’s acoustic network – which he has dubbed Blurt – uses high-frequency sounds to transmit the ones and zeroes of a network connection. It’s well-suited for a network confined by design to a small space.

“Acoustic networks provide great low-leakage properties, since doors and walls are intentionally sound-absorbent,” he said. “[They] work over moderate distances, using existing devices, and they don’t require any setup for ad hoc communications.”

Iannucci acknowledges that Blurt isn’t without its problems. Given that sound waves move about a million times slower than radio waves, speed is an issue – he said that Blurt can handle about 200 bits per second when using frequencies inaudible to humans, with more speed possible only at the cost of an audible whirring chirp, reminiscent of old telephone modems.
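As a rough sketch of how bits can ride on near-ultrasonic audio, the snippet below keys between two tones around 18-19 kHz at roughly 200 bits per second. The frequencies, symbol length, and modulation are assumptions chosen to match the figures in the article, not Blurt’s actual design.

    # Sketch of tone-keyed acoustic data (assumed FSK-style scheme, not Blurt's
    # actual modulation): each bit becomes a 5 ms burst of one of two tones.
    import numpy as np

    SAMPLE_RATE = 44100        # Hz, a standard audio hardware rate
    SYMBOL_SECONDS = 0.005     # 5 ms per bit -> roughly 200 bits per second
    F0, F1 = 18000, 19000      # near-ultrasonic tones for bit 0 and bit 1

    def modulate(bits):
        """Return a float32 waveform encoding the bits as alternating tones."""
        t = np.arange(int(SAMPLE_RATE * SYMBOL_SECONDS)) / SAMPLE_RATE
        tones = {0: np.sin(2 * np.pi * F0 * t), 1: np.sin(2 * np.pi * F1 * t)}
        return np.concatenate([tones[b] for b in bits]).astype(np.float32)

    waveform = modulate([1, 0, 1, 1, 0, 0, 1, 0])   # one byte of payload
    # A receiver would band-pass filter around F0 and F1 and compare the energy in
    # each band per symbol; the waveform itself could be written out as a .wav file.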

But that’s really not the point; the idea would be more to do things like verify that users of a business’s free Wi-Fi are actually sitting in the restaurant, or to handle other tasks involving heavily location-dependent network services.

Source: networkworld.com

802.11ac ‘gigabit Wi-Fi’ starts to show potential, limits

Monday, October 7th, 2013

Vendor tests and very early 802.11ac customers provide a reality check on “gigabit Wi-Fi” but also confirm much of its promise.

Vendors have been testing their 11ac products for months, yielding data that show how 11ac performs and what variables can affect performance. Some of the tests are under ideal laboratory-style conditions; others involve actual or simulated production networks. Among the results: consistent 400Mbps to 800Mbps throughput for 11ac clients in best-case situations, higher throughput than 11n as range increases, more clients serviced by each access point, and a boost in performance for existing 11n clients.

Wireless LAN vendors are stepping up their 11ac product introductions; among them are Aerohive, Aruba Networks, Cisco (including its Meraki cloud-based offering), Meru, Motorola Solutions, Ruckus, Ubiquiti, and Xirrus.

The IEEE 802.11ac standard does several things to triple the throughput of 11n. It builds on some of the technologies introduced in 802.11n; makes mandatory some 11n options; offers several ways to dramatically boost Wi-Fi throughput; and works solely in the under-used 5GHz band.

It’s a potent combination. “We are seeing over 800Mbps on the new Apple 11ac-equipped Macbook Air laptops, and 400Mbps on the 11ac phones, such as the new Samsung Galaxy S4, that [currently] make up the bulk of 11ac devices on campus,” says Mike Davis, systems programmer, University of Delaware, Newark, Delaware.

A long-time Aruba Networks WLAN customer, the university has installed 3,700 of Aruba’s new 11ac access points on campus this summer, in a new engineering building, two new dorms, and some large auditoriums. Currently, there are on average about 80 11ac clients online with a peak of 100, out of some 24,000 Wi-Fi clients on campus.

The 11ac network seems to bear up under load. “In a limited test with an 11ac Macbook Air, I was able to sustain 400Mbps on an 11ac access point that was loaded with over 120 clients at the time,” says Davis. Not all of the clients were “data hungry,” but the results showed “that the new 11ac access points could still supply better-than-11n data rates while servicing more clients than before,” Davis says.

The maximum data rates for 11ac are highly dependent on several variables. One is whether the 11ac radios are using 80 MHz-wide channels (11n got much of its throughput boost by being able to use 40 MHz channels). Another is whether the radios are able to use the 256 QAM modulation scheme, compared to the 64 QAM for 11n. Both of these depend on how close the 11ac clients are to the access point. Too far, and the radios “step down” to narrower channels and lower modulations.

Another variable is the number of “spatial streams,” a technology introduced with 11n, supported by the client and access point radios. Chart #1, “802.11ac performance based on spatial streams,” shows the download throughput performance.

[Chart #1: 802.11ac performance based on spatial streams]

In perfect conditions, close to the access point, a three-stream 11ac radio can achieve the maximum raw data rate of 1.3Gbps. But no users will actually realize that in terms of useable throughput.

“Typically, if the client is close to the access point, you can expect to lose about 40% of the overall raw bit rate due to protocol overhead – acknowledgements, setup, beaconing and so on,” says Matthew Gast, director of product management for Aerohive Networks, which just announced its first 11ac products, the AP370 and AP390. Aerohive incorporates controller functions in a distributed access point architecture and provides a cloud-based management interface for IT groups.

“A single [11ac] client that’s very close to the access point in ideal conditions gets very good speed,” says Gast. “But that doesn’t reflect reality: you have electronic ‘noise,’ multiple contending clients, the presence of 11n clients. In some cases, the [11ac] speeds might not be much higher than 11n.”

The third key variable, as noted, is the number of spatial streams supported by both access points and clients. Most of the new 11ac access points will support three streams, usually with three transmit and three receive antennas. But clients will vary. At the University of Delaware, the new Macbook Air laptops support two streams, but the new Samsung Galaxy S4 and HTC One phones support one stream, via Broadcom’s BCM4335 11ac chipset.

Tests by Broadcom found that a single 11n data stream over a 40 MHz channel can deliver up to 60Mbps. By comparison, single-stream 11ac in an 80 MHz channel is “starting at well over 250Mbps,” says Chris Brown, director of business development for Broadcom’s wireless connectivity unit. Single-stream 11ac will max out at about 433Mbps.
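Those maximums follow directly from the 802.11ac PHY arithmetic. The quick Python check below reproduces the roughly 433Mbps single-stream and 1.3Gbps three-stream raw rates from standard parameters (an 80 MHz channel’s 234 data subcarriers, 256 QAM, rate-5/6 coding, 3.6-microsecond symbols with the short guard interval); as noted above, usable throughput lands well below these figures once protocol overhead is subtracted.

    # Back-of-the-envelope 802.11ac raw PHY rate, using standard parameters rather
    # than any vendor's measurements.
    def phy_rate_mbps(streams, data_subcarriers=234, bits_per_symbol=8,
                      coding_rate=5/6, symbol_us=3.6):
        # 80 MHz channel: 234 data subcarriers; 256 QAM: 8 bits per subcarrier;
        # rate-5/6 coding; 3.6 us OFDM symbols with the short guard interval.
        bits_per_ofdm_symbol = data_subcarriers * bits_per_symbol * coding_rate
        return streams * bits_per_ofdm_symbol / symbol_us   # megabits per second

    print(round(phy_rate_mbps(1)))        # ~433 Mbps, the single-stream maximum
    print(round(phy_rate_mbps(3)))        # ~1300 Mbps, the 1.3Gbps three-stream figure
    print(round(phy_rate_mbps(3) * 0.6))  # ~780 Mbps usable after ~40% protocol overhead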

These characteristics yield some interesting results. One is that the throughput at any given distance from the access point is much better with 11ac than with 11n. “Even at 60 meters, single-stream 11ac outperforms all but the 2×2 11n at 40 MHz,” Brown says.

Another result is that 11ac access points can service a larger number of clients than 11n access points.

“We have replaced several dozen 11n APs with 11ac in a high-density lecture hall, with great success,” says University of Delaware’s Mike Davis. “While we are still restricting the maximum number of clients that can associate with the new APs, we are seeing them maintain client performance even as the client counts almost double from what the previous generation APs could service.”

Other features of 11ac help to sustain these capacity gains. Transmit beam forming (TBF), which was an optional feature in 11n, is mandatory and standardized in 11ac. “TBF lets you ‘concentrate’ the RF signal in a specific direction, for a specific client,” says Mark Jordan, director, technical marketing engineering, Aruba Networks. “TBF changes the phasing slightly to allow the signals to propagate at a higher effective radio power level. The result is a vastly improved throughput-over-distance.”
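The gain from beamforming can be sketched with textbook antenna-array math. The snippet below computes the power gain of a three-antenna array whose transmit phases are aligned toward one direction; the array geometry and angles are illustrative assumptions, not Aruba’s implementation.

    # Uniform linear array illustration of transmit beamforming (textbook math, not
    # any vendor's algorithm): phasing three antennas toward a client adds their
    # signals coherently in that direction.
    import numpy as np

    N = 3                            # transmit antennas
    D = 0.5                          # antenna spacing in wavelengths
    target = np.deg2rad(30)          # direction of the intended client

    # Phase weights that align the three signals at the target angle
    weights = np.exp(-2j * np.pi * D * np.arange(N) * np.sin(target))

    def gain(angle):
        steering = np.exp(2j * np.pi * D * np.arange(N) * np.sin(angle))
        return abs(np.dot(weights, steering)) ** 2 / N    # normalized power gain

    print(round(gain(target), 2))            # 3.0 -> about 4.8 dB toward the client
    print(round(gain(np.deg2rad(-50)), 2))   # much lower gain off-target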

A second feature is low density parity check (LDPC), which is a technique to improve the sensitivity of the receiving radio, in effect giving it better “hearing.”

The impact in Wi-Fi networks will be significant. Broadcom did extensive testing in a network set up in an office building, using both 11n and 11ac access points and clients. It specifically tested 11ac data rates and throughput with beam forming and low density parity check switched off and on, according to Brown.

Tests showed that 11ac connections with both TBF and LDPC turned on increasingly and dramatically outperformed 11n – and even 11ac with both features turned off – as the distance between client and access point increased. For example, at one test point, an 11n client achieved 32Mbps. At the same point, the 11ac client with TBF and LDPC turned “off” achieved about the same. But when both were turned “on,” the 11ac client soared to 102Mbps, more than three times the previous throughput.

Aruba found similar results. Its single-stream Galaxy S4 smartphone reached 238Mbps TCP downstream throughput at 15 feet, 235Mbps at 30 feet, and 193Mbps at 75 feet. At 120 feet, it was still 154Mbps. For the same distances, upstream throughput was 235Mbps, 230Mbps, 168Mbps, and 87Mbps.

“We rechecked that several times, to make sure we were doing it right,” says Aruba’s Jordan. “We knew we couldn’t get the theoretical maximums. But now, we can support today’s clients with all the data they demand. And we can do it with the certainty of such high rates-at-range that we can come close to guaranteeing a high quality [user] experience.”

There are still other implications with 11ac. Because of the much higher up and down throughput, 11ac mobile devices get on and off the Wi-Fi channel much faster compared to 11n, drawing less power from the battery. The more efficient network use will mean less “energy per bit,” and better battery life.

A related implication is that because this all happens much faster with 11ac, there’s more time for other clients to access the channel. In other words, network capacity increases by up to six times, according to Broadcom’s Brown. “That frees up time for other clients to transmit and receive,” he says.

That improvement can be used to reduce the number of access points covering a given area: in the Broadcom office test area, four Cisco 11n access points provided connectivity. A single 11ac access point could replace them, says Brown.

But more likely, IT groups will optimize 11ac networks for capacity, especially as the number of smartphones, tablets, laptops and other gear are outfitted with 11ac radios.

Even 11n clients will see improvement in 11ac networks, as University of Delaware has found.

“The performance of 11n clients on the 11ac APs has probably been the biggest, unexpected benefit,” says Mike Davis. “The 11n clients still make up 80% of the total number of clients and we’ve measured two times the performance of 11n clients on the new 11ac APs over the last generation [11n] APs.”

Wi-Fi uses a variant of Ethernet’s carrier sense multiple access known as collision avoidance (CSMA/CA), which essentially checks to see if a channel is being used and, if so, backs off, waits and tries again. “If we’re spending less time on the net, then there’s more airtime available, and so more opportunities for devices to access the media,” says Brown. “More available airtime translates into fewer collisions and backoffs. If an overburdened 11n access point is replaced with an 11ac access point, it will increase the network’s capacity.”
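The channel-access behavior Brown describes can be sketched with the contention-window arithmetic Wi-Fi uses. The snippet below is a highly idealized model (slotted time, no real DCF state machine), showing only how the random backoff window doubles after each failed attempt, which is why freeing up airtime means fewer collisions and backoffs.

    # Idealized CSMA/CA-style backoff (assumption: a toy model, not the full
    # 802.11 DCF): a station that loses contention waits a random number of
    # slots, and the contention window doubles after each collision.
    import random

    def backoff_slots(retries, cw_min=15, cw_max=1023):
        """Random backoff with the contention window doubling per retry."""
        cw = min(cw_max, (cw_min + 1) * (2 ** retries) - 1)
        return random.randint(0, cw)

    for attempt in range(4):
        print(f"attempt {attempt}: wait {backoff_slots(attempt)} slots before transmitting")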

In Aruba’s in-house testing, a Macbook Pro laptop with a three-stream 11n radio was connected first to the 11n Aruba AP-135 and then to the 11ac AP-225. As shown in Chart #2, “11ac will boost throughput in 11n clients,” the laptop’s performance was vastly better on the 11ac access point, especially as the range increased.

[Chart #2: 11ac will boost throughput in 11n clients]

These improvements are part of “wave 1” 11ac. In wave 2, starting perhaps later in 2014, new features will be added to 11ac radios: support for four to eight data streams, explicit transmit beam forming, an option for 160 MHz channels, and “multi-user MIMO,” which lets the access point talk to more than one 11ac client at the same time.

Source:  networkworld.com

Aruba announces cloud-based Wi-Fi management service

Tuesday, October 1st, 2013

Competes with Cisco-owned Meraki and Aerohive

Aruba Networks today announced a new Aruba Central cloud-based management service for Wi-Fi networks that could be valuable to companies with branch operations, schools and mid-sized networks where IT support is scarce.

Aruba still sells Wi-Fi access points but now is offering Aruba Central cloud management of local Wi-Fi zones, for which it charges $140 per AP annually.

The company also announced the new Aruba Instant 155 AP, a desktop model starting at $895 and available now, and the Instant 225 AP for $1,295, available later this month.

A new 3.3 version of the Instant OS is also available, and a new S1500 mobility access switch with 12 to 48 ports starting at $1,495 will ship in late 2013.

Cloud-based management of Wi-Fi is in its early stages and today constitutes about 5% of a $4 billion annual Wi-Fi market, Aruba said, citing findings by Dell’Oro Group. Aruba said it faces competition from Aerohive and Meraki, which Cisco purchased for $1.2 billion last November.

Cloud-based management of APs is ideally suited for centralizing management of branch offices or schools that don’t have their own IT staff.

“We have one interface for multiple sites, for those wanting to manage from a central platform,” said Sylvia Hooks, Aruba’s director of product marketing. “There’s remote monitoring and troubleshooting. We do alerting and reports, all in wizard-based formats, and you can group all the APs by location. We’re trying to offer sophisticated functions, but presented so a generalist could use them.”

Aruba relies on multiple cloud providers and multiple data centers to support Aruba Central, Hooks said.

The two new APs provide 450 Mbps throughput in 802.11n for the 155 AP and 1.3 Gbps for the 225 AP, Aruba said. Each AP in a Wi-Fi cluster running the Instant OS can assume controller functions, with intelligence built in. The first AP installed in a cluster can select itself as the master controller of the other APs, and if it somehow fails, the next most senior AP selects itself as the master.
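A toy model of that failover behavior is sketched below. Treating “seniority” as installation order is an assumption for illustration; the article does not describe Aruba’s actual election algorithm.

    # Toy sketch of master election in an AP cluster (assumed seniority = install
    # order; not Aruba's actual Instant OS logic): the most senior live AP takes
    # the controller role, and the next most senior promotes itself on failure.
    from dataclasses import dataclass

    @dataclass
    class AccessPoint:
        name: str
        install_order: int   # lower = installed earlier = more senior
        alive: bool = True

    def elect_master(cluster):
        """Return the most senior live AP, which assumes the controller role."""
        live = [ap for ap in cluster if ap.alive]
        return min(live, key=lambda ap: ap.install_order) if live else None

    cluster = [AccessPoint("ap-1", 1), AccessPoint("ap-2", 2), AccessPoint("ap-3", 3)]
    print(elect_master(cluster).name)   # ap-1 acts as master
    cluster[0].alive = False            # master fails
    print(elect_master(cluster).name)   # ap-2 promotes itself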

Source:  networkworld.com