Archive for the ‘Network’ Category

Building control systems can be pathway to Target-like attack

Tuesday, February 11th, 2014

Credentials stolen from automation and control providers were used in Target hack

Companies should review carefully the network access given to third-party engineers monitoring building control systems to avoid a Target-like attack, experts say.

Security related to providers of building automation and control systems was in the spotlight this week after the security blog KrebsOnSecurity reported that credentials stolen from Fazio Mechanical Services, based in Sharpsburg, Pa., were used by hackers who late last year snatched 40 million debit- and credit-card numbers from Target’s electronic cash registers, called point-of-sale (POS) systems.

The blog initially identified Fazio as a provider of refrigeration and heating, ventilation and air conditioning (HVAC) systems. The report sparked a discussion in security circles about how such a subcontractor’s credentials could provide access to areas of the retailer’s network that Fazio had no need to reach.

On Thursday, Fazio released a statement saying it does not monitor or control Target’s HVAC systems, according to KrebsOnSecurity. Instead, it remotely handles “electronic billing, contract submission and project management” for the retailer.

Given the work it does, it is certainly possible that Fazio had access to Target business applications that could be tied to POS systems. However, interviews with experts conducted before Fazio’s clarification found that subcontractors who remotely monitor and maintain HVAC and other building systems often have far more access to corporate networks than they need.

“Generally what happens is some new business service needs network access, so, if there’s time pressure, it may be placed on an existing network, (without) thinking through all the security implications,” Dwayne Melancon, chief technology officer for data security company Tripwire, said.

Most building systems, such as HVAC, are Internet-enabled so maintenance companies can monitor them remotely. Use of the Shodan search engine for Internet-enabled devices can reveal thousands of systems ranging from building automation to crematoriums with weak login credentials, researchers have found.

Using homegrown technology, Billy Rios, director of threat intelligence for vulnerability management company Qualys, found on the Internet a building control system for Target’s Minneapolis-based headquarters.

While the system is connected to an internal network, Rios could not determine whether it’s a corporate network without hacking the system, which would be illegal.

“We know that we could probably exploit it, but what we don’t know is what purpose it’s serving,” he said. “It could control energy, it could control HVAC, it could control lighting or it could be for access control. We’re not sure.”

If the Web interface of such systems is on a corporate network, then some important security measures need to be taken.

All data traffic moving to and from the server should be closely monitored. To do their job, building engineers need to access only a few systems. Monitoring software should flag traffic going anywhere else immediately.

“Workstations in your HR (human resources) department should probably not be talking to your refrigeration devices,” Rios said. “Seeing high spikes in traffic from embedded devices on your corporate network is also an indication that something is wrong.”

In addition, companies should know the IP addresses used by subcontractors in accessing systems. Unrecognized addresses should be automatically blocked.
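
As a rough illustration of the traffic-flagging and allowlisting ideas above, the following Python sketch checks observed connections against a short list of approved subcontractor source addresses and permitted internal destinations, flagging anything else for review. The addresses and host roles are hypothetical placeholders, not details from the article.

```python
import ipaddress

# Hypothetical allowlists -- in practice these come from the vendor contract
# and the network documentation for the building-control segment.
APPROVED_VENDOR_SOURCES = {ipaddress.ip_network("203.0.113.0/29")}   # subcontractor VPN range (example)
APPROVED_DESTINATIONS = {ipaddress.ip_address("10.20.0.5"),          # building-control server
                         ipaddress.ip_address("10.20.0.6")}          # historian / logging host

def review_connection(src: str, dst: str) -> str:
    """Return 'allow' or 'flag' for a single observed connection."""
    src_ip, dst_ip = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    src_ok = any(src_ip in net for net in APPROVED_VENDOR_SOURCES)
    dst_ok = dst_ip in APPROVED_DESTINATIONS
    return "allow" if (src_ok and dst_ok) else "flag"

# Example: a vendor host reaching anything outside the control segment gets flagged.
print(review_connection("203.0.113.2", "10.20.0.5"))   # allow
print(review_connection("203.0.113.2", "10.50.7.12"))  # flag
```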

Better password management is also a way to prevent a cyberattack. In general, a subcontractor’s employees will share the same credentials to access a customer’s systems. Those credentials are seldom changed, even when an employee leaves the company.

“That’s why it’s doubly important to make sure those accounts and systems have very restricted access, so you can’t use that technician login to do other things on the network,” Melancon said.

Every company should do a thorough review of their networks to identify every building system. “Understanding where these systems are is the first step,” Rios said.

Discovery should be followed by an evaluation of the security around those systems that are on the Internet.

Source:  csoonline.com

Change your passwords: Comcast hushes, minimizes serious hack

Tuesday, February 11th, 2014

Are you a Comcast customer? Please change your password.

On February 6, NullCrew FTS hacked into at least 34 of Comcast’s servers and published a list of the company’s mail servers and a link to the root file with the vulnerability it used to penetrate the system on Pastebin.

Comcast, the largest internet service provider in the United States, ignored news of the serious breach in press and media for over 24 hours — only when the Pastebin page was removed did the company issue a statement, and even then, it only spoke to a sympathetic B2B outlet.

During that 24 hours, Comcast stayed silent, and the veritable “keys to the kingdom” sat out in the open internet, ripe for the taking by any malicious entity with a little know-how around mail servers and selling or exploiting customer data.

Comcast customers have not been told to reset their passwords. But they should.

Once NullCrew FTS openly hacked at least 34 Comcast mail servers, and the recipe was publicly posted, the servers began to take a beating. Customers in Comcast’s janky, hard-to-find, 1996-style forums knew something was wrong, and forum posts reflected the slowness, the up and down servers, and the eventual crashing.

The telecom giant ignored press requests for comment and released a limited statement on February 7 — to Comcast-friendly outlet, broadband and B2B website Multichannel News.

The day-late statement failed to impress the few who saw it, and was criticized for its minimizing language and weak attempt to suggest that the breach had been unsuccessful.

From Comcast’s statement in Multichannel’s post, “No Evidence That Personal Sub Info Obtained By Mail Server Hack”:

Comcast said it is investigating a claim by a hacker group that claims to have broken into a batch of the MSO email servers, but believes that no personal subscriber data was obtained as a result.

“We’re aware of the situation and are aggressively investigating it,” a Comcast spokesman said. “We take our customers’ privacy and security very seriously, and we currently have no evidence to suggest any personal customer information was obtained in this incident.”

Not only is there a high probability that customer information was exposed — because direct access was provided to the public for 24 hours — but the vulnerability exploited by the attackers was disclosed and fixed in December 2013.

Just not by Comcast, apparently.

Vulnerability reported December 2013, not patched by Comcast

NullCrew FTS used the unpatched security vulnerability CVE-2013-7091 to open what was essentially an unlocked door, giving anyone access to usernames, passwords, and other sensitive details from Comcast’s servers.

NullCrew FTS used a Local File Inclusion (LFI) exploit to gain access to the Zimbra LDAP and MySQL database — which houses the usernames and passwords of Comcast ISP users.

“Fun Fact: 34 Comcast mail servers are victims to one exploit,” tweeted NullCrew FTS.

If you are a Comcast customer, you are at risk: All Comcast internet service includes a master email address.

Even if a customer doesn’t use Comcast’s Xfinity mail service, every Comcast ISP user has a master email account with which to manage their services, and it is accessible through a “Zimbra” webmail site.

This account is used to access payment information, email settings, user account creation and settings, and any purchases from Comcast’s store or among its services.

With access to this master email address, someone can give up to six “household members” access to the Comcast account.

NullCrew taunted Comcast on Twitter, then posted the data on Pastebin and taunted the company a little bit more.

Because there were “no passwords” on the Pastebin, some observers believed — incorrectly — that there was no serious risk for exploitation of sensitive customer information.

NullCrew FTS: 2 — big telecoms: 0

On the first weekend of February 2014, NullCrew FTS took credit for a valid hack against telecom provider Bell Canada.

In the first strike of what looks like it’ll be a very successful campaign to cause pain and humiliation to big telecoms, NullCrew FTS accessed and exposed more than 22,000 usernames and passwords, and some credit card numbers belonging to the phone company’s small business customers.

Establishing a signature game of cat and mouse with clueless support staff, NullCrew FTS contacted Bell customer support two weeks before its disclosure.

Like Comcast’s robotic customer service responses to NullCrew FTS on Twitter, Bell’s support staff either didn’t know how to report the security incident upstream, had no idea what a hacking event was, or didn’t take the threat seriously.

Bell also tried to play fast and loose with its accountability in the security smash and grab; it acknowledged the breach soon after, but blamed it on an Ottawa-based third-party supplier.

However, NullCrew FTS had called out the company’s insecurities in mid-January, publicizing a warning the hackers had issued to a company support representative about the vulnerabilities.

NullCrew FTS followed up with Bell by posting a Pastebin link on Twitter with unredacted data.

Excerpt from zdnet.com

Wireless Case Studies: Cellular Repeater and DAS

Friday, February 7th, 2014

Gyver Networks recently designed and installed a cellular bi-directional amplifier (BDA) and distributed antenna system (DAS) for an internationally renowned preparatory and boarding school in Massachusetts.

BDA Challenge: Faculty, students, and visitors were unable to access any cellular voice or data services at one of this historic campus’ sports complexes; 3G and 4G cellular reception at the suburban Boston location was virtually nonexistent.

Of particular concern to the school was the fact that the safety of its student-athletes would be jeopardized in the event of a serious injury, with precious minutes lost as faculty were forced to scramble to find the nearest landline – or leave the building altogether in search of cellular signal – to contact first responders.

Additionally, since internal communications between management and facilities personnel around the campus took place via mobile phone, lack of cellular signal at the sports complex required staff to physically leave the site just to find adequate reception.

Resolution: Gyver Networks engineers performed a cellular site survey of selected carriers throughout the complex to acquire a precise snapshot of the RF environment. After selecting the optimal donor tower signal for each cell carrier, Gyver then engineered and installed a distributed antenna system (DAS) to retransmit the amplified signal put out by the bi-directional amplifier (BDA) inside the building.

The high-gain, dual-band BDA chosen for the system offered scalability across selected cellular and PCS bands, as well as the flexibility to reconfigure band settings on an as-needed basis, providing enhancement capabilities for all major carriers now and in the future.

Every objective set forth by the school’s IT department has been satisfied with the deployment of this cellular repeater and DAS: All areas of the athletic complex now enjoy full 3G and 4G voice and data connectivity; safety and liability concerns have been mitigated; and campus personnel are able to maintain mobile communications regardless of where they are in the complex.

The case for Wi-Fi in the Internet of Things

Tuesday, January 14th, 2014

Whether it’s the “connected home” or the “Internet of Things,” many everyday home appliances and devices will soon feature some form of Internet connectivity. What form should that connectivity take? We sat down with Edgar Figueroa, president and CEO of the Wi-Fi Alliance, to discuss his belief that Wi-Fi is the clear choice.

Options are plentiful when it comes to connecting to the Internet, but some are easily disregarded for most Internet of Things designs. Ethernet and other wired solutions require additional equipment or more cabling than what is typically found in even a modern home. Cellular connectivity is pointless for stationary home goods and still too power-hungry for wearable items. Proprietary and purpose-built solutions, like ZigBee, are either too closed off or require parallel paths to solutions that are already in our homes.

Bluetooth makes a pretty good case for itself, though inconsistent user experiences remain the norm for several reasons. The latest Bluetooth specifications provide very low power data transfers and have very low overhead for maintaining a connection. The result is that the power profile for the connection is low whether you’re transacting data or not. Connection speeds are modest compared to the alternatives. But the biggest detractor for Bluetooth is inconsistency. Bluetooth has always felt kludgy; it’s an incomplete solution that will suffice until it improves. It’s helpful that Bluetooth devices can often have their performance, reliability, and features improved upon through software updates, but the experience can still remain frustrating.

Then there’s Wi-Fi.

Figueroa wanted to highlight a few key points from a study the Alliance commissioned. “Of those polled, more than half already have a non-traditional device with a Wi-Fi radio,” he said. Here, “non-traditional” covers a broad swath of products that includes appliances, thermostats, and lighting systems. Figueroa continued, “Ninety-one percent of those polled said they’d be more likely to buy a smart device if it came equipped with Wi-Fi.” The Alliance’s point: everyone already has a Wi-Fi network in their home. Why choose anything else?

One key consideration the study seems to ignore is power draw, which is one of Bluetooth’s biggest assets. Wi-Fi connections are active and power-hungry, even when they aren’t transacting large amounts of data. A separate study looking at power consumption per bit of data transferred demonstrated that Wi-Fi trumps Bluetooth by orders of magnitude. Where Wi-Fi requires large amounts of constant power, Bluetooth requires almost no power to maintain a connection.

In response to a question on the preference for low-power interfaces, Figueroa said simply, “Why?” In his eyes, the connected home isn’t necessarily a battery-powered home. Devices that connect to our Wi-Fi networks traditionally have plugs, so why must they sip almost no power?

Bluetooth has its place in devices whose current draw must not exceed the capabilities of a watch battery. But even in small devices, Wi-Fi’s performance and ability to create ad hoc networks and Wi-Fi Direct connections can better the experience, even if it’s at the risk of increasing power draw and battery size.

In the end, the compelling case for Wi-Fi’s use in the mobile space has more to do with what we want from our experiences than whether one is more power-hungry. Simplicity in all things is preferred. Even after all these years, pairing Bluetooth is usually more complex than connecting a new device to your existing Wi-Fi network. Even in the car, where Bluetooth has had a long dominance, the ability to connect multiple devices over Wi-Fi’s wide interface may ultimately be preferred. Still, despite Figueroa’s confidence, it’s an increasingly green (and preferably bill-shrinking) world looking to adopt an Internet of Things lifestyle. Wi-Fi may ultimately need to complete its case by driving power down enough to reside in all our Internet of Things devices, from the biggest to the smallest.

Source:  arstechnica.com

Cisco promises to fix admin backdoor in some routers

Monday, January 13th, 2014

Cisco Systems promised to issue firmware updates removing a backdoor from a wireless access point and two of its routers later this month. The undocumented feature could allow unauthenticated remote attackers to gain administrative access to the devices.

The vulnerability was discovered over the Christmas holiday on a Linksys WAG200G router by a security researcher named Eloi Vanderbeken. He found that the device had a service listening on port 32764 TCP, and that connecting to it allowed a remote user to send unauthenticated commands to the device and reset the administrative password.

It was later reported by other users that the same backdoor was present in multiple devices from Cisco, Netgear, Belkin and other manufacturers. On many devices this undocumented interface can only be accessed from the local or wireless network, but on some devices it is also accessible from the Internet.
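
To check whether a router on the local network exposes the undocumented service described above, a simple TCP connection attempt to port 32764 is enough. The sketch below only tests whether the port accepts connections; it does not send any commands, and the router address is a placeholder to adjust for your own network.

```python
import socket

def backdoor_port_open(host: str, port: int = 32764, timeout: float = 3.0) -> bool:
    """Return True if the host accepts a TCP connection on the given port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    router = "192.168.1.1"  # placeholder: the LAN address of your own gateway
    print(f"{router} port 32764 open: {backdoor_port_open(router)}")
```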

Cisco identified the vulnerability in its WAP4410N Wireless-N Access Point, WRVS4400N Wireless-N Gigabit Security Router and RVS4000 4-port Gigabit Security Router. The company is no longer responsible for Linksys routers, as it sold that consumer division to Belkin early last year.

The vulnerability is caused by a testing interface that can be accessed from the LAN side on the WRVS4400N and RVS4000 routers and also the wireless network on the WAP4410N wireless access point device.

“An attacker could exploit this vulnerability by accessing the affected device from the LAN-side interface and issuing arbitrary commands in the underlying operating system,” Cisco said in an advisory published Friday. “An exploit could allow the attacker to access user credentials for the administrator account of the device, and read the device configuration. The exploit can also allow the attacker to issue arbitrary commands on the device with escalated privileges.”

The company noted that there are no known workarounds that could mitigate this vulnerability in the absence of a firmware update.

The SANS Internet Storm Center, a cyber threat monitoring organization, warned at the beginning of the month that it detected probes for port 32764 TCP on the Internet, most likely targeting this vulnerability.

Source:  networkworld.com

Hackers use Amazon cloud to scrape mass number of LinkedIn member profiles

Friday, January 10th, 2014

EC2 service helps hackers bypass measures designed to protect LinkedIn users

LinkedIn is suing a gang of hackers who used Amazon’s cloud computing service to circumvent security measures and copy data from hundreds of thousands of member profiles each day.

“Since May 2013, unknown persons and/or entities employing various automated software programs (often referred to as ‘bots’) have registered thousands of fake LinkedIn member accounts and have extracted and copied data from many member profile pages,” company attorneys alleged in a complaint filed this week in US District Court in Northern California. “This practice, known as ‘scraping,’ is explicitly barred by LinkedIn’s User Agreement, which prohibits access to LinkedIn ‘through scraping, spidering, crawling, or other technology or software used to access data without the express written consent of LinkedIn or its Members.'”

With more than 259 million members—many of whom are highly paid professionals in the technology, finance, and medical industries—LinkedIn holds a wealth of personal data that can prove highly valuable to people conducting phishing attacks, identity theft, and similar scams. The allegations in the lawsuit highlight the unending tug-of-war between hackers who work to obtain that data and the defenders who use technical measures to prevent the data from falling into the wrong hands.

The unnamed “Doe” hackers employed a raft of techniques designed to bypass anti-scraping measures built into the business network. Chief among them was the creation of huge numbers of fake accounts. That made it possible to circumvent restrictions dubbed FUSE, which limit the activity any single account can perform.

“In May and June 2013, the Doe defendants circumvented FUSE—which limits the volume of activity for each individual account—by creating thousands of different new member accounts through the use of various automated technologies,” the complaint stated. “Registering so many unique new accounts allowed the Doe defendants to view hundreds of thousands of member profiles per day.”
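
The arithmetic behind that tactic is straightforward: if a rate limit caps each account at some number of profile views per day, the number of fake accounts needed is simply the target view count divided by that cap. The figures below are purely illustrative assumptions, not values from the complaint.

```python
# Illustrative only: the per-account cap is an assumed figure, not one from the case.
profiles_per_day_target = 500_000   # "hundreds of thousands of member profiles per day"
views_allowed_per_account = 100     # hypothetical FUSE-style daily cap per account

accounts_needed = -(-profiles_per_day_target // views_allowed_per_account)  # ceiling division
print(f"Fake accounts required: {accounts_needed:,}")  # -> 5,000
```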

The hackers also circumvented a separate security measure that is supposed to require end users to complete bot-defeating CAPTCHA dialogues when potentially abusive activities are detected. They also managed to bypass restrictions that LinkedIn intended to impose through a robots.txt file, which websites use to make clear which content may be indexed by automated Web crawling programs employed by Google and other sites.
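
For context, robots.txt is an advisory file that well-behaved crawlers consult before fetching pages, and Python’s standard library ships a parser for it. The sketch below shows the check a compliant bot performs—precisely the restriction the complaint says the defendants ignored. The URL and user-agent string are placeholders.

```python
from urllib.robotparser import RobotFileParser

# Placeholder values for illustration only.
robots_url = "https://www.example.com/robots.txt"
user_agent = "ExampleBot"

parser = RobotFileParser()
parser.set_url(robots_url)
parser.read()  # fetch and parse the site's robots.txt

# A compliant crawler checks each URL before requesting it.
target = "https://www.example.com/some/profile/page"
if parser.can_fetch(user_agent, target):
    print("Allowed to crawl:", target)
else:
    print("Disallowed by robots.txt:", target)
```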

LinkedIn engineers have disabled the fake member profiles and implemented additional technological safeguards to prevent further scraping. They also conducted an extensive investigation into the bot-powered methods employed by the hackers.

“As a result of this investigation, LinkedIn determined that the Doe defendants accessed LinkedIn using a cloud computing platform offered by Amazon Web Services (‘AWS’),” the complaint alleged. “This platform—called Amazon Elastic Compute Cloud or Amazon EC2—allows users like the Doe defendants to rent virtual computers on which to run their own computer programs and applications. Amazon EC2 provides resizable computing capacity. This feature allows users to quickly scale capacity, both up and down. Amazon EC2 users may temporarily run hundreds or thousands of virtual computing machines. The Doe defendants used Amazon EC2 to create virtual machines to run automated bots to scrape data from LinkedIn’s website.”

It’s not the first time hackers have used EC2 to conduct nefarious deeds. In 2011, the Amazon service was used to control a nasty bank fraud trojan. (EC2 has also been a valuable tool to whitehat password crackers.) Plenty of other popular Web services have been abused by online crooks as well. In 2009, for instance, researchers uncovered a Twitter account that had been transformed into a command and control channel for infected computers.

The goal of LinkedIn’s lawsuit is to give lawyers the legal means to carry out “expedited discovery to learn the identity of the Doe defendants.” The success will depend, among other things, on whether the people who subscribed to the Amazon service used payment methods or IP addresses that can be traced.

Source:  arstechnica.com

Huawei sends 400Gbps over next-generation optical network

Thursday, December 26th, 2013

Huawei Technologies and Polish operator Exatel have tested a next-generation optical network based on WDM (Wavelength Division Multiplexing) technology and capable of 400Gbps throughput.

More data traffic and the need for greater transmission speed in both fixed and wireless networks have consequences for all parts of operator networks. While faster versions of technologies such as LTE are being rolled out at the edge of networks, vendors are working on improving WDM (Wavelength-Division Multiplexing) to help them keep up at the core.

WDM sends large amounts of data using a number of different wavelengths, or channels, over a single optical fiber.

However, the test conducted by Huawei and Exatel only used one channel to send the data, which has its advantages, according to Huawei. It means the system only needs one optical transceiver, which is used to both send and receive data. That, in turn, results in lower power consumption and a smaller chance that something may go wrong, it said.

Huawei didn’t say when it expects to include the technology in commercial products.

Currently operators are upgrading their networks to include 100Gbps links. That increased third quarter spending on optical networks in North America by 13.4 percent year-over-year, following an 11.1 percent increase in the previous quarter, according to Infonetics Research. Huawei, Ciena, and Alcatel-Lucent were the WDM market share leaders, it said.

Source:  networkworld.com

New modulation scheme said to be ‘breakthrough’ in network performance

Friday, December 20th, 2013

A startup plans to demonstrate next month a new digital modulation scheme that promises to dramatically boost bandwidth, capacity, and range, with less power and less distortion, on both wireless and wired networks.

MagnaCom, a privately held company based in Israel, now has more than 70 global patent applications, and 15 issued patents in the U.S., for what it calls and has trademarked Wave Modulation (or WAM), which is designed to replace the long-dominant quadrature amplitude modulation (QAM) used in almost every wired or wireless product today on cellular, microwave radio, Wi-Fi, satellite and cable TV, and optical fiber networks. The company revealed today that it plans to demonstrate WAM at the Consumer Electronics Show, Jan. 7-10, in Las Vegas.

The vendor, which has released few specifics about WAM, promises extravagant benefits: up to 10 decibels of additional gain compared to the most advanced QAM schemes today; up to 50 percent less power; up to 400 percent more distance; up to 50 percent spectrum savings. WAM tolerates noise or interference better, has lower costs, is 100 percent backward compatible with existing QAM-based systems; and can simply be swapped in for QAM technology without additional changes to other components, the company says.

Modulation is a way of conveying data by changing some aspect of a carrier signal (sometimes called a carrier wave). A very imperfect analogy is covering a lamp with your hand to change the light beam into a series of long and short pulses, conveying information based on Morse code.

QAM, which is both an analog and a digital modulation scheme, “conveys two analog message signals, or two digital bit streams, by changing the amplitudes of two carrier waves,” as the Wikipedia entry explains. It’s used in Wi-Fi, microwave backhaul, optical fiber systems, digital cable television and many other communications systems. Without going into the technical details, you can make QAM more efficient, or denser. For example, nearly all Wi-Fi radios today use 64-QAM, but 802.11ac radios can use 256-QAM. In practical terms, that change boosts the data rate by about 33 percent.
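
The roughly 33 percent figure follows directly from bits per symbol: 64-QAM encodes log2(64) = 6 bits per symbol, 256-QAM encodes log2(256) = 8, and 8/6 ≈ 1.33. A quick check:

```python
import math

def bits_per_symbol(qam_order: int) -> int:
    """Bits carried by one symbol of a square QAM constellation."""
    return int(math.log2(qam_order))

gain = bits_per_symbol(256) / bits_per_symbol(64)
print(bits_per_symbol(64), bits_per_symbol(256), f"{(gain - 1) * 100:.0f}% more bits per symbol")
# -> 6 8 33% more bits per symbol
```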

But there are tradeoffs. The denser the QAM scheme, the more vulnerable it is to electronic “noise.” And amplifying a denser QAM signal requires bigger, more powerful amplifiers: when they run at higher power, which is another drawback, they also introduce more distortion.

MagnaCom claims that WAM modulation delivers vastly greater performance and efficiencies than current QAM technology, while minimizing if not eliminating the drawbacks. But so far, it’s not saying how WAM actually does that.

“It could be a breakthrough, but the company has not revealed all that’s needed to assure the world of that,” says Will Straus, president of Forward Concepts, a market research firm that focuses on digital signal processing, cell phone chips, wireless communications and related markets. “Even if the technology proves in, it will take many years to displace QAM that’s already in all digital communications. That’s why only bounded applications — where WAM can be [installed] at both ends – will be the initial market.”

“There are some huge claims here,” says Earl Lum, founder of EJL Wireless, a market research firm that focuses on microwave backhaul, cellular base station, and related markets. “They’re not going into exactly how they’re doing this, so it’s really tough to say that this technology is really working.”

Lum, who originally worked as an RF design engineer before switching to wireless industry equities research on Wall Street, elaborated on two of those claims: WAM’s greater distance and its improved spectral efficiency.

“Usually as you go higher in modulation, the distance shrinks: it’s inversely proportional,” he explains. “So the 400 percent increase in distance is significant. If they can compensate and still get high spectral efficiency and keep the distance long, that’s what everyone is trying to have.”

The spectrum savings of up to 50 percent is important, too. “You might be able to double the amount of channels compared to what you have now,” Lum says. “If you can cram more channels into that same spectrum, you don’t have to buy more [spectrum] licenses. That’s significant in terms of how many bits-per-hertz you can realize. But, again, they haven’t specified how they do this.”

According to MagnaCom, WAM uses some kind of spectral compression to improve spectral efficiency. WAM can simply be substituted for existing QAM technology in any product design. Some of WAM’s features should result in simpler transmitter designs that are less expensive and use less power.

For the CES demonstration next month, MagnaCom has partnered with Altera Corp., which provides custom field programmable gate arrays, ASICs and other custom logic solutions.

Source:  networkworld.com

FCC postpones spectrum auction until mid 2015

Monday, December 9th, 2013

In a blog post on Friday, Federal Communications Commission Chairman Tom Wheeler said that he would postpone a June 2014 spectrum auction to mid-2015. In his post, Wheeler called for more extensive testing of “the operating systems and the software necessary to conduct the world’s first-of-a-kind incentive auction.”

“Only when our software and systems are technically ready, user friendly, and thoroughly tested, will we start the auction,” wrote Wheeler. The chairman also said that he wants to finalize procedures for how the auction will be conducted after seeking public comment on those details in the second half of next year.

A separate auction for 10MHz of space will take place in January 2014. In 2012, Congress passed the Middle Class Tax Relief and Job Creation Act, which required the FCC to auction off 65MHz of spectrum by 2015. Revenue from the auction will go toward developing FirstNet, an LTE network for first responders. Two months ago, acting FCC chair Mignon Clyburn announced that the commission would start that sell-off by placing 10MHz on the auction block in January 2014. The other 55MHz would be auctioned off at a later date, before the end of 2015.

The forthcoming auction aims to pay TV broadcasters to give up lower frequencies, which will be bid on by wireless cell phone carriers like AT&T and Verizon, but also by smaller carriers who are eager to expand their spectrum property. Wheeler gave no hint as to whether he would push for restrictions on big carriers during the auction process, but he wrote, “I am mindful of the important national interest in making available additional spectrum for flexible use.”

Source:  arstechnica.com

Microsoft disrupts ZeroAccess web fraud botnet

Friday, December 6th, 2013

ZeroAccess, one of the world’s largest botnets – a network of computers infected with malware to trigger online fraud – has been disrupted by Microsoft and law enforcement agencies.

ZeroAccess hijacks web search results and redirects users to potentially dangerous sites to steal their details.

It also generates fraudulent ad clicks on infected computers then claims payouts from duped advertisers.

Also known as the Sirefef botnet, ZeroAccess has infected two million computers.

The botnet targets search results on Google, Bing and Yahoo search engines and is estimated to cost online advertisers $2.7m (£1.7m) per month.

Microsoft said it had been authorised by US regulators to “block incoming and outgoing communications between computers located in the US and the 18 identified Internet Protocol (IP) addresses being used to commit the fraudulent schemes”.

In addition, the firm has also taken control of 49 domains associated with ZeroAccess.

David Finn, executive director of Microsoft Digital Crimes Unit, said the disruption “will stop victims’ computers from being used for fraud and help us identify the computers that need to be cleaned of the infection”.

‘Most robust’

The ZeroAccess botnet relies on peer-to-peer communication between groups of infected computers, instead of being controlled by a few servers.

This allows cyber criminals to control the botnet remotely from a range of computers, making it difficult to tackle.

According to Microsoft, more than 800,000 ZeroAccess-infected computers were active on the internet on any given day as of October this year.

“Due to its botnet architecture, ZeroAccess is one of the most robust and durable botnets in operation today and was built to be resilient to disruption efforts,” Microsoft said.

However, the firm said its latest action is “expected to significantly disrupt the botnet’s operation, increasing the cost and risk for cyber criminals to continue doing business and preventing victims’ computers from committing fraudulent schemes”.

Microsoft said its Digital Crimes Unit collaborated with the US Federal Bureau of Investigation (FBI) and Europol’s European Cybercrime Centre (EC3) to disrupt the operations.

Earlier this year, security firm Symantec said it had disabled nearly 500,000 computers infected by ZeroAccess and taken them out of the botnet.

Source: BBC

Case Studies: Point-to-point wireless bridge – Campus

Friday, December 6th, 2013


Gyver Networks recently completed a point-to-point (PTP) bridge installation to provide wireless backhaul for a Boston college.

Challenge:  The only connectivity to local network or Internet resources from this school’s otherwise modern athletic center was via a T1 line topping out at 1.5 Mbps bandwidth.  This was unacceptable not only to the faculty onsite attempting to connect to the school’s network, but to the attendees, faculty, and media outlets attempting to connect to the Internet during the high-profile events and press conferences routinely held inside.

Another vendor’s design for a 150 Mbps unlicensed wireless backhaul link failed during a VIP visit, necessitating a redesign by Gyver Networks.

Resolution:  After performing a spectrum analysis of the surrounding environment, Gyver Networks determined that the wireless solution originally proposed to the school was not viable due to RF spectrum interference.

At a price point close to that of the failed unlicensed design, Gyver Networks engineered a secure, 700 Mbps point-to-point wireless bridge in the licensed 80 GHz band to link the main campus with the athletic center, providing adequate bandwidth for both local network and Internet connectivity at the remote site.  Faculty are now able to work without restriction, and event attendees can blog, post to social media, and upload photos and videos without constraint.

Scientist-developed malware covertly jumps air gaps using inaudible sound

Tuesday, December 3rd, 2013

Malware communicates at a distance of 65 feet using built-in mics and speakers.

Computer scientists have developed a malware prototype that uses inaudible audio signals to communicate, a capability that allows the malware to covertly transmit keystrokes and other sensitive data even when infected machines have no network connection.

The proof-of-concept software—or malicious trojans that adopt the same high-frequency communication methods—could prove especially adept at penetrating highly sensitive environments that routinely place an “air gap” between computers and the outside world. Using nothing more than the built-in microphones and speakers of standard computers, the researchers were able to transmit passwords and other small amounts of data from distances of almost 65 feet. The software can transfer data at much greater distances by employing an acoustical mesh network made up of attacker-controlled devices that repeat the audio signals.

The researchers, from Germany’s Fraunhofer Institute for Communication, Information Processing, and Ergonomics, recently disclosed their findings in a paper published in the Journal of Communications. It came a few weeks after a security researcher said his computers were infected with a mysterious piece of malware that used high-frequency transmissions to jump air gaps. The new research neither confirms nor disproves Dragos Ruiu’s claims of the so-called badBIOS infections, but it does show that high-frequency networking is easily within the grasp of today’s malware.

“In our article, we describe how the complete concept of air gaps can be considered obsolete as commonly available laptops can communicate over their internal speakers and microphones and even form a covert acoustical mesh network,” one of the authors, Michael Hanspach, wrote in an e-mail. “Over this covert network, information can travel over multiple hops of infected nodes, connecting completely isolated computing systems and networks (e.g. the internet) to each other. We also propose some countermeasures against participation in a covert network.”

The researchers developed several ways to use inaudible sounds to transmit data between two Lenovo T400 laptops using only their built-in microphones and speakers. The most effective technique relied on software originally developed to acoustically transmit data under water. Created by the Research Department for Underwater Acoustics and Geophysics in Germany, the so-called adaptive communication system (ACS) modem was able to transmit data between laptops as much as 19.7 meters (64.6 feet) apart. By chaining additional devices that pick up the signal and repeat it to other nearby devices, the mesh network can overcome much greater distances.

The ACS modem provided better reliability than other techniques that were also able to use only the laptops’ speakers and microphones to communicate. Still, it came with one significant drawback—a transmission rate of about 20 bits per second, a tiny fraction of standard network connections. The paltry bandwidth rules out transmitting video or any other kind of data with a large file size. The researchers said attackers could overcome that shortcoming by equipping the trojan with functions that transmit only certain types of data, such as login credentials captured from a keylogger or a memory dumper.
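
To put a 20 bit-per-second channel in perspective, the sketch below estimates transfer times for a few payload sizes. The payloads are chosen for illustration and are not taken from the paper.

```python
# Rough transfer-time estimates over a 20 bit/s covert audio channel.
RATE_BPS = 20

payloads = {
    "8-character password (ASCII)": 8 * 8,          # 64 bits
    "2048-bit RSA private key": 2048,                # raw key material only
    "1 KB keystroke log": 1024 * 8,                  # 8,192 bits
    "1 MB document": 1024 * 1024 * 8,                # ~8.4 million bits
}

for name, bits in payloads.items():
    seconds = bits / RATE_BPS
    print(f"{name}: about {seconds:,.0f} s ({seconds / 3600:.2f} h)")
# Small secrets take seconds or minutes; bulk data takes days.
```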

“This small bandwidth might actually be enough to transfer critical information (such as keystrokes),” Hanspach wrote. “You don’t even have to think about all keystrokes. If you have a keylogger that is able to recognize authentication materials, it may only occasionally forward these detected passwords over the network, leading to a very stealthy state of the network. And you could forward any small-sized information such as private encryption keys or maybe malicious commands to an infected piece of construction.”

Remember Flame?

The hurdles of implementing covert acoustical networking are high enough that few malware developers are likely to add it to their offerings anytime soon. Still, the requirements are modest when measured against the capabilities of Stuxnet, Flame, and other state-sponsored malware discovered in the past 18 months. And that means that engineers in military organizations, nuclear power plants, and other truly high-security environments should no longer assume that computers isolated from an Ethernet or Wi-Fi connection are off limits.

The research paper suggests several countermeasures that potential targets can adopt. One approach is simply switching off audio input and output devices, although few hardware designs available today make this most obvious countermeasure easy. A second approach is to employ audio filtering that blocks high-frequency ranges used to covertly transmit data. Devices running Linux can do this by using the advanced Linux Sound Architecture in combination with the Linux Audio Developer’s Simple Plugin API. Similar approaches are probably available for Windows and Mac OS X computers as well. The researchers also proposed the use of an audio intrusion detection guard, a device that would “forward audio input and output signals to their destination and simultaneously store them inside the guard’s internal state, where they are subject to further analyses.”
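
The filtering countermeasure amounts to cutting frequencies above the audible range before they reach the speaker or after they leave the microphone. The following Python sketch stands in for the ALSA/LADSPA setup the paper describes: it applies a software low-pass filter to a block of audio samples using SciPy. The 17 kHz cutoff and 48 kHz sample rate are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.signal import butter, lfilter

def lowpass(samples: np.ndarray, sample_rate: int = 48000, cutoff_hz: float = 17000.0) -> np.ndarray:
    """Attenuate near-ultrasonic content that covert acoustic modems rely on."""
    nyquist = sample_rate / 2.0
    b, a = butter(N=6, Wn=cutoff_hz / nyquist, btype="low")
    return lfilter(b, a, samples)

# Example: filter one second of synthetic audio containing an 18.5 kHz tone.
rate = 48000
t = np.arange(rate) / rate
audio = 0.5 * np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 18500 * t)
filtered = lowpass(audio, rate)  # the 18.5 kHz component is strongly attenuated
```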

Source:  arstechnica.com

Why security benefits boost mid-market adoption of virtualization

Monday, December 2nd, 2013

While virtualization has undoubtedly already found its footing in larger businesses and data centers, the technology is still in the process of catching on in the middle market. But a recent study conducted by a group of Cisco Partner Firms, titled “Virtualization on the Rise,” indicates just that: the prevalence of virtualization is continuing to expand and has so far proven to be a success for many small- and medium-sized businesses.

With firms where virtualization has yet to catch on, however, security is often the point of contention.

Cisco’s study found that adoption rates for virtualization are already quite high at small- to medium-sized businesses, with 77 percent of respondents indicating that they already had some type of virtualization in place around their office. These types of solutions included server virtualization, a virtual desktop infrastructure, storage virtualization, network virtualization, and remote desktop access, among others. Server virtualization was the most commonly used, with 59 percent of respondents (that said they had adopted virtualization in some form) stating that it was their solution of choice.

That all being said, there are obviously some businesses that have yet to adopt virtualization, and a healthy chunk of respondents – 51 percent – cited security as a reason. It appeared that the larger companies with over 100 employees were more concerned about the security of virtualization, with 60 percent of that particular demographic qualifying it as their barrier to entry (while only 33 percent of smaller firms shared the same concern).

But with Cisco’s study lacking any other specificity in terms of why exactly the respondents were concerned about the security of virtualization, one can’t help but wonder: is this necessarily sound reasoning? Craig Jeske, the business development manager for virtualization and cloud at Global Technology Resources, shed some light on the subject.

“I think [virtualization] gives a much easier, more efficient, and agile response to changing demands, and that includes responding to security threats,” said Jeske. “It allows for a faster response than if you had to deploy new physical tools.”

He went on to explain that given how virtualization enhances portability and makes it easier to back up data, it subsequently makes it easier for companies to get back to a known state in the event of some sort of compromise. This kind of flexibility limits attackers’ options.

“Thanks to the agility provided by virtualization, it changes the attack vectors that people can come at us from,” he said.

As for the 33 percent of smaller firms that cited security as a barrier to entry – thereby suggesting that the smaller companies were more willing to take the perceived “risk” of adopting the technology – Jeske said that was simply because virtualization makes more sense for businesses of that size.

“When you have a small budget, the cost savings [from virtualization] are more dramatic, since it saves space and calls for a lower upfront investment,” he said. On the flip side, the upfront cost for any new IT direction is higher for a larger business. It’s easier to make a shift when a company has 20 servers versus 20 million servers; while the return on virtualization is higher for a larger company, so is the upfront investment.

Of course, there is also the obvious fact that with smaller firms, the potential loss as a result of taking such a risk isn’t as great.

“With any type of change, the risk is lower for a smaller business than for a multimillion dollar firm,” he said. “With bigger businesses, any change needs to be looked at carefully. Because if something goes wrong, regardless of what the cause was, someone’s losing their job.”

Jeske also addressed the fact that some of the security concerns indicated by the study results may have stemmed from some teams recognizing that they weren’t familiar with the technology. That lack of comfort with virtualization – for example, not knowing how to properly implement or deploy it – could make virtualization less secure, but it’s not inherently insecure. Security officers, he stressed, are always most comfortable with what they know.

“When you know how to handle virtualization, it’s not a security detriment,” he said. “I’m hesitant to make a change until I see the validity and justification behind that change. You can understand people’s aversion from a security standpoint and first just from the standpoint of needing to understand it before jumping in.”

But the technology itself, Jeske reiterated, has plenty of security benefits.

“Since everything is virtualized, it’s easier to respond to a threat because it’s all available from everywhere. You don’t have to have the box,” he said. “The more we’re tied to these servers and our offices, the easier it is to respond.”

And with every element being all-encompassed in a software package, he said, businesses might be able to do more to each virtual server than they could in the physical world. Virtual firewalls, intrusion detection, etc. can all be put in as an application and put closer to the machine itself so firms don’t have to bring things back out into the physical environment.

This also allows for easier, faster changes in security environments. One change can be propagated across the entire virtual environment automatically, rather than having to push it out to each physical device individually that’s protecting a company’s systems.

Jeske noted that there are benefits from a physical security standpoint, as well, namely because somebody else takes care of it for you. The servers hosting the virtualized solutions are somewhere far away, and the protection of those servers is somebody else’s responsibility.

But with the rapid proliferation of virtualization, Jeske warned that security teams need to stay ahead of the game. Otherwise, it will be harder to adopt the technology properly when they no longer have a choice.

“With virtualization, speed of deployment and speed of reaction are the biggest things,” said Jeske. “The servers and desktops are going to continue to get virtualized whether officers like it or not. So they need to be proactive and stay in front of it, otherwise they can find themselves in a bad position further on down the road.”

Source:  csoonline.com

FCC crowdsources mobile broadband research with Android app

Friday, November 8th, 2013

Most smartphone users know data speeds can vary widely. But how do the different carriers stack up against each other? The Federal Communications Commission is hoping the public can help figure that out, using a new app it will preview next week.

The FCC on Friday said that the agenda for next Thursday’s open meeting, the first under new Chairman Tom Wheeler, will feature a presentation on a new Android smartphone app that will be used to crowdsource measurements of mobile broadband speeds. 

The FCC announced it would start measuring the performance of mobile networks last September. All four major wireless carriers, as well as CTIA-The Wireless Association, have already agreed to participate in the app, which is called “FCC Speed Test.” It works only on Android for now — no word on when an iPhone version might be available.

While the app has been in the works for a long time, its elevation to this month’s agenda reaffirms something Wheeler told the Journal this week. During that conversation, the Chairman repeatedly emphasized his desire to “make decisions based on facts.” Given the paucity of information on mobile broadband availability and prices, this type of data collection seems like the first step toward evaluating whether Americans are getting what they pay for from their carriers in terms of mobile data speeds.

The FCC unveiled its first survey of traditional land-based broadband providers in August 2011, which showed that most companies provide access that comes close to or exceeds advertised speeds. (Those results prompted at least one Internet service provider to increase its performance during peak hours.) Expanding the data collection effort to the mobile broadband is a natural step; smartphone sales outpace laptop sales and a significant portion of Americans (particularly minorities and low-income households) rely on a smartphone as their primary connection to the Internet.

Wheeler has said ensuring there is adequate competition in the broadband and wireless markets is among his top priorities. But first the FCC must know what level of service Americans are getting from their current providers. If mobile broadband speeds perform much as advertised, it would bolster the case of those who argue the wireless market is sufficiently competitive. But if any of the major carriers were to seriously under-perform, it would raise questions about the need for intervention from federal regulators.

Source:  wsj.com

High-gain patch antennas boost Wi-Fi capacity for Georgia Tech

Tuesday, November 5th, 2013

To boost its Wi-Fi capacity in packed lecture halls, Georgia Institute of Technology gave up trying to cram in more access points, with conventional omni-directional antennas, and juggle power settings and channel plans. Instead, it turned to new high-gain directional antennas, from Tessco’s Ventev division.

Ventev’s new TerraWave High-Density Ceiling Mount Antenna, which looks almost exactly like the bottom half of a small pizza box, focuses the Wi-Fi signal from the ceiling mounted Cisco access point in a precise cone-shaped pattern, covering part of the lecture hall floor. Instead of the flakey, laggy connections, about which professors had been complaining, users now consistently get up to 144Mbps (if they have 802.11n client radios).

“Overall, the system performed much better” with the Ventev antennas, says William Lawrence, IT project manager principal with the university’s academic and research technologies group. “And there was a much more even distribution of clients across the room’s access points.”

Initially, these 802.11n access points were running 40-MHz channels, but Lawrence’s team eventually switched to the narrower 20 MHz. “We saw more consistent performance for clients in the 20-MHz channel, and I really don’t know why,” he says. “It seems like the clients were doing a lot of shifting between using 40 MHz and 20 MHz. With the narrower channel, it was very smooth and consistent: we got great video playback.”

With the narrower channel, 11n clients can’t achieve their maximum 11n throughput. But that doesn’t seem to have been a problem in these select locations, Lawrence says. “We’ve not seen that to be an issue, but we’re continuing to monitor it,” he says.
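
For reference, the 144 Mbps figure mentioned above is the standard 802.11n PHY rate for two spatial streams on a 20-MHz channel with a short guard interval; a 40-MHz channel roughly doubles it, which is the headroom the team gave up in exchange for more consistent performance. These are published 802.11n rates listed for comparison, not measurements from the deployment.

```python
# Maximum 802.11n PHY rates (Mbps), short guard interval, by channel width and stream count.
rates_mbps = {
    ("20 MHz", "1 stream"): 72.2,
    ("20 MHz", "2 streams"): 144.4,   # the "up to 144 Mbps" seen in the lecture halls
    ("40 MHz", "1 stream"): 150.0,
    ("40 MHz", "2 streams"): 300.0,
}

for (width, streams), rate in rates_mbps.items():
    print(f"{width}, {streams}: {rate} Mbps")
```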

The Atlanta main campus has a fully-deployed Cisco WLAN, with about 3,900 access points, nearly all supporting 11n, and 17 wireless controllers. Virtually all of the access points use a conventional, omni-directional antenna, which radiates energy in a globe-shaped configuration with the access point at the center. But in high density classrooms, faculty and students began complaining of flakey connections and slow speeds.

The problem, Lawrence says, was the surging number of Wi-Fi devices actively being used in big classrooms and lectures halls, coupled with Wi-Fi signals, especially in the 2.4-GHz band, stepping on each other over wide sections of the hall, creating co-channel interference.

One Georgia Tech network engineer spent a lot of time monitoring the problem areas and working with students and faculty. In a few cases, the problems could be traced to a client-side configuration problem. But “with 120 clients on one access point, performance really goes downhill,” Lawrence says. “With the omni-directional antenna, you can only pack the access points so close.”

Shifting users to the cleaner 5 GHz was an obvious step but in practice was rarely feasible: many mobile devices still support only 2.4-GHz connections; and client radios often showed a stubborn willfulness in sticking with a 2.4-GHz connection on a distant access point even when another was available much closer.

Consulting with Cisco, Georgia Tech decided to try some newer access points, with external antenna mounts, and selected one of Cisco’s certified partners, Tessco’s Ventev Wireless Infrastructure division, to supply the directional antennas. The TerraWave products also are compatible with access points from Aruba, Juniper, Meru, Motorola and others.

Patch antennas focus the radio beam within a specific area. (A couple of vendors, Ruckus Wireless and Xirrus, have developed their own built-in “smart” antennas that adjust and focus Wi-Fi signals on clients.) Depending on the beamwidth, the effect can be that of a floodlight or a spotlight, says Jeff Lime, Ventev’s vice president. Ventev’s newest TerraWave High-Density products focus the radio beam within narrower ranges than some competing products, and offer higher gain (in effect putting more oomph into the signal to drive it further), he says.

One model, with a maximum power of 20 watts, can have beamwidths of 18 or 28 degrees vertically, and 24 or 40 degrees horizontally, with a gain of 10 or 11 dBi, depending on the frequency range. The second model, with a 50-watt maximum power output, has a beamwidth in both dimensions of 35 degrees, at a still higher gain of 14 dBi to drive the spotlighted signal further, in really big areas like a stadium.
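
Antenna gain in dBi converts to a linear power-concentration factor of 10^(dBi/10) relative to an ideal isotropic radiator, so the figures above correspond to roughly 10x, 12.6x, and 25x concentration of the radiated energy toward the covered area. This is the generic conversion formula, not vendor data:

```python
def dbi_to_linear(gain_dbi: float) -> float:
    """Convert antenna gain in dBi to a linear factor relative to an isotropic radiator."""
    return 10 ** (gain_dbi / 10)

for g in (10, 11, 14):
    print(f"{g} dBi -> {dbi_to_linear(g):.1f}x")
# 10 dBi -> 10.0x, 11 dBi -> 12.6x, 14 dBi -> 25.1x
```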

At Georgia Tech, each antenna focused the Wi-Fi signal from a specific overhead access point to cover a section of seats below it. Fewer users associate with each access point. The result is a kind of virtuous circle. “It gives more capacity per user, so more bandwidth, so a better user experience,” says Lime.

The antennas come with a quartet of 36-inch cables to connect to the access points. The idea is to give IT groups maximum flexibility. But the cables initially were awkward for the IT team installing the antennas. Lawrence says they experimented with different ways of neatly and quickly wrapping up the excess cable to keep it out of the way between the access point proper and the antenna panel. They also had to modify mounting clips to get them to hold in the metal grid that forms the dropped ceiling in some of the rooms. “Little things like that can cause you some unexpected issues,” Lawrence says.

The IT staff worked with Cisco engineers to reset a dedicated controller to handle the new “high density group” of access points; and the controller automatically handled configuration tasks like setting access point power levels and selecting channels.

Another issue is that when the patch antennas were ceiling mounted in second- or third-story rooms, their downward-shooting signal cone reached into the radio space of access points on the floor below. Lawrence says they tweaked the position of the antennas in some cases to send the spotlight signal beaming at an angle. “I look at each room and ask, ‘How am I going to deploy these antennas to minimize signal bleed-through into other areas?’” he says. “Adding a high-gain antenna can have unintended consequences outside the space it’s intended for.”

But based on improved throughput and consistent signals, Lawrence says it’s likely the antennas will be used in a growing number of lecture halls and other spaces on the main and satellite campuses. “This is the best solution we’ve got for now,” he says.

Source:  networkworld.com

New malware variant suggests cybercriminals targeting SAP users

Tuesday, November 5th, 2013

The malware checks if infected systems have a SAP client application installed, ERPScan researchers said

A new variant of a Trojan program that targets online banking accounts also contains code to check whether infected computers have SAP client applications installed, suggesting that attackers might target SAP systems in the future.

The malware was discovered a few weeks ago by Russian antivirus company Doctor Web, which shared it with researchers from ERPScan, a developer of security monitoring products for SAP systems.

“We’ve analyzed the malware and all it does right now is to check which systems have SAP applications installed,” said Alexander Polyakov, chief technology officer at ERPScan. “However, this might be the beginning for future attacks.”

When malware does this type of reconnaissance to see if particular software is installed, the attackers either plan to sell access to those infected computers to other cybercriminals interested in exploiting that software or they intend to exploit it themselves at a later time, the researcher said.

Polyakov presented the risks of such attacks and others against SAP systems at the RSA Europe security conference in Amsterdam on Thursday.

To his knowledge, this is the first piece of malware targeting SAP client software that wasn’t created as a proof-of-concept by researchers, but by real cybercriminals.

SAP client applications running on workstations have configuration files that can be easily read and contain the IP addresses of the SAP servers they connect to. Attackers can also hook into the application processes and sniff SAP user passwords, or read them from configuration files and GUI automation scripts, Polyakov said.
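
To illustrate how little effort that reconnaissance takes, here is a rough Python sketch of pulling server addresses out of a SAP GUI connection file on a compromised workstation. The file locations and key names are assumptions made for illustration; real SAP GUI installations vary by version and platform.

    # Sketch: extract SAP server addresses from a SAP GUI connection file.
    # The candidate paths and key names are illustrative assumptions, not a
    # documented format; adjust them to the installation being examined.
    import os
    import re

    CANDIDATE_PATHS = [
        os.path.expandvars(r"%APPDATA%\SAP\Common\saplogon.ini"),  # assumed location
        r"C:\Windows\saplogon.ini",                                # assumed legacy location
    ]

    def extract_servers(path):
        """Return anything that looks like a host from Server= or Router= lines."""
        hosts = []
        with open(path, errors="ignore") as fh:
            for line in fh:
                m = re.match(r"\s*(Server|Router|MSSrvName)\w*\s*=\s*(\S+)", line, re.I)
                if m:
                    hosts.append(m.group(2))
        return hosts

    for p in CANDIDATE_PATHS:
        if os.path.exists(p):
            print(p, "->", extract_servers(p))

Even a scan this naive hands an attacker a list of internal SAP hosts to pivot toward, which is why the presence of reconnaissance code alone is treated as a warning sign.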

There’s a lot that attackers can do with access to SAP servers. Depending on what permissions the stolen credentials have, they can steal customer information and trade secrets or they can steal money from the company by setting up and approving rogue payments or changing the bank account of existing customers to redirect future payments to their account, he added.

There are efforts in some enterprise environments to limit permissions for SAP users based on their duties, but those are big and complex projects. In practice, most companies allow their SAP users to do almost everything, or at least more than they’re supposed to, Polyakov said.

Even if some stolen user credentials don’t give attackers the access they want, many companies never change the default administrative credentials, or forget to change them on development instances that hold snapshots of company data, the researcher said.

With access to SAP client software, attackers could steal sensitive data like financial information, corporate secrets, customer lists or human resources information and sell it to competitors. They could also launch denial-of-service attacks against a company’s SAP servers to disrupt its business operations and cause financial damage, Polyakov said.

SAP customers are usually very large enterprises. There are almost 250,000 companies using SAP products in the world, including over 80 percent of those on the Forbes 500 list, according to Polyakov.

If timed correctly, some attacks could even influence the company’s stock and would allow the attackers to profit on the stock market, according to Polyakov.

Dr. Web detects the new malware variant as part of the Trojan.Ibank family, but this is likely a generic alias, he said. “My colleagues said that this is a new modification of a known banking Trojan, but it’s not one of the very popular ones like ZeuS or SpyEye.”

However, malware is not the only threat to SAP customers. ERPScan discovered a critical unauthenticated remote code execution vulnerability in SAProuter, an application that acts as a proxy between internal SAP systems and the Internet.

A patch for this vulnerability was released six months ago, but ERPScan found that of 5,000 SAProuters accessible from the Internet, only 15 percent currently have the patch, Polyakov said. If you get access to a company’s SAProuter, you’re inside the network and can do the same things you could with access to a SAP workstation, he said.

Source:  csoonline.com

Meet “badBIOS,” the mysterious Mac and PC malware that jumps airgaps

Friday, November 1st, 2013

Three years ago, security consultant Dragos Ruiu was in his lab when he noticed something highly unusual: his MacBook Air, on which he had just installed a fresh copy of OS X, spontaneously updated the firmware that helps it boot. Stranger still, when Ruiu then tried to boot the machine off a CD-ROM, it refused. He also found that the machine could delete data and undo configuration changes with no prompting. He didn’t know it then, but that odd firmware update would become a high-stakes malware mystery that would consume most of his waking hours.

In the following months, Ruiu observed more odd phenomena that seemed straight out of a science-fiction thriller. A computer running the OpenBSD operating system also began to modify its settings and delete its data without explanation or prompting. His network transmitted data specific to the Internet’s next-generation IPv6 networking protocol, even from computers that were supposed to have IPv6 completely disabled. Strangest of all was the ability of infected machines to exchange small amounts of network data with other infected machines even when their power cords and Ethernet cables were unplugged and their Wi-Fi and Bluetooth cards were removed. Further investigation soon showed that the list of affected operating systems also included multiple variants of Windows and Linux.

“We were like, ‘Okay, we’re totally owned,'” Ruiu told Ars. “‘We have to erase all our systems and start from scratch,’ which we did. It was a very painful exercise. I’ve been suspicious of stuff around here ever since.”

In the intervening three years, Ruiu said, the infections have persisted, almost like a strain of bacteria that’s able to survive extreme antibiotic therapies. Within hours or weeks of wiping an infected computer clean, the odd behavior would return. The most visible sign of contamination is a machine’s inability to boot off a CD, but other, more subtle behaviors can be observed when using tools such as Process Monitor, which is designed for troubleshooting and forensic investigations.

Another intriguing characteristic: in addition to jumping “airgaps” designed to isolate infected or sensitive machines from all other networked computers, the malware seems to have self-healing capabilities.

“We had an air-gapped computer that just had its [firmware] BIOS reflashed, a fresh disk drive installed, and zero data on it, installed from a Windows system CD,” Ruiu said. “At one point, we were editing some of the components and our registry editor got disabled. It was like: wait a minute, how can that happen? How can the machine react and attack the software that we’re using to attack it? This is an air-gapped machine and all of a sudden the search function in the registry editor stopped working when we were using it to search for their keys.”

Over the past two weeks, Ruiu has taken to Twitter, Facebook, and Google Plus to document his investigative odyssey and share a theory that has captured the attention of some of the world’s foremost security experts. The malware, Ruiu believes, is transmitted through USB drives to infect the lowest levels of computer hardware. With the ability to target a computer’s Basic Input/Output System (BIOS), Unified Extensible Firmware Interface (UEFI), and possibly other firmware standards, the malware can attack a wide variety of platforms, escape common forms of detection, and survive most attempts to eradicate it.

But the story gets stranger still. In a series of posts, Ruiu posited another theory that sounds like something from the screenplay of a post-apocalyptic movie: “badBIOS,” as Ruiu dubbed the malware, has the ability to use high-frequency transmissions passed between computer speakers and microphones to bridge airgaps.

Bigfoot in the age of the advanced persistent threat

At times as I’ve reported this story, its outline has struck me as the stuff of urban legend, the advanced persistent threat equivalent of a Bigfoot sighting. Indeed, Ruiu has conceded that while several fellow security experts have assisted his investigation, none has peer reviewed his process or the tentative findings that he’s beginning to draw. (Ruiu has published a running compilation of his observations online.)

Also unexplained is why Ruiu would be on the receiving end of such an advanced and exotic attack. As a security professional, the organizer of the internationally renowned CanSecWest and PacSec conferences, and the founder of the Pwn2Own hacking competition, he is no doubt an attractive target to state-sponsored spies and financially motivated hackers. But he’s no more attractive a target than hundreds or thousands of his peers, who have so far not reported the kind of odd phenomena that has afflicted Ruiu’s computers and networks.

In contrast to the skepticism that’s common in the security and hacking cultures, Ruiu’s peers have mostly responded with deep-seated concern and even fascination to his dispatches about badBIOS.

“Everybody in security needs to follow @dragosr and watch his analysis of #badBIOS,” Alex Stamos, one of the more trusted and sober security researchers, wrote in a tweet last week. Jeff Moss—the founder of the Defcon and Blackhat security conferences who in 2009 began advising Department of Homeland Security Secretary Janet Napolitano on matters of computer security—retweeted the statement and added: “No joke it’s really serious.” Plenty of others agree.

“Dragos is definitely one of the good reliable guys, and I have never ever even remotely thought him dishonest,” security researcher Arrigo Triulzi told Ars. “Nothing of what he describes is science fiction taken individually, but we have not seen it in the wild ever.”

Been there, done that

Triulzi said he’s seen plenty of firmware-targeting malware in the laboratory. A client of his once infected the UEFI-based BIOS of his Mac laptop as part of an experiment. Five years ago, Triulzi himself developed proof-of-concept malware that stealthily infected the network interface controllers that sit on a computer motherboard and provide the Ethernet jack that connects the machine to a network. His research built on work by John Heasman that demonstrated how to plant hard-to-detect malware known as a rootkit in a computer’s peripheral component interconnect, the Intel-developed connection that attaches hardware devices to a CPU.

It’s also possible to use high-frequency sounds broadcast over speakers to send network packets. Early networking standards used the technique, said security expert Rob Graham. Ultrasonic-based networking is also the subject of a great deal of research, including a project by scientists at MIT.
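
To make the idea concrete, here is a minimal Python sketch (assuming numpy is installed) of the simple frequency-shift keying such experiments typically rely on: each bit becomes a short tone at one of two near-ultrasonic frequencies, written to a WAV file that a laptop speaker could play. It is a toy illustration of the principle, not a reconstruction of anything Ruiu observed.

    # Toy frequency-shift keying near the top of the audible range:
    # bit 0 -> 18 kHz tone, bit 1 -> 19 kHz tone. Illustrative only.
    import wave
    import numpy as np

    RATE = 48000           # samples per second
    BIT_MS = 50            # duration of each bit, in milliseconds
    F0, F1 = 18000, 19000  # carrier frequencies for 0 and 1

    def modulate(bits):
        samples_per_bit = int(RATE * BIT_MS / 1000)
        t = np.arange(samples_per_bit) / RATE
        chunks = [np.sin(2 * np.pi * (F1 if b else F0) * t) for b in bits]
        return np.concatenate(chunks)

    def to_wav(signal, path="ultrasonic_demo.wav"):
        pcm = (signal * 32767).astype(np.int16)
        with wave.open(path, "wb") as w:
            w.setnchannels(1)
            w.setsampwidth(2)       # 16-bit samples
            w.setframerate(RATE)
            w.writeframes(pcm.tobytes())

    to_wav(modulate([1, 0, 1, 1, 0, 0, 1, 0]))   # one example byte

The receiving side is the harder half in practice: a microphone has to pick the tones out of room noise and echo and demodulate them back into bits, which is where most of the engineering effort in such projects goes.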

Of course, it’s one thing for researchers in the lab to demonstrate viable firmware-infecting rootkits and ultra high-frequency networking techniques. But as Triulzi suggested, it’s another thing entirely to seamlessly fuse the two together and use the weapon in the real world against a seasoned security consultant. What’s more, use of a USB stick to infect an array of computer platforms at the BIOS level rivals the payload delivery system found in the state-sponsored Stuxnet worm unleashed to disrupt Iran’s nuclear program. And the reported ability of badBIOS to bridge airgaps also has parallels to Flame, another state-sponsored piece of malware that used Bluetooth radio signals to communicate with devices not connected to the Internet.

“Really, everything Dragos reports is something that’s easily within the capabilities of a lot of people,” said Graham, who is CEO of penetration testing firm Errata Security. “I could, if I spent a year, write a BIOS that does everything Dragos said badBIOS is doing. To communicate over ultrahigh frequency sound waves between computers is really, really easy.”

Coincidentally, Italian newspapers this week reported that Russian spies attempted to monitor attendees of last month’s G20 economic summit by giving them memory sticks and recharging cables programmed to intercept their communications.

Eureka

For most of the three years that Ruiu has been wrestling with badBIOS, its infection mechanism remained a mystery. A month or two ago, after buying a new computer, he noticed that it was almost immediately infected as soon as he plugged one of his USB drives into it. He soon theorized that infected computers have the ability to contaminate USB devices and vice versa.

“The suspicion right now is there’s some kind of buffer overflow in the way the BIOS is reading the drive itself, and they’re reprogramming the flash controller to overflow the BIOS and then adding a section to the BIOS table,” he explained.

He still doesn’t know if a USB stick was the initial infection trigger for his MacBook Air three years ago, or if the USB devices were infected only after they came into contact with his compromised machines, which he said now number between one and two dozen. He said he has been able to identify a variety of USB sticks that infect any computer they are plugged into. At next month’s PacSec conference, Ruiu said he plans to get access to expensive USB analysis hardware that he hopes will provide new clues behind the infection mechanism.

He said he suspects badBIOS is only the initial module of a multi-staged payload that has the ability to infect the Windows, Mac OS X, BSD, and Linux operating systems.

“It’s going out over the network to get something or it’s going out to the USB key that it was infected from,” he theorized. “That’s also the conjecture of why it’s not booting CDs. It’s trying to keep its claws, as it were, on the machine. It doesn’t want you to boot another OS it might not have code for.”

To put it another way, he said, badBIOS “is the tip of the warhead, as it were.”

“Things kept getting fixed”

Ruiu said he arrived at the theory about badBIOS’s high-frequency networking capability after observing encrypted data packets being sent to and from an infected laptop that had no obvious network connection with—but was in close proximity to—another badBIOS-infected computer. The packets were transmitted even when the laptop had its Wi-Fi and Bluetooth cards removed. Ruiu also disconnected the machine’s power cord so it ran only on battery to rule out the possibility that it was receiving signals over the electrical connection. Even then, forensic tools showed the packets continued to flow over the airgapped machine. Then, when Ruiu removed the internal speaker and microphone connected to the airgapped machine, the packets suddenly stopped.

With the speakers and mic intact, Ruiu said, the isolated computer seemed to be using the high-frequency connection to maintain the integrity of the badBIOS infection as he worked to dismantle software components the malware relied on.

“The airgapped machine is acting like it’s connected to the Internet,” he said. “Most of the problems we were having is we were slightly disabling bits of the components of the system. It would not let us disable some things. Things kept getting fixed automatically as soon as we tried to break them. It was weird.”

It’s too early to say with confidence that what Ruiu has been observing is a USB-transmitted rootkit that can burrow into a computer’s lowest levels and use it as a jumping off point to infect a variety of operating systems with malware that can’t be detected. It’s even harder to know for sure that infected systems are using high-frequency sounds to communicate with isolated machines. But after almost two weeks of online discussion, no one has been able to rule out these troubling scenarios, either.

“It looks like the state of the art in intrusion stuff is a lot more advanced than we assumed it was,” Ruiu concluded in an interview. “The take-away from this is a lot of our forensic procedures are weak when faced with challenges like this. A lot of companies have to take a lot more care when they use forensic data if they’re faced with sophisticated attackers.”

Source:  arstechnica.com

FCC lays down spectrum rules for national first-responder network

Tuesday, October 29th, 2013

The agency will also start processing applications for equipment certification

The U.S. moved one step closer to having a unified public safety network on Monday when the Federal Communications Commission approved rules for using spectrum set aside for the system.

Also on Monday, the agency directed its Office of Engineering and Technology to start processing applications from vendors to have their equipment certified to operate in that spectrum.

The national network, which will operate in the prized 700MHz band, is intended to replace a patchwork of systems used by about 60,000 public safety agencies around the country. The First Responder Network Authority (FirstNet) would operate the system and deliver services on it to those agencies. The move is intended to enable better coordination among first responders and give them more bandwidth for transmitting video and other rich data types.

The rules approved by the FCC include power limits and other technical parameters for operating in the band. Locking them down should help prevent harmful interference with users in adjacent bands and drive the availability of equipment for FirstNet’s network, the agency said.

A national public safety network was recommended by a task force that reviewed the Sept. 11, 2001, terror attacks on the U.S. The Middle Class Tax Relief and Job Creation Act of 2012 called for auctions of other spectrum to cover the cost of the network, which was estimated last year at US$7 billion.

The public safety network is required to cover 95 percent of the U.S., including all 50 states, the District of Columbia and U.S. territories. It must reach 98 percent of the country’s population.

Source:  computerworld.com

Top three indicators of compromised web servers

Thursday, October 24th, 2013

You slowly push open your front door, which is unusually unlocked, only to find that your home has been ransacked. A broken window, missing cash: all signs that someone has broken in and you have been robbed.

In the physical world, it is easy to understand what an indicator of compromise means for a robbery: it is simply everything that clues you in to the fact that the event occurred. In the digital world, however, things are another story.

My area of expertise is breaking into web applications. I’ve spent many years as a penetration tester attempting to gain access to internal networks through web applications connected to the Internet. I developed this expertise because of the prevalence of exploitable vulnerabilities that made it simple to achieve my goal. In a world of phishing and drive-by downloads, the web layer is often a complicated, overlooked compromise domain.

A perimeter web server is a gem of a host for any would-be attacker to control. It often enjoys full Internet connectivity with minimal downtime while also providing an internal connection to the target network. These servers routinely see attacks, heavy user traffic, bad login attempts, and plenty of other activity that allows a real compromise to blend in with “normal” behavior. The nature of many web applications running on these servers is such that encoding, obfuscation, file write operations, and even interaction with the underlying operating system are all natively supported, providing much of the functionality an attacker needs to do their bidding. Perimeter web servers can also be used after a compromise has occurred elsewhere in the network to retain remote access, so that pesky two-factor VPNs can be avoided.

With all the reasons an attacker has to go after a web server, it’s a wonder that there isn’t a wealth of information available for detecting a server compromise by way of the application layer. Perhaps the sheer number of web servers, application frameworks, components, and web applications culminates in a situation too varied for any analyst to approach with a common set of indicators. While this is certainly no easy task, there are a few common areas that can be evaluated to detect a compromise with a high degree of success.

#1 Web shells

Often the product of vulnerable image uploaders and other poorly controlled file write operations, a web shell is simply a file that has been written to the web server’s file system for the purpose of executing commands. Web shells are most commonly text files with the appropriate extension to allow execution by the underlying application framework, obvious examples being commandshell.php or cmd.aspx. Viewing the text file generally reveals code that allows an attacker to interact with the underlying operating system via built-in calls such as the ProcessStartInfo() constructor in .NET or the system() call in PHP. The presence of a web shell on any web server is a clear indicator of compromise in virtually every situation.

Web Shell IOCs (Indicators of Compromise)

  • Scan all files in the web root for operating system calls, given the installed application frameworks (a sketch of this check follows the list)
  • Check for the existence of executable files or web application code in upload directories or other non-standard locations
  • Parse web server logs to detect commands being passed as GET requests or successive POST requests to suspicious web scripts
  • Flag new processes created by the web server process; when should it ever really need to launch cmd.exe?
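
A minimal Python sketch of the first two checks might look like the following. The web root path, the upload-directory hints and the list of suspicious calls are assumptions; adapt them to the frameworks actually installed.

    # Sketch: flag files in the web root that contain OS-level calls, or that
    # sit in an upload directory with an executable extension. The paths and
    # patterns below are illustrative assumptions.
    import os
    import re

    WEB_ROOT = "/var/www"                      # assumed web root
    UPLOAD_DIR_HINTS = ("upload", "uploads")   # assumed upload directory names
    EXEC_EXTENSIONS = (".php", ".aspx", ".asp", ".jsp")
    OS_CALL_PATTERN = re.compile(
        rb"system\s*\(|exec\s*\(|shell_exec|passthru|ProcessStartInfo|Runtime\.getRuntime"
    )

    for dirpath, _dirs, files in os.walk(WEB_ROOT):
        in_upload_dir = any(h in dirpath.lower() for h in UPLOAD_DIR_HINTS)
        for name in files:
            path = os.path.join(dirpath, name)
            ext = os.path.splitext(name)[1].lower()
            try:
                with open(path, "rb") as fh:
                    data = fh.read(512 * 1024)   # the start of the file is enough
            except OSError:
                continue
            if OS_CALL_PATTERN.search(data):
                print("possible web shell (OS call):", path)
            if in_upload_dir and ext in EXEC_EXTENSIONS:
                print("executable file in upload directory:", path)

Legitimate administrative scripts will trip the same patterns, so treat hits as leads to review rather than verdicts.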

#2 Administrative interfaces

Many web application frameworks and custom web applications have some form of administrative interface. These interfaces often suffer from password issues and other vulnerabilities that allow an attacker to gain access to this component. Once inside, an attacker can use all of the built-in functionality to further compromise the host or its users. While each application will have its own logging and functionality, there are some common IOCs that should be investigated (a short log-scanning sketch follows the list below).

Admin interface IOCs

  • Unplanned deployment events such as pushing out a .war file in a Java based application
  • Modification of user accounts
  • Creation or editing of scheduled tasks or maintenance events
  • Unplanned configuration updates or backup operations
  • Failed or non-standard login events
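
Most of these events leave traces in the application’s own logs. As one assumed example, a Tomcat manager deployment shows up in the access log as a request to the manager application, so a short scan like the Python sketch below can surface unplanned .war pushes; the log path and combined-log format here are assumptions for a default Tomcat layout.

    # Sketch: look for manager-app deployments in a Tomcat access log.
    # The log location and the combined-log format are assumptions for a
    # default Tomcat install; adjust both to your environment.
    import re

    ACCESS_LOG = "/opt/tomcat/logs/localhost_access_log.txt"   # assumed path
    DEPLOY_HINTS = ("/manager/text/deploy", "/manager/html/upload")

    pattern = re.compile(r'^(\S+) \S+ \S+ \[([^\]]+)\] "(\S+) (\S+)')

    with open(ACCESS_LOG, errors="ignore") as fh:
        for line in fh:
            m = pattern.match(line)
            if not m:
                continue
            client, when, method, path = m.groups()
            if any(hint in path for hint in DEPLOY_HINTS):
                print(f"{when}: {method} {path} from {client}")

Matching those timestamps against the change calendar is usually enough to separate a planned release from an attacker pushing a malicious archive.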

#3 General attack activity

The typical web hacker will not fire up a favorite commercial security scanner to try to find ways into your web application; they tend to prefer a more manual approach. An attacker’s ability to quietly test your web application for exploitable vulnerabilities makes this a high-reward, low-risk activity. During this probing the intruder will focus on the exploits that lead to the goal of obtaining access. A keen eye can detect some of this activity and isolate it to a source (a log-scanning sketch follows the list below).

General attack IOCs

  • Scan web server logs for 500 errors and for errors handled within the application itself. Database errors (suggesting SQL injection), path errors (file write or read operations), and permission errors are prime candidates for indicating an issue
  • Known sensitive file access via the web server process. Investigate whether web configuration files like WEB-INF/web.xml, sensitive operating system files like /etc/passwd, or fixed-location operating system files like C:\WINDOWS\system.ini have been accessed via the web server process
  • Advanced search engine operators in referrer headers. It is not common for a web visitor to access your site directly from an inurl:foo ext:bar Google search
  • Large quantities of 404 “page not found” errors with suspicious file names, which may indicate an attempt to access unlinked areas of an application
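
As a starting point for the log-based checks above, the Python sketch below walks a combined-format access log and flags three of the patterns in the list: server errors, requests touching known sensitive files, and clients piling up 404s. The log path, the sensitive-path list and the 404 threshold are assumptions to tune to your own traffic.

    # Sketch: flag 500 errors, sensitive-file access, and noisy 404 sources
    # in a combined-format access log. Path and thresholds are assumptions.
    import re
    from collections import Counter

    ACCESS_LOG = "/var/log/apache2/access.log"      # assumed path
    SENSITIVE = ("/etc/passwd", "web-inf/web.xml", "system.ini")
    NOT_FOUND_THRESHOLD = 50                        # 404s from a single client

    pattern = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "\S+ (\S+)[^"]*" (\d{3})')
    not_found = Counter()

    with open(ACCESS_LOG, errors="ignore") as fh:
        for line in fh:
            m = pattern.match(line)
            if not m:
                continue
            client, path, status = m.group(1), m.group(2).lower(), m.group(3)
            if status.startswith("5"):
                print("server error:", line.strip())
            if any(s in path for s in SENSITIVE):
                print("sensitive path requested:", line.strip())
            if status == "404":
                not_found[client] += 1

    for client, count in not_found.most_common():
        if count >= NOT_FOUND_THRESHOLD:
            print(f"{client} generated {count} 404s (possible forced browsing)")

Referrer headers carrying search operators such as inurl: can be filtered out of the same log in much the same way.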

Web application IOCs still suffer from the same issues as their more traditional counterparts, in that an attacker’s behavior must be fairly predictable for their activity to be detected. If we’re honest with ourselves, an attacker’s ability to avoid detection is limited only by their creativity and skill set. An advanced attacker could easily avoid creating most, if not all, of the indicators in this article. That said, many attackers are not as advanced as the media makes them out to be; even better, some of them are just plain lazy. Armed with the web-specific IOCs above, the next time you walk up to the unlocked front door of your ransacked web server, you might actually get to see who has their hand in your cookie jar.

Source:  techrepublic.com

Cisco fixes serious security flaws in networking, communications products

Thursday, October 24th, 2013

Cisco Systems released software security updates Wednesday to address denial-of-service and arbitrary command execution vulnerabilities in several products, including a known flaw in the Apache Struts development framework used by some of them.

The company released new versions of Cisco IOS XR Software to fix an issue with handling fragmented packets that can be exploited to trigger a denial-of-service condition on various Cisco CRS Route Processor cards. The affected cards and the patched software versions available for them are listed in a Cisco advisory.

The company also released security updates for Cisco Identity Services Engine (ISE), a security policy management platform for wired, wireless, and VPN connections. The updates fix a vulnerability that could be exploited by authenticated remote attackers to execute arbitrary commands on the underlying operating system and a separate vulnerability that could allow attackers to bypass authentication and download the product’s configuration or other sensitive information, including administrative credentials.

Cisco also released updates that fix a known Apache Struts vulnerability in several of its products, including ISE. Apache Struts is a popular open-source framework for developing Java-based Web applications.

The vulnerability, identified as CVE-2013-2251, is located in Struts’ DefaultActionMapper component and was patched by Apache in Struts version 2.3.15.1, which was released in July.

The new Cisco updates integrate that patch into the Struts version used by Cisco Business Edition 3000, Cisco Identity Services Engine, Cisco Media Experience Engine (MXE) 3500 Series and Cisco Unified SIP Proxy.

“The impact of this vulnerability on Cisco products varies depending on the affected product,” Cisco said in an advisory. “Successful exploitation on Cisco ISE, Cisco Unified SIP Proxy, and Cisco Business Edition 3000 could result in an arbitrary command executed on the affected system.”

No authentication is needed to execute the attack on Cisco ISE and Cisco Unified SIP Proxy, but the flaw’s successful exploitation on Cisco Business Edition 3000 requires the attacker to have valid credentials or trick a user with valid credentials into executing a malicious URL, the company said.

“Successful exploitation on the Cisco MXE 3500 Series could allow the attacker to redirect the user to a different and possibly malicious website, however arbitrary command execution is not possible on this product,” Cisco said.

Security researchers from Trend Micro reported in August that Chinese hackers are attacking servers running Apache Struts applications by using an automated tool that exploits several Apache Struts remote command execution vulnerabilities, including CVE-2013-2251.

The existence of an attack tool in the cybercriminal underground for exploiting Struts vulnerabilities increases the risk for organizations using the affected Cisco products.

In addition, since patching CVE-2013-2251, the Apache Struts developers have further hardened the DefaultActionMapper component in more recent releases.

Struts version 2.3.15.2, which was released in September, made changes to the DefaultActionMapper “action:” prefix, which is used to attach navigational information to buttons within forms, in order to mitigate an issue that could be exploited to circumvent security constraints. That issue has been assigned the identifier CVE-2013-4310.

Struts 2.3.15.3, released on Oct. 17, turned off support for the “action:” prefix by default and added two new settings called “struts.mapper.action.prefix.enabled” and “struts.mapper.action.prefix.crossNamespaces” that can be used to better control the behavior of DefaultActionMapper.

The Struts developers said that upgrading to Struts 2.3.15.3 is strongly recommended, but held back on releasing more details about CVE-2013-4310 until the patch is widely adopted.

It’s not clear when or if Cisco will patch CVE-2013-4310 in its products, given that the fix appears to involve disabling support for the “action:” prefix. If the Struts applications in those products use the “action:” prefix, the company might need to rework some of their code.

Source:  computerworld.com