Archive for the ‘Hardware’ Category

Wireless Case Studies: Cellular Repeater and DAS

Friday, February 7th, 2014

Gyver Networks recently designed and installed a cellular bi-directional amplifier (BDA) and distributed antenna system (DAS) for an internationally renowned preparatory and boarding school in Massachusetts.

BDA Challenge: Faculty, students, and visitors were unable to access any cellular voice or data services at one of this historic campus’ sports complexes; 3G and 4G cellular reception at the suburban Boston location was virtually nonexistent.

Of particular concern to the school was the fact that the safety of its student-athletes would be jeopardized in the event of a serious injury, with precious minutes lost as faculty were forced to scramble to find the nearest landline – or leave the building altogether in search of cellular signal – to contact first responders.

Additionally, since internal communications between management and facilities personnel around the campus took place via mobile phone, lack of cellular signal at the sports complex required staff to physically leave the site just to find adequate reception.

Resolution: Gyver Networks engineers performed a cellular site survey of selected carriers throughout the complex to acquire a precise snapshot of the RF environment. After selecting the optimal donor tower signal for each cell carrier, Gyver then engineered and installed a distributed antenna system (DAS) to retransmit the amplified signal put out by the bi-directional amplifier (BDA) inside the building.

The high-gain, dual-band BDA chosen for the system offered scalability across selected cellular and PCS bands, as well as the flexibility to reconfigure band settings on an as-needed basis, providing enhancement capabilities for all major carriers now and in the future.

Every objective set forth by the school’s IT department has been satisfied with the deployment of this cellular repeater and DAS: All areas of the athletic complex now enjoy full 3G and 4G voice and data connectivity; safety and liability concerns have been mitigated; and campus personnel are able to maintain mobile communications regardless of where they are in the complex.

Cisco promises to fix admin backdoor in some routers

Monday, January 13th, 2014

Cisco Systems promised to issue firmware updates removing a backdoor from a wireless access point and two of its routers later this month. The undocumented feature could allow unauthenticated remote attackers to gain administrative access to the devices.

The vulnerability was discovered over the Christmas holiday on a Linksys WAG200G router by a security researcher named Eloi Vanderbeken. He found that the device had a service listening on port 32764 TCP, and that connecting to it allowed a remote user to send unauthenticated commands to the device and reset the administrative password.

It was later reported by other users that the same backdoor was present in multiple devices from Cisco, Netgear, Belkin and other manufacturers. On many devices this undocumented interface can only be accessed from the local or wireless network, but on some devices it is also accessible from the Internet.
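For administrators who want to check their own equipment, a simple TCP reachability test is enough to see whether anything is answering on the undocumented port. The sketch below (Python, with a placeholder router address) only tests whether the port accepts connections; it does not attempt to speak the undocumented protocol.

```python
import socket

def backdoor_port_open(host: str, port: int = 32764, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the given port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical example: test the router's LAN address from inside the network.
print(backdoor_port_open("192.168.1.1"))
```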

Cisco identified the vulnerability in its WAP4410N Wireless-N Access Point, WRVS4400N Wireless-N Gigabit Security Router and RVS4000 4-port Gigabit Security Router. The company is no longer responsible for Linksys routers, as it sold that consumer division to Belkin early last year.

The vulnerability is caused by a testing interface that can be accessed from the LAN side on the WRVS4400N and RVS4000 routers, and also from the wireless network on the WAP4410N wireless access point.

“An attacker could exploit this vulnerability by accessing the affected device from the LAN-side interface and issuing arbitrary commands in the underlying operating system,” Cisco said in an advisory published Friday. “An exploit could allow the attacker to access user credentials for the administrator account of the device, and read the device configuration. The exploit can also allow the attacker to issue arbitrary commands on the device with escalated privileges.”

The company noted that there are no known workarounds that could mitigate this vulnerability in the absence of a firmware update.

The SANS Internet Storm Center, a cyber threat monitoring organization, warned at the beginning of the month that it detected probes for port 32764 TCP on the Internet, most likely targeting this vulnerability.

Source:  networkworld.com

Scientist-developed malware covertly jumps air gaps using inaudible sound

Tuesday, December 3rd, 2013

Malware communicates at a distance of 65 feet using built-in mics and speakers.

Computer scientists have developed a malware prototype that uses inaudible audio signals to communicate, a capability that allows the malware to covertly transmit keystrokes and other sensitive data even when infected machines have no network connection.

The proof-of-concept software—or malicious trojans that adopt the same high-frequency communication methods—could prove especially adept in penetrating highly sensitive environments that routinely place an “air gap” between computers and the outside world. Using nothing more than the built-in microphones and speakers of standard computers, the researchers were able to transmit passwords and other small amounts of data from distances of almost 65 feet. The software can transfer data at much greater distances by employing an acoustical mesh network made up of attacker-controlled devices that repeat the audio signals.

The researchers, from Germany’s Fraunhofer Institute for Communication, Information Processing, and Ergonomics, recently disclosed their findings in a paper published in the Journal of Communications. It came a few weeks after a security researcher said his computers were infected with a mysterious piece of malware that used high-frequency transmissions to jump air gaps. The new research neither confirms nor disproves Dragos Ruiu’s claims of the so-called badBIOS infections, but it does show that high-frequency networking is easily within the grasp of today’s malware.

“In our article, we describe how the complete concept of air gaps can be considered obsolete as commonly available laptops can communicate over their internal speakers and microphones and even form a covert acoustical mesh network,” one of the authors, Michael Hanspach, wrote in an e-mail. “Over this covert network, information can travel over multiple hops of infected nodes, connecting completely isolated computing systems and networks (e.g. the internet) to each other. We also propose some countermeasures against participation in a covert network.”

The researchers developed several ways to use inaudible sounds to transmit data between two Lenovo T400 laptops using only their built-in microphones and speakers. The most effective technique relied on software originally developed to acoustically transmit data under water. Created by the Research Department for Underwater Acoustics and Geophysics in Germany, the so-called adaptive communication system (ACS) modem was able to transmit data between laptops as much as 19.7 meters (64.6 feet) apart. By chaining additional devices that pick up the signal and repeat it to other nearby devices, the mesh network can overcome much greater distances.

The ACS modem provided better reliability than other techniques that were also able to use only the laptops’ speakers and microphones to communicate. Still, it came with one significant drawback—a transmission rate of about 20 bits per second, a tiny fraction of standard network connections. The paltry bandwidth rules out transmitting video or any other kind of large file. The researchers said attackers could overcome that shortcoming by equipping the trojan with functions that transmit only certain types of data, such as login credentials captured from a keylogger or a memory dumper.

“This small bandwidth might actually be enough to transfer critical information (such as keystrokes),” Hanspach wrote. “You don’t even have to think about all keystrokes. If you have a keylogger that is able to recognize authentication materials, it may only occasionally forward these detected passwords over the network, leading to a very stealthy state of the network. And you could forward any small-sized information such as private encryption keys or maybe malicious commands to an infected piece of construction.”
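To make the general idea concrete, here is a minimal sketch, not the researchers’ ACS modem, that encodes a few bits as near-ultrasonic tones and writes them to a WAV file. The 50 ms-per-bit timing roughly matches the 20 bit/s figure cited above; the frequencies, durations, and output filename are illustrative assumptions.

```python
import wave
import numpy as np

FS = 48_000                      # sample rate (Hz)
BIT_SECONDS = 0.05               # 50 ms per bit, roughly 20 bit/s
FREQ = {0: 18_000, 1: 19_000}    # near-ultrasonic tones (Hz), assumed values

def encode(bits):
    """Turn a list of 0/1 bits into int16 audio samples, one tone per bit."""
    t = np.arange(int(FS * BIT_SECONDS)) / FS
    chunks = [np.sin(2 * np.pi * FREQ[b] * t) for b in bits]
    samples = np.concatenate(chunks)
    return (samples * 32767 * 0.3).astype(np.int16)   # keep the volume modest

payload = [1, 0, 1, 1, 0, 0, 1, 0]
with wave.open("covert.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(FS)
    w.writeframes(encode(payload).tobytes())
```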

Remember Flame?

The hurdles of implementing covert acoustical networking are high enough that few malware developers are likely to add it to their offerings anytime soon. Still, the requirements are modest when measured against the capabilities of Stuxnet, Flame, and other state-sponsored malware discovered in the past 18 months. And that means that engineers in military organizations, nuclear power plants, and other truly high-security environments should no longer assume that computers isolated from an Ethernet or Wi-Fi connection are off limits.

The research paper suggests several countermeasures that potential targets can adopt. One approach is simply switching off audio input and output devices, although few hardware designs available today make this most obvious countermeasure easy. A second approach is to employ audio filtering that blocks the high-frequency ranges used to covertly transmit data. Devices running Linux can do this by using the Advanced Linux Sound Architecture in combination with the Linux Audio Developer’s Simple Plugin API. Similar approaches are probably available for Windows and Mac OS X computers as well. The researchers also proposed the use of an audio intrusion detection guard, a device that would “forward audio input and output signals to their destination and simultaneously store them inside the guard’s internal state, where they are subject to further analyses.”
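The audio-filtering countermeasure amounts to putting a low-pass filter in front of the sound card. A rough illustration of the idea, assuming SciPy is available and an example 16 kHz cutoff (the exact band to block would depend on the hardware), might look like this:

```python
import numpy as np
from scipy.signal import butter, lfilter

FS = 48_000        # sample rate of the audio stream (Hz)
CUTOFF = 16_000    # pass speech and music, attenuate the near-ultrasonic band

def strip_high_frequencies(samples: np.ndarray) -> np.ndarray:
    """Apply a 6th-order Butterworth low-pass filter to one channel of audio."""
    b, a = butter(6, CUTOFF / (FS / 2), btype="low")
    return lfilter(b, a, samples)

# Example: filter one second of synthetic audio containing an 18 kHz tone.
t = np.arange(FS) / FS
noisy = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 18_000 * t)
clean = strip_high_frequencies(noisy)
```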

Source:  arstechnica.com

Why security benefits boost mid-market adoption of virtualization

Monday, December 2nd, 2013

While virtualization has undoubtedly already found its footing in larger businesses and data centers, the technology is still in the process of catching on in the middle market. A recent study conducted by a group of Cisco Partner Firms, titled “Virtualization on the Rise,” indicates just that: the prevalence of virtualization is continuing to expand and has so far proven to be a success for many small- and medium-sized businesses.

Among firms where virtualization has yet to catch on, however, security is often the point of contention.

Cisco’s study found that adoption rates for virtualization are already quite high at small- to medium-sized businesses, with 77 percent of respondents indicating that they already had some type of virtualization in place around their office. These types of solutions included server virtualization, a virtual desktop infrastructure, storage virtualization, network virtualization, and remote desktop access, among others. Server virtualization was the most commonly used, with 59 percent of respondents (that said they had adopted virtualization in some form) stating that it was their solution of choice.

That said, some businesses obviously have yet to adopt virtualization, and a healthy chunk of respondents – 51 percent – cited security as a reason. Larger companies with over 100 employees appeared to be more concerned about the security of virtualization, with 60 percent of that demographic qualifying it as their barrier to entry (while only 33 percent of smaller firms shared the same concern).

But with Cisco’s study lacking any other specificity in terms of why exactly the respondents were concerned about the security of virtualization, one can’t help but wonder: is this necessarily sound reasoning? Craig Jeske, the business development manager for virtualization and cloud at Global Technology Resources, shed some light on the subject.

“I think [virtualization] gives a much easier, more efficient, and agile response to changing demands, and that includes responding to security threats,” said Jeske. “It allows for a faster response than if you had to deploy new physical tools.”

He went on to explain that given how virtualization enhances portability and makes it easier to back up data, it subsequently makes it easier for companies to get back to a known state in the event of some sort of compromise. This kind of flexibility limits attackers’ options.

“Thanks to the agility provided by virtualization, it changes the attack vectors that people can come at us from,” he said.

As for the 33 percent of smaller firms that cited security as a barrier to entry – thereby suggesting that the smaller companies were more willing to take the perceived “risk” of adopting the technology – Jeske said that was simply because virtualization makes more sense for businesses of that size.

“When you have a small budget, the cost savings [from virtualization] are more dramatic, since it saves space and calls for a lower upfront investment,” he said. On the flip side, the upfront cost for any new IT direction is higher for a larger business. It’s easier to make a shift when a company has 20 servers versus 20 million servers; while the return on virtualization is higher for a larger company, so is the upfront investment.

Of course, there is also the obvious fact that with smaller firms, the potential loss as a result of taking such a risk isn’t as great.

“With any type of change, the risk is lower for a smaller business than for a multimillion dollar firm,” he said. “With bigger businesses, any change needs to be looked at carefully. Because if something goes wrong, regardless of what the cause was, someone’s losing their job.”

Jeske also addressed the fact that some of the security concerns indicated by the study results may have stemmed from some teams recognizing that they weren’t familiar with the technology. That lack of comfort with virtualization – for example, not knowing how to properly implement or deploy it – could make virtualization less secure, but it’s not inherently insecure. Security officers, he stressed, are always most comfortable with what they know.

“When you know how to handle virtualization, it’s not a security detriment,” he said. “I’m hesitant to make a change until I see the validity and justification behind that change. You can understand people’s aversion from a security standpoint and first just from the standpoint of needing to understand it before jumping in.”

But the technology itself, Jeske reiterated, has plenty of security benefits.

“Since everything is virtualized, it’s easier to respond to a threat because it’s all available from everywhere. You don’t have to have the box,” he said. “The more we’re tied to these servers and our offices, the easier it is to respond.”

And with every element encompassed in a software package, he said, businesses might be able to do more to each virtual server than they could in the physical world. Virtual firewalls, intrusion detection, and the like can all be deployed as applications and placed closer to the machine itself, so firms don’t have to bring things back out into the physical environment.

This also allows for easier, faster changes in security environments. One change can be propagated across the entire virtual environment automatically, rather than having to push it out individually to each physical device protecting a company’s systems.

Jeske noted that there are benefits from a physical security standpoint, as well, namely because somebody else takes care of it for you. The servers hosting the virtualized solutions are somewhere far away, and the protection of those servers is somebody else’s responsibility.

But given the rapid proliferation of virtualization, Jeske warned that security teams need to stay ahead of the game. Otherwise, it’s going to be harder to properly adopt the technology when they no longer have a choice.

“With virtualization, speed of deployment and speed of reaction are the biggest things,” said Jeske. “The servers and desktops are going to continue to get virtualized whether officers like it or not. So they need to be proactive and stay in front of it, otherwise they can find themselves in a bad position further on down the road.”

Source:  csoonline.com

This new worm targets Linux PCs and embedded devices

Wednesday, November 27th, 2013

A new worm is targeting x86 computers running Linux and PHP, and variants may also pose a threat to devices such as home routers and set-top boxes based on other chip architectures.

According to security researchers from Symantec, the malware spreads by exploiting a vulnerability in php-cgi, a component that allows PHP to run in the Common Gateway Interface (CGI) configuration. The vulnerability is tracked as CVE-2012-1823 and was patched in PHP 5.4.3 and PHP 5.3.13 in May 2012.
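Since the flaw was fixed in specific PHP releases, a first triage step on a Linux server is simply comparing the installed PHP version against the patched releases named above. A small sketch of that comparison (the version strings are simplified examples; branches not covered by the fix should be checked separately):

```python
# Patched releases cited in the advisory: 5.3.13 and 5.4.3.
PATCHED = {(5, 3): (5, 3, 13), (5, 4): (5, 4, 3)}

def is_vulnerable(version: str) -> bool:
    """Return True if a plain x.y.z version string predates the patched release."""
    major, minor, patch = (int(x) for x in version.split(".")[:3])
    fixed = PATCHED.get((major, minor))
    if fixed is None:
        return False   # branch not covered here; verify against vendor advisories
    return (major, minor, patch) < fixed

for v in ("5.3.10", "5.3.13", "5.4.0", "5.4.3"):
    print(v, "vulnerable" if is_vulnerable(v) else "patched or out of scope")
```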

The new worm, which was named Linux.Darlloz, is based on proof-of-concept code released in late October, the Symantec researchers said Wednesday in a blog post.

“Upon execution, the worm generates IP [Internet Protocol] addresses randomly, accesses a specific path on the machine with well-known ID and passwords, and sends HTTP POST requests, which exploit the vulnerability,” the Symantec researchers explained. “If the target is unpatched, it downloads the worm from a malicious server and starts searching for its next target.”

The only variant seen to be spreading so far targets x86 systems, because the malicious binary downloaded from the attacker’s server is in ELF (Executable and Linkable Format) format for Intel architectures.

However, the Symantec researchers claim the attacker also hosts variants of the worm for other architectures including ARM, PPC, MIPS and MIPSEL.

These architectures are used in embedded devices like home routers, IP cameras, set-top boxes and many others.

“The attacker is apparently trying to maximize the infection opportunity by expanding coverage to any devices running on Linux,” the Symantec researchers said. “However, we have not confirmed attacks against non-PC devices yet.”

The firmware of many embedded devices is based on some type of Linux and includes a Web server with PHP for the Web-based administration interface. These kinds of devices might be easier to compromise than Linux PCs or servers because they don’t receive updates very often.

Patching vulnerabilities in embedded devices has never been an easy task. Many vendors don’t issue regular updates and when they do, users are often not properly informed about the security issues fixed in those updates.

In addition, installing an update on embedded devices requires more work and technical knowledge than updating regular software installed on a computer. Users have to know where the updates are published, download them manually and then upload them to their devices through a Web-based administration interface.

“Many users may not be aware that they are using vulnerable devices in their homes or offices,” the Symantec researchers said. “Another issue we could face is that even if users notice vulnerable devices, no updates have been provided to some products by the vendor, because of outdated technology or hardware limitations, such as not having enough memory or a CPU that is too slow to support new versions of the software.”

To protect their devices from the worm, users are advised to verify if those devices run the latest available firmware version, update the firmware if needed, set up strong administration passwords and block HTTP POST requests to /cgi-bin/php, /cgi-bin/php5, /cgi-bin/php-cgi, /cgi-bin/php.cgi and /cgi-bin/php4, either from the gateway firewall or on each individual device if possible, the Symantec researchers said.

Source:  computerworld.com

High-gain patch antennas boost Wi-Fi capacity for Georgia Tech

Tuesday, November 5th, 2013

To boost its Wi-Fi capacity in packed lecture halls, Georgia Institute of Technology gave up trying to cram in more access points, with conventional omni-directional antennas, and juggle power settings and channel plans. Instead, it turned to new high-gain directional antennas, from Tessco’s Ventev division.

Ventev’s new TerraWave High-Density Ceiling Mount Antenna, which looks almost exactly like the bottom half of a small pizza box, focuses the Wi-Fi signal from the ceiling-mounted Cisco access point in a precise cone-shaped pattern, covering part of the lecture hall floor. Instead of the flaky, laggy connections professors had been complaining about, users now consistently get up to 144Mbps (if they have 802.11n client radios).

“Overall, the system performed much better” with the Ventev antennas, says William Lawrence, IT project manager principal with the university’s academic and research technologies group. “And there was a much more even distribution of clients across the room’s access points.”

Initially, these 802.11n access points were running 40-MHz channels, but Lawrence’s team eventually switched to the narrower 20 MHz. “We saw more consistent performance for clients in the 20-MHz channel, and I really don’t know why,” he says. “It seems like the clients were doing a lot of shifting between using 40 MHz and 20 MHz. With the narrower channel, it was very smooth and consistent: we got great video playback.”

With the narrower channel, 11n clients can’t achieve their maximum 11n throughput. But that doesn’t seem to have been a problem in these select locations, Lawrence says. “We’ve not seen that to be an issue, but we’re continuing to monitor it,” he says.
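The 144Mbps figure quoted earlier is the usual 802.11n ceiling for two spatial streams in a 20-MHz channel with a short guard interval; doubling the channel width roughly doubles the PHY rate, which is the throughput trade-off Lawrence describes. A back-of-the-envelope check using standard 802.11n parameters:

```python
# 802.11n PHY rate, MCS 15: 2 spatial streams, 64-QAM, rate-5/6 coding, short guard interval.
def ht_rate(data_subcarriers: int, streams: int = 2) -> float:
    bits_per_subcarrier = 6      # 64-QAM
    coding_rate = 5 / 6
    symbol_time = 3.6e-6         # seconds per OFDM symbol with the 400 ns short GI
    return streams * data_subcarriers * bits_per_subcarrier * coding_rate / symbol_time

print(f"20 MHz: {ht_rate(52) / 1e6:.1f} Mb/s")    # ~144.4 Mb/s (52 data subcarriers)
print(f"40 MHz: {ht_rate(108) / 1e6:.1f} Mb/s")   # ~300 Mb/s (108 data subcarriers)
```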

The Atlanta main campus has a fully-deployed Cisco WLAN, with about 3,900 access points, nearly all supporting 11n, and 17 wireless controllers. Virtually all of the access points use a conventional, omni-directional antenna, which radiates energy in a globe-shaped pattern with the access point at the center. But in high-density classrooms, faculty and students began complaining of flaky connections and slow speeds.

The problem, Lawrence says, was the surging number of Wi-Fi devices actively being used in big classrooms and lecture halls, coupled with Wi-Fi signals, especially in the 2.4-GHz band, stepping on each other over wide sections of the hall and creating co-channel interference.

One Georgia Tech network engineer spent a lot of time monitoring the problem areas and working with students and faculty. In a few cases, the problems could be traced to a client-side configuration problem. But “with 120 clients on one access point, performance really goes downhill,” Lawrence says. “With the omni-directional antenna, you can only pack the access points so close.”

Shifting users to the cleaner 5-GHz band was an obvious step but in practice was rarely feasible: many mobile devices still support only 2.4-GHz connections, and client radios often showed a stubborn willfulness in sticking with a 2.4-GHz connection to a distant access point even when another was available much closer.

Consulting with Cisco, Georgia Tech decided to try some newer access points, with external antenna mounts, and selected one of Cisco’s certified partners, Tessco’s Ventev Wireless Infrastructure division, to supply the directional antennas. The TerraWave products also are compatible with access points from Aruba, Juniper, Meru, Motorola and others.

Patch antennas focus the radio beam within a specific area. (A couple of vendors, Ruckus Wireless and Xirrus, have developed their own built-in “smart” antennas that adjust and focus Wi-Fi signals on clients.) Depending on the beamwidth, the effect can be that of a floodlight or a spotlight, says Jeff Lime, Ventev’s vice president. Ventev’s newest TerraWave High-Density products focus the radio beam within narrower ranges than some competing products, and offer higher gain (in effect putting more oomph into the signal to drive it further), he says.

One model, with a maximum power rating of 20 watts, can have beamwidths of 18 or 28 degrees vertically, and 24 or 40 degrees horizontally, with a gain of 10 or 11 dBi, depending on the frequency range. The second model, with a 50-watt maximum power rating, has a beamwidth in both dimensions of 35 degrees, at a still higher gain of 14 dBi to drive the spotlighted signal further in really big areas like a stadium.
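Antenna gain in dBi is logarithmic, so the jump from 10 or 11 dBi to 14 dBi is bigger than it looks. A quick conversion, using a simplified free-space view that ignores the antenna pattern and real-world propagation:

```python
def dbi_to_linear(gain_dbi: float) -> float:
    """Convert antenna gain in dBi to a linear power ratio versus an isotropic radiator."""
    return 10 ** (gain_dbi / 10)

for g in (10, 11, 14):
    print(f"{g} dBi -> {dbi_to_linear(g):.1f}x isotropic")

# In free space, range scales with the square root of the power ratio,
# so 4 dB of extra gain stretches reach by roughly 10**(4/20), about 1.6x.
print(f"range factor for +4 dB: {10 ** (4 / 20):.2f}")
```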

At Georgia Tech, each antenna focused the Wi-Fi signal from a specific overhead access point to cover a section of seats below it. Fewer users associate with each access point. The result is a kind of virtuous circle. “It gives more capacity per user, so more bandwidth, so a better user experience,” says Lime.

The antennas come with a quartet of 36-inch cables to connect to the access points. The idea is to give IT groups maximum flexibility. But the cables initially were awkward for the IT team installing the antennas. Lawrence says they experimented with different ways of neatly and quickly wrapping up the excess cable to keep it out of the way between the access point proper and the antenna panel. They also had to modify mounting clips to get them to hold in the metal grid that forms the dropped ceiling in some of the rooms. “Little things like that can cause you some unexpected issues,” Lawrence says.

The IT staff worked with Cisco engineers to reset a dedicated controller to handle the new “high density group” of access points; the controller automatically handled configuration tasks like setting access point power levels and selecting channels.

Another issue is that when the patch antennas were ceiling-mounted in second- or third-story rooms, their downward-shooting signal cone reached into the radio space of access points on the floor below. Lawrence says they tweaked the position of the antennas in some cases to send the spotlight signal beaming at an angle. “I look at each room and ask ‘how am I going to deploy these antennas to minimize signal bleed-through into other areas?’” he says. “Adding a high-gain antenna can have unintended consequences outside the space it’s intended for.”

But based on improved throughput and consistent signals, Lawrence says it’s likely the antennas will be used in a growing number of lecture halls and other spaces on the main and satellite campuses. “This is the best solution we’ve got for now,” he says.

Source:  networkworld.com

Meet “badBIOS,” the mysterious Mac and PC malware that jumps airgaps

Friday, November 1st, 2013

Three years ago, security consultant Dragos Ruiu was in his lab when he noticed something highly unusual: his MacBook Air, on which he had just installed a fresh copy of OS X, spontaneously updated the firmware that helps it boot. Stranger still, when Ruiu then tried to boot the machine off a CD-ROM, it refused. He also found that the machine could delete data and undo configuration changes with no prompting. He didn’t know it then, but that odd firmware update would become a high-stakes malware mystery that would consume most of his waking hours.

In the following months, Ruiu observed more odd phenomena that seemed straight out of a science-fiction thriller. A computer running the OpenBSD operating system also began to modify its settings and delete its data without explanation or prompting. His network transmitted data specific to the Internet’s next-generation IPv6 networking protocol, even from computers that were supposed to have IPv6 completely disabled. Strangest of all was the ability of infected machines to exchange small amounts of network data with other infected machines even when their power cords and Ethernet cables were unplugged and their Wi-Fi and Bluetooth cards were removed. Further investigation soon showed that the list of affected operating systems also included multiple variants of Windows and Linux.

“We were like, ‘Okay, we’re totally owned,’” Ruiu told Ars. “‘We have to erase all our systems and start from scratch,’ which we did. It was a very painful exercise. I’ve been suspicious of stuff around here ever since.”

In the intervening three years, Ruiu said, the infections have persisted, almost like a strain of bacteria that’s able to survive extreme antibiotic therapies. Within hours or weeks of wiping an infected computer clean, the odd behavior would return. The most visible sign of contamination is a machine’s inability to boot off a CD, but other, more subtle behaviors can be observed when using tools such as Process Monitor, which is designed for troubleshooting and forensic investigations.

Another intriguing characteristic: in addition to jumping “airgaps” designed to isolate infected or sensitive machines from all other networked computers, the malware seems to have self-healing capabilities.

“We had an air-gapped computer that just had its [firmware] BIOS reflashed, a fresh disk drive installed, and zero data on it, installed from a Windows system CD,” Ruiu said. “At one point, we were editing some of the components and our registry editor got disabled. It was like: wait a minute, how can that happen? How can the machine react and attack the software that we’re using to attack it? This is an air-gapped machine and all of a sudden the search function in the registry editor stopped working when we were using it to search for their keys.”

Over the past two weeks, Ruiu has taken to Twitter, Facebook, and Google Plus to document his investigative odyssey and share a theory that has captured the attention of some of the world’s foremost security experts. The malware, Ruiu believes, is transmitted through USB drives to infect the lowest levels of computer hardware. With the ability to target a computer’s Basic Input/Output System (BIOS), Unified Extensible Firmware Interface (UEFI), and possibly other firmware standards, the malware can attack a wide variety of platforms, escape common forms of detection, and survive most attempts to eradicate it.

But the story gets stranger still. In a series of posts, Ruiu posited another theory that sounds like something from the screenplay of a post-apocalyptic movie: “badBIOS,” as Ruiu dubbed the malware, has the ability to use high-frequency transmissions passed between computer speakers and microphones to bridge airgaps.

Bigfoot in the age of the advanced persistent threat

At times as I’ve reported this story, its outline has struck me as the stuff of urban legend, the advanced persistent threat equivalent of a Bigfoot sighting. Indeed, Ruiu has conceded that while several fellow security experts have assisted his investigation, none has peer reviewed his process or the tentative findings that he’s beginning to draw. (A compilation of Ruiu’s observations is available online.)

Also unexplained is why Ruiu would be on the receiving end of such an advanced and exotic attack. As a security professional, the organizer of the internationally renowned CanSecWest and PacSec conferences, and the founder of the Pwn2Own hacking competition, he is no doubt an attractive target to state-sponsored spies and financially motivated hackers. But he’s no more attractive a target than hundreds or thousands of his peers, who have so far not reported the kind of odd phenomena that has afflicted Ruiu’s computers and networks.

In contrast to the skepticism that’s common in the security and hacking cultures, Ruiu’s peers have mostly responded with deep-seated concern and even fascination to his dispatches about badBIOS.

“Everybody in security needs to follow @dragosr and watch his analysis of #badBIOS,” Alex Stamos, one of the more trusted and sober security researchers, wrote in a tweet last week. Jeff Moss—the founder of the Defcon and Blackhat security conferences who in 2009 began advising Department of Homeland Security Secretary Janet Napolitano on matters of computer security—retweeted the statement and added: “No joke it’s really serious.” Plenty of others agree.

“Dragos is definitely one of the good reliable guys, and I have never ever even remotely thought him dishonest,” security researcher Arrigo Triulzi told Ars. “Nothing of what he describes is science fiction taken individually, but we have not seen it in the wild ever.”

Been there, done that

Triulzi said he’s seen plenty of firmware-targeting malware in the laboratory. A client of his once infected the UEFI-based BIOS of his Mac laptop as part of an experiment. Five years ago, Triulzi himself developed proof-of-concept malware that stealthily infected the network interface controllers that sit on a computer motherboard and provide the Ethernet jack that connects the machine to a network. His research built off of work by John Heasman that demonstrated how to plant hard-to-detect malware known as a rootkit in a computer’s peripheral component interconnect, the Intel-developed connection that attaches hardware devices to a CPU.

It’s also possible to use high-frequency sounds broadcast over speakers to send network packets. Early networking standards used the technique, said security expert Rob Graham. Ultrasonic-based networking is also the subject of a great deal of research, including this project by scientists at MIT.

Of course, it’s one thing for researchers in the lab to demonstrate viable firmware-infecting rootkits and ultra high-frequency networking techniques. But as Triulzi suggested, it’s another thing entirely to seamlessly fuse the two together and use the weapon in the real world against a seasoned security consultant. What’s more, use of a USB stick to infect an array of computer platforms at the BIOS level rivals the payload delivery system found in the state-sponsored Stuxnet worm unleashed to disrupt Iran’s nuclear program. And the reported ability of badBIOS to bridge airgaps also has parallels to Flame, another state-sponsored piece of malware that used Bluetooth radio signals to communicate with devices not connected to the Internet.

“Really, everything Dragos reports is something that’s easily within the capabilities of a lot of people,” said Graham, who is CEO of penetration testing firm Errata Security. “I could, if I spent a year, write a BIOS that does everything Dragos said badBIOS is doing. To communicate over ultrahigh frequency sound waves between computers is really, really easy.”

Coincidentally, Italian newspapers this week reported that Russian spies attempted to monitor attendees of last month’s G20 economic summit by giving them memory sticks and recharging cables programmed to intercept their communications.

Eureka

For most of the three years that Ruiu has been wrestling with badBIOS, its infection mechanism remained a mystery. A month or two ago, after buying a new computer, he noticed that it was almost immediately infected as soon as he plugged one of his USB drives into it. He soon theorized that infected computers have the ability to contaminate USB devices and vice versa.

“The suspicion right now is there’s some kind of buffer overflow in the way the BIOS is reading the drive itself, and they’re reprogramming the flash controller to overflow the BIOS and then adding a section to the BIOS table,” he explained.

He still doesn’t know if a USB stick was the initial infection trigger for his MacBook Air three years ago, or if the USB devices were infected only after they came into contact with his compromised machines, which he said now number between one and two dozen. He said he has been able to identify a variety of USB sticks that infect any computer they are plugged into. At next month’s PacSec conference, Ruiu said he plans to get access to expensive USB analysis hardware that he hopes will provide new clues behind the infection mechanism.

He said he suspects badBIOS is only the initial module of a multi-staged payload that has the ability to infect the Windows, Mac OS X, BSD, and Linux operating systems.

“It’s going out over the network to get something or it’s going out to the USB key that it was infected from,” he theorized. “That’s also the conjecture of why it’s not booting CDs. It’s trying to keep its claws, as it were, on the machine. It doesn’t want you to boot another OS it might not have code for.”

To put it another way, he said, badBIOS “is the tip of the warhead, as it were.”

“Things kept getting fixed”

Ruiu said he arrived at the theory about badBIOS’s high-frequency networking capability after observing encrypted data packets being sent to and from an infected laptop that had no obvious network connection with—but was in close proximity to—another badBIOS-infected computer. The packets were transmitted even when the laptop had its Wi-Fi and Bluetooth cards removed. Ruiu also disconnected the machine’s power cord so it ran only on battery to rule out the possibility that it was receiving signals over the electrical connection. Even then, forensic tools showed the packets continued to flow over the airgapped machine. Then, when Ruiu removed the internal speaker and microphone connected to the airgapped machine, the packets suddenly stopped.

With the speakers and mic intact, Ruiu said, the isolated computer seemed to be using the high-frequency connection to maintain the integrity of the badBIOS infection as he worked to dismantle software components the malware relied on.

“The airgapped machine is acting like it’s connected to the Internet,” he said. “Most of the problems we were having is we were slightly disabling bits of the components of the system. It would not let us disable some things. Things kept getting fixed automatically as soon as we tried to break them. It was weird.”

It’s too early to say with confidence that what Ruiu has been observing is a USB-transmitted rootkit that can burrow into a computer’s lowest levels and use it as a jumping off point to infect a variety of operating systems with malware that can’t be detected. It’s even harder to know for sure that infected systems are using high-frequency sounds to communicate with isolated machines. But after almost two weeks of online discussion, no one has been able to rule out these troubling scenarios, either.

“It looks like the state of the art in intrusion stuff is a lot more advanced than we assumed it was,” Ruiu concluded in an interview. “The take-away from this is a lot of our forensic procedures are weak when faced with challenges like this. A lot of companies have to take a lot more care when they use forensic data if they’re faced with sophisticated attackers.”

Source:  arstechnica.com

Seven essentials for VM management and security

Tuesday, October 29th, 2013

Virtualization isn’t a new trend; these days it’s an essential element of infrastructure design and management. However, while the technology is now common for the most part, organizations are still learning as they go when it comes to cloud-based initiatives.

CSO recently spoke with Shawn Willson, the Vice President of Sales at Next IT, a Michigan-based firm that focuses on managed services for small to medium-sized organizations. Willson discussed his list of essentials when it comes to VM deployment, management, and security.

Preparing for time drift on virtual servers. “Guest OSs should, and need to be synced with the host OS…Failure to do so will lead to time drift on virtual servers — resulting in significant slowdowns and errors in an active directory environment,” Willson said.

Despite the impact this could have on work productivity and daily operations, he added, very few IT managers or security officers think to do this until after they’ve experienced a time drift. Unfortunately, this usually happens while attempting to recover from a security incident. Time drift can lead to a loss of accuracy when it comes to logs, making forensic investigations next to impossible.
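One way to catch drift before it corrupts logs is to periodically compare each guest’s clock against a reference time source. A minimal sketch using a raw NTP query (the server name and alert threshold below are placeholders):

```python
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2_208_988_800   # seconds between the 1900 NTP epoch and the 1970 Unix epoch

def clock_offset(server: str = "pool.ntp.org", timeout: float = 2.0) -> float:
    """Return the approximate difference (seconds) between this host's clock and the NTP server."""
    request = b"\x1b" + 47 * b"\x00"          # minimal NTPv3 client request
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(request, (server, 123))
        response, _ = s.recvfrom(512)
    server_seconds = struct.unpack("!I", response[40:44])[0] - NTP_EPOCH_OFFSET
    return server_seconds - time.time()

offset = clock_offset()
if abs(offset) > 1.0:                          # threshold is an arbitrary example
    print(f"warning: guest clock is off by {offset:.1f} s")
```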

Establish policies for managing snapshots and images. Virtualization allows for quick copies of the Guest OS, but policies need to be put in place in order to dictate who can make these copies, if copies will (or can) be archived, and if so, where (and under what security settings) will these images be stored.

“Many times when companies move to virtual servers they don’t take the time the upgrade their security policy for specific items like this, simply because of the time it requires,” Willson said.

Creating and maintaining disaster recovery images. “Spinning up an unpatched, legacy image in the case of disaster recovery can cause more issues than the original problem,” Willson explained.

To fix this, administrators should develop a process for maintaining a patched, “known good” image.

Update disaster recovery policy and procedures to include virtual drives. “Very few organizations take the time to upgrade their various IT policies to accommodate virtualization. This is simply because of the amount of time it takes and the little value they see it bringing to the organization,” Willson said.

But failing to update IT policies to include virtualization, “will only result in the firm incurring more costs and damages whenever a breach or disaster occurs,” Willson added.

Maintaining and monitoring the hypervisor. “All software platforms will offer updates to the hypervisor software, making it necessary that a strategy for this be put in place. If the platform doesn’t provide monitoring features for the hypervisor, a third party application should be used,” Willson said.

Consider disabling clipboard sharing between guest OSs. By default, most VM platforms have copy and paste between guest OSs turned on after initial deployment. In some cases, this is a required feature for specific applications.

“However, it also poses a security threat, providing a direct path of access and the ability to unknowingly [move] malware from one guest OS to another,” Willson said.

Thus, if copy and paste isn’t essential, it should be disabled as a rule.
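How the setting is disabled depends on the platform. As one hedged illustration, VMware-style guests read isolation options such as isolation.tools.copy.disable from their .vmx files, so a simple audit script can flag guests where clipboard sharing is still enabled. The option names and datastore path below are assumptions drawn from common VMware hardening guidance, not a universal standard:

```python
from pathlib import Path

# Assumed VMware-style isolation settings; other hypervisors use different mechanisms.
REQUIRED = {
    "isolation.tools.copy.disable": "TRUE",
    "isolation.tools.paste.disable": "TRUE",
}

def missing_settings(vmx: Path) -> list[str]:
    """Return the isolation settings that are absent or not set to the required value."""
    found = {}
    for line in vmx.read_text(errors="ignore").splitlines():
        key, sep, value = line.partition("=")
        if sep:
            found[key.strip()] = value.strip().strip('"').upper()
    return [k for k, v in REQUIRED.items() if found.get(k) != v]

for vmx in Path("/vmfs/volumes").rglob("*.vmx"):     # placeholder datastore path
    gaps = missing_settings(vmx)
    if gaps:
        print(f"{vmx}: clipboard sharing not locked down ({', '.join(gaps)})")
```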

Limiting unused virtual hardware. “Most IT professionals understand the need to manage unused hardware (drives, ports, network adapters), as these can be considered soft targets from a security standpoint,” Willson said.

However, he adds, “with virtualization technology we now have to take inventory of virtual hardware (CD drives, virtual NICS, virtual ports). Many of these are created by default upon creating new guest OSs under the disguise of being a convenience, but these can offer the same danger or point of entry as unused physical hardware can.”

Again, just as it was with copy and paste, if the virtualized hardware isn’t essential, it should be disabled.

Source:  csoonline.com

Cisco fixes serious security flaws in networking, communications products

Thursday, October 24th, 2013

Cisco Systems released software security updates Wednesday to address denial-of-service and arbitrary command execution vulnerabilities in several products, including a known flaw in the Apache Struts development framework used by some of them.

The company released new versions of Cisco IOS XR Software to fix an issue with handling fragmented packets that can be exploited to trigger a denial-of-service condition on various Cisco CRS Route Processor cards. The affected cards and the patched software versions available for them are listed in a Cisco advisory.

The company also released security updates for Cisco Identity Services Engine (ISE), a security policy management platform for wired, wireless, and VPN connections. The updates fix a vulnerability that could be exploited by authenticated remote attackers to execute arbitrary commands on the underlying operating system and a separate vulnerability that could allow attackers to bypass authentication and download the product’s configuration or other sensitive information, including administrative credentials.

Cisco also released updates that fix a known Apache Struts vulnerability in several of its products, including ISE. Apache Struts is a popular open-source framework for developing Java-based Web applications.

The vulnerability, identified as CVE-2013-2251, is located in Struts’ DefaultActionMapper component and was patched by Apache in Struts version 2.3.15.1 which was released in July.

The new Cisco updates integrate that patch into the Struts version used by Cisco Business Edition 3000, Cisco Identity Services Engine, Cisco Media Experience Engine (MXE) 3500 Series and Cisco Unified SIP Proxy.

“The impact of this vulnerability on Cisco products varies depending on the affected product,” Cisco said in an advisory. “Successful exploitation on Cisco ISE, Cisco Unified SIP Proxy, and Cisco Business Edition 3000 could result in an arbitrary command executed on the affected system.”

No authentication is needed to execute the attack on Cisco ISE and Cisco Unified SIP Proxy, but the flaw’s successful exploitation on Cisco Business Edition 3000 requires the attacker to have valid credentials or trick a user with valid credentials into executing a malicious URL, the company said.

“Successful exploitation on the Cisco MXE 3500 Series could allow the attacker to redirect the user to a different and possibly malicious website, however arbitrary command execution is not possible on this product,” Cisco said.

Security researchers from Trend Micro reported in August that Chinese hackers are attacking servers running Apache Struts applications by using an automated tool that exploits several Apache Struts remote command execution vulnerabilities, including CVE-2013-2251.

The existence of an attack tool in the cybercriminal underground for exploiting Struts vulnerabilities increases the risk for organizations using the affected Cisco products.

In addition, since patching CVE-2013-2251 the Apache Struts developers have further hardened the DefaultActionMapper component in more recent releases.

Struts version 2.3.15.2, which was released in September, made some changes to the DefaultActionMapper “action:” prefix that’s used to attach navigational information to buttons within forms in order to mitigate an issue that could be exploited to circumvent security constraints. The issue has been assigned the CVE-2013-4310 identifier.

Struts 2.3.15.3, released on Oct. 17, turned off support for the “action:” prefix by default and added two new settings called “struts.mapper.action.prefix.enabled” and “struts.mapper.action.prefix.crossNamespaces” that can be used to better control the behavior of DefaultActionMapper.

The Struts developers said that upgrading to Struts 2.3.15.3 is strongly recommended, but held back on releasing more details about CVE-2013-4310 until the patch is widely adopted.

It’s not clear when or if Cisco will patch CVE-2013-4310 in its products, given that the fix appears to involve disabling support for the “action:” prefix. If the Struts applications in those products use the “action:” prefix, the company might need to rework some of their code.

Source:  computerworld.com

Google unveils an anti-DDoS platform for human rights organizations and media, but will it work?

Tuesday, October 22nd, 2013

Project Shield uses company’s infrastructure to absorb attacks

On Monday, Google announced a beta service that will offer DDoS protection to human rights organizations and media, in an effort to reduce the censorship that such attacks cause.

The announcement of Project Shield, the name given to the anti-DDoS platform, came during a presentation in New York at the Conflict in a Connected World summit. The gathering included security experts, hacktivists, dissidents, and technologists, in order to explore the nature of conflict and how online tools can be a source of both protection and harm when it comes to expression and information sharing.

“As long as people have expressed ideas, others have tried to silence them. Today one out of every three people lives in a society that is severely censored. Online barriers can include everything from filters that block content to targeted attacks designed to take down websites. For many people, these obstacles are more than an inconvenience — they represent full-scale repression,” the company explained in a blog post.

Project Shield uses Google’s massive infrastructure to absorb DDoS attacks. Enrollment in the service is invite-only at the moment, but it could be expanded considerably in the future. The service is free, but will follow page speed pricing should Google open enrollment and charge for it down the line.

However, while the service is sure to help smaller websites, such as those run by dissidents exposing corrupt regimes, or media speaking out against those in power, Google makes no promises.

“No guarantees are made in regards to uptime or protection levels. Google has designed its infrastructure to defend itself from quite large attacks and this initiative is aimed at providing a similar level of protection to third-party websites,” the company explains in a Project Shield outline.

One problem Project Shield may inadvertently create is a change in tactics. If the common forms of DDoS attacks are blocked, then more advanced forms of attack will be used. Such an escalation has already happened for high value targets, such as banks and other financial services websites.

“Using Google’s infrastructure to absorb DDoS attacks is structurally like using a CDN (Content Delivery Network) and has the same pros and cons,” Shuman Ghosemajumder, VP of strategy at Shape Security, told CSO during an interview.

The types of attacks a CDN would solve, he explained, are network-based DoS and DDoS attacks. These are the most common, and the most well-known attack types, as they’ve been around the longest.

In 2000, flood attacks were in the 400Mb/sec range, but today’s attacks scale to regularly exceed 100Gb/sec, according to anti-DDoS vendor Arbor Networks. In 2010, Arbor started to see a trend led by attackers who were advancing DDoS campaigns, by developing new tactics, tools, and targets. What that has led to is a threat that mixes flood, application and infrastructure attacks in a single, blended attack.

“It is unclear how effective [Project Shield] would be against Application Layer DoS attacks, where web servers are flooded with HTTP requests. These represent more leveraged DoS attacks, requiring less infrastructure on the part of the attacker, but are still fairly simplistic. If the DDoS protection provided operates at the application layer, then it could help,” Ghosemajumder said.

“What it would not protect against is Advanced Denial of Service attacks, where the attacker uses knowledge of the application to directly attack the origin server, databases, and other backend systems which cannot be protected against by a CDN and similar means.”

Google hasn’t mentioned directly the number of sites currently being protected by Project Shield, so there is no way to measure the effectiveness of the program from the outside.

In related news, Google also released a second DDoS-related tool on Monday, which is possible thanks to data collected by Arbor Networks. The Digital Attack Map, as the tool is called, is a monitoring system that allows users to see historical DDoS attack trends and connect them to related news events on any given day. The data is also shown live, and can be granularly sorted by location, time, and attack type.

Source:  csoonline.com

Cisco says controversial NIST crypto, potential NSA backdoor, ‘not invoked’ in products

Thursday, October 17th, 2013

Controversial crypto technology known as Dual EC DRBG, thought to be a backdoor for the National Security Agency, ended up in some Cisco products as part of their code libraries. But Cisco says it cannot be used because the company chose other crypto as an operational default, which can’t be changed.

Dual EC DRBG, or Dual Elliptic Curve Deterministic Random Bit Generator, is a crypto standard from the National Institute of Standards and Technology, and a crypto toolkit from RSA that included it is thought to have been one main way the algorithm ended up in hundreds of vendors’ products.

Because Cisco is known to have used the BSAFE crypto toolkit, the company has faced questions about where Dual EC DRBG may have ended up in the Cisco product line. In a Cisco blog post today, Anthony Grieco, principal engineer at Cisco, tackled this topic in a notice about how Cisco chooses crypto.

“Before we go any further, I’ll go ahead and get it out there: we don’t use the Dual_EC_DRBG in our products. While it is true that some of the libraries in our products can support the DUAL_EC_DRBG, it is not invoked in our products.”

Grieco wrote that Cisco, like most tech companies, uses cryptography in nearly all its products, if only for secure remote management.

“Looking back at our DRBG decisions in the context of these guiding principles, we looked at all four DRBG options available in NIST SP 800-90. As none had compelling interoperability or legal implementation implications, we ultimately selected the Advanced Encryption Standard Counter mode (AES-CTR) DRBG as our default.”

Grieco stated this was “because of our comfort with the underlying implementation, the absence of any general security concerns, and its acceptable performance. Dual_EC_DRBG was implemented but wasn’t seriously considered as the default given the other good choices available.”
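To illustrate the basic idea behind a counter-mode generator, the sketch below expands a seed into pseudorandom bytes with AES in CTR mode. It is only a toy illustration of the construction, not an SP 800-90A-compliant CTR_DRBG (which adds derivation functions, reseeding, and prediction resistance), and it assumes the third-party cryptography package is installed.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def ctr_output(seed: bytes, nbytes: int) -> bytes:
    """Expand a 32-byte seed into nbytes of pseudorandom output using AES-256 in CTR mode."""
    if len(seed) != 32:
        raise ValueError("seed must be 32 bytes")
    encryptor = Cipher(algorithms.AES(seed), modes.CTR(b"\x00" * 16)).encryptor()
    return encryptor.update(b"\x00" * nbytes) + encryptor.finalize()

seed = os.urandom(32)          # in a real DRBG the seed comes from a vetted entropy source
print(ctr_output(seed, 16).hex())
```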

Grieco said the DRBG choice that Cisco made “cannot be changed by the customer.”

Faced with the Dual EC DRBG controversy, which was triggered by the revelations about the NSA by former NSA contractor Edward Snowden, NIST itself has re-opened comments about this older crypto standard.

“The DRBG controversy has brought renewed focus on the crypto industry and the need to constantly evaluate cryptographic algorithm choices,” Grieco wrote in the blog today. “We welcome this conversation as an opportunity to improve security of the communications infrastructure. We’re open to serious discussions about the industry’s cryptographic needs, what’s next for our products, and how to collectively move forward.” Cisco invited comment on that online.

Grieco concluded, “We will continue working to ensure our products offer secure algorithms, and if they don’t, we’ll fix them.”

Source:  computerworld.com

Researchers create nearly undetectable hardware backdoor

Thursday, September 26th, 2013

University of Massachusetts researchers have found a way to make hardware backdoors virtually undetectable.

With recent NSA leaks and surveillance tactics being uncovered, researchers have redoubled their scrutiny of things like network protocols, software programs, encryption methods, and software hacks. Most problems out there are caused by software issues, either from bugs or malware. But one group of researchers at the University of Massachusetts decided to investigate the hardware side, and they found a new way to hack a computer processor at such a low-level, it’s almost impossible to detect it.

What are hardware backdoors?

Hardware backdoors aren’t exactly new. We’ve known for a while that they are possible, and we have examples of them in the wild. They are rare, and require a very precise set of circumstances to implement, which is probably why they aren’t talked about as often as software or network code. Even though hardware backdoors are rare and notoriously difficult to pull off, they are a cause of concern because the damage they could cause could be much greater than software-based threats. Stated simply, a hardware backdoor is a malicious piece of code placed in hardware so that it cannot be removed and is very hard to detect. This usually means the non-volatile memory in chips like the BIOS on a PC, or in the firmware of a router or other network device.

A hardware backdoor is very dangerous because it’s so hard to detect, and because it typically has full access to the device it runs on, regardless of any password or access control system. But how realistic are these threats? Last year, a security consultant showcased a fully-functioning hardware backdoor. All that’s required to implement that particular backdoor is flashing a BIOS with a malicious piece of code. This type of modification is one reason why Microsoft implemented Secure Boot in Windows 8, to ensure the booting process in a PC is trusted from the firmware all the way to the OS. Of course, that doesn’t protect you from other chips on the motherboard being modified, or the firmware in your router, printer, smartphone, and so on.

New research

The University of Massachusetts researchers found an even more clever way to implement a hardware backdoor. Companies have taken various measures for years now to ensure their chips aren’t modified without their knowledge. After all, most of our modern electronics are manufactured in a number of foreign factories. Visual inspections are commonly done, along with tests of the firmware code, to ensure nothing was changed. But in this latest hack, even those measures may not be enough. The way the researchers pulled it off is ingenious and quite complex.

The researchers used a technique called transistor doping. Basically, a transistor is made of a crystalline structure that provides the functionality needed to amplify or switch a current passing through it. Doping a transistor means adding impurities to that crystalline structure to change the way it behaves. The Intel random number generator (RNG) is a basic building block of any encryption system on the chip, since it provides the unpredictable starting values from which encryption keys are created. By doping the transistors in the RNG, the researchers can make the chip behave in a slightly different way. In this case, they simply changed the transistors so that one particular value became a constant instead of a variable. That means a number that was supposed to be random and impossible to predict is now always the same.

Introducing these changes at the hardware level weakens the RNG, and in turn weakens any encryption that depends on keys created by that system, such as SSL connections, encrypted files, and so on. Intel chips contain self-tests that are supposed to catch hardware modifications, but the researchers claim this change sits at such a low level in the hardware that it goes undetected. Fixing the flaw isn’t easy either, even if you could detect it. The RNG is part of the security process in a CPU and, for safety, is isolated from the rest of the system, so there is nothing a user or even an administrator can do to correct the problem.
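
To make the consequence concrete, here is a minimal Python sketch (purely illustrative, not the researchers’ attack; the published sabotage reportedly left some bits variable, which merely slows a brute-force search, whereas this sketch pins the entire output for clarity): a generator whose “random” output is constant yields keys an attacker can reproduce at will.

    import secrets

    def healthy_rng(bits=128):
        """Stand-in for a proper hardware RNG: full entropy."""
        return secrets.randbits(bits)

    def doped_rng(bits=128):
        """Stand-in for the sabotaged RNG: a supposedly random value is now fixed."""
        return 0xDEADBEEF

    def derive_key(rng, bits=128):
        # Real key derivation is more involved; the point is only that a key
        # can never be stronger than the randomness feeding it.
        return rng(bits)

    print(hex(derive_key(healthy_rng)))  # unpredictable
    print(hex(derive_key(doped_rng)))    # always 0xdeadbeef
    print(hex(derive_key(doped_rng)))    # always 0xdeadbeef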

There’s no sign that this particular hardware backdoor is being used in the wild, but if this type of change is possible, then it’s likely that groups with a lot of technical expertise could find similar methods. This may lend more credence to moves by various countries to ban certain parts sourced from some regions of the world. This summer Lenovo saw its systems banned from defense networks in several countries after allegations that China may have added vulnerabilities to the hardware of some of its systems. Of course, with almost every major manufacturer having its electronic parts made in China, that isn’t much of a relief. It’s quite likely that as hardware hacking becomes more cost-effective and popular, we will see more of these low-level hacks, which could lead to new types of attacks and new types of defense systems.

Source:  techrepublic.com

Will software-defined networking kill network engineers’ beloved CLI?

Tuesday, September 3rd, 2013

Networks defined by software may require more coding than command lines, leading to changes on the job

SDN (software-defined networking) promises some real benefits for people who use networks, but to the engineers who manage them, it may represent the end of an era.

Ever since Cisco made its first routers in the 1980s, most network engineers have relied on a CLI (command-line interface) to configure, manage and troubleshoot everything from small-office LANs to wide-area carrier networks. Cisco’s isn’t the only CLI, but on the strength of the company’s domination of networking, it has become a de facto standard in the industry, closely emulated by other vendors.

As such, it’s been a ticket to career advancement for countless network experts, especially those certified as CCNAs (Cisco Certified Network Associates). Those network management experts, along with higher level CCIEs (Cisco Certified Internetwork Experts) and holders of other official Cisco credentials, make up a trained workforce of more than 2 million, according to the company.

A CLI is simply a way to interact with software by typing in lines of commands, as PC users did in the days of DOS. With the Cisco CLI and those that followed in its footsteps, engineers typically set up and manage networks by issuing commands to individual pieces of gear, such as routers and switches.

SDN, and the broader trend of network automation, uses a higher layer of software to control networks in a more abstract way. Whether through OpenFlow, Cisco’s ONE (Open Network Environment) architecture, or other frameworks, the new systems separate the so-called control plane of the network from the forwarding plane, which is made up of the equipment that pushes packets. Engineers managing the network interact with applications, not ports.
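
As a rough illustration of that split (a toy Python sketch; it is not OpenFlow or Cisco ONE code, and every class and method name here is invented), the controller below holds the policy and computes forwarding rules centrally, while the “switches” do nothing but apply whatever rules they are handed:

    class Switch:
        """Forwarding plane: applies rules, makes no decisions of its own."""
        def __init__(self, name):
            self.name = name
            self.flow_table = {}

        def install_rule(self, match, action):
            self.flow_table[match] = action

    class Controller:
        """Control plane: engineers express intent here, not per-box commands."""
        def __init__(self, switches):
            self.switches = switches

        def apply_policy(self, src_net, dst_net, action):
            # One statement of intent fans out to every device that needs it.
            for sw in self.switches:
                sw.install_rule((src_net, dst_net), action)

    switches = [Switch("leaf" + str(i)) for i in range(1, 4)]
    Controller(switches).apply_policy("10.1.0.0/16", "10.2.0.0/16", "allow")
    print(switches[0].flow_table)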

“The network used to be programmed through what we call CLIs, or command-line interfaces. We’re now changing that to create programmatic interfaces,” Cisco Chief Strategy Officer Padmasree Warrior said at a press event earlier this year.

Will SDN spell doom for the tool that network engineers have used throughout their careers?

“If done properly, yes, it should kill the CLI. Which scares the living daylights out of the vast majority of CCIEs,” Gartner analyst Joe Skorupa said. “Certainly all of those who define their worth in their job as around the fact that they understand the most obscure Cisco CLI commands for configuring some corner-case BGP4 (Border Gateway Protocol 4) parameter.”

At some of the enterprises that Gartner talks to, the backlash from some network engineers has already begun, according to Skorupa.

“We’re already seeing that group of CCIEs doing everything they can to try and prevent SDN from being deployed in their companies,” Skorupa said. Some companies have deliberately left such employees out of their evaluations of SDN, he said.

Not everyone thinks the CLI’s days are numbered. SDN doesn’t go deep enough to analyze and fix every flaw in a network, said Alan Mimms, a senior architect at F5 Networks.

“It’s not obsolete by any definition,” Mimms said. He compared SDN to driving a car and CLI to getting under the hood and working on it. For example, for any given set of ACLs (access control lists) there are almost always problems for some applications that surface only after the ACLs have been configured and used, he said. A network engineer will still have to use CLI to diagnose and solve those problems.

However, SDN will cut into the use of CLI for more routine tasks, Mimms said. Network engineers who know only CLI will end up like manual laborers whose jobs are replaced by automation. It’s likely that some network jobs will be eliminated, he said.

This isn’t the first time an alternative has risen up to challenge the CLI, said Walter Miron, a director of technology strategy at Canadian service provider Telus. There have been graphical user interfaces to manage networks for years, he said, though they haven’t always had a warm welcome. “Engineers will always gravitate toward a CLI when it’s available,” Miron said.

Even networking startups need to offer a Cisco CLI so their customers’ engineers will know how to manage their products, said Carl Moberg, vice president of technology at Tail-F Systems. Since 2005, Tail-F has been one of the companies going up against the prevailing order.

It started by introducing ConfD, a graphical tool for configuring network devices, which Cisco and other major vendors included with their gear, according to Moberg. Later the company added NCS (Network Control System), a software platform for managing the network as a whole. To maintain interoperability, NCS has interfaces to Cisco’s CLI and other vendors’ management systems.

CLIs have their roots in the very foundations of the Internet, according to Moberg. The approach of the Internet Engineering Task Force, which oversees IP (Internet Protocol), has always been to find pragmatic solutions to defined problems, he said. This detail-oriented, “bottom-up” orientation was different from the way cellular networks were designed: the 3GPP, which developed the GSM standard used by most cell carriers, crafted its entire architecture at once, he said.

The IETF’s approach lent itself to manual, device-by-device administration, Moberg said. But as networks got more complex, that technique ran into limitations. Changes to networks are now more frequent and complex, so there’s more room for human error and the cost of mistakes is higher, he said.

“Even the most hardcore Cisco engineers are sick and tired of typing the same commands over and over again and failing every 50th time,” Moberg said. Though the CLI will live on, it will become a specialist tool for debugging in extreme situations, he said.

“There’ll always be some level of CLI,” said Bill Hanna, vice president of technical services at University of Pittsburgh Medical Center. At the launch earlier this year of Nuage Networks’ SDN system, called Virtualized Services Platform, Hanna said he hoped SDN would replace the CLI. The number of lines of code involved in a system like VSP is “scary,” he said.

On a network fabric with 100,000 ports, it would take all day just to scroll through a list of the ports, said Vijay Gill, a general manager at Microsoft, on a panel discussion at the GigaOm Structure conference earlier this year.

“The scale of systems is becoming so large that you can’t actually do anything by hand,” Gill said. Instead, administrators now have to operate on software code that then expands out to give commands to those ports, he said.

Faced with these changes, most network administrators will fall into three groups, Gartner’s Skorupa said.

The first group will “get it” and welcome not having to troubleshoot routers in the middle of the night. They would rather work with other IT and business managers to address broader enterprise issues, Skorupa said. The second group won’t be ready at first but will advance their skills and eventually find a place in the new landscape.

The third group will never get it, Skorupa said. They’ll face the same fate as telecommunications administrators who relied for their jobs on knowing obscure commands on TDM (time-division multiplexing) phone systems, he said. Those engineers got cut out when circuit-switched voice shifted over to VoIP (voice over Internet Protocol) and went onto the LAN.

“All of that knowledge that you had amassed over decades of employment got written to zero,” Skorupa said. For IP network engineers who resist change, there will be a cruel irony: “SDN will do to them what they did to the guys who managed the old TDM voice systems.”

But SDN won’t spell job losses, at least not for those CLI jockeys who are willing to broaden their horizons, said analyst Zeus Kerravala of ZK Research.

“The role of the network engineer, I don’t think, has ever been more important,” Kerravala said. “Cloud computing and mobile computing are network-centric compute models.”

Data centers may require just as many people, but with virtualization, the sharply defined roles of network, server and storage engineer are blurring, he said. Each will have to understand the increasingly interdependent parts.

The first step in keeping ahead of the curve, observers say, may be to learn programming.

“The people who used to use CLI will have to learn scripting and maybe higher-level languages to program the network, or at least to optimize the network,” said Pascale Vicat-Blanc, founder and CEO of application-defined networking startup Lyatiss, during the Structure panel.

Microsoft’s Gill suggested network engineers learn languages such as Python, C# and PowerShell.
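
Even without an SDN controller, the kind of scripting Gill describes starts small. The Python sketch below generates a per-device change from a single template; push_config() is a deliberately hypothetical placeholder for whatever transport a given shop actually uses (SSH, NETCONF, a vendor API), since the point is only that the change is written once rather than typed into each box:

    DEVICES = ["edge-rtr-01", "edge-rtr-02", "edge-rtr-03"]  # example names

    TEMPLATE = (
        "interface Loopback0\n"
        " description Managed by automation\n"
        " ip address 10.255.0.{device_id} 255.255.255.255\n"
    )

    def push_config(device, config):
        # Hypothetical placeholder: in practice this would be an SSH,
        # NETCONF or REST call to the device or its management system.
        print("--- " + device + " ---")
        print(config)

    for device_id, device in enumerate(DEVICES, start=1):
        push_config(device, TEMPLATE.format(device_id=device_id))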

For Facebook, which takes a more hands-on approach to its infrastructure than do most enterprises, that future is now.

“If you look at the Facebook network engineering team, pretty much everybody’s writing code as well,” said Najam Ahmad, Facebook’s director of technical operations for infrastructure.

Network engineers historically have used CLIs because that’s all they were given, Ahmad said. “I think we’re underestimating their ability.”

Cisco is now gearing up to help its certified workforce meet the newly emerging requirements, said Tejas Vashi, director of product management for Learning@Cisco, which oversees education, testing and certification of Cisco engineers.

With software automation, the CLI won’t go away, but many network functions will be carried out through applications rather than manual configuration, Vashi said. As a result, network designers, network engineers and support engineers all will see their jobs change, and there will be a new role added to the mix, he said.

In the new world, network designers will determine network requirements and how to fulfill them, then use that knowledge to define the specifications for network applications. Writing those applications will fall to a new type of network staffer, which Learning@Cisco calls the software automation developer. These developers will have background knowledge about networking along with skills in common programming languages such as Java, Python, and C, said product manager Antonella Como. After the software is written, network engineers and support engineers will install and troubleshoot it.

“All these people need to somewhat evolve their skills,” Vashi said. Cisco plans to introduce a new certification involving software automation, but it hasn’t announced when.

Despite the changes brewing in networks and jobs, the larger lessons of all those years typing in commands will still pay off for those who can evolve beyond the CLI, Vashi and others said.

“You’ve got to understand the fundamentals,” Vashi said. “If you don’t know how the network infrastructure works, you could have all the background in software automation, and you don’t know what you’re doing on the network side.”

Source:  computerworld.com

IBM starts restricting hardware patches to paying customers

Thursday, August 29th, 2013

Following an Oracle practice, IBM starts to restrict hardware patches to holders of maintenance contracts

Following through on a policy change announced in 2012, IBM has started restricting availability of hardware patches to paying customers, spurring at least one advocacy group to accuse the company of anticompetitive practices.

IBM “is getting to the spot where the customer has no choice but to buy an IBM maintenance agreement, or lose access to patches and changes,” said Gay Gordon-Byrne, executive director of the Digital Right to Repair Coalition (DRTR), which champions the rights of digital equipment owners.

Such a practice could dampen the market for support service of IBM equipment from non-IBM contractors, and could diminish the resale value of IBM equipment, DRTR charged.

On Aug. 11, IBM began requiring visitors to the IBM Fix Central website to provide a serial number in order to download a patch or update. According to DRTR, IBM uses the serial number to check whether the machine being repaired is covered by a current IBM maintenance contract or an IBM hardware warranty.

“IBM will take the serial number, validate it against its maintenance contract database, and allow [the user] to proceed or not,” Gordon-Byrne explained.
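
Mechanically, the gate Gordon-Byrne describes is a simple lookup. A hedged Python sketch of that logic (illustrative only; IBM’s actual systems, record formats and field names are not public):

    # Toy entitlement records keyed by machine serial number (made-up data).
    ENTITLEMENTS = {
        "7879-AB123": {"maintenance_until": "2014-06-30"},
        "8231-CD456": {"warranty_until": "2013-12-31"},
    }

    def may_download_fix(serial, today="2013-08-29"):
        """Allow the download only if some coverage is still in force."""
        record = ENTITLEMENTS.get(serial)
        if record is None:
            return False  # no contract or warranty on file
        # ISO-format dates compare correctly as strings.
        return any(expiry >= today for expiry in record.values())

    print(may_download_fix("7879-AB123"))  # True: active maintenance contract
    print(may_download_fix("9999-ZZ999"))  # False: blocked from patches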

Traditionally, IBM has freely provided machine code patches and updates as a matter of quality control, Gordon-Byrne said. The company left it to the owner to decide how to maintain the equipment, either through the help of IBM, a third-party service-provider, or by itself.

This benevolent practice is starting to change, according to DRTR.

In April 2012, IBM started requiring customers to sign a license in order to access machine code updates. Then, in October of that year, the company announced that machine code updates would only be available for those customers with IBM equipment that was either under warranty or covered by an IBM maintenance agreement.

“Fix Central downloads are available only for IBM clients with hardware or software under warranty, maintenance contracts, or subscription and support,” stated the Fix Central site documentation.

Nor would IBM offer the fixes on a time-and-material contract, in which customers can go through a special bid process to buy annual access to machine code.

The company didn’t start enforcing the entitlement check until earlier this month, however. “Until August, it didn’t appear that IBM had the capability,” Gordon-Byrne said. “We were wondering when they were going to do that step.”

The policy seems to apply to all IBM mainframes, servers and storage systems, with IBM System x servers being one known exception. Customer complaints forced IBM to halt the practice for System x servers, according to Gordon-Byrne.

This practice is problematic to IBM customers for a number of reasons, DRTR asserted.

Such a practice limits the resale of hardware, because any prospective owner of used equipment would have to purchase a support contract from IBM if it wanted its newly acquired machine updated.

And this could be expensive. IBM also announced last year it would start charging a “re-establishment fee” for equipment owners wishing to sign a new maintenance contract for equipment with lapsed IBM support coverage. The fee could be as high as 150 percent of the yearly maintenance fee itself, according to DRTR.

IBM could also use the maintenance contracts as a way to generate more sales.

“If IBM decides it wants to jack the maintenance price in order to make a new machine sale, they can do it because there is no competition,” Gordon-Byrne said.

IBM is not the first major hardware firm to use this tactic to generate more after-market sales, according to Gordon-Byrne. Oracle adopted a similar practice for its servers after it acquired Sun Microsystems, and its considerable line of hardware, in 2010.

The Service Industry Association, which focuses on helping the computer, medical and business products service industries, created DRTR in January 2013 to fight hardware manufacturers’ encroaching control of the after-market. The SIA itself protested Oracle’s move away from free patches as well.

DRTR is actively tracking a number of similar cases involving after-market control of hardware, such as an Avaya antitrust trial due to start Sept. 9 in the U.S. District Court for the District of New Jersey.

IBM declined to comment for this story.

Source:  networkworld.com

Cisco responds to VMware’s NSX launch, allegiances

Thursday, August 29th, 2013

Says a software-only approach to network virtualization spells trouble for users

Cisco has responded to the groundswell of momentum and support around this week’s introduction of VMware’s NSX network virtualization platform with a laundry list of the limitations of software-only network virtualization. At the same time, Cisco said it intends to collaborate further with VMware, specifically around private cloud and desktop virtualization, even as its partner lines up a roster of allies among Cisco’s fiercest rivals.

Cisco’s response was delivered in a blog post from Chief Technology and Strategy Officer Padmasree Warrior.

In a nutshell, Warrior says software-only network virtualization will leave customers with more headaches and hardships than a solution that tightly melds software with hardware and ASICs – the type of network virtualization Cisco proposes:

A software-only approach to network virtualization places significant constraints on customers.  It doesn’t scale, and it fails to provide full real-time visibility of both physical and virtual infrastructure.  In addition this approach does not provide key capabilities such as multi-hypervisor support, integrated security, systems point-of-view or end-to-end telemetry for application placement and troubleshooting.  This loosely-coupled approach forces the user to tie multiple 3rd party components together adding cost and complexity both in day-to-day operations as well as throughout the network lifecycle.  Users are forced to address multiple management points, and maintain version control for each of the independent components.  Software network virtualization treats physical and virtual infrastructure as separate entities, and denies customers a common policy framework and common operational model for management, orchestration and monitoring.

Warrior then went on to tout the benefits of the Application Centric Infrastructure (ACI), a concept introduced by Cisco spin-in Insieme Networks at the Cisco Live conference two months ago. ACI combines hardware, software and ASICs into an integrated architecture that, she claims, delivers centralized policy automation, visibility and management of both physical and virtual networks.

Warrior also shoots down the comparison between network virtualization and server virtualization, which is the foundation of VMware’s existence and success. Servers were underutilized, which drove the need for the flexibility and resource efficiency promised in server virtualization, she writes.

Not so with networks. Networks do not have an underutilization problem, she claims:

In fact, server virtualization is pushing the limits of today’s network utilization and therefore driving demand for higher port counts, application and policy-driven automation, and unified management of physical, virtual and cloud infrastructures in a single system.

Warrior ends by promising some “exciting news” around ACI in the coming months. Perhaps at Interop NYC in late September/early October? Cisco CEO John Chambers was just added this week to the keynote lineup at the conference. He usually appears at these venues when Cisco makes a significant announcement that same week…

Source:  networkworld.com

VMware unwraps virtual networking software – promises greater network control, security

Monday, August 26th, 2013

VMware announces that NSX – which combines network and security features – will be available in the fourth quarter

VMware today announced that NSX, an offering that packages its virtual networking software and security software products together, will be available in the fourth quarter of this year.

The company has been running NSX in beta since the spring, but as part of a broader announcement of software-defined data center functions made today at VMworld, the company took the wrapping off of its long-awaited virtual networking software. VMware has based much of the NSX functionality on technology it acquired from Nicira last year.

The generally available version of NSX includes two major new features compared with the beta. First, it adds technical integration with a variety of partner companies, including the ability for the virtual networking software to control hardware from network and compute infrastructure providers. Second, it virtualizes some network functions, such as firewalling, allowing for better control of virtual networks.

The idea of virtual networking is similar to that of virtual computing: abstracting the core features of networking from the underlying hardware. Doing so lets organizations more granularly control their networks, including spinning up and down networks, as well as better segmentation of network traffic.

Nicira has been a pioneer in the network virtualization industry, and last year VMware spent $1.2 billion to acquire the company. In March, VMware announced plans to integrate the Nicira technology into its product suite through the NSX software, and today the company announced that NSX’s general availability will come in the next few months. NSX will be a software update that is both hypervisor and hardware agnostic, says Martin Casado, chief architect, networking at VMware.

The need for the NSX software is being driven by the migration from a client-server world to a cloud world, he says. In this new architecture, there is just as much traffic, if not more, within the data center (east-west traffic) as there is between clients and the edge devices (north-south traffic).

One of the biggest newly announced advancements in the NSX software is virtual firewalling. Instead of relying on hardware or virtual firewalls that sit at the edge of the network to control traffic, NSX embeds the firewall within the software itself, so it is ubiquitous throughout the deployment. This removes the bottlenecks that a centralized firewall system would create, Casado says.
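
Conceptually, the difference is where the rules are evaluated. In the toy Python sketch below (not VMware code; the names are invented), every host applies the same east-west policy locally instead of hauling traffic to one central firewall, which is why the chokepoint disappears:

    RULES = [
        # (source tier, destination tier, action), evaluated on every host
        ("web", "app", "allow"),
        ("app", "db", "allow"),
        ("web", "db", "deny"),
    ]

    def evaluate(src_tier, dst_tier):
        for src, dst, action in RULES:
            if (src, dst) == (src_tier, dst_tier):
                return action
        return "deny"  # default-deny anything not explicitly allowed

    class Hypervisor:
        """Each host enforces the policy for its own east-west traffic."""
        def __init__(self, name):
            self.name = name

        def forward(self, src_tier, dst_tier):
            print(self.name + ": " + src_tier + " -> " + dst_tier +
                  ": " + evaluate(src_tier, dst_tier))

    for host in (Hypervisor("host-01"), Hypervisor("host-02")):
        host.forward("web", "app")
        host.forward("web", "db")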

“We’re not trying to take over the firewall market or do anything with north-south traffic,” Casado says. “What we are doing is providing functionality for traffic management within the data center. There’s nothing that can do that level of protection for the east-west traffic. It’s addressing a significant need within the industry.”

VMware has signed on a bevy of partners that are compatible with the NSX platform. The software is hardware and hypervisor agnostic, meaning that the software controller can manage network functionality executed by networking hardware from vendors like Juniper, Arista, HP, Dell and Brocade. In press materials sent out by the company, Cisco is not named as a partner, but VMware says NSX will work with networking equipment from the leading network vendor.

On the security side, services from Symantec, McAfee and Trend Micro will work within the system, while underlying compute platforms from OpenStack, CloudStack, Red Hat and Piston Cloud Computing Co. will work with NSX. Nicira has worked heavily in the OpenStack community.

“In virtual networks, where hardware and software are decoupled, a new network operating model can be achieved that delivers improved levels of speed and efficiency,” said Brad Casemore, research director for Data Center Networks at IDC. “Network virtualization is becoming a game shifter, providing an important building block for delivering the software-defined data center, and with VMware NSX, VMware is well positioned to capture this market opportunity.”

Source:  infoworld.com

SSDs maturing, but new memory tech still 10 years away

Monday, August 26th, 2013

Solid-state drive adoption will continue to grow and it will be more than 10 years before it is ultimately replaced by a new memory technology, experts said.

SSDs are getting more attractive as NAND flash gets faster and cheaper, as it provides flexibility in usage as a RAM or hard-drive alternative, said speakers and attendees at the Hot Chips conference in Stanford, California on Sunday.

Emerging memory types under development like phase-change memory (PCM), RRAM (resistive random-access memory) and MRAM (magnetoresistive RAM) may show promise with faster speed and durability, but it will be many years until they are made in volume and are priced competitively to replace NAND flash storage.

SSDs built on flash memory are now considered an alternative to spinning hard-disk drives, which have reached their speed limit. Mobile devices have moved over to flash drives, and a large number of thin and light ultrabooks are switching to SSDs, which are smaller, faster and more power efficient. However, the enterprise market still relies largely on spinning disks, and SSDs are poised to replace hard disks in server infrastructure, experts said. One of the reasons: SSDs are still more expensive than hard drives, though flash price is coming down fast.

“It’s going to be a long time until NAND flash runs out of steam,” said Jim Handy, an analyst at Objective Analysis, during a presentation.

Handy predicted that NAND flash will likely be replaced by 2023 or beyond. The capacity of SSDs is growing as NAND flash geometries get smaller, so scaling down flash will become difficult, which will increase the need for a new form of non-volatile memory that doesn’t rely on transistors.

Many alternative forms of memory are under development. Crossbar has developed RRAM (resistive random-access memory) that the company claims can replace DRAM and flash. Startup Everspin is offering its MRAM (magnetoresistive RAM) products as an alternative to flash memory. Hewlett-Packard is developing memristor, while PCM (phase-change memory) is being pursued by Micron and Samsung.

But SSDs are poised for widespread enterprise adoption as the technology consumes less energy and is more reliable. The smaller size of SSDs can also provide more storage in fewer servers, which could cut licensing costs, Handy said.

“If you were running Oracle or some other database software, you would be paying a license fee based on the number of servers,” Handy said.

In 2006, famed Microsoft researcher Jim Gray said in a presentation “tape is dead, disk is tape, flash is disk, RAM locality is king.” And people were predicting the end of flash 10 years ago when Amber Huffman, senior principal engineer at Intel’s storage technologies group, started working on flash memory.

Almost ten years on, flash is still maturing and could last even longer than 10 years, Huffman said. Its adoption will grow in enterprises and client devices, and it will ultimately overtake hard drives, which have peaked on speed, she said.

Like Huffman, observers agreed that flash is faster and more durable, but also more expensive than hard drives. But in enterprises, SSDs are inherently parallel, and better suited for server infrastructures that need better throughput. Multiple SSDs can exchange large loads of data easily much like memory, Huffman said. SSDs can be plugged into PCI-Express 3.0 slots in servers for processing of applications like analytics, which is faster than hard drives on the slower SATA interface.

The $30 billion enterprise storage market is still built on spinning disks, and there is a tremendous opportunity for SSDs, said Neil Vachharajani, software architect at Pure Storage, in a speech.

Typically, dedicated pools of spinning disks are needed for applications, which could block performance improvements, Vachharajani said.

“Why not take SSDs and put them into storage arrays,” Vachharajani said. “You can treat your storage as a single pool.”

Beyond being an alternative primary storage, NAND could be plugged into memory slots as a slower form of RAM. Facebook replaced DRAM with flash memory in a server called McDipper, and is also using SSDs for long-term cold storage. Ultrabooks use SSDs as operating system cache, and servers use SSDs for temporary caching in servers before data is moved to hard drives for long-term storage.

Manufacturing enhancements are being made to make SSDs faster and smaller. Samsung this month announced faster V-NAND flash storage chips that are up to 10 times more durable than the current flash storage used in mobile devices. The flash memory employs a 3D chip structure in which storage modules are stacked vertically.

Intel is taking a different approach to scaling down NAND flash by implementing high-k metal gate to reduce leakage, according to Krishna Parat, a fellow at the chip maker’s nonvolatile memory group. As flash scales down in size, Intel will move to 3D transistor structuring, much like it does in microprocessors today on the 22-nanometer process.

But there are disadvantages. With every process shrink, the endurance of flash may drop, so steps need to be taken to preserve durability. Options would be to minimize writes by changing algorithms and controllers, and also to use compression, de-duplication and hardware encryption, attendees said.
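
One of those write-reduction ideas, block-level de-duplication, can be sketched in a few lines of Python (illustrative only; real SSD controllers do this in firmware with far more machinery): identical blocks are fingerprinted and stored once, so repeated data never costs a second flash write.

    import hashlib
    import zlib

    store = {}        # fingerprint -> compressed block (stands in for flash)
    flash_writes = 0

    def write_block(data):
        """Return a reference to the block, writing it only if it is new."""
        global flash_writes
        fingerprint = hashlib.sha256(data).hexdigest()
        if fingerprint not in store:      # only new content reaches the flash
            store[fingerprint] = zlib.compress(data)
            flash_writes += 1
        return fingerprint

    blocks = [b"A" * 4096, b"B" * 4096, b"A" * 4096, b"A" * 4096]
    refs = [write_block(b) for b in blocks]
    print(len(blocks), "logical writes,", flash_writes, "physical writes")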

Smarter controllers are also needed to maintain capacity efficiency and data integrity, said David Flynn, CEO of PrimaryData. Flynn was previously CEO of Fusion-io, which pioneered SSD storage in enterprises.

“Whatever Flash’s successor is, it won’t be as fast as RAM,” Flynn said. “It takes longer to change persistent states than volatile states.”

But Flynn is already looking beyond SSD into future memory types.

“The faster it gets the better,” Flynn said. “I’m excited about new, higher memories.”

But SSDs will ultimately match hard drives on price, and the newer memory and storage forms will have to wait, said Huffman, who is also the chairperson of the NVM Express organization, which oversees the protocol for current and future non-volatile memory plugged into the PCI-Express slot.

“Hard drives will become the next tape,” Huffman said.

Source:  computerworld.com

Intel plans to ratchet up mobile platform performance with 14-nanometre silicon

Friday, August 23rd, 2013

Semiconductor giant Intel is to start producing mobile and embedded systems using its latest manufacturing process technology in a bid to muscle in on a market that it had previously ignored.

The company is planning to launch a number of platforms this year and next intended to ratchet up the performance of its offerings, according to sources quoted in the Far Eastern trade journal Digitimes.

By the end of 2013, a new smartphone system-on-a-chip (SoC) produced using 22-nanometre process technology, codenamed “Merrifield”, will be introduced, followed by “Moorefield” in the first half of 2014. “Morganfield”, which will be produced on forthcoming 14-nanometre process manufacturing technology, will be available from the first quarter of 2015.

Merrifield ought to offer a performance boost of about 50 per cent combined with much improved battery life compared to Intel’s current top-end smartphone platform, called Clover Trail+.

More immediately, Intel will be releasing “Bay Trail-T” microprocessors intended for Windows 8 and Android tablet computers. The Bay Trail-T architecture will offer a battery life of about eight hours in use, but weeks when it is idling, according to Digitimes sources.

The Bay Trail-T may be unveiled at the Intel Developer Forum in September, when Intel will also be unveiling “Bay Trail” on which the T-version is based. Bay Trail will be produced on the 22-nanometre Silvermont architecture.

Digitimes was quoting sources among Taiwan-based manufacturers.

Intel’s current Atom microprocessors for mobile phones – such as those in the Motorola Razr i and the Prestigio MultiPhone – are based on 32-nanometre technology, a generation behind the manufacturing process technology that it is using to produce its latest desktop and laptop microprocessors.

However, the roadmap suggests that Intel is planning to produce its high-end smartphone and tablet computer microprocessors and SoC platforms using the same manufacturing technology as desktop and server products in a bid to gain an edge on ARM-based rivals from Samsung, Qualcomm, TSMC and other producers.

Manufacturers of ARM-based microprocessors, which currently dominate the high-performance market for mobile and embedded chips, trail Intel in the manufacturing technology with which they can build their systems.

Intel, though, has been turning its attention to mobile and embedded as laptop, PC and server sales have stalled.

Source:  computing.com

Cisco patches serious vulnerabilities in Unified Communications Manager

Thursday, August 22nd, 2013

The vulnerabilities can be exploited by attackers to execute arbitrary commands or disrupt telephony-related services, Cisco said

Cisco Systems has released new security patches for several versions of Unified Communications Manager (UCM) to address vulnerabilities that could allow remote attackers to execute arbitrary commands, modify system data or disrupt services.

The UCM is the call processing component of Cisco’s IP Telephony solution. It connects to IP (Internet Protocol) phones, media processing devices, VoIP gateways, and multimedia applications and provides services such as session management, voice, video, messaging, mobility, and web conferencing.

The most serious vulnerability addressed by the newly released patches can lead to a buffer overflow and is identified as CVE-2013-3462 in the Common Vulnerabilities and Exposures database. This vulnerability can be exploited remotely, but it requires the attacker to be authenticated on the device.

“An attacker could exploit this vulnerability by overwriting an allocated memory buffer on an affected device,” Cisco said Wednesday in a security advisory. “An exploit could allow the attacker to corrupt data, disrupt services, or run arbitrary commands.”

The CVE-2013-3462 vulnerability affects versions 7.1(x), 8.5(x), 8.6(x), 9.0(x) and 9.1(x) of Cisco UCM, Cisco said.

The company also patched three denial-of-service (DoS) flaws that can be remotely exploited by unauthenticated attackers.

One of them, identified as CVE-2013-3459, is caused by improper error handling and can be exploited by sending malformed registration messages to the affected devices. The flaw only affects Cisco UCM 7.1(x) versions.

The second DoS issue is identified as CVE-2013-3460 and is caused by insufficient limiting of traffic received on certain UDP ports. It can be exploited by sending UDP packets at a high rate on those specific ports to devices running versions 8.5(x), 8.6(x), and 9.0(x) of Cisco UCM.

The third vulnerability, identified as CVE-2013-3461, is similar but only affects the Session Initiation Protocol (SIP) port. “An attacker could exploit this vulnerability by sending UDP packets at a high rate to port 5060 on an affected device,” Cisco said. The vulnerability affects Cisco UCM versions 8.5(x), 8.6(x) and 9.0(1).

Patched versions have been released for all UCM release branches affected by these vulnerabilities, and there are no known workarounds at this time that would mitigate the flaws without upgrading.

All of the patched vulnerabilities were discovered during internal testing and the company’s product security incident response team (PSIRT) is not aware of any cases where these issues have been exploited or publicly documented.

“In all cases, customers should ensure that the devices to be upgraded contain sufficient memory and confirm that current hardware and software configurations will continue to be supported properly by the new release,” Cisco said. “If the information is not clear, customers are advised to contact the Cisco Technical Assistance Center (TAC) or their contracted maintenance providers.”

Source:  networkworld.com

Trend Micro: Hacker threats to water supplies are real

Monday, August 12th, 2013

A decoy water control system disguised as belonging to a U.S. municipality caught the attention of a hacking group tied to the Chinese military

A security researcher has shown that hackers, including an infamous group from China, are trying to break into the control systems tied to water supplies in the U.S. and other countries.

Last December, a decoy water control system disguised as belonging to a U.S. municipality attracted the attention of a hacking group tied to the Chinese military, according to Trend Micro researcher Kyle Wilhoit. A dozen similar traps set up in eight countries lured a total of 74 attacks between March and June of this year.

Wilhoit’s work, presented last week at the Black Hat conference in Las Vegas, is important because it helps build awareness that the threat of a cyberattack against critical infrastructure is real, security experts said Tuesday.

“What Kyle is saying is really neat and important,” said Joe Weiss, a security expert and consultant in industrial control systems (ICS). “What he’s saying is that when people see what they think is a real control system, they’re going to try and go after it. That’s a scary thought.”

Indeed, people behind four of the attacks tinkered with the special communication protocol used to control industrial hardware. While their motivation is unknown, the attackers had taken a path that could be used to destroy pumps and filtration systems or whole facilities.

To sabotage specific systems, attackers would need design documents. Wilhoit’s research showed that there are hackers willing to destroy without knowing the exact consequences, according to Andrew Ginter, vice president of industrial security at Waterfall Security. “If you just start throwing random numbers into (control systems), the world is going to change,” said Ginter, who studied Wilhoit’s research. “Things are going to happen. You don’t know what. It’s a random type of sabotage.”

The Chinese hacking group, known as APT1, is the same team that security vendor Mandiant had tied to China’s People’s Liberation Army. The group, also called the Comment Crew, is focused on stealing design information, not sabotage, experts said.

Because sabotage would invite retaliation and possibly war, China is unlikely to mount that type of attack. Those kinds of restraints do not exist for terrorists, however.

While Wilhoit did not identify any terrorist groups, his research did show that the attackers are interested in small utilities. He created eight honeypots, each masked by Web-based login and configuration screens created to look as if they belonged to a local water plant. The decoys were set up in Australia, Brazil, China, Ireland, Japan, Russia, Singapore and the U.S.
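
The idea behind such a decoy can be sketched with nothing but the Python standard library (Wilhoit’s actual honeypots were far more elaborate, imitating real control-system screens and industrial protocols; this is only the skeleton): serve a fake operator login page and log every attempt to reach it.

    from http.server import BaseHTTPRequestHandler, HTTPServer
    from datetime import datetime

    FAKE_PAGE = b"<html><body><h1>City Water Utility - Operator Login</h1></body></html>"

    class DecoyHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Every hit on the decoy is evidence of scanning or probing.
            print(datetime.utcnow().isoformat(), "probe from",
                  self.client_address[0], "for", self.path)
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(FAKE_PAGE)

        def log_message(self, fmt, *args):
            pass  # suppress the default console log; we log above instead

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), DecoyHandler).serve_forever()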

Attackers will often start with smaller targets to test software tools and prepare for assaults on larger facilities, Weiss said. “The perception is that they’ll have less monitoring, less experience and less of everything else (in security) than the big guys,” he said.

While Wilhoit’s honeypots showed that a threat exists, they did not reflect a real-world target. Control systems are typically not as easy to access through the Internet, particularly in larger utilities.

Buried within a company’s infrastructure, a control system would not be accessed without first penetrating a company’s defensive perimeter and then finding the IP address of the hosting computer, said Eric Cosman, vice president of standards and practices for the International Society of Automation.

None of the attackers in Wilhoit’s research showed a high level of sophistication, which wasn’t surprising. That’s because hackers typically use only the technology needed to succeed, nothing more.

“(Advanced attackers) are known to have many cards in their pockets, and they pull out the cheapest card first,” Ginter said. “If they can win the game with a two of hearts, then that’s the card they’ll play.”

Wilhoit’s research is seen as one more step toward building public awareness of the threats to critical infrastructure. In addition, such reports are expected to have an impact on regulators.

“You’re going to have public utilities commissions reading this report and asking the utilities questions,” Ginter said. “In a sense, this is a good thing. The awareness level needs to go up.”

Source:  csoonline.com