Archive for the ‘Virtual machines’ Category

IT managers are increasingly replacing servers with SaaS

Friday, December 6th, 2013

IT managers want to cut the number of servers they manage, or at least slow the growth, and they may be succeeding, according to new data.

IDC expects that anywhere from 25% to 30% of all the servers shipped next year will be delivered to cloud services providers.

By 2017, nearly 45% of all the servers leaving manufacturers will be bought by cloud providers.

“What that means is a lot of people are buying SaaS,” said Frank Gens, referring to software-as-a-service. “A lot of capacity is shifting out of the enterprise into cloud service providers.”

The increased use of SaaS is a major reason for the market shift, but so is virtualization to increase server capacity. Data center consolidations are eliminating servers as well, along with the purchase of denser servers capable of handling larger loads.

For sure, IT managers are going to be managing physical servers for years to come. But, the number will be declining, based on market direction and the experience of IT managers.

Two years ago, when Mark Endry became the CIO and SVP of U.S. operations for Arcadis, a global consulting, design and engineering company, the firm was running its IT in-house.

“We really put a stop to that,” said Endry. Arcadis is moving to SaaS, either to add new services or substitute existing ones. An in-house system is no longer the default, he added.

“Our standard RFP for services says it must be SaaS,” said Endry.

Arcadis has added Workday, a SaaS-based HR management system; replaced an in-house training management system with a SaaS offering; and replaced an in-house ADP HR system with a service. The company is also planning a move to Office 365, and will stop running its in-house Exchange and SharePoint servers.

As a result, in the last two years, Endry has kept the server count steady at 1,006 spread across three data centers. He estimates that without the efforts at virtualization, SaaS and other consolidations, they would have some 200 more physical servers.

Endry would like to consolidate the three data centers into one, and continue shifting to SaaS to avoid future maintenance costs, and also the need to customize and maintain software. SaaS can’t yet be used for everything, particularly ERP, but “my goal would be to really minimize the footprint of servers,” he said.

Similarly, Gerry McCartney, CIO of Purdue University, is working to cut server use and switch more to SaaS.

The university’s West Lafayette, Ind., campus had some 65 data centers two years ago, many of them small. Data centers at Purdue are defined as any room with additional power and specialized heavy-duty cooling equipment. The university has closed at least 28 of them in the last 18 months.

The Purdue consolidation is the result of several broad trends: increased virtualization, use of higher-density systems, and increased use of SaaS.

McCartney wants to limit the university’s server management role. “The only things that we are going to retain on campus is research and strategic support,” he said. That means that most, if not all, of the administrative functions may be moved off campus.

This shift to cloud-based providers is roiling the server market, and is expected to help send server revenue down 3.5% this year, according to IDC.

Gens says that one trend among users who buy servers is increasing interest in converged or integrated systems that combine server, storage, networking and software. They now account for about 10% of the market, and are expected to make up 20% by 2020.

Meanwhile, the big cloud providers are heading in the opposite direction, and are increasingly looking for componentized systems they can assemble, Velcro-like, in their data centers. This has given rise to contract, or original design, manufacturers (ODMs), mostly overseas, who build these systems for cloud providers.


Why security benefits boost mid-market adoption of virtualization

Monday, December 2nd, 2013

While virtualization has undoubtedly already found its footing in larger businesses and data centers, the technology is still in the process of catching on in the middle market. But a recent study conducted by a group of Cisco Partner Firms, titled “Virtualization on the Rise,” indicates just that: the prevalence of virtualization is continuing to expand and has so far proven to be a success for many small- and medium-sized businesses.

With firms where virtualization has yet to catch on, however, security is often the point of contention.

Cisco’s study found that adoption rates for virtualization are already quite high at small- to medium-sized businesses, with 77 percent of respondents indicating that they already had some type of virtualization in place around their office. These types of solutions included server virtualization, a virtual desktop infrastructure, storage virtualization, network virtualization, and remote desktop access, among others. Server virtualization was the most commonly used, with 59 percent of respondents (that said they had adopted virtualization in some form) stating that it was their solution of choice.

That all being said, there are obviously some businesses that still have yet to adopt virtualization, and a healthy chunk of respondents – 51 percent – cited security as a reason. Larger companies with over 100 employees appeared to be more concerned about the security of virtualization, with 60 percent of that demographic citing it as their barrier to entry (while only 33 percent of smaller firms shared the same concern).

But with Cisco’s study lacking any other specificity in terms of why exactly the respondents were concerned about the security of virtualization, one can’t help but wonder: is this necessarily sound reasoning? Craig Jeske, the business development manager for virtualization and cloud at Global Technology Resources, shed some light on the subject.

“I think [virtualization] gives a much easier, more efficient, and agile response to changing demands, and that includes responding to security threats,” said Jeske. “It allows for a faster response than if you had to deploy new physical tools.”

He went on to explain that given how virtualization enhances portability and makes it easier to back up data, it subsequently makes it easier for companies to get back to a known state in the event of some sort of compromise. This kind of flexibility limits attackers’ options.

“Thanks to the agility provided by virtualization, it changes the attack vectors that people can come at us from,” he said.

As for the 33 percent of smaller firms that cited security as a barrier to entry – thereby suggesting that the smaller companies were more willing to take the perceived “risk” of adopting the technology – Jeske said that was simply because virtualization makes more sense for businesses of that size.

“When you have a small budget, the cost savings [from virtualization] are more dramatic, since it saves space and calls for a lower upfront investment,” he said. On the flip side, the upfront cost for any new IT direction is higher for a larger business. It’s easier to make a shift when a company has 20 servers versus 20 million servers; while the return on virtualization is higher for a larger company, so is the upfront investment.

Of course, there is also the obvious fact that with smaller firms, the potential loss as a result of taking such a risk isn’t as great.

“With any type of change, the risk is lower for a smaller business than for a multimillion dollar firm,” he said. “With bigger businesses, any change needs to be looked at carefully. Because if something goes wrong, regardless of what the cause was, someone’s losing their job.”

Jeske also addressed the fact that some of the security concerns indicated by the study results may have stemmed from some teams recognizing that they weren’t familiar with the technology. That lack of comfort with virtualization – for example, not knowing how to properly implement or deploy it – could make virtualization less secure, but it’s not inherently insecure. Security officers, he stressed, are always most comfortable with what they know.

“When you know how to handle virtualization, it’s not a security detriment,” he said. “I’m hesitant to make a change until I see the validity and justification behind that change. You can understand people’s aversion from a security standpoint, and first just from the standpoint of needing to understand it before jumping in.”

But the technology itself, Jeske reiterated, has plenty of security benefits.

“Since everything is virtualized, it’s easier to respond to a threat because it’s all available from everywhere. You don’t have to have the box,” he said. “The more we’re tied to these servers and our offices, the easier it is to respond.”

And with every element encapsulated in a software package, he said, businesses might be able to do more to each virtual server than they could in the physical world. Virtual firewalls, intrusion detection and the like can all be put in as an application and placed closer to the machine itself, so firms don’t have to bring things back out into the physical environment.

This also allows for easier, faster changes in security environments. One change can be propagated across the entire virtual environment automatically, rather than having to push it out to each physical device individually that’s protecting a company’s systems.

Jeske noted that there are benefits from a physical security standpoint, as well, namely because somebody else takes care of it for you. The servers hosting the virtualized solutions are somewhere far away, and the protection of those servers is somebody else’s responsibility.

But with the rapid proliferation of virtualization, Jeske warned that security teams need to stay ahead of the game. Otherwise, it will be harder to properly adopt the technology when they no longer have a choice.

“With virtualization, speed of deployment and speed of reaction are the biggest things,” said Jeske. “The servers and desktops are going to continue to get virtualized whether officers like it or not. So they need to be proactive and stay in front of it, otherwise they can find themselves in a bad position further on down the road.”


FCC lays down spectrum rules for national first-responder network

Tuesday, October 29th, 2013

The agency will also start processing applications for equipment certification

The U.S. moved one step closer to having a unified public safety network on Monday when the Federal Communications Commission approved rules for using spectrum set aside for the system.

Also on Monday, the agency directed its Office of Engineering and Technology to start processing applications from vendors to have their equipment certified to operate in that spectrum.

The national network, which will operate in the prized 700MHz band, is intended to replace a patchwork of systems used by about 60,000 public safety agencies around the country. The First Responder Network Authority (FirstNet) would operate the system and deliver services on it to those agencies. The move is intended to enable better coordination among first responders and give them more bandwidth for transmitting video and other rich data types.

The rules approved by the FCC include power limits and other technical parameters for operating in the band. Locking them down should help prevent harmful interference with users in adjacent bands and drive the availability of equipment for FirstNet’s network, the agency said.

A national public safety network was recommended by a task force that reviewed the Sept. 11, 2001, terror attacks on the U.S. The Middle Class Tax Relief and Job Creation Act of 2012 called for auctions of other spectrum to cover the cost of the network, which was estimated last year at US$7 billion.

The public safety network is required to cover 95 percent of the U.S., including all 50 states, the District of Columbia and U.S. territories. It must reach 98 percent of the country’s population.


Seven essentials for VM management and security

Tuesday, October 29th, 2013

Virtualization isn’t a new trend; these days it’s an essential element of infrastructure design and management. However, while the technology is common for the most part, organizations are still learning as they go when it comes to cloud-based initiatives.

CSO recently spoke with Shawn Willson, the Vice President of Sales at Next IT, a Michigan-based firm that focuses on managed services for small to medium-sized organizations. Willson discussed his list of essentials when it comes to VM deployment, management, and security.

Prepare for time drift on virtual servers. “Guest OSs should, and need to, be synced with the host OS… Failure to do so will lead to time drift on virtual servers — resulting in significant slowdowns and errors in an Active Directory environment,” Willson said.

Despite the impact this could have on work productivity and daily operations, he added, very few IT managers or security officers think to do this until after they’ve experienced a time drift. Unfortunately, this usually happens while attempting to recover from a security incident. Time drift can lead to a loss of accuracy when it comes to logs, making forensic investigations next to impossible.
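As a rough illustration of why unchecked drift matters: Active Directory's Kerberos authentication rejects requests once clock skew exceeds five minutes by default. The sketch below (the drift rate is a hypothetical figure for illustration, not from the article) shows how quickly a steadily drifting, unsynced guest clock exhausts that budget:

```python
# Sketch: how steady clock drift on an unsynced guest OS accumulates
# toward Active Directory's default Kerberos tolerance of 5 minutes.

def seconds_until_auth_failure(drift_ppm: float, tolerance_s: float = 300.0) -> float:
    """Return how long (in seconds) until accumulated drift exceeds tolerance.

    drift_ppm: clock error in parts per million (100 ppm is roughly 8.6 s/day).
    """
    drift_per_second = drift_ppm / 1_000_000
    return tolerance_s / drift_per_second

# A guest drifting at a hypothetical 100 ppm hits the 5-minute limit
# in about 35 days of unsynced operation.
days = seconds_until_auth_failure(100) / 86_400
print(f"{days:.1f} days")
```

The point of the arithmetic: drift is invisible day to day, then authentication and log correlation break all at once, which matches Willson's observation that it is usually noticed mid-incident.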

Establish policies for managing snapshots and images. Virtualization allows for quick copies of the Guest OS, but policies need to be put in place in order to dictate who can make these copies, if copies will (or can) be archived, and if so, where (and under what security settings) will these images be stored.

“Many times when companies move to virtual servers they don’t take the time to upgrade their security policy for specific items like this, simply because of the time it requires,” Willson said.

Create and maintain disaster recovery images. “Spinning up an unpatched, legacy image in the case of disaster recovery can cause more issues than the original problem,” Willson explained.

To fix this, administrators should develop a process for maintaining a patched, “known good” image.

Update disaster recovery policy and procedures to include virtual drives. “Very few organizations take the time to upgrade their various IT policies to accommodate virtualization. This is simply because of the amount of time it takes and the little value they see it bringing to the organization,” Willson said.

But failing to update IT policies to include virtualization, “will only result in the firm incurring more costs and damages whenever a breach or disaster occurs,” Willson added.

Maintain and monitor the hypervisor. “All software platforms will offer updates to the hypervisor software, making it necessary that a strategy for this be put in place. If the platform doesn’t provide monitoring features for the hypervisor, a third party application should be used,” Willson said.

Consider disabling clipboard sharing between guest OSs. By default, most VM platforms have copy and paste between guest OSs turned on after initial deployment. In some cases, this is a required feature for specific applications.

“However, it also poses a security threat, providing a direct path of access and the ability to unknowingly [move] malware from one guest OS to another,” Willson said.

Thus, if copy and paste isn’t essential, it should be disabled as a rule.
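On VMware platforms, for example, this is controlled per VM through the documented isolation options in the virtual machine's .vmx configuration; other hypervisors expose equivalent toggles under different names, so verify the exact keys against your platform's documentation:

```ini
# Per-VM .vmx settings (vSphere/Workstation) that disable clipboard
# sharing and drag-and-drop between the guest and other environments.
isolation.tools.copy.disable = "TRUE"
isolation.tools.paste.disable = "TRUE"
isolation.tools.dnd.disable = "TRUE"
```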

Limit unused virtual hardware. “Most IT professionals understand the need to manage unused hardware (drives, ports, network adapters), as these can be considered soft targets from a security standpoint,” Willson said.

However, he adds, “with virtualization technology we now have to take inventory of virtual hardware (CD drives, virtual NICs, virtual ports). Many of these are created by default upon creating new guest OSs under the guise of being a convenience, but these can offer the same danger or point of entry as unused physical hardware can.”

Again, just as it was with copy and paste, if the virtualized hardware isn’t essential, it should be disabled.
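As a sketch of what such an inventory might look like, the snippet below scans the text of a hypothetical VMware .vmx file for device entries left enabled; the key patterns and sample config are illustrative and should be adapted to the platform in use:

```python
# Sketch: flag virtual hardware declared present in a VMware .vmx config.
# The device-key prefixes below are illustrative examples; adapt them
# to the entries your platform actually emits.

SUSPECT_PREFIXES = ("ide", "sata", "serial", "parallel", "floppy", "usb")

def flag_virtual_hardware(vmx_text: str) -> list[str]:
    """Return config lines that declare potentially unneeded virtual devices."""
    flagged = []
    for line in vmx_text.splitlines():
        key = line.split("=", 1)[0].strip().lower()
        if key.endswith(".present") and key.startswith(SUSPECT_PREFIXES):
            value = line.split("=", 1)[1].strip().strip('"').upper()
            if value == "TRUE":
                flagged.append(line.strip())
    return flagged

# Hypothetical .vmx excerpt: the CD drive and serial port are enabled.
sample = '''\
ide1:0.present = "TRUE"
ide1:0.deviceType = "cdrom-raw"
floppy0.present = "FALSE"
serial0.present = "TRUE"
ethernet0.present = "TRUE"
'''
for entry in flag_virtual_hardware(sample):
    print(entry)
```

Running this against each VM's config gives the inventory Willson describes: a list of default-created devices that can then be reviewed and disabled if not essential.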


VMware identifies vulnerabilities for ESX, vCenter, vSphere, issues patches

Friday, October 18th, 2013

VMware today said that its popular virtualization and cloud management products have security vulnerabilities that could lead to denials of service for customers using ESX and ESXi hypervisors and management platforms including vCenter Server Appliance and vSphere Update Manager.

To exploit the vulnerability an attacker would have to intercept and modify management traffic. If successful, the hacker would compromise the hostd-VMDBs, which would lead to a denial of service for parts of the program.

VMware released a series of patches that resolve the issue. More information about the vulnerability and links to download the patches can be found here.

The vulnerability exists in vCenter 5.0 for versions before update 3; and ESX versions 4.0, 4.1 and 5.0 and ESXi versions 4.0 and 4.1, unless they have the latest patches.

Users can also reduce the likelihood of the vulnerability causing a problem by running vSphere components on an isolated management network to ensure that traffic does not get intercepted.


Stop securing your virtualized servers like another laptop or PC

Tuesday, September 24th, 2013
Many IT managers don’t take the additional steps to secure their virtual servers, but rather leave them vulnerable to attacks with only antivirus software and data loss prevention packages. Here are the most common mistakes made and how to prevent them.


We asked two security pros a couple of questions specific to ensuring security on virtual servers. Here’s what they said:

TechRepublic: What mistakes do IT managers make most often when securing their virtual servers?

Answered by Min Wang, CEO and founder of AIP US

Wang: Most virtual environments have the same security requirements as the physical world with additions defined by the use of virtual networking and shared storage. However, many IT managers don’t take the additional steps to secure their virtual servers, but rather leave them vulnerable to attacks with only antivirus software and data loss prevention packages.

Here are some more specific mistakes IT managers make regularly:

1.  IT managers rely too much on the hypervisor layer to provide security. Instead, they should be taking a 360-degree approach rather than looking at one section or layer.

2.  When transitioning to virtual servers, too often they misconfigure their servers and the underlying network. This causes things to get even more out of whack when new servers are created and new apps are added.

3.  There’s increased complexity, and many IT managers don’t fully understand how the components interwork and how to properly secure the entire system, not just parts of it.

TechRepublic: Can you provide some tips on what IT managers can do moving forward to ensure their servers remain hack free?

Answered by Praveen Bahethi, CTO of Shilpa Systems


1.  Logins into the Xen, HyperV, KVM, and ESXi servers, as well as the VMs created within them, should be mapped to a central database such as Active Directory to ensure that all logins are logged.  These login logs should be reviewed for failures on a regular basis as the organization’s security policy defines. By using a centralized login service, the administrative staff can quickly and easily remove privileges to all VMs and the servers by disabling the central account. Password Policies applied in the Centralized Login Servers can then be enforced across the virtualized environment.

2.  The virtual host servers should have a separate physical network interface controller (NIC) for network console and management operations that is tied into a separate out of band network solution or maintained via VLAN separation.  Physical access to the servers and their storage is controlled and monitored. All patches and updates that are being applied are verified to come from the vendors of the software and have been properly vetted with checksums.

3.  Within the virtualized environment, steps should be taken to ensure that VMs are only able to see traffic destined for them, by mapping them to the proper VLAN and vSwitch. VMs should not be able to modify their MAC addresses, nor to put their virtual NICs into promiscuous mode to snoop the wire. The VMs themselves should not be able to perform copy/paste operations via the console, no extraneous hardware should be associated with them, and VM-to-VM communication outside of normal network operations should be disabled.

4.  The VMs must have proper firewall, anti-malware, anti-virus, and URL-filtering in place so that threats in outside data can be mitigated. Security software that uses host plug-ins to enable features such as firewalls and intrusion prevention should also be added. As with any proactive security measure, reviews of logs and policies for handling events need to be clearly defined.

5.  The shared storage should require unique login credentials for each virtual server and the network should be segregated from the normal application data and Out of Band console traffic. This segregation can be done using VLANs or completely separate physical network connections.

6.  The upstream network should allow only the traffic required for the hosts and their VMs to pass their switch ports, dropping all other extraneous traffic. Layer 2 and Layer 3 protections should be in place against DHCP, Spanning Tree, and routing protocol attacks. Some vendors provide additional features in their third-party vSwitches which can also be used to mitigate attacks within a VM server.


Amazon and Microsoft, beware—VMware cloud is more ambitious than we thought

Tuesday, August 27th, 2013

Desktops, disaster recovery, IaaS, and PaaS make VMware’s cloud compelling.

VMware today announced that vCloud Hybrid Service, its first public infrastructure-as-a-service (IaaS) cloud, will become generally available in September. That’s no surprise, as we already knew it was slated to go live this quarter.

What is surprising is just how extensive the cloud will be. When first announced, vCloud Hybrid Service was described as infrastructure-as-a-service that integrates directly with VMware environments. Customers running lots of applications in-house on VMware infrastructure can use the cloud to expand their capacity without buying new hardware and manage both their on-premises and off-premises deployments as one.

That’s still the core of vCloud Hybrid Service—but in addition to the more traditional infrastructure-as-a-service, VMware will also have a desktops-as-a-service offering, letting businesses deploy virtual desktops to employees without needing any new hardware in their own data centers. There will also be disaster recovery-as-a-service, letting customers automatically replicate applications and data to vCloud Hybrid Service instead of their own data centers. Finally, support for the open source distribution of Cloud Foundry and Pivotal’s deployment of Cloud Foundry will let customers run a platform-as-a-service (PaaS) in vCloud Hybrid Service. Unlike IaaS, PaaS tends to be optimized for building and hosting applications without having to manage operating systems and virtual computing infrastructure.

While the core IaaS service and connections to on-premises deployments will be generally available in September, the other services aren’t quite ready. Both disaster recovery and desktops-as-a-service will enter beta in the fourth quarter of this year. Support for Cloud Foundry will also be available in the fourth quarter. Pricing information for vCloud Hybrid Service is available on VMware’s site. More details on how it works are available in our previous coverage.

Competitive against multiple clouds

All of this gives VMware a compelling alternative to Amazon and Microsoft. Amazon is still the clear leader in infrastructure-as-a-service and likely will be for the foreseeable future. However, VMware’s IaaS will be useful to customers who rely heavily on VMware internally and want a consistent management environment on-premises and in the cloud.

VMware and Microsoft have similar approaches, offering a virtualization platform as well as a public cloud (Windows Azure in Microsoft’s case) that integrates with customers’ on-premises deployments. By wrapping Cloud Foundry into vCloud Hybrid Service, VMware combines IaaS and PaaS into a single cloud service just as Microsoft does.

VMware is going beyond Microsoft by also offering desktops-as-a-service. We don’t have a ton of detail here, but it will be an extension of VMware’s pre-existing virtual desktop products that let customers host desktop images in their data centers and give employees remote access to them. With “VMware Horizon View Desktop-as-a-Service,” customers will be able to deploy virtual desktop infrastructure either in-house or on the VMware cloud and manage it all together. VMware’s hybrid cloud head honcho, Bill Fathers, said much of the work of adding and configuring new users will be taken care of automatically.

The disaster recovery-as-a-service builds on VMware’s Site Recovery Manager, letting customers see the public cloud as a recovery destination along with their own data centers.

“The disaster recovery use case is something we want to really dominate as a market opportunity,” Fathers said in a press conference today. At first, it will focus on using existing replication capabilities to replicate into the vCloud Hybrid Service. Going forward, VMware will try to provide increasing levels of automation and more flexibility in configuring different disaster recovery destinations, he said.

vCloud Hybrid Service will be hosted in VMware data centers in Las Vegas, NV, Sterling, VA, Santa Clara, CA, and Dallas, TX, as well as data centers operated by Savvis in New York and Chicago. Non-US data centers are expected to join the fun next year.

When asked if VMware will support movement of applications between vCloud Hybrid Service and other clouds, like Amazon’s, Fathers said the core focus is ensuring compatibility between customers’ existing VMware deployments and the VMware cloud. However, he said VMware is working with partners who “specialize in that level of abstraction” to allow portability of applications from VMware’s cloud to others and vice versa. Naturally, VMware would really prefer it if you just use VMware software and nothing else.


VMware unwraps virtual networking software – promises greater network control, security

Monday, August 26th, 2013

VMware announces that NSX – which combines network and security features – will be available in the fourth quarter

VMware today announced that its virtual networking software and security software products packaged together in an offering named NSX will be available in the fourth quarter of this year.

The company has been running NSX in beta since the spring, but as part of a broader announcement of software-defined data center functions made today at VMworld, the company took the wraps off its long-awaited virtual networking software. VMware has based much of the NSX functionality on technology it acquired from Nicira last year.

The generally available version of NSX includes two major new features compared to the beta. First, there is technical integration with a variety of partner companies, including the ability for the virtual networking software to control network and compute infrastructure from a range of hardware providers. Second, it virtualizes some network functions, like firewalling, allowing for better control of virtual networks.

The idea of virtual networking is similar to that of virtual computing: abstracting the core features of networking from the underlying hardware. Doing so lets organizations more granularly control their networks, including spinning up and down networks, as well as better segmentation of network traffic.

Nicira has been a pioneer in the network virtualization industry, and last year VMware spent $1.2 billion to acquire the company. In March, VMware announced plans to integrate Nicira’s technology into its product suite through the NSX software, and today the company announced that NSX will become generally available in the coming months. NSX will be a software update that is both hypervisor and hardware agnostic, says Martin Casado, chief architect, networking at VMware.

The need for the NSX software is being driven by the migration from a client-server world to a cloud world, he says. In this new architecture, there is just as much traffic, if not more, within the data center (east-west traffic) as there is between clients and edge devices (north-south traffic).

One of the biggest advancements in the newly announced NSX software is virtual firewalling. Instead of using hardware or virtual firewalls that sit at the edge of the network to control traffic, NSX embeds the firewall within the software, so it is ubiquitous throughout the deployment. This removes the bottlenecks that would be created by using a centralized firewall system, Casado says.

“We’re not trying to take over the firewall market or do anything with north-south traffic,” Casado says. “What we are doing is providing functionality for traffic management within the data center. There’s nothing that can do that level of protection for the east-west traffic. It’s addressing a significant need within the industry.”

VMware has signed on a bevy of partners that are compatible with the NSX platform. The software is hardware and hypervisor agnostic, meaning that the software controller can manage network functionality that is executed by networking hardware from vendors like Juniper, Arista, HP, Dell and Brocade. In press materials sent out by the company, Cisco is not named as a partner, but VMware says NSX will work with networking equipment from the leading network vendor.

On the security side, services from Symantec, McAfee and Trend Micro will work within the system, while underlying compute platforms from OpenStack, CloudStack, Red Hat and Piston Cloud Computing Co. will work with NSX. Nicira has worked heavily in the OpenStack community.

“In virtual networks, where hardware and software are decoupled, a new network operating model can be achieved that delivers improved levels of speed and efficiency,” said Brad Casemore, research director for Data Center Networks at IDC. “Network virtualization is becoming a game shifter, providing an important building block for delivering the software-defined data center, and with VMware NSX, VMware is well positioned to capture this market opportunity.”


Three things to consider before buying into Disaster Recovery as a Service

Tuesday, July 2nd, 2013

Disaster Recovery as a Service (DRaaS) backs up the whole environment, not just the data.

“Most of the providers I spoke with also offer a cloud-based environment to spin up the applications and data to when you declare a disaster,” says Karyn Price, Industry Analyst, Cloud Computing Services, Frost & Sullivan. This enables enterprises to keep applications available.

Vendors offer DRaaS to increase their market share and revenues. Enterprises, especially small businesses, are interested in the inexpensive yet comprehensive DR solution DRaaS offers. But there are cautionary notes and considerations that demand the smart business’s attention before and after buying into DRaaS.

DRaaS market drivers, vendors and differentiation

DRaaS is a wise move for cloud vendors hungry for a bigger slice of the infrastructure market.

“DRaaS is the first cloud service to offer value for an entire production infrastructure, all the servers and all the storage,” says John P. Morency, Research Vice President, Gartner. This opens up more of the market, providing much higher revenues for vendors.

DRaaS creates new revenue streams and opportunities for vendors, too.

“They want to bring comprehensive recovery to a wider variety of business customers,” Price says. Where only an enterprise could afford a full-blown BC/DR solution before, now the cloud offers a more affordable option for BC/DR to the small business.

Vendors leveraging DRaaS include Verizon TerreMark, Microsoft and Symantec (a joint offering), IBM, Sungard and NTT Data, Earthlink, Windstream, Bluelock, Virtustream, Verastream, EVault, and a trove of smaller contenders seeking to differentiate themselves in the marketplace, according to Price and Morency.

“While most of the DRaaS vendors are relatively similar in their cost structures and recovery time objectives, the recovery point objective is a differentiator between vendor offerings,” says Price. Whereas Dell and Virtustream each report RPOs of 5 minutes, according to Frost & Sullivan’s Karyn Price, Windstream reports RPOs of 15 minutes to 1 hour, depending on the DRaaS service the customer chooses.
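Those RPO figures can be read as replication intervals. A minimal sketch (illustrative function names, not any vendor’s tooling) of how an advertised RPO constrains how often replication must run:

```python
from datetime import timedelta

def worst_case_data_loss(replication_interval: timedelta) -> timedelta:
    """With asynchronous replication, the recovery point objective (RPO)
    is bounded by the replication interval: a disaster can strike just
    before the next sync, losing up to one full interval of data."""
    return replication_interval

def meets_rpo(replication_interval: timedelta, rpo: timedelta) -> bool:
    # A vendor's advertised RPO is met only if replication runs at
    # least that often.
    return worst_case_data_loss(replication_interval) <= rpo

# A 5-minute replication cycle satisfies a 15-minute RPO...
assert meets_rpo(timedelta(minutes=5), timedelta(minutes=15))
# ...but an hourly cycle does not.
assert not meets_rpo(timedelta(minutes=60), timedelta(minutes=15))
```

This is why RPO, unlike RTO, is largely a property of how the replication pipeline is engineered rather than how fast a provider can spin up recovery capacity.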

DRaaS: No drab solution

With so much to offer, DRaaS has a bright future in the BC/DR realm. Companies with minimal tolerance for downtime, those looking to enter the cloud for the first time, those seeking a complete DR solution and those with infrastructure in severe weather risk locations are all interested in DRaaS.

“Most of the DRaaS vendors I speak with offer recovery times of four hours or fewer,” says Price.

DRaaS is an option for enterprises that want to test the cloud for the first time.

“If you are in the middle of a disaster and suddenly you have no infrastructure to restore to, would you rather have a cloud-based solution that maybe you would have been wary of as your primary option or would you rather have nothing?” asks Price.

Small businesses see DRaaS as a way to finally afford a broader BC/DR solution.

“DRaaS can minimize or even completely eliminate the requirement for company capital in order to implement a DR solution,” says Jerry Irvine, CIO, Prescient Solutions and member of the National Cyber Security Task Force. Since DRaaS is a cloud solution, businesses can order it at almost any capacity, making it a more cost-effective fit for smaller production environments. Of the 8,000 DRaaS production instances that Gartner estimates exist today, 85 to 90 percent are smaller instances of three to six production applications, Morency says. These companies typically run between five and sixty VMs, with no more than two to five TB of associated production storage, Morency adds.

Businesses hit with increasingly severe weather catastrophes are very interested in DRaaS.

“When you look at the aftermath of events like the Tsunami in Japan, there is a lot more awareness and a lot more pressure from the board level to do disaster recovery,” says Morency. This pressure and the affordability of DRaaS can tip the scales for many a small business.

Proceed with caution

Enterprises and small businesses considering DRaaS face a lot of due diligence before choosing a solution and a lot of work afterwards.

“It’s not like you just upload all the work to the service provider,” says Richard P. Tracy, CSO, Telos Corporation. If the enterprise requires replication to a cloud environment supported exclusively by SAS 70 data centers, then the DRaaS provider had better be able to demonstrate that it has SAS 70 data centers and agree to keep data in the cloud only in those facilities.

Depending on the industry, the customer must confirm that the DRaaS provider meets operational standards for HIPAA, GLBA, PCI-DSS, or some of the ISO standards as needed.

“You don’t want to trust that they do just because it says so on their website,” says Tracy.

Due to the nature of the cloud, DRaaS can offer innate data replication and redundancy for reliable backup and recovery. But unless otherwise specified, DRaaS may cover only replication for failover of core systems; backup may not be included.

“Many organizations define their backup systems or data repositories as critical solutions for the DR facilities to replicate,” says Irvine. This provides for replication of core systems and data backup to DRaaS.

And once the enterprise’s data successfully fails over to the DRaaS service, at some point the enterprise and the service have to roll it back to the enterprise infrastructure.

“You have to make sure that the DRaaS service will support you in that process,” says Tracy. There are processes, procedures and metrics related to exit strategies for outsourcing that the customer must define during the disaster recovery planning process, and these will vary by organization. They set the timing for how soon data restoration to the primary location takes place and how soon the company switches the systems back on.

“The SLA should define the DRaaS provider’s role in that,” says Tracy. “It’s not just failover, it’s recovery.”

DRaaS: Worth consideration

DRaaS can replicate infrastructure, applications and data to the cloud to enable full environmental recovery. The price is right and the solution is comprehensive. Still in its early stages, DRaaS is by all signs worth consideration, especially with the number and types of offerings available, and the obvious market need.


100Gbps and beyond: What lies ahead in the world of networking

Tuesday, February 19th, 2013

App-aware firewalls, SAN alternatives, and other trends for the future.

The corporate data center is undergoing a major transformation the likes of which haven’t been seen since Intel-based servers started replacing mainframes decades ago. It isn’t just the server platform: the entire infrastructure from top to bottom is seeing major changes as applications migrate to private and public clouds, networks get faster, and virtualization becomes the norm.

All of this means tomorrow’s data center is going to look very different from today’s. Processors, systems, and storage are getting better integrated, more virtualized, and more capable of making use of greater networking and Internet bandwidth. At the heart of these changes are major advances in networking. We’re going to examine six specific trends driving the evolution of the next-generation data center and discover what both IT insiders and end-user departments outside of IT need to do to prepare for these changes.

Beyond 10Gb networks

Network connections are getting faster to be sure. Today it’s common to find 10-gigabit Ethernet (GbE) connections to some large servers. But even 10GbE isn’t fast enough for data centers that are heavily virtualized or handling large-scale streaming audio/video applications. As your population of virtual servers increases, you need faster networks to handle the higher information loads required to operate. Starting up a new virtual server might save you from buying a physical server, but it doesn’t lessen the data traffic over the network—in fact, depending on how your virtualization infrastructure works, a virtual server can impact the network far more than a physical one. And as more audio and video applications are used by ordinary enterprises in common business situations, the file sizes balloon too. This results in multi-gigabyte files that can quickly fill up your pipes—even the big 10Gb internal pipes that make up your data center’s LAN.

Part of coping with all this data transfer is being smarter about identifying network bottlenecks and removing them, such as immature network interface card drivers that slow down server throughput. Bad or sloppy routing paths can introduce network delays too. Typically, neither bad drivers nor bad routes have been examined carefully before now, because they were sufficient to handle less demanding traffic patterns.

It doesn’t help that more bandwidth can sometimes require new networking hardware. The vendors of these products are well prepared, and there are now numerous routers, switches, and network adapter cards that operate at 40- and even 100-gigabit Ethernet speeds. Plenty of vendors sell this gear: Dell’s Force10 division, Mellanox, HP, Extreme Networks, and Brocade. It’s nice to have product choices, but the adoption rate for 40GbE equipment is still rather small.

Using this fast gear is complicated by two issues. First is price: the stuff isn’t cheap. Prices per 40Gb port—that is, the cost of each 40Gb port on a switch—are typically $2,500, way more than a typical 10Gb port price. Depending on the nature of your business, these higher per-port prices might be justified, but the initial outlay isn’t the only cost. Most of these devices also require new kinds of wiring connectors that will make implementation of 40GbE difficult, and a smart CIO will keep total cost of ownership in mind when looking to expand beyond 10Gb.

As Ethernet has attained faster and faster speeds, the cable plant needed to run these faster networks has slowly evolved. The old RJ45 category 5 or 6 copper wiring and fiber connectors won’t work with 40GbE. New connections using the Quad Small Form-factor Pluggable (QSFP) standard will be required. Cables with QSFP connectors can’t be “field terminated,” meaning IT personnel or cable installers can’t cut orange or aqua fiber to length and attach SC or LC heads themselves. Data centers will need to figure out their cabling lengths and pre-order custom cables manufactured with the connectors already attached. This is potentially a big barrier for data centers used to working primarily with copper cables, and it also means any current investment in your fiber cabling likely won’t cut it for these higher-speed networks of the future.

Still, as IT managers get more of an understanding of QSFP, we can expect to see more 40 and 100 gigabit Ethernet network legs in the future, even if the runs are just short ones that go from one rack to another inside the data center itself. “Top of rack” switches connect to the servers in their rack over slower links and uplink to a central switch or set of switches over a high-speed connection. A typical configuration for the future might be one or ten gigabit connections from individual servers within one rack to a switch within that rack, and then a 40GbE uplink from that switch back to larger edge or core network switches. And as these faster networks are deployed, expect to see major upgrades in network management, firewalls, and other applications to handle the higher data throughput.
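The uplink arithmetic behind such a top-of-rack design can be sketched as follows (the port counts are illustrative, not from any specific switch):

```python
def oversubscription_ratio(server_ports: int, server_speed_gbps: float,
                           uplink_ports: int, uplink_speed_gbps: float) -> float:
    """Ratio of total downlink (server-facing) bandwidth to uplink
    bandwidth on a top-of-rack switch. A ratio above 1.0 means the
    servers can collectively offer more traffic than the uplinks carry."""
    downlink = server_ports * server_speed_gbps
    uplink = uplink_ports * uplink_speed_gbps
    return downlink / uplink

# 48 servers at 10GbE each, uplinked over four 40GbE ports:
# 480 Gb/s of server bandwidth over 160 Gb/s of uplink, a 3:1 ratio.
ratio = oversubscription_ratio(48, 10, 4, 40)
```

A modest oversubscription ratio is usually acceptable because servers rarely all transmit at line rate simultaneously; the ratio just tells you where the pipe can saturate first.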

The rack as a data center microcosm

In the old days when x86 servers were first coming into the data center, you’d typically see information systems organized into a three-tier structure: desktops running the user interface or presentation software, a middle tier containing the logic and processing code, and the data tier contained inside the servers and databases. Those simple days are long gone.

Still living on from that time, though, are data centers that have separate racks, staffs, and management tools for servers, for storage, for routers, and for other networking infrastructure. That worked well when the applications were relatively separate and didn’t rely on each other, but that doesn’t work today when applications have more layers and are built to connect to each other (a Web server to a database server to a scheduling server to a cloud-based service, as a common example). And all of these pieces are running on virtualized machines anyway.

Today’s equipment racks are becoming more “converged” and are handling storage, servers, and networking tasks all within a few inches of each other. The notion first started with blade technology, which puts all the essential elements of a computer on a single expansion card that can easily slip into a chassis. Blades have been around for many years, but the leap was using them along with the right management and virtualization software to bring up new instances of servers, storage, and networks. Packing many blade servers into a single large chassis also dramatically increases the density available in a single rack.

It is more than just bolting a bunch of things inside a rack: vendors selling these “data center in a rack” solutions are providing pre-engineering, testing, and integration services. They also offer sample designs that make it easy to specify particular components and reduce cable clutter, along with software to automate management. This arrangement improves throughput and makes the various components easier to manage. Several vendors offer this type of computing gear, including Dell’s Active Infrastructure and IBM’s PureSystems. It used to be necessary for different specialty departments within IT to configure different components here: one group for the servers, one for the networking infrastructure, one for storage, etc. That took a lot of coordination and effort. Now it can all be done coherently and from a single source.

Let’s look at Dell’s Active Infrastructure as an example. Dell claims it eliminates more than 75% of the steps needed to power on a server and connect it to your network. It comes in a rack with PowerEdge Intel servers, SAN arrays from Dell’s Compellent division, and blades from Dell’s Force10 division that can be used for input/output aggregation and high-speed network connections. The entire package is very energy efficient, and these systems can be deployed quickly. We’ve seen demonstrations from IBM and Dell where a complete network cluster is brought up from a cold start within an hour, all managed from a Web browser by a system administrator who could be sitting on the opposite side of the world.

Beyond the simple SAN

As storage area networks (SANs) proliferate, they are getting more complex. SANs now use more capable storage management tools to make them more efficient and flexible. It used to be the case that SAN administration was a very specialized discipline that required arcane skills and deep knowledge of array performance tuning. That is not the case any longer, and as SAN tool sets improve, even IT generalists can bring one online.

The above data center clusters from Dell and others are just one example of how SANs have been integrated into other products. Beyond these efforts, there is a growing class of management tools that can help provide a “single pane of glass” view of your entire SAN infrastructure. These can also make your collection of virtual disks more efficient.

One of the problems with virtualized storage is that you can provision a lot of empty space on your physical hard drives that never gets touched by any of your virtual machines (VMs). In a typical scenario, you might have a terabyte of storage allocated to a set of virtual machines and only a few hundred gigabytes actually used by the virtual machines’ operating systems and installed applications. The dilemma is that you want enough space available to each virtual drive for it to grow, so you often tie up space that could otherwise be used.

This is where dynamic thin provisioning comes into play. Most SAN arrays have some type of thin provisioning built in, letting you present storage without physically reserving it—a 1TB thin-provisioned volume reports itself as being 1TB in size but only takes up the amount of space actually in use by its data. In other words, a physical 1TB chunk of disk could be “thick” provisioned into a single 1TB volume or thin provisioned into maybe a dozen 1TB volumes, letting you oversubscribe the disk. Thin provisioning can play directly into your organization’s storage forecasting, letting you establish maximum volume sizes early and then buy physical disk to track the volumes’ growth.
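The thin-provisioning idea can be sketched with a simple block map that is allocated on first write (an illustration of the concept, not any array’s actual implementation):

```python
class ThinVolume:
    """Sketch of thin provisioning: the volume advertises its full
    logical size but consumes physical blocks only when written."""
    BLOCK = 4096  # bytes per allocation unit

    def __init__(self, logical_size: int):
        self.logical_size = logical_size  # size reported to the host
        self._blocks = {}                 # block index -> buffer, allocated on write

    def write(self, offset: int, data: bytes) -> None:
        if offset + len(data) > self.logical_size:
            raise ValueError("write past end of volume")
        for i, byte in enumerate(data):
            block, pos = divmod(offset + i, self.BLOCK)
            # Allocate the backing block only when it is first touched.
            buf = self._blocks.setdefault(block, bytearray(self.BLOCK))
            buf[pos] = byte

    @property
    def physical_size(self) -> int:
        # Space actually consumed on the array.
        return len(self._blocks) * self.BLOCK

# A "1TB" volume with 8KB written consumes only two 4KB blocks:
vol = ThinVolume(logical_size=1 << 40)
vol.write(0, b"x" * 8192)
```

The oversubscription risk follows directly from this design: if every thin volume fills up at once, the array must actually have the physical blocks to back them.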

Another trick many SANs can do these days is data deduplication. There are many different deduplication methods, with each vendor employing its own “secret sauce,” but they all aim to avoid storing the same chunks of data multiple times. When employed with virtual machines, data deduplication means commonly used operating system and application files don’t have to be stored in multiple virtual hard drives and can share one physical repository, saving copious space. For example, a hundred Windows virtual machines all have essentially the same content in their “Windows” directories, their “Program Files” directories, and many other places. Deduplication ensures those common pieces of data are stored only once, freeing up tremendous amounts of space.

Software-defined networks

As enterprises invest heavily in virtualization and hybrid clouds, one element still lags: the ability to quickly provision network connections on the fly. In many cases this is due to procedural or policy issues.

Some of this lag can be removed by having a virtual network infrastructure that can be provisioned as easily as spinning up a new server or SAN. The idea behind these software-defined networks (SDNs) isn’t new: indeed, the term has been around for more than a decade. A good working definition of SDN is the separation of the data and control functions of today’s routers, switches, and other networking infrastructure, with a well-defined programming interface between the two. Most of today’s routers and other networking gear mix the two functions. This makes it hard to adjust network infrastructure as we add tens or hundreds of VMs to our enterprise data centers. As each virtual server is created, you need to adjust your network addresses, firewall rules, and other networking parameters. These adjustments take time if done manually, and they don’t scale if you are adding tens or hundreds of VMs at once.

Automating these changes hasn’t been easy. A few vendors have offered early tools, but they were quirky and proprietary. Many IT departments employ virtual LANs (VLANs), which offer a way to segment physical networks into more manageable subsets with traffic optimization and other prioritization methods. But VLANs don’t necessarily scale well either: you could run out of headroom as the amount of data traversing your infrastructure puts more strain on managing multiple VLANs.

The modern origins of SDN came about through the efforts of two computer science professors, Stanford’s Nick McKeown and Berkeley’s Scott Shenker, along with several of their grad students. The project, called “Ethane,” began in the mid-2000s with the goal of improving network security with a new series of flow-based protocols. One of those students was Martin Casado, who went on to co-found Nicira, an early SDN startup acquired by VMware in 2012. A big outgrowth of these efforts was the creation of a new networking protocol called OpenFlow.

Now Google and Facebook, among many others, have adopted the OpenFlow protocol in their own data center operations. The protocol has also gotten its own foundation, called the Open Networking Foundation, to move it through the engineering standards process.

OpenFlow offers a way to have programmatic control over how new networks are set up and torn down as the number of VMs waxes and wanes. Getting this collection of programming interfaces to the right level of specificity is key to SDN and OpenFlow’s success. Now that VMware is involved in OpenFlow, we expect to see advances in products and support for the protocol, plus a number of vendors offering alternatives as the standards process evolves.
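The model OpenFlow standardizes can be illustrated with a toy match-action table, where a separate controller installs forwarding rules and the switch punts unknown traffic back to it (field names and actions here are illustrative, not real OpenFlow messages):

```python
class FlowTable:
    """Toy sketch of the OpenFlow idea: the switch's job reduces to a
    match-action table, and a separate controller installs the rules."""

    def __init__(self):
        self.rules = []  # (match_fields, action) pairs, checked in order

    def install(self, match: dict, action: str) -> None:
        # In OpenFlow proper this would be a flow-mod message sent by
        # the controller; here it is just a method call.
        self.rules.append((match, action))

    def forward(self, packet: dict) -> str:
        for match, action in self.rules:
            if all(packet.get(field) == value for field, value in match.items()):
                return action
        # Table miss: punt the packet to the controller, which can
        # decide and install a new rule for this flow.
        return "send_to_controller"

table = FlowTable()
table.install({"dst_ip": "10.0.0.2"}, "output:port2")
table.install({"dst_ip": "10.0.0.3"}, "drop")
```

Programmatic setup and teardown then amounts to installing and deleting rules as VMs come and go, rather than reconfiguring each box by hand.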

SDN makes sense for one particular use case right now: hybrid cloud configurations where your servers are split between on-premises equipment and an offsite or managed service provider. This is why Google et al. are using SDN to knit together their numerous global sites. With OpenFlow, they can bring up new capacity across the world and have it appear as a single unified data center.

But SDN isn’t a panacea, and for the short-term it probably is easier for IT staff to add network capacity manually rather than rip out their existing networking infrastructure and replace with SDN-friendly gear. The vendors who have the lion’s share of this infrastructure are still dragging behind on the SDN and OpenFlow efforts, in part because they see this as a threat to their established businesses. As SDNs become more popular and the protocols mature, expect this situation to change.

Backup as a Service

As more applications migrate to Web services, one remaining challenge is being able to handle backups effectively across the Internet. This is useful under several situations, such as for offsite disaster recovery, quick recovery from cloud-based failures, or backup outsourcing to a new breed of service providers.

There are several issues at stake here. First is that building a fully functional offsite data center is expensive, and it requires both a skilled staff and a lot of coordinated effort to regularly test and tune the failover operations, so that when disaster does strike, a company can keep its networks and data flowing. Through vendors such as Trustyd and QuorumLabs, there are better ways to provide what is coming to be called “Backup as a Service.”

Both companies sell remote backup appliances that work somewhat differently. Trustyd’s appliance is first connected to your local network, where it makes its initial backups at wire speed. This sidesteps one of the limitations of any cloud-based backup service: the initial backup means sending a lot of data over the Internet connection, which can take days or weeks (or longer!). Once this initial copy is created, the appliance is moved to an offsite location where it continues to keep in sync with your network across the Internet. Quorum’s appliance uses virtualized copies of running servers that are maintained offsite and kept in sync with the physical servers inside a corporate data center. Should anything happen to the data center or its Internet connection, the offsite servers can be brought online in a few minutes.

This is just one aspect of the potential problem with backup as a service. Another issue is in understanding cloud-based failures and what impact they have on your running operations. As companies virtualize more data center infrastructure and develop more cloud-based apps, understanding where the failure points are and how to recover from them will be key. Knowing what VMs are dependent on others and how to restart particular services in the appropriate order will take some careful planning.

An exemplary idea is how Netflix has developed a series of tools called “Chaos Monkey,” which it has since made publicly available. Netflix is a big customer of Amazon Web Services, and to ensure that it can continue to operate, the company constantly and deliberately fails parts of its Amazon infrastructure. Chaos Monkey seeks out Amazon’s Auto Scaling Groups (ASGs) and terminates the various virtual machines inside a particular group. Netflix released the source code on GitHub and claims it can be adapted to other cloud providers with a minimum of effort.

If you aren’t using Amazon’s ASGs, this might be a motivation to try them out. The service is a powerful automation tool and can help you run new (or terminate unneeded) instances when your load changes quickly. Even if your cloud deployment is relatively modest, at some point your demand will grow, and you don’t want to depend on your coding skills, or on having IT staff awake, to respond to these changes. ASG makes it easier to juggle the various AWS service offerings to handle varying load patterns. Chaos Monkey is the next step in your cloud evolution and automation. The idea is to run this automated routine during a limited set of hours with engineers standing by to respond to the failures that it generates in your cloud-based services.
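The core loop of such failure injection can be sketched as follows (illustrative names only, not Chaos Monkey’s real API; the instance lists stand in for calls to a cloud provider):

```python
import random

def chaos_monkey(scaling_groups: dict, kill_probability: float = 0.5,
                 rng: random.Random = None) -> list:
    """Toy version of the Chaos Monkey idea: pick instances at random
    from each scaling group and terminate them, so recovery paths are
    exercised constantly rather than only during a real outage."""
    rng = rng or random.Random()
    terminated = []
    for group, instances in scaling_groups.items():
        victims = [i for i in instances if rng.random() < kill_probability]
        for instance in victims:
            instances.remove(instance)    # stand-in for a terminate API call
            terminated.append((group, instance))
    return terminated

groups = {"web": ["i-1", "i-2", "i-3"], "db": ["i-4", "i-5"]}
# With probability 1.0 every instance is terminated.
killed = chaos_monkey(groups, kill_probability=1.0)
```

The discipline lies outside the code: running it only in agreed windows, with engineers watching, turns random termination into a repeatable test of your recovery automation.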

Application-aware firewalls

Firewalls are well-understood technology, but they’re not particularly “smart.” The modern enterprise needs a deeper understanding of all the applications that operate across the network so that it can better control and defend itself. With early-generation firewalls, it was difficult to answer questions like:

  • Are Facebook users consuming too much of the corporate bandwidth?
  • Is someone posting corporate data on a private e-mail account such as customer information or credit card numbers?
  • What changed with my network that’s impacting the perceived latency of my corporate Web apps today?
  • Do I have enough corporate bandwidth to handle Web conference calls and video streaming? What is the impact on my network infrastructure?
  • What is the appropriate business response time of key applications in both headquarters and branch offices?

The newer firewalls can answer these and other questions, because they are application-aware. They understand the way applications interact with the network and the Internet, and firewalls can then report back to administrators in near real time with easy-to-view graphical representations of network traffic.

This new breed of firewalls and packet inspection products comes from big-name vendors such as Intel/McAfee, BlueCoat Networks, and Palo Alto Networks. The firewalls of yesteryear were relatively simple devices: you specified a series of firewall rules that listed particular ports and protocols and whether you wanted to block or allow network traffic through them. That worked fine when applications were well-behaved and used predictable ports, such as file transfer on ports 20 and 21 and e-mail on ports 25 and 110. With the rise of Web-based applications, ports and protocols don’t work as well. Everyone is running their apps across ports 80 and 443, in no small part because of port-based firewalling. It’s becoming more difficult to distinguish between a mission-critical app and a rogue peer-to-peer file service that needs to be shut down.
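The shift can be illustrated with two toy rule tables (hypothetical rules and names, not any vendor’s syntax): a port-only filter versus one that also keys on application identity, here approximated by a parsed HTTP Host header:

```python
# Port-based rules: once everything rides on 80/443, these can't
# tell a CRM app from a rogue file-sharing service.
PORT_RULES = {25: "block", 80: "allow", 443: "allow"}

# Application-aware rules keyed on what the traffic actually is.
APP_RULES = {"facebook.com": "rate-limit", "fileshare.example": "block"}

def port_firewall(port: int) -> str:
    return PORT_RULES.get(port, "block")

def app_aware_firewall(port: int, host: str) -> str:
    if port_firewall(port) == "block":
        return "block"
    # Fall through to application identity, not just the port.
    for domain, action in APP_RULES.items():
        if host.endswith(domain):
            return action
    return "allow"

# The port view waves everything on 443 through...
assert port_firewall(443) == "allow"
# ...while the app-aware view can still throttle or block by identity.
assert app_aware_firewall(443, "www.facebook.com") == "rate-limit"
assert app_aware_firewall(80, "fileshare.example") == "block"
```

Real products identify applications from traffic signatures and behavior rather than a host string, but the decision structure is the same: the action depends on what the application is, not just where the packet is headed.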

Another aspect of advanced firewalls is being able to look at changes to the network and see the root causes, or viewing time-series effects as your traffic patterns differ when things are broken today (but were, of course, working yesterday). Finally, they allow administrators or managers to control particular aspects of an application, such as allowing all users to read their Facebook wall posts but not necessarily send out any Facebook messages during working hours.

Going on from here

These six trends are remaking the data center into one that can handle higher network speeds and more advances in virtualization, but they’re only part of the story. Our series will continue with a real-world look at how massive spikes in bandwidth needs can be handled without breaking the bank at a next-generation sports stadium.


Cisco fills out SDN family with 40G switch, controller, cloud gear for data center

Tuesday, February 5th, 2013

Nexus 6000 designed for high-density 40G; InterCloud, for VM migration to hybrid clouds; ONE Controller to program Cisco switches, routers

Cisco this week will fill out its programmable networking family with a new line of data center switches, cloud connectivity extensions and a software-based SDN controller.

The new products fill out Cisco’s ONE programmable networking strategy, which was unveiled last spring as the company’s answer to the software-defined networking trend pervading the industry. Cisco ONE includes APIs, controllers and agents, and overlay networking techniques designed to enable software programmability of Cisco switches and routers to ease operations, customize forwarding and more easily extend features, among other benefits.

This week’s data center SDN rollout comes after last week’s introduction of the programmable Catalyst 3850 switch for the enterprise campus.

The new Nexus 6000 comes in two configurations: the 4RU Nexus 6004 and the 1RU Nexus 6001. The 6004 scales from 48 Layer 2/3 40Gbps Ethernet ports, all at line rate, Cisco says, to 96 40G ports through four expansion slots. The switch also supports 384 Layer 2/3 10G Ethernet ports at line rate, and 1,536 Gigabit Ethernet/10G Ethernet ports using Cisco’s FEX fabric extenders.

The Nexus 6001 sports 48 10G ports and four 40G ports through its expansion slots. The Nexus 6000 line features 1 microsecond port-to-port latency and support for up to 75,000 virtual machines on a single switch, Cisco says. It also supports Fibre Channel over Ethernet tunneling on its 40G ports.
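A quick sanity check of the 6004’s stated maximums, assuming the 10G count comes from 4x10G breakout of each 40G port and an even split across the expansion slots (an inference from the figures quoted above, not a Cisco statement):

```python
# Port arithmetic for the Nexus 6004 figures quoted above.
base_40g_ports = 48          # line-rate 40G ports in the base chassis
ports_per_slot = 12          # inferred: (96 - 48) spread over four slots
max_40g_ports = base_40g_ports + 4 * ports_per_slot   # -> 96

# Each 40G port can be broken out into four 10G ports.
breakout_10g = max_40g_ports * 4                      # -> 384
```

The 384-port 10G figure is thus consistent with the 96-port 40G maximum rather than an independent limit.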

The Nexus 6000 will go up against 10G and 40G offerings in Arista Networks’ 7000 series switches, Dell’s Force10 switches and Juniper’s QFabric platforms. Cisco also announced 40G expansion modules for the Nexus 5500 top of rack switch and Nexus 2248PQ fabric extender to connect into the Nexus 6000 for 10G server access and 40G aggregation.

Cisco also unveiled the first service module for its Nexus 7000 core 10G data center switch. The Network Analysis Module-NX1 (NAM-NX1) provides visibility across physical, virtual and cloud resources, Cisco says, including Layer 2-7 deep packet inspection and performance analytics. A software version, called virtual NAM (vNAM), will also be available for deployment on a switch in the cloud.

For hybrid private/public cloud deployments, Cisco unveiled the Nexus 1000V InterCloud software. This runs in conjunction with the Nexus 1000V virtual switch on a server and provides a secure tunnel, using cryptography and firewalling, into the provider cloud for migration of VM workloads into the public cloud.

Once inside the public cloud, Nexus 1000V InterCloud provides a secure “container” to isolate the enterprise VMs from other tenants, essentially forming a Layer 2 virtual private cloud within the provider’s environment. The enterprise manages that container using Cisco’s Virtual Network Management Center InterCloud software on the customer premises.

Within the context of Cisco ONE, Nexus 1000V InterCloud is an overlay, while the Nexus 6000 is a physical scaling element for the virtual data center. A key core element of Cisco ONE is the new ONE Controller unveiled this week.

ONE Controller is software that runs on a standard x86 server. It controls the interaction between a Cisco network and the applications that run on and manage it, through a set of northbound and southbound APIs that handle communication between those applications and the network.

Those APIs include Cisco’s onePK set, OpenFlow and others on the southbound side between the controller and the switches and routers; and REST, Java and others on the northbound side between the controller and Cisco, customer, ISV and open source applications.

Among the Cisco applications for the ONE Controller are a previously announced network slicing program for network partitioning, and two new ones: network tapping and custom forwarding.

Network tapping provides the ability to monitor, analyze and debug network flows; and custom forwarding allows operators to program specific forwarding rules across the network based on parameters like low latency.

Cisco also provided an update on the phased rollout of Cisco ONE across its product portfolio. OnePK platform APIs will be available on the ISR G2 and ASR 1000 routers, and Nexus 3000 switch in the first half of this year. They’ll be on the Nexus 7000 switch and ASR 9000 router in the second half of 2013.

OpenFlow agents will be on the Nexus 3000 in the first half of this year. This is a change from Cisco’s initial plan for OpenFlow, announced last spring, which had it appearing first on the Catalyst 3000. OpenFlow will now appear on the Catalyst 3000 and 6500, and the Nexus 7000 switch and ASR 9000 router, in the second half of this year.

For Cisco ONE overlay networks, the Cloud Services Router 1000V, which was also introduced last spring, is now slated to ship this quarter. It was expected in the fourth quarter of 2012. Microsoft Hyper-V support in the Nexus 1000V virtual switch will appear in the first half of this year, as will a VXLAN Gateway for the 1000V. KVM hypervisor support will emerge in the second half of this year.

As for the product announced this week, the Nexus 6004 will ship this quarter and is priced from $40,000 for 12 40G ports to $195,000 for 48 40G ports. The Nexus 6001 will ship in the first half of this year and pricing will be announced when it ships.

The 40G module for the Nexus 5000 series will ship in the first half, with pricing to come at that time. The 40G-enabled Nexus 2248PQ will cost $12,000 and ship in the first quarter.

The NAM-NX1 for the Nexus 7000 will ship in the first half with pricing to come at shipment. The vNAM will enter proof-of-concept trials in the first half.

The Cisco ONE Controller will also be available in the first half. Pricing will be announced when it ships.


Virtual machine used to steal crypto keys from other VM on same server

Tuesday, November 6th, 2012

New technique could pierce a key defense found in cloud environments.

Piercing a key defense found in cloud environments such as Amazon’s EC2 service, scientists have devised a virtual machine that can extract private cryptographic keys stored on a separate virtual machine when it resides on the same piece of hardware.

The technique, unveiled in a research paper published by computer scientists from the University of North Carolina, the University of Wisconsin, and RSA Laboratories, took several hours to recover the private key for a 4096-bit ElGamal-generated public key using the libgcrypt v.1.5.0 cryptographic library. The attack relied on “side-channel analysis,” in which attackers crack a private key by studying the electromagnetic emanations, data caches, or other manifestations of the targeted cryptographic system.

One of the chief selling points of virtual machines is their ability to run a variety of tasks on a single computer rather than relying on a separate machine to run each one. Adding to the allure, engineers have long praised the ability of virtual machines to isolate separate tasks, so one can’t eavesdrop or tamper with the other. Relying on fine-grained access control mechanisms that allow each task to run in its own secure environment, virtual machines have long been considered a safer alternative for cloud services that cater to the rigorous security requirements of multiple customers.

“In this paper, we present the development and application of a cross-VM side-channel attack in exactly such an environment,” the scientists wrote. “Like many attacks before, ours is an access-driven attack in which the attacker VM alternates execution with the victim VM and leverages processor caches to observe behavior of the victim.”

The attack extracted an ElGamal decryption key that was stored on a VM running the open-source GNU Privacy Guard. The code that leaked the tell-tale details to the malicious VM is the latest version of the widely used libgcrypt, although earlier releases are also vulnerable. The scientists focused specifically on the Xen hypervisor, which is used by services such as EC2. The attack worked only when both attacker and target VMs were running on the same physical hardware. That requirement could make it harder for an attacker to target a specific individual or organization using a public cloud service. Even so, it seems feasible that attackers could use the technique to probe a given machine and possibly mine cryptographic keys stored on it.

The technique, as explained by Johns Hopkins University professor and cryptographer Matthew Green, works by causing the attack VM to allocate contiguous memory pages and then execute instructions that load the cache of the virtual CPU with cache-line-sized blocks it controls. Green continued:

The attacker then gives up execution and hopes that the target VM will run next on the same core—and moreover, that the target is in the process of running the square-and-multiply operation. If it is, the target will cause a few cache-line-sized blocks of the attacker’s instructions to be evicted from the cache. Which blocks are evicted is highly dependent on the operations that the target conducts.

The technique allows attackers to acquire fragments of the cryptographic “square-and-multiply” operation carried out by the target VM. The process can be difficult, since some of the fragments can contain errors that have the effect of throwing off an attacker trying to guess the contents of a secret key. To get around this limitation, the attack compares thousands of fragments to identify those with errors. The scientists then stitched together enough reliable fragments to deduce the decryption key.
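The prime-and-probe mechanics Green describes can be sketched as a toy simulation. This is purely illustrative, with no real cache timing, no noise, and invented names; it only shows why the victim's square-and-multiply sequence becomes readable from which of the attacker's own cache lines get evicted:

```python
# Toy model of an access-driven cache side channel (prime+probe style).
# A real attack measures probe latencies on hardware; here the "cache" is a dict.
NUM_SETS = 8  # tiny direct-mapped cache: one resident line per set

def victim_step(bit, cache):
    """One square-and-multiply iteration: 'multiply' runs only for 1-bits."""
    cache[0] = "victim"        # 'square' always touches set 0
    if bit == 1:
        cache[3] = "victim"    # 'multiply' touches set 3 only when the bit is 1

def attacker_recover(victim_key):
    recovered = []
    for bit in victim_key:
        # PRIME: fill every cache set with the attacker's own lines
        cache = {s: "attacker" for s in range(NUM_SETS)}
        victim_step(bit, cache)  # victim runs next on the same core
        # PROBE: a set that no longer holds our line was touched by the victim
        recovered.append(1 if cache[3] != "attacker" else 0)
    return recovered

if __name__ == "__main__":
    key = [1, 0, 1, 1, 0, 0, 1, 0]
    print(attacker_recover(key))  # noiseless toy recovers every bit
```

In the real attack the observations are noisy and fragmentary, which is why the researchers had to compare thousands of fragments and stitch together the reliable ones.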

The researchers say it’s the first demonstration of a successful side-channel attack on a virtualized, multicore server. Their paper lists several countermeasures administrators can take to prevent the key leakage. One is to avoid co-residency altogether and use a separate, “air-gapped” computer for high-security tasks. Two others are the use of side-channel-resistant algorithms and a defense known as core scheduling, which prevents attack VMs from tampering with the cache behavior of other virtual machines. Plans for future releases of Xen already include modifying the way so-called processor “interrupts” are handled.

While the scope of the attack remains limited, the research is important because it opens the door to more practical attacks in the future.

“This threat has long been discussed, and security people generally agree that it’s a concern,” Green wrote. “But actually implementing such an attack has proven surprisingly difficult.”


No more VRAM: VMware abandons controversial pricing model

Tuesday, August 28th, 2012

VMware customers will no longer be penalized for using more virtual memory.

Just over a year ago, VMware shocked many of its longtime customers with a new pricing model that charged customers based on the amount of virtual infrastructure they used instead of the amount of physical infrastructure. By charging customers based on use of virtual memory, or VRAM, VMware seemingly penalized customers who succeeded in deploying many virtual machines on few physical servers.

After a customer outcry, VMware raised the VRAM “entitlements” to make the change less punitive. Today, VMware did away with the VRAM pricing model altogether.

At VMworld in San Francisco, newly minted VMware CEO Pat Gelsinger referred to VRAM as a four-letter, dirty word. “Today I am happy to say we are striking this word from the vocabulary,” he said, drawing an extended ovation from the crowd. VMworld is being attended by 20,000 people, and a huge portion of them attended this morning’s keynote.

From now on, pricing will be based entirely on CPUs and sockets, Gelsinger said. By moving back to a pricing model tied to physical infrastructure, VMware is once again encouraging users to get as many virtual servers as they can out of each physical machine, which is the point of virtualization in the first place.

Gelsinger never mentioned specific pricing, but a press release provided a few details about the new pricing of vSphere, VMware’s flagship virtualization software.

“VMware vSphere pricing starts around $83 per processor with no core, vRAM or number of VM limits,” VMware said. “VMware vSphere Essentials is $495, and VMware vSphere Essentials Plus is $4,495. All VMware vSphere Essentials Kits include licensing for 6 CPUs on up to 3 hosts.”

This new, hardware-based pricing applies both to the forthcoming version 5.1 of vSphere and the existing version 5.0. More details can be found in this VMware pricing document. There is also vCloud, a broader software suite including vSphere and numerous other data center automation tools. Prices for vCloud 5.1 will start at $4,995 per processor.

VMware said version 5.1 of vSphere will become generally available on September 11. It has enhancements including the ability to perform live migrations of virtual machines without the need for shared storage. We’ll have more details from VMworld as the conference goes on.


VMware virtual machines targeted by “Crisis” espionage malware

Wednesday, August 22nd, 2012

Malware may be the first to target virtual machines, long used to block attacks.

Researchers have uncovered a single espionage malware attack that is capable of infecting multiple platforms, including computers running the Windows and Mac OS X operating systems, Windows-powered mobile devices, and VMware virtual machines.

When Ars first chronicled the trojan backdoor known as Morcut last month, we reported that it turned Macs into remote spying devices capable of intercepting e-mail and instant-message communications and using internal microphones and cameras to spy on people near the machine. Since then, researchers have developed a more comprehensive view of the malware, which is known by the name “Crisis.” A JAR (Java archive) file that masquerades as a legitimate Adobe Flash installer allows attackers to infect a much wider variety of platforms, including virtual machines, which many people use to protect themselves from infection when performing online banking or researching malicious websites.

“This may be the first malware that attempts to spread onto a virtual machine,” Takashi Katsuki, a researcher with antivirus provider Symantec, wrote in a blog post published on Monday. “Many threats will terminate themselves when they find a virtual machine monitoring application, such as VMware, to avoid being analyzed, so this may be the next leap forward for malware authors.”

When encountering a Windows-based PC, Crisis actively searches for VMware virtual machine images. When they’re found, the malware copies itself onto an image using VMware Player, a tool that makes it easy to run multiple operating systems at the same time on the host machine.

“It does not use a vulnerability in the VMware software itself,” Katsuki wrote. “It takes advantage of an attribute of all virtualization software: namely that the virtual machine is simply a file or series of files on the disk of the host machine. These files can usually be directly manipulated or mounted, even when the virtual machine is not running.”
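Katsuki's point, that a guest is just files on the host's disk, is easy to demonstrate: any ordinary host-side process can enumerate the files that make up a powered-off VM. The sketch below is ours, not code from the Crisis analysis; the helper name is invented, and the extensions are common VMware Workstation defaults.

```python
# Enumerate VMware guest image files visible to an ordinary host-side process.
# Extensions are typical VMware Workstation defaults (illustrative only).
import os

VM_EXTENSIONS = {".vmx", ".vmdk"}  # guest config file and virtual disk

def find_vm_files(root):
    """Walk a directory tree and collect the files that make up VM images."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if os.path.splitext(name)[1].lower() in VM_EXTENSIONS:
                hits.append(os.path.join(dirpath, name))
    return hits
```

Anything returned here can be read, copied, or (as Crisis does via VMware Player) modified without touching a running hypervisor.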

As illustrated in the image above, the JAR file first determines whether it’s present in a Mac or Windows environment. When loaded onto an OS X machine, Crisis accesses a Mach-O file that’s capable of running on Macs. When loaded into a Windows environment, the malware uses a standard Windows executable file to infect PCs, the VMware Player attack to infiltrate virtual machines, and a module that targets Windows Mobile devices when they’re connected to a compromised Windows computer.

So far, Crisis has been detected on fewer than 50 machines worldwide, according to data from Symantec. But given its ability to infect Macs and Windows PCs with a backdoor that taps communications sent by Skype, Adium, MSN Messenger and other apps, Crisis was already considered to be important. It’s even more noteworthy now that its virtual-machine capabilities have been uncovered.


Data center fabrics catching on, slowly

Monday, June 25th, 2012

Early adopters say the expense and time spent to revamp a data center’s switching gear are well worth it; benefits include killer bandwidth and more flexibility

When Government Employees Health Association (GEHA) overhauled its data center to implement a fabric infrastructure, the process was “really straightforward,” unlike that for many IT projects, says Brenden Bryan, senior manager of enterprise architecture. “We haven’t had any ‘gotchas’ or heartburn, with me looking back and saying ‘I wish I made that decision differently.'”

GEHA, based in Kansas City, Mo., and the nation’s second largest health plan and dental plan, processes claims for more than a million federal employees, retirees and their families. The main motivator behind switching to a fabric, Bryan says, was to simplify and consolidate and move away from a legacy Fibre Channel SAN environment.

When he started working at GEHA in August 2010, Bryan says he inherited an infrastructure that was fairly typical: a patchwork of components from different vendors with multiple points of failure. The association also wanted to virtualize its mainframe environment and turn it into a distributed architecture. “We needed an infrastructure in place that was redundant and highly available,” explains Bryan. Once the new infrastructure was in place and stable, the plan was to then move all of GEHA’s Tier 2 and Tier 3 apps to it and then, lastly, move the Tier 1 claims processing system.

GEHA deployed Ethernet switches and routers from Brocade. Now, more than a year after the six-month project was completed, Bryan says the association has a high-speed environment and a 20-to-1 ratio of virtual machines to blade hardware.

“I can keep the number of physical servers I have to buy to a minimum and get more utilization out of them,” says Bryan. “It enables me to drive the efficiencies out of my storage as well as my computing.”

Implementing a data center fabric does require some planning, however. It means having to upgrade and replace old switches with new switching gear because of the different traffic configuration used in fabrics, explains Zeus Kerravala, principal analyst at ZK Research. “Then you have to re-architect your network and reconnect servers.”

Moving flat and forward
A data center fabric is a flatter, simpler network that’s optimized for horizontal traffic flows, compared with traditional networks, which are designed more for client/server setups that send traffic from the server to the core of the network and back out, Kerravala explains.

In a fabric model, traffic moves horizontally across the network, from virtual machine to virtual machine, “so it’s more a concept of server-to-server connectivity.” Fabrics are flatter, with no more than two tiers, versus legacy networks, which have three or more tiers, he says. Storage networks have been designed this way for years, says Kerravala, and now data networks need to migrate the same way.

One factor driving the move to fabrics is that about half of all enterprise data center workloads in Fortune 2000 companies are virtualized, and when companies get to that point, they start seeing the need to reconfigure how their servers communicate with one another and with the network.

“We look at it as an evolution in the architectural landscape of the data center network,” says Bob Laliberte, senior analyst at Enterprise Strategy Group. “What’s driving this is more server-to-server connectivity … there are all these different pieces that need to talk to each other and go out to the core and back to communicate, and that adds a lot of processing and latency.”

Virtualization adds another layer of complexity, he says, because it means dynamically moving things around, “so network vendors have been striving to simplify these complex environments.”

When data centers can’t scale
As home foreclosures spiked in 2006, Walz Group, which handles document management, fulfillment and regulatory compliance services across multiple industries, found its data center couldn’t scale effectively to take on the additional growth required to serve its clients. “IT was impeding the business growth,” says Chief Information Security Officer Bart Falzarano.

The company hired additional in-house IT personnel to deal with disparate systems and management, as well as build new servers, extend the network and add disaster recovery services, says Falzarano. “But it was difficult to manage the technology footprint, especially as we tried to move to a virtual environment,” he says. The company also had some applications that couldn’t be virtualized that would have to be managed differently. “There were different touch points in systems, storage and network. We were becoming counterproductive.”

To reduce the complexity, in 2009 Walz Group deployed Cisco’s Unified Data Center platform, a unified fabric architecture that combines compute, storage, network and management into a platform designed to automate IT as a service across physical and virtual environments. The platform is connected to NetApp SAN storage in a FlexPod configuration.

Previously, when they were using HP technology, Falzarano recalls, one of their database nodes went down, which required getting the vendor on the phone and eventually taking out three of the four CPUs and going through a troubleshooting process that took four hours. By the time they got the part they needed, installed it and returned to normal operations, 14 hours had passed, says Falzarano.

“Now, for the same [type of failure], if we get a degraded blade server node, we un-associate that SQL application and re-associate the SQL app in about four minutes. And you can do the same for a hypervisor,” he says.

IT has been tracking data center performance and benchmarking some of the key metrics. Falzarano reports that the team immediately saw a port-density reduction of 8 to 1, meaning less cabling complexity and fewer required cables. Where IT previously saw a low virtualization ratio of 4 to 1 with the earlier technology, Falzarano says it’s now greater than 15 to 1, and the team can virtualize apps that it couldn’t before.

Other findings include a rack reduction of greater than 50 percent due to the amount of virtualization the IT team was able to achieve; more centralized systems management — now one IT engineer handles 50 systems — and what Falzarano refers to as “system mean time before failure.”

“We were experiencing a large amount of hardware failures with our past technology; one to two failures every 30 days across our multiple data centers. Now we are experiencing less than one failure per year,” he says.

Easy to implement
Like the IT executives at Walz Group, IT team leaders at GEHA believed that deploying a fabric model would not only meet the business requirements, but also reduce complexity, cost and staff needed to manage the data center. Bryan says the association also gained economies of scale by having a staff of two people who can manage an all-Ethernet environment, as opposed to needing additional personnel who are familiar with Fibre Channel.

“We didn’t have anyone on our team who was an expert in Fibre Channel, and the only way to achieve getting the claims processing system to be redundant and highly available was to leverage the Ethernet fabric expertise, which we had on staff,” he says.

Bryan says the association has been able to trim “probably a half million dollars of capital off the budget” since it didn’t have to purchase any Fibre Channel switching, and a quarter of a million dollars in operating expenses since it didn’t need staff to manage Fibre Channel. “Since collapsing everything to an Ethernet fabric, I was able to eliminate a whole stack of equipment,” says Bryan.

GEHA used a local managed services provider to help with setting up some of the more complex pieces of the architecture. “But from the time we unpacked the boxes to the time the environment was running was two days,” says Bryan. “It was very straightforward.”

And the performance, he adds, is “jaw-dropping.” In one test, copying a 4-gigabyte ISO file from one blade to another through the network, with network and storage traffic sharing the same fabric, took less than a second. “We didn’t even see the transfer; I didn’t think it actually copied,” he says.

IT has now utilized the fabric for its backup environment with software from CommVault. Bryan says the association is seeing performance of about a terabyte an hour of throughput on the network, “which is probably eight to 10 times greater than before” the fabric was in place.
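For context, the quoted backup throughput converts to per-second figures as follows (a back-of-the-envelope calculation of ours assuming a decimal terabyte, not a number from GEHA):

```python
# Convert the quoted backup throughput (about 1 TB/hour) to per-second rates.
TB = 1e12  # decimal terabyte, in bytes

bytes_per_second = TB / 3600
mb_per_second = bytes_per_second / 1e6
gbits_per_second = bytes_per_second * 8 / 1e9

print(round(mb_per_second))        # ~278 MB/s sustained
print(round(gbits_per_second, 1))  # ~2.2 Gb/s on the wire
```

That sustained rate comfortably exceeds what a single gigabit Ethernet link could carry, which is consistent with Bryan's "eight to 10 times greater" comparison against the pre-fabric network.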

Today, all of GEHA’s production traffic is on the fabric, and Bryan says he couldn’t be more pleased with the infrastructure. Scaling out is not an issue, he says, and is one of the major advantages of the converged fabric, along with its speed. GEHA is also able to run a very dense workload of virtual machines on a single blade. “Instead of having to spend a lot of money on a lot of blades, you can increase the ROI on those blades without sacrificing performance,” says Bryan.

Laliberte says he sees a long life ahead for data center fabrics, noting that this type of architecture “is just getting started. If you think about complexity and size, and you have thousands of servers in your environment and thousands of switches, any kind of architecture change isn’t done lightly and takes time to evolve.”

Just as it took time for the three-tier architecture to evolve, it will take time for three tiers to be broken down to two, he says, adding that a flat fabric is the next logical step. “These things get announced and are available, but it still takes years to get widespread deployments,” says Laliberte.

Case study: Fabrics at work
When he used to look around his data center, all Dan Shipley would see was “a spaghetti mess” of cables and switches that were expensive to manage and error-prone. Shipley, architect at $600 million Supplies Network, a St. Louis-based wholesaler of office products, says the company had all the typical issues associated with a traditional infrastructure: some 300 servers that consumed a lot of power, took up a lot of space and experienced downtime due to hardware maintenance.

“We’re primarily an HP shop, and we had contracts on all those servers, which were from different generations, so if you lose a motherboard from one model, they’d overnight it and it was a big pain,” Shipley says. “So we said, ‘Look, we’ve got to get away from this. Virtualization is ready for prime time, and we need to get out of this traditional game.'”

Today, what Supplies Network has built in its data center is about as far from traditional as it gets. Rather than deploying Ethernet and Fibre Channel switches, the company turned to I/O Director from Xsigo, which sits at the top of a rack of servers and directs traffic. All of the servers in that rack are plugged into the box, which dynamically establishes connectivity to all other data center resources. Unlike other data center fabrics, I/O Director offers InfiniBand, an open, standards-based switched-fabric interconnect widely used in high-performance computing.

“On all your servers you get rid of all those cables and Ethernet and Fibre switches and connect with one InfiniBand cable or two, for redundancy, which is what we did,” says Shipley. The cables are plugged into I/O Director. “You say ‘On servers one through 10, I want to connect all of those to this external Fibre Channel storage’ and it creates a virtual Fibre Channel storage network. So in reality, this is all running across InfiniBand, but the server … thinks it’s still connecting via Fibre Channel.”

The configuration means they now only have two cables instead of several, “and we have a ton of bandwidth.”

Supplies Network is fully virtualized, and has seen its data center shrink from about 20 racks to about four, Shipley says. Power consumption and cooling have also been reduced.

Shipley says he likes that InfiniBand has been used in the supercomputer world for a decade, and is low-cost and open, whereas other vendors “are so invested in Ethernet, they don’t want to see InfiniBand win.” Today, I/O Director runs at 56 gigabits per second, compared with the fastest Ethernet connection, which is 10 gigabits per second, he says.

In terms of cost, Shipley says a single port 10-gigabit Ethernet card is probably around $600, and an Ethernet switch port is needed on the other side, which runs approximately $1,000 per port. “So for each Ethernet connection, you’re looking at $1,600 for each one.” A 40-gigabit, single-port InfiniBand adapter is probably about $450 to $500, he says, and a 36-port InfiniBand switch box is $6,000, which works out to $167 per port.
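Shipley's per-port arithmetic can be checked directly (list prices as he quotes them; the rounding is ours):

```python
# Rough per-connection cost comparison, using the list prices Shipley cites.
ethernet_nic = 600           # single-port 10GbE card
ethernet_switch_port = 1000  # approximate per-port cost on the switch side
ethernet_total = ethernet_nic + ethernet_switch_port
print(ethernet_total)        # 1600 per 10GbE connection

ib_switch = 6000             # 36-port InfiniBand switch box
ib_ports = 36
ib_port_cost = ib_switch / ib_ports
print(round(ib_port_cost))   # 167 per switch port
```

Even adding the $450 to $500 InfiniBand adapter he mentions, each InfiniBand connection comes in well under half the quoted Ethernet figure, at four times the port speed.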

Shipley says the company has now gotten rid of all of its core Ethernet switches in favor of InfiniBand.

“I was afraid at first because … I didn’t know much about InfiniBand,” he acknowledges, and most enterprise architectures run on Fibre Channel and Ethernet. “We brought [I/O Director] out here and did a bake-off with Cisco’s [Unified Data Center]. It whooped their butt. It was way less cost, way faster, it was simple and easy to use and Xsigo’s support has been fabulous,” he says.

Previously, big database jobs would take 12 hours, Shipley says. Since the deployment of I/O Director, those same jobs run in less than three hours. Migrating a virtual machine from one host to another now takes seconds, as opposed to minutes, he says.

He says he was initially concerned that because Xsigo is a much smaller vendor, it might not be around over the long term. But, says Shipley, “we found out VMware uses these guys.”

“What Xsigo is saying is, instead of having to use Ethernet and Fibre Channel, you can take all those out and put [their product] in and it creates a fabric,” explains Bob Laliberte, senior analyst at Enterprise Strategy Group. “They’re right, but when you’re talking about data center networking and data center fabrics, Xsigo is helping to create two tiers. But the Junipers and Ciscos and Brocades are trying to create that flat fabric.”

InfiniBand is a great protocol, Laliberte adds, but cautions that it’s not necessarily becoming more widely used. “It’s still primarily in the realm of supercomputing sites that need ultra-fast computing.”


HTML5 roundup: access a virtualized desktop from your browser with VMware

Monday, March 19th, 2012

VMware is developing an impressive new feature called WSX that will allow users to access virtualized desktops remotely through any modern Web browser. VMware developer Christian Hammond, who worked on the implementation, demonstrated a prototype this week in a blog post.

According to Hammond, WSX is built with standards-based Web technologies, including the HTML5 Canvas element and Web Sockets. The user installs and runs a lightweight Web server that acts as a relay between the Web-based client and the virtualized desktop instance. It is compatible with VMware Workstation and ESXi/vSphere.
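Hammond has not published the WSX wire protocol, but the relay idea, shipping framebuffer updates over Web Sockets for a Canvas-based client to draw, can be sketched with invented message fields (everything below is our assumption, not VMware's format):

```python
# Illustrative sketch of a WSX-style relay message: package one changed
# framebuffer region as JSON a browser client could draw onto an HTML5 Canvas.
# The actual WSX protocol is not public; all field names here are invented.
import base64
import json

def encode_dirty_rect(x, y, w, h, pixels):
    """Wrap one changed screen region as a Web Socket text frame."""
    return json.dumps({
        "type": "rect",
        "x": x, "y": y, "w": w, "h": h,
        # raw RGBA bytes, base64-encoded so they fit in a text frame
        "data": base64.b64encode(pixels).decode("ascii"),
    })

def decode_dirty_rect(message):
    """What a Canvas client would do before calling putImageData()."""
    msg = json.loads(message)
    msg["data"] = base64.b64decode(msg["data"])
    return msg
```

Sending only the rectangles that changed, rather than full frames, is the standard way remote-display protocols keep bandwidth low enough for the "near-native" frame rates Hammond describes.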

WSX, which doesn’t require any browser plugins, is compatible out of the box with Firefox, Chrome, and Safari on the desktop. It will also work with mobile Safari on iPads that are running iOS 5 or later. Hammond says that Android compatibility is still a work in progress.

The performance is said to be good enough to provide “near-native quality and framerates” when viewing a 720p YouTube video on the virtualized desktop through WSX in Chrome or Firefox. Users who want to test the feature today can see it in action by downloading the Linux version of the VMware Workstation Technology Preview.

Although it’s still somewhat experimental, WSX is a compelling demonstration of how far the Web has evolved as a platform. It also shows how the ubiquity of Web standards makes it possible to deliver complex applications across a wide range of platforms and device form factors.
