Archive for the ‘VMware’ Category

Seven essentials for VM management and security

Tuesday, October 29th, 2013

Virtualization is no longer a new trend; these days it’s an essential element of infrastructure design and management. Yet even where the technology is well established, organizations are still learning as they go when it comes to cloud-based initiatives.

CSO recently spoke with Shawn Willson, the Vice President of Sales at Next IT, a Michigan-based firm that focuses on managed services for small to medium-sized organizations. Willson discussed his list of essentials when it comes to VM deployment, management, and security.

Prepare for time drift on virtual servers. “Guest OSs should, and need to, be synced with the host OS…Failure to do so will lead to time drift on virtual servers — resulting in significant slowdowns and errors in an Active Directory environment,” Willson said.

Despite the impact this could have on productivity and daily operations, he added, very few IT managers or security officers think to do this until after they’ve experienced time drift, and unfortunately that usually happens while recovering from a security incident. Time drift undermines the accuracy of logs, making forensic investigations next to impossible.
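
Willson’s advice is straightforward to verify. Below is a minimal sketch, assuming the third-party ntplib package (pip install ntplib) and a reachable NTP server; run on a guest OS, it reports how far the clock has drifted. The 300-second threshold mirrors Kerberos’ default five-minute tolerance in an Active Directory environment.

```python
# Minimal drift check for a guest OS, assuming the third-party ntplib
# package (pip install ntplib) and network access to an NTP server.
import ntplib

MAX_DRIFT_SECONDS = 300  # Kerberos' default tolerance in AD is 5 minutes

def check_drift(server="pool.ntp.org"):
    response = ntplib.NTPClient().request(server, version=3)
    # response.offset is the estimated local-clock error in seconds
    if abs(response.offset) > MAX_DRIFT_SECONDS:
        print(f"WARNING: clock off by {response.offset:.1f}s; "
              "expect Active Directory and log-timestamp errors")
    else:
        print(f"Clock offset {response.offset:.1f}s is within tolerance")

if __name__ == "__main__":
    check_drift()
```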

Establish policies for managing snapshots and images. Virtualization allows for quick copies of the guest OS, but policies need to be put in place to dictate who can make these copies, whether copies will (or can) be archived, and if so, where (and under what security settings) these images will be stored.

“Many times when companies move to virtual servers they don’t take the time to upgrade their security policy for specific items like this, simply because of the time it requires,” Willson said.
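
A snapshot policy is easier to enforce when it can be audited. The following is a hypothetical audit sketch using the pyVmomi bindings for the vSphere API (pip install pyvmomi); the host, credentials, and 14-day retention limit are placeholders, and SSL certificate handling may need adapting to your environment.

```python
# Hypothetical audit sketch using the pyVmomi vSphere API bindings:
# list snapshots older than a retention limit. Host, credentials, and
# the 14-day policy are placeholders.
from datetime import datetime, timedelta, timezone
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

RETENTION = timedelta(days=14)

def walk(tree, found):
    # Snapshot trees nest, so collect every node recursively.
    for snap in tree:
        found.append(snap)
        walk(snap.childSnapshotList, found)

si = SmartConnect(host="vcenter.example.com", user="audit", pwd="secret")
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    now = datetime.now(timezone.utc)
    for vm in view.view:
        if vm.snapshot is None:
            continue
        snaps = []
        walk(vm.snapshot.rootSnapshotList, snaps)
        for snap in snaps:
            age = now - snap.createTime
            if age > RETENTION:
                print(f"{vm.name}: snapshot '{snap.name}' is {age.days} days old")
finally:
    Disconnect(si)
```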

Create and maintain disaster recovery images. “Spinning up an unpatched, legacy image in the case of disaster recovery can cause more issues than the original problem,” Willson explained.

To avoid this, administrators should develop a process for maintaining a patched, “known good” image.

Update disaster recovery policy and procedures to include virtual drives. “Very few organizations take the time to upgrade their various IT policies to accommodate virtualization. This is simply because of the amount of time it takes and the little value they see it bringing to the organization,” Willson said.

But failing to update IT policies to include virtualization, “will only result in the firm incurring more costs and damages whenever a breach or disaster occurs,” Willson added.

Maintain and monitor the hypervisor. “All software platforms will offer updates to the hypervisor software, making it necessary that a strategy for this be put in place. If the platform doesn’t provide monitoring features for the hypervisor, a third party application should be used,” Willson said.

Consider disabling the clipboard between guest OSs. By default, most VM platforms have copy and paste between guest OSs turned on after initial deployment. In some cases, this is a required feature for specific applications.

“However, it also poses a security threat, providing a direct path of access and the ability to unknowingly [move] malware from one guest OS to another,” Willson said.

Thus, if copy and paste isn’t essential, it should be disabled as a rule.
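
On VMware platforms, this maps to a documented pair of .vmx hardening settings, isolation.tools.copy.disable and isolation.tools.paste.disable. Here is a minimal sketch that enforces them across a directory of VM configuration files; the datastore path is a placeholder, and VMs should be powered off (or reconfigured through the platform’s API) before their .vmx files are edited in place.

```python
# Sketch: enforce VMware's documented clipboard-isolation settings across
# all .vmx files under a datastore path (placeholder). Power VMs off, or
# use the platform's API, before editing .vmx files in place.
from pathlib import Path

SETTINGS = {
    "isolation.tools.copy.disable": "TRUE",
    "isolation.tools.paste.disable": "TRUE",
}

for vmx in Path("/vmfs/volumes/datastore1").rglob("*.vmx"):
    lines = vmx.read_text().splitlines()
    seen = set()
    for i, line in enumerate(lines):
        key = line.split("=", 1)[0].strip()
        if key in SETTINGS:
            lines[i] = f'{key} = "{SETTINGS[key]}"'  # normalize existing entry
            seen.add(key)
    for key in SETTINGS.keys() - seen:                # append missing entries
        lines.append(f'{key} = "{SETTINGS[key]}"')
    vmx.write_text("\n".join(lines) + "\n")
    print(f"hardened {vmx}")
```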

Limit unused virtual hardware. “Most IT professionals understand the need to manage unused hardware (drives, ports, network adapters), as these can be considered soft targets from a security standpoint,” Willson said.

However, he adds, “with virtualization technology we now have to take inventory of virtual hardware (CD drives, virtual NICs, virtual ports). Many of these are created by default upon creating new guest OSs under the guise of being a convenience, but these can offer the same danger or point of entry as unused physical hardware can.”

Again, as with copy and paste, if the virtual hardware isn’t essential, it should be disabled.
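
Taking that inventory can be scripted. Below is a sketch along the same lines as the clipboard example, scanning .vmx files for device types that are commonly left attached but rarely needed; the prefix list and path are illustrative assumptions, not an exhaustive catalog.

```python
# Sketch: flag commonly unneeded virtual devices in VMware .vmx files.
# The device prefixes and datastore path are illustrative assumptions;
# extend the list to match your own hardening baseline.
from pathlib import Path

SUSPECT_PREFIXES = ("floppy", "serial", "parallel", "sound", "usb")

for vmx in Path("/vmfs/volumes/datastore1").rglob("*.vmx"):
    for line in vmx.read_text().splitlines():
        key, _, value = line.partition("=")
        key, value = key.strip(), value.strip().strip('"')
        # e.g. floppy0.present = "TRUE" means a virtual floppy is attached
        if (key.endswith(".present") and value.upper() == "TRUE"
                and key.startswith(SUSPECT_PREFIXES)):
            print(f"{vmx}: {key} -- consider removing this device")
```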

Source:  csoonline.com

VMware identifies vulnerabilities for ESX, vCenter, vSphere, issues patches

Friday, October 18th, 2013

VMware today said that its popular virtualization and cloud management products contain security vulnerabilities that could lead to denial of service for customers using its ESX and ESXi hypervisors and management platforms, including vCenter Server Appliance and vSphere Update Manager.

To exploit the vulnerability, an attacker would have to intercept and modify management traffic. A successful attacker would compromise the hostd-VMDBs, leading to a denial of service for parts of the program.

VMware released a series of patches that resolve the issue. More information about the vulnerability, along with links to the patches, is available in VMware’s security advisory.

The vulnerability exists in vCenter Server 5.0 prior to Update 3; in ESX versions 4.0, 4.1, and 5.0; and in ESXi versions 4.0 and 4.1, unless they have the latest patches.

Users can also reduce the risk of exploitation by running vSphere components on an isolated management network, ensuring that management traffic cannot be intercepted.

Source:  networkworld.com

Stop securing your virtualized servers like another laptop or PC

Tuesday, September 24th, 2013

Many IT managers don’t take the additional steps to secure their virtual servers, leaving them protected by nothing more than antivirus software and data loss prevention packages. Here are the most common mistakes and how to prevent them.

We asked two security pros a couple of questions specific to ensuring security on virtual servers. Here’s what they said:

TechRepublic: What mistakes do IT managers make most often when securing their virtual servers?

Answered by Min Wang, CEO and founder of AIP US

Wang: Most virtual environments have the same security requirements as the physical world with additions defined by the use of virtual networking and shared storage. However, many IT managers don’t take the additional steps to secure their virtual servers, but rather leave them vulnerable to attacks with only antivirus software and data loss prevention packages.

Here are some more specific mistakes IT managers make regularly:

1.  IT managers rely too much on the hypervisor layer to provide security. Instead, they should take a 360-degree approach rather than looking at a single section or layer.

2.  When transitioning to virtual servers, they too often misconfigure the servers and the underlying network. These misconfigurations compound as new servers are created and new apps are added.

3.  Virtualization adds complexity, and many IT managers don’t fully understand how the components work together or how to properly secure the entire system, not just parts of it.

TechRepublic: Can you provide some tips on what IT managers can do moving forward to ensure their servers remain hack-free?

Answered by Praveen Bahethi, CTO of Shilpa Systems

Bahethi:

1.  Logins to the Xen, Hyper-V, KVM, and ESXi servers, as well as the VMs created within them, should be mapped to a central directory such as Active Directory to ensure that all logins are logged. These login logs should be reviewed for failures on a regular basis, as the organization’s security policy defines. By using a centralized login service, the administrative staff can quickly and easily remove privileges across all VMs and servers by disabling the central account. Password policies applied on the centralized login servers can then be enforced across the virtualized environment.

2.  The virtual host servers should have a separate physical network interface controller (NIC) for console and management operations, tied into a separate out-of-band network or maintained via VLAN separation. Physical access to the servers and their storage should be controlled and monitored. All patches and updates being applied should be verified to come from the software vendors and vetted with checksums.

3.  Within the virtualized environment, steps should be taken to ensure that VMs can only see traffic destined for them, by mapping them to the proper VLAN and vSwitch. VMs should not be able to modify their MAC addresses or put their virtual NICs into promiscuous mode to snoop the wire (see the sketch after this list). VMs should not be able to perform copy/paste operations via the console, no extraneous hardware should be associated with them, and VM-to-VM communication outside of normal network operations should be disabled.

4.  The VMs must have proper firewall, anti-malware, anti-virus, and URL-filtering protections in place to mitigate threats in outside data. Host-based security software with plug-ins that enable features such as firewalls and intrusion prevention should be added. As with any proactive security measure, log review and policies for handling events need to be clearly defined.

5.  The shared storage should require unique login credentials for each virtual server, and the storage network should be segregated from normal application data and out-of-band console traffic. This segregation can be done using VLANs or completely separate physical network connections.

6.  The upstream network should allow only the traffic required for the hosts and their VMs to pass its switch ports, dropping all other extraneous traffic. Layer 2 and Layer 3 protections should be in place against DHCP, Spanning Tree, and routing-protocol attacks. Some vendors provide additional features in their third-party vSwitches that can also be used to mitigate attacks against a VM server.
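
Bahethi’s third point maps to concrete security policies on an ESXi standard vSwitch. Here is a hedged sketch, meant to run in the ESXi shell (or be adapted to run over SSH), that shells out to esxcli to reject promiscuous mode, MAC-address changes, and forged transmits; the vSwitch name is a placeholder, and flag names should be verified against your ESXi build.

```python
# Sketch: reject promiscuous mode, MAC changes, and forged transmits on a
# standard vSwitch by shelling out to esxcli from the ESXi shell. The
# vSwitch name is a placeholder; verify flag names against your ESXi build.
import subprocess

def harden_vswitch(vswitch="vSwitch0"):
    subprocess.run([
        "esxcli", "network", "vswitch", "standard", "policy", "security", "set",
        "--vswitch-name", vswitch,
        "--allow-promiscuous", "false",       # guests cannot snoop the wire
        "--allow-mac-change", "false",        # guests cannot change their MAC
        "--allow-forged-transmits", "false",  # drop frames with spoofed source MACs
    ], check=True)

if __name__ == "__main__":
    harden_vswitch()
```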

Source:  techrepublic.com

Cisco responds to VMware’s NSX launch, allegiances

Thursday, August 29th, 2013

Says a software-only approach to network virtualization spells trouble for users

Cisco has responded to the groundswell of momentum and support around the introduction of VMware’s NSX network virtualization platform this week with a laundry list of the limitations of software-only based network virtualization. At the same time, Cisco said it intends to collaborate further with VMware, specifically around private cloud and desktop virtualization, even as its partner lines up a roster of allies among Cisco’s fiercest rivals.

Cisco’s response was delivered in a blog post from Chief Technology and Strategy Officer Padmasree Warrior.

In a nutshell, Warrior says software-only based network virtualization will leave customers with more headaches and hardships than a solution that tightly melds software with hardware and ASICs – the type of network virtualization Cisco proposes:

A software-only approach to network virtualization places significant constraints on customers.  It doesn’t scale, and it fails to provide full real-time visibility of both physical and virtual infrastructure.  In addition this approach does not provide key capabilities such as multi-hypervisor support, integrated security, systems point-of-view or end-to-end telemetry for application placement and troubleshooting.  This loosely-coupled approach forces the user to tie multiple 3rd party components together adding cost and complexity both in day-to-day operations as well as throughout the network lifecycle.  Users are forced to address multiple management points, and maintain version control for each of the independent components.  Software network virtualization treats physical and virtual infrastructure as separate entities, and denies customers a common policy framework and common operational model for management, orchestration and monitoring.

Warrior then went on to tout the benefits of the Application Centric Infrastructure (ACI), a concept introduced by Cisco spin-in Insieme Networks at the Cisco Live conference two months ago. ACI combines hardware, software and ASICs into an integrated architecture that delivers centralized policy automation, visibility and management of both physical and virtual networks, she claims.

Warrior also shoots down the comparison between network virtualization and server virtualization, which is the foundation of VMware’s existence and success. Servers were underutilized, which drove the need for the flexibility and resource efficiency promised by server virtualization, she writes.

Not so with networks. Networks do not have an underutilization problem, she claims:

In fact, server virtualization is pushing the limits of today’s network utilization and therefore driving demand for higher port counts, application and policy-driven automation, and unified management of physical, virtual and cloud infrastructures in a single system.

Warrior ends by promising some “exciting news” around ACI in the coming months. Perhaps at Interop NYC in late September/early October? Cisco CEO John Chambers was just added this week to the keynote lineup at the conference. He usually appears at these venues when Cisco makes a significant announcement that same week…

Source:  networkworld.com

Amazon and Microsoft, beware—VMware cloud is more ambitious than we thought

Tuesday, August 27th, 2013

Desktops, disaster recovery, IaaS, and PaaS make VMware’s cloud compelling.

VMware today announced that vCloud Hybrid Service, its first public infrastructure-as-a-service (IaaS) cloud, will become generally available in September. That’s no surprise, as we already knew it was slated to go live this quarter.

What is surprising is just how extensive the cloud will be. When first announced, vCloud Hybrid Service was described as infrastructure-as-a-service that integrates directly with VMware environments. Customers running lots of applications in-house on VMware infrastructure can use the cloud to expand their capacity without buying new hardware and manage both their on-premises and off-premises deployments as one.

That’s still the core of vCloud Hybrid Service—but in addition to the more traditional infrastructure-as-a-service, VMware will also have a desktops-as-a-service offering, letting businesses deploy virtual desktops to employees without needing any new hardware in their own data centers. There will also be disaster recovery-as-a-service, letting customers automatically replicate applications and data to vCloud Hybrid Service instead of their own data centers. Finally, support for the open source distribution of Cloud Foundry and Pivotal’s deployment of Cloud Foundry will let customers run a platform-as-a-service (PaaS) in vCloud Hybrid Service. Unlike IaaS, PaaS tends to be optimized for building and hosting applications without having to manage operating systems and virtual computing infrastructure.

While the core IaaS service and connections to on-premises deployments will be generally available in September, the other services aren’t quite ready. Both disaster recovery and desktops-as-a-service will enter beta in the fourth quarter of this year. Support for Cloud Foundry will also be available in the fourth quarter. Pricing information for vCloud Hybrid Service is available on VMware’s site. More details on how it works are available in our previous coverage.

Competitive against multiple clouds

All of this gives VMware a compelling alternative to Amazon and Microsoft. Amazon is still the clear leader in infrastructure-as-a-service and likely will be for the foreseeable future. However, VMware’s IaaS will be useful to customers who rely heavily on VMware internally and want a consistent management environment on-premises and in the cloud.

VMware and Microsoft have similar approaches, offering a virtualization platform as well as a public cloud (Windows Azure in Microsoft’s case) that integrates with customers’ on-premises deployments. By wrapping Cloud Foundry into vCloud Hybrid Service, VMware combines IaaS and PaaS into a single cloud service just as Microsoft does.

VMware is going beyond Microsoft by also offering desktops-as-a-service. We don’t have a ton of detail here, but it will be an extension of VMware’s pre-existing virtual desktop products that let customers host desktop images in their data centers and give employees remote access to them. With “VMware Horizon View Desktop-as-a-Service,” customers will be able to deploy virtual desktop infrastructure either in-house or on the VMware cloud and manage it all together. VMware’s hybrid cloud head honcho, Bill Fathers, said much of the work of adding and configuring new users will be taken care of automatically.

The disaster recovery-as-a-service builds on VMware’s Site Recovery Manager, letting customers see the public cloud as a recovery destination along with their own data centers.

“The disaster recovery use case is something we want to really dominate as a market opportunity,” Fathers said in a press conference today. At first, it will focus on using “existing replication capabilities to replicate into the vCloud Hybrid Service. Going forward, VMware will try to provide increasing levels of automation and more flexibility in configuring different disaster recovery destinations,” he said.

vCloud Hybrid Service will be hosted in VMware data centers in Las Vegas, NV, Sterling, VA, Santa Clara, CA, and Dallas, TX, as well as data centers operated by Savvis in New York and Chicago. Non-US data centers are expected to join the fun next year.

When asked if VMware will support movement of applications between vCloud Hybrid Service and other clouds, like Amazon’s, Fathers said the core focus is ensuring compatibility between customers’ existing VMware deployments and the VMware cloud. However, he said VMware is working with partners who “specialize in that level of abstraction” to allow portability of applications from VMware’s cloud to others and vice versa. Naturally, VMware would really prefer it if you just use VMware software and nothing else.

Source:  arstechnica.com

‘Containerization’ is no BYOD panacea: Gartner

Tuesday, June 25th, 2013

Gartner notes it’s an important IT application development question

Companies adopting BYOD policies are struggling with the thorny problem of how they might separate corporate and personal data on an employee’s device.

One technology approach to this challenge involves separating the corporate mobile apps, and the data associated with them, into “containers” on the mobile device, creating a clear division as to what is subject to corporate security policies such as wiping. But one Gartner analyst delving into the “containerization” subject recently noted that each of the current technology choices has advantages and disadvantages.

“BYOD means my phone, my tablet, my pictures, my music — it’s all about the user,” said analyst Eric Maiwald at the recent Gartner Security and Risk Management Summit.

But if IT security managers want to place controls on the user device to separate out and manage corporate e-mail, applications and data, it’s possible to enforce security measures such as authentication, encryption, data-leakage protection, cut-and-paste restrictions and selective content wiping through various types of container technologies.

However, the ability of containers to detect “jailbreaking” of Apple iOS devices, which strips out Apple’s security model completely, remains “nearly zero,” Maiwald added. “If you have a rooted device, a container will not protect you.”

There are many choices for container technology. The secure “container” can be embedded in the operating system itself, as in Samsung’s Knox smartphones or BlackBerry 10, Maiwald noted. Mobile-device management (MDM) vendors such as AirWatch, MobileIron and WatchDocs have also taken a stab at containers, though Gartner sees some of what the MDM vendors are doing as more akin to “tags” that can, for example, mark a mailbox or message as corporate.

Companies including Enterproid, Excitor, Fixmo, Good Technology, LRW Technologies, NitroDesk, VMware and Citrix also have containerization approaches that Gartner sees as possible ways to containerize corporate apps.

But selecting a container vendor is not necessarily simple, because you are making an important IT decision about enterprise app development, says Maiwald. “Container vendors provide mechanisms for linking a customized app to the container,” he said. It typically means choosing an API as part of your corporate mobile-device strategy.

For example, Citrix’s containerization software is called XenMobile, and Kurt Roemer, Citrix chief security strategist, says that to make use of it, apps have to be developed with the Citrix API and SDK. Several app developers already do so through what Citrix calls its Worx-enabled program for XenMobile, including Adobe, Cisco, Evernote, Egnyte and Concur. The Citrix containerization approach, which includes an app-specific VPN, will let IT managers handle many kinds of tasks, such as automating SharePoint links to mobile devices for specific apps or easily controlling the provisioning of corporate apps on BYOD mobile devices, Roemer says.

Source:  networkworld.com

OpenDaylight: A big step toward the software-defined data center

Monday, April 8th, 2013

A who’s-who of industry players, including Cisco, launches open source project that could make SDN as pervasive as server virtualization

Manual hardware configuration is the scourge of the modern data center. Server virtualization and pooled storage have gone a long way toward making infrastructure configurable on the fly via software, but the third leg of the stool, networking, has lagged behind with fragmented technology and standards.

The OpenDaylight Project — a new open source project hosted by the Linux Foundation featuring every major networking player — promises to move the ball forward for SDN (software-defined networking). Rather than hammer out new standards, the project aims to produce an extensible, open source, virtual networking platform atop such existing standards as OpenFlow, which provides a universal interface through which either virtual or physical switches can be controlled via software.

The approach of OpenDaylight is similar to that of Hadoop or OpenStack, where industry players come together to develop core open source bits collaboratively, around which participants can add unique value. That roughly describes the Linux model as well, which may help explain why the Linux Foundation is hosting OpenDaylight.

“The Linux Foundation was contacted based on our experience and understanding of how to structure and set up an open community that can foster innovation,” said Jim Zemlin, executive director of the Linux Foundation, in an embargoed conference call last week. He added that OpenDaylight, which will be written in Java, will be available under the Eclipse Public License.

Collaboration or controversy?

It must be said that the politics of the OpenDaylight Project are mind-boggling. Cisco is on board despite the fact that SDN is widely seen as a threat to the company’s dominant position — because, when the network is virtualized, switch hardware becomes more commoditized. A cynic might be forgiven for wondering whether Cisco is there to rein things in rather than accelerate development.

Along with Cisco, the cavalcade of coopetition includes Arista Networks, Big Switch Networks, Brocade, Citrix, Dell, Ericsson, Fujitsu, HP, IBM, Intel, Juniper Networks, Microsoft, NEC, Nuage Networks, PLUMgrid, Red Hat, and VMware. BigSwitch, perhaps the highest-profile SDN upstart, is planning to donate a big chunk of its Open SDN Suite, including controller code and distributed virtual routing service applications. Although VMware has signed on, it’s unclear how the proprietary technology developed by Nicira, the SDN startup acquired for $1.2 billion by VMware last summer, will fit in.

Another question is how OpenDaylight will affect other projects. Some have voiced frustration over the Open Networking Foundation’s stewardship of OpenFlow, so OpenDaylight could be a way to work around that organization. Also, OSI president and InfoWorld contributor Simon Phipps wonders why Project Crossbow, an open source network virtualization technology built into Solaris, appears to have no role in OpenDaylight. You can be sure many more questions will emerge in the coming days and weeks.

The architecture of OpenDaylight

Zemlin described OpenDaylight as an extensible collection of technologies. “This project will focus on software and will deliver several components: an SDN controller, protocol plug-ins, applications, virtual overlay network, and the architectural and the programmatic interfaces that tie those things together.”

This list is consistent with the basic premise of SDN, where the control and data planes are separated, with a central controller orchestrating the data flows of many physical or virtual switches (the latter running on generic server hardware). OpenFlow currently provides the only standardized interface supported by many switch vendors, but OpenDaylight also plans to support other standards as well as proprietary interfaces as the project evolves.

More exciting are the “northbound” REST APIs to the controller, atop which developers will be able to build new types of applications that run on the network itself for specialized security, network management, and so on. In support of this, Cisco is contributing an application framework, while Citrix is throwing in “an application controller that integrates Layer 4-7 network services for enabling application awareness and comprehensive control.”
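
To make the idea concrete, here is an illustrative sketch of what a northbound REST interaction with an SDN controller might look like. The URL paths, credentials, and payload are hypothetical placeholders, not a documented OpenDaylight API (the project’s interfaces were not yet published when this was written); it assumes the python-requests library.

```python
# Illustrative only: a "northbound" REST interaction with an SDN controller.
# The endpoints, credentials, and payload are hypothetical placeholders,
# not a documented OpenDaylight API. Assumes the python-requests library.
import requests

CONTROLLER = "http://controller.example.com:8080"
AUTH = ("admin", "admin")  # placeholder credentials

# Ask the controller for its global view of the network.
topology = requests.get(f"{CONTROLLER}/nb/v1/topology", auth=AUTH)
print("topology:", topology.json())

# Push a simple security policy; the controller would compile this into
# OpenFlow rules on the affected switches ("southbound").
policy = {"match": {"src-ip": "10.0.0.66"}, "action": "drop", "priority": 500}
resp = requests.post(f"{CONTROLLER}/nb/v1/flows", json=policy, auth=AUTH)
resp.raise_for_status()
```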

Although the embargoed OpenDaylight announcement was somewhat short on detail, a couple of quick conclusions can be drawn. One is that — on the model of Hadoop, Linux, and OpenStack — the future is now being hashed out in open source bits rather than standards committees. The rise in the importance of open source in the industry is simply stunning, with OpenDaylight serving as the latest confirmation.

More obviously, the amazing breadth of support for OpenDaylight signals new momentum for SDN. To carve up data center resources with the flexibility necessary for a cloud-enabled world where many tenants must coexist, the network needs to have the same software manageability as the rest of the infrastructure. OpenDaylight leaves no doubt the industry recognizes that need.

If the OpenDaylight Project can avoid getting bogged down in vendor politics, it could complete the last mile to the software-defined data center in an industry-standard way that lowers costs for everyone. It could do for networking what OpenStack is doing for cloud computing.

Source:  infoworld.com

PostgreSQL updates address high-risk vulnerability, other issues

Friday, April 5th, 2013

VMware also releases fixes for its PostgreSQL-based vFabric Postgres database product

The PostgreSQL developers released updates for all major branches of the popular open-source database system on Thursday in order to address several vulnerabilities, including a high-risk one that could allow attackers to crash the server, modify configuration variables as superuser or execute arbitrary code if certain conditions are met.

“This update fixes a high-exposure security vulnerability in versions 9.0 and later,” the PostgreSQL Global Development Group said in the release announcement. “All users of the affected versions are strongly urged to apply the update immediately.”

The high-risk vulnerability, identified as CVE-2013-1899, can be exploited by sending maliciously crafted connection requests to a targeted PostgreSQL server that include command-line switches specifying a database name beginning with the “-” character. Depending on the server’s configuration, successful exploit can result in persistent denial of service, privilege escalation or arbitrary code execution.

The vulnerability can be exploited by a remote unauthenticated attacker to append error messages to files located in the PostgreSQL data directory. “Files corrupted in this way may cause the database server to crash, and to refuse to restart,” the PostgreSQL developers said in an advisory accompanying the new releases. “The database server can be fixed either by editing the files and removing the garbage text, or restoring from backup.”

Furthermore, if the attacker has access to a database user whose name is identical to a database name, he can leverage the vulnerability to temporarily modify a configuration variable with superuser privileges. If this condition is met and the attacker can also write files somewhere on the system — for example in the /tmp directory — he can exploit the vulnerability to load and execute arbitrary C code, the PostgreSQL developers said.

Systems that don’t restrict access to the PostgreSQL network port, which is common for PostgreSQL database servers running in public clouds, are especially vulnerable to these attacks.

The PostgreSQL developers advise server administrators to update their PostgreSQL installations to the newly released 9.0.13, 9.1.9 or 9.2.4 versions, and to block access to their database servers from untrusted networks. The 8.4 branch of PostgreSQL is not affected by CVE-2013-1899, but PostgreSQL 8.4.17 was also released to fix other issues.
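
A quick way to act on that advice is to audit servers against the patched version numbers. Below is a minimal sketch assuming the psycopg2 driver and placeholder connection details; it compares the reported server version against the minimum fixed release for its branch.

```python
# Sketch: check a server against the patched releases named above.
# Assumes the psycopg2 driver and placeholder connection details.
import psycopg2

PATCHED = {(9, 0): (9, 0, 13), (9, 1): (9, 1, 9), (9, 2): (9, 2, 4)}

conn = psycopg2.connect(host="db.example.com", dbname="postgres",
                        user="audit", password="secret")
with conn, conn.cursor() as cur:
    cur.execute("SHOW server_version")
    version = tuple(int(p) for p in cur.fetchone()[0].split(".")[:3])
    minimum = PATCHED.get(version[:2])
    if minimum and version < minimum:
        print(f"VULNERABLE to CVE-2013-1899: {version}; update to {minimum}+")
    else:
        print(f"Branch patched or not affected: {version}")
conn.close()
```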

The new releases also address less serious security issues, including CVE-2013-1900, which could allow a database user to guess random numbers generated by contrib/pgcrypto functions, and CVE-2013-1901, which could allow an unprivileged user to run commands that could interfere with in-progress backups.

Two other security issues with the PostgreSQL graphical installers for Linux and Mac OS X have also been addressed. They allow the insecure passing of superuser passwords to a script (CVE-2013-1903) and the use of predictable filenames in /tmp (CVE-2013-1902), the PostgreSQL developers said.

As a result of the new PostgreSQL security updates, VMware also released fixes for its vFabric Postgres relational database product that’s optimized for virtual environments. The VMware updates are vFabric Postgres 9.2.4 and 9.1.9.

Source: infoworld.com