Archive for August, 2013

Cisco cracks down on security vulnerability

Friday, August 30th, 2013

The vulnerability could allow remote, unauthenticated attackers to take control of the underlying operating system, the company said

Cisco Systems released security patches for Secure Access Control Server (Secure ACS) for Windows to address a critical vulnerability that could allow unauthenticated attackers to remotely execute arbitrary commands and take control of the underlying operating system.

Cisco Secure ACS is an application that allows companies to centrally manage access to network resources for various types of devices and users. According to Cisco’s documentation, it enforces access control policies for VPN, wireless and other network users and it authenticates administrators, authorizes commands, and provides an audit trail.

Cisco Secure ACS supports two network access control protocols: Remote Authentication Dial-In User Service (RADIUS) and Terminal Access Controller Access-Control System Plus (TACACS+).

The newly patched vulnerability is identified as CVE-2013-3466 and affects Cisco Secure ACS for Windows versions 4.0 through 4.2.1.15 when configured as a RADIUS server with Extensible Authentication Protocol-Flexible Authentication via Secure Tunneling (EAP-FAST) authentication.

“The vulnerability is due to improper parsing of user identities used for EAP-FAST authentication,” Cisco said Wednesday in a security advisory. “An attacker could exploit this vulnerability by sending crafted EAP-FAST packets to an affected device.”

“Successful exploitation of the vulnerability may allow an unauthenticated, remote attacker to execute arbitrary commands and take full control of the underlying operating system that hosts the Cisco Secure ACS application in the context of the System user for Cisco Secure ACS running on Microsoft Windows,” the company said.

The vulnerability received the maximum severity score, 10.0, in the Common Vulnerability Scoring System (CVSS), which indicates that it is highly critical. Cisco Secure ACS for Windows version 4.2.1.15.11 was released to address the flaw.
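
For illustration, here is a minimal Python sketch (a hypothetical helper, not part of Cisco’s advisory) of how an administrator might compare a dotted version string against the affected range and the fixed release:

# Hypothetical helper (not from the Cisco advisory): compare a dotted version
# string against the range affected by CVE-2013-3466.
def is_affected(version: str) -> bool:
    parts = tuple(int(p) for p in version.split("."))
    first_fixed = (4, 2, 1, 15, 11)   # 4.2.1.15.11 is the release that fixes the flaw
    return (4, 0) <= parts < first_fixed

print(is_affected("4.2.1.15"))     # True  -> affected when running as a RADIUS/EAP-FAST server
print(is_affected("4.2.1.15.11"))  # False -> patched release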

There are no known workarounds, so upgrading to the patched version of the application is recommended.

Source:  networkworld.com

Three types of DNS attacks and how to deal with them

Friday, August 30th, 2013

DNS servers work by translating domain names into IP addresses. This is why you can enter CIO.com into the browser to visit our sister site, instead of trying to remember 65.221.110.97.
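
As a minimal illustration in Python (standard library only), that translation is a single resolver call:

import socket

# The resolver turns the human-readable name into the IP address the browser connects to.
print(socket.gethostbyname("cio.com"))   # e.g. 65.221.110.97 at the time this article was written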

When DNS is compromised, several things can happen. However, compromised DNS servers are often used by attackers in one of two ways. The first thing an attacker can do is redirect all incoming traffic to a server of their choosing. This enables them to launch additional attacks, or collect traffic logs that contain sensitive information.

The second thing an attacker can do is capture all in-bound email. More importantly, this option also allows the attacker to send email from the victim organization’s domain, cashing in on its positive reputation. Making things worse, attackers could also opt for a third option: doing both of those things.

“In the first scenario this can be used to attack visitors and capture login credentials and account information. The common solution of mandating SSL works until the attacker takes advantage of [the second option] to register a new certificate in your name. Once they have a valid SSL cert and control of your DNS (one and the same, basically) — they have effectively become you without needing access to any of your servers,” Rapid7’s Chief Research Officer, HD Moore, told CSO in an email.

In a blog post, Cory von Wallenstein, the CTO of Dyn Inc., a firm that specializes in traffic management and DNS, explained the three common types of DNS attacks and how to address them.

The first type of DNS attack is called a cache poisoning attack. This can happen after an attacker is successful in injecting malicious DNS data into the recursive DNS servers that are operated by many ISPs. These types of DNS servers are the closest to users from a network topology perspective, von Wallenstein wrote, so the damage is localized to specific users connecting to those servers.

“There are effective workarounds to make this impractical in the wild, and good standards like DNSSEC that provide additional protection from this type of attack,” he added.

If DNSSEC is impractical or impossible, another workaround is to restrict recursion on the name servers that need to be protected. Recursion determines whether a server will only hand out information it has stored in its cache, or whether it is willing to go out on the Internet and talk to other servers to find the best answer.

“Many cache poisoning attacks leverage the recursive feature in order to poison the system. So by limiting recursion to only your internal systems, you limit your exposure. While this setting will not resolve all possible cache poisoning attack vectors, it will help you mitigate a good portion of them,” Chris Brenton, Dyn Inc.’s Director of Security, told CSO in an email.
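
A rough way to audit this, sketched below under the assumption that the third-party dnspython package is installed and that 192.0.2.53 stands in for the server being checked, is to send a single recursive query and inspect the RA (recursion available) flag in the reply:

import dns.flags
import dns.message
import dns.query

NAMESERVER = "192.0.2.53"   # placeholder address of the name server being audited

query = dns.message.make_query("example.com", "A")   # RD (recursion desired) is set by default
response = dns.query.udp(query, NAMESERVER, timeout=3)

if response.flags & dns.flags.RA:
    print("Server advertises recursion; consider restricting it to internal clients")
else:
    print("Recursion is not offered to this client")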

The second type of DNS attack happens when attackers take over one or more authoritative DNS servers for a domain. In his post, von Wallenstein noted that authoritative DNS hosting is the type of service that his firm provides to Twitter. However, Dyn Inc. wasn’t targeted by the SEA (Syrian Electronic Army), so its services to Twitter were not impacted by Tuesday’s incident.

If an attacker were to compromise an authoritative DNS, von Wallenstein explains, the effect would be global. While that wasn’t what the SEA did during their most recent attack, it’s been done before.

In 2009, Twitter suffered a separate attack by the Iranian Cyber Army. The group altered DNS records and redirected traffic to propaganda hosted on servers they controlled. The ability to alter DNS settings came after the Iranian Cyber Army compromised a Twitter staffer’s email account, and then used that account to authorize DNS changes. During that incident Dyn Inc. was the registrar contacted in order to process the change request.

Defense against these types of attacks often includes strong passwords and IP-based ACLs (access control lists). Further, a solid training program that deals with social engineering will also be effective.

“I think the first step is recognizing the importance of authoritative DNS in our Internet connectivity trust model,” Brenton said.

All the time and resources in the world can be placed into securing a webserver, but if an attacker can attack the authoritative server and point the DNS records at a different IP address, “to the rest of the world it’s still going to look like you’ve been owned,” Brenton added.

“In fact it’s worse because that one attack will also permit them to redirect your email or any other service you are offering. So hosting your authoritative server with a trusted authority is the simplest way to resolve this problem.”

The third type of DNS attack is also the most problematic to undo. It happens when an attacker compromises the registration of the domain itself, and then uses that access to alter the DNS servers assigned to it.

This is also what the SEA did when they went after Twitter and the New York Times. They gained access to MelbourneIT, the registrar responsible for the domains targeted, and changed the authoritative DNS servers to their own.

“At this time, those authoritative nameservers answered all queries for the affected domains. What makes this attack so dangerous is what’s called the TTL (time to live). Changes of this nature are globally cached on recursive DNS servers for typically 86,400 seconds, or a full day. Unless operators are able to purge caches, it can take an entire day (sometimes longer) for the effects to be reversed,” von Wallenstein wrote.
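
A quick way to see that caching window in practice, again assuming the third-party dnspython package (version 2.x), is to ask a recursive resolver how long it will keep serving the NS records it currently has cached:

import dns.resolver

answer = dns.resolver.resolve("example.com", "NS")
ttl = answer.rrset.ttl   # seconds until the cached delegation expires
print(f"Cached NS records expire in {ttl} seconds ({ttl / 86400:.2f} days)")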

Again, Brenton’s advice for authoritative DNS will apply here as well. It’s also possible to host authoritative servers within the organization, allowing for complete control.

“If you are going to run your own authoritative servers, make sure you follow the best security practices that have been identified by SANS and the Center for Internet Security,” Brenton advised.

Source:  csoonline.com

NIST releases draft cybersecurity framework to more public scrutiny

Friday, August 30th, 2013

Following through on an order earlier this year from President Obama, the National Institute of Standards and Technology (NIST) is rapidly developing a set of guidelines and best practices to help organizations better secure their IT systems.

The agency released a draft of its preliminary cybersecurity framework and is seeking feedback from industry.

The agency is scheduled to release a full preliminary draft in October, for public review. It will then issue the final 1.0 version of the framework in February and continue to update the framework thereafter.

When finished, the framework will provide guidance for organizations on how to manage cybersecurity risk, “in a manner similar to financial, safety, and operational risk,” the document states.

In February the White House issued an executive order tasking NIST to develop a cybersecurity framework, one based on existing standards, practices and procedures that have proven to be effective.

In July, NIST issued an outline of the framework and held a workshop in San Diego to fill in some details. This draft incorporates some of that work, and was released to gather more feedback ahead of the next workshop, to be held in Dallas starting on Sept. 11.

“The Framework complements, and does not replace, an organization’s existing business or cybersecurity risk management process and cybersecurity program. Rather, the organization can use its current processes and leverage the framework to identify opportunities to improve an organization’s cybersecurity risk management,” the draft read.

When finished, the framework will consist of three parts. One component, called the core functions, will be a compilation of commonly practiced activities and references. The second component, the implementation tiers, provides guidance on how to manage cybersecurity risks. The third component, the framework profile, provides guidance on how to integrate the core functions within a cybersecurity risk strategy, or plan.

On Twitter, framework ideas are being submitted and discussed with the hashtag #NISTCSF.

Source:  computerworld.com

IBM starts restricting hardware patches to paying customers

Thursday, August 29th, 2013

Following an Oracle practice, IBM starts to restrict hardware patches to holders of maintenance contracts

Following through on a policy change announced in 2012, IBM has started restricting availability of hardware patches to paying customers, spurring at least one advocacy group to accuse the company of anticompetitive practices.

IBM “is getting to the spot where the customer has no choice but to buy an IBM maintenance agreement, or lose access to patches and changes,” said Gay Gordon-Byrne, executive director of the Digital Right to Repair (DRTR), a coalition championing the rights of digital equipment owners.

Such a practice could dampen the market for support services for IBM equipment from non-IBM contractors, and could diminish the resale value of IBM equipment, DRTR charged.

On Aug. 11, IBM began requiring visitors to the IBM Fix Central website to provide a serial number in order to download a patch or update. According to DRTR, IBM uses the serial number to check whether the machine being repaired is under a current IBM maintenance contract or an IBM hardware warranty.

“IBM will take the serial number, validate it against its maintenance contract database, and allow [the user] to proceed or not,” Gordon-Byrne explained.

Traditionally, IBM has freely provided machine code patches and updates as a matter of quality control, Gordon-Byrne said. The company left it to the owner to decide how to maintain the equipment, either through the help of IBM, a third-party service-provider, or by itself.

This benevolent practice is starting to change, according to DRTR.

In April 2012, IBM started requiring customers to sign a license in order to access machine code updates. Then, in October of that year, the company announced that machine code updates would only be available for those customers with IBM equipment that was either under warranty or covered by an IBM maintenance agreement.

“Fix Central downloads are available only for IBM clients with hardware or software under warranty, maintenance contracts, or subscription and support,” stated the Fix Central site documentation.

Nor would IBM offer the fixes on a time-and-material contract, in which customers can go through a special bid process to buy annual access to machine code.

The company didn’t immediately start enforcing this entitlement check, however; that began earlier this month. “Until August, it didn’t appear that IBM had the capability,” Gordon-Byrne said. “We were wondering when they were going to do that step.”

The policy seems to apply to all IBM mainframes, servers and storage systems, with IBM System x servers being one known exception. Customer complaints forced IBM to halt the practice for System x servers, according to Gordon-Byrne.

This practice is problematic to IBM customers for a number of reasons, DRTR asserted.

Such a practice limits the resale of hardware, because any prospective owner of used equipment would have to purchase a support contract from IBM if it wanted its newly acquired machine updated.

And this could be expensive. IBM also announced last year it would start charging a “re-establishment fee” for equipment owners wishing to sign a new maintenance contract for equipment with lapsed IBM support coverage. The fee could be as high as 150 percent of the yearly maintenance fee itself, according to DRTR.

IBM could also use the maintenance contracts as a way to generate more sales.

“If IBM decides it wants to jack the maintenance price in order to make a new machine sale, they can do it because there is no competition,” Gordon-Byrne said.

IBM is not the first major hardware firm to use this tactic to generate more after-market sales, according to Gordon-Byrne. Oracle adopted a similar practice for its servers after it acquired Sun Microsystems, and its considerable line of hardware, in 2010.

The Service Industry Association, which focuses on helping the computer, medical and business products service industries, created DRTR in January 2013 to fight against encroaching after-market control by hardware manufacturers. The SIA itself protested Oracle’s move away from free patches as well.

DRTR is actively tracking a number of similar cases involving after-market control of hardware, such as an Avaya antitrust trial due to start Sept. 9 in the U.S. District Court for the District of New Jersey.

IBM declined to comment for this story.

Source:  networkworld.com

iOS and Android weaknesses allow stealthy pilfering of website credentials

Thursday, August 29th, 2013

Computer scientists have uncovered architectural weaknesses in both the iOS and Android mobile operating systems that make it possible for hackers to steal sensitive user data and login credentials for popular e-mail and storage services.

Both OSes fail to ensure that browser cookies, document files, and other sensitive content from one Internet domain are off-limits to scripts controlled by a second address without explicit permission, according to a just-published academic paper from scientists at Microsoft Research and Indiana University. The so-called same-origin policy is a fundamental security mechanism enforced by desktop browsers, but the protection is woefully missing from many iOS and Android apps. To demonstrate the threat, the researchers devised several hacks that carry out so-called cross-site scripting (XSS) and cross-site request forgery (CSRF) attacks to surreptitiously download user data from handsets.
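
For reference, the desktop rule the researchers say is missing on mobile is simple to state; this minimal Python sketch (illustrative only, not code from the paper) treats two URLs as same-origin only when scheme, host and port all match:

from urllib.parse import urlsplit

def origin(url: str):
    # An origin is the (scheme, host, port) triple; fall back to the default port for the scheme.
    parts = urlsplit(url)
    default_port = {"http": 80, "https": 443}.get(parts.scheme)
    return (parts.scheme, parts.hostname, parts.port or default_port)

def same_origin(url_a: str, url_b: str) -> bool:
    return origin(url_a) == origin(url_b)

print(same_origin("https://trustedbank.com/login", "https://trustedbank.com/account"))  # True
print(same_origin("https://trustedbank.com/login", "https://evilhacker.com/steal"))     # False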

The most serious of the attacks worked on both iOS and Android devices and required only that an end-user click on a booby-trapped link in the official Google Plus app. Behind the scenes, a script sent instructions that caused a text-editing app known as PlainText to send documents and text input to a Dropbox account controlled by the researchers. The attack worked against other apps, including TopNotes and Nocs.

“The problem here is that iOS and Android do not have this origin-based protection to regulate the interactions between those apps and between an app and another app’s Web content,” XiaoFeng Wang, a professor in Indiana University’s School of Informatics and Computing, told Ars. “As a result, we show that origins can be crossed and the same XSS and CSRF can happen.” The paper, titled Unauthorized Origin Crossing on Mobile Platforms: Threats and Mitigation, was recently accepted by the 20th ACM Conference on Computer and Communications Security.

All your credentials belong to us

The PlainText app in the researchers’ demonstration video was not configured to work with Dropbox. But even if the app had been set up to connect to the storage service, the attack could make it connect to the attacker’s account rather than the legitimate account belonging to the user, Wang said. All that was required was for the iPad user to click on the malicious link in the Google Plus app. In the researchers’ experiments, Android devices were susceptible to the same attack.

A separate series of attacks was able to retrieve the multi-character security tokens Android apps use to access private accounts on Facebook and Dropbox. Once the credentials are exposed, attackers could use them to download photos, documents, or other sensitive files stored in the online services. The attack, which relied on a malicious app already installed on the handset, exploited the lack of same-origin policy enforcement to bypass Android’s “sandbox” security protection. Google developers explicitly designed the mechanism to prevent one app from being able to access browser cookies, contacts, and other sensitive content created by another app unless a user overrides the restriction.

All attacks described in the 12-page paper have been confirmed by Dropbox, Facebook, and the other third-party websites whose apps were tested, Wang said. Most of the vulnerabilities have been fixed, but in many cases the patches were extremely hard to develop and took months to implement. The scientists went on to create a proof-of-concept app they called Morbs that provides OS-level protection across all apps on an Android device. It works by labeling each message with information about its origin and could make it easier for developers to specify and enforce security policies based on the sites where security tokens and other sensitive information originate.

As mentioned earlier, desktop browsers have long steadfastly enforced a same-origin policy that makes it impossible for JavaScript and other code from a domain like evilhacker.com to access cookies or other sensitive content from a site like trustedbank.com. In the world of mobile apps, the central role of the browser—and the gate-keeper service it provided—has largely come undone. It’s encouraging to know that the developers of the vulnerable apps took this research so seriously. Facebook awarded the researchers at least $7,000 in bounties (which the researchers donated to charity), and Dropbox offered valuable premium services in exchange for the private vulnerability report. But depending on a patchwork of fixes from each app maker is problematic given the difficulty and time involved in coming up with patches.

A better approach is for Apple and Google developers to implement something like Morbs that works across the board.

“Our research shows that in the absence of such protection, the mobile channels can be easily abused to gain unauthorized access to a user’s sensitive resources,” the researchers—who besides Wang, included Rui Wang and Shuo Chen of Microsoft and Luyi Xing of Indiana University—wrote. “We found five cross-origin issues in popular [software development kits] and high-profile apps such as Facebook and Dropbox, which can be exploited to steal their users’ authentication credentials and other confidential information such as ‘text’ input. Moreover, without the OS support for origin-based protection, not only is app development shown to be prone to such cross-origin flaws, but the developer may also have trouble fixing the flaws even after they are discovered.”

Source:  arstechnica.com

Cisco responds to VMware’s NSX launch, allegiances

Thursday, August 29th, 2013

Says a software-only approach to network virtualization spells trouble for users

Cisco has responded to the groundswell of momentum and support around the introduction of VMware’s NSX network virtualization platform this week with a laundry list of the limitations of software-only based network virtualization. At the same time, Cisco said it intends to collaborate further with VMware, specifically around private cloud and desktop virtualization, even as its partner lines up a roster of allies among Cisco’s fiercest rivals.

Cisco’s response was delivered in a blog post from Chief Technology and Strategy Officer Padmasree Warrior.

In a nutshell, Warrior says software-only based network virtualization will leave customers with more headaches and hardships than a solution that tightly melds software with hardware and ASICs – the type of network virtualization Cisco proposes:

A software-only approach to network virtualization places significant constraints on customers.  It doesn’t scale, and it fails to provide full real-time visibility of both physical and virtual infrastructure.  In addition this approach does not provide key capabilities such as multi-hypervisor support, integrated security, systems point-of-view or end-to-end telemetry for application placement and troubleshooting.  This loosely-coupled approach forces the user to tie multiple 3rd party components together adding cost and complexity both in day-to-day operations as well as throughout the network lifecycle.  Users are forced to address multiple management points, and maintain version control for each of the independent components.  Software network virtualization treats physical and virtual infrastructure as separate entities, and denies customers a common policy framework and common operational model for management, orchestration and monitoring.

Warrior then went on to tout the benefits of the Application Centric Infrastructure (ACI), a concept introduced by Cisco spin-in Insieme Networks at the Cisco Live conference two months ago. ACI combines hardware, software and ASICs into an integrated architecture that delivers centralized policy automation, visibility and management of both physical and virtual networks, etc., she claims.

Warrior also shoots down the comparison between network virtualization and server virtualization, which is the foundation of VMware’s existence and success. Servers were underutilized, which drove the need for the flexibility and resource efficiency promised in server virtualization, she writes.

Not so with networks. Networks do not have an underutilization problem, she claims:

In fact, server virtualization is pushing the limits of today’s network utilization and therefore driving demand for higher port counts, application and policy-driven automation, and unified management of physical, virtual and cloud infrastructures in a single system.

Warrior ends by promising some “exciting news” around ACI in the coming months. Perhaps at Interop NYC in late September/early October? Cisco CEO John Chambers was just added this week to the keynote lineup at the conference. He usually appears at these venues when Cisco makes a significant announcement that same week…

Source:  networkworld.com

Spear phishing poses threat to industrial control systems

Tuesday, August 27th, 2013

Hackers don’t need Stuxnet or Flame to turn off a city’s lights, say security experts

While the energy industry may fear the appearance of another Stuxnet on the systems they use to keep oil and gas flowing and the electric grid powered, an equally devastating attack could come from a much more mundane source: phishing.

Rather than worry about exotic cyber weapons like Stuxnet and its big brother, Flame, companies that have Supervisory Control and Data Acquisition (SCADA) systems — computer systems that monitor and control industrial processes — should make sure that their anti-phishing programs are in order, say security experts.

“The way malware is getting into these internal networks is by social engineering people via email,” Rohyt Belani, CEO and co-founder of the anti-phishing training firm PhishMe, said in an interview.

“You send them something that’s targeted, that contains a believable story, not high-volume spam, and people will act on it by clicking a link or opening a file attached to it,” he said. “Then, boom, the attackers get that initial foothold they’re looking for.”

In one case study, Belani recalled a narrowly targeted attack on a single employee working the night shift monitoring his company’s SCADA systems.

The attacker researched the worker’s background on the Internet and used the fact he had four children to craft a bogus email from the company’s human resources department with a special health insurance offer for families with three or more kids.

The employee clicked a malicious link in the message and infected his company’s network with malware.

“Engineers are pretty vulnerable to phishing attacks,” Tyler Klinger, a researcher with Critical Intelligence, said in an interview.

He recalled an experiment he conducted with several companies on engineers and others with access to SCADA systems in which 26 percent of the spear phishing attacks on them were successful.

Success means that the target clicked on a malicious link in the phishing mail. Klinger’s experiment ended with those clicks. In real life, those clicks would just be the beginning of the story and would not necessarily end in success for the attacker.

“If it’s a common Joe or script kiddie, a company’s IDS [Intrusion Detection Systems] systems will probably catch the attack,” Klinger said. “If they’re using a Java zero-day or something like that, there would be no defense against it.”

In addition, phishing attacks are aimed at a target’s email, which is usually located on a company’s IT network. Companies with SCADA systems typically segregate them from their IT networks with an “air gap.”

That air gap is designed to insulate the SCADA systems from the kinds of infections perpetrated by spear phishing attacks. “Air gaps are a mess these days,” Klinger said. “Stuxnet taught us that.”

“Once you’re in an engineer’s email, it’s just a matter of cross-contamination,” he added. “Eventually an engineer is going to have to access the Internet to update something on the SCADA and that’s when you get cross-contamination.”

Phishing attacks on SCADA systems are likely rare, said Raj Samani, vice president and CTO of McAfee EMEA.

“I would anticipate that the majority of spear phishing attacks against employees would be focused against the IT network,” Samani said in an interview. “The espionage attacks on IT systems would dwarf those against SCADA equipment.”

Still, the attacks are happening. “These are very targeted attacks and not something widely publicized,” said Dave Jevans, chairman and CTO of Marble Security and chairman of the Anti-Phishing Working Group.

Jevans acknowledged, though, that most SCADA attacks involve surveillance of the systems and not infection of them. “They’re looking for how it works, can a backdoor be maintained into the system so they can use it in the future,” he said.

“Most of those SCADA systems have no real security,” Jevans said. “They rely on not being directly connected to the Internet, but there’s always some Internet connection somewhere.”

Some companies even still have dial-in numbers for connection to their systems with a modem. “Their security on that system is, ‘Don’t tell anybody the phone number,'” he said.

Source:  csoonline.com

Amazon and Microsoft, beware—VMware cloud is more ambitious than we thought

Tuesday, August 27th, 2013

Desktops, disaster recovery, IaaS, and PaaS make VMware’s cloud compelling.

VMware today announced that vCloud Hybrid Service, its first public infrastructure-as-a-service (IaaS) cloud, will become generally available in September. That’s no surprise, as we already knew it was slated to go live this quarter.

What is surprising is just how extensive the cloud will be. When first announced, vCloud Hybrid Service was described as infrastructure-as-a-service that integrates directly with VMware environments. Customers running lots of applications in-house on VMware infrastructure can use the cloud to expand their capacity without buying new hardware and manage both their on-premises and off-premises deployments as one.

That’s still the core of vCloud Hybrid Service—but in addition to the more traditional infrastructure-as-a-service, VMware will also have a desktops-as-a-service offering, letting businesses deploy virtual desktops to employees without needing any new hardware in their own data centers. There will also be disaster recovery-as-a-service, letting customers automatically replicate applications and data to vCloud Hybrid Service instead of their own data centers. Finally, support for the open source distribution of Cloud Foundry and Pivotal’s deployment of Cloud Foundry will let customers run a platform-as-a-service (PaaS) in vCloud Hybrid Service. Unlike IaaS, PaaS tends to be optimized for building and hosting applications without having to manage operating systems and virtual computing infrastructure.

While the core IaaS service and connections to on-premises deployments will be generally available in September, the other services aren’t quite ready. Both disaster recovery and desktops-as-a-service will enter beta in the fourth quarter of this year. Support for Cloud Foundry will also be available in the fourth quarter. Pricing information for vCloud Hybrid Service is available on VMware’s site. More details on how it works are available in our previous coverage.

Competitive against multiple clouds

All of this gives VMware a compelling alternative to Amazon and Microsoft. Amazon is still the clear leader in infrastructure-as-a-service and likely will be for the foreseeable future. However, VMware’s IaaS will be useful to customers who rely heavily on VMware internally and want a consistent management environment on-premises and in the cloud.

VMware and Microsoft have similar approaches, offering a virtualization platform as well as a public cloud (Windows Azure in Microsoft’s case) that integrates with customers’ on-premises deployments. By wrapping Cloud Foundry into vCloud Hybrid Service, VMware combines IaaS and PaaS into a single cloud service just as Microsoft does.

VMware is going beyond Microsoft by also offering desktops-as-a-service. We don’t have a ton of detail here, but it will be an extension of VMware’s pre-existing virtual desktop products that let customers host desktop images in their data centers and give employees remote access to them. With “VMware Horizon View Desktop-as-a-Service,” customers will be able to deploy virtual desktop infrastructure either in-house or on the VMware cloud and manage it all together. VMware’s hybrid cloud head honcho, Bill Fathers, said much of the work of adding and configuring new users will be taken care of automatically.

The disaster recovery-as-a-service builds on VMware’s Site Recovery Manager, letting customers see the public cloud as a recovery destination along with their own data centers.

“The disaster recovery use case is something we want to really dominate as a market opportunity,” Fathers said in a press conference today. At first, it will focus on using “existing replication capabilities to replicate into the vCloud Hybrid Service. Going forward, VMware will try to provide increasing levels of automation and more flexibility in configuring different disaster recovery destinations,” he said.

vCloud Hybrid Service will be hosted in VMware data centers in Las Vegas, NV, Sterling, VA, Santa Clara, CA, and Dallas, TX, as well as data centers operated by Savvis in New York and Chicago. Non-US data centers are expected to join the fun next year.

When asked if VMware will support movement of applications between vCloud Hybrid Service and other clouds, like Amazon’s, Fathers said the core focus is ensuring compatibility between customers’ existing VMware deployments and the VMware cloud. However, he said VMware is working with partners who “specialize in that level of abstraction” to allow portability of applications from VMware’s cloud to others and vice versa. Naturally, VMware would really prefer it if you just use VMware software and nothing else.

Source:  arstechnica.com

Don’t waste your time (or money) on open-source networking, says Cisco

Monday, August 26th, 2013

Despite a desire to create open and flexible networks, network managers shouldn’t be fooled into thinking that the best way to achieve this is by building an open-source network from scratch, according to Den Sullivan, Head of Architectures for Emerging Markets at Cisco.

In a phone interview with CNME, Sullivan said that, in most cases, attempting to build your own network using open-source technologies would result in more work and more cost.

“When you’re down there in the weeds, sticking it all together, building it yourself when you can actually go out there and buy it, I think you’re probably increasing your cost base whilst you actually think that you may be getting something cheaper,” he said.

Sullivan said he understood why network managers could be seduced by the idea of building a bespoke network from open-source technologies. However, he advised that, in practical terms, open-source networking tech was mostly limited to creating smaller programs and scripts.

“People have looked to try to do things faster, try to automate things. And with regards to scripts and small programs, they’re taking up open-source off the Web, bolting them together and ultimately coming up with a little program or script that goes and does things a little bit faster for their own particular area,” he said.

Sullivan said he hadn’t come across anyone in the Middle East creating open-source networks from scratch — and with good reason. He said that the role of IT isn’t to create something bespoke, but to align the department with the needs of the business, using whichever tools are available.

“How does the IT group align with that strategy, and then how best do they deliver it?” he asked. “Ultimately, I don’t think that is always about going and building it yourself, and stitching it all together.

“It’s almost like the application world. Say you’ve got 10,000 sales people — why would you go and build a sales tool to track their forecasting, to track their performance, to track your customer base? These things are readily available — they’re built by vendors who have got years and years of experience, so why are you going to start trying to grow your own? That’s not the role of IT as I see it today.”

Sullivan admitted that, for some businesses, stock networking tools from the big vendors did not provide enough flexibility. However, he said that a lot of the flexibility and openness that people desire could be found more easily in software-defined networking (SDN) tools, rather than open-source networking tools.

“I see people very interested in the word ‘open’ in regards to software-defined networking, but I don’t see them actually going and creating their own networks through open-source, readily available programs out there on the Internet. I do see an interest in regards to openness, flexibility, and more programmability — things like the Open Network Foundation and everything in regards to SDN,” he said.

Source:  pcadvisor.com

VMware unwraps virtual networking software – promises greater network control, security

Monday, August 26th, 2013

VMware announces that NSX – which combines network and security features – will be available in the fourth quarter

VMware today announced that its virtual networking software and security software products packaged together in an offering named NSX will be available in the fourth quarter of this year.

The company has been running NSX in beta since the spring, but as part of a broader announcement of software-defined data center functions made today at VMworld, the company took the wrapping off of its long-awaited virtual networking software. VMware has based much of the NSX functionality on technology it acquired from Nicira last year.

The generally available version of NSX includes two major new features compared to the beta. First, it adds technical integration with a variety of partnering companies, including the ability for the virtual networking software to control network and compute infrastructure from hardware providers. Second, it virtualizes some network functions, such as firewalling, allowing for better control of virtual networks.

The idea of virtual networking is similar to that of virtual computing: abstracting the core features of networking from the underlying hardware. Doing so lets organizations more granularly control their networks, including spinning up and down networks, as well as better segmentation of network traffic.

Nicira has been a pioneer in the network virtualization industry and last year VMware spent $1.2 billion to acquire the company. In March, VMware announced plans to integrate Nicira’s technology into its product suite through the NSX software, but today the company announced that NSX’s general availability will be in the coming months. NSX will be a software update that is both hypervisor and hardware agnostic, says Martin Casado, chief architect, networking at VMware.

The need for the NSX software is being driven by the migration from a client-server world to a cloud world, he says. In this new architecture, there is just as much traffic, if not more, within the data center (east-west traffic) as there is between clients and edge devices (north-south traffic).

One of the biggest newly announced advancements in the NSX software is virtual firewalling. Instead of using hardware or virtual firewalls that sit at the edge of the network to control traffic, NSX embeds the firewall within the software, so it is ubiquitous throughout the deployment. This removes any bottlenecking issues that would be created by using a centralized firewall system, Casado says.

“We’re not trying to take over the firewall market or do anything with north-south traffic,” Casado says. “What we are doing is providing functionality for traffic management within the data center. There’s nothing that can do that level of protection for the east-west traffic. It’s addressing a significant need within the industry.”

VMware has signed on a bevy of partners that are compatible with the NSX platform. The software is hardware and hypervisor agnostic, meaning that the software controller can manage network functionality that is executed by networking hardware from vendors like Juniper, Arista, HP, Dell and Brocade. In press materials sent out by the company, Cisco is not named as a partner, but VMware says NSX will work with networking equipment from the leading network vendor.

On the security side, services from Symantec, McAfee and Trend Micro will work within the system, while underlying cloud compute platforms from OpenStack, CloudStack, Red Hat and Piston Cloud Computing Co. will work with NSX. Nicira has worked heavily in the OpenStack community.

“In virtual networks, where hardware and software are decoupled, a new network operating model can be achieved that delivers improved levels of speed and efficiency,” said Brad Casemore, research director for Data Center Networks at IDC. “Network virtualization is becoming a game shifter, providing an important building block for delivering the software-defined data center, and with VMware NSX, VMware is well positioned to capture this market opportunity.”

Source:  infoworld.com

SSDs maturing, but new memory tech still 10 years away

Monday, August 26th, 2013

Solid-state drive adoption will continue to grow and it will be more than 10 years before it is ultimately replaced by a new memory technology, experts said.

SSDs are getting more attractive as NAND flash gets faster and cheaper, providing flexibility for use as a RAM or hard-drive alternative, said speakers and attendees at the Hot Chips conference in Stanford, California, on Sunday.

Emerging memory types under development like phase-change memory (PCM), RRAM (resistive random-access memory) and MRAM (magnetoresistive RAM) may show promise with faster speed and durability, but it will be many years until they are made in volume and are priced competitively to replace NAND flash storage.

SSDs built on flash memory are now considered an alternative to spinning hard-disk drives, which have reached their speed limit. Mobile devices have moved over to flash drives, and a large number of thin and light ultrabooks are switching to SSDs, which are smaller, faster and more power efficient. However, the enterprise market still relies largely on spinning disks, and SSDs are poised to replace hard disks in server infrastructure, experts said. One of the reasons: SSDs are still more expensive than hard drives, though flash price is coming down fast.

“It’s going to be a long time until NAND flash runs out of steam,” said Jim Handy, an analyst at Objective Analysis, during a presentation.

Handy predicted that NAND flash will likely be replaced by 2023 or beyond. The capacity of SSDs is growing as NAND flash geometries get smaller, but scaling flash down will eventually become difficult, which will increase the need for a new form of non-volatile memory that doesn’t rely on transistors.

Many alternative forms of memory are under development. Crossbar has developed RRAM (resistive random-access memory) that the company claims can replace DRAM and flash. Startup Everspin is offering its MRAM (magnetoresistive RAM) products as an alternative to flash memory. Hewlett-Packard is developing memristor, while PCM (phase-change memory) is being pursued by Micron and Samsung.

But SSDs are poised for widespread enterprise adoption as the technology consumes less energy and is more reliable. The smaller size of SSDs can also provide more storage in fewer servers, which could cut licensing costs, Handy said.

“If you were running Oracle or some other database software, you would be paying license fee based on the number of servers,” Handy said.

In 2006, famed Microsoft researcher Jim Gray said in a presentation “tape is dead, disk is tape, flash is disk, RAM locality is king.” And people were predicting the end of flash 10 years ago when Amber Huffman, senior principal engineer at Intel’s storage technologies group, started working on flash memory.

Almost ten years on, flash is still maturing and could last even longer than 10 years, Huffman said. Its adoption will grow in enterprises and client devices, and it will ultimately overtake hard drives, which have peaked on speed, she said.

Like Huffman, observers agreed that flash is faster and more durable, but also more expensive than hard drives. But in enterprises, SSDs are inherently parallel, and better suited for server infrastructures that need better throughput. Multiple SSDs can exchange large loads of data easily much like memory, Huffman said. SSDs can be plugged into PCI-Express 3.0 slots in servers for processing of applications like analytics, which is faster than hard drives on the slower SATA interface.

The $30 billion enterprise storage market is still built on spinning disks, and there is a tremendous opportunity for SSDs, said Neil Vachharajani, software architect at Pure Storage, in a speech.

Typically, dedicated pools of spinning disks are needed for applications, which could block performance improvements, Vachharajani said.

“Why not take SSDs and put them into storage arrays,” Vachharajani said. “You can treat your storage as a single pool.”

Beyond being an alternative primary storage, NAND could be plugged into memory slots as a slower form of RAM. Facebook replaced DRAM with flash memory in a server called McDipper, and is also using SSDs for long-term cold storage. Ultrabooks use SSDs as operating system cache, and servers use SSDs for temporary caching before data is moved to hard drives for long-term storage.

Manufacturing enhancements are being made to make SSDs faster and smaller. Samsung this month announced faster V-NAND flash storage chips that are up to 10 times more durable than the current flash storage used in mobile devices. The flash memory employs a 3D chip structure in which storage modules are stacked vertically.

Intel is taking a different approach to scaling down NAND flash by implementing high-k metal gate to reduce leakage, according to Krishna Parat, a fellow at the chip maker’s nonvolatile memory group. As flash scales down in size, Intel will move to 3D transistor structuring, much like it does in microprocessors today on the 22-nanometer process.

But there are disadvantages. With every process shrink, the endurance of flash may drop, so steps need to be taken to preserve durability. Options would be to minimize writes by changing algorithms and controllers, and also to use compression, de-duplication and hardware encryption, attendees said.

Smarter controllers are also needed to maintain capacity efficiency and data integrity, said David Flynn, CEO of PrimaryData. Flynn was previously CEO of Fusion-io, which pioneered SSD storage in enterprises.

“Whatever Flash’s successor is, it won’t be as fast as RAM,” Flynn said. “It takes longer to change persistent states than volatile states.”

But Flynn is already looking beyond SSD into future memory types.

“The faster it gets the better,” Flynn said. “I’m excited about new, higher memories.”

But SSDs will ultimately match hard drives on price, and the newer memory and storage forms will have to wait, said Huffman, who is also the chairperson for the NVM-Express organization, which is the protocol for current and future non-volatile memory plugging into the PCI-Express slot.

“Hard drives will become the next tape,” Huffman said.

Source:  computerworld.com

China’s Internet hit by DDoS attack, sites taken down for hours

Monday, August 26th, 2013

Parts of China’s Internet were taken down in an attack on Sunday that could have been perpetrated by sophisticated hackers or a single individual, security experts say.

According to the Wall Street Journal, which earlier reported on the outage, China on Sunday was hit with what the government has called the biggest distributed denial-of-service attack ever to rock its “.cn” sites. The attack, which lasted up to four hours, according to security company CloudFlare, left many sites with the .cn extension down. According to the Journal, parts of the affected sites were still accessible during the outage, due mainly to site owners storing parts of their pages in cache.

In a statement on the matter, China’s Internet watchdog China Internet Network Information Center confirmed the attack, saying that it was indeed the largest the country has ever faced. The group said that it was gradually restoring services and would work to improve the top-level domain’s security to safeguard against similar attacks.

It’s not currently known who attacked the Chinese domain. However, in a statement on the matter, CloudFlare CEO Matthew Prince said that while it’s possible a sophisticated group of hackers took .cn down, “it may have well been a single individual.”

Source:  CNET

Popular download management program has hidden DDoS component, researchers say

Friday, August 23rd, 2013

Recent versions of Orbit Downloader, a popular Windows program for downloading embedded media content and other types of files from websites, turn computers into bots and use them to launch distributed denial-of-service (DDoS) attacks, according to security researchers.

Starting with version 4.1.1.14 released in December, the Orbit Downloader program silently downloads and uses a DLL (Dynamic Link Library) component that has DDoS functionality, malware researchers from antivirus vendor ESET said Wednesday in a blog post.

The rogue component is downloaded from a location on the program’s official website, orbitdownloader.com, the ESET researchers said. An encrypted configuration file containing a list of websites and IP (Internet Protocol) addresses to serve as targets for attacks is downloaded from the same site, they said.

Orbit Downloader has been developed since at least 2006 and, judging by download statistics from software distribution sites like CNET’s Download.com and Softpedia.com, it is, or used to be, a popular program.

Orbit Downloader has been downloaded almost 36 million times from Download.com to date, and around 12,500 times last week. Its latest version is 4.1.1.18 and was released in May.

In a review of the program, a CNET editor noted that it installs additional “junk programs” and suggested alternatives to users who need a dedicated download management application.

When they discovered the DDoS component, the ESET researchers were actually investigating the “junk programs” installed by Orbit Downloader in order to determine if the program should be flagged as a “potentially unwanted application,” known in the industry as PUA.

“The developer [of Orbit Downloader], Innoshock, generates its revenue from bundled offers, such as OpenCandy, which is used to install third-party software as well as to display advertisements,” the researchers said, noting that such advertising arrangements are normal behavior for free programs these days.

“What is unusual, though, is to see a popular utility containing additional code for performing Denial of Service (DoS) attacks,” they said.

The rogue Orbit Downloader DDoS component is now detected by ESET products as a Trojan program called Win32/DDoS.Orbiter.A. It is capable of launching several types of attacks, the researchers said.

First, it checks if a utility called WinPcap is installed on the computer. This is a legitimate third-party utility that provides low-level network functionality, including sending and capturing network packets. It is not bundled with Orbit Downloader, but can be installed on computers by other applications that need it.

If WinPcap is installed, Orbit’s DDoS component uses the tool to send TCP SYN packets on port 80 (HTTP) to the IP addresses specified in its configuration file. “This kind of attack is known as a SYN flood,” the ESET researchers said.

If WinPcap is not present, the rogue component directly sends HTTP connection requests on port 80 to the targeted machines, as well as UDP packets on port 53 (DNS).

The attacks also use IP spoofing techniques, with the source IP addresses for the requests falling into IP address ranges that are hardcoded in the DLL file.

“On a test computer in our lab with a gigabit Ethernet port, HTTP connection requests were sent at a rate of about 140,000 packets per second, with falsified source addresses largely appearing to come from IP ranges allocated to Vietnam,” the ESET researchers said.
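
To put that rate in perspective, a rough back-of-the-envelope estimate (assuming minimum-size SYN packets of about 54 bytes on the wire) shows how much upstream bandwidth such a flood consumes:

packets_per_second = 140_000   # rate observed in ESET's lab test
bytes_per_packet = 54          # 14-byte Ethernet + 20-byte IP + 20-byte TCP header, no options
mbps = packets_per_second * bytes_per_packet * 8 / 1e6
print(f"~{mbps:.0f} Mbit/s of outbound traffic")   # roughly 60 Mbit/s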

After adding a detection signature for the DLL component, the ESET researchers also identified an older file called orbitnet.exe that had almost the same functionality as the DLL file, but downloaded its configuration from a different website, not orbitdownloader.com.

This suggests that Orbit Downloader might have had DDoS functionality since before version 4.1.1.14. The orbitnet.exe file is not bundled with any older Orbit Downloader installers, but it might have been downloaded post-installation, like the DLL component.

This is a possibility, but it can’t be demonstrated with certainty, Peter Kosinar, a technical fellow at ESET who was involved in the investigation, said Thursday. It might also be distributed through other means, he said.

Adding to the confusion is that an older version of orbitnet.exe than the one found by ESET is distributed with Orbit Downloader 4.1.1.18. The reason for this is unclear since Orbit Downloader 4.1.1.18 also downloads and uses the DLL DDoS component. However, it indicates a clear relationship between orbitnet.exe and Orbit Downloader.

The fact that a popular program like Orbit Downloader is used as a DDoS tool creates problems not only for the websites that it’s used to attack, but also for the users whose computers are being abused.

According to Kosinar, there is no rate limit implemented for the packets sent by the DDoS component. This means that launching these attacks can easily consume the user’s Internet connection bandwidth, affecting his ability to access the Internet through other programs.
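
One rough way for an affected user or administrator to spot this behavior, sketched here under the assumption that the third-party scapy package and packet-capture privileges are available (it is not a tool from the researchers), is to count outbound TCP SYNs per destination for a few seconds:

from collections import Counter
from scapy.all import IP, TCP, sniff

syns = Counter()

def note_syn(pkt):
    # A sustained burst of SYNs toward a single port-80 target matches the flooding described above.
    if pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt[TCP].flags & 0x02:   # SYN bit set
        syns[(pkt[IP].dst, pkt[TCP].dport)] += 1

sniff(filter="tcp[tcpflags] & tcp-syn != 0", prn=note_syn, store=False, timeout=10)
print(syns.most_common(5))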

Users who install Orbit Downloader expect the program to streamline their downloads and increase their speed, but it turns out that the application has the opposite effect.

Orbit Downloader is developed by a group called Innoshock, but it’s not clear if this is a company or just a team of developers. Attempts to contact Innoshock for comment Thursday via two Gmail addresses listed on its website and the Orbit Downloader site, as well as via Twitter, remained unanswered.

The program’s users also seem to have noticed its DDoS behavior judging by comments left on Download.com and the Orbit Downloader support forum.

Orbit Downloader version 4.1.1.18 is generating a very high amount of DDoS traffic, a user named raj_21er said on the support forum on June 12. “The DDoS flooding is so huge that it just hangs the gateway devices/network switches completely and breaks down the entire network operation.”

“I was using Orbit Downloader for the past one week on my desktop when I suddenly noticed that the internet access was pretty much dead in the last 2 days,” another user named Orbit_User_5500 said. Turning off the desktop system restored Internet access to the other network computers and devices, he said.

Since adding detection of this DDoS component, ESET received tens of thousands of detection reports per week from deployments of its antivirus products, Kosinar said.

Source:  csoonline.com

Mozilla may reject long-lived digital certificates after similar move by Google

Friday, August 23rd, 2013

Starting in early 2014 Google Chrome will block certificates issued after July 1, 2012, with a validity period of more than 60 months

Mozilla is considering the possibility of rejecting as invalid SSL certificates issued after July 1, 2012, with a validity period of more than 60 months. Google already made the decision to block such certificates in Chrome starting early next year.

“As a result of further analysis of available, publicly discoverable certificates, as well as the vibrant discussion among the CA/B Forum [Certificate Authority/Browser Forum] membership, we have decided to implement further programmatic checks in Google Chrome and the Chromium Browser in order to ensure Baseline Requirements compliance,” Ryan Sleevi, a member of the Google Chrome Team, said Monday in a message to the CA/B Forum mailing list.

The checks will be added to the development and beta releases of Google Chrome at the beginning of 2014. The changes are expected in the stable release of Chrome during the first quarter of next year, Sleevi said.

The Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, sometimes simply referred to as the Baseline Requirements, is a set of guidelines agreed upon by all certificate authorities (CAs) and browser vendors that are members of the CA/B Forum.

Version 1.0 of the Baseline Requirements went into effect on July 1, 2012, and states that “Certificates issued after the Effective Date MUST have a Validity Period no greater than 60 months.” It also says that certificates to be issued after April 1, 2015, will need to have a validity period no greater than 39 months, but there are some clearly defined exceptions to this requirement.

The shortening of the certificate validity period is a proactive measure that would allow for a timely implementation of changes made to the requirements in the future. It would be hard for future requirements, especially those with a security impact, to have a practical effect if older certificates that aren’t compliant with them remained valid for 10 more years.

Google identified 2,038 certificates that were issued after July 1, 2012, and have validity periods longer than 60 months, in violation of the current Baseline Requirements.
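The check itself is straightforward to express. The following is a minimal sketch of that kind of programmatic test, written here with Python’s third-party cryptography package rather than taken from Chrome’s code; note that it approximates “60 months” in days, whereas the Baseline Requirements count calendar months.

    from datetime import datetime, timedelta
    from cryptography import x509

    BR_EFFECTIVE = datetime(2012, 7, 1)     # Baseline Requirements v1.0 effective date
    MAX_VALIDITY = timedelta(days=60 * 31)  # "60 months", approximated as 31-day months

    def violates_60_month_rule(pem_bytes: bytes) -> bool:
        """Flag certs issued after July 1, 2012 whose validity exceeds roughly 60 months."""
        cert = x509.load_pem_x509_certificate(pem_bytes)
        lifetime = cert.not_valid_after - cert.not_valid_before
        return cert.not_valid_before > BR_EFFECTIVE and lifetime > MAX_VALIDITY

Running a scan like this over a corpus of publicly discoverable certificates is, in spirit, how a list such as Google’s 2,038 offenders could be assembled.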

“We encourage CAs that have engaged in this unfortunate practice, which appears to be a very limited subset of CAs, to reach out to affected customers and inform them of the upcoming changes,” Sleevi said, referring to the fact that Chrome will start blocking those certificates at the beginning of 2014.

On Thursday, a discussion was started on the Mozilla bug tracker on whether the company should enforce a similar block in its products.

“Everyone agrees such certs, when newly issued, are incompatible with the Baseline Requirements,” said Gervase Markham, who deals with issues of project governance at Mozilla, on the bug tracker. “Some CAs have argued that when reissued, this is not so, but Google does not agree with them. We should consider making the same change.”

Daniel Veditz, the security lead at Mozilla, said he sees why CAs might have a problem with this from a business and legal standpoint. If a CA has already sold a “product” (in this case a certificate) with certain terms and later violates those terms by reducing the certificate’s validity period, it might be in hot water, he said.

“Although it does seem as if reissuing as a 60-month cert with the promise to reissue with the balance later ought to be satisfactory,” Veditz said.

Markham agreed. “No one is asking CAs to not give customers what they’ve paid for in terms of duration; it will just need to be 2 (or more) separate certs,” he said. “I agree that changing certs once every 5 years rather than every 10 might be a minor inconvenience for customers who use the same web server hardware and software for more than 5 years, but I’m not sure how large a group that is.”

Mozilla’s PR firm in the U.K. could not immediately provide a statement from the company regarding this issue.

Source:  csoonline.com

Intel plans to ratchet up mobile platform performance with 14-nanometre silicon

Friday, August 23rd, 2013

Semiconductor giant Intel is to start producing mobile and embedded systems using its latest manufacturing process technology in a bid to muscle in on a market that it had previously ignored.

The company is planning to launch a number of platforms this year and next intended to ratchet up the performance of its offerings, according to sources quoted in the Far Eastern trade journal Digitimes.

By the end of 2013, a new smartphone system-on-a-chip (SoC) produced using 22-nanometre process technology, codenamed “Merrifield”, will be introduced, followed by “Moorefield” in the first half of 2014. “Morganfield”, which will be produced on forthcoming 14-nanometre process manufacturing technology, will be available from the first quarter of 2015.

Merrifield ought to offer a performance boost of about 50 per cent combined with much improved battery life compared to Intel’s current top-end smartphone platform, called Clover Trail+.

More immediately, Intel will be releasing “Bay Trail-T” microprocessors intended for Windows 8 and Android tablet computers. The Bay Trail-T architecture will offer about eight hours of battery life in active use, and weeks on standby, according to Digitimes sources.

The Bay Trail-T may be unveiled at the Intel Developer Forum in September, when Intel will also be unveiling “Bay Trail”, on which the T-version is based. Bay Trail is based on the Silvermont microarchitecture and will be produced on a 22-nanometre process.

Digitimes was quoting sources among Taiwan-based manufacturers.

Intel’s current Atom microprocessors for mobile phones, such as those in the Motorola Razr i and the Prestigio MultiPhone, are based on 32-nanometre technology, a generation behind the manufacturing process the company uses to produce its latest desktop and laptop microprocessors.

However, the roadmap suggests that Intel is planning to produce its high-end smartphone and tablet computer microprocessors and SoC platforms using the same manufacturing technology as desktop and server products in a bid to gain an edge on ARM-based rivals from Samsung, Qualcomm, TSMC and other producers.

Manufacturers of ARM-based microprocessors, which currently dominate the high-performance market for mobile and embedded chips, trail Intel in the manufacturing technology available to build their products.

Intel, though, has been turning its attention to mobile and embedded as laptop, PC and server sales have stalled.

Source:  computing.com

Amazon is said to have tested a wireless network

Friday, August 23rd, 2013

Amazon.com Inc. (AMZN) has tested a new wireless network that would allow customers to connect its devices to the Internet, according to people with knowledge of the matter.

The wireless network, which was tested in Cupertino, California, used spectrum controlled by satellite communications company Globalstar Inc. (GSAT), said the people who asked not to be identified because the test was private.

The trial underlines how Amazon, the world’s largest e-commerce company, is moving beyond being a Web destination and hardware maker and digging deeper into the underlying technology for how people connect to the Internet. That would let Amazon create a more comprehensive user experience, encompassing how consumers get online, what device they use to connect to the Web and what they do on the Internet.

Leslie Letts, a spokeswoman for Amazon, didn’t respond to a request for comment. Katherine LeBlanc, a spokeswoman for Globalstar, declined to comment.

Amazon isn’t the only Internet company that has tested technology allowing it to be a Web gateway. Google Inc. (GOOG) has secured its own communications capabilities by bidding for wireless spectrum and building high-speed, fiber-based broadband networks in 17 cities, including Austin, Texas and Kansas City, Kansas. It also operates a Wi-Fi network in Mountain View, California, and recently agreed to provide wireless connectivity at Starbucks Corp. (SBUX)’s coffee shops.

Always Trying

Amazon continually tries various technologies, and it’s unclear if the wireless network testing is still taking place, said the people. The trial was in the vicinity of Amazon’s Lab126 research facilities in Cupertino, the people said. Lab126 designs and engineers Kindle devices.

“Given that Amazon’s becoming a big player in video, they could look into investing into forms of connectivity,” independent wireless analyst Chetan Sharma said in an interview.

Amazon has moved deeper into wireless services for several years, as it competes with tablet makers like Apple Inc. (AAPL) and with Google, which runs a rival application store. Amazon’s Kindle tablets and e-book readers have built-in wireless connectivity, and the company sells apps for mobile devices. Amazon had also worked on its own smartphone, Bloomberg reported last year.

Chief Executive Officer Jeff Bezos is aiming to make Amazon a one-stop shop for consumers online, a strategy that spurred a 27 percent increase in sales to $61.1 billion last year. It’s an approach investors have bought into, shown in Amazon’s stock price, which has more than doubled in the past three years.

Globalstar’s Spectrum

Globalstar is seeking regulatory approval to convert about 80 percent of its spectrum to terrestrial use. The Milpitas, California-based company applied to the Federal Communications Commission for permission to convert its satellite spectrum to provide Wi-Fi-like services in November 2012.

Globalstar met with FCC Chairwoman Mignon Clyburn in June, and a decision on whether the company can convert the spectrum could come within months. A company technical adviser conducted tests that showed the spectrum may be able to accommodate more traffic and offer faster speeds than traditional public Wi-Fi networks.

“We are now well positioned in the ongoing process with the FCC as we seek terrestrial authority for our spectrum,” Globalstar CEO James Monroe said during the company’s last earnings call.

Neil Grace, a spokesman for the FCC, declined to comment.

If granted FCC approval, Globalstar is considering leasing its spectrum, sharing service revenue with partners, and other business models, one of the people said. With wireless spectrum scarce, Globalstar’s converted spectrum could be of interest to carriers and cable companies seeking to offload ballooning mobile traffic, as well as to technology companies.

The FCC issued the permit to trial wireless equipment using Globalstar’s spectrum to the satellite service provider’s technical adviser, Jarvinian Wireless Innovation Fund. In a letter to the FCC dated July 1, Jarvinian managing director John Dooley said his company is helping “a major technology company assess the significant performance benefits” of Globalstar’s spectrum.

Source:  bloomberg.com

Nasdaq stops all trading due to systems issue, plans to reopen in a limited capacity soon

Thursday, August 22nd, 2013

Well, this is rather peculiar. The Nasdaq stock market — the entire Nasdaq, which lists major tech firms such as Apple and Facebook — has temporarily suspended all trading due to a technical issue.

The exchange sent an alert to traders at 12:14PM ET today announcing that it was halting all trading “until further notice,” according to a New York Times report. Reuters is reporting that Nasdaq will reopen trading soon, but with a 5-minute quote period. The market will not be canceling open orders, however, so firms that don’t want their orders processed once everything’s up and running should cancel their orders manually now.

It’s not entirely clear what caused the issue, or how and when it will be resolved, but you’d better believe it’s causing some commotion on Wall Street, and it could affect traders for days, if not months, to come.

Update (2:28PM ET): CNBC and the Wall Street Journal are reporting that Nasdaq will resume limited trading beginning at 2:45PM ET.

Update (2:32PM ET): CNBC is now reporting that trading will resume with just two securities at 2:45PM ET. Full trading will begin at 3:10PM ET.

Source:  engadget.com

Update for deprecation of MD5 hashing algorithm for Microsoft Root Certificate Program

Thursday, August 22nd, 2013

Executive Summary

Microsoft is announcing the availability of an update for supported editions of Windows Vista, Windows Server 2008, Windows 7, Windows Server 2008 R2, Windows 8, Windows Server 2012, and Windows RT that restricts the use of certificates with MD5 hashes. This restriction is limited to certificates issued under roots in the Microsoft root certificate program. Use of the MD5 hash algorithm in certificates could allow an attacker to spoof content, perform phishing attacks, or perform man-in-the-middle attacks.
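As a rough illustration of what “certificates with MD5 hashes” means in practice, the fragment below flags certificates whose signature was produced over an MD5 digest. It uses Python’s third-party cryptography package and is only a local auditing sketch, unrelated to the mechanics of Microsoft’s update.

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes

    def signed_with_md5(pem_bytes: bytes) -> bool:
        """Return True if the certificate's signature algorithm uses an MD5 digest."""
        cert = x509.load_pem_x509_certificate(pem_bytes)
        return isinstance(cert.signature_hash_algorithm, hashes.MD5)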

The update is available on the Download Center for all affected releases of Microsoft Windows except for Windows RT (no update for Windows RT is available at this time). In addition, Microsoft is planning to release this update through Microsoft Update on February 11, 2014 after customers have a chance to assess the impact of this update and take necessary actions in their environments.

Recommendation

Microsoft recommends that customers download, test, and apply the update at the earliest opportunity. Please see the Suggested Actions section of this advisory for more information.

Note that the 2862966 update is a prerequisite and must be applied before this update can be installed. The 2862966 update contains associated framework changes to Microsoft Windows. For more information, see Microsoft Knowledge Base Article 2862966.

Known Issues

Microsoft Knowledge Base Article 2862973 documents the currently known issues that customers may experience when installing this update. The article also documents recommended solutions for these issues.

Excerpt from:  microsoft.com

Next up for WiFi

Thursday, August 22nd, 2013

Transitioning from the Wi-Fi-shy financial industry, Riverside Medical Center’s CSO Erik Devine remembers his shock at the healthcare industry’s wide embrace of the technology when he joined the hospital in 2011.

“In banking, Wi-Fi was almost a no-go because everything is so overly regulated. Wireless here is almost as critical as wired,” Devine still marvels. “It’s used for connectivity to heart pumps, defibrillators, nurse voice over IP call systems, surgery robots, remote stroke consultation systems, patient/guest access and more.”

To illustrate the level of dependence the organization has on Wi-Fi, Riverside Medical Center calls codes over the PA system — much like in medical emergencies — when the network goes down. “Wireless is such a multifaceted part of the network that it’s truly a big deal,” he says.

And getting bigger. Besides the fact that organizations are finding new ways to leverage Wi-Fi, workers have tasted the freedom of wireless, have benefited from the productivity boost, and are demanding increased range and better performance, particularly now that many are showing up with their own devices (the whole bring your own device thing). The industry is responding in kind, introducing new products and technologies, including gigabit Wi-Fi (see “Getting ready for gigabit Wi-Fi“), and it is up to IT to orchestrate this new mobile symphony.

“Traffic from wireless and mobile devices will exceed traffic from wired devices by 2017,” according to the Cisco Visual Networking Index. While only about a quarter of consumer IP traffic originated from non-PC devices in 2012, non-PC devices will account for almost half of consumer IP traffic by 2017, Cisco says.

IT gets it, says Tony Hernandez, principal in Grant Thornton’s business consulting practice. Wi-Fi is no longer an afterthought in IT build-outs. “The average office worker still might have a wired connection, but they also have the capability to use Wi-Fi across the enterprise,” says Hernandez, noting the shift has happened fast.

“Five years ago, a lot of enterprises were looking at Wi-Fi for common areas such as lobbies and cafeterias and put that traffic on an isolated segment of the network,” Hernandez says. “If users wanted access to corporate resources from wireless, they’d have to use a VPN.”

Hernandez credits several advances for Wi-Fi’s improved stature: enterprise-grade security; sophisticated, software-based controllers; and integrated network management.

Also in the mix: pressure from users who want mobility and flexibility for their corporate machines as well as the ability to access the network from their own devices, including smartphones, tablets and laptops.

Some businesses have only recently converted to 802.11n from 802.11a/b/g, and they now have to decide whether their next Wi-Fi purchases will support 802.11ac, the draft IEEE standard that addresses the need for gigabit speed. “The landscape is still 50/50 between 802.11g and 802.11n,” Hernandez says. “There are many businesses with older infrastructure that haven’t refreshed their Wi-Fi networks yet.”

What will push enterprises to move to 802.11ac? Heavier reliance on mobile access to video such as videoconferencing and video streaming, he says.

Crash of the downloads

David Heckaman, vice president of technology development at luxury hospitality chain Mandarin Oriental Hotel Group, remembers the exact moment he knew Wi-Fi had gained an equal footing with wired infrastructure in his industry.

A company had booked meeting room space at one of Mandarin Oriental’s 30 global properties to launch its new mobile app and answered all the hotel’s usual questions about anticipated network capacity demands. Not yet familiar with the impact of dense mobile usage, the IT team didn’t account for the fallout when the 200-plus crowd received free Apple iPads to immediately download and launch the new app. The network crashed. “It was a slap in the face: What was good enough before wouldn’t work. This was a whole new world,” Heckaman says.

Seven or eight years ago, Wi-Fi networks were designed for coverage; capacity wasn’t given much thought. When Mandarin Oriental opened its New York City property in 2003, for example, IT installed two or three wireless access points in a closet on each floor and used a distributed antenna to extend coverage to the whole floor. At the time, wireless made up only 10% of total network usage. As that number climbed to 40%, capacity issues cropped up, forcing IT to rethink the entire architecture.

“We didn’t really know what capacity needs were until the Apple iPhone was released,” Heckaman says. Now, although a single access point could provide signal coverage for every five rooms, the hotel is putting access points in almost every room to connect back to an on-site controller.

Heckaman’s next plan involves adding centralized Wi-Fi control from headquarters for advanced reporting and policy management. Instead of simply reporting that on-site controllers delivered a certain number of sessions and supported X amount of overall bandwidth, he would be able to evaluate in real-time actual end-device performance. “We would be able to report on the quality of the connection and make adjustments accordingly,” he says.

Where he pinpoints service degradation, he’ll refresh access points with those that are 802.11ac-enabled. As guests bring more and more devices into their rooms and individually stream movies, play games or perform other bandwidth-intensive actions, he predicts the need for 802.11ac will come faster than anticipated.

“We have to make sure that the physical link out of the building, not the guest room access point, remains the weakest point and that the overall network is robust enough to handle it,” he says.

Getting schooled on wireless

Craig Canevit, IT administrator at the University of Tennessee at Knoxville, has had many aha! moments when it comes to Wi-Fi across the 27,000-student campus. For instance, when the team first engineered classrooms for wireless, it was difficult to predict demand. Certain professors would need higher capacity for their lectures than others, so IT would accommodate them. If those professors got reassigned to different rooms the next year, they would immediately notice performance issues.

“They had delays and interruption of service so we had to go back and redesign all classrooms with more access points and more capacity,” Canevit says.

The university also has struggled with the fact that students and faculty are now showing up with numerous devices. “We see at least three devices per person, including smartphones, tablets, gaming consoles, Apple TV and more,” he says. IT has the dual challenge of supporting the education enterprise during the day and residential demands at night.

The school’s primary issue has revolved around IP addresses, which the university found itself low on as device count skyrocketed. “Devices require IP addresses even when sitting in your pocket and we faced a terrible IP management issue,” he says. IT had to constantly scour the network for unused IP addresses to “feed the monster.”

Eventually the team came too close to capacity for comfort and had to act. Canevit didn’t think IPv6 was widely enough supported at the time, so the school went with Network Address Translation instead, hiding private IP addresses behind a single public address. A side effect of NAT is that mapping network and security issues to specific devices becomes more challenging, but Canevit says the effort is worth it.
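That attribution side effect follows directly from how port address translation works: many inside hosts share one outside address, so only the translation log ties an outside flow back to a specific device. The toy sketch below (with made-up addresses and port ranges, not the university’s configuration) illustrates the bookkeeping involved.

    import itertools
    import time

    PUBLIC_IP = "203.0.113.10"        # documentation-range example address
    _ports = itertools.count(40000)   # pool of outside source ports
    translations = {}                 # (private_ip, private_port) -> public_port
    audit_log = []                    # without this log, abuse can't be traced to a device

    def translate_outbound(private_ip, private_port):
        """Map an inside (ip, port) pair to the shared public address and a unique port."""
        key = (private_ip, private_port)
        if key not in translations:
            translations[key] = next(_ports)
            audit_log.append((time.time(), private_ip, private_port, PUBLIC_IP, translations[key]))
        return PUBLIC_IP, translations[key]

    print(translate_outbound("10.20.1.57", 51500))   # ('203.0.113.10', 40000)
    print(translate_outbound("10.20.9.12", 51500))   # ('203.0.113.10', 40001)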

Looking forward, the university faces the ongoing challenge of providing Wi-Fi coverage to every dorm room and classroom. That’s a bigger problem than capacity. “We only give 100Mbps on the wired network in residence halls and don’t come close to hitting capacity,” he says, so 802.11ac is really not on the drawing board. What’s more, 802.11ac would exacerbate his coverage problem. “To get 1Gbps, you’ll have to do channel bonding, which leaves fewer overlapping channels available and takes away from the density,” he says.
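The gigabit figure behind that trade-off comes from bonding an 80MHz channel and running several spatial streams with dense modulation. A back-of-the-envelope calculation, simplified from the standard’s MCS tables, shows where the familiar 1.3Gbps headline number comes from:

    # Rough 802.11ac PHY-rate arithmetic (simplified; exact rates come from the MCS tables)
    data_subcarriers = 234       # usable data subcarriers in one bonded 80 MHz channel
    bits_per_symbol = 8          # 256-QAM
    coding_rate = 5 / 6
    symbol_time_us = 3.6         # OFDM symbol duration with the short guard interval
    spatial_streams = 3

    phy_rate_mbps = (spatial_streams * data_subcarriers * bits_per_symbol
                     * coding_rate / symbol_time_us)
    print(round(phy_rate_mbps))  # ~1300 Mbps, i.e. the "1.3 Gbps" figure

Halve the channel width or the stream count and the rate falls accordingly, which is exactly the coverage-versus-capacity tension Canevit describes.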

What he is intrigued by is software-defined networking. Students want to use their iPhone to control their Apple TV and other such devices, which is impossible currently because of subnets. “If you allowed this in a dorm, it would degrade quality for everyone,” he says. SDN could give wireless administrators a way around the problem by making it possible to add boatloads of virtual LANs. “Wireless will become more of a provisioning than an engineering issue,” Canevit predicts.

Hospital all-in with Wi-Fi

Armand Stansel, director of IT infrastructure at Houston’s The Methodist Hospital System, recalls a time when his biggest concern regarding Wi-Fi was making sure patient areas had access points. “That was in early 2000 when we were simply installing Internet hotspots for patients with laptops,” he says.

Today, the 1,600-bed, five-hospital system boasts 100% Wi-Fi coverage. Like Riverside Medical Center, The Methodist Hospital has integrated wireless deep into the clinical system to support medical devices such as IV pumps, portable imaging systems for radiology, physicians’ tablet-based consultations and more. The wireless network sees 20,000 to 30,000 touches a day, a figure that has doubled in the past few years, Stansel says.

And if IT has its way, that number will continue to ramp up. Stansel envisions a majority of employees working on the wireless network. He wants to transition back-office personnel to tablet-based docking systems once the devices are more “enterprise-ready”, with better security and durability in terms of both battery life and the hardware itself.

Already he has been able to reduce wired capacity by more than half due to the rise of wireless. Patient rooms, which used to have numerous wired outlets, now only require a few for the wired patient phone and some telemetry devices.

When the hospital does a renovation or adds new space, Stansel spends as much time planning the wired plant as he does studying the implications for the Wi-Fi environment, looking at everything from what the walls are made of to possible sources of interference. And when it comes to even the simplest construction, such as moving a wall, he has to deploy a team to retest nearby access points. “Wireless does complicate things because you can’t leave access points static. But it’s such a necessity, we have to do it,” he says.

He also has to reassess his access point strategy on an ongoing basis, adding more or relocating others depending on demand and traffic patterns. “We always have to look at how the access point is interacting with devices. A smartphone connecting to Wi-Fi has different needs than a PC and we have to monitor that,” he says.

The Methodist Hospital takes advantage of a blend of 802.11b, .11g and .11n in the 2.4GHz and 5GHz spectrums. Channel bonding, he has found, poses challenges even for .11n, reducing the number of channels available for others. The higher the density, he says, the less likely he can take full advantage of .11n. He does use n for priority locations such as the ER, imaging, radiology and cardiology, where users require higher bandwidth.

Stansel is betting big that wireless will continue to grow. In fact, he believes that by 2015 it will surpass wired 3-to-1. “There may come a point where wired is unnecessary, but we’re just not there yet,” he says.

Turning on the ac

Stansel is, however, onboard with 802.11ac. The Methodist Hospital is an early adopter of Cisco’s 802.11ac wireless infrastructure. To start, he has targeted the same locations that receive 802.11n priority. If a patient has a cardiac catheterization procedure done, the physician who performed it can interactively review the results with the patient and family while the patient is still in the recovery room, referencing dye images from a wireless device such as a tablet. Normally, physicians have to verbally brief patients just out of surgery, then do likewise with the family, and wait until later to go over high-definition images from a desktop.

Current wireless technologies have strained to support access to real-time 3D imaging (also referred to as 4D), ultrasounds and more. Stansel expects better performance as 802.11ac is slowly introduced.

Riverside Medical Center’s Devine is more cautious about deploying 802.11ac, saying he is still a bit skeptical. “Can we get broader coverage with fewer access points? Can we get greater range than with 802.11n? That’s what is important to us,” he says.

In the meantime, Devine plans to deploy 20% to 25% more access points to support triangulation for location of equipment. He’ll be able to replace RFID to stop high-value items such as Ascom wireless phones and heart pumps from walking out the door. “RFID is expensive and a whole other network to manage. If we can mimic what it does with Wi-Fi, we can streamline operations,” he says.
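Wi-Fi location of the kind Devine has in mind typically converts each access point’s received signal strength into a distance estimate and then trilaterates. The toy sketch below uses invented coordinates and path-loss parameters (not Riverside’s deployment); real systems add calibration, fingerprinting and many more measurements.

    import numpy as np

    def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exponent=3.0):
        """Log-distance path-loss model: rough distance in metres from signal strength."""
        return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

    def trilaterate(ap_xy, dists):
        """Least-squares (x, y) estimate from three or more APs at known positions."""
        (x1, y1), d1 = ap_xy[0], dists[0]
        A = [[2 * (x - x1), 2 * (y - y1)] for (x, y) in ap_xy[1:]]
        b = [d1**2 - d**2 + x**2 - x1**2 + y**2 - y1**2
             for (x, y), d in zip(ap_xy[1:], dists[1:])]
        pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
        return tuple(pos)

    # Three ceiling APs at known coordinates (metres) and RSSI readings from a tagged device
    aps = [(0.0, 0.0), (30.0, 0.0), (0.0, 30.0)]
    readings = [-62.0, -70.0, -75.0]
    print(trilaterate(aps, [rssi_to_distance(r) for r in readings]))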

High-power access points currently are mounted in each hallway, but Devine wants to swap those out with low-power ones and put regular-strength access points in every room. If 802.11ac access points prove to be affordable, he’ll consider them, but won’t put off his immediate plans in favor of the technology.

The future of Wi-Fi

Enterprise Strategy Group Senior Analyst John Mazur says that Wi-Fi should be front and center in every IT executive’s plans. BYOD has tripled the number of Wi-Fi connected devices and new access points offer about five times the throughput and twice the range of legacy Wi-Fi access points. In other words, Mazur says, Wi-Fi is up to the bandwidth challenge.

He warns IT leaders not to be scared off by spending projections, which, according to ESG’s 2013 IT Spending Intentions Survey, will be at about 2012 levels and favor cost-cutting (like Devine’s plan to swap out RFID for Wi-Fi) rather than growth initiatives.

But now is the time, he says, to set the stage for 802.11ac, which is due to be ratified in 2014. “IT should require 802.11ac support from their vendors and get a commitment on the upgrade cost and terms before signing a deal. Chances are you won’t need 802.11ac’s additional bandwidth for a few years, but you shouldn’t be forced to do forklift upgrades/replacements of recent access points to get .11ac. It should be a relatively simple module or software upgrade to currently marketed access points.”

While 802.11ac isn’t even fully supported by wireless clients yet, Mazur recommends keeping your eye on the 802.11 sky. Another spec, 802.11ad, which operates in the 60GHz spectrum and is currently geared toward home entertainment connectivity and near-field HD video connectivity, could be — like other consumer Wi-Fi advances — entering the enterprise space sooner rather than later.

Source:  networkworld.com

Cisco patches serious vulnerabilities in Unified Communications Manager

Thursday, August 22nd, 2013

The vulnerabilities can be exploited by attackers to execute arbitrary commands or disrupt telephony-related services, Cisco said

Cisco Systems has released new security patches for several versions of Unified Communications Manager (UCM) to address vulnerabilities that could allow remote attackers to execute arbitrary commands, modify system data or disrupt services.

The UCM is the call processing component of Cisco’s IP Telephony solution. It connects to IP (Internet Protocol) phones, media processing devices, VoIP gateways, and multimedia applications and provides services such as session management, voice, video, messaging, mobility, and web conferencing.

The most serious vulnerability addressed by the newly released patches can lead to a buffer overflow and is identified as CVE-2013-3462 in the Common Vulnerabilities and Exposures database. This vulnerability can be exploited remotely, but it requires the attacker to be authenticated on the device.

“An attacker could exploit this vulnerability by overwriting an allocated memory buffer on an affected device,” Cisco said Wednesday in a security advisory. “An exploit could allow the attacker to corrupt data, disrupt services, or run arbitrary commands.”

The CVE-2013-3462 vulnerability affects versions 7.1(x), 8.5(x), 8.6(x), 9.0(x) and 9.1(x) of Cisco UCM, Cisco said.

The company also patched three denial-of-service (DoS) flaws that can be remotely exploited by unauthenticated attackers.

One of them, identified as CVE-2013-3459, is caused by improper error handling and can be exploited by sending malformed registration messages to the affected devices. The flaw only affects Cisco UCM 7.1(x) versions.

The second DoS issue is identified as CVE-2013-3460 and is caused by insufficient limiting of traffic received on certain UDP ports. It can be exploited by sending UDP packets at a high rate on those specific ports to devices running versions 8.5(x), 8.6(x), and 9.0(x) of Cisco UCM.

The third vulnerability, identified as CVE-2013-3461, is similar but only affects the Session Initiation Protocol (SIP) port. “An attacker could exploit this vulnerability by sending UDP packets at a high rate to port 5060 on an affected device,” Cisco said. The vulnerability affects Cisco UCM versions 8.5(x), 8.6(x) and 9.0(1).
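For context, “limiting of traffic received on certain UDP ports” generally means something like a token-bucket policer sitting in front of the service. The sketch below is purely illustrative of that idea; it is not a Cisco feature and, per the advisory, not a supported workaround for these flaws.

    import time

    class TokenBucket:
        """Simple packets-per-second limiter: refill continuously, spend one token per packet."""
        def __init__(self, rate_pps, burst):
            self.rate, self.capacity = rate_pps, burst
            self.tokens, self.last = float(burst), time.monotonic()

        def allow(self):
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False

    sip_limiter = TokenBucket(rate_pps=200, burst=400)   # arbitrary example numbers
    # For each UDP datagram arriving on port 5060: process it only if sip_limiter.allow()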

Patched versions have been released for all UCM release branches affected by these vulnerabilities, and there are no known workarounds at this time that would mitigate the flaws without upgrading.

All of the patched vulnerabilities were discovered during internal testing, and the company’s product security incident response team (PSIRT) is not aware of any cases in which these issues have been exploited or publicly documented.

“In all cases, customers should ensure that the devices to be upgraded contain sufficient memory and confirm that current hardware and software configurations will continue to be supported properly by the new release,” Cisco said. “If the information is not clear, customers are advised to contact the Cisco Technical Assistance Center (TAC) or their contracted maintenance providers.”

Source:  networkworld.com