Archive for the ‘Cloud’ Category

Hackers use Amazon cloud to scrape massive numbers of LinkedIn member profiles

Friday, January 10th, 2014

EC2 service helps hackers bypass measures designed to protect LinkedIn users

LinkedIn is suing a gang of hackers who used Amazon’s cloud computing service to circumvent security measures and copy data from hundreds of thousands of member profiles each day.

“Since May 2013, unknown persons and/or entities employing various automated software programs (often referred to as ‘bots’) have registered thousands of fake LinkedIn member accounts and have extracted and copied data from many member profile pages,” company attorneys alleged in a complaint filed this week in US District Court in Northern California. “This practice, known as ‘scraping,’ is explicitly barred by LinkedIn’s User Agreement, which prohibits access to LinkedIn ‘through scraping, spidering, crawling, or other technology or software used to access data without the express written consent of LinkedIn or its Members.'”

With more than 259 million members—many who are highly paid professionals in technology, finance, and medical industries—LinkedIn holds a wealth of personal data that can prove highly valuable to people conducting phishing attacks, identity theft, and similar scams. The allegations in the lawsuit highlight the unending tug-of-war between hackers who work to obtain that data and the defenders who use technical measures to prevent the data from falling into the wrong hands.

The unnamed “Doe” hackers employed a raft of techniques designed to bypass anti-scraping measures built into the business network. Chief among them was the creation of huge numbers of fake accounts. That made it possible to circumvent restrictions dubbed FUSE, which limit the activity any single account can perform.

“In May and June 2013, the Doe defendants circumvented FUSE—which limits the volume of activity for each individual account—by creating thousands of different new member accounts through the use of various automated technologies,” the complaint stated. “Registering so many unique new accounts allowed the Doe defendants to view hundreds of thousands of member profiles per day.”
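LinkedIn has not disclosed how FUSE works internally, but the idea of a per-account activity cap, and why spreading requests across thousands of fake accounts defeats it, can be sketched in a few lines of Python. The threshold and function names below are illustrative assumptions, not LinkedIn's actual implementation:

```python
from collections import defaultdict

# Hypothetical per-account cap; the real FUSE limits are not public.
PROFILE_VIEWS_PER_DAY = 500

views_today = defaultdict(int)

def allow_profile_view(account_id: str) -> bool:
    """Return True while the account is under its daily cap; otherwise throttle it."""
    if views_today[account_id] >= PROFILE_VIEWS_PER_DAY:
        return False  # this account gets blocked or challenged
    views_today[account_id] += 1
    return True
```

Each fake account stays comfortably below a limit like this, so a purely per-account check never sees the aggregate scraping volume.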

The hackers also circumvented a separate security measure that is supposed to require end users to complete bot-defeating CAPTCHA dialogues when potentially abusive activities are detected. They also managed to bypass restrictions that LinkedIn intended to impose through a robots.txt file, which websites use to make clear which content may be indexed by automated Web crawling programs employed by Google and other sites.
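For context, robots.txt is purely advisory: a compliant crawler checks it before fetching a page, while a scraper can simply ignore it. A minimal sketch using Python's standard urllib.robotparser module (the URLs and user agent are illustrative) shows what honoring the file looks like:

```python
from urllib import robotparser

# robots.txt is advisory: well-behaved crawlers consult it, scrapers can ignore it.
rp = robotparser.RobotFileParser()
rp.set_url("https://www.linkedin.com/robots.txt")
rp.read()

user_agent = "ExampleCrawler/1.0"  # illustrative user agent
print(rp.can_fetch(user_agent, "https://www.linkedin.com/in/example-profile"))
```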

LinkedIn engineers have disabled the fake member profiles and implemented additional technological safeguards to prevent further scraping. They also conducted an extensive investigation into the bot-powered methods employed by the hackers.

“As a result of this investigation, LinkedIn determined that the Doe defendants accessed LinkedIn using a cloud computing platform offered by Amazon Web Services (‘AWS’),” the complaint alleged. “This platform—called Amazon Elastic Compute Cloud or Amazon EC2—allows users like the Doe defendants to rent virtual computers on which to run their own computer programs and applications. Amazon EC2 provides resizable computing capacity. This feature allows users to quickly scale capacity, both up and down. Amazon EC2 users may temporarily run hundreds or thousands of virtual computing machines. The Doe defendants used Amazon EC2 to create virtual machines to run automated bots to scrape data from LinkedIn’s website.”
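The elasticity the complaint describes is an ordinary, documented feature of EC2. As a rough illustration, a short script using the boto3 SDK (the AMI ID, instance type, and region are placeholders) can launch a batch of identical virtual machines in one call:

```python
import boto3  # AWS SDK for Python; assumes credentials are already configured

ec2 = boto3.resource("ec2", region_name="us-east-1")

# Launch a batch of identical virtual machines; IDs and sizes are placeholders.
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=10,  # raising this number is all it takes to scale out
)
print([i.id for i in instances])
```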

It’s not the first time hackers have used EC2 to conduct nefarious deeds. In 2011, the Amazon service was used to control a nasty bank fraud trojan. (EC2 has also been a valuable tool to whitehat password crackers.) Plenty of other popular Web services have been abused by online crooks as well. In 2009, for instance, researchers uncovered a Twitter account that had been transformed into a command and control channel for infected computers.

The goal of LinkedIn’s lawsuit is to give lawyers the legal means to carry out “expedited discovery to learn the identity of the Doe defendants.” The success will depend, among other things, on whether the people who subscribed to the Amazon service used payment methods or IP addresses that can be traced.

Source:  arstechnica.com

IT managers are increasingly replacing servers with SaaS

Friday, December 6th, 2013

IT managers want to cut the number of servers they manage, or at least slow the growth, and they may be succeeding, according to new data.

IDC expects that anywhere from 25% to 30% of all the servers shipped next year will be delivered to cloud services providers.

In three years, 2017, nearly 45% of all the servers leaving manufacturers will be bought by cloud providers.

“What that means is a lot of people are buying SaaS,” said Frank Gens, referring to software-as-a-service. “A lot of capacity is shifting out of the enterprise into cloud service providers.”

The increased use of SaaS is a major reason for the market shift, but so is virtualization to increase server capacity. Data center consolidations are eliminating servers as well, along with the purchase of denser servers capable of handling larger loads.

For sure, IT managers are going to be managing physical servers for years to come. But the number will be declining, based on market direction and the experience of IT managers.

Two years ago, when Mark Endry became the CIO and SVP of U.S. operations for Arcadis, a global consulting, design and engineering company, the firm was running its IT in-house.

“We really put a stop to that,” said Endry. Arcadis is moving to SaaS, either to add new services or substitute existing ones. An in-house system is no longer the default, he added.

“Our standard RFP for services says it must be SaaS,” said Endry.

Arcadis has added Workday, a SaaS-based HR management system; replaced an in-house training management system with a SaaS product; and swapped an in-house ADP HR system for a hosted service. The company is also planning a move to Office 365, and will stop running its in-house Exchange and SharePoint servers.

As a result, in the last two years, Endry has kept the server count steady at 1,006 spread across three data centers. He estimates that without the efforts at virtualization, SaaS and other consolidations, the company would have 200 more physical servers.

Endry would like to consolidate the three data centers into one, and continue shifting to SaaS to avoid future maintenance costs, and also the need to customize and maintain software. SaaS can’t yet be used for everything, particularly ERP, but “my goal would be to really minimize the footprint of servers,” he said.

Similarly, Gerry McCartney, CIO of Purdue University is working to cut server use and switch more to SaaS.

The university’s West Lafayette, Ind., campus had some 65 data centers two years ago, many of them small. Data centers at Purdue are defined as any room with additional power and specialized heavy-duty cooling equipment. The university has closed at least 28 of them in the last 18 months.

The Purdue consolidation is the result of several broad directions: increased virtualization, use of higher-density systems, and increased use of SaaS.

McCartney wants to limit the university’s server management role. “The only things that we are going to retain on campus is research and strategic support,” he said. That means that most, if not all, of the administrative functions may be moved off campus.

This shift to cloud-based providers is roiling the server market, and is expected to help send server revenue down 3.5% this year, according to IDC.

Gens says that one trend among users who buy servers is increasing interest in converged or integrated systems that combine server, storage, networking and software. They now account for about 10% of the market and are expected to make up 20% by 2020.

Meanwhile, the big cloud providers are heading in the opposite direction, and are increasingly looking for componentized systems they can assemble, Velcro-like, in their data centers. This has given rise to contract manufacturers, or original design manufacturers (ODMs), mostly overseas, that make these systems for cloud providers.

Source:  computerworld.com

Aruba announces cloud-based Wi-Fi management service

Tuesday, October 1st, 2013

Competes with Cisco-owned Meraki and Aerohive

Aruba Networks today announced a new Aruba Central cloud-based management service for Wi-Fi networks that could be valuable to companies with branch operations, schools and mid-sized networks where IT support is scarce.

Aruba still sells Wi-Fi access points but now is offering Aruba Central cloud management of local Wi-Fi zones, for which it charges $140 per AP annually.

The company also announced the new Aruba Instant 155 AP, a desktop model starting at $895 and available now, and the Instant 225 AP for $1,295, available sometime later this month.

A new 3.3 version of the Instant OS is also available, and a new S1500 mobility access switch with 12 to 48 ports starting at $1,495 will ship in late 2013.

Cloud-based management of Wi-Fi is in its early stages and today constitutes about 5% of a $4 billion annual Wi-Fi market, Aruba said, citing findings by Dell’Oro Group. Aruba said it faces competition from Aerohive and Meraki, which Cisco purchased for $1.2 billion last November.

Cloud-based management of APs is ideally suited for centralizing management of branch offices or schools that don’t have their own IT staff.

“We have one interface for multiple sites, for those wanting to manage from a central platform,” said Sylvia Hooks, Aruba’s director of product marketing. “There’s remote monitoring and troubleshooting. We do alerting and reports, all in wizard-based formats, and you can group all the APs by location. We’re trying to offer sophisticated functions, but presented so a generalist could use them.”

Aruba relies on multiple cloud providers and multiple data centers to support Aruba Central, Hooks said.

The two new APs provide 450 Mbps throughput in 802.11n for the 155 AP and 1.3 Gbps for the 220 AP, Aruba said. Each AP in a Wi-Fi cluster running the Instant OS can assume controller functions with intelligence built in. The first AP installed in a cluster can select itself as the master controller of the other APs and if it somehow fails, the next most senior AP selects itself as the master.
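Aruba has not published the election algorithm, but the seniority-based failover described above can be sketched roughly as follows (the AP names and the "installed first is most senior" rule are assumptions for illustration):

```python
# Minimal sketch of seniority-based master election among clustered APs.
aps = [
    {"name": "ap-1", "installed": 1},  # lowest number = installed first = most senior
    {"name": "ap-2", "installed": 2},
    {"name": "ap-3", "installed": 3},
]

def elect_master(cluster, failed=frozenset()):
    """The most senior AP that is still up assumes the controller role."""
    alive = [ap for ap in cluster if ap["name"] not in failed]
    return min(alive, key=lambda ap: ap["installed"])

print(elect_master(aps)["name"])                    # ap-1 is master
print(elect_master(aps, failed={"ap-1"})["name"])   # ap-2 takes over if ap-1 fails
```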

Source:  networkworld.com

Research shows IT blocking applications based on popularity not risk

Thursday, September 26th, 2013

Tactic leaves less popular but still risky cloud-based apps free to access networks

A new study, based on collective data taken from 3 million users across more than 100 companies, shows that cloud-based apps and services are being blocked based on popularity rather than risk.

A new study from Skyhigh Networks, a firm that focuses on cloud access security, shows that most of the cloud-based blocking within IT focuses on popular, well-known apps, not risk. The problem with this method of security is that often, cloud-based apps that pose little to no risk are prohibited on the network, while those that actually do pose risk are left alone, freely available to anyone who knows of them.

Moreover, the data collected from some 3 million users across 100 organizations shows that IT seriously underestimates the number of cloud-based apps and services running on their network. For example, on average there are about 545 cloud services in use by a given organization, yet if asked IT will cite a number that’s only a fraction of that.

When it comes to the type of cloud-based apps and services blocked by IT, the primary focus seems to be on preventing productivity loss rather than risk, and frequently blocked access centers on name recognition. For example, Netflix is the number one blocked app overall, and services such as iCloud, Google Drive, Dropbox, SourceForge, WebEx, Bit.ly, StumbleUpon, and Skype, are commonly flagged too.

However, while those services do have some risk associated with them, they are also top brands in their respective verticals. Yet, while they’re flagged and prohibited on many networks, services such as SendSpace, Codehaus, FileFactory, authorSTREAM, MovShare, and WeTransfer are unrestricted, even though they actually pose more risk than the commonly blocked apps.
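A risk-based policy, as opposed to a popularity-based one, is straightforward to express. The sketch below uses invented risk scores purely for illustration; they are not Skyhigh's actual ratings:

```python
# Hypothetical risk scores (0 = harmless, 10 = high risk); invented for illustration.
SERVICE_RISK = {
    "Netflix": 3, "Dropbox": 5, "iCloud": 5,
    "SendSpace": 8, "FileFactory": 8, "WeTransfer": 7,
}
RISK_THRESHOLD = 6

def should_block(service: str) -> bool:
    # Unknown services default to a high score rather than slipping through.
    return SERVICE_RISK.get(service, 9) >= RISK_THRESHOLD

print(sorted(s for s in SERVICE_RISK if should_block(s)))
# ['FileFactory', 'SendSpace', 'WeTransfer'] -- the lesser-known services get caught
```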

Digging deeper, the study shows that in the financial services sector, iCloud, and Google Drive are commonly blocked, yet SendSpace and CloudApp, which are direct alternatives, are rarely — if ever — filtered. In healthcare, Dropbox and Memeo (an up and coming file sharing service) are blocked, which is expected. Yet, once again, healthcare IT allows services such as WeTransfer, 4shared, and Hostingbulk on the network.

In the high tech sector, Skype, Google Drive, and Dropbox are commonly expunged from network traffic, yet RapidGator, ZippyShare, and SkyPath are fully available. In manufacturing, where WatchDox, Force.com, and Box are regularly blocked, CloudApp, SockShare, and RapidGator are fully used by employees seeking alternatives.

In a statement, Rajiv Gupta, founder and CEO at Skyhigh Networks, said that the report shows that “there are no consistent policies in place to manage the security, compliance, governance, and legal risks of cloud services.”

Separately, in comments to CSO, Gupta agreed that one of the main causes for this large disconnect in content filtering is a lack of understanding when it comes to the risks behind most cloud-based apps and services (outside of the top brands), and that many commercial content filtering solutions simply do not cover the alternatives online, or as he put it, “they’re not cloud aware.”

This, if anything, proves that risk management can’t be confined within a checkbox and a bland category within a firewall’s content filtering rules.

“Cloud is very much the wild, wild west. Taming the cloud today largely is a whack-a-mole exercise…with your bare hands,” Gupta told us.

Source:  csoonline.com

Schools’ use of cloud services puts student privacy at risk

Tuesday, September 24th, 2013

Vendors should promise not to use targeted advertising and behavioral profiling, SafeGov said

Schools that compel students to use commercial cloud services for email and documents are putting privacy at risk, says a campaign group calling for strict controls on the use of such services in education.

A core problem is that cloud providers force schools to accept policies that authorize user profiling and online behavioral advertising. Some cloud privacy policies stipulate that students are also bound by these policies, even when they have not had the opportunity to grant or withhold their consent, said privacy campaign group SafeGov.org in a report released on Monday.

There is also the risk of commercial data mining. “When school cloud services derive from ad-supported consumer services that rely on powerful user profiling and tracking algorithms, it may be technically difficult for the cloud provider to turn off these functions even when ads are not being served,” the report said.

Furthermore, when interfaces fail to distinguish between ad-free and ad-supported versions, students may be lured from the ad-free services intended for school use to consumer ad-driven services that engage in highly intrusive processing of personal information, according to the report. This could be the case with email, online video, networking and basic search.

Also, contracts used by cloud providers don’t guarantee ad-free services because they are ambiguously worded and include the option to serve ads, the report said.

SafeGov has sought support from European Data Protection Authorities (DPAs), some of which endorsed the use of codes of conduct establishing rules to which schools and cloud providers could voluntarily agree. Such codes should include a binding pledge to ban targeted advertising in schools as well as the processing or secondary use of data for advertising purposes, SafeGov recommended.

“We think any provider of cloud computing services to schools (Google Apps and Microsoft 365 included) should sign up to follow the Codes of Conduct outlined in the report,” said a SafeGov spokeswoman in an email.

Even when ad serving is disabled the privacy of students may still be jeopardized, the report said.

For example, while Google’s policy for Google Apps for Education states that no ads will be shown to enrolled students, there could still be a privacy problem, according to SafeGov.

“Based on our research, school and government customers of Google Apps are encouraged to add ‘non-core’ (ad-based) Google services such as search or YouTube, to the Google Apps for Education interface, which takes students from a purportedly ad-free environment to an ad-driven one,” the spokeswoman said.

“In at least one case we know of, it also requires the school to force students to accept the privacy policy before being able to continue using their accounts,” she said, adding that when this is done the user can click through to the ad-supported service without a warning that they will be profiled and tracked.

This issue was flagged by the French and Swedish DPAs, the spokeswoman said.

In September, the Swedish DPA ordered a school to stop using Google Apps or sign a reworked agreement with Google because the current terms of use lacked specifics on how personal data is being handled and didn’t comply with local data laws.

However, there are some initiatives that are encouraging, the spokeswoman said.

Microsoft’s Bing for Schools initiative, an ad-free, no cost version of its Bing search engine that can be used in public and private schools across the U.S., is one of them, she said. “This is one of the things SafeGov is trying to accomplish with the Codes of Conduct — taking out ad-serving features completely when providing cloud services in schools. This would remove the ad-profiling risk for students,” she said.

Microsoft and Google did not respond to a request for comment.

Source:  computerworld.com

SaaS governance: Five key questions

Monday, September 23rd, 2013

Increasingly savvy customers are sharpening their requirements for SaaS. Providers must be able to answer these key questions for potential clients.

IT governance is linked to security and data protection standards—but it is more than that. Governance includes aligning IT with business strategies, optimizing operational and system workflows and processes, and inserting an IT control structure for IT assets that meets the needs of auditors and regulators.

As more companies move to cloud-based solutions like SaaS (software as a service), regulators and auditors are also sharpening their requirements. “What we are seeing is an increased number of our corporate clients asking us for our own IT audits, which they, in turn, insert into their enterprise audit papers that they show auditors and regulators,” said one SaaS manager.

This places more pressure on SaaS providers, which still do not consistently perform audits, and often will admit that when they do, it is usually at the request of a prospect before the prospect signs with them.

Should enterprise IT and its regulators be concerned? The answer is fast changing to “yes.”

This means that now is the time for SaaS providers to get their governance in order.

Here are five questions that SaaS providers can soon expect to hear from clients and prospects:

#1 Can you provide me with an IT security audit?

Clients and prospects will want to know what your physical facility and IT security audit results have been, in addition to the kinds of security measures that you employ on a day to day basis. They will expect that your security measures are best-in-class, and that you also have data on internal and external penetration testing.

#2 What are your data practices?

How often do you back up data? Where do you store it? If you are using multi-tenant systems on a single server, how can a client be assured that its data (and systems) remain segregated from the systems and data of others that are also running on the same server? Can a client authorize its own security permissions for its data, down to the level of a single individual within the company or at a business partner’s?
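To make the segregation question concrete, here is a minimal sketch of tenant- and user-scoped access control in a multi-tenant store; the data model and names are illustrative assumptions, not any provider's actual API:

```python
# Illustrative multi-tenant store: every record is tagged with a tenant and an ACL.
records = [
    {"tenant": "acme",   "doc": "payroll.xlsx", "acl": {"alice"}},
    {"tenant": "globex", "doc": "forecast.pdf", "acl": {"bob"}},
]

def fetch(tenant: str, user: str):
    """Return only documents in the caller's tenant whose ACL names the caller."""
    return [r["doc"] for r in records if r["tenant"] == tenant and user in r["acl"]]

print(fetch("acme", "alice"))  # ['payroll.xlsx']
print(fetch("acme", "bob"))    # [] -- wrong tenant, and not on the ACL
```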

#3 How will you protect my intellectual property?

You will get clients that will want to develop custom applications or reports for their business alone. In some cases, the client might even develop it on your cloud. In other cases, the client might retain your services to develop a specification defined by the client into a finished application. The question is this: whose property does the custom application become, and who has the right to distribute it?

One SaaS provider takes the position that all custom reports it delivers (even if individual clients pay for their development) belong to the provider—and that the provider is free to repurpose the reports for others. Another SaaS provider obtains up-front funding from the client for a custom application, and then reimburses the client for the initial funding as the provider sells the solution to other clients. In both cases, the intellectual property rights are lost to the client—but there are some clients that won’t accept these conditions.

If you are a SaaS provider, it’s important to understand the industry verticals you serve and how individuals in these industry verticals feel about intellectual property.

#4 What are your standards of performance?

I know of only one SaaS provider that actually penalizes itself in the form of “credits” toward the next month’s bill if it fails to meet an uptime SLA (service level agreement). The majority of SaaS companies I have spoken with have internal SLAs—but they don’t issue them to their customers. As risk management assumes a larger role in IT governance, corporate IT managers are going to start asking their SaaS partners for SLAs with “teeth” in them that include financial penalties.
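A worked example makes the "teeth" concrete. The credit tiers and percentages below are assumptions for illustration, not any provider's actual terms:

```python
# Worked example of an uptime SLA with financial penalties.
CREDIT_TIERS = [        # (minimum monthly uptime, credit as a share of the bill)
    (0.9999, 0.00),
    (0.999,  0.10),
    (0.99,   0.25),
    (0.0,    0.50),
]

def monthly_credit(uptime: float, monthly_bill: float) -> float:
    for floor, credit in CREDIT_TIERS:
        if uptime >= floor:
            return monthly_bill * credit
    return 0.0

print(monthly_credit(0.9995, 10_000))  # 1000.0 -> a 10% credit toward next month's bill
```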

#5 What kind of disaster recovery and business continuation plan do you have?

The recent spate of global natural disasters has nearly every company and their regulators and auditors focused on DR and BC. They will expect their SaaS providers to do the same. SaaS providers that own and control their own data centers are in a strong position. SaaS providers that contract with third-party data centers (where the end client has no direct relationship with the third-party data center) are riskier. For instance, whose liability is it if the third-party data center fails? Do you as a SaaS provider indemnify your end clients? It’s an important question to know the answer to—because your clients are going to be asking it.

Source:  techrepublic.com

Will software-defined networking kill network engineers’ beloved CLI?

Tuesday, September 3rd, 2013

Networks defined by software may require more coding than command lines, leading to changes on the job

SDN (software-defined networking) promises some real benefits for people who use networks, but to the engineers who manage them, it may represent the end of an era.

Ever since Cisco made its first routers in the 1980s, most network engineers have relied on a CLI (command-line interface) to configure, manage and troubleshoot everything from small-office LANs to wide-area carrier networks. Cisco’s isn’t the only CLI, but on the strength of the company’s domination of networking, it has become a de facto standard in the industry, closely emulated by other vendors.

As such, it’s been a ticket to career advancement for countless network experts, especially those certified as CCNAs (Cisco Certified Network Associates). Those network management experts, along with higher level CCIEs (Cisco Certified Internetwork Experts) and holders of other official Cisco credentials, make up a trained workforce of more than 2 million, according to the company.

A CLI is simply a way to interact with software by typing in lines of commands, as PC users did in the days of DOS. With the Cisco CLI and those that followed in its footsteps, engineers typically set up and manage networks by issuing commands to individual pieces of gear, such as routers and switches.

SDN, and the broader trend of network automation, uses a higher layer of software to control networks in a more abstract way. Whether through OpenFlow, Cisco’s ONE (Open Network Environment) architecture, or other frameworks, the new systems separate the so-called control plane of the network from the forwarding plane, which is made up of the equipment that pushes packets. Engineers managing the network interact with applications, not ports.
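In concrete terms, the difference looks something like the sketch below: rather than logging into each switch, an operator posts an abstract policy to a controller API. The endpoint, schema, and credentials here are hypothetical, not any specific vendor's interface:

```python
import requests  # third-party HTTP library; controller URL and schema are hypothetical

# Instead of typing CLI commands into each switch, the operator posts an
# abstract policy to the controller, which programs the forwarding plane.
policy = {
    "name": "web-tier",
    "allow": [{"protocol": "tcp", "port": 443}],
    "applies_to": "all-web-servers",  # the controller maps this to actual ports
}

resp = requests.post(
    "https://sdn-controller.example.com/api/v1/policies",  # placeholder endpoint
    json=policy,
    auth=("admin", "example-password"),
    timeout=10,
)
resp.raise_for_status()
print("policy accepted:", resp.status_code)
```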

“The network used to be programmed through what we call CLIs, or command-line interfaces. We’re now changing that to create programmatic interfaces,” Cisco Chief Strategy Officer Padmasree Warrior said at a press event earlier this year.

Will SDN spell doom for the tool that network engineers have used throughout their careers?

“If done properly, yes, it should kill the CLI. Which scares the living daylights out of the vast majority of CCIEs,” Gartner analyst Joe Skorupa said. “Certainly all of those who define their worth in their job as around the fact that they understand the most obscure Cisco CLI commands for configuring some corner-case BGP4 (Border Gateway Protocol 4) parameter.”

At some of the enterprises that Gartner talks to, the backlash from some network engineers has already begun, according to Skorupa.

“We’re already seeing that group of CCIEs doing everything they can to try and prevent SDN from being deployed in their companies,” Skorupa said. Some companies have deliberately left such employees out of their evaluations of SDN, he said.

Not everyone thinks the CLI’s days are numbered. SDN doesn’t go deep enough to analyze and fix every flaw in a network, said Alan Mimms, a senior architect at F5 Networks.

“It’s not obsolete by any definition,” Mimms said. He compared SDN to driving a car and CLI to getting under the hood and working on it. For example, for any given set of ACLs (access control lists) there are almost always problems for some applications that surface only after the ACLs have been configured and used, he said. A network engineer will still have to use CLI to diagnose and solve those problems.

However, SDN will cut into the use of CLI for more routine tasks, Mimms said. Network engineers who know only CLI will end up like manual laborers whose jobs are replaced by automation. It’s likely that some network jobs will be eliminated, he said.

This isn’t the first time an alternative has risen up to challenge the CLI, said Walter Miron, a director of technology strategy at Canadian service provider Telus. There have been graphical user interfaces to manage networks for years, he said, though they haven’t always had a warm welcome. “Engineers will always gravitate toward a CLI when it’s available,” Miron said.

Even networking startups need to offer a Cisco CLI so their customers’ engineers will know how to manage their products, said Carl Moberg, vice president of technology at Tail-F Systems. Since 2005, Tail-F has been one of the companies going up against the prevailing order.

It started by introducing ConfD, a graphical tool for configuring network devices, which Cisco and other major vendors included with their gear, according to Moberg. Later the company added NCS (Network Control System), a software platform for managing the network as a whole. To maintain interoperability, NCS has interfaces to Cisco’s CLI and other vendors’ management systems.

CLIs have their roots in the very foundations of the Internet, according to Moberg. The approach of the Internet Engineering Task Force, which oversees IP (Internet Protocol), has always been to find pragmatic solutions to defined problems, he said. This detail-oriented “bottom up” orientation was different from the way cellular networks were designed. The 3GPP, which developed the GSM standard used by most cell carriers, crafted its entire architecture at once, he said.

The IETF’s approach lent itself to manual, device-by-device administration, Moberg said. But as networks got more complex, that technique ran into limitations. Changes to networks are now more frequent and complex, so there’s more room for human error and the cost of mistakes is higher, he said.

“Even the most hardcore Cisco engineers are sick and tired of typing the same commands over and over again and failing every 50th time,” Moberg said. Though the CLI will live on, it will become a specialist tool for debugging in extreme situations, he said.

“There’ll always be some level of CLI,” said Bill Hanna, vice president of technical services at University of Pittsburgh Medical Center. At the launch earlier this year of Nuage Networks’ SDN system, called Virtualized Services Platform, Hanna said he hoped SDN would replace the CLI. The number of lines of code involved in a system like VSP is “scary,” he said.

On a network fabric with 100,000 ports, it would take all day just to scroll through a list of the ports, said Vijay Gill, a general manager at Microsoft, on a panel discussion at the GigaOm Structure conference earlier this year.

“The scale of systems is becoming so large that you can’t actually do anything by hand,” Gill said. Instead, administrators now have to operate on software code that then expands out to give commands to those ports, he said.

Faced with these changes, most network administrators will fall into three groups, Gartner’s Skorupa said.

The first group will “get it” and welcome not having to troubleshoot routers in the middle of the night. They would rather work with other IT and business managers to address broader enterprise issues, Skorupa said. The second group won’t be ready at first but will advance their skills and eventually find a place in the new landscape.

The third group will never get it, Skorupa said. They’ll face the same fate as telecommunications administrators who relied for their jobs on knowing obscure commands on TDM (time-division multiplexing) phone systems, he said. Those engineers got cut out when circuit-switched voice shifted over to VoIP (voice over Internet Protocol) and went onto the LAN.

“All of that knowledge that you had amassed over decades of employment got written to zero,” Skorupa said. For IP network engineers who resist change, there will be a cruel irony: “SDN will do to them what they did to the guys who managed the old TDM voice systems.”

But SDN won’t spell job losses, at least not for those CLI jockeys who are willing to broaden their horizons, said analyst Zeus Kerravala of ZK Research.

“The role of the network engineer, I don’t think, has ever been more important,” Kerravala said. “Cloud computing and mobile computing are network-centric compute models.”

Data centers may require just as many people, but with virtualization, the sharply defined roles of network, server and storage engineer are blurring, he said. Each will have to understand the increasingly interdependent parts.

The first step in keeping ahead of the curve, observers say, may be to learn programming.

“The people who used to use CLI will have to learn scripting and maybe higher-level languages to program the network, or at least to optimize the network,” said Pascale Vicat-Blanc, founder and CEO of application-defined networking startup Lyatiss, during the Structure panel.

Microsoft’s Gill suggested network engineers learn languages such as Python, C# and PowerShell.
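As a small illustration of what that scripting looks like in practice, the sketch below uses the third-party netmiko library to push the same configuration change to a list of devices over SSH; the inventory, credentials, and VLAN change are placeholders:

```python
from netmiko import ConnectHandler  # third-party library, assumed to be installed

# Inventory and credentials are placeholders; the point is that one script
# applies the same change everywhere instead of hand-typing CLI on each device.
DEVICES = [
    {"device_type": "cisco_ios", "host": "10.0.0.1", "username": "ops", "password": "example"},
    {"device_type": "cisco_ios", "host": "10.0.0.2", "username": "ops", "password": "example"},
]

VLAN_CHANGE = ["vlan 42", "name guest-wifi"]

for device in DEVICES:
    conn = ConnectHandler(**device)
    conn.send_config_set(VLAN_CHANGE)  # pushes the commands over SSH
    conn.disconnect()
    print(device["host"], "updated")
```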

For Facebook, which takes a more hands-on approach to its infrastructure than do most enterprises, that future is now.

“If you look at the Facebook network engineering team, pretty much everybody’s writing code as well,” said Najam Ahmad, Facebook’s director of technical operations for infrastructure.

Network engineers historically have used CLIs because that’s all they were given, Ahmad said. “I think we’re underestimating their ability.”

Cisco is now gearing up to help its certified workforce meet the newly emerging requirements, said Tejas Vashi, director of product management for Learning@Cisco, which oversees education, testing and certification of Cisco engineers.

With software automation, the CLI won’t go away, but many network functions will be carried out through applications rather than manual configuration, Vashi said. As a result, network designers, network engineers and support engineers all will see their jobs change, and there will be a new role added to the mix, he said.

In the new world, network designers will determine network requirements and how to fulfill them, then use that knowledge to define the specifications for network applications. Writing those applications will fall to a new type of network staffer, which Learning@Cisco calls the software automation developer. These developers will have background knowledge about networking along with skills in common programming languages such as Java, Python, and C, said product manager Antonella Como. After the software is written, network engineers and support engineers will install and troubleshoot it.

“All these people need to somewhat evolve their skills,” Vashi said. Cisco plans to introduce a new certification involving software automation, but it hasn’t announced when.

Despite the changes brewing in networks and jobs, the larger lessons of all those years typing in commands will still pay off for those who can evolve beyond the CLI, Vashi and others said.

“You’ve got to understand the fundamentals,” Vashi said. “If you don’t know how the network infrastructure works, you could have all the background in software automation, and you don’t know what you’re doing on the network side.”

Source:  computerworld.com

Amazon and Microsoft, beware—VMware cloud is more ambitious than we thought

Tuesday, August 27th, 2013

http://cdn.arstechnica.net/wp-content/uploads/2013/08/vcloud-hybrid-service-640x327.png

Desktops, disaster recovery, IaaS, and PaaS make VMware’s cloud compelling.

VMware today announced that vCloud Hybrid Service, its first public infrastructure-as-a-service (IaaS) cloud, will become generally available in September. That’s no surprise, as we already knew it was slated to go live this quarter.

What is surprising is just how extensive the cloud will be. When first announced, vCloud Hybrid Service was described as infrastructure-as-a-service that integrates directly with VMware environments. Customers running lots of applications in-house on VMware infrastructure can use the cloud to expand their capacity without buying new hardware and manage both their on-premises and off-premises deployments as one.

That’s still the core of vCloud Hybrid Service—but in addition to the more traditional infrastructure-as-a-service, VMware will also have a desktops-as-a-service offering, letting businesses deploy virtual desktops to employees without needing any new hardware in their own data centers. There will also be disaster recovery-as-a-service, letting customers automatically replicate applications and data to vCloud Hybrid Service instead of their own data centers. Finally, support for the open source distribution of Cloud Foundry and Pivotal’s deployment of Cloud Foundry will let customers run a platform-as-a-service (PaaS) in vCloud Hybrid Service. Unlike IaaS, PaaS tends to be optimized for building and hosting applications without having to manage operating systems and virtual computing infrastructure.

While the core IaaS service and connections to on-premises deployments will be generally available in September, the other services aren’t quite ready. Both disaster recovery and desktops-as-a-service will enter beta in the fourth quarter of this year. Support for Cloud Foundry will also be available in the fourth quarter. Pricing information for vCloud Hybrid Service is available on VMware’s site. More details on how it works are available in our previous coverage.

Competitive against multiple clouds

All of this gives VMware a compelling alternative to Amazon and Microsoft. Amazon is still the clear leader in infrastructure-as-a-service and likely will be for the foreseeable future. However, VMware’s IaaS will be useful to customers who rely heavily on VMware internally and want a consistent management environment on-premises and in the cloud.

VMware and Microsoft have similar approaches, offering a virtualization platform as well as a public cloud (Windows Azure in Microsoft’s case) that integrates with customers’ on-premises deployments. By wrapping Cloud Foundry into vCloud Hybrid Service, VMware combines IaaS and PaaS into a single cloud service just as Microsoft does.

VMware is going beyond Microsoft by also offering desktops-as-a-service. We don’t have a ton of detail here, but it will be an extension of VMware’s pre-existing virtual desktop products that let customers host desktop images in their data centers and give employees remote access to them. With “VMware Horizon View Desktop-as-a-Service,” customers will be able to deploy virtual desktop infrastructure either in-house or on the VMware cloud and manage it all together. VMware’s hybrid cloud head honcho, Bill Fathers, said much of the work of adding and configuring new users will be taken care of automatically.

The disaster recovery-as-a-service builds on VMware’s Site Recovery Manager, letting customers see the public cloud as a recovery destination along with their own data centers.

“The disaster recovery use case is something we want to really dominate as a market opportunity,” Fathers said in a press conference today. At first, it will focus on using “existing replication capabilities to replicate into the vCloud Hybrid Service. Going forward, VMware will try to provide increasing levels of automation and more flexibility in configuring different disaster recovery destinations,” he said.

vCloud Hybrid Service will be hosted in VMware data centers in Las Vegas, NV, Sterling, VA, Santa Clara, CA, and Dallas, TX, as well as data centers operated by Savvis in New York and Chicago. Non-US data centers are expected to join the fun next year.

When asked if VMware will support movement of applications between vCloud Hybrid Service and other clouds, like Amazon’s, Fathers said the core focus is ensuring compatibility between customers’ existing VMware deployments and the VMware cloud. However, he said VMware is working with partners who “specialize in that level of abstraction” to allow portability of applications from VMware’s cloud to others and vice versa. Naturally, VMware would really prefer it if you just use VMware software and nothing else.

Source:  arstechnica.com

Avoid built-in SSD encryption to ensure data recovery after failure, warns specialist

Monday, July 8th, 2013

Companies wanting to ensure their data is recoverable from solid state disk (SSD) drives should make sure they use third-party encryption tools with known keys rather than relying on devices’ built-in encryption, a data-recovery specialist has advised.

Noting that the shift from mechanical hard drives to flash RAM-based solid state disk (SSD) drives had increased the complexity of data recovery, Adrian Briscoe, general manager of data-recovery specialist Kroll Ontrack, told CSO Australia that the growing use of SSD in business servers, mobile phones, tablets, laptops and even cloud data centres had made recovering data from the devices “a very black or white situation”.

“You either get everything or you don’t get anything at all” from damaged SSD-based equipment, he explained.

“With mechanical hard drives it’s a percentage situation, particularly since large drives are typically not used to capacity. But with SSDs we spend a lot of time trying to find ways of recovering data. The major issue is interacting with the [SSD controller] chips: Although there are only six controller chip makers, there are at least 220 manufacturers of SSD devices, and the way they’re designed is different from one device to the next.”

Many manufacturers, in particular, had taken their own approaches to data security, automatically scrambling the information on SSDs with encryption keys that are stored on the device itself.

That has presented new challenges for the company’s data-recovery engineers, who work from a dedicated data-recovery clean-room in Brisbane where damaged hard drives are regularly rebuilt to the point where their data can be recovered.

The proportion of SSD and flash RAM media going to that cleanroom had grown steadily, from 2.1 per cent of all data recovery jobs in late 2008 to 6.41 per cent of jobs in Q4 2012.

Recovering data from SSDs is already more difficult than from sequential-write hard drives because SSD-stored data is distributed throughout the flash RAM cells by design. Once SSD-stored keys are made inaccessible by damage to the device, however, recovering the data becomes far more complicated – and chances of getting any of it back plummet.

“SSD devices do have encryption on them, and we are recommending people not use hardware encryption on an SSD if they are wanting to ever recover data from that device,” Briscoe explained, suggesting that users instead run computer-based software like the open-source TrueCrypt, whose keys can be managed by the user rather than internally by the drive itself.

“By having encryption turned on, an SSD with a hardware key is going to fail any data recovery effort,” he continued. “We are not hackers, and we can’t get into encrypted data. Instead, we’re recommending that people use something that holds the key outside the device.”
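The principle, keys generated and held by the user rather than baked into the drive, can be illustrated with the third-party Python cryptography package. This is a sketch of software encryption with an externally held key, not TrueCrypt itself:

```python
from cryptography.fernet import Fernet  # third-party 'cryptography' package

# The key is generated and stored by the user (e.g. in a corporate key vault),
# never inside the SSD, so data copied off a damaged drive can still be decrypted.
key = Fernet.generate_key()  # keep this somewhere other than the drive it protects
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"contents of quarterly-results.xlsx")
assert cipher.decrypt(ciphertext) == b"contents of quarterly-results.xlsx"
```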

Many users had yet to appreciate the complexity that SSD poses: in a November 2012 customer survey, just 31 per cent said they were aware of the complexity of SSD-based encryption, 48 per cent said there was no additional risk posed by using SSDs, and an additional 38 per cent said they didn’t know.

The SSD challenge isn’t limited to smartphone-wielding users, however: as data-centre operators increasingly turn to SSD to boost the effective speed of their data-storage operations, Briscoe warned that a growing number of the company’s recovery operations were involving data lost to cloud-computing operators.

“A lot of vendors are using hybrid solutions with a bank of SSDs in a storage area network, then write data to [conventional] drives,” he said.

“We’re seeing more and more instances of cloud providers losing data: they rely very much on snapshots, and if something happens to the data – if there is corruption to the operating system or some type of user error – we are having more and more cloud providers coming to us with data loss.”

Source:  cso.com

Avoiding cloud security pitfalls

Monday, July 8th, 2013

Enterprises need to take responsibility for cloud security and work together with their service providers if they are going to avoid the pitfalls, said Telstra enterprise and infrastructure services IT director Lalitha Biddulph.

Speaking at the Datacentre Dynamics Converged conference in Sydney, Biddulph told delegates that there were a number of risks that enterprises should be aware of when using cloud services such as data loss and leakage.

“This is a platform on which you are putting data in a multi-tenanted utility model. It is possible that the service could be misconfigured and your data may be compromised,” she said.

In addition, cross site scripting could create opportunities for cyber criminals to hijack credentials from the cloud provider.

“Watch out for people who are constantly scanning the system for vulnerabilities. Your cloud provider will need to have good security monitoring because it is a shared environment,” Biddulph said.

She added that organisations need to watch out for cloud lock-in.

“A lot of cloud services are proprietary and once you move your data in there, you may have given away your right to shift data by choosing to use a particular service.

“Due diligence is something that you need to be aware of. A friend of mine has decided to put 40,000 employees’ email onto Google Mail because he has done the assessment and deemed it an acceptable risk,” she said.

When using a cloud provider, Biddulph said that backup services are critical.

“One method we use at Telstra is putting data in one state and backing it up in another state. Alternatively, you could have your data with one cloud provider and back it up with another.”

She pointed out that cloud computing offers businesses a number of benefits including reduced costs, speed to market and flexibility.

“Due to the proliferation of businesses owned by Telstra, we have a true hybrid cloud model. We use applications that run on proprietary hardware. We also have private cloud and dedicated hosting,” she said.

Source:  pcadvisor.co

BYOD blues: What to do when employees leave

Tuesday, July 2nd, 2013
The bring your own device (BYOD) trend is gaining steam, thanks to the cost benefits and increased productivity that can come from allowing employees to provision their own technology. Mobile workers are more likely to put in more hours, so if your employees want to buy their own equipment and do more work on their own time, it’s a win for the company.

At least, a BYOD-practicing workforce seems like a win right until you have to let one of your BYOD workers go and there’s no easy way to ask if you can please see their iPad for a moment because you want to check if there’s anything on their personal device that doesn’t belong to them.

As more workplaces embrace BYOD practices, they’ll increasingly confront the question of how to balance the benefits of a self-provisioned workforce against the risks of company assets walking out the door when workers are let go. What can IT departments currently do to minimize risk when BYOD-practicing employees are laid off? What practices and policies can they put in place to make future departures as smooth as possible?

BYOD layoffs: What you can do now

It’s a fact that some data always walks with the employees: email addresses of business contacts, or knowledge of the organization’s key business practices and initiatives. In the old days, people slipped files into their briefcases. Digital files just mean that copying and moving information can be done quickly.

Skip through the denial and anger stages and just accept that some data is inherently more vulnerable than others, and it’s that vulnerable data, such as emails, that will be walking out the door.

“There’s no definitive way to get on to a [departing] employee’s personal devices and undo what’s been done,” says Joshua Weiss, CEO of mobile application development firm TeliApp. “And if your workers have been using off-the-shelf solutions like Dropbox, it’s virtually impossible with some sort of exit interview.”

Rick Veague, CTO of IFS Technologies, says that you can sift structured communications data into three distinct categories: email, files that could contain company information, and mobile data. Once you’ve sifted out the data, you can figure out whether your soon-to-be-ex employee is really in danger of walking out with the company’s assets on an iPad.

“Mobile data is a big problem, so it’s time to start compartmentalizing risks. This way, you can find a balance between the benefits of a [BYOD] workforce and the risks,” Veague says.

And how can your IT department manage the risks without cutting into the perceived BYOD benefits? By planning ahead for the next employee departure.

BYOD layoffs: Plan for the future

If your company is in the happy position of not having to lay anyone off in the near future, then you have time to get a game plan together. Here is a rundown of policies and practices you should consider implementing to make the unfortunate event go more smoothly, while mitigating company risk.

Have a written BYOD policy: This is a simple idea in theory, but not an easy one in practice. TeliApp’s Weiss says that it took his company three months to come up with their current policy. “It started off as a simple paragraph and turned into what felt like a three-page demand letter,” he says.

Why did it take so long? TeliApp treated it like a software development project. After that one paragraph, Weiss and his management team began compiling what-if scenarios and incorporating them into the policy — what Weiss calls the policy’s “alpha testing.” Once the team discovered they hadn’t thought of everything, they expanded the BYOD policy to include the real-life situations that arose. After this beta period, the policy was set.

For managers looking to establish a BYOD policy, here are some of the issues to consider:

  • Defining “acceptable business use” for the device, such as which activities are determined to directly or indirectly benefit the business.
  • Defining the limits of “acceptable personal use” on company time, such as whether employees will be able to play Angry Birds or load their Kindle’s ebook collection.
  • Defining which apps are allowed or which are not.
  • Defining which company resources (email, calendars, and so on) may be accessed via a personal device.
  • Defining which behaviors won’t be tolerated under the rubric of doing business, such as using the device to harass others on company time, or texting and checking email while driving.
  • Listing which devices IT will allow to access their networks (it helps to be as specific as possible with models, operating systems, and versions).
  • Determining when devices are presented to IT for proper configuration of employment-specific applications and accounts on the device.
  • Outlining the reimbursement policies for costs, such as the purchase of devices and/or software, the worker’s mobile coverage, and roaming charges.
  • Listing security requirements for devices that must be met before personal devices are allowed to connect to company networks.
  • Listing the what-ifs, including what to do if a device is lost or stolen, what to expect after five failed logins to the device or to a specific application, and what liabilities and risks the employee assumes for physical maintenance of the device.

Consider other employee policies

Most companies have established noncompete, confidentiality, and nondisclosure agreements. With these legal protections in place, Weiss says, your employees are already constrained from walking off with a company’s intellectual property and using it for their personal gain.

Monitor where your data is going

This is where IT can shine. By setting up shared company file servers, as well as protocols for who can access files and how, IT can monitor people accessing any locally hosted files.

Weiss says that TeliApp runs on the understanding that anything on the company server is company property, and so users don’t copy files to their desktops. If someone does copy a file, the action is immediately logged and remedied. “Everyone understands the policy after their first well-meaning screw-up,” Weiss says.
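A hypothetical sketch of that kind of audit hook might look like the following; the file-server integration, users, and paths are invented for illustration:

```python
import logging
from datetime import datetime, timezone

# Hypothetical hook on the file server: the copy itself isn't blocked, but it
# leaves an audit trail that can be acted on immediately.
logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("file-audit")

def on_file_copied(user: str, source: str, destination: str) -> None:
    audit.info("COPY user=%s src=%s dest=%s at=%s",
               user, source, destination,
               datetime.now(timezone.utc).isoformat())
    # ...followed by the "remedy" step: a reminder to the user, a manager alert, etc.

on_file_copied("jdoe", "/shares/finance/budget.xlsx", "C:/Users/jdoe/Desktop")
```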

Try to keep data off local devices

When choosing applications and services, make sure a lot of data can’t be downloaded and saved to local devices. One of the keys to minimizing risk in a BYOD workplace is restricting user access to networks and central repositories. You’ll want to find tools that can sync all user data to a central account that an administrator controls access to. You’ll also want to find ways to place intermediary technologies between the company network and employee devices. It will ultimately reduce IT’s workload and add a layer of security to the company’s networks.

“If you mobile-enable users and they have access to your enterprise data in an unrestricted fashion, you have to actively manage that device, which is difficult to do,” Veague says.

One example of a cloud-based service that can minimize risk to the BYOD workplace: YouMail. The voicemail service stores all its customers’ voicemails and call history in the cloud, so an employer who has YouMail as its voicemail standard will retain contact information and voicemail content even after an individual user leaves. The downside? In the current business-class offerings, users can still access their old accounts. However, in a forthcoming enterprise product, which is still in private beta, but aiming for customer deployment by the end of the summer, an administrator will be able to activate and deactivate individual user accounts.

You’ll also want tools that let an administrator remotely wipe or delete an account. This way, former workers can maintain their device, yet they will no longer have access to their old accounts in certain apps.

Find applications that minimize the amount of data that’s downloaded to any mobile device, Veague suggests, and follow this rule of thumb: “If you can’t access the app, you can’t access the data.” If this rule is followed, then all an IT admin has to do when an employee leaves is shut off the individual user account; the data remains safe.
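The "no account, no app, no data" rule reduces to a single gate in code. The sketch below is a hypothetical illustration, not any particular product's API:

```python
# Minimal sketch of "shut off the account and the data stays put".
ACTIVE_ACCOUNTS = {"alice", "bob"}

def fetch_report(user: str) -> str:
    if user not in ACTIVE_ACCOUNTS:
        raise PermissionError("account deactivated; nothing is served to this device")
    return "quarterly report contents"

ACTIVE_ACCOUNTS.discard("bob")  # employee departs: the admin flips one switch
print(fetch_report("alice"))    # still works for current staff
# fetch_report("bob")           # would now raise PermissionError
```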

Do sweeps regularly

One of the downsides of a self-provisioning workforce is that not every worker is going to be as assiduous about application updates, security measures, and backups as a dedicated IT professional is. So have IT step in and do regular security check-ups on any devices that are allowed to access company networks. Because security requirements will be written into any BYOD policy, users will know that their devices are going to be scanned and updated regularly.

Hire carefully

This last step may be out of IT’s hands, but it is often the first step in avoiding any problems. Weiss says, “You have to know who you’re hiring — it all comes down to that. If you don’t think a person’s trustworthy, regardless of what their credentials are, then don’t hire them.”

With these steps in place, the risks of letting employees provision their own hardware are managed in a way that lets IT professionals still maintain their primary responsibilities to a company without being perceived as an obstacle for mobile-mad employees to work around. And being seen as business-friendly while also protecting the business? That’s the real win-win when you think about employees’ departures as you’re bringing both them and their devices on board.

Source:  pcworld.com

Three things to consider before buying into Disaster Recovery as a Service

Tuesday, July 2nd, 2013

Disaster Recovery as a Service (DRaaS) backs up the whole environment, not just the data.

“Most of the providers I spoke with also offer a cloud-based environment to spin up the applications and data to when you declare a disaster,” says Karyn Price, Industry Analyst, Cloud Computing Services, Frost & Sullivan. This enables enterprises to keep applications available.

Vendors offer DRaaS to increase their market share and revenues. Enterprises, especially small businesses, are interested in the inexpensive yet comprehensive DR solution DRaaS offers. There are cautionary notes and considerations, too, that demand the smart business’s attention before and after buying into DRaaS.

DRaaS market drivers, vendors and differentiation

DRaaS is a wise move for cloud vendors hungry for a bigger slice of the infrastructure market.

“DRaaS is the first cloud service to offer value for an entire production infrastructure, all the servers and all the storage,” says John P. Morency, Research Vice President, Gartner. This opens up more of the market, providing much higher revenues for vendors.

DRaaS creates new revenue streams and opportunities for vendors, too.

“They want to bring comprehensive recovery to a wider variety of business customers,” Price says. Where only an enterprise could afford a full-blown BC/DR solution before, now the cloud offers a more affordable option for BC/DR to the small business.

Vendors leveraging DRaaS include Verizon TerreMark, Microsoft and Symantec (a joint offering), IBM, Sungard and NTT Data, Earthlink, Windstream, Bluelock, Virtustream, Verastream, EVault, Hosting.com and a trove of smaller contenders seeking to differentiate themselves in the marketplace, according to Price and Morency.

“While most of the DRaaS vendors are relatively similar in their cost structures and recovery time objectives, the recovery point objective is a differentiator between vendor offerings,” says Price. Whereas Dell and Virtustream each report RPOs of 5 minutes, according to Frost & Sullivan’s Karyn Price, Windstream reports RPOs of 15 minutes to 1 hour, depending on the DRaaS service the customer chooses.
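To see why the RPO matters, a quick back-of-the-envelope calculation shows how much work is at risk at different recovery points; the transaction rate below is an assumption, not a figure from any of the vendors named above:

```python
# Rough illustration of what an RPO means in practice.
TRANSACTIONS_PER_MINUTE = 1_200  # assumed workload

for rpo_minutes in (5, 15, 60):
    at_risk = rpo_minutes * TRANSACTIONS_PER_MINUTE
    print(f"RPO of {rpo_minutes} min: up to {at_risk:,} transactions lost in a worst case")
```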

DRaaS: No drab solution

With so much to offer, DRaaS has a bright future in the BC/DR realm. Companies with no tolerance for downtime, those looking to enter the cloud for the first time, those seeking a complete DR solution and those that have infrastructure in severe weather risk locations are interested in DRaaS. DRaaS is particularly interesting to enterprises with minimal tolerance for downtime.

“Most of the DRaaS vendors I speak with offer recovery times of four hours or fewer,” says Price.

DRaaS is an option for enterprises that want to test the cloud for the first time.

“If you are in the middle of a disaster and suddenly you have no infrastructure to restore to, would you rather have a cloud-based solution that maybe you would have been wary of as your primary option or would you rather have nothing?” asks Price.

Small businesses see DRaaS as a way to finally afford a broader BC/DR solution.

“DRaaS can minimize or even completely eliminate the requirement for company capital in order to implement a DR solution,” says Jerry Irvine, CIO, Prescient Solutions and member of the National Cyber Security Task Force. Since DRaaS is a cloud solution, businesses can order it at almost any capacity, making it a more cost-effective fit for smaller production environments. Of the 8,000 DRaaS production instances that Gartner estimates exist today, 85 to 90 percent are smaller instances of three to six production applications, Morency says. These deployments typically run between five and sixty VMs, with no more than two to five TB of associated production storage, Morency adds.

Businesses hit with increasingly severe weather catastrophes are very interested in DRaaS.

“When you look at the aftermath of events like the tsunami in Japan, there is a lot more awareness and a lot more pressure from the board level to do disaster recovery,” says Morency. This pressure and the affordability of DRaaS can tip the scales for many a small business.

Proceed with caution

Enterprises and small businesses considering DRaaS face a lot of due diligence before choosing a solution and a lot of work afterwards.

“It’s not like you just upload all the work to the service provider,” says Richard P. Tracy, CSO, Telos Corporation. If the enterprise requires replication to a cloud environment supported exclusively by SAS 70 data centers, then the DRaaS provider had better be able to demonstrate that it has SAS 70 data centers and agree to keep data in the cloud only in those facilities.

Depending on the industry, the customer must confirm that the DRaaS provider meets operational standards for HIPAA, GLBA, PCI-DSS, or some of the ISO standards as needed.

“You don’t want to trust that they do just because it says so on their website,” says Tracy.

Due to the nature of the cloud, DRaaS can offer innate data replication and redundancy for reliable backup and recovery. But unless specified otherwise, a DRaaS contract may cover only replication of core systems for failover; backup may not be included.

“Many organizations define their backup systems or data repositories as critical solutions for the DR facilities to replicate,” says Irvine. This provides for replication of core systems and data backup to DRaaS.

And once the enterprise’s data successfully fails over to the DRaaS service, at some point the enterprise and the service have to roll it back to the enterprise infrastructure.

“You have to make sure that the DRaaS service will support you in that process,” says Tracy. There are processes, procedures, and metrics related to exit strategies for outsourcing that the customer must define during the disaster recovery planning process. These will depend on the organization. These procedures set the timing for how soon data restoration to the primary location takes place and how soon the company switches the systems back on.

“The SLA should define the DRaaS provider’s role in that,” says Tracy. “It’s not just failover, it’s recovery.”

DRaaS: Worth consideration

DRaaS can replicate infrastructure, applications and data to the cloud to enable full environmental recovery. The price is right and the solution is comprehensive. Still in its early stages, DRaaS is by all signs worth consideration, especially with the number and types of offerings available, and the obvious market need.

Source:  networkworld.com

Largest ever DDoS attack directed at financial firm, Prolexic reports

Tuesday, June 4th, 2013

DDoS attackers attempted to bring down an unnamed financial services firm earlier this week using one of the largest traffic bombardments ever recorded, mitigation firm Prolexic has reported.

The 167 Gbps peak attack hit what is being described only as a “realtime financial exchange” on 27 May using the same DNS reflection method used to strike anti-spam organisation Spamhaus in late March, the company said.

Although smaller than the Spamhaus assault, it still registered as the largest ever defended by Prolexic in its 10-year history, which must on its own make it one of the largest ever recorded.

Despite its size, Prolexic had been able to distribute the traffic across four sites in Hong Kong, San Jose, Ashburn in Virginia, and London, with the latter bearing the greatest burden at a peak of 90Gbps.

“This was a massive attack that made up in brute force what it lacked in sophistication,” commented Prolexic’s CEO, Scott Hammack.

“Because of the proactive DDoS defense strategies Prolexic had put in place with this client, no malicious traffic reached its website and downtime was avoided. In fact, the company wasn’t aware it was under attack.”

The fact that the attacked business was a customer of Prolexic is one important difference between the incident and what happened to Spamhaus.

When Spamhaus was assaulted by a vast 300Gbps peak DNS reflection attack, it engaged the help of a content delivery network (CDN) called CloudFlare to help defend itself. The attackers then turned their fire on the Tier-1 providers used by CloudFlare in an attempt to cause maximum harm.

The attackers picking on the financial services firm would have known that Prolexic’s mitigation stood between themselves and the target from the start, raising the possibility that they were testing the ability of this sort of attack to overload dedicated defenses.

“It’s only a matter of time, possibly by the end of this quarter, before the 200Gbps marker is crossed,” predicted Hammack.

The firm was investing in the infrastructure necessary to cope with up to 1.2Tbps peak traffic loads by the end of 2013, he added.

DNS reflection (or amplification) attacks have become a new front in DDoS tactics in recent times despite being widely discussed for years. One possibility is that they are partly a reaction to the growth of DDoS mitigation firms and the desire of attackers to boost the size of their activity using open responders.
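
The appeal of DNS reflection comes down to simple arithmetic: a small spoofed query draws a much larger response aimed at the victim. The sketch below uses illustrative packet sizes, not measurements from this incident, to show the effect.

    # Back-of-the-envelope math for DNS reflection/amplification.
    # Packet sizes are assumptions, not figures from the Prolexic report.
    query_bytes = 64        # small spoofed DNS query sent to an open resolver
    response_bytes = 3000   # much larger answer the resolver sends to the victim

    amplification = response_bytes / query_bytes
    attacker_gbps = 1.0     # bandwidth the attacker actually controls
    victim_gbps = attacker_gbps * amplification

    print(f"Amplification factor: ~{amplification:.0f}x")
    print(f"{attacker_gbps:.0f} Gbps of spoofed queries becomes ~{victim_gbps:.0f} Gbps at the target")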

As EU security agency ENISA pointed out after the Spamhaus incident, the vulnerabilities exploited by the attackers were addressed by IETF best practice recommendations as far back as the year 2000.

Source:  networkworld.com

Microsoft plugs security systems into its worldwide cloud

Thursday, May 30th, 2013

In a move designed to starve botnets where they live, Microsoft launched a program on Tuesday to plug its security intelligence systems into its global cloud, Azure.

The new offering, known as the Cyber Threat Intelligence Program, or C-TIP, will enable ISPs and CERTs to receive information on infected computers on their systems in near-real time, Microsoft said.

“All too often, computer owners, especially those who may not be using up-to-date, legitimate software and anti-malware protection, unwittingly fall victim to cybercriminals using malicious software to secretly enlist their computers into an army of infected computers known as a botnet, which can then be used by cybercriminals for a wide variety of attacks online,” Microsoft explained in a blog post.

Microsoft has been a leader in the industry in taking down botnets. Its takedown targets include zombie armies enlisted with malware such as Conficker, Waledac, Rustock, Kelihos, Zeus, Nitol and Bamital.

Once a network is taken down, though, its minions must be sanitized. That’s what ISPs and CERTs do with the information they receive from Project MARS (Microsoft Active Response for Security), which is now plugged into Azure.

“While our clean-up efforts to date have been quite successful, this expedited form of information sharing should dramatically increase our ability to clean computers and help us keep up with the fast-paced and ever-changing cybercrime landscape,” Microsoft noted.

“It also gives us another advantage: cybercriminals rely on infected computers to exponentially leverage their ability to commit their crimes, but if we’re able to take those resources away from them, they’ll have to spend time and money trying to find new victims, thereby making these criminal enterprises less lucrative and appealing in the first place,” it added.

Following a botnet takedown, its zombies must be purged in a “remediation phase” of the operation. “The remediation phase is designed to clean up the systems that are infected after the command and control infrastructure is taken over,” said Jeff Williams, director of security strategy at Dell SecureWorks.

“To leave the infected systems would allow criminals to use the existing malware to create a new botnet,” he told CSO. “It’s a critical component of takedown work to remediate the infected systems.”

In addition to allowing Microsoft to feed remediation information to ISPs and CERTs quickly, Azure allows Microsoft to scale up its botnet busting efforts without a hiccup.

Currently, Microsoft manages hundreds of millions of events a day with its security intelligence systems. It foresees that number climbing into the tens to hundreds of billions in the future, noted T.J. Campana, director of the Microsoft Cybercrime Center.

Now the only data Microsoft is putting into its intelligence systems is MARS program data. “As we increase the number of takedowns we do per year, the size of the attacks and work with more partners around the world, we’ll be processing a much larger set of IP addresses and events per day,” Campana said.

Azure allows Microsoft to accommodate that expansion. “The ability to have that kind of elasticity dynamically through Azure has been a huge advantage to us,” he added.

For one security analyst, the move to Azure was long overdue. “It’s something Microsoft should be proactive about because it has millions of endpoints from which to collect this information,” Gartner security analyst Avivah Litan told CSO.

“This is long overdue,” she added. “They should have done something like this a couple of  years ago.”

Source:  networkworld.com

Microsoft rolls out standards-compliant two-factor authentication

Thursday, April 18th, 2013

Microsoft today announced that it is rolling out optional two-factor authentication to the 700 million or so Microsoft Account users, confirming last week’s rumors. The scheme will become available to all users “in the next few days.”

It works essentially identically to existing schemes already available for Google accounts. Two-factor authentication augments a password with a one-time code that’s delivered either by text message or generated in an authentication app.

Computers that you trust can be allowed to skip the second factor and just use a password, and application-specific passwords can be generated for interoperating with software that doesn’t support two-factor authentication.

Microsoft has its own authentication app for Windows Phone. It isn’t offering apps for iOS or Android—however, it doesn’t need to. The system it’s using is standard, specified in RFC 6238, and Google uses the same system. As a result, Google’s own Authenticator app for Android can be used to authenticate Microsoft Accounts. And vice versa: Microsoft’s Authenticator app for Windows Phone works with Google accounts.
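
For readers curious what an RFC 6238 code actually involves, the sketch below implements the time-based one-time password algorithm in plain Python. It is an illustration of the standard, not Microsoft’s or Google’s actual code, and the secret shown is a made-up example.

    # Minimal RFC 6238 (TOTP) sketch: HMAC-SHA1 over a 30-second time counter,
    # truncated to a 6-digit code. The secret below is a placeholder example.
    import base64, hashlib, hmac, struct, time

    def totp(secret_base32, interval=30, digits=6):
        key = base64.b32decode(secret_base32, casefold=True)
        counter = int(time.time()) // interval          # number of 30-second steps
        msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % (10 ** digits)).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))    # prints the current 6-digit code

Both the server and the phone app compute the same code from the shared secret and the current time, which is why the codes roll over every 30 seconds.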

Source:  arstechnica.com

VMware’s hybrid cloud gambit will rely on its public cloud partners

Friday, March 22nd, 2013

VMware has been rather cagey about its plans to launch its own hybrid cloud service, announced at a recent Strategic Forum for Institutional Investors. Companies are usually more than happy to talk journalists’ ears off about a new product or service, but when InfoWorld reached out to VMware about this one, a spokesman said the company had nothing further to share beyond what it presented in a sparse press release and a two-hour, multi-topic webcast.

In a nutshell, here’s what VMware has revealed: It will offer a VMware vCloud Hybrid Service later this year, designed to let customers seamlessly extend their private VMware clouds to public clouds run by the company’s 220 certified vCloud Services Providers. Although the public component would run on partners’ hardware, VMware employees would manage the hybrid component and the underlying software.

For example, suppose Company X is running a critical cloud application on its own private, VMware-virtualized cloud. The company unexpectedly sees a massive uptick in demand for the service. Rather than having to hustle to install new hardware, Company X could leverage VMware’s hybrid service to consume public-cloud resources on the fly. In the process, Company X would not have to make any changes to the application, the networking architecture, or any of the underlying policies, as VMware CEO Pat Gelsinger described the service.

“[T]he power of what we’ll uniquely be delivering is this ability to not change the app, not change the networking, not change the policies, not change the security, and be able to run it private or public. That you could burst through the cloud, that you could develop in the cloud, deploy internally, that you could DR in the cloud, and do so without changing the apps, with that complete flexibility of a hybrid service,” he said.

One of the delicate points in this plan is the question of how it will impact the aforementioned 220 VSPP partners, which include such well-known companies as CDW, Dell, and AT&T as well as lesser-known providers like Lokahi and VADS. Would VMware inserting itself into the mix result in the company stepping on its partners’ toes and eating up some of their cloud-hosting revenue?

Gelsinger did take pains to emphasize that the hybrid service would be “extremely partner-friendly.” “Every piece of intellectual property that we’re developing here we’re making available to VSPP partners,” he said. “Ultimately, we see this as another tool for business agility.”

451 Research Group analyst Carl Brooks took an optimistic view on the matter. “Using VSPP partner’s data centers and white-labeling existing infrastructure would both soothe hurt feelings and give VMware an ability to source and deploy new cloud locations extremely quickly, with minimal investment,” he said.

Gartner Research VP Chris Wolf, however, had words of caution for VMware as well as partner providers. “VMware needs to be transparent with provider partners about where it will leave them room to innovate. Of course, partners must remember that VMware reserves the right to change its mind as the market evolves, thus potentially taking on value adds that it originally left to its partners. SP partners are in a tough spot. VMware has brought many of them business, and they have to consider themselves at a crossroads,” he wrote.

Indeed, VMware’s foray into the hybrid cloud world isn’t sitting well with all of its partners. Tom Nats, managing partner at VMware service provider Bit Refinery, told CRN that the vCloud Hybrid Service is not a welcome development. “Many partners have built up [their infrastructure] and stayed true to VMware, and now all of a sudden we are competing with them,” he said.

As to customers: Will they feel comfortable entrusting their cloud efforts in part to VMware and in part to one or more VMware partners? Building and managing a cloud is complex enough without adding new parties into the mix. One reason Amazon Web Services has proven such a successful public cloud offering is that it falls under the purview of one entity. When a problem arises, there’s just one entity to call and one throat to choke. Under VMware’s hybrid cloud model, customers may need to scrutinize SLAs carefully to determine which party would be responsible for which instances of downtime. Meanwhile, VMware would have to be vigilant in ensuring that its partners were all running their respective clouds properly.

Source:  infoworld.com

The 49ers’ plan to build the greatest stadium Wi-Fi network of all time

Tuesday, March 19th, 2013

When the San Francisco 49ers’ new stadium opens for the 2014 NFL season, it is quite likely to have the best publicly accessible Wi-Fi network a sports facility in this country has ever known.

The 49ers are defending NFC champions, so 68,500 fans will inevitably walk into the stadium for each game. And every single one of them will be able to connect to the wireless network, simultaneously, without any limits on uploads or downloads. Smartphones and tablets will run into the limits of their own hardware long before they hit the limits of the 49ers’ wireless network.

Until now, stadium executives have said it’s pretty much impossible to build a network that lets every single fan connect at once. They’ve blamed this on limits in the amount of spectrum available to Wi-Fi, despite their big budgets and the extremely sophisticated networking equipment that largesse allows them to purchase. Even if you build the network perfectly, it would choke if every fan tried to get on at once—at least according to conventional wisdom.

But the people building the 49ers’ wireless network do not have conventional sports technology backgrounds. Senior IT Director Dan Williams and team CTO Kunal Malik hail from Facebook, where they spent five years building one of the world’s largest and most efficient networks for the website. The same sensibilities that power large Internet businesses and content providers permeate Williams’ and Malik’s plan for Santa Clara Stadium, the 49ers’ nearly half-finished new home.

“We see the stadium as a large data center,” Williams told me when I visited the team’s new digs in Santa Clara.

I had previously interviewed Williams and Malik over the phone, and they told me they planned to make Wi-Fi so ubiquitous throughout the stadium that everyone could get on at once. I had never heard of such an ambitious plan before—how could this be possible?

Today’s networks are impressive—but not unlimited

An expansive Wi-Fi network at this year’s Super Bowl in the New Orleans Superdome was installed to allow as many as 30,000 fans to get online at once. This offloaded traffic from congested cellular networks and gave fans the ability to view streaming video or do other bandwidth-intensive tasks meant to enhance the in-game experience. (Don’t scoff—as we’ve noted before, three-plus-hour NFL games contain only 11 minutes of actual game action, or a bit more if you include the time quarterbacks spend shouting directions at teammates at the line of scrimmage. There is plenty of time to fill up.)

Superdome officials felt a network allowing 30,000 simultaneous connections would be just fine, given that the previous year’s Super Bowl saw only 8,260 simultaneous connections at its peak. They were generally right: the network performed well, even staying up during part of the game’s power outage.

The New England Patriots installed a full-stadium Wi-Fi network this past season as well. It was never used by more than 10,000 or so people simultaneously, or by more than 16,000 people over the course of a full game. “Can 70,000 people get on the network at once? The answer to that is no,” said John Brams, director of hospitality and venues at the Patriots’ network vendor, Enterasys. “If everyone tried to do it all at once, that’s probably not going to happen.”

But as more fans bring smart devices into stadiums, activities like viewing instant replays or live camera angles available only to ticket holders will become increasingly common. It’ll put more people on the network at once and require bigger wireless pipes. So if Williams and Malik have their way, every single 49ers ticket holder will enjoy a wireless connection faster than any wide receiver sprinting toward the end zone.

“Is it really possible to give Wi-Fi to 68,500 fans at once?” I asked. I expected some hemming and hawing about how the 49ers will do their best and that not everyone will ever try to use the network at once anyway.

“Yes. We can support all 68,500,” Williams said emphatically.

How?

“How not?” he answered.

Won’t you have to limit the capacity each fan can get?

Again, absolutely not. “Within the stadium itself, there will probably be a terabit of capacity. The 68,500 will not be able to penetrate that. Our intentions in terms of Wi-Fi are to be able to provide a similar experience that you would receive with LTE services, which today is anywhere from 20 to 40 megabits per second, per user.

“The goal is to provide you with enough bandwidth that you would saturate your device before you saturate the network,” Williams said. “That’s what we expect to do.”

Fans won’t be limited by what section they’re in, either. If the 49ers offer an app that allows fans to order food from their seats, or if they offer a live video streaming app, they’ll be available to all fans.

“The mobile experience should not be limited to, ‘Hey, because you sit in a club seat you can see a replay, but because you don’t sit in a club seat you can’t see a replay,'” Malik said. “That’s not our philosophy. Our philosophy is to provide enhancement of the game experience to every fan.” (The one exception would be mobile features designed specifically for physical features of luxury boxes or club seats that aren’t available elsewhere in the stadium.)

It’s the design that counts

Current stadium Wi-Fi designs, even with hundreds of wireless access points distributed throughout a stadium, often can support only a quarter to a half of fans at once. They also often limit bandwidth for each user to prevent network slowdowns.

The Patriots offer fans a live video and instant replay app, with enough bandwidth to access video streams, upload photos to social networks, and use the Internet in general. Enterasys confirmed to Ars that the Patriots do enforce a bandwidth cap to prevent individual users from overloading the network, but Enterasys would not say exactly how big the cap is. The network has generally been a success, but some users of the Patriots app have taken to the Android app store to complain about the stadium Wi-Fi’s performance.

According to Williams, most current stadium networks are limited by a fundamental problem: sub-optimal location of wireless access points.

“A typical layout is overhead, one [access point] in front of the section, one behind the section, and they point towards each other,” he said. “This overhead design is widely used and provides enough coverage for those using the design.”

Williams would not reveal the exact layout of the 49ers’ design, perhaps to prevent the competition from catching on. How many access points will there be? “Zero to 1,500,” he said in a good-natured attempt to be both informative and vague.

That potentially doubles or quadruples the typical amount of stadium access points—the Super Bowl had 700 and the Patriots have 375. But this number isn’t the most important thing. “The number of access points will not give you any hint on whether the Wi-Fi is going to be great or not,” Malik said. “Other factors control that.”

If the plan is to generate more signal strength, just adding more access points to the back and front of a section won’t do that.

The Santa Clara Stadium design “will be unique to football stadiums,” Williams said. “The access points will be spread and distributed. It’s really the best way to put it. Having your antennas distributed evenly around fans.” The 49ers are testing designs in Candlestick Park and experimenting with different access points in a lab. The movement of fans and the impact of weather on Wi-Fi performance are among the factors under analysis.

“Think of a stadium where it’s an open bowl, it’s raining, people are yelling, standing. How do you replicate that in your testing to show that if people are jumping from their seats, how is Wi-Fi going to behave, what will happen to the mobile app?” Malik said. “There is a lot that goes on during a game that is hard to replicate in your conceptual simulation testing. That is one of the big challenges where we have to be very careful.”

“We will make great use of Candlestick over the next year as we continue to test,” Williams said. “We’re evaluating placement of APs and how that impacts RF absorption during the game with folks in their seats, with folks out of their seats.”

Wi-Fi will be available in the stands, in the suites, in the walkways, in the whole stadium. The team has not yet decided whether to make Wi-Fi available in outdoor areas such as concourses and parking lots.

The same could theoretically be done at the 53-year-old Candlestick Park, even though it was designed decades before Wi-Fi was invented. Although the stadium serves as a staging ground for some of the 49ers’ wireless network tests, public access is mainly limited to premium seating areas and the press box.

The reason Wi-Fi in Candlestick hasn’t been expanded is a practical one. With only one year left in the facility, the franchise has decided not to invest any more money in its network. But Williams said 100 percent Wi-Fi coverage with no bandwidth caps could be done in any type of stadium, no matter how old. He says the “spectrum shortage” in stadiums is just a myth.

With the new stadium still undergoing construction, it was too early for me to test anything resembling Santa Clara Stadium’s planned Wi-Fi network. For what it’s worth, I was able to connect to the 49ers’ guest Wi-Fi in their offices with no password, and no problems.

The 2.4GHz problem

There is one factor preventing better stadium Wi-Fi that even the 49ers may not be able to solve, however. Wi-Fi works on both the 2.4GHz and 5GHz bands. Generally, 5GHz is better because it offers more powerful signals, less crowded airwaves and more non-overlapping channels that can be devoted to Wi-Fi use.

The 2.4GHz band has 11 channels overall and only three that don’t overlap with each other. By using somewhat unconventionally small 20MHz channels in the 5GHz range, the 49ers will be able to use about eight non-overlapping channels. That’s despite building an outdoor stadium, which is more restricted than indoor stadiums due to federal requirements meant to prevent interference with systems like radar.

Each 49ers access point will be configured to offer service on one channel, and access points that are right next to each other would use different channels to prevent interference. So even if you’re surrounding fans with access points, as the 49ers plan to, they won’t interfere with each other.
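
As a rough sketch of that channel-planning idea, neighboring access points can simply cycle through the pool of non-overlapping 5GHz channels so that adjacent radios never share a frequency. The channel list and AP count below are assumptions for illustration, not the 49ers’ actual design.

    # Assign non-overlapping 20MHz 5GHz channels to a row of access points so
    # that adjacent APs never share a frequency. Purely illustrative values.
    from itertools import cycle

    non_overlapping_channels = [36, 40, 44, 48, 149, 153, 157, 161]   # roughly eight usable channels
    access_points = [f"AP-{n:03d}" for n in range(1, 17)]             # 16 APs along one section

    for ap, channel in zip(access_points, cycle(non_overlapping_channels)):
        print(ap, "-> channel", channel)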

But what if most users’ devices are only capable of connecting to the limited and crowded 2.4GHz band? Enterasys said 80 percent of Patriots fans connecting to Wi-Fi this past season did so from devices supporting only the 2.4GHz band, and not the 5GHz one.

“You have to solve 2.4 right now to have a successful high-density public Wi-Fi,” Brams said.

The iPhone 5 and newer Android phones and tablets do support both the 2.4GHz and 5GHz bands, however. Williams said by the time Santa Clara Stadium opens in 2014, he expects 5GHz-capable devices to be in much wider use.

When asked if the 49ers would be able to support 100 percent of fans if most of them can only connect to 2.4GHz, Williams showed a little less bravado.

“For those 2.4 users we will certainly design it so that there’s less interference,” he said. “It is a more dense environment if you are strictly constrained in 2.4, but we are not constrained in 2.4. We’re not trying to answer the 2.4 problem, because we have 5 available.”

“It’s 2013, we have another year and a half of iteration,” he also said. “We’ll probably be on, what, the iPhone 7 by then? The move to 5GHz really just makes us lucky. We’re doing this at the right time.”

Building a stadium in Facebook’s image

Williams and Malik both joined the 49ers last May. Malik was hired first, and then brought his old Facebook friend, Williams, on board. Malik had been the head of IT at Facebook, while Williams was the website’s first network engineer and later a director. They both left the site, basically because they felt there was nothing left to accomplish. Williams did some consulting, and Malik initially planned to take some time off.

Williams was a long-time 49ers season ticket holder, but that was far from the only thing that sold him on coming to the NFL.

“I had been looking for something challenging and fun again,” Williams said. “Once you go through an experience like Facebook, it’s really hard to find something that’s similar. When Kunal came to me, I remember it like it was yesterday. He said, ‘If you’re looking for something like Facebook you’re not going to find it. Here’s a challenge.'”

“This is an opportunity to change the way the world consumes live sports in a stadium,” Malik said. “The technology problems live sports has today are unsolved and no one has ever done what we are attempting to do here. That’s what gets me out of bed every day.”

Williams and Malik have built the 49ers’ network in Facebook’s image. That means each service—Wi-Fi, point-of-sale, IPTV, etc.—gets its own autonomous domain, a different physical switching system to provide it bandwidth. That way, problems or slowdowns in one service do not affect another one.

“It’s tribal knowledge that’s only developed within large content providers, your Facebooks, your Googles, your Microsofts,” Williams said. “You’ll see the likes of these large content providers build a different network that is based on building blocks, where you can scale vertically as well as horizontally with open protocols and not proprietary protocols.

“This design philosophy is common within the content provider space but has yet to be applied to stadiums or venues. We are taking a design we have used in the past, and we are applying it here, which makes sense because there is a ton of content. I would say stadium networks are 10 years behind. It’s fun for us to be able to apply what we learned [at Facebook].”

The 49ers are still evaluating what Wi-Fi equipment they will use. The products available today would suit them fine, but by late 2014 there will likely be stadium-class access points capable of using the brand-new 802.11ac protocol, which allows greater throughput in the 5GHz range than the widely used 802.11n. 11ac consumer devices are rare today, but the 49ers will use 802.11ac access points to future-proof the stadium if appropriate gear is available. 11ac is backwards compatible with 11n, so supporting the new protocol doesn’t leave anyone out—the 49ers also plan to support previous standards such as 11a, 11b, and 11g.

802.11ac won’t really become crucial until 802.11n’s 5GHz capabilities are exhausted, said Daren Dulac, director of business development and technology alliances at Enterasys.

“Once we get into 5GHz, there’s so much more capacity there that 11ac doesn’t even become relevant until we’ve reached capacity in the 5GHz range,” he said. “We really think planning for growth right now in 5GHz is acceptable practice for the next couple of years.”

Santa Clara Stadium network construction is expected to begin in Q1 2014. Many miles of cabling will support the “zero to 1,500” access points, which connect back to 48 server closets or mini-data centers in the stadium that in turn tie back to the main data center.

“Based on service type you plug into your specific switch,” Williams said. “If you’re IPTV, you’re in an IPTV switch, if you’re Wi-Fi you’re in a Wi-Fi switch. If you’re in POS [point-of-sale], you’re in a POS switch. It will come down to a Wi-Fi cluster, an IPTV cluster, a POS cluster, all autonomous domains that are then aggregated by a very large fabric, that allows them to communicate lots of bandwidth throughput, and allows them to communicate to the Internet.”

Whereas Candlestick Park’s network uses Layer 2 bridging—with all of the Wi-Fi nodes essentially on a single LAN— Santa Clara Stadium will rely on Layer 3 IP routing, turning the stadium itself into an Internet-like network. “We will be Layer 3 driven, which means we do not have the issue of bridge loops, spanning tree problems, etc.,” Williams said.

Keeping the network running smoothly

Wireless networks should be closely watched during games to identify interference from any unauthorized devices and identify usage trends that might result in changes to access points. At the Patriots’ Gillette Stadium, management tools show bandwidth usage, the number of fans connected to each access point, and even what types of devices they’re using (iPhone, Android, etc.) If an access point was overloaded by fans, network managers would get an alert. Altering radio power, changing antenna tilt, or adding radios may be required, but generally any major changes are made between games.

Dashboard view of the Patriots’ in-game connectivity (image credit: Enterasys).

“In terms of real-time correction, it depends on what the event is,” said John Burke, a senior architect at Enterasys. “Realistically, some of these APs are overhead. If an access point legitimately went down and it’s on the catwalk above 300 [the balcony sections] you’re not going to fix that in the game. That’s something that would have to wait.”

So far, the Patriots’ capacity has been enough. Fans have yet to overwhelm a single access point. Even if they did, there is some overlap among access points, allowing fans to get on in case one AP is overloaded (or just broken).

The 49ers will use similar management tools to watch network usage and adjust access point settings in real time during games. “We expect to overbuild and actually play with things throughout,” Williams said. “Though we are building the environment to support 100 percent capacity, we do not expect 100 percent capacity to be used, so we believe we will be able to move resources around as needed [during each game].”

The same sorts of security protections in place in New England will be used in Santa Clara. Business systems will be password-protected and encrypted, and there will be encrypted tunnels between access points and the back-end network. While that level of protection won’t extend to the public network, fans shouldn’t be able to attack each other, because peer-to-peer connections will not be allowed.

What if the worst happens and the power goes out? During the Super Bowl’s infamous power outage, Wi-Fi did stay on for at least a while. Williams and Malik acknowledged that no system is perfect, but they said that they plan for Wi-Fi uptime even if power is lost.

“We have generators in place, and we’ll have UPS systems, so from a communications standpoint our plan is to keep all the communication infrastructure up and online [during outages],” Williams said. “But all of this stuff is man-made.”

A small team that does it all

Believe it or not, the 49ers have a tech team of less than 10 people, yet the organization is designing and building everything itself. Sports teams often outsource network building to carriers or equipment vendors, but not the 49ers. Besides building its own Wi-Fi network, the team will build a carrier-neutral distributed antenna system to boost cellular signals within the stadium.

“We are control freaks,” Williams said with a laugh. He explained that doing everything themselves makes it easier to track down problems, accept responsibility, and fix things. They also feel the need to take ownership of the project because none of the existing networks in the rest of the league approach what they want to achieve. There is a lot of low-hanging fruit just from solving the easy problems other franchises haven’t addressed, they think.

Not all the hardware must be in-house, though. The 49ers will use cloud services like Amazon’s Elastic Compute Cloud when it makes sense.

“Let’s say we want to integrate a POS system with ordering,” Malik said. “If you have an app that lets you order food, and there’s a point of sale system, all the APIs and integration need to sit in the cloud. There’s no reason for it to sit in our data center.”

There are cases where the cloud is clearly not appropriate, though. Say the team captures video on site and distributes it to fans’ devices—pushing that video to a faraway cloud data center in the middle of that process would slow things down dramatically. And ultimately, the 49ers have a greater vision than just providing Wi-Fi to fans.

When I toured a preview center meant to show off the stadium experience to potential ticket buyers, a mockup luxury suite had an iPad embedded in the wall with a custom application for controlling a projector. That provides a hint of what the 49ers might provide.

“Our view is whatever you have at home you should have in your suite,” Williams said. “If that means there’s an iPad on the wall or an application you can use, hopefully that’s available. Your life should be much easier in this stadium.”

And whatever applications are built should be cross-platform. As Malik said, the 49ers are moving away from proprietary technologies to standards-based systems so they can provide nifty mobile features to fans regardless of what device they use.

Williams and Malik are already working long hours, and their jobs will get even more time-intensive when network construction actually begins. But they wouldn’t have it any other way—particularly the longtime season ticket holder Williams.

When work is “tied to something that you love deeply, which is sports, and tied to your favorite team in the world, that’s awesome,” Williams said. “I’m crazy about it, man. I get super passionate.”

Source:  arstechnica.com

U.S. mobile consumers spent $95B on data in 2012, topping what they spent on voice

Tuesday, March 5th, 2013

TIA report shows ‘historic transition’ for mobile industry.

Talk about a shift in behavior — or maybe that should be text about a shift.

It seemed only a matter of time, but today the Telecommunication Industry Association (TIA) said that 2012 marked the first time that U.S. wireless data spending topped voice spending. Also, according to the association’s 2013 ICT Market Review & Forecast report, there are more wireless subscriptions than there are adults in the country.

The “industry is squarely in the middle of an historic transition,” said Grant Seiffert, president of the association that represents high-tech manufacturers and suppliers of communications technology. “Wireless had a breakthrough year in 2012. … While wireless penetration will level off in the years ahead, infrastructure investments will continue surging in order to meet the heavy demand for mobile data.”

A number of factors will fuel a boom for the industry, he said, including more spending on cloud services and cybersecurity, and the continuing rise of smartphones and tablets.

Here are some specifics of the report:

  • Consumers spent $94.8 billion on mobile data services, versus $92.4 billion on voice.
  • Wireless penetration among adults reached 102.5 percent, and the TIA predicts that carriers will add 40.3 million subscribers over the next four years, for a penetration of 111.3 percent in 2016.
  • U.S. wireline spending was $39.1 billion in 2012, compared with $27 billion for wireless infrastructure. By 2016, wireline spending is expected to climb to $44.4 billion, while wireless will reach $38.4 billion.
  • The overall telecommunications industry experienced 7 percent worldwide growth in 2012, down three percentage points from 2011. While growth actually accelerated in the U.S. (from 5.9 percent in 2011 to 6.2 percent in 2012), growth in international markets slowed (from 11.3 percent to 7.2 percent).

Source:  CNET

100Gbps and beyond: What lies ahead in the world of networking

Tuesday, February 19th, 2013

App-aware firewalls, SAN alternatives, and other trends for the future.

The corporate data center is undergoing a major transformation the likes of which haven’t been seen since Intel-based servers started replacing mainframes decades ago. It isn’t just the server platform: the entire infrastructure from top to bottom is seeing major changes as applications migrate to private and public clouds, networks get faster, and virtualization becomes the norm.

All of this means tomorrow’s data center is going to look very different from today’s. Processors, systems, and storage are getting better integrated, more virtualized, and more capable at making use of greater networking and Internet bandwidth. At the heart of these changes are major advances in networking. We’re going to examine six specific trends driving the evolution of the next-generation data center and discover what both IT insiders and end-user departments outside of IT need to do to prepare for these changes.

Beyond 10Gb networks

Network connections are getting faster to be sure. Today it’s common to find 10-gigabit Ethernet (GbE) connections to some large servers. But even 10GbE isn’t fast enough for data centers that are heavily virtualized or handling large-scale streaming audio/video applications. As your population of virtual servers increases, you need faster networks to handle the higher information loads required to operate. Starting up a new virtual server might save you from buying a physical server, but it doesn’t lessen the data traffic over the network—in fact, depending on how your virtualization infrastructure works, a virtual server can impact the network far more than a physical one. And as more audio and video applications are used by ordinary enterprises in common business situations, the file sizes balloon too. This results in multi-gigabyte files that can quickly fill up your pipes—even the big 10Gb internal pipes that make up your data center’s LAN.

Part of coping with all this data transfer is being smarter about identifying network bottlenecks and removing them, such as immature network interface card drivers that slow down server throughput. Bad or sloppy routing paths can introduce network delays too. Typically, neither bad drivers nor bad routes have received much scrutiny in the past because they were sufficient to handle less demanding traffic patterns.

It doesn’t help that more bandwidth can sometimes require new networking hardware. The vendors of these products are well prepared, and there are now numerous routers, switches, and network adapter cards that operate at 40- and even 100-gigabit Ethernet speeds. Plenty of vendors sell this gear: Dell’s Force10 division, Mellanox, HP, Extreme Networks, and Brocade. It’s nice to have product choices, but the adoption rate for 40GbE equipment is still rather small.

Using this fast gear is complicated by two issues. First is price: the stuff isn’t cheap. Prices per 40Gb port—that is, the cost of each 40Gb port on a switch—are typically $2,500, way more than a typical 10Gb port price. Depending on the nature of your business, these higher per-port prices might be justified, but it isn’t only this initial money. Most of these devices also require new kinds of wiring connectors that will make implementation of 40GbE difficult, and a smart CIO will keep total cost of ownership in mind when looking to expand beyond 10Gb.

As Ethernet has attained faster and faster speeds, the cable plant needed to run these faster networks has slowly evolved. The old RJ45 category 5 or 6 copper wiring and fiber connectors won’t work with 40GbE. New connections using the Quad Small Form-factor Pluggable or QSFP standard will be required. Cables with QSFP connectors can’t be “field terminated,” meaning IT personnel or cable installers can’t cut orange or aqua fiber to length and attach SC or LC heads themselves. Data centers will need to figure out their cabling lengths and pre-order custom cables manufactured with the connectors already attached. This is potentially a big barrier for data centers used to working primarily with copper cables, and it also means any current investment in your fiber cabling likely won’t cut it for these higher-speed networks of the future either.

Still, as IT managers get more of an understanding of QSFP, we can expect to see more 40 and 100 gigabit Ethernet network legs in the future, even if the runs are just short ones that go from one rack to another inside the data center itself. These runs often terminate in “top of rack” switches, which connect the servers in their rack over slower links and uplink to a central switch or set of switches over a single high-speed connection. A typical configuration for the future might be one- or ten-gigabit connections from individual servers within one rack to a switch within that rack, and then a 40GbE uplink from that switch back to larger edge or core network switches. And as these faster networks are deployed, expect to see major upgrades in network management, firewalls, and other applications to handle the higher data throughput.
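
The capacity math behind that top-of-rack design is straightforward. The sketch below uses assumed port counts, not a specific vendor configuration, to show how many 10Gb server links a pair of 40GbE uplinks would carry.

    # Oversubscription math for a hypothetical top-of-rack switch layout.
    servers_per_rack = 32
    server_link_gbps = 10
    uplinks = 2
    uplink_gbps = 40

    downlink_capacity = servers_per_rack * server_link_gbps   # 320 Gbps toward the servers
    uplink_capacity = uplinks * uplink_gbps                    # 80 Gbps toward the core
    print(f"Oversubscription ratio: {downlink_capacity / uplink_capacity:.1f}:1")   # 4.0:1

A ratio like that is usually workable because servers rarely burst at full line rate at the same time; heavily virtualized racks may justify more or faster uplinks.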

The rack as a data center microcosm

In the old days when x86 servers were first coming into the data center, you’d typically see information systems organized into a three-tier structure: desktops running the user interface or presentation software, a middle tier containing the logic and processing code, and the data tier contained inside the servers and databases. Those simple days are long gone.

Still living on from that time, though, are data centers that have separate racks, staffs, and management tools for servers, for storage, for routers, and for other networking infrastructure. That worked well when the applications were relatively separate and didn’t rely on each other, but that doesn’t work today when applications have more layers and are built to connect to each other (a Web server to a database server to a scheduling server to a cloud-based service, as a common example). And all of these pieces are running on virtualized machines anyway.

Today’s equipment racks are becoming more “converged” and are handling storage, servers, and networking tasks all within a few inches of each other. The notion first started with blade technology, which puts all the essential elements of a computer on a single expansion card that can easily slip into a chassis. Blades have been around for many years, but the leap was using them along with the right management and virtualization software to bring up new instances of servers, storage, and networks. Packing many blade servers into a single large chassis also dramatically increases the density that was available in a single rack.

It is more than just bolting a bunch of things inside a rack: vendors selling these “data center in a rack” solutions provide pre-engineering, testing, and integration services. They also offer sample designs that make it easy to specify the particular components, reduce cable clutter, and automate management through vendor-supplied software. This arrangement improves throughput and makes the various components easier to manage. Several vendors offer this type of computing gear, including Dell’s Active Infrastructure and IBM’s PureSystems. It used to be necessary for different specialty departments within IT to configure different components here: one group for the servers, one for the networking infrastructure, one for storage, etc. That took a lot of coordination and effort. Now it can all be done coherently and from a single source.

Let’s look at Dell’s Active Infrastructure as an example. Dell claims to eliminate more than 75 percent of the steps needed to power on a server and connect it to your network. It comes in a rack with PowerEdge Intel servers, SAN arrays from Dell’s Compellent division, and blades that can be used for input/output aggregation and high-speed network connections from Dell’s Force10 division. The entire package is very energy efficient, and you can deploy these systems quickly. We’ve seen demonstrations from IBM and Dell where a complete network cluster is brought up from a cold start within an hour, all managed from a Web browser by a system administrator who could be sitting on the opposite side of the world.

Beyond the simple SAN

As storage area networks (SANs) proliferate, they are getting more complex. SANs now use more capable storage management tools to make them more efficient and flexible. It used to be the case that SAN administration was a very specialized discipline that required arcane skills and deep knowledge of array performance tuning. That is not the case any longer, and as SAN tool sets improve, even IT generalists can bring one online.

The above data center clusters from Dell and others are just one example of how SANs have been integrated into other products. Added to these efforts, there is a growing class of management tools that can help provide a “single pane of glass” view of your entire SAN infrastructure. These also can make your collection of virtual disks more efficient.

One of the problems with virtualized storage is that you can provision a lot of empty space on your physical hard drives that never gets touched by any of your virtual machines (VMs). In a typical scenario, you might have a terabyte of storage allocated to a set of virtual machines and only a few hundred gigabytes actually used by the virtual machines’ operating systems and installed applications.

The dilemma is that you want enough space available to each virtual drive to give it room to grow, so you often have to tie up space that could otherwise be used. This is where dynamic thin provisioning comes into play. Most SAN arrays have some type of thin provisioning built in, letting you present capacity to hosts without committing the physical space up front: a 1TB thin-provisioned volume reports itself as being 1TB in size but only takes up the amount of space actually in use by its data. In other words, a physical 1TB chunk of disk could be “thick” provisioned into a single 1TB volume or thin provisioned into maybe a dozen 1TB volumes, letting you oversubscribe the physical disk. Thin provisioning can play directly into your organization’s storage forecasting, letting you establish maximum volume sizes early and then buy physical disk to track with the volumes’ growth.
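
A simplified sketch of that accounting (all sizes are made-up examples) shows how an array can advertise far more capacity than it physically holds while tracking only the space actually written:

    # Thin provisioning in miniature: volumes advertise their full size, but the
    # array only consumes physical disk as data is written. Values are illustrative.
    physical_pool_tb = 1.0
    volumes = {                      # advertised size vs. data actually written, in TB
        "vm-cluster-a": (1.0, 0.18),
        "vm-cluster-b": (1.0, 0.25),
        "file-share":   (1.0, 0.10),
    }

    advertised = sum(size for size, _ in volumes.values())
    consumed = sum(used for _, used in volumes.values())
    print(f"Advertised to hosts: {advertised:.2f} TB")
    print(f"Physically consumed: {consumed:.2f} TB of a {physical_pool_tb:.2f} TB pool")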

Another trick many SANs can do these days is data deduplication. There are many different deduplication methods, with each vendor employing its own “secret sauce.” But they all aim to reduce or eliminate the same chunks of data being stored multiple times. When employed with virtual machines, data deduplication means commonly used operating system and application files don’t have to be stored in multiple virtual hard drives and can share one physical repository. Ultimately, this allows you to save on the copious space you need for these files. For example, a hundred Windows virtual machines all have essentially the same content in their “Windows” directories, their “Program Files” directories, and many other places. Deduplication ensures those common pieces of data are only stored once, freeing up tremendous amounts of space.
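
Conceptually, deduplication boils down to storing each unique chunk once and keeping references for the duplicates. The toy sketch below shows the idea; real arrays use far more sophisticated chunking and indexing, so treat it purely as a conceptual illustration.

    # Toy block-level deduplication: identical chunks are stored once, keyed by hash.
    import hashlib

    store = {}        # hash -> the single physical copy of a chunk
    references = []   # logical writes, recorded as hashes

    def write_chunk(data: bytes):
        digest = hashlib.sha256(data).hexdigest()
        store.setdefault(digest, data)   # keep only the first copy seen
        references.append(digest)

    for _ in range(2):                                   # two VMs with identical OS files
        write_chunk(b"common Windows system file contents")
        write_chunk(b"common Program Files contents")
    write_chunk(b"unique application data")

    print(f"Logical chunks written: {len(references)}")  # 5
    print(f"Physical chunks stored: {len(store)}")       # 3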

Software-defined networks

As enterprises invest heavily in virtualization and hybrid clouds, one element still lags: the ability to quickly provision network connections on the fly. In many cases this is due to procedural or policy issues.

Some of this lag can be removed by having a virtual network infrastructure that can be as easily provisioned as spinning up a new server or SAN. The idea behind these software-defined networks (SDNs) isn’t new: indeed, the term has been around for more than a decade. A good working definition of SDN is the separation of the data and control functions of today’s routers and other layer two networking infrastructure with a well-defined programming interface between the two. Most of today’s routers and other networking gear mix the two functions. This makes it hard to adjust network infrastructure as we add tens or hundreds of VMs to our enterprise data centers. As each virtual server is created, you need to adjust your network addresses, firewall rules, and other networking parameters. These adjustments can take time if done manually, and they don’t really scale if you are adding tens or hundreds of VMs at one time.
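
The core idea is easier to see in miniature. The sketch below is a plain-Python caricature of the OpenFlow model, with a programmable flow table standing in for the switch and anything unmatched handed to the controller; it is not tied to any real controller or switch API.

    # Caricature of the SDN split: the control plane installs match/action rules,
    # and the data plane forwards packets by simple table lookup.
    class FlowTable:
        def __init__(self):
            self.rules = []                                # (match, action) pairs

        def install(self, match, action):                  # "northbound" programming interface
            self.rules.append((match, action))

        def forward(self, packet):                         # data plane: table lookup only
            for match, action in self.rules:
                if all(packet.get(k) == v for k, v in match.items()):
                    return action
            return "send-to-controller"                    # unknown traffic goes upstairs

    switch = FlowTable()
    switch.install({"dst_ip": "10.0.0.5", "dst_port": 80}, "output:port3")
    switch.install({"dst_ip": "10.0.0.9"}, "drop")

    print(switch.forward({"dst_ip": "10.0.0.5", "dst_port": 80}))   # output:port3
    print(switch.forward({"dst_ip": "192.168.1.1"}))                # send-to-controller

The point of a standard like OpenFlow is that the "install" step becomes a protocol spoken to real switches, so a central controller can reprogram the whole fabric as VMs come and go.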

Automating these changes hasn’t been easy. While there have been a few vendors to offer some early tools, the tools were quirky and proprietary. Many IT departments employ virtual LANs, which offer a way to segment physical networks into more manageable subsets with traffic optimization and other prioritization methods. But vLANs don’t necessarily scale well either. You could be running out of head room as the amount of data that traverses your infrastructure puts more of a strain on managing multiple vLANs.

The modern origins of SDN came about through the efforts of two computer science professors: Stanford’s Nick McKeown and Berkeley’s Scott Shenker, along with several of their grad students. The project was called “Ethane,” and it began more than six years ago with the goal of trying to improve network security with a new series of flow-based protocols. One of these students was Martin Casado, who went on to found an early SDN startup that was later acquired by VMware in 2012. A big outgrowth of these efforts was the creation of a new networking protocol called OpenFlow.

Now Google and Facebook, among many others, have adopted the OpenFlow protocol in their own data center operations. The protocol has also gotten its own foundation, called the Open Networking Foundation, to move it through the engineering standards process.

OpenFlow offers a way to have programmatic control over how new networks are set up and torn down as the number of VMs waxes and wanes. Getting this collection of programming interfaces to the right level of specificity is key to SDN and OpenFlow’s success. Now that VMware is involved in OpenFlow, we expect to see some advances in products and support for the protocol, plus a number of vendors offering alternatives as the standards process evolves.

SDN makes sense for a particular use case right now: hybrid cloud configurations where your servers are split between your on-premises data center and an offsite facility or managed service provider. This is why Google and others are using it to knit together their numerous global sites. With OpenFlow, they can bring up new capacity across the world and have it appear as a single unified data center.

But SDN isn’t a panacea, and for the short-term it probably is easier for IT staff to add network capacity manually rather than rip out their existing networking infrastructure and replace with SDN-friendly gear. The vendors who have the lion’s share of this infrastructure are still dragging behind on the SDN and OpenFlow efforts, in part because they see this as a threat to their established businesses. As SDNs become more popular and the protocols mature, expect this situation to change.

Backup as a Service

As more applications migrate to Web services, one remaining challenge is being able to handle backups effectively across the Internet. This is useful under several situations, such as for offsite disaster recovery, quick recovery from cloud-based failures, or backup outsourcing to a new breed of service providers.

There are several issues at stake here. First is that building a fully functional offsite data center is expensive, and it requires both a skilled staff and a lot of coordinated effort to regularly test and tune the failover operations so that the company can keep its networks and data flowing when disaster does strike. Through the combination of managed service providers such as Trustyd.com and vendors such as QuorumLabs, there are better ways to provide what is coming to be called “Backup as a Service.”

Both companies sell remote backup appliances, though the two work somewhat differently. Trustyd’s appliance is first connected to your local network and makes its initial backups at wire speed. This sidesteps one of the limitations of any cloud-based backup service: making the initial backup over the Internet means sending a lot of data, which can take days or weeks (or longer!). Once this initial copy is created, the appliance is moved to an offsite location where it continues to keep in sync with your network across the Internet. Quorum’s appliance involves using virtualized copies of running servers that are maintained offsite and kept in sync with the physical servers inside a corporate data center. Should anything happen to the data center or its Internet connection, the offsite servers can be brought online in a few minutes.

This is just one aspect of the potential problem with backup as a service. Another issue is in understanding cloud-based failures and what impact they have on your running operations. As companies virtualize more data center infrastructure and develop more cloud-based apps, understanding where the failure points are and how to recover from them will be key. Knowing what VMs are dependent on others and how to restart particular services in the appropriate order will take some careful planning.

A good example of planning for failure is Netflix, which developed a set of tools called “Chaos Monkey” that it has since made publicly available. Netflix is a big customer of Amazon Web Services, and to ensure that it can continue to operate, the company constantly and deliberately fails parts of its Amazon infrastructure. Chaos Monkey seeks out Amazon’s Auto Scaling Groups (ASGs) and terminates virtual machines inside a particular group. Netflix released the source code on GitHub and says it can be adapted to other cloud providers with a minimum of effort.

If you aren’t using Amazon’s ASGs, this might be a motivation to try them out. The service is a powerful automation tool that can launch new instances (or terminate unneeded ones) when your load changes quickly. Even if your cloud deployment is relatively modest, at some point your demand will grow, and you don’t want to depend on your own coding skills, or on having IT staff awake, to respond to those changes. ASGs make it easier to juggle the various AWS service offerings to handle varying load patterns. Chaos Monkey is the next step in your cloud evolution and automation: the idea is to run the automated routine during a limited set of hours, with engineers standing by to respond to the failures it generates in your cloud-based services.
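For a sense of what such a routine does under the hood, here is a stripped-down sketch using the boto3 AWS SDK. This is not Netflix’s code, and the region, schedule, and guard rails are left to the reader; it simply picks one in-service instance in each Auto Scaling Group and terminates it, leaving the group’s desired capacity unchanged so a replacement launches automatically:

    # Chaos Monkey-style sketch: terminate one random instance per ASG.
    # Run only during staffed hours, against non-critical groups first.
    import random
    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")

    def terminate_one_instance_per_group():
        groups = autoscaling.describe_auto_scaling_groups()["AutoScalingGroups"]
        for group in groups:
            instances = [i["InstanceId"] for i in group["Instances"]
                         if i["LifecycleState"] == "InService"]
            if not instances:
                continue
            victim = random.choice(instances)
            # Desired capacity stays the same, so the ASG launches a
            # replacement instance on its own.
            autoscaling.terminate_instance_in_auto_scaling_group(
                InstanceId=victim, ShouldDecrementDesiredCapacity=False)

    if __name__ == "__main__":
        terminate_one_instance_per_group()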

Application-aware firewalls

Firewalls are well-understood technology, but they’re not particularly “smart.” The modern enterprise needs a deeper understanding of all the applications that operate across the network so that it can better control and defend itself. With early-generation firewalls, it was difficult to answer questions like:

  • Are Facebook users consuming too much of the corporate bandwidth?
  • Is someone posting corporate data, such as customer information or credit card numbers, to a private e-mail account?
  • What changed with my network that’s impacting the perceived latency of my corporate Web apps today?
  • Do I have enough corporate bandwidth to handle Web conference calls and video streaming? What is the impact on my network infrastructure?
  • What is the appropriate business response time of key applications in both headquarters and branch offices?

The newer firewalls can answer these and other questions because they are application-aware. They understand how applications interact with the network and the Internet, and they can report back to administrators in near real time with easy-to-view graphical representations of network traffic.

This new breed of firewalls and packet-inspection products is made by big-name vendors such as Intel/McAfee, BlueCoat Networks, and Palo Alto Networks. The firewalls of yesteryear were relatively simple devices: you specified a series of rules listing particular ports and protocols and whether you wanted to block or allow traffic through them. That worked fine when applications were well behaved and used predictable ports, such as file transfer on ports 20 and 21 and e-mail on ports 25 and 110. With the rise of Web-based applications, ports and protocols no longer map so cleanly to applications. Everyone runs their apps across ports 80 and 443, in no small part because of port-based firewalling, and it’s becoming more difficult to distinguish between a mission-critical app and a rogue peer-to-peer file service that needs to be shut down.
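To see why port-based rules lose their meaning, consider this toy rule table (purely illustrative, not any vendor’s format): once every application rides over port 443, the “allow web” rule admits sanctioned SaaS apps and rogue peer-to-peer clients alike.

    # Classic port/protocol rules know nothing about the application.
    RULES = [
        {"proto": "tcp", "dport": 25,  "action": "allow"},  # e-mail (SMTP)
        {"proto": "tcp", "dport": 110, "action": "allow"},  # e-mail (POP3)
        {"proto": "tcp", "dport": 21,  "action": "allow"},  # file transfer (FTP)
        {"proto": "tcp", "dport": 443, "action": "allow"},  # "web", i.e. everything
    ]

    def decide(proto: str, dport: int) -> str:
        for rule in RULES:
            if rule["proto"] == proto and rule["dport"] == dport:
                return rule["action"]
        return "block"   # default-deny

    # A sanctioned SaaS app and a file-sharing client tunnelling over HTTPS
    # get the same verdict, which is exactly the visibility gap that
    # application-aware firewalls are meant to close.
    print(decide("tcp", 443))   # allow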

Another strength of the advanced firewalls is the ability to look at changes to the network and identify their root causes, or to view time-series comparisons of traffic patterns when things are broken today (but were, of course, working yesterday). Finally, they allow administrators or managers to control particular aspects of an application, such as letting all users read their Facebook wall posts but not send any Facebook messages during working hours.

Going on from here

These six trends are remaking the data center into one that can handle higher network speeds and further advances in virtualization, but they’re only part of the story. Our series will continue with a real-world look at how massive spikes in bandwidth demand can be handled without breaking the bank at a next-generation sports stadium.

Source:  arstechnica.com

Microsoft brings solar Wi-Fi to rural Kenya

Thursday, February 14th, 2013

Using derelict TV frequencies, old-fashioned antennas and solar power, Microsoft is trialling a pioneering form of broadband technology in Africa

GAKAWA Senior Secondary School is located in Kenya’s western Rift Valley Province, about 10 kilometres from Nanyuki town. It is not an easy place to live. There are no cash crops, no electricity, no phone lines, and rainfall is sporadic to say the least.

“For internet access we had to travel the 10 kilometres to Nanyuki and it would cost 100 Kenya shillings [about $1.20] to get there,” says Beatrice Nderango, the school’s headmistress.

Not for much longer. Solar-powered Wi-Fi is being installed in the area, giving local people easy access to the internet for the first time. The pilot project – named Mawingu, the Swahili word for “cloud” – is part of an initiative by Microsoft and local telecoms firms to provide affordable, high-speed wireless broadband to rural areas. If and when it is rolled out nationwide, as planned, Kenya could lead the way with a model of wireless broadband access that in the West has been tied up in red tape.

Because the village has no power, Microsoft is working with Kenyan telecoms firm Indigo to install solar-powered base stations that supply a wireless signal at frequencies falling into what is called the “white spaces” spectrum.

This refers to the bits of the wireless spectrum that are being freed up as television moves from analogue to digital – a set of frequencies between 400 megahertz and about 800 megahertz. Such frequencies penetrate walls, bend around hills and travel much longer distances than the conventional Wi-Fi we have at home. That means that the technology requires fewer base stations to provide wider coverage, and wannabe web surfers in the village need only a traditional TV antenna attached to a smartphone or tablet to access the signal and get online. Microsoft is supplying some of these devices for the trial, as well as solar-powered charging stations.

To begin with, Indigo has set up two solar-powered white-space base stations in three villages to deliver wireless broadband access to 20 locations, including schools, healthcare clinics, community centres and government offices.

“Africa is the perfect location to pioneer white-space technology,” says Indigo’s Peter Henderson, thanks to governments’ open-mindedness. Indeed, Kenya has a strong chance of being in the global vanguard of white-space roll-out. While the US has already legalised use of derelict TV bands, it has yet to standardise the database technology that will tell devices which frequencies are free to use at their GPS location.

In the UK, white-space access should finally be up and running by the end of 2013, says William Webb of white-space startup Neul in Cambridge. “White-space trials are also taking place in Japan, Indonesia, Malaysia, South Africa and many other countries – and some of these may move directly to allowing access without needing lengthy consultations,” he says. In many cases, it has been these consultations that have slowed the technology’s progress.

Microsoft aims to roll out the initiative to other nations across sub-Saharan Africa. “Internet access is a life-changing experience and it’s going to give both our students and teachers added motivation for learning,” says Nderango. “It will also make my job as headmistress a little easier.”

Source:  newscientist.com