Archive for February, 2013

Google’s white spaces database goes live in test next week

Thursday, February 28th, 2013

Two years ago, Google was one of ten entities selected by the Federal Communications Commission to operate a white spaces database. Google’s database is finally just about ready to go: on Monday, the company will begin a 45-day trial allowing the database to be tested by the public.

White spaces technology allows unused TV spectrum to be repurposed for wireless Internet networks. The companies Spectrum Bridge and Telcordia have already completed their tests and have started operating. Google is the third to reach this stage. The databases are necessary to ensure that wireless Internet networks use only empty spectrum and thus don’t interfere with TV broadcasts.
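In essence, a white spaces database answers one question: which TV channels are free at a given location? A minimal sketch of that kind of lookup, using a hypothetical in-memory registry rather than real FCC licensing data:

```python
# Illustrative sketch only: real white spaces databases are built from FCC
# licensing records, not a hard-coded dict. The locations and channel
# assignments below are hypothetical.

# Hypothetical registry: location -> TV channels in use by protected broadcasters
OCCUPIED = {
    "kansas_city": {4, 5, 9, 19, 29, 38},
    "mountain_view": {2, 7, 11, 36, 44},
}

# Simplified range of US TV band channels potentially open to unlicensed devices
TV_CHANNELS = set(range(2, 52))

def available_channels(location):
    """Return TV channels free for unlicensed use at a location."""
    occupied = OCCUPIED.get(location, set())
    return sorted(TV_CHANNELS - occupied)

print(available_channels("kansas_city")[:5])  # → [2, 3, 6, 7, 8]
```

An unlicensed device would query a service like Google's before transmitting, ensuring it never lands on a channel a broadcaster is using.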

“This is a limited trial that is intended to allow the public to access and test Google’s database system to ensure that it correctly identifies channels that are available for unlicensed radio transmitting devices that operate in the TV band (unlicensed TV band devices), properly registers radio transmitting facilities entitled to protection, and provides protection to authorized services and registered facilities as specified in the rules,” the FCC said yesterday. “We encourage all interested parties to test the database and provide appropriate feedback to Google.”

If nothing goes wrong, Google’s database could be open for business a few months after the test closes.

The test doesn’t necessarily signal that Google itself is on the cusp of creating wireless networks using white spaces spectrum, although it could. Google has already become an Internet service provider with Google Fiber in Kansas City and has offered free public Wi-Fi in a small part of New York City and Mountain View.

“This has nothing to do with Google creating a wireless network, though Google is interested in the business and could, potentially, create a white space network on down the line,” Steven Crowley, a wireless engineer who blogs about the FCC, wrote in an e-mail.

White spaces networks haven’t exactly revolutionized broadband Internet access in the US, but companies pushing the technology still hope it will have an impact, particularly in rural areas. An incentive auction the FCC is planning to reclaim spectrum controlled by TV broadcasters may increase the airwaves available to white spaces networks.

The FCC decided to authorize multiple white spaces databases to prevent any one company from having a stranglehold over the process. Google, meanwhile, may be hedging its bets. Public Knowledge Senior VP Harold Feld thinks Google applied to become a database provider so it wouldn’t have to worry about anyone else providing key infrastructure.

“I have no specific information, but my belief has always been that Google applied primarily to cover its rear end and make sure that—however they ultimately ended up monetizing the TVWS [TV white spaces]—they didn’t need to worry about someone else having some kind of control over one of the key components (the database),” Feld wrote in an e-mail. “So it is not (IMO) that this demonstrates any specific plans about what it wants to do in the TVWS, it just means that Google doesn’t want anyone to be able to mess with them once they launch whatever they are going to do.”

We’ve contacted Google to see if the company will provide any information on its long-term plans.

The remaining database operators that must go through public tests are Microsoft, Comsearch, Frequency Finder, KB Enterprises LLC and LS Telcom, Key Bridge Global LLC, Neustar, and WSdb LLC.


Wireless LAN vendors target surging carrier Wi-Fi market

Monday, February 25th, 2013

Ruckus, Aruba products aim at large-scale, integrated Wi-Fi services

Two wireless LAN vendors are targeting the next big explosion in Wi-Fi growth: hotspots and hotzones created by carriers and other service providers.

Both Ruckus Wireless and Aruba Networks this week at the Mobile World Congress show in Barcelona outlined products aimed at this provider market. The goal is to be part of an emerging ecosystem of hardware and software that can integrate Wi-Fi with core mobile networks.

As part of its reference design for carrier-based Wi-Fi services, Ruckus announced a new family of outdoor 802.11n access points, the ZoneFlex 7782 series. Four models offer different internal and external antenna configuration options. All have three transmit and three receive antennas supporting three data streams, for a maximum aggregate data rate of 900Mbps across both bands. All four have Ruckus’ patented BeamFlex adaptive antenna technology, designed to boost gain and reduce interference. There’s also a GPS receiver, which service providers can leverage for location-based services.


Deliberately bland in design, the new Ruckus ZoneFlex 7782 outdoor access point aims at high-performance carrier Wi-Fi networks: dual-band, 3-stream 802.11n with a data rate of nearly 1Gbps.

The company also unveiled a Wi-Fi traffic analysis application for carriers, the SmartCell Insight analytics engine. It runs on Ruckus’ SmartCell 2000 Gateway, which bridges Wi-Fi and cellular networks. The software sifts out a wealth of data about access point usage, bandwidth, subscriber activity, and other metrics, and packs it into a data warehouse. Pre-written and custom reports translate the raw data into information about how well the Wi-Fi network is performing. A battery of standard APIs lets carriers export the information to existing data-mining tools and interface with core network applications.

Finally, Ruckus announced SmartPoint, which adds to the ZoneFlex 7321-U access point a USB port that can accept a 3G, 4G, or WiMAX external dongle. The idea is to quickly and easily create a wireless backhaul option where a cable isn’t possible (such as a city bus). Ruckus automatically pushes to the access point the needed driver software for specific 3G/4G/WiMAX dongles. KDDI in Japan, with an extensive WiMAX network, can offer shop owners a Ruckus access point for hotspot Wi-Fi, with a WiMAX dongle for easy backhaul to the Internet.

Both the 7782 outdoor access point, priced at $3,000, and SmartPoint, at $400, are available now; the analytics application, with pricing based on the size of the network, will ship in the second quarter.

Aruba’s carrier play

Aruba, too, is recasting its WLAN architecture via software updates to address carrier requirements for creating a high-capacity, secure and reliable Wi-Fi service for mobile subscribers.

Dubbed Aruba HybridControl, the new code gives Aruba’s 7200 Mobility Controller massive scalability. Aruba says the software update will let the 7200 manage over 32,000 hotspots. That translates into over 100,000 individual access points, because each hotspot can have several of the vendor’s Aruba Instant access points. The scaling lowers carriers’ backend capital costs, cuts data center power demand, and reduces rack space requirements, according to Aruba. The Aruba Instant model offloads cellular traffic locally to the Internet while centralizing selected traffic, such as billing and legal intercept, via an IPSec connection to the 7200 controllers at the core.

HybridControl offers “zero-touch activation” for factory-default access points, with no need for any manual pre-provisioning. Switched on, these access points interface with the Aruba Activate cloud service to discover the carrier’s configuration management system and download it. Then, the access points use an assigned X.509 certificate to authenticate with an Aruba controller and set up an IPSec tunnel.

The HybridControl architecture leverages existing Aruba features such as:

  • AppRF, to identify and prioritize real-time applications, such as Microsoft Lync, to create different classes of service;
  • ClearPass Policy Management, a server application to authenticate new access points joining the mobile core network.

The carrier-focused HybridControl offering includes several products: the Aruba 7200 Mobility Controller, available now with prices starting at $38,000; Aruba Instant access points, available now with prices starting at about $400; Aruba Activate, available now and free of charge for Aruba customers. The software update for the 7200 will be available as a free Aruba OS upgrade in the second quarter.


Aruba announces controller and software for hybrid wireless networks

Monday, February 25th, 2013

New 7200 Mobility Controller offloads cellular to Wi-Fi

Aruba Networks announced a Wi-Fi controller today that can create more efficient pathways for wireless traffic and control more than 32,000 Wi-Fi hotspots.

Aruba said the new 7200 Mobility Controller will use far less power and cost much less than competing technology from Cisco, the market leader.

Aruba’s announcement of the controller, made on the first day of Mobile World Congress, is part of a trend of new software and hardware that equipment makers are offering to service providers and large enterprises to make large Wi-Fi networks and Wi-Fi hotspots more efficient, partly by reducing the wireless demand on cellular networks.

The 7200 will start at $37,995. Two 7200s (one of them for redundancy) will serve about the same number of access points as seven Cisco 8500 controllers but cost 40 times less, Manav Khurana, senior director of product marketing at Aruba, said in an interview.

The controller relies on new software called HybridControl Wi-Fi, which incorporates management capabilities for devices used by workers and guests inside organizations.

Last week, Cisco unveiled Quantum software and hardware to help carriers and enterprises improve wireless connections that are hybrid networks of 3G and 4G cellular and Wi-Fi. The devices will be demonstrated at Mobile World Congress in Barcelona this week.


Server hack prompts call for cPanel customers to take “immediate action”

Monday, February 25th, 2013

Change root and account passwords and rotate SSH keys, company advises.

The providers of the cPanel website management application are warning some users to immediately change their systems’ root or administrative passwords after discovering that one of the company’s servers was hacked.

In an e-mail sent to customers who have filed a cPanel support request in the past six months, members of the company’s security team said they recently discovered the compromise of a server used to process support requests.

“While we do not know if your machine is affected, you should change your root level password if you are not already using SSH keys,” they wrote, according to a copy of the e-mail posted to a community forum. “If you are using an unprivileged account with ‘sudo’ or ‘su’ for root logins, we recommend you change the account password. Even if you are using SSH keys we still recommend rotating keys on a regular basis.”
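Rotating SSH keys, as the e-mail recommends, usually means generating a fresh key pair, installing the new public key on each server, and retiring the old one. A generic sketch of the first step (the key path and comment are illustrative; in practice you would work in `~/.ssh`):

```shell
# Generate a replacement SSH key pair. This is a generic illustration,
# not cPanel-specific guidance; adjust paths for your own environment.
KEYDIR=$(mktemp -d)                      # use ~/.ssh in real use
ssh-keygen -t rsa -b 4096 -N "" \
    -C "rotated-$(date +%Y%m%d)" \
    -f "$KEYDIR/id_rsa_new"

# Remaining steps, shown as comments because they touch a remote host:
#   ssh-copy-id -i "$KEYDIR/id_rsa_new.pub" user@server   # install new key
#   then delete the old public key from ~/.ssh/authorized_keys on the server
ls "$KEYDIR"
```

The old key should only be removed after confirming the new one logs in successfully, so you are never locked out mid-rotation.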

The e-mail advised customers to take “immediate action on their own servers,” although team members still don’t know the exact nature of the compromise. Company representatives didn’t respond to an e-mail from Ars asking if they could rule out the possibility that customer names, e-mail addresses, or other personal data were exposed. It’s also unclear whether the company followed long-standing recommendations to cryptographically protect passwords. So-called one-way hashes convert plain-text passwords into long unique strings that can only be reversed using time-consuming cracking techniques. This post will be updated if cPanel representatives respond later.
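One-way hashing of the kind described works roughly like this. The sketch below uses Python’s standard library PBKDF2 implementation; it is a generic illustration of salted, slow password hashing, not a claim about how cPanel actually stores credentials:

```python
# Generic salted one-way password hashing sketch (not cPanel's scheme).
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=100_000):
    """Derive a slow, salted hash; store the salt alongside the digest."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, expected, iterations=100_000):
    """Re-derive the hash and compare in constant time."""
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(digest, expected)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False
```

Because the derivation is deliberately expensive and salted per user, an attacker who steals the hash table can’t simply look passwords up; each guess must be recomputed.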

The cPanel compromise is the latest in a long string of high-profile hacks to be disclosed over the past few weeks. Other companies that have warned users they were hacked include The New York Times, The Wall Street Journal, security firm Bit9, Twitter, Facebook, Apple, and Microsoft. On Tuesday, a computer security firm issued an unusually detailed report linking China’s military to hacks against US companies, although at least some of the most recent attacks are believed to have originated in Eastern Europe.

It’s unclear how many cPanel users are affected by the most recently disclosed compromise. The hack has the potential to be serious because the passwords at risk could give unfettered control over a large number of customers’ Unix-based computers.


FCC orders 2M people to power down cell phone signal boosters

Thursday, February 21st, 2013

Devices must be turned off until customers ask for carrier approval.

The Federal Communications Commission today enacted a set of rules governing the sale and deployment of wireless signal boosters, devices consumers use to improve cell phone signals. More than 2 million of these devices are in use across the country, and until now consumers who bought them could just turn them on and let them work their magic.

Not anymore. Anyone who buys one of these devices from now on must seek the permission of carriers. Even the 2 million devices already in use must be turned off immediately unless their owners register them. The FCC states in an FAQ:

Did the FCC recently adopt new rules for signal boosters?

Yes. The FCC recently adopted new rules to improve signal booster design so these devices won’t cause interference to wireless networks. The FCC also adopted new rules about what cell phone users need to do before using a signal booster.

I already have a signal booster; do I need to do anything under the new rules?

Yes. Under the FCC’s new rules, you (1) need your wireless provider’s permission to use the booster, and (2) must register the booster with your wireless provider. Absent your provider’s permission, you may not continue using your booster.

For practical purposes, there is a good chance you could keep using that device without getting any threatening legal letters. But technically, the FCC could issue fines to customers who fail to comply, Public Knowledge Legal Director Harold Feld told Ars. There’s no word yet on what will happen to consumers who fail to register or whether carriers would actively seek them out.

(UPDATE: The FCC has already changed the language on that FAQ, indicating that the onus may not be on owners of existing devices to register with carriers. The FCC deleted the sentence that says “Absent your provider’s permission, you may not continue using your booster.” Instead, it now says, “If a wireless provider or the FCC asks you to turn off your signal booster because it is causing interference to a wireless network, you must turn off your booster and leave it off until the interference problem can be resolved. When the new rules go into effect, you will be able to purchase a booster with additional safeguards that protect wireless networks from interference.” For buyers of new devices, the FAQ does still say that “[b]efore use, you must register this device with your wireless provider and have your provider’s consent.”)

There are good reasons for the FCC to regulate these devices. They could cause interference with cellular networks, even if the ones today generally haven’t been too problematic. Everyone from consumer advocates to booster device makers, carriers, and the FCC agrees that standards to prevent interference are good. But Feld, other consumer advocates, and the makers of these devices say it’s unfair to consumers to make them register with carriers.

Carriers aren’t ready to detail registration policies

Major carriers haven’t said how the registration process will work, but one conceivable outcome is that they could charge customers an extra fee to use boosters, like they do with other devices that improve signals.

Wireless boosters are “saving the carriers money by not making them build more towers, but now they can charge you for improving the holes in their own network,” Feld said.

Requiring a specific carrier’s permission is odd, because a wireless booster can be used to improve signals on just about any network, said Michael Calabrese, director of the Wireless Future Project at the New America Foundation. Calabrese helped advise the government on its recent spectrum sharing plan. “97 percent of the boosters sold are wideband boosters, meaning they amplify the signals of all carriers equally,” Calabrese told Ars. “For some reason, the commission has delegated authority to the carrier.”

(UPDATE: After this story ran, we got a more positive take on the registration requirement from Wilson Electronics, a signal booster manufacturer. “We don’t see registration having a surcharge added to it, or being an end-all scenario,” the company said. “90+ carriers already gave blanket approval to boosters that meet the specs. And the [FCC] commissioners were clear about the registration not being cumbersome. They also mentioned that they suspect few people will actually register the products given that consent will already be given.” Wilson did oppose the registration requirement, but said on the whole the ruling was good and should only affect poorly designed products. Wi-Ex, maker of zBoost boosters, said “consumers do not have to power down the zBoost consumer products. Our Boosters do not interfere with the wireless providers’ networks—consumers can continue using them and look for the notice their provider will send them regarding registration of the product with them.”)

If carriers are stingy with device approvals, households with subscriptions to multiple carriers could have to purchase one booster for each carrier—even though there’s no technical reason preventing a single booster from covering phones from multiple carriers.

Even though consumers have to register devices they already bought, booster manufacturers were given one year to clear out existing inventory before they have to sell new hardware that meets the interference rules. If they can keep selling existing devices, it’s difficult to imagine they’ve caused cellular providers too much trouble.

FCC commissioner Robert McDowell said, “wireless service providers have experienced some harmful interference with boosters interacting with their networks.” McDowell also lauded boosters for improving signals, even in tunnels where cellular connections are often disrupted.

Commissioner Mignon Clyburn said for millions of people, cellular service disruptions “are more than rare, trivial annoyances.” Commissioner Jessica Rosenworcel said that 41 percent of children live in homes served only by cell phones and not landlines, making boosters one option for ensuring that emergency calls and doctor’s calls can be completed.

In addition to consumer-strength boosters for homes, small offices, and vehicles, there are industrial-strength wireless boosters designed for airports, hospitals, stadiums, etc. Industrial-strength boosters have to meet stricter interference standards because they transmit at higher power levels than consumer-grade ones, and anyone who operates an industrial-class booster will need an FCC license.

Carriers both large and small have reportedly assured the FCC that they won’t be unreasonable in providing approval for use of consumer-grade boosters.

We’ve asked Verizon Wireless, AT&T, T-Mobile, and Sprint to provide information on how the registration process will work and whether consumers will have to pay any extra fees when using wireless signal boosters. We haven’t gotten any specific answers, but that’s not surprising. AT&T pointed out to us this afternoon that the FCC hadn’t even released the text of its order yet. (The FCC has since released the order.)

The FCC booster FAQ we mentioned earlier says, “Most wireless providers consent to the use of signal boosters. Some providers may not consent to the use of this device on their network.” In other words, carriers generally allow it, but they’re not legally required to.

AT&T told us that it’s “pleased that the FCC has adopted technical standards designed to protect our customers from interference caused by signal boosters while allowing well-designed boosters to remain in the marketplace. For these standards to be most effective, however, it is important that they are coupled with appropriate enforcement and consumer outreach.”

Similarly, T-Mobile said it “supports the FCC’s decision today to facilitate the deployment of well-designed third-party signal boosters that can improve wireless coverage, provided they meet technical criteria to prevent interference and that consumers obtain consent from their service provider. T-Mobile will have more to say on this topic once we’ve had a chance to review the FCC’s Report and Order.”

Sprint declined comment today, but has publicly supported the FCC rules. Verizon told us “Our goal is always to make things as simple as possible for customers, and will work to do so here as well.”

FCC says it took the best deal it could get

After the FCC unanimously approved the new rules today, FCC Chairman Julius Genachowski was asked why the carrier-consent provision became part of the final order. He said that carriers have agreed to play nice, so the FCC let them decide which devices consumers get to use. He said the FCC could revisit its decision later on if carriers end up acting unreasonably.

“One of the things that helped facilitate an outcome here was the commitment by a number of carriers to provide that consent,” Genachowski said. “Our goal is to put in place clear rules of the road that enable and authorize signal boosters for consumers as quickly as possible, but as part of a framework that prevents interference. We all said, ‘You know what, this is the fastest way to get from A to B. Meanwhile, we’re going to monitor this.’ We expect it to work but we haven’t ruled out other options for the future if for some reason it doesn’t.”

Feld accused the FCC of rolling over for carriers by giving them the right to reject wireless boosters that customers want to use. Calabrese called it “profoundly anti-consumer.” Besides charging monthly fees, Calabrese said carriers could strike exclusive deals with device makers to make sure they get a cut of each device sale.

“They could change their mind at any point, and enter into a royalty agreement with a particular booster maker,” he said.

Feld explained that boosters take your cellular signal and increase its power so that it reaches a cell tower (whereas femtocells work by taking your cellular signal and dumping it onto your home wireline network). Wireless boosters could be particularly useful in rural areas and in emergency situations, when one tower is down but the booster allows a connection to a more distant tower, he said.

“What’s particularly irritating is we’ve got about 2 million people who bought these devices in good faith,” Feld said. “The FCC is requiring them to go and register with their carriers and get permission [to keep using them].”

Prior to the FCC decision, the CTIA Wireless Association said the commission should take the stance that “commercial wireless providers must consent to the use of signal boosters on their networks prior to their operation.”

“For the past decade, manufacturers of signal boosters have marketed and sold these devices without properly informing consumers of the need to work with affected licensees prior to operating devices,” the CTIA said.

Wireless signal booster maker Wilson Electronics pleaded its case with the FCC too, arguing that the registration requirement is nonsensical, given that new devices won’t even hit the market unless they meet interference requirements. Wilson’s boosters are designed for use in the home, offices, and vehicles, with devices costing anywhere from $140 to several hundred dollars.

“Now that the Commission is on the verge of adopting network protection standards that were collaboratively developed by carriers and manufacturers to safeguard wireless networks, it would defeat the purpose of the rulemaking for the Commission to decide to empower carriers to deny consumers access to robust boosters that have been designed to meet those very standards,” Wilson wrote in a letter to the FCC on Feb. 12.

Expanding Wi-Fi

The FCC today also approved a Notice of Proposed Rulemaking to expand the amount of spectrum available to Wi-Fi in the 5GHz band by 195MHz, or about 35 percent. While the wireless booster decision was a final order, the spectrum one was preliminary and generally uncontroversial. The FCC is just beginning what might be a two-year process to expand the 5GHz band. Some interference concerns will have to be addressed. Read our previous coverage for more details.


Ericsson: Cellular data demand doubled annually the last five years – Are you ready for ’13?

Tuesday, February 19th, 2013

Global cellular data traffic doubled in the past year, according to a report released by Ericsson, driven in particular by an increase in 4G LTE devices.

This increased demand for mobile data is expected to at least double again this year, as it has in each of the past five years (see the Ericsson graph above), which raises the question: Is your facility equipped to deal with the continued surge in cellular signal demand?
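The compounding behind that projection is worth spelling out: traffic that doubles every year for five years grows by a factor of 2^5 = 32, and another doubling this year would make it 64 times the original load.

```python
# Annual doubling compounds quickly: after n years, traffic is 2**n
# times the starting load.
for years in range(1, 7):
    print(years, 2 ** years)
# After 5 years of doubling, traffic is 32x the original;
# a sixth doubling takes it to 64x.
```

A network sized comfortably for today’s load can therefore be overwhelmed in just a few years if capacity planning assumes linear rather than exponential growth.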

Knowledge workers, sales staff, and others have come to rely almost exclusively on cell phones as they spend less and less time at their desks, to say nothing of clients and visitors who expect a reasonable level of mobile connectivity at your site.  Additionally, new workspace philosophies such as activity-based workplaces, mobility centers, hotelling and hot desking will only increase reliance on cellular connectivity.

Yet, even within the same office, hospital or university campus, warehouse or other facility, cellular signal can be drastically different, allowing some users to maintain acceptable mobile voice and data connections while other frustrated users drop calls and apps fail to connect to data sources.  Whether the fault lies with structural interference or inadequate cell network coverage in your area is irrelevant to your end users, as decreased productivity and morale can often result from an inability to communicate as expected.

These problems can be identified and remedied, however, with a cellular repeater/amplifier solution created specifically for your facility by qualified Gyver Networks RF engineers.

Gyver Networks will survey the location to create a complete picture of your RF environment, then engineer and install the optimal system to provide 3G, 4G, and LTE cellular signal to your building or campus, whether you require a DAS (distributed antenna system) or cellular base station.

Ensure that the continued increase in mobile demand doesn’t have a negative impact on your continued growth.  Contact Gyver Networks today for a free consultation.


100Gbps and beyond: What lies ahead in the world of networking

Tuesday, February 19th, 2013

App-aware firewalls, SAN alternatives, and other trends for the future.

The corporate data center is undergoing a major transformation the likes of which haven’t been seen since Intel-based servers started replacing mainframes decades ago. It isn’t just the server platform: the entire infrastructure from top to bottom is seeing major changes as applications migrate to private and public clouds, networks get faster, and virtualization becomes the norm.

All of this means tomorrow’s data center is going to look very different from today’s. Processors, systems, and storage are getting better integrated, more virtualized, and more capable of making use of greater networking and Internet bandwidth. At the heart of these changes are major advances in networking. We’re going to examine six specific trends driving the evolution of the next-generation data center and discover what both IT insiders and end-user departments outside of IT need to do to prepare for these changes.

Beyond 10Gb networks

Network connections are getting faster to be sure. Today it’s common to find 10-gigabit Ethernet (GbE) connections to some large servers. But even 10GbE isn’t fast enough for data centers that are heavily virtualized or handling large-scale streaming audio/video applications. As your population of virtual servers increases, you need faster networks to handle the higher information loads required to operate. Starting up a new virtual server might save you from buying a physical server, but it doesn’t lessen the data traffic over the network—in fact, depending on how your virtualization infrastructure works, a virtual server can impact the network far more than a physical one. And as more audio and video applications are used by ordinary enterprises in common business situations, the file sizes balloon too. This results in multi-gigabyte files that can quickly fill up your pipes—even the big 10Gb internal pipes that make up your data center’s LAN.

Part of coping with all this data transfer is being smarter about identifying network bottlenecks and removing them, such as immature network interface card drivers that slow down server throughput. Bad or sloppy routing paths can introduce network delays too. Typically, neither bad drivers nor bad routes have been examined carefully before now, because they were sufficient to handle less demanding traffic patterns.

It doesn’t help that more bandwidth can sometimes require new networking hardware. The vendors of these products are well prepared, and there are now numerous routers, switches, and network adapter cards that operate at 40- and even 100-gigabit Ethernet speeds. Plenty of vendors sell this gear: Dell’s Force10 division, Mellanox, HP, Extreme Networks, and Brocade. It’s nice to have product choices, but the adoption rate for 40GbE equipment is still rather small.

Using this fast gear is complicated by two issues. First is price: the stuff isn’t cheap. Prices per 40Gb port—that is, the cost of each 40Gb port on a switch—are typically $2,500, way more than a typical 10Gb port price. Depending on the nature of your business, these higher per-port prices might be justified, but it isn’t only this initial money. Most of these devices also require new kinds of wiring connectors that will make implementation of 40GbE difficult, and a smart CIO will keep total cost of ownership in mind when looking to expand beyond 10Gb.

As Ethernet has attained faster and faster speeds, the cable plant needed to run these faster networks has slowly evolved. The old RJ45 category 5 or 6 copper wiring and fiber connectors won’t work with 40GbE. New connections using the Quad Small Form-factor Pluggable (QSFP) standard will be required. Cables with QSFP connectors can’t be “field terminated,” meaning IT personnel or cable installers can’t cut orange or aqua fiber to length and attach SC or LC heads themselves. Data centers will need to figure out their cabling lengths ahead of time and pre-order custom cables manufactured with the connectors already attached. This is potentially a big barrier for data centers used to working primarily with copper cables, and it also means any current investment in your fiber cabling likely won’t cut it for these higher-speed networks of the future either.

Still, as IT managers get more of an understanding of QSFP, we can expect to see more 40 and 100 gigabit Ethernet network legs in the future, even if the runs are just short ones that go from one rack to another inside the data center itself. These runs terminate at “top of rack” switches, which link the servers in a rack, over slower connections, to a central switch or set of switches via a high-speed uplink. A typical configuration for the future might be one or ten gigabit connections from individual servers within one rack to a switch within that rack, and then a 40GbE uplink from that switch back to larger edge or core network switches. And as these faster networks are deployed, expect to see major upgrades in network management, firewalls, and other applications to handle the higher data throughput.
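The economics of that top-of-rack design come down to oversubscription: many slower server links share one fast uplink. A rough illustration, with hypothetical numbers chosen to match the configuration described above:

```python
# Hypothetical top-of-rack math: 40 servers with 10GbE links sharing a
# single 40GbE uplink to the core. The server count is an assumption
# for illustration, not a figure from any vendor.
servers = 40
server_link_gbps = 10
uplink_gbps = 40

aggregate_demand = servers * server_link_gbps   # worst-case 400 Gbps
oversubscription = aggregate_demand / uplink_gbps
print(oversubscription)  # → 10.0
```

A 10:1 ratio is workable because servers rarely all transmit at line rate simultaneously, but it shows why uplink speeds must keep climbing as racks get denser.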

The rack as a data center microcosm

In the old days when x86 servers were first coming into the data center, you’d typically see information systems organized into a three-tier structure: desktops running the user interface or presentation software, a middle tier containing the logic and processing code, and the data tier contained inside the servers and databases. Those simple days are long gone.

Still living on from that time, though, are data centers that have separate racks, staffs, and management tools for servers, for storage, for routers, and for other networking infrastructure. That worked well when the applications were relatively separate and didn’t rely on each other, but that doesn’t work today when applications have more layers and are built to connect to each other (a Web server to a database server to a scheduling server to a cloud-based service, as a common example). And all of these pieces are running on virtualized machines anyway.

Today’s equipment racks are becoming more “converged” and are handling storage, servers, and networking tasks all within a few inches of each other. The notion first started with blade technology, which puts all the essential elements of a computer on a single expansion card that can easily slip into a chassis. Blades have been around for many years, but the leap was using them along with the right management and virtualization software to bring up new instances of servers, storage, and networks. Packing many blade servers into a single large chassis also dramatically increases the density available in a single rack.

It is more than just bolting a bunch of things inside a rack: vendors selling these “data center in a rack” solutions provide pre-engineered testing and integration services. They also offer sample designs that make it easy to specify components that reduce cable clutter, and they provide software to automate management. This arrangement improves throughput and makes the various components easier to manage. Several vendors offer this type of computing gear, including Dell’s Active Infrastructure and IBM’s PureSystems. It used to be necessary for different specialty departments within IT to configure the different components: one group for the servers, one for the networking infrastructure, one for storage, and so on. That took a lot of coordination and effort. Now it can all be done coherently and from a single source.

Let’s look at Dell’s Active Infrastructure as an example. Dell claims it eliminates more than 755 of the steps needed to power on a server and connect it to your network. It comes in a rack with PowerEdge Intel servers, SAN arrays from Dell’s Compellent division, and blades that can be used for input/output aggregation and high-speed network connections from Dell’s Force10 division. The entire package is very energy efficient, and these systems can be deployed quickly. We’ve seen demonstrations from IBM and Dell where a complete network cluster is brought up from a cold start within an hour, all managed from a Web browser by a system administrator who could be sitting on the opposite side of the world.

Beyond the simple SAN

As storage area networks (SANs) proliferate, they are getting more complex. SANs now use more capable storage management tools to make them more efficient and flexible. It used to be the case that SAN administration was a very specialized discipline that required arcane skills and deep knowledge of array performance tuning. That is not the case any longer, and as SAN tool sets improve, even IT generalists can bring one online.

The above data center clusters from Dell and others are just one example of how SANs have been integrated into other products. Added to these efforts, there is a growing class of management tools that can help provide a “single pane of glass” view of your entire SAN infrastructure. These also can make your collection of virtual disks more efficient.

One of the problems with virtualized storage is that you can provision a lot of empty space on your physical hard drives that never gets touched by any of your virtual machines (VMs). In a typical scenario, you might have a terabyte of storage allocated to a set of virtual machines while only a few hundred gigabytes are actually used by their operating systems and installed applications. The dilemma is that you want the virtual drive to have room to grow, so you often tie up space that could otherwise be used. This is where dynamic thin provisioning comes into play. Most SAN arrays have some type of thin provisioning built in, letting you present storage without physically allocating it: a 1TB thin-provisioned volume reports itself as being 1TB in size but only takes up the amount of space its data actually uses. In other words, a physical 1TB chunk of disk could be “thick” provisioned into a single 1TB volume or thin provisioned into perhaps a dozen 1TB volumes, letting you oversubscribe the storage. Thin provisioning can play directly into your organization’s storage forecasting, letting you establish maximum volume sizes early and then buy physical disk to track the volumes’ growth.
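The oversubscription math behind thin provisioning can be sketched with a toy model (the class names here are hypothetical, purely for illustration; real arrays implement this in firmware):

```python
class ThinVolume:
    """Reports a full logical size but consumes space only as data lands."""
    def __init__(self, logical_gb):
        self.logical_gb = logical_gb   # the size the host sees
        self.used_gb = 0               # physical space actually consumed

    def write(self, gb):
        if self.used_gb + gb > self.logical_gb:
            raise IOError("volume full")
        self.used_gb += gb

class ThinPool:
    """A physical chunk of disk carved into thin-provisioned volumes."""
    def __init__(self, physical_gb):
        self.physical_gb = physical_gb
        self.volumes = []

    def provision(self, logical_gb):
        vol = ThinVolume(logical_gb)
        self.volumes.append(vol)
        return vol

    @property
    def used_gb(self):
        return sum(v.used_gb for v in self.volumes)

    @property
    def oversubscription(self):
        return sum(v.logical_gb for v in self.volumes) / self.physical_gb

pool = ThinPool(physical_gb=1024)                  # one physical 1TB chunk
vols = [pool.provision(1024) for _ in range(12)]   # a dozen "1TB" volumes
vols[0].write(300)                                 # one VM writes 300GB

print(pool.oversubscription)   # 12.0 -- 12TB promised against 1TB of disk
print(pool.used_gb)            # 300  -- only real data consumes space
```

The catch, of course, is that the forecasting has to be right: if the VMs collectively try to use more than the physical pool holds, writes fail, which is why arrays alert administrators well before the pool fills.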

Another trick many SANs can do these days is data deduplication. There are many different deduplication methods, with each vendor employing its own “secret sauce.” But they all aim to reduce or eliminate the same chunks of data being stored multiple times. When employed with virtual machines, data deduplication means commonly used operating system and application files don’t have to be stored in multiple virtual hard drives and can share one physical repository. Ultimately, this allows you to save on the copious space you need for these files. For example, a hundred Windows virtual machines all have essentially the same content in their “Windows” directories, their “Program Files” directories, and many other places. Deduplication ensures those common pieces of data are only stored once, freeing up tremendous amounts of space.
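The principle behind block-level deduplication can be sketched as a simplified model using fixed-size chunks and content hashes (real products differ in chunking strategy and hashing details, each vendor’s “secret sauce”):

```python
import hashlib

def dedup_store(virtual_disks, chunk_size=4096):
    """Toy block-level deduplication: identical chunks across many
    virtual disks are stored once, keyed by their content hash."""
    store = {}       # hash -> chunk, each unique chunk stored once
    disk_maps = []   # per-disk list of chunk references
    for disk in virtual_disks:
        refs = []
        for i in range(0, len(disk), chunk_size):
            chunk = disk[i:i + chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            store.setdefault(digest, chunk)   # only the first copy is kept
            refs.append(digest)
        disk_maps.append(refs)
    return store, disk_maps

# A hundred VMs whose disks are mostly identical (shared OS files),
# each with a tiny unique tail:
common = b"WINDIR" * 1000
disks = [common + bytes([i]) for i in range(100)]
store, maps = dedup_store(disks)

raw = sum(len(d) for d in disks)
deduped = sum(len(c) for c in store.values())
print(raw, deduped)   # the deduplicated store is a fraction of the raw size
```

Each disk map still lets every VM reconstruct its full virtual drive; only the physical storage is shared.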

Software-defined networks

As enterprises invest heavily in virtualization and hybrid clouds, one element still lags: the ability to quickly provision network connections on the fly. In many cases this is due to procedural or policy issues.

Some of this lag can be removed by having a virtual network infrastructure that can be provisioned as easily as spinning up a new server or SAN. The idea behind these software-defined networks (SDNs) isn’t new: indeed, the term has been around for more than a decade. A good working definition of SDN is the separation of the data and control functions of today’s routers, switches, and other networking infrastructure, with a well-defined programming interface between the two. Most of today’s routers and other networking gear mix the two functions. This makes it hard to adjust network infrastructure as we add tens or hundreds of VMs to our enterprise data centers. As each virtual server is created, you need to adjust your network addresses, firewall rules, and other networking parameters. These adjustments take time if done manually, and they don’t really scale if you are adding tens or hundreds of VMs at once.
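That control/data split can be illustrated with a toy match-action model in the spirit of OpenFlow (heavily simplified; real OpenFlow matches on many more header fields and speaks a wire protocol, not Python objects):

```python
class FlowTable:
    """Toy data plane: forwards packets by matching pre-programmed
    rules. No routing logic lives here."""
    def __init__(self):
        self.rules = []   # (match, action) pairs in priority order

    def forward(self, packet):
        for match, action in self.rules:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "send-to-controller"   # unknown flow: punt to control plane

class Controller:
    """Toy control plane: decides policy and programs the switches."""
    def program(self, switch, match, action):
        switch.rules.append((match, action))

switch, ctl = FlowTable(), Controller()

# A new VM comes online: the controller pushes the rules
# programmatically, with no manual login to the switch.
ctl.program(switch, {"dst_ip": "10.0.0.5"}, "output:port3")
ctl.program(switch, {"dst_port": 9999}, "drop")

print(switch.forward({"dst_ip": "10.0.0.5"}))   # output:port3
print(switch.forward({"dst_ip": "10.9.9.9"}))   # send-to-controller
```

Because the rules are data rather than device configuration, the same controller code can reprogram hundreds of switches when VMs are added, which is the scaling win SDN promises.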

Automating these changes hasn’t been easy. A few vendors have offered some early tools, but they were quirky and proprietary. Many IT departments employ virtual LANs (VLANs), which offer a way to segment physical networks into more manageable subsets with traffic optimization and other prioritization methods. But VLANs don’t necessarily scale well either: you can run out of headroom as the amount of data traversing your infrastructure puts more strain on managing multiple VLANs.

The modern origins of SDN came about through the efforts of two computer science professors, Stanford’s Nick McKeown and Berkeley’s Scott Shenker, along with several of their grad students. The project, called “Ethane,” began more than six years ago with the goal of improving network security with a new series of flow-based protocols. One of those students was Martin Casado, who went on to found an early SDN startup that was acquired by VMware in 2012. A big outgrowth of these efforts was the creation of a new networking protocol called OpenFlow.

Now Google and Facebook, among many others, have adopted the OpenFlow protocol in their own data center operations. The protocol has also gotten its own foundation, called the Open Networking Foundation, to move it through the engineering standards process.

OpenFlow offers a way to have programmatic control over how new networks are set up and torn down as the number of VMs waxes and wanes. Getting this collection of programming interfaces to the right level of specificity is key to SDN and OpenFlow’s success. Now that VMware is involved in OpenFlow, we expect to see some advances in products and support for the protocol, plus a number of vendors offering alternatives as the standards process evolves.

SDN makes sense for one particular use case right now: hybrid cloud configurations where your servers are split between on-premises facilities and an offsite or managed service provider. This is why Google et al. are using it to knit together their numerous global sites. With OpenFlow, they can bring up new capacity across the world and have it appear as a single unified data center.

But SDN isn’t a panacea, and for the short term it is probably easier for IT staff to add network capacity manually than to rip out their existing networking infrastructure and replace it with SDN-friendly gear. The vendors who have the lion’s share of this infrastructure are still dragging behind on the SDN and OpenFlow efforts, in part because they see them as a threat to their established businesses. As SDNs become more popular and the protocols mature, expect this situation to change.

Backup as a Service

As more applications migrate to Web services, one remaining challenge is being able to handle backups effectively across the Internet. This is useful in several situations, such as offsite disaster recovery, quick recovery from cloud-based failures, or outsourcing backup to a new breed of service providers.

There are several issues at stake here. First, building a fully functional offsite data center is expensive, and it requires both a skilled staff and a lot of coordinated effort to regularly test and tune the failover operations so a company can keep its networks and data flowing when disaster does strike. Through the combination of managed service providers such as Trustyd and vendors such as QuorumLabs, there are better ways to provide what is coming to be called “Backup as a Service.”

Both companies sell remote backup appliances, though they work somewhat differently. Trustyd’s appliance is first connected to your local network, where it makes its initial backups at wire speed. This sidesteps one of the limitations of any cloud-based backup service: making the initial backup means sending a lot of data over the Internet connection, which can take days or weeks (or longer!). Once this initial copy is created, the appliance is moved to an offsite location where it continues to keep in sync with your network across the Internet. Quorum’s appliance uses virtualized copies of running servers that are maintained offsite and kept in sync with the physical servers inside a corporate data center. Should anything happen to the data center or its Internet connection, the offsite servers can be brought online in a few minutes.

This is just one aspect of the potential problem with backup as a service. Another issue is in understanding cloud-based failures and what impact they have on your running operations. As companies virtualize more data center infrastructure and develop more cloud-based apps, understanding where the failure points are and how to recover from them will be key. Knowing what VMs are dependent on others and how to restart particular services in the appropriate order will take some careful planning.

An exemplary idea is how Netflix developed a series of tools called “Chaos Monkey,” which it has since made publicly available. Netflix is a big customer of Amazon Web Services, and to ensure that it can continue to operate, the company constantly and deliberately fails parts of its Amazon infrastructure. Chaos Monkey seeks out Amazon’s Auto Scaling Groups (ASGs) and terminates the various virtual machines inside a particular group. Netflix released the source code on Github and claims it can be adapted for other cloud providers with a minimum of effort. If you aren’t using Amazon’s ASGs, this might be a motivation to try them out. The service is a powerful automation tool and can help you run new (or terminate unneeded) instances when your load changes quickly. Even if your cloud deployment is relatively modest, at some point your demand will grow, and you don’t want to depend on your coding skills or on having IT staff awake and ready to respond when that happens. ASG makes it easier to juggle the various AWS service offerings to handle varying load patterns. Chaos Monkey is the next step in your cloud evolution and automation. The idea is to run this automated routine during a limited set of hours with engineers standing by to respond to the failures it generates in your cloud-based services.
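The core of the Chaos Monkey idea fits in a few lines. This is a generic sketch, not Netflix’s actual code; the `terminate` callback is a hypothetical stand-in for a cloud provider’s API call:

```python
import random

def chaos_monkey(scaling_groups, terminate, rng=None):
    """Pick one running instance per auto-scaling group and kill it,
    so failure handling is exercised constantly rather than only
    during real outages. `terminate` is a hypothetical callback
    standing in for a cloud provider API call."""
    rng = rng or random.Random()
    killed = []
    for group, instances in scaling_groups.items():
        if not instances:
            continue
        victim = rng.choice(instances)
        terminate(group, victim)
        killed.append((group, victim))
    return killed

# Usage sketch with a stand-in terminate function that just logs:
groups = {"web-asg": ["i-01", "i-02", "i-03"], "db-asg": ["i-10", "i-11"]}
log = []
chaos_monkey(groups, lambda g, i: log.append((g, i)), rng=random.Random(42))
print(log)   # one randomly chosen victim per group
```

The point of the design is that the scaling group, not any individual instance, is the unit of reliability: if killing a random instance causes a visible outage, the architecture (not the monkey) is the bug.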

Application-aware firewalls

Firewalls are well-understood technology, but they’re not particularly “smart.” The modern enterprise needs a deeper understanding of all the applications operating across its network so that it can better control and defend itself. With early-generation firewalls, it was difficult to answer questions like:

  • Are Facebook users consuming too much of the corporate bandwidth?
  • Is someone posting corporate data, such as customer information or credit card numbers, to a private e-mail account?
  • What changed with my network that’s impacting the perceived latency of my corporate Web apps today?
  • Do I have enough corporate bandwidth to handle Web conference calls and video streaming? What is the impact on my network infrastructure?
  • What is the appropriate business response time of key applications in both headquarters and branch offices?

The newer firewalls can answer these and other questions, because they are application-aware. They understand the way applications interact with the network and the Internet, and firewalls can then report back to administrators in near real time with easy-to-view graphical representations of network traffic.

This new breed of firewall and packet inspection product is made by big-name vendors such as Intel/McAfee, Blue Coat Systems, and Palo Alto Networks. The firewalls of yesteryear were relatively simple devices: you specified a series of firewall rules that listed particular ports and protocols and whether you wanted to block or allow network traffic through them. That worked fine when applications were well-behaved and used predictable ports, such as file transfer on ports 20 and 21 and e-mail on ports 25 and 110. With the rise of Web-based applications, ports and protocols don’t work as well. Everyone is running their apps across ports 80 and 443, in no small part because of port-based firewalling. It’s becoming more difficult to distinguish between mission-critical apps and a rogue peer-to-peer file service that needs to be shut down.
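The old port-and-protocol model is easy to sketch, and the sketch makes its blind spot obvious (toy code, illustrative only):

```python
def port_firewall(rules, packet):
    """Toy old-style firewall: decisions come only from port and
    protocol, with a default deny. It cannot tell a mission-critical
    web app from a rogue file-sharing client once both use port 443."""
    for rule in rules:
        if rule["proto"] == packet["proto"] and rule["port"] == packet["port"]:
            return rule["action"]
    return "deny"

rules = [
    {"proto": "tcp", "port": 25,  "action": "allow"},   # e-mail (SMTP)
    {"proto": "tcp", "port": 443, "action": "allow"},   # HTTPS
]

# The "app" field is invisible to the rule engine -- that is the problem:
print(port_firewall(rules, {"proto": "tcp", "port": 443, "app": "crm"}))   # allow
print(port_firewall(rules, {"proto": "tcp", "port": 443, "app": "p2p"}))   # allow
print(port_firewall(rules, {"proto": "tcp", "port": 6881, "app": "p2p"}))  # deny
```

An application-aware firewall, by contrast, classifies the traffic itself (by inspecting payloads and behavior) and can key its decisions on that classification rather than on the port number alone.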

Another aspect of advanced firewalls is being able to look at changes to the network and see the root causes, or viewing time-series effects as your traffic patterns differ when things are broken today (but were, of course, working yesterday). Finally, they allow administrators or managers to control particular aspects of an application, such as allowing all users to read their Facebook wall posts but not necessarily send out any Facebook messages during working hours.

Going on from here

These six trends are remaking the data center into one that can handle higher network speeds and more advances in virtualization, but they’re only part of the story. Our series will continue with a real-world look at how massive spikes in bandwidth needs can be handled without breaking the bank at a next-generation sports stadium.


Retail copies of Office 2013 are tied to a single computer forever

Friday, February 15th, 2013

With the launch of Office 2013, Microsoft has seen fit to upgrade the terms of the license agreement, and it’s not in favor of the end user. It seems installing a copy of the latest version of Microsoft’s Office suite of apps ties it to a single machine. For life.

What does that mean in real terms? It means if your machine dies or you upgrade to a new computer you cannot take a copy of Office 2013 with you to new hardware. You will need to purchase another copy, which again will be tied to the machine it is installed upon forever.

This license change has been confirmed by The Age’s reporter Adam Turner after several frustrating calls to Microsoft’s tech support and PR departments. It effectively turns Office 2013 into the equivalent of the Windows OEM license, where you get one chance to use it on a single piece of hardware.

On previous versions of Office it was a different story. The suite was associated with a “Licensed Device” and could only be used on a single device. But there was nothing to stop you uninstalling Office and installing it on another machine perfectly legally. With that option removed, Office 2013 effectively becomes a much more expensive proposition for many. As a reminder, Office 2013 costs anywhere from $140 to $400 depending on the version chosen (Office Home & Student, Office Home & Business, or Office Professional), all of which carry the new license agreement.

Of course, Microsoft has a solution to this in the form of Office 365. Instead of buying a retail copy tied to a single machine, you could instead subscribe to Office 365, which is tied to the user not the hardware, and can be used across 5 PCs or 4 Macs at any one time. But subscriptions aren’t for everyone, and eventually you end up paying more for the software.

It’s more likely these new license terms will push users to choose an alternative to Office 2013 or Office 365. Both OpenOffice and LibreOffice are free and good enough for the consumer market. Google is also continuing to push its free-to-use Google Docs as an alternative to Office.


Sports Stadiums and Wi-Fi Connectivity: Will It Ever Get Better?

Thursday, February 14th, 2013

Reliable, speedy, in-stadium wireless connectivity is what most sports fans dream of.

The roar of the crowd rips through the stadium as the last shot clinches victory for the hometown team. You let out a triumphant hoot, embrace the people around you in the stands and relish the joyful delirium as it sets in.

Right then and there, in that moment of bliss, you decide you want to share this moment with the world, so you whip out your smartphone to post a status update to Facebook. But when you open up the Facebook app, you’re hit with the non-connectivity message of doom: “No Internet Connection.”

If you’ve ever gone to a professional sporting event — an NBA, NHL, NFL or MLS game for starters — you’ve probably run into this frustrating scenario a time or two. And you’re not alone.

Connectivity inside stadiums was a hot topic among sports executives and technologists at the On Deck Sports & Technology Conference in New York City.

“Right now we’re not meeting the fans’ expectations,” said Matt Higgins, CEO of RSE Ventures, a sports and entertainment venture capital firm, during a session. “In most venues, you can’t even send a text message. Fans expect to be able to interact with content in venue as they do at home.”

But why haven’t we figured out wireless connectivity yet? After all, Wi-Fi was born in the late 1990s, and we’re now on the fourth iteration of the mobile broadband standard, so what’s the holdup? It turns out that the answers to this technology mystery aren’t so easy to unravel.

What’s to Blame: Inadequate Funds or Technology?

When it comes to pointing fingers at what, exactly, is gumming up the Wi-Fi in stadiums, people cast blame on either the lack of money invested in connectivity or the inadequacy of wireless technology itself.

Higgins believes that teams investing in wireless technology by themselves isn’t going to cut it. The leagues need to band together to make more substantial investments, he argues.

“[The leagues] should bring everyone together and make the investment,” Higgins said. “A team can’t come up with the ROI on that investment. In the venues that are aggressive, like we are in Miami, we have 1,200 Wi-Fi access points. But it’s expensive.”

Scott O’Neil, former president of Madison Square Garden Sports, is of the mind that throwing money at the problem won’t solve it. In his experience, wireless technology can’t handle the demands of a connected sea of fans in a stadium.

“The technology just isn’t there yet,” O’Neil said during a session. When he worked for MSG Sports, upgrades to infrastructure technology alone didn’t deliver the wireless experience most people get at home.

Does this mean technology has left us out to dry? Not quite.

In another panel session at the conference, Gene Arantowicz, senior director of business development in Cisco’s Sports and Entertainment Group, said the technology necessary to deliver Wi-Fi in stadiums “has arrived.”

“It’s [called] a high-density wireless network,” he said. Wireless technology in stadiums is something Cisco Systems in particular has been aggressively innovating on. The company has dubbed these wireless-ready stadiums “Cisco Connected Stadiums.” And if you’ve ever been inside one, you’ll be impressed.

Arantowicz does acknowledge, however, that the Cisco experience is not yet ubiquitous in most stadiums.

“Not everybody has it,” he said. It’s been rolled out by Cisco only “in over 100 deployments in 20 some countries.”

Arantowicz cited the company’s recently announced partnership with the Barclays Center in Brooklyn, which will allow fans to stream live video in the stadium, as an example of the innovative stadiums that are within reach.

Bob Jordan, senior vice president of the Van Wagner Sports Group, applauds the advances that technology has made, but he also points out that Wi-Fi wasn’t necessarily built for the stadium use case.

“High-density Wi-Fi is good, but [Wi-Fi] really wasn’t designed to do that,” he said.

Using baseball stadiums as an example, Jordan highlighted the infrastructural challenges MLB teams face since the distance from the dugout to the outfield exceeds the range of even the most advanced access points.

“At the end of the day, we have a convergence of the way the buildings are built and [we’re] wrestling with how do we get [wireless] in these venues,” he said.

The Wi-Fi challenges don’t mean anyone in sports is going to give up trying to improve connectivity anytime soon though. It just means the sports industry has its work cut out for it.

For any team operating in today’s market, “technology is a requirement,” Jordan said definitively. And there’s no getting around it.


Bali to get 250,000 Wi-Fi access points

Thursday, February 14th, 2013

PT Telekomunikasi Indonesia (Telkom) has targeted installation of 250,000 WiFi access points across Bali through 2013.

“There will be 50,000 WiFi access points in public places, and 200,000 in housing facilities,” declared Ketut Budi Utama, general manager of PT Telkom’s south Bali branch office.

At present, there are only 3,800 WiFi access points in public places, out of the company’s planned 15,000 to be installed in the first quarter of 2013.

Meanwhile, PT Telkom has only installed 3,000 WiFi access points in housing facilities in Bali.

Previously, the company had stated it was targeting installation of 1 million wireless Internet access points across Indonesia.


Microsoft brings solar Wi-Fi to rural Kenya

Thursday, February 14th, 2013

Using derelict TV frequencies, old-fashioned antennas and solar power, Microsoft is trialling a pioneering form of broadband technology in Africa

GAKAWA Senior Secondary School is located in Kenya’s western Rift Valley Province, about 10 kilometres from Nanyuki town. It is not an easy place to live. There are no cash crops, no electricity, no phone lines, and rainfall is sporadic to say the least.

“For internet access we had to travel the 10 kilometres to Nanyuki and it would cost 100 Kenya shillings [about $1.20] to get there,” says Beatrice Nderango, the school’s headmistress.

Not for much longer. Solar-powered Wi-Fi is being installed in the area that will give local people easy access to the internet for the first time. The pilot project – named Mawingu, the Swahili word for “cloud” – is part of an initiative by Microsoft and local telecoms firms to provide affordable, high-speed wireless broadband to rural areas. If and when it is rolled out nationwide, as planned, it will mean that Kenya could lead the way with a model of wireless broadband access that in the West has been tied up in red tape.

Because the village has no power, Microsoft is working with Kenyan telecoms firm Indigo to install solar-powered base stations that supply a wireless signal at a bandwidth that falls into what is called the “white spaces” spectrum.

This refers to the bits of the wireless spectrum that are being freed up as television moves from analogue to digital – a set of frequencies between 400 megahertz and about 800 megahertz. Such frequencies penetrate walls, bend around hills and travel much longer distances than the conventional Wi-Fi we have at home. That means the technology requires fewer base stations to provide wider coverage, and wannabe web surfers in the village need only a traditional TV antenna attached to a smartphone or tablet to access the signal and get online. Microsoft is supplying some of these devices for the trial, as well as solar-powered charging stations.

To begin with, Indigo has set up two solar-powered white-space base stations in three villages to deliver wireless broadband access to 20 locations, including schools, healthcare clinics, community centres and government offices.

“Africa is the perfect location to pioneer white-space technology,” says Indigo’s Peter Henderson, thanks to governments’ open-mindedness. Indeed, Kenya has a strong chance of being in the global vanguard of white-space roll-out. While the US has already legalised use of derelict TV bands, it has yet to standardise the database technology that will tell devices which frequencies are free to use at their GPS location.

In the UK, white-space access should finally be up and running by the end of 2013, says William Webb of white-space startup Neul in Cambridge. “White-space trials are also taking place in Japan, Indonesia, Malaysia, South Africa and many other countries – and some of these may move directly to allowing access without needing lengthy consultations,” he says. In many cases, it has been these consultations that have slowed the technology’s progress.

Microsoft aims to roll out the initiative to other sub-Saharan African nations. “Internet access is a life-changing experience and it’s going to give both our students and teachers added motivation for learning,” says Nderango. “It will also make my job as headmistress a little easier.”


Emergency Alert System devices vulnerable to hacker attacks, researchers say

Thursday, February 14th, 2013

Devices used by many radio and TV stations to broadcast emergency messages as part of the U.S. Emergency Alert System (EAS) contain critical vulnerabilities that expose them to remote hacker attacks, according to researchers from security consultancy firm IOActive.

The EAS is a national public warning system that can be used by the president or local and state authorities to deliver emergency information to the general public. This information is transmitted by broadcasters, cable television systems, wireless cable systems, satellite digital audio radio service (SDARS) providers, and direct broadcast satellite (DBS) providers.

EAS participants are required to install and maintain special decoding and encoding devices on their infrastructure that allow the transmission and relay of EAS messages.

IOActive Labs researcher Mike Davis found several critical vulnerabilities in EAS devices that are widely used by radio and TV stations nationwide, said Cesar Cerrudo, chief technology officer of IOActive, Wednesday via email.

The vulnerabilities allow attackers to remotely compromise the devices and broadcast fake EAS messages, he said. “We contacted CERT [U.S. Computer Emergency Readiness Team] almost a month ago and CERT is coordinating with the vendor to get the issues fixed.”

At least two products from one of the main vendors of EAS devices are affected, so many radio and TV stations could be vulnerable, he said.

Cerrudo declined to name the vulnerable products or the affected vendor before the vulnerabilities get fixed. He hopes that this will happen soon so that IOActive researchers can discuss their findings at the RSA 2013 security conference in San Francisco later this month.

“We found some devices directly connected to the Internet and we think that it’s possible that hackers are currently exploiting some of these vulnerabilities or some other flaws,” Cerrudo said.

On Monday, hackers compromised the EAS equipment of several local TV stations in Michigan and Montana. The attackers interrupted regular programming to broadcast an audio message alerting viewers that “the bodies of the dead are rising from their graves and are attacking the living.”

The affected stations included ABC 10, CW 5 and Northern Michigan University’s WNMU-TV 13 in Marquette, Michigan, and CBS affiliate KRTV in Great Falls, Montana.

“The hacker responsible for creating and airing a bogus Emergency Alert System message on the air at ABC 10 — CW 5 Monday evening and at least three other TV stations, including WNMU-TV 13 at Northern Michigan University, has been found,” ABC 10 station manager Cynthia Thompson said Tuesday in a blog post. “It has been determined that a ‘back door’ attack allowed the hacker to access the security of the EAS equipment.”

“Providing Emergency Alert information is a vital duty of a broadcaster,” Thompson said. “The nature of the message Monday night was not necessarily dangerous, but the fact that the system was vulnerable to outside intrusion is a danger.”

Cerrudo agreed. These issues are critical because next time hackers might use these systems to generate real panic instead of making zombie apocalypse jokes, he said.

For example, they could send an emergency message claiming there’s an ongoing terrorist attack using anthrax that has already made victims, he said. “This would really scare the population and depending how the attack is performed it could have drastic consequences.”


SDN meet sheds little light on Cisco ‘Daylight’

Thursday, February 14th, 2013

Despite the promise of SDNs and the filled sessions at Network World’s Open Network Exchange conference this week, lots of questions remain on the technological trend virtually everyone says will redefine networking.

First, what is this “Daylight” open source SDN controller Cisco, IBM, HP, NEC and Citrix are reportedly developing? SDNCentral broke the story last week that the companies are forming an open-source foundation, modeled after the Apache and OpenStack foundations, and plan to unveil the Daylight controller project around the time of the Open Networking Summit conference in April.

Other questions swirled around the Open Network Exchange conference as well: is anyone working to standardize the northbound API between controllers and the orchestration systems and applications above them? Currently, those APIs are proprietary and/or various versions of RESTful or Java APIs.

Also, OpenFlow is the most popular southbound API between controllers and switches, but is there work underway to standardize a controller-to-controller API for redundancy, failover, and scalability requirements? Currently there is not, participants agreed.

And lastly, can SDN, with its promise of network virtualization, emulate the models of server and storage virtualization that came a generation before it? And can those models serve as successful schematics for how network virtualization should unfold?

“Virtualizing the network is going to be the next big thing,” says Casimer DeCusatis, distinguished engineer in IBM Systems Networking. “It’s going to change the playing field a little bit. Those that have been successful in networking are not necessarily going to remain successful. The only way this works at all is if it’s based on open industry standards and interoperability.”

DeCusatis would not comment on the Daylight project or the accuracy of the SDNCentral reports. Neither would Don Clark, director of corporate business development for NEC; Sarwar Raza, director of Cloud & Software Defined Networking at HP Networking; or a Cisco spokesperson.

But Daylight sounds a lot like what Juniper was referring to when it disclosed an effort to coalesce the industry around an open source controller to serve as a third “standard” alternative to those from Cisco/Insieme and VMware/Nicira.  Juniper backtracked a bit from that when it divulged its overall SDN strategy earlier this year, saying that its vision had “evolved” since then to a belief that open source may not be the best source for core controller functionality.

This week, Juniper would not comment when asked if it would join or support or participate in the Daylight effort, but said it dovetails with its own SDN principles. From a company spokesperson:

As you know, we believe industry collaboration and a move toward standards will be key to realizing the potential of SDN and we are actively involved in key industry initiatives that will play a role in this effort. Juniper’s approach to SDN includes six principles aimed at helping enterprises and service providers address their networking challenges. A large part of SDN’s success, and the fifth principle in Juniper’s vision, will hinge on standardizing protocols for interoperable, heterogeneous support across vendors, providing choice and lowering cost.

We are not going to comment directly on the effort described but it is consistent with the principles of SDN we’ve discussed over the past few weeks. We believe industry collaboration/standards will be key to realizing the potential.

Sources say the Daylight effort will not stop with the companies already identified as participants; the roster is expected to expand by the time it’s announced in April. There are also rumors that the effort is being led by David Ward, Cisco’s service provider chief architect and chief technology officer, but that could not be confirmed.

But a padded roster doesn’t necessarily mean all are behind the effort. Some may simply want to witness the progress and learn from it; others may want to try to blunt its momentum.

Jim Metzler of consultancy Ashton, Metzler & Associates, and moderator of the Open Network Exchange conference, noted the same dynamic in the Open Networking Foundation, which has 90 user, vendor, and service provider members ostensibly rallying behind OpenFlow:

“Some are pushing (the effort), some are watching it and some are slowing it down,” he said. “It’s like herding kittens.”

Until the consortia sort it all out, users at the Open Network Exchange still have open questions about SDNs: implementation, use cases, and benefits.

“What are some of the inhibitors” of SDNs, one participant asked, before rattling off a litany of potential obstacles, like education and training of those with CLI-intensive skill sets honed over decades; vendors not adopting standards fast enough; co-existence of “pure” SDN switches and networks with those supporting hybrid implementations; ambiguity on how latency will be handled in controller-to-switch interactions; production proof points; and justification of its value to business.

“How do you sell it internally?” the participant asked. “How do you make the case that it’s not bleeding edge, but a lot of dollars that could be saved?”

Others wanted to hear more about what wasn’t discussed in vendor presentations at the conference.

“We haven’t talked about future ASIC development going forward,” another participant said. “What’s going to happen in ASICs to facilitate SDN? What about name and directory service tie-in? How is failover handled, if at all? There was nothing mentioned about QoS; and very little was said about the enterprise – it was too data center focused.”


Microsoft suggests fix for iOS 6.1/Exchange problem: Block iPhone users

Thursday, February 14th, 2013

iOS 6.1 hammering Exchange, dragging down server performance.

iOS 6.1 devices are hammering Exchange servers with excessive traffic, causing performance slowdowns that led Microsoft to suggest a drastic fix for the most severe cases: throttle traffic from iOS 6.1 users or block them completely.

“When a user syncs a mailbox by using an iOS 6.1-based device, Microsoft Exchange Server 2010 Client Access server (CAS) and Mailbox (MBX) server resources are consumed, log growth becomes excessive, memory and CPU use may increase significantly, and server performance is affected,” Microsoft wrote on Tuesday in a support document.

The problem also affects Exchange Online in Microsoft’s Office 365 cloud service. Office 365 customers may get an error message on iOS 6.1 devices stating “Cannot Get Mail: The connection to the server failed.” The Microsoft support article says both Apple and Microsoft are investigating the problem.

Microsoft suggests several fixes, starting out gently, then escalating to the complete blockage of iOS 6.1 devices. Based on the fixes suggested, the problems may be caused when iOS devices connect to Exchange calendars.

The first workaround is “do not process Calendar items such as meeting requests on iOS 6.1 devices. Also, immediately restart the iOS 6.1 device.”

If that doesn’t work, users are instructed to remove their Exchange accounts from their phones or tablets while the Exchange Server administrator runs a “remove device” command on the server side. After 30 minutes, users can add the Exchange accounts back onto their devices but should be advised “not to process Calendar items on the device.”

If that doesn’t work, the fixes get more serious. The next method is for the server administrator to create a custom throttling policy limiting the number of transactions iOS 6.1 users can make with the server. “The throttling policy will reduce the effect of the issue on server resources,” Microsoft notes. “However, users who receive the error should immediately restart their devices and stop additional processing of Calendar items.”

One Exchange administrator who created a throttling policy through PowerShell to solve the problem provides a guide here, but Microsoft also has a page providing instructions.

Finally, the last method Microsoft recommends is to block iOS 6.1 users. “You can block iOS 6.1 users by using the Exchange Server 2010 Allow/Block/Quarantine feature,” Microsoft notes. (See this post for more detailed instructions.)

Businesses of all sizes limiting or blocking iOS devices

We don’t know exactly how widespread this problem is. It’s clearly not affecting everyone, but the impact seems to run the gamut from small businesses to large.

“We’re using Exchange 2010 in a small software firm with about a dozen iOS users (each with multiple iOS devices),” Shourya Ray, chief administrative officer of Spin Systems in Virginia, told Ars via e-mail. “Last week our Exchange server froze (internal mail was being routed, but external mail stopped flowing).”

It turned out that the 300GB VMware virtual machine hosting the Exchange server was full. “You can imagine our surprise when that VM filled up overnight,” Ray said. “If we were running Exchange in a typical hardware-based server with a 1TB drive, it would have taken us a week to realize the problem.”

How did it happen, and how did the company get things working “normally” again? “The transaction log had 200,000 records and was the indication of a problem,” Ray said. “Our temporary solution has been to ask iOS users to switch to manual pull rather than ActiveSync push. For heavy e-mail users, we are recommending an automatic pull every 30 minutes. So far, that seems to have kept Exchange happy with no other issues since last week. Let’s hope that Apple and Microsoft put their heads together and fix this soon.”

We heard from several other people on Twitter who have been bitten by the iOS 6.1/Exchange problem. One said, “My 22,000+ employee enterprise has blocked iOS 6.1, execs all have iOS.”

A support thread on Microsoft’s Exchange Server site was opened January 31 to discuss the excessive logging caused by iOS 6.1. The server administrator who began the thread identified an iPad that “caused over 50GB worth of logs” on a single database.

The thread got more than a dozen replies. One Exchange administrator explained that “malformed meetings on a device cause the device to get into a sync loop which causes excessive transaction log growth on the Exchange mailbox servers.” This in turn “will cause Exchange performance issues and potentially transaction log drives to run out of disk space which would then bring down Exchange.”

To solve the problem, this admin simply “disabled all iOS 6.1 on our Exchange system.”

iOS 6.1 was released on January 28. iOS 6.1.1 came out a couple of days ago, but for now it can only be installed on the iPhone 4S and is designed to fix cellular performance and reliability. Apple didn’t mention anything about Exchange fixes when releasing this latest version. Last year, iOS 6.0.1 fixed an Exchange problem that could lead to entire meetings being canceled when even a single iOS user declined a meeting invitation.

The iOS 6.1 problem isn’t the first time iOS has caused Exchange servers to perform poorly. An Apple support article from 2010 describes sync problems in iOS 4 and says, “Exchange Server administrators may notice their servers running slowly.” At the time, Microsoft noted iOS 4 led to “Exchange administrators… seeing heavier than normal loads on their servers from users with iOS devices.” Microsoft got in touch with Apple to fix that problem.

We’ve asked both Apple and Microsoft how many users are affected by the latest problem and when a more permanent fix is coming. We also asked Apple if it agrees with the workarounds suggested by Microsoft. Microsoft told us it has nothing else to say, as the “support article contains the latest.” Apple has not yet responded to our request for comment.

UPDATE: Apple posted a support document of its own today, describing the problem thusly:

When you respond to an exception to a recurring calendar event with a Microsoft Exchange account on a device running iOS 6.1, the device may begin to generate excessive communication with Microsoft Exchange Server. You may notice increased network activity or reduced battery life on the iOS device. This extra network activity will be shown in the logs on Exchange Server and it may lead to the server blocking the iOS device. This can occur with iOS 6.1 and Microsoft Exchange 2010 SP1 or later, or Microsoft Exchange Online (Office365).

Apple’s suggested fix is to turn the Exchange calendar off and back on again within the iPhone’s settings. An operating system update to fix the problem is on the way. “Apple has identified a fix and will make it available in an upcoming software update,” Apple said.


FCC invests $10M in new network security but leaves backdoor unlocked

Wednesday, February 13th, 2013

GAO finds job was rushed, sloppy—some problems too severe to share with public.

In August of 2011, while in the middle of upgrading its network security monitoring, the Federal Communications Commission discovered it had already been hacked. Over the next month, the commission’s IT staff and outside contractors worked to identify the source of the breach, finding an unspecified number of PCs infected with backdoor malware.

After pulling the infected systems from the network, the FCC determined it needed to do something dramatic to fix the significant security holes in its internal networks that allowed the malware in. The organization began pulling together a $10 million “Enhanced Secured Network” project to accomplish that.

But things did not go well with ESN. In January, a little less than a year after the FCC presented its plan of action to the House and Senate’s respective Appropriations Committees, a Government Accountability Office audit of the project, released publicly last week, found that the FCC essentially dumped that $10 million in a hole. The ESN effort failed to properly implement the fixes, and it left software and systems put in place misconfigured—even failing to take advantage of all the features of the malware protection the commission had selected, leaving its workstations still vulnerable to attack. In fact, the full extent of the problems is so bad the GAO’s entire findings have been restricted to limited distribution.

“As a result of these and other deficiencies, FCC faces an unnecessary risk that individuals could gain unauthorized access to its sensitive systems and information,” the report concluded. And much of the work done to deploy the security system must be redone before the FCC’s systems approach anything resembling the security goals set for the project.

The FCC’s leadership acknowledges there’s a lot left to be done. “The GAO’s review of this project covers a period of time during which the Commission faced an unusual level of urgency, and we look forward to sharing our further progress with Congress and GAO at a later time, when these security initiatives are more fully deployed and developed,” FCC Managing Director David Robbins wrote in response to the GAO’s findings. But the commission also has some personnel issues to address—all of this is transpiring as the FCC looks for a new chief information officer. Ironically, the FCC’s CIO Robert Naylor stepped down in January to take a new job; he is now the CIO of a cyber security firm that caters to the intelligence community.

Measure once, cut twice

The FCC is a small organization as government agencies go, with about 2,000 employees and a budget request for 2013 of $340 million. It relies heavily on outside help for its IT operations—and on more outside help to figure out how to buy that help. Acquisition for the ESN project was managed by Octo Consulting Group, a company led by three former Gartner executives and the former CIO of the Department of Agriculture’s Forest Service. The company claims on its website to have “designed the FCC Cyber Security Strategy, and managed and executed three defining Cyber Security contracts.” The consulting firm also provided contracting support for the FCC’s CIO as all of its major IT support contracts were preparing to expire in mid-2012.

Update: “Octo was responsible for providing ‘acquisition support to the FCC’ for the ESN contract (i.e., assisting FCC Acquisition & Contracts personnel with developing the Statement of Work used to acquire the hardware and services for the $10M ESN contract you referenced),” Octo Consulting Group president Mehul Sanghani said in an email to Ars. “Once the contract was awarded, Octo was also tasked with providing project management support to supplement the FCC IT staff that was tasked with overseeing the work.” The actual work on ESN was done by MicroTech and subcontractor Booz Allen Hamilton.

At the time of the discovery of the network intrusion in 2011, the FCC’s network security was dated at best. The ESN project, which was originally projected to be completed this month, is intended to “enhance and augment FCC’s existing security controls through changes to the network architecture and by implementing, among other things, additional intrusion detection tools, network firewalls, and audit and monitoring tools,” according to the GAO. The program was also supposed to provide the FCC with an ongoing “cyber threat analysis and mitigation program” that would do continuous risk assessment and reduction and control the damage from attacks that managed to breach the commission’s security measures.

Contracts to do the work on ESN were awarded in April of 2012, just two months after plans for the project were submitted to Congress. By June, all of the security hardware and software licenses had been purchased. Implementation was in full swing.

But apparently the work was done so quickly that no one bothered to check it. While new security hardware and software was deployed, the GAO found that “FCC did not effectively implement or securely configure key security tools and devices to protect these users and its information against cyber attacks… Certain boundary protection controls were configured in a manner that limited the effectiveness of network monitoring controls.”

The rush to get things in place also led to some other sloppy work. The GAO’s auditors found that passwords to gain access to some of the network monitoring systems “were not always strongly encrypted.” And while tools had been put in place to detect malware and block malicious network traffic, the tools had been left only partially configured.

The mishandling of security is being raised as an issue by some who do business with the FCC, especially because news of the original breach was never disclosed to the public—even as the FCC was formulating a proposed rule that would require people with commercial interests in broadcast stations to submit their social security numbers to an FCC database. As Harry Cole, a communications lawyer with the firm Fletcher, Heald, and Hildreth, put it in a post to the firm’s blog, “it seems extraordinarily inappropriate for the Commission, knowing of those vulnerabilities, to then propose that a huge number of folks must provide to the FCC the crown jewels of their identity, their social security numbers.”


Wi-Fi expansion could harm smart car wireless network, automakers say

Wednesday, February 13th, 2013

FCC wants to expand Wi-Fi in spectrum used by future vehicle-to-vehicle network.

A government plan to add spectrum to Wi-Fi’s 5GHz band might interfere with vehicle-to-vehicle wireless networks that would improve highway safety, dozens of auto industry representatives said in a letter to the Federal Communications Commission today.

The FCC’s planned Wi-Fi expansion would result in the 5GHz band stretching from 5.150GHz to 5.925GHz, improving wireless Internet access in homes and public areas. This requires sharing airwaves with Dedicated Short Range Communications (DSRC) in the 5.850-5.925GHz part of that spectrum. As we’ve written, DSRC could eventually enable wireless mesh networks on highways, allowing cars to cooperate and thus avoid accidents.

“The Intelligent Transportation Society of America (ITS America), along with major automakers, safety advocates and transportation officials from across the country, are joining together to urge the Federal Communications Commission (FCC) to protect the 5.9 GHz band of spectrum set aside for connected vehicle technology—which is expected to save thousands of lives each year—from potentially harmful interference that could result from allowing unlicensed Wi-Fi-based devices to operate in the band,” the trade group said in an announcement.

The actual content of the group’s letter shows that industry players aren’t taking a hard line against the Wi-Fi expansion. The letter asks the FCC to perform “due diligence” on the issue of possible interference, and that a final decision on the Wi-Fi expansion come after a US Department of Transportation decision on implementing a connected vehicle network.

In addition to automakers such as Volvo, Chrysler, and Hyundai-Kia, the letter was signed by various researchers and even government officials, such as state transportation officials in Texas, Washington state, Michigan, and California.

The National Telecommunications & Information Administration (NTIA) has also said the potential interference must be studied closely. The ITSA letter echoed the NTIA’s position, saying “We share NTIA’s concern about the potential risks associated with introducing a substantial number of unlicensed devices into the 5.9 GHz band on which connected vehicle systems are based, and support NTIA’s conclusion that further analysis is needed to determine whether and how the multiple risk factors could be mitigated.”

The vehicle concerns by themselves may not be a deal-breaker in the plan to expand Wi-Fi spectrum, but they are indicative of how complicated and lengthy the process could be. The NTIA has said it won’t finalize its recommendations to the FCC until December 2014.

The FCC was certainly aware that its proposal requires the use of spectrum already dedicated to other uses, and has acknowledged that significant cooperation with federal agencies is needed. Ultimately, sharing this spectrum between Wi-Fi and other uses might require a database similar to the ones used for White Spaces networks. Such a setup will probably also be used for the FCC’s plan to share government-controlled spectrum with cellular providers.

A sharing system may well be good enough to prevent interference between cars on the roads and Wi-Fi users in buildings, particularly as 5GHz airwaves don’t travel as far as the ones in the 2.4GHz Wi-Fi band, or the ones typically used for cellular networks. Given the nearly two-year timeline for the NTIA to do research and report back to the FCC, there’s little reason to think anything harmful will happen to the future smart car networks, but the process may impact how the Wi-Fi expansion is implemented.

Oh, and just in case you were wondering: this is completely unrelated to the false “free Wi-Fi everywhere” story that we had to debunk last week.


Executive order to raise “volume, quality of cyber threat information”

Wednesday, February 13th, 2013

Just before issuing the 2013 State of the Union address, President Barack Obama signed an executive order on cybersecurity—creating a series of “best practices” between “critical infrastructure” corporations and the National Institute of Standards and Technology (NIST).

“It is the policy of the United States Government to increase the volume, timeliness, and quality of cyber threat information shared with US private sector entities so that these entities may better protect and defend themselves against cyber threats,” the order states.

According to The Hill, a draft version of this framework is due in 240 days, and the final version will be published within a year.

The order comes after the Cyber Intelligence Sharing and Protection Act (CISPA) failed in Congress last year—although it may be poised for a comeback. While many civil libertarians were concerned that CISPA did not have adequate privacy protections, some have shown cautious optimism about the new order.

“The executive order says that privacy must be built into the government’s cybersecurity plans and activities, not as an afterthought but rather as part of the design,” said Center for Democracy and Technology President Leslie Harris in a statement.

“By explicitly requiring adherence to fair information practice principles, the order adopts a comprehensive formulation of privacy. The annual privacy assessment, properly done, can create accountability to the public for government actions taken in the name of cybersecurity.”

Others, including the American Civil Liberties Union, agreed.

“The president’s executive order rightly focuses on cybersecurity solutions that don’t negatively impact civil liberties,” Michelle Richardson, a legislative counsel for the ACLU, added, in a statement. “For example, greasing the wheels of information sharing from the government to the private sector is a privacy-neutral way to distribute critical cyber information.”


4G to affect TV reception in two million [UK] homes

Wednesday, February 13th, 2013

Filters will be provided for Freeview televisions which experience reception problems following the roll out of 4G later this year.

Ofcom estimates that TV viewing in up to 2.3 million British households could be affected by 4G, but only 40% of those households rely on Freeview.

Satellite receivers will not be affected, the watchdog claims.

A fund provided by the 4G auction winners will be used to pay for filters for those who need them.

At the moment only mobile operator EE is able to offer customers the 4G service, which provides faster mobile internet connections.

The other operators are currently bidding for licences in an auction run by telecoms watchdog Ofcom.

Up to £180m from the auction will be used to fund the filters, a spokesperson from Ofcom said.

However, around 1% of affected Freeview households will be unable to use them and will be offered an alternative instead.

Ofcom estimates there may be fewer than 1,000 homes in the UK that will not be able to access those alternatives either and will be left without television services.

A not-for-profit organisation called Digital Mobile Spectrum Limited (DMSL) has been created to tackle the problem.

“I look forward to working closely with broadcasters and mobile network operators to ensure everyone continues to be able to receive their current TV service,” said newly appointed chief executive Simon Beresford-Wiley.

“DMSL plans to pre-empt the majority of potential interference issues caused by 4G at 800 MHz to existing TV services. We’re focused on being able to provide anyone who may be affected with the information and equipment they’ll need to ensure they continue to receive free-to-air TV.”

Last month Freeview homes in South Wales had to retune their TVs and boxes following technical changes to a transmitter in order to make way for 4G.

Source:  BBC

Mobile’s dawning signal crisis

Wednesday, February 13th, 2013


In April 1973, Marty Cooper made a phone call that put him straight into the history books. As he strolled down Lexington Avenue in New York, the Motorola executive whipped out an enormous prototype handset that he had built and placed the first public mobile phone call.

The brief chat – and the photograph that immortalised the moment – marks the start of the mobile phone era. But Cooper’s legacy extends far beyond just that first conversation.

Along with a host of inventions, the engineer also formulated – and lent his name to – a mathematical law that captures the inexorable progress of our communications. Cooper’s Law, as it is known, shows how our use of the ether has grown since Guglielmo Marconi first transmitted radio waves 2.4 kilometres across the streets of Bologna – eight decades ahead of Cooper’s own historic transmission.

It has been estimated that, with the technology available when Marconi made his first transatlantic transmission, radio techniques could support just 50 simultaneous conversations worldwide. Since then radio capacity has grown by a factor of a trillion – doubling every two-and-a-half years. That’s Cooper’s Law.
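The two quoted figures hang together arithmetically: a trillion-fold gain at one doubling every two-and-a-half years implies roughly a century of growth, which matches the span from Marconi's early experiments to the modern mobile era. A quick sanity check:

```python
import math

# Sanity-check the figures quoted for Cooper's Law: a trillion-fold
# capacity gain, achieved by doubling roughly every 2.5 years.
growth = 1e12
doubling_period_years = 2.5

doublings = math.log2(growth)           # ~40 doublings make a trillion
years = doublings * doubling_period_years

print(f"{doublings:.1f} doublings -> {years:.0f} years")
# Roughly a century of sustained doubling.
```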

As well as describing progress, the law has also become the mobile industry’s ruthless master, providing an aggressive roadmap for the rise of mobile culture.

The industry met this challenge thanks to advances in technology.

But now the game has changed. Although few in the industry acknowledge it publicly, Cooper’s Law, which has held for more than a century, is broken. And it is all down to the phone in your pocket.

Bin there, sent that

To understand the scale of the problem, you only need to look at the numbers.

For example, the mobile giant Ericsson has been tracking the growth in mobile traffic for years. But 2009 was a landmark year, according to the firm’s Patrik Cerwall: “That year saw more data traffic than voice traffic over the mobile networks”. And the data traffic has been doubling every year since – far outracing Cooper’s law.

The big accelerator was the smartphone, which suddenly made the data-carrying capacity of 3G networks attractive. “People didn’t really understand the benefit of 3G until the app concept changed everything,” Cerwall elaborates.

Data-hungry video is also driving demand. Networking firm Cisco has just reported that video crossed the 50% threshold last year, accounting for half of all data transferred over mobile networks.

At the moment, there are around 1.1 billion smartphones across the world; by 2018 (the horizon for the Ericsson forecasts) that will treble to 3.3 billion. Consider that in 2012 smartphones represented only 18% of total global handsets yet generated 92% of total global traffic, and you begin to see the problem.

And the growth will continue relentlessly, according to the Cisco analysis. In 2012, for example, global mobile data traffic grew 70% over 2011, to 885 petabytes per month – that is 885 million gigabytes of data. Over the next five years it is expected to increase 13-fold, reaching 11.2 exabytes (11,200 million gigabytes) per month by 2017, according to Cisco.
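Those Cisco figures are internally consistent, and they imply a compound growth rate of roughly two-thirds per year. A quick back-of-the-envelope check (the numbers are the article's; the script only verifies the arithmetic):

```python
# Check the internal consistency of the Cisco figures quoted above:
# 885 PB/month in 2012, a 13-fold rise over five years, and a forecast
# of 11.2 EB/month by 2017.
start_pb_per_month = 885
multiple = 13
years = 5

end_eb = start_pb_per_month * multiple / 1000   # PB -> EB
annual_growth = multiple ** (1 / years) - 1     # implied compound annual rate

print(f"{end_eb:.1f} EB/month")        # ~11.5 EB, close to the 11.2 quoted
print(f"{annual_growth:.0%} per year") # ~67% compound annual growth
```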

These dramatic hikes will in part be driven by more people switching to smartphones, particularly in emerging markets, as well as new features on phones and in apps.

The impact of a simple app change was dramatically demonstrated in November 2012, when Facebook released a new version of its mobile app for Android and Apple phones. Prior to the release, according to networking firm Alcatel-Lucent, the social network already accounted for 10% of the signalling load and 15% of the airtime load on 2G/3G networks. But as users around the world updated and began using the new version, the firm saw signalling load jump by almost 60% and airtime consumption by 25%, driven by the app’s new features.

However, data hikes will not just be driven by consumers. Firms also predict a rise in so-called machine-to-machine (M2M) communication, that will connect the mobile networks to an array of inanimate objects – from bins that will signal when they are full to electricity meters that will constantly call in to the utility company.

By the end of this year, Cisco predicts that the number of mobile-connected devices will exceed the number of people on earth, and by 2017 there will be more than 10 billion.

No wonder the chairman of the US Federal Communications Commission recently declared: “The clock is ticking on our mobile future.”

Running out

The illusion is that the airwaves, like the atmosphere they pass through, are effectively limitless. We can’t see them, they can travel in any direction and link any two points – why should they be limited? Yet, in practice they are as hemmed in as a motorway through a city.

Radio spectrum is a limited resource, strictly farmed out by national and international regulation. At the moment it is all spoken for by the military, mariners, aviation, broadcasters and many more – all the way up to the very extreme of useful frequencies at 300 gigahertz.

No-one can get more bandwidth without someone else losing out. The 4G spectrum auction that recently began in the UK, for example, is the equivalent of adding a new six-lane motorway to the existing wireless infrastructure (itself already running at 10 lanes), built on virtual land vacated by old-fashioned TV broadcasts.

It helps, but will only keep the expansion going for a certain time. Which is why mobile operators, and their rivals, are gearing up for major spectrum negotiations at the International Telecommunications Union in 2015. The so-called WRC-2015 conference aims to carve up the available spectrum amongst different competing uses. But an overriding priority is identifying and allocating additional frequencies to mobile services.

Already, the stakeholders are preparing their positions. Ericsson’s Afif Osseiran, project coordinator for the European consortium Metis, says the ITU conference “will be a crucial moment for laying out the spectrum needs for the 2020s.”

But industry will not just rely on these delicate negotiations to secure its future. Much of the advance in the past 20 years has not been about how many of these wireless “lanes” we have, but how efficiently we use them.

Like a newly built motorway used by just a few cars, the first generation of phones was incredibly wasteful of the spectrum it used. Capacity went unused in the same way that the gaps between vehicles represent lost transport opportunities.

In going from 1G to 2G, there was a 1,000-fold increase in capacity, mostly not because of the new radio lanes added in, but because more traffic was squeezed onto those lanes.

And in going from 2G to 3G, capacity rose another factor of 1,000: digital techniques managed to squeeze out yet more of the empty space.

But with the latest generation of tricks being rolled out in 4G (actually described as 3G Long Term Evolution by developers), the industry is running out of ways to improve the efficiency further.

These limits on how much information can be transmitted were established in the 1940s by the American engineer Claude Shannon. Although his employer, AT&T’s Bell Labs, was interested primarily in the limitations of telephone wires, Shannon’s equations apply equally to radio transmissions.

And mobile experts generally accept that the limits to data flow revealed by Shannon’s formulae are close to being reached.
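Shannon’s result puts a hard ceiling on any channel: the maximum error-free data rate is C = B·log₂(1 + S/N), where B is the bandwidth and S/N the signal-to-noise ratio. The figures below are illustrative assumptions (not from the article), but they show the shape of the limit – a 20 MHz channel, typical of a 4G carrier, at a signal-to-noise ratio of 100 (20 dB) can never carry more than about 133 Mbit/s, however clever the encoding:

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Maximum error-free bit rate (bits/s) of a noisy channel,
    per Shannon's capacity theorem: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative figures: a 20 MHz channel at a linear SNR of 100 (20 dB).
capacity = shannon_capacity(20e6, 100)
print(f"{capacity / 1e6:.1f} Mbit/s")  # ~133.2 Mbit/s
```

Since the formula grows only logarithmically with signal power, shouting louder buys very little: once a network’s coding schemes operate near this bound, the only routes to more capacity are more bandwidth or more, smaller cells.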

Data crunch

So how will the mobile industry meet this challenge and keep satisfying our appetite for data?

The industry is clearly optimistic. It already confidently speaks of 5G – a further generation of technology that will roll out as current ideas have run their course. What exactly they mean by 5G is poorly defined, but a host of tricks are being discussed that it’s hoped will keep past trends going well into the next decade.

Which is just as well, as the lure of being immersed in a seamless flow of data will only become more compelling, says Rich Howard, formerly head of wireless research at Bell Labs and now with Winlab at Rutgers University.

“Mature technology is invisible – and that’s the direction we’re heading,” he says.

Howard looks forward to a day when phones begin to make intelligent decisions by themselves.

“What you want is a digital assistant that, while you’re having a call with somebody, will be busy looking at options for actions relevant to that call and have them available,” he says. So, if you are talking about a train journey, the phone could begin to check your calendar, ticket prices and connections. By the time you hang up, it would be able to present you with a list of available options. “Every time you start to say something, you turn around and it’s already done, the way you want it done.”

It is a vision that is a world away from Cooper’s first call forty years ago, and one that will only add to the coming data crunch.

How the industry plans to keep up and deliver this future will be explored in the next article in this series.

Source:  BBC

Facebook briefly takes over entire Internet with redirection bug

Friday, February 8th, 2013

A bug with Facebook’s ubiquitous embedded widgets redirected millions of people from the websites they were visiting to Facebook itself Thursday. It’s now fixed, but for a while, some of the largest sites in the world were inaccessible.

Visitors to such sites as CNN, The Washington Post, BuzzFeed, the Gawker network (including Gizmodo), and NBC News were instantly transferred to a Facebook error page upon loading the intended site.

Users not logged into Facebook did not get the error, which led people to conclude, correctly, that the problem was with the Facebook Connect service, which governs “likes,” “comments” and other Web-based features of the social network.

Facebook declined to disclose the specifics of the error, but offered the following statement via email to NBC News:

For a short period of time, there was a bug that redirected people from third party sites with Facebook Login to Facebook. The issue was quickly resolved.

It was “quickly resolved” for some, at least: on one affected site, the disturbance lasted less than 15 minutes, from 4:30 to 4:42 p.m. PT. For other sites it was longer; ReadWriteWeb said the bug lasted about an hour, from 4 to 5 p.m. PT.

Those on Twitter (and Facebook, of course) jammed those sites nearly instantaneously with comments on how it was a little disturbing that Facebook could essentially hijack a large part of the Internet — even if it was unintentional.

If the error had not been fixed by Facebook in a timely manner, the affected sites would likely have been able to intervene by disabling the widgets themselves. The temporary loss of Facebook logins and likes would be a small price to pay for an accessible website.

Because the outage was so widespread and public, a more thorough explanation can probably be expected from Facebook once officials there have time to take a closer look at what went wrong.