Archive for June, 2011

Microsoft to test white-space spectrum for wireless

Monday, June 27th, 2011

A Microsoft-led consortium will begin a test in Britain this week to investigate how unused TV spectrum could be employed for new wireless broadband networks, according to a Financial Times report.

The group, which includes the BBC, British Sky Broadcasting, and telecommunications giant BT, hopes to tap “white spaces” to create “super Wi-Fi” networks to sate bandwidth-hungry smartphones, according to the report.

“Spectrum is a finite natural resource. We can’t make more and we must use it efficiently and wisely,” Dan Reed, Microsoft’s vice president of technology policy and strategy, told the newspaper. “The TV white spaces offer tremendous potential to extend the benefits of wireless connectivity to many more people, in more locations, through the creation of super Wi-Fi networks.”

The 300MHz to 400MHz of unused “white space” spectrum is considered prime spectrum for offering wireless broadband services because signals in these bands can travel long distances and penetrate walls. The Federal Communications Commission unanimously agreed in November 2008 to open up this spectrum for unlicensed use.

Microsoft has been testing new technology that uses the unlicensed spectrum on its 500-acre Redmond, Wash., campus. The company built the wireless network using only two base stations to transmit the signals via the white-space spectrum. Signals that use the white-space spectrum travel at least three times farther than signals transmitted over other unlicensed spectrum, such as Wi-Fi. This means a white-space network can cover an area almost nine times as large as one that uses Wi-Fi, and because it operates at a much lower frequency than Wi-Fi, its signals penetrate buildings much more easily.

FCC Chairman Julius Genachowski and others have compared the white-space market to Wi-Fi, a $4 billion-a-year industry that also does not require a spectrum license. Last year, Microsoft commissioned research suggesting that white-space applications may generate $3.9 billion to $7.3 billion in economic value each year.

Source:  CNET

FBI targets cyber security scammers

Monday, June 27th, 2011

A gang that made more than $72m (£45m) peddling fake security software has been shut down in a series of raids.

Co-ordinated by the FBI, the raids were carried out in the US, UK and six other countries.

The money was made by selling software that claimed to find security risks on PCs and then asked for cash to fix the non-existent problems.

The raids seized 40 computers used to run the fake scans and host the webpages that tricked people into installing the software.

Account closed

About one million people are thought to have installed the fake security software, also known as scareware, and handed over up to $129 for their copy. Anyone who did not pay but had downloaded the code was bombarded with pop-ups warning them about the supposed security issues.

Raids conducted in Latvia as part of the attack on the gang allowed police to gain control of five bank accounts used to funnel cash to the group’s ringleaders.

Although no arrests are believed to have been made during the raids, the FBI said the computers seized would be analysed and its investigation would continue.

The raids on the gang were part of an international effort dubbed Operation Trident Tribunal. In total, raids in 12 nations were carried out to thwart two separate gangs peddling scareware.

The second gang used booby-trapped adverts to trick victims. Raids by Latvian police on this gang led to the arrest of Peteris Sahurovs and Marina Maslobojeva who are alleged to be its operators.

According to the FBI, the pair worked their scam by pretending to be an advertising agency that wanted to put ads on the website of the Minneapolis Star Tribune newspaper.

Once the ads started running, the pair are alleged to have changed them to install fake security software on victims’ machines that mimicked infection by a virus. On payment of a fee, the so-called infection was cured. Those who did not pay found their machines were unusable until they handed over the cash.

This ruse is believed to have generated a return of about $2m.

“Scareware is just another tactic that cyber criminals are using to take money from citizens and businesses around the world,” said assistant director Gordon Snow of the FBI’s Cyber Division in a statement.

Source:  BBC

Intel Talks Up 50-Core Processor; Aims at Exaflops Performance

Wednesday, June 22nd, 2011

At the International Supercomputing Conference (ISC) yesterday, Intel gave more details about its forthcoming 50-core processor, which is now slated to ship sometime next year, and talked about moving to systems capable of exaflop performance (a quintillion floating-point operations per second) by the end of the decade.

The chip, which is known as “Knights Corner,” has been discussed before. It uses the company’s “many integrated core” (MIC) architecture, initially with 50 x86-compatible cores. These cores are compatible with software designed for earlier Intel chips, but they are simpler than the ones in the company’s mainstream Core architecture. Intel had previously disclosed that the chip will use the company’s 22nm process technology (including its tri-gate transistors), and it has now confirmed that it will ship next year in systems from a variety of hardware makers. At the show, a number of vendors, including SGI, Dell, HP, IBM, Colfax and Supermicro, announced plans to build workstations or servers using the Knights Corner processor.

Currently, an earlier version of this chip, known as “Knights Ferry,” is available to some of Intel’s development partners. All of this is an outgrowth of what was once the Larrabee project, refocused on using many x86 cores for high-performance computing rather than as a graphics processor. In this way, it will be going after the same kind of markets that Nvidia has been targeting with its “general purpose GPU” (GPGPU) products, such as its Tesla line. Intel has touted that x86 compatibility will make the MIC architecture easier to program for, because it can use standard languages and frameworks.

At ISC, Intel and a few of its partners, including Forschungszentrum Juelich, Leibniz Supercomputing Centre (LRZ), CERN and the Korea Institute of Science and Technology Information (KISTI), showed some early work with the Knights Ferry development platform.

Intel says it powers 77 percent of the supercomputers on the Top500 list (although, it should be noted that the top of the list is now held by a Japanese computer that uses more than 68,000 8-core Sparc64 chips made by Fujitsu). In addition, several of the systems near the top of the list are big users of GPU technology, typically Nvidia’s Tesla products.

Kirk Skaugen, vice president and general manager of Intel’s Data Center Group, talked about how most of the existing Intel-based machines on the Top500 list use the more standard Xeon processors. In the future, he expected a bigger emphasis on the Many Integrated Core architecture and said that Intel had a roadmap for using the architecture to get to exaflop performance by the end of the decade.

Also today, Tilera introduced its next generation of a chip that in some ways takes the same approach. The new Tile-GX 3000 series includes versions with 32, 64, and 100 low-power cores on a single chip. These aren’t x86 cores, but they draw considerably less power, with the company saying each core draws less than half a watt. These chips are specifically designed for “scale-out” applications, such as huge Web applications with a lot of throughput processing; database applications, such as NoSQL solutions; and data mining applications, such as Hadoop.

The 32-core version is due out in the third quarter, with the 64- and 100-core versions due out in the first quarter of 2012. These chips are manufactured on TSMC’s 40nm process.

In both cases, they are an indication that many-core computing is gaining popularity, initially in specialized applications. It’s easy to see how this could broaden to more users and more applications, but clearly one big issue is software development. That was one reason why I was so interested by Microsoft’s announcement of C++ AMP, which should work on a variety of different kinds of processors.

Source:  PCMag.com

ICANN approves plan to vastly expand top-level domains

Wednesday, June 22nd, 2011

Do you find the reliance on things like .com, .net, and .org too restrictive? Haven’t found a country code that floats your boat? ICANN, the organization responsible for managing the domain name system, has decided that it’s time for a more flexible system for managing top-level domains, the last part of the human-readable names that stand in for numeric IP addresses. The plan has been in the works since 2009, but it has experienced a series of delays. Now, though, the organization has finally approved a process for handling new generic top-level domains (gTLDs), and will begin accepting applications in January.

Prior to ICANN’s existence, gTLDs were pretty limited: .com, .edu, .gov, .int, .mil, .net, .org, and .arpa, although a large collection of country codes also existed. In 2003 and 2004, however, the organization began allowing a cautious expansion, adding things like .name and .biz (along with some oddities like .aero and .cat). And, just this year, it approved the .xxx domain after a rather contentious consideration period.

ICANN apparently recognized that there’s a continued interest in expanding gTLDs, and set about creating a mechanism to handle requests as they come in, rather than considering them in ad hoc batches. And, at least according to the FAQ site that it has set up, the organization expects a busy response: “Soon entrepreneurs, businesses, governments and communities around the world will be able to apply to operate a Top-Level Domain of their own choosing.” (More details, including an Applicant Guidebook, are also available.)

Still, the FAQ also makes it clear that grabbing a gTLD won’t be an exercise in casual vanity. Simply getting your application processed will cost $185,000 and, should it be approved, you’ll end up being responsible for managing it. Do not take this lightly, ICANN warns, since “this involves a number of significant responsibilities, as the operator of a new gTLD is running a piece of visible Internet infrastructure.” Presumably, service providers will take care of this hassle, but that will simply add to the cost of succeeding.

ICANN suggests the changes will “unleash the global human imagination.” At best, the unleashing will be pretty limited, with a maximum of 1,000 new domains a year. Some of these will undoubtedly show signs of imagination through a clever use of character combinations in some URLs. Mostly, however, we expect that the new gTLDs will simply provide domain registrars with the opportunity to suggest you buy even more domains when you register a .com or .net.

Source:  arstechnica.com

Deep Shot uses camera to move application states between PC, phone

Friday, June 17th, 2011

Apple isn’t the only entity trying to ease users’ transitions between devices. Tsung-Hsiang Chang, a graduate student at MIT’s Computer Science and Artificial Intelligence Lab, and Yang Li, a Google employee, have developed an application that lets users transfer the state of an application from a computer to a smartphone or vice versa just by snapping a picture of the computer’s screen. Once the picture is taken, the application is opened right where the user left off.

The application is called Deep Shot, and was designed to work with Web apps. Most Web apps can describe the state they’re in with a combination of symbols, called a uniform resource identifier (URI), which Deep Shot can use to seamlessly transfer the working state of an open app without the need for cables or interacting with a third-party app to handle the syncing. Deep Shot is technically a third party, but it appears to work in the background and doesn’t involve itself in a visible way with the transfer process.

With Deep Shot’s software installed on a computer and phone, users can take a picture with the phone of an open application on the computer—like a restaurant’s page on Yelp. The phone’s software will then use computer vision algorithms to figure out what application is in the picture and open it. Meanwhile, the computer transmits the corresponding URI to the phone over a WiFi connection, though “the medium can be replaced with any networking protocols,” Chang tells Ars.
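
Deep Shot itself hasn’t been released, but the mechanism is easy to picture. The sketch below is a hypothetical stand-in for the desktop half of such a hand-off, not the actual Deep Shot code: a tiny server that hands the browser’s current URI to whatever device asks for it over the local network. The port and the hard-coded URI are placeholders for whatever the real system reads out of the browser.

```python
# Hypothetical desktop half of a Deep Shot-style hand-off (not the real Deep
# Shot protocol): serve the browser's current URI to any device on the local
# network that asks for it.
from http.server import BaseHTTPRequestHandler, HTTPServer

CURRENT_URI = "http://www.yelp.com/biz/some-restaurant"   # placeholder state

class StateHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Reply to any request with the URI describing the current app state.
        body = CURRENT_URI.encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Computer side: wait for the phone to ask for the current state.
    HTTPServer(("0.0.0.0", 8800), StateHandler).serve_forever()
```

The phone side would then be little more than an HTTP GET to that port followed by handing the returned URI to the matching app; the clever part of Deep Shot is using the photo to decide which computer to ask.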

The phone opens the Yelp app, reads the URI, and produces the same page without the user having to search for the restaurant again, e-mail the page to himself, or use other workarounds. Changing how much of the screen is photographed even changes how the information is displayed on the phone.

Likewise, users can throw an app’s state from a phone to the computer by taking a picture of the computer’s screen with the phone again. The phone uses the picture to figure out which computer it should connect with based on the appearance of the screen in the picture, and then pushes the app or page and its state to the computer.

Deep Shot’s system also doesn’t require that transfers happen between two versions of the same app. URI transfers could also work more generally between two different kinds of mapping apps or review services, if desired. The creators note that Deep Shot could work with other software and non-Web applications, though Jeffrey Nichols, a researcher at IBM’s Almaden research center, notes that it would require an agreement on interoperability standards, which are tough to set up and maintain. Nichols told MIT News he hopes that “companies like Microsoft would really consider adding it,” but cautions that he thinks computing is moving away from native apps toward Web ones.

Deep Shot currently only works with a handful of Web apps, including Google Maps and Yelp, but the creators note that it could be made to work with any Web app that determines its state using URIs. The problem is that URIs are often used less plainly than in applications like Google Maps, so they can be harder to extract and exchange between devices.

There are some features we’d like to see added, like the ability to move working states between devices in the background, without having the relevant app pop open each time Deep Shot is used. We could also envision simpler additional services that could fill out Deep Shot, like making text photographed on a PC screen available for copying and pasting on the phone.

The app was developed at Google, so Google holds the rights, but it hasn’t put forth any official plans for it. In a space where companies are falling over each other to offer cloud and syncing services, Deep Shot could be a serious contribution to Google’s syncing arsenal.

Source:  arstechnica

World IPv6 Day looms: what might break (and how to fix it)

Wednesday, June 8th, 2011

When the clock hits midnight on Wednesday, June 8 UTC, World IPv6 day begins. Many Web destinations—including the four most popular (Google, Facebook, YouTube, and Yahoo)—will become reachable over IPv6 for 24 hours. (In the US, that’s 8PM EDT, 5PM PDT on Tuesday). As the current IPv4 protocol is quickly running out of its remaining 32-bit addresses, adopting its successor’s Brobdingnagian 128-bit address space is long overdue.

AAAA records

Still, it’s widely expected that a small fraction of all users will encounter issues when they type “facebook login” into Google come Tuesday evening. On June 8, the World IPv6 Day participants will add AAAA (“quad A”) records holding IPv6 addresses to their DNS zones, in addition to the normal A (for “address”) records that exist for IPv4 addresses. This makes those sites “dual stack,” reachable over both IPv4 and IPv6.

Depending on various cache timeouts, applications will start seeing these IPv6 addresses within minutes to hours—or perhaps only after an application is restarted or the entire system is rebooted. If the system doesn’t have IPv6 connectivity, it will ignore the IPv6 addresses in the DNS and connect over IPv4 as usual. If the system thinks it has IPv6 connectivity, it will try to connect to the destination over IPv6. If the connection works, the content is loaded over IPv6 and everything otherwise works like it normally does.
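
You can watch this happen from any machine with Python installed: once a site publishes AAAA records alongside its A records, a single getaddrinfo() call returns both address families. The hostname below is just an example of a dual-stack site.

```python
import socket

# Once a site publishes AAAA records next to its A records, one lookup
# returns both address families; a dual-stack client will try IPv6 first.
for family, _type, _proto, _canon, sockaddr in socket.getaddrinfo(
        "www.google.com", 80, proto=socket.IPPROTO_TCP):
    label = "IPv6" if family == socket.AF_INET6 else "IPv4"
    print(label, sockaddr[0])
```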

Potential issues

However, there are three possible ways in which loading content over IPv6 can be derailed. The least problematic one is that at some point, the OS in your computer or a router along the way realizes the IPv6 destination can’t be reached. The OS, possibly with help from an ICMPv6 (Internet Control Message Protocol for IPv6) error message sent by a router, will then tell the requesting application that the connection couldn’t be made. This happens pretty much immediately, so well-behaved applications will simply continue their connection attempts with the next available address—probably an IPv4 address.

I had this happen once during an Internet Engineering Task Force (IETF) meeting when the IETF had only recently made its servers reachable over IPv6. The routers that connected the meeting network didn’t know where to send packets addressed to the IETF servers, but because of the ICMPv6 messages, my browser immediately fell back to IPv4 and I never even noticed the broken IPv6 connectivity until I tried the command-line FTP tool.

Things get worse when there are no ICMPv6 error messages or these messages are ignored. In that case, it usually takes some time for the system to determine that a connection attempt isn’t going anywhere, so the user will be looking at a blank page for 10 to 60 seconds. Then, most browsers or other applications will retry over IPv4. Timeouts may apply to elements such as images, too, so loading an entire page this way can take a long time. But at least it loads… eventually.
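
A minimal sketch of that well-behaved fallback logic, assuming nothing more than the standard socket library: work through the addresses DNS returned, treat an immediate error as a cue to try the next one, and cap how long a silent failure is allowed to stall things.

```python
import socket

def connect_with_fallback(host, port, per_address_timeout=3.0):
    """Try every address DNS returns (IPv6 first on dual-stack hosts),
    moving on after an immediate error or a short timeout."""
    last_error = None
    for family, socktype, proto, _canon, sockaddr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        sock = socket.socket(family, socktype, proto)
        sock.settimeout(per_address_timeout)   # don't stare at a blank page
        try:
            sock.connect(sockaddr)             # ICMP(v6) errors fail fast here
            sock.settimeout(None)
            return sock
        except OSError as exc:                 # includes timeouts
            last_error = exc
            sock.close()
    raise last_error or OSError("no usable addresses for %s" % host)

# Usage: sock = connect_with_fallback("www.google.com", 80)
```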

The worst situation is the one where small packets make it to the server and back, so the TCP session gets established (the initial TCP setup packets are small), but then large packets containing actual data don’t make it through. In and of itself, a packet size mismatch is a normal situation that is solved through Path MTU Discovery (PMTUD), but some firewalls filter the ICMPv6 packets that make PMTUD work.

The result is that sessions get established (so there is no fallback to IPv4), but actual data can’t be transferred. Sessions then hang forever. (Though some operating systems have PMTUD “black hole detection” that will get the data flowing eventually.)
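
A crude way to spot this particular failure mode, sketched below with example values and requiring a host that has IPv6 connectivity at all: complete the handshake over IPv6, send a full request, and see whether any data comes back before a timeout. A connection that reliably opens but never delivers data is consistent with, though not proof of, a PMTUD black hole.

```python
import socket

def gets_data_over_ipv6(host, port=80, timeout=5.0):
    """Connect over IPv6 only, send a full HTTP request, and report whether
    any response data arrives before the timeout."""
    family, socktype, proto, _canon, sockaddr = socket.getaddrinfo(
        host, port, socket.AF_INET6, socket.SOCK_STREAM)[0]
    with socket.socket(family, socktype, proto) as sock:
        sock.settimeout(timeout)
        sock.connect(sockaddr)                 # handshake uses small packets
        request = ("GET / HTTP/1.1\r\nHost: %s\r\n"
                   "Connection: close\r\n\r\n" % host).encode("ascii")
        sock.sendall(request)
        try:
            return len(sock.recv(4096)) > 0    # did any data make it back?
        except socket.timeout:
            return False                       # possible PMTUD black hole

print(gets_data_over_ipv6("www.google.com"))  # example dual-stack host
```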

Expert views

DNS expert Cricket Liu is eager to see what comes out of World IPv6 Day. “We don’t really know what’s going to happen when AAAA records are deployed along with A records on a global scale. That’s why these big players are going to deliberately induce this to see what happens. We know that some fraction of clients think they have IPv6 but it doesn’t work. But there’s also the issue where both a server and a client have IPv6 connectivity, but the two ISPs involved haven’t set up any peering.”

Liu currently works for Infoblox, a company producing an appliance that uses DHCP and DNS technology to manage naming and addressing in corporate networks. “We began supporting IPv6 several years ago,” Liu told Ars. “We now see a dramatic surge in IPv6 interest, but relatively few enterprises are implementing IPv6 right away. Most are in the information gathering phase.”

“Having separate names for IPv6, such as ipv6.google.com, is untenable in the future; that way IPv6 will be a second-class citizen. Content producers need to enable dual stack, but we don’t know what impact this will have. The big players are going to induce this deliberately on World IPv6 Day to see what happens. It’s going to be an interesting day.”

Qing Li, chief scientist at Blue Coat, expresses a similar sentiment. “Nobody can foresee all these issues. Some will be simple, some will be harder,” he said.

Li advocates application proxies as a way to get the transition to IPv6 underway. Not coincidentally, his employer makes just such proxies, but he makes a good point. Just adding IPv6, which is going to happen on World IPv6 Day, is an easy first step. Providing all services over IPv6 will be much harder. Webpages are made up of many elements that are loaded from many different servers; making sure those are all IPv6-capable won’t be easy.

Li also has a warning to prospective IPv4 address buyers. “Trading addresses is risky. Addresses come with a history; they may have been the sources of botnets and been blacklisted for years. Buying addresses without knowing their history is a bad idea,” he said.

Li plans to use the opportunity afforded by World IPv6 Day to observe where all the IPv6 requests originate. “From which region, mobile or fixed, corporate, botnets?” he said. “We want to analyze security threats at the application level after the experiment. We are very invested in security over IPv6 at the application layer.”

We also spoke with Owen DeLong, “IPv6 evangelist” at service provider Hurricane Electric, where “every day is World IPv6 Day.” DeLong expects that the experiment will show that the scope of the connection problem is smaller than currently estimated.

“As such, I am hoping it will provide more of an impetus for content developers to deploy AAAA records,” he said.

On the other hand, he’s not that optimistic about the ability of ISPs to light up native (as opposed to tunneled) IPv6 in the short term. “The last mile infrastructure situation for IPv6 is still pretty abysmal, with many systems still lacking meaningful IPv6 support, especially in the DSL and PON (passive optical networks, like Verizon’s FiOS) environments, but, even in many of the deployed DOCSIS systems,” he said. “A lot of DOCSIS deployments still use DOCSIS 2.0, and even with DOCSIS 3.0 the management systems may not be ready for IPv6. At least the CPE [customer premises equipment, or home router] guys know it’s important now, even though that’s 12 to 18 months late.”

Paradoxically, the fact that we’re now witnessing the last months of the IPv4 address supply has finally brought IPv6 the attention that it hasn’t been getting for so many years, but the depletion of the IPv4 address pools may also make it harder to actually deploy IPv6. After all, IPv6 doesn’t buy you access to IPv4 services, so heroic efforts will be necessary to keep some form of IPv4 running, with Carrier Grade or Large Scale Network Address Translators (CGNs/LSNs) using many layers of network address translation to allow a large number of users to share a small number of IPv4 addresses. This takes away time and money that could otherwise be spent on deploying IPv6.

This is a waste of time, according to DeLong: “LSN, NAT64, and the like all offer a severely degraded experience. I think that the only good way to address the IPv4 shortage is migration to IPv6. Everything else is putting a band-aid on an arterial bleed. All that really does is create a blood-soaked bandage, but the patient is still suffering critical blood loss. IPv4 simply isn’t sustainable much beyond its current point no matter what we do.”

But there’s also good news. “I think the next few years are going to be very exciting,” he said. “I’m looking forward to the transition, actually, because I think we’re on the verge of being able to bring real innovation back to the Internet and remove a lot of limitations that we have taken for granted in the consumer space for the last 20 years.”

Hurricane Electric routinely handles several Gbps of IPv6 traffic, which will likely “significantly jump” for WIPv6D, as IPv6 customers will be able to reach many more destinations over IPv6. However, DeLong isn’t worried about the additional traffic. “The IPv6 traffic displaces IPv4 traffic, so the total amount of traffic will remain the same,” he said.

The issues

In some ways, efforts to implement IPv6 are like Zeno’s paradox of the tortoise and Achilles. In this ancient thought experiment, a tortoise challenges Achilles to a race, claiming that even though the (semi-) invincible hero Achilles is much faster, he would never win the race if the tortoise received a head start. After all, by the time Achilles covered the 10 meter head start—yes, apparently ancient Greece used the metric system—the tortoise would have also progressed, say by one meter. By the time Achilles traversed that additional meter, the tortoise would be 0.1 meters further along, and so on, ad infinitum; Achilles never pulls even with the tortoise.

In the same way, whenever a feature is implemented for IPv6, something new has been added to IPv4 that must also be added to IPv6 before the new protocol reaches parity with the old one.

But it gets worse. In its early days, IPv6 deployment was helped by training wheels in the form of automatic tunnels that magically turn IPv4 connectivity into IPv6 connectivity. There are two popular forms of automatic tunneling: 6to4 and Teredo. The difference is that 6to4 requires a public IPv4 address, so it can only run on a home router (or computer) that is directly connected to the outside world; Teredo is set up to work through IPv4 Network Address Translators (NATs). These protocols have been getting a lot of bad press these days, but that’s unfair: they are very useful as a quick and easy way to get IPv6 connectivity.

The problem is that Apple turned on 6to4 by default on its Airport Extreme base stations some time ago, and Microsoft has 6to4 and Teredo enabled by default on all Windows systems that have IPv6 enabled. (That would be XP after IPv6 is explicitly enabled, and Vista or 7 unless it’s explicitly disabled.) Geoff Huston has documented how surprisingly badly Teredo connectivity works in practice: it fails at least 37 percent of the time, and it’s almost always a lot slower than regular IPv4 or IPv6 when it does work. The saving grace is that although Windows Vista and 7 have Teredo enabled out of the box, they are set up to avoid using it, because Windows will ignore IPv6 addresses in the DNS if it only has Teredo connectivity.

Although Windows doesn’t ignore IPv6 addresses in the DNS if it has 6to4 connectivity, it will prefer IPv4-to-IPv4 over 6to4-to-IPv6. A few point updates ago, Mac OS 10.6 was also changed to do the same. So these training wheels should be out of the way now and not end up between the spokes of the main IPv4 or IPv6 wheels.

In a presentation at the RIPE meeting last November, Tore Anderson explored some more subtle interactions between suboptimal DNS behavior and 6to4/Teredo, and between native IPv6 and 6to4/Teredo. The former has been addressed in Mac OS, Firefox, and Opera updates, so assuming everyone uses the latest versions of everything, the number of people experiencing problems on World IPv6 Day could be as low as 0.04 percent. We’ll see.

The solutions

A very good way to see if your system will behave as it should come World IPv6 Day is to check RIPE’s IPv6 eyechart. This page loads images from 42 sites that are currently dual stack and from 13 WIPv6D participants. If the image loads, a green badge is shown; if it doesn’t load within a reasonable time, a red badge is shown. If everything is green, you’re in the clear. If you have broken IPv6, you’ll see the 42 dual stack sites in red. You may also see only a few in red in the case of localized problems.

If you see more than one or two red badges, you may want to consult ARIN’s IPv6 wiki, which has a long list of potential problems with different hardware, software, operating systems, and ISPs. If you’re on Windows, a quick and temporary fix is available on the Microsoft Support website. The fix consists of a small utility that makes the system prefer IPv4 over IPv6 until WIPv6D. Afterwards, the normal behavior of preferring IPv6 over IPv4 will be reinstated.

Because the eyechart page needs to be reachable for people who have broken IPv6, it only has IPv4, so it can’t be used to get an overview of IPv6-only reachability. However, if you are running an IPv6-only (or dual stack) system, you can check ipv6.ipv6eyechart.ripe.net. Apparently, a few World IPv6 Day participants didn’t want to wait, because some already turn up in green using only IPv6, while others aren’t yet reachable over IPv6.
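
If you’d rather test from a script than a browser, the rough equivalent of a one-site eyechart is to try the same dual-stack host over each address family separately. The hostname below is only an example; the RIPE eyechart remains the more thorough test.

```python
import socket

def reachable(host, family, port=80, timeout=5.0):
    """Can we open a TCP connection to host using only this address family?"""
    try:
        infos = socket.getaddrinfo(host, port, family, socket.SOCK_STREAM)
    except socket.gaierror:
        return False                           # no A/AAAA record, or no lookup
    for fam, socktype, proto, _canon, sockaddr in infos:
        with socket.socket(fam, socktype, proto) as sock:
            sock.settimeout(timeout)
            try:
                sock.connect(sockaddr)
                return True
            except OSError:
                continue
    return False

host = "www.google.com"                        # any dual-stack site will do
print("IPv4:", reachable(host, socket.AF_INET))
print("IPv6:", reachable(host, socket.AF_INET6))
```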

If all of this has inspired you to join the IPv6 fun, but your ISP doesn’t offer IPv6 connectivity, the best way to move forward is by using a tunnel broker. These offer free one-to-one tunnels with much better reliability than the automatic tunneling systems. The disadvantage is that the tunnels need to be configured manually and often require the installation of drivers. If you have a static IPv4 address, Hurricane Electric’s tunnelbroker.net is a good place to get a tunnel. SixXS also offers “anything in anything” tunnels that use UDP encapsulation to work better through NAT. These also dynamically discover the tunnel endpoints so a static IPv4 address isn’t required.

Have a happy World IPv6 Day, and don’t forget to report back to us if you encounter any problems.

Source:  arstechnica.com

Cisco: Internet traffic to quadruple by 2015

Wednesday, June 1st, 2011

The amount of Internet traffic crisscrossing the world will quadruple by 2015 as the number of networked devices surpasses 15 billion, according to a report out today from Cisco.

Releasing its fifth annual Visual Networking Index Forecast today, the networking giant forecast that global Internet traffic will reach 966 exabytes a year in just four years. One exabyte equals 1 million terabytes, 1 billion gigabytes, or about 250 million DVDs.

Per month, global IP traffic will hit 80.5 exabytes by 2015, up from about 20.2 exabytes per month in 2010. And traffic will average about 245 terabits per second, roughly 31 terabytes or some 7,700 DVDs’ worth of data every second.
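
A quick back-of-the-envelope check of those conversions, using the article’s own yardsticks (1 exabyte = 1 billion gigabytes, about 4 GB to a DVD):

```python
# Derive the monthly and per-second figures from the 966 exabytes-per-year
# forecast, using the article's own DVD yardstick (~250 million DVDs per EB).
EB_PER_YEAR = 966
SECONDS_PER_YEAR = 365 * 24 * 3600            # 31,536,000
GB_PER_DVD = 4

print(EB_PER_YEAR / 12)                       # ~80.5 exabytes per month
bytes_per_second = EB_PER_YEAR * 1e18 / SECONDS_PER_YEAR
print(bytes_per_second * 8 / 1e12)            # ~245 terabits per second
print(bytes_per_second / (GB_PER_DVD * 1e9))  # ~7,700 DVDs' worth per second
```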

The increase alone in global traffic between 2014 and 2015 will be 200 exabytes, more than the total amount of all IP-based traffic seen last year.

The dramatic jump in Internet traffic will occur as a result of four key factors, Cisco says:

  1. More devices. Driven by demand for mobile phones, tablets, smart appliances, and other connected gadgets, the number of Internet-connected devices will be twice the number of people on the planet in another four years.
  2. More people. By 2015, almost 3 billion people will be surfing the Net, more than 40 percent of the world’s total population.
  3. Faster speeds. The average broadband speed is expected to jump to 28 megabits per second in 2015, up from 7 Mbps now.
  4. More videos. In another four years, 1 million minutes of video, or 764 days’ worth, will cross the Internet every second.

Computers accounted for 97 percent of all traffic last year. That number will drop to 87 percent by 2015, as more mobile devices hop online. As a result, mobile Internet traffic around the world will jump 26 times, to 75 exabytes per year or 6.3 exabytes per month in 2015.

The number of people accessing online video will increase by about 500 million users in another four years. Web-enabled TVs will also scoop up their share of more data, according to Cisco, accounting for 10 percent of all consumer Internet traffic and 18 percent of online video traffic by 2015.

Source:  CNET and YouTube