Archive for the ‘Mobile’ Category

Wireless Case Studies: Cellular Repeater and DAS

Friday, February 7th, 2014

Gyver Networks recently designed and installed a cellular bi-directional amplifier (BDA) and distributed antenna system (DAS) for an internationally renowned preparatory and boarding school in Massachusetts.

BDA Challenge: Faculty, students, and visitors were unable to access any cellular voice or data services at one of this historic campus’ sports complexes; 3G and 4G cellular reception at the suburban Boston location was virtually nonexistent.

Of particular concern to the school was the fact that the safety of its student-athletes would be jeopardized in the event of a serious injury, with precious minutes lost as faculty were forced to scramble to find the nearest landline – or leave the building altogether in search of cellular signal – to contact first responders.

Additionally, since internal communications between management and facilities personnel around the campus took place via mobile phone, lack of cellular signal at the sports complex required staff to physically leave the site just to find adequate reception.

Resolution: Gyver Networks engineers performed a cellular site survey of selected carriers throughout the complex to acquire a precise snapshot of the RF environment. After selecting the optimal donor tower signal for each cell carrier, Gyver then engineered and installed a distributed antenna system (DAS) to retransmit the amplified signal put out by the bi-directional amplifier (BDA) inside the building.

The high-gain, dual-band BDA chosen for the system offered scalability across selected cellular and PCS bands, as well as the flexibility to reconfigure band settings on an as-needed basis, providing enhancement capabilities for all major carriers now and in the future.
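As background on how such systems are sized: a repeater design boils down to a link budget, where the donor signal picked up on the roof is amplified and then attenuated by the cabling and splitters that feed the indoor antennas. Below is a rough sketch of that arithmetic in Python, with purely illustrative numbers (none of these values are from the actual installation):

    # Rough downlink budget for a repeater/DAS (all values illustrative).
    donor_signal_dbm = -70.0       # donor-tower signal at the rooftop antenna
    bda_gain_db = 70.0             # bi-directional amplifier gain
    distribution_loss_db = 20.0    # coax runs and splitters feeding the DAS
    das_antenna_gain_dbi = 3.0     # indoor DAS antenna gain

    radiated_dbm = (donor_signal_dbm + bda_gain_db
                    - distribution_loss_db + das_antenna_gain_dbi)
    print(f"radiated per indoor antenna: {radiated_dbm:.0f} dBm")  # -17 dBm here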

Every objective set forth by the school’s IT department has been satisfied with the deployment of this cellular repeater and DAS: All areas of the athletic complex now enjoy full 3G and 4G voice and data connectivity; safety and liability concerns have been mitigated; and campus personnel are able to maintain mobile communications regardless of where they are in the complex.

FCC postpones spectrum auction until mid 2015

Monday, December 9th, 2013

In a blog post on Friday, Federal Communications Commission Chairman Tom Wheeler said that he would postpone a June 2014 spectrum auction to mid-2015. In his post, Wheeler called for more extensive testing of “the operating systems and the software necessary to conduct the world’s first-of-a-kind incentive auction.”

“Only when our software and systems are technically ready, user friendly, and thoroughly tested, will we start the auction,” wrote Wheeler. The chairman also said he wants to develop procedures for how the auction will be conducted after seeking public comment on those details in the second half of next year.

A separate auction for 10MHz of space will take place in January 2014. In 2012, Congress passed the Middle Class Tax Relief and Job Creation Act, which required the FCC to auction off 65MHz of spectrum by 2015. Revenue from the auction will go toward developing FirstNet, an LTE network for first responders. Two months ago, acting FCC chair Mignon Clyburn announced that the commission would start that sell-off by placing 10MHz on the auction block in January 2014. The other 55MHz would be auctioned off at a later date, before the end of 2015.

The forthcoming auction aims to pay TV broadcasters to give up lower frequencies, which will be bid on by wireless cell phone carriers like AT&T and Verizon, but also by smaller carriers who are eager to expand their spectrum property. Wheeler gave no hint as to whether he would push for restrictions on big carriers during the auction process, but he wrote, “I am mindful of the important national interest in making available additional spectrum for flexible use.”

Source:  arstechnica.com

HP: 90 percent of Apple iOS mobile apps show security vulnerabilities

Tuesday, November 19th, 2013

HP today said security testing it conducted on more than 2,000 Apple iOS mobile apps developed for commercial use by some 600 large companies in 50 countries showed that nine out of 10 had serious vulnerabilities.

Mike Armistead, HP vice president and general manager, said testing was done on apps from 22 iTunes App Store categories that are used for business-to-consumer or business-to-business purposes, such as banking or retailing. HP said 97 percent of these apps inappropriately accessed private information sources within a device, and 86 percent proved to be vulnerable to attacks such as SQL injection.
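HP’s report doesn’t include sample code, but the SQL injection pattern it describes is easy to illustrate. Here is a minimal, self-contained sketch using Python’s standard-library sqlite3; the table and the attack string are hypothetical, not from HP’s testing:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (user TEXT, balance REAL)")
    conn.execute("INSERT INTO accounts VALUES ('alice', 100.0)")
    conn.execute("INSERT INTO accounts VALUES ('bob', 250.0)")

    user_input = "alice' OR '1'='1"  # attacker-controlled string

    # Vulnerable: splicing input into the SQL lets the attacker rewrite
    # the WHERE clause and dump every row.
    rows = conn.execute(
        "SELECT * FROM accounts WHERE user = '" + user_input + "'").fetchall()
    print(rows)  # both accounts leak

    # Safer: a parameterized query treats the input as data, not SQL.
    rows = conn.execute(
        "SELECT * FROM accounts WHERE user = ?", (user_input,)).fetchall()
    print(rows)  # [] -- no user is literally named "alice' OR '1'='1"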

Apple’s guidelines for developing iOS apps help developers, but they don’t go far enough in terms of security, says Armistead. Mobile apps are being used to extend the corporate website to mobile devices, but companies in the process “are opening up their attack surfaces,” he says.

In its summary of the testing, HP said 86 percent of the apps tested lacked the means to protect themselves from common exploits, such as misuse of encrypted data, cross-site scripting and insecure transmission of data.

The same number did not have security optimized early in the development process, according to HP. Three-quarters “did not use proper encryption techniques when storing data on mobile devices, which leaves unencrypted data accessible to an attacker.” A large number of the apps didn’t implement SSL/HTTPS correctly. To discover weaknesses in apps, developers need to adopt practices such as security-focused app scanning, penetration testing and a secure coding development life-cycle approach, HP advises.
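The SSL/HTTPS class of flaw is often a one-liner: certificate verification disabled to silence errors during development and never re-enabled. A minimal sketch in Python with the requests library (the endpoint URL is a placeholder, not an app from HP’s study):

    import requests

    # Insecure: verify=False silences certificate errors, so the app will
    # happily talk to a man-in-the-middle presenting any certificate.
    requests.get("https://api.example.com/v1/profile", verify=False)

    # Correct: leave verification on (the default) and fail closed if the
    # certificate chain doesn't validate.
    resp = requests.get("https://api.example.com/v1/profile", timeout=10)
    resp.raise_for_status()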

The need to develop mobile apps quickly for business purposes is one of the main contributing factors leading to weaknesses in these apps made available for public download, according to HP. And the weakness on the mobile side is impacting the server side as well.

“It is our earnest belief that the pace and cost of development in the mobile space has hampered security efforts,” HP says in its report, adding that “mobile application security is still in its infancy.”

Source:  infoworld.com

Researchers find way to increase range of wireless frequencies in smartphones

Friday, November 8th, 2013

Researchers have found a new way to tune the radio frequency in smartphones and other wireless devices that promises to reduce costs and improve performance of semiconductors used in defense, satellite and commercial communications.

Semiconductor Research Corp. (SRC) and Northeastern University in Boston presented the research findings at the 58th Magnetism and Magnetic Materials Conference in Denver this week.

Nian Sun, associate professor of electrical and computer engineering at Northeastern, said he’s been working on the process since 2006, when he received National Science Foundation grants for the research.

“In September, we had a breakthrough,” he said in a telephone interview. “We didn’t celebrate with champagne exactly, but we were happy.”

The research progressed through a series of about 20 stages over the past seven years. It wasn’t like the hundreds of failures that the Wright brothers faced in coming up with a working wing design, but there were gradual improvements at each stage, he said.

Today, state-of-the-art radio frequency circuits in smartphones rely on tuning done with radio frequency (RF) varactors, a kind of capacitor. But the new process allows tuning in inductors as well, which could enhance a smartphone’s tunable frequency range by 50% to 200%, Sun said. Tuning is how a device finds an available frequency to complete a wireless transmission. It’s not very different from turning a dial on an FM radio receiver to bring in a signal.
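The claim is easier to see with the standard LC resonance formula, f = 1/(2π√(LC)): a tuned circuit’s frequency depends on both inductance and capacitance, so tuning L as well as C compounds the adjustable range. A back-of-the-envelope sketch with illustrative component values (not taken from the research):

    from math import pi, sqrt

    def resonant_freq_hz(L_henries, C_farads):
        """Resonant frequency of an ideal LC tank: f = 1 / (2*pi*sqrt(L*C))."""
        return 1.0 / (2.0 * pi * sqrt(L_henries * C_farads))

    L, C = 10e-9, 1e-12            # 10 nH, 1 pF -> roughly 1.6 GHz
    f0 = resonant_freq_hz(L, C)

    # Tuning C alone over a 2:1 range moves f by a factor of sqrt(2), ~1.41x.
    f_c_only = resonant_freq_hz(L, C / 2)

    # Tuning L over a 2:1 range as well compounds the effect to a full 2x.
    f_both = resonant_freq_hz(L / 2, C / 2)

    print(f"{f0/1e9:.2f} GHz -> C only: {f_c_only/1e9:.2f} GHz, "
          f"L and C: {f_both/1e9:.2f} GHz")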

Capacitors and inductors are the energy-storage elements of electronic circuits: inductors store energy in a magnetic field and resist changes in current, while capacitors store energy in an electric field and resist changes in voltage.

Most smartphones use 15 to 20 frequency channels to make connections, but the new inductors made possible by the research could potentially more than double the number of channels available on a smartphone or other device. The new inductors are a long-sought missing link in efforts to upgrade the RF tunable frequency range of a tuned circuit.

“Researchers have been trying for a while to make inductors tunable — to change the inductance value — and haven’t been very successful,” said Kwok Ng, senior director of device sciences at SRC. He said SRC has worked with Northeastern since 2011 on the project, investing up to $300,000 in the research work.

How it worked: Researchers at the Northeastern lab used a thin magnetic piezoelectric film deposited in an experimental inductor about a centimeter square, built using microelectromechanical systems (MEMS) processes. Piezoelectricity is an electromechanical interaction between the mechanical and electric states in a crystalline material. A crystal can acquire a charge when subjected to AC voltage.

What the researchers found is that they could apply the right amount of voltage to a layer of metal wrapped around a core of piezoelectric film to change its permeability. As the film’s permeability changes, its electrons can move at different frequencies.

Ng said the research means future inductors can be used to improve radio signal performance, which could reduce the number of modules needed in a smartphone, with the potential to lower the cost of materials.

Intel and Texas Instruments cooperated in the work, and the new inductor technology will be available for further industrial development by the middle of next year, followed by use in consumer applications as early as late 2014.

Source:  networkworld.com

FCC crowdsources mobile broadband research with Android app

Friday, November 8th, 2013

Most smartphone users know data speeds can vary widely. But how do the different carriers stack up against each other? The Federal Communications Commission is hoping the public can help figure that out, using a new app it will preview next week.

The FCC on Friday said that the agenda for next Thursday’s open meeting, the first under new Chairman Tom Wheeler, will feature a presentation on a new Android smartphone app that will be used to crowdsource measurements of mobile broadband speeds. 

The FCC announced it would start measuring the performance of mobile networks last September. All four major wireless carriers, as well as CTIA-The Wireless Association, have already agreed to participate in the app, which is called “FCC Speed Test.” It works only on Android for now — no word on when an iPhone version might be available.

While the app has been in the works for a long time, its elevation to this month’s agenda reaffirms something Wheeler told the Journal this week. During that conversation, the Chairman repeatedly emphasized his desire to “make decisions based on facts.” Given the paucity of information on mobile broadband availability and prices, this type of data collection seems like the first step toward evaluating whether Americans are getting what they pay for from their carriers in terms of mobile data speeds.

The FCC unveiled its first survey of traditional land-based broadband providers in August 2011, which showed that most companies provide access that comes close to or exceeds advertised speeds. (Those results prompted at least one Internet service provider to increase its performance during peak hours.) Expanding the data collection effort to mobile broadband is a natural step; smartphone sales outpace laptop sales, and a significant portion of Americans (particularly minorities and low-income households) rely on a smartphone as their primary connection to the Internet.

Wheeler has said ensuring there is adequate competition in the broadband and wireless markets is among his top priorities. But first the FCC must know what level of service Americans are getting from their current providers. If mobile broadband speeds perform much as advertised, it would bolster the case of those who argue the wireless market is sufficiently competitive. But if any of the major carriers were to seriously under-perform, it would raise questions about the need for intervention from federal regulators.

Source:  wsj.com

802.11ac ‘gigabit Wi-Fi’ starts to show potential, limits

Monday, October 7th, 2013

Vendor tests and very early 802.11ac customers provide a reality check on “gigabit Wi-Fi” but also confirm much of its promise.

Vendors have been testing their 11ac products for months, yielding data that show how 11ac performs and what variables can affect performance. Some of the tests are under ideal laboratory-style conditions; others involve actual or simulated production networks. Among the results: consistent 400Mbps to 800Mbps throughput for 11ac clients in best-case situations, better throughput at range than 11n, more clients serviced by each access point, and a boost in performance for existing 11n clients.

Wireless LAN vendors are stepping up product introductions; among those coming out with 11ac products are Aerohive, Aruba Networks, Cisco (including its Meraki cloud-based offering), Meru, Motorola Solutions, Ruckus, Ubiquiti, and Xirrus.

The IEEE 802.11ac standard does several things to triple the throughput of 11n. It builds on some of the technologies introduced in 802.11n; makes mandatory some 11n options; offers several ways to dramatically boost Wi-Fi throughput; and works solely in the under-used 5GHz band.

It’s a potent combination. “We are seeing over 800Mbps on the new Apple 11ac-equipped MacBook Air laptops, and 400Mbps on the 11ac phones, such as the new Samsung Galaxy S4, that [currently] make up the bulk of 11ac devices on campus,” says Mike Davis, systems programmer, University of Delaware, Newark, Delaware.

A long-time Aruba Networks WLAN customer, the university installed 3,700 of Aruba’s new 11ac access points on campus this summer, in a new engineering building, two new dorms, and some large auditoriums. Currently, there are on average about 80 11ac clients online, with a peak of 100, out of some 24,000 Wi-Fi clients on campus.

The 11ac network seems to bear up under load. “In a limited test with an 11ac MacBook Air, I was able to sustain 400Mbps on an 11ac access point that was loaded with over 120 clients at the time,” says Davis. Not all of the clients were “data hungry,” but the results showed “that the new 11ac access points could still supply better-than-11n data rates while servicing more clients than before,” Davis says.

The maximum data rates for 11ac are highly dependent on several variables. One is whether the 11ac radios are using 80MHz-wide channels (11n got much of its throughput boost by being able to use 40MHz channels). Another is whether the radios are able to use the 256-QAM modulation scheme, compared to 64-QAM for 11n. Both of these depend on how close the 11ac clients are to the access point. Too far, and the radios “step down” to narrower channels and lower modulations.

Another variable is the number of “spatial streams,” a technology introduced with 11n, supported by the client and access point radios. Chart #1, “802.11ac performance based on spatial streams,” shows the download throughput performance.

[Chart #1: 802.11ac performance based on spatial streams]

In perfect conditions, close to the access point, a three-stream 11ac radio can achieve the maximum raw data rate of 1.3Gbps. But no user will actually realize that in terms of usable throughput.

“Typically, if the client is close to the access point, you can expect to lose about 40% of the overall raw bit rate due to protocol overhead – acknowledgements, setup, beaconing and so on,” says Mathew Gast, director of product management for Aerohive Networks, which just announced its first 11ac products, the AP370 and AP390. Aerohive incorporates controller functions in a distributed access point architecture and provides a cloud-based management interface for IT groups.
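Those figures are straightforward to reproduce. A quick sketch of the arithmetic, treating Gast’s 40% figure as a rule of thumb rather than a constant:

    # Per-stream raw data rate from the 802.11ac MCS table: one spatial
    # stream at 80 MHz with 256-QAM is the commonly quoted 433.3 Mbps.
    RATE_80MHZ_256QAM = 433.3e6

    streams = 3
    raw = streams * RATE_80MHZ_256QAM        # ~1.3 Gbps headline rate
    overhead = 0.40                          # protocol overhead, per Gast
    usable = raw * (1 - overhead)            # ~780 Mbps of real throughput

    print(f"raw {raw/1e9:.1f} Gbps -> usable ~{usable/1e6:.0f} Mbps")

That ~780Mbps best-case figure lines up with the 400Mbps to 800Mbps range vendors report in the field.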

“A single [11ac] client that’s very close to the access point in ideal conditions gets very good speed,” says Gast. “But that doesn’t reflect reality: you have electronic ‘noise,’ multiple contending clients, the presence of 11n clients. In some cases, the [11ac] speeds might not be much higher than 11n.”

Spatial-stream support is where devices vary most. Most of the new 11ac access points will support three streams, usually with three transmit and three receive antennas. But clients will vary. At the University of Delaware, the new MacBook Air laptops support two streams, while the new Samsung Galaxy S4 and HTC One phones support one stream, via Broadcom’s BCM4335 11ac chipset.

Tests by Broadcom found that a single 11n data stream over a 40MHz channel can deliver up to 60Mbps. By comparison, single-stream 11ac in an 80MHz channel is “starting at well over 250Mbps,” says Chris Brown, director of business development for Broadcom’s wireless connectivity unit. Single-stream 11ac will max out at about 433Mbps.

These qualities yield some interesting results. One is that throughput at any given distance from the access point is much better with 11ac than with 11n. “Even at 60 meters, single-stream 11ac outperforms all but the 2×2 11n at 40MHz,” Brown says.

Another result is that 11ac access points can service a larger number of clients than 11n access points.

“We have replaced several dozen 11n APs with 11ac in a high-density lecture hall, with great success,” says University of Delaware’s Mike Davis. “While we are still restricting the maximum number of clients that can associate with the new APs, we are seeing them maintain client performance even as the client counts almost double from what the previous generation APs could service.”

Other features of 11ac help to sustain these capacity gains. Transmit beamforming (TBF), which was an optional feature in 11n, is mandatory and standardized in 11ac. “TBF lets you ‘concentrate’ the RF signal in a specific direction, for a specific client,” says Mark Jordan, director, technical marketing engineering, Aruba Networks. “TBF changes the phasing slightly to allow the signals to propagate at a higher effective radio power level. The result is a vastly improved throughput-over-distance.”
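The physics behind TBF’s gain can be sketched with a toy calculation: signals that arrive at the client in phase add in amplitude rather than just power, so two aligned transmit chains can deliver up to 6dB more than one. (This is a deliberately simplified two-antenna example; real beamforming gains depend on channel estimates and multipath.)

    import cmath, math, random

    def combined_power(phases):
        """Power of the sum of unit-amplitude carriers with given phases."""
        return abs(sum(cmath.exp(1j * p) for p in phases)) ** 2

    aligned = combined_power([0.0, 0.0])                  # beamformed pair
    unaligned = combined_power([0.0, random.uniform(0, 2 * math.pi)])

    # One antenna has power 1.0 (0 dB); two in-phase antennas give 4x (+6 dB).
    print(f"in phase: +{10 * math.log10(aligned):.1f} dB")
    print(f"random phase: {10 * math.log10(max(unaligned, 1e-12)):+.1f} dB")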

A second feature is low density parity check (LDPC), which is a technique to improve the sensitivity of the receiving radio, in effect giving it better “hearing.”

The impact in Wi-Fi networks will be significant. Broadcom did extensive testing in a network set up in an office building, using both 11n and 11ac access points and clients. It specifically tested 11ac data rates and throughput with beam forming and low density parity check switched off and on, according to Brown.

Tests showed that 11ac connections with both TBF and LDPC turned on increasingly and dramatically outperformed 11n – and even 11ac with both features turned off – as the distance between client and access point increased. For example, at one test point, an 11n client achieved 32Mbps. At the same point, the 11ac client with TBF and LDPC turned “off” achieved about the same. But when both were turned “on,” the 11ac client soared to 102Mbps, more than three times the previous throughput.

Aruba found similar results. Its single-stream Galaxy S4 smartphone reached 238Mbps TCP downstream throughput at 15 feet, 235Mbps at 30 feet, and 193Mbps at 75 feet. At 120 feet, it was still 154Mbps. For the same distances upstream, the throughput rates were 235Mbps, 230Mbps, 168Mbps, and 87Mbps.

“We rechecked that several times, to make sure we were doing it right,” says Aruba’s Jordan. “We knew we couldn’t get the theoretical maximums. But now, we can support today’s clients with all the data they demand. And we can do it with the certainty of such high rates-at-range that we can come close to guaranteeing a high quality [user] experience.”

There are still other implications with 11ac. Because of the much higher up and down throughput, 11ac mobile devices get on and off the Wi-Fi channel much faster compared to 11n, drawing less power from the battery. The more efficient network use will mean less “energy per bit,” and better battery life.

A related implication is that because this all happens much faster with 11ac, there’s more time for other clients to access the channel. In other words, network capacity increases by up to six times, according to Broadcom’s Brown. “That frees up time for other clients to transmit and receive,” he says.

That improvement can be used to reduce the number of access points covering a given area: in the Broadcom office test area, four Cisco 11n access points provided connectivity. A single 11ac access point could replace them, says Brown.

But more likely, IT groups will optimize 11ac networks for capacity, especially as more smartphones, tablets, laptops and other gear are outfitted with 11ac radios.

Even 11n clients will see improvement in 11ac networks, as University of Delaware has found.

“The performance of 11n clients on the 11ac APs has probably been the biggest, unexpected benefit,” says Mike Davis. “The 11n clients still make up 80% of the total number of clients and we’ve measured two times the performance of 11n clients on the new 11ac APs over the last generation [11n] APs.”

Wi-Fi uses carrier sense multiple access with collision avoidance (CSMA/CA), a close cousin of Ethernet’s collision detection scheme, which essentially checks to see if a channel is being used, and if so, backs off, waits and tries again. “If we’re spending less time on the net, then there’s more airtime available, and so more opportunities for devices to access the media,” says Brown. “More available airtime translates into fewer collisions and backoffs. If an overburdened 11n access point is replaced with an 11ac access point, it will increase the network’s capacity.”
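A minimal sketch of that listen-and-back-off loop, simplified to slot counting (real 802.11 adds inter-frame spacings, ACK timeouts and per-attempt retry limits):

    import random

    def send_with_backoff(channel_busy, max_retries=7, cw_min=16, cw_max=1024):
        """Simplified CSMA/CA: wait a random number of idle slots, doubling
        the contention window (CW) after each failed attempt."""
        cw = cw_min
        for attempt in range(max_retries):
            slots = random.randrange(cw)       # random backoff in [0, cw)
            while slots > 0:
                if not channel_busy():         # count down only on idle slots
                    slots -= 1
            if not channel_busy():             # channel clear: transmit
                return attempt
            cw = min(cw * 2, cw_max)           # still busy: double the window
        raise RuntimeError("gave up after max retries")

    # Example: a channel that is busy about 30% of the time.
    retries = send_with_backoff(lambda: random.random() < 0.3)
    print(f"transmitted after {retries} retries")

The faster 11ac drains each transmission, the less often that loop finds the channel busy, which is the capacity gain Brown describes.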

In Aruba’s in-house testing, a MacBook Pro laptop with a three-stream 11n radio was connected first to the 11n Aruba AP-135, and then to the 11ac AP-225. As shown in Chart #2, “11ac will boost throughput in 11n clients,” the laptop’s performance was vastly better on the 11ac access point, especially as the range increased.

[Chart #2: 11ac will boost throughput in 11n clients]

These improvements are part of “wave 1” 11ac. In wave 2, starting perhaps later in 2014, new features will be added to 11ac radios: support for four to eight data streams, explicit transmit beamforming, an option for 160MHz channels, and “multi-user MIMO,” which lets the access point talk to more than one 11ac client at the same time.

Source:  networkworld.com

AT&T announces plans to use 700MHz channels for LTE Broadcast

Thursday, September 26th, 2013

Yesterday at Goldman Sachs’ Communacopia Conference in New York, AT&T CEO Randall Stephenson announced that his company would be allocating the 700MHz Lower D and E blocks of spectrum that it acquired from Qualcomm in 2011 to build out its LTE Broadcast service. Fierce Wireless reported from the event and noted that this spectrum had been destined for additional data capacity. In a recent FCC filing, AT&T put off deploying LTE in this spectrum due to administrative and technical delays caused by the 3rd Generation Partnership Project’s (3GPP) continued evaluation of carrier aggregation in LTE Advanced.

No timeline was given for deploying LTE Broadcast, but Stephenson stressed the importance of video to AT&T’s strategy over the next few years.

The aptly named LTE Broadcast is an adaptation of the LTE technology we know and love, but in just one direction. In the case of AT&T’s plans, either 6MHz or 12MHz will be available for data transmission, depending on the market. In 6MHz markets there would be some bandwidth limitations, but plenty to distribute a live television event, like the Super Bowl or March Madness. Vitally, since the content is broadcast indiscriminately to any handsets capable of receiving it, there’s no upper limit to the number of recipients of the data. So, instead of having a wireless data network crumble under the weight of thousands of users watching March Madness on their phones and devices at one cell site, the data network remains intact, and everyone gets to watch the games.
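The capacity argument is simple arithmetic. A hedged sketch, with the per-stream rate and per-cell capacity as illustrative numbers rather than AT&T figures:

    stream_rate_mbps = 2.0       # one live video stream (illustrative)
    cell_capacity_mbps = 75.0    # usable LTE downlink at one site (illustrative)
    viewers = 1000

    unicast_demand = viewers * stream_rate_mbps    # 2,000 Mbps: the cell crumbles
    broadcast_demand = stream_rate_mbps            # one copy serves everyone

    print(f"unicast needs {unicast_demand:.0f} Mbps vs "
          f"{cell_capacity_mbps:.0f} Mbps available; "
          f"broadcast needs {broadcast_demand:.0f} Mbps regardless of viewers")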

Verizon Wireless has a similar proposal in the works, with vague hopes that they’ll be able to be in position to leverage their ongoing relationship with the NFL for the 2014 Super Bowl. Neither Verizon Wireless nor AT&T is hurting for spectrum right now, so it’s nice to see them putting it to good use.

Source:  arstechnica.com

US FDA to regulate only medical apps that could be risky if malfunctioning

Tuesday, September 24th, 2013

The FDA said the mobile platform brings its own unique risks when used for medical applications

The U.S. Food and Drug Administration intends to regulate only mobile apps that are medical devices and could pose a risk to a patient’s safety if they do not function as intended.

Some of the risks could be unique to the choice of the mobile platform. The interpretation of radiological images on a mobile device could, for example, be adversely affected by the smaller screen size, lower contrast ratio and uncontrolled ambient light of the mobile platform, the agency said in its recommendations released Monday. The FDA said it intends to take the “risks into account in assessing the appropriate regulatory oversight for these products.”

The nonbinding recommendations to developers of mobile medical apps reflect only the FDA’s current thinking on the topic, the agency said. The guidance document is being issued to clarify the small group of mobile apps that the FDA aims to scrutinize, it added.

The recommendations would exempt from FDA scrutiny the majority of mobile apps that could be classified as medical devices but pose minimal risk to consumers, the agency said.

The FDA said it is focusing its oversight on mobile medical apps that are to be used as accessories to regulated medical devices or transform a mobile platform into a regulated medical device such as an electrocardiography machine.

“Mobile medical apps that undergo FDA review will be assessed using the same regulatory standards and risk-based approach that the agency applies to other medical devices,” the agency said.

It also clarified that its oversight would be platform neutral. Mobile apps to analyze and interpret EKG waveforms to detect heart function irregularities would be considered similar to software running on a desktop computer that serves the same function, which is already regulated.

“FDA’s oversight approach to mobile apps is focused on their functionality, just as we focus on the functionality of conventional devices. Our oversight is not determined by the platform,” the agency said in its recommendations.

The FDA has cleared about 100 mobile medical applications over the past decade, of which about 40 were cleared in the past two years. The draft of the guidance was first issued in 2011.

Source:  computerworld.com

iOS and Android weaknesses allow stealthy pilfering of website credentials

Thursday, August 29th, 2013

Computer scientists have uncovered architectural weaknesses in both the iOS and Android mobile operating systems that make it possible for hackers to steal sensitive user data and login credentials for popular e-mail and storage services.

Both OSes fail to ensure that browser cookies, document files, and other sensitive content from one Internet domain are off-limits to scripts controlled by a second address without explicit permission, according to a just-published academic paper from scientists at Microsoft Research and Indiana University. The so-called same-origin policy is a fundamental security mechanism enforced by desktop browsers, but the protection is woefully missing from many iOS and Android apps. To demonstrate the threat, the researchers devised several hacks that carry out so-called cross-site scripting (XSS) and cross-site request forgery (CSRF) attacks to surreptitiously download user data from handsets.

The most serious of the attacks worked on both iOS and Android devices and required only that an end-user click on a booby-trapped link in the official Google Plus app. Behind the scenes, a script sent instructions that caused a text-editing app known as PlainText to send documents and text input to a Dropbox account controlled by the researchers. The attack worked against other apps, including TopNotes and Nocs.

“The problem here is that iOS and Android do not have this origin-based protection to regulate the interactions between those apps and between an app and another app’s Web content,” XiaoFeng Wang, a professor in Indiana University’s School of Informatics and Computing, told Ars. “As a result, we show that origins can be crossed and the same XSS and CSRF can happen.” The paper, titled “Unauthorized Origin Crossing on Mobile Platforms: Threats and Mitigation,” was recently accepted by the 20th ACM Conference on Computer and Communications Security.

All your credentials belong to us

The PlainText app in this demonstration video was not configured to work with Dropbox. But even if the app had been set up to connect to the storage service, the attack could make it connect to the attacker’s account rather than the legitimate account belonging to the user, Wang said. All that was required was for the iPad user to click on the malicious link in the Google Plus app. In the researchers’ experiments, Android devices were susceptible to the same attack.

A separate series of attacks was able to retrieve the multi-character security tokens Android apps use to access private accounts on Facebook and Dropbox. Once the credentials are exposed, attackers could use them to download photos, documents, or other sensitive files stored in the online services. The attack, which relied on a malicious app already installed on the handset, exploited the lack of same-origin policy enforcement to bypass Android’s “sandbox” security protection. Google developers explicitly designed the mechanism to prevent one app from being able to access browser cookies, contacts, and other sensitive content created by another app unless a user overrides the restriction.

All attacks described in the 12-page paper have been confirmed by Dropbox, Facebook, and the other third-party websites whose apps were tested, Wang said. Most of the vulnerabilities have been fixed, but in many cases the patches were extremely hard to develop and took months to implement. The scientists went on to create a proof-of-concept app they called Morbs that provides OS-level protection across all apps on an Android device. It works by labeling each message with information about its origin and could make it easier for developers to specify and enforce security policies based on the sites where security tokens and other sensitive information originate.
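Reduced to a sketch, the Morbs idea is to derive an origin label (scheme, host and port, as desktop browsers define it) for every inter-app message and drop deliveries that cross origins without explicit permission. The API below is hypothetical shorthand, not the paper’s actual design:

    from urllib.parse import urlsplit

    def origin(url):
        """Origin as desktop browsers define it: (scheme, host, port)."""
        p = urlsplit(url)
        return (p.scheme, p.hostname,
                p.port or {"http": 80, "https": 443}.get(p.scheme))

    def deliver(message, source_url, target_url, allowlist=()):
        """Drop cross-origin messages unless explicitly allowlisted."""
        src, dst = origin(source_url), origin(target_url)
        if src != dst and src not in allowlist:
            raise PermissionError(f"blocked cross-origin message: {src} -> {dst}")
        return message

    # Same origin: delivered.
    deliver("token=abc", "https://trustedbank.com/cb", "https://trustedbank.com/app")

    # Cross origin: blocked.
    try:
        deliver("token=abc", "https://evilhacker.com/x", "https://trustedbank.com/app")
    except PermissionError as err:
        print(err)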

As mentioned earlier, desktop browsers have long steadfastly enforced a same-origin policy that makes it impossible for JavaScript and other code from a domain like evilhacker.com to access cookies or other sensitive content from a site like trustedbank.com. In the world of mobile apps, the central role of the browser—and the gate-keeper service it provided—has largely come undone. It’s encouraging to know that the developers of the vulnerable apps took this research so seriously. Facebook awarded the researchers at least $7,000 in bounties (which the researchers donated to charity), and Dropbox offered valuable premium services in exchange for the private vulnerability report. But depending on a patchwork of fixes from each app maker is problematic given the difficulty and time involved in coming up with patches.

A better approach is for Apple and Google developers to implement something like Morbs that works across the board.

“Our research shows that in the absence of such protection, the mobile channels can be easily abused to gain unauthorized access to a user’s sensitive resources,” the researchers—who besides Wang, included Rui Wang and Shuo Chen of Microsoft and Luyi Xing of Indiana University—wrote. “We found five cross-origin issues in popular [software development kits] and high-profile apps such as Facebook and Dropbox, which can be exploited to steal their users’ authentication credentials and other confidential information such as ‘text’ input. Moreover, without the OS support for origin-based protection, not only is app development shown to be prone to such cross-origin flaws, but the developer may also have trouble fixing the flaws even after they are discovered.”

Source:  arstechnica.com

Intel plans to ratchet up mobile platform performance with 14-nanometre silicon

Friday, August 23rd, 2013

Semiconductor giant Intel is to start producing mobile and embedded systems using its latest manufacturing process technology in a bid to muscle in on a market that it had previously ignored.

The company is planning to launch a number of platforms this year and next intended to ratchet up the performance of its offerings, according to sources quoted in the Far Eastern trade journal Digitimes.

By the end of 2013, a new smartphone system-on-a-chip (SoC) produced using 22-nanometre process technology, codenamed “Merrifield”, will be introduced, followed by “Moorefield” in the first half of 2014. “Morganfield”, which will be produced on forthcoming 14-nanometre process manufacturing technology, will be available from the first quarter of 2015.

Merrifield ought to offer a performance boost of about 50 per cent combined with much improved battery life compared to Intel’s current top-end smartphone platform, called Clover Trail+.

More immediately, Intel will be releasing “Bay Trail-T” microprocessors intended for Windows 8 and Android tablet computers. The Bay Trail-T architecture will offer a battery life of about eight hours in use, but weeks when it is idling, according to Digitimes sources.

The Bay Trail-T may be unveiled at the Intel Developer Forum in September, when Intel will also be unveiling “Bay Trail” on which the T-version is based. Bay Trail will be produced on the 22-nanometre Silvermont architecture.

Digitimes was quoting sources among Taiwan-based manufacturers.

Intel’s current Intel Atom microprocessors for mobile phones – such as those in the Motorola Razr i and the Prestigio MultiPhone – are based on 32-nanometre technology, a generation behind the manufacturing process technology that it is using to produce its latest desktop and laptop microprocessors.

However, the roadmap suggests that Intel is planning to produce its high-end smartphone and tablet computer microprocessors and SoC platforms using the same manufacturing technology as desktop and server products in a bid to gain an edge on ARM-based rivals from Samsung, Qualcomm, TSMC and other producers.

Manufacturers of ARM-based microprocessors, which currently dominate the high-performance market for mobile and embedded microprocessors, trail Intel in the manufacturing technology they can use to build their systems.

Intel, though, has been turning its attention to mobile and embedded systems as laptop, PC and server sales have stalled.

Source:  computing.com

Amazon is said to have tested a wireless network

Friday, August 23rd, 2013

Amazon.com Inc. (AMZN) has tested a new wireless network that would allow customers to connect its devices to the Internet, according to people with knowledge of the matter.

The wireless network, which was tested in Cupertino, California, used spectrum controlled by satellite communications company Globalstar Inc. (GSAT), said the people who asked not to be identified because the test was private.

The trial underlines how Amazon, the world’s largest e-commerce company, is moving beyond being a Web destination and hardware maker and digging deeper into the underlying technology for how people connect to the Internet. That would let Amazon create a more comprehensive user experience, encompassing how consumers get online, what device they use to connect to the Web and what they do on the Internet.

Leslie Letts, a spokeswoman for Amazon, didn’t respond to a request for comment. Katherine LeBlanc, a spokeswoman for Globalstar, declined to comment.

Amazon isn’t the only Internet company that has tested technology allowing it to be a Web gateway. Google Inc. (GOOG) has secured its own communications capabilities by bidding for wireless spectrum and building high-speed, fiber-based broadband networks in 17 cities, including Austin, Texas and Kansas City, Kansas. It also operates a Wi-Fi network in Mountain View, California, and recently agreed to provide wireless connectivity at Starbucks Corp. (SBUX)’s coffee shops.

Always Trying

Amazon continually tries various technologies, and it’s unclear if the wireless network testing is still taking place, said the people. The trial was in the vicinity of Amazon’s Lab126 research facilities in Cupertino, the people said. Lab126 designs and engineers Kindle devices.

“Given that Amazon’s becoming a big player in video, they could look into investing into forms of connectivity,” independent wireless analyst Chetan Sharma said in an interview.

Amazon has moved deeper into wireless services for several years, as it competes with tablet makers like Apple Inc. (AAPL) and with Google, which runs a rival application store. Amazon’s Kindle tablets and e-book readers have built-in wireless connectivity, and the company sells apps for mobile devices. Amazon had also worked on its own smartphone, Bloomberg reported last year.

Chief Executive Officer Jeff Bezos is aiming to make Amazon a one-stop shop for consumers online, a strategy that spurred a 27 percent increase in sales to $61.1 billion last year. It’s an approach investors have bought into, shown in Amazon’s stock price, which has more than doubled in the past three years.

Globalstar’s Spectrum

Globalstar is seeking regulatory approval to convert about 80 percent of its spectrum to terrestrial use. The Milpitas, California-based company applied to the Federal Communications Commission for permission to convert its satellite spectrum to provide Wi-Fi-like services in November 2012.

Globalstar met with FCC Chairwoman Mignon Clyburn in June, and a decision on whether the company can convert the spectrum could come within months. A company technical adviser conducted tests that showed the spectrum may be able to accommodate more traffic and offer faster speeds than traditional public Wi-Fi networks.

“We are now well positioned in the ongoing process with the FCC as we seek terrestrial authority for our spectrum,” Globalstar CEO James Monroe said during the company’s last earnings call.

Neil Grace, a spokesman for the FCC, declined to comment.

If granted FCC approval, Globalstar is considering leasing its spectrum, sharing service revenues with partners, and other business models, one of the people said. With wireless spectrum scarce, Globalstar’s converted spectrum could be of interest to carriers and cable companies, seeking to offload ballooning mobile traffic, as well as to technology companies.

The FCC issued the permit to trial wireless equipment using Globalstar’s spectrum to the satellite service provider’s technical adviser, Jarvinian Wireless Innovation Fund. In a letter to the FCC dated July 1, Jarvinian managing director John Dooley said his company is helping “a major technology company assess the significant performance benefits” of Globalstar’s spectrum.

Source:  bloomberg.com

Next up for WiFi

Thursday, August 22nd, 2013

Transitioning from the Wi-Fi-shy financial industry, Riverside Medical Center’s CSO Erik Devine remembers his shock at the healthcare industry’s wide embrace of the technology when he joined the hospital in 2011.

“In banking, Wi-Fi was almost a no-go because everything is so overly regulated. Wireless here is almost as critical as wired,” Devine still marvels. “It’s used for connectivity to heart pumps, defibrillators, nurse voice over IP call systems, surgery robots, remote stroke consultation systems, patient/guest access and more.”

To illustrate the level of dependence the organization has on Wi-Fi, Riverside Medical Center calls codes over the PA system — much like in medical emergencies — when the network goes down. “Wireless is such a multifaceted part of the network that it’s truly a big deal,” he says.

And getting bigger. Besides the fact that organizations are finding new ways to leverage Wi-Fi, workers have tasted the freedom of wireless, have benefited from the productivity boost, and are demanding increased range and better performance, particularly now that many are showing up with their own devices (the whole bring your own device thing). The industry is responding in kind, introducing new products and technologies, including gigabit Wi-Fi (see “Getting ready for gigabit Wi-Fi“), and it is up to IT to orchestrate this new mobile symphony.

“Traffic from wireless and mobile devices will exceed traffic from wired devices by 2017,” according to the Cisco Visual Networking Index. While only about a quarter of consumer IP traffic originated from non-PC devices in 2012, non-PC devices will account for almost half of consumer IP traffic by 2017, Cisco says.

IT gets it, says Tony Hernandez, principal in Grant Thornton’s business consulting practice. Wi-Fi is no longer an afterthought in IT build-outs. “The average office worker still might have a wired connection, but they also have the capability to use Wi-Fi across the enterprise,” says Hernandez, noting the shift has happened fast.

“Five years ago, a lot of enterprises were looking at Wi-Fi for common areas such as lobbies and cafeterias and put that traffic on an isolated segment of the network,” Hernandez says. “If users wanted access to corporate resources from wireless, they’d have to use a VPN.”

Hernandez credits several advances for Wi-Fi’s improved stature: enterprise-grade security; sophisticated, software-based controllers; and integrated network management.

Also in the mix: pressure from users who want mobility and flexibility for their corporate machines as well as the ability to access the network from their own devices, including smartphones, tablets and laptops.

Where some businesses have only recently converted to 802.11n from the not-too-distant past of 802.11a/b/g, they now have to decide if their next Wi-Fi purchases will support 802.11ac, the draft IEEE standard that addresses the need for gigabit speed. “The landscape is still 50/50 between 802.11g and 802.11n,” Hernandez says. “There are many businesses with older infrastructure that haven’t refreshed their Wi-Fi networks yet.”

What will push enterprises to move to 802.11ac? Heavier reliance on mobile access to video such as videoconferencing and video streaming, he says.

Crash of the downloads

David Heckaman, vice president of technology development at luxury hospitality chain Mandarin Oriental Hotel Group, remembers the exact moment he knew Wi-Fi had gained an equal footing with wired infrastructure in his industry.

A company had booked meeting room space at one of Mandarin Oriental’s 30 global properties to launch its new mobile app and answered all the hotel’s usual questions about anticipated network capacity demands. Not yet familiar with the impact of dense mobile usage, the IT team didn’t account for the fallout when the 200-plus crowd received free Apple iPads to immediately download and launch the new app. The network crashed. “It was a slap in the face: What was good enough before wouldn’t work. This was a whole new world,” Heckaman says.

Seven or eight years ago, Wi-Fi networks were designed to address coverage; capacity wasn’t given much thought. When Mandarin Oriental opened its New York City property in 2003, for example, IT installed two or three wireless access points in a closet on each floor and used a distributed antenna to extend coverage to the whole floor. At the time, wireless made up only 10% of total network usage. As the number climbed to 40%, capacity issues cropped up, forcing IT to rethink the entire architecture.

“We didn’t really know what capacity needs were until the Apple iPhone was released,” Heckaman says. Now, although a single access point could provide signal coverage for every five rooms, the hotel is putting access points in almost every room to connect back to an on-site controller.

Heckaman’s next plan involves adding centralized Wi-Fi control from headquarters for advanced reporting and policy management. Instead of simply reporting that on-site controllers delivered a certain number of sessions and supported X amount of overall bandwidth, he would be able to evaluate in real-time actual end-device performance. “We would be able to report on the quality of the connection and make adjustments accordingly,” he says.

Where he pinpoints service degradation, he’ll refresh access points with those that are 802.11ac-enabled. As guests bring more and more devices into their rooms and individually stream movies, play games or perform other bandwidth-intensive actions, he predicts the need for 802.11ac will come faster than anticipated.

“We have to make sure that the physical link out of the building, not the guest room access point, remains the weakest point and that the overall network is robust enough to handle it,” he says.

Getting schooled on wireless

Craig Canevit, IT administrator at the University of Tennessee at Knoxville, has had many aha! moments when it comes to Wi-Fi across the 27,000-student campus. For instance, when the team first engineered classrooms for wireless, it was difficult to predict demand. Certain professors would need higher capacity for their lectures than others, so IT would accommodate them. If those professors got reassigned to different rooms the next year, they would immediately notice performance issues.

“They had delays and interruption of service so we had to go back and redesign all classrooms with more access points and more capacity,” Canevit says.

The university also has struggled with the fact that students and faculty are now showing up with numerous devices. “We see at least three devices per person, including smartphones, tablets, gaming consoles, Apple TV and more,” he says. IT has the dual challenge of supporting the education enterprise during the day and residential demands at night.

The school’s primary issue has revolved around IP addresses, which the university found itself low on as device count skyrocketed. “Devices require IP addresses even when sitting in your pocket and we faced a terrible IP management issue,” he says. IT had to constantly scour the network for unused IP addresses to “feed the monster.”

Eventually the team came too close to capacity for comfort and had to act. Canevit didn’t think IPv6 was widely enough supported at the time, so the school went with Network Address Translation instead, hiding private IP addresses behind a single public address. A side effect of NAT is that mapping network and security issues to specific devices becomes more challenging, but Canevit says the effort is worth it.
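In outline, NAT (more precisely, port address translation) lets thousands of private addresses share one public one by rewriting ports and keeping a mapping table. A toy sketch with illustrative addresses:

    import itertools

    PUBLIC_IP = "203.0.113.10"             # the one public address (illustrative)
    _next_port = itertools.count(40000)    # pool of public-side ports
    nat_table = {}                         # (private_ip, private_port) -> public port

    def translate_outbound(private_ip, private_port):
        """Map an inside socket to a public port, reusing existing mappings."""
        key = (private_ip, private_port)
        if key not in nat_table:
            nat_table[key] = next(_next_port)
        return PUBLIC_IP, nat_table[key]

    def translate_inbound(public_port):
        """Reverse lookup: find which inside device owns a public port."""
        for key, port in nat_table.items():
            if port == public_port:
                return key
        raise KeyError("no mapping -- unsolicited inbound traffic is dropped")

    print(translate_outbound("10.24.1.77", 51515))   # ('203.0.113.10', 40000)
    print(translate_inbound(40000))                  # ('10.24.1.77', 51515)

That reverse lookup is also why mapping network and security issues to specific devices becomes harder under NAT: without the live table and its timestamps, a public port says nothing about which inside host owned it.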

Looking forward, the university faces the ongoing challenge of providing Wi-Fi coverage to every dorm room and classroom. That’s a bigger problem than capacity. “We only give 100Mbps on the wired network in residence halls and don’t come close to hitting capacity,” he says, so 802.11ac is really not on the drawing board. What’s more, 802.11ac would exacerbate his coverage problem. “To get 1Gbps, you’ll have to do channel bonding, which leaves fewer non-overlapping channels available and takes away from the density,” he says.

What he is intrigued by is software-defined networking. Students want to use their iPhone to control their Apple TV and other such devices, which is impossible currently because of subnets. “If you allowed this in a dorm, it would degrade quality for everyone,” he says. SDN could give wireless administrators a way around the problem by making it possible to add boatloads of virtual LANs. “Wireless will become more of a provisioning than an engineering issue,” Canevit predicts.

Hospital all-in with Wi-Fi

Armand Stansel, director of IT infrastructure at Houston’s The Methodist Hospital System, recalls a time when his biggest concern regarding Wi-Fi was making sure patient areas had access points. “That was in early 2000 when we were simply installing Internet hotspots for patients with laptops,” he says.

Today, the 1,600-bed, five-hospital system boasts 100% Wi-Fi coverage. Like Riverside Medical Center, The Methodist Hospital has integrated wireless deep into the clinical system to support medical devices such as IV pumps, portable imaging systems for radiology, physicians’ tablet-based consultations and more. The wireless network has 20,000 to 30,000 touches a day, which has doubled in the past few years, Stansel says.

And if IT has its way, that number will continue to ramp up. Stansel envisions a majority of employees working on the wireless network. He wants to transition back-office personnel to tablet-based docking systems when the devices are more “enterprise-ready” with better security and durability (battery life and the device itself).

Already he has been able to reduce wired capacity by more than half due to the rise of wireless. Patient rooms, which used to have numerous wired outlets, now only require a few for the wired patient phone and some telemetry devices.

When the hospital does a renovation or adds new space, Stansel spends as much time planning the wired plant as he does studying the implications for the Wi-Fi environment, looking at everything from what the walls are made of to possible sources of interference. And when it comes to even the simplest construction, such as moving a wall, he has to deploy a team to retest nearby access points. “Wireless does complicate things because you can’t leave access points static. But it’s such a necessity, we have to do it,” he says.

He also has to reassess his access point strategy on an ongoing basis, adding more or relocating others depending on demand and traffic patterns. “We always have to look at how the access point is interacting with devices. A smartphone connecting to Wi-Fi has different needs than a PC and we have to monitor that,” he says.

The Methodist Hospital takes advantage of a blend of 802.11b, .11g and .11n in the 2.4GHz and 5GHz spectrums. Channel bonding, he has found, poses challenges even for .11n, reducing the number of channels available for others. The higher the density, he says, the less likely he can take full advantage of .11n. He does use n for priority locations such as the ER, imaging, radiology and cardiology, where users require higher bandwidth.

Stansel is betting big that wireless will continue to grow. In fact, he believes that by 2015 it will surpass wired 3-to-1. “There may come a point where wired is unnecessary, but we’re just not there yet,” he says.

Turning on the ac

Stansel is, however, onboard with 802.11ac. The Methodist Hospital is an early adopter of Cisco’s 802.11ac wireless infrastructure. To start, he has targeted the same locations that receive 802.11n priority. If a patient has a cardiac catheterization procedure done, the physician who performed the procedure can interactively review the results with the patient and family while he is still in the recovery room, referencing dye images from a wireless device such as a tablet. Normally, physicians have to verbally brief patients just out of surgery, then do likewise with the family, and wait until later to go over high-definition images from a desktop.

Current wireless technologies have strained to support access to real-time 3D imaging (also referred to as 4D), ultrasounds and more. Stansel expects better performance as 802.11ac is slowly introduced.

Riverside Medical Center’s Devine is more cautious about deploying 802.11ac, saying he is still a bit skeptical. “Can we get broader coverage with fewer access points? Can we get greater range than with 802.11n? That’s what is important to us,” he says.

In the meantime, Devine plans to deploy 20% to 25% more access points to support triangulation for location of equipment. He’ll be able to replace RFID to stop high-value items such as Ascom wireless phones and heart pumps from walking out the door. “RFID is expensive and a whole other network to manage. If we can mimic what it does with Wi-Fi, we can streamline operations,” he says.
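Wi-Fi location of that kind typically rests on a log-distance path-loss model: each access point converts the received signal strength of a device’s transmissions into a distance estimate, and three or more estimates are intersected. A hedged sketch with typical indoor values (not Riverside’s parameters):

    from math import log10

    def distance_m(rssi_dbm, tx_ref_dbm=-40.0, path_loss_exp=3.0):
        """Invert the log-distance model: RSSI = ref - 10*n*log10(d)."""
        return 10 ** ((tx_ref_dbm - rssi_dbm) / (10.0 * path_loss_exp))

    # Three APs hear the same asset tag's beacon at different strengths.
    readings = {"AP-hall-1": -55.0, "AP-room-214": -62.0, "AP-hall-2": -70.0}
    for ap, rssi in readings.items():
        print(f"{ap}: ~{distance_m(rssi):.1f} m")
    # Feeding the three distances to a least-squares trilateration step
    # (not shown) yields an (x, y) position on the floor plan.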

High-power access points currently are mounted in each hallway, but Devine wants to swap those out with low-power ones and put regular-strength access points in every room. If 802.11ac access points prove to be affordable, he’ll consider them, but won’t put off his immediate plans in favor of the technology.

The future of Wi-Fi

Enterprise Strategy Group Senior Analyst John Mazur says that Wi-Fi should be front and center in every IT executive’s plans. BYOD has tripled the number of Wi-Fi connected devices and new access points offer about five times the throughput and twice the range of legacy Wi-Fi access points. In other words, Mazur says, Wi-Fi is up to the bandwidth challenge.

He warns IT leaders not to be scared off by spending projections, which, according to ESG’s 2013 IT Spending Intentions Survey, will be at about 2012 levels and favor cost-cutting (like Devine’s plan to swap out RFID for Wi-Fi) rather than growth initiatives.

But now is the time, he says, to set the stage for 802.11ac, which is due to be ratified in 2014. “IT should require 802.11ac support from their vendors and get a commitment on the upgrade cost and terms before signing a deal. Chances are you won’t need 802.11ac’s additional bandwidth for a few years, but you shouldn’t be forced to do forklift upgrades/replacements of recent access points to get .11ac. It should be a relatively simple module or software upgrade to currently marketed access points.”

While 802.11ac isn’t even fully supported by wireless clients yet, Mazur recommends keeping your eye on the 802.11 sky. Another spec, 802.11ad, which operates in the 60GHz spectrum and is currently geared toward home entertainment connectivity and near-field HD video connectivity, could be — like other consumer Wi-Fi advances — entering the enterprise space sooner rather than later.

Source:  networkworld.com

“Jekyll” test attack sneaks through Apple App Store, wreaks havoc on iOS

Monday, August 19th, 2013

Like a Transformer robot, Apple iOS app re-assembles itself into attacker

Acting like a software version of a Transformer robot, a malware test app sneaked through Apple’s review process disguised as a harmless app, and then re-assembled itself into an aggressive attacker even while running inside the iOS “sandbox” designed to isolate apps and data from each other.

The app, dubbed Jekyll, was helped by Apple’s review process. The malware designers, a research team from Georgia Institute of Technology’s Information Security Center (GTISC), were able to monitor their app during the review: they discovered Apple ran the app for only a few seconds, before ultimately approving it. That wasn’t anywhere near long enough to discover Jekyll’s deceitful nature.

The name is a reference to the 1886 novella by Robert Louis Stevenson, called “The Strange Case of Dr Jekyll and Mr Hyde.” The story is about the two personalities within Dr. Henry Jekyll: one good, but the other, which manifests as Edward Hyde, deeply evil.

Jekyll’s design involves more than simply hiding the offending code under legitimate behaviors. Jekyll was designed to later re-arrange its components to create new functions that couldn’t have been detected by the app review. It also directed Apple’s default Safari browser to reach out for new malware from specific Websites created for that purpose.

“Our research shows that despite running inside the iOS sandbox, a Jekyll-based app can successfully perform many malicious tasks, such as posting tweets, taking photos, sending email and SMS, and even attacking other apps – all without the user’s knowledge,” says Tielei Wang in a July 31 press release by Georgia Tech (http://www.gatech.edu/newsroom/release.html?nid=225501). Wang led the Jekyll development team at GTISC; also part of the team was Long Lu, a Stony Brook University security researcher.

Some blogs and technology sites picked up on the press release in early August. But wider awareness of Jekyll, and its implications, seems to have been sparked by an August 15 online story in the MIT Technology Review, by Dave Talbot, who interviewed Long Lu for a more detailed account.

Jekyll “even provided a way to magnify its effects, because it could direct Safari, Apple’s default browser, to a website with more malware,” Talbot wrote.

A form of Trojan Horse malware, the recreated Jekyll, once downloaded, reaches out to the attack designers for instructions. “The app did a phone-home when it was installed, asking for commands,” Lu explained. “This gave us the ability to generate new behavior of the logic of that app which was nonexistent when it was installed.”

Sandboxing is a fundamental tenet of secure operating systems, intended to insulate apps and their associated data from each other, and to prevent the very attacks and activities that Jekyll was able to carry off. It’s also explicitly used as a technique for detecting malware by running code in a protected space where it can be automatically analyzed for traits indicative of malicious activity. The problem is that attackers are well aware of sandboxing and are working to exploit its existing blind spots. [See “Malware-detecting ‘sandboxing’ technology no silver bullet”]

“The Jekyll app was live for only a few minutes in March, and no innocent victims installed it, Lu says,” according to Talbot’s account. “During that brief time, the researchers installed it on their own Apple devices and attacked themselves, then withdrew the app before it could do real harm.”

“The message we want to deliver is that right now, the Apple review process is mostly doing a static analysis of the app, which we say is not sufficient because dynamically generated logic cannot be very easily seen,” Lu says.

The results of the new attack, described in a paper titled “Jekyll on iOS: when benign apps become evil,” were scheduled to be presented in a talk last Friday at the 22nd Usenix Security Symposium, in Washington, D.C. The full paper is available online. In addition to Wang and Lu, the other co-authors are Kangjie Lu, Simon Chung, and Wenke Lee, all with Georgia Tech.

Apple spokesman Tom Neumayr said that Apple made “some changes to its iOS mobile operating system in response to issues identified in the paper,” according to Talbot. “Neumayr would not comment on the app-review process.”

Oddly, the same July 31 Georgia Tech press release that revealed Jekyll also revealed a second attack vector against iOS devices, via a custom-built hardware device masquerading as a USB charger, through which malware was injected into an iOS device. This exploit, presented at the recent Black Hat Conference, was widely covered (including by Network World’s Layer8 blog), while Jekyll was largely overlooked.

Source:  networkworld.com

RAM wars: RRAM vs. 3D NAND flash, and the winner is … us

Friday, August 9th, 2013

You may soon have a smartphone or tablet with more than a terabyte of high-speed storage

Within a few years, you’ll likely be carrying a smartphone, tablet or laptop with hundreds of gigabytes or even terabytes of hyper-fast, non-volatile memory, thanks to two memory developments unveiled this week.

First, Samsung announced it is now mass producing three-dimensional (3D) Vertical NAND (V-NAND) chips; then start-up Crossbar said it has created a prototype of its resistive random access memory (RRAM) chip.

Three-dimensional NAND takes today’s flash, which is built on a horizontal plane, and turns the memory cells sideways. Then, like microscopic memory skyscrapers, it stacks them side by side to create a vastly denser chip with twice the write performance and 10 times the reliability of today’s 2D, or planar, NAND.

The densest processes for creating the silicon flash memory cells that store data on planar NAND are between 10 nanometers (nm) and 19nm in size. To give some idea of how small that is, a nanometer is one-billionth of a meter — a human hair is 3,000 times thicker than NAND flash made with 25nm process technology. There are about 25.4 million nanometers in an inch.

NAND flash uses transistors or a charge trap (a design known as Charge Trap Flash) to store a bit of data in a silicon cell, while RRAM uses tiny conductive filaments that connect silicon layers to represent a bit of data – a digital one or a zero.

In RRAM, the top layer of silicon nitride creates a conductive electrode, while the lower layer is non-conductive silicon oxide. A positive charge creates a filament connection between the two silicon layers, which represents a one; a negative charge breaks that filament, creating a resistive layer, or a zero.
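To make the set/reset mechanics concrete, here is a minimal toy model in Python of a single resistive cell as described above; it is purely illustrative and does not reflect Crossbar’s actual cell design.

```python
# Toy model of one RRAM cell: a bit is stored by forming (set) or breaking
# (reset) a conductive filament between two layers. Illustrative only --
# not Crossbar's actual design.
class RRAMCell:
    def __init__(self):
        self.filament_formed = False  # no filament: high resistance, reads as 0

    def write(self, bit: int) -> None:
        # A positive charge forms the filament (one); a negative charge
        # breaks it, restoring the resistive state (zero).
        self.filament_formed = bool(bit)

    def read(self) -> int:
        return 1 if self.filament_formed else 0

cell = RRAMCell()
cell.write(1)
assert cell.read() == 1  # filament formed -> conductive -> one
cell.write(0)
assert cell.read() == 0  # filament broken -> resistive -> zero
```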

So, which memory tech wins?

Which of the two memories will dominate the non-volatile memory marketplace in five years isn’t certain, as experts have mixed opinions about how much 3D (or stackable) NAND flash can extend the life of current NAND flash technology. Some say it will grow beyond Samsung’s current 24 layers to more than 100 in the future; others believe it has only two to three generations to go, meaning the technology will hit a wall when it gets to 64 layers or so.

By contrast, RRAM starts out with an advantage. It is denser than NAND, with higher performance and endurance. That means RRAM will be able to use silicon wafers that are half the size used by current NAND flash fabricators. And, best of all, current flash fabrication plants won’t need to change their equipment to make it, according to Crossbar CEO George Minassian.

“It will cost maybe a couple million dollars in engineering costs for plants to introduce it. That’s what it is in our plan,” Minassian said. “It’s about the same cost as introducing a new [NAND flash] node, like going from 65 to 45 nanometer node.”

Crossbar claims its RRAM technology has a 30-nanosecond latency. Samsung’s top-rated flash, the 840 Pro SSD, has a 0.057-millisecond (57,000-nanosecond) latency. A millisecond is one-thousandth of a second; a nanosecond is one-billionth of a second – a million times shorter.
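Putting both figures in the same unit makes the claimed gap easier to judge; a quick sketch using only the numbers quoted above:

```python
# Compare the two quoted latencies in a common unit (nanoseconds).
rram_latency_ns = 30        # Crossbar's claimed RRAM latency
ssd_latency_ms = 0.057      # Samsung 840 Pro SSD latency

ssd_latency_ns = ssd_latency_ms * 1e6    # 1 ms = 1,000,000 ns
print(ssd_latency_ns)                    # ~57,000 ns
print(ssd_latency_ns / rram_latency_ns)  # ~1,900x lower latency for RRAM
```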

According to Minassian, RRAM can natively withstand 10,000 write-erase cycles, which is a little more than typical consumer-grade MLC (multi-level cell) NAND flash can withstand today – and that’s without any error correction code (ECC). ECC is what upgrades today’s MLC NAND flash to enterprise-class flash cards and solid-state drives (SSDs).

In fact, Crossbar expects to see mass production of its RRAM chip in two years. Minassian said his company has already penned an agreement with a flash fabrication plant in the automotive industry to manufacture the chips. He also said a deal with a much larger fab is nearing completion.

Both RRAM and 3D NAND herald an enormous leap in memory performance and storage capacity. Crossbar’s RRAM promises 20 times faster write performance and 10 times the durability of today’s planar NAND flash. Like 3D NAND, RRAM memory chips will be stacked, and a 1TB module will be roughly half the size of a NAND flash module with similar storage, Minassian said.

Three-dimensional NAND offers several times more capacity. With every NAND flash “skyscraper” comes a doubling of capacity. Samsung said its V-NAND will initially boast capacities ranging from 128GB to 1TB in embedded systems and SSDs, “depending on customer demand.” So, Samsung appears to be betting on a manufacturing cost reduction – price per bit – and not a capacity increase to drive V-NAND sales.

Crossbar’s initial RRAM chip will also be capable of storing up to 1TB of data, but it can do that on a chip smaller than a postage stamp; that amounts to 250 hours of hi-def movies stored on about 200 square millimeters of silicon.

When it comes to performance, RRAM brings yet another advantage. A NAND flash chip today has about 7MB/sec write speeds. SSDs and flash cards can achieve 400MB/sec speeds by running multiple chips in parallel.

A RRAM chip boasts 140MB/sec write speeds, and that’s without parallel interconnects to multiple chips, Minassian said.
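Those per-chip figures also line up with the “20 times faster write performance” claim earlier in the article; a back-of-the-envelope check:

```python
# Back-of-the-envelope check on the quoted write speeds.
nand_chip_mb_s = 7      # single NAND flash chip
ssd_mb_s = 400          # reached by running multiple NAND chips in parallel
rram_chip_mb_s = 140    # single RRAM chip, no parallel interconnects

print(ssd_mb_s / nand_chip_mb_s)        # ~57 NAND chips needed in parallel
print(rram_chip_mb_s / nand_chip_mb_s)  # 20.0 -- matches the "20 times
                                        # faster write performance" claim
```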

Both 3D NAND and RRAM’s purported performance gains mean that storage devices will no longer be the system bottlenecks they are now. In the future, the bottleneck will be the bus — the communication layer between computer components. In other words, if NAND flash is a 100 mph car and RRAM is a 200 mph car, it doesn’t matter how fast they can go if the road they’re on has a curve that limits speeds to 50 mph.

On top of performance, RRAM uses a fraction of the power to store data that NAND flash uses, meaning it will help extend battery life “to weeks, months or years,” according to Crossbar.

For example, NAND flash requires about 20 volts of electricity to write a bit of data into a silicon chip. RRAM requires just 4 microamps to write a bit of data.

Crossbar is not alone in its development of RRAM. Both Hewlett-Packard and Panasonic have developed their own versions of resistive memory, but according to Jim Handy, principal analyst at Objective Analysis, Crossbar has a huge leg up on other developers.

“One very big advantage of this technology is that the selection device is built into the cell. In other RRAMs it is not, so something external (a diode or transistor) has to be built in. This is an area that has received a lot of research funding but is still a thorny issue for many other technologies,” he said in an email reply to Computerworld.

Handy said the market for alternatives to NAND flash, such as RRAM, is limited, as flash manufacturers tend to use the cheapest technology they can get away with, even though other technologies offer better performance.

RRAM has the advantage

RRAM isn’t the only memory advance in sight. Alternative forms of non-volatile memory that could be future rivals to NAND and DRAM include Everspin’s magnetoresistive RAM (MRAM) and phase-change memory (PCM), a memory type being pursued by Samsung and Micron. There is also Racetrack Memory, Graphene Memory and Memristor, HP’s own type of RRAM.

Gregory Wong, founder and principal analyst at research firm Forward Insights, believes Crossbar’s RRAM is a viable product that may someday challenge NAND, “and when I say NAND, I mean 3D NAND, too,” he said.

Racetrack memory still has at least five years to go before it is even viable. “Right now, it looks like an interesting concept. Whether it eventually becomes commercialized or not is far out in the future,” Wong said.

“Phase change…, well, there is some out there, but the question is, where does it fit in the memory market? Right now, it’s a NOR replacement,” Wong continued. “Its performance and endurance is like NOR, not NAND.

“Generally, when you look at…others touting RRAM, there’s a lot of skepticism, but when we looked at Crossbar and its technology, we found it interesting,” he said.

Handy also believes memory made with silicon, like Crossbar’s RRAM, will continue to dominate the memory market because fabs are already outfitted to use it and it’s an inexpensive material.

“Silicon will retain its dominance over newer materials for as long as it can, and technologies like Crossbar’s will play a niche role until 3D NAND runs out of steam, which currently looks like it will happen two to three generations after 2D runs out of steam, which is two to three process generations away from where the market is today,” Handy said.

NAND flash process technology has been advancing every 12 months or so. For example, Intel is about to move from a 19nm process node to 14nm. That means 2D NAND may run out of steam in two to three years.

Not everyone agrees 3D NAND has such a limited lifespan.

Gill Lee, senior director and principal member of the technical staff at Applied Materials, believes 3D NAND could grow to more than 100 layers deep. Applied Materials provides the machines for the semiconductor industry to make both NAND flash and RRAM.

“Moving to 3D allows for the NAND technology to continue to scale down. How far can it go? I think it can go quite far,” he said.

Lee said he’s already seen fabrication plant roadmaps that take 3D NAND out to 128 pairs of layers.

The first generation of 3D NAND, 24 layers deep, comes on the heels of sub-20nm-node 2D NAND, but because it is denser, 3D NAND will reduce the cost per bit of manufacturing memory by about 30%, Lee said.

Whether consumers will see NAND flash with greater densities, or fabrication plants will simply continue creating the same memory capacities at lower costs, will be up to the industry, Lee added.

Source:  networkworld.com

AT&T uses small cells to improve service in Disney parks

Tuesday, July 23rd, 2013

AT&T will soon show off how small cell technology can improve network capacity and coverage in Walt Disney theme parks.

If you’re a Disney theme park fan and you happen to be an AT&T wireless customer, here’s some good news: Your wireless coverage within the company’s two main resorts is going to get a heck of a lot better.

AT&T and Disney Parks are announcing an agreement Tuesday that will make AT&T the official wireless provider for Walt Disney World Resort and Disneyland Resort.

What does this mean? As part of the deal, AT&T will be improving service within the Walt Disney World and Disneyland Resorts by adding small cell technology that will chop up AT&T’s existing licensed wireless spectrum and reuse it in smaller chunks to better cover the resorts and add more capacity in high-volume areas. The company will also add free Wi-Fi hotspots, which AT&T customers visiting the resorts will be able to use to offload data traffic.

Specifically, AT&T will add more than 25 distributed antenna systems in an effort to add capacity. It will also add more than 350 small cells, which extend the availability of the network. AT&T is adding 10 new cell sites across the Walt Disney World resort to boost coverage and capacity. And it will add nearly 50 repeaters to help improve coverage of the network.

Chris Hill, AT&T’s senior vice president for advanced solutions, said that AT&T’s efforts to improve coverage in and around Disney resorts are part of a bigger effort the company is making to add capacity and improve coverage in highly trafficked areas. He said that even though AT&T already had decent network coverage within the Disney parks, customers often experienced issues in some buildings or in remote reaches of the resorts.

“The macro cell sites can only cover so much,” he said. “So you need to go to small cells to really get everywhere you need to be and to provide the capacity you need in areas with a high density of people.”

Hill said the idea of creating smaller cell sites that reuse existing licensed spectrum is a big trend among all wireless carriers right now. He said AT&T is deploying this small cell technology in several cities, as well as in other areas where large numbers of people gather, such as stadiums and arenas.

“We are deploying this technology widely across metro areas to increase density of our coverage,” he said. “And it’s not just us. There’s a big wave of small cell deployments where tens of thousands of these access points are being deployed all over the place.”

Cooperation with Disney is a key element in this deployment since the small cell technology requires that AT&T place access points on the Disney property. The footprint of the access points is very small. They typically look like large access points used for Wi-Fi. Hill said they can be easily disguised to fit in with the surroundings.

Unfortunately, wireless customers with service from other carriers won’t see the same level of improved service. The network upgrade and the small cell deployments will only work for AT&T wireless customers. AT&T has no plans to allow other major carriers to use the network for roaming.

Also as part of the deal, AT&T will take over responsibility for Disney’s corporate wireless services, providing services to some 25,000 Disney employees. And the companies have struck various marketing and branding agreements. As part of that aspect of the deal, AT&T will become an official sponsor of Disney-created soccer and runDisney events at the ESPN Wide World of Sports Complex. In addition, Disney will join AT&T in its “It Can Wait” public service campaign, which educates the public about the dangers of texting while driving.

Source:  CNET

Crypto flaw makes millions of smartphones susceptible to hijacking

Tuesday, July 23rd, 2013

New attack targets weakness in at least 500 million smartphone SIM cards.

Millions of smartphones could be remotely commandeered in attacks that allow hackers to clone the secret encryption credentials used to secure payment data and identify individual handsets on carrier networks.

The vulnerabilities reside in at least 500 million subscriber identity module (SIM) cards, which are the tiny computers that store some of a smartphone’s most crucial cryptographic secrets. Karsten Nohl, chief scientist at Security Research Labs in Berlin, told Ars that the defects allow attackers to obtain the encryption key that safeguards the user credentials. Hackers who possess the credentials—including the unique International Mobile Subscriber Identity and the corresponding encryption authentication key—can then create a duplicate SIM that can be used to send and receive text messages, make phone calls to and from the targeted phone, and possibly retrieve mobile payment credentials. The vulnerabilities can be exploited remotely by sending a text message to the phone number of a targeted phone.

“We broke a significant number of SIM cards, and pretty thoroughly at that,” Nohl wrote in an e-mail. “We can remotely infect the card, send SMS from it, redirect calls, exfiltrate call encryption keys, and even hack deeper into the card to steal payment credentials or completely clone the card. All remotely, just based on a phone number.”

Nohl declined to identify the specific manufacturers or SIM models that contain the exploitable weaknesses. The vulnerabilities are in the SIM itself and can be exploited regardless of the particular smartphone they manage.

The cloning technique identified by the research team from Security Research Labs exploits a constellation of vulnerabilities commonly found on many SIMs. One involves the automatic responses some cards generate when they receive invalid commands from a mobile carrier. Another stems from the use of a single Data Encryption Standard key to encrypt and authenticate messages sent between the mobile carrier and individual handsets. A third flaw involves the failure to perform security checks before a SIM installs and runs Java applications.

The flaws allow an attacker to send an invalid command that carriers often issue to handsets to instruct them to install over-the-air (OTA) updates. A targeted phone will respond with an error message that’s signed with the 1970s-era DES cipher. The attacker can then use the response message to retrieve the phone’s 56-bit DES key. Using a pre-computed rainbow table like the one released in 2009 to crack cell phone encryption keys, an attacker can obtain the DES key in about two minutes. From there, the attacker can use the key to send a valid OTA command that installs a Java app that extracts the SIM’s IMSI and authentication key. The secret information is tantamount to the user ID and password used to authenticate a smartphone to a carrier network and associate a particular handset to a specific phone number.
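The weak link is the key length: a 56-bit keyspace is small enough to precompute. A rough sketch of the numbers (the cracking rate below is an assumption for illustration; the actual attack uses the precomputed table rather than a live search):

```python
# Why 56-bit DES falls: the keyspace is tiny by modern standards.
keyspace = 2 ** 56                 # ~7.2e16 possible DES keys
assumed_keys_per_second = 1e12     # assumption: a GPU/FPGA cluster's rate

seconds = keyspace / assumed_keys_per_second
print(keyspace)                    # 72057594037927936
print(seconds / 3600)              # ~20 hours for an exhaustive search
# A rainbow table shifts that work offline, which is how one signed error
# message yields the key in about two minutes.
```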

Armed with this data, an attacker can create a fully functional SIM clone that could allow a second phone under the control of the attacker to connect to the network. People who exploit the weaknesses might also be able to run unauthorized apps on the SIM that redirect SMS and voicemail messages or make unauthorized purchases against a victim’s mobile wallet. It doesn’t appear that attackers could steal contacts, e-mails, or other sensitive information, since SIMs don’t have access to data stored on the phone, Nohl said.

Nohl plans to further describe the attack at next week’s Black Hat security conference in Las Vegas. He estimated that there are about seven billion SIMs in circulation. That suggests the majority of SIMs aren’t vulnerable to the attack. Right now, there isn’t enough information available for users to know if their particular smartphones are susceptible to this technique. This article will be updated if carriers or SIM manufacturers provide specific details about vulnerable cards or mitigation steps that can be followed. In the meantime, Security Research Labs has published this post that gives additional information about the exploit.

Source:  arstechnica.com

Nation’s first campus ‘Super Wi-Fi’ network launches at West Virginia University

Friday, July 19th, 2013

West Virginia University today (July 9) became the first university in the United States to use vacant broadcast TV channels to provide the campus and nearby areas with wireless broadband Internet services.

The university has partnered with AIR.U, the Advanced Internet Regions consortium, to transform the “TV white spaces” frequencies left empty when television stations moved to digital broadcasting into much-needed connectivity for students and the surrounding community.

The initial phase of the network provides free public Wi-Fi access for students and faculty at the Public Rapid Transit platforms, a 73-car tram system that transports more than 15,000 riders daily.

“Not only does the AIR.U deployment improve wireless connectivity for the PRT System, but also demonstrates the real potential of innovation and new technologies to deliver broadband coverage and capacity to rural areas and small towns to drive economic development and quality of life, and to compete with the rest of the world in the knowledge economy,” said WVU Chief Information Officer John Campbell.

“This may well offer a solution for the many West Virginia communities where broadband access continues to be an issue,” Campbell said, “and we are pleased to be able to be a test site for a solution that may benefit thousands of West Virginians.”

Chairman of the Senate Committee on Commerce, Science and Transportation Sen. Jay Rockefeller, said, “As chairman of the Senate Commerce Committee, I have made promoting high-speed Internet deployment throughout West Virginia, and around the nation, a priority. That is why I am excited by today’s announcement of the new innovative wireless broadband initiative on West Virginia University’s campus.

“Wireless broadband is an important part of bringing the economic, educational, and social benefits of broadband to all Americans,” he said.

“My Public Safety Spectrum legislation, which the president signed into law last year, helped to preserve and promote innovative wireless services,” Rockefeller said. “The lessons learned from this pilot project will be important as Congress continues to look for ways to expand broadband access and advance smart spectrum policy.”

Mignon Clyburn, acting chair of the Federal Communications Commission, praised the development, saying, “Innovative deployment of TV white spaces presents an exciting opportunity for underserved rural and low-income urban communities across the country. I commend AIR.U and West Virginia University on launching a unique pilot program that provides campus-wide Wi-Fi services using TV white space devices.

“This pilot will not only demonstrate how TV white space technologies can help bridge the digital divide, but also could offer valuable insights into how best to structure future deployments,” she said.

The network deployment is managed by AIR.U co-founder Declaration Networks Group LLC and represents a collaboration between AIR.U and the WVU Board of Governors; the West Virginia Network for Telecomputing, which provides the fiber optic Internet backhaul for the network; and Adaptrum Inc., a California start-up providing white space equipment designed to operate on vacant TV channels. AIR.U is affiliated with the Open Technology Institute at the New America Foundation, a non-partisan think tank based in Washington, D.C. Microsoft and Google both provided early support for AIR.U’s overall effort to spur innovation to upgrade the broadband available to underserved campuses and their surrounding communities.

“WVNET is proud to partner with AIR.U and WVU on this exciting new wireless broadband opportunity,” WVNET Director Judge Dan O’Hanlon said. “We are very pleased with this early success and look forward to expanding this last-mile wireless solution all across West Virginia.” O’Hanlon also serves as chairman of the West Virginia Broadband Council.

Because the unique propagation characteristics of TV band spectrum enable networks to broadcast Wi-Fi connections over several miles and over hilly and forested terrain, the Federal Communications Commission describes unlicensed access to vacant TV channels as enabling “Super Wi-Fi” services. For example, WVU can add Wi-Fi hotspots in other locations around campus where students congregate or lack connectivity today. Future applications include public Wi-Fi access on the PRT cars and machine-to-machine wireless data links supporting control functions of the PRT System.

AIR.U’s initial deployment, blanketing the WVU campus with Wi-Fi connectivity, demonstrates the equipment capabilities, the system throughput and performance of TV band frequencies to support broadband Internet applications. AIR.U intends to facilitate additional college community and rural broadband deployments in the future.

“The innovative WVU network demonstrates why it is critical that the FCC allows companies and communities to use vacant TV channel spectrum on an unlicensed basis,” said Michael Calabrese, director of the Wireless Future Project at the New America Foundation. “We expect that hundreds of rural and small town colleges and surrounding communities will soon take advantage of this very cost-effective technology to extend fast and affordable broadband connections where they are lacking.”

“Microsoft was built on the idea that technology should be accessible and affordable to everyone, and today access to a broadband connection is becoming increasingly important,” said Paul Mitchell, general manager/technology policy at Microsoft. “White spaces technology and efficient spectrum management have huge potential for expanding affordable broadband access in underserved areas, and we are pleased to be partnering with AIR.U and West Virginia University on this new launch.”

The AIR.U consortium includes organizations that represent over 500 colleges and universities nationwide, and includes the United Negro College Fund, the New England Board of Higher Education, the Corporation for Education Network Initiatives in California, the National Institute for Technology in Liberal Education, and Gig.U, a consortium of 37 major universities.

“We are delighted that AIR.U was born out of the Gig.U effort,” said Blair Levin, executive director of Gig.U and former executive director of the National Broadband Plan. “The communities that are home to our research universities and colleges across the country need next generation speeds to compete in the global economy and we firmly believe this effort can be a model for other communities.”

Founding partners of AIR.U include Microsoft, Google, the Open Technology Institute at the New America Foundation, the Appalachian Regional Commission, and Declaration Networks Group, LLC, a new firm established to plan, deploy and operate Super Wi-Fi networks.

“Super Wi-Fi presents a lower-cost, scalable approach to deliver high capacity wireless networks, and DNG is leading the way for a new broadband alternative to provide sustainable models that can be replicated and extended to towns and cities nationwide,” stated Bob Nichols, CEO of Declaration Networks Group, LLC and AIR.U co-founder.

Source:  wvu.edu

Mobile malware, mainly aimed at Android devices, jumps 614% in a year

Friday, July 12th, 2013

The threat to corporate data continues to grow as Android devices come under attack

The number of mobile malware apps has jumped 614% in the last year, according to studies conducted by McAfee and Juniper Networks.

The Juniper study — its third annual Mobile Threats Report — showed that the majority of attacks are directed at Android devices, as the Android market continues to grow. Malware aimed specifically at Android devices has increased at a staggering rate since 2010, growing from 24% of all mobile malware that year to 92% by March 2013.

According to data from Juniper’s Mobile Threat Center (MTC) research facility, the number of malicious mobile apps jumped 614% in the last year to 276,259, which demonstrates “an exponentially higher cyber criminal interest in exploiting mobile devices.”

“Malware writers are increasingly behaving like profit-motivated businesses when designing new attacks and malware distribution strategies,” Juniper said in a statement. “Attackers are maximizing their return on investment by focusing 92% of all MTC-detected threats at Android, which has a commanding share of the global smartphone market.”

In addition to malicious apps, Juniper Networks found several legitimate free applications that could allow corporate data to leak out. The study found that free mobile apps sampled by the MTC are three times more likely to track location and 2.5 times more likely to access user address books than their paid counterparts. Free applications requesting/gaining access to account information nearly doubled from 5.9% in October 2012 to 10.5% in May 2013.

McAfee’s study found that a type of SMS malware known as a Fake Installer can be used to charge a typical premium rate of $4 per message once installed on a mobile device. A “free” Fake Installer app can cost up to $28 since each one can tell a consumer’s device to send or receive up to seven messages from a premium rate SMS number.
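The arithmetic behind those figures is straightforward (a trivial check using only the report’s numbers):

```python
# Fake Installer economics, per the report's figures.
rate_per_message = 4      # dollars per premium-rate SMS
max_messages = 7          # messages one Fake Installer can trigger

print(rate_per_message * max_messages)  # 28 -- the quoted $28 maximum cost
```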

Seventy-three percent of all known malware involves Fake Installers, according to the report.

“These threats trick people into sending SMS messages to premium-rate numbers set up by attackers,” the report states. “Based on research by the MTC, each successful attack instance can yield approximately $10 in immediate profit. The MTC also found that more sophisticated attackers are developing intricate botnets and targeted attacks capable of disrupting and accessing high-value data on corporate networks.”

Juniper’s report identified more than 500 third-party Android application stores worldwide, most with very low levels of accountability or oversight, that are known to host mobile malware — preying on unsuspecting mobile users as well as those with jail-broken iOS mobile devices. Of the malicious third-party stores identified by the MTC, 60% originate from either China or Russia.

According to market research firm ComScore, Android now has a 52.4% market share worldwide, up 0.7% from February. As Samsung has been taking market share from Apple, Android use is expected to continue to grow, according to ComScore.

According to market analyst firm Canalys, Android represented almost 60% of the mobile devices shipped in 2012. Apple accounted for 19.3% of devices shipped last year, while Microsoft had 18.1%.

Source:  computerworld.com

Google: Critical Android security flaw won’t harm most users

Tuesday, July 9th, 2013

A security flaw could affect 99 percent of Android devices, a researcher claims, but the reality is that most Android users have very little to worry about.

Bluebox, a mobile security firm, billed the exploit as a “Master Key” that could “turn any legitimate application into a malicious Trojan, completely unnoticed by the app store, the phone, or the end user.” In a blog post last week, Bluebox CTO Jeff Forristal wrote that nearly any Android phone released in the last four years is vulnerable.

Bluebox’s claims led to a fair number of scary-sounding headlines, but as Google points out, most Android users are already safe from this security flaw.

Speaking to ZDNet, Google spokeswoman Gina Scigliano said that all apps submitted to the Google Play Store get scanned for the exploit. So far, no apps have even tried to take advantage of the exploit, and they’d be shut out from the store if they did.

If the attack can’t come from apps in the Google Play Store, how could it possibly get onto Android phones? As Forristal explained to Computerworld last week, the exploit could come from third-party app stores, e-mailed attachments, website downloads and direct transfer via USB.

But as any Android enthusiast knows, Android phones can’t install apps through those methods unless the user provides explicit permission through the phone’s settings menu. The option to install apps from outside sources is disabled by default. Even if the option is enabled, phones running Android 4.2 or higher have yet another layer of protection through app verification, which checks non-Google Play apps for malicious code. This verification is enabled by default.

In other words, to actually be vulnerable to this “Master Key,” you must enable the installation of apps from outside Google Play, disable Android’s built-in scanning and somehow stumble upon an app that takes advantage of the exploit. At that point, you must still knowingly go through the installation process yourself. When you consider how many people might go through all those steps, it’s a lot less than 99 percent of users.
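Restating that chain of conditions as code makes clear how narrow the exposure is; a minimal sketch (the function and flag names are hypothetical, for illustration only):

```python
# All of these safeguards must be defeated before the flaw is reachable.
def exposed(unknown_sources_on: bool, verify_apps_off: bool,
            installs_booby_trapped_app: bool) -> bool:
    # Each setting defaults to the safe value, so every condition below
    # must be deliberately flipped by the user.
    return unknown_sources_on and verify_apps_off and installs_booby_trapped_app

print(exposed(False, True, True))   # False: sideloading is off by default
print(exposed(True, False, True))   # False: app verification still scans
print(exposed(True, True, True))    # True: only this combination is at risk
```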

Still, just to be safe, Google has released a patch for the vulnerability, which phone makers can apply in future software updates. Scigliano said Samsung is already pushing the fix to devices, along with other unspecified OEMs. The popular CyanogenMod enthusiast build has also been patched to protect against the peril.

Android’s fragmentation problem does mean that many users won’t get this patch in a timely manner, if at all, but it doesn’t mean that unpatched users are at risk.

None of this invalidates the work that Bluebox has done. Malicious apps have snuck into Google’s app store before, so the fact that a security firm uncovered the exploit first and disclosed it to Google is a good thing. But there’s a big difference between a potential security issue and one that actually affects huge swaths of users. Frightening headlines aside, this flaw is an example of the former.

Source:  techhive.com

‘Master key’ to Android phones uncovered

Friday, July 5th, 2013

A “master key” that could give cyber-thieves unfettered access to almost any Android phone has been discovered by security research firm BlueBox.

The bug could be exploited to let an attacker do what they want to a phone including stealing data, eavesdropping or using it to send junk messages.

The loophole has been present in every version of the Android operating system released since 2009.

Google said it currently had no comment to make on BlueBox’s discovery.

Writing on the BlueBox blog, Jeff Forristal said the implications of the discovery were “huge”.

The bug emerges because of the way Android handles cryptographic verification of the programs installed on the phone.

Android uses the cryptographic signature as a way to check that an app or program is legitimate and to ensure it has not been tampered with. Mr Forristal and his colleagues have found a method of tricking the way Android checks these signatures so malicious changes to apps go unnoticed.
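The article doesn’t spell out the mechanism, but the flaw was widely reported to hinge on APK files (which are ZIP archives) containing two entries with the same name, so the verifier can check one set of bytes while the installer runs another. This Python sketch demonstrates only the ZIP-level ambiguity, not Android’s actual verification code:

```python
# A ZIP archive can hold duplicate entry names; different parsers may then
# disagree about which bytes a given name refers to.
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("classes.dex", b"original, signed bytes")
    z.writestr("classes.dex", b"modified bytes")  # duplicate name
    # (Python warns about the duplicate but permits it.)

with zipfile.ZipFile(buf) as z:
    print(len(z.infolist()))      # 2 -- both entries are in the archive
    print(z.read("classes.dex"))  # a name-based lookup sees only one of them
```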

Any app or program written to exploit the bug would enjoy the same access to a phone that the legitimate version of that application enjoyed.

“It can essentially take over the normal functioning of the phone and control any function thereof,” wrote Mr Forristal. BlueBox reported finding the bug to Google in February. Mr Forristal is planning to reveal more information about the problem at the Black Hat hacker conference being held in August this year.

Marc Rogers, principal security researcher at mobile security firm Lookout, said it had replicated the attack and confirmed its ability to compromise Android apps.

Mr Rogers added that Google had been informed about the bug by Mr Forristal and had added checking systems to its Play store to spot and stop apps that had been tampered with in this way.

The danger from the loophole remains theoretical because, as yet, there is no evidence that it is being exploited by cyber-thieves.

Source:  BBC