Archive for June, 2013

FCC approves Google’s ‘white space’ database operation

Sunday, June 30th, 2013


The database will allow unlicensed TV broadcast spectrum to be used for wireless broadband.

The Federal Communications Commission has approved Google’s plan to operate a database that would allow unlicensed TV broadcast spectrum to be used for wireless broadband and shared among many users.

Google, which was granted commission approval on Friday, is the latest company to complete the FCC’s 45-day testing phase. Spectrum Bridge and Telcordia have already completed their trials, and another 10 companies, including Microsoft, are working on similar databases. The new database will keep track of the TV broadcast frequencies in use so that wireless broadband devices can take advantage of the unlicensed space on the spectrum, also called “white space.”
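In concept, a white-space database is a lookup service: before transmitting, a device reports its location and asks which TV channels are unoccupied there. A minimal sketch of that query model, with made-up channel data (none of these values or names come from Google’s actual database):

```python
# Hypothetical white-space database: maps a coarse location grid cell to
# the set of TV channels already claimed by licensed broadcasters there.
LICENSED_CHANNELS = {
    "grid-41N-74W": {2, 4, 7, 13},      # illustrative values only
    "grid-34N-118W": {2, 5, 9, 11, 28},
}

ALL_TV_CHANNELS = set(range(2, 52))  # U.S. TV channels 2-51

def available_white_space(location):
    """Return the channels a device may use at this location.

    Real databases also account for adjacent-channel rules, wireless
    microphones, and protected contours; this sketch only subtracts
    the licensed channels from the full band.
    """
    in_use = LICENSED_CHANNELS.get(location, set())
    return sorted(ALL_TV_CHANNELS - in_use)

print(available_white_space("grid-41N-74W")[:5])  # first few free channels
```

The key design point is that the device never decides for itself which spectrum is free; it defers to the database, which the FCC requires to be kept current with licensed-use data.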

In the U.S., the FCC has been working to free up spectrum for wireless carriers, which complain they lack adequate available spectrum to keep up with market demand for data services. The FCC approved new rules in 2010 for using unlicensed white space that included establishing databases to track clear frequencies and ensure that devices do not interfere with existing broadcast TV license holders. The databases contain information supplied by the FCC.

However, TV broadcasters have resisted the idea of unlicensed use, worried that allowing others to use white space, which is very close to the frequencies they occupy, could cause interference. What Google and others developing this database technology hope to show is that it is possible to share white space without creating interference.

The Web giant announced in March that it had launched a trial program that would tap white spaces to provide wireless broadband to 10 rural schools in South Africa.

Source:  CNET

Password complexity rules more annoying, less effective than lengthy ones

Friday, June 28th, 2013

Symbol, number, and cap requirements: do not want. Might not need.

Few Internet frustrations are as familiar as the password restriction. After creating a few (dozen) logins for all our Web presences, the required symbols, mixed cases, and numbers seem less like a security measure and more like a torture device when it comes to remembering a complex password on a little-used site. But at least that variety of characters keeps you safe, right? As it turns out, research both confirms how frustrating these restrictions are and suggests that complexity rules may do less for security than simple length requirements.

Let’s preface this with a reminder: the conventional wisdom is that complexity trumps length every time, and this notion is overwhelmingly true. Every security expert will tell you that “Supercalifragilistic” is less secure than “gj7B!!!bhrdc.” Few password creation schemes will render any password uncrackable, but in general, length does less to guard against crackability than complexity.

A password is not immune from cracking simply by virtue of being long—44,991 passwords recovered from a dump of LinkedIn hashes last year were 16 characters or more. The research described below concerns specifically the effect that administrator-imposed construction rules have on crackability. By no means does it suggest that a long password is, by default, more secure than a complex one.

In April, Ars checked in with a few companies that place a range of restrictions on how passwords must be constructed, from Charles Schwab’s 8-character maximum to Evernote’s “use any character but spaces.” Reasons ranged from whether customers can stand typing certain characters with a mobile phone to password-cracking being the last of a company’s concerns compared to phishing or malware.

A pair of studies done in 2011 and 2012 on password length and construction showed two things. First, customer frustration increases significantly with complexity, but less so with length. Second, a number of password-cracking algorithms are more easily thwarted by long passwords created without number, symbol, or case requirements than by shorter passwords that are required to be complex, particularly over a large number of guesses. That is, short-but-complex password restrictions beget passwords that are more frustrating for everyone except the one entity that shouldn’t have them: the password cracker.

The first study in 2011 specifically addressed the problems of usability in password complexity (full disclosure: both studies mentioned in this article were conducted in part by Michelle L. Mazurek, wife of Ars Gaming Editor Kyle Orland). The study authors looked at 12,000 passwords created by participants under a variety of construction methods, including comprehensive8, where passwords must be at least 8 characters and include both an uppercase and lowercase letter, as well as a digit and a symbol, and must not contain dictionary words; basic8, where passwords must be 8 characters with no other restrictions; and basic16, where passwords must be 16 characters with no other restrictions.

Study participants experienced the most difficulty with the comprehensive8 requirements from beginning to end. Only 17.7 percent were able to create a password that met all of the requirements in the first try, compared to well over 50 percent for the rest of the conditions. Twenty-five percent of comprehensive8 testers gave up before they could even make a password that satisfied the requirements, compared to 18.3 percent or less for other conditions. Over 50 percent of comprehensive8 participants stored their password either on paper or electronically, compared to 33 percent for those with the 16-character minimum and less for the rest of the conditions.

Despite the fact that passwords subject to a lot of content requirements are harder to make and harder to remember, their use could be justified if they proved to be significantly more secure than, say, basic8 or basic16 passwords. But contrary to password creation advice external to site-based creation rules, that did not seem to be the case.

Using 12,000 passwords sourced from Mechanical Turk participants, the researchers applied two cracking algorithms to see which types tended to stand up best to attacks. One was based on a Markov model that makes guesses based on character frequency, and the other was developed by another team of researchers and takes “training data” from password and dictionary word lists and then applies mangling rules to the text to form guesses.
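A character-frequency Markov model of the kind described learns, from password corpora, which characters tend to follow which, and tries likely strings first. A stripped-down sketch of the scoring step (the toy training list and candidates are made up; a real cracker enumerates guesses in probability order rather than ranking a fixed list):

```python
import math
from collections import defaultdict

def train_bigrams(corpus):
    """Learn P(next char | current char) from a list of passwords.

    '^' is a synthetic start-of-string symbol."""
    counts = defaultdict(lambda: defaultdict(int))
    for pw in corpus:
        for prev, cur in zip("^" + pw, pw):
            counts[prev][cur] += 1
    return {prev: {c: n / sum(nxt.values()) for c, n in nxt.items()}
            for prev, nxt in counts.items()}

def log_likelihood(model, candidate, floor=1e-6):
    """Score a candidate; unseen transitions get a small floor probability."""
    return sum(math.log(model.get(p, {}).get(c, floor))
               for p, c in zip("^" + candidate, candidate))

# Toy corpus standing in for a real leaked-password training list.
model = train_bigrams(["password", "passport", "passion"])

# A Markov-based cracker tries high-likelihood strings before unlikely ones.
guesses = sorted(["zxqv", "pass", "ssap"],
                 key=lambda g: log_likelihood(model, g), reverse=True)
```

Here `guesses` comes back with "pass" first and "zxqv" last: strings that follow common character-transition patterns get tried early, which is exactly why human-chosen passwords fall faster than their raw length suggests.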

The percent of passwords cracked vs. the number of guesses, using the second, more robust cracking method. (Carnegie Mellon University)

Per the researchers’ tests, the basic 16-character passwords were the hardest to crack for a “powerful attacker.” After 10 billion guesses, only around 12 percent of the 16-character passwords had been cracked, compared to 22 percent of the comprehensive8 passwords and almost 60 percent of the basic8 passwords.

It’s worth noting that the cracking algorithms used in this experiment differ from those Ars detailed in its story on real-world password crackers: one is a modified mask attack, while the other is based on the publicly available Weir algorithm. In either case, results from these cracking methods may differ from what real-world password crackers achieve.

While the study casts doubt on whether complex and short password requirements result in passwords that are more secure than ones that just require length, it did find an interesting effect from the password restrictions. When the researchers compared passwords created under basic8 restrictions that happened to meet comprehensive8 restrictions to passwords actually created under comprehensive8 restrictions, the latter were significantly harder to guess.

Mazurek suggested two reasons to Ars for the apparent resilience of passwords created under length-only restrictions versus short-and-complex ones. One is that there may not be enough good guessing data for long passwords, owing to the dearth of long-password requirements, which she said is true both for her own team and for crackers in the wild. “It won’t remain true long-term if people start requiring (and using) long passwords everywhere,” Mazurek told Ars in an e-mail. The second reason is that “the space of possible passwords is just bigger… so relatively common long passwords are still less common than relatively common short passwords.”
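Mazurek’s second point, that the space of possible passwords is simply bigger, is easy to quantify: a 16-character password drawn only from lowercase letters lives in a far larger space than an 8-character password drawn from the full printable set. A quick back-of-the-envelope check (these are brute-force upper bounds; real attacks exploit human patterns, which is the studies’ whole point):

```python
# Search-space sizes for two password policies.
lowercase_16 = 26 ** 16   # basic16-style: long, lowercase letters only
full_charset_8 = 95 ** 8  # comprehensive8-style: short, all printable ASCII

print(f"{lowercase_16:.2e}")    # roughly 4.4e22 possibilities
print(f"{full_charset_8:.2e}")  # roughly 6.6e15 possibilities

# The "weaker" long policy still has a vastly larger keyspace.
print(lowercase_16 // full_charset_8)  # millions of times larger
```

So even before accounting for guessing-data scarcity, a blind attacker faces millions of times more candidates under the long-and-simple policy than under the short-and-complex one.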

Between the two studies, it’s hard to see why those in charge of setting password rules should ever lower length requirements while raising complexity requirements. If those people want both more security and less frustration for users, the better solution seems to be setting a higher character minimum and leaving all of the other restrictions out.

But from our brief survey of sites, 16 characters seems to be the maximum more often than the minimum, and complexity rules abound. Ironically, Microsoft, which sponsored both of these studies in part, sets its own maximum at 16 characters. If admins are interested in a more secure restriction, a (long) flat length requirement could go further than one that allows short passwords but requires complications.


‘Containerization’ is no BYOD panacea: Gartner

Tuesday, June 25th, 2013

Gartner notes it’s an important IT application development question

Companies adopting BYOD policies are struggling with the thorny problem of how they might separate corporate and personal data on an employee’s device.

One technology approach to this challenge involves separating out the corporate mobile apps and their associated data into “containers” on the mobile device, creating a clear division as to what is subject to corporate security policies such as wiping. But one Gartner analyst delving into the “containerization” subject recently noted that each of the current technology choices has advantages and disadvantages.

“BYOD means my phone, my tablet, my pictures, my music — it’s all about the user,” said analyst Eric Maiwald at the recent Gartner Security and Risk Management Summit.

But if IT security managers want to place controls on the user device to separate out and manage corporate e-mail, applications and data, it’s possible to enforce security measures such as authentication, encryption, data-leakage prevention, cut-and-paste restrictions and selective content wiping through various types of container technologies.

However, the ability of containers to detect “jailbreaking” of Apple iOS devices, which strips out Apple’s security model completely, remains “nearly zero,” Maiwald added. “If you have a rooted device, a container will not protect you.”

There are many choices for container technology. The secure “container” can be embedded in the operating system itself, as with Samsung’s Knox smartphones or the BlackBerry 10, Maiwald noted. Mobile-device management (MDM) vendors such as AirWatch, MobileIron and WatchDox have also taken a stab at containers, though Gartner sees some of what the MDM vendors are doing as more akin to “tags,” used to do things like mark a mailbox or message as corporate.

Companies including Enterproid, Excitor, Fixmo, Good Technology, LRW Technologies, NitroDesk, VMware and Citrix also have approaches to containerization that Gartner regards as possible ways to containerize corporate apps.

But selecting a container vendor is not necessarily simple because what you are doing is making an important IT decision about enterprise development of apps, says Maiwald. “Container vendors provide mechanisms for linking a customized app to the container,” he said. It typically means choosing an API as part of your corporate mobile-device strategy.

For example, Citrix’s containerization software is called XenMobile, and Kurt Roemer, Citrix chief security strategist, says apps must be developed against the Citrix API and SDK to make use of it. Several app developers already do so through what Citrix calls its Worx-enabled program for XenMobile, including Adobe, Cisco, Evernote, Egnyte and Concur. The Citrix containerization approach, which includes an app-specific VPN, lets IT managers perform many kinds of tasks, such as automating SharePoint links to mobile devices for specific apps or easily controlling provisioning of corporate apps on BYOD mobile devices, Roemer says.


Cisco delivers “monster” Catalyst switch in major product refresh

Tuesday, June 25th, 2013

Programmable and optimized for 10/40/100G, the Cisco Catalyst 6800 line does not yet retire the decade-old Cat 6500

Cisco this week will significantly update its enterprise network line-up with programmable campus and branch switches and routers designed to tightly bind applications to network hardware and services.

The new products include the Catalyst 6800 backbone switching line, a new supervisor engine for Cisco’s 4500-E chassis-based access switch, a new high-end ISR branch router and application performance extensions to the ASR 1000 edge router.

Cisco 6800

“Cisco has…delivered a monster Catalyst,” says Bill Carter, senior business communications analyst at value-added reseller Sentinel Technologies in Springfield, Ill. “This gives customers a core switch with 10G/40G/100G with the feature set required in the campus.”

The company, which this week hosts its Cisco Live event in Orlando, says its new products fit within an Enterprise Network Architecture under which applications, network services software and hardware networking functions all work together.

Much of this synergy is facilitated by Cisco’s ONE API framework for programmable networking and associated ASICs optimized for Cisco ONE programmability. Cisco ONE and its onePK API set are Cisco’s response to software-defined networking (SDN), in which many functions of network behavior are divorced from hardware and centrally administered by software controllers.

SDN makes network functions less reliant on specific hardware and operating systems, and more accommodating to commodity switching and open source software. It threatens Cisco’s dominance and fat profits in routers and switches.

Cisco is combating the SDN trend by attempting to tightly link software programmability of network infrastructure to custom-developed ASIC hardware and hardware-specific operating systems, and defending its incumbency and massive installed base. These new products are instantiations of that strategy.

Cisco says it will support onePK across its entire enterprise routing and switching portfolio within the next 12 months, beginning with the ISR 4451-AX and ASR 1000-AX routers announced this week, which will support onePK in late summer/early fall.

The Catalyst 6800 is an outgrowth of the ubiquitous – and 10+ year old – Catalyst 6500. The 6800 is targeted at campus backbone 10/40/100Gbps services. In addition to network programmability, the 6800 is supervisor- and line card-compatible with the 6500, Cisco says, adding that there is still no date set for retiring the 6500.

“I see the Cat 6800 as a natural evolution of the 6500 platform,” says IDC analyst Rohit Mehra. “While scale and performance are going to be important, so will the need for providing agility and deploying programmable platforms. That’s what the 6800 brings to the table with added simplicity, while maintaining operational consistency and continuity with the 6500 product suite.”

Sources say Cisco still has a vibrant roadmap for the Catalyst 6500, including a 10Tbps supervisor engine in the works. Cisco confirmed that a 10T supervisor engine is planned for both the 6500 and 6800 switches. The company would not say when it’s coming.

The 6800 line includes the 6807-XL, the 6880-X and the 6800ia. The 6807-XL is a modular campus backbone switch with a seven-slot, 10RU chassis. It supports up to 880Gbps per slot and 11.4Tbps of switch capacity. It will go head-to-head against HP’s 11Tbps 10500 switch, and Juniper’s EX8200 and EX9200 switches in Virtual Chassis configurations.

By contrast, the Catalyst 6513-E with the Supervisor 2T supports 80Gbps per slot but that bandwidth can be doubled in a Virtual Switching System configuration. The Sup 2T can work in the new 6807-XL chassis, as can 6700, 6800 and 6900 series line cards for the Catalyst 6500-E, Cisco says.

The 6807-XL is optimized for 10/40/100G Ethernet switching, while the 6500-E is optimized for 10G.

The 6880-X is a 3-slot, 4.5RU switch with a fixed supervisor engine – it cannot be changed. It supports up to 80 10G ports or 20 40G ports, and is targeted at mid-market/mid-sized campus deployments. The supervisor sports 16 10G ports, and the switch’s four half slots house optional 10G and 40G line cards.

The Catalyst 6800ia “Instant Access” switch is designed to support automated deployment and provisioning through “one touch” programming, Cisco says. It allows IT departments to virtually consolidate access switches across the campus into one extended switch.

The 6800ia sports 48 Gigabit Ethernet ports and two 10G uplinks. The switch is analogous to Cisco’s FEX fabric extension architecture with the Nexus 7000 data center switching systems, analysts say.

“It does fill out the Cisco 6800 family for enterprise campuses that may require a fixed form factor, adjunct to a broader 6800 deployment, with a common operational and management model,” says IDC’s Mehra. “What Cisco will need to do though, will be to carefully position and differentiate from its (Catalyst 2000 and 3000) platforms to ensure its channels and partners are clear where to deploy each.”

The new Supervisor Engine 8E for the Catalyst 4500-E modular access switch includes Cisco’s new programmable UADP ASIC for wired and wireless convergence, which was unveiled early this year. It is designed to unify wired and wireless policies and management. The 8E works with existing Catalyst 4500-E chassis and line cards, Cisco says.

For large branch deployments, Cisco’s new ISR 4451-AX router features up to 2Gbps forwarding performance with native Cisco WAAS-based WAN optimization, and a “LAN-like experience” at the branch, Cisco says.

Complementing that is the ASR 1000-AX WAN edge router, which integrates Cisco’s Application Visibility and Control and AppNav capabilities with virtual WAAS WAN optimization for providing application control and services on WAN links aggregated from branch sites.

The Cisco ISR 4451-AX is available now with prices starting at $18,000. The ASR 1000-AX and 4500-E Supervisor Engine 8E are scheduled to be available in July, at starting prices of $45,000 and $28,000, respectively.

The Catalyst 6800 switch series is scheduled to be available in November, at a starting price of $40,000.


Cheat sheet: What you need to know about 802.11ac

Friday, June 21st, 2013

Wi-Fi junkies, people addicted to streaming content, and Ethernet-cable haters are excited. There’s a new Wi-Fi protocol in town, and vendors are starting to push products based on the new standard out the door. It seems like a good time to meet 802.11ac, and see what all the excitement’s about.

What is 802.11ac?

802.11ac is a brand new, soon-to-be-ratified wireless networking standard under the IEEE 802.11 protocol. 802.11ac is the latest in a long line of protocols that started in 1999:

  • 802.11b provides up to 11 Mb/s per radio in the 2.4 GHz spectrum (1999).
  • 802.11a provides up to 54 Mb/s per radio in the 5 GHz spectrum (1999).
  • 802.11g provides up to 54 Mb/s per radio in the 2.4 GHz spectrum (2003).
  • 802.11n provides up to 600 Mb/s per radio in the 2.4 GHz and 5.0 GHz spectrum (2009).
  • 802.11ac provides up to 1000 Mb/s (multi-station) or 500 Mb/s (single-station) in the 5.0 GHz spectrum (2013?).

802.11ac is a significant jump in technology and data-carrying capability. The following slide compares the specifications of 802.11n (the current protocol) with the proposed specs for 802.11ac.

(Slide courtesy of Meru Networks)

What is new and improved with 802.11ac?

For those wanting to delve deeper into the inner workings of 802.11ac, this Cisco white paper should satisfy you. For those not so inclined, here’s a short description of each major improvement.

Larger bandwidth channels: Bandwidth channels are part and parcel of spread-spectrum technology. Larger channel sizes are beneficial because they increase the rate at which data passes between two devices. 802.11n supports 20 MHz and 40 MHz channels. 802.11ac supports 20 MHz, 40 MHz, and 80 MHz channels, with optional support for 160 MHz channels.

(Slide courtesy of Cisco)

More spatial streams: Spatial streaming is the magic behind MIMO technology, allowing multiple signals to be transmitted simultaneously from one device using different antennas. 802.11n can handle up to four streams, whereas 802.11ac bumps the number up to eight.

(Slide courtesy of Aruba)

MU-MIMO: Multi-user MIMO allows a single 802.11ac device to transmit independent data streams to multiple different stations at the same time.

(Slide courtesy of Aruba)

Beamforming: Beamforming is now standard. Nanotechnology allows the antennas and controlling circuitry to focus the transmitted RF signal only where it is needed, unlike the omnidirectional antennas people are used to.

(Slide courtesy of Altera)

What’s to like?

It’s been four years since 802.11n was ratified; best guesses have 802.11ac being ratified by the end of 2013. Anticipated improvements are: better software, better radios, better antenna technology, and better packaging.

The improvement that has everyone charged up is the monstrous increase in data throughput. Theoretically, it puts Wi-Fi on par with gigabit wired connections. Even if it doesn’t, tested throughput is leaps and bounds above what 802.11b could muster back in 1999.
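Those headline throughput numbers fall out of standard OFDM rate arithmetic: data subcarriers × bits per symbol × coding rate × spatial streams, divided by symbol time. A sketch using the commonly cited 802.11ac figures (234 data subcarriers in an 80 MHz channel, 256-QAM at 8 bits, rate-5/6 coding, 3.6 µs symbol with short guard interval):

```python
def phy_rate_mbps(data_subcarriers, bits_per_symbol, coding_rate,
                  streams, symbol_time_us):
    """Peak 802.11 OFDM PHY rate in Mb/s (idealized; ignores MAC overhead)."""
    bits_per_ofdm_symbol = data_subcarriers * bits_per_symbol * coding_rate
    return streams * bits_per_ofdm_symbol / symbol_time_us

# One spatial stream, 80 MHz channel, 256-QAM, rate-5/6 coding, short GI:
one_stream = phy_rate_mbps(234, 8, 5 / 6, 1, 3.6)    # about 433 Mb/s
three_stream = phy_rate_mbps(234, 8, 5 / 6, 3, 3.6)  # about 1300 Mb/s
```

The three-stream figure is the 1.3 Gb/s rating seen on first-wave 802.11ac gear, which is why the standard gets compared to gigabit wired links even though real-world throughput lands well below the PHY rate.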

Another improvement that should be of interest is Multi-User MIMO. Before MU-MIMO, 802.11 radios could only talk to one client at a time. With MU-MIMO, two or more conversations can happen concurrently, reducing latency.


India sets up elaborate system to tap phone calls, e-mail

Friday, June 21st, 2013

India has launched a wide-ranging surveillance program that will give its security agencies and even income tax officials the ability to tap directly into e-mails and phone calls without oversight by courts or parliament, several sources said.

The expanded surveillance in the world’s most populous democracy, which the government says will help safeguard national security, has alarmed privacy advocates at a time when allegations of massive U.S. digital snooping beyond American shores have set off a global furor.

“If India doesn’t want to look like an authoritarian regime, it needs to be transparent about who will be authorized to collect data, what data will be collected, how it will be used, and how the right to privacy will be protected,” said Cynthia Wong, an Internet researcher at New York-based Human Rights Watch.

The Central Monitoring System (CMS) was announced in 2011 but there has been no public debate and the government has said little about how it will work or how it will ensure that the system is not abused.

The government started to quietly roll the system out state by state in April this year, according to government officials. Eventually it will be able to target any of India’s 900 million landline and mobile phone subscribers and 120 million Internet users.

Interior ministry spokesman K.S. Dhatwalia said he did not have details of CMS and therefore could not comment on the privacy concerns. A spokeswoman for the telecommunications ministry, which will oversee CMS, did not respond to queries.

Indian officials said making details of the project public would limit its effectiveness as a clandestine intelligence-gathering tool.

“Security of the country is very important. All countries have these surveillance programs,” said a senior telecommunications ministry official, defending the need for a large-scale eavesdropping system like CMS.

“You can see terrorists getting caught, you see crimes being stopped. You need surveillance. This is to protect you and your country,” said the official, who is directly involved in setting up the project. He did not want to be identified because of the sensitivity of the subject.


The new system will allow the government to listen to and tape phone conversations, read e-mails and text messages, monitor posts on Facebook, Twitter or LinkedIn and track searches on Google of selected targets, according to interviews with two other officials involved in setting up the new surveillance program, human rights activists and cyber experts.

In 2012, India sent in 4,750 requests to Google Inc for user data, the highest in the world after the United States.

Security agencies will no longer need to seek a court order for surveillance or depend, as they do now, on Internet or telephone service providers to give them the data, the government officials said.

Government intercept data servers are being built on the premises of private telecommunications firms. These will allow the government to tap into communications at will without telling the service providers, according to the officials and public documents.

The top bureaucrat in the federal interior ministry and his state-level deputies will have the power to approve requests for surveillance of specific phone numbers, e-mails or social media accounts, the government officials said.

While it is not unusual for governments to have equipment at telecommunication companies and service providers, they are usually required to submit warrants or be subject to other forms of independent oversight.

“Bypassing courts is really very dangerous and can be easily misused,” said Pawan Sinha, who teaches human rights at Delhi University. In most countries in Europe and in the United States, security agencies were obliged to seek court approval or had to function with legal oversight, he said.

The senior telecommunications ministry official dismissed suggestions that India’s system could be open to abuse.

“The home secretary has to have some substantial intelligence input to approve any kind of call tapping or call monitoring. He is not going to randomly decide to tape anybody’s phone calls,” he said.

“If at all the government reads your e-mails, or taps your phone, that will be done for a good reason. It is not invading your privacy, it is protecting you and your country,” he said.

The government has arrested people in the past for critical social media posts although there have been no prosecutions.

In 2010, India’s Outlook news magazine accused intelligence officials of tapping telephone calls of several politicians, including a government minister. The accusations were never proven, but led to a political uproar.


“The many abuses of phone tapping make clear that that is not a good way to organize the system of checks and balances,” said Anja Kovacs, a fellow at the New Delhi-based Centre for Internet and Society.

“When similar rules are used for even more extensive monitoring and surveillance, as seems to be the case with CMS, the dangers of abuse and their implications for individuals are even bigger.”

Nine government agencies will be authorized to make intercept requests, including the Central Bureau of Investigation (CBI), India’s elite police agency; the Intelligence Bureau (IB), the domestic spy agency; and the income tax department.

India does not have a formal privacy law and the new surveillance system will operate under the Indian Telegraph Act – a law formulated by the British in 1885 – which gives the government freedom to monitor private conversations.

“We are obligated by law to give access to our networks to every legal enforcement agency,” said Rajan Mathews, director general of the Cellular Operators Association of India.

Telecommunications companies Bharti Airtel, Vodafone’s India unit, Idea Cellular, Tata Communications and state-run MTNL did not respond to requests for comment.

India has a long history of violence by separatist groups and other militants within its borders. More than one third of India’s 670 districts are affected by such violence, according to the South Asia Terrorism Portal.

The government has escalated efforts to monitor the activities of militant groups since a Pakistan-based militant squad rampaged through Mumbai in 2008, killing 166 people. Monitoring of telephones and the Internet are part of the surveillance.

India’s junior minister for information technology, Milind Deora, said the new data collection system would actually improve citizens’ privacy, because telecommunications companies would no longer be directly involved in the surveillance – only government officials would.

“The mobile company will have no knowledge about whose phone conversation is being intercepted”, Deora told a Google Hangout, an online forum, earlier this month.



Ferromagnetics breakthrough could change storage as we know it

Thursday, June 20th, 2013

MIT prof says new method of writing to magnetic media should cut power consumption by a factor of 10,000.

A previously misunderstood magnetic phenomenon has apparently been explained by a paper published on Sunday in Nature Materials – and the explanation could lead to a wholesale transformation in magnetic storage.

Essentially, according to MIT professor Geoffrey Beach’s team, the positive or negative “poles” of a very thin ferromagnet behave in a predictable way when placed next to specific types of materials. What this means is that, due to a complicated asymmetry created when the magnetic medium is the middle layer in a sandwich of two others, it’s possible to switch a value on the disk from 1 to 0 using about one-hundredth the current that today’s systems require. (Since power scales with the square of the current, this represents a 10,000-fold improvement in power dissipation.)
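The arithmetic behind that parenthetical is just Joule heating: dissipated power goes as P = I²R, so cutting the switching current by a factor of k cuts power by k². With the roughly 100-fold reduction described:

```python
def power_reduction_factor(current_factor):
    """Power dissipation scales as P = I^2 * R, so reducing current by
    `current_factor` reduces power by current_factor squared
    (resistance R cancels out of the ratio)."""
    return current_factor ** 2

print(power_reduction_factor(100))  # the 10,000-fold figure quoted above
```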

“The idea is not to improve hard disks, but to replace them with magnetic solid-state devices. In a hard disk, bits are fixed in position on the surface of the disk, and individual bits are accessed by physically rotating the disk,” Professor Beach told Network World. “If the bits are instead stored as a series of magnetic domains arranged along a magnetic nanowire, they can be moved by shifting the domains using an electrical current, without any mechanical motion.”

This not only means increased energy efficiency, but increased speed, since the need for mechanical motion has been obviated. And since it’s non-volatile memory, it could both replace RAM and do away with the need to perform boot sequences when computers are powered on, he said.

We could see these “magnetic solid-state” devices sooner rather than later – according to Beach, the materials involved are the same as those in present-day HDD technology.


Cisco acquires big piece of its plan to ease IT

Thursday, June 20th, 2013

Composite Software virtualizes data from all sources

Cisco this week said it would acquire privately held Composite Software, a provider of data virtualization software and services, for $180 million.

Composite’s software makes data collected from across the network appear as if it’s in one place. This logical representation is intended to speed and improve decision making, Cisco says.
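Data virtualization of this sort is essentially a federated query layer: the data stays in its source systems, and a virtual view joins it on demand. A toy illustration of the idea (the in-memory “sources” and field names are stand-ins, not Composite’s actual interfaces):

```python
# Two stand-in data sources that would normally live on different systems.
crm_source = [
    {"customer_id": 1, "name": "Acme"},
    {"customer_id": 2, "name": "Globex"},
]
orders_source = [
    {"customer_id": 1, "total": 250.0},
    {"customer_id": 1, "total": 75.0},
    {"customer_id": 2, "total": 120.0},
]

def virtual_customer_orders():
    """A 'virtual view': joins the sources at query time, copying nothing.

    The caller sees one logical dataset regardless of where the rows live."""
    names = {row["customer_id"]: row["name"] for row in crm_source}
    for order in orders_source:
        yield {"customer": names[order["customer_id"]],
               "total": order["total"]}

for row in virtual_customer_orders():
    print(row)
```

The design choice worth noting is that nothing is replicated into a warehouse; the view is recomputed against live sources, which is what makes the “one place” appearance possible without moving the data.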

Composite will play a key role in Cisco’s plan to develop an IT simplification platform. This platform appears to also include Cisco’s Unified Computing System (UCS) server and associated components, and technology from another recently acquired company, SolveDirect.

“Cisco’s strategy is to create a next generation IT model that provides highly differentiated solutions to help solve our customers’ most challenging business problems,” said Gary Moore, Cisco president and COO, in a statement. “By combining our network expertise with the performance of Cisco’s Unified Computing System and Composite’s software, we will provide customers with instant access to data analysis for greater business intelligence.”

Cisco also says a combination of Composite and SolveDirect’s process integration platform will provide cross-domain data and workflow integration for real-time business operations.

Composite will join a Cisco services group that is led by both Mala Anand, senior vice president of Cisco Services Platforms Group, and Mike Flannagan, senior director and general manager of the Integration Brokerage Technology Group. The acquisition is expected to close in the first quarter of Cisco’s fiscal year 2014, which closes in late October.


Internet-facing SAP systems suffering increased attacks

Thursday, June 20th, 2013

Hundreds of organizations around the world are running unpatched, Internet-facing versions of SAP software, exposing them to data theft, APTs (advanced persistent threats), and other unpleasantness, according to security analyst Alexander Polyakov, CTO of ERPScan and founder of ZeroNights. Polyakov also said that SAP exploits are part of a thriving underground trade, particularly as organizations in Asian countries are exposing their systems with new SAP deployments.

“You need to do your HR and financials with SAP, so [if it is hacked] it is kind of the end of the business,” Polyakov said at a presentation at RSA Conference Asia Pacific 2013, attended by SC Magazine’s Darren Pauli. “If someone gets access to the SAP, they can steal HR data, financial data, or corporate secrets … or get access to a SCADA system.”

Vulnerabilities in SAP software, combined with the value of the data SAP systems handle, have drawn increased attention from security researchers, per Polyakov: Nearly 60 percent of vulnerabilities found in 2013 were turned up by outsiders. The fact that SAP users are willing to open up interfaces to the Internet, whether for remote employees, connections to remote offices, or remote management, increases the risk.

In his research, Polyakov found more than 4,000 servers hosting publicly facing SAP applications, 700 through Google and 3,471 via Shodan. He found that 35 percent of exposed SAP systems were running NetWeaver version 7 EHP 0, which hasn’t been updated since November 2005. Another 23 percent of the SAP code he found was last updated in April 2010; 19 percent of the installations hadn’t been patched since October 2008.

Polyakov found a comparable percentage of instances of SAP NetWeaver J2EE containing security holes through which attackers can create user accounts, assign roles, execute commands, and wreak other forms of havoc. He also determined that one in 40 organizations was vulnerable to remote exploits via SAP Management Console, while one in 120 organizations was susceptible via vulnerable HostControl, which allows for command injection. One in 20 organizations had a version of the SAP Dispatcher service for client-server communications containing default accounts that could be used to fully compromise SAP systems.
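The services named here listen on conventional, well-documented ports, so a defender can sketch a quick exposure check along the following lines. This is an illustrative sketch, not part of Polyakov's tooling: the host name is a placeholder, and the port numbers assume SAP's default conventions for instance number 00 (e.g. 3200 + instance number for the Dispatcher).

```python
import socket

# Conventional SAP service ports, assuming instance number 00 and
# unchanged defaults (an assumption; real landscapes vary).
SAP_PORTS = {
    3200: "Dispatcher (32NN)",
    3299: "SAProuter",
    3600: "Message Server (36NN)",
    1128: "Host Agent / HostControl (HTTP)",
    50013: "Management Console SOAP (5NN13)",
}

def check_sap_ports(host: str, timeout: float = 2.0) -> dict:
    """Return the subset of well-known SAP ports accepting TCP connections."""
    reachable = {}
    for port, service in SAP_PORTS.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                reachable[port] = service
        except OSError:
            pass  # closed, filtered, or unreachable
    return reachable

# Hypothetical usage: any hit on an Internet-facing host is a service
# that deserves an immediate patch-level and ACL review.
# print(check_sap_ports("sap.example.com"))
```

Any of these services answering from the public Internet is exactly the kind of exposure the Shodan figures above describe.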

The researcher said the top five vulnerabilities for 2012 were as follows:

  1. SAP NetWeaver DilbertMsg servlet SSRF, which enables an attacker to access any files located in the SAP server file system
  2. SAP HostControl command injection, which allows for full code execution as the SAP administrator from an unauthenticated perspective
  3. SAP J2EE file read/write, through which a remote, unauthenticated attacker could compromise a system by exploiting an arbitrary file access vulnerability in the SAP J2EE Core Services
  4. SAP Message Server buffer overflow, which allows remote attackers to execute arbitrary code on vulnerable installations of SAP NetWeaver ABAP without authentication
  5. SAP DIAG buffer overflow, with which an unauthenticated, remote attacker can execute arbitrary code to launch a denial of service attack

Also part of his findings: One in three organizations had SAP routers publicly accessible on the default port. Of the 5,000 exposed routers, 15 percent lacked ACLs (access control lists), 19 percent suffered information-disclosure holes that enable denial-of-service attacks, and five percent were improperly configured, allowing attackers to bypass authentication.

Polyakov will publish the entirety of his research next month. His slide presentation is available on the RSA website.


LTE Advanced is coming, but smartphone users may not care

Thursday, June 20th, 2013

Faster network speeds are better, right?

LTE Advanced is coming soon to the Samsung Galaxy S4 smartphone, offering double the downlink speeds of LTE (Long Term Evolution).

But U.S. carriers still have to upgrade LTE networks to operate with LTE Advanced, and their plans to do so are vague.

Even if U.S. networks were completely LTE Advanced-ready, some analysts question whether buyers would pay much more to upgrade their smartphones for a model with the LTE Advanced speed advantage. There’s unlikely to be the same scramble for LTE Advanced as there was for LTE-ready smartphones such as the iPhone 5, which provide 10Mbps or more on LTE downlinks on average, boosting previous speeds by three times or more over 3G, analysts said.

One analyst is especially skeptical of LTE Advanced’s value. LTE Advanced in a smartphone or tablet “is not important to the user, especially in the U.S. where carriers have been marketing LTE or 4G for years now,” said Carolina Milanesi, an analyst at Gartner. “The novelty has worn off. To tell customers that LTE will be even faster … is nice, but not life changing.”

Jack Gold, an analyst at J. Gold Associates, said consumers don’t understand what LTE Advanced is. “Will users actually see that much improvement? Will they notice anything all that different in their user experience? For most, probably not,” Gold said Tuesday. “Carriers [and manufacturers] are really trying to find advantage to keep the market excited about their networks. Will users buy into it? Remains to be seen.”

JK Shin, co-CEO of Samsung Electronics, said Monday that the Galaxy S4 with LTE Advanced capabilities will go on sale in South Korea in June and with wireless carriers in other countries later on. He said that a movie that takes three minutes to download over existing LTE technology would take just over a minute over LTE Advanced.

Of the four largest U.S. wireless carriers polled on Monday, T-Mobile USA said it was planning on LTE Advanced network support, although it has not announced a schedule.

Verizon said it plans three network improvements that will support LTE Advanced, some of which will be ready and “invisible” to customers later this year.

Sprint said it has already deployed some elements of LTE Advanced, but didn’t elaborate on a schedule or other details. The carrier said LTE Advanced will give customers greater speed, capacity and improvements in video quality, and will help lower Sprint’s costs to keep unlimited data plans.

AT&T is also expected to move to LTE Advanced, but didn’t respond immediately to questions about timing.

The upgrade cost to deliver LTE Advanced is expected to be minor compared to the many billions of dollars it cost to upgrade networks to LTE with new antennas and switches.

Analysts said that while Samsung will introduce the GS4 in South Korea on LTE Advanced networks, the value of LTE Advanced could be far less exciting in the U.S. Smartphones are getting wide acceptance in the U.S., where a majority of Americans own the devices, according to a Pew Research Center survey. Smartphone makers and carriers face the challenge of marketing new features, such as faster network speeds, to lure buyers into trading in their old devices for new ones.

“We are getting to the point where selling innovation is hard,” Milanesi said. “That innovation today is about user experience, convenience and incremental benefits — not transformational ones.”

Such incremental benefits are hard to sell in a store because the features take longer for salespeople to demonstrate, she added. The sales rep might be showing the improvements on “a phone that otherwise will look like all the rest — or even worse — like the previous generation,” Milanesi said.

Gartner and other analyst firms noticed a new trend that started in the fourth quarter of 2012: U.S. smartphone owners were keeping their devices longer, holding them beyond a two-year contract rather than upgrading before the end of the two-year period to get access to newer hardware, software or network speeds.

“Some users might hang onto their smartphones and get a tablet instead,” Milanesi added.

As a result of both lower perceived innovation in new smartphones and the hype to buy tablets, smartphone lifetimes are lengthening, Milanesi said. “That means for a market like the U.S. where we have a replacement market, [sales] growth will decrease,” she said.

At Verizon, LTE Advanced is viewed as an improvement that will be invisible to customers, Verizon spokesman Tom Pica said. Verizon already has rolled out LTE to more than 400 cities, and LTE Advanced will mean customers “continue to find the consistent reliability and high speeds they have come to expect” from Verizon, he said.

Later in 2013, Verizon will deploy small cells and AWS (Advanced Wireless Services) spectrum as part of LTE Advanced capabilities. A third step, involving advanced MIMO (multiple input, multiple output) antennas for devices and cell sites, is in the plans, but no schedule has been announced.

AWS uses two spectrum bands in the 1700MHz and 2100MHz channels to increase network capacity for heavy data users but not necessarily speed. Verizon already sells seven devices that support AWS, including the GS4, the Nokia Lumia 928 and the BlackBerry Q10. Small cells can increase network capacity and network reach.


With faster 5G Wi-Fi coming, Wi-Fi Alliance kicks off certification program

Thursday, June 20th, 2013

Process ensures 802.11ac devices work well with older Wi-Fi products

Although faster fifth-generation Wi-Fi is already available in some new wireless routers and even the new MacBook Air laptops, a new Wi-Fi Certified ac program is being launched today to ensure the newest devices interoperate with other Wi-Fi products.

The Wi-Fi Alliance announced the certification program for 802.11ac Wi-Fi (also known as 5G Wi-Fi). Mobile devices, tablets, laptops, networking gear and other hardware will be available in the last half of 2013 with a Wi-Fi Certified label, ensuring that the devices have been tested to interoperate with other 802.11ac products and older Wi-Fi products.

“The certification program ensures that users can purchase the latest device and not worry if it will work with a device of two years or even 10 years ago,” said Kevin Robinson, senior marketing manager for the Wi-Fi Alliance in an interview.

The faster Wi-Fi offers two to three times the speed of existing 802.11n technology, Robinson said. It will speed up movie downloads and other data-heavy tasks in the home or workplace.

Robinson said that 802.11ac should allow a transfer of an HD movie to a tablet in under four minutes, and allow for multiple video streams inside a home at one time. “The average user will notice the difference,” he said, contrary to what some analysts have predicted.

Theoretical maximum speeds on 802.11ac can reach 1.3 Gbps, three times 802.11n’s speeds of 450 Mbps. Older 802.11g supports theoretical speeds of up to 54 Mbps. Actual speeds will be far lower, depending mainly on the number of users and the type of data being transferred.
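Translating those theoretical rates into transfer times makes the gap concrete. The 4GB movie size below is an assumed figure, and real-world throughput will fall well under the theoretical maxima quoted above:

```python
# Back-of-the-envelope transfer times at the theoretical maxima quoted above.
RATES_MBPS = {"802.11g": 54, "802.11n": 450, "802.11ac": 1300}
MOVIE_BITS = 4 * 8 * 1000**3  # assumed 4GB HD movie, expressed in bits

for standard, mbps in RATES_MBPS.items():
    seconds = MOVIE_BITS / (mbps * 1_000_000)
    print(f"{standard}: {seconds / 60:.1f} min")
```

Even at a realistic fraction of the 1.3Gbps peak, 802.11ac comfortably fits Robinson's under-four-minutes claim for an HD movie transfer.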

Aside from faster speeds, 802.11ac allows for more network capacity so that more devices can be simultaneously connected to a network. Because of that added capacity, Robinson said, movies can be streamed with less compression, enhancing their overall visual quality. Wi-Fi over 802.11ac also reduces network latency, resulting in fewer delays in streaming music and gaming applications.

Wi-Fi Direct, which is technology to allow device-to-device interoperability with 802.11n, is not yet part of the 802.11ac certification program, Robinson said.

The Wi-Fi Alliance predicts that many of the new routers made with 802.11ac will operate on both the 5GHz and 2.4GHz bands. That way, 802.11n traffic will be able to run over both bands, while 802.11ac traffic runs over 5GHz. Robinson said that 2.4GHz will remain sufficient for carrying data for many apps and uses, such as Web browsing. Migrating to 5GHz allows wider spectrum channels with higher data throughputs, yielding higher performance. An advantage of 5GHz is that various channel widths are supported — 20MHz, 40MHz and 80MHz — while 2.4GHz allows only three 20MHz channels.

The Wi-Fi Alliance said 11 chips and other components are being used to test new 802.11ac devices. They are from Broadcom, Intel, Marvell, Mediatek, Qualcomm and Realtek. A list of Wi-Fi Certified ac products is available on the Wi-Fi Alliance website.

As an indication of the fast industry adoption of 802.11ac, Aruba Networks on May 21 announced new Wi-Fi access points supporting the technology and said more recently that the University of Delaware is a beta customer. Aruba is working toward Wi-Fi Certified ac certification of the new access points, a spokeswoman said.

Robinson predicted that many of the recently announced routers and other products will seek Wi-Fi 802.11ac certification.


New attack cracks iPhone autogenerated hotspot passwords in seconds

Thursday, June 20th, 2013

Default password pool so small scientists need just 24 seconds to guess them all.

If you use your iPhone’s mobile hotspot feature on a current device, make sure you override the automatic password it offers to secure your connections. Otherwise, a team of researchers can crack it in less than half a minute by exploiting recently discovered weaknesses.

It turns out Apple’s iOS versions 6 and earlier pick from such a small pool of passwords by default that the researchers—who are from the computer science department of the Friedrich-Alexander University in Erlangen, Germany—need just 24 seconds to run through all the possible combinations. The time required assumes they’re using four AMD Radeon HD 7970 graphics cards to cycle through an optimized list of possible password candidates. It also doesn’t include the amount of time it takes to capture the four-way handshake that’s negotiated each time a wireless-enabled device successfully connects to a WPA2, or Wi-Fi Protected Access 2, device. More often than not, though, the capture can be completed in under a minute. With possession of the underlying hash, an attacker is then free to perform an unlimited number of “offline” password guesses until the right one is tried.

The research has important security implications for anyone who uses their iPhone’s hotspot feature to share the device’s mobile Internet connectivity with other Wi-Fi-enabled gadgets. Adversaries who are within range of the network can exploit the weakness to quickly determine the default pre-shared key that’s supposed to prevent unauthorized people from joining. From there, attackers can leach off the connection, or worse, monitor or even spoof e-mail and other network data as it passes between connected devices and the iPhone acting as the access point.

“Taking our optimizations into consideration, we are now able to show that it is possible for an attacker to reveal a default password of an arbitrary iOS hotspot user within seconds,” the scientists wrote in a recently published research paper. “For that to happen, an attacker only needs to capture a WPA2 authentication handshake and to crack the pre-shared key using our optimized dictionary.”

By reverse engineering key parts of iOS powering iPhones, the researchers discovered that default hotspot passwords always contained a four- to six-letter word followed by a randomly generated four-digit number. All the words were contained in an open-source Scrabble word list available online. By using a single AMD Radeon HD 6990 GPU to append every possible four-digit number to each of the words, the researchers needed only 49 minutes to cycle through all possible combinations. Then they stumbled on a discovery that allowed them to drastically reduce the amount of time required.

The hotspot feature, they found, uses an observable series of programming calls to pick four- to six-letter words from an English-language dictionary included with iOS. By cataloging the default passwords issued after about 250,000 invocations, they determined that only 1,842 different words are selected. The discovery allowed them to drastically reduce the number of guesses needed to find the correct password. As a result, the required search space—that is, the total number of password candidates needed to guess a default password—is a little less than 18.5 million.
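The search space falls straight out of those two numbers, and the cracking time follows from the guess rate reported later in the article:

```python
# 1,842 candidate words, each followed by a 4-digit suffix (0000-9999).
words = 1842
suffixes = 10_000
search_space = words * suffixes
print(search_space)  # 18,420,000 candidates -- "a little less than 18.5 million"

# At the researchers' reported ~390,000 guesses/second on four GPUs:
rate = 390_000
print(f"worst case: {search_space / rate:.0f} s")  # about 47 s to try everything
```

Exhausting the full space takes about 47 seconds; ordering the word list by observed frequency roughly halves the expected number of guesses, which is consistent with the 24-second figure the researchers report.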

They were able to further reduce the time required after noticing that certain words on the reduced list are much more likely than others to be chosen. For instance, “suave,” “subbed,” “headed,” and seven other top-10 words were 10 times more likely to be selected as the base for a default password than others. The optimized list in the attack orders words by their relative frequency, so those most likely to be used are guessed first. Given that a four-GPU system is able to generate about 390,000 guesses each second, it takes about 24 seconds to arrive at the correct guess.

Among the many security features included in the WPA standard is its use of the relatively slow PBKDF2 function to generate hashes. As a result, the number of guesses that the researchers’ four-GPU system is capable of generating each second is measured in the hundreds of thousands, rather than in the millions or billions. The paper—titled “Usability vs. Security: The Everlasting Trade-Off in the Context of Apple iOS Mobile Hotspots”—demonstrates that slow hashing alone isn’t enough to stave off effective password cracks.
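WPA2's key derivation is what imposes that slowness: the pre-shared key is derived from the passphrase with PBKDF2-HMAC-SHA1, salted with the SSID, over 4,096 iterations. A minimal sketch in Python (the passphrase and SSID below are made-up examples in the default iOS style):

```python
import hashlib

def wpa2_psk(passphrase: str, ssid: str) -> bytes:
    """Derive a WPA2 pre-shared key per IEEE 802.11i:
    PBKDF2-HMAC-SHA1, SSID as salt, 4096 iterations, 256-bit output."""
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

# Every candidate password an attacker tries costs 4,096 HMAC-SHA1 rounds,
# which is why even a four-GPU rig manages hundreds of thousands of guesses
# per second rather than billions.
print(wpa2_psk("suave1234", "iPhone").hex())  # hypothetical word+digits default
```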

Also crucial is a selection of passwords that will require attackers to devote large amounts of time or computing resources to exhaust the required search space. Had Apple engineers designed a system that picked long default passwords with upper- and lower-case letters, numbers, and special characters, it could take centuries for crackers to cycle through every possibility. Alas, passwords such as “3(M$j;]fL[ZU%<1T” aren’t easy for most people to use in practical settings. Still, a Wi-Fi password that’s randomly generated—say “MPuUjxRpz0” or even “arNEsISIon”—will require considerably more time and resources to crack than the default passwords currently offered by iOS.
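Comparing the schemes by entropy (log2 of the number of candidates) shows why even a modest random password dwarfs the iOS default pool. The character-set sizes below are assumptions about how such passwords would be generated:

```python
import math

# log2 of each scheme's search space, in bits of entropy.
schemes = {
    "iOS default (1,842 words x 10,000 suffixes)": 1842 * 10_000,
    "10 random alphanumerics ('MPuUjxRpz0' style)": 62 ** 10,
    "16 random printable ASCII characters": 94 ** 16,
}
for name, space in schemes.items():
    print(f"{name}: {math.log2(space):.1f} bits")
```

At roughly 24 bits, the default pool is about 35 bits (a factor of tens of billions) weaker than a 10-character random alphanumeric password.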

Readers who use their iPhone’s hotspot feature should override the default password offering and replace it with something that’s harder to guess. They should also take advantage of the hotspot feature’s ability to monitor how many people are connected to the Wi-Fi network. Those who use hotspot features on other mobile platforms would also do well to carefully monitor the passwords protecting their connections. By default, passwords offered by Microsoft’s Windows Phone 8 consist of only an eight-digit number, according to the researchers, and depending on the carrier, some Android handsets may also generate default passwords that are easy to crack.


IT capital spending rises, but not for PCs

Tuesday, June 18th, 2013

While Windows 8 is getting blamed for dismal PC sales, upgrading laptops and desktop systems isn’t a priority for business users, according to new research.

Businesses are increasing capital spending, but they are directing that money at developing new systems, such as mobile platforms, and upgrading existing ones, according to new data from Computer Economics.

“Very few” organizations are placing a priority on PC upgrades, said John Longwell, vice president of research at Computer Economics, which collected data from more than 200 companies.

The reason PCs are down on priority lists is because “that’s not where the innovation is taking place,” he said.

Computer Economics, in its annual IT Spending and Staffing Benchmarks study, reported that median IT capital spending at businesses and organizations is up 4% this year, compared with 2% last year.

The research firm’s finding on PC spending may help illuminate the broader trend in PC sales, which declined 14% year-over-year, a decline that IDC called “brutal.”

Corporate PC refresh rates vary from organization to organization, and analysts’ estimates cover a broad range. A major factor in determining upgrades is the three-year lease or warranty agreement on systems, but the refresh cycle extends beyond it.

Gartner estimates that for large businesses, the refresh rate is three to four years. For desktop systems, it may be as long as five years.

Mika Kitagawa, a Gartner analyst, said the refresh rate is extending, in part, because the technology is getting more reliable and the failure rates are declining.

David Daoud, an IDC analyst, said the business and consumer refresh rate has been 3.5 to four years, an increase in the replacement time because Windows 7 has been a “good enough” system. Among businesses, the upgrade cycle peaked at about four years but has been falling to three to 3.5 years as users migrate to Windows 7, he said.

Vendors, Hewlett-Packard in particular, are expecting strong PC sales next year because of Microsoft’s plans to end support for Windows XP.

IT operational budgets, which make up about 75% of an IT department’s budget, increased 2.3% this year versus 2.2% in the prior year, Computer Economics reported.

IT hiring was restrained. The research firm said 43% of all organizations are increasing their headcount. Last year, it was about 40%, and in 2011, it was 33%.

“I think we are past the point where organizations have just slammed on the brakes and instituted hiring freezes,” Longwell said. “We are seeing modest hiring, low turnover, and an end to significant layoffs.”

But Longwell characterized the hiring as “selective” and “restrained.”

“IT organizations are also striving to become more cost-efficient, and that means doing more with fewer people. As companies start investing in new technology, that is where you will see job growth,” he said.


Obama wants government to free up more wireless spectrum

Friday, June 14th, 2013

President Barack Obama is directing federal agencies to look for ways to eventually share more of their radio airwaves with the private sector as the growing use of smartphones and tablets ratchets up the demand for spectrum, according to a memo released on Friday.

With blocks of spectrum reserved by dozens of government agencies for national defense, law enforcement, weather forecasting and other purposes, wireless carriers and Internet providers are urging that more spectrum be opened up for commercial use.

The call comes as airwaves are becoming congested with the increase in gadgets and services that are heavily reliant on the ability to transport greater amounts of data.

“Although existing efforts will almost double the amount of spectrum available for wireless broadband, we must make available even more spectrum and create new avenues for wireless innovation,” Obama said in his presidential memo.  “One means of doing so is by allowing and encouraging shared access to spectrum that is currently allocated exclusively for Federal use.”

The memorandum, welcomed and lauded by the telecommunications industry, directs federal agencies to study how exactly they use the airwaves and how to make it easier to share them with the private sector.

The directive also sets up a Spectrum Policy Team that in six months will have to recommend incentives to encourage government agencies to share or give up their spectrum – something industry experts see as a critical step in opening more of the federally used airwaves to the private sector.

“Our traditional three-step process for reallocating federal spectrum — clearing federal users, relocating them and then auctioning the cleared spectrum for new use — is reaching its limits,” Jessica Rosenworcel, a Democratic member of the Federal Communications Commission, said in supporting Obama’s move.

The FCC is now working on rules for the biggest-ever auction of commercially used airwaves, in which TV stations would give up and wireless providers would buy highly attractive spectrum.  The auction is expected to take place in late 2014 or later.

The White House on Friday also released a report showing growth of broadband innovation and access, an area the Obama administration has focused on because it is viewed as a critical tool for economic growth. To further the process, the White House now plans to invest $100 million in spectrum sharing and advanced communications.

Friday’s directive also “strongly encourages” the FCC to develop a program that would spur the creation and sale of radio receivers that would ensure that if spectrum is shared, different users do not interfere with each other.

“The steps taken today lay the groundwork for tomorrow’s broadband future,” said Vonya McCann, senior vice president of government affairs at Sprint Nextel Corp.

Source:  Reuters

iPhones can auto-connect to rogue Wi-Fi networks, researchers warn

Friday, June 14th, 2013

Attackers can exploit behavior to collect passwords and other sensitive data.

Security researchers say they’ve uncovered a weakness in some iPhones that makes it easier to force nearby users to connect to Wi-Fi networks that steal passwords or perform other nefarious deeds.

The weakness is contained in configuration settings installed by AT&T, Vodafone, and more than a dozen other carriers that give the phones voice and Internet services, according to a blog post published Wednesday. Settings for AT&T iPhones, for instance, frequently instruct the devices to automatically connect to a Wi-Fi network called attwifi when the signal becomes available. Carriers make the Wi-Fi signals available in public places as a service to help subscribers get Internet connections that are fast and reliable. Attackers can take advantage of this behavior by setting up their own rogue Wi-Fi networks with the same names and then collecting sensitive data as it passes through their routers.

“The takeaway is clear,” the researchers from mobile phone security provider Skycure wrote. “Setting up such Wi-Fi networks would initiate an automatic attack on nearby customers of the carrier, even if they are using an out-of-the-box iOS device that never connected to any Wi-Fi network.”

The researchers said they tested their hypothesis by setting up several Wi-Fi networks in public areas that used the same SSIDs as official carrier networks. During a test at a restaurant in Tel Aviv, Israel on Tuesday, 60 people connected to an imposter network in the first minute, Adi Sharabani, Skycure’s CEO and cofounder, told Ars in an e-mail. During a presentation on Wednesday at the International Cyber Security Conference, the Skycure researchers set up a network that 448 people connected to during a two-and-a-half-hour period. The researchers didn’t expose people to any attacks during the experiments; they just showed how easy it was for them to connect to networks without knowing they had no affiliation to the carrier.

Sharabani said the settings that cause AT&T iPhones to automatically connect to certain networks can be found in the device’s profile.mobileconfig file. It’s not clear if phones from other carriers also store their configurations in the same location or somewhere else.
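A carrier profile such as profile.mobileconfig is an Apple property list, so its Wi-Fi payloads can be inspected with standard tooling. The sketch below builds a minimal, made-up profile in that shape (Wi-Fi payloads use the PayloadType com.apple.wifi.managed with an SSID_STR key, per Apple's configuration-profile format) and lists the networks the device would join automatically:

```python
import plistlib

# Minimal, illustrative profile with one managed Wi-Fi payload.
profile = plistlib.dumps({
    "PayloadType": "Configuration",
    "PayloadContent": [
        {"PayloadType": "com.apple.wifi.managed",
         "SSID_STR": "attwifi",
         "AutoJoin": True},
    ],
})

def auto_join_ssids(mobileconfig: bytes) -> list:
    """List SSIDs a configuration profile instructs the device to auto-join."""
    plist = plistlib.loads(mobileconfig)
    return [payload["SSID_STR"]
            for payload in plist.get("PayloadContent", [])
            if payload.get("PayloadType") == "com.apple.wifi.managed"
            and payload.get("AutoJoin")]

print(auto_join_ssids(profile))  # ['attwifi']
```

Every SSID such a profile auto-joins is a name an attacker can impersonate, which is precisely the behavior Skycure demonstrated.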

“Moreover, even if you take another iOS device and put an AT&T sim in it, the network will be automatically defined, and you’ll get the same behavior,” he said. He said smartphones running Google’s Android operating system don’t behave the same way.

Once attackers have forced a device to connect to a rogue network, they can run exploit software that bypasses the secure sockets layer Web encryption. From there, attackers can perform man-in-the-middle (MitM) attacks that allow them to observe passwords in transit and even forge links and other content on the websites users are visiting.

The most effective way to prevent iPhones from connecting to networks without the user’s knowledge is to turn off Wi-Fi whenever it’s not needed. Apps are also available that give users control over what SSIDs an iPhone will and won’t connect to. It’s unclear how iPhones running the upcoming iOS 7 will behave. As Ars reported Monday, Apple’s newest OS will support the Wi-Fi Alliance’s Hotspot 2.0 specification, which is designed to allow devices to hop from one Wi-Fi hotspot to another.

Given how easy it is for attackers to abuse Wi-Fi weaknesses, the Skycure research isn’t particularly shocking. Still, the ability of iPhones to connect to networks for the first time without requiring users to take explicit actions could be problematic, said Robert Graham, an independent security researcher who reviewed the Skycure blog post.

“A lot of apps still send stuff in the clear, and other apps don’t check the SSL certificate chain properly, meaning that Wi-Fi MitM is a huge problem,” said Graham, who is CEO of Errata Security. “That your phone comes pre-pwnable without your actions is a bad thing. Devices should come secure by default, not pwnable by default.”
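The client-side defense Graham alludes to is doing TLS validation properly. A minimal sketch of a connection that actually checks the certificate chain and hostname (example.com is a placeholder host):

```python
import socket
import ssl

def open_verified_tls(host: str, port: int = 443) -> ssl.SSLSocket:
    """Open a TLS connection that validates the certificate chain and
    hostname -- the check that, when skipped, makes Wi-Fi MitM pay off."""
    ctx = ssl.create_default_context()   # loads the system CA store
    ctx.check_hostname = True            # explicit, though it is the default
    ctx.verify_mode = ssl.CERT_REQUIRED  # likewise the default
    sock = socket.create_connection((host, port), timeout=10)
    return ctx.wrap_socket(sock, server_hostname=host)

# Against a rogue hotspot presenting a forged certificate, this raises
# ssl.SSLCertVerificationError instead of silently handing over traffic.
# conn = open_verified_tls("example.com")
```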


Medical Devices Hard-Coded Passwords (ICS-ALERT-13-164-01)

Friday, June 14th, 2013


Researchers Billy Rios and Terry McCorkle of Cylance have reported a hard-coded password vulnerability affecting roughly 300 medical devices across approximately 40 vendors. According to their report, the vulnerability could be exploited to potentially change critical settings and/or modify device firmware.

Because of the critical and unique status that medical devices occupy, ICS-CERT has been working in close cooperation with the Food and Drug Administration (FDA) in addressing these issues. ICS-CERT and the FDA have notified the affected vendors of the report and have asked the vendors to confirm the vulnerability and identify specific mitigations. ICS-CERT is issuing this alert to provide early notice of the report and identify baseline mitigations for reducing risks to these and other cybersecurity attacks. ICS-CERT and the FDA will follow up with specific advisories and information as appropriate.

The report included details for the following vulnerability:

Vulnerability type: Hard-coded password
Remotely exploitable: Yes, device dependent
Impact: Critical settings/device firmware modification


The affected devices have hard-coded passwords that can be used to gain privileged access, such as the access that would normally be reserved for a service technician. In some devices, this access could allow critical settings or the device firmware to be modified.

The affected devices are manufactured by a broad range of vendors and fall into a broad range of categories including but not limited to:

  • Surgical and anesthesia devices,
  • Ventilators,
  • Drug infusion pumps,
  • External defibrillators,
  • Patient monitors, and
  • Laboratory and analysis equipment.

ICS-CERT and the FDA are not aware that this vulnerability has been exploited, nor are they aware of any patient injuries resulting from this potential cybersecurity vulnerability.


ICS-CERT is currently coordinating with multiple vendors, the FDA, and the security researchers to identify specific mitigations across all devices. In the interim, ICS-CERT recommends that device manufacturers, healthcare facilities, and users of these devices take proactive measures to minimize the risk of exploitation of this and other vulnerabilities. The FDA has published recommendations and best practices to help prevent unauthorized access or modification to medical devices.

  • Take steps to limit unauthorized device access to trusted users only, particularly for those devices that are life-sustaining or could be directly connected to hospital networks.
    • Appropriate security controls may include: user authentication, for example, user ID and password, smartcard or biometric; strengthening password protection by avoiding hard‑coded passwords and limiting public access to passwords used for technical device access; physical locks; card readers; and guards.
  • Protect individual components from exploitation and develop strategies for active security protection appropriate for the device’s use environment. Such strategies should include timely deployment of routine, validated security patches and methods to restrict software or firmware updates to authenticated code. Note: The FDA typically does not need to review or approve medical device software changes made solely to strengthen cybersecurity.
  • Use design approaches that maintain a device’s critical functionality, even when security has been compromised, known as “fail-safe modes.”
  • Provide methods for retention and recovery after an incident where security has been compromised. Cybersecurity incidents are increasingly likely and manufacturers should consider incident response plans that address the possibility of degraded operation and efficient restoration and recovery.
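One of the recommendations above, restricting firmware updates to authenticated code, can be sketched as follows. This is a simplified illustration using a symmetric HMAC tag and Python's standard library; real devices would typically verify an asymmetric signature from the manufacturer, and every name here is hypothetical.

```python
import hmac
import hashlib

# Placeholder key for illustration; a real key would live in secure storage.
VENDOR_KEY = b"example-shared-secret"

def sign_firmware(image: bytes) -> bytes:
    """Tag a firmware image (what the vendor's build pipeline would do)."""
    return hmac.new(VENDOR_KEY, image, hashlib.sha256).digest()

def apply_update(image: bytes, tag: bytes) -> bool:
    """Install the update only if the tag verifies; reject tampered images."""
    if not hmac.compare_digest(sign_firmware(image), tag):
        return False  # unauthenticated code: refuse the update
    # ... flash the image ...
    return True
```

Note the use of a constant-time comparison (`hmac.compare_digest`) rather than `==`, so the verification step itself does not leak timing information.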

For health care facilities: The FDA is recommending that you take steps to evaluate your network security and protect your hospital system. In evaluating network security, hospitals and health care facilities should consider:

  • Restricting unauthorized access to the network and networked medical devices.
  • Making certain appropriate antivirus software and firewalls are up-to-date.
  • Monitoring network activity for unauthorized use.
  • Protecting individual network components through routine and periodic evaluation, including updating security patches and disabling all unnecessary ports and services.
  • Contacting the specific device manufacturer if you think you may have a cybersecurity problem related to a medical device. If you are unable to determine the manufacturer or cannot contact the manufacturer, the FDA and DHS ICS-CERT may be able to assist in vulnerability reporting and resolution.
  • Developing and evaluating strategies to maintain critical functionality during adverse conditions.

ICS-CERT reminds health care facilities to perform proper impact analysis and risk assessment prior to taking defensive and protective measures.

ICS-CERT also provides a recommended practices section for control systems on the US-CERT Web site. Several recommended practices are available for reading or download, including Improving Industrial Control Systems Cybersecurity with Defense-in-Depth Strategies. Although medical devices are not industrial control systems, many of the recommendations from these documents are applicable.

Organizations that observe any suspected malicious activity should follow their established internal procedures and report their findings to ICS-CERT and FDA for tracking and correlation against other incidents.

The FDA has also issued a safety communication that highlights the points made in this alert.

Source:  US-CERT

China builds fastest supercomputer in the world

Monday, June 10th, 2013


China appears to have once again taken the lead from the United States in the burgeoning supercomputing wars, developing a supercomputer that is nearly twice as fast as anything America has to offer.

The new Tianhe-2 supercomputer, nicknamed the Milkyway-2, was unveiled by China’s National University of Defense Technology (NUDT) during a conference held in late May. University of Tennessee professor Jack Dongarra confirmed this week that the Milkyway-2 operates as fast as 30.7 petaflops — quadrillions of calculations — per second.

Titan, the U.S. Department of Energy’s fastest supercomputer, has been clocked at “just” 17.6 petaflops per second. Dongarra is also a researcher at the Oak Ridge National Laboratory, which houses Titan.

The new Chinese supercomputer will provide an open, high-performance computing service for southwest China when it moves to the Chinese National Supercomputer Center in Guangzhou by the end of this year. NUDT has listed several possible uses for the Milkyway-2, including simulations for testing airplanes, processing “big data,” and aiding in government security.

The Milkyway-2 will have to be officially tested, but its incredible speed will likely place it atop the twice-yearly Top 500 supercomputer list, which is expected to be unveiled during the International Supercomputing Conference next weekend. It would mark the first time since 2010 that China topped the list — then with the Tianhe-1.

The United States only just reclaimed the top spot this past November after coming up short to Japan, China and Germany over the past three years.

The rankings earn more than bragging rights as supercomputers become increasingly important to national security. Titan aids in American research on climate change, biofuels, and nuclear energy. In May, a U.S. House subcommittee held a hearing on supercomputers at which researchers asked Congress to provide funding for “exascale” computers, which could perform one quintillion calculations per second.
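For scale, here is a quick back-of-the-envelope comparison using the figures reported above:

```python
# Units: a petaflop is 10**15 operations per second; an exaflop is 10**18.
PETA = 1e15
EXA = 1e18

tianhe2 = 30.7 * PETA  # Milkyway-2, as confirmed by Jack Dongarra
titan = 17.6 * PETA    # the U.S. DOE's Titan

print(round(tianhe2 / titan, 2))  # ~1.74: Milkyway-2's lead over Titan
print(round(EXA / tianhe2))       # ~33: an exascale machine's lead over Milkyway-2
```

In other words, the exascale systems researchers asked Congress to fund would be roughly 33 times faster than even the new Chinese machine.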

Dongarra noted that even with sustained investment in this technology, the United States could still fall behind global leaders in the supercomputing race.

“Perhaps this is a wake up call,” he said.

Source:  CNN

Obama launches high-speed Internet program for all schools

Monday, June 10th, 2013

More than 80 percent of educators say the Internet connection at their schools is too slow to meet their needs — that’s why the president plans to bring broadband to 99 percent of all students.

In 2011, Loris Elementary School in Loris, S.C., was ranked 41st in the state among grammar schools with similar demographics. By 2012, it had risen to 19th.

What happened? According to the White House: technology.

Many of the students at Loris Elementary School are from low-income families that don’t have the means to give their children all of today’s high-tech devices, according to the Obama administration. That’s why in 2012 the school decided to introduce a technology-based blended learning program, complete with laptops, software, and Internet access. It’s apparently made a difference.

President Barack Obama is convinced that if all schools worked more technology into their curriculum, they would also excel. That’s why he announced on Thursday a new initiative (PDF) to bring high-speed Internet access to 99 percent of all of the country’s K-12 students within the next five years.

“We are living in a digital age, and to help our students get ahead, we must make sure they have access to cutting-edge technology,” Obama said in a statement. “So today, I’m issuing a new challenge for America — one that families, businesses, school districts, and the federal government can rally around together — to connect virtually every student in America’s classrooms to high-speed broadband Internet within five years, and equip them with the tools to make the most of it.”

Dubbed ConnectED, the program aims to get all classrooms equipped with Internet access that has speeds of at least 100Mbps, with a target goal of 1Gbps. The initiative will also provide teachers with training on how to use more technology in their curriculum. ConnectED plans to especially focus on rural schools where Internet access can be sparse.

The majority of schools in the U.S. already have Internet access, but it can be extremely slow. According to the White House, fewer than 20 percent of teachers say their school’s Internet connection is fast enough to meet their needs.

No Congressional action is required for ConnectED to go into effect, but the Federal Communications Commission will have to cooperate by leveraging its E-Rate program and providing more discounts to schools on Internet costs.

Source:  CNET

Espionage malware infects raft of governments, industries around the world

Friday, June 7th, 2013

“NetTraveler” stole data on space exploration, nanotechnology, energy, and more.

Security researchers have blown the whistle on a computer-espionage campaign that over the past eight years has successfully compromised more than 350 high-profile targets in 40 countries.

“NetTraveler,” named after a string included in an early version of the malware, has targeted a number of industries and organizations, according to a blog post published Tuesday by researchers from antivirus provider Kaspersky Lab. Targets include oil industry companies, scientific research centers and institutes, universities, private companies, governments and governmental institutions, embassies, military contractors, and Tibetan/Uyghur activists. Most recently, the group behind NetTraveler has focused its efforts on obtaining data concerning space exploration, nanotechnology, energy production, nuclear power, lasers, medicine, and communications.

“Based on collected intelligence, we estimate the group size to be about 50 individuals, most of which speak Chinese natively and have working knowledge of the English language,” the researchers wrote. “NetTraveler is designed to steal sensitive data as well as log keystrokes and retrieve file system listings and various Office and PDF documents.”

The highest number of infections were found in Mongolia, followed by India and Russia. Other countries with infections include Kazakhstan, Kyrgyzstan, Tajikistan, South Korea, Spain, Germany, the United States, Canada, the United Kingdom, Chile, Morocco, Greece, Belgium, Austria, Ukraine, Lithuania, Belarus, Australia, Hong Kong, Japan, China, Iran, Turkey, Pakistan, Thailand, Qatar, and Jordan. The earliest known samples of the malware are dated to 2005, but there are references that indicate it existed as early as 2004, Kaspersky said. The largest number of observed samples were created from 2010 to 2013.

Six of the NetTraveler victims were also compromised by Red October, the much larger espionage campaign that went undetected for five years. With more than 1,000 distinct modules, the operators were able to craft highly advanced infections that were tailored to the unique configurations of infected machines and the profiles of those who used them.

For a much deeper dive into NetTraveler, see the full Kaspersky report.


56% of American adults are now smartphone owners

Friday, June 7th, 2013

For the first time since the Pew Research Center’s Internet & American Life Project began systematically tracking smartphone adoption, a majority of Americans now own a smartphone of some kind. Our definition of a smartphone owner includes anyone who says “yes” to one—or both—of the following questions:

  • 55% of cell phone owners say that their phone is a smartphone.
  • 58% of cell phone owners say that their phone operates on a smartphone platform common to the U.S. market.

Taken together, 61% of cell owners said yes to at least one of these questions and are classified as smartphone owners. Because 91% of the adult population now owns some kind of cell phone, that means that 56% of all American adults are now smartphone adopters.  One third (35%) have some other kind of cell phone that is not a smartphone, and the remaining 9% of Americans do not own a cell phone at all.
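The arithmetic behind Pew's headline number can be checked directly: the adult smartphone share is the product of overall cell ownership and the smartphone share among cell owners.

```python
# Figures as reported above.
cell_owners = 0.91        # share of U.S. adults who own any cell phone
smartphone_share = 0.61   # share of cell owners classified as smartphone owners

smartphone_adults = cell_owners * smartphone_share  # 0.5551 -> 56%
other_cell = cell_owners - smartphone_adults        # 0.3549 -> 35%
no_phone = 1 - cell_owners                          # 0.09   -> 9%

print(round(smartphone_adults * 100))  # 56
print(round(other_cell * 100))         # 35
print(round(no_phone * 100))           # 9
```

The three groups (smartphone owners, other cell owners, and non-owners) sum to 100% of adults, matching the breakdown in the paragraph above.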


Excerpt from: