Archive for June, 2012

Operators will test Wi-Fi roaming, simplified login in Q4

Tuesday, June 26th, 2012

Wi-Fi Alliance lays the groundwork by starting its Passpoint certification program

The Wi-Fi Alliance has begun certifying products that simplify access to hotspots and roaming between different mobile service operators; providers and equipment vendors are expected to test these products in the fourth quarter, the Wireless Broadband Alliance said Tuesday.

Mobile operators see Wi-Fi as a way to offload traffic from their networks to handle growing data volumes.

While Wi-Fi is already used by operators all over the world, the work currently underway aims to take the use of the technology to the next level. Users will be able to authenticate with a SIM card and move between mobile networks and Wi-Fi hotspots from different providers both at home and abroad.

Leading the charge is the Wi-Fi Alliance. On Tuesday, the organization announced it had started certifying the underlying products as part of its Passpoint program, which is based on technology defined in its Hotspot 2.0 specification.

The certification program is a major milestone for Wi-Fi technology, because there is now finally a single industry-wide solution for seamless access to Wi-Fi-based mobile broadband, according to the Wi-Fi Alliance.

Certified mobile devices can automatically discover and connect to Wi-Fi networks powered by access points that have also been approved.

The first products to be certified include access points, controllers and clients from BelAir Networks (which is owned by Ericsson), Broadcom, Cisco Systems, Intel, Marvell, MediaTek, Qualcomm and Ruckus Wireless.

Next, the Wireless Broadband Alliance (WBA), along with a number of operators, will conduct trials during the fourth quarter to see whether the certified products work as intended.

A first set of trials was organized earlier this year, and the first commercial services are expected to launch during the first half of next year, according to the WBA.

The second round of tests will delve into more complicated features, including operator-to-operator billing procedures for roaming users and how a device chooses a Wi-Fi network when more than one is available, according to Tiago Rodrigues, program director at the WBA.

“Different metrics can be used [when the device chooses which network to connect to]. Probably the most common one will be the quality of the bandwidth. The devices will have the intelligence to make an initial assessment to understand which hotspot can offer the best service,” said Rodrigues.

The technology to make such an assessment is still under development, and the first prototype clients that can perform intelligent network selection will be used during the trial, according to Rodrigues.
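
As a rough illustration of the kind of selection logic Rodrigues describes, the short Python sketch below scores each visible hotspot on an estimated bandwidth figure discounted by how loaded the access point already is. The field names and the weighting rule are hypothetical assumptions for illustration, not anything taken from the Hotspot 2.0 specification.

# Hypothetical sketch of intelligent hotspot selection: score each
# candidate network on estimated bandwidth, discounted by reported load.
from dataclasses import dataclass

@dataclass
class Hotspot:
    ssid: str
    estimated_downlink_mbps: float  # advertised or measured capacity (assumed)
    load_percent: float             # how busy the access point reports itself

def select_hotspot(hotspots):
    """Return the hotspot with the best quality score, or None if none seen."""
    def score(h):
        return h.estimated_downlink_mbps * (1 - h.load_percent / 100)
    return max(hotspots, key=score, default=None)

# Example: the less-loaded hotspot wins despite lower raw bandwidth.
best = select_hotspot([
    Hotspot("OperatorA-Passpoint", 40.0, 80.0),   # 40 * 0.2 = 8
    Hotspot("OperatorB-Passpoint", 25.0, 10.0),   # 25 * 0.9 = 22.5
])
print(best.ssid)  # OperatorB-Passpoint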

Most of the operators are expected to test SIM-based authentication methods, but there will also be interoperability tests of other methods based on EAP (Extensible Authentication Protocol) for devices such as cameras and tablets that don’t have SIM cards, Rodrigues said.

The Q4 trial has several goals: it will allow operators to gain more experience with the technology as they prepare to launch commercial services, and it will also surface features that need to be improved and fine-tuned by organizations such as the Wi-Fi Alliance and the GSM Association, according to Rodrigues.

Thirty-seven operators and 24 vendors will take part in the trial, according to the WBA. The first group includes AT&T, China Mobile, Orange and Time Warner Cable while the vendors list includes companies such as Cisco Systems, Ericsson and Intel.

Source:  computerworld.com

Cybercriminals increasingly use online banking fraud automation techniques

Tuesday, June 26th, 2012

Cybercriminals combine traditional banking malware with server-hosted scripts to automate online bank fraud, researchers say

Cybercriminals attempted to steal at least $75 million from high-balance business and consumer bank accounts by using sophisticated fraud automation techniques that can bypass two-factor authentication, according to a report released on Monday by antivirus firm McAfee and online banking security vendor Guardian Analytics.

The new fraud automation techniques are an advancement over the so-called man-in-the-browser (MitB) attacks performed through online banking malware like Zeus or SpyEye.

Banking malware has long had the ability to inject rogue content such as forms or pop-ups into online banking websites when they are accessed from infected computers. This feature has traditionally been used to collect financial details and log-in credentials from victims that could be abused at a later time.

However, attackers are increasingly combining malware-based Web injection with server-hosted scripts in order to piggyback on active online banking sessions and initiate fraudulent transfers in real time, McAfee and Guardian Analytics researchers said in their report.

The externally hosted scripts called by the malware are designed to work with specific online banking websites and automate the entire fraud process. They can read account balances and transfer predefined sums to money mules — intermediaries — the selection of which is also done automatically by querying a constantly updated database of money mule accounts, the researchers said.

This type of automated attack, which the McAfee and Guardian Analytics researchers collectively call “Operation High Roller,” was first observed in Europe — in Italy, Germany and the Netherlands. However, since March such attacks have also been detected in Latin America and the U.S.

By extrapolating the data gathered from the European attacks, security researchers estimate that cybercriminals attempted to steal between $75 million and $2.5 billion using fraud automation techniques.

Such attacks usually target high-balance accounts owned by businesses or high net-worth individuals, the researchers said. “The United States victims were all companies with commercial accounts with a minimum balance of several million dollars.”

The fraud automation scripts also allow cybercriminals to bypass the two-factor authentication systems that banks have implemented for security purposes.

The malware intercepts the authentication process and captures the one-time password generated by the victim’s bank-issued hardware token and uses it to perform the fraud in the background. Meanwhile, the user is shown a “please wait” message on the screen.

“The defeat of two-factor authentication that uses physical devices is a significant breakthrough for the fraudsters,” the researchers said. “Financial institutions must take this innovation seriously, especially considering that the technique used can be expanded for other forms of physical security devices.”

Source:  computerworld.com

Scientists crack RSA SecurID 800 tokens, steal cryptographic keys

Tuesday, June 26th, 2012

Scientists penetrate hardened security devices in under 15 minutes.

RSA’s SecurID 800 is one of at least five commercially available security devices susceptible to a new attack that extracts cryptographic keys used to log in to sensitive corporate and government networks.

Scientists have devised an attack that takes only minutes to steal the sensitive cryptographic keys stored on a raft of hardened security devices that corporations and government organizations use to access networks, encrypt hard drives, and digitally sign e-mails.

The exploit, described in a paper to be presented at the CRYPTO 2012 conference in August, requires just 13 minutes to extract a secret key from RSA’s SecurID 800, which company marketers hold out as a secure way for employees to store credentials needed to access confidential virtual private networks, corporate domains, and other sensitive environments. The attack also works against other widely used devices, including the electronic identification cards the government of Estonia requires all citizens 15 years or older to carry, as well as tokens made by a variety of other companies.

Security experts have long recognized the risks of storing sensitive keys on general purpose computers and servers, because all it takes is a vulnerability in a single piece of hardware or software for adversaries to extract the credentials. Instead, companies such as RSA; Belcamp, Maryland-based SafeNet; and Amsterdam-based Gemalto recommend the use of special-purpose USB sticks that act as a digital Fort Knox that employees can use to safeguard their credentials. In theory, keys can’t be removed from the devices except during a highly controlled export process, in which they’re sealed in a cryptographic wrapper that is impossible for outsiders to remove.

“They’re designed specifically to deal with the case where somebody gets physical access to it or takes control of a computer that has access to it, and they’re still supposed to hang onto their secrets and be secure,” Matthew Green, a professor specializing in cryptography in the computer science department at Johns Hopkins University, told Ars. “Here, if the malware is very smart, it can actually extract the keys out of the token. That’s why it’s dangerous.” Green has blogged about the attack here.

If devices such as the SecurID 800 are a Fort Knox, the cryptographic wrapper is like an armored car used to protect the digital asset while it’s in transit. The attack works by repeatedly exploiting a tiny weakness in the wrapper until its contents are converted into plaintext. One version of the attack uses an improved variation of a technique introduced in 1998 that works against keys using the RSA cryptographic algorithm. By subtly modifying the ciphertext thousands of times and putting each modified version through the import process, an attacker can gradually reveal the underlying plaintext, as Daniel Bleichenbacher, the scientist behind the original exploit, discovered. Because the technique relies on “padding” inside the cryptographic envelope to produce clues about its contents, cryptographers call it a “padding oracle attack.” Such attacks rely on so-called side channels to see whether a given ciphertext corresponds to a correctly padded plaintext in a targeted system.

It’s this version of the attack the scientists used to extract secret keys stored on RSA’s SecurID 800 and many other devices that use PKCS#11, a programming interface included in a wide variety of commercial cryptographic devices. Under the attack Bleichenbacher devised, it took attackers about 215,000 oracle calls on average to pierce a 1024-bit cryptographic wrapper. That required enough overhead to prevent the attack from posing a practical threat against such devices. By modifying the algorithm used in the original attack, the revised method reduced the number of calls to just 9,400, requiring only about 13 minutes of queries, Green said.
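
For readers curious about the mechanics, the Python sketch below shows only the core trick behind a Bleichenbacher-style padding oracle attack: RSA ciphertexts are malleable, so an attacker can submit modified ciphertexts and learn about the hidden plaintext from whether the device accepts the padding on import. The oracle callback is hypothetical, and the full interval-narrowing algorithm described in the paper is omitted.

# Core step of a Bleichenbacher-style padding oracle attack (sketch only).
# Multiplying the ciphertext by s^e mod n makes it decrypt to s*m mod n;
# the device's accept/reject answer on import leaks whether that value is
# PKCS#1 v1.5 conformant. `oracle` is a hypothetical callback.

def malleate(c, s, e, n):
    # c' decrypts to (s * m) mod n without the attacker ever seeing m.
    return (c * pow(s, e, n)) % n

def find_conformant_multiplier(c, e, n, oracle, start_s=2):
    """Search for the first multiplier s whose malleated ciphertext the
    oracle accepts; a real attack repeats this to narrow in on m."""
    s, queries = start_s, 0
    while not oracle(malleate(c, s, e, n)):
        s += 1
        queries += 1
    return s, queries

At the reported rate of roughly 9,400 oracle calls in about 13 minutes, each query to the token works out to something on the order of 80 milliseconds.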

Other devices that store RSA keys and are vulnerable to the same attack include the Aladdin eTokenPro and iKey 2032 made by SafeNet, the CyberFlex manufactured by Gemalto, and Siemens’ CardOS, according to the paper.

The researchers also used refinements of an attack introduced in 2002 by Serge Vaudenay that exploits weaknesses in what is known as CBC padding to extract symmetric keys.

The CRYPTO 2012 paper is the latest research to demonstrate serious weaknesses in devices that large numbers of organizations rely on to secure digital certificates. In 2008, a team of hardware engineers and cryptographers cracked the encryption in the Mifare Classic, a wireless card used by transit operators and other organizations in the public and private sectors to control physical access to buildings. Netherlands-based manufacturer NXP Semiconductors said at the time it had sold 1 billion to 2 billion of the devices. Since then, crypto in a steady stream of other devices, including the Keeloq security system and the MiFare DESFire MF3ICD40, has also been seriously compromised.

The latest research comes after RSA warned last year that the effectiveness of the SecurID system its customers use to secure corporate and governmental networks was compromised after hackers broke into RSA networks and stole confidential information concerning the two-factor authentication product. Not long after that, military contractor Lockheed Martin revealed a breach it said was aided by the theft of that confidential RSA data. There’s nothing in the new paper that suggests the attack works on SecurID devices other than the 800 model.

RSA didn’t return e-mails seeking comment for this article. According to the researchers, RSA officials are aware of the attacks first described by Bleichenbacher and are planning a fix. SafeNet and Siemens are also in the process of fixing the flaws, they said. The researchers also reported that Estonian officials have said the attack is too slow to be practical.

Source:  arstechnica.com

Data center fabrics catching on, slowly

Monday, June 25th, 2012

Early adopters say the expense and time spent to revamp a data center’s switching gear are well worth it; benefits include killer bandwidth and more flexibility

When Government Employees Health Association (GEHA) overhauled its data center to implement a fabric infrastructure, the process was “really straightforward,” unlike that for many IT projects, says Brenden Bryan, senior manager of enterprise architecture. “We haven’t had any ‘gotchas’ or heartburn, with me looking back and saying ‘I wish I made that decision differently.'”

GEHA, based in Kansas City, Mo., and the nation’s second largest health plan and dental plan, processes claims for more than a million federal employees, retirees and their families. The main motivator behind switching to a fabric, Bryan says, was to simplify and consolidate and move away from a legacy Fibre Channel SAN environment.

When he started working at GEHA in August 2010, Bryan says he inherited an infrastructure that was fairly typical: a patchwork of components from different vendors with multiple points of failure. The association also wanted to virtualize its mainframe environment and turn it into a distributed architecture. “We needed an infrastructure in place that was redundant and highly available,” explains Bryan. Once the new infrastructure was in place and stable, the plan was to then move all of GEHA’s Tier 2 and Tier 3 apps to it and then, lastly, move the Tier 1 claims processing system.

GEHA deployed Ethernet switches and routers from Brocade, and now, more than a year after the six-month project was completed, Bryan says the association has a high-speed environment and a 20-to-1 ratio of virtual machines to blade hardware.

“I can keep the number of physical servers I have to buy to a minimum and get more utilization out of them,” says Bryan. “It enables me to drive the efficiencies out of my storage as well as my computing.”

Implementing a data center fabric does require some planning, however. It means having to upgrade and replace old switches with new switching gear because of the different traffic configuration used in fabrics, explains Zeus Kerravala, principal analyst at ZK Research. “Then you have to re-architect your network and reconnect servers.”

Moving flat and forward
A data center fabric is a flatter, simpler network that’s optimized for horizontal traffic flows, compared with traditional networks, which are designed more for client/server setups that send traffic from the server to the core of the network and back out, Kerravala explains.

In a fabric model, traffic moves horizontally across the network, between virtual machines, “so it’s more a concept of server-to-server connectivity.” Fabrics are flatter and have no more than two tiers, versus legacy networks, which have three or more tiers, he says. Storage networks have been designed this way for years, says Kerravala, and now data networks need to migrate the same way.
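
A toy way to see the difference: in a classic three-tier design, east-west traffic climbs from an access switch up through aggregation and core layers and back down, while a two-tier leaf/spine fabric needs only one intermediate hop. The Python snippet below just counts idealized switch hops; it is an illustration of the topology argument, not a model of any vendor’s fabric.

# Idealized switch counts for server-to-server (east-west) traffic.
three_tier_path = ["access-A", "aggregation-A", "core", "aggregation-B", "access-B"]
two_tier_path = ["leaf-A", "spine", "leaf-B"]

print("three-tier switches traversed:", len(three_tier_path))  # 5
print("two-tier switches traversed:  ", len(two_tier_path))    # 3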

One factor driving the move to fabrics is that about half of all enterprise data center workloads in Fortune 2000 companies are virtualized, and when companies get to that point, they start seeing the need to reconfigure how their servers communicate with one another and with the network.

“We look at it as an evolution in the architectural landscape of the data center network,” says Bob Laliberte, senior analyst at Enterprise Strategy Group. “What’s driving this is more server-to-server connectivity … there are all these different pieces that need to talk to each other and go out to the core and back to communicate, and that adds a lot of processing and latency.”

Virtualization adds another layer of complexity, he says, because it means dynamically moving things around, “so network vendors have been striving to simplify these complex environments.”

When data centers can’t scale
As home foreclosures spiked in 2006, Walz Group, which handles document management, fulfillment and regulatory compliance services across multiple industries, found its data center couldn’t scale effectively to take on the additional growth required to serve its clients. “IT was impeding the business growth,” says Chief Information Security Officer Bart Falzarano.

The company hired additional in-house IT personnel to deal with disparate systems and management, as well as build new servers, extend the network and add disaster recovery services, says Falzarano. “But it was difficult to manage the technology footprint, especially as we tried to move to a virtual environment,” he says. The company also had some applications that couldn’t be virtualized that would have to be managed differently. “There were different touch points in systems, storage and network. We were becoming counterproductive.”

To reduce the complexity, in 2009 Walz Group deployed Cisco’s Unified Data Center platform, a unified data center fabric architecture that combines compute, storage, network and management into a platform designed to automate IT as a service, across physical and virtual environments. The platform is connected to a NetApp SAN Storage Flexpod platform.

Previously, when they were using HP technology, Falzarano recalls, one of their database nodes went down, which required getting the vendor on the phone and eventually taking out three of the four CPUs and going through a troubleshooting process that took four hours. By the time they got the part they needed, installed it and returned to normal operations, 14 hours had passed, says Falzarano.

“Now, for the same [type of failure], if we get a degraded blade server node, we un-associate that SQL application and re-associate the SQL app in about four minutes. And you can do the same for a hypervisor,” he says.

IT has been tracking data center performance and benchmarking some key metrics, and Falzarano reports that the team immediately saw a port-density reduction of 8 to 1, meaning less cabling complexity and fewer required cables. Where IT previously saw a low virtualization efficiency of 4 to 1 with the earlier technology, Falzarano says that’s now greater than 15 to 1, and the team can virtualize apps that it couldn’t before.

Other findings include a rack reduction of greater than 50 percent due to the amount of virtualization the IT team was able to achieve; more centralized systems management — now one IT engineer handles 50 systems — and a marked improvement in what Falzarano refers to as “system mean time before failure.”

“We were experiencing a large amount of hardware failures with our past technology; one to two failures every 30 days across our multiple data centers. Now we are experiencing less than one failure per year,” he says.

Easy to implement
Like the IT executives at Walz Group, IT team leaders at GEHA believed that deploying a fabric model would not only meet the business requirements, but also reduce complexity, cost and staff needed to manage the data center. Bryan says the association also gained economies of scale by having a staff of two people who can manage an all-Ethernet environment, as opposed to needing additional personnel who are familiar with Fibre Channel.

“We didn’t have anyone on our team who was an expert in Fibre Channel, and the only way to achieve getting the claims processing system to be redundant and highly available was to leverage the Ethernet fabric expertise, which we had on staff,” he says.

Bryan says the association has been able to trim “probably a half million dollars of capital off the budget” since it didn’t have to purchase any Fibre Channel switching, and a quarter of a million dollars in operating expenses since it didn’t need staff to manage Fibre Channel. “Since collapsing everything to an Ethernet fabric, I was able to eliminate a whole stack of equipment,” says Bryan.

GEHA used a local managed services provider to help with setting up some of the more complex pieces of the architecture. “But from the time we unpacked the boxes to the time the environment was running was two days,” says Bryan. “It was very straightforward.”

And the performance, he adds, is “jaw-dropping.” In one test, copying a 4-gigabyte ISO file from one blade to another through the network, with network and storage traffic sharing the same fabric, took less than a second, “and we didn’t even see the transfer; I didn’t think it actually copied,” he says.

IT now also uses the fabric for its backup environment, with software from CommVault. Bryan says the association is seeing throughput of about a terabyte an hour on the network, “which is probably eight to 10 times greater than before” the fabric was in place.

Today, all of GEHA’s production traffic is on the fabric, and Bryan says he couldn’t be more pleased with the infrastructure. Scaling it out is not an issue, he says, and that scalability, along with speed, is one of the major advantages of a converged fabric. GEHA is also able to run a very dense workload of virtual machines on a single blade, he says. “Instead of having to spend a lot of money on a lot of blades, you can increase the ROI on those blades without sacrificing performance,” says Bryan.

Laliberte says he sees a long life ahead for data center fabrics, noting that this type of architecture “is just getting started. If you think about complexity and size, and you have thousands of servers in your environment and thousands of switches, any kind of architecture change isn’t done lightly and takes time to evolve.”

Just as it took time for the three-tier architecture to evolve, it will take time for three tiers to be collapsed into two, he says, adding that a flat fabric is the next logical step. “These things get announced and are available, but it still takes years to get widespread deployments,” says Laliberte.

Case study: Fabrics at work
When he used to look around his data center, all Dan Shipley would see was “a spaghetti mess” of cables and switches that were expensive to manage and error-prone. Shipley, architect at $600 million Supplies Network, a St. Louis-based wholesaler of office products, says the company had all the typical issues associated with a traditional infrastructure: some 300 servers that consumed a lot of power, took up a lot of space and experienced downtime due to hardware maintenance.

“We’re primarily an HP shop, and we had contracts on all those servers, which were from different generations, so if you lose a motherboard from one model, they’d overnight it and it was a big pain,” Shipley says. “So we said, ‘Look, we’ve got to get away from this. Virtualization is ready for prime time, and we need to get out of this traditional game.'”

Today, what Supplies Network has built in its data center is about as far from traditional as it gets. Rather than deploying Ethernet and Fibre Channel switches, the company turned to I/O Director from Xsigo, which sits on top of a rack of servers and directs traffic. All of the servers in that rack are plugged into the box, which dynamically establishes connectivity to all other data center resources. Unlike other data center fabrics, I/O Director uses InfiniBand, an open, standards-based switched-fabric interconnect widely used in high-performance computing.

“On all your servers you get rid of all those cables and Ethernet and Fibre switches and connect with one InfiniBand cable or two, for redundancy, which is what we did,” says Shipley. The cables are plugged into I/O Director. “You say ‘On servers one through 10, I want to connect all of those to this external Fibre Channel storage’ and it creates a virtual Fibre Channel storage network. So in reality, this is all running across InfiniBand, but the server … thinks it’s still connecting via Fibre Channel.”

The configuration means they now only have two cables instead of several, “and we have a ton of bandwidth.”

Supplies Network is fully virtualized, and has seen its data center shrink from about 20 racks to about four, Shipley says. Power consumption and cooling have also been reduced.

Shipley says he likes that InfiniBand has been used in the supercomputer world for a decade, and is low-cost and open, whereas other vendors “are so invested in Ethernet, they don’t want to see InfiniBand win.” Today, I/O Director runs at 56 gigabits per second, compared with the fastest Ethernet connection, which is 10 gigabits per second, he says.

In terms of cost, Shipley says a single port 10-gigabit Ethernet card is probably around $600, and an Ethernet switch port is needed on the other side, which runs approximately $1,000 per port. “So for each Ethernet connection, you’re looking at $1,600 for each one.” A 40-gigabit, single-port InfiniBand adapter is probably about $450 to $500, he says, and a 36-port InfiniBand switch box is $6,000, which works out to $167 per port.
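
Taking Shipley’s figures at face value, the per-connection arithmetic works out roughly as follows; this quick Python sketch uses his quoted estimates, not list prices, and the InfiniBand total is simply derived from them.

# Per-connection cost comparison using the figures quoted above.
ethernet_nic = 600                 # single-port 10GbE card
ethernet_switch_port = 1000        # approximate Ethernet switch port cost
ethernet_per_connection = ethernet_nic + ethernet_switch_port      # $1,600

ib_adapter = 475                   # midpoint of the $450-$500 estimate
ib_switch_port = 6000 / 36         # ~$167 per port on a 36-port switch
ib_per_connection = ib_adapter + ib_switch_port                    # ~$642

print(ethernet_per_connection, round(ib_per_connection))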

Shipley says the company has now gotten rid of all of its core Ethernet switches in favor of InfiniBand.

“I was afraid at first because … I didn’t know much about InfiniBand,” he acknowledges, and most enterprise architectures run on Fibre Channel and Ethernet. “We brought [I/O Director] out here and did a bake-off with Cisco’s [Unified Data Center]. It whooped their butt. It was way less cost, way faster, it was simple and easy to use and Xsigo’s support has been fabulous,” he says.

Previously, big database jobs would take 12 hours, Shipley says. Since the deployment of I/O Director, those same jobs run in less than three hours. Migrating a virtual machine from one host to another now takes seconds, as opposed to minutes, he says.

He says he was initially concerned that because Xsigo is a much smaller vendor, it might not be around over the long term. But, says Shipley, “we found out VMware uses these guys.”

“What Xsigo is saying is, instead of having to use Ethernet and Fibre Channel, you can take all those out and put [their product] in and it creates a fabric,” explains Bob Laliberte, senior analyst at Enterprise Strategy Group. “They’re right, but when you’re talking about data center networking and data center fabrics, Xsigo is helping to create two tiers. But the Junipers and Ciscos and Brocades are trying to create that flat fabric.”

InfiniBand is a great protocol, Laliberte adds, but cautions that it’s not necessarily becoming more widely used. “It’s still primarily in the realm of supercomputing sites that need ultra-fast computing.”

Source:  infoworld.com

The U.N. vs. the Internet: The fight escalates

Monday, June 25th, 2012

The House is expected to advance a resolution condemning efforts by the U.N. to insert itself into governance of the Internet.  But some interpret leaked documents as suggesting that the U.S. is being far too tepid in its responses to a key international communications treaty being negotiated in secret.

The House Energy and Commerce Committee is expected this week to approve a resolution (PDF) strongly critical of growing efforts to transfer key aspects of Internet governance to the International Telecommunications Union, an agency of the United Nations.

The resolution was introduced by Rep. Mary Bono Mack (R-Calif.) as part of a hearing last month on the upcoming World Conference on International Telecommunications, which will convene in Dubai late this year to rewrite an international treaty on communications overseen by the ITU.

The WCIT process is secret, but proposals drafted by the 193 ITU member nations and nonvoting affiliate organizations have begun leaking out. CNET was first to report earlier this month on a proposal by the European Telecommunications Network Operators association that would, if made part of the new treaty, impose a “sending party” tax on content providers, upending longstanding principles of Internet architecture.

That leaked document was posted to the Web site WCITLeaks, which has since published more proposed changes to the treaty.

Since the House hearing, tensions over WCIT have been rising at home and abroad. Over the weekend, WCITLeaks posted a key planning document (PDF) summarizing several radical proposals currently circulating in advance of the conference.

These include efforts, such as ETNO’s, to use the treaty to gain competitive and financial leverage over the most successful Internet companies, most of which are based in the U.S., including Apple, Google, and Facebook. Such proposals could effectively impose the same extortionary taxes on incoming content that once characterized international long distance calls. (Many countries still operate under nationalized or seminationalized monopoly ISPs.)

But China and other repressive governments, well aware of the increasingly important role played by the Internet in popular uprisings around the world, are also looking to WCIT as a golden opportunity to turn off the free flow of information at their borders and introduce U.N.-sanctioned surveillance technologies to spy on Internet communications.

Several proposals in the newly leaked document, for example, would authorize governments to inspect incoming Internet traffic for malware or other evidence of “criminal” activity, opening the door to wide scale, authorized censorship.

The Internet Society, the parent organization of engineering groups that develop and maintain core Internet technologies, objects to these proposals as requiring countries to “take a very active and inappropriate role in patrolling and enforcing newly defined standards of behavior.”

The document also discloses direct attacks on the engineering-driven governance of the Internet. A proposal from Russia and Cote d’Ivoire, for example, would transfer authority to the ITU to allocate and distribute “some part of IPv6 addresses,” similar to the agency’s historic involvement in the assignment of international area codes.

The Internet Society objected to this proposal as well, noting that it “would be disruptive to the existing, successful mechanisms” for allocating and distributing Internet addresses. The transition to IPv6, which began only a few weeks ago, is widely seen as an example of the speed and efficiency of an Internet run without traditional government interference.

Rep. Bono Mack’s resolution highlights these and other threats from the secretive and freewheeling WCIT process, in which member states regardless of their size get the same single vote. It encourages those negotiating on behalf of the United States to articulate “the consistent and unequivocal policy of the United States to promote a global Internet free from government control” and to “preserve and advance the successful multistakeholder model that governs the Internet today.”

In prepared remarks today, Rep. Bono Mack noted that

In many ways, we’re facing a referendum on the future of the Internet. A vote for my resolution is a vote to keep the Internet free from government control and to prevent Russia, China, and other nations from succeeding in giving the U.N. unprecedented power over Web content and infrastructure. That’s the quickest way for the Internet to one day become a wasteland of unfilled hopes, dreams, and opportunities.

Yet some see the U.S. reaction so far as being too tepid. On Monday, former Wall Street Journal publisher L. Gordon Crovitz blasted the Obama administration’s “weak responses” to some of the most dangerous WCIT proposals as revealed in the leaked documents.

“It may be hard for the billions of Web users or the optimists of Silicon Valley to believe that an obscure agency of the U.N. can threaten their Internet,” Crovitz wrote, “but authoritarian regimes are busy lobbying a majority of the U.N. members to vote their way. The leaked documents disclose a U.S. side that has hardly begun to fight back. That’s no way to win this war.”

Eli Dourado, one of two researchers at George Mason University who created the WCITLeaks site, went farther, arguing in a blog post that WCIT is not a fight between liberals and conservatives nor a “USA vs. the world” issue.

The escalating battle, Dourado wrote, is really one between Internet users worldwide and their governments: “Who benefits from increased ITU oversight of the Internet? Certainly not ordinary users in foreign countries, who would then be censored and spied upon by their governments with full international approval. The winners would be autocratic regimes, not their subjects.”

Which may in part explain why the U.S. has remained both engaged and “polite” in the WCIT process so far. “I hope that the awareness we raise through WCITLeaks,” Dourado wrote, “will not only highlight how foolish the U.S. government is for playing the lose-lose game with the ITU, but how hypocritical it is for preaching Net freedom while spying on, censoring, and regulating its own citizens online.”

Source:  CNET

Write speeds for phase-change memory reach record limits

Friday, June 22nd, 2012

Scientists bring us closer to wider use of phase-change memory (PRAM) chips.

By pre-organizing atoms in a bit of phase-change memory, information can be written in less than one nanosecond, the fastest for such memory. With write speeds comparable to the memory that powers our computers, phase-change memory could one day help computers boot up instantly.

Phase-change memory stores information based on the organization of atoms in a material, often a mixture of germanium, antimony, and tellurium (Ge2Sb2Te5, or GST). A voltage pulse heats the material, and the disordered atoms rearrange into an ordered crystal. Restoring the disordered arrangement by melting the material erases the information. A computer reads each bit by detecting the lower electrical resistance of the ordered crystal.
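
To make the read mechanism concrete, here is a minimal Python sketch of the idea: the ordered (crystalline) phase conducts far better than the disordered (amorphous) phase, so a bit is recovered by comparing the cell’s resistance against a threshold. The resistance values are invented for illustration, not measured GST figures.

# Reading a phase-change cell by its resistance (illustrative values only).
CRYSTALLINE_OHMS = 1e3   # ordered phase: low resistance
AMORPHOUS_OHMS   = 1e6   # disordered phase: high resistance
THRESHOLD_OHMS   = 1e5

def read_bit(measured_ohms):
    # Low resistance means the atoms are in the ordered, crystalline state.
    return 1 if measured_ohms < THRESHOLD_OHMS else 0

print(read_bit(CRYSTALLINE_OHMS), read_bit(AMORPHOUS_OHMS))  # 1 0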

Micron sells small phase-change memory (PRAM) chips. Companies like IBM and Samsung are working on PRAM chips too.

Phase-change memory could one day replace flash memory in our cellphones, just as Samsung briefly tried in a commercial smartphone. PRAM can top the density and write times of flash memory. And like flash, PRAM is nonvolatile, meaning that it retains its information even when a device is powered off.

That makes phase-change memory an intriguing candidate to replace the volatile DRAM that powers our computers. But a computer that boots up instantly using PRAM is still a long way off, partly because the materials can’t be switched from disordered to ordered (that is, written) quickly enough. Most phase-change materials crystallize more slowly than the 1-10 nanoseconds it takes to write a bit of DRAM. And materials that crystallize faster at PRAM operating temperatures tend to organize spontaneously at lower temperatures too, says Stephen Elliott of the University of Cambridge, so they slowly crystallize and erase themselves over time.

Elliott and his colleagues have cut the crystallization time, and thus boosted the write speed, of a stable PRAM bit. They pre-organized the atoms in a chunk of Ge2Sb2Te5 using a weak electric field: the scientists sandwiched a 50nm-wide cylinder of GST between two titanium electrodes and applied a constant 0.3V across the material. A 500-picosecond pulse at 1V then triggered the crystallization, which is about 10 times faster than the best speed achieved using a germanium-tellurium material.

The scientists melted the GST crystal, thus erasing the memory, with a similar pulse at 6.5V. The material’s resistance was stable for 10,000 write-rewrite cycles.

Computer models of the material show that the constant low voltage causes tiny seed crystals to form, basically “priming” the atoms for complete crystallization.

Most researchers have tried to improve the switching speed of phase change memory by randomly inserting metals into GST, says Robert Simpson, at the Institute of Photonic Sciences in Spain. Learning how to create crystal seeds through simulations and then demonstrating how these seeds speed crystallization in a device is a more scientific approach to developing new memory materials, he adds.

Eric Pop, of the University of Illinois, Urbana-Champaign, is excited to see the speed limits of phase-change memory bits, but wonders if the extra power needed to maintain the low priming voltage would influence the speed and energy consumption of a chip containing many phase-change bits. Ultimately, consumer cost influences the commercial viability of PRAM chips, he adds.

Source:  arstechnica.com

Printer bomb malware wastes reams of paper, sparks pandemonium

Friday, June 22nd, 2012

A recently unleashed piece of malware is wreaking havoc in some enterprises by causing all their printers to print gibberish until they run out of paper, researchers from Symantec said.

“The impact is global and effecting approximately 80 print servers,” an admin of one Fortune 500 company wrote in an online forum dedicated to the print bomb explosion. “The print job names were all 15 characters in length and unique. The print jobs were all garbage print, as if it was opening the .exe and printing the garbage text.” Other participants reported the same phenomenon caused hundreds of their organizations’ printers to run through reams of paper.

According to a blog post published Thursday by researchers from antivirus provider Symantec, the nuisance is being spread by Trojan.Milicenso. The worst-hit regions are the US, India, Europe, and South America. Milicenso is a fairly sophisticated backdoor that serves as a for-hire delivery vehicle for other pieces of malware. One of its malicious payloads, known as Adware.Eorezo, drops an executable file into printer spooler directories, causing the file’s binary contents to be printed out as garbage text.

“This explains the reports of unwanted printouts observed in some compromised environments,” the Symantec post stated. “Based on what we have discovered so far, the garbled printouts appear to be a side effect of the infection vector rather than an intentional goal of the author.”

Source:  arstechnica.com

Code crackers break 923-bit encryption record

Thursday, June 21st, 2012

In what was thought an impossibility, researchers break the longest code ever over a 148-day period using 21 computers.

Before today no one thought it was possible to successfully break a 923-bit code. And even if it was possible, scientists estimated it would take thousands of years.

However, over 148 days and a couple of hours, using 21 computers, the code was cracked.

Working together, Fujitsu Laboratories, the National Institute of Information and Communications Technology, and Kyushu University in Japan announced today that they broke the world record for cryptanalysis using next-generation cryptography.

“Despite numerous efforts to use and spread this cryptography at the development stage, it wasn’t until this new way of approaching the problem was applied that it was proven that pairing-based cryptography of this length was fragile and could actually be broken in 148.2 days,” Fujitsu Laboratories wrote in a press release.

Using “pairing-based” cryptography on this code has led to the standardization of this type of code cracking, says Fujitsu Laboratories. Scientists say that breaking the 923-bit encryption, which is 278 digits long, would have been impossible using previous “public key” cryptography; but using pairing-based cryptography, scientists were able to apply identity-based encryption, keyword-searchable encryption, and functional encryption.
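
As a quick consistency check on those figures: a 923-bit number has roughly 923 × log10(2) ≈ 278 decimal digits, which matches the 278-digit description. A one-line check in Python:

# Sanity check: decimal digits in a 923-bit number.
import math
print(math.ceil(923 * math.log10(2)))   # 278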

Researchers’ efforts to crack this type of code are useful because they help companies, governments, and organizations better understand how secure their electronic information needs to be.

“The cryptanalysis is the equivalent to spoofing the authority of the information system administrator,” Fujitsu Laboratories wrote. “As a result, for the first time in the world we proved that the cryptography of the parameter was vulnerable and could be broken in a realistic amount of time.”

Researchers from NICT and Hakodate Future University hold the previous world record for code cracking, which required far less computer power. They managed to figure out a 676-bit, or 204-digit, encryption in 2009.

Source:  CNET

Have you ever chatted with a hacker within a virus?

Thursday, June 21st, 2012

This is an impressive, first-time experience in my anti-virus career: I chatted with a hacker while debugging a virus. Yes, it’s true. It happened while the Threat team was researching keyloggers targeting Diablo III, as many players of the game were finding their accounts stolen. A sample was found on battle.net in Taiwan.

The hacker posted a topic titled “How to farm Izual in Inferno” (Izual is a boss in Act 4 of Diablo III) and provided a link that, he claimed, pointed to a video demonstrating the technique.


The ‘video’ is actually a RAR archive containing two executable files, which are almost identical except for their icons.


The malware connects to a remote server via TCP port 80 and downloads a new file packed with Themida.


That is very simple downloader/backdoor behavior, and since we were only interested in finding the Diablo III keylogging code, we didn’t pay much attention to it.

But then an astonishing scene unfolded: a chat dialog popped up with a text message.

(Translated from the original chat window)

Hacker: What are you doing? Why are you researching my Trojan?

Hacker: What do you want from it?


The dialog did not come from any software installed in our virtual machine. On the contrary, it is an integrated function of the backdoor, and the message was sent by the hacker who wrote the Trojan. Amazing, isn’t it? It seems the hacker was online and realized that we were debugging his baby.


Intrigued, we continued to chat with him. He was really arrogant.

(Translated from the original chat window)

Chicken: I didn’t know you can see my screen.

Hacker: I would like to see your face, but what a pity you don’t have a camera.


He was telling the truth. This backdoor has powerful functions, including monitoring the victim’s screen, controlling the mouse, viewing processes and modules, and even controlling the camera.


We then chatted with the hacker for some time, pretending to be novices who wanted to buy a Trojan from him. But the hacker was not foolish enough to tell us everything, and he eventually shut down our system remotely.

As for this malware, no Diablo III keylogging code was found. What it really wants to steal is the username and password for the victim’s dial-up connection.


It sounds like a movie plot, but it’s real. We are familiar with malware and we fight it every day, but chatting with a malware writer in real time doesn’t happen very often. Next time, I will be on the alert.

The malware and its components are detected by AVG as Trojan horse BackDoor.Generic variants.

Source:  avg.com

Windows drive-by attack on aeronautical website may be state sponsored

Thursday, June 21st, 2012


Attack exploited an unpatched Windows vulnerability, allowed code execution.

The website of a European aeronautical parts supplier was infected with an exploit that uses an unpatched Windows vulnerability to execute malicious code on end users’ computers, researchers from antivirus provider Sophos said.

The active exploit of an XML Core Services package in all supported versions of Windows, which Ars reported last week, allowed people to become infected simply by visiting the unnamed site using Microsoft’s Internet Explorer browser. Researchers with the firm said the exploit was planted on the site by “cybercriminals” who first managed to compromise its security.

The vulnerability, which stems from an uninitialized variable, was discovered by researchers at Google when they noticed it was being exploited in targeted attacks. Around the same time, Google initiated a new service that alerts potential targets of state-sponsored attacks, and it was later reported that the XML attacks Google saw prompted the new warnings. Over the weekend, Sophos saw at least one other attack on the website of a European medical company. The latest attack may also be state sponsored, Sophos researchers speculated Wednesday morning.

“We know that a hacker who manages to plant malicious code on the website of, say, a company which supplies aeronautical parts may reasonably predict that staff at a larger organization—such as an arms manufacturer or defense ministry—might have reason to access the site,” they wrote in a blog post. “Once the hackers have placed their malicious code on the supplier’s website, they would simply wait for notification that their code has run on either the big company’s network or a larger supplier further up the chain.”

Microsoft has provided a temporary fix for the vulnerability that all Windows users should apply whether or not they use IE as their browser of choice. Most antivirus products have added signatures to detect and block exploits. The aeronautical parts supplier, which Sophos declined to name, has since removed the infection from its website.

Source:  arstechnica.com


First Privacy Bill of Rights meeting: Mobile apps targeted

Monday, June 18th, 2012

A meeting on mobile applications and data privacy will be held July 12 to start enforcement of President Obama’s digital Privacy Bill of Rights.

The first in a series of meetings to decide concrete enforcement terms for President Obama’s digital “Privacy Bill of Rights” has just been announced for July 12, 2012, and its focus is on mobile apps.

The National Telecommunications and Information Administration (NTIA), an agency of the U.S. Department of Commerce, has decided that it’s time to put President Obama’s Privacy Bill of Rights into practice.

To begin, they’ve just invited all “privacy stakeholders” to “generate robust input” for the first consumer data transparency code of conduct.

NTIA has selected mobile app transparency as the focus of the first privacy multi-stakeholder process.

Multi-stakeholders are defined as consumer groups, advertisers, and Internet companies.

“Although other possible topics were suggested and may be pursued in future multi-stakeholder convenings, the mobile app transparency topic presents a strong opportunity for stakeholders to reach consensus on a code of conduct in a reasonable time frame,” the NTIA said in its announcement.

The NTIA’s first invitation to comment, in March, saw an overwhelming amount of concern about mobile applications because

(…) practices surrounding the disclosure of consumer data privacy practices do not appear to have kept pace with rapid developments in technology and business models.

Perhaps that’s in part owing to widespread awareness about Apple’s mobile tracking lawsuit, which it has failed to fend off.

The now-famous lawsuit, still in progress, was filed in April, and 18 companies were sued over app privacy including Apple, Facebook, Google, Path, Beluga, Yelp, Burbn, Instagram, Foursquare Labs, (the now-defunct) Gowalla, Foodspotting, Hipster, LinkedIn, Rovio Mobile, ZeptoLab, Chillingo, Electronics Arts, and Kik Interactive.

The lawsuit raised awareness that innocuous-seeming apps like Instagram, Foursquare, Foodspotting, and Yelp scrape phones to send names, e-mail addresses and/or phone numbers from users’ address books to their servers.

Instagram and Foursquare began to notify users with a permission prompt only after the Path debacle, according to VentureBeat.

A second, similar privacy lawsuit has recently been filed against Apple, Pandora, and The Weather Channel over user location data.

The NTIA multi-stakeholder privacy meeting will decide a code of conduct for app makers and much more: its intent is to create a blueprint for data transparency and also to lay out a clear set of rules that app makers can follow to stay out of trouble a la privacy lawsuits.

When the Obama administration released its comprehensive blueprint to improve consumers’ data privacy protections in February, The White House requested that NTIA ask stakeholders — companies, privacy advocates, consumer groups, and technology experts — to develop enforceable codes of conduct to specify how the Consumer Privacy Bill of Rights will be applied in specific contexts.

A wide range of multi-stakeholders are invited to contribute. About who this affects, NTIA writes:

The issue of mobile app transparency potentially impacts a range of industry participants, including: developers of mobile apps; providers of sophisticated interactive services for mobile devices (such as those utilizing HTML5 to access mobile APIs); and mobile app platforms, among others.

This is only the first in a series of meetings that will address other areas of consumer data privacy.

The meeting is in Washington, D.C., and NTIA has detailed:

The July 12, 2012, multi-stakeholder meeting will begin at 9:30 a.m. and is expected to end no later than 4:30 p.m.

The meeting will be held in the Washington, D.C., metro area; NTIA will announce the venue no later than fifteen (15) days before the meeting, and sooner if possible.

The meeting is open to all interested stakeholders, will be webcast, and is open to the press.

According to The Hill, The Software and Information Industry Association (SIIA) applauds the move and says that growth of the app marketplace will depend on consumers trusting apps with their privacy.

National Journal warns

(…) Berin Szoka, president of the think tank TechFreedom, said if the process fails, it could provide an opening for officials seeking more authority to regulate Internet privacy.

Source:  CNET

Whoops! ICANN makes domain applicants’ personal info public

Friday, June 15th, 2012

Agency that oversees the assignment of Web domains published postal addresses from applications by mistake.

The Internet Corporation for Assigned Names and Numbers, or ICANN, said it accidentally published the addresses of top-level domain applicants, but has since taken them down.

ICANN posted a notice yesterday indicating that it temporarily disabled viewing of the application details to remove the postal addresses of some primary and secondary contacts for top-level domain, or TLD, applications. The addresses appeared as responses to questions on the application. Viewing was restored the same day, sans the personal information.

“We temporarily disabled viewing of the application details. We removed the unintended information and restored this functionality,” ICANN’s statement reads. “We apologize for this oversight. Applicants should contact the Customer Service Center with any questions or concerns.”

ICANN oversees the assignment of Web domains and did a public reveal of the applicants, which included big names like Google, Amazon and Microsoft, for TLDs on Wednesday. The TLDs, also known as “strings,” were also revealed, with some of the most coveted being .app and .home.

Source:  CNET

Microsoft opens up access to cloud-based ALM server

Monday, June 11th, 2012

The Team Foundation Service, which had been invitation-only, is now open to anyone, but it still is in preview mode

Microsoft is expanding access to its cloud-based application lifecycle management service, although the service still remains in preview mode.

At its TechEd conference in Orlando, Fla., on Monday, the company will announce that anyone can use its Team Foundation Service ALM server, which is hosted on Microsoft’s Windows Azure cloud. First announced last September, the preview had been limited to invitation-only usage. Since it remains in a preview phase, the service can be used free of charge.

“Anybody who wants to try it can try it,” said Brian Harry, Microsoft technical fellow and product line manager for Team Foundation Server, the behind-the-firewall version of the ALM server. Developers can access Team Foundation Service at the Team Foundation Service preview site.

Through the cloud ALM service, developers can plan projects, collaborate, and manage code online. Code is checked into the cloud using the Visual Studio or Eclipse IDEs. Languages ranging from C# to Python are supported, as are such platforms as Windows and Android.

With Team Foundation Service, Microsoft expects to compete with rival tools like Atlassian Jira. “Team Foundation Service is a full application lifecycle management product that provides a rich set of capabilities from early project planning and management through development, testing, and deployment,” Harry said. “We’ve got the most comprehensive ALM tool in the market, and it is simple and easy to use and easy to get started.” Eventually, Microsoft will charge for use of Team Foundation Service, but it will not happen this year, Harry said.

Microsoft has been adding capabilities to Team Foundation Service every three weeks. A new continuous deployment feature enables applications to be deployed to Azure automatically. A build service was added in March. On Monday, Microsoft will announce the addition of a rich landing page with more information about the product.

Source:  computerworld.com

Flame and Stuxnet makers ‘co-operated’ on code

Monday, June 11th, 2012

Teams responsible for the Flame and Stuxnet cyber-attacks worked together in the early stages of each threat’s development, researchers have said.

Flame, revealed last month, attacked targets in Iran, as did Stuxnet which was discovered in 2010.

Kaspersky Lab said they co-operated “at least once” to share source code.

“What we have found is very strong evidence that Stuxnet/Duqu and Flame cyber-weapons are connected,” Kaspersky said.

Alexander Gostev, chief security expert at the Russia-based security company, added: “The new findings that reveal how the teams shared source code of at least one module in the early stages of development prove that the groups co-operated at least once.”

Vitaly Kamluk, the firm’s chief malware expert, said: “There is a link proven – it’s not just copycats.

“We think that these teams are different, two different teams working with each other, helping each other at different stages.”

The findings relate to the discovery of “Resource 207”, a module found in early versions of the Stuxnet malware.

It bears a “striking resemblance” to code used in Flame, Kaspersky said.

“The list includes the names of mutually exclusive objects, the algorithm used to decrypt strings, and the similar approaches to file naming,” Mr Gostev said.

Direct orders

Recently, a New York Times investigation – based on an upcoming book – singled out the US as being responsible for Stuxnet, under the direct orders of President Barack Obama.

The report said the threat had been developed in co-operation with Israel.

No country has yet publicly taken responsibility for the attack.

Speaking about Flame, a spokesman for the Israeli government distanced the country from involvement following an interview in which a minister seemed to back the attacks.

“There was no part of the interview where the minister has said anything to imply that Israel was responsible for the virus,” the spokesman said.

‘Completely separate’

Last week, the UN’s telecommunications head Dr Hamadoun Toure said he did not believe the US was behind Flame, and that reports regarding the country’s involvement in Stuxnet were “speculation”.

Prof Alan Woodward, a security expert from the University of Surrey, described the findings as interesting – but not yet a clear indicator of who was behind the attacks.

“The fact that they shared source code further suggests that it wasn’t just someone copying or reusing one bit of Stuxnet or Flame that they had found in the wild, but rather those that wrote the code passed it over,” he said.

“However, everything else still indicates that Flame and Stuxnet were written, designed and built by a completely separate group of developers.

“At the very least it suggests there are two groups capable of building this type of code but they are somehow collaborating, albeit only in a minor way.”

Source:  BBC

Flame authors force self-destruct

Friday, June 8th, 2012

After Flame was exposed publicly and partially compromised, the malware’s authors apparently retained enough control to make it almost disappear.

Amid the exposure of Flame, its authors appear to be going to ground, using what control they have of the malware to force it to self-destruct and disappear (almost) without a trace.

Earlier this week, Kaspersky Lab noted that within hours of researchers announcing the discovery of Flame, the command and control infrastructure behind Flame went dark. This infrastructure is important because Flame is initially configured to contact a number of these servers and then run the control scripts that they serve. However, by 28 May — the day that Flame’s details began to emerge — requests for these scripts were met with 403/404 errors, hampering efforts to learn more about the servers behind the malware.
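
To see what “going dark” looks like in practice, here is a minimal sketch of the kind of probe a researcher might script: request the control-script URL and treat a 403/404 response as a server that is still answering but no longer serving the script. The domain and path below are placeholders, not real Flame infrastructure.

from urllib import request, error

# Placeholder C&C URLs (not real Flame infrastructure).
CANDIDATE_URLS = [
    "http://cnc-placeholder.invalid/cgi-bin/control-script",
]

def probe(url, timeout=10):
    try:
        with request.urlopen(url, timeout=timeout) as resp:
            return f"{url}: HTTP {resp.status}, control script still being served"
    except error.HTTPError as e:
        # The 403/404 responses researchers reported land here.
        return f"{url}: HTTP {e.code}, server answering but script gone"
    except (error.URLError, OSError) as e:
        return f"{url}: unreachable ({e})"

for url in CANDIDATE_URLS:
    print(probe(url))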

Kaspersky Lab, with the assistance of GoDaddy and OpenDNS, attempted to sinkhole the malware; however, Symantec noted that this effort was only partially successful — Flame’s authors still had control of a few command and control servers — enough to communicate with some of the infected computers. “[Flame’s authors] had retained control of their domain registration accounts, which allowed them to host these domains with a new hosting provider,” Symantec wrote on its blog.

Source:  CNET

U.N. could tax U.S.-based Web sites, leaked docs show

Friday, June 8th, 2012

Global Internet tax suggested by European network operators, who want Apple, Google, and other Web companies to pay to deliver content, is proposed for debate at a U.N. agency in December.

The United Nations is considering a new Internet tax targeting the largest Web content providers, including Google, Facebook, Apple, and Netflix, that could cripple their ability to reach users in developing nations.

The European proposal, offered for debate at a December meeting of a U.N. agency called the International Telecommunication Union, would amend an existing telecommunications treaty by imposing heavy costs on popular Web sites and their network providers for the privilege of serving non-U.S. users, according to newly leaked documents.

The documents (No. 1, No. 2) punctuate warnings that the Obama administration and Republican members of Congress raised last week about how secret negotiations at the ITU over an international communications treaty could result in a radical re-engineering of the Internet ecosystem and allow governments to monitor or restrict their citizens’ online activities.

“It’s extremely worrisome,” Sally Shipman Wentworth, senior manager for public policy at the Internet Society, says about the proposed Internet taxes. “It could create an enormous amount of legal uncertainty and commercial uncertainty.”

The leaked proposal was drafted by the European Telecommunications Network Operators Association, or ETNO, a Brussels-based lobby group representing companies in 35 nations that wants the ITU to mandate these fees.

While this is the first time this proposal has been advanced, European network providers and phone companies have been bitterly complaining about U.S. content-providing companies for some time. France Telecom, Telecom Italia, and Vodafone Group want to “require content providers like Apple and Google to pay fees linked to usage,” Bloomberg reported last December.

ETNO refers to it as the “principle of sending party network pays” — an idea borrowed from the system set up to handle payments for international phone calls, where the recipient’s network sets the per-minute price. If its proposal is adopted, it would spell an end to the Internet’s long-standing, successful design based on unmetered “peered” traffic, and effectively tax content providers to reach non-U.S. Internet users.
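
As a rough illustration of how quickly a sender-pays settlement could add up, the sketch below multiplies an assumed volume of traffic sent to non-U.S. users by a few hypothetical per-gigabyte rates. ETNO’s proposal names no actual figures, so every number here is an assumption.

# Back-of-the-envelope arithmetic for "sending party network pays".
# Every figure is an assumption; ETNO's proposal specifies no rates or volumes.
monthly_traffic_exabytes = 2.0                    # assumed traffic a large provider sends to non-U.S. users
hypothetical_rates_per_gb = [0.01, 0.05, 0.10]    # assumed settlement rates, USD per GB

traffic_gb = monthly_traffic_exabytes * 1_000_000_000   # 1 EB = 10^9 GB
for rate in hypothetical_rates_per_gb:
    annual_usd = traffic_gb * rate * 12
    print(f"${rate:.2f}/GB -> ${annual_usd / 1e9:.1f} billion per year")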

The sender-pays framework would likely prompt U.S.-based Internet services to reject connections from users in developing countries, who would become unaffordably expensive to communicate with, predicts Robert Pepper, Cisco’s vice president for global technology policy.

Developing countries “could effectively be cut off from the Internet,” says Pepper, a former policy chief at the U.S. Federal Communications Commission. The ETNO plan, he says, “could have a host of very negative unintended consequences.”

It’s not clear how much the taxes levied under ETNO’s plan would total per year, but observers expect them to be in the billions of dollars. Government data show that in 1996, U.S. phone companies paid their overseas counterparts a total of $5.4 billion just for international long distance calls.

If the new taxes were levied, larger U.S. companies might be able to reduce the amount of money they pay by moving data closer to overseas customers, something that Netflix, for instance, already does through Akamai and other content delivery networks. But smaller U.S. companies unable to afford servers in other nations would still have to pay.

The leaked documents were posted by the Web site WCITLeaks, which was created by two policy analysts at the free-market Mercatus Center at George Mason University in Arlington, Va., who stress their Wikileaks-esque project is being done in their spare time. The name, WCITLeaks, is a reference to the ITU’s December summit in Dubai, the World Conference on International Telecommunications, or WCIT.

Eli Dourado, a research fellow who founded WCITLeaks along with Jerry Brito, told CNET this afternoon that the documents show that Internet taxes represent “an attractive revenue stream for many governments, but it probably is not in the interest of their people, since it would increase global isolation.”

Dourado hopes to continue posting internal ITU documents, and is asking for more submissions. “We hope that shedding some light on them will help people understand what’s at stake,” he says.

One vote per country

ETNO’s proposal arrives against the backdrop of negotiations now beginning in earnest to rewrite the International Telecommunications Regulations (PDF), a multilateral treaty that governs international communications traffic. The ITRs, which date back to the days of the telegraph, were last revised in 1988, long before the rise of the commercial Internet and the ongoing migration of voice, video and data traffic to the Internet’s packet-switched network.

The U.S. delegation to the Dubai summit, which will be headed by Terry Kramer, currently an entrepreneur-in-residence at the Harvard Business School, is certain to fight proposals for new Internet taxes and others that could curb free speech or privacy online.

But the ITU has 193 member countries, and all have one vote each.

If proposals harmful to global Internet users eventually appear in a revision to the ITRs, it’s possible that the U.S. would refuse to ratify the new treaty. But that would create additional problems: U.S. network operators and their customers would still be held to new rules when dealing with foreign partners and governments. The unintended result could be a Balkanization of the Internet.

In response to the recent criticism from Washington, ITU Secretary-General Hamadoun Toure convened a meeting yesterday with ITU staff to deny charges that the WCIT summit in Dubai “is all about ITU, or the United Nations, trying to take over the Internet.” (The ITU also has been criticized, as CNET recently reported, for using the appearance of the Flame malware to argue it should have more cybersecurity authority over the Internet.)

“The real issue on the table here is not at all about who ‘runs’ the Internet — and there are in fact no proposals on the table concerning this,” Toure said, according to a copy of his remarks posted by the ITU. “The issue instead is on how best to cooperate to ensure the free flow of information, the continued development of broadband, continued investment, and continuing innovation.”

Robert McDowell, a Republican member of the Federal Communications Commission who wrote an article (PDF) in the Wall Street Journal in February titled “The U.N. Threat to Internet Freedom,” appeared to reference the ETNO’s proposal for Internet taxes during last week’s congressional hearing.

Proposals that foreign governments have pitched to him personally would “use international mandates to charge certain Web destinations on a ‘per-click’ basis to fund the build-out of broadband infrastructure across the globe,” McDowell said. “Google, iTunes, Facebook, and Netflix are mentioned most often as prime sources of funding.”

They could also allow “governments to monitor and restrict content or impose economic costs upon international data flows,” added Ambassador Philip Verveer, a deputy assistant secretary of state.

ITU spokesman Paul Conneally told CNET this week that:

There are proposals that could change the charging system, but nothing about pay-per-click as such. There isn’t anything we can comment about this interpretation because, as stated before, member states are free to interpret proposals as they like, so if McDowell chooses to interpret as pay-per-click, that is his right and similarly it is he who should provide pointers for you.

From the beginning, the Internet’s architecture has been based on traffic exchange between backbone providers for mutual benefit, without metering and per-byte “settlement” charges for incoming and outgoing traffic. ETNO’s proposal would require network operators and others to instead negotiate agreements “where appropriate” aimed at achieving “a sustainable system of fair compensation for telecommunications services” based on “the principle of sending party network pays.”

“Not all those countries like open, transparent process”

This isn’t the first time a U.N. agency has considered the idea of Internet taxes.

In 1999, a report from the United Nations Development Program proposed Internet e-mail taxes to help developing nations, suggesting that an appropriate amount would be the equivalent of one penny on every 100 e-mails that an individual might send. But the agency backed away from the idea a few days later.

And in 2010, the U.N.’s World Health Organization contemplated, but did not agree on, a “bit tax” on Internet traffic.

Under the ITU system for international long distance, government-owned telecommunications companies used to make billions from incoming calls, effectively taxing the citizens of countries that placed the calls. That meant that immigrants to developed nations paid princely sums to call their relatives back home, as high as $1 a minute.

But technological advances have eroded the ability of the receiving countries to collect the fees, and the historic shift to voice over Internet Protocol services such as Skype has all but erased the transfer payments. Some countries see the WCIT process as a long-shot opportunity to reclaim those riches.

The ITU’s process has been controversial because so much of it is conducted in secret. That’s drawn unflattering comparisons with the Anti-Counterfeiting Trade Agreement, or ACTA, an international intellectual property agreement that has generated protests from Internet users across the world. (The Obama administration approved ACTA in 2011, before anyone outside the negotiations had a chance to review it.)

By comparison, the Internet Society, with 55,000 members and 90 worldwide chapters, hosts the engineering task forces responsible for the development and enhancement of Internet protocols, which operate through virtual public meetings and mailing lists.

“Not all those countries like open, transparent process,” says Cisco’s Pepper, referring to the ITU’s participants. “This is a problem.”

Source:  CNET

RedHat will pay Microsoft to ensure Fedora 18 runs on Windows 8 PCs

Monday, June 4th, 2012

RedHat, the makers of the popular Fedora Linux distro, made an announcement recently about the future of the OS that has some open source purists up in arms. Fedora 18 is expected to drop about the same time as Windows 8, and that means new hardware is going to be coming equipped with UEFI secure boot enabled. To ensure Fedora works smoothly for users, RedHat is getting cozy with the man.

UEFI secure boot is essentially a method of locking a computer’s bootloader to ensure that unsigned code, like pre-boot malware, cannot run on the system. Microsoft originally wanted Windows-certified hardware to ship with secure boot turned on and no option to disable it. Eventually, heavy pressure forced a change in that policy. While secure boot will be on by default, there will be an option hidden in the UEFI settings to disable it.
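
For readers who want to see where their own machine stands, the sketch below reads the SecureBoot variable that UEFI firmware exposes to Linux through efivarfs. It assumes the standard layout of four attribute bytes followed by a one-byte value, and the well-known EFI global-variable GUID, so the path may not exist on every system.

from pathlib import Path

# SecureBoot variable exposed via efivarfs; the GUID is the EFI global-variable GUID.
SECUREBOOT_VAR = Path(
    "/sys/firmware/efi/efivars/SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c"
)

def secure_boot_enabled():
    if not SECUREBOOT_VAR.exists():
        return None   # legacy BIOS boot, or efivarfs not mounted
    raw = SECUREBOOT_VAR.read_bytes()
    return raw[-1] == 1   # one-byte value follows the four attribute bytes

state = secure_boot_enabled()
if state is None:
    print("Secure Boot state unavailable")
else:
    print("Secure Boot is", "enabled" if state else "disabled")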

In the interests of saving you some hassle, RedHat will be building a Microsoft-signed bootloader. Pick your jaw up off the floor and listen, because it’s not that bad. RedHat has decided to build a simple bootloader that will be certified through the Microsoft sysdev portal. This bootloader will really just be an intermediate stage that loads the real bootloader, which continues to be grub2.
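
The arrangement is easier to see as a toy model: the firmware trusts Microsoft’s signing key, the small Microsoft-signed first stage trusts Fedora’s key, and grub2 only runs if each link checks out. The sketch below is purely illustrative; the key names are made up, and the real verification happens inside the firmware and the loader, not in Python.

# Toy model of the two-stage boot chain described above. Illustrative only:
# "signatures" here are just (image, signing key) labels, not real cryptography.
FIRMWARE_TRUSTED_KEYS = {"microsoft-uefi-ca"}        # keys the firmware accepts
FIRST_STAGE_TRUSTED_KEYS = {"fedora-signing-key"}    # key embedded in the first stage

boot_chain = [
    # (image, key it is signed with, trust store used to check it)
    ("first-stage-loader.efi", "microsoft-uefi-ca", FIRMWARE_TRUSTED_KEYS),
    ("grub2.efi", "fedora-signing-key", FIRST_STAGE_TRUSTED_KEYS),
]

for image, signing_key, trust_store in boot_chain:
    if signing_key not in trust_store:
        raise SystemExit(f"refusing to boot: {image} is not signed by a trusted key")
    print(f"{image}: signature accepted ({signing_key})")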

The sysdev portal charges an access fee, but it’s just $99. RedHat can certainly afford this small expense, and the cash ultimately goes to Verisign, not Microsoft. As the RedHat folks point out, this is the best option. Almost all hardware will be Windows certified, so it will have the Windows boot keys. RedHat would otherwise have to create its own keys, then work with hardware vendors to implement them to avoid getting in bed with Microsoft. The logistics make that impossible.

UEFI secure boot operates on the idea that software which can directly interact with the hardware should be trusted. That’s not a terrible idea in and of itself, and we will have the option to turn it off on x86 systems. ARM devices running Windows will be locked to secure boot, though. For that reason, RedHat has no plans to support ARM tablets.

None of this is final, but RedHat seems comfortable with the decision. It’s probably the most rational course of action.

Source:  geek.com

Spy software’s Bluetooth capability allowed stalking of Iranian victims

Monday, June 4th, 2012

Flame attackers could even surveil smartphones not infected by the malware

Espionage software that was recently found targeting Iranian computers contains advanced Bluetooth capabilities, taking malware to new heights by allowing attackers to physically stalk their victims, new analysis from Symantec shows.

The Flame malware, reported earlier this week to have infiltrated systems in Iran and other Middle Eastern countries, is so comprehensive that security experts have said it may take years for them to fully document its inner workings. In a blog post published Thursday, Symantec researchers dangled an intriguing morsel of information concerning one advanced feature when picking apart a module that the binary code referred to as BeetleJuice.

The component scans for all Bluetooth devices in range and collects the status and unique ID of each one found, presumably so that it can be uploaded later to servers under the control of attackers, the Symantec report said. It also embeds an encoded fingerprint into each infected device with Bluetooth capabilities. The BeetleJuice module gives the attackers the ability to track not only the physical location of the infected device, but the coordinates of smartphones and other Bluetooth devices that have been in range of the infected device.
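
Symantec has not published Flame’s code, but the kind of inquiry scan described above looks roughly like the sketch below, written with the third-party PyBluez library. The data it collects (address, name, timestamp) mirrors what the report describes; everything else is an assumption.

import time
import bluetooth   # third-party PyBluez package

def scan_nearby_devices(duration=8):
    # discover_devices returns (address, name) pairs when lookup_names=True
    found = bluetooth.discover_devices(duration=duration, lookup_names=True)
    return [
        {"address": addr, "name": name, "seen_at": time.time()}
        for addr, name in found
    ]

for device in scan_nearby_devices():
    print(device)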

“This will be particularly effective if the compromised computer is a laptop because the victim is more likely to carry it around,” the report stated. “Over time, as the victim meets associates and friends, the attackers will catalog the various devices encountered, most likely mobile phones. This way the attackers can build a map of interactions with various people—and identify the victim’s social and professional circles.”

By measuring the strength of radio signals broadcast by devices indexed by Flame, attackers in airports, city streets, and other locations might be able to measure the comings and goings of a host of people, the Symantec report goes on to say. It refers to at least one attack that was reported to identify Bluetooth devices more than a mile away. The post says BeetleJuice could be used to upload contacts, text messages, photos, and other data stored on Bluetooth devices, or to bypass firewalls and other security mechanisms when exfiltrating sensitive information.
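
The proximity tracking Symantec describes comes down to the standard log-distance path-loss model: the weaker the received signal, the farther away the device probably is. The calibration values in the sketch below (signal strength at one metre, path-loss exponent) are assumptions, so the output is a rough estimate at best.

def estimate_distance_m(rssi_dbm, rssi_at_1m=-59.0, path_loss_exponent=2.5):
    """Rough range estimate from received signal strength.
    rssi_at_1m: assumed reading one metre from the device;
    path_loss_exponent: about 2 in free space, higher indoors (assumed)."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_exponent))

# Example: a -75 dBm reading suggests the device is very roughly this far away.
print(f"{estimate_distance_m(-75):.1f} m")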

According to another blog post also published Thursday by Trend Micro, Flame doesn’t pose a significant threat because of the “very limited and specific targets” it infected. Researchers at Kaspersky have said it hit about 1,000 computers operated by private companies, educational facilities and government-run organizations. Its significance lies in its complexity, which, combined with the profile of its victims, strongly suggests it was created with the resources of a nation-state. The malicious software is also known as Flamer and sKyWIper.

With a size of 20 megabytes, Flame is a massive piece of malware whose discovery might be the security equivalent of oceanographers finding a previously unknown sea. Expect new factoids to trickle out steadily for the foreseeable future.

Source:  arstechnica.com

“Flame” malware was signed by rogue Microsoft certificate

Monday, June 4th, 2012

Emergency Windows update nukes credentials minted by Terminal Services bug.

Microsoft released an emergency Windows update on Sunday after revealing that one of its trusted digital signatures was being abused to certify the validity of the Flame malware that has infected computers in Iran and other Middle Eastern countries.

The compromise exploited weaknesses in Terminal Server, a service many enterprises use to provide remote access to end-user computers. By targeting an undisclosed encryption algorithm Microsoft used to issue licenses for the service, attackers were able to create rogue intermediate certificate authorities that contained the imprimatur of Microsoft’s own root authority certificate—an extremely sensitive cryptographic seal. Rogue intermediate certificate authorities that contained the stamp were then able to trick administrators and end users into trusting various Flame components by falsely certifying they were produced by Microsoft.

“We have discovered through our analysis that some components of the malware have been signed by certificates that allow software to appear as if it was produced by Microsoft,” Microsoft Security Response Center Senior Director Mike Reavey wrote in a blog post published Sunday night. “We identified that an older cryptography algorithm could be exploited and then be used to sign code as if it originated from Microsoft. Specifically, our Terminal Server Licensing Service, which allowed customers to authorize Remote Desktop services in their enterprise, used that older algorithm and provided certificates with the ability to sign code, thus permitting code to be signed as if it came from Microsoft.”

The exploit, which abused a series of intermediate authorities that were ultimately signed by Microsoft’s root authority, is the latest coup for Flame, a highly sophisticated piece of espionage malware that came to light last Monday. Flame’s 20-megabyte size, its extensive menu of sophisticated spying capabilities, and its focus on computers in Iran have led researchers from Kaspersky Lab, Symantec, and other security firms to conclude it was sponsored by a wealthy nation-state. Microsoft’s disclosure follows Friday’s revelation that the George W. Bush and Obama administrations developed and deployed Stuxnet, the highly advanced software used to set back the Iranian nuclear program by sabotaging uranium centrifuges at Iran’s Natanz refining facility.

The emergency update released by Microsoft blacklists three intermediate certificate authorities tied to Microsoft’s root authority. All versions of Windows that have not applied the new patch can be tricked by the Flame attackers into displaying cryptographically generated assurances that the malicious wares were produced by Microsoft.

Microsoft engineers have also stopped issuing certificates that can be used for code signing via the Terminal Services activation and licensing process. The ability of the licensing mechanism to sign untrusted code that chained to Microsoft’s root authority is a mistake of breathtaking proportions. None of Microsoft’s Sunday night blog posts explained why such a design was ever put in place. A description of the Terminal Services License Server Activation refers to a “limited-use digital certificate that validates server ownership and identity.” Based on Microsoft’s description of the attack, it would appear the capabilities of these certificates weren’t as limited as company engineers had intended.

“This is a pretty big goof,” Marsh Ray, a software developer at two-factor authentication company PhoneFactor, told Ars. “I don’t think anyone realized that this enabled the sub CA that was present on the licensing server to have the full authority of the trusted root CA itself.”

Microsoft’s mention of an older cryptography algorithm that could be exploited and used to sign code as if it originated from Microsoft evoked memories of a 2008 attack that minted a rogue certificate authority trusted by all major browsers. That attack relied in part on weaknesses in the MD5 cryptographic hash function that made it susceptible to “collisions,” in which two or more different plaintext messages generate the same cryptographic hash. By unleashing 200 PlayStation 3 game consoles to essentially find a collision, the attackers were able to set themselves up as a certificate authority that could spawn SSL (Secure Sockets Layer) credentials trusted by major browsers and operating systems.
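
What a collision buys an attacker is easy to state in code: the authority’s signature covers only the hash of the certificate body, so a signature issued over a harmless body is equally valid for any other body with the same digest. The sketch below only illustrates the property; actual colliding inputs have to be produced by dedicated collision-search tools and are not reproduced here.

import hashlib

def md5_hex(data: bytes) -> str:
    return hashlib.md5(data).hexdigest()

def collides(a: bytes, b: bytes) -> bool:
    """True when two different inputs produce the same MD5 digest."""
    return a != b and md5_hex(a) == md5_hex(b)

# Ordinary inputs will not collide; the 2008 attackers had to search for a
# crafted pair, one harmless and one granting CA powers, with equal digests.
print(collides(b"harmless certificate body", b"rogue CA certificate body"))   # False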

Based on the language in Microsoft’s blog posts, it’s impossible to rule out the possibility that at least one of the certificates revoked in the update was also created using MD5 weaknesses. Indeed, two of the underlying credentials used MD5, while the third used the more advanced SHA-1 algorithm. In a Frequently Asked Questions section of Microsoft Security Advisory (2718704), Microsoft’s security team also said: “During our investigation, a third Certificate Authority has been found to have issued certificates with weak ciphers.” The advisory didn’t elaborate.

It’s also unclear if those with control of one of the rogue Microsoft certificates could sign Windows software updates. Such a feat would allow attackers with control over a victim network to hijack Microsoft’s update mechanism by using the credentials to pass off their malicious wares as official patches. Microsoft representatives didn’t respond to an e-mail seeking comment on that possibility. This article will be updated if an answer arrives later.

Two of the rogue certificates were chained to a Microsoft Enforced Licensing Intermediate PCA. A third was chained to a Microsoft Enforced Licensing Registration Authority CA, and ultimately to the company’s root authority. In addition to potential exploits from the actors behind Flame, unrelated attackers could also use the certificates to apply Microsoft’s signature to malicious pieces of software.
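
The real remedy is simply installing Microsoft’s update, which moves those intermediates to the untrusted store, but for illustration the sketch below flags a certificate whose issuer common name matches one of the intermediates named above. It uses the third-party "cryptography" package; older releases of that library also require a backend argument.

from cryptography import x509
from cryptography.x509.oid import NameOID

# Issuer names as given in the article.
REVOKED_ISSUERS = {
    "Microsoft Enforced Licensing Intermediate PCA",
    "Microsoft Enforced Licensing Registration Authority CA",
}

def issued_by_revoked_intermediate(pem_bytes: bytes) -> bool:
    cert = x509.load_pem_x509_certificate(pem_bytes)
    common_names = cert.issuer.get_attributes_for_oid(NameOID.COMMON_NAME)
    return any(attr.value in REVOKED_ISSUERS for attr in common_names)

# Usage: issued_by_revoked_intermediate(open("suspect_cert.pem", "rb").read())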

A third Microsoft advisory pointed out that Flame so far has been found only on the machines of highly targeted victims, so the “vast majority of customers are not at risk.”

“That said, our investigation has discovered some techniques used by this malware that could also be leveraged by less sophisticated attackers to launch more widespread attacks,” Jonathan Ness, of Microsoft’s Security Response Center, continued. “Therefore, to help protect both targeted customers and those that may be at risk in the future, we are sharing our discoveries and taking steps to mitigate the risk to customers.”

Source:  arstechnica.com

Kabel Deutschland sets record with 4.7Gbps download speeds

Friday, June 1st, 2012

About a year ago, Arris teased a system capable of 4.5Gbps downloads, and while that technology was in the proof-of-concept phase last June, it’s beginning to look more like a real possibility. German network provider Kabel Deutschland just notched a new download speed record using Arris’ C4 CMTS and Touchstone CM820S cable modems: a mind-blowing 4,700 Mbps (4.7 Gbps). The cable operator set that world-record rate in the city of Schwerin, where it recently upgraded its network to 862 MHz.

The network may be capable of delivering those 4.7Gbps speeds, but the company noted that current laptops and modems can’t even process such blazing data transfer rates. And before you North Americans get too excited, note that KD uses the EuroDOCSIS specification with 8MHz channels, while DOCSIS uses 6MHz channels in the US and elsewhere. Still, that’s not to say that other providers like Verizon FiOS have been slacking lately — 300Mbps downloads are nothing to scoff at.
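
For a sense of what 4.7Gbps implies, the arithmetic below assumes 256-QAM on 8MHz EuroDOCSIS channels, which carry roughly 55.6 Mbps of raw throughput each. The article does not say how many channels Kabel Deutschland actually bonded, so the result is only an estimate.

import math

# Assumes 256-QAM on an 8 MHz EuroDOCSIS channel:
# about 6.952 Msym/s * 8 bits/symbol, roughly 55.6 Mbps raw per channel (before overhead).
target_mbps = 4700
per_channel_mbps = 6.952 * 8

channels_needed = math.ceil(target_mbps / per_channel_mbps)
print(f"~{channels_needed} bonded downstream channels")   # roughly 85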

Source:  engadget.com