Archive for March, 2013

How Spamhaus’ attackers turned DNS into a weapon of mass destruction

Thursday, March 28th, 2013

DNS amplification can clog the Internet’s core—and there’s no fix in sight.

A little more than a year ago, details emerged about an effort by some members of the hacktivist group Anonymous to build a new weapon to replace their aging denial-of-service arsenal. The new weapon would use the Internet’s Domain Name Service as a force-multiplier to bring the servers of those who offended the group to their metaphorical knees. Around the same time, an alleged plan for an Anonymous operation, “Operation Global Blackout” (later dismissed by some security experts and Anonymous members as a “massive troll”), sought to use the DNS service against the very core of the Internet itself in protest against the Stop Online Piracy Act.

This week, an attack using the technique proposed for use in that attack tool and operation—both of which failed to materialize—was at the heart of an ongoing denial-of-service assault on Spamhaus, the anti-spam clearing house organization. And while it hasn’t brought the Internet itself down, it has caused major slowdowns in the Internet’s core networks.

DNS amplification (or DNS reflection) remains possible after years of warnings from security experts. Its power is a testament to how hard it is to get organizations to make simple changes that would prevent even recognized threats. Some network providers have made tweaks that prevent botnets or “volunteer” systems within their networks from staging such attacks. But thanks to public cloud services, “bulletproof” hosting services, and other services that allow attackers to spawn and then reap hundreds of attacking systems, DNS amplification attacks can still be launched at the whim of a deep-pocketed attacker—like, for example, the cyber-criminals running the spam networks that Spamhaus tries to shut down.

Hello, operator?

The Domain Name Service is the Internet’s directory assistance line. It allows computers to get the numerical Internet Protocol (IP) address for a remote server or other network-attached device based on its human-readable host and domain name. DNS is organized in a hierarchy; each top-level domain name (such as .com, .edu, .gov, .net, and so on) has a “root” DNS server keeping a list of each of the “authoritative” DNS servers for each domain registered with them. If you’ve ever bought a domain through a domain registrar, you’ve created (either directly or indirectly) an authoritative DNS address for that domain by selecting the primary and secondary DNS servers that go with it.

When you type “arstechnica.com” into your browser’s address bar and hit the return key, your browser checks with a DNS resolver—your personal Internet 411 service— to determine where to send the Web request. For some requests, the resolver may be on your PC. (For example, this happens if you’ve requested a host name that’s in a local “hosts” table for servers within your network, or one that’s stored in your computer’s local cache of DNS addresses you’ve already looked up.) But if it’s the first time you’ve tried to connect to a computer by its host and domain name, the resolver for the request is probably running on the DNS server configured for your network—within your corporate network, at an Internet provider, or through a public DNS service such as Google’s Public DNS.

There are two ways for a resolver to get the authoritative IP address for a domain name that isn’t in its cache: an iterative request and a recursive request. In an iterative request, the resolver pings the top-level domain’s DNS servers for the authoritative DNS for the destination domain, then it sends a DNS request for the full hostname to that authoritative server. If the computer that the request is seeking is in a subdomain or “zone” within a larger domain—such as www.subdomain.domain.com—it may tell the resolver to go ask that zone’s DNS server. The resolver “iterates” the request down through the hierarchy of DNS servers until it gets an answer.

But on some networks, the DNS resolver closest to the requesting application doesn’t handle all that work. Instead, it sends a “recursive” request to the next DNS server up and lets that server handle all of the walking through the DNS hierarchy for it. Once all the data is collected from the root, domain, and subdomain DNS servers for the requested address, the resolver then pumps the answer back to its client.

How DNS queries are supposed to work—when they’re not being used as weapons.

To save time, DNS requests don’t use the “three-way handshake” of the Transmission Control Protocol (TCP) to make all these queries. Instead, DNS typically uses the User Datagram Protocol (UDP)—a “connectionless” protocol that lets the sender fire off requests and forget about them.
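For a sense of how lightweight these exchanges are, here is a minimal Python sketch (not from the article) that hand-builds a DNS query with the “recursion desired” flag set and fires it over UDP; the example.com hostname and the 8.8.8.8 resolver address are arbitrary choices for illustration. Because there is no handshake, nothing in the exchange proves where the request really came from.

```python
import socket
import struct

def build_query(hostname, qtype=1, query_id=0x1234):
    """Build a minimal DNS query: A record, recursion desired (RD) flag set."""
    # Header: ID, flags (0x0100 = recursion desired), QDCOUNT=1, AN/NS/AR counts = 0
    header = struct.pack("!HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    # Question: the hostname as length-prefixed labels, then QTYPE and QCLASS (IN)
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in hostname.split(".")) + b"\x00"
    return header + qname + struct.pack("!HH", qtype, 1)

# Fire-and-forget over UDP: no connection setup, so nothing stops an attacker
# from forging the source address on a packet just like this one.
query = build_query("example.com")
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3)
sock.sendto(query, ("8.8.8.8", 53))      # any open recursive resolver will answer
response, _ = sock.recvfrom(4096)
print(f"sent {len(query)} bytes, got {len(response)} bytes back")
```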

Pump up the volume

That makes the sending of requests and responses quicker—but it also opens up a door to abuse of DNS that DNS amplification uses to wreak havoc on a target. All the attacker has to do is find a DNS server open to requests from any client and send it requests forged as being from the target of the attack. And there are millions of them.

The “amplification” in DNS amplification attacks comes from the size of those responses. While a DNS lookup request itself is fairly small, the resulting response of a recursive DNS lookup can be much larger. A relatively small number of attacking systems sending a trickle of forged UDP packets to open DNS servers can result in a firehose of data being blasted at the attackers’ victim.

DNS amplification attacks wouldn’t be nearly as amplified if it weren’t for the “open” DNS servers they use to fuel the attacks. These servers have been configured (or misconfigured) to answer queries from addresses outside of their network. The volume of traffic that can be generated by such open DNS servers is huge. Last year, Ars reported on a paper presented by Randal Vaughn of Baylor University and Israeli security consultant Gadi Evron at the 2006 DefCon security conference. The authors documented a series of DNS amplification attacks in late 2005 and early 2006 that generated massive traffic loads for the routers of their victims. In one case, the traffic was “as high as 10Gbps and used as many as 140,000 exploited name servers,” Vaughn and Evron reported. “A DNS query consisting of a 60 byte request can be answered with responses of over 4000 bytes, amplifying the response packet by a factor of 60.”
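The arithmetic behind the firehose is simple. Here is a rough, illustrative calculation (ours, not from the paper) using the figures quoted above, with a made-up 100Mbps of combined botnet upstream capacity:

```python
# Back-of-the-envelope math using the figures quoted above
# (a ~60-byte query answered with a ~4,000-byte response).
request_bytes = 60
response_bytes = 4000
amplification = response_bytes / request_bytes          # roughly 66x

attacker_uplink_mbps = 100                               # hypothetical combined botnet uplink
victim_flood_mbps = attacker_uplink_mbps * amplification
print(f"amplification factor: ~{amplification:.0f}x")
print(f"{attacker_uplink_mbps} Mbps of forged queries -> "
      f"~{victim_flood_mbps / 1000:.1f} Gbps arriving at the victim")
```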

But even if you can’t find an open DNS server to blast recursive responses from, you can still depend on the heart of the Internet for a respectable hail of packet projectiles. A “root hint” request—sending a request for name servers for the “.” domain—results in a response roughly 20 times larger than the request packet. That’s in part thanks to DNSSEC, the standard adopted to make it harder to spoof DNS responses, since responses now include cryptographic signature data from the responding server.

A comparison of a “root hint” query and the response delivered by the DNS server. Not all data shown.
Sean Gallagher

In the case of the attack on Spamhaus, the organization was able to turn to the content delivery network CloudFlare for help. CloudFlare hid Spamhaus behind its CDN, which uses the Anycast feature of the Border Gateway Protocol to cause packets destined for the antispam provider’s site to be routed to the closest CloudFlare point of presence. This spread out the volume of the attack. And CloudFlare was able to then shut off amplified attacks aimed at Spamhaus with routing filters that blocked aggregated DNS responses matching the pattern of the attack.

But that traffic still had to get to CloudFlare before it could be blocked. And that resulted in a traffic jam in the core of the Internet, slowing connections for the Internet as a whole.

No fix on the horizon

The simplest way to prevent DNS amplification and reflection attacks would be to prevent forged DNS requests from being sent along in the first place. But that “simple” fix isn’t exactly easy—or at least, it’s not easy to get everyone who needs to participate to do it.

There’s been a proposal on the books to fix the problem for nearly 13 years—the Internet Engineering Task Force’s BCP 38, an approach to “ingress filtering” of packets. First pitched in 1998 as part of RFC 2267 and formalized as BCP 38 (RFC 2827) in 2000, the proposal has gone nowhere. And while the problem would be greatly reduced if zone and domain DNS servers were simply configured not to answer recursive or even “root hint” queries coming from outside their own networks, that would require action by the owners of those networks, an action with no direct monetary or security benefit to them.

ISPs generally do “egress filtering”—they check outbound traffic to make sure it’s coming from IP addresses within their network.  This prevents them from filling up their peering connections with bad traffic.  But “ingress” filtering would check to make sure that requests coming in through a router were coming from the proper direction based on their advertised IP source.
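As a toy illustration of the ingress-filtering idea behind BCP 38 (Python pseudologic, not an actual router configuration), the check amounts to comparing a packet’s claimed source address against the prefix assigned to the link it arrived on; the prefix below is a documentation range chosen for the example:

```python
import ipaddress

# BCP 38-style ingress filtering in miniature: a packet arriving on a
# customer-facing link should carry a source address from that customer's
# assigned prefix. Anything else is presumed spoofed and dropped.
CUSTOMER_PREFIX = ipaddress.ip_network("203.0.113.0/24")   # hypothetical assignment

def ingress_check(src_ip: str) -> bool:
    """Return True if the source address is plausible for this link."""
    return ipaddress.ip_address(src_ip) in CUSTOMER_PREFIX

for src in ("203.0.113.57", "198.51.100.9"):
    verdict = "forward" if ingress_check(src) else "drop (spoofed source)"
    print(f"{src}: {verdict}")
```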

Another possible solution that would eliminate the problem entirely is to make DNS use TCP for everything—reducing the risk of forged packets.  DNS already uses TCP for tasks like zone transfers. But that would require a change to DNS itself, so it’s unlikely that would ever happen, considering that you can’t even convince people to properly configure their DNS servers to begin with.
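DNS over TCP is nothing exotic; per RFC 1035, it is the same message format with a two-byte length prefix, and the handshake means a server only answers an address that can actually complete it. The sketch below is an illustration only, again using example.com and the 8.8.8.8 resolver as arbitrary stand-ins:

```python
import socket
import struct

# A minimal A-record query sent over TCP with the two-byte length prefix
# defined in RFC 1035. The three-way handshake is what defeats the
# forged-source trick used in amplification attacks.
query_id, flags = 0x2222, 0x0100                        # recursion desired
header = struct.pack("!HHHHHH", query_id, flags, 1, 0, 0, 0)
qname = b"".join(bytes([len(p)]) + p.encode("ascii")
                 for p in "example.com".split(".")) + b"\x00"
message = header + qname + struct.pack("!HH", 1, 1)     # QTYPE=A, QCLASS=IN

with socket.create_connection(("8.8.8.8", 53), timeout=3) as sock:
    sock.sendall(struct.pack("!H", len(message)) + message)
    length = struct.unpack("!H", sock.recv(2))[0]        # response length prefix
    print(f"server is sending a {length}-byte response over TCP")
```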

Maybe the attack on Spamhaus will change that, and core network providers will move to do more to filter DNS traffic that doesn’t seem to match up with known DNS servers. Maybe, just maybe, BCP 38 will get some traction. And maybe pigs will fly.

Source:  arstechnica.com

Recent reports of DHS-themed ransomware

Monday, March 25th, 2013

US-CERT has received reports of apparently DHS-themed ransomware occurring in the wild. Users who are being targeted by the ransomware receive an email message claiming that use of their computer has been suspended and that the user must pay a fine to unblock it. The ransomware falsely claims to be from the U.S. Department of Homeland Security and the National Cyber Security Division.

Users who are infected with the malware should consult with a reputable security expert to assist in removing the malware, or perform a clean reinstallation of their OS after formatting their computer’s hard drive.

US-CERT and DHS encourage users and administrators to use caution when encountering these types of email messages and take the following preventive measures to protect themselves from phishing scams and malware campaigns that attempt to frighten and deceive a recipient for the purpose of illegal gain.

  • Do not click on links in, or submit any information to, these webpages.
  • Do not follow unsolicited web links in email messages.
  • Use caution when opening email attachments. Refer to the Security Tip Using Caution with Email Attachments for more information on safely handling email attachments.
  • Maintain up-to-date antivirus software.
  • Users who are infected should change all passwords AFTER removing the malware from their system.
  • Refer to the Recognizing and Avoiding Email Scams (pdf) document for more information on avoiding email scams.
  • Refer to the Security Tip Avoiding Social Engineering and Phishing Attacks for more information on social engineering attacks.

Source:  US-CERT

GSA breach highlights dangers of SSNs as IDs

Saturday, March 23rd, 2013

A recent security breach at the U.S. General Services Administration highlights the dangers of using your Social Security Number for identification. Federal and state laws restrict use of SSNs by public and private organizations.

Last Friday, the General Services Administration sent an e-mail alert to users of its System for Award Management (SAM), reporting that a security vulnerability exposed the users’ names, taxpayer identification numbers (TINs), marketing partner information numbers, and bank account information to “[r]egistered SAM users with entity administrator rights and delegated entity registration rights.”

The notice warned that “[r]egistrants using their Social Security Numbers instead of a TIN for purposes of doing business with the federal government may be at greater risk for potential identity theft.” Also provided was a link to a page on the agency’s site where SAM users could find information for protecting against identity theft and financial loss.

The message, which was sent by GSA Integrated Award Environment Acting Assistant Commissioner Amanda Fredriksen, included this suggestion: “We recommend that you monitor your bank accounts and notify your financial institution immediately if you find any discrepancies.”

The GSA breach highlights the risks of using your SSN for identification rather than for only tax and government-benefits purposes.

Who has a right to require your SSN?
According to the Privacy Rights Clearinghouse’s fact sheet on Social Security Numbers, the Privacy Act of 1974 requires that all local, state, and federal agencies requesting your SSN provide a disclosure statement on the form that “explains whether you are required to provide your SSN or if it’s optional, how the SSN will be used, and under what statutory or other authority the number is requested (5 USC 552a, note).”

As the fact sheet points out, you can complain to the agency or to your elected representative in Congress if no disclosure statement is provided, but no penalties are specified for failure to offer such a statement. The Privacy Rights Clearinghouse also has an FAQ on appropriate and inappropriate use of SSNs by public and private organizations.

Kiplinger.com’s Cameron Huddleston lists the 10 Riskiest Places to Give Your Social Security Number. Topping the list are universities and colleges, banks and financial institutions, and hospitals. Huddleston warns against providing any personal information to someone who contacts you by phone, e-mail, or in person.

She also notes that the Internal Revenue Service never requests information from taxpayers via e-mail.

Check your state’s laws for protecting SSNs
In late 2010 Congress enacted the Social Security Number Protection Act, which prohibits local, state, and federal government agencies from displaying an individual’s SSN or any “derivative” of the number on a government check. The law also restricts access to SSNs by prisoners.

Your state probably offers a higher level of protection for your SSN. According to the Data Quality Campaign, 34 states have enacted laws restricting the use and disclosure of SSNs. The site provides a state-by-state chart summarizing SSN-protection laws (PDF).

Back in 2005 the U.S. Government Accountability Office issued a report (PDF) summarizing its testimony to the Committee on Consumer Affairs and Protection and the Committee on Governmental Operations of the New York State Assembly that discussed federal and state laws protecting SSNs.

The report concluded that federal protections were industry-specific, focusing primarily on financial services, and no single agency was responsible for safeguarding our personal information. It also found that state SSN-protection statutes were uneven and inconsistent.

The New York State Division of Consumer Protection provides Information You Should Know About Your Social Security Number that explains how businesses and employers are prohibited from using SSNs. Likewise, the California Attorney General’s Office site’s Your Social Security Number: Controlling the Key to Identity Theft page describes the state’s restrictions on displaying SSNs and offers advice for keeping the number private.

California Attorney General's Office site on SSN protection

The National Conference of State Legislatures site summarizes state laws relating to Internet privacy and includes links to the privacy policies of 16 state Web sites. You can also search the site for privacy-related legislation pending in your state.

Consumers Union summarizes state laws restricting SSN use. The organization has also developed a Model State SSN Protection Law.

Resources for protecting SSNs
The Social Security Administration’s Publication No. 05-10064 explains how identity thieves acquire and use SSNs; the page also offers tips for protecting your number. SSA Publication No. 05-10002 serves as an FAQ on general SSN-related topics.

The agency’s My Social Security service lets you create an online account for managing your Social Security benefits. The Social Security Number Verification Service allows registered organizations to enter SSNs to ensure their employees’ names and SSNs match the agency’s records.

The SSA site also features a description of the legal requirements to provide your Social Security Number and lists a Social Security Number Chronology that covers the period from 1935 to 2005.

Experian’s Protect My ID site explains the steps required to request a new SSN. Scambusters.org describes how SSNs are stolen and offers tips for protecting yours.

The Electronic Privacy Information Center’s SSN page summarizes recent developments surrounding the security of Social Security Numbers. Finally, the IdentityHawk security service explains who can lawfully request your SSN.

Source:  CNET

IBM moves toward post-silicon transistor

Friday, March 22nd, 2013

IBM’s new work shows that flexible low-power circuitry could be built with strongly correlated materials

Exploring methods of computing without silicon, IBM has found a way to make transistors that could be fashioned into virtual circuitry that mimics how the human brain operates.

The proposed transistors would be made from strongly correlated materials, which researchers have found to possess characteristics favorable for building more powerful, less power-hungry computation circuitry.

“The scaling of conventional-based transistors is nearing an end after a fantastic run of 50 years,” said Stuart Parkin, an IBM fellow at IBM Research who leads the research. “We need to consider alternative devices and materials that will have to operate entirely differently. There aren’t many avenues to follow beyond silicon. One of them is correlated electronic systems.”

Parkin’s team is the first to convert metal oxides from an insulating to a conductive state by applying oxygen ions to the material. The team published details of the work in this week’s edition of the journal Science.

The circuitry used in today’s computer processors, memory and other components is made from large numbers of integrated transistors built from silicon wafers. Conventional transistors work by applying a small voltage across a gate, which can control — or switch on or off — a larger current passing through the transistor.

IBM’s technique uses another approach to switching conductive states of a material. It requires strongly correlated electron materials, such as metal oxides. By conventional theory, these materials should act like conductors, but they are actually insulators. “They don’t obey conventional band theory,” Parkin said. Under certain conditions, however, they can change their conductive states.

Research has been going on for several years, in fact, to find ways of changing conductivity states in strongly correlated materials. Previous approaches, however, relied on techniques of applying stress to a material, or subjecting it to temperature changes. Neither approach would be practical to use in mass-produced circuitry. IBM’s specific breakthrough, described in the paper, is that the conductive state of a material could be changed by injecting oxygen ions.

In IBM’s setup, these ions are introduced through contact with an ionic liquid, consisting of large, irregularly shaped molecules. When a voltage is applied to this liquid, and the liquid is placed on the oxide material, the material can change from a conductor to an insulator, or vice versa.

This approach could be more energy-efficient than standard silicon transistors, in that the resulting transistors would be nonvolatile—they don’t need to be constantly refreshed with a power source to maintain their state, Parkin said. A charge can be set by applying the voltage once.

These materials may not switch their states as quickly as silicon transistors, though their relatively low switching speed may not be a factor, given their greater flexibility, Parkin said. In theory such transistors could mimic how the human brain operates in that “liquids and currents of ions [are used] to change materials,” Parkin said. “We know brains can carry out computing operations a million times more efficiently than silicon-based computers,” Parkin said.

To work, such circuitry would take advantage of microfluidics, the emerging practice of engineering around how to tightly control small amounts of liquids in a system. “We would direct fluids to particular surfaces or three-dimensional structures of oxides and then change their properties by applying gate voltages,” Parkin said. Entire virtual circuits could be built and, once their work is finished, they could be torn down by simply passing liquid through other channels.

Source:  networkworld.com

VMware’s hybrid cloud gambit will rely on its public cloud partners

Friday, March 22nd, 2013

VMware has been rather cagey about its plans to launch its own hybrid cloud service, announced at a recent Strategic Forum for Institutional Investors. Companies are usually more than happy to talk journalists’ ears off about a new product or service, but when InfoWorld reached out to VMware about this one, a spokesman said the company had nothing further to share beyond what it presented in a sparse press release and a two-hour, multi-topic webcast.

In a nutshell, here’s what VMware has revealed: It will offer a VMware vCloud Hybrid Service later this year, designed to let customers seamlessly extend their private VMware clouds to public clouds run by the company’s 220 certified vCloud Services Providers. Although the public component would run on partners’ hardware, VMware employees would manage the hybrid component and the underlying software.

For example, suppose Company X is running a critical cloud application on its own private, VMware-virtualized cloud. The company unexpectedly sees a massive uptick in demand for the service. Rather than having to hustle to install new hardware, Company X could leverage VMware’s hybrid service to consume public-cloud resources on the fly. In the process, Company X would not have to make any changes to the application, the networking architecture, or any of the underlying policies, as VMware CEO Pat Gelsinger described the service.

“[T]he power of what we’ll uniquely be delivering, is this ability to not change the app, not change the networking, not change the policies, not change the security, and be able to run it private or public. That you could burst through the cloud, that you could develop in the cloud, deploy internally, that you could DR in the cloud, and do so without changing the apps, with that complete flexibility of a hybrid service,” he said.

One of the delicate points in this plan is the question of how it will impact the aforementioned 220 VSPP partners, which include such well-known companies as CDW, Dell, and AT&T as well as lesser-known providers like Lokahi and VADS. Would VMware inserting itself into the mix result in the company stepping on its partners’ toes and eating up some of their cloud-hosting revenue?

Gelsinger did take pains to emphasize that the hybrid service would be “extremely partner-friendly.” “Every piece of intellectual property that we’re developing here we’re making available to VSPP partners,” he said. “Ultimately, we see this as another tool for business agility.”

451 Research analyst Carl Brooks took an optimistic view on the matter. “Using VSPP partners’ data centers and white-labeling existing infrastructure would both soothe hurt feelings and give VMware an ability to source and deploy new cloud locations extremely quickly, with minimal investment,” he said.

Gartner Research VP Chris Wolf, however, had words of caution for VMware as well as partner providers. “VMware needs to be transparent with provider partners about where it will leave them room to innovate. Of course, partners must remember that VMware reserves the right to change its mind as the market evolves, thus potentially taking on value adds that it originally left to its partners. SP partners are in a tough spot. VMware has brought many of them business, and they have to consider themselves at a crossroads,” he wrote.

Indeed, VMware’s foray into the hybrid cloud world isn’t sitting well with all of its partners. Tom Nats, managing partner at VMware service provider Bit Refinery, told CRN that the vCloud Hybrid Service is not a welcome development. “Many partners have built up [their infrastructure] and stayed true to VMware, and now all of a sudden we are competing with them,” he said.

As to customers: Will they feel comfortable with entrusting their cloud efforts in part to VMware and in part to one or more VMware partners? Building and managing a cloud is complex enough without adding new parties into the mix. One reason Amazon Web Services has proven such a successful public cloud offering is that it falls under the purview of one entity. When a problem arises, there’s just one entity to call and one throat to choke. Under VMware’s hybrid cloud model, customers may need to scrutinize SLAs carefully to determine which party would be responsible for which instances of downtime. Meanwhile, VMware would have to be vigilant in ensuring that its partners were all running their respective clouds properly.

Source:  infoworld.com

Symantec finds Linux wiper malware used in S. Korean attacks

Friday, March 22nd, 2013

The cyber attacks used malware called Jokra and also targeted Windows computers’ master boot records

Security vendors analyzing the code used in the cyber attacks against South Korea are finding nasty components designed to wreck infected computers.

Tucked inside a piece of Windows malware used in the attacks is a component that erases Linux machines, an analysis from Symantec has found. The malware, which it called Jokra, is unusual, Symantec said.

“We do not normally see components that work on multiple operating systems, so it is interesting to discover that the attackers included a component to wipe Linux machines inside a Windows threat,” the company said on its blog.

Jokra also checks computers running Windows XP and 7 for a program called mRemote, which is a remote access tool that can be used to manage devices on different platforms, Symantec said.

South Korea is investigating the Wednesday attacks that disrupted at least three television stations and four banks. Government officials reportedly cautioned against blaming North Korea.

McAfee also published an analysis of the attack code, which overwrites a computer’s master boot record (MBR), the first sector of the hard drive that the computer reads before the operating system boots.

A computer’s MBR is overwritten with one of two similar strings: “PRINCPES” or “PR!NCPES.” The damage can be permanent, McAfee wrote. If the MBR is corrupted, the computer won’t start.

“The attack also overwrote random parts of the file system with the same strings, rendering several files unrecoverable,” wrote Jorge Arias and Guilherme Venere, both malware analysts at McAfee. “So even if the MBR is recovered, the files on disk will be compromised too.”
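For the curious, a minimal forensic check along these lines is easy to sketch. This is illustrative Python, not part of the McAfee analysis; it reads the first 512-byte sector from a disk image (or, with root privileges, a raw device) and looks for the overwrite strings reported above or a missing boot signature. The disk.img path is a placeholder.

```python
# Illustrative check for the MBR damage described above (not from McAfee).
WIPER_MARKERS = (b"PRINCPES", b"PR!NCPES")

def mbr_looks_wiped(path="disk.img"):        # placeholder path to a disk image
    with open(path, "rb") as disk:
        mbr = disk.read(512)                 # the MBR is the drive's first sector
    # An intact MBR ends with the 0x55AA boot signature; the wiper's marker
    # strings are a strong sign the sector has been overwritten.
    has_marker = any(marker in mbr for marker in WIPER_MARKERS)
    has_boot_signature = mbr[510:512] == b"\x55\xaa"
    return has_marker or not has_boot_signature

print("possible wiper damage" if mbr_looks_wiped() else "MBR looks intact")
```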

The malware also attempts to shut down two South Korean antivirus products made by the companies AhnLab and Hauri. Another component, a Bash shell script, attempts to erase partitions on Unix systems, including Linux and HP-UX.

Security vendor Avast wrote on its blog that the attacks against South Korean banks originated from the website of the Korean Software Property Right Council.

The site had been hacked to serve up an iframe that delivered an attack hosted on another website, Avast said. The actual attack code exploits a vulnerability in Internet Explorer dating from July 2012, which has been patched by Microsoft.

Source:  infoworld.com

Cisco switches to weaker hashing scheme, passwords cracked wide open

Friday, March 22nd, 2013

Crypto technique requires little time and computing resources to crack.

Password cracking experts have reversed a secret cryptographic formula recently added to Cisco devices. Ironically, the new Type 4 algorithm leaves users considerably more susceptible to password cracking than an older alternative, even though the new routine was intended to enhance protections already in place.

It turns out that Cisco’s new method for converting passwords into one-way hashes uses a single iteration of the SHA256 function with no cryptographic salt. The revelation came as a shock to many security experts because the technique requires little time and computing resources. As a result, relatively inexpensive computers used by crackers can try a dizzying number of candidates when attempting to recover the corresponding plaintext password. For instance, a system outfitted with two AMD Radeon 6990 graphics cards that run a soon-to-be-released version of the Hashcat password cracking program can cycle through more than 2.8 billion candidate passwords each second.
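To put that 2.8 billion guesses-per-second figure in perspective, here is a rough, illustrative calculation (ours, not the researchers’) of how long it would take to exhaust a few common password keyspaces at that rate:

```python
# Rough arithmetic using the ~2.8 billion guesses/second figure quoted above.
RATE = 2.8e9   # candidate passwords per second on the two-GPU rig described

for label, alphabet_size, length in (
        ("8 chars, lowercase only",    26,  8),
        ("8 chars, letters + digits",  62,  8),
        ("10 chars, letters + digits", 62, 10)):
    keyspace = alphabet_size ** length
    seconds = keyspace / RATE
    print(f"{label}: {seconds:,.0f} s (~{seconds / 86400:,.1f} days) to exhaust")
```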

By contrast, the type 5 algorithm the new scheme was intended to replace used 1,000 iterations of the MD5 hash function. The large number of repetitions forces cracking programs to work more slowly and makes the process more costly to attackers. Even more important, the older function added randomly generated cryptographic “salt” to each password, preventing crackers from tackling large numbers of hashes at once.

“In my eyes, for such an important company, this is a big fail,” Jens Steube, the creator of ocl-Hashcat-plus, said of the discovery he and beta tester Philipp Schmidt made last week. “Nowadays everyone in the security/crypto/hash scene knows that password hashes should be salted, at least. By not salting the hashes we can crack all the hashes at once with full speed.”

Cisco officials acknowledged the password weakness in an advisory published Monday. The bulletin didn’t specify the specific Cisco products that use the new algorithm except to say that they ran “Cisco IOS and Cisco IOS XE releases based on the Cisco IOS 15 code base.” It warned that devices that support Type 4 passwords lose the capacity to create more secure Type 5 passwords. It also said “backward compatibility problems may arise when downgrading from a device running” the latest version.

The advisory said that Type 4 protection was designed to use the Password-Based Key Derivation Function version 2 standard to SHA256 hash passwords 1,000 times. It was also designed to append a random 80-bit salt to each password.

“Due to an implementation issue, the Type 4 password algorithm does not use PBKDF2 and does not use a salt, but instead performs a single iteration of SHA256 over the user-provided plaintext password,” the Cisco advisory stated. “This approach causes a Type 4 password to be less resilient to brute-force attacks than a Type 5 password of equivalent complexity.”
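The difference between the two designs is easy to see with Python’s hashlib. The sketch below is only an illustration of the contrast the advisory describes; it does not reproduce Cisco’s actual encoding or storage format, and the password and salt are made up:

```python
import hashlib
import os

password = b"example-password"                           # made-up value

# What the advisory says Type 4 actually does: one unsalted SHA-256 pass.
# Identical passwords always hash identically, so precomputed and batch
# attacks run at full speed.
weak = hashlib.sha256(password).hexdigest()

# What Type 4 was designed to do, per the advisory: PBKDF2 with SHA-256,
# 1,000 iterations, and an 80-bit random salt.
salt = os.urandom(10)                                    # 80 bits
strong = hashlib.pbkdf2_hmac("sha256", password, salt, 1000)

print("single unsalted SHA-256:", weak)
print("salted PBKDF2-SHA256   :", salt.hex() + "$" + strong.hex())
```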

The weakness threatens anyone whose router configuration data may be exposed in an online breach. Rather than store passwords in clear text, the algorithm is intended to store passwords as a one-way hash that can only be reversed by guessing the plaintext that generated it. The risk is exacerbated by the growing practice of including configuration data in online forums. Steube found the hash “luSeObEBqS7m7Ux97dU4qPfW4iArF8KZI2sQnuwGcoU” posted online and had little trouble cracking it. (Ars isn’t publishing the password in case it’s still being used to secure the Cisco gear.)

While Steube and Schmidt reversed the Type 4 scheme, word of the weakness they uncovered recently leaked into other password cracking forums. An e-mail posted on Saturday to a group dedicated to the John the Ripper password cracker, for instance, noted that the secret to the Type 4 password scheme “is it’s base64 SHA256 with character set ‘./0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz’.” Armed with this knowledge, crackers have everything they need to crack hundreds of thousands, or even millions, of hashes in a matter of hours.

It’s hard to fathom an implementation error of this magnitude being discovered only after the new hashing mechanism went live. The good news is that Cisco is openly disclosing the weakness early in its life cycle. Ars strongly recommends that users consider the pros and cons before upgrading their Cisco gear.

Source:  arstechnica.com

Guerilla researcher created epic botnet to scan billions of IP addresses

Friday, March 22nd, 2013

In one of the more audacious and ethically questionable research projects in recent memory, an anonymous hacker built a botnet of more than 420,000 Internet-connected devices and used it to perform one of the most comprehensive surveys ever to measure the insecurity of the global network.

In all, the nine-month scanning project found 420 million IPv4 addresses that responded to probes and 36 million more addresses that had one or more ports open. A large percentage of the unsecured devices bore the hallmarks of broadband modems, network routers, and other devices with embedded operating systems that typically aren’t intended to be exposed to the outside world. The researcher found a total of 1.3 billion addresses in use, including 141 million that were behind a firewall and 729 million that returned reverse domain name system records. There were no signs of life from the remaining 2.3 billion IPv4 addresses.
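Those categories account for the roughly 1.3 billion total; a quick sanity check on the reported figures:

```python
# The census categories above, in round numbers.
pingable    = 420e6    # responded to probes
ports_open  =  36e6    # had one or more ports open
firewalled  = 141e6    # were behind a firewall
reverse_dns = 729e6    # returned reverse DNS records

in_use = pingable + ports_open + firewalled + reverse_dns
print(f"~{in_use / 1e9:.2f} billion addresses showed some sign of use")
```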

Continually scanning almost 4 billion addresses for nine months is a big job. In true guerilla research fashion, the unknown hacker developed a small scanning program that scoured the Internet for devices that could be logged into using no account credentials at all or the usernames and passwords of either “root” or “admin.” When the program encountered unsecured devices, it installed itself on them and used them to conduct additional scans. The viral growth of the botnet allowed it to infect about 100,000 devices within a day of the program’s release. The critical mass allowed the hacker to scan the Internet quickly and cheaply. With about 4,000 clients, it could scan one port on all 3.6 billion addresses in a single day. Because the project ran 1,000 unique probes on 742 separate ports, and possibly because the binary was uninstalled each time an infected device was restarted, the hacker commandeered a total of 420,000 devices to perform the survey.
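Some quick arithmetic on those figures (ours, for illustration) shows why the approach was so cheap: spread across roughly 4,000 clients, a one-port sweep of 3.6 billion addresses in a day works out to only about ten probes per second per device.

```python
# Sanity-check arithmetic on the figures above.
addresses = 3.6e9
clients = 4_000
seconds_per_day = 86_400

per_client_per_day = addresses / clients
per_client_per_second = per_client_per_day / seconds_per_day
print(f"~{per_client_per_day:,.0f} probes per client per day")
print(f"~{per_client_per_second:.1f} probes per client per second")
```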

More than nine terabytes of data

“A lot of devices and services we have seen during our research should never be connected to the public Internet at all,” the guerilla researcher concluded in a 5,000-word report titled Internet Census 2012: Port scanning /0 using insecure embedded devices. “As a rule of thumb, if you believe that ‘nobody would connect to the Internet, really nobody,’ there are at least 1,000 people who did. Whenever you think ‘that shouldn’t be on the Internet but will probably be found a few times’ it’s there a few hundred thousand times. Like half a million printers, or a million Webcams, or devices that have root as a root password.”

In all, the botnet, which the researcher named “Carna” after the Roman goddess of physical health, collected more than 9TB worth of data. It performed 52 billion ICMP ping probes, 180 billion service probe records, and 2.8 billion SYN scan records for 660 million IPs with 71 billion ports tested. The researcher said he took precautions to prevent his program from disrupting the normal operation of the infected devices.

“Our binaries were running with the lowest possible priority and included a watchdog that would stop the executable in case anything went wrong,” he wrote. “Our scanner was limited to 128 simultaneous connections and had a connection timeout of 12 seconds.”

He continued: “We used the devices as a tool to work at the Internet scale. We did this in the least invasive way possible and with the maximum respect to the privacy of the regular device users.”

The researcher found that his scanning program wasn’t the only unauthorized code hitching a free ride on some of the commandeered devices. Competing botnet programs such as one known as Aidra infected as many as 30,000 embedded devices, including the Linux-powered Dreambox TV receiver and other devices that run on MIPS hardware. The scanning software detected capabilities in Aidra that forced compromised devices to carry out a variety of denial-of-service attacks on targets selected by the malicious botnet operators.

“Apparently its author only built it for a few platforms, so a majority of our target devices could not be infected with Aidra,” the researcher reported. “Since Aidra was clearly made for malicious actions and we could actually see their Internet scale deployment at that moment, we decided to let our bot stop telnet after deployment and applied the same iptable rules Aidra does, if iptables was available. This step was required to block Aidra from exploiting these machines for malicious activity.”

The changes didn’t survive reboots, however, allowing Aidra to resume control of the embedded devices once they were restarted. The scanning program was programmed to install itself on uninfected devices, so it’s possible it may have repeatedly disrupted the malicious bot software only to be foiled each time a device was rebooted.

Breaking the law

The research project almost certainly violated federal statutes prohibiting the unauthorized access of protected computers and possibly other hacking offenses. And since the unknown researcher is willing to take ethical and legal liberties in his work, it’s impossible to verify that he carried out the project in the manner described in the paper. Still, the findings closely resemble those of HD Moore, the CSO of security firm Rapid7 and chief architect of the Metasploit software framework used by hackers and penetration testers. Over a 12-month period last year, he used ethical and legal means to probe up to 18 ports of every IPv4 Internet address three or four times each day. The conclusion: there are about 1.3 billion addresses that respond to various scans, with about 500 million to 600 million of them coming from embedded devices that were never intended to be reachable on the Internet.

Over three months in mid-2012, the researcher sent an astounding 4 trillion service probes, 175 billion of which were sent back and saved. In mid-December the researcher probed the top 30 ports, providing about 5 billion additional saved service probes. A detailed list of the probes sent to specific ports is included in the report.

“This looks pretty accurate,” Moore said of the guerilla report, which included a wealth of raw data to document the findings. “Embedded devices really are one of the most common devices on the Internet, and the security of these devices is terrible. I ran into a number of active botnets using those devices to propagate.”

The only way to ultimately confirm the veracity of the findings is to go through the data in precise detail, which is something fellow researchers have yet to do publicly.

Moore said there were advantages and disadvantages to each of the studies. While use of an illicit botnet may have provided greater visibility into the overall Internet population, it amounted to a much briefer snapshot in time. Moore’s approach, by contrast, was more limited since it probed just 18 ports. But because it surveyed devices every day for a year, its results are less likely to reflect anomalies resulting from seasonal differences in Internet usage.

Putting aside the ethical and legal concerns of taking unauthorized control of hundreds of millions of devices, the researcher builds a compelling case for taking on the project.

“We would also like to mention that building and running a gigantic botnet and then watching it as it scans nothing less than the whole Internet at rates of billions of IPs per hour over and over again is really as much fun as it sounds like,” he wrote. What’s more, with the advent of IPv6, the opportunity may never come again, since the next-generation addressing scheme offers orders of magnitude more addresses than could ever be scanned en masse.

The researcher concluded by explaining the ultimate reason he took on the project.

“I did not want to ask myself for the rest of my life how much fun it could have been or if the infrastructure I imagined in my head would have worked as expected,” he explained. “I saw the chance to really work on an Internet scale, command hundred thousands of devices with a click of my mouse, portscan and map the whole Internet in a way nobody had done before, basically have fun with computers and the Internet in a way very few people ever will. I decided it would be worth my time.”

Source:  arstechnica.com

Microsoft begins pushing Windows 7 SP1 as an automatic update

Tuesday, March 19th, 2013

Starting this week, Microsoft will begin giving Windows 7 users who have yet to install Service Pack 1 a helpful push into the safer, more secure future. SP1 will start rolling out as an automatic update, and that’s a very good thing.

Not only does Windows 7 Service Pack 1 patch numerous flaws in the uber-popular OS, but it also brings loads of performance and stability tweaks. It’s also going to be a support requirement come April 9, 2013. Microsoft wants to make sure everyone who’s using Windows 7 is running the version that’s in line for all the upcoming bug fixes. Critical security fixes, of course, will still be delivered to all Windows 7 users, not just those who welcome SP1 with open arms.

There’s really no reason not to install the update, unless you’re a network administrator with very particular platform requirements for your in-house apps… or you happen to be running a copy of Windows that might not be 100% legal.

Don’t be expecting to see any dramatic changes after you install, though. Microsoft’s official notes about what’s included in Windows 7 SP1 are thin on details and the few changes that do get mentioned aren’t very exciting. Better print output from the XPS viewer won’t make you want to raise your glass, but improved audio reliability over HDMI connections might at least be worth a golf clap if you’re going to be running SP1 on a media center computer.

To make sure you’re ready to receive Microsoft’s SP1 push, just pop in to the Control Panel and click the Windows Update icon. If you’re feeling a bit geekier, hit services.msc from the search box and verify that the Windows Update service is running.
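If you’d rather script that check than click through dialogs, a quick Windows-only Python sketch can query the Windows Update service (service name wuauserv) through the built-in sc.exe tool; this is just one convenient way to do it:

```python
import subprocess

# Query the Windows Update service ("wuauserv") via the built-in sc.exe tool
# and look for RUNNING in its output. Windows-only, illustrative.
result = subprocess.run(["sc", "query", "wuauserv"],
                        capture_output=True, text=True)
if "RUNNING" in result.stdout:
    print("Windows Update service is running")
else:
    print("Windows Update service is not running:")
    print(result.stdout.strip())
```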

Source:  geek.com

Cisco aims software improvements at enterprise video communications

Tuesday, March 19th, 2013

Cisco plans deeper integration between its TelePresence and WebEx products, and better network management tools for video conferencing

Anticipating explosive growth in video communications, Cisco is readying product improvements designed to simplify the management of videoconferencing traffic and to streamline its use for employees.

The products, announced Tuesday, include new software for adjusting network resources based on video requirements, and a new cloud-based virtual meeting room service for its video-as-a-service hosting partners.

“We’re giving tools to IT managers to better deploy, manage and support video traffic on their networks and help eliminate some of the fears of adding video on the network,” said Roberto De La Mora, Cisco’s senior director of unified communications marketing.

“We also are expanding our cloud video-as-a-service options via a new on-demand virtual meeting room service we’re making available via our hosted collaboration partners,” he added.

An internal Cisco survey of enterprise desktop video conferencing forecasts an increase in usage from 36.4 million users in 2011 to 218.9 million by 2016, and IT professionals are demanding more and better tools for providing and handling this type of network-intensive video communications, he said.

The Cisco TelePresence Server and TelePresence Conductor products are getting a software upgrade designed to make adjustments to network resources based on video conferencing session requirements.

Available today, the upgraded software will recognize the requirements of the end point devices of different video conferencing session participants, for example allocating more bandwidth to people using large, high-definition displays, and less to those using a smartphone.

The Cisco products will be able to provision network resources in this more “intelligent” manner even to end point devices not made by the company, as long as they comply with codec industry standards.

The technology should help IT departments make more efficient use of their network resources, and ultimately lower costs, without changing their existing hardware, De La Mora said.

A similar enhancement is being made to Cisco’s Medianet Services Interface (MSI), a software component that is embedded in video endpoints and collaboration applications for enhanced network resources usage based on policies and configurations set by IT administrators. This upgrade will be available before the end of this month.

In addition, Cisco plans to deliver in this year’s first half deeper integration between its high-end TelePresence products and its midrange WebEx products, so that customers will be able to have both types of users on a single video conferencing session sharing the same two-way voice/video and content.

Currently, the only integration point in this scenario is voice — TelePresence and WebEx participants can’t see each other in the video conference, nor share their desktops and content.

Finally, Cisco will launch by the end of March a new cloud service for its Hosted Collaboration Solution (HCS) partners: on-demand virtual meeting rooms, which De La Mora likened to conference call bridges, except for video conferencing.

HCS partners include AT&T, Sprint, Telefonica, Verizon, Vodafone and other service providers, systems integrators, wholesalers and resellers that provide cloud-hosted Cisco collaboration software to their customers.

Source:  computerworld.com

The 49ers’ plan to build the greatest stadium Wi-Fi network of all time

Tuesday, March 19th, 2013

When the San Francisco 49ers’ new stadium opens for the 2014 NFL season, it is quite likely to have the best publicly accessible Wi-Fi network a sports facility in this country has ever known.

The 49ers are the defending NFC champions, so 68,500 fans will inevitably walk into the stadium for each game. And every single one of them will be able to connect to the wireless network, simultaneously, without any limits on uploads or downloads. Smartphones and tablets will run into the limits of their own hardware long before they hit the limits of the 49ers’ wireless network.

Until now, stadium executives have said it’s pretty much impossible to build a network that lets every single fan connect at once. They’ve blamed this on limits in the amount of spectrum available to Wi-Fi, despite their big budgets and the extremely sophisticated networking equipment that largesse allows them to purchase. Even if you build the network perfectly, it would choke if every fan tried to get on at once—at least according to conventional wisdom.

But the people building the 49ers’ wireless network do not have conventional sports technology backgrounds. Senior IT Director Dan Williams and team CTO Kunal Malik hail from Facebook, where they spent five years building one of the world’s largest and most efficient networks for the website. The same sensibilities that power large Internet businesses and content providers permeate Williams’ and Malik’s plan for Santa Clara Stadium, the 49ers’ nearly half-finished new home.

“We see the stadium as a large data center,” Williams told me when I visited the team’s new digs in Santa Clara.

I had previously interviewed Williams and Malik over the phone, and they told me they planned to make Wi-Fi so ubiquitous throughout the stadium that everyone could get on at once. I had never heard of such an ambitious plan before—how could this be possible?

Today’s networks are impressive—but not unlimited

An expansive Wi-Fi network at this year’s Super Bowl in the New Orleans Superdome was installed to allow as many as 30,000 fans to get online at once. This offloaded traffic from congested cellular networks and gave fans the ability to view streaming video or do other bandwidth-intensive tasks meant to enhance the in-game experience. (Don’t scoff—as we’ve noted before, three-plus-hour NFL games contain only 11 minutes of actual game action, or a bit more if you include the time quarterbacks spend shouting directions at teammates at the line of scrimmage. There is plenty of time to fill up.)

Superdome officials felt a network allowing 30,000 simultaneous connections would be just fine, given that the previous year’s Super Bowl saw only 8,260 at its peak. They were generally right, as the network performed well, even during part of the game’s power outage.

The New England Patriots installed a full-stadium Wi-Fi network this past season as well. It was never used by more than 10,000 or so people simultaneously, or by more than 16,000 people over the course of a full game. “Can 70,000 people get on the network at once? The answer to that is no,” said John Brams, director of hospitality and venues at the Patriots’ network vendor, Enterasys. “If everyone tried to do it all at once, that’s probably not going to happen.”

But as more fans bring smart devices into stadiums, activities like viewing instant replays or live camera angles available only to ticket holders will become increasingly common. It’ll put more people on the network at once and require bigger wireless pipes. So if Williams and Malik have their way, every single 49ers ticket holder will enjoy a wireless connection faster than any wide receiver sprinting toward the end zone.

“Is it really possible to give Wi-Fi to 68,500 fans at once?” I asked. I expected some hemming and hawing about how the 49ers will do their best and that not everyone will ever try to use the network at once anyway.

“Yes. We can support all 68,500,” Williams said emphatically.

How?

“How not?” he answered.

Won’t you have to limit the capacity each fan can get?

Again, absolutely not. “Within the stadium itself, there will probably be a terabit of capacity. The 68,500 will not be able to penetrate that. Our intentions in terms of Wi-Fi are to be able to provide a similar experience that you would receive with LTE services, which today is anywhere from 20 to 40 megabits per second, per user.

“The goal is to provide you with enough bandwidth that you would saturate your device before you saturate the network,” Williams said. “That’s what we expect to do.”

Fans won’t be limited by what section they’re in, either. If the 49ers offer an app that allows fans to order food from their seats, or if they offer a live video streaming app, they’ll be available to all fans.

“The mobile experience should not be limited to, ‘Hey, because you sit in a club seat you can see a replay, but because you don’t sit in a club seat you can’t see a replay,'” Malik said. “That’s not our philosophy. Our philosophy is to provide enhancement of the game experience to every fan.” (The one exception would be mobile features designed specifically for physical features of luxury boxes or club seats that aren’t available elsewhere in the stadium.)

It’s the design that counts

Current stadium Wi-Fi designs, even with hundreds of wireless access points distributed throughout a stadium, often can support only a quarter to a half of fans at once. They also often limit bandwidth for each user to prevent network slowdowns.

The Patriots offer fans a live video and instant replay app, with enough bandwidth to access video streams, upload photos to social networks, and use the Internet in general. Enterasys confirmed to Ars that the Patriots do enforce a bandwidth cap to prevent individual users from overloading the network, but Enterasys would not say exactly how big the cap is. The network has generally been a success, but some users of the Patriots app have taken to the Android app store to complain about the stadium Wi-Fi’s performance.

According to Williams, most current stadium networks are limited by a fundamental problem: sub-optimal location of wireless access points.

“A typical layout is overhead, one [access point] in front of the section, one behind the section, and they point towards each other,” he said. “This overhead design is widely used and provides enough coverage for those using the design.”

Williams would not reveal the exact layout of the 49ers’ design, perhaps to prevent the competition from catching on. How many access points will there be? “Zero to 1,500,” he said in a good-natured attempt to be both informative and vague.

That potentially doubles or quadruples the typical number of stadium access points—the Super Bowl had 700 and the Patriots have 375. But this number isn’t the most important thing. “The number of access points will not give you any hint on whether the Wi-Fi is going to be great or not,” Malik said. “Other factors control that.”

If the plan is to generate more signal strength, just adding more access points to the back and front of a section won’t do that.

The Santa Clara Stadium design “will be unique to football stadiums,” Williams said. “The access points will be spread and distributed. It’s really the best way to put it. Having your antennas distributed evenly around fans.” The 49ers are testing designs in Candlestick Park and experimenting with different access points in a lab. The movement of fans and the impact of weather on Wi-Fi performance are among the factors under analysis.

“Think of a stadium where it’s an open bowl, it’s raining, people are yelling, standing, how do you replicate that in your testing to show that if people are jumping from their seats, how is Wi-Fi going to behave, what will happen to the mobile app?” Malik said. “There is a lot that goes on during a game that is hard to replicate in your conceptual simulation testing. That is one of the big challenges where we have to be very careful.”

“We will make great use of Candlestick over the next year as we continue to test,” Williams said. “We’re evaluating placement of APs and how that impacts RF absorption during the game with folks in their seats, with folks out of their seats.”

Wi-Fi will be available in the stands, in the suites, in the walkways, in the whole stadium. The team has not yet decided whether to make Wi-Fi available in outdoor areas such as concourses and parking lots.

The same could theoretically be done at the 53-year-old Candlestick Park, even though it was designed decades before Wi-Fi was invented. Although the stadium serves as a staging ground for some of the 49ers’ wireless network tests, public access is mainly limited to premium seating areas and the press box.

The reason Wi-Fi in Candlestick hasn’t been expanded is a practical one. With only one year left in the facility, the franchise has decided not to invest any more money in its network. But Williams said 100 percent Wi-Fi coverage with no bandwidth caps could be done in any type of stadium, no matter how old. He says the “spectrum shortage” in stadiums is just a myth.

With the new stadium still undergoing construction, it was too early for me to test anything resembling Santa Clara Stadium’s planned Wi-Fi network. For what it’s worth, I was able to connect to the 49ers’ guest Wi-Fi in their offices with no password, and no problems.

The 2.4GHz problem

There is one factor preventing better stadium Wi-Fi that even the 49ers may not be able to solve, however. Wi-Fi works on both the 2.4GHz and 5GHz bands. Generally, 5GHz is better because it offers more powerful signals, less crowded airwaves and more non-overlapping channels that can be devoted to Wi-Fi use.

The 2.4GHz band has 11 channels overall and only three that don’t overlap with each other. By using somewhat unconventionally small 20MHz channels in the 5GHz range, the 49ers will be able to use about eight non-overlapping channels. That’s despite building an outdoor stadium, which is more restricted than indoor stadiums due to federal requirements meant to prevent interference with systems like radar.

Each 49ers access point will be configured to offer service on one channel, and access points that are right next to each other would use different channels to prevent interference. So even if you’re surrounding fans with access points, as the 49ers plan to, they won’t interfere with each other.
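The channel plan itself is conceptually simple. As a toy sketch (not the 49ers’ actual design, with illustrative channel lists and made-up access point names), handing out non-overlapping channels round-robin guarantees that APs adjacent in the layout never share one:

```python
# Toy channel plan: assign non-overlapping channels round-robin so that
# adjacent access points never share a channel. Channel lists are
# illustrative; outdoor 5GHz availability depends on regulatory rules.
NON_OVERLAPPING_2_4GHZ = [1, 6, 11]
NON_OVERLAPPING_5GHZ_20MHZ = [36, 40, 44, 48, 149, 153, 157, 161]

def assign_channels(ap_ids, channels):
    """Map each AP (listed in physical order) to a channel, round-robin."""
    return {ap: channels[i % len(channels)] for i, ap in enumerate(ap_ids)}

section_aps = [f"AP-{n:03d}" for n in range(1, 13)]      # made-up AP names
for ap, channel in assign_channels(section_aps, NON_OVERLAPPING_5GHZ_20MHZ).items():
    print(f"{ap}: 5GHz channel {channel}")
```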

But what if most users’ devices are only capable of connecting to the limited and crowded 2.4GHz band? Enterasys said 80 percent of Patriots fans connecting to Wi-Fi this past season did so from devices supporting only the 2.4GHz band, and not the 5GHz one.

“You have to solve 2.4 right now to have a successful high-density public Wi-Fi,” Brams said.

The iPhone 5 and newer Android phones and tablets do support both the 2.4GHz and 5GHz bands, however. Williams said by the time Santa Clara Stadium opens in 2014, he expects 5GHz-capable devices to be in much wider use.

When asked if the 49ers would be able to support 100 percent of fans if most of them can only connect to 2.4GHz, Williams showed a little less bravado.

“For those 2.4 users we will certainly design it so that there’s less interference,” he said. “It is a more dense environment if you are strictly constrained in 2.4, but we are not constrained in 2.4. We’re not trying to answer the 2.4 problem, because we have 5 available.”

“It’s 2013, we have another year and a half of iteration,” he also said. “We’ll probably be on, what, the iPhone 7 by then? The move to 5GHz really just makes us lucky. We’re doing this at the right time.”

Building a stadium in Facebook’s image

Williams and Malik both joined the 49ers last May. Malik was hired first, and then brought his old Facebook friend, Williams, on board. Malik had been the head of IT at Facebook, while Williams was the website’s first network engineer and later a director. They both left Facebook, basically because they felt there was nothing left for them to accomplish. Williams did some consulting, and Malik initially planned to take some time off.

Williams was a long-time 49ers season ticket holder, but that was far from the only thing that sold him on coming to the NFL.

“I had been looking for something challenging and fun again,” Williams said. “Once you go through an experience like Facebook, it’s really hard to find something that’s similar. When Kunal came to me, I remember it like it was yesterday. He said, ‘If you’re looking for something like Facebook you’re not going to find it. Here’s a challenge.'”

“This is an opportunity to change the way the world consumes live sports in a stadium,” Malik said. “The technology problems live sports has today are unsolved and no one has ever done what we are attempting to do here. That’s what gets me out of bed every day.”

Williams and Malik have built the 49ers’ network in Facebook’s image. That means each service—Wi-Fi, point-of-sale, IPTV, etc.—gets its own autonomous domain, a different physical switching system to provide it bandwidth. That way, problems or slowdowns in one service do not affect another one.

“It’s tribal knowledge that’s only developed within large content providers, your Facebooks, your Googles, your Microsofts,” Williams said. “You’ll see the likes of these large content providers build a different network that is based on building blocks, where you can scale vertically as well as horizontally with open protocols and not proprietary protocols.

“This design philosophy is common within the content provider space but has yet to be applied to stadiums or venues. We are taking a design we have used in the past, and we are applying it here, which makes sense because there is a ton of content. I would say stadium networks are 10 years behind. It’s fun for us to be able to apply what we learned [at Facebook].”

The 49ers are still evaluating what Wi-Fi equipment they will use. The products available today would suit them fine, but by late 2014 there will likely be stadium-class access points capable of using the brand-new 802.11ac protocol, which allows greater throughput in the 5GHz range than the widely used 802.11n. 11ac consumer devices are rare today, but the 49ers will use 802.11ac access points to future-proof the stadium if appropriate gear is available. 11ac is backwards compatible with 11n, so supporting the new protocol doesn’t leave anyone out—the 49ers also plan to support previous standards such as 11a, 11b, and 11g.

802.11ac won’t really become crucial until 802.11n’s 5GHz capabilities are exhausted, said Daren Dulac, director of business development and technology alliances at Enterasys.

“Once we get into 5GHz, there’s so much more capacity there that 11ac doesn’t even become relevant until we’ve reached capacity in the 5GHz range,” he said. “We really think planning for growth right now in 5GHz is acceptable practice for the next couple of years.”

Santa Clara Stadium network construction is expected to begin in Q1 2014. Many miles of cabling will support the “zero to 1,500” access points, which connect back to 48 server closets or mini-data centers in the stadium that in turn tie back to the main data center.

“Based on service type you plug into your specific switch,” Williams said. “If you’re IPTV, you’re in an IPTV switch, if you’re Wi-Fi you’re in a Wi-Fi switch. If you’re in POS [point-of-sale], you’re in a POS switch. It will come down to a Wi-Fi cluster, an IPTV cluster, a POS cluster, all autonomous domains that are then aggregated by a very large fabric, that allows them to communicate lots of bandwidth throughput, and allows them to communicate to the Internet.”

Whereas Candlestick Park’s network uses Layer 2 bridging—with all of the Wi-Fi nodes essentially on a single LAN— Santa Clara Stadium will rely on Layer 3 IP routing, turning the stadium itself into an Internet-like network. “We will be Layer 3 driven, which means we do not have the issue of bridge loops, spanning tree problems, etc.,” Williams said.

Keeping the network running smoothly

Wireless networks should be closely watched during games to spot interference from unauthorized devices and to identify usage trends that might prompt changes to access points. At the Patriots’ Gillette Stadium, management tools show bandwidth usage, the number of fans connected to each access point, and even what types of devices they’re using (iPhone, Android, etc.). If an access point was overloaded by fans, network managers would get an alert. Altering radio power, changing antenna tilt, or adding radios may be required, but generally any major changes are made between games.

Dashboard view of the Patriots’ in-game connectivity. (Image: Enterasys)
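The underlying check is simple in principle. Below is a minimal, hypothetical sketch of an overload alert of the kind described above: if the client count reported for any access point exceeds a threshold, flag it. The AP names, counts, and threshold are invented; a real deployment would poll these numbers from the wireless controller or a vendor dashboard such as the Enterasys one shown above.

```python
# Hypothetical sketch of an AP overload check. Client counts and the alert
# threshold are invented; a real system would poll them from the WLAN
# controller (for example over SNMP or a vendor API).

CLIENTS_PER_AP_ALERT = 50  # assumed threshold, not a vendor default

ap_client_counts = {
    "AP-section-115": 34,
    "AP-section-116": 61,
    "AP-catwalk-300": 12,
}

def overloaded_aps(counts, threshold=CLIENTS_PER_AP_ALERT):
    """Return (AP, client count) pairs whose count exceeds the threshold."""
    return [(ap, n) for ap, n in counts.items() if n > threshold]

for ap, n in overloaded_aps(ap_client_counts):
    print(f"ALERT: {ap} has {n} clients associated (threshold {CLIENTS_PER_AP_ALERT})")
```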

“In terms of real-time correction, it depends on what the event is,” said John Burke, a senior architect at Enterasys. “Realistically, some of these APs are overhead. If an access point legitimately went down and it’s on the catwalk above 300 [the balcony sections] you’re not going to fix that in the game. That’s something that would have to wait.”

So far, the Patriots’ capacity has been enough. Fans have yet to overwhelm a single access point. Even if they did, there is some overlap among access points, allowing fans to get on in case one AP is overloaded (or just broken).

The 49ers will use similar management tools to watch network usage and adjust access point settings in real time during games. “We expect to overbuild and actually play with things throughout,” Williams said. “Though we are building the environment to support 100 percent capacity, we do not expect 100 percent capacity to be used, so we believe we will be able to move resources around as needed [during each game].”

The same sorts of security protections in place in New England will be used in Santa Clara. Business systems will be password-protected and encrypted, and there will be encrypted tunnels between access points and the back-end network. While that level of protection won’t extend to the public network, fans shouldn’t be able to attack each other, because peer-to-peer connections will not be allowed.

What if the worst happens and the power goes out? During the Super Bowl’s infamous power outage, Wi-Fi did stay on for at least a while. Williams and Malik acknowledged that no system is perfect, but they said that they plan for Wi-Fi uptime even if power is lost.

“We have generators in place, and we’ll have UPS systems, so from a communications standpoint our plan is to keep all the communication infrastructure up and online [during outages],” Williams said. “But all of this stuff is man-made.”

A small team that does it all

Believe it or not, the 49ers have a tech team of fewer than 10 people, yet the organization is designing and building everything itself. Sports teams often outsource network building to carriers or equipment vendors, but not the 49ers. Besides building its own Wi-Fi network, the team will build a carrier-neutral distributed antenna system to boost cellular signals within the stadium.

“We are control freaks,” Williams said with a laugh. He explained that doing everything themselves makes it easier to track down problems, accept responsibility, and fix things. They also feel the need to take ownership of the project because none of the existing networks in the rest of the league approach what they want to achieve. There is a lot of low-hanging fruit, they think, just from solving the easy problems other franchises haven’t addressed.

Not all the hardware must be in-house, though. The 49ers will use cloud services like Amazon’s Elastic Compute Cloud when it makes sense.

“Let’s say we want to integrate a POS system with ordering,” Malik said. “If you have an app that lets you order food, and there’s a point of sale system, all the APIs and integration need to sit in the cloud. There’s no reason for it to sit in our data center.”

There are cases where the cloud is clearly not appropriate, though. Say the team captures video on site and distributes it to fans’ devices—pushing that video to a faraway cloud data center in the middle of that process would slow things down dramatically. And ultimately, the 49ers have a greater vision than just providing Wi-Fi to fans.

When I toured a preview center meant to show off the stadium experience to potential ticket buyers, a mockup luxury suite had an iPad embedded in the wall with a custom application for controlling a projector. That provides a hint of what the 49ers might provide.

“Our view is whatever you have at home you should have in your suite,” Williams said. “If that means there’s an iPad on the wall or an application you can use, hopefully that’s available. Your life should be much easier in this stadium.”

And whatever applications are built should be cross-platform. As Malik said, the 49ers are moving away from proprietary technologies to standards-based systems so they can provide nifty mobile features to fans regardless of what device they use.

Williams and Malik are already working long hours, and their jobs will get even more time-intensive when network construction actually begins. But they wouldn’t have it any other way—particularly the longtime season ticket holder Williams.

When work is “tied to something that you love deeply, which is sports, and tied to your favorite team in the world, that’s awesome,” Williams said. “I’m crazy about it, man. I get super passionate.”

Source:  arstechnica.com

911 tech pinpoints people in buildings—but could disrupt wireless ISPs

Tuesday, March 19th, 2013

FCC decision could wreak havoc on ISPs, baby monitors, and smart meters.

As cell phones replace landlines, it is becoming harder to accurately locate people who call 911 from inside buildings. If a person having a heart attack on the 30th floor of a giant building can call for help but is unable to speak their location, actually finding that person from cell phone and GPS location data is a challenge for emergency responders.

Thus, new technologies are being built to accurately locate people inside buildings. But a system that is perhaps the leading candidate for enhanced 911 geolocation is also controversial because it uses the same wireless frequencies as wireless Internet Service Providers, smart meters, toll readers like EZ-Pass, baby monitors, and various other devices.

NextNav, the company that makes the technology, is seeking permission from the Federal Communications Commission to start commercial operations. More than a dozen businesses and industry groups oppose NextNav (which holds FCC licenses through a subsidiary called Progeny), saying the 911 technology will wipe out devices and services used by millions of Americans.

Harold Feld, legal director for Public Knowledge, a public interest advocacy group for copyright, telecom, and Internet issues, provided the best summary of these FCC proceedings in a very long and detailed blog post:

Depending on whom you ask, the Progeny Waiver will either (a) totally wipe out the smart grid industry, annihilate wireless ISP service in urban areas, do untold millions of dollars of damage to the oil and gas industry, and wipe out hundreds of millions (possibly billions) of dollars in wireless products from baby monitors to garage door openers; (b) save thousands of lives annually by providing enhanced 9-1-1 geolocation so that EMTs and other first responders can find people inside apartment buildings and office complexes; (c) screw up EZ-Pass and other automatic toll readers, which use neighboring licensed spectrum; or (d) some combination of all of the above.

That’s not bad for a proceeding you probably never heard about.

All eyes on the FCC

While the Progeny proceeding has flown under the radar, the FCC may be inching toward a decision. The FCC’s public meeting next Wednesday will tackle the problem of improving 911 services. Feld says the FCC seems to be close to making a decision, although the FCC itself did not respond to our requests for comment this week. All the public documents related to the proceeding are available on the FCC website.

NextNav’s website says the company “was founded in 2007 to solve the indoor positioning problem.” But it has no revenue today, and it won’t unless the FCC approves its application or it finds another line of business.

The Wireless Internet Service Providers Association (WISPA) is worried that the FCC will rule in Progeny’s favor, despite tests that WISPA and others believe prove Progeny service would degrade performance of many existing devices or render them unusable altogether.

“The FCC appears poised to completely disregard technical reality, disregard the record in their own proceeding and give final approval to Progeny to do something that’s going to be very disruptive to the band that’s been in use for 20 years harmoniously by millions of users,” Jack Unger, WISPA’s technical consultant, told Ars.

The band in question is 902-928MHz. This band is similar to Wi-Fi in that it permits many unlicensed uses such as the ones mentioned earlier in this article. It also permits a select few licensed uses, including Progeny’s M-LMS (Multilateration Location and Monitoring Service), which forms the backbone of its enhanced 911 service.

NextNav has already set up a network of roughly 60 transmitters to cover a 900-square-mile area including San Francisco, Oakland, and San Jose, NextNav CEO Gary Parsons told Ars in a phone interview. NextNav has begun deployments in the rest of the top 40 markets in the country, but the Bay Area is the only one fully built out.

“We’ve been actually broadcasting in the San Francisco and Silicon Valley area now, portions of it, for over three years,” Parsons said. NextNav has FCC licenses allowing it to transmit, but it needs a further approval in order to begin commercial operations, he said.

Progeny technology may not solve the 911 problem

For Progeny’s system to work, the GPS chips in the next generation of cell phones would need to be slightly modified to allow communication with the Progeny network. That’s just a software upgrade, but one that has to be done before a phone is built, Parsons said.

Why is this necessary? GPS is good at locating people outside, but not indoors, Parsons said. “What we bring to the party is a location accuracy that is much more precise than that which is currently available, with the ability to identify vertically what floor you’re on,” Parsons said. “It’s one thing knowing what block you’re in, but if you’re trying to send someone to a heart attack victim on the 89th floor of the Chrysler Building in New York, you better hope they can tell you where they are.”

Results for the Progeny system are promising, but perhaps not enough so to declare it a winner. An FCC advisory committee known as CSRIC (Communications Security, Reliability, and Interoperability Council) gave Progeny high marks compared to contenders Qualcomm and Polaris in a report dated February 19, 2013. (Unger provided a copy of this report to Ars.)

Progeny claims horizontal accuracy to within 20 meters and vertical accuracy to within 2 meters. But the CSRIC report said that even today’s best technology isn’t consistent enough.

“[P]rogress has been made in the ability to achieve significantly improved search rings in both a horizontal and vertical dimension,” CSRIC wrote. “However, even the best location technologies tested have not proven the ability to consistently identify the specific building and floor, which represents the required performance to meet Public Safety’s expressed needs. This is not likely to change over the next 12-24 months. Various technologies have projected improved performance in the future, but none of those claims have yet been proven through the test bed process.”

One set of test results, interpreted in many different ways

NextNav and its opponents collaborated on a series of tests to determine how the Progeny system would interact with WISP signals and smart meters. The test results were released last October. The numbers themselves aren’t in dispute, but each side interprets them very differently.

Among Progeny’s opposition is the Part 15 Coalition (Part 15 of the FCC rules regulates the operation of low-power devices on unlicensed spectrum). Besides those already mentioned, Part 15 technology includes devices that monitor the safety of gas and oil pipelines, hearing aids, Plantronics headsets, and emergency response devices made by Inovonics, said Henry Goldberg, counsel for the Part 15 Coalition.

Goldberg told Ars that the Progeny system has an 80 percent duty cycle (meaning it operates 80 percent of the time), and that Progeny’s 30-watt transmissions would overwhelm the 1-watt transmissions used by numerous Part 15 devices.

This is one example of where the two sides interpret the same results differently. Parsons said each Progeny transmitter operates only 10 percent of the time, explaining that the 80 percent figure is true only when you add up the transmissions of devices within range of each other.
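The arithmetic behind that disagreement is straightforward. Here is a rough sketch using the figures both sides cite: each beacon is on about 10 percent of the time, but a receiver within range of eight time-slotted beacons sees the channel occupied far more often. The beacon count is assumed for illustration; how many beacons a given receiver actually hears, and how strongly, is exactly what the parties dispute.

```python
# Rough illustration of the duty-cycle dispute. The per-beacon duty cycle and
# beacon count come from figures quoted in the filings; whether distant, weaker
# beacons should count at all is exactly what the two sides argue about.

PER_BEACON_DUTY = 0.10   # each Progeny transmitter is on ~10% of the time
BEACONS_IN_RANGE = 8     # assumed number of beacons a fixed outdoor receiver hears

# If the beacons transmit in non-overlapping time slots, their duty cycles
# simply add up from the receiver's point of view.
cumulative = min(1.0, PER_BEACON_DUTY * BEACONS_IN_RANGE)
print(f"Cumulative duty cycle seen by the receiver: {cumulative:.0%}")  # 80%
```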

“What [Progeny opponents] generally fail to note is the ones they are seeing that are far away have a very weak signal coming in,” Parsons said. “They might see one or two strong ones and six more that are miles away and at a much lower intensity level.”

Progeny operates on a total of 4MHz in the 902-928MHz band, roughly within 920-922MHz and 926-928MHz, he said. Smart meters that periodically report statistics to utilities could occasionally miss a transmission due to interference from Progeny but get the data through the next time by hopping frequencies, Parsons said.

Smart meter maker Itron said in a recent filing that “Any radio receiver mounted outdoors is subject to multiple beacons, experiencing the effect of cumulative duty cycles which, as Itron has shown, would be 80-90% in densely deployed areas… testing shows that unlicensed devices cannot co-exist with the Progeny system on its frequencies, which means that, at the very least, Progeny’s operations will take away 4 MHz of spectrum from unlicensed use, that the compression effect will further degrade use of the remaining spectrum, and that some users will experience greater loss of the spectrum.”

Unger believes wireless Internet service providers are at the greatest risk of interference from the Progeny system. WISPs serve more than 3 million users nationwide, with perhaps a quarter of them on the 900MHz band, he said.

This service is primarily for rural areas where customers have no other options. WISP speed is already low, from 500Kbps to 3 or 4 Mbps, and at its worst, interference from Progeny could reduce download speeds by 47.9 percent and upload speeds by 41.5 percent, Unger said. Besides lower speeds, interference could result in lost connections, he said.

The interference isn’t really that bad, Progeny says

Progeny put a more positive spin on those numbers in a filing. “In two of the co-frequency tests the BWA [broadband wireless access] throughput reduction did reach 47.9 and 49 percent,” Progeny said. “Most of the co-frequency tests documented much lower levels of BWA throughput reduction, however, with two co-frequency tests documenting reductions of just 2.5 and 8.3 percent. In fact, when the two worst case outliers are excluded from the results, the average throughput reduction for even the co-frequency tests drops to 16.33 percent.”

Unger said the WISPs’ 4-watt transmissions would be wiped out by Progeny’s much stronger ones. The 30 watts used by Progeny is measured in ERP (Effective Radiated Power), whereas the WISPs’ 4 watts is measured in EIRP (Effective Isotropic Radiated Power). Ultimately, this means the WISPs use 4 watts compared to Progeny’s 49.2 watts when measured on the same scale, Unger said.
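The conversion Unger is describing is a standard one: ERP is referenced to a half-wave dipole and EIRP to an isotropic radiator, and the difference is the dipole’s roughly 2.15dB of gain, a factor of about 1.64. A quick sketch of the arithmetic, using the figures quoted above:

```python
# ERP-to-EIRP conversion: a half-wave dipole has about 2.15dB of gain over an
# isotropic radiator, so EIRP = ERP * 10**(2.15 / 10), roughly ERP * 1.64.

def erp_to_eirp(erp_watts):
    return erp_watts * 10 ** (2.15 / 10)

progeny_eirp = erp_to_eirp(30)  # Progeny's 30W ERP
print(f"Progeny: {progeny_eirp:.1f}W EIRP vs. the WISPs' 4W EIRP")  # ~49.2W EIRP
```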

Although Progeny uses just 4MHz of spectrum, it’s placed in such a way as to wipe out two of the three usable channels in the 902-928MHz band, WISPA argued in its most recent filing.

Progeny further counters that WISP equipment already has to avoid interference from other unlicensed devices, and that their networks could be designed to avoid interference from Progeny. “What the joint tests do show is that the impact of Progeny’s M-LMS network on BWA equipment is highly variable and can be affected significantly by the configuration of the BWA link, the choice and placement of antennas, and the proximity and direction toward Progeny’s M-LMS beacons,” Progeny wrote. “The test results also demonstrate that the impact of Progeny’s M-LMS network on BWA equipment is only a small fraction of the degradation that BWA networks already routinely experience from other users of the 902-928 MHz band.”

Parsons argued that the existence of Progeny’s network in the Bay Area without any complaints proves that it can co-exist with unlicensed devices. “It’s not like we’re asking to light up a network,” he said. “All of this interference potential that some of these parties are making political points about are not there in practical impact, because we’ve been operating for years.”

Unger counters that interference could have gone undetected during those years of operation, because Progeny didn’t actually test for interference with existing devices except when the FCC demanded it. Of course, Progeny’s opponents believe those tests prove the network would disrupt devices in the band, and the opponents are numerous.

One Progeny, many opponents

Businesses and organizations filing opposition against Progeny, or at least demanding further testing, include Plantronics, Google, the Utilities Telecom Council, the Maryland Transportation Authority, the National Association of Regulatory Utility Commissioners, smart grid company Landis & Gyr, Inovonics, the New America Foundation, the American Petroleum Institute, the Alarm Industry Communications Committee, several individual utilities, EZ-Pass, and others.

A few members of Congress have weighed in. US Rep. Anna Eshoo (D-CA) wrote favorably of Progeny’s ability to improve 911 services but acknowledged that more work may be required to prevent interference with “other important spectrum users.”

US Sen. Maria Cantwell (D-WA) and Sen. Amy Klobuchar (D-MN) opposed Progeny’s request for a waiver.

“There are like 60 companies saying to the FCC there is a real problem and Progeny is the only one saying there’s no problem,” Unger said. “I’ve never seen this kind of an unbalanced record before.”

Goldberg said the Progeny proceeding reminds him of LightSquared, which wanted to build a nationwide LTE network but failed to gain FCC approval because of interference with GPS devices. Goldberg, who was counsel to LightSquared in that case, believes the Progeny proceeding could have a more favorable outcome for both sides if Progeny is willing to compromise.

“As originally proposed and as currently pushed by Progeny, it can’t live together with Part 15,” Goldberg said. “There’s a way for there to be more compatibility between Part 15 and Progeny but that means Progeny has to use lower power and less of a duty cycle. It has to look more like a Part 15 device.”

Parsons contends that Progeny has already compromised by using only one-way transmissions and using a duty cycle of 10 percent on each transmitter. “There was no need for us to put a 10 percent duty cycle in. We already gave up 90 percent,” he said. If Progeny reduced its power output to 4 watts, “we would have to put a lot more beacons, which frankly I’m not sure it really improves the situation much.”

The FCC will have to sort it all out. But the outcome seems to be up in the air because the FCC has not yet defined what an “unacceptable” level of interference would be in this case. The stakes are high for NextNav, for all its opponents, and for the millions of people using devices and services in the 902-928MHz band.

The Progeny case is also a perfect example of just how complicated an FCC proceeding can be. As Feld wrote, “For me, the Progeny Waiver is a microcosm of why it has become so damn hard to repurpose spectrum for new uses.”

Source:  arstechnica.com

National Vulnerability Database taken down by vulnerability-exploiting hack

Thursday, March 14th, 2013

The federal government’s official catalog of software vulnerabilities was taken offline after administrators discovered two of its servers had been compromised. By malware. That exploited a software vulnerability.

The National Vulnerability Database is maintained by the National Institute of Standards and Technology and has been unavailable since late last week, according to an e-mail sent by NIST official Gail Porter published on Google+. At the time of this article on Thursday afternoon, the database remained down and there was no indication when service would be restored.

“On Friday March 8, a NIST firewall detected suspicious activity and took steps to block unusual traffic from reaching the Internet,” Porter wrote in the March 14 message. “NIST began investigating the cause of the unusual activity and the servers were taken offline. Malware was discovered on two NIST Web servers and was then traced to a software vulnerability.”

There’s no evidence that any NIST pages were used to infect people visiting the site. Ars has e-mailed Porter for further details, and this post will be updated if additional information is available.

The infection is a graphic reminder that just about anyone running just about any complex system can be compromised. The hack was reported earlier by The Register.

Source:  arstechnica.com

AT&T Labs pumps 495Gbps of data over 7,400 miles

Thursday, March 14th, 2013

It’s a new record for long-distance network speeds.

AT&T Labs researchers are set to present data from a recent test that set a record for long-distance network speeds at the Optical Fiber Communication and National Fiber Optics Engineers Conference (OFC/NFOEC) on March 19.

The researchers, led by AT&T researcher Xiang Zhou, have perfected a technology that allows existing 100 gigabit-per-second fiber connections to be used to transmit over four times as much data. When used with a new low-loss optical fiber, they can sustain that data rate at distances of over 7,400 miles. The new technology could dramatically increase the amount of bandwidth on the Internet’s backbone, especially over the submarine cables that connect the continents.

The transmission system developed by Xiang Zhou’s team uses a new modulation technique that allows the signal to be tuned to get the most out of the available bandwidth. In the test data being presented next week, the researchers used a recirculating transmission test platform built from 100-kilometer fiber segments to demonstrate that they could multiplex eight 495Gbps wavelength-division multiplexed signals, spaced 100GHz apart, over a distance of 12,000km (roughly 7,456 miles).

In a statement published by the organizers of OFC/NFOEC, Xiang Zhou said, “This result not only represents a reach increase by a factor of 2.5 for 100 GHz-spaced 400 G-class WDM systems, it also sets a new record for the product of spectral efficiency and distance.” A previous test by Zhou’s team, using 50GHz spacing, was the previous record-holder for 400-gigabit transmission distance—at 3,000 kilometers.
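To put the spectral-efficiency-times-distance metric in concrete terms, here is the back-of-the-envelope arithmetic implied by the figures above; the product value is derived here for illustration, not quoted from AT&T.

```python
# Back-of-the-envelope math from the figures in the article. The resulting
# product is derived here for illustration, not a number quoted by AT&T.

data_rate_bps = 495e9        # 495Gbps per WDM channel
channel_spacing_hz = 100e9   # 100GHz grid spacing
distance_km = 12_000

spectral_efficiency = data_rate_bps / channel_spacing_hz  # bits/s/Hz
product = spectral_efficiency * distance_km               # (bits/s/Hz) * km

print(f"Spectral efficiency: {spectral_efficiency:.2f} bits/s/Hz")    # ~4.95
print(f"Efficiency-distance product: {product:,.0f} (bits/s/Hz)*km")  # ~59,400
```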

Source:  arstechnica.com

New Google site aimed at helping webmasters of hacked sites

Wednesday, March 13th, 2013

Google wants to aid webmasters in identifying site hacks and recovering from them

Google has launched a site for webmasters whose sites have been hacked, something that the company says happens thousands of times every day.

The new site features articles and videos designed to help webmasters identify, diagnose and recover from hacks.

The site addresses different types of ways sites can be compromised. For example, malicious hackers can break into a site and load malware on it to infect visitors, or they can flood it with invisible spam content to inflate their sites’ search rankings.

In announcing the new Help for Hacked Sites resource, Google is also reminding webmasters of best practices for prevention, including keeping all site software updated and patched and being aware of potential security issues of third-party applications and plug-ins before installing them.

This latest Google initiative builds on its efforts over the years to proactively detect malware on sites it indexes and alert both users of its search engine and affected webmasters.

Since Google is the main tool people worldwide use to find and link to websites, it is in Google’s best business interest to make sure it isn’t pointing users of its search engine to malicious or compromised Web destinations.

As part of these efforts, Google also has been a supporter since 2006 of the nonprofit StopBadware organization, which creates awareness about compromised sites and provides informational resources to end users, webmasters and Internet hosts.

However, the problem remains, and it continues to be complicated for many webmasters to solve, since ridding a site of malware often requires advanced IT knowledge and outside help.

A StopBadware/Commtouch survey last year of webmasters whose sites had been hacked showed that 26 percent had been unable to undo the damage, and 2 percent had opted to give up and abandon the compromised site.

Source:  networkworld.com

New Microsoft patch purges USB bug that allowed complete system hijack

Tuesday, March 12th, 2013

Hole allowed USB-connected drives to infect machines with malware à la Stuxnet.

Microsoft has plugged a hole in its Windows operating system that allowed attackers to use USB-connected drives to take full control of a targeted computer.

Microsoft said it classified the vulnerability as “important,” a less severe rating than “critical,” because exploits require physical access to the computer being attacked. While that requirement makes it hard for hacks to spread online, readers should bear in mind that the vulnerability in theory allows attackers to carpet bomb conferences or other gatherings with booby-trapped drives that, when plugged into a vulnerable computer, infect it with malware. Such vulnerabilities also allow attackers to penetrate sensitive networks that aren’t connected to the Internet, in much the way the Stuxnet worm that targeted Iran’s nuclear program did.

“When you look at it in the sense of a targeted attack, it does make the vulnerability critical,” Marc Maiffret, CTO of BeyondTrust, told Ars. “Because of things like Stuxnet raising awareness around the physical aspect of planting USB drives or having people to take these things into facilities, it does make it critical.”

According to Microsoft, the MS13-027 series of vulnerabilities can be exploited when a maliciously formatted USB drive is inserted into a computer. When Windows drivers read a specially manipulated descriptor, the system will execute attack code with the full permissions of the operating system kernel.

“Because the vulnerability is triggered during device enumeration, no user intervention is required,” Microsoft Security Response Center researchers Josh Carlson and William Peteroy wrote in a blog post. “In fact, the vulnerability can be triggered when the workstation is locked or when no user is logged in, making this an un-authenticated elevation of privilege for an attacker with casual physical access to the machine.”

Over the past few years, Microsoft has closed a variety of security holes related to USB hard drives. In addition to fixing the LNK file vulnerability that allowed Stuxnet to infect machines when a stick was plugged in, company engineers have also reworked the autorun feature that used to automatically open a window each time a removable drive was connected. Hackers had long abused the feature to display options that would say things like “open folder to view files” but install malware when clicked instead.

MS13-027 is one of seven bulletins Microsoft issued as part of this month’s Patch Tuesday. (The company releases fixes on the second Tuesday of each month.) In all, the bulletins fixed 20 separate vulnerabilities in Internet Explorer, Silverlight, Visio Viewer, SharePoint, OneNote, and Outlook. While the USB patch isn’t among the four bulletins rated critical, readers might consider it urgent nonetheless.

Source:  arstechnica.com

Don’t auction off empty TV airwaves, SXSW activists tell FCC

Tuesday, March 12th, 2013

Activists at the South by Southwest Interactive festival in Austin, TX, built a free wireless network to help publicize the power of unlicensed “white spaces” technology. The project is part of a broader campaign to persuade the FCC not to auction off this spectrum for the exclusive use of wireless carriers.

Almost everyone agrees that until recently, the spectrum allocated for broadcasting television channels was used inefficiently. In less populous areas, many channels sat idle. And channels were surrounded by “guard bands” to prevent adjacent channels from interfering with each other. A coalition that includes technology companies such as Google and Microsoft and think tanks such as the New America Foundation has been lobbying the FCC to open this unused spectrum up to third parties.

The proposal initially faced fierce opposition from broadcasters, but they dropped their opposition after reaching a compromise with the FCC last year. As a result, the FCC recently opened up white space frequencies to unlicensed uses.

Now debate has shifted to a new question: whether to auction off some of these white space frequencies for the exclusive use of private wireless companies. Supporters of the auction approach argue that incumbent wireless providers could use the spectrum to improve their networks. And they point out that the auctions would generate much-needed cash for the federal treasury.

“We ♥ WiFi”

But advocates of unlicensed uses say the spectrum will generate more value if the FCC leaves it open for unlicensed uses. They point to the success of Wi-Fi, which is now embedded in billions of electronic devices and allows people to communicate wirelessly without subscribing to a wireless service.

Enter the “We ♥ WiFi” project. Austin has 14 vacant television channels that are now open for use by white space devices. So during this weekend’s South by Southwest Interactive confab, activists set up a wireless network designed to showcase the technology’s potential.

The “white space” networking gear they used doesn’t have any official connection to the Wi-Fi standard. But the brand has become so well-known that “super Wi-Fi” has become a shorthand for describing unlicensed white space technologies. These devices are permitted to operate at higher power levels than conventional Wi-Fi, making them suitable for longer-distance applications than would be possible with a conventional Wi-Fi network.

Of course, the network would only be useful to conference-goers if the “last hop” of the network was a Wi-Fi link. But these Wi-Fi access points were connected to the rest of the Internet using “white space” gear.

Nick Grossman, an activist in residence at Union Square Ventures and a visiting scholar in the Center for Civic Media at the MIT Media Lab, was an organizer of the project.

“FCC engineers have made clear that the most promising spectrum for broadband wireless is at or below 2.7GHz, where today’s Wi-Fi operates,” Grossman said. “These frequencies are able to pierce through walls and buildings the same way that TV signals do. There is currently five times more broadband spectrum in this range reserved for exclusive licensed use than for unlicensed. The FCC is now considering whether to make this imbalance worse.”

Grossman says the event generated 1,000 signatures asking the FCC not to auction off the unlicensed frequencies for the exclusive use of private network operators. He and other supporters of unlicensed spectrum believe that entrepreneurs will be able to find new and creative ways to use the spectrum, but only if it’s left open for anyone to use. They’re asking people to sign a petition to the FCC urging the agency to “follow through on your proposal to open up a large slice of high-quality spectrum for open networks.”

Source:  arstechnica.com

China’s new internet backbone explained: verified sources, IPv6 at the core

Monday, March 11th, 2013

While most of the world is still coming to grips with malware and weaning itself off of IPv4, we’re just learning that China has been thinking further ahead.

A newly publicized US Navy report reveals that China’s new internet backbone revolves around an IPv6-based architecture that leans on Source Address Validation Architecture, or SAVA.  The technique creates a catalog of known good matches between computers and their IP addresses, and blocks traffic when there’s a clear discrepancy.  The method could curb attempts to spread malware through spoofing and tackle some outbreaks automatically — and, perhaps not so coincidentally, complicate any leaps over the Great Firewall.
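As a rough illustration of the concept (not the actual SAVA protocol, whose details the report doesn’t spell out), source validation amounts to checking a packet’s claimed source address against a table of known-good bindings and dropping it on a mismatch. The port names, prefixes, and packets below are invented:

```python
# Illustrative sketch of source-address validation, not an implementation of
# SAVA itself. The port names, prefixes, and packets are invented.
import ipaddress

# Known-good bindings: which source prefix is legitimate on which ingress port.
bindings = {
    "port-1": "2001:db8:a::/48",
    "port-2": "2001:db8:b::/48",
}

def validate(ingress_port, source_addr):
    """Accept a packet only if its source falls inside the prefix bound to its port."""
    prefix = bindings.get(ingress_port)
    if prefix is None:
        return False  # unknown port: drop by default
    return ipaddress.ip_address(source_addr) in ipaddress.ip_network(prefix)

packets = [
    ("port-1", "2001:db8:a::10"),  # source matches its binding -> forwarded
    ("port-2", "2001:db8:a::10"),  # source doesn't match port-2's prefix -> dropped
]

for port, src in packets:
    verdict = "forward" if validate(port, src) else "drop (possible spoof)"
    print(f"{port} src={src}: {verdict}")
```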

Even setting aside that potential curb on civil liberties, the improved backbone could still keep network addressing and security under reasonable control as China expects more than 70 percent of its many, many homes to have broadband in the near future.

Source:  engadget.com

Telecom seeks critical infrastructure status for IT vendors

Friday, March 8th, 2013

Experts say it doesn’t matter if IT is classified because requirements will be passed on to them by the utility, telecom or defense manufacturer

The Obama administration excluded the information technology (IT) industry from its definition of the nation’s critical infrastructure, giving tech companies immunity from security-related requirements unless Congress decides otherwise.

While this is good for tech companies, the telecom industry is crying foul, saying IT businesses should share any regulatory burden.

The tech industry’s exclusion, the result of lobbying by the Software & Information Industry Association, was included in President Barack Obama’s executive order, issued last month.

In directing the Department of Homeland Security (DHS) to identify critical infrastructure, the order said DHS “shall not identify any commercial information technology products or consumer information technology services under this section.”

The executive order is meant as a framework for protecting power plants, telecommunication networks, water filtration systems, manufacturers and financial systems from cyberattacks by terrorists or hostile governments.

Congress is considering proposed legislation to require the sharing of attack information between government and companies that own or operate critical infrastructure. The Obama administration wants to include some security regulations.

Because additional regulations are possible, telecommunication companies such as Verizon Communications and AT&T want the IT industry to share the burden. They argue that some IT companies should be considered critical infrastructure, since the products and services they provide are a crucial part of communication networks and are usually the targets of hackers.

While the telecom companies did not name specific firms, candidates could include Microsoft, Google, IBM, Cisco, and other leading tech companies.

“Network security must go beyond what is traditionally considered critical infrastructure,” a Verizon spokesman said on Thursday. “The Internet ecosystem is far more interconnected and dependent on a host of players than it was even five years ago.”

Tech companies contacted by CSO Online either declined comment or did not respond.

Cybersecurity experts believe it doesn’t matter whether IT vendors are considered critical infrastructure, since whatever security requirements are handed down by the government will be passed on to them by the utility, telecom company or defense manufacturer.

“As a practical matter, commercial products won’t escape secondary regulation,” said Stewart Baker, a partner at the law firm Steptoe & Johnson and a former assistant secretary for policy at DHS.

Letting critical infrastructure owners and operators hand off security requirements also avoids having to decide which of a vendor’s products need to be regulated, Jacob Olcott, principal consultant for cybersecurity at Good Harbor Consulting, said.

Many IT vendors have products and services that span consumer, business and government markets. “Rather than apply the same security rules across the board, it is better to have the requirements fit the needs of the environment,” Olcott said.

Not all experts agreed. Paul Rosenzweig, founder of the homeland security consulting firm Red Branch Consulting, said if the government is going to impose more regulations, then it should do so for each industry that plays a part in running critical infrastructure.

However, Rosenzweig believes the government’s approach through frameworks and legislation is wrong. He favors a non-regulatory strategy that would include information sharing, creation of a civil liability regime, better education, more international engagement and a methodology for certifying hardware components.

“For me, the government shouldn’t be responsible for creating the framework,” said Rosenzweig, who is also a visiting fellow at The Heritage Foundation, a conservative think tank.

Instead, Rosenzweig favors a private sector institution building a framework that is based on common law and is a “product of the market, not of a government fiat.”

Source:  csoonline.com

Microsoft Patch Tuesday targets Internet Explorer drive-by attacks

Thursday, March 7th, 2013

Microsoft’s SharePoint, drawing application Visio get patched

Internet Explorer vulnerabilities warrant notice in this month’s set of Microsoft Patch Tuesday bulletins and need to be fixed quickly even though the sheer number of patches may seem daunting.

The weaknesses leave users open to drive-by attacks where malicious code is downloaded without the user’s knowledge while browsing. Not patching them because they are time-consuming will just widen the window of opportunity hackers have to exploit them, says Alex Horan, a senior product manager at CORE Security.

“Preventing future drive-by style attacks and protecting end-users appear to be the theme of this month’s Patch Tuesday,” Horan says. “These patches can be a hassle for users to deploy and have the potential to create a long enough delay where hackers can take advantage.”

So far the weaknesses haven’t been exploited. “Fortunately, this issue has no known attacks in the wild,” says Paul Henry, a security and forensic analyst at Lumension. “However, you should still plan to patch this immediately.”

Four of seven bulletins for March are rated critical, with the first addressing browser problems. “It fixes critical vulnerabilities that could be used for machine takeover in all versions of Internet Explorer from 6 to 10, on all platforms including Windows 8 and Windows RT,” says Qualys CTO Wolfgang Kandek.

Microsoft’s Silverlight media application framework is also critically vulnerable, according to the company’s Security Bulletin Advance Notification. It affects Silverlight whether deployed on Windows or Mac OS X operating systems, where it is used to run media applications such as Netflix, Kandek says.

This vulnerability is more of a concern to consumers because it only affects the Silverlight plug-in. Henry says plug-ins should be avoided in general. “[T]hey add another threat vector and are frequently an easy target for the bad guys,” he says.

Also in critical need of patching is Microsoft’s drawing application Visio, which comes as a surprise to Kandek. “It is puzzling to see such a high rating for this software that typically requires opening of an infected file in order for the attack to work. It will be interesting to see the attack vector for this vulnerability that warrants the ‘critical’ rating,” he says.

Critical vulnerabilities are those that could allow code execution without user interaction if they are successfully exploited. This type of exploit includes network worms, browsing to infected Web pages or opening infected emails.

The final critical vulnerability lies in SharePoint Server, Microsoft says.

Three of the bulletins are rated important and include two that could allow data to leak and one that could allow attackers to elevate privileges on an exploited machine. Important bulletins include vulnerabilities that could lead to compromised confidentiality, integrity or availability of user data, or of the integrity or availability of processing resources, Microsoft says. Such exploits may include warnings or prompts.

Source:  networkworld.com