Archive for July, 2013

Shorter, higher-speed DDoS attacks on the rise, Arbor Networks says

Tuesday, July 30th, 2013

Almost half of the distributed denial-of-service attacks monitored in a threat system set up by Arbor Networks now reach speeds of over 1Gbps. That’s up 13.5% from last year, while the portion of DDoS attacks over 10Gbps increased about 41% in the same period, Arbor says.

In addition, the Arbor Networks monitoring system, which is based on anonymous traffic data from more than 270 service providers, recorded more than twice as many attacks over 20Gbps in the second quarter of this year as occurred in all of 2012. The only metric that declined was attack duration: DDoS attacks now trend shorter, with 86% lasting less than one hour, according to the Arbor Networks trends report for the second quarter of 2013.

Jeff Wilson, principal network security analyst with Infonetics Research, says attackers have their own motivations for launching DDoS attacks, whether political or tied to organized crime, but it’s the ready availability of botnets for hire and crowd-sourced attack tools that gives them the easy means.

Separately, FireHost, a Dallas company that builds security defenses into its web-hosting service, issued its own findings on cyberattacks detected over the second quarter.

FireHost says its customers were the targets of about 24 million attacks of various types. About 3.6 million of these blocked cyberattacks were aimed at compromising websites through what’s known as SQL Injection, Cross-Site Request Forgery (CSRF), Directory Traversal and Cross-Site Scripting (XSS). This represents an increase in web-compromising attacks of this type from the 3.4 million seen in the first quarter, FireHost says.

In the second quarter, the number of CSRF attacks rose 16% over the previous quarter, and SQL Injection attacks rose 28%. However, the XSS attacks, which involve the insertion of malicious code into webpages to manipulate visitors, remained the most prevalent attack type. FireHost says sometimes attacks are “blended” with other exploits and automated.
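To make the SQL Injection category concrete, the minimal Python sketch below (not from FireHost’s report; the table, column and payload are invented) shows how unsanitized input changes a query’s meaning, and how parameter binding prevents it:

```python
# Illustration only: how SQL Injection changes a query, and how binding stops it.
# Table, column and payload are invented; uses Python's built-in sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"  # classic injection payload

# Vulnerable: user input is concatenated straight into the SQL text.
vulnerable = f"SELECT * FROM users WHERE name = '{user_input}'"
print(conn.execute(vulnerable).fetchall())            # returns every row

# Safe: the driver binds the value; it can never alter the query structure.
safe = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())   # returns nothing
```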

FireHost claims it’s not unusual to see these blended attacks originating from within cloud-service provider networks.

“Cybercriminals can easily deploy and administer powerful botnets that run on cloud infrastructure,” says FireHost founder and CEO, Chris Drake. “Many cloud providers unfortunately don’t adequately validate new customer sign-ups, so opening accounts with fake information is quite easy.” After the account is set up, the attacker can run an automated process that can be leveraged to “deploy a lot of computing power on fast networks, giving a person the ability to create a lot of havoc with minimal effort,” Drake concludes.

Source:  networkworld.com

Universities putting sensitive data at risk via unsecure email

Tuesday, July 30th, 2013

Survey finds half of institutions allow naked transmission of the personal and financial data of students and parents

Colleges and universities are putting the financial and personal information of students and parents at risk by allowing them to submit such data to the school in unencrypted email.

That was a finding in a survey released Monday by Halock Security Labs after surveying 162 institutions of higher learning in the United States.

Half the institutions allowed sensitive documents to be sent to them in unencrypted emails, the survey said, while a quarter of the schools actually encouraged such transmissions.
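As an illustration of the alternative the survey implies is missing, the hedged Python sketch below encrypts a sensitive document before it is ever attached to a message. Symmetric Fernet is used purely for brevity, the filename is hypothetical, and real deployments would more likely rely on S/MIME, PGP, or a secure upload portal:

```python
# Illustration only: encrypt a sensitive document before attaching it to email.
# The filename is hypothetical; the key must reach the recipient out of band.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # shared with the recipient out of band
fernet = Fernet(key)

with open("financial_aid_form.pdf", "rb") as fh:
    ciphertext = fernet.encrypt(fh.read())

with open("financial_aid_form.pdf.enc", "wb") as fh:
    fh.write(ciphertext)             # attach this file, not the original

# The recipient reverses the process with the same key:
# fernet.decrypt(ciphertext)
```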

“Typically, they do what they need to do to comply with regulations, but they’re weak on risk management and actively controlling and managing risk,” Terry Kurzynski, a partner with Halock Security Labs, said in an interview.

Security at larger universities tends to be better than at smaller schools and community colleges, he continued.

“Smaller colleges are breached all the time,” Kurzynski said. “They can’t develop the right level of security until they’ve been breached several times and someone at the president or board of trustee level says, ‘Enough is enough.’”

In addition to budget constraints, the culture at universities works against solid security.

“Universities are unique because their purpose is to build and disseminate knowledge which means they must operate in a culture of openness and sharing,” said Rob Reed, worldwide education evangelist for the big data security firm Splunk.

That open culture can work against the kind of centralization needed for good security. Policies can vary from school to school within a university. “It doesn’t make a lot of sense, but a lot of these units strive to maintain a degree of autonomy,” said Larry Ponemon, founder and chairman of the Ponemon Institute.

“Each school or department can be a silo for data,” he said. “So it’s hard from a data protection point of view to have central control over information and as a result, a lot of these universities have data losses.”

Ponemon has been performing data breach studies for years, and he said universities typically rank in industry comparisons as some of the riskiest places for sensitive data.

Even at schools with university-wide policies requiring encryption of sensitive data, it can be tough to run a secure ship. “You’ve got all sorts of units engaging in all sorts of practices and it’s difficult in a highly distributed environment like that to police all of it,” Mike Corn, chief privacy and security officer at the University of Illinois, said in an interview.

“It’s a simple thing for someone to say in the interest of customer service, ‘Why don’t you scan that and send it to me,'” Corn added. “It isn’t that anyone is intentionally violating a policy. In an environment where you have a lot of high touch customers, it’s easy to fall back on what works easiest for the customer and not think about security implications.”

Not everyone was worried, however, by Halock’s findings. “I’m not very alarmed by what they found,” Marc Gaffan, founder of Incapsula, a cloud security company, said in an interview. “Email encryption is overkill.”

He argued that there are practical concerns when considering widespread use of encryption.

“The usability aspects around email encryption are not trivial,” Gaffan said.

Encrypting email is only a small part of the problem, he continued. “The real problem is what happens to that email when it hits the university.”

“It’s like keeping a key in the lock,” Gaffan said. “The fact that the door has a lock on it doesn’t protect it if the key is in the lock and anyone can unlock it.”

Source:  csoonline.com

High court bans publication of car-hacking paper

Tuesday, July 30th, 2013

A high court judge has ruled that a computer scientist cannot publish an academic paper over fears that it could lead to vehicle theft.

Flavio Garcia, from the University of Birmingham, has cracked the algorithm behind Megamos Crypto—a system used by several luxury car brands to verify the identity of keys used to start the ignition. He was intending to present his results at the Usenix Security Symposium.

But Volkswagen’s parent company, which owns the Porsche, Audi, Bentley and Lamborghini brands, asked the court to prevent the scientist from publishing his paper. It said that the information could “allow someone, especially a sophisticated criminal gang with the right tools, to break the security and steal a car.”

The company asked the scientists to publish a redacted version of the paper without the crucial codes, but the researchers declined, claiming that the information is publicly available online.

Instead, they protested that “the public have a right to see weaknesses in security on which they rely exposed,” adding that otherwise, “industry and criminals know security is weak but the public do not.”

The judge, Colin Birss, ultimately sided with the car companies, despite saying he “recognized the importance of the right for academics to publish.”

Source:  arstechnica.com

Tampering with a car’s brakes and speed by hacking its computers: A new how-to

Tuesday, July 30th, 2013

The “Internet of automobiles” may hold promise, but it comes with risks, too.

Just about everything these days ships with tiny embedded computers that are designed to make users’ lives easier. High-definition TVs, for instance, can run Skype and Pandora and connect directly to the Internet, while heating systems have networked interfaces that allow people to crank up the heat on their way home from work. But these newfangled features can often introduce opportunities for malicious hackers. Witness “Smart TVs” from Samsung or a popular brand of software for controlling heating systems in businesses.

Now, security researchers are turning their attention to the computers in cars, which typically contain as many as 50 distinct ECUs—short for electronic control units—that are all networked together. Cars have relied on on-board computers for some three decades, but for most of that time, the circuits mostly managed low-level components. No more. Today, ECUs control or finely tune a wide array of critical functions, including steering, acceleration, braking, and dashboard displays. More importantly, as university researchers documented in papers published in 2010 and 2011, on-board components such as CD players, Bluetooth for hands-free calls, and “telematics” units for OnStar and similar road-side services make it possible for an attacker to remotely execute malicious code.

The research is still in its infancy, but its implications are unsettling. Trick a driver into loading the wrong CD or connecting the Bluetooth to the wrong handset, and it’s theoretically possible to install malicious code on one of the ECUs. Since the ECUs communicate with one another using little or no authentication, there’s no telling how far the hack could extend.

Later this week at the Defcon hacker conference, researchers plan to demonstrate an arsenal of attacks that can be performed on two popular automobiles: a Toyota Prius and a Ford Escape, both 2010 models. Starting with the premise that it’s possible to infect one or more of the ECUs remotely and cause them to send instructions to other nodes, Charlie Miller and Chris Valasek have developed a series of attacks that can carry out a range of scary scenarios. The researchers work for Twitter and security firm IOActive respectively.

Among the attacks: suddenly engaging the brakes of the Prius, yanking its steering wheel, or causing it to accelerate. On the Escape, they can disable the brakes when the SUV is driving slowly. With an $80,000 grant from the DARPA Cyber Fast Track program, they have documented the cars’ inner workings and included all the code needed to make the attacks work in the hopes of coming up with new ways to make vehicles that are more resistant to hacking.

“Currently, there is no easy way to write custom software to monitor and interact with the ECUs in modern automobiles,” a white paper documenting their work states. “The fact that a risk of attack exists but there is not a way for researchers to monitor or interact with the system is distressing. This paper is intended to provide a framework that will allow the construction of such tools for automotive systems and to demonstrate the use on two modern automobiles.”

The hacking duo reverse-engineered the vehicles’ CANs, or controller area networks, to isolate the code one ECU sends to another when requesting that it take some sort of action, such as turning the steering wheel or disengaging the brakes. They discovered that the network has no mechanism for positively identifying the ECU sending a request or using an authentication passcode to ensure a message sent to a controller is coming from a trusted source. These omissions make it easy for them to monitor all messages sent over the network and to inject phony messages that masquerade as official requests from a trusted ECU.
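The following Python sketch is an illustration only, not the researchers’ code: it shows how, once a node has access to the bus, CAN traffic can be watched and forged precisely because there is no sender authentication. It assumes Linux SocketCAN (an interface named "can0") and the python-can package; the arbitration ID and payload are made up:

```python
# Illustration only, not the researchers' code: CAN has no sender authentication,
# so any node on the bus can observe every frame and transmit any arbitration ID.
# Assumes Linux SocketCAN ("can0") and the python-can package; ID and payload
# below are made up.
import can  # pip install python-can

bus = can.interface.Bus(channel="can0", bustype="socketcan")

# Passive monitoring: every frame on the bus is visible to every node.
for _ in range(10):
    frame = bus.recv(timeout=1.0)
    if frame is not None:
        print(f"id=0x{frame.arbitration_id:03x} data={frame.data.hex()}")

# Injection: a forged frame is indistinguishable from one sent by a real ECU.
spoofed = can.Message(arbitration_id=0x123,           # hypothetical ECU ID
                      data=[0x01, 0xFF, 0x00, 0x00],  # hypothetical payload
                      is_extended_id=False)
bus.send(spoofed)
bus.shutdown()
```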

“By examining the CAN on which the ECUs communicate, it is possible to send proprietary messages to the ECUs in order to cause them to take some action, or even completely reprogram the ECU,” the researchers wrote in their report. “ECUs are essentially embedded devices, networked together on the CAN bus. Each is powered and has a number of sensors and actuators attached to them.”

Using a computer connected to the cars’ On-Board Diagnostic System, Miller and Valasek were able to cause the vehicles to do some scary things. For instance, by tampering with the so-called Intelligent Park Assist System of the Prius, which helps drivers parallel park, they were able to jerk the wheel of the vehicle, even when it was moving at high speed. The feat takes only seconds to perform, but it took a lot of work to develop initially, since it required requests made in precisely the right sequence from multiple ECUs. By replaying the requests in the same order, they were able to control the steering even when the Prius wasn’t in reverse, as is usually required when invoking the park assist system. They developed similar techniques to control acceleration, braking, and other critical functions, as well as ways to change readings displayed by speedometers, odometers, and other dashboard features.

For a video demonstration of the hacks, see this segment from Monday’s The Today Show. In it, both Toyota and the Ford Motor company emphasize that the manipulations Miller and Valasek carry out require physical access to the car’s computer systems. That’s a fair point, but it’s also worth remembering the previous research showing that there are often more stealthy ways to commandeer a vehicle’s on-board computers. The aim behind this latest project wasn’t to develop new ways to take control but to show the range of things that are possible once that happens.

When combined with the previous research into hacking cars’ Bluetooth and other interfaces, the proof-of-concept exploits should serve as a wake-up call not only to automobile manufacturers, but to anyone designing other so-called Internet-of-things devices. If Apple, Microsoft, and the rest of the computing behemoths have to invest heavily to ensure their products are hack-resistant, so too will those embedding tiny computers into their once-mundane wares. A car, TV, or even your washing machine that interacts with Internet-connected services is only nifty until someone gets owned.

Source:  arstechnica.com

Oil, gas field sensors vulnerable to attack via radio waves

Friday, July 26th, 2013

Researchers with IOActive say they can shut down a plant from up to 40 miles away by attacking industrial sensors

Sensors widely used in the energy industry to monitor industrial processes are vulnerable to attack from 40 miles away using radio transmitters, according to alarming new research.

Researchers Lucas Apa and Carlos Mario Penagos of IOActive, a computer security firm, say they’ve found a host of software vulnerabilities in the sensors, which are used to monitor metrics such as temperature and pipeline pressure, that could be fatal if abused by an attacker.

Apa and Penagos are scheduled to give a presentation next Thursday at the Black Hat security conference in Las Vegas but gave IDG News Service a preview of their research. They can’t reveal many details due to the severity of the problems.

“If you compromise a company on the Internet, you can cause a monetary loss,” Penagos said. “But in this case, [the impact] is immeasurable because you can cause loss of life.”

The U.S. and other nations have put increased focus in recent years on the safety of industrial control systems used in critical infrastructure such as nuclear power plants, energy and water utilities. The systems, often now connected to the Internet, may not have had thorough security audits, posing a risk of life-threatening attacks from afar.

Apa and Penagos studied sensors manufactured by three major wireless automation system manufacturers. The sensors typically communicate with a company’s home infrastructure using radio transmitters on the 900MHz or 2.4GHz bands, reporting critical details on operations from remote locations.

Apa and Penagos found that many of the sensors contained a host of weaknesses, ranging from weak cryptographic keys used to authenticate communication to software vulnerabilities and configuration errors.

For example, they found that some families of sensors shipped with identical cryptographic keys. That means several companies may be using devices that all share the same keys, putting them at greater risk of attack if a key is compromised.
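A standard alternative, sketched below with the Python cryptography library, is to derive a distinct key for each unit from a master secret and the device’s serial number, so that one compromised key does not expose every customer. The names and values are illustrative and are not drawn from the affected products:

```python
# Illustration only: derive a unique key per device from a vendor master secret
# and the unit's serial number, instead of shipping identical keys everywhere.
# Names and values are invented; uses the Python "cryptography" package.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

MASTER_SECRET = b"vendor-master-secret-keep-offline"   # hypothetical

def per_device_key(serial_number: str) -> bytes:
    """Derive a distinct 128-bit key for each sensor from its serial number."""
    return HKDF(
        algorithm=hashes.SHA256(),
        length=16,
        salt=None,
        info=serial_number.encode(),
    ).derive(MASTER_SECRET)

print(per_device_key("SN-0001").hex())
print(per_device_key("SN-0002").hex())   # a different unit gets a different key
```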

They tested various attacks against the sensors using the specific kind of radio antenna the sensors use to communicate with their home networks. They found it was possible to modify readings and disable sensors from up to 40 miles (64 kilometers) away. Since the attack isn’t conducted over the Internet, there’s no way to trace it, Penagos said.

In one scenario, the researchers concluded that by exploiting a memory corruption bug, all sensors could be disabled and a facility could be shut down.

Fixing the sensors, which will require firmware updates and configuration changes, won’t be easy or quick. “You need to be physically connected to the device to update them,” Penagos said.

Apa and Penagos won’t identify the vendors of the sensors since the problems are so serious. They’ve handed their findings to the U.S. Computer Emergency Readiness Team, which is notifying the affected companies.

“We care about the people working in the oil fields,” Penagos said.

Source:  computerworld.com

FBI, Microsoft takedown program blunts most Citadel botnets

Friday, July 26th, 2013

Microsoft estimates that 88% of botnets running the Citadel financial malware were disrupted as a result of a takedown operation launched by the company in collaboration with the FBI and partners in technology and financial services. The operation was originally announced on June 5.

Since then, almost 40% of Citadel-infected computers that were part of the targeted botnets have been cleaned, Richard Domingues Boscovich, an assistant general counsel with Microsoft’s Digital Crimes Unit, said Thursday in a blog post.

Microsoft did not immediately respond to an inquiry seeking information about how those computers were cleaned and the number of computers that remain infected with the malware.

However, Boscovich said in a different blog post on June 21 that Microsoft observed almost 1.3 million unique IP addresses connecting to a “sinkhole” system put in place by the company to replace the Citadel command-and-control servers used by attackers.

After analyzing unique IP addresses and user-agent information sent by botnet clients when connecting to the sinkhole servers, the company estimated that more than 1.9 million computers were part of the targeted botnets, Boscovich said at the time, noting that multiple computers can connect through a single IP address.
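The estimate rests on counting distinct clients rather than raw IP addresses. The rough Python sketch below illustrates the idea against a hypothetical sinkhole log; the file name and column layout are assumptions, not Microsoft’s actual data format:

```python
# Illustration only: estimate infected machines from sinkhole logs by counting
# distinct (IP, user-agent) pairs, since many computers can share one IP.
# File name and column names are assumptions, not Microsoft's data format.
import csv

def estimate_bots(log_path):
    unique_ips, unique_clients = set(), set()
    with open(log_path, newline="") as fh:
        for row in csv.DictReader(fh):          # expects columns: ip, user_agent
            unique_ips.add(row["ip"])
            unique_clients.add((row["ip"], row["user_agent"]))
    return len(unique_ips), len(unique_clients)

ips, clients = estimate_bots("sinkhole_connections.csv")
print(f"{ips} unique IPs, roughly {clients} distinct infected machines")
```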

He also said that Microsoft was working with other researchers and anti-malware organizations like the Shadowserver Foundation in order to support victim notification and remediation.

The Shadowserver Foundation is an organization that works with ISPs, as well as hosting and Domain Name System (DNS) providers to identify and mitigate botnet threats.

According to statistics released Thursday by Boscovich, the countries with the highest number of IP addresses corresponding to Citadel infections between June 2 and July 21 were: Germany with 15% of the total, Thailand with 13%, Italy with 10%, India with 9% and Australia and Poland with 6% each. Five percent of Citadel-infected IP addresses were located in the U.S.

Boscovich praised the collaboration between public and private sector organizations to disrupt the Citadel botnet.

“By combining our collective expertise and taking coordinated steps to dismantle the botnets, we have been able to significantly diminish Citadel’s operation, rescue victims from the threat, and make it more costly for the cybercriminals to continue doing business,” he said Thursday in the blog post.

However, not everyone in the security research community was happy with how the takedown effort was implemented.

Shortly after the takedown, a security researcher who runs the abuse.ch botnet tracking services estimated that around 1,000 of approximately 4,000 Citadel-related domain names seized by Microsoft during the operation were already under the control of security researchers who were using them to monitor and gather information about the botnets.

Furthermore, he criticized Microsoft for sending configuration files to Citadel-infected computers that were connecting to its sinkhole servers, saying that this action implicitly modifies settings on those computers without their owners’ consent. “In most countries, this is violating local law,” he said in a blog post on June 7.

“Citadel blocked its victims’ ability to access many legitimate anti-virus and anti-malware sites in order to prevent them from being able to remove the malware from their computer,” Boscovich said on June 11 in an emailed statement. “In order for victims to clean their computers, the court order from the U.S. District Court for the Western District of North Carolina allowed Microsoft to unblock these sites when computers from around the world checked into the command and control structure for Citadel which is hosted in the U.S.”

Source:  computerworld.com

Network Solutions reports MySQL hiccups following attacks

Tuesday, July 23rd, 2013

Network Solutions warned on Monday of latency problems for customers using MySQL databases just a week after the hosting company fended off distributed denial-of-service (DDoS) attacks.

“Some hosting customers using MySQL are reporting issues with the speed with which their websites are resolving,” the company wrote on Facebook. “Some sites are loading slowly; others are not resolving. We’re aware of the issue, and our technology team is working on it now.”

Network Solutions, which is owned by Web.com, registers domain names, offers hosting services, sells SSL certificates and provides other website-related administration services.

On July 17, Network Solutions said it came under a DDoS attack that caused many of the websites it hosts to not resolve. The company said later in the day that most of the problems had been fixed, and it apologized two days later.

“Because online security is our top priority, we continue to invest millions of dollars in frontline and mitigation solutions to help us identify and eliminate potential threats,” it said.

Some customers, however, reported problems before Network Solutions acknowledged the cyberattacks. One customer, who wrote to IDG News Service before Network Solutions issued the MySQL warning, said he had problems publishing a website on July 16, before the DDoS attacks are believed to have started.

Several other customers who commented on the company’s Facebook page reported problems going back to a scheduled maintenance period announced on July 5. The company warned customers they might experience service interruptions between 10 p.m. EST on July 5 and 7 a.m. the next morning.

Donna Marian, an artist who creates macabre dolls, wrote on the company’s Facebook page on Monday that her site was down for five days.

“I have been with you 13 years and have not got one word about this issue that has and is still costing my business thousands of dollars,” Marian wrote. “Will you be reimbursing me for my losses?”

Company officials could not be immediately reached for comment.

Source:  infoworld.com

AT&T uses small cells to improve service in Disney parks

Tuesday, July 23rd, 2013

AT&T will soon show off how small cell technology can improve network capacity and coverage in Walt Disney theme parks.

If you’re a Disney theme park fan and you happen to be an AT&T wireless customer, here’s some good news: Your wireless coverage within the company’s two main resorts is going to get a heck of a lot better.

AT&T and Disney Parks are announcing an agreement Tuesday that will make AT&T the official wireless provider for Walt Disney World Resort and Disneyland Resort.

What does this mean? As part of the deal, AT&T will be improving service within the Walt Disney World and Disneyland Resorts by adding small cell technology that will chop up AT&T’s existing licensed wireless spectrum and reuse it in smaller chunks to better cover the resort and add more capacity in high-volume areas. The company will also add free Wi-Fi hotspots, which AT&T customers visiting the resorts will be able to use to offload data traffic.

Specifically, AT&T will add more than 25 distributed antenna systems in an effort to add capacity. It will also add more than 350 small cells, which extend the availability of the network. AT&T is adding 10 new cell sites across the Walt Disney World resort to boost coverage and capacity. And it will add nearly 50 repeaters to help improve coverage of the network.

Chris Hill, AT&T’s senior vice president for advanced solutions, said that AT&T’s efforts to improve coverage in and around Disney resorts are part of a bigger effort the company is making to add capacity and improve coverage in highly trafficked areas. He said that even though AT&T had decent network coverage already within the Disney parks, customers often experienced issues in some buildings or in remote reaches of the resorts.

“The macro cell sites can only cover so much,” he said. “So you need to go to small cells to really get everywhere you need to be and to provide the capacity you need in areas with a high density of people.”

Hill said the idea of creating smaller cell sites that reuse existing licensed spectrum is a big trend among all wireless carriers right now. And he said, AT&T is deploying this small cell technology in several cities as well as other areas where large numbers of people gather, such as stadiums and arenas.

“We are deploying this technology widely across metro areas to increase density of our coverage,” he said. “And it’s not just us. There’s a big wave of small cell deployments where tens of thousands of these access points are being deployed all over the place.”

Cooperation with Disney is a key element in this deployment since the small cell technology requires that AT&T place access points on the Disney property. The footprint of the access points is very small. They typically look like large access points used for Wi-Fi. Hill said they can be easily disguised to fit in with the surroundings.

Unfortunately, wireless customers with service from other carriers won’t see the same level of improved service. The network upgrade and the small cell deployments will only work for AT&T wireless customers. AT&T has no plans to allow other major carriers to use the network for roaming.

Also as part of the deal, AT&T will take over responsibility for Disney’s corporate wireless services, providing services to some 25,000 Disney employees. And the companies have struck various marketing and branding agreements. As part of that aspect of the deal, AT&T will become an official sponsor of Disney-created soccer and runDisney events at the ESPN Wide World of Sports Complex. In addition, Disney will join AT&T in its “It Can Wait” public service campaign, which educates the public about the dangers of texting while driving.

Source:  CNET

Crypto flaw makes millions of smartphones susceptible to hijacking

Tuesday, July 23rd, 2013

New attack targets weakness in at least 500 million smartphone SIM cards.

Millions of smartphones could be remotely commandeered in attacks that allow hackers to clone the secret encryption credentials used to secure payment data and identify individual handsets on carrier networks.

The vulnerabilities reside in at least 500 million subscriber identity module (SIM) cards, which are the tiny computers that store some of a smartphone’s most crucial cryptographic secrets. Karsten Nohl, chief scientist at Security Research Labs in Berlin, told Ars that the defects allow attackers to obtain the encryption key that safeguards the user credentials. Hackers who possess the credentials—including the unique International Mobile Subscriber Identity and the corresponding encryption authentication key—can then create a duplicate SIM that can be used to send and receive text messages, make phone calls to and from the targeted phone, and possibly retrieve mobile payment credentials. The vulnerabilities can be exploited remotely by sending a text message to the phone number of a targeted phone.

“We broke a significant number of SIM cards, and pretty thoroughly at that,” Nohl wrote in an e-mail. “We can remotely infect the card, send SMS from it, redirect calls, exfiltrate call encryption keys, and even hack deeper into the card to steal payment credentials or completely clone the card. All remotely, just based on a phone number.”

Nohl declined to identify the specific manufacturers or SIM models that contain the exploitable weaknesses. The vulnerabilities are in the SIM itself and can be exploited regardless of the particular smartphone they manage.

The cloning technique identified by the research team from Security Research Labs exploits a constellation of vulnerabilities commonly found on many SIMs. One involves the automatic responses some cards generate when they receive invalid commands from a mobile carrier. Another stems from the use of a single Data Encryption Standard key to encrypt and authenticate messages sent between the mobile carrier and individual handsets. A third flaw involves the failure to perform security checks before a SIM installs and runs Java applications.

The flaws allow an attacker to send an invalid command that carriers often issue to handsets to instruct them to install over-the-air (OTA) updates. A targeted phone will respond with an error message that’s signed with the 1970s-era DES cipher. The attacker can then use the response message to retrieve the phone’s 56-bit DES key. Using a pre-computed rainbow table like the one released in 2009 to crack cell phone encryption keys, an attacker can obtain the DES key in about two minutes. From there, the attacker can use the key to send a valid OTA command that installs a Java app that extracts the SIM’s IMSI and authentication key. The secret information is tantamount to the user ID and password used to authenticate a smartphone to a carrier network and associate a particular handset to a specific phone number.
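The weak link is the 56-bit DES key itself: with one known plaintext/ciphertext pair, the key can be recovered by exhaustive search (the real attack uses precomputed rainbow tables to make this fast). The Python sketch below, using pycryptodome, brute-forces a deliberately tiny key space so it finishes in seconds; all values are invented for illustration:

```python
# Illustration only: recovering a DES key from one known plaintext/ciphertext
# pair by exhaustive search. The real attack uses precomputed rainbow tables;
# this sketch searches a deliberately tiny key space so it finishes quickly.
# All values are invented. Requires pycryptodome.
from itertools import product
from Crypto.Cipher import DES

known_plaintext = b"OTA-ERR!"                       # one 8-byte block (hypothetical)
secret_key = b"\x00\x00\x00\x00\x00\x00\x01\x42"    # pretend SIM key, tiny key space
ciphertext = DES.new(secret_key, DES.MODE_ECB).encrypt(known_plaintext)

def recover_key(plaintext, target, varying_bytes=2):
    """Try every key whose last `varying_bytes` bytes vary; the rest are zero."""
    prefix = b"\x00" * (8 - varying_bytes)
    for tail in product(range(256), repeat=varying_bytes):
        candidate = prefix + bytes(tail)
        if DES.new(candidate, DES.MODE_ECB).encrypt(plaintext) == target:
            return candidate
    return None

print(recover_key(known_plaintext, ciphertext).hex())   # prints the pretend key
```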

Armed with this data, an attacker can create a fully functional SIM clone that could allow a second phone under the control of the attacker to connect to the network. People who exploit the weaknesses might also be able to run unauthorized apps on the SIM that redirect SMS and voicemail messages or make unauthorized purchases against a victim’s mobile wallet. It doesn’t appear that attackers could steal contacts, e-mails, or other sensitive information, since SIMs don’t have access to data stored on the phone, Nohl said.

Nohl plans to further describe the attack at next week’s Black Hat security conference in Las Vegas. He estimated that there are about seven billion SIMs in circulation. That suggests the majority of SIMs aren’t vulnerable to the attack. Right now, there isn’t enough information available for users to know if their particular smartphones are susceptible to this technique. This article will be updated if carriers or SIM manufacturers provide specific details about vulnerable cards or mitigation steps that can be followed. In the meantime, Security Research Labs has published this post that gives additional information about the exploit.

Source:  arstechnica.com

Internet traffic jams, meet your robot nemesis

Monday, July 22nd, 2013

MIT researchers have built a system to write better algorithms for tackling congestion

On an 80-core computer at the Massachusetts Institute of Technology, scientists have built a tool that might make networks significantly faster just by coming up with better algorithms.

The system, called Remy, generates its own algorithms for implementing TCP (Transmission Control Protocol), the framework used to prevent congestion on most networks. The algorithms are different from anything human developers have written, and so far they seem to work much better, according to the researchers. On one simulated network, they doubled the throughput.

Remy is not designed to run on individual PCs and servers, but someday it may be used to develop better algorithms to run on those systems, said Hari Balakrishnan, the Fujitsu professor in Electrical Engineering and Computer Science at MIT. For now, it’s churning out millions of possible algorithms and testing them against simulated networks to find the best possible one for a given objective.

IP networks don’t dictate how fast each attached computer sends out packets or whether they keep transmitting after the network has become congested. Instead, each system makes its own decisions using some implementation of the TCP framework. Each version of TCP uses its own algorithm to determine how best to act in different conditions.

These implementations of TCP have been refined many times over the past 30 years and sometimes fine-tuned for particular networks and applications. For example, a Web browser may put a priority on moving bits across the network quickly, while a VoIP application may call for less delay. Today, there are 30 to 50 “plausibly good” TCP schemes and five to eight that are commonly used, Balakrishnan said.

But up to now, those algorithms have all been developed by human engineers, he said. Remy could change that.

“The problem, on the face of it, is actually intractably hard for computers,” Balakrishnan said. Because there are so many variables involved and network conditions constantly change, coming up with the most efficient algorithm requires more than “naive” brute-force computing, he said.

Figuring out how to share a network requires strategic choices not unlike those that cyclists have to make in bike races, such as whether to race ahead and take the lead or cooperate with another racer, said Balakrishnan’s colleague, graduate student Keith Winstein.

“There’s a lot of different computers, and they all want to let their users browse the Web, and yet they have to cooperate to share the network,” Winstein said.

However, Remy can do things that human algorithm developers haven’t been able to achieve, Balakrishnan said. For one thing, current TCP algorithms use only a handful of rules for how a computer should respond to performance issues. Those might include things like slowing the transmission rate when the percentage of dropped packets passes some threshold. Remy can create algorithms with more than 150 rules, according to the researchers.
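For a sense of what such a hand-written rule looks like, here is a toy Python sketch of a threshold-based controller of the kind described above: back off when the loss rate crosses a threshold, otherwise probe for more bandwidth. Remy’s machine-generated schemes replace this single rule with hundreds of finer-grained ones, and the constants here are arbitrary:

```python
# Illustration only: a single hand-written congestion rule of the kind described
# above. Back off when the loss rate crosses a threshold, otherwise probe for
# more bandwidth. The constants are arbitrary.
def adjust_window(cwnd, packets_sent, packets_lost):
    loss_rate = packets_lost / max(packets_sent, 1)
    if loss_rate > 0.02:          # threshold rule: congestion suspected
        return max(cwnd / 2, 1)   # multiplicative decrease
    return cwnd + 1               # additive increase while the path looks clear

cwnd = 10.0
for sent, lost in [(100, 0), (110, 0), (120, 5), (60, 0)]:
    cwnd = adjust_window(cwnd, sent, lost)
    print(f"sent={sent} lost={lost} -> cwnd={cwnd}")
```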

To create a new algorithm using Remy, Balakrishnan and Winstein put in a set of requirements and then let Remy create candidates and try them against software that simulates a wide range of network conditions. The system uses elements of machine learning to determine which potential algorithm best does the job. As it tests the algorithms, Remy focuses on situations where a small change in network conditions can lead to a big change in performance, rather than on situations where the network is more predictable.

After about four to 12 hours, Remy delivers the best algorithm it’s found. The results have been impressive, according to the researchers. In a test that simulated a fast wired network with consistent transmission rates across physical links, Remy’s algorithms produced about twice the throughput of the most commonly used versions of TCP and cut delay by two-thirds, they said. On a simulation of Verizon’s mobile data network, Remy’s algorithms gave 20 percent to 30 percent better throughput and 25 percent to 40 percent lower delay.

But don’t expect blazing page loads just yet. Balakrishnan and Winstein cautioned that Remy is still an academic research project.

For one thing, it hasn’t yet been tested on the actual Internet. Though Remy’s algorithms would probably work fairly well out there, it’s hard to be sure because they were developed in simulations that didn’t include all of the Internet’s unknown variables, according to Balakrishnan. For example, it may be hard to tell how many people are active on a particular part of the Internet, he said.

If machine-developed TCP algorithms do end up going live, it will probably happen first on private networks. For example, companies such as Google already fine-tune TCP for the requirements of their own data centers, Balakrishnan said. Those kinds of companies might turn to a system like Remy to develop better ones.

But even if Remy never makes it into the real world, it may have a lot to teach the engineers who write TCP algorithms, Balakrishnan said. For example, Remy uses a different way of thinking about whether there is congestion than TCP does today, he said.

Though the researchers understand certain tools that Remy uses, they still want to figure out how that combination of tools can create such good algorithms.

“Empirically, they work very well,” Winstein said. “We get higher throughput, lower delay, and more fairness than in all the human designs to date. But we cannot explain why they work.”

“At this point, our real aspiration is that other researchers and engineers pick it up and start using it as a tool,” Balakrishnan said. “Even if the ultimate impact of our work is … more about changing the way people think about the problem, for us that is a huge win.”

Source:  infoworld.com

VoIP phone hackers pose public safety threat

Friday, July 19th, 2013

Hospitals, 911 call centers and other public safety agencies can be shut down by hackers using denial-of-service attacks.

The demand stunned the hospital employee. She had picked up the emergency room’s phone line, expecting to hear a dispatcher or a doctor. But instead, an unfamiliar male greeted her by name and then threatened to paralyze the hospital’s phone service if she didn’t pay him hundreds of dollars.

Shortly after the worker hung up on the caller, the ER’s six phone lines went dead. For nearly two days in March, ambulances and patients’ families calling the San Diego hospital heard nothing but busy signals.

The hospital had become a victim of an extortionist who, probably using not much more than a laptop and cheap software, had single-handedly generated enough calls to tie up the lines.

Distributed denial-of-service attacks — taking a website down by forcing thousands of compromised personal computers to simultaneously visit and overwhelm it — have been a favored choice of hackers since the advent of the Internet.

Now, scammers are inundating phone lines by exploiting vulnerabilities in the burgeoning VoIP, or Voice over Internet Protocol, telephone system.

The frequency of such attacks is alarming security experts and law enforcement officials, who say that while the tactic has mainly been the tool of scammers, it could easily be adopted by malicious hackers and terrorists to knock out crucial infrastructure such as hospitals and 911 call centers.

“I haven’t seen this escalated to national security level yet, but it could if an attack happens during a major disaster or someone expires due to an attack,” said Frank Artes, chief technology architect at information security firm NSS Labs and a cybercrime advisor for federal agencies.

The U.S. Department of Homeland Security declined to talk about the attacks but said in a statement that the department was working with “private and public sector partners to develop effective mitigation and security responses.”

In the traditional phone system, carriers such as AT&T grant phone numbers to customers, creating a layer of accountability that can be traced. On the Web, a phone number isn’t always attached to someone. That’s allowed scammers to place unlimited anonymous calls to any land line or VoIP number.

They create a personal virtual phone network, typically either through hardware that splits up a land line or software that generates online numbers instantly. Some even infect cellphones of unsuspecting consumers with viruses, turning them into robo-dialers without the owners knowing that their devices have been hijacked. In all cases, a scammer has access to multiple U.S. numbers and can tell a computer to use them to dial a specific business.

Authorities say the line-flooding extortion scheme started in 2010 as phone scammers sought to improve on an old trick in which they pretend to be debt collectors. But the emerging bull’s-eye on hospitals and other public safety lines has intensified efforts to track down the callers.

Since mid-February, the Internet Crime Complaint Center, a task force that includes the FBI, has received more than 100 reports about telephony denial-of-service attacks. Victims have paid $500 to $5,000 to bring an end to the attacks, often agreeing to transfer funds from their banks to the attackers’ prepaid debit card accounts. The attackers then use the debit cards to withdraw cash from an ATM.

The hospital attack, confirmed by two independent sources familiar with it, was eventually stopped using a computer firewall filter. No one died, the sources said. But hospital staff found the lack of reliable phone service disturbing and frustrating, one source said. They requested anonymity because they were not authorized to talk about the incident.

But typical firewalls, which are designed to block calls from specific telephone numbers, are less effective against Internet calls because hackers can delete numbers and create new ones constantly. Phone traffic carried over the Internet surged 25% last year and now accounts for more than a third of all international voice traffic, according to market research firm TeleGeography.
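One way such filtering can go beyond a static number list is to cap the rate of incoming calls as well. The Python sketch below is an illustration only, with invented thresholds and data structures, and does not describe how any particular vendor’s device works:

```python
# Illustration only: combine a caller-ID blocklist with a per-minute rate cap so
# a flood from constantly changing numbers can still be throttled. Thresholds
# and data structures are invented; this is not any vendor's actual product.
import time
from collections import deque

BLOCKLIST = {"+15555550100"}        # numbers already tied to an attack
MAX_CALLS_PER_MINUTE = 30           # per inbound trunk, before calls are shed

recent_calls = deque()              # timestamps of recently accepted calls

def admit_call(caller_id, now=None):
    now = time.time() if now is None else now
    if caller_id in BLOCKLIST:
        return False
    while recent_calls and now - recent_calls[0] > 60:
        recent_calls.popleft()      # drop entries older than the one-minute window
    if len(recent_calls) >= MAX_CALLS_PER_MINUTE:
        return False                # trunk saturated; shed the call
    recent_calls.append(now)
    return True
```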

To thwart phone-based attacks, federal officials recently began working with telecommunications companies to develop a caller identification system for the Web. Their efforts could quell more than just denial-of-service attacks.

They could block other thriving fraud, including the spoofing and swatting calls that have targeted many people, from senior citizens to celebrities such as Justin Bieber. In spoofing, a caller tricks people into picking up the phone when their caller ID shows a familiar number. In swatting, a caller manipulates the caller ID to appear as though a 911 call is coming from a celebrity’s home.

Unclassified law enforcement documents posted online have vaguely identified some victims: a nursing home in Marquette, Wis., last November, a public safety agency and a manufacturer in Massachusetts in early 2013, a Louisiana emergency operations center in March, a Massachusetts medical center in April and a Boston hospital in May.

Wall Street firms, schools, media giants, insurance companies and customer service call centers have also temporarily lost phone service because of the attacks, according to telecommunications industry officials. Many of the victims want to remain anonymous out of fear of being attacked again or opening themselves up to lawsuits from customers.

The Marquette incident is noteworthy because when the business owner involved the Marquette County Sheriff’s Department, the scammer bombarded one of the county’s two 911 lines for 3 1/2 hours.

“The few people I’ve talked to about it have said that you just have to take it and that there’s no way to stop this,” Sheriff’s Capt. Chris Kuhl said.

A Texas hospital network has been targeted several times this year, said its chief technology officer, who spoke on the condition of anonymity because the individual’s employer has not discussed the attacks publicly. One of its nine hospitals lost phone service in a nurses unit for a day, preventing families from calling in to check on patients.

As the hospital searched for answers, it temporarily created a new number and turned to backup phone lines or cellphones for crucial communications. The chain eventually spent $20,000 per hospital to install a firewall-type device that is able to block calls from numbers associated with an attack.

For all the money spent on Internet security, companies often overlook protecting their telephones, Artes said.

“It’s kind of embarrassing when a website goes down, but when you shut down emergency operations for a county or a city, that has a direct effect on their ability to respond,” he said.

The Federal Communications Commission has begun huddling with phone carriers, equipment makers and other telecommunication firms to discuss ideas that would help stem the attacks. One possibility is attaching certificates, or a secret signature, to calls.

The FCC’s chief technology officer, Henning Schulzrinne, acknowledged that though such a solution is probably a year or two away, it could put an end to most fraudulent calls.
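Conceptually, the idea is for the originating carrier to sign the call’s claimed origin so the receiving side can verify it before trusting the number. The toy Python sketch below uses an Ed25519 signature over the caller ID and a timestamp; real proposals involve signed tokens carried in call signaling and full certificate chains, and every value here is invented:

```python
# Illustration only: the originating carrier signs the caller ID and a timestamp,
# and the receiving side verifies the signature before trusting the number.
# Keys, numbers and the message format are invented.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

carrier_key = Ed25519PrivateKey.generate()
carrier_pub = carrier_key.public_key()

def sign_call(caller_id, timestamp):
    return carrier_key.sign(f"{caller_id}|{timestamp}".encode())

def verify_call(caller_id, timestamp, signature):
    try:
        carrier_pub.verify(signature, f"{caller_id}|{timestamp}".encode())
        return True
    except InvalidSignature:
        return False

sig = sign_call("+13105550123", 1374200000)
print(verify_call("+13105550123", 1374200000, sig))   # True
print(verify_call("+19995550999", 1374200000, sig))   # False: spoofed origin
```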

But Jon Peterson, a consultant with network analytics firm Neustar, said such measures raise privacy worries. Some calls, such as one to a whistle-blower hotline or one originating from a homeless shelter, may need to remain anonymous. There won’t be a single fix. But the goal is clear.

“The lack of secure attribution of origins of these calls is one of the key enablers of this attack,” Peterson said. “We have to resolve this question of accountability for the present day and the future.”

Source:  latimes.com

Wall Street batters defenses in make-believe cybercrisis

Friday, July 19th, 2013

Wall Street played its own version of war games on Thursday, testing its defenses against simulated cyberattacks bent on taking down U.S. stock exchanges.

A total of 500 people took part in the exercise, called Quantum Dawn 2, in offices across 50 financial institutions and government agencies.

“The exercise was completed successfully with robust engagement from all participants,” the Securities Industry and Financial Markets Association (SIFMA) said in a statement.

Participants included banks, insurance companies, brokers, hedge funds and exchanges. The Department of Homeland Security (DHS), the Treasury Department, the Securities and Exchange Commission (SEC) and the Federal Bureau of Investigation (FBI) also participated.

At stake is the preparedness of Wall Street to fend off cyberattackers hoping to disrupt the nation’s economy by taking down U.S. markets. The exercise tested the players’ crisis response plans and mitigation techniques, as well as electronic and telephone communications between institutions and coordination with government agencies.

The simulation included distributed denial of service (DDoS) attacks aimed at online banking sites. The players also had to counter a malware infection that threatened to take down trading operations, according to David Kennedy, founder and principal security consultant at TrustedSec. Kennedy spoke with representatives of banks participating in the tests.

The exercise was helpful to test participants’ collective effort to defend against attacks, but fell short of simulating a real-world assault, Kennedy said.

“Personally, what I’ve heard is it’s been a bit cheesy — not a real-world type scenario,” he said. “That’s hard to do in a simulated environment.”

The banks’ participation was part public relations to ease concerns customers may have about security in their financial institutions, Kennedy said.

“I actually think this is to create more of an outward-facing PR spin,” he said.

Customer confidence was shaken last year during several waves of DDoS attacks that disrupted online banking operations of some major financial institutions. A self-proclaimed Islamic hacktivist group took credit for the assaults, which government officials believe originated from Iran.

No production systems were used in the exercises. Instead, separate software simulated three major attacks that attempted over a “multi-day period” to take down stock markets and banking operations.

Further attack details were not disclosed. SIFMA plans to release next month a report that will include recommendations on improving Wall Street’s response to a cybercrisis.

Financial institutions were expected to find holes in their defenses as a result of the tests, which supporters say is a good reason for having these types of simulations regularly.

“Cybersecurity as a whole is an arms race,” said Rich Bolstridge, chief strategist for financial services at Akamai Technologies. “The attackers are constantly evolving their techniques, so the defenses have to be [continuously] raised, coordinated and put in place.”

Akamai, which did not participate in the tests, provides security services to many financial institutions.

In 2011, the first Quantum Dawn exercise had a handful of participants, Kennedy said. The fact that the latest test had more than double the number of players indicates the importance of appearing secure in the financial sector.

“This is a show of force to say, ‘Hey, we’re taking it seriously,'” Kennedy said.

Source:  csoonline.com

Cisco releases security patches to mitigate attack against Unified Communications Manager

Friday, July 19th, 2013

Cisco Systems released a security patch for its Unified Communications Manager (Unified CM) enterprise telephony product in order to mitigate an attack that could allow hackers to take full control of the systems. The company also patched denial-of-service vulnerabilities in its Intrusion Prevention System software.

The Cisco Unified CM is a call processing component that extends enterprise telephony features and functions to IP phones, media processing devices, VoIP gateways, and multimedia applications, according to Cisco.

At the beginning of June, researchers from a French security consultancy firm called Lexfo publicly demonstrated an attack that chained together multiple “blind” SQL injection, command injection and privilege escalation vulnerabilities in order to compromise a Cisco Unified CM server.

The demonstration also revealed that all versions of Cisco Unified CM use a static hard-coded encryption key to encrypt sensitive data stored in the server’s database, including user credentials.

“The initial blind SQL injection allows an unauthenticated, remote attacker to use the hard-coded encryption key to obtain and decrypt a local user account. This allows for a subsequent, authenticated blind SQL injection,” Cisco said Wednesday in a security advisory.

“Successful exploitation of the command injection and privilege escalation vulnerabilities could allow an authenticated, remote attacker to execute arbitrary commands on the underlying operating system with elevated privileges,” the company said.
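The hard-coded key is what turns a data leak into a credential leak: if every installation encrypts stored credentials with the same embedded key, anyone who learns that key once can decrypt whatever the blind SQL injection extracts. The short Python sketch below illustrates the failure mode; the key handling shown is invented and is not Cisco’s actual scheme:

```python
# Illustration only: when every installation encrypts credentials with the same
# embedded key, learning that key once defeats the encryption everywhere.
# The key handling shown is invented and is not Cisco's actual scheme.
from cryptography.fernet import Fernet

SHARED_BUILD_KEY = Fernet.generate_key()   # imagine this baked into every image

def store_credential(password):
    """What a hard-coded-key design effectively does: encrypt with the shared key."""
    return Fernet(SHARED_BUILD_KEY).encrypt(password.encode())

# An attacker with a dumped database row and the well-known key recovers it:
stolen_row = store_credential("s3cret-admin-pw")
print(Fernet(SHARED_BUILD_KEY).decrypt(stolen_row).decode())
```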

Cisco has released a security patch in the form of a Cisco Options Package (COP) called “cmterm-CSCuh01051-2.cop.sgn” that addresses some of the vulnerabilities used in the attack, including the one allowing the initial blind SQL injection.

Customers can download the file from Cisco’s website and install it as a temporary solution until the company releases new and patched versions of the Unified CM software.

The COP file mitigates the initial attack vector and reduces the documented attack surface, Cisco said. However, some other vulnerabilities used in the attack remain unpatched.

The remaining vulnerabilities are still being investigated and no workarounds are available for them yet, the company said.

Versions 7.1.x, 8.5.x, 8.6.x, 9.0.x and 9.1.x of the Cisco Unified CM are affected by the publicly demonstrated attack. Version 8.0 is also affected, but is no longer supported. Customers using this version are advised to contact Cisco for assistance in upgrading to a supported version.

Other possible threats

The company is also investigating the possibility that some of its other voice products are affected by one or more of the individual vulnerabilities used in the attack. These products are the Cisco Emergency Responder, Cisco Unified Contact Center Express, Cisco Unified Customer Voice Portal, Cisco Unified Presence Server/Cisco IM and Presence Service and Cisco Unity Connection.

On Wednesday, Cisco also advised customers about several denial-of-service vulnerabilities affecting the software running on some of its Intrusion Prevention System (IPS) products.

Products affected by one or several of those vulnerabilities are the Cisco ASA 5500-X Series IPS Security Services Processor (IPS SSP) software and hardware modules; Cisco IPS 4500 Series Sensors; Cisco IPS 4300 Series Sensors; the Cisco IPS Network Module Enhanced (NME) and the Cisco Catalyst 6500 Series Intrusion Detection System (IDSM-2) Module.

The company has released patched versions of the Cisco IPS Software for those products, except for the Cisco IDSM-2. A workaround for the vulnerability affecting Cisco IDSM-2 was made available.

Source:  pcworld.com

Nation’s first campus ‘Super Wi-Fi’ network launches at West Virginia University

Friday, July 19th, 2013

West Virginia University today (July 9) became the first university in the United States to use vacant broadcast TV channels to provide the campus and nearby areas with wireless broadband Internet services.

The university has partnered with AIR.U, the Advanced Internet Regions consortium, to transform the “TV white spaces” frequencies left empty when television stations moved to digital broadcasting into much-needed connectivity for students and the surrounding community.

The initial phase of the network provides free public Wi-Fi access for students and faculty at the Public Rapid Transit platforms, part of a 73-car tram system that transports more than 15,000 riders daily.

“Not only does the AIR.U deployment improve wireless connectivity for the PRT System, but also demonstrates the real potential of innovation and new technologies to deliver broadband coverage and capacity to rural areas and small towns to drive economic development and quality of life, and to compete with the rest of the world in the knowledge economy,” said WVU Chief Information Officer John Campbell.

“This may well offer a solution for the many West Virginia communities where broadband access continues to be an issue,” Campbell said, “and we are pleased to be able to be a test site for a solution that may benefit thousands of West Virginians.”

Chairman of the Senate Committee on Commerce, Science and Transportation Sen. Jay Rockefeller, said, “As chairman of the Senate Commerce Committee, I have made promoting high-speed Internet deployment throughout West Virginia, and around the nation, a priority. That is why I am excited by today’s announcement of the new innovative wireless broadband initiative on West Virginia University’s campus.

“Wireless broadband is an important part of bringing the economic, educational, and social benefits of broadband to all Americans,” he said.

“My Public Safety Spectrum legislation, which the president signed into law last year, helped to preserve and promote innovative wireless services,” Rockefeller said. “The lessons learned from this pilot project will be important as Congress continues to look for ways to expand broadband access and advance smart spectrum policy.”

Mignon Clyburn, acting chair of the Federal Communications Commission, praised the development, saying, “Innovative deployment of TV white spaces presents an exciting opportunity for underserved rural and low-income urban communities across the country. I commend AIR.U and West Virginia University on launching a unique pilot program that provides campus-wide Wi-Fi services using TV white space devices.

“This pilot will not only demonstrate how TV white space technologies can help bridge the digital divide, but also could offer valuable insights into how best to structure future deployments,” she said.

The network deployment is managed by AIR.U co-founder Declaration Networks Group LLC and represents a collaboration between AIR.U and the WVU Board of Governors; the West Virginia Network for Telecomputing, which provides the fiber optic Internet backhaul for the network; and Adaptrum Inc., a California start-up providing white space equipment designed to operate on vacant TV channels. AIR.U is affiliated with the Open Technology Institute at the New America Foundation, a non-partisan think tank based in Washington, D.C. Microsoft and Google both provided early support for AIR.U’s overall effort to spur innovation to upgrade the broadband available to underserved campuses and their surrounding communities.

“WVNET is proud to partner with AIR.U and WVU on this exciting new wireless broadband opportunity,” WVNET Director Judge Dan O’Hanlon said. “We are very pleased with this early success and look forward to expanding this last-mile wireless solution all across West Virginia.” O’Hanlon also serves as chairman of the West Virginia Broadband Council.

Because the unique propagation characteristics of TV band spectrum enable networks to broadcast Wi-Fi connections over several miles and across hilly and forested terrain, the Federal Communications Commission describes unlicensed access to vacant TV channels as enabling “Super Wi-Fi” services. For example, WVU can add Wi-Fi hotspots in other locations around campus where students congregate or lack connectivity today. Future applications include public Wi-Fi access on the PRT cars and machine-to-machine wireless data links supporting control functions of the PRT System.

AIR.U’s initial deployment, blanketing the WVU campus with Wi-Fi connectivity, demonstrates the capabilities of the equipment and the throughput and performance of TV band frequencies in supporting broadband Internet applications. AIR.U intends to facilitate additional college community and rural broadband deployments in the future.

“The innovative WVU network demonstrates why it is critical that the FCC allows companies and communities to use vacant TV channel spectrum on an unlicensed basis,” said Michael Calabrese, director of the Wireless Future Project at the New America Foundation. “We expect that hundreds of rural and small town colleges and surrounding communities will soon take advantage of this very cost-effective technology to extend fast and affordable broadband connections where they are lacking.”

“Microsoft was built on the idea that technology should be accessible and affordable to everyone, and today access to a broadband connection is becoming increasingly important.” said Paul Mitchell, general manager/technology policy, at Microsoft. “White spaces technology and efficient spectrum management have a huge potential for expanding affordable broadband access in underserved areas and we are pleased to be partnering with AIR.U and West Virginia University on this new launch.”

The AIR.U consortium includes organizations that represent over 500 colleges and universities nationwide, and includes the United Negro College Fund, the New England Board of Higher Education, the Corporation for Education Network Initiatives in California, the National Institute for Technology in Liberal Education, and Gig.U, a consortium of 37 major universities.

“We are delighted that AIR.U was born out of the Gig.U effort,” said Blair Levin, executive director of Gig.U and former executive director of the National Broadband Plan. “The communities that are home to our research universities and colleges across the country need next generation speeds to compete in the global economy and we firmly believe this effort can be a model for other communities.”

Founding partners of AIR.U include Microsoft, Google, the Open Technology Institute at the New America Foundation, the Appalachian Regional Commission, and Declaration Networks Group, LLC, a new firm established to plan, deploy and operate Super Wi-Fi networks.

“Super Wi-Fi presents a lower-cost, scalable approach to deliver high capacity wireless networks, and DNG is leading the way for a new broadband alternative to provide sustainable models that can be replicated and extended to towns and cities nationwide,” stated Bob Nichols, CEO of Declaration Networks Group, LLC and AIR.U co-founder.

Source:  wvu.edu

3 more botched Windows patches: KB 2803821, KB 2840628, and KB 2821895

Thursday, July 18th, 2013

Two Black Tuesday patches — MS 13-052 and MS 13-057 — and last month’s nonsecurity patch KB 2821895 cause a variety of problems

Microsoft’s patching problems have hit a new low, with three botched patches now in desperate need of attention. MS 13-052 is supposed to plug security holes in .Net Framework and Silverlight, but it has problems getting along with Configuration Manager 2012 and ConfigMgr 2007, as well as with plug-ins running under Microsoft CRM 2011. MS 13-057 causes black bands to appear at the top of Windows Media videos, and it still hasn’t been fixed — although Microsoft has finally acknowledged the problem. The KB 2821895 Windows 8/Windows RT patch causes false System File Checker reports and hangs; Microsoft acknowledges the problem in its KB article, but the patch is still available.

Somebody, please tell me: who is in charge?

I’ve been covering the vagaries of Windows patches for a decade, and I’ve never seen the situation deteriorate like this. Here are the highlights:

  • MS 13-052/KB 2840628, a critical patch rolled out of the Automatic Update chute as part of last week’s Black Tuesday disgorge, is throwing exceptions in plug-ins running under Microsoft CRM 2011. There’s a detailed explanation of the problem on the North52 blog. There are also known problems with Configuration Manager 2012 and ConfigMgr 2007. MyITForum documents one problem with ConfigMgr 2007 and two with ConfigMgr 2012. According to MyITForum, Microsoft has acknowledged the problems as “database replication between sites (CAS/Primary/Secondary) with SQL 2012 will fail” and “Software Update point synchronization may fail at the end of the sync process.” The knowledge base article makes no mention of these problems. But it looks like Microsoft has pulled the patch: My Windows 7 and Windows 8 PCs don’t show it. However, there’s been no indication of how to fix the problems (aside from some “short time” kludges in the MyITForum article) or whether Microsoft will release a fix for the patch or a new version of it.
  • MS 13-057/KB 2803821 (for Windows 7) has been turning the top half of WMV videos black, either on encoding or decoding. As I reported last week, people running Adobe Premiere Pro CS6, Camtasia Studio 8.1, and Serif MoviePlus X6 had all reported problems, with a full description and fix offered by one burned customer the day after the patch was released. It took five days after that fix appeared online, and four days after my article appeared, for Microsoft to acknowledge the problem in KB 2803821. But as I write this, the patch still appears in the Automatic Update queue, checked, ready to be installed on any Win7 machine that’s looking for updates.
  • KB 2821895, a Windows 8/Windows RT “servicing stack update” released in tandem with last month’s Black Tuesday patches, causes serious problems with the System File Checker. After installing the patch, running an sfc /scannow command freezes the computer for up to 10 minutes, then generates many bogus error messages about corrupted files it cannot fix (see the sketch after this list for a quick way to check whether a machine has the patch). Microsoft’s recommendation is to run the DISM tool to repair Windows, when the only thing that’s broken is this botched patch. There’s been no fix to the patch, nor a new patch that I can find. If you installed this patch, there’s no way to uninstall it. More damning: Right now, KB 2821895 appears in Windows Update as an optional unchecked patch — Microsoft hasn’t even bothered to pull the patch.
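
If you want to know whether a given machine carries this update before trusting (or distrusting) System File Checker output, here is a minimal sketch in Python, assuming a Windows host where the wmic utility is available; the KB number is the one discussed above.

```python
import subprocess

SUSPECT_KB = "KB2821895"  # the servicing stack update discussed above

def kb_installed(kb_id: str) -> bool:
    """Return True if the given hotfix ID appears in the installed-updates list."""
    # 'wmic qfe get HotFixID' lists installed Windows updates; Windows only.
    output = subprocess.run(
        ["wmic", "qfe", "get", "HotFixID"],
        capture_output=True, text=True, check=True,
    ).stdout
    return kb_id.lower() in output.lower()

if __name__ == "__main__":
    if kb_installed(SUSPECT_KB):
        print(f"{SUSPECT_KB} is installed; treat sfc /scannow corruption reports with suspicion.")
    else:
        print(f"{SUSPECT_KB} not found; sfc /scannow results should be unaffected by this issue.")
```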

Source: infoworld.com

Unusual file-infecting malware steals FTP credentials

Thursday, July 18th, 2013

A new version of a file-infecting malware program that’s being distributed through drive-by download attacks is also capable of stealing FTP (File Transfer Protocol) credentials, according to security researchers from antivirus firm Trend Micro.

The newly discovered variant is part of the PE_EXPIRO family of file infectors that was identified in 2010, the Trend Micro researchers said Monday in a blog post. However, this version’s information theft routine is unusual for this type of malware.

The new threat is distributed by luring users to malicious websites that host Java and PDF exploits as part of an exploit toolkit. If visitors’ browser plug-ins are not up to date, the malware will be installed on their computers.

The Java exploits are for the CVE-2012-1723 and CVE-2013-1493 remote code execution vulnerabilities that were patched by Oracle in June 2012 and March 2013 respectively.
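
These exploits only succeed against machines whose Java plug-ins have not been updated, so a basic defensive step is simply knowing which runtime a machine carries. The following is a minimal sketch in Python that reports the locally installed Java version, assuming a java binary is on the PATH; comparing the reported version against the patched releases is left out.

```python
import subprocess

def java_version_banner() -> str:
    """Return the first line of 'java -version' output, or '' if Java is absent."""
    try:
        # 'java -version' prints its banner to stderr, not stdout.
        result = subprocess.run(["java", "-version"], capture_output=True, text=True)
    except FileNotFoundError:
        return ""
    banner = result.stderr or result.stdout
    return banner.splitlines()[0] if banner else ""

if __name__ == "__main__":
    banner = java_version_banner()
    print(banner if banner else "No Java runtime found on PATH.")
    # Runtimes predating the June 2012 and March 2013 Oracle updates remain
    # exposed to CVE-2012-1723 and CVE-2013-1493 respectively.
```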

Based on information shared by Trend Micro via email, a spike in infections with this new EXPIRO variant was recorded on July 11. “About 70 percent of total infections are within the United States,” the researchers said in the blog post.

Once the new EXPIRO variant runs on a system, it searches for .EXE files on all local, removable and networked drives, and adds its malicious code to them. In addition, it collects information about the system and its users, including Windows log-in credentials, and steals FTP credentials from a popular open-source FTP client called FileZilla.

The stolen information is stored in a file with a .DLL extension and is uploaded to the malware’s command and control servers.
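
For administrators assessing exposure, one quick check is whether a machine holds saved FileZilla configuration at all. Below is a minimal sketch in Python, assuming FileZilla's default Windows profile location under %APPDATA%; the file names listed are the ones the client commonly uses and should be treated as assumptions.

```python
import os
from pathlib import Path

# Files where FileZilla typically keeps saved site and connection details on
# Windows; these names and the %APPDATA%\FileZilla location are assumptions
# based on the client's default profile layout.
CANDIDATE_FILES = ["sitemanager.xml", "recentservers.xml", "filezilla.xml"]

def filezilla_credential_files() -> list:
    """Return any FileZilla config files present under %APPDATA%."""
    appdata = os.environ.get("APPDATA")
    if not appdata:
        return []
    profile = Path(appdata) / "FileZilla"
    return [profile / name for name in CANDIDATE_FILES if (profile / name).exists()]

if __name__ == "__main__":
    found = filezilla_credential_files()
    if found:
        print("Saved FileZilla configuration found; stored FTP credentials could be at risk:")
        for path in found:
            print(" ", path)
    else:
        print("No FileZilla profile found under %APPDATA%.")
```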

“The combination of threats used is highly unusual and suggests that this attack was not an off-the-shelf attack that used readily available cybercrime tools,” the Trend Micro researchers said.

The theft of FTP credentials suggests that the attackers are either trying to compromise websites or trying to steal information that organizations store on FTP servers. However, it doesn’t appear that this threat is targeting any particular industry, the Trend Micro researchers said via email.

Source:  csoonline.com

Report: Markets at risk due to cyberattacks against exchanges

Thursday, July 18th, 2013

Survey finds more than half of the world’s financial exchanges fell victim to some kind of cyberattack in the last year

A new report from the Research Department of the International Organization of Securities Commissions (IOSCO) and the World Federation of Exchanges (WFE) Office says that cybercrime within the securities markets can be considered a potentially systemic risk.

A joint study, published by the IOSCO and the WFE, examines how cybercrime is evolving, and what kind of threat it poses to the world’s markets. In a survey of 46 financial exchanges, 53 percent of them reported experiencing some kind of cyberattack in the last year. As such, the study’s authors say that cybercrime within the securities markets can be considered a potentially systemic risk, a notion that a majority of the exchanges surveyed agreed with.

Based on the responses from the exchanges, most of the attacks experienced were disruptive in nature, such as DDoS attacks that seek to prevent access to websites and networks; otherwise they were malware-related. It should be noted that financial theft didn’t show up in any of the responses. These responses, the report notes, suggest a shift away from financial gain and toward more disruptive aims.

In addition, the report says there is “a high level of awareness of the threat across exchanges surveyed.” Accordingly, 93 percent of the exchanges responded that cyber threats are discussed and understood by senior management, and the same share confirmed that disaster recovery plans are in place to deal with the aftermath of an attack. All of them reported that they would be able to identify a cyberattack within 48 hours.

Overall, the report shows that exchanges are highly aware of the risks they face, but the full extent of the threat remains unknown.

“One way to overcome this uncertainty and still engage with cybercrime is to envision and list potential factors and scenarios where cybercrime could have the most devastating impacts and then mould responses to best engage with those factors, effectively minimizing opportunities for cyber attacks to manifest systemic consequences,” the report concludes.

A majority of the respondents confirmed that their biggest fear about a major cyberattack was the potential impact on confidence and reputation, followed by integrity and efficiency, and then financial stability. Thus, a broader and more robust system-wide response to the issue is needed.

Source:  csoonline.com

NIST closer to critical infrastructure cybersecurity framework

Thursday, July 18th, 2013

The National Institute of Standards and Technology (NIST) last week held in San Diego the third of four workshops to develop a comprehensive cybersecurity framework for critical infrastructure, as required under an executive order signed by President Obama on February 12, 2013. NIST’s goal for the workshop was to solicit feedback from nearly five hundred attendees to generate content for the preliminary draft framework, which is due in early October.

Ahead of the workshop NIST issued a barebones draft outline of the framework, with the intent of having the attendees fill in a framework “core” organized around five cybersecurity functions: know, detect, prevent, respond and recover. Each of these five functions was to be populated with categories (for example, under “know,” the category might be “know the enterprise assets and systems”), and in turn each category has subcategories (for example, “know the enterprise risk architecture”).

For each category and subcategory, the attendees were asked to identify relevant informative references, such as existing standards, that might be helpful to achieving the objectives of the category or subcategory. NIST prepared a compendium of 322 references, mostly from standards-setting organizations such as ISO, ANSI or NERC, for this purpose.
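
To make the structure concrete, here is a hypothetical, heavily simplified rendering of that core in Python. The “know” entries echo the article’s examples; the reference strings are placeholders, not items from NIST’s actual 322-entry compendium.

```python
# Functions contain categories, categories contain subcategories, and each
# subcategory maps to informative references drawn from existing standards.
framework_core = {
    "know": {
        "know the enterprise assets and systems": {
            "know the enterprise risk architecture": [
                "ISO/IEC 27001 (placeholder reference)",
                "NERC CIP (placeholder reference)",
            ],
        },
    },
    "detect": {},
    "prevent": {},
    "respond": {},
    "recover": {},
}

# The workshop exercise amounted to filling in the empty levels and attaching
# the most relevant references to each subcategory.
for function, categories in framework_core.items():
    print(f"{function}: {len(categories)} draft categories")
```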

To get the work done, NIST assigned the attendees to eight working groups, each of which spent the three days of the workshop with a NIST facilitator, assessing and modifying the functions and deriving the categories and subcategories, while trying to map the relevant references to the appropriate parts of the core.

NIST plans to aggregate the results of the eight working groups into a consolidated document by the end of July and release a more advanced version by the end of August ahead of the next workshop on September 11 in Dallas.

Although few of the workshop attendees could gain visibility into what areas of agreement or disagreement emerged across the eight groups, NIST is pleased with how the process worked. “What it looks increasingly like is a very rich tool box and a rules management process that teaches you how to use this toolbox,” NIST Director Patrick Gallagher said during the second day of the workshop.

“Most of the groups took the task at hand and really started working on the outline and the things we presented,” said Adam Sedgewick, Senior Information Technology Policy Advisor at NIST and one of the chief organizers of the framework process.

“It is a little hard to generalize with the working groups being so separate,” one cybersecurity specialist for a large municipally owned utility said. “My sense is that aggregating the feedback will give NIST some valuable insight to refine the good start that the framework draft core represented.”

Indeed, NIST got high marks from most attendees for a smoothly run three days with high-caliber, professional facilitation. Even so, as the clock counts down to the extremely tight October deadline, the following cracks in the framework process continue to emerge:

Is NIST Reinventing the Wheel?

One recurrent concern that has cropped up throughout the entire process is how well this framework fits with existing critical infrastructure cybersecurity practices, most of which have been developed and refined over many years. The specific concern is that critical infrastructure asset owners and operators will have to contend with yet another set of requirements simply layered on top of existing practices, which, they believe, already serve them well.

“One theme I heard over and over is why we’re building something from scratch, wholly new, when existing frameworks would provide most of the building blocks,” one security director at a large investor-owned utility said.

NIST, however, dismisses this notion, saying that the goal of the process is to develop a higher-level, flexible framework that can be applied to the widest range of sectors. “At a high level this is about identifying the existing practices that are out there; that’s the theme that we’ve had from the very beginning,” NIST’s Sedgewick said. “We want to build off existing practices and not reinvent the wheel.”

Only a Few Selected Sectors Are Truly Active in the Process:
The presidential policy directive accompanying the February executive order identifies sixteen critical infrastructure sectors to which the framework will apply, covering a diverse range of industries, from chemical to agriculture to wastewater systems. However, to date, workshop attendance and participation have been dominated by, at most, three sectors — communications, energy and financial.

The relatively weak showing by the other sectors could handicap the broad applicability of the framework once it’s finalized in February 2014. “The sectors that don’t participate are sleeping at the wheel because this will have a profound impact on their businesses and their lack of presence means that they’re having little influence on the final product,” one telecom industry cybersecurity representative said.

“Our process is completely open and we work with the people who come to the table. Every stage of this process is completely open,” NIST’s Sedgewick said, adding that other sectors have been engaged in the process in different ways, such as through special webinars organized by trade associations and other groups.

Ongoing Concern About Coordination with DHS Efforts:
From the start of the framework process, participants have expressed continual concerns about how well the Department of Homeland Security (DHS), which has been assigned many related tasks under the executive order and policy directive, is coordinating with NIST, a concern only heightened by the upcoming departure of DHS Secretary Janet Napolitano, which was announced on the last day of the workshop. Both NIST and DHS representatives assured the workshop attendees that the two groups are working well on the shared and related tasks.

But some of the attendees felt even more concerned about the coordination between the two government arms following the workshop. For example, the executive order requires DHS to separately provide performance goals for the framework, while also stating that the framework itself shall include guidance for measuring the performance of an entity in implementing the framework.

A topic-specific working session on the DHS performance goals held at the workshop was described by one telecom attendee as a “train wreck.”

“They [DHS and NIST] were completely unprepared and were stumbling over themselves” in trying to explain the distinction between the two performance-related measures.

Ongoing Fear That The Voluntary Framework May Become Mandatory Regulation:
Again, from the outset of the framework process, many of the participants, particularly Washington representatives of critical infrastructure industries, have feared that political developments or a highly publicized cyber incident could push the current voluntary framework into the mandatory regulation category. This fear was underscored on July 11, the second day of the workshop, by the introduction of a draft Senate Commerce Committee cybersecurity bill, which incorporates the framework, still strictly on a voluntary basis.

The fear is that “the heavy hand will come down because the heavy hand is paranoid right now,” government cybersecurity consultant Tom Goldberg said. It doesn’t help matters that Section 10 of the executive order appears to give the government a hammer of sorts by ordering the sector-specific government agencies to determine if their current regulatory authorities are sufficient to ensure adequate cybersecurity and, if not, to propose new regulatory authorities.

These and other cracks may close as the framework becomes even more solidified — the implementation of the executive order and the framework process are still fluid. The House Homeland Security Committee’s Subcommittee on Cybersecurity, Infrastructure Protection, and Security Technologies will hold an oversight hearing tomorrow, July 18, on the executive order and the development of the framework, during which DHS and NIST will share more information on the status of their initiatives.

Source:  csoonline.com

With universities under attack, security experts talk best defenses

Thursday, July 18th, 2013

Faced with millions of hacking attempts a week, U.S. research universities’ best option is to segment information, so a balance can be struck between security and the need for an open network, experts say.

Universities are struggling to tighten security without destroying the culture of openness that is so important for information sharing among researchers in and outside the institutions, The New York Times reported on Wednesday.

Universities have become a major target for hackers looking to steal highly valuable research that is the backbone of thousands of patents awarded to the schools each year, the newspaper said. The research spans a wide variety of fields, ranging from drugs and computer chips to military weapons and medical devices.

Like U.S. corporations, universities are battling hackers who are believed to be mostly from China. However, the schools are in the unusual position of having to protect valuable data while maintaining an open network.

“It is a unique problem for universities,” said Nick Bennett, a security consultant for Mandiant.

Experts agree that the schools should audit all the information they hold, including research data and student and employee personal information; categorize it all and then decide the level of security needed. The extent of the protection should depend on the damage that could result if the data is stolen.

The most sensitive information, such as research related to national security, should be taken off the Internet and accessible only through university-approved computers on campus.

“[That way] you can still maintain somewhat of an open culture university wide, while still protecting the crown jewels,” Bennett said.

For less sensitive data, there’s more flexibility, experts say. Some information may only need additional access controls, such as two-factor authentication. Other data could also be wrapped in intrusion detection technology.
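
One way to picture the tiered model the experts describe is a simple mapping from classification level to controls. The sketch below is hypothetical; the tier names and control lists are illustrative and not drawn from any university's actual policy.

```python
# Hypothetical mapping from data classification tier to the controls that tier
# receives; controls grow stricter as the potential damage from theft grows.
CONTROLS_BY_TIER = {
    "restricted":   ["campus-only network segment", "university-approved devices only"],
    "confidential": ["two-factor authentication", "intrusion detection monitoring"],
    "internal":     ["standard access controls"],
    "public":       [],
}

def controls_for(tier: str) -> list:
    """Look up the controls a data set should receive for its assigned tier."""
    if tier not in CONTROLS_BY_TIER:
        raise ValueError(f"unknown classification tier: {tier}")
    return CONTROLS_BY_TIER[tier]

if __name__ == "__main__":
    print("weapons research data:", controls_for("restricted"))
    print("student records:", controls_for("confidential"))
```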

Universities tend to have many silos of data stored within individual schools and centers on campus. Oftentimes, the information is left up to the individual entities to protect, which can have disastrous results.

In an incident he called “industrial strength stupid,” Kevin Coleman, a cyberterrorism expert at Technolytics Institute, said he knew of one university where researchers set up their own server on the school’s network and connected it to the Internet without a firewall, antivirus software or intrusion detection capabilities.

“That action exposed much more than just that research initiative,” he said.

An alternative is for universities to follow a more corporate model, where a single department is responsible for setting and upholding standards across the organization, said Brandon Knight, a senior consultant for SecureState.

If such a top-down approach is impossible, then the various groups should have a way to share information on security and to collaborate whenever possible.

“When you see people implement their own security and reinvent the wheel and do this in a vacuum, it leads to problems,” Knight said. “People obviously want to do the best, but they don’t always know what they’re doing and they may not have the resources.”

The sophistication of hackers engaged in cyberespionage means they are likely to breach any organization’s security eventually. In those cases, the best defense is layered technology that prevents intruders from obtaining credentials and using them to reach internal systems, a strategy called “defense in depth.”

“Even if an attacker is able to get access to a few systems in your environment, there are still additional security controls in place preventing them from escalating their privileges and moving laterally to other sensitive systems,” Bennett said.

Many of the above suggestions are considered best practices in the security industry. But even the basics go a long way toward protecting computer systems.

“It doesn’t really matter if the attackers are from China, some other nation state or just hacktivists,” said Brent Huston, chief executive of MicroSolved. “Until [universities] get better at doing the basics right, they will continue to be hotbeds of attacker activity.”

Source:  cso.com

Network Solutions restores service after DDoS attack

Thursday, July 18th, 2013

Network Solutions said Wednesday it has restored services after a distributed denial-of-service (DDoS) attack knocked some websites it hosts offline for a few hours.

The company, which is owned by Web.com, registers domain names, offers hosting services, sells SSL certificates and provides other website-related administration services.

Network Solutions wrote on Facebook around mid-day Wednesday EDT that it was under attack. About three hours later, it said most customer websites should resolve normally.

Some customers commented on Facebook, however, that they were still experiencing downtime. Many suggested a problem with Network Solutions’ DNS (Domain Name System) servers, which are used to look up domain names and translate them into IP addresses that can be requested by a browser.
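
The lookup step those customers were pointing at is easy to illustrate: if a provider's DNS servers are overwhelmed, name resolution fails even when the web servers behind them are healthy. Here is a minimal sketch in Python using the standard socket library; "example.com" is a placeholder domain.

```python
import socket

def resolve(hostname: str) -> list:
    """Return the IP addresses a resolver currently reports for a hostname."""
    try:
        results = socket.getaddrinfo(hostname, 80, proto=socket.IPPROTO_TCP)
    except socket.gaierror as exc:
        # This is the failure mode a DNS-focused DDoS produces for end users.
        print(f"DNS resolution failed for {hostname}: {exc}")
        return []
    return sorted({entry[4][0] for entry in results})

if __name__ == "__main__":
    print(resolve("example.com"))
```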

DDoS attacks are a favored method of disrupting websites and involve sending large amounts of data in hopes of overwhelming servers and causing websites to stop responding to requests.

Focusing DDoS attacks on DNS servers has proven to be a very effective attack method. In early June, three domain name management and hosting providers — DNSimple, easyDNS and TPP Wholesale — reported DNS-related outages caused by DDoS attacks.

Hosting service DNSimple said it came under a DNS reflection attack, where DNS queries are sent to one party but the response is directed to another network, exhausting the victim network’s bandwidth.

Source:  cso.com