Posts Tagged ‘Encryption’

Target’s nightmare goes on: Encrypted PIN data stolen

Friday, December 27th, 2013

After hackers stole credit and debit card records for 40 million Target store customers, the retailer said customers’ personal identification numbers, or PINs, had not been breached.

Not so.

On Friday, a Target spokeswoman backtracked from previous statements and said criminals had made off with customers’ encrypted PIN information as well. But Target said the company stored the keys to decrypt its PIN data on separate systems from the ones that were hacked.

“We remain confident that PIN numbers are safe and secure,” Molly Snyder, Target’s spokeswoman, said in a statement. “The PIN information was fully encrypted at the keypad, remained encrypted within our system, and remained encrypted when it was removed from our systems.”

The problem is that when it comes to security, experts say the general rule of thumb is: where there’s a will, there’s a way. Criminals have already been selling Target customers’ credit and debit card data on the black market, where a single card is selling for as much as $100. Criminals can use that card data to create counterfeit cards. But PIN data is the most coveted of all. With PIN data, cybercriminals can make withdrawals from a customer’s account through an automated teller machine. And even if the key to unlock the encryption is stored on separate systems, security experts say there have been cases where hackers managed to get the keys and successfully decrypt scrambled data.
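For a sense of what “fully encrypted at the keypad” means in practice: before encryption, the PIN is combined with the card number into a fixed-size block (the widely used ISO 9564 “Format 0” layout), and that block is what gets encrypted, typically with Triple DES under keys held in separate hardware security modules. The sketch below shows only the block-construction step, as an illustration; it is not Target’s or any vendor’s actual code, and the encryption step itself is omitted.

```python
# Toy illustration of ISO 9564 "Format 0" PIN-block construction -- the
# step that happens at the keypad before the block is encrypted (with
# 3DES, under keys kept in separate hardware security modules).
# Illustrative only; not any vendor's actual implementation.

def pin_block_format0(pin: str, pan: str) -> bytes:
    """Combine a 4-12 digit PIN with the card's PAN into an 8-byte block."""
    # Field 1: control nibble 0, PIN length, PIN digits, 'F' padding.
    field1 = f"0{len(pin):1X}{pin}".ljust(16, "F")
    # Field 2: four zero nibbles, then the 12 rightmost PAN digits
    # excluding the final check digit.
    field2 = "0000" + pan[:-1][-12:]
    # XOR the two 8-byte fields; the result is what gets encrypted.
    return bytes(a ^ b for a, b in zip(bytes.fromhex(field1),
                                       bytes.fromhex(field2)))

block = pin_block_format0("1234", "4000001234567899")
print(block.hex())  # 041234fedcba9876
```

Because the encryption keys never leave the hardware security modules, an attacker who steals only the encrypted blocks — as Target describes — cannot recover PINs without also compromising those separate systems.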

Even before Friday’s revelations about the PIN data, two major banks, JPMorgan Chase and Santander Bank both placed caps on customer purchases and withdrawals made with compromised credit and debit cards. That move, which security experts say is unprecedented, brought complaints from customers trying to do last-minute shopping in the days leading to Christmas.

Chase said it is in the process of replacing all of its customers’ debit cards — about 2 million of them — that were used at Target during the breach.

The Target breach, from Nov. 27 to Dec. 15, is officially the second-largest breach of a retailer in history. The biggest was a 2005 breach at TJ Maxx that compromised records for 90 million customers.

The Secret Service and Justice Department continue to investigate.

Source:  nytimes.com

Critics: NSA agent co-chairing key crypto standards body should be removed

Monday, December 23rd, 2013

There’s an elephant in the room at the Internet Engineering Task Force.

Security experts are calling for the removal of a National Security Agency employee who co-chairs an influential cryptography panel, which advises a host of groups that forge widely used standards for the Internet Engineering Task Force (IETF).

Kevin Igoe, who in a 2011 e-mail announcing his appointment was listed as a senior cryptographer with the NSA’s Commercial Solutions Center, is one of two co-chairs of the IETF’s Crypto Forum Research Group (CFRG). The CFRG provides cryptographic guidance to IETF working groups that develop standards for a variety of crucial technologies that run and help secure the Internet. The transport layer security (TLS) protocol that underpins Web encryption and standards for secure shell connections used to securely access servers are two examples. Igoe has been CFRG co-chair for about two years, along with David A. McGrew of Cisco Systems.

Igoe’s leadership had largely gone unnoticed until reports surfaced in September that exposed the role NSA agents have played in “deliberately weakening the international encryption standards adopted by developers.” Until now, most of the resulting attention has focused on cryptographic protocols endorsed by the separate National Institute of Standards and Technology. More specifically, scrutiny has centered on a random number generator that The New York Times, citing a document leaked by former NSA contractor Edward Snowden, reported may contain a backdoor engineered by the spy agency.

Enter Dragonfly

Less visibly, the revelations about NSA influence over crypto standards have also renewed suspicions about the agency’s role in the IETF. To wit: it has brought new urgency to long-simmering criticism claiming that the CFRG was advocating the addition of a highly unproven technology dubbed “Dragonfly” to the TLS technology websites use to provide HTTPS encryption. Despite a lack of consensus about the security of Dragonfly, Igoe continued to champion it, critics said, citing several e-mails Igoe sent in the past two years. Combined with his ties to the NSA, Igoe’s continued advocacy of Dragonfly is creating a lack of confidence in his leadership, critics said.

“Kevin’s NSA affiliation raises unpleasant but unavoidable questions regarding these actions,” Trevor Perrin, a crypto expert and one of the most vocal critics, wrote Friday in an e-mail to the CFRG list serve. “It’s entirely possible these are just mistakes by a novice chair who lacks experience in a particular sort of protocol and is being pressured by IETF participants to endorse something. But it’s hard to escape an impression of carelessness and unseriousness in Kevin’s work. One wonders whether the NSA is happy to preside over this sort of sloppy crypto design.”

Igoe and McGrew didn’t respond to an e-mail seeking comment. This article will be updated if they respond later.

Like the Dual EC_DRBG standard adopted by NIST and now widely suspected to contain a backdoor, Dragonfly came with no security proof. And unlike several other better known candidates for “password-authenticated key exchange” (PAKE), most people participating in the CFRG or TLS working group knew little or nothing about it. TLS already has an existing PAKE called SRP, which critics say makes Dragonfly particularly redundant. PAKEs are complex and still not widely understood by crypto novices, but in essence, they involve the use of passwords to negotiate cryptographic keys used in encrypted TLS communications between servers and end users.
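To make the PAKE idea concrete, here is a deliberately tiny sketch in the style of SPEKE, one of the better-known PAKE designs — not Dragonfly and not SRP, and with toy-sized parameters. The core trick all PAKEs share: the shared password shapes the key exchange itself (here, it selects the Diffie-Hellman generator), so the two sides arrive at the same key only if their passwords match, without the password ever crossing the wire.

```python
# A minimal SPEKE-style sketch of the PAKE idea (NOT Dragonfly or SRP):
# the password selects the Diffie-Hellman generator, so both sides
# derive the same key only if their passwords match. Real PAKEs use
# >=2048-bit groups and add protections this toy omits.
import hashlib
import secrets

P = 2 ** 127 - 1  # toy prime modulus (a Mersenne prime)

def password_generator(password: str) -> int:
    # Hash the password to a group element; squaring confines it to the
    # quadratic-residue subgroup (the SPEKE trick).
    h = int.from_bytes(hashlib.sha256(password.encode()).digest(), "big")
    return pow(h, 2, P)

def exchange(pw_client: str, pw_server: str):
    g_c, g_s = password_generator(pw_client), password_generator(pw_server)
    a, b = secrets.randbelow(P - 3) + 2, secrets.randbelow(P - 3) + 2
    A, B = pow(g_c, a, P), pow(g_s, b, P)   # values exchanged in the clear
    return pow(B, a, P), pow(A, b, P)       # each side's derived key

k1, k2 = exchange("correct horse", "correct horse")
assert k1 == k2   # matching passwords -> matching keys
k3, k4 = exchange("correct horse", "battery staple")
assert k3 != k4   # wrong password -> (overwhelmingly likely) different keys
```

An eavesdropper sees only `A` and `B`; without the password, they cannot even identify the generator, which is what frustrates offline dictionary attacks — the property that makes PAKEs attractive, and their subtlety is what makes an unproven design like Dragonfly contentious.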

Update: Dragonfly developer Dan Harkins strongly defended the security of the PAKE.

“There are no known security vulnerabilities with dragonfly,” he wrote in an e-mail after this article was first published. “But it does not have a formal security proof to accompany it, unlike some other PAKE schemes. So the TLS working group asked the CFRG to look at it. They were not asked to ‘approve’ it, and they weren’t asked to ‘bless’ it. Just take a look and see if there’s any problems that would make it unsuitable for TLS. There were comments received on the protocol and they were addressed. There were no issues found that make it unsuitable for TLS.”

Harkins also took issue with characterizations by critics and this Ars article that Dragonfly is “untested” and “highly unproven.” He said it’s used in the 802.11 Wi-Fi standard as a secure, drop-in replacement for the WPA-PSK security protocol. It’s also found as a method in the extensible authentication protocol and as an alternative to pre-shared keys in the Internet key exchange protocol.

“Do you know of another PAKE scheme that has been so widely applied?” he wrote in his response.

Perrin is a programmer who primarily develops cryptographic applications. He is the developer or co-developer of several proposed Internet standards, including trust assertions for certificate keys and the asynchronous protocol for secure e-mail. In Friday’s e-mail, he provided a raft of reasons why he said Igoe should step down:

1) Kevin has provided the *ONLY* positive feedback for Dragonfly that can be found on the CFRG mailing list or meeting minutes. The contrast between Kevin’s enthusiasm and the group’s skepticism is striking [CFRG_SUMMARY]. It’s unclear what this enthusiasm is based on. There’s no record of Kevin making any effort to understand Dragonfly’s unusual structure, compare it to alternatives, consider possible use cases, or construct a formal security analysis.

2) Twice Kevin suggested a technique for deriving the Dragonfly password-based element which would make the protocol easy to break [IGOE_1, IGOE_2]. He also endorsed an ineffective attempt to avoid timing attacks by adding extra iterations to one of the loops [IGOE_3, IGOE_4]. These are surprising mistakes from an experienced cryptographer.

3) Kevin’s approval of Dragonfly to the TLS WG misrepresented CFRG consensus, which was skeptical of Dragonfly [CFRG_SUMMARY].

Perrin’s motion has been seconded by several other participants, including cryptographer William Whyte. Another critic supporting Igoe’s removal called on security expert Bruce Schneier to replace Igoe. In an e-mail to Ars, Schneier said he is unsure if he is a suitable candidate. “I’m probably too busy to chair, and I’m not really good at the whole ‘organizing a bunch of people’ thing,” he wrote.

In Harkins’ 1,117-word response, he wrote:

The opposition to it in TLS is not “long-simmering” as alleged in the article. It is very recent and the most vocal critic actually didn’t say anything until _after_ the close of Working Group Last Call (a state of draft development on the way to RFC status). As part of his critique, Trevor Perrin has noted that dragonfly has no security proof. That’s true and it’s certainly not new. Having a formal proof has never been a requirement in the past and it is not a requirement today. He has continued to refer to the comments received about the draft as if they are signs of flaws. This is especially shocking given he is referred to in the article as “the developer or co-developer of several proposed Internet standards.” Someone who develops, or co-develops Internet Standards knows how the sausage making works. Comments are made, comments are addressed. There has, to my knowledge, never been an Internet Draft that’s perfect in it’s -00 revision and went straight to publication as an RFC. His criticism is particularly mendacious.

Trevor Perrin has also points out the technique in which dragonfly generates a password-based element as being flawed. The technique was the result of a 2 year old thread on the TLS list on how to address a possible side-channel attack. Trevor doesn’t like it, which is fair, but on the TLS mailing list he has also said that even if it was changed to a way he wants he would still be against dragonfly.

Anyone who has spent any time at all watching how standards bodies churn out the sausage knows that suspicions and vast conspiracy theories are almost always a part of the proceedings. But in a post-Snowden world, there’s new legitimacy to criticism about NSA involvement, particularly when employees of the agency are the ones actively shepherding untested proposals.

Source:  arstechnica.com

Computers share their secrets if you listen

Friday, December 20th, 2013

Be afraid, friends, for science has given us a new way in which to circumvent some of the strongest encryption algorithms used to protect our data — and no, it’s not some super secret government method, either. Researchers from Tel Aviv University and the Weizmann Institute of Science discovered that they could steal even the largest, most secure RSA 4096-bit encryption keys simply by listening to a laptop as it decrypts data.

To accomplish the trick, the researchers used a microphone to record the noises made by the computer, then ran that audio through filters to isolate the vibrations made by the electronic internals during the decryption process. With that accomplished, some cryptanalysis revealed the encryption key in around an hour. Because the vibrations in question are so small, however, you need to have a high-powered mic or be recording them from close proximity. The researchers found that by using a highly sensitive parabolic microphone, they could record what they needed from around 13 feet away, but could also get the required audio by placing a regular smartphone within a foot of the laptop. Additionally, it turns out they could get the same information from certain computers by recording their electrical ground potential as it fluctuates during the decryption process.
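The filtering step — pulling one faint, informative frequency component out of a noisy recording — can be illustrated with the classic Goertzel algorithm. The sketch below uses synthetic data and is only a conceptual stand-in; the actual research combined far more sophisticated signal processing with adaptive chosen-ciphertext cryptanalysis.

```python
# Toy illustration of the signal-isolation step: measure the power of
# one narrow frequency in a noisy "recording" via the Goertzel
# algorithm. Synthetic data; the real attack's filtering and
# cryptanalysis were far more sophisticated.
import math

def goertzel_power(samples, sample_rate, target_hz):
    """Return the spectral power at `target_hz` in `samples`."""
    k = round(len(samples) * target_hz / sample_rate)  # nearest DFT bin
    w = 2 * math.pi * k / len(samples)
    coeff = 2 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

rate = 44100
# Synthetic "recording": a faint 12 kHz leak buried under loud 50 Hz hum.
signal = [0.05 * math.sin(2 * math.pi * 12000 * t / rate)
          + 1.0 * math.sin(2 * math.pi * 50 * t / rate)
          for t in range(4410)]
# The leak stands out against a quiet neighboring frequency.
assert goertzel_power(signal, rate, 12000) > goertzel_power(signal, rate, 9000)
```

In the real attack, the informative emissions came from voltage-regulation circuitry whose noise varies with the CPU’s workload, which is why different keys produce distinguishable acoustic signatures.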

Of course, the researchers only cracked one kind of RSA encryption, but they said that there’s no reason why the same method wouldn’t work on others — they’d just have to start all over to identify the specific sounds produced by each new encryption software. Guess this just goes to prove that while digital security is great, it can be rendered useless without its physical counterpart. So, should you be among the tin-foil hat crowd convinced that everyone around you is a potential spy, waiting to steal your data, you’re welcome for this newest bit of food for your paranoid thoughts.

Source:  engadget.com

Cisco says controversial NIST crypto, potential NSA backdoor, ‘not invoked’ in products

Thursday, October 17th, 2013

Controversial crypto technology known as Dual EC DRBG, thought to be a backdoor for the National Security Agency, ended up in some Cisco products as part of their code libraries. But Cisco says it cannot be used because the company chose other crypto as an operational default that can’t be changed.

Dual EC DRBG, or Dual Elliptic Curve Deterministic Random Bit Generator, is a standard from the National Institute of Standards and Technology, and a crypto toolkit from RSA that included it is thought to have been one main way the generator ended up in hundreds of vendors’ products.

Because Cisco is known to have used the BSAFE crypto toolkit, the company has faced questions about where Dual EC DRBG may have ended up in the Cisco product line. In a Cisco blog post today, Anthony Grieco, principal engineer at Cisco, tackled this topic in a notice about how Cisco chooses crypto.

“Before we go any further, I’ll go ahead and get it out there: we don’t use the Dual_EC_DRBG in our products. While it is true that some of the libraries in our products can support the DUAL_EC_DRBG, it is not invoked in our products.”

Grieco wrote that Cisco, like most tech companies, uses cryptography in nearly all its products, if only for secure remote management.

“Looking back at our DRBG decisions in the context of these guiding principles, we looked at all four DRBG options available in NIST SP 800-90. As none had compelling interoperability or legal implementation implications, we ultimately selected the Advanced Encryption Standard Counter mode (AES-CTR) DRBG as our default.”

Grieco stated this was “because of our comfort with the underlying implementation, the absence of any general security concerns, and its acceptable performance. Dual_EC_DRBG was implemented but wasn’t seriously considered as the default given the other good choices available.”

Grieco said the DRBG choice that Cisco made “cannot be changed by the customer.”
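The situation Grieco describes — a library that contains several random-bit generators but hard-wires one as an unchangeable default — is a common pattern. The sketch below is an assumption-laden illustration of that pattern, not Cisco’s actual code; the stand-in generator is hash-based rather than the AES-CTR DRBG of NIST SP 800-90 that Cisco says it uses.

```python
# A sketch (NOT Cisco's actual code) of the pattern described above: a
# crypto library may contain several DRBG implementations, yet wire one
# in as a fixed default that callers cannot reconfigure. The stand-in
# generator is hash-based; real products use AES-CTR DRBG per SP 800-90.
import hashlib
import os

class HashDRBG:
    """Toy deterministic random-bit generator (hash-counter based)."""
    def __init__(self, seed: bytes):
        self._state, self._counter = seed, 0
    def generate(self, n: int) -> bytes:
        out = b""
        while len(out) < n:
            self._counter += 1
            out += hashlib.sha256(
                self._state + self._counter.to_bytes(8, "big")).digest()
        return out[:n]

class CryptoLibrary:
    # Other DRBGs (e.g. Dual_EC_DRBG) could sit in the codebase, present
    # but never selected. The default is fixed at "build time": there is
    # deliberately no setter, mirroring "cannot be changed by the customer."
    _DEFAULT = HashDRBG

    def __init__(self):
        self._drbg = self._DEFAULT(os.urandom(32))

    def random_bytes(self, n: int) -> bytes:
        return self._drbg.generate(n)

lib = CryptoLibrary()
print(len(lib.random_bytes(16)))  # 16
```

The security argument is structural: code that is present but unreachable from any configuration path cannot leak output, which is why “implemented but not invoked” is a meaningful (if not ideal) defense.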

Faced with the Dual EC DRBG controversy, which was triggered by the revelations about the NSA by former NSA contractor Edward Snowden, NIST itself has re-opened comments about this older crypto standard.

“The DRBG controversy has brought renewed focus on the crypto industry and the need to constantly evaluate cryptographic algorithm choices,” Grieco wrote in the blog today. “We welcome this conversation as an opportunity to improve security of the communications infrastructure. We’re open to serious discussions about the industry’s cryptographic needs, what’s next for our products, and how to collectively move forward.” Cisco invited comment on that online.

Grieco concluded, “We will continue working to ensure our products offer secure algorithms, and if they don’t, we’ll fix them.”

Source:  computerworld.com

Ransomware comes of age with unbreakable crypto, anonymous payments

Thursday, October 17th, 2013

[Screenshot: CryptoLocker’s ransom demand — http://cdn.arstechnica.net/wp-content/uploads/2013/10/ScreenShot1-640x498.jpg]

Malware that takes computers hostage until users pay a ransom is getting meaner, and thanks to the growing prevalence of Bitcoin and other digital payment systems, it’s easier than ever for online crooks to capitalize on these “ransomware” schemes. If this wasn’t already abundantly clear, consider the experience of Nic, an Ars reader who fixes PCs for a living and recently helped a client repair the damage inflicted by a particularly nasty title known as CryptoLocker.

It started when an end user in the client’s accounting department received an e-mail purporting to come from Intuit. Yes, the attached zip archive with an executable inside should have been a dead giveaway that this message was malicious and in no way affiliated with Intuit. But accounting employees are used to receiving e-mails from financial companies. When the recipient clicked on the attachment, he saw a white box flash briefly on his screen but didn’t notice anything else out of the ordinary. He then locked his computer and attended several meetings.

Within a few hours, the company’s IT department received word of a corrupt file stored on a network drive that was available to multiple employees, including the one who received the malicious e-mail. A quick investigation soon uncovered other corrupted files, most or all of which had been accessed by the accounting employee. By the time CryptoLocker had run its course, hundreds of gigabytes worth of company data was no longer available.

“After reading about the ransomware on reddit earlier this week, we guessed [that it was] what we were dealing with, as all the symptoms seemed to be popping up,” Nic, who asked that his last name not be published, wrote in an e-mail to Ars. “We went ahead and killed the local network connection on the machine in question and we were immediately presented with a screenshot letting us know exactly what we were dealing with.”

According to multiple participants in the month-long discussion, CryptoLocker is true to its name. It uses strong cryptography to lock all files that a user has permission to modify, including those on secondary hard drives and network storage systems. Until recently, few antivirus products detected the ransomware until it was too late. By then, victims were presented with a screen like the one displayed on the computer of the accounting employee, which is pictured above. It warns that the files are locked using a 2048-bit version of the RSA cryptographic algorithm and that the data will be forever lost unless the private key is obtained from the malware operators within three days of the infection.
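The reason the warning is credible comes down to hybrid encryption: ransomware of this type encrypts the files with a symmetric key, then “wraps” that key with the RSA public key baked into the malware, so only the operators’ private key can unwrap it. The sketch below is a deliberately toy-sized model of that structure — not the malware’s actual code, with laughably small RSA parameters and a hash-based stream cipher standing in for real symmetric encryption.

```python
# Toy-sized model of the hybrid scheme such ransomware uses (NOT the
# malware's actual code): a random symmetric key encrypts the data, and
# the RSA public key shipped with the malware wraps that key. Real
# attacks use RSA-2048; these parameters are tiny on purpose.
import hashlib
import secrets

# Tiny RSA key pair: the malware ships only (n, e); operators hold d.
p, q, e = 61, 53, 17
n, d = p * q, pow(e, -1, (p - 1) * (q - 1))

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR with a SHA-256-derived keystream."""
    stream, counter = b"", 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

file_key = secrets.randbelow(n - 2) + 2          # per-victim symmetric key
ciphertext = keystream_xor(str(file_key).encode(), b"Q4 payroll data")
wrapped = pow(file_key, e, n)                    # only d can reverse this

# What paying the ransom buys: the operators unwrap the key with d.
recovered_key = pow(wrapped, d, n)
assert keystream_xor(str(recovered_key).encode(), ciphertext) == b"Q4 payroll data"
```

With realistic 2048-bit parameters, the unwrap step is computationally out of reach for anyone without the private key, which is why none of the victims in the thread reported breaking the encryption.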

“Nobody and never will be able to restore files”

“The server will destroy the key after a time specified in this window,” the screen warns, displaying a clock that starts with 72:00:00 and counts down with each passing second. “After that, nobody and never will be able to restore files. To obtain the private key for this computer, which will automatically decrypt files, you need to pay 300 USD / 300 EUR / similar amount in another currency.”

None of the reddit posters reported any success in breaking the encryption. Several also said they had paid the ransom and received a key that worked as promised. Full backup files belonging to Nic’s clients were about a week old at the time that CryptoLocker first took hold of the network. Nic advised them to comply with the demand. The ransomware operators delivered a key, and about 24 hours later, some 400 gigabytes of data was restored.

CryptoLocker accepts payment in Bitcoins or through the MoneyPak payment cards, as the following two screenshots illustrate.

The outcome hasn’t been as happy for other CryptoLocker victims. Whitehats who tracked the ransomware eventually took down some of the command and control servers that the operators relied on. As a result, people on reddit reported, some victims who paid the ransom were unable to receive the unique key needed to unlock files on their computer. The inability to undo the damage hit some victims particularly hard. Because CryptoLocker encrypted all files that an infected computer had access to, the ransomware in many cases locked the contents of backup disks that were expected to be relied upon in the event that the main disks failed. (The threat is a graphic example of the importance of “cold,” or offline backup, a backup arrangement that prevents data from being inadvertently overwritten.)

Several people have reported that the 72-hour deadline is real and that the only way it can be extended is by setting a computer’s BIOS clock back in time. Once the clock runs out, the malware uninstalls itself. Reinfecting a machine does nothing to bring back the timer or restore the old encrypted session.

Earlier this year, researchers from Symantec who infiltrated the servers of one ransomware syndicate conservatively estimated that its operators were easily able to clear $5 million per year. No wonder CryptoLocker has had such a long run. As of last week, more than three weeks after it was first published, the reddit thread was still generating five to 10 new posts per day. Also a testament to the prevalence and staying power of CryptoLocker, researchers from security firms TrendMicro and Emsisoft provided technical analyses here and here.

“This bug is super scary and could really wipe the floor with lots of small businesses that don’t have the best backup practices,” Nic observed. “Given the easy money available to scam operators, it’s not hard to see why.”

Source:  arstechnica.com

Researchers create nearly undetectable hardware backdoor

Thursday, September 26th, 2013

University of Massachusetts researchers have found a way to make hardware backdoors virtually undetectable.

With recent NSA leaks and surveillance tactics being uncovered, researchers have redoubled their scrutiny of things like network protocols, software programs, encryption methods, and software hacks. Most problems out there are caused by software issues, either from bugs or malware. But one group of researchers at the University of Massachusetts decided to investigate the hardware side, and they found a new way to hack a computer processor at such a low level that it’s almost impossible to detect.

What are hardware backdoors?

Hardware backdoors aren’t exactly new. We’ve known for a while that they are possible, and we have examples of them in the wild. They are rare and require a very precise set of circumstances to implement, which is probably why they aren’t discussed as often as software or network threats. Even though hardware backdoors are rare and notoriously difficult to pull off, they are a cause for concern because the damage they could cause could be much greater than software-based threats. Stated simply, a hardware backdoor is a malicious piece of code placed in hardware so that it cannot be removed and is very hard to detect. This usually means the non-volatile memory in chips like the BIOS on a PC, or in the firmware of a router or other network device.

A hardware backdoor is very dangerous because it’s so hard to detect, and because it typically has full access to the device it runs on, regardless of any password or access control system. But how realistic are these threats? Last year, a security consultant showcased a fully-functioning hardware backdoor. All that’s required to implement that particular backdoor is flashing a BIOS with a malicious piece of code. This type of modification is one reason why Microsoft implemented Secure Boot in Windows 8, to ensure the booting process in a PC is trusted from the firmware all the way to the OS. Of course, that doesn’t protect you from other chips on the motherboard being modified, or the firmware in your router, printer, smartphone, and so on.
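The Secure Boot idea mentioned above is a chain of trust: each boot stage refuses to hand off control until the next stage’s image matches a trusted measurement. The sketch below is a heavily simplified illustration — real implementations verify asymmetric signatures anchored in a hardware root of trust, whereas a plain hash comparison against baked-in digests stands in here.

```python
# Simplified sketch of the verified-boot idea behind Secure Boot: each
# stage checks the next stage's image against a trusted digest before
# handing off. Real systems use asymmetric signatures and a hardware
# root of trust; plain SHA-256 comparisons stand in here.
import hashlib

def digest(image: bytes) -> str:
    return hashlib.sha256(image).hexdigest()

bootloader = b"bootloader v2.3"
kernel = b"kernel 3.11"

# Digests "fused" into the firmware at signing time; the firmware that
# holds them is the root of trust performing the checks below.
trusted = {"bootloader": digest(bootloader), "kernel": digest(kernel)}

def boot(bl: bytes, kern: bytes) -> str:
    if digest(bl) != trusted["bootloader"]:
        return "halt: bootloader tampered"
    if digest(kern) != trusted["kernel"]:
        return "halt: kernel tampered"
    return "booted"

print(boot(bootloader, kernel))          # booted
print(boot(b"evil bootloader", kernel))  # halt: bootloader tampered
```

The article’s caveat follows directly from this structure: the chain only protects what it measures, so firmware in a router, printer, or peripheral chip outside the chain remains unverified.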

New research

The University of Massachusetts researchers found an even more clever way to implement a hardware backdoor. Companies have taken various measures for years now to ensure their chips aren’t modified without their knowledge. After all, most of our modern electronics are manufactured in a number of foreign factories. Visual inspections are commonly done, along with tests of the firmware code, to ensure nothing was changed. But in this latest hack, even those measures may not be enough. The technique the researchers used is ingenious and quite complex.

The researchers used a technique called transistor doping. Basically, a transistor is made of a crystalline structure that provides the functionality needed to amplify or switch a current passing through it. Doping a transistor means altering that crystalline structure by adding impurities, changing the way it behaves. Their target was Intel’s hardware Random Number Generator (RNG), a basic building block of any encryption system, since it provides the unpredictable starting numbers from which encryption keys are created. By doping transistors in the RNG, the researchers can make the chip behave in a slightly different way. In this case, they simply changed the transistors so that one particular number became a constant instead of a variable. That means a number that was supposed to be random and impossible to predict is now always the same.

By introducing these changes at the hardware level, it weakens the RNG, and in turn weakens any encryption that comes from keys created by that system, such as SSL connections, encrypted files, and so on. Intel chips contain self tests that are supposed to catch hardware modifications, but the researchers claim that this change is at such a low level in the hardware, that it doesn’t get detected. Fixing this flaw isn’t easy either, even if you could detect it. The RNG is part of the security process in a CPU, and for safety, it is isolated from the rest of the system. That means there is nothing a user or even administrator can do to correct the problem.
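The attack’s power is easiest to see in numbers. If part of a key that was supposed to be random is frozen to a constant the attacker knows, the search space collapses from astronomically large to trivially small. The toy below models this with deliberately tiny sizes (a 32-bit “key” with all but 8 bits frozen); the constant and bit widths are illustrative assumptions, not the actual doped values.

```python
# Toy demonstration of why a "doped" RNG is devastating: freezing the
# supposedly-random part of a key to a known constant collapses the
# attacker's search space. Sizes here are tiny and purely illustrative.
import secrets

def honest_key() -> int:
    return secrets.randbits(32)              # 2**32 possibilities

DOPED_CONSTANT = 0xDEADBEEF                  # "random" bits frozen by doping

def doped_key() -> int:
    # Only the low 8 bits still vary; the rest is the frozen constant.
    return (DOPED_CONSTANT & ~0xFF) | secrets.randbits(8)

# An attacker who knows the constant brute-forces 256 candidates
# instead of ~4.3 billion.
target = doped_key()
candidates = [(DOPED_CONSTANT & ~0xFF) | low for low in range(256)]
assert target in candidates
```

Crucially, the chip’s output still looks random to casual inspection and can pass built-in self-tests, which is exactly why the researchers argue the modification is so hard to detect.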

There’s no sign that this particular hardware backdoor is being used in the wild, but if this type of change is possible, then it’s likely that groups with a lot of technical expertise could find similar methods. This may lend more credence to moves from various countries to ban certain parts from some regions of the world. This summer Lenovo saw its systems being banned from defense networks in many countries after allegations that China may have added vulnerabilities in the hardware of some of its systems. Of course, with almost every major manufacturer having their electronic parts made in China, that isn’t much of a relief. It’s quite likely that as hardware hacking becomes more cost effective and popular, we may see more of these types of low-level hacks being performed, which could lead to new types of attacks, and new types of defense systems.

Source:  techrepublic.com

Snowden leaks: US and UK ‘crack online encryption’

Friday, September 6th, 2013

US and UK intelligence have reportedly cracked the encryption codes protecting the emails, banking and medical records of hundreds of millions of people.

Disclosures by leaker Edward Snowden allege the US National Security Agency (NSA) and the UK’s GCHQ successfully decoded key online security protocols.

They suggest some internet companies provided the agencies backdoor access to their security systems.

The NSA is said to spend $250m (£160m) a year on the top-secret operation.

It is codenamed Bullrun, after an American Civil War battle, according to the documents published by the Guardian in conjunction with the New York Times and ProPublica.

The British counterpart scheme run by GCHQ is called Edgehill, after the first major engagement of the English Civil War, say the documents.

‘Behind-the-scenes persuasion’

The reports say the UK and US intelligence agencies are focusing on the encryption used in 4G smartphones, email, online shopping and remote business communication networks.

The encryption techniques are used by internet services such as Google, Facebook and Yahoo.

Under Bullrun, it is said that the NSA has built powerful supercomputers to try to crack the technology that scrambles and encrypts personal information when internet users log on to access various services.

The NSA also collaborated with unnamed technology companies to build so-called back doors into their software – something that would give the government access to information before it is encrypted and sent over the internet, it is reported.

As well as supercomputers, methods used include “technical trickery, court orders and behind-the-scenes persuasion to undermine the major tools protecting the privacy of everyday communications”, the New York Times reports.

The US reportedly began investing billions of dollars in the operation in 2000 after its initial efforts to install a “back door” in all encryption systems were thwarted.

Gobsmacked

During the next decade, it is said the NSA employed code-breaking computers and began collaborating with technology companies at home and abroad to build entry points into their products.

The documents provided to the Guardian by Mr Snowden do not specify which companies participated.

The NSA also hacked into computers to capture messages prior to encryption, and used broad influence to introduce weaknesses into encryption standards followed by software developers the world over, the New York Times reports.

When British analysts were first told of the extent of the scheme they were “gobsmacked”, according to one memo among more than 50,000 documents shared by the Guardian.

NSA officials continue to defend the agency’s actions, claiming the US will be put at considerable risk if messages from terrorists and spies cannot be deciphered.

But some experts argue that such efforts could actually undermine national security, noting that any back doors inserted into encryption programs can be exploited by those outside the government.

It is the latest in a series of intelligence leaks by Mr Snowden, a former NSA contractor, who began providing caches of sensitive government documents to media outlets three months ago.

In June, the 30-year-old fled his home in Hawaii, where he worked at a small NSA installation, to Hong Kong, and subsequently to Russia after making revelations about a secret US data-gathering programme.

A US federal court has since filed espionage charges against Mr Snowden and is seeking his extradition.

Mr Snowden, however, remains in Russia where he has been granted temporary asylum.

Source:  BBC

Crypto flaw makes millions of smartphones susceptible to hijacking

Tuesday, July 23rd, 2013

New attack targets weakness in at least 500 million smartphone SIM cards.

Millions of smartphones could be remotely commandeered in attacks that allow hackers to clone the secret encryption credentials used to secure payment data and identify individual handsets on carrier networks.

The vulnerabilities reside in at least 500 million subscriber identity module (SIM) cards, which are the tiny computers that store some of a smartphone’s most crucial cryptographic secrets. Karsten Nohl, chief scientist at Security Research Labs in Berlin, told Ars that the defects allow attackers to obtain the encryption key that safeguards the user credentials. Hackers who possess the credentials—including the unique International Mobile Subscriber Identity and the corresponding encryption authentication key—can then create a duplicate SIM that can be used to send and receive text messages, make phone calls to and from the targeted phone, and possibly retrieve mobile payment credentials. The vulnerabilities can be exploited remotely by sending a text message to the phone number of a targeted phone.

“We broke a significant number of SIM cards, and pretty thoroughly at that,” Nohl wrote in an e-mail. “We can remotely infect the card, send SMS from it, redirect calls, exfiltrate call encryption keys, and even hack deeper into the card to steal payment credentials or completely clone the card. All remotely, just based on a phone number.”

Nohl declined to identify the specific manufacturers or SIM models that contain the exploitable weaknesses. The vulnerabilities are in the SIM itself and can be exploited regardless of the particular smartphone they manage.

The cloning technique identified by the research team from Security Research Labs exploits a constellation of vulnerabilities commonly found on many SIMs. One involves the automatic responses some cards generate when they receive invalid commands from a mobile carrier. Another stems from the use of a single Data Encryption Standard key to encrypt and authenticate messages sent between the mobile carrier and individual handsets. A third flaw involves the failure to perform security checks before a SIM installs and runs Java applications.

The flaws allow an attacker to send an invalid command that carriers often issue to handsets to instruct them to install over-the-air (OTA) updates. A targeted phone will respond with an error message that’s signed with the 1970s-era DES cipher. The attacker can then use the response message to retrieve the phone’s 56-bit DES key. Using a pre-computed rainbow table like the one released in 2009 to crack cell phone encryption keys, an attacker can obtain the DES key in about two minutes. From there, the attacker can use the key to send a valid OTA command that installs a Java app that extracts the SIM’s IMSI and authentication key. The secret information is tantamount to the user ID and password used to authenticate a smartphone to a carrier network and associate a particular handset to a specific phone number.
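The time-memory tradeoff behind such rainbow tables can be sketched with a toy hash-chain table. The sketch below is illustrative only: it uses a 16-bit key space and SHA-256 as a stand-in for the DES signing operation (the real attack inverts 56-bit DES keys with large precomputed tables, and all names and sizes here are assumptions):

```python
import hashlib

# Toy stand-in for the SIM's DES-signed error message: maps a 16-bit
# "key" to a signature. The real attack targets 56-bit DES keys.
KEY_BITS = 16
KEY_SPACE = 1 << KEY_BITS
CHAIN_LEN = 64

def sign(key):
    return hashlib.sha256(key.to_bytes(2, "big")).digest()

def reduce_to_key(sig, step):
    # Reduction function: maps a signature back into the key space,
    # varied per chain position to limit chain merges.
    return (int.from_bytes(sig[:4], "big") + step) % KEY_SPACE

def build_table(n_chains):
    # Precomputation: walk each chain once and store only its
    # (endpoint -> start) pair, trading one-off work for fast lookups.
    table = {}
    for i in range(n_chains):
        start = key = (i * 997) % KEY_SPACE  # spread starting points
        for step in range(CHAIN_LEN):
            key = reduce_to_key(sign(key), step)
        table[key] = start
    return table

def lookup(table, target_sig):
    # Online phase: assume the unknown key sits at position `pos` of
    # some chain, roll forward to an endpoint, then replay that chain.
    for pos in range(CHAIN_LEN):
        key = reduce_to_key(target_sig, pos)
        for step in range(pos + 1, CHAIN_LEN):
            key = reduce_to_key(sign(key), step)
        if key in table:
            cand = table[key]
            for step in range(CHAIN_LEN):
                if sign(cand) == target_sig:
                    return cand
                cand = reduce_to_key(sign(cand), step)
    return None
```

Coverage is probabilistic: more and longer chains cover more of the key space, at the cost of precomputation time and occasional false alarms from merging chains.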

Armed with this data, an attacker can create a fully functional SIM clone that could allow a second phone under the control of the attacker to connect to the network. People who exploit the weaknesses might also be able to run unauthorized apps on the SIM that redirect SMS and voicemail messages or make unauthorized purchases against a victim’s mobile wallet. It doesn’t appear that attackers could steal contacts, e-mails, or other sensitive information, since SIMs don’t have access to data stored on the phone, Nohl said.

Nohl plans to further describe the attack at next week’s Black Hat security conference in Las Vegas. He estimated that there are about seven billion SIMs in circulation. That suggests the majority of SIMs aren’t vulnerable to the attack. Right now, there isn’t enough information available for users to know if their particular smartphones are susceptible to this technique. This article will be updated if carriers or SIM manufacturers provide specific details about vulnerable cards or mitigation steps that can be followed. In the meantime, Security Research Labs has published this post that gives additional information about the exploit.

Source:  arstechnica.com

Avoid built-in SSD encryption to ensure data recovery after failure, warns specialist

Monday, July 8th, 2013

Companies wanting to ensure their data is recoverable from solid state disk (SSD) drives should make sure they use third-party encryption tools with known keys rather than relying on devices’ built-in encryption, a data-recovery specialist has advised.

Noting that the shift from mechanical hard drives to flash RAM-based solid state disk (SSD) drives had increased the complexity of data recovery, Adrian Briscoe, general manager of data-recovery specialist Kroll Ontrack, told CSO Australia that the growing use of SSD in business servers, mobile phones, tablets, laptops and even cloud data centres had made recovering data from the devices “a very black or white situation”.

“You either get everything or you don’t get anything at all” from damaged SSD-based equipment, he explained.

“With mechanical hard drives it’s a percentage situation, particularly since large drives are typically not used to capacity. But with SSDs we spend a lot of time trying to find ways of recovering data. The major issue is interacting with the [SSD controller] chips: Although there are only six controller chip makers, there are at least 220 manufacturers of SSD devices, and the way they’re designed is different from one device to the next.”

Many manufacturers, in particular, had taken their own approaches to data security, automatically scrambling the information on SSDs with encryption keys that are stored on the device itself.

That has presented new challenges for the company’s data-recovery engineers, who work from a dedicated data-recovery clean-room in Brisbane where damaged hard drives are regularly rebuilt to the point where their data can be recovered.

The proportion of SSD and flash RAM media going to that cleanroom had grown steadily, from 2.1 per cent of all data recovery jobs in late 2008 to 6.41 per cent of jobs in Q4 2012.

Recovering data from SSDs is already more difficult than sequential-write hard drives because SSD-stored data is distributed throughout the flash RAM cells by design. Once SSD-stored keys are made inaccessible by damage to the device, however, recovering the data becomes far more complicated – and chances of getting any of it back plummet.

“SSD devices do have encryption on them, and we are recommending people not use hardware encryption on an SSD if they are wanting to ever recover data from that device,” Briscoe explained, suggesting that users instead run computer-based software like the open-source TrueCrypt, whose keys can be managed by the user rather than internally by the drive itself.

“By having encryption turned on, an SSD with a hardware key is going to fail any data recovery effort,” he continued. “We are not hackers, and we can’t get into encrypted data. Instead, we’re recommending that people use something that holds the key outside the device.”

Many users had yet to appreciate the complexity that SSD poses, with a November 2012 customer survey suggesting just 31 per cent were aware of the complexity of SSD-based encryption, while 48 per cent said there was no additional risk posed by using SSDs and a further 38 per cent said they didn’t know.

The SSD challenge isn’t limited to smartphone-wielding users, however: as data-centre operators increasingly turn to SSD to boost the effective speed of their data-storage operations, Briscoe warned that a growing number of the company’s recovery operations were involving data lost to cloud-computing operators.

“A lot of vendors are using hybrid solutions with a bank of SSDs in a storage area network, then write data to [conventional] drives,” he said.

“We’re seeing more and more instances of cloud providers losing data: they rely very much on snapshots, and if something happens to the data – if there is corruption to the operating system or some type of user error – we are having more and more cloud providers coming to us with data loss.”

Source:  cso.com

Calif. attorney general: Time to crack down on companies that don’t encrypt

Friday, July 5th, 2013

State’s first data breach report finds that more than 1.4 million residents’ data would have been safe had companies used encryption

If organizations throughout California encrypted their customers’ sensitive data, more than 1.4 million Californians would not have had their information put at risk in 2012, according to a newly released report [PDF] on statewide data breaches from California Attorney General Kamala Harris. All told, some 2.5 million people were affected by the 131 breaches reported to the state. Notably, organizations in the Golden State are only required to report a breach if it affects 500 or more users, so it’s plausible (if not likely) that the overall number of breaches is higher.

California does offer incentives to companies that embrace encryption, according to Harris, but because the carrot isn’t working, she’s now turning to the stick: She cautioned that her office “will make it an enforcement priority to investigate breaches involving unencrypted personal information” and will “encourage … law-enforcement agencies to similarly prioritize these investigations.”

California breachin’
According to the report simply titled “Data Breach Report 2012,” 103 different entities suffered data breaches in 2012, nine of which reported more than one. Three of the entities reporting multiple breaches were payment card issuers: American Express with 19, Discover Financial Services with three, and Yolo Federal Credit Union with two. Those breaches occurred either at a merchant or at a payment processor.

Other key stats from the report:

  • The average breach incident involved the information of 22,500 individuals.
  • The retail industry reported the most data breaches in 2012: 34 (26 percent of the total reported breaches), followed by finance and insurance with 30 (23 percent).
  • More than half of the breaches (56 percent) involved Social Security numbers.
  • Outsider intrusions accounted for 45 percent of the total incidents, with 23 percent occurring at a merchant via such techniques as skimming devices installed at a point-of-sale terminal.
  • 10 percent of the breaches were caused by insiders — employees, contractors, vendors, customers — who accessed systems and data without authority.

Encryption and beyond
Beyond threatening greater scrutiny of companies that suffer data breaches but don’t use encryption, Harris recommended that the California Legislature should consider enacting a law requiring organizations to use encryption to protect personal information.

Additionally, the report called on organizations to review and tighten their security controls to protect personal information, including training of employees and contractors. “More than half of the breaches reported in 2012 … were the result of intentional access to data by outsiders or by unauthorized insiders,” the report says. “This suggests a need to review and strengthen security controls applied to personal information.”

The report further noted that organizations not only “have legal and moral obligations” to protect personal information, but California law requires businesses “to use reasonable and appropriate security procedures and practices to protect personal information.”

Suggested practices include using multifactor authentication to protect sensitive systems, having strong encryption to protect user IDs and passwords in storage, and providing regular training for employees, contractors, and other agents who handle personal information. “Many of the 17 percent of breaches that resulted from procedural failures were likely the result of ignorance of or noncompliance with organiza­tional policies regarding email, data destruction, and website posting,” the report says.

It also cites companies for making breach notices sent to customers too difficult to read. In reviewing sample notices, Harris’ office found that the average reading level of the breach notices submitted in 2012 was 14th grade. That’s “significantly higher than the average reading level in the U.S.” according to the National Assessment of Adult Literacy.

“Communications professionals can help in making the notice more accessible, using techniques like shorter sentences, familiar words and phrases, the active voice, and layout that supports clarity, such as headers for key points and smaller text blocks,” according to the report.

Additionally, the report called on companies to offer customers affected by data breaches mitigation products — such as credit monitoring — or information on security freezes. These types of protective measures can limit victims’ risk of identity theft, “yet in 29 percent of the breaches of this type, no credit monitoring or other mitigation product was offered to victims.”

Finally, the report recommended legislation to amend the state’s breach notification laws to require notification of breaches of online credentials, such as user name and password.

Source:  infoworld.com

Quantum encryption keys obtained from a moving plane

Thursday, April 4th, 2013

A technical demonstration shows that an exchange with satellites is possible.

Here in the Ars science section, we cover a lot of interesting research that may eventually lead to the sort of technology discussed in other areas of the site. In many cases, that sort of deployment will be years away (assuming it ever happens). But in a couple of fields, the rapid pace of proof-of-principle demonstrations hints that commercialization isn’t too far beyond the horizon.

One of these areas is quantum key distribution between places that aren’t in close proximity. Quantum keys hold the promise of creating a unique, disposable key on demand in such a way that any attempts to eavesdrop will quickly become obvious. We know how to do this over relatively short distances using fiber optic cables, so the basic technique is well-established. Throughout the past couple of years, researchers have been getting rid of the cables: first by sending quantum information across a lake, then by exchanging it between two islands.

The latter feat involved a distance of 144km, which is getting closer to the sorts of altitudes occupied by satellites. But exchanging keys with satellites would seem to add a significant challenge—they move. Over the weekend, Nature Photonics published a paper that indicates we shouldn’t necessarily view that as an obstacle. The paper describes a team of German researchers who managed to obtain quantum keys transmitted from a moving aircraft.

The aircraft in question was a Dornier 228 turboprop in which the authors set up a shock-protected optical bench to generate the photons they needed for the experiment. Those were sent via a fiber optic cable to a transmitter on the underside of the aircraft. This included tracking equipment that allowed it to keep the transmissions pointed at a specific ground station.

That ground station was a 40cm telescope operated by the German Aerospace Center. It was kept pointed at the aircraft by using GPS coordinates transmitted by the aircraft over classical communications channels. Once it had a fix, a beacon laser was used to illuminate the aircraft, confirming that a directional link had been established. At that point, the plane’s hardware could start transmitting bits using the polarization of photons.

Since this was a proof of principle, the authors simply rotated through four potential polarizations in order to ensure that they could tell when the ground station was picking up the appropriate bit. One of the big problems was noise. The ground station’s detectors were picking up background noise at a rate of about 1,000 events per second, while the aircraft was only transmitting 800 bits per second (so there was a lot of noise to filter out). Some of this was actually from the aircraft’s blinking anticollision light, which the detector picked up nicely.

By filtering out the noise (and discarding anything from when the anticollision light flashed), the authors were able to achieve a rate of about 145 bits a second. Adding the extra information needed to detect eavesdropping would drop that to eight bits a second. That would be a horrific rate for transmitting data, but remember, these are just the bits of a key. Once the key is established, encrypted communications can take place on much faster channels. If they were willing to gather keys for a while, they could get as many as 80 kilobits in a single passage of the plane.

In the end, the authors say that the hard parts were developing the pointing system and accounting for the rotation of the hardware as it tracked, which can otherwise skew the measurements. But with those developed, it seems that exchanging keys with a free-moving object is relatively straightforward. We may not be ready to put this in orbit yet, but it certainly seems like we’re getting very close to being ready to try.

Source:  arstechnica.com

“Lucky Thirteen” attack snarfs cookies protected by SSL encryption

Monday, February 4th, 2013

Exploit is the latest to subvert crypto used to secure Web transactions.

Software developers are racing to patch a recently discovered vulnerability that allows attackers to recover the plaintext of authentication cookies and other encrypted data as they travel over the Internet and other unsecured networks.

The discovery is significant because in many cases it makes it possible for attackers to completely subvert the protection provided by the secure sockets layer and transport layer security protocols. Together, SSL, TLS, and a close TLS relative known as Datagram Transport Layer Security are the sole cryptographic means for websites to prove their authenticity and to encrypt data as it travels between end users and Web servers. The so-called “Lucky Thirteen” attacks devised by computer scientists to exploit the weaknesses work against virtually all open-source TLS implementations, and possibly implementations supported by Apple, Microsoft, and Cisco Systems as well.

The attacks are extremely complex, so for the time being, average end users are probably more susceptible to attacks that use phishing e-mails or rely on fraudulently issued digital certificates to defeat the Web encryption protection. Nonetheless, the success of the cryptographers’ exploits—including the full plaintext recovery of data protected by the widely used OpenSSL implementation—has clearly gotten the attention of the developers who maintain those programs. Already, the Opera browser and PolarSSL have been patched to plug the hole, and developers for OpenSSL, NSS, and CyaSSL are expected to issue updates soon.

“The attacks can only be carried out by a determined attacker who is located close to the machine being attacked and who can generate sufficient sessions for the attacks,” researchers Nadhem J. AlFardan and Kenneth G. Paterson wrote in a Web post that accompanied their research. “In this sense, the attacks do not pose a significant danger to ordinary users of TLS in their current form. However, it is a truism that attacks only get better with time, and we cannot anticipate what improvements to our attacks, or entirely new attacks, may yet to be discovered.”

A PDF of their paper is here.

How it works

Lucky Thirteen uses a technique known as a padding oracle that works against the main cryptographic engine in TLS that performs encryption and ensures the integrity of data. It processes data into 16-byte chunks using a routine known as MEE (MAC-Encode-Encrypt), which runs data through a MAC (Message Authentication Code) algorithm, then encodes and encrypts it. The routine adds “padding” data to the ciphertext so the resulting data can be neatly aligned in 8- or 16-byte boundaries. The padding is later removed when TLS decrypts the ciphertext.

The attacks start by capturing the ciphertext as it travels over the Internet. Using a long-discovered weakness in TLS’s CBC, or cipher block chaining, mode, attackers replace the last several blocks with chosen blocks and observe the amount of time it takes for the server to respond. TLS messages that contain the correct padding will take less time to process. A mechanism in TLS causes the transaction to fail each time the application encounters a TLS message that contains tampered data, requiring attackers to repeatedly send malformed messages in a new session following each previous failure. By sending large numbers of TLS messages and statistically sampling the server response time for each one, the scientists were able to eventually correctly guess the contents of the ciphertext.
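The padding-oracle principle at work here can be shown in miniature. The sketch below is not Lucky Thirteen itself, which extracts a far subtler timing signal from the MAC check; it substitutes a hypothetical 8-byte Feistel toy cipher in CBC mode and a boolean oracle standing in for the server's observable response-time difference:

```python
import hashlib
import os

BLOCK = 8
KEY = os.urandom(16)  # secret held by the "server"

def _round(half, r):
    return hashlib.sha256(KEY + bytes([r]) + half).digest()[:BLOCK // 2]

def enc_block(b):
    # Tiny 4-round Feistel cipher: a stand-in for AES, not real crypto.
    L, R = b[:4], b[4:]
    for r in range(4):
        L, R = R, bytes(x ^ y for x, y in zip(L, _round(R, r)))
    return L + R

def dec_block(b):
    L, R = b[:4], b[4:]
    for r in reversed(range(4)):
        L, R = bytes(x ^ y for x, y in zip(R, _round(L, r))), L
    return L + R

def pkcs7_pad(data):
    n = BLOCK - len(data) % BLOCK
    return data + bytes([n]) * n

def cbc_encrypt(pt, iv):
    out, prev = b"", iv
    for i in range(0, len(pt), BLOCK):
        prev = enc_block(bytes(x ^ y for x, y in zip(pt[i:i + BLOCK], prev)))
        out += prev
    return out

def padding_oracle(iv, ct):
    # Stand-in for the timing side channel: reveals only whether the
    # decrypted message ends in valid PKCS#7 padding.
    prev, pt = iv, b""
    for i in range(0, len(ct), BLOCK):
        pt += bytes(x ^ y for x, y in zip(dec_block(ct[i:i + BLOCK]), prev))
        prev = ct[i:i + BLOCK]
    n = pt[-1]
    return 1 <= n <= BLOCK and pt.endswith(bytes([n]) * n)

iv = os.urandom(BLOCK)
ct = cbc_encrypt(pkcs7_pad(b"YELLOW SUBMARINE"), iv)

# Recover the last plaintext byte of block two by tampering with the
# last byte of block one until the oracle reports valid \x01 padding.
c1, c2 = ct[:BLOCK], ct[BLOCK:2 * BLOCK]
recovered_byte = None
for guess in range(256):
    forged = c1[:-1] + bytes([c1[-1] ^ guess ^ 0x01])
    if padding_oracle(iv, forged + c2):
        recovered_byte = guess
        break
print(chr(recovered_byte))  # 'E', the last byte of "YELLOW SUBMARINE"
```

Repeating the trick across byte positions and blocks recovers the full plaintext, which is why implementations must take identical time on the padding check regardless of where it fails.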

It took the scientists as few as 2^23 sessions to extract the entire contents of a TLS-encrypted authentication cookie. They were able to improve their results when they knew details of the ciphertext they were trying to decrypt. Cookies formatted in base64 encoding, for example, could be extracted in 2^19 TLS sessions. The researchers required 2^13 sessions when a byte of plaintext in one of the last two positions in a block was already known.

To make the attacks more efficient, they can incorporate methods unveiled two years ago in a separate TLS attack dubbed BEAST. That attack used JavaScript in the browser to open multiple sessions. By combining it with the padding oracle exploit, attackers required 2^13 sessions to extract each byte without needing to know one of the last two positions in a block.

The Lucky Thirteen attacks are only the latest exploits to subvert TLS, which along with SSL is intended to safeguard bank transactions, login sessions, and other sensitive activities carried out over unsecured networks. One of the most serious recent attacks used a universal wildcard certificate to spoof the credentials of virtually any website on the Internet. The previously mentioned BEAST attack was able to decrypt an eBay authentication cookie, although the technique required the attackers to first subvert something known as the same origin policy. Late last year, the same researchers behind BEAST devised CRIME, an attack that used Web compression to subvert TLS/SSL.

TLS remains vulnerable to such attacks largely because of design decisions engineers made in the mid-1990s when SSL was first devised, Johns Hopkins University professor Matthew Green observed in a blog post published Monday that explains how Lucky Thirteen works. Since then, engineers have applied a series of “band-aids” to the protocols rather than fixing the problems outright.

The attacks apply to all implementations that conform to TLS version 1.1 or 1.2, or to DTLS version 1.0 or 1.2. They also apply to implementations that conform to SSL version 3.0 or TLS version 1.0 when they have been tweaked to incorporate countermeasures designed to defeat a previous padding oracle attack discovered several years ago.

It’s not the first time SSL and TLS have been brought down using a padding oracle attack. The protocols were later patched to prevent attacks that used subtle differences in timing to ferret out details about the encrypted plaintext. At the time, some cryptographers acknowledged a tiny window that could still permit that type of exploit.

The scientists dubbed their exploit “Lucky Thirteen” because it’s made possible by the fact that the TLS MAC calculation includes 13 bytes of header information.

“So, in the context of our attacks, 13 is lucky—from the attacker’s perspective at least,” the researchers wrote in their Web post. “This is what passes for humor amongst cryptographers.”

Source:  arstechnica.com

New ransomware trojan encrypts files to make you pay up

Friday, February 1st, 2013

A new type of ransomware has appeared, and it’s got the potential to be a lot more nasty than other trojans in the category. This as-yet unnamed trojan follows through on the threats made by other malware authors. It actually encrypts files on a PC in an attempt to force users to pay up.

Ransomware started popping up a few years ago with a now-familiar MO. An infected user is confronted by a message claiming that their PC has been somehow used in a criminal act or is at risk in some way. In order to rectify the imaginary problem, a fee has to be paid. This extortion scheme is sometimes accompanied by the locking down of parts of the system, but never before has ransomware gone to the extremes of actually encrypting files and holding them hostage. There’s no way to reclaim access to the files by simply removing the trojan.

When a PC picks up the new trojan, it goes to work by creating two encryption keys based on the PC’s ID. It also spawns a new instance of ctfmon.exe or svchost.exe and injects its own code there. This allows it to run in the background more stealthily. The first of the encryption keys is used to encrypt communications with the command and control server. The second key is the one causing all the heartache.

The second key is encrypted by the first, and sent to the command and control server for safekeeping. The server then determines which files should be locked up. It goes after images, documents, and some executables, using the second key to encrypt them. In this case, the scary warning that pops up is not making idle threats — those files aren’t coming back without the key.
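The two-key arrangement described above is a standard key-wrapping pattern: a file key that never needs to be stored in the clear, wrapped under a channel key and held by the attacker's server. A minimal sketch of the flow, using a toy SHA-256 counter-mode stream cipher in place of the trojan's undocumented ciphers (all names and values here are hypothetical):

```python
import hashlib
import os

def keystream_xor(key, data):
    # Toy stream cipher: SHA-256 in counter mode. A stand-in for the
    # real malware's ciphers, which are not publicly documented.
    out, ctr = b"", 0
    while len(out) < len(data):
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(x ^ y for x, y in zip(data, out))

# Key 1: derived from the machine's ID, protects traffic to the
# command and control server.
machine_id = b"PC-1234"
k1 = hashlib.sha256(b"c2-channel" + machine_id).digest()

# Key 2: encrypts the victim's files, then is itself wrapped with
# key 1 and shipped to the C2 server, so no plaintext copy of the
# file key needs to remain on the infected machine.
k2 = os.urandom(32)
wrapped_k2 = keystream_xor(k1, k2)

document = b"quarterly report"
ransomed = keystream_xor(k2, document)

# Only a party holding k1 (the C2 operator) can unwrap k2 and decrypt.
assert keystream_xor(keystream_xor(k1, wrapped_k2), ransomed) == document
```

This is why removing the trojan doesn't help: once `k2` is gone from the machine, the files are recoverable only with the operator's cooperation.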

The goal here is not to cripple a computer, so the Windows files are left intact. However, the malware does block regedit, task manager, and msconfig. Since the malware controller has the encryption keys, he or she could technically remove the file encryption if the fee is paid. That’s far from a guarantee, though.

Source:  geek.com

Carnegie Mellon, MIT researchers create grammar-aware password cracking algorithm

Friday, January 25th, 2013

You’re best off forgetting your grammar lessons when it comes to creating passphrases, according to new research out of Carnegie Mellon University and MIT.

The researchers say that using grammar – good or bad – can clue in hackers about the words in a multi-word password. And they’ve built an algorithm as a proof-of-concept to show it (the team, led by software engineering Ph.D. student Ashwini Rao of CMU’s Institute for Software Research, will present its research at the Association for Computing Machinery’s Conference on Data and Application Security and Privacy on Feb. 20 in San Antonio).

The team tested its grammar-aware password cracking algorithm against 1,434 passwords containing 16 or more characters, and cracked 10% of the dataset via the algorithm.

“We should not blindly rely on the number of words or characters in a password as a measure of its security,” Rao said, in a statement.

The researchers say that while a password based on a phrase or short sentence can be easier for a user to remember, it also makes it simpler to crack because grammatical rules narrow word choices and structures (in other words, a passphrase with pronoun-verb-adjective-noun would be easier to crack than one made up of noun-verb-adjective).

The researchers found that “Hammered asinine requirements,” for instance, is harder to crack than the shorter but seemingly clever “Th3r3 can only b3 #1!”
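The intuition is that each grammatical slot contributes only as many guesses as its part of speech has members, so the search space is the product of the slot sizes. A sketch with illustrative, assumed vocabulary counts (these numbers are not from the CMU/MIT paper):

```python
import math

# Rough, assumed vocabulary sizes per part of speech.
POS_SIZES = {"pronoun": 30, "verb": 5000, "adjective": 8000, "noun": 20000}

def structure_bits(structure):
    # Entropy in bits of a passphrase drawn uniformly from a grammar
    # template: log2 of the product of word choices at each slot.
    total = 1
    for part in structure:
        total *= POS_SIZES[part]
    return math.log2(total)

weak = structure_bits(["pronoun", "verb", "adjective", "noun"])
strong = structure_bits(["noun", "verb", "adjective", "noun"])
print(f"{weak:.1f} vs {strong:.1f} bits")
```

Swapping the small pronoun class for a large noun class adds roughly `log2(20000/30) ≈ 9.4` bits, even though both passphrases contain the same number of words.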

Passwords in general have come under increasing fire by security pros, as some of the highest profile breaches (LinkedIn, Nvidia) have been the result of password compromises or resulted in passwords (including encrypted ones) being made public.

Google’s security team is looking into ways to avoid passwords altogether for logging into websites.

Source:  networkworld.com

China reinforces its ‘Great Firewall’ to prevent encryption

Monday, December 17th, 2012

It may be a real problem for Chinese citizens and Westerners, but that hasn’t stopped the Chinese government from using new technology to plug holes in the “Great Firewall of China.”

China has begun reinforcing its infamous firewall with new tech designed to prevent encrypted communication.

To prevent the more enterprising citizens of China from exploiting holes in the country’s firewall through the use of virtual private networks and circumventors, the Chinese government is using new technology to block encryption, according to The Guardian.

The publication reports that both consumers and businesses are being hit by the new Internet barrier, which is able to “learn, discover and block” encrypted channels provided by VPN companies. According to one company that has a customer base in the Asian country, one of the largest telecom providers in the area, China Unicom, is now automatically killing connections to the Internet when a virtual private network is detected.

For Chinese residents, this could mean that Western reading material and Web sites, including social networks, become even harder to access. By using Blockedinchina.net, you can see which sites are currently inaccessible through standard Internet access — the list includes Facebook, Twitter, and YouTube — sites that may contain content that goes against China’s policies or ethos.

Companies that run a VPN business that reaches out to a Chinese audience must register with the Ministry of Industry and Information Technology, according to The Global Times. In addition, only Chinese companies and Sino-foreign joint ventures are allowed to apply to begin a VPN business in China, possibly due to registration regulations which keep the “Great Firewall of China” operating properly.

The alleged VPN-detection and blocking technology will not only hit audiences that want to access social networks, but will also affect businesses. One executive at a multinational tech firm in China told the publication:  “You can’t block all VPNs without blocking businesses, including Chinese businesses. China wants businesses to put regional headquarters in China. It has these economic and business goals that are reliant on modern business infrastructure.”

Source:  CNET

Virtual machine used to steal crypto keys from other VM on same server

Tuesday, November 6th, 2012


New technique could pierce a key defense found in cloud environments.

Piercing a key defense found in cloud environments such as Amazon’s EC2 service, scientists have devised a virtual machine that can extract private cryptographic keys stored on a separate virtual machine when it resides on the same piece of hardware.

The technique, unveiled in a research paper published by computer scientists from the University of North Carolina, the University of Wisconsin, and RSA Laboratories, took several hours to recover the private key for a 4096-bit ElGamal-generated public key using the libgcrypt v.1.5.0 cryptographic library. The attack relied on “side-channel analysis,” in which attackers crack a private key by studying the electromagnetic emanations, data caches, or other manifestations of the targeted cryptographic system.

One of the chief selling points of virtual machines is their ability to run a variety of tasks on a single computer rather than relying on a separate machine to run each one. Adding to the allure, engineers have long praised the ability of virtual machines to isolate separate tasks, so one can’t eavesdrop or tamper with the other. Relying on fine-grained access control mechanisms that allow each task to run in its own secure environment, virtual machines have long been considered a safer alternative for cloud services that cater to the rigorous security requirements of multiple customers.

“In this paper, we present the development and application of a cross-VM side-channel attack in exactly such an environment,” the scientists wrote. “Like many attacks before, ours is an access-driven attack in which the attacker VM alternates execution with the victim VM and leverages processor caches to observe behavior of the victim.”

The attack extracted an ElGamal decryption key that was stored on a VM running the open-source GNU Privacy Guard. The code that leaked the tell-tale details to the malicious VM is the latest version of the widely used libgcrypt, although earlier releases are also vulnerable. The scientists focused specifically on the Xen hypervisor, which is used by services such as EC2. The attack worked only when both attacker and target VMs were running on the same physical hardware. That requirement could make it harder for an attacker to target a specific individual or organization using a public cloud service. Even so, it seems feasible that attackers could use the technique to probe a given machine and possibly mine cryptographic keys stored on it.

The technique, as explained by Johns Hopkins University professor and cryptographer Matthew Green, works by causing the attack VM to allocate contiguous memory pages and then execute instructions that load the cache of the virtual CPU with cache-line-sized blocks it controls. Green continued:

The attacker then gives up execution and hopes that the target VM will run next on the same core—and moreover, that the target is in the process of running the square-and-multiply operation. If it is, the target will cause a few cache-line-sized blocks of the attacker’s instructions to be evicted from the cache. Which blocks are evicted is highly dependent on the operations that the attacker conducts.

The technique allows attackers to acquire fragments of the cryptographic “square-and-multiply” operation carried out by the target VM. The process can be difficult, since some of the fragments can contain errors that have the effect of throwing off an attacker trying to guess the contents of a secret key. To get around this limitation, the attack compares thousands of fragments to identify those with errors. The scientists then stitched together enough reliable fragments to deduce the decryption key.
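The square-and-multiply leak at the heart of the attack can be simulated in a few lines. The sketch below is a deliberate simplification: a recorded flag stands in for the actual cache probing, and the real attack must additionally reassemble noisy, error-laden fragments across thousands of runs:

```python
SECRET_EXPONENT = 0b101101

trace = []  # attacker's prime+probe observations, one per exponent bit

def victim_square_and_multiply(base, exp, mod):
    # Left-to-right square-and-multiply modular exponentiation, the
    # pattern targeted in libgcrypt's ElGamal decryption.
    result = 1
    for bit in bin(exp)[2:]:
        result = (result * result) % mod    # square: every iteration
        evicted = False
        if bit == "1":
            result = (result * base) % mod  # multiply: 1-bits only
            evicted = True   # running the multiply code evicts the
                             # attacker's primed cache lines
        trace.append(evicted)  # what probing the cache would reveal
    return result

ciphertext = victim_square_and_multiply(42, SECRET_EXPONENT, 1009)
recovered = int("".join("1" if e else "0" for e in trace), 2)
print(bin(recovered))  # prints 0b101101: the exponent, read from the cache
```

Because the multiply step runs only for 1-bits, the sequence of evictions spells out the private exponent bit by bit, which is exactly the fragment stream the researchers stitched together.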

The researchers say it’s the first demonstration of a successful side-channel attack on a virtualized, multicore server. Their paper lists a few countermeasures administrators can take to prevent the key leakage. One is to avoid co-residency and instead use a separate, “air-gapped” computer for high-security tasks. Two additional countermeasures include the use of side-channel resistant algorithms and a defense known as core scheduling to prevent attack VMs from being able to tamper with the cache processes of the other virtual machine. Future releases of Xen are also expected to modify the way so-called processor “interrupts” are handled.

While the scope of the attack remains limited, the research is important because it opens the door to more practical attacks in the future.

“This threat has long been discussed, and security people generally agree that it’s a concern,” Green wrote. “But actually implementing such an attack has proven surprisingly difficult.”

Source:  arstechnica.com

Internet architects mull changes to fight SSL-busting CRIME attacks

Friday, October 19th, 2012

IETF proposes change to long-standing practice of compressing encrypted data.

Engineers who help oversee Internet standards are proposing changes to long-standing website practices in order to guard against a new attack that exposes user login credentials even when they are transmitted through encrypted channels.

The tentative recommendations are included in a draft document filed earlier this week with the IETF, or Internet Engineering Task Force. It is among the first technical documents to grapple with an attack unveiled last month that allowed white hat hackers to decrypt the contents of encrypted session cookies used to log in to user accounts on Dropbox.com, Github.com, and other sites. (The sites took measures to block the exploit after researchers Juliano Rizzo and Thai Duong gave them advance notice of their exploit.) Short for Compression Ratio Info-leak Made Easy, CRIME provided a reliable and repeatable means for attackers to defeat the widely used Secure Sockets Layer and Transport Layer Security protocols. Together, they form the basis of virtually all encryption between websites and end users.

CRIME is able to deduce the contents of encrypted communications that use data compression to reduce the amount of time it takes to move packets from one point to another. By injecting different pieces of known data into a compressed SSL data stream over and over and then comparing the number of bytes each time, attackers can use the method to deduce the encrypted contents character by character. The method worked against protected Web communications that used TLS compression or SPDY, an open networking protocol developed by Google engineers.
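
As a rough sketch of that byte-counting oracle (the cookie value and payload layout here are invented for illustration), compressing attacker-controlled data in the same stream as a secret makes correct guesses compress at least as well as wrong ones:

```python
import zlib

SECRET = "sessionid=7f3a9c"   # hypothetical cookie the attacker wants

def compressed_len(injected: str) -> int:
    # Models attacker-controlled data compressed in the same
    # TLS/SPDY stream as the secret.
    return len(zlib.compress((injected + SECRET).encode(), 9))

def next_char(known: str, alphabet: str) -> str:
    # A guess that extends the secret's prefix creates a longer
    # back-reference, so it compresses no worse than any wrong guess.
    return min(alphabet, key=lambda c: compressed_len(known + c))

# In practice ties occur at byte granularity, so the real attack
# repeats measurements and varies padding to break them.
```

Repeating this one character at a time is what lets the attacker walk through the secret, as the article describes.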

“It is RECOMMENDED to disable compression when communications are not trivial, unless traffic increase is considerable,” IETF members B. Kihara and K. Shimizu wrote in the draft, which was billed as a “work in progress.” “If data are confidential and other mitigations are inapplicable, compression MUST be disabled, especially when the compression is applied in the lower layer like TLS compression.”

When compressing whole data in the same context is unavoidable, the draft continued, encryption schemes must insert random paddings to prevent disclosure of the original size of the compressed data. “Note that this mitigation cannot prevent attackers from guessing secrets by statistical approaches,” the authors cautioned.

The ineffectiveness of padding wasn’t lost on other cryptographers. “Adding random padding to hide the length of compressed/encrypted data is like setting your Prius on fire because it doesn’t pollute enough,” Johns Hopkins University professor Matthew Green said in a Twitter dispatch. Marsh Ray, a software developer with two-factor authentication provider PhoneFactor, replied: “Or like adding noise to electric cars so hearing impaired people can cross the street?”

This week’s draft will expire in the middle of April and could be updated, replaced, or obsoleted by other documents at any time.

Source:  arstechnica.com

Scientists crack RSA SecurID 800 tokens, steal cryptographic keys

Tuesday, June 26th, 2012

Scientists penetrate hardened security devices in under 15 minutes.

RSA’s SecurID 800 is one of at least five commercially available security devices susceptible to a new attack that extracts cryptographic keys used to log in to sensitive corporate and government networks.

Scientists have devised an attack that takes only minutes to steal the sensitive cryptographic keys stored on a raft of hardened security devices that corporations and government organizations use to access networks, encrypt hard drives, and digitally sign e-mails.

The exploit, described in a paper to be presented at the CRYPTO 2012 conference in August, requires just 13 minutes to extract a secret key from RSA’s SecurID 800, which company marketers hold out as a secure way for employees to store credentials needed to access confidential virtual private networks, corporate domains, and other sensitive environments. The attack also works against other widely used devices, including the electronic identification cards the government of Estonia requires all citizens 15 years or older to carry, as well as tokens made by a variety of other companies.

Security experts have long recognized the risks of storing sensitive keys on general purpose computers and servers, because all it takes is a vulnerability in a single piece of hardware or software for adversaries to extract the credentials. Instead, companies such as RSA; Belcamp, Maryland-based SafeNet; and Amsterdam-based Gemalto recommend the use of special-purpose USB sticks that act as a digital Fort Knox that employees can use to safeguard their credentials. In theory, keys can’t be removed from the devices except during a highly controlled export process, in which they’re sealed in a cryptographic wrapper that is impossible for outsiders to remove.

“They’re designed specifically to deal with the case where somebody gets physical access to it or takes control of a computer that has access to it, and they’re still supposed to hang onto their secrets and be secure,” Matthew Green, a professor specializing in cryptography in the computer science department at Johns Hopkins University, told Ars. “Here, if the malware is very smart, it can actually extract the keys out of the token. That’s why it’s dangerous.” Green has blogged about the attack here.

If devices such as the SecurID 800 are a Fort Knox, the cryptographic wrapper is like an armored car used to protect the digital asset while it’s in transit. The attack works by repeatedly exploiting a tiny weakness in the wrapper until its contents are converted into plaintext. One version of the attack uses an improved variation of a technique introduced in 1998 that works against keys using the RSA cryptographic algorithm. By subtly modifying the ciphertext thousands of times and putting each version through the import process, an attacker can gradually reveal the underlying plaintext, as Daniel Bleichenbacher, the cryptographer behind the original exploit, discovered. Because the technique relies on “padding” inside the cryptographic envelope to produce clues about its contents, cryptographers call it a “padding oracle attack.” Such attacks rely on so-called side channels to learn whether a ciphertext corresponds to a correctly padded plaintext in a targeted system.

It’s this version of the attack the scientists used to extract secret keys stored on RSA’s SecurID 800 and many other devices that use PKCS#11, a programming interface included in a wide variety of commercial cryptographic devices. Under the attack Bleichenbacher devised, it took attackers about 215,000 oracle calls on average to pierce a 1024-bit cryptographic wrapper. That required enough overhead to prevent the attack from posing a practical threat against such devices. By modifying the algorithm used in the original attack, the revised method reduced the number of calls to just 9,400, requiring only about 13 minutes of queries, Green said.
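
The two ingredients the attack combines can be sketched in a few lines (a toy illustration with textbook-sized numbers, not the researchers' code): RSA's multiplicative malleability, and a conformance check that answers one yes/no question per query:

```python
def pkcs1_v15_conformant(padded: bytes) -> bool:
    # PKCS#1 v1.5 block type 2 starts 0x00 0x02, followed by nonzero
    # padding bytes, a 0x00 separator, and the payload. The oracle
    # reveals only whether this shape is present.
    return len(padded) >= 11 and padded[0] == 0x00 and padded[1] == 0x02

def malleate(c: int, s: int, e: int, n: int) -> int:
    # RSA is multiplicatively homomorphic: (c * s^e) mod n decrypts
    # to (m * s) mod n, so each query about a related ciphertext
    # narrows the interval that must contain m.
    return (c * pow(s, e, n)) % n

# Toy RSA key (p=61, q=53): n=3233, e=17, d=2753
n, e, d = 3233, 17, 2753
c = pow(65, e, n)                       # ciphertext of m = 65
c2 = malleate(c, 2, e, n)
assert pow(c2, d, n) == (65 * 2) % n    # c2 decrypts to m * 2
```

Each conformant/non-conformant answer tells the attacker whether the malleated plaintext falls in the narrow range beginning 0x00 0x02, which is the one bit per query that the thousands of oracle calls accumulate.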

Other devices that store RSA keys and are vulnerable to the same attack include the Aladdin eTokenPro and iKey 2032 made by SafeNet, the CyberFlex manufactured by Gemalto, and Siemens’ CardOS, according to the paper.

The researchers also used refinements of an attack introduced in 2002 by Serge Vaudenay that exploits weaknesses in what is known as CBC padding to extract symmetric keys.
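
A Vaudenay-style attack turns an analogous yes/no check into an oracle. A toy version of such a padding check (assuming PKCS#7-style padding for illustration; the formats targeted in the paper differ in detail) looks like this:

```python
def padding_valid(block: bytes) -> bool:
    # CBC padding of n bytes, each with value n. Because flipping byte i
    # of the previous ciphertext block flips byte i of this plaintext
    # block after decryption, an attacker can probe guesses one byte at
    # a time and use the valid/invalid answers to recover the plaintext.
    n = block[-1]
    return 1 <= n <= len(block) and block.endswith(bytes([n]) * n)
```

The check itself is harmless; what makes it an oracle is that the attacker can submit modified ciphertexts and observe whether the check passes.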

The CRYPTO 2012 paper is the latest research to demonstrate serious weaknesses in devices that large numbers of organizations rely on to secure digital certificates. In 2008, a team of hardware engineers and cryptographers cracked the encryption in the Mifare Classic, a wireless card used by transit operators and other organizations in the public and private sectors to control physical access to buildings. Netherlands-based manufacturer NXP Semiconductors said at the time it had sold 1 billion to 2 billion of the devices. Since then, crypto in a steady stream of other devices, including the Keeloq security system and the MiFare DESFire MF3ICD40, has also been seriously compromised.

The latest research comes after RSA warned last year that the effectiveness of the SecurID system its customers use to secure corporate and governmental networks was compromised after hackers broke into RSA networks and stole confidential information concerning the two-factor authentication product. Not long after that, military contractor Lockheed Martin revealed a breach it said was aided by the theft of that confidential RSA data. There’s nothing in the new paper that suggests the attack works on SecurID devices other than the 800 model.

RSA didn’t return e-mails seeking comment for this article. According to the researchers, RSA officials are aware of the attacks first described by Bleichenbacher and are planning a fix. SafeNet and Siemens are also in the process of fixing the flaws, they said. The researchers also reported that Estonian officials have said the attack is too slow to be practical.

Source:  arstechnica.com

Code crackers break 923-bit encryption record

Thursday, June 21st, 2012

In what was thought an impossibility, researchers break the longest code ever over a 148-day period using 21 computers.

Before today, no one thought it was possible to break a 923-bit code. And even if it were possible, scientists estimated it would take thousands of years.

However, over 148 days and a couple of hours, using 21 computers, the code was cracked.

Working together, Fujitsu Laboratories, the National Institute of Information and Communications Technology, and Kyushu University in Japan announced today that they broke the world record for cryptanalysis using next-generation cryptography.

“Despite numerous efforts to use and spread this cryptography at the development stage, it wasn’t until this new way of approaching the problem was applied that it was proven that pairing-based cryptography of this length was fragile and could actually be broken in 148.2 days,” Fujitsu Laboratories wrote in a press release.

Using “pairing-based” cryptography on this code has led to the standardization of this type of code cracking, says Fujitsu Laboratories. Scientists say that breaking the 923-bit encryption, which is 278 digits long, would have been impossible using previous “public key” cryptography; but using pairing-based cryptography, scientists were able to apply identity-based encryption, keyword-searchable encryption, and functional encryption.

Researchers’ efforts to crack this type of code are useful because they help companies, governments, and organizations better understand how secure their electronic information needs to be.

“The cryptanalysis is the equivalent to spoofing the authority of the information system administrator,” Fujitsu Laboratories wrote. “As a result, for the first time in the world we proved that the cryptography of the parameter was vulnerable and could be broken in a realistic amount of time.”

Researchers from NICT and Hakodate Future University hold the previous world record for code cracking, which required far less computer power. They managed to figure out a 676-bit, or 204-digit, encryption in 2009.

Source:  CNET

Crypto crack makes satellite phones vulnerable to eavesdropping

Thursday, February 9th, 2012

Layout of a geostationary orbit telephone network

Cryptographers have cracked the encryption schemes used in a variety of satellite phones, a feat that makes it possible for attackers to surreptitiously monitor data received by vulnerable devices.

The research team, from the Ruhr University Bochum in Germany, is among the first to analyze the secret encryption algorithms specified by the European Telecommunications Standards Institute. After reverse engineering phones that use the GMR-1 and GMR-2 standards, the team discovered serious cryptographic weaknesses that allow attackers using a modest PC running open-source software to recover protected communications in less than an hour.

The findings, laid out in a paper (PDF) to be presented at the IEEE Symposium on Security and Privacy 2012, are the latest to poke holes in proprietary encryption algorithms. Unlike standard algorithms such as AES and Blowfish—which have been subjected to decades of scrutiny from some of the world’s foremost cryptographers—these secret encryption schemes often rely more on obscurity than mathematical soundness and peer review to rebuff attacks.

“Contrary to the practice recommended in modern security engineering, both standards rely on proprietary algorithms for (voice) encryption,” the researchers wrote in the paper. “Even though it is impossible for outsiders (like us) to decide whether this is due to historic developments or because secret algorithms were believed to provide a higher level of ‘security,’ the findings of our work are not encouraging from a security point of view.”

The GMR-1 standard uses an algorithm that closely resembles the proprietary A5/2 cipher once employed by cellphones based on GSM, or Global System for Mobile Communications. A5/2 was dropped in 2006 after cryptographers exposed weaknesses that made it possible for attackers with modest hardware to crack the cipher in almost real time.

The problem with a5-gmr, as the cipher in GMR-1 is known, is that its output gives adversaries important clues about the secret key used to encrypt communications, Benedikt Driessen, a Ph.D. student who co-authored the paper, told Ars. By making a series of educated guesses based on a small sample of the ciphertext, attackers can quickly deduce the key needed to unscramble the protected data.

“If the guess is correct and given enough equations, the equations can be solved to reveal the encryption key,” Driessen said.

He also faulted the algorithm for performing what’s known as clocking separately and generating output equations with a low algebraic degree, flaws that also diminish security.

a5-gmr-2, the cipher used in GMR-2 phones, is also vulnerable to cracking when adversaries know a small sample of the data before it was encrypted. Because data sent over phone networks contains headers and other predictable content, it is possible for attackers to exploit the weakness.
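
The reason predictable content is so damaging to a stream cipher can be sketched directly (the frame header and keystream bytes below are invented for illustration): ciphertext is plaintext XORed with keystream, so every known plaintext byte hands the attacker a keystream byte, and the published attack then solves for the session key from those keystream bytes:

```python
def recover_keystream(ciphertext: bytes, known_plaintext: bytes) -> bytes:
    # ciphertext = plaintext XOR keystream, so XORing the known part
    # back out yields the cipher's raw output at those positions.
    return bytes(c ^ p for c, p in zip(ciphertext, known_plaintext))

# Demo with a made-up keystream standing in for a5-gmr-2 output:
keystream = bytes([0x5A, 0x13, 0xC7, 0x88])
header = b"\x7e\x7e\x7e\x7e"                  # predictable frame header
ct = bytes(k ^ p for k, p in zip(keystream, header))
assert recover_keystream(ct, header) == keystream
```

Once enough keystream bytes are known, the attacker works backward through the cipher's internal structure to the key, which is where the weaknesses the researchers found come in.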

Phones under attack

It’s tempting for critics of the satphone standards to seize on the security-through-obscurity approach, which relies on the lack of documentation to prevent attacks. But in fairness to the engineers who designed the standards, the approach hasn’t completely failed. The new crack works only on the data sent from a satellite to a phone, making it possible to retrieve data from only one end of a conversation. What’s more, researchers have yet to reverse engineer the audio codecs used by the standards, so eavesdropping on voice conversations isn’t yet possible.

“Our claim is, (a) we can decrypt and the codec will be revealed shortly which allows full eavesdropping and (b) we can apply the attack to different channels (fax, SMS) for which we don’t even need a codec,” Driessen said. Satphones “are vulnerable because the protection-layer is worthless.”

Over the past couple of years, cryptographers have gradually whittled away at many of the algorithms protecting data sent by phones. Standards including GSM, DECT (Digital Enhanced Cordless Telecommunications), and GPRS (General Packet Radio Service) have all been targeted. Devices that are vulnerable to the latest attacks include the Thuraya So-2510 and the Inmarsat IsatPhone Pro.

The secret algorithms were analyzed by downloading publicly available firmware used by the phones, disassembling the code, and using some clever techniques to isolate the ciphers. The analysis techniques may prove valuable in exposing weaknesses in other encryption schemes as well.

Source:  arstechnica.com