Archive for the ‘Privacy’ Category

IT Consulting Case Studies: Microsoft SharePoint Server for CMS

Friday, February 14th, 2014

Gyver Networks recently designed and deployed a Microsoft SharePoint Server infrastructure for a financial consulting firm servicing banks and depository institutions with assets in excess of $200 billion.

Challenge:  A company specializing in regulatory compliance audits for financial institutions found themselves inundated by documents submitted via inconsistent workflow processes, raising concerns regarding security and content management as they continued to expand.

With many such projects running concurrently, keeping up with the back-and-forth flow of multiple versions of the same documents became increasingly difficult.  Further complicating matters, the submission process consisted of clients sending email attachments or uploading files to a company FTP server, then emailing to let staff know something was sent.  Other areas of concern included:

  • Security of submitted financial data in transit and at rest, as defined in SSAE 16 and 201 CMR 17.00, among other standards and regulations
  • Secure, customized, compartmentalized client access
  • Advanced user management
  • Internal and external collaboration (multiple users working on the same documents simultaneously)
  • Change and version tracking
  • Comprehensive search capabilities
  • Client alerts, access to project updates and timelines, and feedback

Resolution: Gyver Networks proposed a Microsoft SharePoint Server environment as the ideal enterprise content management system (CMS) to replace their existing processes.  Once the environment was deployed, existing archives and client profiles were migrated into the SharePoint infrastructure designed for each respective client, and the transition was seamless: the company was fully operational and ready to go live.

Now, instead of an insecure and confusing combination of emails, FTP submissions, and cloud-hosted, third-party management software, they are able to host their own secure, all-in-one CMS on premises, including:

  • 256-bit encryption of data in transit and at rest
  • Distinct SharePoint sites and logins for each client, with customizable access permissions and retention policies for subsites and libraries
  • Advanced collaboration features, with document checkout, change review and approval, and workflows
  • Metadata options so users can find what they’re searching for instantly
  • Client-customized email alerts, views, reporting, timelines, and the ability to submit requests and feedback directly through the SharePoint portal

The end result?  Clients of this company are thrilled to have a comprehensive content management system that not only saves them time and provides secure submission and archiving, but also offers enhanced project oversight and advanced-metric reporting capabilities.

The consulting firm itself experienced an immediate increase in productivity, efficiency, and client retention rates; they are in full compliance with all regulations and standards governing security and privacy; and they are now prepared for future expansion with a scalable enterprise CMS solution that can grow as they do.

Contact Gyver Networks today to learn more about what Microsoft SharePoint Server can do for your organization.  Whether you require a simple standalone installation or a more complex hybrid SharePoint Server farm, we can assist you with planning, deployment, administration, and troubleshooting to ensure you get the most out of your investment.

Change your passwords: Comcast hushes, minimizes serious hack

Tuesday, February 11th, 2014

Are you a Comcast customer? Please change your password.

On February 6, NullCrew FTS hacked into at least 34 of Comcast’s servers and published a list of the company’s mail servers and a link to the root file with the vulnerability it used to penetrate the system on Pastebin.

Comcast, the largest internet service provider in the United States, ignored news of the serious breach in press and media for over 24 hours — only when the Pastebin page was removed did the company issue a statement, and even then, it only spoke to a sympathetic B2B outlet.

During that 24 hours, Comcast stayed silent, and the veritable “keys to the kingdom” sat out in the open internet, ripe for the taking by any malicious entity with a little know-how around mail servers and selling or exploiting customer data.

Comcast customers have not been told to reset their passwords. But they should.

Once NullCrew FTS openly hacked at least 34 Comcast mail servers, and the recipe was publicly posted, the servers began to take a beating. Customers in Comcast’s janky, hard-to-find, 1996-style forums knew something was wrong, and forum posts reflected the slowness, the up and down servers, and the eventual crashing.

The telecom giant ignored press requests for comment and released a limited statement on February 7 — to the Comcast-friendly broadband and B2B website Multichannel News.

The day-late statement failed to impress the few who saw it, and was criticized for its minimizing language and weak attempt to suggest that the breach had been unsuccessful.

From Comcast’s statement on Multichannel’s post “No Evidence That Personal Sub Info Obtained By Mail Server Hack”:

Comcast said it is investigating a claim by a hacker group that claims to have broken into a batch of the MSO email servers, but believes that no personal subscriber data was obtained as a result.

“We’re aware of the situation and are aggressively investigating it,” a Comcast spokesman said. “We take our customers’ privacy and security very seriously, and we currently have no evidence to suggest any personal customer information was obtained in this incident.”

Not only is there a high probability that customer information was exposed — because direct access was provided to the public for 24 hours — but the vulnerability exploited by the attackers was disclosed and fixed in December 2013.

Just not by Comcast, apparently.

Vulnerability reported December 2013, not patched by Comcast

NullCrew FTS used the unpatched security vulnerability CVE-2013-7091 to open what was essentially an unlocked door, giving anyone access to usernames, passwords, and other sensitive details from Comcast’s servers.

NullCrew FTS used a Local File Inclusion (LFI) exploit to gain access to the Zimbra LDAP and MySQL database — which houses the usernames and passwords of Comcast ISP users.
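For readers unfamiliar with the bug class, the sketch below shows, in generic Python, what a local file inclusion flaw looks like and how it is normally fixed. It is not the Zimbra code and not the CVE-2013-7091 exploit; the paths and function names are hypothetical.

# Generic illustration of a local file inclusion (LFI) bug; not the Zimbra
# code or the CVE-2013-7091 exploit. Paths and names are hypothetical.
import os

WEB_ROOT = "/var/www/templates"

def load_template_vulnerable(name):
    # BAD: user input is pasted straight into the path, so a request for
    # "../../../etc/passwd" walks out of WEB_ROOT and reads arbitrary files.
    with open(os.path.join(WEB_ROOT, name)) as f:
        return f.read()

def load_template_safe(name):
    # Safer: resolve the final path and refuse anything outside WEB_ROOT.
    path = os.path.realpath(os.path.join(WEB_ROOT, name))
    if not path.startswith(WEB_ROOT + os.sep):
        raise ValueError("path traversal attempt blocked")
    with open(path) as f:
        return f.read()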

“Fun Fact: 34 Comcast mail servers are victims to one exploit,” tweeted NullCrew FTS.

If you are a Comcast customer, you are at risk: All Comcast internet service includes a master email address.

Even if a customer doesn’t use Comcast’s Xfinity mail service, every Comcast ISP user has a master email account with which to manage their services, and it is accessible through a “Zimbra” webmail site.

This account is used to access payment information, email settings, user account creation and settings, and any purchases from Comcast’s store or among its services.

With access to this master email address, someone can give up to six “household members” access to the Comcast account.

NullCrew taunted Comcast on Twitter, then posted the data on Pastebin and taunted the company a little bit more.

Because there were “no passwords” on the Pastebin, some observers believed — incorrectly — that there was no serious risk for exploitation of sensitive customer information.

NullCrew FTS: 2 — big telecoms: 0

On the first weekend of February 2014, NullCrew FTS took credit for a valid hack against telecom provider Bell Canada.

In the first strike of what looks like it’ll be a very successful campaign to cause pain and humiliation to big telecoms, NullCrew FTS accessed and exposed more than 22,000 usernames and passwords, and some credit card numbers belonging to the phone company’s small business customers.

Establishing a signature game of cat and mouse with clueless support staff, NullCrew FTS contacted Bell customer support two weeks before its disclosure.

Like Comcast’s robotic customer service responses to NullCrew FTS on Twitter, Bell’s support staff either didn’t know how to report the security incident upstream, had no idea what a hacking event was, or didn’t take the threat seriously.

Bell also tried to play fast and loose with its accountability in the security smash and grab; it acknowledged the breach soon after, but blamed it on an Ottawa-based third-party supplier.

However, NullCrew FTS announced the company’s insecurities in mid-January with a public warning that the hackers had issued to a company support representative about the vulnerabilities.

NullCrew FTS followed up with Bell by posting a Pastebin link on Twitter with unredacted data.

Excerpt from zdnet.com

Hackers use Amazon cloud to scrape mass number of LinkedIn member profiles

Friday, January 10th, 2014

EC2 service helps hackers bypass measures designed to protect LinkedIn users

LinkedIn is suing a gang of hackers who used Amazon’s cloud computing service to circumvent security measures and copy data from hundreds of thousands of member profiles each day.

“Since May 2013, unknown persons and/or entities employing various automated software programs (often referred to as ‘bots’) have registered thousands of fake LinkedIn member accounts and have extracted and copied data from many member profile pages,” company attorneys alleged in a complaint filed this week in US District Court in Northern California. “This practice, known as ‘scraping,’ is explicitly barred by LinkedIn’s User Agreement, which prohibits access to LinkedIn ‘through scraping, spidering, crawling, or other technology or software used to access data without the express written consent of LinkedIn or its Members.'”

With more than 259 million members—many of whom are highly paid professionals in technology, finance, and medical industries—LinkedIn holds a wealth of personal data that can prove highly valuable to people conducting phishing attacks, identity theft, and similar scams. The allegations in the lawsuit highlight the unending tug-of-war between hackers who work to obtain that data and the defenders who use technical measures to prevent the data from falling into the wrong hands.

The unnamed “Doe” hackers employed a raft of techniques designed to bypass anti-scraping measures built in to the business network. Chief among them was the creation of huge numbers of fake accounts. That made it possible to circumvent restrictions dubbed FUSE, which limit the activity any single account can perform.

“In May and June 2013, the Doe defendants circumvented FUSE—which limits the volume of activity for each individual account—by creating thousands of different new member accounts through the use of various automated technologies,” the complaint stated. “Registering so many unique new accounts allowed the Doe defendants to view hundreds of thousands of member profiles per day.”
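LinkedIn has not published how FUSE works internally, so the following is only a minimal sketch of why per-account caps fail against an attacker with thousands of accounts: the limit is enforced per identity, and total throughput therefore scales with the number of fake identities. The cap value and account names are assumptions.

# Hypothetical per-account activity cap, loosely analogous to the FUSE
# restrictions described in the complaint (the real system is not public).
from collections import defaultdict

DAILY_PROFILE_VIEW_CAP = 100     # assumed cap per account per day
views_today = defaultdict(int)   # account id -> profile views so far

def allow_profile_view(account_id):
    """Return True if this account may view one more profile today."""
    if views_today[account_id] >= DAILY_PROFILE_VIEW_CAP:
        return False
    views_today[account_id] += 1
    return True

# One account is throttled after 100 views, but 5,000 fake accounts give an
# attacker 500,000 views per day while each stays under its own cap.
fake_accounts = [f"bot-{i}" for i in range(5000)]
total = sum(allow_profile_view(a) for a in fake_accounts for _ in range(100))
print(total)  # 500000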

The hackers also circumvented a separate security measure that is supposed to require end users to complete bot-defeating CAPTCHA dialogues when potentially abusive activities are detected. They also managed to bypass restrictions that LinkedIn intended to impose through a robots.txt file, which websites use to make clear which content may be indexed by automated Web crawling programs employed by Google and other sites.
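robots.txt is advisory rather than enforceable: well-behaved crawlers check it voluntarily, and nothing stops a scraper from ignoring it. A minimal sketch using Python's standard urllib.robotparser shows how a polite crawler performs that check; the rules shown are hypothetical, not LinkedIn's actual robots.txt.

# How a well-behaved crawler honors robots.txt. The check is voluntary,
# which is why it cannot stop a determined scraper. Rules are hypothetical.
import urllib.robotparser

rules = """
User-agent: *
Disallow: /profile/
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(rules.splitlines())

print(parser.can_fetch("PoliteBot", "https://example.com/profile/12345"))  # False
print(parser.can_fetch("PoliteBot", "https://example.com/jobs"))           # True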

LinkedIn engineers have disabled the fake member profiles and implemented additional technological safeguards to prevent further scraping. They also conducted an extensive investigation into the bot-powered methods employed by the hackers.

“As a result of this investigation, LinkedIn determined that the Doe defendants accessed LinkedIn using a cloud computing platform offered by Amazon Web Services (‘AWS’),” the complaint alleged. “This platform—called Amazon Elastic Compute Cloud or Amazon EC2—allows users like the Doe defendants to rent virtual computers on which to run their own computer programs and applications. Amazon EC2 provides resizable computing capacity. This feature allows users to quickly scale capacity, both up and down. Amazon EC2 users may temporarily run hundreds or thousands of virtual computing machines. The Doe defendants used Amazon EC2 to create virtual machines to run automated bots to scrape data from LinkedIn’s website.”

It’s not the first time hackers have used EC2 to conduct nefarious deeds. In 2011, the Amazon service was used to control a nasty bank fraud trojan. (EC2 has also been a valuable tool to whitehat password crackers.) Plenty of other popular Web services have been abused by online crooks as well. In 2009, for instance, researchers uncovered a Twitter account that had been transformed into a command and control channel for infected computers.

The goal of LinkedIn’s lawsuit is to give lawyers the legal means to carry out “expedited discovery to learn the identity of the Doe defendants.” Its success will depend, among other things, on whether the people who subscribed to the Amazon service used payment methods or IP addresses that can be traced.

Source:  arstechnica.com

Unencrypted Windows crash reports give ‘significant advantage’ to hackers, spies

Wednesday, January 1st, 2014

Microsoft transmits a wealth of information from Windows PCs to its servers in the clear, claims security researcher

Windows’ error- and crash-reporting system sends a wealth of data unencrypted and in the clear, information that eavesdropping hackers or state security agencies can use to refine and pinpoint their attacks, a researcher said today.

Not coincidentally, over the weekend the popular German newsmagazine Der Spiegel reported that the U.S. National Security Agency (NSA) collects Windows crash reports from its global wiretaps to sniff out details of targeted PCs, including the installed software and operating systems, down to the version numbers and whether the programs or OSes have been patched; application and operating system crashes that signal vulnerabilities that could be exploited with malware; and even the devices and peripherals that have been plugged into the computers.

“This information would definitely give an attacker a significant advantage. It would give them a blueprint of the [targeted] network,” said Alex Watson, director of threat research at Websense, which on Sunday published preliminary findings of its Windows error-reporting investigation. Watson will present Websense’s discovery in more detail at the RSA Conference in San Francisco on Feb. 24.

Sniffing crash reports using low-volume “man-in-the-middle” methods — the classic is a rogue Wi-Fi hotspot in a public place — wouldn’t deliver enough information to be valuable, said Watson, but a wiretap at the ISP level, the kind the NSA is alleged to have in place around the world, would.

“At the [intelligence] agency level, where they can spend the time to collect information on billions of PCs, this is an incredible tool,” said Watson.

And it’s not difficult to obtain the information.

Microsoft does not encrypt the initial crash reports, said Watson, which include both those that prompt the user before they’re sent and those that do not. Instead, they’re transmitted to Microsoft’s servers “in the clear,” or over standard HTTP connections.

If a hacker or intelligence agency can insert themselves into the traffic stream, they can pluck out the crash reports for analysis without worrying about having to crack encryption.

And the reports from what Microsoft calls “Windows Error Reporting” (ERS), but which is also known as “Dr. Watson,” contain a wealth of information on the specific PC.

When a device is plugged into a Windows PC’s USB port, for example — say an iPhone to sync it with iTunes — an automatic report is sent to Microsoft that contains the device identifier and manufacturer, the Windows version, the maker and model of the PC, the version of the system’s BIOS and a unique machine identifier.

By comparing the data with publicly-available databases of device and PC IDs, Websense was able to establish that an iPhone 5 had been plugged into a Sony Vaio notebook, and even nail the latter’s machine ID.

If hackers are looking for systems running outdated, and thus, vulnerable versions of Windows — XP SP2, for example — the in-the-clear reports will show which ones have not been updated.

Windows Error Reporting is installed and activated by default on all PCs running Windows XP, Vista, Windows 7, Windows 8 and Windows 8.1, Watson said, confirming that the Websense techniques of deciphering the reports worked on all those editions.

Watson characterized the chore of turning the cryptic reports into easily-understandable terms as “trivial” for accomplished attackers.

More thorough crash reports, including ones that Microsoft silently triggers from its end of the telemetry chain, contain personal information and so are encrypted and transmitted via HTTPS. “If Microsoft is curious about the report or wants to know more, they can ask your computer to send a mini core dump,” explained Watson. “Personal identifiable information in that core dump is encrypted.”

Microsoft uses the error and crash reports to spot problems in its software as well as that crafted by other developers. Widespread reports typically lead to reliability fixes deployed in non-security updates.

The Redmond, Wash. company also monitors the crash reports for evidence of as-yet-unknown malware: Unexplained and suddenly-increasing crashes may be a sign that a new exploit is in circulation, Watson said.

Microsoft often boasts of the value of the telemetry to its designers, developers and security engineers, and with good reason: An estimated 80% of the world’s billion-plus Windows PCs regularly send crash and error reports to the company.

But the unencrypted information fed to Microsoft by the initial and lowest-level reports — which Watson labeled “Stage 1” reports — constitutes a dangerous leak, Watson contended.

“We’ve substantiated that this is a major risk to organizations,” said Watson.

Error reporting can be disabled manually on a machine-by-machine basis, or in large sets by IT administrators using Group Policy settings.

Websense recommended that businesses and other organizations redirect the report traffic on their network to an internal server, where it can be encrypted before being forwarded to Microsoft.
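A rough sketch of that recommendation follows, assuming the Corporate Windows Error Reporting policy values (CorporateWERServer, CorporateWERUseSSL) that Windows exposes for this purpose; the server name is hypothetical, and in practice this is normally configured through Group Policy rather than a script.

# Point Windows Error Reporting at an internal collector instead of sending
# Stage 1 reports to Microsoft over plain HTTP. Windows only; run as
# Administrator. "wer-proxy.internal.example.com" is a hypothetical internal
# server that could re-encrypt and forward reports.
import winreg

KEY_PATH = r"SOFTWARE\Policies\Microsoft\Windows\Windows Error Reporting"

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_WRITE) as key:
    winreg.SetValueEx(key, "CorporateWERServer", 0, winreg.REG_SZ,
                      "wer-proxy.internal.example.com")
    # Require SSL between clients and the internal collector.
    winreg.SetValueEx(key, "CorporateWERUseSSL", 0, winreg.REG_DWORD, 1)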

But to turn it off entirely would be to throw away a solid diagnostic tool, Watson argued. ERS can provide insights not only to hackers and spying eavesdroppers, but also to IT departments.

“[ERS] does the legwork, and can let [IT] see where vulnerabilities might exist, or whether rogue software or malware is on the network,” Watson said. “It can also show the uptake on BYOD [bring your own device] policies,” he added, referring to the automatic USB device reports.

Microsoft should encrypt all ERS data that’s sent from customer PCs to its servers, Watson asserted.

A Microsoft spokesperson asked to comment on the Websense and Der Spiegel reports said, “Microsoft does not provide any government with direct or unfettered access to our customers’ data. We would have significant concerns if the allegations about government actions are true.”

The spokesperson added that, “Secure Socket Layer connections are regularly established to communicate details contained in Windows error reports,” which is only partially true, as Stage 1 reports are not encrypted, a fact that Microsoft’s own documentation makes clear.

“The software ‘parameters’ information, which includes such information as the application name and version, module name and version, and exception code, is not encrypted,” Microsoft acknowledged in a document about ERS.

Source:  computerworld.com

Target’s nightmare goes on: Encrypted PIN data stolen

Friday, December 27th, 2013

After hackers stole credit and debit card records for 40 million Target store customers, the retailer said customers’ personal identification numbers, or PINs, had not been breached.

Not so.

On Friday, a Target spokeswoman backtracked from previous statements and said criminals had made off with customers’ encrypted PIN information as well. But Target said the company stored the keys to decrypt its PIN data on separate systems from the ones that were hacked.

“We remain confident that PIN numbers are safe and secure,” Molly Snyder, Target’s spokeswoman said in a statement. “The PIN information was fully encrypted at the keypad, remained encrypted within our system, and remained encrypted when it was removed from our systems.”

The problem is that when it comes to security, experts say the general rule of thumb is: where there is a will, there is a way. Criminals have already been selling Target customers’ credit and debit card data on the black market, where a single card is selling for as much as $100. Criminals can use that card data to create counterfeit cards. But PIN data is the most coveted of all. With PIN data, cybercriminals can make withdrawals from a customer’s account through an automatic teller machine. And even if the key to unlock the encryption is stored on separate systems, security experts say there have been cases where hackers managed to get the keys and successfully decrypt scrambled data.
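Target has not described its scheme, and payment networks typically encrypt PIN blocks with Triple DES inside tamper-resistant hardware, so the following is only a minimal sketch of the key-separation principle itself, using a generic cipher: ciphertext stolen from one system is useless without the key held on another.

# Illustration of the key-separation principle only; not Target's actual
# scheme. Requires the third-party "cryptography" package.
from cryptography.fernet import Fernet

# The key lives on a separate system; here it is just a separate variable.
key_held_elsewhere = Fernet.generate_key()

# What travels through (and can be stolen from) the retail network: ciphertext.
token = Fernet(key_held_elsewhere).encrypt(b"PIN block for card ending 1234")
print(token)  # unreadable without the key

# Decryption is only possible wherever the key actually resides.
print(Fernet(key_held_elsewhere).decrypt(token))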

Even before Friday’s revelations about the PIN data, two major banks, JPMorgan Chase and Santander Bank, both placed caps on customer purchases and withdrawals made with compromised credit and debit cards. That move, which security experts say is unprecedented, brought complaints from customers trying to do last-minute shopping in the days leading to Christmas.

Chase said it is in the process of replacing all of its customers’ debit cards — about 2 million of them — that were used at Target during the breach.

The Target breach, from Nov. 27 to Dec. 15, is officially the second largest breach of a retailer in history. The biggest was a 2005 breach at T.J. Maxx that compromised records for 90 million customers.

The Secret Service and Justice Department continue to investigate.

Source:  nytimes.com

Critics: NSA agent co-chairing key crypto standards body should be removed

Monday, December 23rd, 2013

There’s an elephant in the room at the Internet Engineering Task Force.

Security experts are calling for the removal of a National Security Agency employee who co-chairs an influential cryptography panel, which advises a host of groups that forge widely used standards for the Internet Engineering Task Force (IETF).

Kevin Igoe, who in a 2011 e-mail announcing his appointment was listed as a senior cryptographer with the NSA’s Commercial Solutions Center, is one of two co-chairs of the IETF’s Crypto Forum Research Group (CFRG). The CFRG provides cryptographic guidance to IETF working groups that develop standards for a variety of crucial technologies that run and help secure the Internet. The transport layer security (TLS) protocol that underpins Web encryption and standards for secure shell connections used to securely access servers are two examples. Igoe has been CFRG co-chair for about two years, along with David A. McGrew of Cisco Systems.

Igoe’s leadership had largely gone unnoticed until reports surfaced in September that exposed the role NSA agents have played in “deliberately weakening the international encryption standards adopted by developers.” Until now, most of the resulting attention has focused on cryptographic protocols endorsed by the separate National Institute for Standards and Technology. More specifically, scrutiny has centered on a random number generator that The New York Times, citing a document leaked by former NSA contractor Edward Snowden, reported may contain a backdoor engineered by the spy agency.

Enter Dragonfly

Less visibly, the revelations about NSA influence over crypto standards have also renewed suspicions about the agency’s role in the IETF. To wit: they have brought new urgency to long-simmering criticism claiming that the CFRG was advocating the addition of a highly unproven technology dubbed “Dragonfly” to the TLS technology websites use to provide HTTPS encryption. Despite a lack of consensus about the security of Dragonfly, Igoe continued to champion it, critics said, citing several e-mails Igoe sent in the past two years. Combined with his ties to the NSA, Igoe’s continued adherence to Dragonfly is creating a lack of confidence in his leadership, critics said.

“Kevin’s NSA affiliation raises unpleasant but unavoidable questions regarding these actions,” Trevor Perrin, a crypto expert and one of the most vocal critics, wrote Friday in an e-mail to the CFRG list serve. “It’s entirely possible these are just mistakes by a novice chair who lacks experience in a particular sort of protocol and is being pressured by IETF participants to endorse something. But it’s hard to escape an impression of carelessness and unseriousness in Kevin’s work. One wonders whether the NSA is happy to preside over this sort of sloppy crypto design.”

Igoe and McGrew didn’t respond to an e-mail seeking comment. This article will be updated if they respond later.

Like the Dual_EC_DRBG standard adopted by NIST and now widely suspected to contain a backdoor, Dragonfly came with no security proof. And unlike several other better known candidates for “password-authenticated key exchange” (PAKE), most people participating in the CFRG or TLS working group knew little or nothing about it. TLS already has an existing PAKE called SRP, which critics say makes Dragonfly particularly redundant. PAKEs are complex and still not widely understood by crypto novices, but in essence, they involve the use of passwords to negotiate cryptographic keys used in encrypted TLS communications between servers and end users.

Update: Dragonfly developer Dan Harkins strongly defended the security of the PAKE.

“There are no known security vulnerabilities with dragonfly,” he wrote in an e-mail after this article was first published. “But it does not have a formal security proof to accompany it, unlike some other PAKE schemes. So the TLS working group asked the CFRG to look at it. They were not asked to ‘approve’ it, and they weren’t asked to ‘bless’ it. Just take a look and see if there’s any problems that would make it unsuitable for TLS. There were comments received on the protocol and they were addressed. There were no issues found that make it unsuitable for TLS.”

Harkins also took issue with characterizations by critics and this Ars article that Dragonfly is “untested” and “highly unproven.” He said it’s used in the 802.11 Wi-Fi standard as a secure, drop-in replacement for WPA-PSK security protocol. It’s also found as a method in the extensible authentication protocol and as an alternative to pre-shared keys in the Internet key exchange protocol.

“Do you know of another PAKE scheme that has been so widely applied?” he wrote in his response.

Perrin is a programmer who primarily develops cryptographic applications. He is the developer or co-developer of several proposed Internet standards, including trust assertions for certificate keys and the asynchronous protocol for secure e-mail. In Friday’s e-mail, he provided a raft of reasons why he said Igoe should step down:

1) Kevin has provided the *ONLY* positive feedback for Dragonfly that can be found on the CFRG mailing list or meeting minutes. The contrast between Kevin’s enthusiasm and the group’s skepticism is striking [CFRG_SUMMARY]. It’s unclear what this enthusiasm is based on. There’s no record of Kevin making any effort to understand Dragonfly’s unusual structure, compare it to alternatives, consider possible use cases, or construct a formal security analysis.

2) Twice Kevin suggested a technique for deriving the Dragonfly password-based element which would make the protocol easy to break [IGOE_1, IGOE_2]. He also endorsed an ineffective attempt to avoid timing attacks by adding extra iterations to one of the loops [IGOE_3, IGOE_4]. These are surprising mistakes from an experienced cryptographer.

3) Kevin’s approval of Dragonfly to the TLS WG misrepresented CFRG consensus, which was skeptical of Dragonfly [CFRG_SUMMARY].

Perrin’s motion has been seconded by several other participants, including cryptographer William Whyte. Another critic supporting Igoe’s removal called on security expert Bruce Schneier to replace Igoe. In an e-mail to Ars, Schneier said he is unsure if he is a suitable candidate. “I’m probably too busy to chair, and I’m not really good at the whole ‘organizing a bunch of people’ thing,” he wrote.

In Harkins’ 1,117-word response, he wrote:

The opposition to it in TLS is not “long-simmering” as alleged in the article. It is very recent and the most vocal critic actually didn’t say anything until _after_ the close of Working Group Last Call (a state of draft development on the way to RFC status). As part of his critique, Trevor Perrin has noted that dragonfly has no security proof. That’s true and it’s certainly not new. Having a formal proof has never been a requirement in the past and it is not a requirement today. He has continued to refer to the comments received about the draft as if they are signs of flaws. This is especially shocking given he is referred to in the article as “the developer or co-developer of several proposed Internet standards.” Someone who develops, or co-develops Internet Standards knows how the sausage making works. Comments are made, comments are addressed. There has, to my knowledge, never been an Internet Draft that’s perfect in it’s -00 revision and went straight to publication as an RFC. His criticism is particularly mendacious.

Trevor Perrin has also pointed out the technique in which dragonfly generates a password-based element as being flawed. The technique was the result of a 2 year old thread on the TLS list on how to address a possible side-channel attack. Trevor doesn’t like it, which is fair, but on the TLS mailing list he has also said that even if it was changed to a way he wants he would still be against dragonfly.

Anyone who has spent any time at all watching how standards bodies churn out the sausage knows that suspicions and vast conspiracy theories are almost always a part of the proceedings. But in a post-Snowden world, there’s new legitimacy to criticism about NSA involvement, particularly when employees of the agency are the ones actively shepherding untested proposals.

Source:  arstechnica.com

Unique malware evades sandboxes

Thursday, December 19th, 2013

Malware used in attack on PHP.net last month dubbed DGA.Changer

Malware utilized in the attack last month on the developers’ site PHP.net used a unique approach to avoid detection, a security expert says.

On Wednesday, security vendor Seculert reported finding that one of five malware types used in the attack had a unique cloaking property for evading sandboxes. The company called the malware DGA.Changer.

DGA.Changer’s only purpose was to download other malware onto infected computers, Aviv Raff, chief technology officer for Seculert, said on the company’s blog. Seculert identified 6,500 compromised computers communicating with the malware’s command and control server. Almost 60 percent were in the United States.

What Seculert found unique was how the malware could receive a command from a C&C server to change the seed of the software’s domain generation algorithm. The DGA periodically generates a large number of domain names as potential communication points to the C&C server, thereby making it difficult for researchers and law enforcement to find the right domain and possibly shut down the botnet.

“What the attackers behind DGA did is basically change the algorithm on the fly, so they can tell the malware to create a new stream of domains automatically,” Raff told CSOonline.

When the malware generates the same list of domains, it can be detected in the sandbox where security technology will isolate suspicious files. However, changing the algorithm on demand means that the malware won’t be identified.

“This is a new capability that didn’t exist before,” Raff said. “This capability allows the attacker to bypass sandbox technology.”
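Seculert has not published DGA.Changer's actual algorithm, so the sketch below is a generic, simplified domain generation algorithm. It only illustrates the point Raff is making: changing the seed produces an entirely new stream of candidate domains, invalidating whatever list defenders had mapped in the sandbox.

# Generic, simplified domain generation algorithm (DGA); illustrative only,
# not DGA.Changer's real algorithm, which has not been published.
import hashlib
from datetime import date

def generate_domains(seed, day, count=10):
    """Derive a deterministic list of candidate C&C domains for one day."""
    domains = []
    for i in range(count):
        material = f"{seed}:{day.isoformat()}:{i}".encode()
        digest = hashlib.sha256(material).hexdigest()
        domains.append(digest[:12] + ".com")
    return domains

today = date(2013, 12, 19)
print(generate_domains(seed=1, day=today)[:3])  # stream defenders may have mapped
print(generate_domains(seed=2, day=today)[:3])  # new seed, entirely new stream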

Hackers working for a nation-state targeting specific entities, such as government agencies, think tanks or international corporations, would use this type of malware, according to Raff. Called advanced persistent threats, these hackers tend to use sophisticated attack tools.

An exploit kit that served five different malware types was used in compromising two servers of PHP.net, a site for downloads and documentation related to the PHP general-purpose scripting language used in Web development. Google spotted four pages on the site serving malicious JavaScript that targeted personal computers, but ignored mobile devices.

The attack was noteworthy because of the number of visitors to PHP.net, which is in the top 250 domains on the Internet, according to Alexa rankings.

To defend against DGA.Changer, companies would need a tool that looks for abnormal behavior in network traffic. The malware tends to generate unusual traffic by querying lots of domains in search of the one leading to the C&C server.

“Because this malware will try to go to different domains, it will generate suspicious traffic,” Raff said.
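A minimal sketch of that detection heuristic follows: flag hosts that query an unusually large number of distinct domains, most of which fail to resolve. The log format and thresholds are assumptions for illustration, not Seculert's product.

# Minimal sketch of the detection idea: flag hosts that query many distinct,
# mostly non-resolving domains. Log format and thresholds are assumptions.
from collections import defaultdict

# (client_ip, queried_domain, resolved) tuples, e.g. parsed from DNS logs.
dns_log = [
    ("10.0.0.5", "a1b2c3d4e5f6.com", False),
    ("10.0.0.5", "9f8e7d6c5b4a.com", False),
    ("10.0.0.5", "0011223344aa.com", False),
    ("10.0.0.7", "example.com", True),
]

UNIQUE_DOMAIN_THRESHOLD = 3      # assumed per-host threshold
FAILURE_RATIO_THRESHOLD = 0.8    # assumed NXDOMAIN ratio

stats = defaultdict(lambda: {"domains": set(), "failures": 0, "total": 0})
for client, domain, resolved in dns_log:
    s = stats[client]
    s["domains"].add(domain)
    s["total"] += 1
    s["failures"] += 0 if resolved else 1

for client, s in stats.items():
    if (len(s["domains"]) >= UNIQUE_DOMAIN_THRESHOLD
            and s["failures"] / s["total"] >= FAILURE_RATIO_THRESHOLD):
        print(f"suspicious DGA-like behavior from {client}")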

Seculert did not find any evidence that would indicate who was behind the PHP.net attack.

“This is a group that’s continuously updating this malicious software, so this is a work in progress,” Raff said.

Source:  csoonline.com

Study finds zero-day vulnerabilities abound in popular software

Friday, December 6th, 2013

Subscribers to organizations that sell exploits for vulnerabilities not yet known to software developers gain daily access to scores of flaws in the world’s most popular technology, a study shows.

NSS Labs, which is in the business of testing security products for corporate subscribers, found that over the last three years, subscribers of two major vulnerability programs had access on any given day to at least 58 exploitable flaws in Microsoft, Apple, Oracle or Adobe products.

In addition, NSS Labs found that an average of 151 days passed from the time the programs purchased a vulnerability from a researcher until the affected vendor released a patch.

The findings, released Thursday, were based on an analysis of 10 years of data from TippingPoint, a network security maker Hewlett-Packard acquired in 2010, and iDefense, a security intelligence service owned by VeriSign. Both organizations buy vulnerabilities, inform subscribers and work with vendors in producing patches.

Stefan Frei, NSS research director and author of the report, said the actual number of secret vulnerabilities available to cybercriminals, government agencies and corporations is much larger, because of the amount of money they are willing to pay.

Cybercriminals will buy so-called zero-day vulnerabilities in the black market, while government agencies and corporations purchase them from brokers and exploit clearinghouses, such as VUPEN Security, ReVuln, Endgame Systems, Exodus Intelligence and Netragard.

The six vendors collectively can provide at least 100 exploits per year to subscribers, Frei said. According to a February 2010 price list, Endgame sold 25 zero-day exploits a year for $2.5 million.

In July, Netragard founder Adriel Desautels told The New York Times that the average vulnerability sells from around $35,000 to $160,000.

Part of the reason vulnerabilities are always present is developer error; another is that software makers are in the business of selling product, experts say. The latter means meeting deadlines for shipping software often trumps spending additional time and money on security.

Because of the number of vulnerabilities bought and sold, companies that believe their intellectual property makes them prime targets for well-financed hackers should assume their computer systems have already been breached, Frei said.

“One hundred percent prevention is not possible,” he said.

Therefore, companies need to have the experts and security tools in place to detect compromises, Frei said. Once a breach is discovered, then there should be a well-defined plan in place for dealing with it.

That plan should include gathering forensic evidence to determine how the breach occurred. In addition, all software on the infected systems should be removed and reinstalled.

Steps taken following a breach should be reviewed regularly to make sure they are up to date.

Source:  csoonline.com

Scientist-developed malware covertly jumps air gaps using inaudible sound

Tuesday, December 3rd, 2013

Malware communicates at a distance of 65 feet using built-in mics and speakers.

Computer scientists have developed a malware prototype that uses inaudible audio signals to communicate, a capability that allows the malware to covertly transmit keystrokes and other sensitive data even when infected machines have no network connection.

The proof-of-concept software—or malicious trojans that adopt the same high-frequency communication methods—could prove especially adept in penetrating highly sensitive environments that routinely place an “air gap” between computers and the outside world. Using nothing more than the built-in microphones and speakers of standard computers, the researchers were able to transmit passwords and other small amounts of data from distances of almost 65 feet. The software can transfer data at much greater distances by employing an acoustical mesh network made up of attacker-controlled devices that repeat the audio signals.

The researchers, from Germany’s Fraunhofer Institute for Communication, Information Processing, and Ergonomics, recently disclosed their findings in a paper published in the Journal of Communications. It came a few weeks after a security researcher said his computers were infected with a mysterious piece of malware that used high-frequency transmissions to jump air gaps. The new research neither confirms nor disproves Dragos Ruiu’s claims of the so-called badBIOS infections, but it does show that high-frequency networking is easily within the grasp of today’s malware.

“In our article, we describe how the complete concept of air gaps can be considered obsolete as commonly available laptops can communicate over their internal speakers and microphones and even form a covert acoustical mesh network,” one of the authors, Michael Hanspach, wrote in an e-mail. “Over this covert network, information can travel over multiple hops of infected nodes, connecting completely isolated computing systems and networks (e.g. the internet) to each other. We also propose some countermeasures against participation in a covert network.”

The researchers developed several ways to use inaudible sounds to transmit data between two Lenovo T400 laptops using only their built-in microphones and speakers. The most effective technique relied on software originally developed to acoustically transmit data under water. Created by the Research Department for Underwater Acoustics and Geophysics in Germany, the so-called adaptive communication system (ACS) modem was able to transmit data between laptops as much as 19.7 meters (64.6 feet) apart. By chaining additional devices that pick up the signal and repeat it to other nearby devices, the mesh network can overcome much greater distances.

The ACS modem provided better reliability than other techniques that were also able to use only the laptops’ speakers and microphones to communicate. Still, it came with one significant drawback—a transmission rate of about 20 bits per second, a tiny fraction of standard network connections. The paltry bandwidth forecloses the ability of transmitting video or any other kinds of data with large file sizes. The researchers said attackers could overcome that shortcoming by equipping the trojan with functions that transmit only certain types of data, such as login credentials captured from a keylogger or a memory dumper.

“This small bandwidth might actually be enough to transfer critical information (such as keystrokes),” Hanspach wrote. “You don’t even have to think about all keystrokes. If you have a keylogger that is able to recognize authentication materials, it may only occasionally forward these detected passwords over the network, leading to a very stealthy state of the network. And you could forward any small-sized information such as private encryption keys or maybe malicious commands to an infected piece of construction.”

Remember Flame?

The hurdles of implementing covert acoustical networking are high enough that few malware developers are likely to add it to their offerings anytime soon. Still, the requirements are modest when measured against the capabilities of Stuxnet, Flame, and other state-sponsored malware discovered in the past 18 months. And that means that engineers in military organizations, nuclear power plants, and other truly high-security environments should no longer assume that computers isolated from an Ethernet or Wi-Fi connection are off limits.

The research paper suggests several countermeasures that potential targets can adopt. One approach is simply switching off audio input and output devices, although few hardware designs available today make this most obvious countermeasure easy. A second approach is to employ audio filtering that blocks high-frequency ranges used to covertly transmit data. Devices running Linux can do this by using the advanced Linux Sound Architecture in combination with the Linux Audio Developer’s Simple Plugin API. Similar approaches are probably available for Windows and Mac OS X computers as well. The researchers also proposed the use of an audio intrusion detection guard, a device that would “forward audio input and output signals to their destination and simultaneously store them inside the guard’s internal state, where they are subject to further analyses.”
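As a rough illustration of the filtering countermeasure, the sketch below applies a software low-pass filter that strips near-ultrasonic content from an audio buffer. The 18 kHz cutoff and 48 kHz sample rate are assumptions, and a real guard would sit in the audio driver or a LADSPA plugin, as the researchers suggest.

# Rough sketch of the filtering countermeasure: attenuate near-ultrasonic
# content before audio reaches the speakers or leaves the microphone.
# Requires NumPy and SciPy; cutoff and sample rate are illustrative.
import numpy as np
from scipy.signal import butter, lfilter

SAMPLE_RATE = 48_000   # samples per second
CUTOFF_HZ = 18_000     # pass normal audio, block the covert band above it

def low_pass(samples, cutoff=CUTOFF_HZ, rate=SAMPLE_RATE, order=6):
    """Attenuate frequencies above `cutoff` in a block of audio samples."""
    b, a = butter(order, cutoff / (rate / 2), btype="low")
    return lfilter(b, a, samples)

# Demo: a 20.5 kHz "covert" tone mixed with an audible 1 kHz tone.
t = np.arange(0, 1.0, 1 / SAMPLE_RATE)
signal = np.sin(2 * np.pi * 1_000 * t) + np.sin(2 * np.pi * 20_500 * t)
filtered = low_pass(signal)   # the 20.5 kHz component is heavily attenuated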

Source:  arstechnica.com

N.S.A. may have hit Internet companies at a weak spot

Tuesday, November 26th, 2013

The recent revelation that the National Security Agency was able to eavesdrop on the communications of Google and Yahoo users without breaking into either company’s data centers sounded like something pulled from a Robert Ludlum spy thriller.

How on earth, the companies asked, did the N.S.A. get their data without them knowing about it?

The most likely answer is a modern spin on a century-old eavesdropping tradition.

People knowledgeable about Google and Yahoo’s infrastructure say they believe that government spies bypassed the big Internet companies and hit them at a weak spot — the fiber-optic cables that connect data centers around the world that are owned by companies like Verizon Communications, the BT Group, the Vodafone Group and Level 3 Communications. In particular, fingers have been pointed at Level 3, the world’s largest so-called Internet backbone provider, whose cables are used by Google and Yahoo.

The Internet companies’ data centers are locked down with full-time security and state-of-the-art surveillance, including heat sensors and iris scanners. But between the data centers — on Level 3’s fiber-optic cables that connected those massive computer farms — information was unencrypted and an easier target for government intercept efforts, according to three people with knowledge of Google’s and Yahoo’s systems who spoke on the condition of anonymity.

It is impossible to say for certain how the N.S.A. managed to get Google and Yahoo’s data without the companies’ knowledge. But both companies, in response to concerns over those vulnerabilities, recently said they were now encrypting data that runs on the cables between their data centers. Microsoft is considering a similar move.

“Everyone was so focused on the N.S.A. secretly getting access to the front door that there was an assumption they weren’t going behind the companies’ backs and tapping data through the back door, too,” said Kevin Werbach, an associate professor at the Wharton School.

Data transmission lines have a long history of being tapped.

As far back as the days of the telegraph, spy agencies have located their operations in proximity to communications companies. Indeed, before the advent of the Internet, the N.S.A. and its predecessors for decades operated listening posts next to the long-distance lines of phone companies to monitor all international voice traffic.

Beginning in the 1960s, a spy operation code-named Echelon targeted the Soviet Union and its allies’ voice, fax and data traffic via satellite, microwave and fiber-optic cables.

In the 1990s, the emergence of the Internet both complicated the task of the intelligence agencies and presented powerful new spying opportunities based on the ability to process vast amounts of computer data.

In 2002, John M. Poindexter, former national security adviser under President Ronald Reagan, proposed the Total Information Awareness plan, an effort to scan the world’s electronic information — including phone calls, emails and financial and travel records. That effort was scrapped in 2003 after a public outcry over potential privacy violations.

The technologies Mr. Poindexter proposed are similar to what became reality years later in N.S.A. surveillance programs like Prism and Bullrun.

The Internet effectively mingled domestic and international communications, erasing the bright line that had been erected to protect against domestic surveillance. Although the Internet is designed to be a highly decentralized system, in practice a small group of backbone providers carry almost all of the network’s data.

The consequences of the centralization and its value for surveillance were revealed in 2006 by Mark Klein, an AT&T technician who described an N.S.A. listening post inside a room at an AT&T switching facility.

The agency was capturing a copy of all the data passing over the telecommunications links and then filtering it in AT&T facilities that housed systems that were able to filter data packets at high speed.

Documents taken by Edward J. Snowden and reported by The Washington Post indicate that, seven years after Mr. Klein first described the N.S.A.’s surveillance technologies, they have been refined and modernized.

“From Echelon to Total Information Awareness to Prism, all these programs have gone under different names, but in essence do the same thing,” said Chip Pitts, a law lecturer at Stanford University School of Law.

Based in the Denver suburbs, Level 3 is not a household name like Verizon or AT&T, but in terms of its ability to carry traffic, it is bigger than the other two carriers combined. Its networking equipment is found in 200 data centers in the United States, more than 100 centers in Europe and 14 in Latin America.

Level 3 did not directly respond to an inquiry about whether it had given the N.S.A., or the agency’s foreign intelligence partners, access to Google and Yahoo’s data. In a statement, Level 3 said: “It is our policy and our practice to comply with laws in every country where we operate, and to provide government agencies access to customer data only when we are compelled to do so by the laws in the country where the data is located.”

Also, in a financial filing, Level 3 noted that, “We are party to an agreement with the U.S. Departments of Homeland Security, Justice and Defense addressing the U.S. government’s national security and law enforcement concerns. This agreement imposes significant requirements on us related to information storage and management; traffic management; physical, logical and network security arrangements; personnel screening and training; and other matters.”

Security experts say that regardless of whether Level 3’s participation is voluntary or not, recent N.S.A. disclosures make clear that even when Internet giants like Google and Yahoo do not hand over data, the N.S.A. and its intelligence partners can simply gather their data downstream.

That much was true last summer when United States authorities first began tracking Mr. Snowden’s movements after he left Hawaii for Hong Kong with thousands of classified documents. In May, authorities contacted Ladar Levison, who ran Lavabit, Mr. Snowden’s email provider, to install a tap on Mr. Snowden’s email account. When Mr. Levison did not move quickly enough to facilitate the tap on Lavabit’s network, the Federal Bureau of Investigation did so without him.

Mr. Levison said it was unclear how that tap was installed, whether through Level 3, which sold bandwidth to Lavabit, or at the Dallas facility where his servers and networking equipment are stored. When Mr. Levison asked the facility’s manager about the tap, he was told the manager could not speak with him. A spokesman for TierPoint, which owns the Dallas facility, did not return a call seeking a comment.

Mr. Pitts said that while working as the chief legal officer at Nokia in the 1990s, he successfully fended off an effort by intelligence agencies to get backdoor access into Nokia’s computer networking equipment.

Nearly 20 years later, Verizon has said that it and other carriers are forced to comply with government requests in every country in which they operate, and are limited in what they can say about their arrangements.

“At the end of the day, if the Justice Department shows up at your door, you have to comply,” Lowell C. McAdam, Verizon’s chief executive, said in an interview in September. “We have gag orders on what we can say and can’t defend ourselves, but we were told they do this with every carrier.”

Source:  nytimes.com

U.S. government rarely uses best cybersecurity steps: advisers

Friday, November 22nd, 2013

The U.S. government itself seldom follows the best cybersecurity practices and must drop its old operating systems and unsecured browsers as it tries to push the private sector to tighten its practices, technology advisers told President Barack Obama.

“The federal government rarely follows accepted best practices,” the President’s Council of Advisors on Science and Technology said in a report released on Friday. “It needs to lead by example and accelerate its efforts to make routine cyberattacks more difficult by implementing best practices for its own systems.”

PCAST is a group of top U.S. scientists and engineers who make policy recommendations to the administration. William Press, computer science professor at the University of Texas at Austin, and Craig Mundie, senior adviser to the CEO at Microsoft Corp, comprised the cybersecurity working group.

The Obama administration this year stepped up its push for critical industries to bolster their cyber defenses, and Obama in February issued an executive order aimed at countering the lack of progress on cybersecurity legislation in Congress.

As part of the order, a non-regulatory federal standard-setting board last month released a draft of voluntary standards that companies can adopt, which it compiled through industry workshops.

But while the government urges the private sector to adopt such minimum standards, technology advisers say it must raise its own standards.

The advisers said the government should rely more on automatic updates of software, require better proof of identities of people, devices and software, and more widely use the Trusted Platform Module, an embedded security chip.

The advisers also said for swifter response to cyber threats, private companies should share more data among themselves and, “in appropriate circumstances” with the government. Press said the government should promote such private sector partnerships, but that sensitive information exchanged in these partnerships “should not be and would not be accessible to the government.”

The advisers steered the administration away from “government-mandated, static lists of security measures” and toward standards reached by industry consensus, but audited by third parties.

The report also pointed to Internet service providers as well-positioned to spur rapid improvements by, for instance, voluntarily alerting users when their devices are compromised.

Source: reuters.com

Encrypt everything: Google’s answer to government surveillance is catching on

Thursday, November 21st, 2013

While Microsoft’s busy selling t-shirts and mugs about how Google’s “Scroogling” you, the search giant’s chairman is busy tackling a much bigger problem: How to keep your information secure in a world full of prying eyes and governments willing to drag in data by the bucket load. And according to Google’s Eric Schmidt, the answer is fairly straightforward.

“We can end government censorship in a decade,” Schmidt said Wednesday during a speech in Washington, according to Bloomberg. “The solution to government surveillance is to encrypt everything.”

Google’s certainly putting its SSL certificates where Schmidt’s mouth is, too: In the “Encrypt the Web” scorecard released by the Electronic Frontier Foundation earlier this month, Google was one of the few Internet giants to receive a perfect five out of five score for its encryption efforts. Even basic Google.com searches default to HTTPS encryption these days, and Google goes so far as to encrypt data traveling in-network between its data centers.

Spying eyes

That last move didn’t occur in a vacuum, however. Earlier this month, one of the latest Snowden revelations revealed that the National Security Agency’s MUSCULAR program taps the links flowing between Google and Yahoo’s internal data centers.

“We have strengthened our systems remarkably as a result of the most recent events,” Schmidt said during the speech. “It’s reasonable to expect that the industry as a whole will continue to strengthen these systems.”

Indeed, Yahoo recently announced plans to encrypt, well, everything in the wake of the recent NSA surveillance revelations. Dropbox, Facebook, Sonic.net, and the SpiderOak cloud storage service also received flawless marks in the EFF’s report.

And the push for ubiquitous encryption recently gained an even more formidable proponent. The Internet Engineering Task Force working on HTTP 2.0 announced last week that the next-gen version of the crucial protocol will only work with HTTPS encrypted URLs.

Yes, all encryption, all the time could very well become the norm on the ‘Net before long. But while that will certainly raise the general level of security and privacy web-wide, don’t think for a minute that HTTPS is a silver bullet against pervasive government surveillance. In yet another Snowden-supplied revelation released in September, it was revealed that the NSA spends more than $250 million year-in and year-out in its efforts to break online encryption techniques.

Source:  csoonline.com

Repeated attacks hijack huge chunks of Internet traffic, researchers warn

Thursday, November 21st, 2013

Man-in-the-middle attacks divert data on scale never before seen in the wild.

Huge chunks of Internet traffic belonging to financial institutions, government agencies, and network service providers have repeatedly been diverted to distant locations under unexplained circumstances that are stoking suspicions the traffic may be surreptitiously monitored or modified before being passed along to its final destination.

Researchers from network intelligence firm Renesys made that sobering assessment in a blog post published Tuesday. Since February, they have observed 38 distinct events in which large blocks of traffic have been improperly redirected to routers at Belarusian or Icelandic service providers. The hacks, which exploit implicit trust placed in the border gateway protocol used to exchange data between large service providers, affected “major financial institutions, governments, and network service providers” in the US, South Korea, Germany, the Czech Republic, Lithuania, Libya, and Iran.

The ease of altering or deleting authorized BGP routes, or of creating new ones, has long been considered a potential Achilles’ heel for the Internet. Indeed, in 2008, YouTube became unreachable for virtually all Internet users after a Pakistani ISP altered a route in a ham-fisted attempt to block the service in just that country. Later that year, researchers at the Defcon hacker conference showed how BGP routes could be manipulated to redirect huge swaths of Internet traffic. By diverting it to unauthorized routers under control of hackers, they were then free to monitor or tamper with any data that was unencrypted before sending it to its intended recipient with little sign of what had just taken place.

“This year, that potential has become reality,” Renesys researcher Jim Cowie wrote. “We have actually observed live man-in-the-middle (MitM) hijacks on more than 60 days so far this year. About 1,500 individual IP blocks have been hijacked, in events lasting from minutes to days, by attackers working from various countries.”

At least one unidentified voice-over-IP provider has also been targeted. In all, data destined for 150 cities has been intercepted. The attacks are serious because they affect the Internet equivalent of a US interstate that can carry data for hundreds of thousands or even millions of people. And unlike the typical BGP glitches that arise from time to time, the attacks observed by Renesys provide few outward signs to users that anything is amiss.

“The recipient, perhaps sitting at home in a pleasant Virginia suburb drinking his morning coffee, has no idea that someone in Minsk has the ability to watch him surf the Web,” Cowie wrote. “Even if he ran his own traceroute to verify connectivity to the world, the paths he’d see would be the usual ones. The reverse path, carrying content back to him from all over the world, has been invisibly tampered with.”
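While an end user’s traceroute reveals nothing, network operators can spot this class of hijack by watching global BGP announcements for their own prefixes and flagging routes whose origin AS or upstream AS suddenly changes. The sketch below illustrates that idea in Python; the prefixes, AS numbers, and baseline table are hypothetical placeholders, not Renesys’ methodology or any particular route collector’s feed.

# Minimal sketch: flag BGP announcements that disagree with an expected
# baseline for prefixes we care about. All prefixes and AS numbers below
# are documentation/private-use values, i.e. hypothetical.
EXPECTED = {
    # prefix: expected origin AS and the upstream ASes normally seen in front of it
    "203.0.113.0/24": {"origin": 64500, "upstreams": {64510, 64511}},
    "198.51.100.0/24": {"origin": 64501, "upstreams": {64512}},
}

def check_announcement(prefix, as_path):
    """Return warnings for one announcement; as_path ends with the origin AS."""
    baseline = EXPECTED.get(prefix)
    if baseline is None:
        return []                                   # not a prefix we monitor
    origin = as_path[-1]
    if origin != baseline["origin"]:
        return [f"{prefix}: unexpected origin AS{origin}, possible hijack"]
    if len(as_path) >= 2 and as_path[-2] not in baseline["upstreams"]:
        return [f"{prefix}: unexpected upstream AS{as_path[-2]} in front of the origin"]
    return []

# A route for a monitored prefix suddenly heard through an unfamiliar transit AS.
for warning in check_announcement("203.0.113.0/24", [64513, 64520, 64500]):
    print(warning)

In practice, an operator would feed a check like this from a route-monitoring service or its own BGP collectors and alert on any deviation that persists for more than a moment.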

Guadalajara to Washington via Belarus

Renesys observed the first route hijacking in February when various routes across the globe were mysteriously funneled through Belarusian ISP GlobalOneBel before being delivered to their final destination. One trace, traveling from Guadalajara, Mexico, to Washington, DC, normally would have been handed from Mexican provider Alestra to US provider PCCW in Laredo, Texas, and from there to the DC metro area and then, finally, delivered to users through the Qwest/Centurylink service provider. According to Cowie:

Instead, however, PCCW gives it to Level3 (previously Global Crossing), who is advertising a false Belarus route, having heard it from Russia’s TransTelecom, who heard it from their customer, Belarus Telecom. Level3 carries the traffic to London, where it delivers it to Transtelecom, who takes it to Moscow and on to Belarus. Beltelecom has a chance to examine the traffic and then sends it back out on the “clean path” through Russian provider ReTN (recently acquired by Rostelecom). ReTN delivers it to Frankfurt and hands it to NTT, who takes it to New York. Finally, NTT hands it off to Qwest/Centurylink in Washington DC, and the traffic is delivered.

Such redirections occurred on an almost daily basis throughout February, with the set of affected networks changing every 24 hours or so. The diversions stopped in March. When they resumed in May, they used a different customer of Bel Telecom as the source. In all, Renesys researchers saw 21 redirections. Then, also during May, they saw something completely new: a hijack lasting only five minutes diverting traffic to Nyherji hf (also known as AS29689, short for autonomous system 29689), a small provider based in Iceland.

Renesys didn’t see anything more until July 31 when redirections through Iceland began in earnest. When they first resumed, the source was provider Opin Kerfi (AS48685).

Cowie continued:

In fact, this was one of seventeen Icelandic events, spread over the period July 31 to August 19. And Opin Kerfi was not the only Icelandic company that appeared to announce international IP address space: in all, we saw traffic redirections from nine different Icelandic autonomous systems, all customers of (or belonging to) the national incumbent Síminn. Hijacks affected victims in several different countries during these events, following the same pattern: false routes sent to Síminn’s peers in London, leaving ‘clean paths’ to North America to carry the redirected traffic back to its intended destination.

In all, Renesys observed 17 redirections to Iceland. To appreciate how circuitous some of the routes were, consider the case of traffic passing between two locations in Denver: it traveled all the way to Iceland through a series of hops before finally reaching its intended destination.

Cowie said Renesys’ researchers still don’t know who is carrying out the attacks, what their motivation is, or exactly how they’re pulling them off. Members of Icelandic telecommunications company Síminn, which provides Internet backbone services in that country, told Renesys the redirections to Iceland were the result of a software bug and that the problem had gone away once it was patched. They told the researchers they didn’t believe the diversions had a malicious origin.

Cowie said that explanation is “unlikely.” He went on to say that even if it does prove correct, it’s nonetheless highly troubling.

“If this is a bug, it’s a dangerous one, capable of simulating an extremely subtle traffic redirection/interception attack that plays out in multiple episodes, with varying targets, over a period of weeks,” he wrote. “If it’s a bug that can be exploited remotely, it needs to be discussed more widely within the global networking community and eradicated.”

Source:  arstechnica.com

HP: 90 percent of Apple iOS mobile apps show security vulnerabilities

Tuesday, November 19th, 2013

HP today said security testing it conducted on more than 2,000 Apple iOS mobile apps developed for commercial use by some 600 large companies in 50 countries showed that nine out of 10 had serious vulnerabilities.

Mike Armistead, HP vice president and general manager, said testing was done on apps from 22 iTunes App Store categories that are used for business-to-consumer or business-to-business purposes, such as banking or retailing. HP said 97 percent of these apps inappropriately accessed private information sources within a device, and 86 percent proved to be vulnerable to attacks such as SQL injection.

Apple’s guidelines for developing iOS apps help developers, but they don’t go far enough in terms of security, says Armistead. Mobile apps are being used to extend the corporate website to mobile devices, but companies in the process “are opening up their attack surfaces,” he says.

In its summary of the testing, HP said 86 percent of the apps tested lacked the means to protect themselves from common exploits, such as misuse of encrypted data, cross-site scripting and insecure transmission of data.

The same number did not have optimized security built in during the early part of the development process, according to HP. Three quarters “did not use proper encryption techniques when storing data on mobile devices, which leaves unencrypted data accessible to an attacker.” A large number of the apps didn’t implement SSL/HTTPS correctly.

To discover weaknesses in apps, developers need to adopt practices such as app scanning for security, penetration testing and a secure coding development life-cycle approach, HP advises.
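To make the injection finding concrete, the snippet below contrasts the vulnerable pattern HP describes (untrusted input concatenated into a query) with a parameterized query. It is written in Python with SQLite purely for illustration; the apps in HP’s report are iOS binaries, but the flaw and the fix look the same in any language, and the table and data here are hypothetical.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (username TEXT, balance REAL)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100.0), ('bob', 50.0)")

user_input = "alice' OR '1'='1"   # attacker-controlled string

# Vulnerable: concatenating untrusted input into the SQL text lets the
# attacker rewrite the query; here it returns every row in the table.
vulnerable = conn.execute(
    "SELECT username, balance FROM accounts WHERE username = '%s'" % user_input
).fetchall()
print("concatenated query returned:", vulnerable)

# Safer: a parameterized query treats the input purely as data, so the
# injected quote characters never reach the SQL parser as syntax.
parameterized = conn.execute(
    "SELECT username, balance FROM accounts WHERE username = ?", (user_input,)
).fetchall()
print("parameterized query returned:", parameterized)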

The need to develop mobile apps quickly for business purposes is one of the main contributing factors leading to weaknesses in these apps made available for public download, according to HP. And the weakness on the mobile side is impacting the server side as well.

“It is our earnest belief that the pace and cost of development in the mobile space has hampered security efforts,” HP says in its report, adding that “mobile application security is still in its infancy.”

Source:  infoworld.com

Internet architects propose encrypting all the world’s Web traffic

Thursday, November 14th, 2013

A vastly larger percentage of the world’s Web traffic will be encrypted under a near-final recommendation to revise the Hypertext Transfer Protocol (HTTP) that serves as the foundation for all communications between websites and end users.

The proposal, announced in a letter published Wednesday by an official with the Internet Engineering Task Force (IETF), comes after documents leaked by former National Security Agency contractor Edward Snowden heightened concerns about government surveillance of Internet communications. Despite those concerns, websites operated by Yahoo, the federal government, the site running this article, and others continue to publish the majority of their pages in a “plaintext” format that can be read by government spies or anyone else who has access to the network the traffic passes over. Last week, cryptographer and security expert Bruce Schneier urged people to “make surveillance expensive again” by encrypting as much Internet data as possible.

The HTTPbis Working Group, the IETF body charged with designing the next-generation HTTP 2.0 specification, is proposing that encryption be the default way data is transferred over the “open Internet.” A growing number of groups participating in the standards-making process—particularly those who develop Web browsers—support the move, although as is typical in technical deliberations, there’s debate about how best to implement the changes.

“There seems to be strong consensus to increase the use of encryption on the Web, but there is less agreement about how to go about this,” Mark Nottingham, chair of the HTTPbis working group, wrote in Wednesday’s letter. (HTTPbis roughly translates to “HTTP again.”)

He went on to lay out three implementation proposals and describe their pros and cons:

A. Opportunistic encryption for http:// URIs without server authentication—aka “TLS Relaxed” as per draft-nottingham-http2-encryption.

B. Opportunistic encryption for http:// URIs with server authentication—the same mechanism, but not “relaxed,” along with some form of downgrade protection.

C. HTTP/2 to only be used with https:// URIs on the “open” Internet. http:// URIs would continue to use HTTP/1 (and of course it would still be possible for older HTTP/1 clients to still interoperate with https:// URIs).

In subsequent discussion, there seems to be agreement that (C) is preferable to (B), since it is more straightforward; no new mechanism needs to be specified, and HSTS can be used for downgrade protection.

(C) also has this advantage over (A) and furthermore provides stronger protection against active attacks.

The strongest objections against (A) seemed to be about creating confusion about security and discouraging use of “full” TLS, whereas those against (C) were about limiting deployment of better security.

Keen observers have noted that we can deploy (C) and judge adoption of the new protocol, later adding (A) if necessary. The reverse is not necessarily true.

Furthermore, in discussions with browser vendors (who have been among those most strongly advocating more use of encryption), there seems to be good support for (C), whereas there’s still a fair amount of doubt/disagreement regarding (A).

Pros, cons, and carrots

As Nottingham acknowledged, there are major advantages and disadvantages for each option. Proposal A would be easier for websites to implement because it wouldn’t require them to authenticate their servers using a digital certificate that is recognized by all the major browsers. This relaxation of current HTTPS requirements would eliminate a hurdle that stops many websites from encrypting traffic now, but it also comes at a cost. The lack of authentication could make it trivial for the person at an Internet cafe or the spy monitoring Internet backbones to create a fraudulent digital certificate that impersonates websites using this form of relaxed transport layer security (TLS). That risk calls into question whether the weakened measure is worth the hassle of implementing.
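In practical terms, the difference between proposal A and full HTTPS comes down to whether the client verifies the certificate the server presents. The sketch below uses Python’s ssl module as a rough stand-in for what a browser does, showing an authenticated handshake next to the kind of unauthenticated one that “TLS Relaxed” would tolerate; the host name is just an example, and this illustrates the distinction rather than the actual HTTP 2.0 negotiation mechanism.

import socket
import ssl

HOST = "example.com"   # any HTTPS-capable host will do

# Full TLS, as in proposals (B) and (C): the default context verifies the
# server's certificate chain and host name, so a man-in-the-middle with a
# self-signed or mismatched certificate causes the handshake to fail.
strict = ssl.create_default_context()

# Roughly what "TLS Relaxed" (proposal A) accepts: the session is encrypted,
# but no certificate or host name checking is done, so any interceptor able
# to terminate TLS can impersonate the server undetected.
relaxed = ssl.create_default_context()
relaxed.check_hostname = False
relaxed.verify_mode = ssl.CERT_NONE

for label, ctx in (("authenticated", strict), ("unauthenticated", relaxed)):
    with socket.create_connection((HOST, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            print(label, "handshake OK, cipher:", tls.cipher()[0])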

Proposal B, by contrast, would make it much harder for attackers, since HTTP 2.0 traffic by default would be both encrypted and authenticated. But the increased cost and effort required by millions of websites may stymie the adoption of the new specification, which in addition to encryption offers improvements such as increased header compression and asynchronous connection multiplexing.

Proposal C seems to resolve the tension between the other two options by moving in a different direction altogether—that is, by implementing HTTP 2.0 only in full-blown HTTPS traffic. This approach attempts to use the many improvements of the new standard as a carrot that gives websites an incentive to protect their traffic with traditional HTTPS encryption.

The options that the working group is considering do a fair job of mapping the current debate over Web-based encryption. A common argument is that more sites can and should encrypt all or at least most of their traffic. Even better is when sites provide this encryption while at the same time providing strong cryptographic assurances that the server hosting the website is the one operated by the domain-name holder listed in the address bar—rather than by an attacker who is tampering with the connection.

Unfortunately, the proposals are passing over an important position in the debate over Web encryption, involving the viability of the current TLS and secure sockets layer (SSL) protocols that underpin all HTTPS traffic. With more than 500 certificate authorities located all over the world recognized by major browsers, all it takes is the compromise of one of them for the entire system to fail (although certificate pinning in some cases helps contain the damage). There’s nothing in Nottingham’s letter indicating that this single point of failure will be addressed. The current HTTPS system has serious privacy implications for end users, since certificate authorities can log huge numbers of requests for SSL-protected websites and map them to individual IP addresses. This is also unaddressed.
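Certificate pinning, mentioned above as a partial mitigation, amounts to remembering a fingerprint of the certificate (or, more commonly, its public key) that a site is supposed to present and refusing to talk if anything else shows up, no matter which CA signed it. The sketch below illustrates the idea by hashing the presented certificate and comparing it against a stored pin; the host and the pin value are hypothetical placeholders, and real deployments typically pin the SPKI hash and carry backup pins to survive key rotation.

import hashlib
import socket
import ssl

def presented_cert_sha256(host, port=443):
    """Return the SHA-256 fingerprint of the certificate a server presents."""
    # CA verification is disabled here only because we are checking the exact
    # certificate ourselves instead of trusting the CA system.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der_cert = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der_cert).hexdigest()

# Hypothetical pin recorded out of band from the site operator; a mismatch
# later means a different certificate is being served, whether because of
# legitimate rotation or an impersonation attempt.
EXPECTED_PIN = "0000000000000000000000000000000000000000000000000000000000000000"

fingerprint = presented_cert_sha256("example.com")
if fingerprint != EXPECTED_PIN:
    print("pin mismatch, refuse to send data:", fingerprint)
else:
    print("pinned certificate matched")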

It’s unfortunate that the letter didn’t propose alternatives to the largely broken TLS system, such as the one dubbed Trust Assertions for Certificate Keys, which was conceived by researchers Moxie Marlinspike and Trevor Perrin. Then again, as things are now, the engineers in the HTTPbis Working Group are likely managing as much controversy as they can. Adding an entirely new way to encrypt Web traffic to an already sprawling list of considerations would probably prove to be too much.

Source:  arstechnica.com

US government releases draft cybersecurity framework

Thursday, October 24th, 2013

NIST comes out with its proposed cybersecurity standards, which outline how private companies can protect themselves against hacks, cyberattacks, and security breaches.

The National Institute of Standards and Technology released its draft cybersecurity framework for private companies and infrastructure networks on Tuesday. The standards were called for in an executive order that President Obama signed in February.

The aim of NIST’s framework (PDF) is to create guidelines that companies can use to beef up their networks and guard against hackers and cybersecurity threats. Adopting this framework would be voluntary for companies. NIST is a non-regulatory agency within the Department of Commerce.

The framework was written with the involvement of roughly 3,000 industry and academic experts, according to Reuters. It outlines ways that companies could protect their networks and act fast if and when they experience security breaches.

“The framework provides a common language for expressing, understanding, and managing cybersecurity risk, both internally and externally,” reads the draft standards. “The framework can be used to help identify and prioritize actions for reducing cybersecurity risk and is a tool for aligning policy, business, and technological approaches to managing that risk.”

Obama’s executive order in February was part of a government effort to get cybersecurity legislation in place, but that legislation was put on hold after the National Security Agency’s surveillance program was revealed.

Some of the components in Obama’s order included: expanding “real time sharing of cyber threat information” to companies that operate critical infrastructure, asking NIST to devise cybersecurity standards, and proposing a “review of existing cybersecurity regulation.”

Critical infrastructure networks, banks, and private companies have increasingly been hit by cyberattacks over the past couple of years. For example, weeks after the former head of Homeland Security, Janet Napolitano, announced that she believed a “cyber 9/11” could happen “imminently” — crippling the country’s power grid, water infrastructure, and transportation networks — hackers hit the US Department of Energy. While no data was compromised, it did show that hackers were able to breach the computer system.

In May, Congress released a survey that claimed power utilities in the U.S. are under “daily” cyberattacks. Of about 160 utilities interviewed for the survey, more than a dozen reported “daily,” “constant,” or “frequent” attempted cyberattacks on their computer systems. While the data in the survey sounded alarming, none of the utilities reported any damage to their facilities or actual breaches of their systems — but rather attempts to hack their networks.

While companies are well aware that they need to secure their networks, many are wary of signing onto this voluntary framework. According to Reuters, some companies are worried that the standards could turn into requirements.

In an effort to get companies to adopt the framework, the government has been offering a slew of incentives, including cybersecurity insurance, priority consideration for grants, and streamlined regulations. These proposed incentives are a preliminary step for the government’s cybersecurity policy and have not yet been finalized.

NIST will now take public comments for 45 days and plans to issue the final cybersecurity framework in February 2014.

Source:  CNET

Cisco says controversial NIST crypto, a potential NSA backdoor, ‘not invoked’ in products

Thursday, October 17th, 2013

Controversial crypto technology known as Dual EC DRBG, thought to be a backdoor for the National Security Agency, ended up in some Cisco products as part of their code libraries. But Cisco says it cannot be used because the company chose other crypto as an operational default, which can’t be changed.

Dual EC DRBG, or Dual Elliptic Curve Deterministic Random Bit Generator, is a random-bit-generation standard from the National Institute of Standards and Technology; its inclusion in a crypto toolkit from RSA is thought to have been one main way the crypto ended up in hundreds of vendors’ products.

Because Cisco is known to have used the BSAFE crypto toolkit, the company has faced questions about where Dual EC DRBG may have ended up in the Cisco product line. In a Cisco blog post today, Anthony Grieco, principal engineer at Cisco, tackled this topic in a notice about how Cisco chooses crypto.

“Before we go any further, I’ll go ahead and get it out there: we don’t use the Dual_EC_DRBG in our products. While it is true that some of the libraries in our products can support the DUAL_EC_DRBG, it is not invoked in our products.”

Grieco wrote that Cisco, like most tech companies, uses cryptography in nearly all its products, if only for secure remote management.

“Looking back at our DRBG decisions in the context of these guiding principles, we looked at all four DRBG options available in NIST SP 800-90. As none had compelling interoperability or legal implementation implications, we ultimately selected the Advanced Encryption Standard Counter mode (AES-CTR) DRBG as our default.”

Grieco stated this was “because of our comfort with the underlying implementation, the absence of any general security concerns, and its acceptable performance. Dual_EC_DRBG was implemented but wasn’t seriously considered as the default given the other good choices available.”
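For readers curious what an AES-CTR-based DRBG actually does, the sketch below implements the core counter-and-update loop in the spirit of NIST SP 800-90A using the Python cryptography package. It deliberately omits the derivation function, reseeding, personalization strings, and health tests the standard requires, so it is an illustration of the construction only: not a compliant implementation, and certainly not Cisco’s code.

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

BLOCK = 16      # AES block size in bytes
SEEDLEN = 32    # key length (16) + block length (16) for AES-128

def _aes_block(key, block):
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(block) + enc.finalize()

class TinyCtrDrbg:
    """Toy AES-CTR DRBG: keeps a (Key, V) state and encrypts a counter."""

    def __init__(self, entropy):
        assert len(entropy) == SEEDLEN
        self._key = bytes(BLOCK)   # all-zero starting key, per the construction
        self._v = 0
        self._update(entropy)

    def _update(self, provided):
        # Refresh (Key, V) by running the counter forward and XORing in
        # the caller-provided seed material.
        temp = b""
        while len(temp) < SEEDLEN:
            self._v = (self._v + 1) % (1 << 128)
            temp += _aes_block(self._key, self._v.to_bytes(BLOCK, "big"))
        temp = bytes(a ^ b for a, b in zip(temp[:SEEDLEN], provided))
        self._key, self._v = temp[:BLOCK], int.from_bytes(temp[BLOCK:], "big")

    def generate(self, n):
        # Produce output by encrypting successive counter values, then update
        # the state so earlier outputs cannot be reconstructed from it.
        out = b""
        while len(out) < n:
            self._v = (self._v + 1) % (1 << 128)
            out += _aes_block(self._key, self._v.to_bytes(BLOCK, "big"))
        self._update(bytes(SEEDLEN))
        return out[:n]

drbg = TinyCtrDrbg(os.urandom(SEEDLEN))   # seed from the OS entropy source
print(drbg.generate(32).hex())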

Grieco said the DRBG choice that Cisco made “cannot be changed by the customer.”

Faced with the Dual EC DRBG controversy, which was triggered by the revelations about the NSA by former NSA contractor Edward Snowden, NIST itself has re-opened comments about this older crypto standard.

“The DRBG controversy has brought renewed focus on the crypto industry and the need to constantly evaluate cryptographic algorithm choices,” Grieco wrote in the blog today. “We welcome this conversation as an opportunity to improve security of the communications infrastructure. We’re open to serious discussions about the industry’s cryptographic needs, what’s next for our products, and how to collectively move forward.” Cisco invited comment on that online.

Grieco concluded, “We will continue working to ensure our products offer secure algorithms, and if they don’t, we’ll fix them.”

Source:  computerworld.com

Symantec disables 500,000 botnet-infected computers

Tuesday, October 1st, 2013

Symantec has disabled part of one of the world’s largest networks of infected computers.

About 500,000 hijacked computers have been taken out of the 1.9 million strong ZeroAccess botnet, the security company said.

The zombie computers were used for advertising and online currency fraud and to infect other machines.

Security experts warned that any benefits from the takedown might be short-lived.

The cybercriminals behind the network had not yet been identified, said Symantec.

“We’ve taken almost a quarter of the botnet offline,” Symantec security operations manager Orla Cox told the BBC. “That’s taken away a quarter of [the criminals’] earnings.”

The ZeroAccess network is used to generate illegal cash through a type of advertising deception known as “click fraud”.

Communications poisoned

Zombie computers are commanded to download online adverts and generate artificial mouse clicks on the ads to mimic legitimate users and generate payouts from advertisers.

The computers are also used to create an online currency called Bitcoin, which can be used to pay for goods and services.

The ZeroAccess botnet is not controlled by one or two servers, but relies on waves of communications between groups of infected computers to do the bidding of the criminals.

The decentralised nature of the botnet made it difficult to act against, said Symantec.

In July, the company started poisoning the communications between the infected computers, permanently cutting them off from the rest of the hijacked network, said Ms Cox.

The company had set the ball in motion after noticing that a new version of the ZeroAccess software was being distributed through the network.

The updated version of the ZeroAccess Trojan contained modifications that made it more difficult to disrupt communications between peers in the infected network.

Symantec built its own mini-ZeroAccess botnet to study effective ways of taking down the network, and tested different takedown methods for two weeks.
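The peer-list poisoning Symantec describes can be pictured with a toy simulation: sinkhole nodes join the gossip and only ever advertise other sinkholes, so bots that talk to them gradually lose track of the real network. The sketch below models that dynamic in Python with made-up numbers of bots, peers, and rounds; it is a generic illustration of the concept, not ZeroAccess’s actual protocol or Symantec’s technique.

import random

random.seed(1)
N_BOTS, N_SINK, PEERS, ROUNDS = 1000, 50, 8, 200

# IDs 0..N_BOTS-1 are infected machines; higher IDs are researcher-run sinkholes.
sinkholes = set(range(N_BOTS, N_BOTS + N_SINK))
peer_lists = {b: set(random.sample(range(N_BOTS), PEERS)) for b in range(N_BOTS)}

# Seed sinkhole addresses into a few bots' peer lists, as if the sinkhole
# nodes had simply joined the P2P network and announced themselves.
for b in random.sample(range(N_BOTS), 50):
    peer_lists[b].add(random.choice(sorted(sinkholes)))

def advertise(peer_id):
    """What a peer hands back when asked for more peers."""
    if peer_id in sinkholes:
        return sinkholes            # poisoned answer: only other sinkholes
    return peer_lists[peer_id]      # honest answer: a bot's real peer list

for _ in range(ROUNDS):
    for bot in range(N_BOTS):
        contact = random.choice(sorted(peer_lists[bot]))
        merged = list(peer_lists[bot] | advertise(contact))
        peer_lists[bot] = set(random.sample(merged, min(PEERS, len(merged))))

isolated = sum(peers <= sinkholes for peers in peer_lists.values())
print(f"{isolated} of {N_BOTS} bots now know only sinkhole peers")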

The company studied the botnet and disabled the computers as part of its research operations, which feed into product development, said Ms Cox.

“Hopefully this will help us in the future to build up better protection,” she said.

Internet service providers have been informed which machines were taken out of the botnet in an effort to let the owners of the computers know that their machine was a zombie.

Resilient zombies

Although a quarter of the zombie network has been taken out of action, the upgraded version of the botnet will be more difficult to take down, said Ms Cox.

“These are professional cybercriminals,” she said. “They will likely be looking for ways to get back up to strength.”

In the long term, the zombie network could grow back to its previous size, security experts said.

“Every time a botnet is taken down, but the people who run it are not arrested, there is a chance they can rebuild the botnet,” said Vincent Hanna, a researcher for non-profit anti-spam project Spamhaus.

The remaining resilient part of the network may continue to be used for fraud, and could start spreading the upgraded ZeroAccess Trojan, Mr Hanna warned.

Taking down infected networks is a “thankless task”, according to Sophos, a rival to Symantec.

“It’s a bit like trying to deal with the rabbit problem in Australia – you know you’re unlikely ever to win, but you also know that you have to keep trying, or you will definitely lose,” said Sophos head of technology Paul Ducklin.

Source:  BBC

Weighing the IT implications of implementing SDNs

Friday, September 27th, 2013

Software-defined anything has myriad issues for data centers to consider before implementation

Software-defined networks should make IT execs think through a number of key factors before implementation.

Issues such as technology maturity, cost efficiencies, security implications, policy establishment and enforcement, interoperability and operational change weigh heavily on IT departments considering software-defined data centers. But perhaps the biggest consideration in software-defining your IT environment is, why would you do it?

“We have to present a pretty convincing story of, why do you want to do this in the first place?” said Ron Sackman, chief network architect at Boeing, at the recent Software Defined Data Center Symposium in Santa Clara. “If it ain’t broke, don’t fix it. Prove to me there’s a reason we should go do this, particularly if we already own all of the equipment and packets are flowing. We would need a compelling use case for it.”

And if that compelling use case is established, the next task is to get everyone on board and comfortable with the notion of a software-defined IT environment.

“The willingness to accept abstraction is kind of a trade-off between control of people and hardware vs. control of software,” says Andy Brown, Group CTO at UBS, speaking on the same SDDC Symposium panel. “Most operations people will tell you they don’t trust software. So one of the things you have to do is win enough trust to get them to be able to adopt.”

Trust might start with assuring the IT department and its users that a software-defined network or data center is secure, or at least as secure as the environment it is replacing or founded on. Boeing is looking at SDN from a security perspective, trying to determine whether it’s something it can objectively recommend to its internal users.

“If you look at it from a security perspective, the best security for a network environment is a good design of the network itself,” Sackman says. “Things like Layer 2 and Layer 3 VPNs backstop your network security, and they have not historically been a big cyberattack surface. So my concern is, are the capex and opex savings going to justify the risk that you’re taking by opening up a bigger cyberattack surface, something that hasn’t been a problem to this point?”

Another concern Sackman has is in the actual software development itself, especially if a significant amount of open source is used.

“What sort of assurance does someone have – particularly if this is open source software – that the software you’re integrating into your solution is going to be secure?” he asks. “How do you scan that? There’s a big development-time security vector that doesn’t really exist at this point.”

Policy might be the key to ensuring that security and other operational aspects in place pre-SDN/SDDC are not disrupted post-implementation. Policy-based orchestration, automation and operational execution are touted among SDN’s chief benefits.

“I believe that policy will become the most important factor in the implementation of a software-defined data center because if you build it without policy, you’re pretty much giving up on the configuration strategy, the security strategy, the risk management strategy, that have served us so well in the siloed world of the last 20 years,” UBS’ Brown says.

Software-defined data centers also promise to break down those silos through cross-function orchestration of the compute, storage, network and application elements in an IT shop. But that’s easier said than done, Brown notes – interoperability is not a guarantee in the software-defined world.

“Information protection and data obviously have to interoperate extremely carefully,” he says. The success of software-defined workload management – a.k.a. virtualization and cloud – has in a way created a set of children, not all of which can necessarily be implemented in parallel, but all of which are required to get to the end state of the software-defined data center.

“Now when you think of all the other software abstraction we’re trying to introduce in parallel, someone’s going to cry uncle. So all of these things need to interoperate with each other.”

So are the purported capital and operational cost savings of implementing SDN/SDDCs worth the undertaking? Do those cost savings even exist?

Brown believes they exist in some areas and not in others.

“There’s a huge amount of cost take-out in software-defined storage that isn’t necessarily there in SDN right now,” he said. “And the reason it’s not there in SDN is because people aren’t ripping out the expensive under network and replacing it with SDN. Software-defined storage probably has more legs than SDN because of the cost pressure. We’ve got massive cost targets by the end of 2015 and if I were backing horses, my favorite horse would be software-defined storage rather than software-defined networks.”

Sackman believes the overall savings are there in SDN/SDDCs but again, the security uncertainty may make those benefits not currently worth the risk.

“The capex and opex savings are very compelling, and there are particular use cases specifically for SDN that I think would be great if we could solve specific pain points and problems that we’re seeing,” he says. “But I think, in general, security is a big concern, particularly if you think about competitors co-existing as tenants in the same data center — if someone develops code that’s going to poke a hole in the L2 VPN in that data center and export data from Coke to Pepsi.

“We just won a proposal for a security operations center for a foreign government, and I’m thinking can we offer a better price point on our next proposal if we offer an SDN switch solution vs. a vendor switch solution? A few things would have to happen before we feel comfortable doing that. I’d want to hear a compelling story around maturity before we would propose it.”

Source: networkworld.com

Schools’ use of cloud services puts student privacy at risk

Tuesday, September 24th, 2013

Vendors should promise not to use targeted advertising and behavioral profiling, SafeGov said

Schools that compel students to use commercial cloud services for email and documents are putting privacy at risk, says a campaign group calling for strict controls on the use of such services in education.

A core problem is that cloud providers force schools to accept policies that authorize user profiling and online behavioral advertising. Some cloud privacy policies stipulate that students are also bound by these policies, even when they have not had the opportunity to grant or withhold their consent, said privacy campaign group SafeGov.org in a report released on Monday.

There is also the risk of commercial data mining. “When school cloud services derive from ad-supported consumer services that rely on powerful user profiling and tracking algorithms, it may be technically difficult for the cloud provider to turn off these functions even when ads are not being served,” the report said.

Furthermore, because providers fail to create interfaces that distinguish between ad-free and ad-supported versions, students may be lured from the ad-free services intended for school use to consumer ad-driven services that engage in highly intrusive processing of personal information, according to the report. This could be the case with email, online video, networking and basic search.

Also, contracts used by cloud providers don’t guarantee ad-free services because they are ambiguously worded and include the option to serve ads, the report said.

SafeGov has sought support from European Data Protection Authorities (DPAs), some of which endorsed the use of codes of conduct establishing rules to which schools and cloud providers could voluntarily agree. Such codes should include a binding pledge to ban targeted advertising in schools as well as the processing or secondary use of data for advertising purposes, SafeGov recommended.

“We think any provider of cloud computing services to schools (Google Apps and Microsoft 365 included) should sign up to follow the Codes of Conduct outlined in the report,” said a SafeGov spokeswoman in an email.

Even when ad serving is disabled, the privacy of students may still be jeopardized, the report said.

For example, while Google’s policy for Google Apps for Education states that no ads will be shown to enrolled students, there could still be a privacy problem, according to SafeGov.

“Based on our research, school and government customers of Google Apps are encouraged to add ‘non-core’ (ad-based) Google services such as search or YouTube, to the Google Apps for Education interface, which takes students from a purportedly ad-free environment to an ad-driven one,” the spokeswoman said.

“In at least one case we know of, it also requires the school to force students to accept the privacy policy before being able to continue using their accounts,” she said, adding that when this is done the user can click through to the ad-supported service without a warning that they will be profiled and tracked.

This issue was flagged by the French and Swedish DPAs, the spokeswoman said.

In September, the Swedish DPA ordered a school to stop using Google Apps or sign a reworked agreement with Google because the current terms of use lacked specifics on how personal data is being handled and didn’t comply with local data laws.

However, there are some initiatives that are encouraging, the spokeswoman said.

Microsoft’s Bing for Schools initiative, an ad-free, no cost version of its Bing search engine that can be used in public and private schools across the U.S., is one of them, she said. “This is one of the things SafeGov is trying to accomplish with the Codes of Conduct — taking out ad-serving features completely when providing cloud services in schools. This would remove the ad-profiling risk for students,” she said.

Microsoft and Google did not respond to a request for comment.

Source:  computerworld.com