Archive for the ‘Google’ Category

Google’s Dart language heads for standardization with new Ecma committee

Friday, December 13th, 2013

Ecma, the same organization that governs the standardization and development of JavaScript (or “EcmaScript” as it’s known in standardese), has created a committee to oversee the publication of a standard for Google’s alternative Web language, Dart.

Technical Committee 52 will develop standards for the Dart language and libraries, create test suites to verify conformance with the standards, and oversee Dart’s future development. Other technical committees within Ecma perform similar work for EcmaScript, C#, and the Eiffel language.

Google released version 1.0 of the Dart SDK last month and believes that the language is sufficiently stable and mature to be both used in a production capacity and put on the track toward creating a formal standard. The company asserts that this will be an important step toward embedding native Dart support within browsers.

Source:  arstechnica.com

Crackdown successfully reduces spam

Friday, December 6th, 2013

Efforts to put an end to e-mail phishing scams are working, thanks to the development of e-mail authentication standards, according to a pair of Google security researchers.

Internet industry and standards groups have been working since 2004 to get e-mail providers to use authentication to put a halt to e-mail address impersonation. The challenge was both in creating the standards that the e-mail’s sending and receiving domains would use, and getting domains to use them.

Elie Bursztein, Google’s anti-abuse research lead, and Vijay Eranti, Gmail’s anti-abuse technical lead, wrote that these standards — called DomainKeys Identified Mail (DKIM) and Sender Policy Framework (SPF) — are now in widespread use.

“91.4 percent of nonspam e-mails sent to Gmail users come from authenticated senders,” they said. By ensuring that the e-mail has been authenticated, the standards have made it easier to block the billions of annual spam and phishing attempts.

While social media gets all the buzz, the statistics they shared tell the story of the enormous use of e-mail and the challenges in preventing e-mail address fraud.

More than 3.5 million domains that are active on a weekly basis use the SPF standard when sending e-mail via SMTP servers, which accounts for 89.1 percent of e-mail sent to Gmail.

More than half a million e-mail sending and receiving domains that are active weekly have adopted the DKIM standard, which accounts for 76.9 percent of e-mails received by Gmail.

In addition, 74.7 percent of all incoming e-mail to Gmail accounts is authenticated using both the DKIM and SPF standards, and more than 80,000 domains use e-mail policies that allow Google to use the Domain-based Message Authentication, Reporting and Conformance (DMARC) standard to reject “hundreds of millions” of unauthenticated e-mails per week.

The pair cautioned domain owners to make sure that their DKIM cryptographic keys are at least 1024 bits, as opposed to the weaker 512-bit keys. They added that owners of domains that never send e-mail should use DMARC to create a policy that identifies the domain as a “non-sender.”
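
Both SPF and DMARC policies are published as DNS TXT records that receiving mail servers look up before accepting a message. As a rough illustration of how a domain owner might confirm what they are publishing, here is a minimal sketch; it assumes the third-party dnspython package and a hypothetical domain, neither of which comes from the article.

    import dns.resolver  # third-party dnspython package (an assumption; the article names no tools)

    def txt_records(name):
        """Return the TXT records published for a DNS name, or an empty list."""
        try:
            answers = dns.resolver.resolve(name, "TXT")
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            return []
        return [b"".join(r.strings).decode() for r in answers]

    domain = "example.com"  # hypothetical domain
    spf = [r for r in txt_records(domain) if r.startswith("v=spf1")]
    dmarc = [r for r in txt_records("_dmarc." + domain) if r.startswith("v=DMARC1")]
    print("SPF:", spf)
    print("DMARC:", dmarc)
    # A domain that never sends mail can publish a reject-everything policy such as
    # "v=DMARC1; p=reject" at _dmarc.<domain>, the "non-sender" advice described above.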

Google did not immediately respond to questions about the origins of the unauthenticated e-mails.

Source:  CNET

N.S.A. may have hit Internet companies at a weak spot

Tuesday, November 26th, 2013

The recent revelation that the National Security Agency was able to eavesdrop on the communications of Google and Yahoo users without breaking into either company’s data centers sounded like something pulled from a Robert Ludlum spy thriller.

How on earth, the companies asked, did the N.S.A. get their data without them knowing about it?

The most likely answer is a modern spin on a century-old eavesdropping tradition.

People knowledgeable about Google and Yahoo’s infrastructure say they believe that government spies bypassed the big Internet companies and hit them at a weak spot — the fiber-optic cables that connect data centers around the world that are owned by companies like Verizon Communications, the BT Group, the Vodafone Group and Level 3 Communications. In particular, fingers have been pointed at Level 3, the world’s largest so-called Internet backbone provider, whose cables are used by Google and Yahoo.

The Internet companies’ data centers are locked down with full-time security and state-of-the-art surveillance, including heat sensors and iris scanners. But between the data centers — on Level 3’s fiber-optic cables that connected those massive computer farms — information was unencrypted and an easier target for government intercept efforts, according to three people with knowledge of Google’s and Yahoo’s systems who spoke on the condition of anonymity.

It is impossible to say for certain how the N.S.A. managed to get Google and Yahoo’s data without the companies’ knowledge. But both companies, in response to concerns over those vulnerabilities, recently said they were now encrypting data that runs on the cables between their data centers. Microsoft is considering a similar move.

“Everyone was so focused on the N.S.A. secretly getting access to the front door that there was an assumption they weren’t going behind the companies’ backs and tapping data through the back door, too,” said Kevin Werbach, an associate professor at the Wharton School.

Data transmission lines have a long history of being tapped.

As far back as the days of the telegraph, spy agencies have located their operations in proximity to communications companies. Indeed, before the advent of the Internet, the N.S.A. and its predecessors for decades operated listening posts next to the long-distance lines of phone companies to monitor all international voice traffic.

Beginning in the 1960s, a spy operation code-named Echelon targeted the Soviet Union and its allies’ voice, fax and data traffic via satellite, microwave and fiber-optic cables.

In the 1990s, the emergence of the Internet both complicated the task of the intelligence agencies and presented powerful new spying opportunities based on the ability to process vast amounts of computer data.

In 2002, John M. Poindexter, former national security adviser under President Ronald Reagan, proposed the Total Information Awareness plan, an effort to scan the world’s electronic information — including phone calls, emails and financial and travel records. That effort was scrapped in 2003 after a public outcry over potential privacy violations.

The technologies Mr. Poindexter proposed are similar to what became reality years later in N.S.A. surveillance programs like Prism and Bullrun.

The Internet effectively mingled domestic and international communications, erasing the bright line that had been erected to protect against domestic surveillance. Although the Internet is designed to be a highly decentralized system, in practice a small group of backbone providers carry almost all of the network’s data.

The consequences of that centralization and its value for surveillance were revealed in 2006 by Mark Klein, an AT&T technician who described an N.S.A. listening post inside a room at an AT&T switching facility.

The agency was capturing a copy of all the data passing over the telecommunications links and then filtering it in AT&T facilities that housed systems that were able to filter data packets at high speed.

Documents taken by Edward J. Snowden and reported by The Washington Post indicate that, seven years after Mr. Klein first described the N.S.A.’s surveillance technologies, they have been refined and modernized.

“From Echelon to Total Information Awareness to Prism, all these programs have gone under different names, but in essence do the same thing,” said Chip Pitts, a law lecturer at Stanford University School of Law.

Based in the Denver suburbs, Level 3 is not a household name like Verizon or AT&T, but in terms of its ability to carry traffic, it is bigger than the other two carriers combined. Its networking equipment is found in 200 data centers in the United States, more than 100 centers in Europe and 14 in Latin America.

Level 3 did not directly respond to an inquiry about whether it had given the N.S.A., or the agency’s foreign intelligence partners, access to Google and Yahoo’s data. In a statement, Level 3 said: “It is our policy and our practice to comply with laws in every country where we operate, and to provide government agencies access to customer data only when we are compelled to do so by the laws in the country where the data is located.”

Also, in a financial filing, Level 3 noted that, “We are party to an agreement with the U.S. Departments of Homeland Security, Justice and Defense addressing the U.S. government’s national security and law enforcement concerns. This agreement imposes significant requirements on us related to information storage and management; traffic management; physical, logical and network security arrangements; personnel screening and training; and other matters.”

Security experts say that whether or not Level 3’s participation is voluntary, recent N.S.A. disclosures make clear that even when Internet giants like Google and Yahoo do not hand over data, the N.S.A. and its intelligence partners can simply gather their data downstream.

That much was true last summer when United States authorities first began tracking Mr. Snowden’s movements after he left Hawaii for Hong Kong with thousands of classified documents. In May, authorities contacted Ladar Levison, who ran Lavabit, Mr. Snowden’s email provider, to install a tap on Mr. Snowden’s email account. When Mr. Levison did not move quickly enough to facilitate the tap on Lavabit’s network, the Federal Bureau of Investigation did so without him.

Mr. Levison said it was unclear how that tap was installed, whether through Level 3, which sold bandwidth to Lavabit, or at the Dallas facility where his servers and networking equipment are stored. When Mr. Levison asked the facility’s manager about the tap, he was told the manager could not speak with him. A spokesman for TierPoint, which owns the Dallas facility, did not return a call seeking a comment.

Mr. Pitts said that while working as the chief legal officer at Nokia in the 1990s, he successfully fended off an effort by intelligence agencies to get backdoor access into Nokia’s computer networking equipment.

Nearly 20 years later, Verizon has said that it and other carriers are forced to comply with government requests in every country in which they operate, and are limited in what they can say about their arrangements.

“At the end of the day, if the Justice Department shows up at your door, you have to comply,” Lowell C. McAdam, Verizon’s chief executive, said in an interview in September. “We have gag orders on what we can say and can’t defend ourselves, but we were told they do this with every carrier.”

Source:  nytimes.com

Encrypt everything: Google’s answer to government surveillance is catching on

Thursday, November 21st, 2013

While Microsoft’s busy selling t-shirts and mugs about how Google’s “Scroogling” you, the search giant’s chairman is busy tackling a much bigger problem: How to keep your information secure in a world full of prying eyes and governments willing to drag in data by the bucket load. And according to Google’s Eric Schmidt, the answer is fairly straightforward.

“We can end government censorship in a decade,” Schmidt said Wednesday during a speech in Washington, according to Bloomberg. “The solution to government surveillance is to encrypt everything.”

Google’s certainly putting its SSL certificates where Schmidt’s mouth is, too: In the “Encrypt the Web” scorecard released by the Electronic Frontier Foundation earlier this month, Google was one of the few Internet giants to receive a perfect five out of five score for its encryption efforts. Even basic Google.com searches default to HTTPS encryption these days, and Google goes so far as to encrypt data traveling in-network between its data centers.
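
For readers who want to see that default encryption for themselves, the following minimal sketch (standard-library Python only; the host name is just an example and nothing here is specific to Google’s setup) opens a TLS connection and prints the negotiated protocol version and the certificate’s expiry date.

    import socket
    import ssl

    host = "www.google.com"  # example host; any HTTPS site works
    context = ssl.create_default_context()
    with socket.create_connection((host, 443), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()  # certificate details as a dict
            print("TLS version:", tls.version())
            print("Certificate expires:", cert["notAfter"])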

Spying eyes

That last move didn’t occur in a vacuum, however. Earlier this month, one of the latest Snowden revelations showed that the National Security Agency’s MUSCULAR program taps the links flowing between Google and Yahoo’s internal data centers.

“We have strengthened our systems remarkably as a result of the most recent events,” Schmidt said during the speech. “It’s reasonable to expect that the industry as a whole will continue to strengthen these systems.”

Indeed, Yahoo recently announced plans to encrypt, well, everything in the wake of the recent NSA surveillance revelations. Dropbox, Facebook, Sonic.net, and the SpiderOak cloud storage service also received flawless marks in the EFF’s report.

And the push for ubiquitous encryption recently gained an even more formidable proponent. The Internet Engineering Task Force group working on HTTP 2.0 announced last week that the next-gen version of the crucial protocol will only work with HTTPS-encrypted URLs.

Yes, all encryption, all the time could very well become the norm on the ‘Net before long. But while that will certainly raise the general level of security and privacy web-wide, don’t think for a minute that HTTPS is a silver bullet against pervasive government surveillance. Yet another Snowden-supplied document, released in September, revealed that the NSA spends more than $250 million a year on efforts to break online encryption techniques.

Source:  csoonline.com

Hackers compromise official PHP website, infect visitors with malware

Friday, October 25th, 2013

Maintainers of the open-source PHP programming language have locked down the php.net website after discovering two of its servers were hacked to host malicious code designed to surreptitiously install malware on visitors’ computers.

The compromise was discovered Thursday morning by Google’s safe browsing service, which helps the Chrome, Firefox, and Safari browsers automatically block sites that serve drive-by exploits. Traces of the malicious JavaScript code served to some php.net visitors were captured and posted to Hacker News and, in the form of a pcap file, to a Barracuda Networks blog post. The attacks started Tuesday and lasted through Thursday morning, PHP officials wrote in a statement posted late that evening.

Eventually, the site was moved to a new set of servers, PHP officials wrote in an earlier statement. There’s no evidence that any of the code they maintain has been altered, they added. Encrypted HTTPS access to php.net websites is temporarily unavailable until a new secure sockets layer certificate is issued and installed. The old certificate was revoked out of concern the intruders may have accessed the private encryption key. User passwords will be reset in the coming days. At the time of writing, there was no indication of any further compromise.

“The php.net systems team have audited every server operated by php.net, and have found that two servers were compromised: the server which hosted the www.php.net, static.php.net and git.php.net domains and was previously suspected based on the JavaScript malware, and the server hosting bugs.php.net,” Thursday night’s statement read. “The method by which these servers were compromised is unknown at this time.”

According to a security researcher at Kaspersky Lab, Thursday’s compromise caused some php.net visitors to download “Tepfer,” a trojan spawned by the Magnitude Exploit Kit. At the time of the php.net attacks, the malware was detected by only five of 47 antivirus programs. An analysis of the pcap file suggests the malware attack worked by exploiting a vulnerability in Adobe Flash, although it’s possible that some victims were targeted by attacks that exploited Java, Internet Explorer, or other applications, Martijn Grooten, a security researcher for Virus Bulletin, told Ars.

Grooten said the malicious JavaScript was served from a file known as userprefs.js hosted directly on one of the php.net servers. While the userprefs.js code was served to all visitors, only some of those people received an additional payload that contained malicious iframe tags. The HTML code caused visitors’ browsers to connect to a series of third-party websites and eventually download malicious code. At least some of the sites the malicious iframes were pointing to were UK domains such as nkhere.reviewhdtv.co.uk, which appeared to have their domain name system server settings compromised so they resolved to IP addresses located in Moldova.

“Given what Hacker News reported (a site serving malicious JS to some), this doesn’t look like someone manually changing the file,” Grooten said, calling into question an account php.net officials gave in their initial brief statement posted to the site. The attackers “somehow compromised the Web server. It might be that php.net has yet to discover that (it’s not trivial—some webserver malware runs entirely in memory and hides itself pretty well.)”

Ars has covered several varieties of malware that target webservers and are extremely hard to detect.

In an e-mail, PHP maintainer Adam Harvey said PHP officials first learned of the attacks at 6:15am UTC. By 8:00 UTC, they had provisioned a new server. In the interim, some visitors may have been exposed.

“We have no numbers on the number of visitors affected, due to the transient nature of the malicious JS,” Harvey wrote. “As the news post on php.net said, it was only visible intermittently due to interactions with an rsync job that refreshed the code from the Git repository that houses www.php.net. The investigation is ongoing. Right now we have nothing specific to share, but a full post mortem will be posted on php.net once the dust has settled.”

Source:  arstechnica.com

Google unveils an anti-DDoS platform for human rights organizations and media, but will it work?

Tuesday, October 22nd, 2013

Project Shield uses company’s infrastructure to absorb attacks

On Monday, Google announced a beta service that will offer DDoS protection to human rights organizations and media, in an effort to reduce the censorship that such attacks cause.

The announcement of Project Shield, the name given to the anti-DDoS platform, came during a presentation in New York at the Conflict in a Connected World summit. The gathering brought together security experts, hacktivists, dissidents, and technologists to explore the nature of conflict and how online tools can be both a source of protection and a source of harm when it comes to expression and information sharing.

“As long as people have expressed ideas, others have tried to silence them. Today one out of every three people lives in a society that is severely censored. Online barriers can include everything from filters that block content to targeted attacks designed to take down websites. For many people, these obstacles are more than an inconvenience — they represent full-scale repression,” the company explained in a blog post.

Project Shield uses Google’s massive infrastructure to absorb DDoS attacks. Enrollment in the service is invite-only at the moment, but it could be expanded considerably in the future. The service is free, but will follow PageSpeed Service pricing should Google open enrollment and charge for it down the line.

However, while the service is sure to help smaller websites, such as those run by dissidents exposing corrupt regimes, or media speaking out against those in power, Google makes no promises.

“No guarantees are made in regards to uptime or protection levels. Google has designed its infrastructure to defend itself from quite large attacks and this initiative is aimed at providing a similar level of protection to third-party websites,” the company explains in a Project Shield outline.

One problem Project Shield may inadvertently create is a change in tactics. If the common forms of DDoS attacks are blocked, then more advanced forms of attack will be used. Such an escalation has already happened for high value targets, such as banks and other financial services websites.

“Using Google’s infrastructure to absorb DDoS attacks is structurally like using a CDN (Content Delivery Network) and has the same pros and cons,” Shuman Ghosemajumder, VP of strategy at Shape Security, told CSO during an interview.

The types of attacks a CDN would solve, he explained, are network-based DoS and DDoS attacks. These are the most common, and the most well-known attack types, as they’ve been around the longest.

In 2000, flood attacks were in the 400Mb/sec range, but today’s attacks scale to regularly exceed 100Gb/sec, according to anti-DDoS vendor Arbor Networks. In 2010, Arbor started to see a trend led by attackers who were advancing DDoS campaigns, by developing new tactics, tools, and targets. What that has led to is a threat that mixes flood, application and infrastructure attacks in a single, blended attack.

“It is unclear how effective [Project Shield] would be against Application Layer DoS attacks, where web servers are flooded with HTTP requests. These represent more leveraged DoS attacks, requiring less infrastructure on the part of the attacker, but are still fairly simplistic. If the DDoS protection provided operates at the application layer, then it could help,” Ghosemajumder said.

“What it would not protect against is Advanced Denial of Service attacks, where the attacker uses knowledge of the application to directly attack the origin server, databases, and other backend systems which cannot be protected against by a CDN and similar means.”

Google hasn’t directly mentioned the number of sites currently being protected by Project Shield, so there is no way to measure the effectiveness of the program from the outside.

In related news, Google also released a second DDoS-related tool on Monday, made possible by data collected by Arbor Networks. The Digital Attack Map, as the tool is called, is a monitoring system that allows users to see historical DDoS attack trends and connect them to related news events on any given day. The data is also shown live, and can be granularly sorted by location, time, and attack type.

Source:  csoonline.com

Schools’ use of cloud services puts student privacy at risk

Tuesday, September 24th, 2013

Vendors should promise not to use targeted advertising and behavioral profiling, SafeGov said

Schools that compel students to use commercial cloud services for email and documents are putting privacy at risk, says a campaign group calling for strict controls on the use of such services in education.

A core problem is that cloud providers force schools to accept policies that authorize user profiling and online behavioral advertising. Some cloud privacy policies stipulate that students are also bound by these policies, even when they have not had the opportunity to grant or withhold their consent, said privacy campaign group SafeGov.org in a report released on Monday.

There is also the risk of commercial data mining. “When school cloud services derive from ad-supported consumer services that rely on powerful user profiling and tracking algorithms, it may be technically difficult for the cloud provider to turn off these functions even when ads are not being served,” the report said.

Furthermore, when providers fail to create interfaces that distinguish between ad-free and ad-supported versions, students may be lured from the ad-free services intended for school use to consumer ad-driven services that engage in highly intrusive processing of personal information, according to the report. This could be the case with email, online video, networking and basic search.

Also, contracts used by cloud providers don’t guarantee ad-free services because they are ambiguously worded and include the option to serve ads, the report said.

SafeGov has sought support from European Data Protection Authorities (DPAs), some of which endorsed the use of codes of conduct establishing rules to which schools and cloud providers could voluntarily agree. Such codes should include a binding pledge to ban targeted advertising in schools as well as the processing or secondary use of data for advertising purposes, SafeGov recommended.

“We think any provider of cloud computing services to schools (Google Apps and Microsoft 365 included) should sign up to follow the Codes of Conduct outlined in the report,” said a SafeGov spokeswoman in an email.

Even when ad serving is disabled the privacy of students may still be jeopardized, the report said.

For example, while Google’s policy for Google Apps for Education states that no ads will be shown to enrolled students, there could still be a privacy problem, according to SafeGov.

“Based on our research, school and government customers of Google Apps are encouraged to add ‘non-core’ (ad-based) Google services such as search or YouTube, to the Google Apps for Education interface, which takes students from a purportedly ad-free environment to an ad-driven one,” the spokeswoman said.

“In at least one case we know of, it also requires the school to force students to accept the privacy policy before being able to continue using their accounts,” she said, adding that when this is done the user can click through to the ad-supported service without a warning that they will be profiled and tracked.

This issue was flagged by the French and Swedish DPAs, the spokeswoman said.

In September, the Swedish DPA ordered a school to stop using Google Apps or sign a reworked agreement with Google because the current terms of use lacked specifics on how personal data is being handled and didn’t comply with local data laws.

However, there are some initiatives that are encouraging, the spokeswoman said.

Microsoft’s Bing for Schools initiative, an ad-free, no cost version of its Bing search engine that can be used in public and private schools across the U.S., is one of them, she said. “This is one of the things SafeGov is trying to accomplish with the Codes of Conduct — taking out ad-serving features completely when providing cloud services in schools. This would remove the ad-profiling risk for students,” she said.

Microsoft and Google did not respond to a request for comment.

Source:  computerworld.com

iOS and Android weaknesses allow stealthy pilfering of website credentials

Thursday, August 29th, 2013

Computer scientists have uncovered architectural weaknesses in both the iOS and Android mobile operating systems that make it possible for hackers to steal sensitive user data and login credentials for popular e-mail and storage services.

Both OSes fail to ensure that browser cookies, document files, and other sensitive content from one Internet domain are off-limits to scripts controlled by a second address without explicit permission, according to a just-published academic paper from scientists at Microsoft Research and Indiana University. The so-called same-origin policy is a fundamental security mechanism enforced by desktop browsers, but the protection is woefully missing from many iOS and Android apps. To demonstrate the threat, the researchers devised several hacks that carry out so-called cross-site scripting (XSS) and cross-site request forgery (CSRF) attacks to surreptitiously download user data from handsets.

The most serious of the attacks worked on both iOS and Android devices and required only that an end-user click on a booby-trapped link in the official Google Plus app. Behind the scenes, a script sent instructions that caused a text-editing app known as PlainText to send documents and text input to a Dropbox account controlled by the researchers. The attack worked against other apps, including TopNotes and Nocs.

“The problem here is that iOS and Android do not have this origin-based protection to regulate the interactions between those apps and between an app and another app’s Web content,” XiaoFeng Wang, a professor in Indiana University’s School of Informatics and Computing, told Ars. “As a result, we show that origins can be crossed and the same XSS and CSRF can happen.” The paper, titled Unauthorized Origin Crossing on Mobile Platforms: Threats and Mitigation, was recently accepted by the 20th ACM Conference on Computer and Communications Security.

All your credentials belong to us

The PlainText app in this demonstration video was not configured to work with Dropbox. But even if the app had been set up to connect to the storage service, the attack could make it connect to the attacker’s account rather than the legitimate account belonging to the user, Wang said. All that was required was for the iPad user to click on the malicious link in the Google Plus app. In the researchers’ experiments, Android devices were susceptible to the same attack.

A separate series of attacks were able to retrieve the multi-character security tokens Android apps use to access private accounts on Facebook and Dropbox. Once the credentials are exposed, attackers could use them to download photos, documents, or other sensitive files stored in the online services. The attack, which relied on a malicious app already installed on the handset, exploited the lack of same-origin policy enforcement to bypass Android’s “sandbox” security protection. Google developers explicitly designed the mechanism to prevent one app from being able to access browser cookies, contacts, and other sensitive content created by another app unless a user overrides the restriction.

All attacks described in the 12-page paper have been confirmed by Dropbox, Facebook, and the other third-party websites whose apps were tested, Wang said. Most of the vulnerabilities have been fixed, but in many cases the patches were extremely hard to develop and took months to implement. The scientists went on to create a proof-of-concept app they called Morbs that provides OS-level protection across all apps on an Android device. It works by labeling each message with information about its origin and could make it easier for developers to specify and enforce security policies based on the sites where security tokens and other sensitive information originate.

As mentioned earlier, desktop browsers have long steadfastly enforced a same-origin policy that makes it impossible for JavaScript and other code from a domain like evilhacker.com to access cookies or other sensitive content from a site like trustedbank.com. In the world of mobile apps, the central role of the browser—and the gate-keeper service it provided—has largely come undone. It’s encouraging to know that the developers of the vulnerable apps took this research so seriously. Facebook awarded the researchers at least $7,000 in bounties (which the researchers donated to charity), and Dropbox offered valuable premium services in exchange for the private vulnerability report. But depending on a patchwork of fixes from each app maker is problematic given the difficulty and time involved in coming up with patches.
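
The same-origin rule itself is simple: two URLs belong to the same origin only when scheme, host, and port all match. The sketch below is an illustrative rendering of that comparison using the article’s example domains, not anything a browser or either mobile OS actually exposes.

    from urllib.parse import urlsplit

    def origin(url):
        """Reduce a URL to its (scheme, host, port) origin tuple."""
        parts = urlsplit(url)
        default = {"http": 80, "https": 443}.get(parts.scheme)
        return (parts.scheme, parts.hostname, parts.port or default)

    def same_origin(url_a, url_b):
        """Content from url_a may touch url_b's cookies only if the origins match."""
        return origin(url_a) == origin(url_b)

    print(same_origin("https://trustedbank.com/account", "https://trustedbank.com/login"))  # True
    print(same_origin("https://trustedbank.com/account", "https://evilhacker.com/steal"))   # False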

A better approach is for Apple and Google developers to implement something like Morbs that works across the board.

“Our research shows that in the absence of such protection, the mobile channels can be easily abused to gain unauthorized access to a user’s sensitive resources,” the researchers—who besides Wang, included Rui Wang and Shuo Chen of Microsoft and Luyi Xing of Indiana University—wrote. “We found five cross-origin issues in popular [software development kits] and high-profile apps such as Facebook and Dropbox, which can be exploited to steal their users’ authentication credentials and other confidential information such as ‘text’ input. Moreover, without the OS support for origin-based protection, not only is app development shown to be prone to such cross-origin flaws, but the developer may also have trouble fixing the flaws even after they are discovered.”

Source:  arstechnica.com

Mozilla may reject long-lived digital certificates after similar move by Google

Friday, August 23rd, 2013

Starting in early 2014 Google Chrome will block certificates issued after July 1, 2012, with a validity period of more than 60 months

Mozilla is considering the possibility of rejecting as invalid SSL certificates issued after July 1, 2012, with a validity period of more than 60 months. Google already made the decision to block such certificates in Chrome starting early next year.

“As a result of further analysis of available, publicly discoverable certificates, as well as the vibrant discussion among the CA/B Forum [Certificate Authority/Browser Forum] membership, we have decided to implement further programmatic checks in Google Chrome and the Chromium Browser in order to ensure Baseline Requirements compliance,” Ryan Sleevi, a member of the Google Chrome Team, said Monday in a message to the CA/B Forum mailing list.

The checks will be added to the development and beta releases of Google Chrome at the beginning of 2014. The changes are expected in the stable release of Chrome during the first quarter of next year, Sleevi said.

The Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, sometimes simply referred to as the Baseline Requirements, is a set of guidelines agreed upon by all certificate authorities (CAs) and browser vendors that are members of the CA/B Forum.

Version 1.0 of the Baseline Requirements went into effect on July 1, 2012, and states that “Certificates issued after the Effective Date MUST have a Validity Period no greater than 60 months.” It also says that certificates to be issued after April 1, 2015, will need to have a validity period no greater than 39 months, but there are some clearly defined exceptions to this requirement.

The shortening of certificate validity period is a proactive measure that would allow for a timely implementation of changes made to the requirements in the future. It would be hard for future requirements, especially those with a security impact, to have a practical effect if older certificates that aren’t compliant with them would remain valid for 10 more years.

Google identified 2,038 certificates that were issued after July 1, 2012, and have validity periods longer than 60 months, in violation of the current Baseline Requirements.
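
Checking any individual certificate against that rule is straightforward. The sketch below is one way to do it; it assumes the third-party Python "cryptography" package and approximates months from calendar days, neither of which reflects how Google or Mozilla actually perform the check.

    from datetime import datetime
    from cryptography import x509  # third-party "cryptography" package (an assumption)

    BASELINE_EFFECTIVE = datetime(2012, 7, 1)
    MAX_MONTHS = 60  # drops to 39 months for certificates issued after April 1, 2015

    def exceeds_baseline(pem_bytes):
        """True if the cert was issued after July 1, 2012 with more than 60 months of validity."""
        cert = x509.load_pem_x509_certificate(pem_bytes)
        issued, expires = cert.not_valid_before, cert.not_valid_after
        months = (expires - issued).days / 30.44  # rough average month length
        return issued >= BASELINE_EFFECTIVE and months > MAX_MONTHS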

“We encourage CAs that have engaged in this unfortunate practice, which appears to be a very limited subset of CAs, to reach out to affected customers and inform them of the upcoming changes,” Sleevi said referring to the fact that Chrome will start blocking those certificates in the beginning of 2014.

On Thursday, a discussion was started on the Mozilla bug tracker on whether the company should enforce a similar block in its products.

“Everyone agrees such certs, when newly issued, are incompatible with the Baseline Requirements,” said Gervase Markham, who deals with issues of project governance at Mozilla, on the bug tracker. “Some CAs have argued that when reissued, this is not so, but Google does not agree with them. We should consider making the same change.”

Daniel Veditz, the security lead at Mozilla, said that he sees why CAs might have a problem with this from a business and legal standpoint. If a CA already sold a “product” — in this case a certificate — in the past with certain terms and would later violate those terms by deciding to reduce the certificate’s validity period, it might be in hot water, he said.

“Although it does seem as if reissuing as a 60-month cert with the promise to reissue with the balance later ought to be satisfactory,” Veditz said.

Markham agreed. “No one is asking CAs to not give customers what they’ve paid for in terms of duration; it will just need to be 2 (or more) separate certs,” he said. “I agree that changing certs once every 5 years rather than every 10 might be a minor inconvenience for customers who use the same web server hardware and software for more than 5 years, but I’m not sure how large a group that is.”

Mozilla’s PR firm in the U.K. could not immediately provide a statement from the company regarding this issue.

Source:  csoonline.com

Amazon is said to have tested a wireless network

Friday, August 23rd, 2013

Amazon.com Inc. (AMZN) has tested a new wireless network that would allow customers to connect its devices to the Internet, according to people with knowledge of the matter.

The wireless network, which was tested in Cupertino, California, used spectrum controlled by satellite communications company Globalstar Inc. (GSAT), said the people who asked not to be identified because the test was private.

The trial underlines how Amazon, the world’s largest e-commerce company, is moving beyond being a Web destination and hardware maker and digging deeper into the underlying technology for how people connect to the Internet. That would let Amazon create a more comprehensive user experience, encompassing how consumers get online, what device they use to connect to the Web and what they do on the Internet.

Leslie Letts, a spokeswoman for Amazon, didn’t respond to a request for comment. Katherine LeBlanc, a spokeswoman for Globalstar, declined to comment.

Amazon isn’t the only Internet company that has tested technology allowing it to be a Web gateway. Google Inc. (GOOG) has secured its own communications capabilities by bidding for wireless spectrum and building high-speed, fiber-based broadband networks in 17 cities, including Austin, Texas and Kansas City, Kansas. It also operates a Wi-Fi network in Mountain View, California, and recently agreed to provide wireless connectivity at Starbucks Corp. (SBUX)’s coffee shops.

Always Trying

Amazon continually tries various technologies, and it’s unclear if the wireless network testing is still taking place, said the people. The trial was in the vicinity of Amazon’s Lab126 research facilities in Cupertino, the people said. Lab126 designs and engineers Kindle devices.

“Given that Amazon’s becoming a big player in video, they could look into investing into forms of connectivity,” independent wireless analyst Chetan Sharma said in an interview.

Amazon has been moving deeper into wireless services for several years, as it competes with tablet makers like Apple Inc. (AAPL) and with Google, which runs a rival application store. Amazon’s Kindle tablets and e-book readers have built-in wireless connectivity, and the company sells apps for mobile devices. Amazon had also worked on its own smartphone, Bloomberg reported last year.

Chief Executive Officer Jeff Bezos is aiming to make Amazon a one-stop shop for consumers online, a strategy that spurred a 27 percent increase in sales to $61.1 billion last year. It’s an approach investors have bought into, shown in Amazon’s stock price, which has more than doubled in the past three years.

Globalstar’s Spectrum

Globalstar is seeking regulatory approval to convert about 80 percent of its spectrum to terrestrial use. The Milpitas, California-based company applied to the Federal Communications Commission for permission to convert its satellite spectrum to provide Wi-Fi-like services in November 2012.

Globalstar met with FCC Chairwoman Mignon Clyburn in June, and a decision on whether the company can convert the spectrum could come within months. A company technical adviser conducted tests that showed the spectrum may be able to accommodate more traffic and offer faster speeds than traditional public Wi-Fi networks.

“We are now well positioned in the ongoing process with the FCC as we seek terrestrial authority for our spectrum,” Globalstar CEO James Monroe said during the company’s last earnings call.

Neil Grace, a spokesman for the FCC, declined to comment.

If granted FCC approval, Globalstar is considering leasing its spectrum, sharing service revenues with partners, and other business models, one of the people said. With wireless spectrum scarce, Globalstar’s converted spectrum could be of interest to carriers and cable companies, seeking to offload ballooning mobile traffic, as well as to technology companies.

The FCC issued the permit to trial wireless equipment using Globalstar’s spectrum to the satellite service provider’s technical adviser, Jarvinian Wireless Innovation Fund. In a letter to the FCC dated July 1, Jarvinian managing director John Dooley said his company is helping “a major technology company assess the significant performance benefits” of Globalstar’s spectrum.

Source:  bloomberg.com

Google’s “Project Loon” flying Internet coming to homes in California

Tuesday, August 20th, 2013

Google is asking residents of Central Valley in California to take part in a beta test of Project Loon, the company’s ambitious plan to deliver Internet access from balloons.

“Project Loon is looking for folks in the area who are willing to have a Loon Internet antenna installed on their house or small business building to help test the strength of the Loon Internet connection,” Google said on the project’s Google+ page. “When balloons fly overhead, the Loon Internet antennas will generate traffic that will load-test our service.”

Interested residents in Madera, Chowchilla, Mariposa, Merced, or Turlock can fill out a survey for the chance to participate. The tests will begin in August and run through the end of the year, Google said.

“You’ll be asked to help us load test the system during our research flights in the coming months,” Google said. “Load testing involves putting demand on the system and then measuring its response. During these research flights, you will be invited to install a specialized Internet antenna on your home or business building which will help us learn how to deliver reliable, fast Internet connectivity to as many people as possible.”

Google unveiled Project Loon in June, conducting the first publicly acknowledged tests in New Zealand. Google’s balloons have been flying for some time, though. Last October, residents of Kentucky noticed one of the balloons and called it a UFO, alerting police and attracting the attention of the Huffington Post with a YouTube video. Google didn’t acknowledge the test at the time, but it recently admitted that Kentucky residents had seen one of its Project Loon balloons.

The system is still just experimental, but Google believes it will eventually bring Internet access to many parts of the world where people have little or no connectivity. The balloons are sent into the stratosphere and fly untethered, but Google says it uses “complex algorithms” along with wind and solar power to control their movement. The balloons form a mesh network 20 kilometers above the ground, with each balloon communicating with its neighbors and ultimately to ground stations connected to Internet providers.

Source:  arstechnica.com

Mobile malware, mainly aimed at Android devices, jumps 614% in a year

Friday, July 12th, 2013

The threat to corporate data continues to grow as Android devices come under attack

The number of mobile malware apps has jumped 614% in the last year, according to studies conducted by McAfee and Juniper Networks.

The Juniper study — its third annual Mobile Threats Report — showed that the majority of attacks are directed at Android devices, as the Android market continues to grow. Malware aimed specifically at Android devices has increased at a staggering rate since 2010, growing from 24% of all mobile malware that year to 92% by March 2013.

According to data from Juniper’s Mobile Threat Center (MTC) research facility, the number of malicious mobile apps jumped 614% in the last year to 276,259, which demonstrates “an exponentially higher cyber criminal interest in exploiting mobile devices.”

“Malware writers are increasingly behaving like profit-motivated businesses when designing new attacks and malware distribution strategies,” Juniper said in a statement. “Attackers are maximizing their return on investment by focusing 92% of all MTC detected threats at Android, which has a commanding share of the global smartphone market.”

In addition to malicious apps, Juniper Networks found several legitimate free applications that could allow corporate data to leak out. The study found that free mobile apps sampled by the MTC are three times more likely to track location and 2.5 times more likely to access user address books than their paid counterparts. Free applications requesting/gaining access to account information nearly doubled from 5.9% in October 2012 to 10.5% in May 2013.

McAfee’s study found that a type of SMS malware known as a Fake Installer can be used to charge a typical premium rate of $4 per message once installed on a mobile device. A “free” Fake Installer app can cost up to $28 since each one can tell a consumer’s device to send or receive up to seven messages from a premium rate SMS number.

Seventy-three percent of all known malware involves Fake Installers, according to the report.

“These threats trick people into sending SMS messages to premium-rate numbers set up by attackers,” the report states. “Based on research by the MTC, each successful attack instance can yield approximately $10 in immediate profit. The MTC also found that more sophisticated attackers are developing intricate botnets and targeted attacks capable of disrupting and accessing high-value data on corporate networks.”

Juniper’s report identified more than 500 third-party Android application stores worldwide, most with very low levels of accountability or oversight, that are known to host mobile malware — preying on unsuspecting mobile users as well as those with jail-broken iOS mobile devices. Of the malicious third-party stores identified by the MTC, 60% originate from either China or Russia.

According to market research firm ComScore, Android now has a 52.4% market share worldwide, up 0.7% from February. As Samsung has been taking market share from Apple, Android use is expected to continue to grow, according to ComScore.

According to market analyst firm Canalys, Android represented almost 60% of the mobile devices shipped in 2012. Apple accounted for 19.3% of devices shipped last year, while Microsoft had 18.1%.

Source:  computerworld.com

Google: Critical Android security flaw won’t harm most users

Tuesday, July 9th, 2013

A security flaw could affect 99 percent of Android devices, a researcher claims, but the reality is that most Android users have very little to worry about.

Bluebox, a mobile security firm, billed the exploit as a “Master Key” that could “turn any legitimate application into a malicious Trojan, completely unnoticed by the app store, the phone, or the end user.” In a blog post last week, Bluebox CTO Jeff Forristal wrote that nearly any Android phone released in the last four years is vulnerable.

Bluebox’s claims led to a fair number of scary-sounding headlines, but as Google points out, most Android users are already safe from this security flaw.

Speaking to ZDNet, Google spokeswoman Gina Scigliano said that all apps submitted to the Google Play Store get scanned for the exploit. So far, no apps have even tried to take advantage of the exploit, and they’d be shut out from the store if they did.

If the attack can’t come from apps in the Google Play Store, how could it possibly get onto Android phones? As Forristal explained to Computerworld last week, the exploit could come from third-party app stores, e-mailed attachments, website downloads and direct transfer via USB.

But as any Android enthusiast knows, Android phones can’t install apps through those methods unless the user provides explicit permission through the phone’s settings menu. The option to install apps from outside sources is disabled by default. Even if the option is enabled, phones running Android 4.2 or higher have yet another layer of protection through app verification, which checks non-Google Play apps for malicious code. This verification is enabled by default.

In other words, to actually be vulnerable to this “Master Key,” you must enable the installation of apps from outside Google Play, disable Android’s built-in scanning and somehow stumble upon an app that takes advantage of the exploit. At that point, you must still knowingly go through the installation process yourself. When you consider how many people might go through all those steps, it’s a lot less than 99 percent of users.

Still, just to be safe, Google has released a patch for the vulnerability, which phone makers can apply in future software updates. Scigliano said Samsung is already pushing the fix to devices, along with other unspecified OEMs. The popular CyanogenMod enthusiast build has also been patched to protect against the peril.

Android’s fragmentation problem does mean that many users won’t get this patch in a timely manner, if at all, but it doesn’t mean that unpatched users are at risk.

None of this invalidates the work that Bluebox has done. Malicious apps have snuck into Google’s app store before, so the fact that a security firm uncovered the exploit first and disclosed it to Google is a good thing. But there’s a big difference between a potential security issue and one that actually affects huge swaths of users. Frightening headlines aside, this flaw is an example of the former.

Source:  techhive.com

‘Master key’ to Android phones uncovered

Friday, July 5th, 2013

A “master key” that could give cyber-thieves unfettered access to almost any Android phone has been discovered by security research firm BlueBox.

The bug could be exploited to let an attacker do what they want to a phone including stealing data, eavesdropping or using it to send junk messages.

The loophole has been present in every version of the Android operating system released since 2009.

Google said it currently had no comment to make on BlueBox’s discovery.

Writing on the BlueBox blog, Jeff Forristal said the implications of the discovery were “huge”.

The bug emerges because of the way Android handles cryptographic verification of the programs installed on the phone.

Android uses the cryptographic signature as a way to check that an app or program is legitimate and to ensure it has not been tampered with. Mr Forristal and his colleagues have found a method of tricking the way Android checks these signatures so malicious changes to apps go unnoticed.
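
For reference, ordinary detached-signature verification looks something like the sketch below. It uses RSA and the third-party Python "cryptography" package purely as an illustration of the concept; it is not Android’s actual APK signing format. The bug BlueBox describes lets modified content slip past a check of this general kind.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    def verify_package(package_bytes, signature, public_key_pem):
        """Return True only if the signature matches the package contents."""
        public_key = serialization.load_pem_public_key(public_key_pem)
        try:
            public_key.verify(signature, package_bytes, padding.PKCS1v15(), hashes.SHA256())
            return True
        except InvalidSignature:
            # Any tampering with package_bytes should land here.
            return False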

Any app or program written to exploit the bug would enjoy the same access to a phone that the legitimate version of that application enjoyed.

“It can essentially take over the normal functioning of the phone and control any function thereof,” wrote Mr Forristal. BlueBox reported finding the bug to Google in February. Mr Forristal is planning to reveal more information about the problem at the Black Hat hacker conference being held in August this year.

Marc Rogers, principal security researcher at mobile security firm Lookout, said it had replicated the attack and its ability to compromise Android apps.

Mr Rogers added that Google had been informed about the bug by Mr Forristal and had added checking systems to its Play store to spot and stop apps that had been tampered with in this way.

The danger from the loophole remains theoretical because, as yet, there is no evidence that it is being exploited by cyber-thieves.

Source:  BBC

FCC approves Google’s ‘white space’ database operation

Sunday, June 30th, 2013


The database will allow unlicensed TV broadcast spectrum to be used for wireless broadband.

The Federal Communications Commission has approved Google’s plan to operate a database that would allow unlicensed TV broadcast spectrum to be used for wireless broadband and shared among many users.

Google, which was granted commission approval on Friday, is the latest company to complete the FCC’s 45-day testing phase. Spectrum Bridge and Telcordia have completed their trials, and another 10 companies, including Microsoft, are working on similar databases. The new database will keep track of the TV broadcast frequencies in use so that wireless broadband devices can take advantage of the unlicensed space on the spectrum, also called “white space.”

In the U.S., the FCC has been working to free up spectrum for wireless carriers, which complain they lack adequate available spectrum to keep up with market demand for data services. The FCC approved new rules in 2010 for using unlicensed white space that included establishing databases to track clear frequencies and ensure that devices do not interfere with existing broadcast TV license holders. The databases contain information supplied by the FCC.

However, TV broadcasters have resisted the idea of unlicensed use, worried that allowing others to use white space, which is very close to the frequencies they occupy, could cause interference. What Google and others developing this database technology hope to show is that it is possible to share white space without creating interference.
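
Conceptually, such a database is a lookup that steers unlicensed devices away from occupied frequencies. The toy sketch below is only meant to convey the idea; the channel numbers and location names are invented and have nothing to do with the real FCC-supplied data or the certified databases’ interfaces.

    # Hypothetical occupancy data; real databases are populated from FCC records.
    OCCUPIED = {
        "kansas_city": {7, 9, 13, 29, 41},
        "mountain_view": {2, 11, 36},
    }
    ALL_CHANNELS = set(range(2, 52))  # simplified U.S. TV channel range

    def free_channels(location):
        """Channels a white-space device could use at this location without interfering."""
        return sorted(ALL_CHANNELS - OCCUPIED.get(location, set()))

    print(free_channels("kansas_city")[:5])  # first few unoccupied channels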

The Web giant announced in March that it had launched a trial program that would tap white spaces to provide wireless broadband to 10 rural schools in South Africa.

Source:  CNET

Look out Google Fiber, $35-a-month gigabit Internet comes to Vermont

Tuesday, April 30th, 2013

Heads up Google Fiber: A rural Vermont telephone company might just have your $70 gigabit Internet offer beat.

Vermont Telephone Co. (VTel), whose footprint covers 17,500 homes in the Green Mountain State, has begun to offer gigabit Internet speeds for $35 a month, using a brand new fiber network. So far about 600 Vermont homes have subscribed.

VTel’s Chief Executive Michel Guite says he’s made it a personal mission to upgrade the company’s legacy phone network, which dates back to 1890, with fiber for the broadband age. The company was able to afford the upgrades largely by winning federal stimulus awards set aside for broadband. Using $94 million in stimulus money, VTel has invested in stringing 1,200 miles of fiber across a number of rural Vermont counties over the past year. Mr. Guite says the gigabit service should be available across VTel’s footprint in coming months.

VTel joins an increasing number of rural telephone companies who, having lost DSL share to cable Internet over the years, are reinvesting in fiber-to-the-home networks.

The Wall Street Journal reported earlier this year that more than 700 rural telephone companies have made this switch, according to the Fiber to the Home Council, a trade group, and Calix Inc., a company that sells broadband equipment to cable and fiber operators. That comes as Google’s Fiber project, which began in Kansas City and is now extending to cities in Utah and Texas, has raised the profile of gigabit broadband and has captured the fancy of many city governments around the country.

“Google has really given us more encouragement,” Mr. Guite said. He was denied federal money for his upgrades the first time he applied, he said, but won it the second time around, after Google had announced plans to build out Fiber.

Incumbent cable operators have largely downplayed the relevance of Google’s project, saying that it’s little more than a publicity stunt. They have also questioned whether residential customers even have a need for such speeds.

Mr. Guite says it remains to be seen whether what VTel is doing is a “sustainable model.” He admits that VTel has hard work ahead of it to educate customers about the uses of gigabit speeds. Much like Google Fiber in Kansas City, VTel has been holding public meetings in libraries and even one-on-one meetings with elderly folks to help them understand what gigabit Internet means, Mr. Guite said.

Source:  WSJ

Microsoft rolls out standards-compliant two-factor authentication

Thursday, April 18th, 2013

Microsoft today announced that it is rolling out optional two-factor authentication to the 700 million or so Microsoft Account users, confirming last week’s rumors. The scheme will become available to all users “in the next few days.”

It works essentially identically to existing schemes already available for Google accounts. Two-factor authentication augments a password with a one-time code that’s delivered either by text message or generated in an authentication app.

Computers that you trust will be allowed to skip the two factors and just use a password, and application-specific passwords can be generated for interoperating with software that doesn’t support two-factor authentication.

Microsoft has its own authentication app for Windows Phone. It isn’t offering apps for iOS or Android—however, it doesn’t need to. The system it’s using is standard, specified in RFC 6238, and Google uses the same system. As a result, Google’s own Authenticator app for Android can be used to authenticate Microsoft Accounts. And vice versa: Microsoft’s Authenticator app for Windows Phone works with Google accounts.
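
Because both companies follow RFC 6238, the one-time codes themselves can be generated in a few lines of standard-library Python. The sketch below uses the common defaults (HMAC-SHA1, 30-second time steps, 6 digits) and a made-up base32 secret of the kind an authenticator app displays during setup; it is an illustration, not either company’s implementation.

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32, interval=30, digits=6):
        """RFC 6238 time-based one-time password with the usual defaults."""
        key = base64.b32decode(secret_b32.upper())
        counter = int(time.time()) // interval            # RFC 4226 counter derived from the clock
        digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                        # dynamic truncation
        code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
        return str(code).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # hypothetical secret, not a real account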

Source:  arstechnica.com

‘World’s fastest’ home internet service hits Japan with Sony’s help, 2Gbps down

Tuesday, April 16th, 2013

Google Fiber might be making waves with its 1Gbps speeds, but it’s no match for what’s being hailed as the world’s fastest commercially-provided home internet service: Nuro.

Launched in Japan yesterday by Sony-supported ISP So-net, the fiber connection pulls down data at 2Gbps, and sends it up at 1Gbps.  An optical network unit (ONU) given to Nuro customers comes outfitted with three Gigabit ethernet ports and supports 450Mbps over 802.11 a/b/g/n.

When hitched to a two-year contract, web surfers will be set back 4,980 yen ($51) per month and must pony up a required 52,500 yen (roughly $540) installation fee, which is currently being waived for folks who apply online. Those lucky enough to call the Land of the Rising Sun home can register their house, apartment or small business to receive the blazing hookup, so long as they’re located within Chiba, Gunma, Ibaraki, Tochigi, Tokyo, Kanagawa or Saitama.

Source:  engadget.com

Google search manipulation starves some websites of traffic

Tuesday, April 16th, 2013

Google’s placement of its own flight-finding service in search results is resulting in lower click-through rates for companies that have not bought advertising, according to a study by Harvard University academics.

The study provides data for how Google’s placement of its own services amid “organic” search results may hurt competitors, which is the focus of an ongoing antitrust case between Google and the European Union.

How paid and non-paid search results are displayed has a powerful sway over consumers, the study found. Ben Edelman, an associate professor at Harvard Business School, and Zhenyu Lai, a Harvard doctoral candidate, looked at when Google began inserting its own Flight Search feature, launched in December 2011, into search results.

They found that Google chose to display Flight Search depending on a user’s search terms. When Flight Search was displayed, it took a top position in the search results, pushing non-paid search results lower down.

The result was an 85% increase in click-through rates — a key measure for advertisers — for paid advertisements. Non-paid, algorithmically generated search results for competing travel agencies dropped 65%.

In an interview on Tuesday, Edelman said the study showed that Flight Search wasn’t necessarily that popular with users. When Flight Search was displayed, however, users were more likely to click on AdWords, Google’s advertising product.

“Users are surprised to see Google Flight Search,” Edelman said. “They weren’t expecting it. They don’t necessarily like it or know they like it, but in the short run, it’s not what they thought was going to be there, so they flee to AdWords.”

For Google, it’s all good, since the company collects revenue from AdWords.

The study analyzed data from ComScore’s Search Planner, which is a database that tracks algorithmic and paid clicks on search engines by users who agreed to have their web surfing recorded.

Edelman and Lai compared Internet searches performed for four months prior to the launch of Google Flight Search and then after it launched from January to April 2012.

If a user searched for a flight in the format “flights to Orlando,” Flight Search would be displayed. But if a user searched for “flight to Orlando FL,” it was not displayed, they wrote.

It wasn’t clear why slight query changes triggered the display of Flight Search. Edelman said it’s even more difficult these days to predict whether Flight Search will be displayed.

But showing Flight Search caused as much as an 80% drop in algorithmic traffic to the five online travel agencies that had received the most traffic from the search terms used, the academics wrote. By contrast, click-through rates for paid advertising jumped 160%.

They warned that intermediaries such as Google have a powerful influence over consumers by ordering search results in differing formats. Edelman said it makes it more difficult for “vertical” search engines such as Yelp, which focuses on specific types of search such as restaurants, to compete.

“Google has a massive degree of control and uses that discretion in ways that serve Google’s interest but much less obviously serve other interests like the public or websites,” he said.

Source:  computerworld.com

New Google site aimed at helping webmasters of hacked sites

Wednesday, March 13th, 2013

Google wants to aid webmasters in identifying site hacks and recovering from them

Google has launched a site for webmasters whose sites have been hacked, something that the company says happens thousands of times every day.

The new site features articles and videos designed to help webmasters identify, diagnose and recover from hacks.

The site addresses different types of ways sites can be compromised. For example, malicious hackers can break into a site and load malware on it to infect visitors, or they can flood it with invisible spam content to inflate their sites’ search rankings.

In announcing the new Help for Hacked Sites resource, Google is also reminding webmasters of best practices for prevention, including keeping all site software updated and patched and being aware of potential security issues of third-party applications and plug-ins before installing them.

This latest Google initiative builds on its efforts over the years to proactively detect malware on sites it indexes and alert both users of its search engine and affected webmasters.

Since Google is the main tool people worldwide use to find and link to websites, it is in Google’s best business interest to make sure it isn’t pointing users of its search engine to malicious or compromised Web destinations.

As part of these efforts, Google also has been a supporter since 2006 of the nonprofit StopBadware organization, which creates awareness about compromised sites and provides informational resources to end users, webmasters and Internet hosts.

However, the problem remains, and continues to be complicated for many webmasters to solve, since ridding a site from malware often requires advanced IT knowledge and outside help.

A StopBadware/Commtouch survey last year of webmasters whose sites had been hacked showed that 26 percent had been unable to undo the damage, and 2 percent had opted to give up and abandon the compromised site.

Source:  networkworld.com