Archive for April, 2013

Look out Google Fiber, $35-a-month gigabit Internet comes to Vermont

Tuesday, April 30th, 2013

Heads up Google Fiber: A rural Vermont telephone company might just have your $70 gigabit Internet offer beat.

Vermont Telephone Co. (VTel), whose footprint covers 17,500 homes in the Green Mountain State, has begun to offer gigabit Internet speeds for $35 a month, using a brand new fiber network. So far about 600 Vermont homes have subscribed.

VTel’s Chief Executive Michel Guite says he’s made it a personal mission to upgrade the company’s legacy phone network, which dates back to 1890, with fiber for the broadband age. The company was able to afford the upgrades largely by winning federal stimulus awards set aside for broadband. Using $94 million in stimulus money, VTel has strung 1,200 miles of fiber across a number of rural Vermont counties over the past year. Mr. Guite says the gigabit service should be available across VTel’s footprint in the coming months.

VTel joins an increasing number of rural telephone companies that, having lost DSL share to cable Internet over the years, are reinvesting in fiber-to-the-home networks.

The Wall Street Journal reported earlier this year that more than 700 rural telephone companies have made this switch, according to the Fiber to the Home Council, a trade group, and Calix Inc., a company that sells broadband equipment to cable and fiber operators. That comes as Google’s Fiber project, which began in Kansas City and is now extending to cities in Utah and Texas, has raised the profile of gigabit broadband and has captured the fancy of many city governments around the country.

“Google has really given us more encouragement,” Mr. Guite said. Mr. Guite said he was denied federal money for his upgrades the first time he applied, but won it the second time around, after Google had announced plans to build out Fiber.

Incumbent cable operators have largely downplayed the relevance of Google’s project, saying that it’s little more than a publicity stunt. They have also questioned whether residential customers even have a need for such speeds.

Mr. Guite says it remains to be seen whether what VTel is doing is a “sustainable model.” He admits that VTel has hard work ahead of it to educate customers about the uses of gigabit speeds. Much like Google Fiber in Kansas City, VTel has been holding public meetings in libraries and even one-on-one meetings with elderly folks to help them understand what gigabit Internet means, Mr. Guite said.

Source:  WSJ

Open IP ports let anyone track ships on Internet

Tuesday, April 30th, 2013

In 12 hours, researchers logged more than 2GB of data on ships, thanks to automatic ID systems.

While digging through the data unearthed in an unprecedented census of nearly the entire Internet, researchers at Rapid7 Labs have discovered a lot of things they didn’t expect to find openly responding to port scans. One of the biggest surprises was the availability of data that allowed them to track the movements of more than 34,000 ships at sea. The data can pinpoint ships down to their precise geographic location through Automated Identification System receivers connected to the Internet.

The AIS receivers, many of them connected directly to the Internet via serial port servers, are carried aboard ships, buoys, and other navigation markers. The devices are installed at Coast Guard and other maritime facilities ashore to prevent collisions at sea within coastal waters and to let agencies track the comings and goings of international shipping. Rapid7 security researcher Claudio Guarnieri wrote in a blog post on Rapid7’s Security Street community site that he, Rapid7 Chief Research Officer H.D. Moore, and fellow researcher Mark Schloesser discovered about 160 AIS receivers still active and responding over the Internet. In 12 hours, the trio was able to log more than two gigabytes of data on ships’ positions—including military and law enforcement vessels.

For many of the ships, the vessel’s name was included in the broadcast data pulled from the receivers. For others, the identification numbers broadcast by their beacons are easily found on the Internet. By sifting through the data, the researchers were able to plot the location of individual ships. “Considering that a lot of military, law enforcement, cargoes, and passenger ships do broadcast their positions, we feel that this is a security risk,” Guarnieri wrote.
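
For readers curious how that broadcast data is structured: AIS messages arrive as NMEA !AIVDM sentences whose payload packs fields such as the message type and the ship’s MMSI identifier into 6-bit ASCII. As a rough illustration of how little effort recovery takes, here is a minimal Python sketch for single-fragment sentences; the feed host and port are hypothetical placeholders, not a real receiver:

    # Minimal sketch: pull the message type and MMSI out of single-fragment
    # !AIVDM sentences from an open AIS feed. Host and port are hypothetical.
    import socket

    def sixbit_to_bits(payload):
        # AIS armors its bit stream as 6-bit ASCII (ITU-R M.1371)
        bits = []
        for ch in payload:
            v = ord(ch) - 48
            if v > 40:
                v -= 8
            bits.append(format(v, "06b"))
        return "".join(bits)

    def parse_aivdm(line):
        fields = line.split(",")
        if len(fields) < 6 or not fields[0].endswith("VDM"):
            return None
        bits = sixbit_to_bits(fields[5])
        return int(bits[0:6], 2), int(bits[8:38], 2)  # type: bits 0-5, MMSI: bits 8-37

    sock = socket.create_connection(("ais-feed.example.net", 4001))  # hypothetical
    for raw in sock.makefile():
        parsed = parse_aivdm(raw.strip())
        if parsed:
            print("message type %d from MMSI %d" % parsed)

Position reports (message types 1 through 3) carry latitude and longitude in later bit fields, which is all it takes to plot a vessel on a map.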

Among the other information found in the AIS data were “safety messages,” text messages sent between ships and navigation stations to inform each other of hazards. Some of the messages were actually the equivalent of casual texts to arriving ships’ masters: “MOINMOIN GREETINGS TO YOUR CPT.”

Source:  arstechnica.com

Attack hitting Apache websites is invisible to the naked eye

Monday, April 29th, 2013

Newly discovered Linux/Cdorked evades detection by running in shared memory.

Ongoing exploits infecting tens of thousands of reputable sites running the Apache Web server have only grown more powerful and stealthy since Ars first reported on them four weeks ago. Researchers have now documented highly sophisticated features that make these exploits invisible without the use of special forensic detection methods.

Linux/Cdorked.A, as the backdoor has been dubbed, turns Apache-run websites into platforms that surreptitiously expose visitors to powerful malware attacks. According to a blog post published Friday by researchers from antivirus provider Eset, virtually all traces of the backdoor are stored in the shared memory of an infected server, making it extremely hard for administrators to know their machine has been hacked. This gives attackers a new and stealthy launchpad for client-side attacks included in Blackhole, a popular toolkit in the underground that exploits security bugs in Oracle’s Java, Adobe’s Flash and Reader, and dozens of other programs used by end users. There may be no way for typical server admins to know they’re infected.

“Unless a person really has some deep-dive knowledge on the incident response team, the first thing they’re going to do is kill the evidence,” Cameron Camp, a security researcher at Eset North America, told Ars. “If you run a large hosting company you’re not going to send a guy in who’s going to do memory dumps, you’re going to go in there with your standard tool sets and destroy the evidence.”

Linux/Cdorked.A leaves no traces of compromised hosts on the hard drive other than its modified HTTP daemon binary. Its configuration is delivered by the attacker through obfuscated HTTP commands that aren’t logged by normal Apache systems. All attacker-controlled data is encrypted. Those measures make it all but impossible for administrators to know anything is amiss unless they employ special methods to peer deep inside an infected machine. The backdoor analyzed by Eset was programmed to receive 70 different encrypted commands, a number that could give attackers fairly granular control. Attackers can invoke the commands by manipulating the URLs sent to an infected website.

“The thing is receiving commands,” Camp said. “That means that suddenly you have a new vector that is difficult to detect but is receiving commands. Blackhole is a tricky piece of malware anyway. Now suddenly you have a slick delivery method.”

In addition to hiding evidence in memory, the backdoor is programmed to mask its malicious behavior in other ways. End users who request addresses that contain “adm,” “webmaster,” “support,” and similar words often used to denote special administrator webpages aren’t exposed to the client exploits. Also, to make detection harder, users who have previously been attacked are not exposed in the future.

It remains unclear what the precise relationship is between Linux/Cdorked.A and Darkleech, the Apache plug-in module conservatively estimated to have hijacked at least 20,000 sites. It’s possible they’re the same module, different versions of the same module, or different modules that both expose end users to Blackhole exploits. It also remains unclear exactly how legitimate websites are coming under the spell of the malicious plugins. While researchers from Sucuri speculate it takes hold after attackers brute-force the secure-shell access used by administrators, a researcher from Cisco Systems said he found evidence that vulnerable configurations of the Plesk control panel are being exploited to spread Darkleech. Other researchers who have investigated the ongoing attack in the past six months include AV provider Sophos and those from the Malware Must Die blog.

The malicious Apache modules are proving difficult to disinfect. Many of the modules take control of the secure shell mechanism that legitimate administrators use to make technical changes and update content on a site. That means attackers often regain control of machines that are only partially disinfected. The larger problem, of course, is that the highly sophisticated behavior of the infections makes them extremely hard to detect.

Eset researchers have released a tool that can be used by administrators who suspect their machine is infected with Linux/Cdorked.A. The free Python script examines the shared memory of a server running Apache and looks for commands issued by the stealthy backdoor. Eset’s cloud-based Livegrid system has already detected hundreds of servers that are infected. Because Livegrid works only with a small percentage of machines on the Internet, the number of compromised Apache servers is presumed to be much higher.
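
Eset’s script is the authoritative check, but the general approach is easy to sketch: enumerate the System V shared memory segments on the machine and flag any that look out of place for a stock Apache install. The following rough illustration (an assumption-laden stand-in, not Eset’s actual script) leans on the standard ipcs utility:

    # Rough illustration only -- not Eset's detection script. Lists SysV shared
    # memory segments and flags large ones owned by the web-server user, which
    # on a default Apache box is a hint worth a closer look.
    import subprocess

    SUSPECT_OWNERS = {"apache", "www-data", "httpd", "nobody"}
    SIZE_THRESHOLD = 1024 * 1024          # 1 MiB; tune for your environment

    out = subprocess.check_output(["ipcs", "-m"]).decode()
    for line in out.splitlines():
        parts = line.split()
        # typical row: key shmid owner perms bytes nattch status
        if len(parts) >= 5 and parts[0].startswith("0x"):
            key, shmid, owner, perms, nbytes = parts[:5]
            if owner in SUSPECT_OWNERS and int(nbytes) >= SIZE_THRESHOLD:
                print("suspicious segment shmid=%s owner=%s size=%s perms=%s"
                      % (shmid, owner, nbytes, perms))

Any hit is only a starting point for a proper memory dump and forensic analysis, not proof of infection.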

Source:  arstechnica.com

Spamhaus hacking suspect ‘had mobile attack van’

Monday, April 29th, 2013

A Dutchman accused of mounting one of the biggest attacks on the internet used a “mobile computing office” in the back of a van.

The 35-year-old, identified by police as “SK”, was arrested last week.

He is accused of being behind “unprecedentedly serious attacks” on the non-profit anti-spam watchdog Spamhaus.

Dutch, German, British and US police forces took part in the investigation leading to the arrest, Spanish authorities said.

The Spanish interior minister said SK was able to carry out network attacks from the back of a van that had been “equipped with various antennas to scan frequencies”.

He was apprehended in the city of Granollers, 20 miles (35km) north of Barcelona. It is expected that he will be extradited from Spain to be tried in the Netherlands.

‘Robust web hosting’

Police said that upon his arrest SK told them he belonged to the “Telecommunications and Foreign Affairs Ministry of the Republic of Cyberbunker”.

Cyberbunker is a company that says it offers highly secure and robust web hosting for any material except child pornography or terrorism-related activity.

Spamhaus is an organisation based in London and Geneva that aims to help email providers filter out spam and other unwanted content.

To do this, the group maintains a number of blocklists: databases of servers known to be used for malicious purposes.

Police alleged that SK co-ordinated an attack on Spamhaus in protest over its decision to add servers maintained by Cyberbunker to a spam blacklist.

Overwhelm server

Spanish police were alerted in March to large distributed-denial-of-service (DDoS) attacks originating in Spain but affecting servers in the UK, Netherlands and US.

DDoS attacks attempt to overwhelm a web server by sending it many more requests for data than it can handle.

A typical DDoS attack employs about 50 gigabits of data per second (Gbps). At its peak the attack on Spamhaus hit 300Gbps.

In a statement in March, Cyberbunker “spokesman” Sven Kamphuis took exception to Spamhaus’s action, saying in messages sent to the press that it had no right to decide “what goes and does not go on the internet”.

Source:  BBC

Cyberwar risks calamity, Eugene Kaspersky warns UK Government and spooks

Monday, April 29th, 2013

State-of-the-art cyberweapons are now powerful enough to severely disrupt nations and the organisations responsible for their critical infrastructure, Kaspersky Lab founder and CEO Eugene Kaspersky has warned in a speech to a select audience of UK police, politicians and CSOs.

That Kaspersky was invited to give the speech to such a high-level gathering is a clear signal that the message accords with the Government and UK security establishment’s view of the threat posed by cyber-weapons.

“Today, sophisticated malicious programs – cyberweapons – have the power to disable companies, cripple governments and bring whole nations to their knees by attacking critical infrastructure in sectors such as communications, finance, transportation and utilities. The consequences for human populations could, as a result, be literally catastrophic,” said Kaspersky.

To illustrate his point, Kaspersky noted that the number of malware samples analysed by Kaspersky Lab had risen from 700 per day in 2006 to 7,000 per day by 2011. Today the number, including polymorphic variants, has reached 200,000 each day, enough to overwhelm the defences of even well-resourced firms.

The sophistication of threats had also risen dramatically since 2010 with the discovery of state-sponsored threats such as Red October, Flame, MiniFlame, Gauss, Stuxnet, Duqu, Shamoon and Wiper, some of which had been uncovered by Kaspersky Lab itself.

Countering this would be impossible as long as organisations tackled the problem one by one, each in isolation from others. Intelligence sharing was no longer a luxury and had become essential.

This would require intimate cooperation between the private sector and government bodies, he said. The heads of organisations had to internalise this as a new reality.

“But why should state intelligence and defence bother cooperating with the private sector? In the words of Francis Maude, UK Minister for the Cabinet Office, ‘We need to team up to fight common enemies, but the key to cooperating, in a spirit of openness and sharing, are guarantees to maintain the confidentiality of data shared,’” said Kaspersky.

Audience members included City of London Police Commissioner Adrian Leppard, National Fraud Authority head Stephen Harrison, former Counter Terrorism and Security Minister Pauline Neville Jones, Minister for Crime and Security James Brokenshire, and CSOs from HSBC, Unilever, Vodafone and Barclays.

Although best known as a celebrity icon of the company that bears his name, Kaspersky has in recent times become vocal on issues of cyber-weapons and their geo-political as well as technical implications.

Although ostensibly preaching the orthodox position that cyber-defence should be a coalition of forces, his words carry a warning about the dangers of state-sponsored cyber-weapons, including those from the UK and its allies.

Many of the most advanced cyber-weapons uncovered by Kaspersky’s company are suspected of being created by the US, the early adopter of such offensive capabilities. His point seems to be that the US and its allies will find themselves on the receiving end of the same weapons if international standards of cyber-etiquette are not established.

Earlier this year, Interpol announced that Kaspersky Lab would be a key partner in its new Global Complex for Innovation (IGCI), a cybercrime-fighting hub in Singapore due for completion next year.

Source:  pcadvisor.com

Malware found scattered by cyber espionage attacks

Monday, April 29th, 2013

Researchers following a cyberespionage campaign apparently bent on stealing drone-related technology secrets have found additional malware related to the targeted attacks.

FireEye researchers have been tracking so-called “Operation Beebus” for months, but only last week reported the connection to unmanned aircraft often used in spying. Drones have also been used by the Obama administration to assassinate leaders of the Al-Qaeda terrorist group.

Malware linked to spying

FireEye researcher James Bennett, who was the first to make the drone connection, said last week that he has found two new malware samples associated with the attack, bringing the total to four.

The first two were versions of the same malware, called Mutter. Of the new samples, one uses the same custom encryption scheme but a different command-and-control protocol; the other is completely different from Mutter but uses the same C&C infrastructure.

Bennett has yet to fully analyze the new malware, which he hopes will provide “more threads to follow.”

Operation Beebus is a cyberespionage campaign that FireEye has linked to the infamous Comment Crew, which security firm Mandiant has identified as a secret unit of China’s People’s Liberation Army. The hacker group attempts to steal information from international companies and foreign governments.

Bennett reported in a blog post last week that he had uncovered evidence of cyberattacks against a dozen organizations in the U.S. and India. The attacks against academia, government agencies, and the aerospace, defense and telecommunication industries targeted individuals knowledgeable in drone technology.

The spear-phishing campaign included sending email that contained decoy documents meant to trick recipients into clicking on the file, which would download the malware. One such document was an article about Pakistan’s unmanned aerial vehicle industry written by Aditi Malhotra, an Indian writer and associate fellow at the Centre for Land Warfare Studies in New Delhi.

How it worked

Once downloaded, the Mutter malware opened a backdoor to the infected systems in order to receive instructions from C&C servers and to send stolen information. To avoid detection, Mutter is capable of remaining dormant for long periods of time, so that it will eventually be categorized as benign by malware analysis systems.

Despite the exposure, Operation Beebus is still active, although its infrastructure has changed. All but one of the domain names studied by Bennett are no longer in use, but several IP addresses are still active, probably being used with other domains.

“We are still seeing active communications going out with this Mutter malware, so we do know that it’s still going,” Bennett said.

One in five data breaches is the result of a cyberespionage campaign, according to the latest study by Verizon. More than 95 percent of those cases originated from China, with targets showing an almost fifty-fifty split between large and small organizations.

Source:  pcworld.com

Cisco releases security advisories

Friday, April 26th, 2013

Cisco has released three security advisories to address vulnerabilities affecting Cisco NX-OS-based products, Cisco Device Manager, and Cisco Unified Computing System. These vulnerabilities may allow an attacker to bypass authentication controls, execute arbitrary code, obtain sensitive information, or cause a denial-of-service condition.

US-CERT encourages users and administrators to review these Cisco security advisories and apply any necessary updates to help mitigate the risks.

Source:  US-CERT

Hackers increasingly target shared Web hosting servers for use in mass phishing attacks

Friday, April 26th, 2013

Nearly half of phishing attacks seen during the second half of 2012 involved the use of hacked shared hosting servers, APWG report says

Cybercriminals increasingly hack into shared Web hosting servers in order to use the domains hosted on them in large phishing campaigns, according to a report from the Anti-Phishing Working Group (APWG).

Forty-seven percent of all phishing attacks recorded worldwide during the second half of 2012 involved such mass break-ins, APWG said in the latest edition of its Global Phishing Survey report published Thursday.

In this type of attack, once phishers break into a shared Web hosting server, they update its configuration so that phishing pages are displayed from a particular subdirectory of every website hosted on the server, APWG said. A single shared hosting server can host dozens, hundreds or even thousands of websites at a time, the organization said.
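
That pattern suggests a cheap server-side check: if the same unexpected subdirectory suddenly exists under every hosted site’s document root, the configuration has likely been tampered with. A minimal sketch, assuming a conventional one-directory-per-site layout under /var/www (the layout is an assumption, and this is not an APWG tool):

    # Minimal sketch: look for a subdirectory name that appears under every
    # vhost document root -- the hallmark of the mass-phishing reconfiguration
    # described above. The /var/www layout is an assumption for illustration.
    import os
    from collections import Counter

    DOCROOTS = "/var/www"          # one subdirectory per hosted site (assumed)

    counts = Counter()
    sites = [d for d in os.listdir(DOCROOTS)
             if os.path.isdir(os.path.join(DOCROOTS, d))]
    for site in sites:
        for entry in os.listdir(os.path.join(DOCROOTS, site)):
            if os.path.isdir(os.path.join(DOCROOTS, site, entry)):
                counts[entry] += 1

    for name, n in counts.most_common():
        if n == len(sites) and len(sites) > 1:
            print("'%s' exists under all %d sites -- worth inspecting" % (name, n))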

APWG is a coalition of over 2000 organizations that include security vendors, financial institutions, retailers, ISPs, telecommunication companies, defense contractors, law enforcement agencies, trade groups, government agencies and more.

Hacking into shared Web hosting servers and hijacking their domains for phishing purposes is not a new technique, but this type of malicious activity reached a peak in August 2012, when APWG detected over 14,000 phishing attacks sitting on 61 servers. “Levels did decline in late 2012, but still remained troublingly high,” APWG said.

During the second half of 2012, there were at least 123,486 unique phishing attacks worldwide that involved 89,748 unique domain names, APWG said. This was a significant increase from the 93,462 phishing attacks and 64,204 associated domains observed by the organization during the first half of 2012.

“Of the 89,748 phishing domains, we identified 5,835 domain names that we believe were registered maliciously, by phishers,” APWG said. “The other 83,913 domains were almost all hacked or compromised on vulnerable Web hosting.”

In order to break into such servers, attackers exploit vulnerabilities in Web server administration panels like cPanel or Plesk and popular Web applications like WordPress or Joomla. “These attacks highlight the vulnerability of hosting providers and software, exploit weak password management, and provide plenty of reason to worry,” the organization said.

Cybercriminals break into shared hosting environments in order to use their resources in various types of attacks, not just phishing, APWG said. For example, since late 2012 a group of hackers has been compromising Web servers in order to launch DDoS (distributed denial-of-service) attacks against U.S. financial institutions.

In one mass attack campaign dubbed Darkleech, attackers compromised thousands of Apache Web servers and installed SSH backdoors on them. It’s not clear how the Darkleech attackers break into these servers in the first place, but vulnerabilities in Plesk, cPanel, Webmin or WordPress have been suggested as possible entry points.

Source:  networkworld.com

For first time, smartphone sales top other mobile phones in first quarter

Friday, April 26th, 2013

Six years after the sale of the first iPhone and 14 years after the first BlackBerry email pager was unveiled, smartphone shipments have outnumbered sales of other types of mobile phones, IDC reported late Thursday.

IDC said 216.2 million smartphones were shipped globally in the first quarter of 2013. The smartphone total accounted for 51.6% of all mobile phones shipped.

Shipments of other mobile phones, which IDC calls feature phones, totaled 202.4 million in the quarter. Shipments of all mobile phones combined came to 418.6 million, IDC said.

“The balance of smartphone power has shifted,” said IDC analyst Kevin Restivo in a statement. “Phone users want computers in their pockets. The days when phones were used primarily to make phone calls and send text messages are quickly fading away.”

IDC also noted the emergence of China-based companies, including Huawei, ZTE, Coolpad and Lenovo, among the leading smartphone vendors.

Those newcomers and others have displaced longtime mobile phone leaders Nokia from Finland, BlackBerry from Canada, and HTC from Taiwan, in the list of top five smartphone makers, IDC said.

BlackBerry was producing what was essentially a smartphone before Apple introduced the iPhone in June 2007.

The first BlackBerry device was an email pager, introduced in 1999. Those devices were subsequently combined with voice calling.

Nokia has long been a top producer of mobile phones, though it slipped off the top five list for the first quarter.

A year ago, it was common to see previous market leaders Nokia, BlackBerry and HTC among the top five, said Ramon Llamas, an analyst at IDC.

IDC ranked the top five smartphone vendors in the first quarter by units shipped: Samsung (70.7 million); Apple (37.4 million); LG (10.3 million); Huawei (9.9 million); ZTE (9.1 million). All other vendors combined accounted for the remaining 36.4% of the market.

IDC ranked the top five vendors of feature phones and smartphones combined as: Samsung (27.5%); Nokia (14.8%); Apple (8.9%); LG (3.7%) and ZTE (3.2%). All others combined to hold 41.9% of the market.

LG showed a dramatic 110% year-over-year climb in smartphone shipments, while Huawei’s grew by 94% and Samsung’s by 61%. ZTE’s smartphone shipments grew by 49% and Apple’s by just 6.6%.

The last time Apple posted a single-digit year-over-year growth rate was in the third quarter of 2009. Apple has been in the second spot in smartphone rankings in each of the last five quarters, IDC noted.

Samsung, meanwhile, shipped more smartphones in the first quarter than the next four vendors combined, making it the “undisputed leader” in the worldwide smartphone market, IDC said.

Samsung’s next-generation Galaxy S4 smartphone is about to go on sale, while the company is also building a new OS, called Tizen, that will run new smartphones later this year.

Source:  computerworld.com

Wireless networks may learn to live together by using energy pulses

Wednesday, April 24th, 2013

University-developed GapSense system could help prevent interference between Wi-Fi and other networks

Researchers at the University of Michigan have invented a way for different wireless networks crammed into the same space to say “excuse me” to one another.

Wi-Fi shares a frequency band with the popular Bluetooth and ZigBee systems, and all are often found in the same places together. But it’s hard to prevent interference among the three technologies because they can’t signal each other to coordinate the use of the spectrum. In addition, different generations of Wi-Fi sometimes fail to exchange coordination signals because they use wider or narrower radio bands. Both problems can slow down networks and break connections.

Michigan computer science professor Kang Shin and graduate student Xinyu Zhang, now an assistant professor at the University of Wisconsin, set out to tackle this problem in 2011. Last July, they invented GapSense, software that lets Wi-Fi, Bluetooth and ZigBee all send special energy pulses that can be used as traffic-control messages. GapSense is ready to implement in devices and access points if a standards body or a critical mass of vendors gets behind it, Shin said.

Wi-Fi LANs are a data lifeline for phones, tablets and PCs in countless homes, offices and public places. Bluetooth is a slower but less power-hungry protocol typically used in place of cords to connect peripherals, and ZigBee is an even lower powered system found in devices for home automation, health care and other purposes.

Each of the three wireless protocols has a mechanism for devices to coordinate the use of airtime, but they all are different from one another, Shin said.

“They can’t really speak the same language and understand each other at all,” Shin said.

Each also uses CSMA (carrier sense multiple access), a mechanism that instructs radios to hold off on transmissions if the airwaves are being used, but that system doesn’t always prevent interference, he said.

The main problem is Wi-Fi stepping on the toes of Bluetooth and ZigBee. Sometimes this happens just because it acts faster than other networks. For example, a Wi-Fi device using CSMA may not sense any danger of a collision with another transmission even though a nearby ZigBee device is about to start transmitting. That’s because ZigBee takes 16 times as long as Wi-Fi to emerge from idle mode and get the packets moving, Shin said.

Changing ZigBee’s performance to help it keep up with its Wi-Fi neighbors would defeat the purpose of ZigBee, which is to transmit and receive small amounts of data with very low power consumption and long battery life, Shin said.

Wi-Fi devices can even fail to communicate among themselves on dividing up resources. Successive generations of the Wi-Fi standard have allowed for larger chunks of spectrum in order to achieve higher speeds. As a result, if an 802.11b device using just 10MHz of bandwidth tries to tell the rest of a Wi-Fi network that it has packets to send, an 802.11n device that’s using 40MHz may not get that signal, Shin said. The 802.11b device then becomes a “hidden terminal,” Shin said. As a result, packets from the two devices may collide.

To get all these different devices to coordinate their use of spectrum, Shin and Zhang devised a totally new communication method. GapSense uses a series of energy pulses separated by gaps. The length of the gaps between pulses can be used to distinguish different types of messages, such as instructions to back off on transmissions until the coast is clear. The signals can be sent at the start of a communication or between packets.
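
To make the idea concrete, here is a toy model (not the researchers’ implementation) in which the message type lives entirely in the silence between fixed-length energy pulses, so any radio with an energy detector and a clock can read it without decoding Wi-Fi, Bluetooth or ZigBee frames:

    # Toy model of GapSense-style signaling (not the authors' code): message
    # types are distinguished only by the gap length between energy pulses,
    # so a receiver needs nothing more than an energy detector and a clock.
    PULSE = 1          # time units of energy on the air
    GAPS = {"back_off": 3, "clear_to_send": 5, "wake_up": 7}

    def encode(msg_type, repeats=4):
        """Emit a pulse train as a list of (state, duration) pairs."""
        gap = GAPS[msg_type]
        train = []
        for _ in range(repeats):
            train.append(("pulse", PULSE))
            train.append(("gap", gap))
        return train

    def decode(train):
        """Recover the message type from the measured gap durations."""
        gaps = [d for state, d in train if state == "gap"]
        majority = max(set(gaps), key=gaps.count)   # tolerate a corrupted gap
        for name, g in GAPS.items():
            if g == majority:
                return name
        return None

    print(decode(encode("back_off")))    # -> back_off

Repeating the pulse train several times is what lets a receiver tolerate the odd corrupted gap, which matters on a band as crowded as 2.4GHz.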

GapSense might noticeably improve the experience of using Wi-Fi, Bluetooth and ZigBee. Network collisions can slow down networks and even cause broken connections or dropped calls. When Shin and Zhang tested wireless networks in a simulated office environment with moderate Wi-Fi traffic, they found a 45 percent rate of collisions between ZigBee and Wi-Fi. Using GapSense slashed that collision rate to 8 percent. Their tests of the “hidden terminal” problem showed a 40 percent collision rate, and GapSense reduced that nearly to zero, according to a press release.

One other possible use of GapSense is to let Wi-Fi devices stay alert with less power drain. The way Wi-Fi works now, idle receivers typically have to listen to an access point to be prepared for incoming traffic. With GapSense, the access point can send a series of repeated pulses and gaps that a receiver can recognize while running at a very low clock rate, Shin said. Without fully emerging from idle, the receiver can determine from the repeated messages that the access point is trying to send it data. This feature could reduce energy consumption of a Wi-Fi device by 44 percent, according to Shin.

Implementing GapSense would involve updating the firmware and device drivers of both devices and Wi-Fi access points. Most manufacturers would not do this for devices already in the field, so the technology will probably have to wait for hardware products to be refreshed, according to Shin.

A patent on the technology is pending. The ideal way to proliferate the technology would be through a formal standard, but even without that, it could become widely embraced if two or more major vendors license it, Shin said.

Source:  computerworld.com

IBM’s solar tech is 80% efficient thanks to supercomputer know-how

Tuesday, April 23rd, 2013

By borrowing cooling systems used in its supercomputers, IBM Research claims it can dramatically increase the overall efficiency of concentrated photovoltaic solar power from 30 to 80 percent.

Like other concentrated photovoltaic (CPV) collectors, IBM’s system at its Zurich laboratory uses a mirrored parabolic dish to concentrate incoming solar radiation onto PV cells. The dish uses a tracking system to move with the sun, concentrating the collected radiation by a factor of 2,000 onto a sensor containing triple-junction PV cells. During daylight hours, each 1-sq cm PV chip generates on average between 200 and 250 watts of electrical power, harnessing up to 30 percent of the incoming solar energy.

Ordinarily, the remaining 70 percent of energy would be lost as heat. But by capturing most of that heat with water, IBM Research says it is able to reduce system heat losses to around 20 percent of the total incoming energy. This results in a bottom-line efficiency of 80 percent for its CPV collector, dubbed HCPVT for High Concentration Photovoltaic Thermal. Unlike a regular CPV system, HCPVT delivers its energy in two forms: electricity and hot water.
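
The arithmetic behind the headline number is simple enough to check: roughly 30 percent of the incoming energy leaves as electricity, about 20 percent is lost, and the water loop recovers the rest as heat.

    # Back-of-the-envelope check of the claimed HCPVT energy balance.
    incoming = 1.0                      # normalized incoming solar energy
    electrical = 0.30 * incoming        # PV conversion, per the article
    losses = 0.20 * incoming            # residual heat losses, per the article
    thermal = incoming - electrical - losses  # recovered as hot water

    print("electrical: %.0f%%, thermal: %.0f%%, total: %.0f%%"
          % (electrical * 100, thermal * 100, (electrical + thermal) * 100))
    # -> electrical: 30%, thermal: 50%, total: 80%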

The thermal system was adapted from IBM Research’s 6-teraflop Aquasar supercomputer, which went online at ETH Zurich in 2010. By using water as a coolant, Aquasar consumes three fifths of the energy of a comparable air-cooled machine of the time. Crucially, the hot water could be put to work heating university buildings, reducing Aquasar’s carbon footprint to a claimed 15 percent of what it would otherwise have been.

As with Aquasar, micro-channels between 50 and 100 micrometers in diameter carry water exceptionally close to the source of heat: the processing units in the case of Aquasar, the PV cells here. Thermal resistance is reduced to a tenth of competing systems with larger water channels.

“This allows us to cool with hot water, which sounds a bit strange,” IBM scientist Dr. Bruno Michel told Ars during a Skype call. “The photovoltaic chip is around 100º [centigrade] while the coolant is 90º.”

To live up to its efficiency, the HCPVT system needs to put its hot water to good use. Though outside the scope of this team’s work, IBM Research is also looking at systems which could use the heat by-product to purify water or, somewhat counterintuitively, to cool buildings using adsorption refrigeration.

The team has developed a prototype with a 4×4-cm PV receiver which generates about 1kW of electrical power. It hopes to develop a much larger HCPVT system with a 100-sq m dish and a 25×25 cm receiver, producing 25kW of electrical power and 50kW of thermal power. (Larger PV receivers have gaps between the chips, so you don’t gain an additional 200W of electrical power for every square centimeter of receiver you add.)

In a YouTube video, Dr. Michel raises the possibility that these larger HCPVT collectors could one day be used to build solar power stations in, say, the Sahara Desert. According to the team’s calculations, covering 2 percent of the area of the Sahara with HCPVT would meet the world’s electricity needs, transmission issues aside. Not that you need a desert. Michel told Ars that the system is useful almost anywhere where you have direct solar radiation—Zurich, for instance. “By adding the thermal output we can extend its range of applications compared to CPV,” he said.

The HCPVT system has been in development for more than 5 years, initially in collaboration with the Egypt Nanotechnology Research Center.

Source:  arstechnica.com

Microsoft rolls out standards-compliant two-factor authentication

Thursday, April 18th, 2013

Microsoft today announced that it is rolling out optional two-factor authentication to the 700 million or so Microsoft Account users, confirming last week’s rumors. The scheme will become available to all users “in the next few days.”

It works essentially identically to existing schemes already available for Google accounts. Two-factor authentication augments a password with a one-time code that’s delivered either by text message or generated in an authentication app.

Computers that you trust will be allowed to skip the second factor and just use a password, and application-specific passwords can be generated for interoperating with software that doesn’t support two-factor authentication.

Microsoft has its own authentication app for Windows Phone. It isn’t offering apps for iOS or Android—however, it doesn’t need to. The system it’s using is standard, specified in RFC 6238, and Google uses the same system. As a result, Google’s own Authenticator app for Android can be used to authenticate Microsoft Accounts. And vice versa: Microsoft’s Authenticator app for Windows Phone works with Google accounts.
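
Since RFC 6238 specifies TOTP, an HMAC-SHA-1 over a time-based counter with dynamic truncation per RFC 4226, a compatible code generator fits in a dozen lines. A minimal sketch using only Python’s standard library (the secret below is a made-up example):

    # Minimal RFC 6238 TOTP generator -- the same algorithm Google's and
    # Microsoft's authenticator apps implement. The secret is a made-up example.
    import base64, hashlib, hmac, struct, time

    def totp(secret_b32, digits=6, period=30):
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // period            # moving time-based counter
        msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))   # example secret; prints a 6-digit code

Feed the same shared secret to Google’s or Microsoft’s authenticator app and it will produce the same six digits, which is exactly why the apps are interchangeable.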

Source:  arstechnica.com

‘World’s fastest’ home internet service hits Japan with Sony’s help, 2Gbps down

Tuesday, April 16th, 2013

Google Fiber might be making waves with its 1Gbps speeds, but it’s no match for what’s being hailed as the world’s fastest commercially-provided home internet service: Nuro.

Launched in Japan yesterday by Sony-supported ISP So-net, the fiber connection pulls down data at 2Gbps, and sends it up at 1Gbps.  An optical network unit (ONU) given to Nuro customers comes outfitted with three Gigabit ethernet ports and supports 450Mbps over 802.11 a/b/g/n.
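
To put those line rates in perspective, a rough best-case conversion to transfer times (ignoring protocol overhead):

    # Rough best-case transfer times at Nuro's rates, ignoring protocol overhead.
    def seconds_to_transfer(size_gb, rate_gbps):
        return size_gb * 8 / rate_gbps  # gigabytes -> gigabits, then divide

    for rate, label in [(2.0, "2Gbps fiber"), (0.45, "450Mbps Wi-Fi")]:
        print("25GB over %s: about %.0f seconds"
              % (label, seconds_to_transfer(25, rate)))
    # -> about 100 seconds on the wire, about 444 seconds over Wi-Fi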

When hitched to a two-year contract, web surfers will be set back 4,980 yen ($51) per month and must pony up a 52,500 yen (roughly $540) installation fee, which is currently being waived for folks who apply online. Those lucky enough to call the Land of the Rising Sun home can register their house, apartment or small business to receive the blazing hookup, so long as they’re located within Chiba, Gunma, Ibaraki, Tochigi, Tokyo, Kanagawa or Saitama.

Source:  engadget.com

Google search manipulation starves some websites of traffic

Tuesday, April 16th, 2013

Google’s placement of its own flight-finding service in search results is resulting in lower click-through rates for companies that have not bought advertising, according to a study by Harvard University academics.

The study provides data on how Google’s placement of its own services amid “organic” search results may hurt competitors, which is the focus of an ongoing antitrust case between Google and the European Union.

How paid and non-paid search results are displayed has a powerful sway over consumers, the study found. Ben Edelman, an associate professor at Harvard Business School, and Zhenyu Lai, a Harvard doctoral candidate, looked at when Google began inserting its own Flight Search feature, launched in December 2011, into search results.

They found that Google chose to display Flight Search depending on a user’s search terms. When Flight Search was displayed, it took a top position in the search results, pushing non-paid search results further down the page.

The result was an 85% increase in click-through rates — a key measure for advertisers — for paid advertisements. Click-throughs on non-paid, algorithmically generated search results for competing travel agencies dropped 65%.

In an interview on Tuesday, Edelman said the study showed that Flight Search wasn’t necessarily that popular with users. When Flight Search was displayed, however, users were more likely to click on AdWords, Google’s advertising product.

“Users are surprised to see Google Flight Search,” Edelman said. “They weren’t expecting it. They don’t necessarily like it or know they like it, but in the short run, it’s not what they thought was going to be there, so they flee to AdWords.”

For Google, it’s all good, since the company collects revenue from AdWords.

The study analyzed data from ComScore’s Search Planner, which is a database that tracks algorithmic and paid clicks on search engines by users who agreed to have their web surfing recorded.

Edelman and Lai compared Internet searches performed during the four months prior to the launch of Google Flight Search with searches performed after it launched, from January to April 2012.

If a user searched for a flight in the format “flights to Orlando,” Flight Search would be displayed. But if a user searched for “flight to Orlando FL,” it was not displayed, they wrote.

It wasn’t clear why slight query changes triggered the display of Flight Search. Edelman said it’s even more difficult these days to predict whether Flight Search will be displayed.

But showing Flight Search caused as much as an 80% drop in algorithmic traffic to the five online travel agencies that had received the most traffic from the search terms used, the academics wrote. By contrast, click-through rates for paid advertising jumped 160%.

They warned that intermediaries such as Google have a powerful influence over consumers by ordering search results in differing formats. Edelman said it makes it more difficult for “vertical” search engines such as Yelp, which focuses on specific types of search such as restaurants, to compete.

“Google has a massive degree of control and uses that discretion in ways that serve Google’s interests but much less obviously serve other interests, like those of the public or websites,” he said.

Source:  computerworld.com

DOJ identifies lower frequency spectrum as key to wireless competition

Sunday, April 14th, 2013

The Department of Justice has provided the FCC with new recommendations for governing spectrum auctions, and with a heavy emphasis on leveling the playing field, the findings are likely to draw the ire of AT&T and Verizon. In its briefing, the DOJ made its case that the nation’s two largest carriers currently hold market power, which is due to the heavy concentration of lower frequency spectrum (below 1,000MHz) allocated to the two incumbents.

According to DOJ officials, “This results in the two smaller nationwide carriers having a somewhat diminished ability to compete, particularly in rural areas, where the cost to build out coverage is higher with high-frequency spectrum.” Although the DOJ never came right out and said it, one can easily surmise that it’s guiding the FCC to establish rules that favor smaller carriers — namely Sprint and T-Mobile — in future low-frequency spectrum auctions. In the DOJ’s opinion, an incumbent carrier would need to demonstrate both compelling evidence of capacity constraints and an efficient use of its current licenses in order to gain additional lower frequency spectrum. Otherwise, the opportunity exists for AT&T and Verizon to snap up licenses simply in an attempt to harm competitors.

Given that the FCC and DOJ share the responsibility of ensuring competition in the marketplace, it seems unlikely that this latest brief will fall on deaf ears.

Source:  engadget.com

Comcast to IPv6-enable commercial broadband service

Saturday, April 13th, 2013

Comcast plans to expand its IPv6-based offerings for business customers with the launch of commercial broadband and Metro Ethernet services that support the next-gen Internet Protocol later this year.

John Brzozowski, chief architect for IPv6 and distinguished engineer at Comcast, said the ISP will start a trial of IPv6-enabled commercial broadband service in May. Aimed at small businesses, home offices and teleworkers, the new IPv6-enabled cable modem service will be available in Philadelphia, Denver and Silicon Valley first.

“We’ve completed the rollout of IPv6 on half of our broadband network,” Brzozowski said. “Wherever we’ve upgraded our network, that’s where we are going to start with our commercial broadband service.”

Later in 2013, Comcast plans to announce IPv6 support for its Metro Ethernet service.

“We’ve had quite a few people asking us to enable IPv6 on Metro E, not to the tune of hundreds of thousands but more than we anticipated,” Brzozowski said.

The new IPv6 capabilities will be available at no extra charge to Comcast’s business customers.

Comcast will post a link on its website (http://www.comcast6.net/) where commercial broadband customers can sign up for the IPv6 trial.

Comcast will finish deploying IPv6 to all of its residential customers in 2013. Currently, more than 3% of its residential customer base is actively using IPv6. “We expect that 3% to increase dramatically by the end of the year,” Brzozowski said.

He added that the biggest roadblock to IPv6 deployment right now is lack of support by consumer electronics manufacturers, particularly those that produce TVs, Blu-ray players and game consoles.

The Internet needs IPv6 because it is running out of addresses under the original version of the Internet Protocol, called IPv4. IPv4 uses 32-bit addresses and can support 4.3 billion devices connected directly to the Internet. IPv6, on the other hand, uses 128-bit addresses and provides a number of addresses so vast it is best expressed mathematically: about 3.4 x 10^38.
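
Both figures fall straight out of the address width:

    # Address space sizes follow directly from the address width.
    ipv4 = 2 ** 32
    ipv6 = 2 ** 128
    print("IPv4: %d addresses (~%.1f billion)" % (ipv4, ipv4 / 1e9))
    print("IPv6: %.1e addresses" % ipv6)  # ~3.4e+38, i.e. 3.4 x 10^38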

Source:  networkworld.com

Huge attack on WordPress sites could spawn never-before-seen super botnet

Friday, April 12th, 2013

Ongoing attack from >90,000 computers is creating a strain on Web hosts, too.

Security analysts have detected an ongoing attack that uses a huge number of computers from across the Internet to commandeer servers that run the WordPress blogging application.

The unknown people behind the highly distributed attack are using more than 90,000 IP addresses to brute-force crack administrative credentials of vulnerable WordPress systems, researchers from at least three Web hosting services reported. At least one company warned that the attackers may be in the process of building a “botnet” of infected computers that’s vastly stronger and more destructive than those available today. That’s because the servers have bandwidth connections that are typically tens, hundreds, or even thousands of times faster than botnets made of infected machines in homes and small businesses.

“These larger machines can cause much more damage in DDoS [distributed denial-of-service] attacks because the servers have large network connections and are capable of generating significant amounts of traffic,” Matthew Prince, CEO of content delivery network CloudFlare, wrote in a blog post describing the attacks.

It’s not the first time researchers have raised the specter of a super botnet with potentially dire consequences for the Internet. In October, they revealed that highly debilitating DDoS attacks on six of the biggest US banks used compromised Web servers to flood their targets with above-average amounts of Internet traffic. The botnet came to be known as the itsoknoproblembro or Brobot, names that came from a relatively new attack tool kit some of the infected machines ran. If typical botnets used in DDoS attacks were the network equivalent of tens of thousands of garden hoses trained on a target, the Brobot machines were akin to hundreds of fire hoses. Despite their smaller number, they were nonetheless able to inflict more damage because of their bigger capacity.

There’s already evidence that some of the commandeered WordPress websites are being abused in a similar fashion. A blog post published Friday by someone from Web host ResellerClub said the company’s systems running that platform are also under an “ongoing and highly distributed global attack.”

“To give you a little history, we recently heard from a major law enforcement agency about a massive attack on US financial institutions originating from our servers,” the blog post reported. “We did a detailed analysis of the attack pattern and found out that most of the attack was originating from [content management systems] (mostly WordPress). Further analysis revealed that the admin accounts had been compromised (in one form or the other) and malicious scripts were uploaded into the directories.”

The blog post continued:

“Today, this attack is happening at a global level and WordPress instances across hosting providers are being targeted. Since the attack is highly distributed in nature (most of the IPs used are spoofed), it is making it difficult for us to block all malicious data.”

According to CloudFlare’s Prince, the distributed attacks are attempting to brute force the administrative portals of WordPress servers, employing the username “admin” and 1,000 or so common passwords. He said the attacks are coming from tens of thousands of unique IP addresses, an assessment that squares with the finding of more than 90,000 IP addresses hitting WordPress machines hosted by HostGator.

“At this moment, we highly recommend you log into any WordPress installation you have and change the password to something that meets the security requirements specified on the WordPress website,” the company’s Sean Valant wrote. “These requirements are fairly typical of a secure password: upper and lowercase letters, at least eight characters long, and including ‘special’ characters (^%$#@*).”
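
Those requirements translate into a trivial check. The sketch below encodes the policy as Valant states it (eight-plus characters, mixed case, at least one of the listed special characters); it mirrors the quoted advice only and is not a WordPress or HostGator tool:

    # Encodes the password policy quoted above; illustrative only.
    import re

    def meets_policy(password):
        has_lower = re.search(r"[a-z]", password)
        has_upper = re.search(r"[A-Z]", password)
        has_special = re.search(r"[\^%$#@*]", password)
        return len(password) >= 8 and all([has_lower, has_upper, has_special])

    print(meets_policy("admin123"))        # False: no upper case, no special char
    print(meets_policy("C0rrect^Horse"))   # True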

Operators of WordPress sites can take other measures too, including installing plugins such as this one and this one, which close some of the holes most frequently exploited in these types of attacks. Beyond that, operators can sign up for a free plan from CloudFlare that automatically blocks login attempts that bear the signature of the brute-force attack.

Already, HostGator has indicated that the burden of this mass attack is putting huge strains on websites, which slow to a crawl or go down altogether. There are also indications that once a WordPress installation is infected, it’s equipped with a backdoor so that attackers can maintain control even after the compromised administrative credentials have been changed. In some respects, the WordPress attacks resemble the mass compromise of machines running the Apache Web server, which Ars chronicled 10 days ago.

With so much at stake, readers who run WordPress sites are strongly advised to lock down their servers immediately. The effort may not only protect the security of the individual site, it could help safeguard the Internet as a whole.

Source:  arstechnica.com

OpenDaylight: A big step toward the software-defined data center

Monday, April 8th, 2013

A who’s-who of industry players, including Cisco, launches open source project that could make SDN as pervasive as server virtualization

Manual hardware configuration is the scourge of the modern data center. Server virtualization and pooled storage have gone a long way toward making infrastructure configurable on the fly via software, but the third leg of the stool, networking, has lagged behind with fragmented technology and standards.

The OpenDaylight Project — a new open source project hosted by the Linux Foundation featuring every major networking player — promises to move the ball forward for SDN (software-defined networking). Rather than hammer out new standards, the project aims to produce an extensible, open source, virtual networking platform atop such existing standards as OpenFlow, which provides a universal interface through which either virtual or physical switches can be controlled via software.

The approach of OpenDaylight is similar to that of Hadoop or OpenStack, where industry players come together to develop core open source bits collaboratively, around which participants can add unique value. That roughly describes the Linux model as well, which may help explain why the Linux Foundation is hosting OpenDaylight.

“The Linux Foundation was contacted based on our experience and understanding of how to structure and set up an open community that can foster innovation,” said Jim Zemlin, executive director of the Linux Foundation, in an embargoed conference call last week. He added that OpenDaylight, which will be written in Java, will be available under the Eclipse Public License.

Collaboration or controversy?
It must be said that the politics of the OpenDaylight Project are mind-boggling. Cisco is on board despite the fact that SDN is widely seen as a threat to the company’s dominant position — because, when the network is virtualized, switch hardware becomes more commoditized. A cynic might be forgiven for wondering whether Cisco is there to rein things in rather than accelerate development.

Along with Cisco, the cavalcade of coopetition includes Arista Networks, Big Switch Networks, Brocade, Citrix, Dell, Ericsson, Fujitsu, HP, IBM, Intel, Juniper Networks, Microsoft, NEC, Nuage Networks, PLUMgrid, Red Hat, and VMware. BigSwitch, perhaps the highest-profile SDN upstart, is planning to donate a big chunk of its Open SDN Suite, including controller code and distributed virtual routing service applications. Although VMware has signed on, it’s unclear how the proprietary technology developed by Nicira, the SDN startup acquired for $1.2 billion by VMware last summer, will fit in.

Another question is how OpenDaylight will affect other projects. Some have voiced frustration over the Open Networking Foundation’s stewardship of OpenFlow, so OpenDaylight could be a way to work around that organization. Also, OSI president and InfoWorld contributor Simon Phipps wonders why Project Crossbow, an open source network virtualization technology built into Solaris, appears to have no role in OpenDaylight. You can be sure many more questions will emerge in the coming days and weeks.

The architecture of OpenDaylight
Zemlin described OpenDaylight as an extensible collection of technologies. “This project will focus on software and will deliver several components: an SDN controller, protocol plug-ins, applications, virtual overlay network, and the architectural and the programmatic interfaces that tie those things together.”

This list is consistent with the basic premise of SDN, where the control and data planes are separated, with a central controller orchestrating the data flows of many physical or virtual switches (the latter running on generic server hardware). OpenFlow currently provides the only standardized interface supported by many switch vendors, but OpenDaylight also plans to support other standards as well as proprietary interfaces as the project evolves.

More exciting are the “northbound” REST APIs to the controller, atop which developers will be able to build new types of applications that run on the network itself for specialized security, network management, and so on. In support of this, Cisco is contributing an application framework, while Citrix is throwing in “an application controller that integrates Layer 4-7 network services for enabling application awareness and comprehensive control.”
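
As a sense of what a northbound application might look like, here is a short sketch that asks a controller for its topology over REST and reacts to it. The URL, port and JSON shape are purely hypothetical assumptions, since OpenDaylight had not yet published its APIs:

    # Hypothetical northbound client. The URL, port and JSON shape are
    # illustrative assumptions -- OpenDaylight's actual REST API was not
    # yet published when this was written.
    import json
    import urllib.request

    CONTROLLER = "http://controller.example:8080/nb/v1/topology"  # hypothetical

    with urllib.request.urlopen(CONTROLLER) as resp:
        topology = json.load(resp)

    # e.g. flag links a management application might want to rebalance
    for link in topology.get("links", []):
        if link.get("utilization", 0) > 0.8:
            print("congested link:", link["src"], "->", link["dst"])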

Although the embargoed OpenDaylight announcement was somewhat short on detail, a couple of quick conclusions can be drawn. One is that — on the model of Hadoop, Linux, and OpenStack — the future is now being hashed out in open source bits rather than standards committees. The rise in the importance of open source in the industry is simply stunning, with OpenDaylight serving as the latest confirmation.

More obviously, the amazing breadth of support for OpenDaylight signals new momentum for SDN. To carve up data center resources with the flexibility necessary for a cloud-enabled world where many tenants must coexist, the network needs to have the same software manageability as the rest of the infrastructure. OpenDaylight leaves no doubt the industry recognizes that need.

If the OpenDaylight Project can avoid getting bogged down in vendor politics, it could complete the last mile to the software-defined data center in an industry-standard way that lowers costs for everyone. It could do for networking what OpenStack is doing for cloud computing.

Source:  infoworld.com

Microsoft Windows XP support ends in 365 days

Monday, April 8th, 2013

Microsoft wants its Windows XP users to get with the program, and is giving them 365 days to do so.

One year from today, Microsoft will shut down extended support for its 12-year-old operating system, in favor of newer platforms like Windows 7 and 8.

In 2002, Microsoft launched its Support Lifecycle policy, allowing 10 years of combined mainstream and extended support for Microsoft Business and Developer products, including Windows OSes. To that end, Windows XP SP3 and Office 2003 will lose that support on April 8, 2014.

“If your organization has not started the migration to a modern desktop, you are late,” Stephen Rose, senior product manager for Windows Commercial, wrote in a blog post. He revealed that it takes an average company 18 to 32 months to reach full deployment, and urged businesses to begin planning and application testing “immediately,” to avoid issues later.

But don’t think that a simple upgrade from XP to Windows 7 or 8 — a “modern operating system,” according to Rose — will do the trick.

“You will need to do a clean install,” Rose said, meaning user data must be migrated and applications reinstalled on the new OS. More details on testing hardware and apps can be found on the Windows blog.

Microsoft already pulled mainstream support for Windows XP in April 2009, but come this time next year, it will drop extended support, meaning no more security updates, non-security hotfixes, free or paid assisted support options, or online technical content updates.

Rose warned that running XP SP3 and Office 2003 after support ends can expose companies to potential security risks. Even anti-virus software support won’t be enough, and vulnerabilities discovered in the operating system or applications running on it will remain unpatched and open to malware.

“Using XP after April 2014 is an ‘at your own risk’ situation for any customers choosing not to migrate,” Rose wrote.

Windows XP launched in 2001 and became Microsoft’s most popular OS of its time. Redmond has given users plenty of time to make the move; the software giant announced the news last April, two years before the shutdown and before the Windows 8 launch.

According to March data from Net Applications, approximately 38.73 percent of PC users are still using Windows XP; the most popular OS is Windows 7 with 44.73 percent. About 4.99 percent are on Vista, while only 3.17 percent have upgraded to Microsoft’s latest, Windows 8.

Source:  pcmag.com

Microsoft Security Bulletin Advance Notification for April 2013

Monday, April 8th, 2013
Bulletin ID   Maximum Severity   Vulnerability Impact     Restart Requirement   Affected Software
Bulletin 1    Critical           Remote Code Execution    Requires restart      Microsoft Windows, Internet Explorer
Bulletin 2    Critical           Remote Code Execution    May require restart   Microsoft Windows
Bulletin 3    Important          Information Disclosure   May require restart   Microsoft Office, Microsoft Server Software
Bulletin 4    Important          Elevation of Privilege   Requires restart      Microsoft Windows
Bulletin 5    Important          Denial of Service        Requires restart      Microsoft Windows
Bulletin 6    Important          Elevation of Privilege   Requires restart      Microsoft Windows
Bulletin 7    Important          Elevation of Privilege   Requires restart      Microsoft Security Software
Bulletin 8    Important          Elevation of Privilege   May require restart   Microsoft Office, Microsoft Server Software
Bulletin 9    Important          Elevation of Privilege   Requires restart      Microsoft Windows

Excerpt from microsoft.com