IT Consulting Case Studies: Microsoft SharePoint Server for CMS

February 14th, 2014

Gyver Networks recently designed and deployed a Microsoft SharePoint Server infrastructure for a financial consulting firm servicing banks and depository institutions with assets in excess of $200 billion.

Challenge:  A company specializing in regulatory compliance audits for financial institutions found itself inundated by documents submitted via inconsistent workflow processes, raising concerns regarding security and content management as it continued to expand.

With many such projects running concurrently, keeping up with the back-and-forth flow of multiple versions of the same documents became increasingly difficult.  Further complicating matters, the submission process consisted of clients sending email attachments or uploading files to a company FTP server, then emailing to let staff know something was sent.  Other areas of concern included:

  • Security of submitted financial data in transit and at rest, as defined in SSAE 16 and 201 CMR 17.00, among other standards and regulations
  • Secure, customized, compartmentalized client access
  • Advanced user management
  • Internal and external collaboration (multiple users working on the same documents simultaneously)
  • Change and version tracking
  • Comprehensive search capabilities
  • Client alerts, access to project updates and timelines, and feedback

Resolution: Gyver Networks proposed a Microsoft SharePoint Server environment as the ideal enterprise content management system (CMS) to replace their existing processes.  Once the environment was deployed, existing archives and client profiles were migrated into the SharePoint infrastructure designed for each respective client, and the company transitioned seamlessly to full operation, ready to go live.

Now, instead of an insecure and confusing combination of emails, FTP submissions, and cloud-hosted, third-party management software, they are able to host their own secure, all-in-one CMS on premises, including:

  • 256-bit encryption of data in transit and at rest
  • Distinct SharePoint sites and logins for each client, with customizable access permissions and retention policies for subsites and libraries
  • Advanced collaboration features, with document checkout, change review and approval, and workflows
  • Metadata options so users can find what they’re searching for instantly
  • Client-customized email alerts, views, reporting, timelines, and the ability to submit requests and feedback directly through the SharePoint portal

The end result?  Clients of this company are thrilled to have a comprehensive content management system that not only saves them time and provides secure submission and archiving, but also offers enhanced project oversight and advanced-metric reporting capabilities.

The consulting firm itself experienced an immediate increase in productivity, efficiency, and client retention rates; they are in full compliance with all regulations and standards governing security and privacy; and they are now prepared for future expansion with a scalable enterprise CMS solution that can grow as they do.

Contact Gyver Networks today to learn more about what Microsoft SharePoint Server can do for your organization.  Whether you require a simple standalone installation or a more complex hybrid SharePoint Server farm, we can assist you with planning, deployment, administration, and troubleshooting to ensure you get the most out of your investment.

Wireless Case Studies: Cellular Repeater and DAS

February 7th, 2014

Gyver Networks recently designed and installed a cellular bi-directional amplifier (BDA) and distributed antenna system (DAS) for an internationally renowned preparatory and boarding school in Massachusetts.

BDA Challenge: Faculty, students, and visitors were unable to access any cellular voice or data services at one of this historic campus’ sports complexes; 3G and 4G cellular reception at the suburban Boston location was virtually nonexistent.

Of particular concern to the school was the fact that the safety of its student-athletes would be jeopardized in the event of a serious injury, with precious minutes lost as faculty were forced to scramble to find the nearest landline – or leave the building altogether in search of cellular signal – to contact first responders.

Additionally, since internal communications between management and facilities personnel around the campus took place via mobile phone, lack of cellular signal at the sports complex required staff to physically leave the site just to find adequate reception.

Resolution: Gyver Networks engineers performed a cellular site survey of selected carriers throughout the complex to acquire a precise snapshot of the RF environment. After selecting the optimal donor tower signal for each cell carrier, Gyver then engineered and installed a distributed antenna system (DAS) to retransmit the amplified signal put out by the bi-directional amplifier (BDA) inside the building.

The high-gain, dual-band BDA chosen for the system offered scalability across selected cellular and PCS bands, as well as the flexibility to reconfigure band settings on an as-needed basis, providing enhancement capabilities for all major carriers now and in the future.

Every objective set forth by the school’s IT department has been satisfied with the deployment of this cellular repeater and DAS: All areas of the athletic complex now enjoy full 3G and 4G voice and data connectivity; safety and liability concerns have been mitigated; and campus personnel are able to maintain mobile communications regardless of where they are in the complex.

Case Studies: Point-to-point wireless bridge – Campus

December 6th, 2013

Gyver Networks recently completed a point-to-point (PTP) bridge installation to provide wireless backhaul for a Boston college.

Challenge:  The only connectivity to local network or Internet resources from this school’s otherwise modern athletic center was via a T1 line topping out at 1.5 Mbps of bandwidth.  This was unacceptable not only to the faculty onsite attempting to connect to the school’s network, but also to the attendees, faculty, and media outlets attempting to connect to the Internet during the high-profile events and press conferences routinely held inside.

Another vendor’s design for a 150 Mbps unlicensed wireless backhaul link failed during a VIP visit, necessitating a redesign by Gyver Networks.

Resolution:  After performing a spectrum analysis of the surrounding environment, Gyver Networks determined that the wireless solution originally proposed to the school was not viable due to RF spectrum interference.

For a price point close to that of the failed unlicensed design, Gyver Networks engineered a secure, 700 Mbps point-to-point wireless bridge in the licensed 80 GHz band to link the main campus with the athletic center, providing adequate bandwidth for both local network and Internet connectivity at the remote site.  Faculty are now able to work without restriction, and event attendees can blog, post to social media, and upload photos and videos without constraint.

Heartbleed Affecting Wireless Users

June 2nd, 2014

New Vulnerability in Wireless Devices

Per Ars Technica, a new vulnerability has been detected in many wireless vendors’ implementations of enterprise-grade wireless security, affecting wireless users. We are working with all of our vendors to find out (1) which products are affected and (2) what needs to be done to mitigate this risk. Please stay tuned!

IE 10 zero-day attack targets US military

February 14th, 2014

FireEye, a security research firm, has identified a targeted and sophisticated attack which it believes to be aimed at US military personnel. FireEye calls this specific attack Operation SnowMan. The attack was staged from the website of the U.S. Veterans of Foreign Wars, which the attackers had compromised. Pages from the site were modified to include code (in an IFRAME) which exploited an unpatched vulnerability in Internet Explorer 10 on systems which also have Adobe Flash Player.

The actual vulnerability is in Internet Explorer 10, but it relies on a malicious Flash object and a callback from that Flash object to the vulnerability trigger in JavaScript. FireEye says it is in touch with Microsoft about the vulnerability.

The attack checks to make sure it is running on IE10 and that the user is not running the Microsoft Enhanced Mitigation Experience Toolkit (EMET), a tool which can help to harden applications against attack. So running another version of IE, including IE11, or installing EMET would protect against this attack.

The attack was first identified on February 11. FireEye believes that it was placed on the VFW site in order to be found by US military personnel, and that the attack was timed to coincide with a long holiday weekend and the major snowstorm which struck the eastern United States this week, including the Washington DC region.

FireEye also presents evidence that the attack comes from the same group of attackers it has identified in previous sophisticated, high-value attacks, specifically Operation DeputyDog and Operation Ephemeral Hydra. It reaches this conclusion by analyzing the techniques used. FireEye says that this group has, in the past, attacked U.S. government entities, Japanese firms, defense industrial base (DIB) companies, law firms, information technology (IT) companies, mining companies and non-governmental organizations (NGOs).

Source:  zdnet.com

Building control systems can be pathway to Target-like attack

February 11th, 2014

Credentials stolen from automation and control providers were used in Target hack

Companies should carefully review the network access given to third-party engineers monitoring building control systems to avoid a Target-like attack, experts say.

Security related to providers of building automation and control systems was in the spotlight this week after the security blog KrebsOnSecurity reported that credentials stolen from Fazio Mechanical Services, based in Sharpsburg, Penn., were used by hackers who late last year snatched 40 million debit and credit card numbers from Target’s electronic cash registers, called point-of-sale (POS) systems.

The blog initially identified Fazio as a provider of refrigeration and heating, ventilation and air conditioning (HVAC) systems. The report sparked a discussion in security circles on how such a subcontractor’s credentials could provide access to areas of the retailer’s network Fazio would not need.

On Thursday, Fazio released a statement saying it does not monitor or control Target’s HVAC systems, according to KrebsOnSecurity. Instead it remotely handles “electronic billing, contract submission and project management” for the retailer.

Given the work it does, it is certainly possible that Fazio had access to Target business applications that could be tied to POS systems. However, interviews with experts before Fazio’s clarification found that subcontractors monitoring and maintaining HVAC and other building systems remotely often have too much access to corporate networks.

“Generally what happens is some new business service needs network access, so, if there’s time pressure, it may be placed on an existing network, (without) thinking through all the security implications,” Dwayne Melancon, chief technology officer for data security company Tripwire, said.

Most building systems, such as HVAC, are Internet-enabled so maintenance companies can monitor them remotely. Use of the Shodan search engine for Internet-enabled devices can reveal thousands of systems ranging from building automation to crematoriums with weak login credentials, researchers have found.

Using homegrown technology, Billy Rios, director of threat intelligence for vulnerability management company Qualys, found on the Internet a building control system for Target’s Minneapolis-based headquarters.

While the system is connected to an internal network, Rios could not determine whether it’s a corporate network without hacking the system, which would be illegal.

“We know that we could probably exploit it, but what we don’t know is what purpose it’s serving,” he said. “It could control energy, it could control HVAC, it could control lighting or it could be for access control. We’re not sure.”

If the Web interface of such systems is on a corporate network, then some important security measures need to be taken.

All data traffic moving to and from the server should be closely monitored. To do their job, building engineers need to access only a few systems. Monitoring software should flag traffic going anywhere else immediately.

“Workstations in your HR (human resources) department should probably not be talking to your refrigeration devices,” Rios said. “Seeing high spikes in traffic from embedded devices on your corporate network is also an indication that something is wrong.”

In addition, companies should know the IP addresses used by subcontractors in accessing systems. Unrecognized addresses should be automatically blocked.
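
A minimal sketch of that idea in Python, assuming source addresses can be fed in from firewall or flow logs; the networks listed are placeholder documentation ranges, not real vendor addresses:

    import ipaddress

    # Placeholder allowlist of known subcontractor networks (example ranges only)
    ALLOWED_NETWORKS = [
        ipaddress.ip_network("203.0.113.0/24"),    # vendor A remote-access range (example)
        ipaddress.ip_network("198.51.100.16/28"),  # vendor B office block (example)
    ]

    def is_allowed(source_ip: str) -> bool:
        """True if the source address falls inside a known subcontractor range."""
        addr = ipaddress.ip_address(source_ip)
        return any(addr in net for net in ALLOWED_NETWORKS)

    # Flag (or block at the firewall) anything outside the allowlist
    for src in ["203.0.113.45", "192.0.2.77"]:
        if not is_allowed(src):
            print(f"ALERT: unrecognized address {src} reached the building-control interface")

In practice the same allowlist would be enforced at the firewall or VPN concentrator; the script simply illustrates the check.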

Better password management is also a way to prevent a cyberattack. In general, a subcontractor’s employees will share the same credentials to access a customer’s systems. Those credentials are seldom changed, even when an employee leaves the company.

“That’s why it’s doubly important to make sure those accounts and systems have very restricted access, so you can’t use that technician login to do other things on the network,” Melancon said.

Every company should do a thorough review of their networks to identify every building system. “Understanding where these systems are is the first step,” Rios said.

Discovery should be followed by an evaluation of the security around those systems that are on the Internet.

Source:  csoonline.com

Huge hack ‘ugly sign of future’ for internet threats

February 11th, 2014

A massive attack that exploited a key vulnerability in the infrastructure of the internet is the “start of ugly things to come”, it has been warned.

Online security specialists Cloudflare said it recorded the “biggest” attack of its kind on Monday.

Hackers used weaknesses in the Network Time Protocol (NTP), a system used to synchronise computer clocks, to flood servers with huge amounts of data.

The technique could potentially be used to force popular services offline.

Several experts had predicted that the NTP would be used for malicious purposes.

The target of this latest onslaught is unknown, but it was directed at servers in Europe, Cloudflare said.

Attackers used a well-known method to bring down a system known as Denial of Service (DoS) – in which huge amounts of data are forced on a target, causing it to fall over.

Cloudflare chief executive Matthew Prince said his firm had measured the “very big” attack at about 400 gigabits per second (Gbps), 100Gbps larger than an attack on anti-spam service Spamhaus last year.

Predicted attack

In a report published three months ago, Cloudflare warned that attacks on the NTP were on the horizon and gave details of how web hosts could best try to protect their customers.

NTP servers, of which there are thousands around the world, are designed to keep computers synchronised to the same time.

The fundamentals of NTP date back to 1985. While there have been changes to the system since then, it still operates in much the same way.

A computer needing to synchronise time with the NTP will send a small amount of data to make the request. The NTP will then reply by sending data back.

The vulnerability lies with two weaknesses. Firstly, the amount of data the NTP sends back is bigger than the amount it receives, meaning an attack is instantly amplified.

Secondly, the original computer’s location can be “spoofed”, tricking the NTP into sending the information back to somewhere else.

In this attack, it is likely that many machines were used to make requests to the NTP. Hackers spoofed their location so that the massive amounts of data from the NTP were diverted to a single target.

“Amplification attacks like that result in an attacker turning a small amount of bandwidth coming from a small number of machines into a massive traffic load hitting a victim from around the internet,” Cloudflare explained in a blog post outlining the vulnerability, published last month.

‘Ugly future’

The NTP is one of several protocols used within the infrastructure of the internet to keep things running smoothly.

Unfortunately, despite being vital components, most of these protocols were designed and implemented at a time when the prospect of malicious activity was not considered.

“A lot of these protocols are essential, but they’re not secure,” explained Prof Alan Woodward, an independent cyber-security consultant, who had also raised concerns over NTP last year.

“All you can really do is try and mitigate the denial of service attacks. There are technologies around to do it.”

Most effective, Prof Woodward suggested, was technology that was able to spot when a large amount of data was heading for one destination – and shutting off the connection.
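
As a rough illustration of that kind of detection, the sketch below (in Python, assuming the scapy library and packet-capture privileges; the threshold is arbitrary) tallies NTP reply traffic per destination and flags any destination receiving an outsized volume:

    from collections import defaultdict
    from scapy.all import sniff, IP  # requires scapy and capture privileges

    bytes_per_dst = defaultdict(int)
    THRESHOLD = 50_000_000  # arbitrary example: ~50 MB of NTP replies in one window

    def tally(pkt):
        # Count NTP response traffic (UDP source port 123) per destination address
        if IP in pkt:
            dst = pkt[IP].dst
            bytes_per_dst[dst] += len(pkt)
            if bytes_per_dst[dst] > THRESHOLD:
                print(f"Unusually heavy NTP reply volume headed to {dst}")

    # Watch one 60-second window of NTP replies crossing this host
    sniff(filter="udp and src port 123", prn=tally, store=0, timeout=60)

Real mitigation gear does this at line rate across an entire network edge, but the principle is the same.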

Cloudflare’s Mr Prince said that while his firm had been able to mitigate the attack, it was a worrying sign for the future.

“Someone’s got a big, new cannon,” he tweeted. “Start of ugly things to come.”

Source:  BBC

Change your passwords: Comcast hushes, minimizes serious hack

February 11th, 2014

Are you a Comcast customer? Please change your password.

On February 6, NullCrew FTS hacked into at least 34 of Comcast’s servers and published a list of the company’s mail servers and a link to the root file with the vulnerability it used to penetrate the system on Pastebin.

Comcast, the largest internet service provider in the United States, ignored news of the serious breach in press and media for over 24 hours — only when the Pastebin page was removed did the company issue a statement, and even then, it only spoke to a sympathetic B2B outlet.

During that 24 hours, Comcast stayed silent, and the veritable “keys to the kingdom” sat out in the open internet, ripe for the taking by any malicious entity with a little know-how around mail servers and selling or exploiting customer data.

Comcast customers have not been told to reset their passwords. But they should.

Once NullCrew FTS openly hacked at least 24 Comcast mail servers, and the recipe was publicly posted, the servers began to take a beating. Customers in Comcast’s janky, hard-to-find, 1996-style forums knew something was wrong, and forum posts reflected the slowness, the up and down servers, and the eventual crashing.

The telecom giant ignored press requests for comment and released a limited statement on February 7 — to Comcast-friendly outlet, broadband and B2B website Multichannel News.

The day-late statement failed to impress the few who saw it, and was criticized for its minimizing language and weak attempt to suggest that the breach had been unsuccessful.

From Comcast’s statement on Multichannel’s post, “No Evidence That Personal Sub Info Obtained By Mail Server Hack”:

Comcast said it is investigating a claim by a hacker group that claims to have broken into a batch of the MSO email servers, but believes that no personal subscriber data was obtained as a result.

“We’re aware of the situation and are aggressively investigating it,” a Comcast spokesman said. “We take our customers’ privacy and security very seriously, and we currently have no evidence to suggest any personal customer information was obtained in this incident.”

Not only is there a high probability that customer information was exposed — because direct access was provided to the public for 24 hours — but the vulnerability exploited by the attackers was disclosed and fixed in December 2013.

Just not by Comcast, apparently.

Vulnerability reported December 2013, not patched by Comcast

NullCrew FTS used the unpatched security vulnerability CVE-2013-7091 to open what was essentially an unlocked door giving anyone access to usernames, passwords, and other sensitive details on Comcast’s servers.

NullCrew FTS used a Local File Inclusion (LFI) exploit to gain access to the Zimbra LDAP and MySQL database — which houses the usernames and passwords of Comcast ISP users.

“Fun Fact: 34 Comcast mail servers are victims to one exploit,” tweeted NullCrew FTS.

If you are a Comcast customer, you are at risk: All Comcast internet service includes a master email address.

Even if a customer doesn’t use Comcast’s Xfinity mail service, every Comcast ISP user has a master email account with which to manage their services, and it is accessible through a “Zimbra” webmail site.

This account is used to access payment information, email settings, user account creation and settings, and any purchases from Comcast’s store or among its services.

With access to this master email address, someone can give up to six “household members” access to the Comcast account.

NullCrew taunted Comcast on Twitter, then posted the data on Pastebin and taunted the company a little bit more.

Because there were “no passwords” on the Pastebin, some observers believed — incorrectly — that there was no serious risk for exploitation of sensitive customer information.

NullCrew FTS: 2 — big telecoms: 0

On the first weekend of February 2014, NullCrew FTS took credit for a valid hack against telecom provider Bell Canada.

In the first strike of what looks like it’ll be a very successful campaign to cause pain and humiliation to big telecoms, NullCrew FTS accessed and exposed more than 22,000 usernames and passwords, and some credit card numbers belonging to the phone company’s small business customers.

Establishing a signature game of cat and mouse with clueless support staff, NullCrew FTS contacted Bell customer support two weeks before its disclosure.

Like Comcast’s robotic customer service responses to NullCrew FTS on Twitter, Bell’s support staff either didn’t know how to report the security incident upstream, had no idea what a hacking event was, or didn’t take the threat seriously.

Bell also tried to play fast and loose with its accountability in the security smash and grab; it acknowledged the breach soon after, but blamed it on an Ottawa-based third-party supplier.

However, NullCrew FTS had called out the company’s insecurities in mid-January, with a public warning the hackers issued to a company support representative about the vulnerabilities.

NullCrew FTS followed up with Bell by posting a Pastebin link on Twitter with unredacted data.

Excerpt from zdnet.com

The case for Wi-Fi in the Internet of Things

January 14th, 2014

Whether it’s the “connected home” or the “Internet of Things,” many everyday home appliances and devices will soon feature some form of Internet connectivity. What form should that connectivity take? We sat down with Edgar Figueroa, president and CEO of the Wi-Fi Alliance, to discuss his belief that Wi-Fi is the clear choice.

Options are plentiful when it comes to Internet connectivity, but some are easily ruled out for most Internet of Things designs. Ethernet and other wired solutions require additional equipment or more cabling than what is typically found in even a modern home. Cellular connectivity is pointless for stationary home goods and still too power-hungry for wearable items. Proprietary and purpose-built solutions, like ZigBee, are either too closed off or require parallel paths to solutions that are already in our homes.

Bluetooth makes a pretty good case for itself, though inconsistent user experiences remain the norm for several reasons. The latest Bluetooth specifications provide very low power data transfers and have very low overhead for maintaining a connection. The result is that the power profile for the connection is low whether you’re transacting data or not. Connection speeds are modest compared to the alternatives. But the biggest detractor for Bluetooth is inconsistency. Bluetooth has always felt kludgy; it’s an incomplete solution that will suffice until it improves. It’s helpful that Bluetooth devices can often have their performance, reliability, and features improved upon through software updates, but the experience can still remain frustrating.

Then there’s Wi-Fi.

Figueroa wanted to highlight a few key points from a study the Alliance commissioned. “Of those polled, more than half already have a non-traditional device with a Wi-Fi radio,” he said. Here, “non-traditional” covers a broad swath of products that includes appliances, thermostats, and lighting systems. Figueroa continued, “Ninety-one percent of those polled said they’d be more likely to buy a smart device if it came equipped with Wi-Fi.” The Alliance’s point: everyone already has a Wi-Fi network in their home. Why choose anything else?

One key consideration the study seems to ignore is power draw, which is one of Bluetooth’s biggest assets. Wi-Fi connections are active and power-hungry, even when they aren’t transacting large amounts of data. A separate study looking at power consumption per bit of data transferred demonstrated that Wi-Fi trumps Bluetooth by orders of magnitude. Where Wi-Fi requires large amounts of constant power, Bluetooth requires almost no power to maintain a connection.

In response to a question on the preference for low-power interfaces, Figueroa said simply, “Why?” In his eyes, the connected home isn’t necessarily a battery-powered home. Devices that connect to our Wi-Fi networks traditionally have plugs, so why must they sip almost no power?

Bluetooth has its place in devices whose current draw must not exceed the capabilities of a watch battery. But even in small devices, Wi-Fi’s performance and ability to create ad hoc networks and Wi-Fi Direct connections can better the experience, even if it’s at the risk of increasing power draw and battery size.

In the end, the compelling case for Wi-Fi’s use in the mobile space has more to do with what we want from our experiences than whether one is more power-hungry. Simplicity in all things is preferred. Even after all these years, pairing Bluetooth is usually more complex than connecting a new device to your existing Wi-Fi network. Even in the car, where Bluetooth has had a long dominance, the ability to connect multiple devices over Wi-Fi’s wide interface may ultimately be preferred. Still, despite Figueroa’s confidence, it’s an increasingly green (and preferably bill-shrinking) world looking to adopt an Internet of Things lifestyle. Wi-Fi may ultimately need to complete its case by driving power down enough to reside in all our Internet of Things devices, from the biggest to the smallest.

Source:  arstechnica.com

Feds to dump CGI from Healthcare.gov project

January 13th, 2014

The Obama Administration is set to fire CGI Federal as prime IT contractor of the problem-plagued Healthcare.gov website, a report says.

The government now plans to hire IT consulting firm Accenture to fix the Affordable Care Act (ACA) website’s lingering performance problems, the Washington Post reported today. Accenture will get a 12-month, $90 million contract to update the website, the newspaper reported.

The Healthcare.gov site is the main portal for consumers to sign up for new insurance plans under the Affordable Care Act.

CGI’s Healthcare.gov contract is due for renewal in February. The terms of the agreement included options for the U.S. to renew it for one more year and then another two more after that.

The decision not to renew comes as frustration grows among officials of the Centers for Medicare and Medicaid Services (CMS), which oversees the ACA, about the pace and quality of CGI’s work, the Post said, quoting unnamed sources. About half of the software fixes written by CGI engineers in recent months have failed on first attempt to use them, CMS officials told the Post.

The government awarded the contract to Accenture on a sole-source, or no-bid, basis because the CGI contract expires at the end of next month. That gives Accenture less than two months to familiarize itself with the project before it takes over the complex task of fixing numerous remaining glitches.

CGI did not immediately respond to Computerworld’s request for comment.

In an email, an Accenture spokesman declined to confirm or deny the report.

“Accenture Federal Services is in discussions with clients and prospective clients all the time, but it is not appropriate to discuss new business opportunities we may or may not be pursuing,” the spokesman said.

The decision to replace CGI comes as performance of the Healthcare.gov website appears to be steadily improving after its spectacularly rocky Oct. 1 launch.

A later post mortem of the debacle showed that servers did not have the right production data, third party systems weren’t connecting as required, dashboards didn’t have data and there simply wasn’t enough server capacity to handle traffic.

Though CGI had promised to have the site ready and fully functional by Oct. 1, between 30% and 40% of the site had yet to be completed at the time. The company has taken a lot of the heat since.

Ironically, CGI has impressive credentials. It is nowhere near as big as the largest government IT contractors, yet it is one of only 10 companies in the U.S. to have achieved the highest Capability Maturity Model Integration (CMMI) level for software development certification.

CGI Federal is a subsidiary of Montreal-based CGI Group. CMS hired the company as the main IT contractor for Healthcare.gov in 2011 under an $88 million contract. So far, the firm has received about $113 million for its work on the site.

Source:  pcadvisor.com

Cisco promises to fix admin backdoor in some routers

January 13th, 2014

Cisco Systems promised to issue firmware updates removing a backdoor from a wireless access point and two of its routers later this month. The undocumented feature could allow unauthenticated remote attackers to gain administrative access to the devices.

The vulnerability was discovered over the Christmas holiday on a Linksys WAG200G router by a security researcher named Eloi Vanderbeken. He found that the device had a service listening on port 32764 TCP, and that connecting to it allowed a remote user to send unauthenticated commands to the device and reset the administrative password.

It was later reported by other users that the same backdoor was present in multiple devices from Cisco, Netgear, Belkin and other manufacturers. On many devices this undocumented interface can only be accessed from the local or wireless network, but on some devices it is also accessible from the Internet.
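
Checking a device from the LAN side is straightforward: see whether anything answers on TCP port 32764. A minimal Python sketch (the gateway address is an assumption; substitute your router’s LAN address):

    import socket

    GATEWAY = "192.168.1.1"  # assumption: adjust to your router's LAN address

    def port_open(host: str, port: int = 32764, timeout: float = 3.0) -> bool:
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if port_open(GATEWAY):
        print("Port 32764 is open -- the undocumented service may be present; check vendor advisories.")
    else:
        print("Port 32764 appears closed from this network segment.")

An open port is not proof of the backdoor by itself, but it is a strong hint that affected firmware is running.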

Cisco identified the vulnerability in its WAP4410N Wireless-N Access Point, WRVS4400N Wireless-N Gigabit Security Router and RVS4000 4-port Gigabit Security Router. The company is no longer responsible for Linksys routers, as it sold that consumer division to Belkin early last year.

The vulnerability is caused by a testing interface that can be accessed from the LAN side on the WRVS4400N and RVS4000 routers and also the wireless network on the WAP4410N wireless access point device.

“An attacker could exploit this vulnerability by accessing the affected device from the LAN-side interface and issuing arbitrary commands in the underlying operating system,” Cisco said in an advisory published Friday. “An exploit could allow the attacker to access user credentials for the administrator account of the device, and read the device configuration. The exploit can also allow the attacker to issue arbitrary commands on the device with escalated privileges.”

The company noted that there are no known workarounds that could mitigate this vulnerability in the absence of a firmware update.

The SANS Internet Storm Center, a cyber threat monitoring organization, warned at the beginning of the month that it detected probes for port 32764 TCP on the Internet, most likely targeting this vulnerability.

Source:  networkworld.com

Hackers use Amazon cloud to scrape mass number of LinkedIn member profiles

January 10th, 2014

EC2 service helps hackers bypass measures designed to protect LinkedIn users

LinkedIn is suing a gang of hackers who used Amazon’s cloud computing service to circumvent security measures and copy data from hundreds of thousands of member profiles each day.

“Since May 2013, unknown persons and/or entities employing various automated software programs (often referred to as ‘bots’) have registered thousands of fake LinkedIn member accounts and have extracted and copied data from many member profile pages,” company attorneys alleged in a complaint filed this week in US District Court in Northern California. “This practice, known as ‘scraping,’ is explicitly barred by LinkedIn’s User Agreement, which prohibits access to LinkedIn ‘through scraping, spidering, crawling, or other technology or software used to access data without the express written consent of LinkedIn or its Members.’”

With more than 259 million members—many of whom are highly paid professionals in technology, finance, and medical industries—LinkedIn holds a wealth of personal data that can prove highly valuable to people conducting phishing attacks, identity theft, and similar scams. The allegations in the lawsuit highlight the unending tug-of-war between hackers who work to obtain that data and the defenders who use technical measures to prevent the data from falling into the wrong hands.

The unnamed “Doe” hackers employed a raft of techniques designed to bypass anti-scraping measures built in to the business network. Chief among them was the creation of huge numbers of fake accounts. That made it possible to circumvent restrictions dubbed FUSE, which limit the activity any single account can perform.

“In May and June 2013, the Doe defendants circumvented FUSE—which limits the volume of activity for each individual account—by creating thousands of different new member accounts through the use of various automated technologies,” the complaint stated. “Registering so many unique new accounts allowed the Doe defendants to view hundreds of thousands of member profiles per day.”
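
The general idea behind a per-account activity cap can be illustrated in a few lines of Python; this is a generic sketch, not LinkedIn’s actual FUSE implementation, and the window and limit are invented for the example:

    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 3600    # hypothetical window
    MAX_PROFILE_VIEWS = 200  # hypothetical per-account budget

    recent_views = defaultdict(deque)  # account id -> timestamps of recent profile views

    def allow_profile_view(account_id, now=None):
        """Refuse further views once an account exceeds its per-window budget."""
        now = now if now is not None else time.time()
        q = recent_views[account_id]
        while q and now - q[0] > WINDOW_SECONDS:
            q.popleft()  # drop views that have aged out of the window
        if len(q) >= MAX_PROFILE_VIEWS:
            return False
        q.append(now)
        return True

A cap like this is exactly why the attackers registered thousands of fake accounts: each account gets its own budget, so the aggregate scraping rate scales with the number of accounts rather than being limited by any single one of them.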

The hackers also circumvented a separate security measure that is supposed to require end users to complete bot-defeating CAPTCHA dialogues when potentially abusive activities are detected. They also managed to bypass restrictions that LinkedIn intended to impose through a robots.txt file, which websites use to make clear which content may be indexed by automated Web crawling programs employed by Google and other sites.

LinkedIn engineers have disabled the fake member profiles and implemented additional technological safeguards to prevent further scraping. They also conducted an extensive investigation into the bot-powered methods employed by the hackers.

“As a result of this investigation, LinkedIn determined that the Doe defendants accessed LinkedIn using a cloud computing platform offered by Amazon Web Services (‘AWS’),” the complaint alleged. “This platform—called Amazon Elastic Compute Cloud or Amazon EC2—allows users like the Doe defendants to rent virtual computers on which to run their own computer programs and applications. Amazon EC2 provides resizable computing capacity. This feature allows users to quickly scale capacity, both up and down. Amazon EC2 users may temporarily run hundreds or thousands of virtual computing machines. The Doe defendants used Amazon EC2 to create virtual machines to run automated bots to scrape data from LinkedIn’s website.”

It’s not the first time hackers have used EC2 to conduct nefarious deeds. In 2011, the Amazon service was used to control a nasty bank fraud trojan. (EC2 has also been a valuable tool to whitehat password crackers.) Plenty of other popular Web services have been abused by online crooks as well. In 2009, for instance, researchers uncovered a Twitter account that had been transformed into a command and control channel for infected computers.

The goal of LinkedIn’s lawsuit is to give lawyers the legal means to carry out “expedited discovery to learn the identity of the Doe defendants.” Success will depend, among other things, on whether the people who subscribed to the Amazon service used payment methods or IP addresses that can be traced.

Source:  arstechnica.com

DoS attacks that took down big game sites abused Web’s time-sync protocol

January 10th, 2014

Miscreants who earlier this week took down servers for League of Legends, EA.com, and other online game services used a never-before-seen technique that vastly amplified the amount of junk traffic directed at denial-of-service targets.

Rather than directly flooding the targeted services with torrents of data, an attack group calling itself DERP Trolling sent much smaller data requests to time-synchronization servers running the Network Time Protocol (NTP). By manipulating the requests to make them appear as if they originated from one of the gaming sites, the attackers were able to vastly amplify the firepower at their disposal. A spoofed request containing eight bytes will typically result in a 468-byte response to a victim, a more than 58-fold increase.
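
The arithmetic behind that figure is simple enough to sketch (the attacker bandwidth number below is hypothetical, included only to show the scale):

    request_bytes = 8      # spoofed NTP request, per the figures above
    response_bytes = 468   # typical reply sent to the victim

    amplification = response_bytes / request_bytes
    print(f"Amplification factor: {amplification:.1f}x")  # roughly 58.5x

    # At that ratio, a modest attacker uplink turns into a very large flood:
    attacker_mbps = 500  # hypothetical uplink
    victim_mbps = attacker_mbps * amplification
    print(f"{attacker_mbps} Mbps of spoofed requests -> about {victim_mbps / 1000:.1f} Gbps at the victim")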

“Prior to December, an NTP attack was almost unheard of because if there was one it wasn’t worth talking about,” Shawn Marck, CEO of DoS-mitigation service Black Lotus, told Ars. “It was so tiny it never showed up in the major reports. What we’re witnessing is a shift in methodology.”

The technique is in many ways similar to the DNS-amplification attacks waged on servers for years. That older DoS technique sends falsified requests to open domain name system servers requesting the IP address for a particular site. DNS-reflection attacks help aggravate the crippling effects of a DoS campaign since the responses sent to the targeted site are about 50 times bigger than the request sent by the attacker.

During the first week of the year, NTP reflection accounted for about 69 percent of all DoS attack traffic by bit volume, Marck said. The average size of each NTP attack was about 7.3 gigabits per second, a more than three-fold increase over the average DoS attack observed in December. Correlating claims DERP Trolling made on Twitter with attacks Black Lotus researchers were able to observe, they estimated the attack gang had a maximum capacity of about 28Gbps.

NTP servers help people synchronize their servers to very precise time increments. Recently, the protocol was found to suffer from a condition that could be exploited by DoS attackers. Fortunately, NTP-amplification attacks are relatively easy to repel. Since virtually all the NTP traffic can be blocked with few if any negative consequences, engineers can simply filter out the packets. Other types of DoS attacks are harder to mitigate, since engineers must first work to distinguish legitimate data from traffic designed to bring down the site.

Black Lotus recommends network operators follow several practices to blunt the effects of NTP attacks. They include using traffic policers to limit the amount of NTP traffic that can enter a network, implementing large-scale DDoS mitigation systems, or opting for service-based approaches that provide several gigabits of standby capacity for use during DDoS attacks.

Source:  arstechnica.com

Unencrypted Windows crash reports give ‘significant advantage’ to hackers, spies

January 1st, 2014

Microsoft transmits a wealth of information from Windows PCs to its servers in the clear, claims security researcher

Windows’ error- and crash-reporting system sends a wealth of data unencrypted and in the clear, information that eavesdropping hackers or state security agencies can use to refine and pinpoint their attacks, a researcher said today.

Not coincidentally, over the weekend the popular German newsmagazine Der Spiegel reported that the U.S. National Security Agency (NSA) collects Windows crash reports from its global wiretaps to sniff out details of targeted PCs, including the installed software and operating systems, down to the version numbers and whether the programs or OSes have been patched; application and operating system crashes that signal vulnerabilities that could be exploited with malware; and even the devices and peripherals that have been plugged into the computers.

“This information would definitely give an attacker a significant advantage. It would give them a blueprint of the [targeted] network,” said Alex Watson, director of threat research at Websense, which on Sunday published preliminary findings of its Windows error-reporting investigation. Watson will present Websense’s discovery in more detail at the RSA Conference in San Francisco on Feb. 24.

Sniffing crash reports using low-volume “man-in-the-middle” methods — the classic is a rogue Wi-Fi hotspot in a public place — wouldn’t deliver enough information to be valuable, said Watson, but a wiretap at the ISP level, the kind the NSA is alleged to have in place around the world, would.

“At the [intelligence] agency level, where they can spend the time to collect information on billions of PCs, this is an incredible tool,” said Watson.

And it’s not difficult to obtain the information.

Microsoft does not encrypt the initial crash reports, said Watson, which include both those that prompt the user before they’re sent as well as others that do not. Instead, they’re transmitted to Microsoft’s servers “in the clear,” or over standard HTTP connections.

If a hacker or intelligence agency can insert themselves into the traffic stream, they can pluck out the crash reports for analysis without worrying about having to crack encryption.

And the reports from what Microsoft calls “Windows Error Reporting” (ERS), but which is also known as “Dr. Watson,” contain a wealth of information on the specific PC.

When a device is plugged into a Windows PC’s USB port, for example — say an iPhone to sync it with iTunes — an automatic report is sent to Microsoft that contains the device identifier and manufacturer, the Windows version, the maker and model of the PC, the version of the system’s BIOS and a unique machine identifier.

By comparing the data with publicly-available databases of device and PC IDs, Websense was able to establish that an iPhone 5 had been plugged into a Sony Vaio notebook, and even nail the latter’s machine ID.

If hackers are looking for systems running outdated, and thus vulnerable, versions of Windows — XP SP2, for example — the in-the-clear reports will show which ones have not been updated.

Windows Error Reporting is installed and activated by default on all PCs running Windows XP, Vista, Windows 7, Windows 8 and Windows 8.1, Watson said, confirming that the Websense techniques of deciphering the reports worked on all those editions.

Watson characterized the chore of turning the cryptic reports into easily-understandable terms as “trivial” for accomplished attackers.

More thorough crash reports, including ones that Microsoft silently triggers from its end of the telemetry chain, contain personal information and so are encrypted and transmitted via HTTPS. “If Microsoft is curious about the report or wants to know more, they can ask your computer to send a mini core dump,” explained Watson. “Personal identifiable information in that core dump is encrypted.”

Microsoft uses the error and crash reports to spot problems in its software as well as that crafted by other developers. Widespread reports typically lead to reliability fixes deployed in non-security updates.

The Redmond, Wash. company also monitors the crash reports for evidence of as-yet-unknown malware: Unexplained and suddenly-increasing crashes may be a sign that a new exploit is in circulation, Watson said.

Microsoft often boasts of the value of the telemetry to its designers, developers and security engineers, and with good reason: An estimated 80% of the world’s billion-plus Windows PCs regularly send crash and error reports to the company.

But the unencrypted information fed to Microsoft by the initial and lowest-level reports — which Watson labeled “Stage 1” reports — comprises a dangerous leak, Watson contended.

“We’ve substantiated that this is a major risk to organizations,” said Watson.

Error reporting can be disabled manually on a machine-by-machine basis, or in large sets by IT administrators using Group Policy settings.
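
On a single machine, the off switch is a registry value; here is a minimal sketch using Python’s winreg module (Windows only, run with administrative rights — and Group Policy remains the saner route for more than a handful of machines):

    import winreg  # Windows only; run with administrative rights

    KEY_PATH = r"SOFTWARE\Microsoft\Windows\Windows Error Reporting"

    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                            winreg.KEY_SET_VALUE) as key:
        # Disabled = 1 turns off Windows Error Reporting on this machine
        winreg.SetValueEx(key, "Disabled", 0, winreg.REG_DWORD, 1)

    print("Windows Error Reporting disabled for this machine.")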

Websense recommended that businesses and other organizations redirect the report traffic on their network to an internal server, where it can be encrypted before being forwarded to Microsoft.

But to turn it off entirely would be to throw away a solid diagnostic tool, Watson argued. ERS can provide insights not only to hackers and spying eavesdroppers, but also to IT departments.

“[ERS] does the legwork, and can let [IT] see where vulnerabilities might exist, or whether rogue software or malware is on the network,” Watson said. “It can also show the uptake on BYOD [bring your own device] policies,” he added, referring to the automatic USB device reports.

Microsoft should encrypt all ERS data that’s sent from customer PCs to its servers, Watson asserted.

A Microsoft spokesperson asked to comment on the Websense and Der Spiegel reports said, “Microsoft does not provide any government with direct or unfettered access to our customers’ data. We would have significant concerns if the allegations about government actions are true.”

The spokesperson added that, “Secure Socket Layer connections are regularly established to communicate details contained in Windows error reports,” which is only partially true, as Stage 1 reports are not encrypted, a fact that Microsoft’s own documentation makes clear.

“The software ‘parameters’ information, which includes such information as the application name and version, module name and version, and exception code, is not encrypted,” Microsoft acknowledged in a document about ERS.

Source:  computerworld.com

SaaS predictions for 2014

December 27th, 2013

While the bulk of enterprise software is still deployed on-premises, SaaS (software as a service) continues to undergo rapid growth. Gartner has said the total market will top $22 billion through 2015, up from more than $14 billion in 2012.

The SaaS market will likely see significant changes and new trends in 2014 as vendors jockey for competitive position and customers continue shifting their IT strategies toward the deployment model. Here’s a look at some of the possibilities.

The matter of multitenancy: SaaS vendors such as Salesforce.com have long touted the benefits of multitenancy, a software architecture where many customers share a single application instance, with their information kept separate. Multitenancy allows vendors to patch and update many customers at once and get more mileage out of the underlying infrastructure, thereby cutting costs and easing management.
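
As a generic illustration of what application-level multitenancy means in practice (not any particular vendor’s design), isolation typically comes down to every query being scoped by a tenant identifier within one shared schema:

    import sqlite3

    # One shared application database; every row carries the tenant that owns it
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE accounts (tenant_id TEXT, name TEXT)")
    db.executemany("INSERT INTO accounts VALUES (?, ?)",
                   [("acme", "Alice"), ("acme", "Bob"), ("globex", "Carol")])

    def accounts_for(tenant_id):
        """Every query is filtered by tenant_id, so tenants never see each other's rows."""
        rows = db.execute("SELECT name FROM accounts WHERE tenant_id = ?", (tenant_id,))
        return [name for (name,) in rows]

    print(accounts_for("acme"))    # ['Alice', 'Bob']
    print(accounts_for("globex"))  # ['Carol']

Because every tenant runs on the same instance and schema, one patch or upgrade reaches all of them at once — which is precisely the economy the model is prized for.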

This year, however, other variations on multitenancy emerged, such as one offered by Oracle’s new 12c database. An option for the release allows customers to host many “pluggable” databases within a single host database, an approach that Oracle says is more secure than the application-level multitenancy used by Salesforce.com and others.

Salesforce.com itself has made a shift away from its original definition of multitenancy. During November’s Dreamforce conference, CEO Marc Benioff announced a partnership with Hewlett-Packard around a new “Superpod” option for large enterprises, wherein companies can have their own dedicated infrastructure inside Salesforce.com data centers based on HP’s Converged Infrastructure hardware.

Some might say this approach has little distinction from traditional application hosting. Overall, in 2014 expect multitenancy to fade away as a major talking point for SaaS.

Hybrid SaaS: Oracle has made much of the fact that its Fusion Applications can be deployed either on-premises or from its cloud, but due to the apparent complexity involved with the first option, most initial Fusion customers have chosen SaaS.

Still, the concept of application code bases that can move between the two deployment models could become more popular in 2014.

While there’s no indication Salesforce.com will offer an on-premises option — and indeed, such a thing seems almost inconceivable considering the company’s “No Software” logo and marketing campaign around the convenience of SaaS — the HP partnership is clearly meant to give big companies that still have jitters about traditional SaaS a happy medium.

As in all cases, customer demand will dictate SaaS vendors’ next moves.

Geographic depth: It was no accident that Oracle co-President Mark Hurd mentioned during the company’s recent earnings call that it now has 17 data centers around the world. Vendors want enterprise customers to know their SaaS offerings are built for disaster recovery and are broadly available.

Expect “a flurry of announcements” in 2014 from SaaS vendors regarding data center openings around the world, said China Martens, an independent business applications analyst, via email. “This is another move likely to benefit end-user firms. Some firms at present may not be able to proceed with a regional or global rollout of SaaS apps because of a lack of local data center support, which may be mandated by national data storage or privacy laws.”

Keeping customers happy: On-premises software vendors such as Oracle and SAP are now honing their knowledge of something SaaS vendors such as NetSuite and Salesforce.com had to learn years earlier: How to run a software business based on annual subscriptions, not perpetual software licenses and annual maintenance.

The latter model provides companies with big one-time payments followed by highly profitable support fees. With SaaS, the money flows into a vendor’s coffers in a much different manner, and it’s arguably also easier for dissatisfied customers to move to a rival product compared to an on-premises deployment.

As a result, SaaS vendors have suffered from “churn,” or customer turnover. In 2014, there will be increased focus on ways to keep customers happy and in the fold, according to Karan Mehandru, general partner at venture capital firm Trinity Ventures.

Next year “will further awareness that the purchase of software by a customer is not the end of the transaction but rather the beginning of a relationship that lasts for years,” he wrote in a recent blog post. “Customer service and success will be at the forefront of the customer relationship management process where terms like retention, upsells and churn reduction get more air time in board meetings and management sessions than ever before.”

Consolidation in marketing, HCM: Expect a higher pace of merger and acquisition activity in the SaaS market “as vendors buy up their competitors and partners,” Martens said.

HCM (human capital management) and marketing software companies may particularly find themselves being courted. Oracle, SAP and Salesforce.com have all invested heavily in these areas already, but the likes of IBM and HP may also feel the need to get in the game.

A less likely scenario would be a major merger between SaaS vendors, such as Salesforce.com and Workday.

SaaS goes vertical: “There will be more stratification of SaaS apps as vendors build or buy with the aim of appealing to particular types of end-user firms,” Martens said. “In particular, vendors will either continue to build on early industry versions of their apps and/or launch SaaS apps specifically tailored to particular verticals, e.g., healthcare, manufacturing, retail.”

However, customers will be burdened with figuring out just how deep the industry-specific features in these applications are, as well as gauging how committed the vendor is to the particular market, Martens added.

Can’t have SaaS without a PaaS: Salesforce.com threw down the gauntlet to its rivals in November, announcing Salesforce1, a revamped version of its PaaS (platform as a service) that couples its original Force.com offering with tools from its Heroku and ExactTarget acquisitions, a new mobile application, and 10 times as many APIs (application programming interfaces) as before.

A PaaS serves as a multiplying force for SaaS companies, creating a pool of developers and systems integrators who create add-on applications and provide services to customers while sharing an interest in the vendor’s success.

Oracle, SAP and other SaaS vendors have been building out their PaaS offerings and will make plenty of noise about them next year.

Source:  cio.com

Target’s nightmare goes on: Encrypted PIN data stolen

December 27th, 2013

After hackers stole credit and debit card records for 40 million Target store customers, the retailer said customers’ personal identification numbers, or PINs, had not been breached.

Not so.

On Friday, a Target spokeswoman backtracked from previous statements and said criminals had made off with customers’ encrypted PIN information as well. But Target said the company stored the keys to decrypt its PIN data on separate systems from the ones that were hacked.

“We remain confident that PIN numbers are safe and secure,” Molly Snyder, Target’s spokeswoman, said in a statement. “The PIN information was fully encrypted at the keypad, remained encrypted within our system, and remained encrypted when it was removed from our systems.”

The problem is that when it comes to security, experts say the general rule of thumb is: where there is a will, there is a way. Criminals have already been selling Target customers’ credit and debit card data on the black market, where a single card is selling for as much as $100. Criminals can use that card data to create counterfeit cards. But PIN data is the most coveted of all. With PIN data, cybercriminals can make withdrawals from a customer’s account through an automatic teller machine. And even if the key to unlock the encryption is stored on separate systems, security experts say there have been cases where hackers managed to get the keys and successfully decrypt scrambled data.

Even before Friday’s revelations about the PIN data, two major banks, JPMorgan Chase and Santander Bank, both placed caps on customer purchases and withdrawals made with compromised credit and debit cards. That move, which security experts say is unprecedented, brought complaints from customers trying to do last-minute shopping in the days leading up to Christmas.

Chase said it is in the process of replacing all of its customers’ debit cards — about 2 million of them — that were used at Target during the breach.

The Target breach, from Nov. 27 to Dec. 15, is officially the second-largest breach of a retailer in history. The biggest was a 2005 breach at TJ Maxx that compromised records for 90 million customers.

The Secret Service and Justice Department continue to investigate.

Source:  nytimes.com

Cyber criminals offer malware for Nginx, Apache Web servers

December 26th, 2013

A new malware program that functions as a module for the Apache and Nginx Web servers is being sold on cybercrime forums, according to researchers from security firm IntelCrawler.

The malware is called Effusion and according to the sales pitch seen by IntelCrawler, a start-up firm based in Los Angeles that specializes in cybercrime intelligence, it can inject code in real time into websites hosted on the compromised Web servers. By injecting content into a website, attackers can redirect visitors to exploits or launch social engineering attacks.

The Effusion module works with Nginx from version 0.7 up to the latest stable version, 1.4.4, and with Apache running on 32- and 64-bit versions of Linux and FreeBSD. Modules extend Apache’s and Nginx’s core functionality.

The malware can inject rogue code into static content of certain MIME types, including JavaScript and HTML, and in PHP templates at the start, end or after a specific tag. Attackers can push configuration updates and control code modifications remotely.

Filters can also be used to restrict when the injection happens. Effusion supports filtering by referrer header, which can be used to target only visitors that come from specific websites; by User-Agent header, which can be used to target users of specific browsers; and by IP address or address range.

The malware can check whether it has root access, something that could allow the attackers greater control over the underlying system. It can also delete the injected content when suspicious processes are detected in order to hide itself, Andrey Komarov, IntelCrawler’s CEO, said via email.
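
One way an administrator might hunt for an unexpected module of this kind is to compare the shared objects mapped into the running Web server process against a known-good baseline. A rough Linux sketch in Python (the process name and baseline set are assumptions; build the baseline from a known-clean host):

    import glob
    import re

    KNOWN_GOOD = {"mod_ssl.so", "mod_rewrite.so"}  # example baseline only

    def loaded_objects(process_name):
        """Collect .so file names mapped by any process whose name matches."""
        found = set()
        for comm_path in glob.glob("/proc/[0-9]*/comm"):
            try:
                if open(comm_path).read().strip() != process_name:
                    continue
                for line in open(comm_path.replace("/comm", "/maps")):
                    m = re.search(r"(\S+\.so[^\s]*)$", line)
                    if m:
                        found.add(m.group(1).rsplit("/", 1)[-1])
            except OSError:
                continue  # process exited or permission denied
        return found

    unexpected = loaded_objects("apache2") - KNOWN_GOOD
    if unexpected:
        print("Shared objects not in the baseline:", sorted(unexpected))

A clean result is not a guarantee — malware that hides itself may also tamper with what you can observe — but unexplained modules are an obvious place to start looking.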

The Effusion authors offer precompiled builds for $2,500 per build and plan to vet buyers, Komarov said. This suggests they’re interested in selling it only to a limited number of people so they can continue to offer support and develop the malware at the same time, he said.

While this is not the first malware to function as an Apache module, it is one of the very few so far to also target Nginx, a high-performance Web server that has grown considerably in popularity in recent years.

According to a December Web server survey by Internet services firm Netcraft, Nginx is the third most widely used Web server software after Apache and Microsoft IIS, and has a market share of over 14%. Because it’s built to handle high numbers of concurrent connections, it is used to host heavily trafficked websites including Netflix, Hulu, Pinterest, CloudFlare, Airbnb, WordPress.com, GitHub and SoundCloud.

Source: computerworld.com

Huawei sends 400Gbps over next-generation optical network

December 26th, 2013

Huawei Technologies and Polish operator Exatel have tested a next-generation optical network based on WDM (Wavelength Division Multiplexing) technology and capable of 400Gbps throughput.

More data traffic and the need for greater transmission speed in both fixed and wireless networks have consequences for all parts of operator networks. While faster versions of technologies such as LTE are being rolled out at the edge of networks, vendors are working on improving WDM (Wavelength-Division Multiplexing) to help them keep up at the core.

WDM sends large amounts of data using a number of different wavelengths, or channels, over a single optical fiber.

However, the test conducted by Huawei and Exatel only used one channel to send the data, which has its advantages, according to Huawei. It means the system only needs one optical transceiver, which is used to both send and receive data. That, in turn, results in lower power consumption and a smaller chance that something may go wrong, it said.
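
As a back-of-the-envelope illustration of what channel count and per-channel rate mean for aggregate capacity, here is a small Python sketch; the 80-channel figure is illustrative and not from the article, while the 400Gbps single-carrier number is the one from this test:

def fiber_capacity_gbps(channels: int, per_channel_gbps: float) -> float:
    """Aggregate WDM capacity: number of channels times per-channel rate."""
    return channels * per_channel_gbps

# A conventional design might multiplex many 100Gbps channels onto one fiber...
print(fiber_capacity_gbps(channels=80, per_channel_gbps=100))   # 8000.0

# ...whereas this test carried 400Gbps on a single channel, i.e. one
# transceiver doing the work of four 100Gbps ones.
print(fiber_capacity_gbps(channels=1, per_channel_gbps=400))    # 400.0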

Huawei didn’t say when it expects to include the technology in commercial products.

Currently, operators are upgrading their networks to include 100Gbps links. That pushed third-quarter spending on optical networks in North America up 13.4 percent year-over-year, following an 11.1 percent increase in the previous quarter, according to Infonetics Research. Huawei, Ciena, and Alcatel-Lucent were the WDM market share leaders, it said.

Source:  networkworld.com

Critics: NSA agent co-chairing key crypto standards body should be removed

December 23rd, 2013

There’s an elephant in the room at the Internet Engineering Task Force.

Security experts are calling for the removal of a National Security Agency employee who co-chairs an influential cryptography panel, which advises a host of groups that forge widely used standards for the Internet Engineering Task Force (IETF).

Kevin Igoe, who in a 2011 e-mail announcing his appointment was listed as a senior cryptographer with the NSA’s Commercial Solutions Center, is one of two co-chairs of the IETF’s Crypto Forum Research Group (CFRG). The CFRG provides cryptographic guidance to IETF working groups that develop standards for a variety of crucial technologies that run and help secure the Internet. Two examples are the transport layer security (TLS) protocol that underpins Web encryption and the standards for secure shell connections used to access servers securely. Igoe has been CFRG co-chair for about two years, along with David A. McGrew of Cisco Systems.

Igoe’s leadership had largely gone unnoticed until reports surfaced in September that exposed the role NSA agents have played in “deliberately weakening the international encryption standards adopted by developers.” Until now, most of the resulting attention has focused on cryptographic protocols endorsed by the separate National Institute of Standards and Technology. More specifically, scrutiny has centered on a random number generator that The New York Times, citing a document leaked by former NSA contractor Edward Snowden, reported may contain a backdoor engineered by the spy agency.

Enter Dragonfly

Less visibly, the revelations about the NSA’s influence over crypto standards have also renewed suspicions about the agency’s role in the IETF. To wit: they have brought new urgency to long-simmering criticism claiming that the CFRG was advocating the addition of a highly unproven technology dubbed “Dragonfly” to the TLS technology websites use to provide HTTPS encryption. Despite a lack of consensus about the security of Dragonfly, Igoe continued to champion it, critics said, citing several e-mails Igoe sent over the past two years. Combined with his ties to the NSA, Igoe’s continued backing of Dragonfly is creating a lack of confidence in his leadership, critics said.

“Kevin’s NSA affiliation raises unpleasant but unavoidable questions regarding these actions,” Trevor Perrin, a crypto expert and one of the most vocal critics, wrote Friday in an e-mail to the CFRG mailing list. “It’s entirely possible these are just mistakes by a novice chair who lacks experience in a particular sort of protocol and is being pressured by IETF participants to endorse something. But it’s hard to escape an impression of carelessness and unseriousness in Kevin’s work. One wonders whether the NSA is happy to preside over this sort of sloppy crypto design.”

Igoe and McGrew didn’t respond to an e-mail seeking comment. This article will be updated if they respond later.

Like the Dual_EC_DRBG standard adopted by NIST and now widely suspected to contain a backdoor, Dragonfly came with no security proof. And unlike several other better-known candidates for “password-authenticated key exchange” (PAKE), most people participating in the CFRG or the TLS working group knew little or nothing about it. TLS already has an existing PAKE called SRP, which critics say makes Dragonfly particularly redundant. PAKEs are complex and still not widely understood by crypto novices, but in essence, they involve the use of passwords to negotiate cryptographic keys used in encrypted TLS communications between servers and end users.
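
For readers unfamiliar with the idea, the toy Python sketch below shows the general shape of a PAKE: both ends derive a Diffie-Hellman base from the shared password, so the password itself never crosses the wire. It is a deliberately simplified, SPEKE-style illustration, not Dragonfly, and it is not secure as written (toy group, no confirmation step, no side-channel protections):

# Toy SPEKE-style PAKE sketch; illustrative only, NOT secure, NOT Dragonfly.
import hashlib
import secrets

P = 2**521 - 1   # a known Mersenne prime; real systems use standardized groups

def password_element(password: str) -> int:
    """Hash the password into the group and square it to form the DH base."""
    digest = hashlib.sha256(password.encode()).digest()
    return pow(int.from_bytes(digest, "big"), 2, P)

def dh_pair(base: int):
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(base, priv, P)

g = password_element("correct horse battery staple")
a_priv, a_pub = dh_pair(g)   # client side
b_priv, b_pub = dh_pair(g)   # server side

client_key = pow(b_pub, a_priv, P)
server_key = pow(a_pub, b_priv, P)
assert client_key == server_key   # both ends arrive at the same session key

Dragonfly itself derives its password-based element through a different, looping construction, which is precisely the part of the design criticized later in this article.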

Update: Dragonfly developer Dan Harkins strongly defended the security of the PAKE.

“There are no known security vulnerabilities with dragonfly,” he wrote in an e-mail after this article was first published. “But it does not have a formal security proof to accompany it, unlike some other PAKE schemes. So the TLS working group asked the CFRG to look at it. They were not asked to ‘approve’ it, and they weren’t asked to ‘bless’ it. Just take a look and see if there’s any problems that would make it unsuitable for TLS. There were comments received on the protocol and they were addressed. There were no issues found that make it unsuitable for TLS.”

Harkins also took issue with characterizations by critics and this Ars article that Dragonfly is “untested” and “highly unproven.” He said it’s used in the 802.11 Wi-Fi standard as a secure, drop-in replacement for the WPA-PSK security protocol. It’s also found as a method in the Extensible Authentication Protocol and as an alternative to pre-shared keys in the Internet Key Exchange protocol.

“Do you know of another PAKE scheme that has been so widely applied?” he wrote in his response.

Perrin is a programmer who primarily develops cryptographic applications. He is the developer or co-developer of several proposed Internet standards, including trust assertions for certificate keys and the asynchronous protocol for secure e-mail. In Friday’s e-mail, he provided a raft of reasons why he said Igoe should step down:

1) Kevin has provided the *ONLY* positive feedback for Dragonfly that can be found on the CFRG mailing list or meeting minutes. The contrast between Kevin’s enthusiasm and the group’s skepticism is striking [CFRG_SUMMARY]. It’s unclear what this enthusiasm is based on. There’s no record of Kevin making any effort to understand Dragonfly’s unusual structure, compare it to alternatives, consider possible use cases, or construct a formal security analysis.

2) Twice Kevin suggested a technique for deriving the Dragonfly password-based element which would make the protocol easy to break [IGOE_1, IGOE_2]. He also endorsed an ineffective attempt to avoid timing attacks by adding extra iterations to one of the loops [IGOE_3, IGOE_4]. These are surprising mistakes from an experienced cryptographer.

3) Kevin’s approval of Dragonfly to the TLS WG misrepresented CFRG consensus, which was skeptical of Dragonfly [CFRG_SUMMARY].

Perrin’s motion has been seconded by several other participants, including cryptographer William Whyte. Another critic supporting Igoe’s removal called on security expert Bruce Schneier to replace Igoe. In an e-mail to Ars, Schneier said he is unsure if he is a suitable candidate. “I’m probably too busy to chair, and I’m not really good at the whole ‘organizing a bunch of people’ thing,” he wrote.

In Harkins’ 1,117-word response, he wrote:

The opposition to it in TLS is not “long-simmering” as alleged in the article. It is very recent and the most vocal critic actually didn’t say anything until _after_ the close of Working Group Last Call (a state of draft development on the way to RFC status). As part of his critique, Trevor Perrin has noted that dragonfly has no security proof. That’s true and it’s certainly not new. Having a formal proof has never been a requirement in the past and it is not a requirement today. He has continued to refer to the comments received about the draft as if they are signs of flaws. This is especially shocking given he is referred to in the article as “the developer or co-developer of several proposed Internet standards.” Someone who develops, or co-develops Internet Standards knows how the sausage making works. Comments are made, comments are addressed. There has, to my knowledge, never been an Internet Draft that’s perfect in it’s -00 revision and went straight to publication as an RFC. His criticism is particularly mendacious.

Trevor Perrin has also pointed out the technique in which dragonfly generates a password-based element as being flawed. The technique was the result of a 2 year old thread on the TLS list on how to address a possible side-channel attack. Trevor doesn’t like it, which is fair, but on the TLS mailing list he has also said that even if it was changed to a way he wants he would still be against dragonfly.

Anyone who has spent any time at all watching how standards bodies churn out the sausage knows that suspicions and vast conspiracy theories are almost always a part of the proceedings. But in a post-Snowden world, there’s new legitimacy to criticism about NSA involvement, particularly when employees of the agency are the ones actively shepherding untested proposals.

Source:  arstechnica.com

Computers share their secrets if you listen

December 20th, 2013

Be afraid, friends, for science has given us a new way in which to circumvent some of the strongest encryption algorithms used to protect our data — and no, it’s not some super secret government method, either. Researchers from Tel Aviv University and the Weizmann Institute of Science discovered that they could steal even the largest, most secure RSA 4096-bit encryption keys simply by listening to a laptop as it decrypts data.

To accomplish the trick, the researchers used a microphone to record the noises made by the computer, then ran that audio through filters to isolate the vibrations made by the electronic internals during the decryption process. With that accomplished, some cryptanalysis revealed the encryption key in around an hour. Because the vibrations in question are so small, however, you need to have a high-powered mic or be recording them from close proximity. The researchers found that by using a highly sensitive parabolic microphone, they could record what they needed from around 13 feet away, but could also get the required audio by placing a regular smartphone within a foot of the laptop. Additionally, it turns out they could get the same information from certain computers by recording their electrical ground potential as it fluctuates during the decryption process.
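
The front end of the technique is ordinary signal processing: isolate the narrow acoustic band where the interesting emanations live before attempting any cryptanalysis. Here is a hypothetical Python sketch of just that filtering step; the band edges and the synthetic “recording” are made up for illustration, and this on its own recovers no keys:

# Band-pass a noisy recording around an assumed band of interest.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 192_000                          # sample rate of the recording, in Hz
t = np.arange(0, 1.0, 1 / fs)

# Synthetic stand-in for a laptop recording: a faint 35 kHz component in noise.
recording = 0.01 * np.sin(2 * np.pi * 35_000 * t) + np.random.normal(0, 1, t.size)

b, a = butter(4, [30_000, 40_000], btype="bandpass", fs=fs)
isolated = filtfilt(b, a, recording)

print(f"RMS before: {np.sqrt(np.mean(recording**2)):.3f}, "
      f"after: {np.sqrt(np.mean(isolated**2)):.3f}")

The published attack then applies cryptanalysis to features of the filtered signal, which is where the roughly hour-long analysis described above comes in.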

Of course, the researchers only cracked one kind of RSA encryption, but they said that there’s no reason why the same method wouldn’t work on others; they’d just have to start all over to identify the specific sounds produced by each new piece of encryption software. Guess this just goes to prove that while digital security is great, it can be rendered useless without its physical counterpart. So, should you be among the tin-foil-hat crowd convinced that everyone around you is a potential spy waiting to steal your data, you’re welcome for this newest bit of food for your paranoid thoughts.

Source:  engadget.com

New modulation scheme said to be ‘breakthrough’ in network performance

December 20th, 2013

A startup plans to demonstrate next month a new digital modulation scheme that promises to dramatically boost bandwidth, capacity, and range, with less power and less distortion, on both wireless and wired networks.

MagnaCom, a privately held company based in Israel, now has more than 70 global patent applications and 15 issued patents in the U.S. for what it calls, and has trademarked, Wave Modulation (or WAM). The technology is designed to replace the long-dominant quadrature amplitude modulation (QAM) used in almost every wired or wireless product today on cellular, microwave radio, Wi-Fi, satellite and cable TV, and optical fiber networks. The company revealed today that it plans to demonstrate WAM at the Consumer Electronics Show, Jan. 7-10, in Las Vegas.

The vendor, which has released few specifics about WAM, promises extravagant benefits: up to 10 decibels of additional gain compared to the most advanced QAM schemes today; up to 50 percent less power; up to 400 percent more distance; and up to 50 percent spectrum savings. WAM tolerates noise or interference better, has lower costs, is 100 percent backward compatible with existing QAM-based systems, and can simply be swapped in for QAM technology without additional changes to other components, the company says.

Modulation is a way of conveying data by changing some aspect of a carrier signal (sometimes called a carrier wave). A very imperfect analogy is covering a lamp with your hand to change the light beam into a series of long and short pulses, conveying information based on Morse code.

QAM, which is both an analog and a digital modulation scheme, “conveys two analog message signals, or two digital bit streams, by changing the amplitudes of two carrier waves,” as the Wikipedia entry explains. It’s used in Wi-Fi, microwave backhaul, optical fiber systems, digital cable television and many other communications systems. Without going into the technical details, you can make QAM more efficient or denser. For example, nearly all Wi-Fi radios today use 64-QAM. But 802.11ac radios can use 256-QAM. In practical terms, that change boosts the data rate by about 33 percent.
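
The arithmetic behind that 33 percent figure is straightforward: each QAM symbol carries log2(M) bits, so 64-QAM carries 6 bits per symbol and 256-QAM carries 8, all else (symbol rate, coding) being equal. A quick check in Python:

from math import log2

bits_64qam = log2(64)     # 6 bits per symbol
bits_256qam = log2(256)   # 8 bits per symbol

increase = (bits_256qam - bits_64qam) / bits_64qam
print(f"{increase:.0%}")  # 33%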

But there are tradeoffs. The denser the QAM scheme, the more vulnerable it is to electronic “noise.” And amplifying a denser QAM signal requires bigger, more powerful amplifiers; running them at higher power is a drawback in itself, and it also introduces more distortion.

MagnaCom claims that WAM modulation delivers vastly greater performance and efficiencies than current QAM technology, while minimizing if not eliminating the drawbacks. But so far, it’s not saying how WAM actually does that.

“It could be a breakthrough, but the company has not revealed all that’s needed to assure the world of that,” says Will Straus, president of Forward Concepts, a market research firm that focuses on digital signal processing, cell phone chips, wireless communications and related markets. “Even if the technology proves in, it will take many years to displace QAM that’s already in all digital communications. That’s why only bounded applications — where WAM can be [installed] at both ends – will be the initial market.”

“There are some huge claims here,” says Earl Lum, founder of EJL Wireless, a market research firm that focuses on microwave backhaul, cellular base station, and related markets. “They’re not going into exactly how they’re doing this, so it’s really tough to say that this technology is really working.”

Lum, who originally worked as an RF design engineer before switching to wireless industry equities research on Wall Street, elaborated on two of those claims: WAM’s greater distance and its improved spectral efficiency.

“Usually as you go higher in modulation, the distance shrinks: it’s inversely proportional,” he explains. “So the 400 percent increase in distance is significant. If they can compensate and still get high spectral efficiency and keep the distance long, that’s what everyone is trying to have.”

The spectrum savings of up to 50 percent is important, too. “You might be able to double the amount of channels compared to what you have now,” Lum says. “If you can cram more channels into that same spectrum, you don’t have to buy more [spectrum] licenses. That’s significant in terms of how many bits-per-hertz you can realize. But, again, they haven’t specified how they do this.”

According to MagnaCom, WAM uses some kind of spectral compression to improve spectral efficiency. WAM can simply be substituted for existing QAM technology in any product design. Some of WAM’s features should result in simpler transmitter designs that are less expensive and use less power.

For the CES demonstration next month, MagnaCom has partnered with Altera Corp., which provides custom field programmable gate arrays, ASICs and other custom logic solutions.

Source:  networkworld.com