Archive for the ‘Database’ Category

Change your passwords: Comcast hushes, minimizes serious hack

Tuesday, February 11th, 2014

Are you a Comcast customer? Please change your password.

On February 6, NullCrew FTS hacked into at least 34 of Comcast’s servers and published on Pastebin a list of the company’s mail servers, along with a link to the root file containing the vulnerability it used to penetrate the system.

Comcast, the largest internet service provider in the United States, ignored press and media reports of the serious breach for over 24 hours — only when the Pastebin page was removed did the company issue a statement, and even then, it only spoke to a sympathetic B2B outlet.

During those 24 hours, Comcast stayed silent, and the veritable “keys to the kingdom” sat out in the open internet, ripe for the taking by any malicious entity with a little know-how around mail servers and an interest in selling or exploiting customer data.

Comcast customers have not been told to reset their passwords. But they should.

Once NullCrew FTS openly hacked at least 34 Comcast mail servers and the recipe was publicly posted, the servers began to take a beating. Customers in Comcast’s janky, hard-to-find, 1996-style forums knew something was wrong, and forum posts reflected the slowness, the up-and-down servers, and the eventual crashing.

The telecom giant ignored press requests for comment and released a limited statement on February 7 — to the Comcast-friendly broadband and B2B outlet Multichannel News.

The day-late statement failed to impress the few who saw it, and was criticized for its minimizing language and weak attempt to suggest that the breach had been unsuccessful.

From Comcast’s statement on Multichannel’s post No Evidence That Personal Sub Info Obtained By Mail Server Hack:

Comcast said it is investigating a claim by a hacker group that says it has broken into a batch of the MSO’s email servers, but believes that no personal subscriber data was obtained as a result.

“We’re aware of the situation and are aggressively investigating it,” a Comcast spokesman said. “We take our customers’ privacy and security very seriously, and we currently have no evidence to suggest any personal customer information was obtained in this incident.”

Not only is there a high probability that customer information was exposed — because direct access was provided to the public for 24 hours — but the vulnerability exploited by the attackers was disclosed and fixed in December 2013.

Just not by Comcast, apparently.

Vulnerability reported December 2013, not patched by Comcast

NullCrew FTS used the unpatched security vulnerability CVE-2013-7091 to open what was essentially an unlocked door, giving anyone access to usernames, passwords, and other sensitive details on Comcast’s servers.

NullCrew FTS used a Local File Inclusion (LFI) exploit to gain access to the Zimbra LDAP and MySQL database — which houses the usernames and passwords of Comcast ISP users.
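For readers unfamiliar with the technique, here is a minimal sketch, in Node.js-style JavaScript, of how an LFI hole of this general class works. It is illustrative only, not Zimbra’s actual code:

var path = require('path');

// LFI bugs of this class arise when a request parameter is joined onto a
// filesystem path without normalization, so "../" sequences can escape the
// intended directory.
function resolveSkinFile(baseDir, userSupplied) {
  var resolved = path.resolve(baseDir, userSupplied);
  // The fix: reject any path that escapes baseDir after normalization.
  if (resolved.indexOf(baseDir + path.sep) !== 0) {
    throw new Error('path traversal blocked');
  }
  return resolved;
}

// A vulnerable server skips that check, so a parameter such as
// "../../../conf/localconfig.xml" serves configuration files, and any
// credentials inside them, straight off the disk.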

“Fun Fact: 34 Comcast mail servers are victims to one exploit,” tweeted NullCrew FTS.

If you are a Comcast customer, you are at risk: All Comcast internet service includes a master email address.

Even if a customer doesn’t use Comcast’s Xfinity mail service, every Comcast ISP user has a master email account with which to manage their services, and it is accessible through a “Zimbra” webmail site.

This account is used to access payment information, email settings, user account creation and settings, and any purchases from Comcast’s store or among its services.

With access to this master email address, someone can give up to six “household members” access to the Comcast account.

NullCrew taunted Comcast on Twitter, then posted the data on Pastebin and taunted the company a little bit more.

Because there were “no passwords” on the Pastebin, some observers believed — incorrectly — that there was no serious risk for exploitation of sensitive customer information.

NullCrew FTS: 2 — big telecoms: 0

On the first weekend of February 2014, NullCrew FTS took credit for a valid hack against telecom provider Bell Canada.

In the first strike of what looks like it’ll be a very successful campaign to cause pain and humiliation to big telecoms, NullCrew FTS accessed and exposed more than 22,000 usernames and passwords, and some credit card numbers belonging to the phone company’s small business customers.

Establishing a signature game of cat and mouse with clueless support staff, NullCrew FTS contacted Bell customer support two weeks before its disclosure.

Like Comcast’s robotic customer service responses to NullCrew FTS on Twitter, Bell’s support staff either didn’t know how to report the security incident upstream, had no idea what a hacking event was, or didn’t take the threat seriously.

Bell also tried to play fast and loose with its accountability in the security smash and grab; it acknowledged the breach soon after, but blamed it on an Ottawa-based third-party supplier.

However, NullCrew FTS had flagged the company’s insecurities in mid-January, publicly posting the warning the hackers had issued to a company support representative about the vulnerabilities.

NullCrew FTS followed up with Bell by posting a Pastebin link on Twitter with unredacted data.

Excerpt from zdnet.com

SaaS predictions for 2014

Friday, December 27th, 2013

While the bulk of enterprise software is still deployed on-premises, SaaS (software as a service) continues to undergo rapid growth. Gartner has said the total market will top $22 billion through 2015, up from more than $14 billion in 2012.

The SaaS market will likely see significant changes and new trends in 2014 as vendors jockey for competitive position and customers continue shifting their IT strategies toward the deployment model. Here’s a look at some of the possibilities.

The matter of multitenancy: SaaS vendors such as Salesforce.com have long touted the benefits of multitenancy, a software architecture where many customers share a single application instance, with their information kept separate. Multitenancy allows vendors to patch and update many customers at once and get more mileage out of the underlying infrastructure, thereby cutting costs and easing management.
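To make the idea concrete, here is a minimal sketch of application-level multitenancy, with an invented schema and a node-postgres-style query call; it is not any vendor’s actual code:

// Every row carries a tenant_id, and the data layer injects it into each
// query, so customers sharing one application instance never see each
// other's rows.
function accountsForTenant(db, tenantId) {
  // Parameterized query: tenant isolation is enforced here, not by callers.
  return db.query(
    'SELECT id, name FROM accounts WHERE tenant_id = $1',
    [tenantId]
  );
}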

This year, however, other variations on multitenancy emerged, such as one offered by Oracle’s new 12c database. An option for the release allows customers to host many “pluggable” databases within a single host database, an approach that Oracle says is more secure than the application-level multitenancy used by Salesforce.com and others.

Salesforce.com itself has made a shift away from its original definition of multitenancy. During November’s Dreamforce conference, CEO Marc Benioff announced a partnership with Hewlett-Packard around a new “Superpod” option for large enterprises, wherein companies can have their own dedicated infrastructure inside Salesforce.com data centers based on HP’s Converged Infrastructure hardware.

Some might say this approach has little distinction from traditional application hosting. Overall, in 2014 expect multitenancy to fade away as a major talking point for SaaS.

Hybrid SaaS: Oracle has made much of the fact that its Fusion Applications can be deployed either on-premises or from its cloud, but due to the apparent complexity of the first option, most initial Fusion customers have chosen SaaS.

Still, the concept of application code bases that are movable between the two deployment models could become more popular in 2014.

While there’s no indication Salesforce.com will offer an on-premises option — and indeed, such a thing seems almost inconceivable considering the company’s “No Software” logo and marketing campaign around the convenience of SaaS — the HP partnership is clearly meant to give big companies that still have jitters about traditional SaaS a happy medium.

As in all cases, customer demand will dictate SaaS vendors’ next moves.

Geographic depth: It was no accident that Oracle co-President Mark Hurd mentioned during the company’s recent earnings call that it now has 17 data centers around the world. Vendors want enterprise customers to know their SaaS offerings are built for disaster recovery and are broadly available.

Expect “a flurry of announcements” in 2014 from SaaS vendors regarding data center openings around the world, said China Martens, an independent business applications analyst, via email. “This is another move likely to benefit end-user firms. Some firms at present may not be able to proceed with a regional or global rollout of SaaS apps because of a lack of local data center support, which may be mandated by national data storage or privacy laws.”

Keeping customers happy: On-premises software vendors such as Oracle and SAP are now honing their knowledge of something SaaS vendors such as NetSuite and Salesforce.com had to learn years earlier: How to run a software business based on annual subscriptions, not perpetual software licenses and annual maintenance.

The latter model provides companies with big one-time payments followed by highly profitable support fees. With SaaS, the money flows into a vendor’s coffers in a much different manner, and it’s arguably also easier for dissatisfied customers to move to a rival product compared to an on-premises deployment.

As a result, SaaS vendors have suffered from “churn,” or customer turnover. In 2014, there will be increased focus on ways to keep customers happy and in the fold, according to Karan Mehandru, general partner at venture capital firm Trinity Ventures.

Next year “will further awareness that the purchase of software by a customer is not the end of the transaction but rather the beginning of a relationship that lasts for years,” he wrote in a recent blog post. “Customer service and success will be at the forefront of the customer relationship management process where terms like retention, upsells and churn reduction get more air time in board meetings and management sessions than ever before.”

Consolidation in marketing, HCM: Expect a higher pace of merger and acquisition activity in the SaaS market “as vendors buy up their competitors and partners,” Martens said.

HCM (human capital management) and marketing software companies may particularly find themselves being courted. Oracle, SAP and Salesforce.com have all invested heavily in these areas already, but the likes of IBM and HP may also feel the need to get in the game.

A less likely scenario would be a major merger between SaaS vendors, such as Salesforce.com and Workday.

SaaS goes vertical: “There will be more stratification of SaaS apps as vendors build or buy with the aim of appealing to particular types of end-user firms,” Martens said. “In particular, vendors will either continue to build on early industry versions of their apps and/or launch SaaS apps specifically tailored to particular verticals, e.g., healthcare, manufacturing, retail.”

However, customers will be burdened with figuring out just how deep the industry-specific features in these applications are, as well as gauging how committed the vendor is to the particular market, Martens added.

Can’t have SaaS without a PaaS: Salesforce.com threw down the gauntlet to its rivals in November, announcing Salesforce1, a revamped version of its PaaS (platform as a service) that couples its original Force.com offering with tools from its Heroku and ExactTarget acquisitions, a new mobile application, and 10 times as many APIs (application programming interfaces) as before.

A PaaS serves as a force multiplier for SaaS companies, creating a pool of developers and systems integrators who create add-on applications and provide services to customers while sharing an interest in the vendor’s success.

Oracle, SAP and other SaaS vendors have been building out their PaaS offerings and will make plenty of noise about them next year.

Source:  cio.com

U.S. Dept. of Energy reports second security breach

Friday, August 16th, 2013

For the second time this year, the U.S. Department of Energy is recovering from a data breach involving the personally identifying information of federal employees.

In a letter sent to employees on Wednesday, the U.S. Department of Energy (DOE) disclosed a security incident that resulted in the exposure of personally identifying information (PII) to unauthorized individuals. This is the second time this year such a breach has occurred. The letter, obtained by the Wall Street Journal, doesn’t identify the root cause of the incident or provide much detail, other than the fact that no classified data was lost.

“The Department of Energy has confirmed a recent cyber incident that occurred at the end of July and resulted in the unauthorized disclosure of federal employee Personally Identifiable Information (PII)…We believe about 14,000 past and current DOE employees PII may have been affected,” the letter states in part.

Back in February, the DOE disclosed a similar incident where PII was lost. In addition, that incident also included the compromise of 14 servers and 20 workstations. At the time, officials blamed Chinese hackers, but two weeks earlier a group calling itself Parastoo (a common girl’s name in Farsi) claimed it was behind the breach, posting data allegedly taken from a DOE webserver (including a copy of /etc/passwd and Apache config files) as proof.

In this most recent case, the motive behind the attack may be something simple, such as data harvesting, since PII is rather valuable to criminals. Or it may be something else entirely.

“In some cases, attackers target information about employees because they can use that information to impersonate those employees in spear phishing attacks or compromise their access credentials,” Tom Cross, director of security research at Lancope, told CSO in an email.

“Sometimes, the attackers log right in using employees’ access credentials and then proceed to access information on the network without using any custom malware. A defensive strategy that focuses exclusively on detecting exploits and malware cannot detect this sort of unauthorized activity.”

In related news, defense contractor Northrop Grumman disclosed a similar data breach, involving the loss of PII related to employees who applied to the Balkans Linguist Support Program.

According to the notification letter, Northrop says the breach, which occurred between late November 2012 and May 2013, targeted a database housing applicant and participant data for the program. The data that was exposed includes names, dates of birth, blood types, Social Security numbers, other government-issued identification numbers, and contact information.

Source:  csoonline.com

Network Solutions reports MySQL hiccups following attacks

Tuesday, July 23rd, 2013

Network Solutions warned on Monday of latency problems for customers using MySQL databases just a week after the hosting company fended off distributed denial-of-service (DDoS) attacks.

“Some hosting customers using MySQL are reporting issues with the speed with which their websites are resolving,” the company wrote on Facebook. “Some sites are loading slowly; others are not resolving. We’re aware of the issue, and our technology team is working on it now.”

Network Solutions, which is owned by Web.com, registers domain names, offers hosting services, sells SSL certificates and provides other website-related administration services.

On July 17, Network Solutions said it came under a DDoS attack that caused many of the websites it hosts to not resolve. The company said later in the day that most of the problems had been fixed, and it apologized two days later.

“Because online security is our top priority, we continue to invest millions of dollars in frontline and mitigation solutions to help us identify and eliminate potential threats,” it said.

Some customers, however, reported problems before Network Solutions acknowledged the cyberattacks. One customer, who wrote to IDG News Service before Network Solutions issued the MySQL warning, said he had problems publishing a website on July 16, before the DDoS attacks are believed to have started.

Several other customers who commented on the company’s Facebook page reported problems going back to a scheduled maintenance period announced on July 5. The company warned customers they might experience service interruptions between 10 p.m. EST on July 5 and 7 a.m. the next morning.

Donna Marian, an artist who creates macabre dolls, wrote on the company’s Facebook page on Monday that her site was down for five days.

“I have been with you 13 years and have not got one word about this issue that has and is still costing my business thousands of dollars,” Marian wrote. “Will you be reimbursing me for my losses?”

Company officials could not be immediately reached for comment.

Source:  infoworld.com

With universities under attack, security experts talk best defenses

Thursday, July 18th, 2013

Faced with millions of hacking attempts a week, U.S. research universities’ best option is to segment information, so a balance can be struck between security and the need for an open network, experts say.

Universities are struggling to tighten security without destroying the culture of openness that is so important for information sharing among researchers in and outside the institutions, The New York Times reported on Wednesday.

Universities have become a major target of hackers looking to steal highly valuable research that is the backbone of thousands of patents awarded to the schools each year, the newspaper said. The research spans a wide variety of fields, ranging from drugs and computer chips to military weapons and medical devices.

Like U.S. corporations, universities are battling hackers who are believed to be mostly from China. However, the schools are in the unusual position of having to protect valuable data while maintaining an open network.

“It is a unique problem for universities,” said Nick Bennett, a security consultant for Mandiant.

Experts agree that the schools should audit all the information they hold, including research data and student and employee personal information; categorize it all and then decide the level of security needed. The extent of the protection should depend on the damage that could result if the data is stolen.

The most sensitive information, such as research related to national security, should be taken off the Internet and accessible only through university-approved computers on campus.

“[That way] you can still maintain somewhat of an open culture university wide, while still protecting the crown jewels,” Bennett said.

For less sensitive data, there’s more flexibility, experts say. Some information may only need additional access controls, such as two-factor authentication. Other data could also be wrapped in intrusion detection technology.
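For the curious, the second factor usually boils down to a time-based one-time password. Here is a minimal Node.js sketch of the standard algorithm (RFC 6238), with illustrative parameters:

var crypto = require('crypto');

// Time-based one-time password (RFC 6238), SHA-1 variant. The shared secret
// is provisioned once to the user's phone app or hardware token.
function totp(secret, stepSeconds, digits) {
  var counter = Math.floor(Date.now() / 1000 / stepSeconds);
  var msg = new Buffer(8);
  msg.fill(0);
  msg.writeUInt32BE(counter, 4); // high 32 bits stay zero until the year 2106
  var hmac = crypto.createHmac('sha1', secret).update(msg).digest();
  var offset = hmac[hmac.length - 1] & 0x0f; // dynamic truncation (RFC 4226)
  var code = ((hmac[offset] & 0x7f) << 24) |
             (hmac[offset + 1] << 16) |
             (hmac[offset + 2] << 8) |
             hmac[offset + 3];
  var str = String(code % Math.pow(10, digits));
  while (str.length < digits) str = '0' + str; // left-pad to a fixed width
  return str;
}

// A login handler would compare totp(userSecret, 30, 6) against the code the
// user typed in, in addition to checking the password.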

Universities tend to have many silos of data stored within individual schools and centers on campus. Oftentimes, the information is left up to the individual entities to protect, which can have disastrous results.

In an incident he called “industrial strength stupid,” Kevin Coleman, a cyberterrorism expert at Technolytics Institute, said he knew of one university where researchers set up their own server on the school’s network and connected it to the Internet without a firewall, antivirus software or intrusion detection capabilities.

“That action exposed much more than just that research initiative,” he said.

An alternative is for universities to follow a more corporate model, where a single department is responsible for setting and upholding standards across the organization, said Brandon Knight, a senior consultant for SecureState.

If such a top-down approach is impossible, then the various groups should have a way to share information on security and to collaborate whenever possible.

“When you see people implement their own security and reinvent the wheel and do this in a vacuum, it leads to problems,” Knight said. “People obviously want to do the best, but they don’t always know what they’re doing and they may not have the resources.”

The sophistication of hackers engaged in cyberespionage means they are likely to breach any organization’s security eventually. In those cases, the best defense is to have technology that prevents intruders from obtaining credentials to access internal systems, a strategy called “defense in depth.”

“Even if an attacker is able to get access to a few systems in your environment, there are still additional security controls in place preventing them from escalating their privileges and moving laterally to other sensitive systems,” Bennett said.

Many of the above suggestions are considered best practices in the security industry. But the basics go a long way to protecting computer systems.

“It doesn’t really matter if the attackers are from China, some other nation state or just hacktivists,” said Brent Huston, chief executive of MicroSolved. “Until [universities] get better at doing the basics right, they will continue to be hotbeds of attacker activity.”

Source:  cso.com

US agency baffled by modern technology, destroys mice to get rid of viruses

Tuesday, July 9th, 2013

The Economic Development Administration (EDA) is an agency in the Department of Commerce that promotes economic development in regions of the US suffering low growth, low employment, and other economic problems. In December 2011, the Department of Homeland Security notified both the EDA and the National Oceanic and Atmospheric Administration (NOAA) that there was a potential malware infection within the two agencies’ systems.

The NOAA isolated and cleaned up the problem within a few weeks.

The EDA, however, responded by cutting its systems off from the rest of the world—disabling its enterprise e-mail system and leaving its regional offices no way of accessing centrally-held databases.

It then brought in an outside security contractor to look for malware and to provide assurances not only that EDA’s systems were clean, but also that they were impregnable against malware. The contractor, after some initial false positives, declared the systems largely clean but was unable to provide that guarantee. Malware was found on six systems, but it was easily repaired by reimaging the affected machines.

EDA’s CIO, fearing that the agency was under attack from a nation-state, insisted instead on a policy of physical destruction. The EDA destroyed not only (uninfected) desktop computers but also printers, cameras, keyboards, and even mice. The destruction only stopped—sparing $3 million of equipment—because the agency had run out of money to pay for destroying the hardware.

The total cost to the taxpayer of this incident was $2.7 million: $823,000 went to the security contractor for its investigation and advice, $1,061,000 for the acquisition of temporary infrastructure (requisitioned from the Census Bureau), $4,300 to destroy $170,500 in IT equipment, and $688,000 paid to contractors to assist in developing a long-term response. Full recovery took close to a year.

The full grim story was detailed in a Department of Commerce audit released last month and subsequently reported by Federal News Radio.

The EDA’s overreaction is, well, a little alarming. Although not entirely to blame — the Department of Commerce’s initial communication with EDA grossly overstated the severity of the problem (though it corrected the error the following day) — the EDA systematically reacted in the worst possible way. The agency demonstrated serious technical misunderstandings — it shut down its e-mail servers because some of the e-mails on the servers contained malware, even though this posed no risk to the servers themselves — and a general sense of alarmism.

The malware that was found was common stuff. There were no signs of persistent, novel infections, nor any indications that the perpetrators were nation-states rather than common-or-garden untargeted criminal attacks. The audit does, however, note that the EDA’s IT infrastructure was so badly managed and insecure that no attacker would need sophisticated attacks to compromise the agency’s systems.

Source:  arstechnica.com

Calif. attorney general: Time to crack down on companies that don’t encrypt

Friday, July 5th, 2013

State’s first data breach report finds that more than 1.4 million residents’ data would have been safe had companies used encryption

If organizations throughout California encrypted their customers’ sensitive data, more than 1.4 million Californians would not have had their information put at risk in 2012, according to a newly released report [PDF] on statewide data breaches from California Attorney General Kamala Harris. All told, some 2.5 million people were affected by the 131 breaches reported to the state. Notably, organizations in the Golden State are only required to report a breach if it affects 500 or more users, so it’s plausible (if not likely) that the overall number of breaches is higher.

California does offer incentives to companies that embrace encryption, according to Harris, but because the carrot isn’t working, she’s now turning to the stick: She cautioned that her office “will make it an enforcement priority to investigate breaches involving unencrypted personal information” and will “encourage … law-enforcement agencies to similarly prioritize these investigations.”

California breachin’
According to the report simply titled “Data Breach Report 2012,” 103 different entities suffered data breaches in 2012, nine of which reported more than one. Three of the entities reporting multiple breaches were payment card issuers: American Express with 19, Discover Financial Services with three, and Yolo Federal Credit Union with two. Those breaches occurred either at a merchant or at a payment processor.

Other key stats from the report:

  • The average breach incident involved the information of 22,500 individuals.
  • The retail industry reported the most data breaches in 2012: 34 (26 percent of the total reported breaches), followed by finance and insurance with 30 (23 percent).
  • More than half of the breaches (56 percent) involved Social Security numbers.
  • Outsider intrusions accounted for 45 percent of the total incidents, with 23 percent occurring at a merchant via such techniques as skimming devices installed at point-of-sale terminals.
  • 10 percent of the breaches were caused by insiders — employees, contractors, vendors, customers — who accessed systems and data without authority.

Encryption and beyond
Beyond threatening greater scrutiny of companies that suffer data breaches but don’t use encryption, Harris recommended that the California Legislature consider enacting a law requiring organizations to use encryption to protect personal information.
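What “using encryption” means in practice varies, but a minimal sketch of encrypting a single PII field at rest, using AES-256-GCM from Node.js’s built-in crypto module, looks something like this (key management, the genuinely hard part, is omitted):

var crypto = require('crypto');

// Encrypt one PII field with AES-256-GCM. Key management (a hardware
// security module or key service, plus rotation) is omitted here.
function encryptField(key, plaintext) {
  var iv = crypto.randomBytes(12); // must be unique for every encryption
  var cipher = crypto.createCipheriv('aes-256-gcm', key, iv);
  var ciphertext = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
  // Store iv + auth tag + ciphertext together; all three are needed to
  // decrypt and to detect tampering.
  return Buffer.concat([iv, cipher.getAuthTag(), ciphertext]).toString('base64');
}

var key = crypto.randomBytes(32); // in practice, fetched from a key store
console.log(encryptField(key, '123-45-6789'));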

Additionally, the report called on organizations to review and tighten their security controls to protect personal information, including training of employees and contractors. “More than half of the breaches reported in 2012 … were the result of intentional access to data by outsiders or by unauthorized insiders,” the report says. “This suggests a need to review and strengthen security controls applied to personal information.”

The report further noted that organizations not only “have legal and moral obligations” to protect personal information, but California law requires businesses “to use reasonable and appropriate security procedures and practices to protect personal information.”

Suggested practices include using multifactor authentication to protect sensitive systems, having strong encryption to protect user IDs and passwords in storage, and providing regular training for employees, contractors, and other agents who handle personal information. “Many of the 17 percent of breaches that resulted from procedural failures were likely the result of ignorance of or noncompliance with organizational policies regarding email, data destruction, and website posting,” the report says.
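One caveat on that wording: stored passwords are best protected with a slow, salted hash rather than reversible encryption. Here is a minimal Node.js sketch using PBKDF2; the iteration count is illustrative and should be tuned as hardware gets faster:

var crypto = require('crypto');

// Hash a password with PBKDF2 and a random per-user salt.
function hashPassword(password) {
  var salt = crypto.randomBytes(16);
  var hash = crypto.pbkdf2Sync(password, salt, 100000, 32, 'sha256');
  return salt.toString('hex') + ':' + hash.toString('hex');
}

// Recompute the hash with the stored salt and compare.
function verifyPassword(password, stored) {
  var parts = stored.split(':');
  var salt = new Buffer(parts[0], 'hex');
  var hash = crypto.pbkdf2Sync(password, salt, 100000, 32, 'sha256');
  return hash.toString('hex') === parts[1];
}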

It also cites companies for making breach notices sent to customers too difficult to read. In reviewing sample notices, Harris’ office found that the average reading level of the breach notices submitted in 2012 was 14th grade. That’s “significantly higher than the average reading level in the U.S.” according to the National Assessment of Adult Literacy.

“Communications professionals can help in making the notice more accessible, using techniques like shorter sentences, familiar words and phrases, the active voice, and layout that supports clarity, such as headers for key points and smaller text blocks,” according to the report.

Additionally, the report called on companies to offer customers affected by data breaches mitigation products — such as credit monitoring — or information on security freezes. These types of protective measures can limit victims’ risk of identity theft, “yet in 29 percent of the breaches of this type, no credit monitoring or other mitigation product was offered to victims.”

Finally, the report recommended legislation to amend the state’s breach notification laws to require notification of breaches of online credentials, such as user name and password.

Source:  infoworld.com

Spamhaus hacking suspect ‘had mobile attack van’

Monday, April 29th, 2013

A Dutchman accused of mounting one of the biggest attacks on the internet used a “mobile computing office” in the back of a van.

The 35-year-old, identified by police as “SK”, was arrested last week.

He has been blamed for being behind “unprecedentedly serious attacks” on non-profit anti-spam watchdog Spamhaus.

Dutch, German, British and US police forces took part in the investigation leading to the arrest, Spanish authorities said.

The Spanish interior minister said SK was able to carry out network attacks from the back of a van that had been “equipped with various antennas to scan frequencies”.

He was apprehended in the city of Granollers, 20 miles (35km) north of Barcelona. It is expected that he will be extradited from Spain to be tried in the Netherlands.

‘Robust web hosting’

Police said that upon his arrest SK told them he belonged to the “Telecommunications and Foreign Affairs Ministry of the Republic of Cyberbunker”.

Cyberbunker is a company that says it offers highly secure and robust web hosting for any material except child pornography or terrorism-related activity.

Spamhaus is an organisation based in London and Geneva that aims to help email providers filter out spam and other unwanted content.

To do this, the group maintains a number of blocklists: databases of servers known to be used for malicious purposes.

Police alleged that SK co-ordinated an attack on Spamhaus in protest over its decision to add servers maintained by Cyberbunker to a spam blacklist.

Overwhelm server

Spanish police were alerted in March to large distributed-denial-of-service (DDoS) attacks originating in Spain but affecting servers in the UK, Netherlands and US.

DDoS attacks attempt to overwhelm a web server by sending it many more requests for data than it can handle.

A typical DDoS attack employs about 50 gigabits per second (Gbps) of traffic. At its peak the attack on Spamhaus hit 300Gbps.

In a statement in March, Cyberbunker “spokesman” Sven Kamphuis took exception to Spamhaus’s action, saying in messages sent to the press that it had no right to decide “what goes and does not go on the internet”.

Source:  BBC

PostgreSQL updates address high-risk vulnerability, other issues

Friday, April 5th, 2013

VMware also releases fixes for its PostgreSQL-based vFabric Postgres database product

The PostgreSQL developers released updates for all major branches of the popular open-source database system on Thursday in order to address several vulnerabilities, including a high-risk one that could allow attackers to crash the server, modify configuration variables as superuser or execute arbitrary code if certain conditions are met.

“This update fixes a high-exposure security vulnerability in versions 9.0 and later,” the PostgreSQL Global Development Group said in the release announcement. “All users of the affected versions are strongly urged to apply the update immediately.”

The high-risk vulnerability, identified as CVE-2013-1899, can be exploited by sending maliciously crafted connection requests to a targeted PostgreSQL server that include command-line switches specifying a database name beginning with the “-” character. Depending on the server’s configuration, a successful exploit can result in persistent denial of service, privilege escalation or arbitrary code execution.

The vulnerability can be exploited by a remote unauthenticated attacker to append error messages to files located in the PostgreSQL data directory. “Files corrupted in this way may cause the database server to crash, and to refuse to restart,” the PostgreSQL developers said in an advisory accompanying the new releases. “The database server can be fixed either by editing the files and removing the garbage text, or restoring from backup.”

Furthermore, if the attacker has access to a database user whose name is identical to a database name, he can leverage the vulnerability to temporarily modify a configuration variable with superuser privileges. If this condition is met and the attacker can also write files somewhere on the system — for example in the /tmp directory — he can exploit the vulnerability to load and execute arbitrary C code, the PostgreSQL developers said.

Systems that don’t restrict access to the PostgreSQL network port, which is common for PostgreSQL database servers running in public clouds, are especially vulnerable to these attacks.

The PostgreSQL developers advise server administrators to update their PostgreSQL installations to the newly released 9.0.13, 9.1.9 or 9.2.4 versions, and to block access to their database servers from untrusted networks. The 8.4 branch of PostgreSQL is not affected by CVE-2013-1899, but PostgreSQL 8.4.17 was also released to fix other issues.
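Blocking untrusted networks is a configuration matter rather than a code change. As a minimal illustration (the subnet is invented), a pg_hba.conf can admit only the application network and reject everything else:

# pg_hba.conf (illustrative): allow the app subnet, reject all other hosts
host    all    all    10.0.1.0/24    md5
host    all    all    0.0.0.0/0      reject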

All the new releases also include fixes for less serious security issues, including CVE-2013-1900, which could allow a database user to guess random numbers generated by contrib/pgcrypto functions, and CVE-2013-1901, which could allow an unprivileged user to run commands that could interfere with in-progress backups.

Two other security issues with the PostgreSQL graphical installers for Linux and Mac OS X have also been addressed. They allow the insecure passing of superuser passwords to a script (CVE-2013-1903) and the use of predictable filenames in /tmp (CVE-2013-1902), the PostgreSQL developers said.

As a result of the new PostgreSQL security updates, VMware also released fixes for its vFabric Postgres relational database product that’s optimized for virtual environments. The VMware updates are vFabric Postgres 9.2.4 and 9.1.9.

Source: infoworld.com

National Vulnerability Database taken down by vulnerability-exploiting hack

Thursday, March 14th, 2013

The federal government’s official catalog of software vulnerabilities was taken offline after administrators discovered two of its servers had been compromised. By malware. That exploited a software vulnerability.

The National Vulnerability Database is maintained by the National Institute of Standards and Technology and has been unavailable since late last week, according to an e-mail sent by NIST official Gail Porter published on Google+. At the time of this article on Thursday afternoon, the database remained down and there was no indication when service would be restored.

“On Friday March 8, a NIST firewall detected suspicious activity and took steps to block unusual traffic from reaching the Internet,” Porter wrote in the March 14 message. “NIST began investigating the cause of the unusual activity and the servers were taken offline. Malware was discovered on two NIST Web servers and was then traced to a software vulnerability.”

There’s no evidence that any NIST pages were used to infect people visiting the site. Ars has e-mailed Porter for further details, and this post will be updated if additional information is available.

The infection is a graphic reminder that just about anyone running just about any complex system can be compromised. The hack was reported earlier by The Register.

Source:  arstechnica.com

Sensors lead to burst of tech creativity in government

Thursday, March 7th, 2013

Human and mechanical sensors are creating excitement in the offices of government IT executives

LAS VEGAS — Here at an IBM conference, City of Boston CIO Bill Oates was telling the audience how citizens are using apps to improve city operations. But it was one of Boston’s latest apps, called Street Bump, that got the interest of one attendee, Gary Gilot, an engineer who heads the public works board in South Bend, Ind.

Information collected by the new app, which uses a smartphone’s accelerometer to record road conditions and send the data to public works workers, has already helped utilities to do a better job at making manhole covers even with the road, Oates said.
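The idea behind the app is simple enough to sketch: flag a candidate pothole whenever vertical acceleration spikes, and tag it with the phone’s GPS fix. The snippet below is a hypothetical illustration, not Street Bump’s actual source:

// One reading from the phone: vertical acceleration in m/s^2 plus a GPS fix.
// A jolt well beyond gravity (9.81 m/s^2) suggests the wheel hit something.
function detectBump(sample, thresholdMs2) {
  var jolt = Math.abs(sample.z - 9.81);
  if (jolt > thresholdMs2) {
    return { lat: sample.lat, lon: sample.lon, severity: jolt };
  }
  return null; // smooth road
}

// e.g. detectBump({ z: 14.2, lat: 42.36, lon: -71.06 }, 3) yields a candidate
// pothole; many drivers' reports would be cross-checked before a crew is sent.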

Street Bump will be the subject of a citywide publicity campaign this summer in an effort to attract more users, he added.

Gilot was struck by the app’s use of crowdsourcing to assess Boston roads.

South Bend has taken different approaches to the same problem.

It once had a half-dozen city supervisors spend six weeks each year driving every street in the city and rating them using standard road-condition measures. Its latest effort was to hire a vendor to drive all South Bend streets and produce digital video for an analysis of pavement conditions.

But after hearing Oates explain how the Street Bump data was producing “big data” about road conditions by people who launched the app in their cars, Gilot had an admiring smile.

“We are behind them by a bunch,” said Gilot, who sees Boston’s app as a possible alternative to costly road surveys.

“I love the idea of the future — that you can avoid the expense by crowdsourcing,” said Gilot.

South Bend is not behind in the trend of using sensors to improve other operations.

For instance, the city has worked with IBM to create a wireless sensor system that detects changes in the sewer flow, and alerts the city to any problems detected. The system, which includes automated valves that can respond to issues, has reduced overflows and backups, said Gilot.

Improving municipal operations is a major theme at the IBM conference. The company’s Smarter Planet initiative combines sensors, asset management, big data, mobile and cloud services into systems for managing government operations.

Boston and South Bend share in the use of sensors, one human-based and the other mechanical. The adoption of sensors, mobile apps and otherwise, appears to be leading to a burst of creativity in state and local governments.

Boston’s chief vehicle for connecting with residents is its Citizens Connect app. The city will release version 4.0 this summer, with changes that will make it easier for city workers to connect directly with residents.

Citizens Connect allows residents to report issues that need government action. Those issues might be a broken street light, trash, graffiti. The reports are public.

Oates said the app encourages participation. To find out why people used the app, the city asked app users why they didn’t call the city about maintenance issues in the pre-app days.

The response, said Oates, was this: “When we call the city we feel like we’re complaining, but when we use this (the app), we feel like we’re helping.”

In discussing Street Bump, he says it’s entirely possible that analysis of the data may lead to new sources of information. Similarly, Gilot said the sewer data collection was making it possible to determine what “normal” was.

“You really don’t know what’s normal until you have this kind of modeling,” said Gilot.

The changes in Citizens Connect 4.0 will help personalize the connections that city residents make with government.

For instance, today a citizen sends in a pothole repair request and the city fills the pothole. With the update, the worker will be able to take a picture of the completed work and send it back to the constituent who sent the request.

The person who drew attention to the maintenance problem will be informed that “the case is closed, and here’s a picture and this is who did it for me,” said Oates.

The citizen will be able to respond with a “great job” acknowledgement, although Oates realizes negative feedback is also possible. “We think it puts pressure on the quality of the service delivery,” he said.

Boston gets about 20% of its maintenance “quality of life” requests via the app.

Boston’s effort is the forerunner of a Massachusetts state-wide initiative called Commonwealth Connect that was announced in December.

This state-wide app is being built by SeeClickFix, a startup whose app is already used in many cities and towns. The app is free. The firm offers a “premium dashboard” used by municipalities. It also has a free Web-based tool that is used by smaller towns, said Zack Beatty, head of media and content partnerships for the New Haven, Conn.-based firm.

Beatty said the app will be deployed in more than 50 Massachusetts communities, its first state-led deployment.

SeeClickFix uses cloud-based services to host its app, an approach South Bend has taken as well, using the cloud to manage its IBM sewer-sensor system.

An in-house deployment would have required an authorization for hardware, said Gilot. From a budgeting perspective, it was easier to move money from other accounts to pay for cloud-based services. In any event, running IT equipment is not the city’s core competence.

Source:  computerworld.com

Google’s white spaces database goes live in test next week

Thursday, February 28th, 2013

Two years ago, Google was one of ten entities selected by the Federal Communications Commission to operate a white spaces database. Google’s database is finally just about ready to go: on Monday, the company will begin a 45-day trial allowing the database to be tested by the public.

White spaces technology allows unused TV spectrum to be repurposed for wireless Internet networks. The companies Spectrum Bridge and Telcordia have already completed their tests and have started operating. Google is the third to reach this stage. The databases are necessary to ensure that wireless Internet networks use only empty spectrum and thus don’t interfere with TV broadcasts.

“This is a limited trial that is intended to allow the public to access and test Google’s database system to ensure that it correctly identifies channels that are available for unlicensed radio transmitting devices that operate in the TV band (unlicensed TV band devices), properly registers radio transmitting facilities entitled to protection, and provides protection to authorized services and registered facilities as specified in the rules,” the FCC said yesterday. “We encourage all interested parties to test the database and provide appropriate feedback to Google.”

If nothing goes wrong, Google’s database could be open for business a few months after the test closes.

The test doesn’t necessarily signal that Google itself is on the cusp of creating wireless networks using white spaces spectrum, although it could. Google has already become an Internet service provider with Google Fiber in Kansas City and has offered free public Wi-Fi in a small part of New York City and Mountain View.

“This has nothing to do with Google creating a wireless network, though Google is interested in the business and could, potentially, create a white space network on down the line,” Steven Crowley, a wireless engineer who blogs about the FCC, wrote in an e-mail.

White spaces networks haven’t exactly revolutionized broadband Internet access in the US, but companies pushing the technology still hope it will have an impact, particularly in rural areas. An incentive auction the FCC is planning to reclaim spectrum controlled by TV broadcasters may increase the airwaves available to white spaces networks.

The FCC decided to authorize multiple white spaces databases to prevent any one company from having a stranglehold over the process. Google, meanwhile, may be hedging its bets. Public Knowledge Senior VP Harold Feld thinks Google applied to become a database provider so it wouldn’t have to worry about anyone else providing key infrastructure.

“I have no specific information, but my belief has always been that Google applied primarily to cover its rear end and make sure that—however they ultimately ended up monetizing the TVWS [TV white spaces]—they didn’t need to worry about someone else having some kind of control over one of the key components (the database),” Feld wrote in an e-mail. “So it is not (IMO) that this demonstrates any specific plans about what it wants to do in the TVWS, it just means that Google doesn’t want anyone to be able to mess with them once they launch whatever they are going to do.”

We’ve contacted Google to see if the company will provide any information on their long-term plans.

The remaining database operators that must go through public tests are Microsoft, Comsearch, Frequency Finder, KB Enterprises LLC and LS Telcom, Key Bridge Global LLC, Neustar, and WSdb LLC.

Source:  arstechnica.com

Node.js integrates with M: Next big thing in healthcare IT

Thursday, February 7th, 2013

Join the M revolution and the next big thing in healthcare IT: the integration of node.js, the server-side JavaScript platform, with the NoSQL hierarchical database M.

M was developed to organize and efficiently access the type of data typically managed in healthcare, making it uniquely well-suited for the job.

One of the biggest reasons for the success of M is that it integrates the database into the language in a natural and seamless way. The growth of the M developer community, however, has stayed below the radar of educators and the larger IT community. As a consequence, M has faced challenges in recruiting young new developers, despite the critical importance of this technology to the Health IT infrastructure of the US.

At the recent 26th VistA Community Meeting, an exciting alternative was presented by Rob Tweed. I summarize it as: Node.js meets the M Database.

In his work, Rob has created an intimate integration between the M database and the language features of node.js. The result is a new way of accessing the M database from JavaScript code, in such a way that the developer doesn’t feel they are accessing a database at all.

It is now possible to access M from node.js, both with the InterSystems Caché implementation of M and with the open source implementation GT.M. The second interface was implemented by David Wicksell, based on the API previously defined for Caché in the GlobalsDB project.

In a recent blog post, Rob describes some of the natural notation in node.js that provides access to the M hierarchical database by nicely following the language patterns of JavaScript. Here are some of Rob’s examples:

The M expression:

set town = ^patient(123456, "address", "town")

becomes the JavaScript expression:

var town = patient.$('address').$('town')._value;

in a style reminiscent of jQuery’s chained selectors.

The following M global structure, from a typical healthcare example:

^patient(123456,"birthdate")=-851884200
^patient(123456,"conditions",0,"causeOfDeath")=""
^patient(123456,"conditions",0,"codes","ICD-10-CM",0)="I21.01"
^patient(123456,"conditions",0,"codes","ICD-9-CM",0)="410.00"
^patient(123456,"conditions",0,"description")="Diagnosis, Active: Hospital Measures - AMI (Code List: 2.16.840.1.113883.3.666.5.3011)"
^patient(123456,"conditions",0,"end_time")=1273104000

becomes the following JSON data structure that can be manipulated with JavaScript:

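// Point at the ^patient(123456) global node and clear out any existing data: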
var patient = new ewd.GlobalNode("patient", [123456]);
patient._delete();

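// A JSON document mirroring the ^patient global structure shown above: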
var document = {
  "birthdate": -851884200,
  "conditions": [
    {
      "causeOfDeath": null,
      "codes": {
        "ICD-9-CM": [
          "410.00"
        ],
        "ICD-10-CM": [
          "I21.01"
        ]
      },
      "description": "Diagnosis, Active: Hospital Measures - AMI (Code List: 2.16.840.1.113883.3.666.5.3011)",
      "end_time": 1273104000
    }
  ]
};

More detailed examples are provided in Rob’s blog post. The M module for node.js is available here.

What this achieves is seamless integration between the powerful M hierarchical database and the language features of the very popular node.js implementation of JavaScript. This integration becomes a great opportunity for hundreds of node.js developers to join the space of healthcare IT, and to do, as Tim O’Reilly advises: Work on Stuff that Matters!

M is currently being used in hundreds of hospitals in the public sector:

  • The Department of Veterans Affairs
  • The Department of Defense
  • The Indian Health Service

As well as hundreds of hospitals in the private sector:

  • Kaiser Permanente hospital system
  • Johns Hopkins
  • Beth Israel Deaconess Medical Center
  • Harvard Medical School

In particular at deployments of these EHR systems:

  • Epic
  • GE/Centricity
  • McKesson
  • Meditech

Given this, plus the immense popularity of JavaScript and the high efficiency of node.js, this may be the most significant development in healthcare IT in recent years.

If you are an enthusiast of node.js, or you are looking for the best next language to learn, or you want to do some social good, this could be the thing for you.

Source:  opensource.com

Federal Reserve confirms hack attack led to data leak

Thursday, February 7th, 2013

The US’s central bank has confirmed information was stolen from its servers during a hack attack.

The Federal Reserve told the Reuters news agency it had contacted individuals whose personal information had been involved.

It follows the hacktivist collective Anonymous’s publication of what it described as the credentials of 4,000 US bank executives.

The Fed did not say whether the two incidents were related.

The Anonymous document contains the names and workplaces of employees at dozens of community banks, credit unions and other lenders, as well as mobile phone numbers and what appear to be computer log-on names and passwords.

However, Reuters reported that the Fed had issued an internal report stating that “passwords were not compromised” and had indicated that the leaked list had been a contact database to be used during natural disasters.

“The Federal Reserve system is aware that information was obtained by exploiting a temporary vulnerability in a website vendor product,” a Fed spokeswoman said.

“Exposure was fixed shortly after discovery and is no longer an issue. The incident did not affect critical operations of the Federal Reserve system.”

Unanswered questions

Over recent years, computer hackers identifying themselves under the Anonymous umbrella have carried out a series of attacks on US government sites and linked organisations such as the US-based intelligence company Stratfor.

In 2011 Anonymous threatened to take action against the Fed over its economic policies, but the latest incident is the first time it has claimed success at breaching the agency.

However, it would not be the first time the central bank’s systems have been compromised. In 2010 a Malaysian man pleaded guilty to adding “malicious code” to the Fed’s network via one of its regional banks.

One UK-based expert said the financial industry would want to know more details about the latest incident.

“If the core Federal Reserve systems are compromised it would be massively concerning for the financial community because it provides a lot of sensitive financial disclosures for regulatory reasons to the Fed, and potentially if a third-party got access to all of that information it could open a can of worms within the banking system overall,” said Chris Skinner, chairman of the Financial Services Club networking group.

“People will want to know exactly how it was compromised and what information was leaked.”

Hacking laws

Anonymous has linked its alleged attack to wider protests following the suicide of internet freedom campaigner Aaron Swartz.

The 26-year-old had been accused of illegally downloading academic documents from the Massachusetts Institute of Technology (MIT) network.

He had been charged with computer intrusion, fraud and data theft, and if found guilty could have faced up to 35 years in prison.

Anonymous and others have called for a change to anti-hacking laws to temper sentences.

MIT has also acknowledged its own systems have suffered a series of hack attacks – the most recent redirected visitors from its site to a page saying “RIP Aaron Swartz”.

Source:  BBC

South Carolina reveals massive data breach of Social Security Numbers, credit cards

Friday, October 26th, 2012

Approximately 3.6 million Social Security numbers and 387,000 credit and debit card numbers belonging to South Carolina taxpayers were exposed after a server at the state’s Department of Revenue was breached by an international hacker, state officials said Friday.

All but 16,000 of the credit and debit card numbers were encrypted, the officials said.

The state’s Department of Revenue became aware of the breach Oct. 10 and an investigation revealed the hacker had stolen the data in mid-September, after probing the system for vulnerabilities in late August and early September.

The vulnerability exploited by the attacker was closed Oct. 20.

During a press conference Friday, South Carolina Governor Nikki Haley described the attack as international and “creative in nature.”

Asked if she knew where the attack originated from, she said she does but declined to name the location because it might hurt the law enforcement investigation. She did, however, say she wants the hacker “slammed to the wall.”

“We want to make sure everybody understands that our State will respond with a big, large-scale plan that is somewhat unprecedented to take care of this problem,” Haley said.

The state will provide affected taxpayers with a year of credit monitoring and identity theft protection service from Experian.

“Anyone who has filed a South Carolina tax return since 1998 is urged to visit protectmyid.com/scdor or call 1-866-578-5422 to determine if their information is affected,” the Department said.

“While details are still emerging, we can already say that this breach of records at the South Carolina Department of Revenue (SCDOR) is exceptional, both in terms of the large number of records compromised and the potential damage to confidence in state government that may result,” Stephen Cobb, a security evangelist at security firm ESET, said via email Friday.

“The cost is also going to be enormous, given that South Carolina may be required to pay for identity theft protection services for anyone who has paid taxes in South Carolina since 1998,” he said.

“Encryption of the data may slow down the process by which the stolen records are converted into cash through identity theft and fraudulent accounts, although that will also depend on the strength of the encryption,” Cobb said.

Cobb pointed out that this breach came only a couple of months before people can start filing their income tax returns.

“Fraudulent electronic claims for refunds are a huge problem for the Internal Revenue Service (IRS) as criminals can easily make fake versions of the income tax withholding form known as W-2, showing that the employer withheld more tax than was owed,” Cobb said. “Employers often don’t inform the IRS of taxes withheld until several months into the New Year.”

Source:  networkworld.com

Oracle Database suffers from “stealth password cracking vulnerability”

Monday, September 24th, 2012

A weakness in an Oracle login system — used in the company’s databases, which grant access to sensitive information — makes it trivial for attackers to crack user passwords and gain entry without authorization, a researcher has warned.

The issue has been dubbed the “Oracle stealth password cracking vulnerability” by the researcher who discovered it, and the problem stems from a session key that Oracle Database 11g Releases 1 and 2 send to users each time they attempt to log on, according to a report published Thursday by Threatpost. The key leaks information about a cryptographic hash used to obscure the plaintext password. The hash, in turn, can be cracked using off-the-shelf hardware, free software, and a variety of attack methods that have grown increasingly powerful over the past decade. Proof-of-concept code exploiting the weakness can crack an eight-character alphabetic password in about five hours using standard CPUs.

Oracle engineers have corrected the problem in version 12 of the authentication protocol, but they have no plans to fix it in version 11.1, security researcher Esteban Martinez Fayo told Threatpost. Even in version 12, the vulnerability isn’t removed until an administrator changes the configuration of a server to use only the new version of the authentication system. Oracle representatives didn’t respond to an e-mail seeking comment for this story.
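
For administrators who do move to the fixed protocol, that enforcement step is a server-side configuration change. A sketch of the sqlnet.ora entry involved, assuming the SQLNET.ALLOWED_LOGON_VERSION parameter applies to the release in use; the exact parameter name and accepted values should be confirmed against Oracle's documentation:

    # sqlnet.ora on the database server (sketch; verify against Oracle docs):
    # refuse clients that only speak the vulnerable version 11 handshake.
    SQLNET.ALLOWED_LOGON_VERSION = 12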

There are no overt signs when an outsider has targeted the weakness, and attackers aren’t required to have “man-in-the-middle” control of a network to exploit it. That’s because the session key is sent whenever a remote user sends a few network packets or uses standard Oracle desktop software to contact the database server. All an attacker needs is a valid username on the system and a rudimentary background in password cracking.

The best way to prevent attacks that exploit the vulnerability is to install the patch and make the necessary configuration changes. Even those who continue to use vulnerable systems can take precautions that will go a long way. Passwords for all users should be randomly generated and contain a minimum of nine characters, although 13 or even 20 characters is better. The strategy here is to create a passcode that will take months or years to crack using brute-force methods, which systematically guess every possible combination of letters, numbers, and symbols.
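
The arithmetic behind that advice is simple division: keyspace over guess rate. A quick Python illustration, using a purely assumed rate of one billion guesses per second rather than any figure from the research:

    # Time to exhaust a password keyspace at an assumed guess rate.
    # The 1e9 guesses/second figure is an illustrative assumption.
    GUESSES_PER_SECOND = 1e9

    for charset, label in [(26, "lowercase letters"), (94, "printable ASCII")]:
        for length in (8, 9, 13):
            days = charset ** length / GUESSES_PER_SECOND / 86400
            print(f"{label}, {length} chars: {days:,.3g} days")

At that rate an eight-character alphabetic password falls in minutes, while a random nine-character password drawn from the full printable set holds out for years, which is the gap the nine-plus-character recommendation exploits.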

More coverage of the Oracle Database weakness is available from Dark Reading.

Source:  arstechnica.com

Microsoft opens up access to cloud-based ALM server

Monday, June 11th, 2012

The Team Foundation Service, which had been invitation-only, is now open to anyone, but it is still in preview mode

Microsoft is expanding access to its cloud-based application lifecycle management service, although the service remains in preview mode.

At its TechEd conference in Orlando, Fla., on Monday, the company will announce that anyone can use its Team Foundation Service ALM server, which is hosted on Microsoft’s Windows Azure cloud. First announced last September, the preview had been limited to invitation-only use. Since it remains in a preview phase, the service can be used free of charge.

“Anybody who wants to try it can try it,” said Brian Harry, Microsoft technical fellow and product line manager for Team Foundation Server, the behind-the-firewall version of the ALM server. Developers can access the service at the Team Foundation Service preview site.

Through the cloud ALM service, developers can plan projects, collaborate, and manage code online. Code is checked into the cloud using the Visual Studio or Eclipse IDEs. Languages ranging from C# to Python are supported, as are such platforms as Windows and Android.

With Team Foundation Service, Microsoft expects to compete with rival tools like Atlassian Jira. “Team Foundation Service is a full application lifecycle management product that provides a rich set of capabilities from early project planning and management through development, testing, and deployment,” Harry said. “We’ve got the most comprehensive ALM tool in the market, and it is simple and easy to use and easy to get started.” Eventually, Microsoft will charge for use of Team Foundation Service, but it will not happen this year, Harry said.

Microsoft has been adding capabilities to Team Foundation Service every three weeks. A new continuous deployment feature enables applications to be deployed to Azure automatically. A build service was added in March. On Monday, Microsoft will announce the addition of a rich landing page with more information about the product.

Source:  computerworld.com

Release of exploit code puts Oracle Database users at risk of attack

Tuesday, May 1st, 2012

An attack devised by security researcher Joxean Koret allows hackers to hijack legitimate client connections to systems running Oracle’s flagship Database Server.

Oracle has declined to patch a critical vulnerability in its flagship database product, leaving customers vulnerable to attacks that siphon confidential information from corporate servers and execute malware on backend systems, a security researcher said.

Virtually all versions of the Oracle Database Server released in the past 13 years contain a bug that allows hackers to perform man-in-the-middle attacks that monitor all data passing between the server and connected end users. That’s what Joxean Koret, a security researcher based in Spain, told Ars. The “Oracle TNS Poison” vulnerability, as he has dubbed it, resides in the Transparent Network Substrate Listener, which routes connections between clients and the database server. Koret said Oracle learned of the bug in 2008 and indicated in a recent e-mail that it had no plans to fix currently supported versions of the enterprise product because of concerns it could cause “regressions” in the code base.

“This is a 0day vulnerability with no patch,” Koret wrote in a post published Thursday to the Full-disclosure security list. “Oracle refuses to patch the vulnerability in *any* existing version and Oracle refuses to give details about which version will have the fix.”

He told Ars he was concerned the vulnerability might come under attack after he inadvertently released bug details and proof-of-concept exploit code a day earlier. Koret said he published that material after Oracle publicly recognized him by name earlier this month and later provided him with a tracking number indicating the credited contribution related to his discovery of the TNS Poison vulnerability.

Only after Koret published his detailed advisory did he learn that the bug wasn’t being removed from current versions of Oracle Database. Rather, he said, an e-mail he received from a member of Oracle’s security team indicated only future versions of the enterprise package would be fixed to remove the bug. The message went on to suggest current versions would not be updated because of concerns they might be corrupted by the changes.

“The fix is very complex and it is extremely risky to backport,” the Oracle e-mail stated. “This fix is in a sensitive part of our code where regressions are a concern. Customers have requested that Oracle not include such security fixes into Critical Patch Updates that increases [sic] the chance of regressions.”

When Koret pressed Oracle to explicitly say if engineers planned to allow the bug to remain in current versions, an Oracle employee responded:

“To protect the interest of our customers, we do not provide these level of details (like versions affected) for the issues that are addressed as in-depth. The future releases will have the fix.”

Oracle representatives didn’t respond to multiple e-mails seeking comment for this article.

Oblivious to attack

A TNS Listener feature known as remote registration dates back to at least 1999 with version 8i of the Oracle Database. By sending a simple query to the service, an attacker can hijack connections legitimate users have already established with the database, without the need for a password or other authentication. From then on, data traveling between legitimate users and the server passes through the connection set up by the attacker.

“The attacker owns the data as almost all the connections go through the attacker’s box,” Koret wrote in his detailed advisory. “The attacker can record all the data exchanged between the database server and the client machines and both client and server will be oblivious of the attack.” Attackers can also use the connection to send commands to the server that instruct it to add, delete, or modify data. Attackers could also exploit the bug to install rootkits that take control of the server itself, he told Ars.

“Regarding the server side, yes, it can be used to install, for example, a database rootkit, if a DBA (database administrator) connection is captured in the man in the middle (or by capturing a normal user’s session and then using a privilege escalation vulnerability, something not that hard),” he wrote in an e-mail to Ars. “Also let’s say that an attacker finally gained DBA access to the database: (s)he then is capable of executing operating system commands with the privileges of the running user.” On servers running Microsoft Windows, the Oracle Database runs as the Local System account, giving an attacker significant control. On Unix-based operating systems the database account has more limited privileges, but attackers could possibly exploit other vulnerabilities to elevate them.

TNS Listener can be set up to listen for connections over the Internet, Koret warned, making it possible for the vulnerability to be exploited remotely. Fortunately, he said, such Internet-facing configurations are rare; in all other cases, attackers would need access to the private network hosting the database server.

The lack of a fix is compounded by Koret’s inadvertent disclosure of detailed instructions for exploiting the vulnerability. Making matters worse, Oracle has yet to confirm or deny Koret’s claim that there will never be a fix for current versions of the database product.

That means it’s up to Oracle customers to lower their exposure to the vulnerability. Koret’s initial post includes a list of “possible workarounds.” One such technique involves setting up load balancing on client machines and updating their configuration to include a full list of Oracle RAC nodes. Another possible mitigation is to update the protocol.ora or sqlnet.ora files on vulnerable servers to check for valid nodes. Customers who have bought the Oracle Advanced Security feature can also lower the risk of attack by mandating the use of secure sockets layer authentication between clients and servers.
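
As an illustration of the valid-node-checking route, Oracle Net's tcp.validnode_checking and tcp.invited_nodes parameters restrict which hosts the listener will accept connections from, which covers registration attempts from unlisted machines. A sketch of a sqlnet.ora entry, with documentation-range placeholder addresses standing in for a site's real database nodes and clients:

    # sqlnet.ora -- only accept connections (including instance
    # registrations) from explicitly listed hosts. Addresses below
    # are placeholders for illustration.
    tcp.validnode_checking = yes
    tcp.invited_nodes = (192.0.2.10, 192.0.2.11)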

On Monday afternoon, Oracle released its own list of mitigations and strongly urged customers to implement them right away.

“Considering that the technical details of vulnerability CVE-2012-1675 have now widely been distributed, Oracle highly recommends that customers make the configuration changes documented in the above mentioned My Oracle Support Notes as soon as possible,” Oracle’s Eric Maurice blogged. “Customers should also feel free to contact Oracle Support if they have questions or concerns.”

Source:  arstechnica.com

Hacker commandeers GitHub to prove Rails vulnerability

Tuesday, March 6th, 2012

A Russian hacker dramatically demonstrated one of the most common security weaknesses in the Ruby on Rails web application framework. By doing so, he took full control of the databases GitHub uses to distribute Linux and thousands of other open-source software packages.

Egor Homakov exploited what’s known as a mass assignment vulnerability in GitHub to gain administrator access to the Ruby on Rails repository hosted on the popular website. The weekend hack allowed him to post an entry in the framework’s bug tracker dated 1,001 years into the future. It also allowed him to gain write privileges to the code repository. He carried out the attack by replacing a cryptographic key of a known developer with one he created. While the hack was innocuous, it sparked alarm among open-source advocates because it could have been used to plant malicious code in repositories millions of people use to download trusted software.

Homakov launched the attack two days after he posted a vulnerability report to the Rails bug list warning that mass assignment in Rails made websites relying on the framework susceptible to compromise. A variety of developers replied with posts saying the vulnerability is already well known and that responsibility for preventing exploits rests with those who use the framework. Homakov responded by saying even developers for large sites such as GitHub, Poster, Speakerdeck, and Scribd were failing to adequately protect against the vulnerability.

In the following hours, participants in the online discussion continued to debate the issue. The mass assignment vulnerability is to Rails what SQL injection weaknesses are to other web applications: a bug so common that many users have grown impatient with warnings about it. Maintainers of Rails have largely argued that individual developers should single out and “blacklist” attributes too security-sensitive to be modified externally. Others, such as Homakov, have said Rails maintainers should turn on whitelist protection by default. Currently, applications must explicitly enable such protections.
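
For readers outside the Rails world, the bug class is easy to reproduce in any framework that copies request parameters straight onto model objects. A short Python sketch with a hypothetical User model and request dictionary (not Rails code) showing the unsafe pattern and the whitelist fix at the center of the debate:

    # Hypothetical model and request; illustrates the bug class, not Rails itself.
    class User:
        def __init__(self):
            self.name = ""
            self.email = ""
            self.is_admin = False  # sensitive attribute

    def update_unsafe(user, params):
        # Mass assignment: every submitted field is copied onto the model,
        # so a request carrying is_admin=True silently grants admin rights.
        for key, value in params.items():
            setattr(user, key, value)

    ALLOWED = {"name", "email"}  # whitelist of externally writable attributes

    def update_safe(user, params):
        # Whitelisting by default is the behavior Homakov argued Rails should ship.
        for key, value in params.items():
            if key in ALLOWED:
                setattr(user, key, value)

    user = User()
    update_unsafe(user, {"name": "mallory", "is_admin": True})
    print(user.is_admin)  # True: privilege escalated by a form field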

A couple of days into the debate, Homakov responded by exploiting mass assignment bugs in GitHub to take control of the site. Less than an hour after discovering the attack, GitHub administrators deployed a fix for the underlying vulnerability and initiated an investigation to see if other parts of the site suffered from similar weaknesses. The site also temporarily suspended Homakov, later reinstating him.

“Now that we’ve had a chance to review his activity, and have determined that no malicious intent was present, @homakov’s account has been reinstated,” a blog post published on Monday said. It went on to encourage developers to practice “responsible disclosure.”

Source:  arstechnica.com

Oracle claims new MySQL Cluster does 1 billion queries per minute—in NoSQL

Friday, February 17th, 2012

Oracle has announced the general availability of MySQL Cluster 7.2 as a GPL download, and claims to have achieved a benchmark of 1 billion queries per minute and 110 million updates per minute on an eight-server cluster. Those results, based on the flexAsynch test in the DBT-2 benchmark, were attained using a new NoSQL NDB C++ API.

Mikael Ronstrom, senior MySQL architect at Oracle, described the test rig for the benchmark in a blog post on February 15. He said that the server cluster used in the test ran on eight two-socket servers, each running one data node, “using X5670 with Infiniband interconnect and 48GB of memory per machine.” Ten other machines ran the flexAsynch queries against the cluster.

In the flexAsynch test, “each read is a transaction consisting of a read of an entire row consisting of 25 attributes, each 4 bytes in size,” he wrote. “flexAsynch uses the asynchronous feature of the NDB API which enables one thread to send off multiple transactions in parallel. This is handled similarly to how Node.js works with callbacks registered that reports back when a transaction is completed.”
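
The pattern Ronstrom describes, one client thread keeping many transactions in flight and reacting to completion callbacks, is what lets ten machines generate that query volume. A rough Python analogy of the idea; the NDB API itself is C++, and db_read below is a stand-in rather than a real NDB call:

    import asyncio

    async def db_read(key):
        # Stand-in for an asynchronous primary-key read; a real NDB client
        # would issue the read and be notified via a completion callback.
        await asyncio.sleep(0)          # simulate non-blocking I/O
        return (key, b"\x00" * 100)     # 25 attributes x 4 bytes each

    async def run_batch(keys):
        # Fan out many reads at once instead of waiting on each round trip,
        # the way flexAsynch batches transactions from a single thread.
        return await asyncio.gather(*(db_read(k) for k in keys))

    rows = asyncio.run(run_batch(range(1000)))
    print(len(rows), "reads completed")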

The results were an eightfold improvement over a similar benchmark run by Oracle last year. But given that no other vendor has published flexAsynch results, it’s hard to say exactly what these numbers mean, or how the performance compares to other open-source NoSQL databases.

Source:  arstechnica.com