Archive for January, 2012

802.11ac boosts buzz more than bandwidth

Friday, January 27th, 2012

A great innovation, 802.11ac does more for 802.11n than it does to change the face of wireless networking

The buzz about 802.11ac is in full swing, but don’t believe everything you read.

The newest of Wi-Fi innovations, 802.11ac (still in draft form) looks like it will start making it into enterprise Wi-Fi products as early as 2013 and home products even earlier. It’s already being flaunted as Gigabit Wi-Fi. And for the largest Wi-Fi market, the home, it will be. But will it deliver gigabit speeds for the enterprise? Not a chance.

Defined only for the capacity-rich 5GHz spectrum (495MHz), 802.11ac introduces a number of new techniques, such as advanced modulation and encoding, multi-user MIMO and channel bonding, that theoretically (if you're talking to a vendor, anyway) have the potential to dramatically increase Wi-Fi capacity. The question is: really?

Make no mistake, 802.11ac is a great innovation. But as with any great innovation, the devil is in the details. So here are some details that should help demystify 802.11ac, starting with the key differences to understand:

* Eight spatial streams. One of the biggest Wi-Fi innovations came with 802.11n in the form of spatial multiplexing using a technique called MIMO (multiple input, multiple output). MIMO is the use of multiple antennas at both the transmitter and receiver to increase data throughput without additional bandwidth or increased transmit power. Basically it spreads the same total transmit power over multiple antennas to achieve more bits per second per hertz of bandwidth with the added benefit of greater reliability due to more antenna diversity.

In essence, MIMO lets an access point send multiple spatial streams to one client at a time to increase capacity. 802.11n specified up to four spatial streams.

Now in glorious one-upmanship, 802.11ac will support up to eight spatial streams. Historically it has taken chip manufacturers about two years to add an additional spatial stream (802.11n is only at three right now). While that will surely improve with 802.11ac, don't look for it to ever get to eight. It would, however, be a funny sight to see. Just picture an AP with 12 omni-directional antennas (eight for 11ac in 5GHz and four for 11n in 2.4GHz) sticking out of it. Not a pretty picture.

* Multi-user MIMO. Another difference with 11ac is support for “multi-user MIMO” (MU-MIMO). With 802.11n, MIMO could only be used for a single client at any given time, while 802.11ac tries to improve on this by supporting multiple clients.

This allows an 802.11ac AP to transmit two (or more, depending on the number of radio chains) spatial streams to two or more client devices. This has the potential to be a good improvement, but it is optional. And it's expected that the first 802.11ac chips out the door won't support this. What's more, there's a good chance that MU-MIMO won't ever be supported due to the radio and MAC complexity required.

* 256 Quadrature Amplitude Modulation (QAM). QAM is a way to modulate radio waves to transmit data. 802.11n maxed out at 64QAM, so the advent of 256QAM should deliver big improvements in maximum throughput. However, the more complex a modulation scheme, the more difficult it is to achieve. In realistic conditions it is highly unlikely that any meaningful percentage of client devices will consistently achieve 256QAM. Ouch.

* 5GHz only. Because of how 11ac actually achieves all this speed (channel bonding), it doesn't make sense for 11ac to support 2.4GHz, which has only three (of 11) non-overlapping channels. What this means is that any device that wants 11ac (they all will) must be 5GHz capable. Right now only a very low percentage of devices are capable of 5GHz, and that is a real shame. Now they'll be required to do it.

* Channel bonding. An easy and effective method to increase the speed of any radio link is to give it more spectrum, or bandwidth. To get more bandwidth, 802.11n introduced us to channel bonding: the ability to take two 20MHz channels and make them work as one, basically a bigger Wi-Fi pipe. This effectively doubled the throughput that could be achieved.

Now 802.11ac mandates support for 80MHz channels, with an option to bond eight 20MHz channels for a total width of 160MHz.

Even with 802.11n, channel bonding is a double-edged sword. In North America, the 2.4GHz band has 83.5MHz (three non-overlapping channels) of total bandwidth, while the 5GHz bands have a total of 495MHz. That means 5GHz can carry almost six times the traffic of 2.4GHz, with the added benefit that (for now) 5GHz is a much cleaner spectrum.

But don’t count your bits quite yet. What most people don’t realize is that by enabling channel bonding you are actually reducing your overall capacity (see chart).

When designing and deploying a Wi-Fi network for high density, more channels are preferred to fewer, larger channels. Increasing the number of devices occupying one channel in a given area reduces the efficiency of Wi-Fi; a rough calculation of this trade-off appears below.

This is why people like wires: each device effectively has its own channel, and there are no other devices occupying that channel (the copper or fiber). So we see staggering amounts of throughput.

If Wi-Fi could have hundreds of channels, and each client would get their own, this would be wireless nirvana. But, as you can see from that chart, we don’t have that many channels, and we sure don’t want to exacerbate the problem by bonding them together if it reduces the overall efficiency of the wireless LAN.
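
As a rough back-of-the-envelope illustration of this trade-off, the short Python sketch below estimates the peak per-stream PHY rate for 64QAM versus 256QAM at each channel width, and how many non-overlapping channels fit in the roughly 495MHz of 5GHz spectrum cited above. The subcarrier counts, symbol timing, and coding rate are typical published values for 802.11n/ac rather than figures from this article, so treat the output as approximate.

# Rough sketch (not from the article or the 802.11 drafts): how channel bonding
# trades the number of independent channels for per-channel speed in 5GHz.

DATA_SUBCARRIERS = {20: 52, 40: 108, 80: 234, 160: 468}  # typical OFDM data subcarriers
SYMBOL_TIME_US = 3.6       # OFDM symbol plus short guard interval, in microseconds
CODING_RATE = 5 / 6        # highest coding rate
TOTAL_5GHZ_MHZ = 495       # total 5GHz bandwidth cited in the article

def phy_rate_mbps(width_mhz, bits_per_subcarrier, streams=1):
    """Simplified peak PHY rate for a single bonded channel, in Mbps."""
    return (DATA_SUBCARRIERS[width_mhz] * bits_per_subcarrier
            * CODING_RATE * streams / SYMBOL_TIME_US)

for width in (20, 40, 80, 160):
    channels = TOTAL_5GHZ_MHZ // width    # non-overlapping channels that fit
    rate_64 = phy_rate_mbps(width, 6)     # 64QAM carries 6 bits per subcarrier
    rate_256 = phy_rate_mbps(width, 8)    # 256QAM carries 8 bits per subcarrier
    print(f"{width:>3} MHz: ~{channels:>2} channels, "
          f"~{rate_64:.0f} Mbps (64QAM) vs ~{rate_256:.0f} Mbps (256QAM) per stream")

Wider channels raise the per-link peak rate, but the same arithmetic shows how quickly the pool of independent channels shrinks, which is exactly the high-density problem described above.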

Using 802.11ac in the home is a different story. Bonding channels all the way to 160MHz is preferred given there are few devices trying to access a single AP.

The enterprise is just the opposite. Here, numerous APs are required to support hundreds or thousands of users. And, as much as is possible, those APs should be on different channels.

Ultimately, 802.11ac offers improvements for the Wi-Fi industry primarily because it forces clients to add support for the capacity-rich 5GHz spectrum. Current enterprise APs already support both bands.

Ironically, 802.11ac will prolong the viability of current 802.11n networks. As more and more clients become 5GHz capable, capacity and performance will increase without touching the infrastructure. This is the best news of all.

Source:  networkworld.com

Symantec tells customers to disable PCAnywhere

Wednesday, January 25th, 2012

Symantec is urging customers to disable PCAnywhere until it issues a software update to protect them against attacks that could result from the theft of the product’s source code.

Someone broke into Symantec’s network in 2006 and stole source code for PCAnywhere, which allows customers to remotely connect to other computers, as well as Norton Antivirus Corporate Edition, Norton Internet Security and Norton SystemWorks, the company said last week. Earlier this month, hackers in India affiliated with the Anonymous online activist group said they had gotten the code off servers run by Indian military intelligence.

Hackers have threatened to use the pilfered code to attack companies using it and then release the code publicly. The affected products have been updated since 2007 so there is no risk to customers, except for PCAnywhere, Symantec said.

“Malicious users with access to the source code have an increased ability to identify vulnerabilities and build new exploits,” the company said in a white paper (PDF) offering security recommendations for PCAnywhere customers released this week. “Additionally, customers that are not following general security best practices are susceptible to man-in-the-middle attacks which can reveal authentication and session information.

“At this time, Symantec recommends disabling the product until Symantec releases a final set of software updates that resolve currently known vulnerability risks,” the paper said. Customers who rely on it for business critical purposes should install version 12.5 and apply relevant patches.

PCAnywhere 12.0, 12.1, and 12.5 customers are at increased risk, as well as customers with prior, unsupported versions of the product, according to Symantec.

“There are also secondary risks associated with this situation. If the malicious user obtains the cryptographic key they have the capability to launch unauthorized remote control sessions. This in turn allows them access to systems and sensitive data,” the white paper warns. “If the cryptographic key itself is using Active Directory credentials, it is also possible for them to perpetrate other malicious activities on the network.”

Update 3:31 p.m. PT: Separately, Symantec released a hotfix for several critical vulnerabilities in PCAnywhere on Tuesday, but said it did not know of any publicly available exploits.

Source:  CNET

Microsoft pitches private cloud to IT with System Center 2012

Friday, January 20th, 2012

Microsoft’s System Center 2012 is available today as a Release Candidate, the last milestone before a final release. Along with Hyper-V and Windows Server, the upgraded System Center forms the key building blocks for Microsoft’s private cloud strategy, providing management tools for desktops, mobile devices, both physical and virtual servers, and a mix of resources across private data centers and public clouds such as Windows Azure.

While Release Candidates for some pieces of System Center 2012 were already out, as of today all eight components of the suite are free for anyone to download at this link, with final versions out in the first half of 2012. The exact release date has not been specified, but Microsoft Management & Security Division Vice President Brad Anderson tells Ars Microsoft is shooting for the early side of that time frame.

While the desktop management tools are Windows-only, Microsoft is providing cross-management tools for mobile devices, with security and configuration management covering iOS, Android, and Windows Phone. This, for example, lets IT specify how often smartphone users must change their passwords. In the data center, System Center supports both Linux and Windows servers, with Anderson telling Ars that nearly 20 percent of System Center customers use the software to manage at least some Linux servers.

System Center 2012 boosts the number of supported hypervisors. The current Virtual Machine Manager in System Center supports Hyper-V and VMware, despite VMware being Microsoft’s biggest rival in virtualization and management tools. System Center 2012 broadens the cross-platform hypervisor support by adding Citrix’s XenServer to the mix.

In terms of public clouds, System Center 2012 only officially supports Microsoft’s own Azure, creating integration between in-house and external resources. But even with the pre-2012 version, customers are using System Center to manage virtual servers running in Amazon’s Elastic Compute Cloud. “We do have customers telling us they are using System Center to manage [Amazon resources] like they do with any other VM,” Anderson said.

Microsoft is simplifying pricing by offering just one package with all eight System Center 2012 components: Configuration Manager, Service Manager, Virtual Machine Manager, Operations Manager, Data Protection Manager, Orchestrator, App Controller, and Endpoint Protection. Estimated retail price for the Standard edition of this package is $1,323 per server (up to two physical processors) and $3,615 for the Datacenter edition. Each has the same functionality, but the Standard version allows only two OS instances per server, whereas the Datacenter edition allows unlimited virtualization rights.

There are separate fees for managing mobile devices and PCs, ranging from $22 to $121 per device. Unlike some other Microsoft products that require both a server and client license, System Center client device management licenses can be purchased without an accompanying server license.

In building System Center 2012, Anderson said Microsoft drew on its own experience managing large cloud data centers, in which a single IT administrator might be responsible for managing 5,000 servers, rather than the 30 or 40 per employee typical for many businesses. Even though it’s not quite final yet, the beta version of System Center 2012 is already being used to manage more than 100,000 servers deployed by about 20 customers in Microsoft’s early adopter program. With the Release Candidate now out, Microsoft will do a final check within Microsoft and customer deployments to verify that the changes made since beta are in working order. “We strive to have the release candidate literally have zero known bugs and issues that have to be addressed. There’s very little we have to do to finish up the product,” Anderson said.

Source:  arstechnica.com

Microsoft introduces new robust “Resilient File System” for Windows Server 8

Friday, January 20th, 2012

Storage Spaces will give Windows 8 flexible, fault-tolerant pooling of disk space, and will make storage management simpler and much more powerful. But there’s more to robust file storage than replicating data between disks: preventing and detecting corruption, and ensuring that damage to one file does not spread to others are also important. When describing Storage Spaces, Microsoft was silent on how it hoped to tackle these needs. The answer has now been revealed: a new file system, ReFS (from “Resilient File System”).

Storage Spaces make it easy to recover from a failed disk, but they are no help if a disk is merely producing bad data. A Storage Space can tell you that two mirrored drives differ or that a parity check fails, but it has no way of determining which drive is right and which is wrong. Erasures, where the data is missing altogether, can be corrected; errors, where the data is wrong, can only be detected.

ReFS is designed to pick up where Storage Spaces leave off. To protect its internal data structures, file system metadata, and, optionally, user data against corruption, ReFS calculates and stores checksums for the data and metadata. Each piece of information protected by the checksum is fed into a checksum algorithm, and the result is a number, the checksum; in ReFS’s case, the checksum is a 64-bit number. Checksum algorithms are designed such that a small change in the input causes a large change in the resulting checksum.

Every time ReFS reads file system metadata (or data that has opted in to the checksum protection) it will compute the checksum for the information it has read, and compare this against the stored value. If the two are in agreement then the data has been read correctly; if they aren’t, it hasn’t.
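
As a conceptual illustration only (the article does not describe ReFS's actual checksum algorithm or on-disk layout, so the 64-bit checksum and the dictionary "disk" below are assumptions), a checksum-on-read path looks roughly like this in Python:

# Conceptual sketch of checksum-on-read in the spirit of the description above.
# The CRC-based 64-bit checksum and the dict-based block store are illustrative
# stand-ins, not ReFS internals.
import zlib

def checksum64(data: bytes) -> int:
    # Two CRC-32 passes combined into a 64-bit value, purely for illustration.
    return (zlib.crc32(data) << 32) | zlib.crc32(data[::-1])

class CorruptBlockError(Exception):
    pass

def write_block(disk: dict, block_id: int, payload: bytes) -> None:
    # Store the payload together with the checksum computed at write time.
    disk[block_id] = (payload, checksum64(payload))

def read_block(disk: dict, block_id: int) -> bytes:
    payload, stored = disk[block_id]
    if checksum64(payload) != stored:   # recompute and compare on every read
        raise CorruptBlockError(f"block {block_id} failed checksum verification")
    return payload

disk = {}
write_block(disk, 7, b"file system metadata")
assert read_block(disk, 7) == b"file system metadata"

# Simulate bit rot: a byte changes on disk behind the file system's back.
payload, stored = disk[7]
disk[7] = (b"file system metadatA", stored)
try:
    read_block(disk, 7)
except CorruptBlockError as err:
    print(err)   # the bad read is detected rather than silently returned

With a Storage Spaces mirror, running the same check against each copy is what lets the file system decide which side to trust, as described below.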

This checksum protection guards against a range of problems. When writing, two particular issues are “lost writes,” where the data never makes it to disk, and “misdirected writes,” where a bug in either the file system driver or the firmware of the drive or its controller causes a write to go to the wrong location on disk. When reading data, the biggest concern is “bit rot”—the corruption of correctly-written data due to failures of the disk’s magnetic storage.

This solves the problem of not knowing which side of a mirrored pair is the correct one. When used with a Storage Spaces mirror, ReFS will test the checksums of each side of the mirror independently, and then use this to determine which one is correct and which is not.

To further improve protection against bit rot, ReFS will perform scrubbing: it will periodically read all the data and metadata in a volume, verify the checksums are correct, and if necessary use mirrored copies to repair the bad data.

ReFS is also designed to provide greater protection against power failures during write operations. Windows’ current file system, NTFS, performs in-place updates of its data structures and metadata. For example, if a file is renamed, NTFS will read the sector of the drive that contains the filename, calculate a new sector with the new filename, and then write the whole sector back out. This means that a whole sector could be corrupted if the write is interrupted by a power failure, a problem known as “torn writes”. Traditionally, this meant that 512 bytes could be damaged; on modern hard drives, that’s now increased to 4096 bytes. ReFS will work differently; instead of updating the metadata in-place, it will write new metadata to a different location, preventing damage from torn writes.
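
The article does not spell out ReFS's allocation strategy, but the general "write the new version elsewhere, then switch over" pattern it describes can be sketched as follows; the class and field names are illustrative and are not ReFS data structures:

# Sketch of out-of-place metadata updates versus in-place updates, in the spirit
# of the paragraph above. Illustrative only; not how ReFS lays out data on disk.

class MetadataStore:
    def __init__(self, initial: dict):
        self.blocks = {0: dict(initial)}   # block 0 holds the current metadata copy
        self.current = 0                   # pointer to the live block

    def update_in_place(self, key, value, crash=False):
        # NTFS-style: rewrite the live block where it sits. A power failure
        # mid-write can leave the only copy torn.
        self.blocks[self.current][key] = "<torn garbage>" if crash else value

    def update_out_of_place(self, key, value, crash=False):
        # ReFS-style: build the new version in a fresh location, then repoint
        # "current" in one small step. A crash before the repoint leaves the
        # old, intact copy as the live one.
        candidate = dict(self.blocks[self.current])
        candidate[key] = value
        new_block = self.current + 1
        self.blocks[new_block] = candidate
        if not crash:
            self.current = new_block       # the single "commit" step

    def read(self) -> dict:
        return self.blocks[self.current]

store = MetadataStore({"name": "report.doc"})
store.update_out_of_place("name", "report-v2.doc", crash=True)
print(store.read())   # {'name': 'report.doc'}: the old metadata survives the interrupted write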

Like NTFS, ReFS will also prevent cascading failures. If an uncorrectable problem does occur—corruption of data that isn’t mirrored, or has no valid mirrors—ReFS will be able to make a record of the problem and then remove access to the damaged data, without taking the volume offline or interrupting access to any other data on the same volume.

The decision to change file systems is not one to be taken lightly. A bug in a file system can cause catastrophic data loss for millions of people. The two file system implementations with the most real-world testing are both Microsoft’s; its NTFS and FAT32 drivers have more users than any other file system drivers in the world. That’s not something to be discarded on a whim: the enormous amount of real-world testing means that bugs and unusual corner case situations are more likely to have been found—and fixed—in those file systems than any other.

Microsoft is not discarding this tried and trusted NTFS code entirely with ReFS. The ReFS driver will take parts of the existing NTFS driver, including its API and its handling of caching and security, and re-use them. Only the lower-level parts of the driver, the parts concerned with how data is laid out on disk, have changed.

Understandably, Microsoft is taking a conservative approach to rolling out ReFS support. Only Windows Server 8 will include ReFS; desktop-oriented Windows 8 will stick with NTFS. Just as with Storage Spaces, ReFS will not be usable as a boot drive, either. Future versions of Windows will extend ReFS support to the client, and eventually make it bootable. Windows Server 8 will also have no facility for converting NTFS volumes to ReFS; creating a new volume and copying data will be the only migration path.

More surprisingly, ReFS contains a number of feature regressions relative to NTFS. Most remarkably, it doesn’t support hard links, a feature found on NTFS and all UNIX file systems that allows one file to be given multiple names at multiple paths. This is particularly surprising, because Windows uses hard links extensively for its side-by-side storage of different library versions. Using hard links in this way is relatively new to Windows; Windows Vista was the first to do so.

Another new-in-Windows Vista feature, transactional updates to user data, also won’t be included. Windows Vista and Windows 7 allow database-like transactions to be made to the file system, wherein a batch of updates to the file system can be made atomically. Per-file encryption and per-file compression also aren’t supported in ReFS. Nor are named data streams, handy as these are for working with Mac OS X clients. Storage quotas are also gone.

Not all the missing features are surprising. Two features that exist only for backwards compatibility—generation of short (8.3) filenames, for DOS compatibility, and support for Extended Attributes, for OS/2 compatibility—are removed.

While the backwards compatibility features can safely stay gone forever, it’s hard to see ReFS accepted as a full-on NTFS replacement until the other omissions are resolved.

In spite of its limited availability and feature set, Microsoft says that ReFS will be production-ready, and decidedly not a beta, at Windows Server 8’s launch. In addition to the re-use of existing code, the company has also tested the new file system extensively, claiming that it has achieved an “unprecedented level of robustness” for a Microsoft file system.

Though Microsoft has continued to improve NTFS in Windows Vista and Windows 7, it has looked increasingly long in the tooth, particularly when compared to Oracle’s ZFS and Linux’s btrfs. Both of these file systems have offered flexible storage handling, and integrated fault tolerance, advancing the state of the art of mainstream file systems. Storage Spaces go a long way towards providing Windows with comparably capable volume management. ReFS fills most of the remaining gaps, adding a file system structure that’s fundamentally more robust than current NTFS.

Source:  arstechnica.com

Microsoft planning real-time feed of valuable threat data

Friday, January 13th, 2012

Microsoft holds a trove of valuable threat data gathered from botnet takedowns, and now is reportedly testing a system to share that data

Microsoft has had a great deal of success taking down botnets in recent years. A fringe benefit of those takedowns is that Microsoft gets to collect oodles of very valuable data. Now, Microsoft is preparing to offer that threat intelligence as a real-time feed that partners can use to evaluate threats and develop better defenses.

A post on the Kaspersky Labs Threat Post blog explains, “Microsoft collects the data by leveraging its huge Internet infrastructure, including a load-balanced, 80gb/second global network, to swallow botnets whole — pointing botnet infected hosts to addresses that Microsoft controls, capturing their activity and effectively taking them offline.”

Microsoft is reportedly conducting internal beta tests using data gathered from the Kelihos botnet. Microsoft is able to collect IP addresses of infected nodes, as well as AS (Autonomous System) number and reputation data from Microsoft’s SNDS (Smart Network Data Services), and share that information with third parties such as ISPs, CERTs, government agencies, and private organizations.

Paul Henry, security and forensic analyst for Lumension, doesn’t believe a Microsoft real-time threat feed will lead to a decrease in attacks. He does, however, feel that the data shared by Microsoft will help the information security community respond to threats more quickly, and limit the fallout from cyber attacks.

The information security industry needs more collaborative efforts, and more sharing of valuable data like this. Organizations in general are too secretive about security issues. There is an air of vulnerability that leads IT admins and executives to believe that if they share details of attacks it will reveal information that might be used in future attacks.

Henry notes, “The age old argument about protecting users from copy-cat attacks because the information exposed a weakness does not hold water… the bad guys are already sharing information on new attack vectors in real-time. So it only makes sense for defenders to do the same.”

There is some concern that the collected data may pose a privacy risk. T.J. Campana, a senior program manager in the Microsoft Digital Crimes Unit, told an audience at the ICCS (International Conference on Cyber Security) that personally identifiable information will be scrubbed from the threat feed.

Lumension’s Henry is comfortable that privacy will not be an issue. “The information can easily be sanitized to address any privacy concerns. This is nothing new and SANS has addressed the issue in their feed — I don’t see privacy as being an issue at all for this.”

Microsoft hasn’t shared any specific timeline for officially launching the threat feed. It would be nice if more organizations would follow Microsoft’s lead. Better collaboration and sharing of threat data would be a huge step in the right direction to minimize the impact of malware and cyber attacks.

Source:  infoworld.com

Cyber insurance offers IT peace of mind — or maybe not

Friday, January 13th, 2012

Cyber insurance can protect your company against data loss and liability, but it’s pricey, and coverage can be complicated.

If your company were hit with a cyber attack today, would it be able to foot the bill? The entire bill, including costs from regulatory fines, potential lawsuits, damage to your organization’s brand, and hardware and software repair, recovery and protection?

It’s a question worth careful consideration, given that the price of cyber attacks is rising at an alarming rate.

The second annual Cost of Cyber Crime study, released last August by the Ponemon Institute, reported that the median annualized cost of detection of and recovery from cyber crime per company is $5.9 million — a 56% increase from the 2010 median figures. The costs of cyber crime range from $1.5 million to $36.5 million per company.

A growing number of insurance companies are offering cyber protection in the event of breaches and other malicious data attacks. But so far, they’re having some difficulty making their case. Surveys show companies have yet to embrace these policies, whose costs can be staggering.

The annual PricewaterhouseCoopers Global State of Information Security Survey for the first time in 2011 asked respondents about whether their organizations had an insurance policy to protect against cyber crimes. Some 46% of the 12,840 worldwide respondents — which included CEOs, CFOs, CIOs and CSOs as well as vice presidents and directors in IT and information security — answered yes to the question: “Does your organization have an insurance policy that protects it from theft or misuse of electronic data, consumer records, etc.?”

Additionally, 17% said that their firms have submitted claims, and 13% said they’ve collected on those claims. (PwC didn’t ask why the remaining 4% hadn’t collected, but says it’s likely they were denied.)

Because it’s the first time PwC had asked its respondents about cyber insurance, there’s no way of knowing if those numbers represent an increase; however, a separate, albeit much smaller, survey indicates that companies may be slow to warm up to cyber insurance.

The 2011 Risk and Finance Manager survey, conducted by global professional services company Towers Watson, found that 73% of the 164 risk managers surveyed work at companies that have not purchased network liability policies. Some 37% of those who didn't have policies said they believed their internal IT departments and controls were adequate, while another 15% either said the cost of a policy was too high or that they weren't overly concerned about the risk.

Confusion in the marketplace

Lawyers and information security leaders say they encounter many executives who harbor misconceptions about cyber insurance. Decision-makers, they say, often mistakenly believe that standard corporate insurance policies and/or general liability policies cover losses related to hacking or that their cyber policies, if they have them, will cover all costs related to a breach. Most of the time, they won’t.

Continue reading at:  computerworld.com

Attack code published for serious ASP.Net DoS vulnerability

Tuesday, January 10th, 2012

The code exploits a recently patched denial-of-service vulnerability

Exploit code for a recently patched DoS (denial-of-service) vulnerability that affects Microsoft's ASP.Net Web development platform has been published online, increasing the risk of attacks.

The vulnerability, identified as CVE-2011-3414, was disclosed in December at the Chaos Communication Congress, Europe’s largest and oldest hacker conference. Shortly afterward Microsoft published a security advisory and released an out-of-band patch for the flaw.

The type of attack facilitated by this vulnerability affects other Web application platforms as well, and each of them has its own mitigation instructions. “This vulnerability could allow an anonymous attacker to efficiently consume all CPU resources on a Web server, or even on a cluster of Web servers,” explained Suha Can and Jonathan Ness, two Microsoft Security Response Center engineers, in a blog post back in December.

“For ASP.Net in particular, a single specially crafted ~100kb HTTP request can consume 100 percent of one CPU core for between 90 and 110 seconds. An attacker could potentially repeatedly issue such requests, causing performance to degrade significantly enough to cause a denial of service condition for even multi-core servers or clusters of servers,” they said.
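
The underlying mechanism is worth seeing in miniature: when many keys land in the same hash bucket, a hash table degenerates into a linked list and inserts become quadratic. The toy table below is a generic Python illustration and has nothing to do with ASP.Net's actual request parser or the published exploit.

# Toy demonstration of why colliding keys are expensive. With a chained hash
# table, every insert into an already-full bucket walks the whole chain, so n
# colliding keys cost on the order of n^2 comparisons.
import time

class NaiveHashTable:
    def __init__(self, buckets=1024):
        self.buckets = [[] for _ in range(buckets)]

    def insert(self, key, value):
        chain = self.buckets[hash(key) % len(self.buckets)]
        for i, (existing, _) in enumerate(chain):   # linear scan for duplicates
            if existing == key:
                chain[i] = (key, value)
                return
        chain.append((key, value))

def fill(keys):
    table = NaiveHashTable()
    start = time.perf_counter()
    for k in keys:
        table.insert(k, None)
    return time.perf_counter() - start

n = 5000
spread_keys = [f"key-{i}" for i in range(n)]        # hash across many buckets
colliding_keys = [i * 1024 for i in range(n)]       # every key maps to bucket 0

print(f"spread keys:    {fill(spread_keys):.3f} s")
print(f"colliding keys: {fill(colliding_keys):.3f} s")

Real platforms mitigate this by randomizing their hash functions or capping the number of request parameters accepted per request, which is broadly the approach the vendor fixes referenced here take.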

On Friday, a user who calls himself HybrisDisaster published a proof-of-concept exploit for the ASP.Net vulnerability on GitHub, a platform that hosts open source development projects.

In the notes accompanying the exploit code, HybrisDisaster encourages people to download it, use it how they see fit, and spread it. He also signs off with “We are Legion. Expect us,” a slogan commonly associated with the Anonymous hacktivist collective.

HybrisDisaster did not immediately return a request for comment about his affiliation with Anonymous. However, over the years the well-known hacktivist group has regularly used DoS attacks in support of its operations, its members considering the activity a legitimate form of online protest.

The high likelihood of someone releasing attack code for this vulnerability played an important part in Microsoft’s decision to release an out-of-band patch. “We anticipate the imminent public release of exploit code,” Can and Ness said shortly after the vulnerability was disclosed.

Webmasters who maintain ASP.Net Web applications should immediately deploy the patches in Microsoft's MS11-100 security bulletin, which also addresses other ASP.Net vulnerabilities.

Source:  infoworld.com

Computer virus gets convicted murderer new trial

Friday, January 6th, 2012

Viruses take the blame for a lot of deleted information and malicious activity occurring on computers. But this latest incident involving a virus infection is a lot more serious than someone losing important documents or even having their bank accounts compromised.

A murderer who was convicted in a Miami court is getting a retrial because the official transcript of his trial has been deleted. This wasn't a deletion carried out by an individual; instead, the court reporter's computer was infected with a virus that then proceeded to wipe the files.

The court reporter, named Terlesa Cowart, has been fired following the discovery because a paper record is also meant to be kept on file. This was not done, meaning no record of the trial now exists.

Apparently Cowart was known for not bringing enough paper to court and therefore relied on the digital copy recorded inside the stenograph machine she used. In this case she copied the digital file from the machine to her PC, then deleted the copy on the machine. The virus then deleted the version on the PC.

When convicted murderer Randy Chaviano’s legal team attempted to appeal his conviction, the lack of a trial record was discovered. Now the Third District Court of Appeal has ruled a new trial is the only way to proceed.

Clearly this shows a number of failings. First of all, the court reporter is at fault for not doing her job properly and failing to keep two records of the trial. Secondly, whoever takes care of her PC security is at fault for not adequately protecting and monitoring her PC. Lastly, I think there is a need for a third backup to be created at the time of the trial. Maybe a court-based or cloud-based backup should be kept just to be very sure such files can't disappear.

Randy Chaviano was convicted of second-degree murder for shooting a man and received a life sentence. He now has a chance to plead self-defense once again and an opportunity to walk away a free man.

Source:  geek.com

Symantec confirms source code leak in two enterprise security products

Friday, January 6th, 2012

Hacking group discloses source code segments used in Symantec’s Endpoint Protection 11.0 and Antivirus 10.2

Symantec late Thursday confirmed that source code used in two of its older enterprise security products was publicly exposed by hackers this week.

In a statement, the company said that the compromised code is between four and five years old and does not affect Symantec’s consumer-oriented Norton products as had been previously speculated.

“Our own network was not breached, but rather that of a third party entity,” the company said in the statement. “We are still gathering information on the details and are not in a position to provide specifics on the third party involved. Presently, we have no indication that the code disclosure impacts the functionality or security of Symantec’s solutions,” the statement said.

Symantec spokesman Cris Paden identified the two affected products as Symantec Endpoint Protection 11.0 and Symantec Antivirus 10.2. Both products are targeted at enterprise customers and are more than five years old, Paden said.

“We’re taking this extremely seriously, but in terms of a threat, a lot has changed since these codes were developed,” Paden said. “We distributed 10 million new signatures in 2010 alone. That gives you an idea of how much these products have morphed since then, when you’re talking four and five years.”

Symantec is developing a remediation process for enterprise customers who are still using the affected products, Paden noted. Details of the remediation process will be made available in due course, he added.

An Indian hacking group calling itself Lords of Dharmaraja had earlier claimed that it had accessed source code for Symantec’s Norton AV products.

A member of the group using the handle “Yama Tough” initially posted several documents on Pastebin and Google+ that purported to be proof that the group had accessed Symantec’s source code.

One of the documents described an application programming interface (API) for Symantec’s AV product. Another listed the complete source code tree file for Norton Antivirus. Two documents on Google+ offered detailed technical overviews of Norton Anti-Virus, Quarantine Server Packaging API Specification, v1.0, and a Symantec Immune System Gateway Array Setup technology.

According to Symantec, the initial set of documents posted by the hacking group was not source code. Rather, it was information from a publicly available document from April 1999 defining the API for something called the Definition Generation Service. The document explains how the software is designed to work, but no actual source code was in it, Symantec had noted.

A second set of documents posted by the group, however, did contain segments of Symantec source code for the two enterprise security products, Paden said.

Comments posted by Yama Tough on Google+ and Pastebin suggest that the Symantec information was accessed from an Indian government server. Many governments require companies such as Symantec to submit their source code for inspection to prove they are not spying on the government.

“As of now we start sharing with all our brothers and followers information from the Indian Militaty [sic] Intelligence servers, so far we have discovered within the Indian Spy Programme source codes of a dozen software companies which have signed agreements with Indian TANCS programme and CBI,” Yama Tough had said in one comment.

It is still too early to tell what impact the code disclosure will have on Symantec and its enterprise customers. Some say that exposure of older source code will likely pose less of a risk because of the fast pace at which security products evolve.

Rob Rachwald, director of security strategies at security vendor Imperva, said there isn’t much the hackers can learn from the code that they don’t know already.

“The workings of most of the antivirus’ algorithms have been studied already by hackers in order to write the malware that defeats them,” Rachwald wrote in a blog post Thursday. “A key benefit of having the source code could be in the hands of the competitors,” he said.

This is the second time in less than a year that a major security vendor has found itself in the embarrassing position of owning up to a data breach. Last year, RSA disclosed that unknown attackers had accessed source code to its SecurID two-factor authentication technology. The breach prompted widespread concerns about the security of the company’s authentication products within the government and the private sector.

In Symantec’s case, the compromise did not result from a breach of its own servers. Even so, the fallout from the code exposure could be significant for the company, especially if a large number of its enterprise customers are still using the two compromised products.

Source:  computerworld.com

Building Windows 8: Refresh and reset your PC

Friday, January 6th, 2012

The following is an excerpt from the MSDN blog:

Many consumer electronic devices these days provide a way for customers to get back to some predefined “good” state. This ranges from the hardware reset button on the back of a wireless network router, to the software reset option on a smartphone. We’ve built two new features in Windows 8 that can help you get your PCs back to a “good state” when they’re not working their best, or back to the “factory state” when you’re about to give them to someone else or decommission them.

Today, there are many different approaches and tools to get a PC back to factory condition. If you buy a PC with Windows preinstalled, it often comes with a manufacturer-provided tool and a hidden partition that can be used for that specific model of PC. You might also use a third-party imaging product, Windows system image backup, or the tried and true method of a clean reinstall from the Windows DVD. While these tools all provide similar functionalities, they don’t provide a consistent experience from one PC or technique to another. If you are the “go to” person for your friends, relatives, or neighbors when they need help with their PCs, you may find that it’s sometimes necessary to just start over and reinstall everything. Without a consistent experience to do this, you might end up spending more time finding the recovery tool for a specific PC than actually fixing the problems, and this gets even worse if you’re helping someone over the phone.

With Windows 8, there are a few key things that we set out to deliver:

  • Provide a consistent experience to get the software on any Windows 8 PC back to a good and predictable state.
  • Streamline the process so that getting a PC back to a good state with all the things customers care about can be done quickly instead of taking up the whole day.
  • Make sure that customers don’t lose their data in the process.
  • Provide a fully customizable approach for technical enthusiasts to do things their own way.

As we began planning for Windows 8, we asked ourselves: “Wouldn’t it be great if you could just push a button and everything is fixed?” We really wanted to focus on the concept of “push button”, which translated into a design goal that represents a simple to use, predictable, and fast solution. We also wanted to build on the process many people already use today when they need to start over: back up your data, reinstall Windows and apps, and restore your data. The strength of this approach is that you start over from a truly clean state, but you still get to keep the things you care about. With that as the basis of the solution, our goal was to make the process much more streamlined, less time-consuming, and more accessible to a broad set of customers.

Our solution in Windows 8 consists of two related features:

  • Reset your PC – Remove all personal data, apps, and settings from the PC, and reinstall Windows.
  • Refresh your PC – Keep all personal data, Metro style apps, and important settings from the PC, and reinstall Windows.

Reset your PC to start over

In some cases, you might just want to remove everything and start from scratch manually. But in other cases, you’re removing your data from a PC because you’re about to recycle or decommission it. For both of these situations, you can easily reset your Windows 8 PC and put the software back into the same condition as it was when you started it for the very first time (such as when you purchased the PC).

Resetting your Windows 8 PC goes like this:

  1. The PC boots into the Windows Recovery Environment (Windows RE).
  2. Windows RE erases and formats the hard drive partitions on which Windows and personal data reside.
  3. Windows RE installs a fresh copy of Windows.
  4. The PC restarts into the newly installed copy of Windows.

(Note that the screenshots below reflect changes that we’re making for Beta, some of which are not yet available in Developer Preview)

Screenshot text: "Reset your PC and start over. Here's what will happen: All your personal files and apps will be removed. Your PC settings will be changed back to their defaults."

Resetting your PC

For those of you who worry about data that may still be recoverable after a standard reset, especially on PCs with sensitive personal data, we also will be providing an option in Windows 8 Beta to erase your data more thoroughly, with additional steps that can significantly limit the effectiveness of even sophisticated data recovery attempts. Instead of just formatting the drive, choosing the “Thorough” option will write random patterns to every sector of the drive, overwriting any existing data visible to the operating system. Even if someone removes the drive from your PC, your data will still not be easily recoverable without the use of special equipment that is prohibitively expensive for most people. This approach strikes a good balance between security and performance – a single pass through your hard drive offers more than enough security for typical scenarios such as donation to a local charity, but does not bog you down for hours or days with multi-pass scrubbing operations that might be required for regulatory compliance if you are dealing with highly confidential business and government data.

Screenshot text: "How do you want to remove your personal files? Thoroughly, but this can take several hours. Quickly, but your files might be recoverable by someone else."

Choosing how your data should be removed
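
As a point of reference only, a single pass of random overwrites can be sketched in a few lines of Python. This operates on one ordinary file rather than on raw disk sectors from Windows RE, which is how the actual "Thorough" reset works, so it illustrates the single-pass idea rather than the feature itself.

# Minimal sketch of a single-pass random overwrite, applied to one file.
# The Windows 8 "Thorough" reset works at the drive level from Windows RE;
# this only illustrates the single-pass approach described above.
import os

def single_pass_overwrite(path: str, chunk_size: int = 1024 * 1024) -> None:
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        written = 0
        while written < size:
            block = os.urandom(min(chunk_size, size - written))
            f.write(block)              # overwrite in place with random bytes
            written += len(block)
        f.flush()
        os.fsync(f.fileno())            # push the new contents to the device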

Refresh your PC to fix problems

Resetting your PC can take you back to square one if you encounter a problem, but that's clearly a very heavyweight solution, something you'd only do as a last resort. But what if you could get the benefit of a reset, starting over with a fresh Windows install, while still keeping your stuff intact? This is where Refresh comes in handy. Refresh functionality is fundamentally still a reinstall of Windows, just like resetting your PC as described above, but your data, settings, and Metro style apps are preserved. We have a solution to help you with your desktop apps, too, which I'll talk about a little later.

The coolest part about Refresh is that there's no need to first back up your data to an external hard drive and restore it afterwards.

Refreshing your PC goes like this:

  1. The PC boots into Windows RE.
  2. Windows RE scans the hard drive for your data, settings, and apps, and puts them aside (on the same drive).
  3. Windows RE installs a fresh copy of Windows.
  4. Windows RE restores the data, settings, and apps it has set aside into the newly installed copy of Windows.
  5. The PC restarts into the newly installed copy of Windows.

Unlike manually reinstalling Windows, you don’t have to go through the Windows Welcome screens again and reconfigure all the initial settings, as your user accounts and those settings are all preserved. You can sign in with the same account and password, and all of your documents and data are preserved in the same locations they were before. To accomplish this, we actually use the same imaging and migration technologies behind Windows Setup. In fact, the underlying setup engine is used to perform both Reset and Refresh, which also benefit from the performance and reliability improvements we added to setup for Windows 8.

Screenshot text: "Refresh your PC. Here's what will happen: Your files and personalization settings won't change. Your PC settings will be changed back to their defaults. Apps from Windows Store will be kept. All apps you installed from discs or websites will be removed. A list of removed apps will be saved on your desktop."

Refreshing your PC

Misconfigured settings are sometimes the cause of problems that lead to customers needing to refresh their PCs. To ensure that Refresh is both effective in fixing problems and in making sure customers don’t lose settings that they might have trouble reconfiguring, we’ve thought a great deal about which settings to preserve. In Windows 8 Beta, some of the settings we’ll preserve include:

  • Wireless network connections
  • Mobile broadband connections
  • BitLocker and BitLocker To Go settings
  • Drive letter assignments
  • Personalization settings such as lock screen background and desktop wallpaper

On the other hand, we deliberately chose not to preserve the following settings, as they can occasionally cause problems if misconfigured:

  • File type associations
  • Display settings
  • Windows Firewall settings

We will continue to enhance and tune both lists over time based on how we see the feature being used in the Developer Preview and Beta.

Restoring your apps

We preserve only Metro style apps when customers refresh their PCs, and require desktop apps that do not come with the PC to be reinstalled manually. We do this for two reasons. First, in many cases there is a single desktop app that is causing the problems that lead to a need to perform this sort of maintenance, but identifying this root cause is not usually possible. And second, we do not want to inadvertently reinstall “bad” apps that were installed unintentionally or that hitched a ride on something good but left no trace of how they were installed.

It is also important to understand that we cannot deterministically replace desktop apps, as there are many installer technologies as well as custom setup and configuration logic, of which Windows has little direct knowledge. That is why we discourage the use of third-party uninstallers or scrubbers. One simple thing to consider is that many setup and installation programs conditionally implement functionality based on the state of the machine at the time of the install (for example, default browser, default photo handler, etc.).

You can, however, cleanly install and uninstall all Metro style apps using the .appx package format. If you’re interested in learning more about how Metro style apps work in this regard, check out the following sessions from //build:

If you do need to reinstall some desktop apps after you refresh your PC, we save the list of apps that were not preserved in an HTML file, and put this list on the desktop, so you have a quick way to see what you might need to reinstall and where to find them.

One caution is that if any desktop apps you have require a license key, you will need to follow your manufacturer’s instructions for how to reuse the key. This might involve uninstalling the app first, going to a web site, or going through some automated steps by phone, for example.

What if the PC can’t boot?

When your PC is able to boot normally, you can get started with refreshing or resetting it from PC settings. (This is the Metro style app that we called “Control Panel” in the Windows Developer Preview. It is different from the standard Control Panel that you can still use for more complex tasks from the desktop.) The options are easily discoverable, and will be in the same place on every Windows 8 PC. Once launched, you can get through them with just a few clicks, which makes it easy to guide someone through the process over the phone.

However, in some situations, the PC might not boot successfully and you might want to refresh or reset it to get it back to a working condition. In a previous post, Billie Sue Chafins discussed how the boot experience has been redesigned from the ground up, including troubleshooting using Windows RE. Naturally, we’ve made it possible for you to refresh or reset your PC from there as well.

Screenshot text: "Troubleshoot: Refresh your PC (Reload Windows without losing your personal files) / Reset your PC (Put your PC back to the way it was originally and remove all of your files) / Advanced options"

Screenshot text: "Refresh your PC. Here's what will happen: Your files and personalization settings won't change. Your PC settings will be changed back to their defaults. Apps from Windows Store will be kept. All apps you installed from discs or websites will be removed. A list of removed apps will be saved on your desktop."

Refreshing or resetting your PC from the new boot UI

In Windows 8 Beta, there will also be a tool that you can use to create a bootable USB flash drive, in case even the copy of Windows RE on the hard drive won’t start. You’ll be able to start your PC with the USB drive, and fix problems by refreshing your PC or performing advanced troubleshooting. And if your PC comes with a hidden recovery partition, you’ll even have the option to remove it and reclaim disk space once you’ve created the USB drive.

Refreshing your PC to a state you define, including desktop apps

We know that many of you like to first configure your PC just the way you like it, by installing favorite desktop apps or removing apps that came with the PC, and then create an image of the hard drive before you start using the PC. This way, when you need to start over, you can just restore the image and you won’t have to reinstall the apps from scratch.

With this in mind, we’ve made it possible for you to establish your own baseline image via a command-line tool (recimg.exe). So when you get a Windows 8 PC, you will be able to do the following:

  1. Go through the Windows first-run experience to configure basic settings.
  2. Install your favorite desktop apps (or uninstall things you don’t want).
  3. Configure the machine exactly as you would like it.
  4. Use recimg.exe to capture and set your custom image of the system.

After you’ve created the custom image, whenever you refresh your PC, not only will you be able to keep your personal data, settings, and Metro style apps, but you can restore all the desktop apps in your custom image as well. And if you buy a PC that already comes with a recovery image on a hidden partition, you’ll be able to use the tool to switch from using the hidden partition to instead use the custom image you’ve created.

If you’d like to try this out now, a preview version of this tool is included in the Windows 8 Developer Preview. You can try it out by typing the following in a command prompt window running as administrator:

mkdir C:\RefreshImage

recimg -CreateImage C:\RefreshImage

This creates the image under C:\RefreshImage and will register it to be used when you refresh your PC. Again, this is a very early version of the tool, so we know it’s not perfect yet. Rest assured that we’re working hard to get it ready for primetime.

Getting back to productivity quickly

When we started building these features, we knew that ease of use wasn’t going to be enough – refresh and reset had to be fast as well. Many of the recovery tools preloaded with PCs today take an hour or more just to get the PC back to factory condition, and you often still have to spend hours copying back your data and reconfiguring everything. Even solutions that back up and restore the entire hard drive can take a long time, as the time required generally scales with how much data you have.

To give an example of the performance of our solution, we installed a clean copy of Windows on the Developer Preview PC that we gave out to attendees at the BUILD conference, filled most of the drive with data, and measured the time it took to go through various recovery operations:

Recovery operation                                      Time required
Refreshing the PC                                       8 minutes 22 seconds
Resetting the PC (quick)                                6 minutes 12 seconds
Resetting the PC (thorough, with BitLocker enabled)     6 minutes 21 seconds
Resetting the PC (thorough, without BitLocker)          23 minutes 52 seconds

Compared to a baseline time of 24 minutes 29 seconds for restoring the same contents from a system image backup, most of these times show a considerable improvement.

The beauty of refreshing the PC is that performance isn't impacted by the amount of data you have. Using the migration technology behind Windows Setup, your data never leaves the drive, and it is not physically moved from one location on the disk to another either, which minimizes disk reads and writes. Restoring a system image from an external drive using the Windows backup utility, on the other hand, took much longer due to the amount of data in the backup, even with the relatively small 64GB drive on the prototype PC. Thoroughly erasing data did take longer than the other operations, as every sector of the drive had to be overwritten. However, you may also notice that when BitLocker drive encryption was enabled on the drive, this process took much less time. This is due to an optimization we employ so that erasing an encrypted drive requires erasing only the encryption metadata, rendering all the data unrecoverable.
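
The crypto-erase shortcut described above is not specific to BitLocker, and the idea can be illustrated generically: once the key material is destroyed, the ciphertext left behind is useless, so only the small amount of key metadata needs to be wiped. The sketch below uses the third-party Python cryptography package purely as a stand-in for a disk encryption scheme; it is not BitLocker's design.

# Generic illustration of crypto-erase, not BitLocker's actual mechanism.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()                  # stands in for the volume's key metadata
ciphertext = Fernet(key).encrypt(b"sensitive document contents")

# Normal use: whoever holds the key can read the data.
assert Fernet(key).decrypt(ciphertext) == b"sensitive document contents"

# Crypto-erase: destroy only the key, not every sector of ciphertext.
key = None

# Anyone who images the "disk" afterwards has ciphertext and no key.
try:
    Fernet(Fernet.generate_key()).decrypt(ciphertext)
except InvalidToken:
    print("ciphertext is unreadable without the original key")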

A consistent and easy way to get back to a known good state

Sometimes things can go wrong and you just want to get back to a good state quickly, while other times you might want to remove your data before giving a PC to another family member, employee, or co-worker. With Windows 8, we've streamlined these processes and made them more accessible to customers with the new refresh and reset features. The original post includes a video showing these features in action.


We hope you’ll find these features useful and time-saving when you’re fixing your own PC or helping others with theirs.

— Desmond Lee

Excerpt from MSDN

Multiple Programming Language Implementations Vulnerable to Hash Table Collision Attacks

Tuesday, January 3rd, 2012

US-CERT is aware of reports stating that multiple programming language implementations, including web platforms, are vulnerable to hash table collision attacks. This vulnerability could be used by an attacker to launch a denial-of-service attack against websites using affected products.

The Ruby Security Team has updated Ruby 1.8.7. The Ruby 1.9 series is not affected by this attack. Additional information can be found in the ruby 1.8.7 patchlevel 357 release notes.

Microsoft has released an update for the .NET Framework to address this vulnerability and three others. Additional information can be found in Microsoft Security Bulletin MS11-100 and Microsoft Security Advisory 2659883.

More information regarding this vulnerability can be found in US-CERT Vulnerability Note VU#903934 and n.runs Security Advisory n.runs-SA-2011.004.

US-CERT will provide additional information as it becomes available.

Source:  US-CERT

10 programming languages that could shake up IT

Tuesday, January 3rd, 2012

These cutting-edge programming languages provide unique insights on the future of software development

Do we really need another programming language? There is certainly no shortage of choices already. Between imperative languages, functional languages, object-oriented languages, dynamic languages, compiled languages, interpreted languages, and scripting languages, no developer could ever learn all of the options available today.

And yet, new languages emerge with surprising frequency. Some are designed by students or hobbyists as personal projects. Others are the products of large IT vendors. Even small and midsize companies are getting in on the action, creating languages to serve the needs of their industries. Why do people keep reinventing the wheel?

The answer is that, as powerful and versatile as the current crop of languages may be, no single syntax is ideally suited for every purpose. What's more, programming itself is constantly evolving. The rise of multicore CPUs, cloud computing, mobility, and distributed architectures has created new challenges for developers. Adding support for the latest features, paradigms, and patterns to existing languages, especially popular ones, can be prohibitively difficult. Sometimes the best answer is to start from scratch.

Here, then, is a look at 10 cutting-edge programming languages, each of which approaches the art of software development from a fresh perspective, tackling a specific problem or a unique shortcoming of today’s more popular languages. Some are mature projects, while others are in the early stages of development. Some are likely to remain obscure, but any one of them could become the breakthrough tool that changes programming for years to come — at least, until the next batch of new languages arrives.

Experimental programming language No. 1: Dart
JavaScript is fine for adding basic interactivity to Web pages, but when your Web applications swell to thousands of lines of code, its weaknesses quickly become apparent. That’s why Google created Dart, a language it hopes will become the new vernacular of Web programming.

Like JavaScript, Dart uses C-like syntax and keywords. One significant difference, however, is that while JavaScript is a prototype-based language, objects in Dart are defined using classes and interfaces, as in C++ or Java. Dart also allows programmers to optionally declare variables with static types. The idea is that Dart should be as familiar, dynamic, and fluid as JavaScript, yet allow developers to write code that is faster, easier to maintain, and less susceptible to subtle bugs.

You can’t do much with Dart today. It’s designed to run on either the client or the server (a la Node.js), but the only way to run client-side Dart code so far is to cross-compile it to JavaScript. Even then it doesn’t work with every browser. But because Dart is released under a BSD-style open source license, any vendor that buys Google’s vision is free to build the language into its products. Google only has an entire industry to convince.

Experimental programming language No. 2: Ceylon
Gavin King denies that Ceylon, the language he’s developing at Red Hat, is meant to be a “Java killer.” King is best known as the creator of the Hibernate object-relational mapping framework for Java. He likes Java, but he thinks it leaves lots of room for improvement.

Among King’s gripes are Java’s verbose syntax, its lack of first-class and higher-order functions, and its poor support for meta-programming. In particular, he’s frustrated with the absence of a declarative syntax for structured data definition, which he says leaves Java “joined at the hip to XML.” Ceylon aims to solve all these problems.

King and his team don’t plan to reinvent the wheel completely. There will be no Ceylon virtual machine; the Ceylon compiler will output Java bytecode that runs on the JVM. But Ceylon will be more than just a compiler. A big goal of the project is to create a new Ceylon SDK to replace the Java SDK, which King says is bloated and clumsy and has never been “properly modernized.”

That’s a tall order, and Red Hat has released no Ceylon tools yet. King says to expect a compiler this year. Just don’t expect software written in “100 percent pure Ceylon” any time soon.

Experimental programming language No. 3: Go
Interpreters, virtual machines, and managed code are all the rage these days. Do we really need another old-fashioned language that compiles to native binaries? A team of Google engineers — led by Robert Griesemer and Bell Labs legends Ken Thompson and Rob Pike — says yes.

Go is a general-purpose programming language suitable for everything from application development to systems programming. In that sense, it’s more like C or C++ than Java or C#. But like the latter languages, Go includes modern features such as garbage collection, runtime reflection, and support for concurrency.

Equally important, Go is meant to be easy to program in. Its basic syntax is C-like, but it eliminates redundant syntax and boilerplate while streamlining operations such as object definition. The Go team’s goal was to create a language that’s as pleasant to code in as a dynamic scripting language yet offers the power of a compiled language.
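
As a rough illustration of that goal, here is a small, hypothetical Go sketch (not taken from the Go documentation): C-like syntax with no header files or class boilerplate, and concurrency handled by the built-in go statement and a channel. Because the specification is still in draft, details may differ from the final language.

package main

import "fmt"

// fetch pretends to do some work and reports its result on a channel.
func fetch(id int, results chan<- string) {
    results <- fmt.Sprintf("worker %d done", id)
}

func main() {
    results := make(chan string)

    // Launch three concurrent workers with the "go" statement.
    for i := 1; i <= 3; i++ {
        go fetch(i, results)
    }

    // Collect one result from each worker.
    for i := 0; i < 3; i++ {
        fmt.Println(<-results)
    }
}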

Go is still a work in progress, and the language specification may change. That said, you can start working with it today. Google has made tools and compilers available along with copious documentation; for example, the Effective Go tutorial is a good place to learn how Go differs from earlier languages.

Experimental programming language No. 4: F#
Functional programming has long been popular with computer scientists and academia, but pure functional languages like Lisp and Haskell are often considered unworkable for real-world software development. One common complaint is that functional-style code can be difficult to integrate with code and libraries written in imperative languages like C++ and Java.

Enter F# (pronounced “F-sharp”), a Microsoft language designed to be both functional and practical. Because F# is a first-class language on the .Net Common Language Runtime (CLR), it can access all of the same libraries and features as other CLR languages, such as C# and Visual Basic.

F# code resembles OCaml somewhat, but it adds interesting syntax of its own. For example, numeric data types in F# can be assigned units of measure to aid scientific computation. F# also offers constructs to aid asynchronous I/O, CPU parallelization, and off-loading processing to the GPU.

After a long gestation period at Microsoft Research, F# now ships with Visual Studio 2010. Better still, in an unusual move, Microsoft has made the F# compiler and core library available under the Apache open source license; you can start working with it for free and even use it on Mac and Linux systems (via the Mono runtime).

Experimental programming language No. 5: Opa
Web development is too complicated. Even the simplest Web app requires countless lines of code in multiple languages: HTML and JavaScript on the client, Java or PHP on the server, SQL in the database, and so on.

Opa doesn’t replace any of these languages individually. Rather, it seeks to eliminate them all at once, by proposing an entirely new paradigm for Web programming. In an Opa application, the client-side UI, server-side logic, and database I/O are all implemented in a single language, Opa.

Opa accomplishes this through a combination of client- and server-side frameworks. The Opa compiler decides whether a given routine should run on the client, server, or both, and it outputs code accordingly. For client-side routines, it translates Opa into the appropriate JavaScript code, including AJAX calls.

Naturally, a system this integrated requires some back-end magic. Opa’s runtime environment bundles its own Web server and database management system, which can’t be replaced with stand-alone alternatives. That may be a small price to pay, however, for the ability to prototype sophisticated, data-driven Web applications in just a few dozen lines of code. Opa is open source and available now for 64-bit Linux and Mac OS X platforms, with further ports in the works.

Experimental programming language No. 6: Fantom
Should you develop your applications for Java or .Net? If you code in Fantom, you can take your pick and even switch platforms midstream. That’s because Fantom is designed from the ground up for cross-platform portability. The Fantom project includes not just a compiler that can output bytecode for either the JVM or the .Net CLI, but also a set of APIs that abstract away the Java and .Net APIs, creating an additional portability layer.

There are plans to extend Fantom’s portability even further. A Fantom-to-JavaScript compiler is already available, and future targets might include the LLVM compiler project, the Parrot VM, and Objective-C for iOS.

But portability is not Fantom’s sole raison d’être. While it remains inherently C-like, it is also meant to improve on the languages that inspired it. It tries to strike a middle ground in some of the more contentious syntax debates, such as strong versus dynamic typing, or interfaces versus classes. It adds easy syntax for declaring data structures and serializing objects. And it includes support for functional programming and concurrency built into the language.

Fantom is open source under the Academic Free License 3.0 and is available for Windows and Unix-like platforms (including Mac OS X).

Experimental programming language No. 7: Zimbu
Most programming languages borrow features and syntax from an earlier language. Zimbu takes bits and pieces from almost all of them. The brainchild of Bram Moolenaar, creator of the Vim text editor, Zimbu aims to be a fast, concise, portable, and easy-to-read language that can be used to code anything from a GUI application to an OS kernel.

Owing to its mongrel nature, Zimbu’s syntax is unique and idiosyncratic, yet feature-rich. It uses C-like expressions and operators, but its own keywords, data types, and block structures. It supports memory management, threads, and pipes.

Portability is a key concern. Although Zimbu is a compiled language, the Zimbu compiler outputs ANSI C code, allowing binaries to be built only on platforms with a native C compiler.

Unfortunately, the Zimbu project is in its infancy. The compiler can build itself and some example programs, but not all valid Zimbu code will compile and run properly. Not all proposed features are implemented yet, and some are implemented in clumsy ways. The language specification is also expected to change over time, adding keywords, types, and syntax as necessary. Thus, documentation is spotty, too. Still, if you would like to experiment, preliminary tools are available under the Apache license.

Experimental programming language No. 8: X10
Parallel processing was once a specialized niche of software development, but with the rise of multicore CPUs and distributed computing, parallelism is going mainstream. Unfortunately, today’s programming languages aren’t keeping pace with the trend. That’s why IBM Research is developing X10, a language designed specifically for modern parallel architectures, with the goal of increasing developer productivity “times 10.”

X10 handles concurrency using the partitioned global address space (PGAS) programming model. Code and data are separated into units and distributed across one or more “places,” making it easy to scale a program from a single-threaded prototype (a single place) to multiple threads running on one or more multicore processors (multiple places) in a high-performance cluster.

X10 code most resembles Java; in fact, the X10 runtime is available as a native executable and as class files for the JVM. The X10 compiler can output C++ or Java source code. Direct interoperability with Java is a future goal of the project.

For now, the language is evolving, yet fairly mature. The compiler and runtime are available for various platforms, including Linux, Mac OS X, and Windows. Additional tools include an Eclipse-based IDE and a debugger, all distributed under the Eclipse Public License.

Experimental programming language No. 9: haXe
Lots of languages can be used to write portable code. C compilers are available for virtually every CPU architecture, and Java bytecode will run wherever there’s a JVM. But haXe (pronounced “hex”) is more than just portable. It’s a multiplatform language that can target diverse operating environments, ranging from native binaries to interpreters and virtual machines.

Developers can write programs in haXe, then compile them into object code, JavaScript, PHP, Flash/ActionScript, or NekoVM bytecode today; additional modules for outputting C# and Java are in the works. Complementing the core language is the haXe standard library, which functions identically on every target, plus target-specific libraries to expose the unique features of each platform.

The haXe syntax is C-like, with a rich feature set. Its chief advantage is that it negates problems inherent in each of the platforms it targets. For example, haXe has strict typing where JavaScript does not; it adds generics and type inference to ActionScript; and it obviates the poorly designed, haphazard syntax of PHP entirely.

Although still under development, haXe is used commercially by its creator, the gaming studio Motion Twin, so it’s no toy. It’s available for Linux, Mac OS X, and Windows under a combination of open source licenses.

Experimental programming language No. 10: Chapel
In the world of high-performance computing, few names loom larger than Cray. It should come as no surprise, then, that Chapel, Cray’s first original programming language, was designed with supercomputing and clustering in mind.

Chapel is part of Cray’s Cascade Program, an ambitious high-performance computing initiative funded in part by the U.S. Defense Advanced Research Projects Agency (DARPA). Among its goals are abstracting parallel algorithms from the underlying hardware, improving their performance across architectures, and making parallel programs more portable.

Chapel’s syntax draws from numerous sources. In addition to the usual suspects (C, C++, Java), it borrows concepts from scientific programming languages such as Fortran and Matlab. Its parallel-processing features are influenced by ZPL and High-Performance Fortran, as well as earlier Cray projects.

One of Chapel’s more compelling features is its support for “multi-resolution programming,” which allows developers to prototype applications with highly abstract code and fill in details as the implementation becomes more fully defined.

Work on Chapel is ongoing. At present, it can run on Cray supercomputers and various high-performance clusters, but it’s portable to most Unix-style systems (including Mac OS X and Windows with Cygwin). The source code is available under a BSD-style open source license.

Source:  infoworld.com

Researchers publish open-source tool for hacking WiFi Protected Setup

Tuesday, January 3rd, 2012

On December 27, the Department of Homeland Security’s Computer Emergency Readiness Team issued a warning about a vulnerability in wireless routers that use WiFi Protected Setup (WPS) to allow new devices to be connected to them. Within a day of the discovery, researchers at a Maryland-based computer security firm developed a tool that exploits that vulnerability and have made a version of it available as open source.

WiFi Protected Setup, a standard created by the WiFi Alliance, is designed specifically to let home and small business users of wireless networking configure devices easily without having to enter a long password. Offered as an optional feature on WiFi routers from a number of manufacturers, it automates the setup of the WiFi Protected Access 2 (WPA2) authentication between the router and a wireless device. One of the standard’s connection methods, supported by all WPS-capable routers, is the use of a personal identification number (PIN), usually printed on the wireless router itself, to authenticate the device.

But as security researcher Stefan Viehböck found and reported to US-CERT, the PIN implementation is susceptible to “brute-force” attacks because of the way routers respond to bad requests, and the nature of the PIN itself. When a PIN request fails, the message sent back to the wireless device attempting to connect contains information that can help an attacker by revealing whether the first half of the PIN is correct or not—reducing the number of guesses that an attacking system would have to make. Additionally, the last digit of the PIN is a checksum of the rest of the PIN. As a result, an attacker could get the PIN within 11,000 guesses. Viehböck demonstrated the vulnerability with a proof-of-concept tool he wrote in Python, available for download from his site.
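
The arithmetic behind that 11,000 figure is easy to check. The sketch below is illustrative Go, not Viehböck’s proof-of-concept: it tallies 10,000 guesses to confirm the first four digits plus 1,000 for digits five through seven, with the eighth digit contributing nothing because it is derived from the others. The wpsChecksum function reproduces the commonly published WPS check-digit calculation and is included only to show why that last digit carries no entropy.

package main

import "fmt"

// wpsChecksum derives the eighth digit of a WPS PIN from the first seven,
// following the commonly published check-digit algorithm.
func wpsChecksum(pin int) int {
    accum := 0
    for pin != 0 {
        accum += 3 * (pin % 10)
        pin /= 10
        accum += pin % 10
        pin /= 10
    }
    return (10 - accum%10) % 10
}

func main() {
    naive := 100000000 // an 8-digit PIN looks like 10^8 possibilities

    // The protocol confirms the first half of the PIN separately, and the
    // eighth digit is only a checksum of the first seven.
    firstHalf := 10000 // digits 1-4
    secondHalf := 1000 // digits 5-7
    worstCase := firstHalf + secondHalf

    fmt.Println("naive search space:", naive)
    fmt.Println("worst-case guesses:", worstCase)
    fmt.Printf("reduction factor:   ~%dx\n", naive/worstCase)

    // Example: the full PIN whose first seven digits are 1234567.
    first7 := 1234567
    fmt.Printf("sample PIN: %07d%d\n", first7, wpsChecksum(first7))
}

At a rate of one guess every one to three seconds, 11,000 attempts works out to roughly three to nine hours, broadly consistent with the 4-to-10-hour figure quoted for Reaver below.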

That wouldn’t be as much of a problem for security if wireless access points locked out devices after repeated bad PIN entries. But on many WPS wireless routers, there is no lockout feature. That means attackers can continue to attempt to connect at their leisure.

And unlike passwords, the PIN is something that can’t usually be changed by the router’s owner. That presents a huge security loophole for attackers—once they’ve gained the PIN, they can reconnect at will to the network, even if the administrator has changed the password or service set identifier (SSID) for the network. And on access devices that have multiple radios in them providing network connectivity for different SSIDs with different passwords, the PIN can provide access to all of the wireless networks on the router.

According to a blog post by Tactical Network Solutions’ Craig Heffner, this type of attack is one that researchers at the Columbia, Maryland-based security firm have been “testing, perfecting, and using for nearly a year.” Now the company has released an open-source version of its tool, Reaver, which Heffner says is capable of cracking the PIN codes of routers and gaining access to their WPA2 passwords “in approximately 4 [to] 10 hours.” The company is also offering a commercial version of the tool with features like a web interface for remote command and control, the ability to pause and resume attacks, optimized attacks for different models of wireless access points, and additional support.

The routers most vulnerable to these attacks—the ones without PIN lockout features—include products from Cisco’s Linksys division, Belkin, Buffalo, Netgear, TP-Link, ZyXEL, and Technicolor. None of the vendors has issued a statement on the vulnerability or replied to inquiries from Viehböck.

Source:  arstechnica.com

XP still top OS, but Windows 7 hot on its trail

Tuesday, January 3rd, 2012

Windows XP is still the dominant OS worldwide after more than 10 years, but Windows 7 continues to narrow the gap.

XP ended last year with a 46 percent slice of the OS market, according to December data from NetApplications. Although impressive after a decade, that number proved a hefty drop in use for XP, which kicked off 2011 with a 55 percent share and has fallen each month since then.

On the upswing, Windows 7 rang out the year with almost 37 percent of the market, a solid gain from 22 percent last January and further proof of its ongoing monthly growth.

In third place was Windows Vista, which dropped to 8 percent from more than 11 percent at the start of 2011.

Microsoft has been on a tear lately trying to convince companies and consumers alike to make the leap to Windows 7.

The company has stressed that support for Windows XP will end in April 2014, making sure to give IT departments enough time to migrate their users to the latest version of Windows.

Microsoft has even gone so far as to advise enterprises still on XP not to wait for Windows 8 and instead plan the switch to Windows 7 now.

Meanwhile, over in the land of Apple, Mac OS X grabbed almost 6 percent of the operating system market last month. OS X 10.6 Snow Leopard was the leading flavor with a 3 percent share, though it has gradually fallen in usage. Ending the year with a 2 percent share, OS X 10.7 Lion has risen in popularity since its release last summer.

And still carving out a niche among its faithful users, Linux accounted for almost 1.5 percent of the OS market in December.

Source:  CNET