Archive for the ‘Software’ Category

Saas predictions for 2014

Friday, December 27th, 2013

While the bulk of enterprise software is still deployed on-premises, SaaS (software as a service) continues to undergo rapid growth. Gartner has said the total market will top $22 billion through 2015, up from more than $14 billion in 2012.

The SaaS market will likely see significant changes and new trends in 2014 as vendors jockey for competitive position and customers continue shifting their IT strategies toward the deployment model. Here’s a look at some of the possibilities.

The matter of multitenancy: SaaS vendors such as Salesforce.com have long touted the benefits of multitenancy, a software architecture where many customers share a single application instance, with their information kept separate. Multitenancy allows vendors to patch and update many customers at once and get more mileage out of the underlying infrastructure, thereby cutting costs and easing management.
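As a rough sketch of the application-level multitenancy described here (in Python, with an invented in-memory store standing in for a real database; none of the names reflect any vendor's actual design), a single shared application instance keeps tenants' data separate by scoping every read and write to a tenant identifier:

```python
# Minimal sketch of application-level multitenancy: one application instance,
# one shared data store, every read and write scoped by a tenant identifier.
# The store and tenant names are illustrative, not any vendor's actual schema.

class SharedCrmStore:
    def __init__(self):
        # All tenants' records live in one structure, tagged with tenant_id.
        self._rows = []

    def add_contact(self, tenant_id: str, name: str, email: str) -> None:
        self._rows.append({"tenant_id": tenant_id, "name": name, "email": email})

    def contacts_for(self, tenant_id: str) -> list:
        # Isolation is enforced by the application: every query filters on
        # tenant_id, so one customer never sees another's rows.
        return [row for row in self._rows if row["tenant_id"] == tenant_id]


store = SharedCrmStore()                       # the single shared instance
store.add_contact("acme", "Ada Lovelace", "ada@example.com")
store.add_contact("globex", "Grace Hopper", "grace@example.com")
print(store.contacts_for("acme"))              # only Acme's contacts come back
```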

This year, however, other variations on multitenancy emerged, such as one offered by Oracle’s new 12c database. An option for the release allows customers to host many “pluggable” databases within a single host database, an approach that Oracle says is more secure than the application-level multitenancy used by Salesforce.com and others.

Salesforce.com itself has made a shift away from its original definition of multitenancy. During November’s Dreamforce conference, CEO Marc Benioff announced a partnership with Hewlett-Packard around a new “Superpod” option for large enterprises, wherein companies can have their own dedicated infrastructure inside Salesforce.com data centers based on HP’s Converged Infrastructure hardware.

Some might say this approach differs little from traditional application hosting. Overall, expect multitenancy to fade away as a major talking point for SaaS in 2014.

Hybrid SaaS: Oracle has made much of the fact that its Fusion Applications can be deployed either on-premises or from its cloud, but due to the apparent complexity involved with the first option, most initial Fusion customers have chosen SaaS.

Still, the concept of application code bases that can move between the two deployment models could become more popular in 2014.

While there’s no indication Salesforce.com will offer an on-premises option — and indeed, such a thing seems almost inconceivable considering the company’s “No Software” logo and marketing campaign around the convenience of SaaS — the HP partnership is clearly meant to give big companies that still have jitters about traditional SaaS a happy medium.

As in all cases, customer demand will dictate SaaS vendors’ next moves.

Geographic depth: It was no accident that Oracle co-President Mark Hurd mentioned during the company’s recent earnings call that it now has 17 data centers around the world. Vendors want enterprise customers to know their SaaS offerings are built for disaster recovery and are broadly available.

Expect “a flurry of announcements” in 2014 from SaaS vendors regarding data center openings around the world, said China Martens, an independent business applications analyst, via email. “This is another move likely to benefit end-user firms. Some firms at present may not be able to proceed with a regional or global rollout of SaaS apps because of a lack of local data center support, which may be mandated by national data storage or privacy laws.”

Keeping customers happy: On-premises software vendors such as Oracle and SAP are now honing their knowledge of something SaaS vendors such as NetSuite and Salesforce.com had to learn years earlier: How to run a software business based on annual subscriptions, not perpetual software licenses and annual maintenance.

The latter model provides companies with big one-time payments followed by highly profitable support fees. With SaaS, the money flows into a vendor’s coffers in a much different manner, and it’s arguably also easier for dissatisfied customers to move to a rival product compared to an on-premises deployment.

As a result, SaaS vendors have suffered from “churn,” or customer turnover. In 2014, there will be increased focus on ways to keep customers happy and in the fold, according to Karan Mehandru, general partner at venture capital firm Trinity Ventures.

Next year “will further awareness that the purchase of software by a customer is not the end of the transaction but rather the beginning of a relationship that lasts for years,” he wrote in a recent blog post. “Customer service and success will be at the forefront of the customer relationship management process where terms like retention, upsells and churn reduction get more air time in board meetings and management sessions than ever before.”

Consolidation in marketing, HCM: Expect a higher pace of merger and acquisition activity in the SaaS market “as vendors buy up their competitors and partners,” Martens said.

HCM (human capital management) and marketing software companies may particularly find themselves being courted. Oracle, SAP and Salesforce.com have all invested heavily in these areas already, but the likes of IBM and HP may also feel the need to get in the game.

A less likely scenario would be a major merger between SaaS vendors, such as Salesforce.com and Workday.

SaaS goes vertical: “There will be more stratification of SaaS apps as vendors build or buy with the aim of appealing to particular types of end-user firms,” Martens said. “In particular, vendors will either continue to build on early industry versions of their apps and/or launch SaaS apps specifically tailored to particular verticals, e.g., healthcare, manufacturing, retail.”

However, customers will be burdened with figuring out just how deep the industry-specific features in these applications are, as well as gauging how committed the vendor is to the particular market, Martens added.

Can’t have SaaS without a PaaS: Salesforce.com threw down the gauntlet to its rivals in November, announcing Salesforce1, a revamped version of its PaaS (platform as a service) that couples its original Force.com offering with tools from its Heroku and ExactTarget acquisitions, a new mobile application, and 10 times as many APIs (application programming interfaces) as before.

A PaaS serves as a multiplying force for SaaS companies, creating a pool of developers and systems integrators who create add-on applications and provide services to customers while sharing an interest in the vendor’s success.

Oracle, SAP and other SaaS vendors have been building out their PaaS offerings and will make plenty of noise about them next year.

Source:  cio.com

Cyber criminals offer malware for Nginx, Apache Web servers

Thursday, December 26th, 2013

A new malware program that functions as a module for the Apache and Nginx Web servers is being sold on cybercrime forums, according to researchers from security firm IntelCrawler.

The malware is called Effusion and according to the sales pitch seen by IntelCrawler, a start-up firm based in Los Angeles that specializes in cybercrime intelligence, it can inject code in real time into websites hosted on the compromised Web servers. By injecting content into a website, attackers can redirect visitors to exploits or launch social engineering attacks.

The Effusion module works with Nginx from version 0.7 up to the latest stable version, 1.4.4, and with Apache running on 32- and 64-bit versions of Linux and FreeBSD. Modules extend Apache’s and Nginx’s core functionality.

The malware can inject rogue code into static content of certain MIME types, including JavaScript and HTML, and in PHP templates at the start, end or after a specific tag. Attackers can push configuration updates and control code modifications remotely.

Filters can also be used to restrict when the injection happens. Effusion supports filtering by referrer header, which can be used to target only visitors who come from specific websites; by User-Agent header, which can be used to target users of specific browsers; and by IP address or address range.
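IntelCrawler has not published the module's code, but the targeting filters described above are conceptually simple. The Python sketch below is a generic illustration of that kind of matching only — referrer, User-Agent and IP range — with invented criteria and no injection logic:

```python
# Generic illustration of request-targeting filters (referrer, User-Agent, IP).
# It only decides whether a request matches; the criteria are invented and
# there is no injection logic here.
import ipaddress

def request_matches(headers: dict, client_ip: str, filters: dict) -> bool:
    referrer = headers.get("Referer", "")
    user_agent = headers.get("User-Agent", "")

    # Target only visitors arriving from a specific site.
    if filters.get("referrer_contains") and filters["referrer_contains"] not in referrer:
        return False
    # Target only users of a specific browser or platform.
    if filters.get("user_agent_contains") and filters["user_agent_contains"] not in user_agent:
        return False
    # Target only a specific address range.
    if filters.get("ip_network") and \
            ipaddress.ip_address(client_ip) not in ipaddress.ip_network(filters["ip_network"]):
        return False
    return True

example_filters = {
    "referrer_contains": "example-search.com",
    "user_agent_contains": "Windows NT",
    "ip_network": "203.0.113.0/24",
}
print(request_matches(
    {"Referer": "http://example-search.com/q", "User-Agent": "Mozilla/5.0 (Windows NT 10.0)"},
    "203.0.113.7",
    example_filters,
))  # True: this visitor would be selected by the example filters
```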

The malware can check whether it has root access, something that could allow the attackers greater control over the underlying system. It can also delete the injected content when suspicious processes are detected in order to hide itself, Andrey Komarov, IntelCrawler’s CEO, said via email.

The Effusion authors offer precompiled builds for $2,500 per build and plan to vet buyers, Komarov said. This suggests they’re interested in selling it only to a limited number of people so they can continue to offer support and develop the malware at the same time, he said.

While this is not the first malware to function as an Apache module, it is one of the very few so far to also target Nginx, a high-performance Web server that has grown considerably in popularity in recent years.

According to a December Web server survey by Internet services firm Netcraft, Nginx is the third most widely used Web server software after Apache and Microsoft IIS, and has a market share of over 14%. Because it’s built to handle high numbers of concurrent connections, it is used to host heavily trafficked websites including Netflix, Hulu, Pinterest, CloudFlare, Airbnb, WordPress.com, GitHub and SoundCloud.

Source: computerworld.com

Study finds zero-day vulnerabilities abound in popular software

Friday, December 6th, 2013

Subscribers to organizations that sell exploits for vulnerabilities not yet known to software developers gain daily access to scores of flaws in the world’s most popular technology, a study shows.

NSS Labs, which is in the business of testing security products for corporate subscribers, found that over the last three years, subscribers of two major vulnerability programs had access on any given day to at least 58 exploitable flaws in Microsoft, Apple, Oracle or Adobe products.

In addition, NSS Labs found that an average of 151 days passed from the time the programs purchased a vulnerability from a researcher until the affected vendor released a patch.

The findings, released Thursday, were based on an analysis of 10 years of data from TippingPoint, a network security maker Hewlett-Packard acquired in 2010, and iDefense, a security intelligence service owned by VeriSign. Both organizations buy vulnerabilities, inform subscribers and work with vendors in producing patches.

Stefan Frei, NSS research director and author of the report, said the actual number of secret vulnerabilities available to cybercriminals, government agencies and corporations is much larger, because of the amount of money they are willing to pay.

Cybercriminals will buy so-called zero-day vulnerabilities in the black market, while government agencies and corporations purchase them from brokers and exploit clearinghouses, such as VUPEN Security, ReVuln, Endgame Systems, Exodus Intelligence and Netragard.

The six vendors collectively can provide at least 100 exploits per year to subscribers, Frei said. According to a February 2010 price list, Endgame sold 25 zero-day exploits a year for $2.5 million.

In July, Netragard founder Adriel Desautels told The New York Times that the average vulnerability sells from around $35,000 to $160,000.

Vulnerabilities are ever-present partly because of developer errors and partly because software makers are in the business of selling product, experts say. The latter means meeting deadlines for shipping software often trumps spending additional time and money on security.

Because of the number of vulnerabilities bought and sold, companies that believe their intellectual property makes them prime targets for well-financed hackers should assume their computer systems have already been breached, Frei said.

“One hundred percent prevention is not possible,” he said.

Therefore, companies need to have the experts and security tools in place to detect compromises, Frei said. Once a breach is discovered, then there should be a well-defined plan in place for dealing with it.

That plan should include gathering forensic evidence to determine how the breach occurred. In addition, all software on the infected systems should be removed and reinstalled.

Steps taken following a breach should be reviewed regularly to make sure they are up to date.

Source:  csoonline.com

RBS admits decades of IT neglect after systems crash

Tuesday, December 3rd, 2013

Royal Bank of Scotland has neglected its technology for decades, the state-backed bank’s boss admitted on Tuesday after a system crash left more than 1 million customers unable to withdraw cash or pay for goods.

The problem, which lasted for three hours on Monday, one of the busiest online shopping days of the year, raised questions about the resilience of RBS’s technology, which analysts and banking industry sources regard as outdated and made up of a complex patchwork of systems after dozens of acquisitions.

“For decades, RBS failed to invest properly in its systems,” Ross McEwan, who became chief executive in October, said.

“Last night’s systems failure was unacceptable … I’m sorry for the inconvenience we caused our customers,” he said, adding he would outline plans in the New Year to improve the bank and increase investment.

The latest crash could cost RBS millions of pounds in compensation and follows a more serious crash in its payments system last year that Britain’s regulator is still investigating.

The regulator has been scrutinising the resilience of all banks’ technology to address concerns that outdated systems and a lack of investment could cause more crashes.

The technology glitch is another setback for the bank’s efforts to recover from the financial crisis when it had to be rescued in a taxpayer-funded bailout. The government still owns 82 percent of RBS.

RBS’s cash machines did not work from 1830-2130 GMT on Monday and customers trying to pay for goods with debit cards at supermarkets and petrol stations, buy goods online or use online or mobile banking were also unable to complete transactions.

The bank said the problem had been fixed and it would compensate anyone who had been left out of pocket as a result.

About 250,000 people an hour would typically use RBS’s cash machines on a Monday night, and tens of thousands more customers would have used the other affected services across its RBS, NatWest and Ulster Bank operations. RBS has 24 million customers in the UK.

Twitter lit up with customer complaints.

“RBS a joke of a bank. Card declined last night and almost 1,000 pounds vanished from balance this morning! What is going on?” tweeted David MacLeod from Edinburgh, echoing widely-felt frustration with the bank.

Some people tweeted on Tuesday they were still experiencing problems and accounts were showing incorrect balances.

RUN OF PROBLEMS

Millions of RBS customers were affected in June 2012 by problems with online banking and payments after a software upgrade went wrong.

That cost the bank 175 million pounds ($286 million) in compensation for customers and extra payments to staff after the bank opened branches for longer in response. Stephen Hester, chief executive at the time, waived his 2012 bonus following the problem. Britain’s financial watchdog is still investigating and could fine the bank.

The latest crash occurred on so-called Cyber Monday, one of the busiest days for online shopping before Christmas.

RBS said the problem was not related to volume, but gave no details on what had caused the system crash.

McEwan has vowed to improve customer service and has said technology in British banking lags behind Australia, where he previously worked. He has pledged to spend 700 million pounds in the next three years on UK branches, with much of that earmarked for improving systems.

RBS’s former CEO Fred Goodwin has been blamed for under-investing in technology and for not building robust enough systems following its takeover of NatWest in 2000.

Andy Haldane, director for financial stability at the Bank of England, told lawmakers last year banks needed to transform their IT because they had not invested enough during the boom years. Haldane said 70-80 percent of big banks’ IT spending was on maintaining legacy systems rather than investing in improvements.

“It appears to be another example of the lack of sufficient investment in technology by a bank that is still hurting. They are trying to do it on a shoestring, because they don’t have any extra money,” said Ralph Silva at research firm SRN.

“They need to do more, they need to allocate a greater portion of their spend to IT.”

Source:  reuters.com

U.S. government rarely uses best cybersecurity steps: advisers

Friday, November 22nd, 2013

The U.S. government itself seldom follows the best cybersecurity practices and must drop its old operating systems and unsecured browsers as it tries to push the private sector to tighten its practices, technology advisers told President Barack Obama.

“The federal government rarely follows accepted best practices,” the President’s Council of Advisors on Science and Technology said in a report released on Friday. “It needs to lead by example and accelerate its efforts to make routine cyberattacks more difficult by implementing best practices for its own systems.”

PCAST is a group of top U.S. scientists and engineers who make policy recommendations to the administration. William Press, computer science professor at the University of Texas at Austin, and Craig Mundie, senior adviser to the CEO at Microsoft Corp, made up the cybersecurity working group.

The Obama administration this year stepped up its push for critical industries to bolster their cyber defenses, and Obama in February issued an executive order aimed at countering the lack of progress on cybersecurity legislation in Congress.

As part of the order, a non-regulatory federal standard-setting board last month released a draft of voluntary standards that companies can adopt, which it compiled through industry workshops.

But while the government urges the private sector to adopt such minimum standards, technology advisers say it must raise its own standards.

The advisers said the government should rely more on automatic updates of software, require better proof of identities of people, devices and software, and more widely use the Trusted Platform Module, an embedded security chip.

The advisers also said for swifter response to cyber threats, private companies should share more data among themselves and, “in appropriate circumstances” with the government. Press said the government should promote such private sector partnerships, but that sensitive information exchanged in these partnerships “should not be and would not be accessible to the government.”

The advisers steered the administration away from “government-mandated, static lists of security measures” and toward standards reached by industry consensus, but audited by third parties.

The report also pointed to Internet service providers as well-positioned to spur rapid improvements by, for instance, voluntarily alerting users when their devices are compromised.

Source: reuters.com

HP: 90 percent of Apple iOS mobile apps show security vulnerabilities

Tuesday, November 19th, 2013

HP today said security testing it conducted on more than 2,000 Apple iOS mobile apps developed for commercial use by some 600 large companies in 50 countries showed that nine out of 10 had serious vulnerabilities.

Mike Armistead, HP vice president and general manager, said testing was done on apps from 22 iTunes App Store categories that are used for business-to-consumer or business-to-business purposes, such as banking or retailing. HP said 97 percent of these apps inappropriately accessed private information sources within a device, and 86 percent proved to be vulnerable to attacks such as SQL injection.

Apple’s guidelines for developing iOS apps help developers, but they don’t go far enough in terms of security, Armistead says. Mobile apps are being used to extend the corporate website to mobile devices, but companies in the process “are opening up their attack surfaces,” he says.

In its summary of the testing, HP said 86 percent of the apps tested lacked the means to protect themselves from common exploits, such as misuse of encrypted data, cross-site scripting and insecure transmission of data.

The same number did not have security built in during the early part of the development process, according to HP. Three quarters “did not use proper encryption techniques when storing data on mobile devices, which leaves unencrypted data accessible to an attacker.” A large number of the apps didn’t implement SSL/HTTPS correctly.

To discover weaknesses in apps, developers need to adopt practices such as security scanning of apps, penetration testing and a secure coding development life-cycle approach, HP advises.
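Two of the weaknesses HP lists map onto well-known coding mistakes: disabling TLS certificate verification and writing sensitive data to local storage in the clear. The sketch below shows the corresponding safer patterns in Python (a platform-neutral stand-in for mobile code, using the third-party requests and cryptography packages; the endpoint and file name are made up):

```python
# Safer counterparts to two weaknesses in the report, shown in Python.
# The endpoint and file name are hypothetical.
import requests
from cryptography.fernet import Fernet

# 1) Insecure transmission: keep TLS certificate verification on.
#    requests verifies certificates by default; passing verify=False is the
#    anti-pattern behind "insecure transmission of data".
response = requests.get("https://api.example.com/account", timeout=10)
response.raise_for_status()

# 2) Unencrypted local storage: encrypt sensitive data before writing it out.
key = Fernet.generate_key()          # in a real app, keep this in a key store, not next to the data
ciphertext = Fernet(key).encrypt(response.content)
with open("account.cache", "wb") as cache:
    cache.write(ciphertext)          # anyone reading the file sees only ciphertext
```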

The need to develop mobile apps quickly for business purposes is one of the main contributing factors leading to weaknesses in these apps made available for public download, according to HP. And the weakness on the mobile side is impacting the server side as well.

“It is our earnest belief that the pace and cost of development in the mobile space has hampered security efforts,” HP says in its report, adding that “mobile application security is still in its infancy.”

Source:  infoworld.com

New malware variant suggests cybercriminals targeting SAP users

Tuesday, November 5th, 2013

The malware checks if infected systems have a SAP client application installed, ERPScan researchers said

A new variant of a Trojan program that targets online banking accounts also contains code to search if infected computers have SAP client applications installed, suggesting that attackers might target SAP systems in the future.

The malware was discovered a few weeks ago by Russian antivirus company Doctor Web, which shared it with researchers from ERPScan, a developer of security monitoring products for SAP systems.

“We’ve analyzed the malware and all it does right now is to check which systems have SAP applications installed,” said Alexander Polyakov, chief technology officer at ERPScan. “However, this might be the beginning for future attacks.”

When malware does this type of reconnaissance to see if particular software is installed, the attackers either plan to sell access to those infected computers to other cybercriminals interested in exploiting that software or they intend to exploit it themselves at a later time, the researcher said.

Polyakov presented the risks of such attacks and others against SAP systems at the RSA Europe security conference in Amsterdam on Thursday.

To his knowledge, this is the first piece of malware targeting SAP client software that wasn’t created as a proof-of-concept by researchers, but by real cybercriminals.

SAP client applications running on workstations have configuration files that can be easily read and contain the IP addresses of the SAP servers they connect to. Attackers can also hook into the application processes and sniff SAP user passwords, or read them from configuration files and GUI automation scripts, Polyakov said.
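From a defender's perspective, the reconnaissance step is easy to reproduce: the sketch below checks a Windows workstation for the kind of readable SAP GUI configuration files described here, roughly the inventory an administrator might run to see what such malware would find. The paths are common defaults and should be treated as assumptions rather than an exhaustive list:

```python
# Defensive inventory sketch: does this workstation expose readable SAP GUI
# configuration files? The paths below are common defaults and are
# assumptions, not an exhaustive or authoritative list.
import os

CANDIDATE_PATHS = [
    os.path.expandvars(r"%APPDATA%\SAP\Common\saplogon.ini"),
    os.path.expandvars(r"%APPDATA%\SAP\Common\SAPUILandscape.xml"),
    r"C:\Windows\saplogon.ini",
]

def find_sap_client_configs() -> list:
    # Checking for these files is essentially all the reported malware does
    # today; flagging them lets an administrator review what connection
    # details (server addresses, system IDs) they would leak.
    return [path for path in CANDIDATE_PATHS if os.path.isfile(path)]

if __name__ == "__main__":
    found = find_sap_client_configs()
    if found:
        print("Readable SAP client configuration found:", found)
    else:
        print("No SAP client configuration files found in the checked locations.")
```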

There’s a lot that attackers can do with access to SAP servers. Depending on what permissions the stolen credentials have, they can steal customer information and trade secrets or they can steal money from the company by setting up and approving rogue payments or changing the bank account of existing customers to redirect future payments to their account, he added.

There are efforts in some enterprise environments to limit permissions for SAP users based on their duties, but those are big and complex projects. In practice most companies allow their SAP users to do almost everything or more than what they’re supposed to, Polyakov said.

Even if some stolen user credentials don’t give attackers the access they want, there are default administrative credentials that many companies never change or forget to change on some instances of their development systems that have snapshots of the company data, the researcher said.

With access to SAP client software, attackers could steal sensitive data like financial information, corporate secrets, customer lists or human resources information and sell it to competitors. They could also launch denial-of-service attacks against a company’s SAP servers to disrupt its business operations and cause financial damage, Polyakov said.

SAP customers are usually very large enterprises. There are almost 250,000 companies using SAP products in the world, including over 80 percent of those on the Forbes 500 list, according to Polyakov.

If timed correctly, some attacks could even influence the company’s stock and would allow the attackers to profit on the stock market, according to Polyakov.

Dr. Web detects the new malware variant as part of the Trojan.Ibank family, but this is likely a generic alias, he said. “My colleagues said that this is a new modification of a known banking Trojan, but it’s not one of the very popular ones like ZeuS or SpyEye.”

However, malware is not the only threat to SAP customers. ERPScan discovered a critical unauthenticated remote code execution vulnerability in SAProuter, an application that acts as a proxy between internal SAP systems and the Internet.

A patch for this vulnerability was released six months ago, but ERPScan found that out of 5,000 SAProuters accessible from the Internet, only 15 percent currently have the patch, Polyakov said. If you get access to a company’s SAProuter, you’re inside the network and you can do the same things you can when you have access to a SAP workstation, he said.

Source:  csoonline.com

Adobe hack attack affected 38 million accounts

Tuesday, October 29th, 2013

The recent security breach that hit Adobe exposed customer IDs, passwords, and credit and debit card information.

A cyberattack launched against Adobe affected more than 10 times the number of users initially estimated.

On October 3, Adobe revealed that it had been the victim of an attack that exposed Adobe customer IDs and encrypted passwords. At the time, the company said that hackers gained access to credit card records and login information for around 3 million users. But the number of affected accounts has turned out to be much higher.

The attack actually involved 38 million active accounts, according to security blog Krebs on Security. Adobe confirmed that number in an e-mail to Krebs.

“So far, our investigation has confirmed that the attackers obtained access to Adobe IDs and (what were at the time valid), encrypted passwords for approximately 38 million active users,” Adobe spokeswoman Heather Edell said. “We have completed e-mail notification of these users. We also have reset the passwords for all Adobe IDs with valid, encrypted passwords that we believe were involved in the incident — regardless of whether those users are active or not.”

The attack also gained access to many invalid or inactive Adobe accounts — those with invalid encrypted passwords and those used as test accounts.

“We are still in the process of investigating the number of inactive, invalid, and test accounts involved in the incident,” Edell added. “Our notification to inactive users is ongoing.”

CNET contacted Adobe for comment and will update the story with any further details.

Following the initial report of the attack, Adobe reset the passwords on compromised customer accounts and sent e-mails to those whose accounts were breached and whose credit card or debit card information was exposed.

Adobe has posted a customer security alert page with more information on the breach and an option whereby users can change their passwords.

Source:  CNET

Microsoft and Symantec push to combat key, code-signed malware

Wednesday, October 23rd, 2013

Code-signed malware hot spots said to be China, Brazil, South Korea

Alarming growth in malware signed with fraudulently obtained keys and code-signing certificates, used to trick users into downloading harmful code, is prompting Microsoft and Symantec to push for tighter controls on the way the world’s certificate authorities issue the keys and certificates used in code-signing.

It’s not just stolen keys that are the problem in code-signed malware but “keys issued to people who aren’t who they say they are,” says Dean Coclin, senior director of business development in the trust services division at Symantec.

Coclin says China, Brazil and South Korea are currently the hot spots where the problem of malware signed with certificates and keys obtained from certificate authorities is worst. “We need a uniform way to vet companies and individuals around the world,” says Coclin. He says that doesn’t really exist today for certificates used in code-signing, but Microsoft and Symantec are about to float a plan that might change that.

Code-signed malware appears to be aimed mostly at Microsoft Windows and Java, maintained by Oracle, says Coclin, adding that malicious code-signing of Android apps has also quickly become a lawless “Wild West.”

Under the auspices of the Certificate Authority/Browser Forum, an industry group in which Microsoft and Symantec are members, the two companies next month plan to put forward what Coclin describes as proposed new “baseline requirements and audit guidelines” that certificate authorities would have to follow to verify the identity of purchasers of code-signing certificates. Microsoft is keenly interested in this effort because “Microsoft is out to protect Windows,” says Coclin.

These new identity-proofing requirements will be detailed next month in the upcoming CAB Forum document from its Code-Signing Group. The underlying concept is that certificate authorities would have to follow more stringent practices related to proofing identity, Coclin says.

The CAB Forum includes the main Internet browser software makers, Microsoft, Google, Opera Software and The Mozilla Foundation, combined with many of the major certificate authorities, including Symantec’s own certificate authority units Thawte and VeriSign, which earlier acquired GeoTrust.

Several other certificate authorities, including Comodo, GoDaddy, GlobalSign, Trustwave and Network Solutions, are also CAB Forum members, plus a number of certificate authorities based abroad, such as Chunghwa Telecom Co. Ltd., Swisscom, TURKTRUST and TAIWAN-CA, Inc. They are all part of a vast global commercial certificate authority infrastructure, with numerous sub-authorities operating in a root-based chain of trust. Outside this commercial certificate authority structure, governments and enterprises also use their own controlled certificate authority systems to issue and manage digital certificates for code-signing purposes.

Use of digital certificates for code-signing isn’t as widespread as that for SSL, for example, but as detailed in the new White Paper on the topic from the industry group called the CA Security Council, code-signing is intended to assure the identity of software publishers and ensure that the signed code has not been tampered with.
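At its core, that assurance is a digital-signature check: the publisher signs a hash of the code with a private key, and the platform verifies the signature against the public key in the publisher's certificate. A stripped-down sketch of the verification step, using Python's cryptography package and deliberately omitting the certificate-chain, timestamp and revocation checks a real loader performs, might look like this:

```python
# Bare-bones illustration of verifying signed code: check that a signature
# over the file's contents matches the publisher's public key. Real code
# signing also validates the certificate chain, timestamp and revocation
# status, which this sketch deliberately omits.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Stand-in for the publisher's key pair; in practice the public key arrives
# inside an X.509 code-signing certificate issued by a certificate authority.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

code = b"binary contents of the software being shipped"

# Publisher side: sign the code.
signature = private_key.sign(code, padding.PKCS1v15(), hashes.SHA256())

# User side: verify before trusting the code. Tampering with either the code
# or the signature makes verify() raise InvalidSignature.
try:
    public_key.verify(signature, code, padding.PKCS1v15(), hashes.SHA256())
    print("Signature valid: the code matches this publisher's key and is unmodified.")
except InvalidSignature:
    print("Signature invalid: do not run this code.")
```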

Coclin, who is co-chair of the CAB Forum, says precise details about new anti-fraud measures for proofing the identity of those buying code-signing certificates from certificate authorities will be unveiled next month and subject to a 60-day comment period. These new proposed identity-proofing requirements will be discussed at a meeting planned in February at Google before any adoption of them.

The CAB Forum’s code-signing group is expected to espouse changes related to security that may impact software vendors and enterprises that use code-signing in their software development efforts, so the CAB Forum wants maximum feedback before going ahead with its ideas on improving security in certificate issuance.

Coclin points out that commercial certificate authorities today must pass certain audits done by KPMG or PricewaterhouseCoopers, for example. In the future, if new requirements say certificate authorities have to verify the identity of customers in a certain way and they don’t do it properly, that information could be shared with an Internet browser maker like Microsoft, which makes the Internet Explorer browser. Because browsers play a central role in the certificate-based code-signing process, Microsoft, for example, could take action to ensure its browser and OS do not recognize certificates issued by certificate authorities that violate any new identity-proofing procedures. But how any of this shakes out remains to be seen.

McAfee, which unlike Symantec doesn’t have a certificate authority business unit and is not a member of the CAB Forum, last month at its annual user conference presented its own research about how legitimate certificates are increasingly being used to sign malware in order to trick victims into downloading malicious code.

“The certificates aren’t actually malicious — they’re not forged or stolen, they’re abused,” said McAfee researcher Dave Marcus. He said in many instances, according to McAfee’s research on code-signed malware, the attacker has gone out and obtained legitimate certificates from a company associated with top-root certificate authorities such as Comodo, Thawte or VeriSign. McAfee has taken to calling this the problem of “abused certificates,” an expression that’s not yet widespread in the industry as a term to describe the threat.

Coclin notes that one idea that would advance security would be a “code-signing portal” where a certificate authority could scan submitted code for signs of malware before signing it. He also said that using hardware-based keys and security modules to better protect the private keys used in the code-signing process is good practice.

Source:  networkworld.com

Weighing the IT implications of implementing SDNs

Friday, September 27th, 2013

Software-defined anything has myriad issues for data centers to consider before implementation

Software-defined networks should make IT execs think through a lot of key factors before implementation.

Issues such as technology maturity, cost efficiencies, security implications, policy establishment and enforcement, interoperability and operational change weigh heavily on IT departments considering software-defined data centers. But perhaps the biggest consideration in software-defining your IT environment is, why would you do it?

“We have to present a pretty convincing story of, why do you want to do this in the first place?” said Ron Sackman, chief network architect at Boeing, at the recent Software Defined Data Center Symposium in Santa Clara. “If it ain’t broke, don’t fix it. Prove to me there’s a reason we should go do this, particularly if we already own all of the equipment and packets are flowing. We would need a compelling use case for it.”

And if that compelling use case is established, the next task is to get everyone on board and comfortable with the notion of a software-defined IT environment.

“The willingness to accept abstraction is kind of a trade-off between control of people and hardware vs. control of software,” says Andy Brown, Group CTO at UBS, speaking on the same SDDC Symposium panel. “Most operations people will tell you they don’t trust software. So one of the things you have to do is win enough trust to get them to be able to adopt.”

Trust might start with assuring the IT department and its users that a software-defined network or data center is secure, at least as secure as the environment it is replacing or founded on. Boeing is looking at SDN from a security perspective trying to determine if it’s something it can objectively recommend to its internal users.

“If you look at it from a security perspective, the best security for a network environment is a good design of the network itself,” Sackman says. “Things like Layer 2 and Layer 3 VPNs backstop your network security, and they have not historically been a big cyberattack surface. So my concern is, are the capex and opex savings going to justify the risk that you’re taking by opening up a bigger cyberattack surface, something that hasn’t been a problem to this point?”

Another concern Sackman has is in the actual software development itself, especially if a significant amount of open source is used.

“What sort of assurance does someone have – particularly if this is open source software – that the software you’re integrating into your solution is going to be secure?” he asks. “How do you scan that? There’s a big development-time security vector that doesn’t really exist at this point.”

Policy might be the key to ensuring that the security and other operational practices in place before an SDN/SDDC rollout are not disrupted after implementation. Policy-based orchestration, automation and operational execution are touted as among SDN’s chief benefits.

“I believe that policy will become the most important factor in the implementation of a software-defined data center because if you build it without policy, you’re pretty much giving up on the configuration strategy, the security strategy, the risk management strategy, that have served us so well in the siloed world of the last 20 years,” UBS’ Brown says.
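To make “policy-based” concrete: in a software-defined environment the policy itself is data that automation consults before every change. The toy Python sketch below (the rules are entirely invented) shows that shape, with a proposed network change rejected because it violates a declared isolation rule:

```python
# Toy illustration of policy-driven automation: a declared policy is consulted
# before a change is pushed to the network. The rules are invented.
POLICY = {
    "allowed_vlan_range": range(100, 200),
    "segments_forbidden_to_peer": {frozenset({"pci", "guest-wifi"})},
}

def change_is_allowed(change: dict) -> tuple:
    if change["vlan"] not in POLICY["allowed_vlan_range"]:
        return False, "VLAN is outside the approved range"
    if frozenset({change["src_segment"], change["dst_segment"]}) in POLICY["segments_forbidden_to_peer"]:
        return False, "those segments must stay isolated by policy"
    return True, "change complies with policy"

proposed = {"vlan": 150, "src_segment": "guest-wifi", "dst_segment": "pci"}
ok, reason = change_is_allowed(proposed)
print(ok, "-", reason)   # False - those segments must stay isolated by policy
```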

Software-defined data centers also promise to break down those silos through cross-function orchestration of the compute, storage, network and application elements in an IT shop. But that’s easier said than done, Brown notes – interoperability is not guaranteed in the software-defined world.

“Information protection and data obviously have to interoperate extremely carefully,” he says. The success of software defined workload management – aka, virtualization and cloud – in a way has created a set of children, not all of which can necessarily be implemented in parallel, but all of which are required to get to the end state of the software defined data center.

“Now when you think of all the other software abstraction we’re trying to introduce in parallel, someone’s going to cry uncle. So all of these things need to interoperate with each other.”

So are the purported capital and operational cost savings of implementing SDN/SDDCs worth the undertaking? Do those cost savings even exist?

Brown believes they exist in some areas and not in others.

“There’s a huge amount of cost take-out in software-defined storage that isn’t necessarily there in SDN right now,” he said. “And the reason it’s not there in SDN is because people aren’t ripping out the expensive underlying network and replacing it with SDN. Software-defined storage probably has more legs than SDN because of the cost pressure. We’ve got massive cost targets by the end of 2015 and if I were backing horses, my favorite horse would be software-defined storage rather than software-defined networks.”

Sackman believes the overall savings are there in SDN/SDDCs but again, the security uncertainty may make those benefits not currently worth the risk.

“The capex and opex savings are very compelling, and there are particular use cases specifically for SDN that I think would be great if we could solve specific pain points and problems that we’re seeing,” he says. “But I think, in general, security is a big concern, particularly if you think about competitors co-existing as tenants in the same data center — if someone develops code that’s going to poke a hole in the L2 VPN in that data center and export data from Coke to Pepsi.

“We just won a proposal for a security operations center for a foreign government, and I’m thinking can we offer a better price point on our next proposal if we offer an SDN switch solution vs. a vendor switch solution? A few things would have to happen before we feel comfortable doing that. I’d want to hear a compelling story around maturity before we would propose it.”

Source: networkworld.com

Research shows IT blocking applications based on popularity not risk

Thursday, September 26th, 2013

Tactic leads to less popular, but still risky cloud-based apps freely accessing networks

A new study, based on collective data taken from 3 million users across more than 100 companies, shows that cloud-based apps and services are being blocked based on popularity rather than risk.

A new study from Skyhigh Networks, a firm that focuses on cloud access security, shows that most of the cloud-based blocking within IT focuses on popular well-known apps, and not risk. The problem with this method of security is that often, cloud-based apps that offer little to no risk are prohibited on the network, while those that actually do pose risk are left alone, freely available to anyone who knows of them.

Moreover, the data collected from some 3 million users across 100 organizations shows that IT seriously underestimates the number of cloud-based apps and services running on their network. For example, on average there are about 545 cloud services in use by a given organization, yet if asked IT will cite a number that’s only a fraction of that.

When it comes to the types of cloud-based apps and services blocked by IT, the primary focus seems to be on preventing productivity loss rather than reducing risk, and blocking frequently centers on name recognition. For example, Netflix is the number one blocked app overall, and services such as iCloud, Google Drive, Dropbox, SourceForge, WebEx, Bit.ly, StumbleUpon, and Skype are commonly flagged too.

However, while those services do carry some risk, they are also the top brands in their respective verticals. Yet, while they’re flagged and prohibited on many networks, services such as SendSpace, Codehaus, FileFactory, authorSTREAM, MovShare, and WeTransfer go unrestricted, even though they actually pose more risk than the commonly blocked apps.

Digging deeper, the study shows that in the financial services sector, iCloud, and Google Drive are commonly blocked, yet SendSpace and CloudApp, which are direct alternatives, are rarely — if ever — filtered. In healthcare, Dropbox and Memeo (an up and coming file sharing service) are blocked, which is expected. Yet, once again, healthcare IT allows services such as WeTransfer, 4shared, and Hostingbulk on the network.

In the high tech sector, Skype, Google Drive, and Dropbox are commonly expunged from network traffic, yet RapidGator, ZippyShare, and SkyPath are fully available. In manufacturing, where WatchDox, Force.com, and Box are regularly blocked, CloudApp, SockShare, and RapidGator are fully used by employees seeking alternatives.

In a statement, Rajiv Gupta, founder and CEO at Skyhigh Networks, said that the report shows that “there are no consistent policies in place to manage the security, compliance, governance, and legal risks of cloud services.”

Separately, in comments to CSO, Gupta agreed that one of the main causes for this large disconnect in content filtering is a lack of understanding when it comes to the risks behind most cloud-based apps and services (outside of the top brands), and that many commercial content filtering solutions simply do not cover the alternatives online, or as he put it, “they’re not cloud aware.”

This, if anything, proves that risk management can’t be confined within a checkbox and a bland category within a firewall’s content filtering rules.

“Cloud is very much the wild, wild west. Taming the cloud today largely is a whack-a-mole exercise…with your bare hands,” Gupta told us.

Source:  csoonline.com

SaaS governance: Five key questions

Monday, September 23rd, 2013

Increasingly savvy customers are sharpening their requirements for SaaS. Providers must be able to answer these key questions for potential clients.

IT governance is linked to security and data protection standards—but it is more than that. Governance includes aligning IT with business strategies, optimizing operational and system workflows and processes, and the insertion of an IT control structure for IT assets that meets the needs of auditors and regulators.

As more companies move to cloud-based solutions like SaaS (software as a service), regulators and auditors are also sharpening their requirements. “What we are seeing is an increased number of our corporate clients asking us for our own IT audits, which they, in turn, insert into their enterprise audit papers that they show auditors and regulators,” said one SaaS manager.

This places more pressure on SaaS providers, which still do not consistently perform audits, and often will admit that when they do, it is usually at the request of a prospect before the prospect signs with them.

Should enterprise IT and its regulators be concerned? The answer is fast changing to “yes.”

This means that now is the time for SaaS providers to get their governance in order.

Here are five questions that SaaS providers can soon expect to hear from clients and prospects:

#1 Can you provide me with an IT security audit?

Clients and prospects will want to know what your physical facility and IT security audit results have been, in addition to the kinds of security measures that you employ on a day to day basis. They will expect that your security measures are best-in-class, and that you also have data on internal and external penetration testing.

#2 What are your data practices?

How often do you back up data? Where do you store it? If you are using multi-tenant systems on a single server, how can a client be assured that its data (and systems) remain segregated from the systems and data of others that are also running on the same server? Can a client authorize its own security permissions for its data, down to the level of a single individual within the company or at a business partner’s?

#3 How will you protect my intellectual property?

You will get clients that will want to develop custom applications or reports for their business alone. In some cases, the client might even develop it on your cloud. In other cases, the client might retain your services to develop a specification defined by the client into a finished application. The question is this: whose property does the custom application become, and who has the right to distribute it?

One SaaS provider takes the position that all custom reports it delivers (even if individual clients pay for their development) belong to the provider—and that the provider is free to repurpose the reports for others. Another SaaS provider obtains up-front funding from the client for a custom application, and then reimburses the client for the initial funding as the provider sells the solution to other clients. In both cases, the intellectual property rights are lost to the client—but there are some clients that won’t accept these conditions.

If you are a SaaS provider, it’s important to understand the industry verticals you serve and how individuals in these industry verticals feel about intellectual property.

#4 What are your standards of performance?

I know of only one SaaS provider that actually penalizes itself in the form of “credits” toward the next month’s bill if it fails to meet an uptime SLA (service level agreement). The majority of SaaS companies I have spoken with have internal SLAs—but they don’t issue them to their customers. As risk management assumes a larger role in IT governance, corporate IT managers are going to start asking their SaaS partners for SLAs with “teeth” in them that include financial penalties.
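As an illustration of what an SLA with financial teeth can look like, the sketch below computes a service credit from measured monthly uptime against a tiered schedule. The thresholds and percentages are invented for the example, not any provider's actual terms:

```python
# Hypothetical tiered SLA credit schedule: the lower the measured uptime, the
# larger the credit against next month's bill. The numbers are invented.
CREDIT_TIERS = [        # (minimum uptime %, credit as a fraction of the monthly fee)
    (99.9, 0.00),       # SLA met: no credit
    (99.0, 0.10),
    (95.0, 0.25),
    (0.0,  0.50),
]

def sla_credit(monthly_fee: float, measured_uptime_pct: float) -> float:
    for threshold, credit_fraction in CREDIT_TIERS:
        if measured_uptime_pct >= threshold:
            return round(monthly_fee * credit_fraction, 2)
    return 0.0

# An outage month with 99.4% measured uptime on a $12,000/month subscription:
print(sla_credit(12_000, 99.4))   # 1200.0 -> a 10% credit toward next month's bill
```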

#5 What kind of disaster recovery and business continuation plan do you have?

The recent spate of global natural disasters has nearly every company and their regulators and auditors focused on DR and BC. They will expect their SaaS providers to do the same. SaaS providers that own and control their own data centers are in a strong position. SaaS providers that contract with third-party data centers (where the end client has no direct relationship with the third-party data center) are riskier. For instance, whose liability is it if the third-party data center fails? Do you as a SaaS provider indemnify your end clients? It’s an important question to know the answer to—because your clients are going to be asking it.

Source:  techrepublic.com

NSA ‘altered random-number generator’

Thursday, September 12th, 2013

US intelligence agency the NSA subverted a standards process to be able to break encryption more easily, according to leaked documents.

It had written a flaw into a random-number generator that would allow the agency to predict the outcome of the algorithm, the New York Times reported.

The agency had used its influence at a standards body to insert the backdoor, said the report.

The NSA had made no comment at the time of writing.

According to the report, based on a memo leaked by former NSA contractor Edward Snowden, the agency had gained sole control of the authorship of the Dual_EC_DRBG algorithm and pushed for its adoption by the National Institute of Standards and Technology (Nist) into a 2006 US government standard.

The NSA had wanted to be able to predict numbers generated by certain implementations of the algorithm, to crack technologies using the specification, said the report.

Nist standards are developed to secure US government systems and used globally.

The standards body said that its processes were open, and that it “would not deliberately weaken a cryptographic standard”.

“Recent news reports have questioned the cryptographic standards development process at Nist,” the body said in a statement.

“We want to assure the IT cybersecurity community that the transparent, public process used to rigorously vet our standards is still in place.”

Impact

It was unclear which software and hardware had been weakened by including the algorithm, according to software developers and cryptographers.

For example, Microsoft had used the algorithm in software from Vista onwards, but had not enabled it by default, users on the Cryptography Stack Exchange pointed out.

The algorithm has been included in the code libraries and software of major vendors and industry bodies, including Microsoft, Cisco Systems, RSA, Juniper, RIM for Blackberry, OpenSSL, McAfee, Samsung, Symantec, and Thales, according to Nist documentation.

Whether the software of these organisations was secure depended on how the algorithm had been used, Cambridge University cryptographic expert Richard Clayton told the BBC.

“There’s no easy way of saying who’s using [the algorithm], and how,” said Mr Clayton.

Moreover, the algorithm had been shown to be insecure in 2007 by Microsoft cryptographers Niels Ferguson and Dan Shumow, added Mr Clayton.

“Because the vulnerability was found some time ago, I’m not sure if anybody is using it,” he said.
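For application developers, the practical defence is not to hard-wire any particular DRBG at all but to draw security-sensitive randomness from the operating system's vetted source. In Python, for example, the standard secrets module does exactly that; this is a general illustration, not a statement about which of the products above shipped the algorithm:

```python
# Drawing security-sensitive random values from the operating system's CSPRNG
# instead of an application-chosen DRBG implementation.
import secrets

session_token = secrets.token_urlsafe(32)   # e.g. a session identifier
key_material = secrets.token_bytes(32)      # raw key material

print(session_token)
print(key_material.hex())
```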

A more profound problem was the possible erosion of trust in Nist for the development of future standards, Mr Clayton added.

Source:  BBC

Snowden leaks: US and UK ‘crack online encryption’

Friday, September 6th, 2013

US and UK intelligence have reportedly cracked the encryption codes protecting the emails, banking and medical records of hundreds of millions of people.

Disclosures by leaker Edward Snowden allege the US National Security Agency (NSA) and the UK’s GCHQ successfully decoded key online security protocols.

They suggest some internet companies provided the agencies backdoor access to their security systems.

The NSA is said to spend $250m (£160m) a year on the top-secret operation.

It is codenamed Bullrun, after an American civil-war battle, according to the documents published by the Guardian in conjunction with the New York Times and ProPublica.

The British counterpart scheme run by GCHQ is called Edgehill, after the first major engagement of the English civil war, say the documents.

‘Behind-the-scenes persuasion’

The reports say the UK and US intelligence agencies are focusing on the encryption used in 4G smartphones, email, online shopping and remote business communication networks.

The encryption techniques are used by internet services such as Google, Facebook and Yahoo.

Under Bullrun, it is said that the NSA has built powerful supercomputers to try to crack the technology that scrambles and encrypts personal information when internet users log on to access various services.

The NSA also collaborated with unnamed technology companies to build so-called back doors into their software – something that would give the government access to information before it is encrypted and sent over the internet, it is reported.

As well as supercomputers, methods used include “technical trickery, court orders and behind-the-scenes persuasion to undermine the major tools protecting the privacy of everyday communications”, the New York Times reports.

The US reportedly began investing billions of dollars in the operation in 2000 after its initial efforts to install a “back door” in all encryption systems were thwarted.

Gobsmacked

During the next decade, it is said the NSA employed code-breaking computers and began collaborating with technology companies at home and abroad to build entry points into their products.

The documents provided to the Guardian by Mr Snowden do not specify which companies participated.

The NSA also hacked into computers to capture messages prior to encryption, and used broad influence to introduce weaknesses into encryption standards followed by software developers the world over, the New York Times reports.

When British analysts were first told of the extent of the scheme they were “gobsmacked”, according to one memo among more than 50,000 documents shared by the Guardian.

NSA officials continue to defend the agency’s actions, claiming it will put the US at considerable risk if messages from terrorists and spies cannot be deciphered.

But some experts argue that such efforts could actually undermine national security, noting that any back doors inserted into encryption programs can be exploited by those outside the government.

It is the latest in a series of intelligence leaks by Mr Snowden, a former NSA contractor, who began providing caches of sensitive government documents to media outlets three months ago.

In June, the 30-year-old fled his home in Hawaii, where he worked at a small NSA installation, to Hong Kong, and subsequently to Russia after making revelations about a secret US data-gathering programme.

A US federal court has since filed espionage charges against Mr Snowden and is seeking his extradition.

Mr Snowden, however, remains in Russia where he has been granted temporary asylum.

Source:  BBC

Will software-defined networking kill network engineers’ beloved CLI?

Tuesday, September 3rd, 2013

Networks defined by software may require more coding than command lines, leading to changes on the job

SDN (software-defined networking) promises some real benefits for people who use networks, but to the engineers who manage them, it may represent the end of an era.

Ever since Cisco made its first routers in the 1980s, most network engineers have relied on a CLI (command-line interface) to configure, manage and troubleshoot everything from small-office LANs to wide-area carrier networks. Cisco’s isn’t the only CLI, but on the strength of the company’s domination of networking, it has become a de facto standard in the industry, closely emulated by other vendors.

As such, it’s been a ticket to career advancement for countless network experts, especially those certified as CCNAs (Cisco Certified Network Associates). Those network management experts, along with higher level CCIEs (Cisco Certified Internetwork Experts) and holders of other official Cisco credentials, make up a trained workforce of more than 2 million, according to the company.

A CLI is simply a way to interact with software by typing in lines of commands, as PC users did in the days of DOS. With the Cisco CLI and those that followed in its footsteps, engineers typically set up and manage networks by issuing commands to individual pieces of gear, such as routers and switches.

SDN, and the broader trend of network automation, uses a higher layer of software to control networks in a more abstract way. Whether through OpenFlow, Cisco’s ONE (Open Network Environment) architecture, or other frameworks, the new systems separate the so-called control plane of the network from the forwarding plane, which is made up of the equipment that pushes packets. Engineers managing the network interact with applications, not ports.

“The network used to be programmed through what we call CLIs, or command-line interfaces. We’re now changing that to create programmatic interfaces,” Cisco Chief Strategy Officer Padmasree Warrior said at a press event earlier this year.
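The shift Warrior describes is from typing device-by-device commands to calling a controller's API from code. As a generic sketch of what that looks like (the controller address, path and JSON schema are hypothetical, not Cisco's or any particular OpenFlow controller's API), a script might install a forwarding rule like this:

```python
# Generic sketch of programmatic network control: instead of a per-device CLI
# session, a script asks a central controller to install a forwarding rule.
# The controller URL, path and payload schema are hypothetical.
import json
import urllib.request

CONTROLLER = "http://sdn-controller.example.net:8080"

flow_rule = {
    "switch": "edge-sw-01",
    "match": {"ipv4_dst": "10.20.30.0/24"},
    "action": {"output_port": 7},
    "priority": 100,
}

request = urllib.request.Request(
    url=f"{CONTROLLER}/api/v1/flows",
    data=json.dumps(flow_rule).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

with urllib.request.urlopen(request, timeout=5) as response:
    print("Controller accepted rule:", response.status)
```

The rule is data handed to software rather than keystrokes issued to a box, which is what makes such changes repeatable, automatable and checkable against policy.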

Will SDN spell doom for the tool that network engineers have used throughout their careers?

“If done properly, yes, it should kill the CLI. Which scares the living daylights out of the vast majority of CCIEs,” Gartner analyst Joe Skorupa said. “Certainly all of those who define their worth in their job as around the fact that they understand the most obscure Cisco CLI commands for configuring some corner-case BGP4 (Border Gateway Protocol 4) parameter.”

At some of the enterprises that Gartner talks to, the backlash from some network engineers has already begun, according to Skorupa.

“We’re already seeing that group of CCIEs doing everything they can to try and prevent SDN from being deployed in their companies,” Skorupa said. Some companies have deliberately left such employees out of their evaluations of SDN, he said.

Not everyone thinks the CLI’s days are numbered. SDN doesn’t go deep enough to analyze and fix every flaw in a network, said Alan Mimms, a senior architect at F5 Networks.

“It’s not obsolete by any definition,” Mimms said. He compared SDN to driving a car and CLI to getting under the hood and working on it. For example, for any given set of ACLs (access control lists) there are almost always problems for some applications that surface only after the ACLs have been configured and used, he said. A network engineer will still have to use CLI to diagnose and solve those problems.

However, SDN will cut into the use of CLI for more routine tasks, Mimms said. Network engineers who know only CLI will end up like manual laborers whose jobs are replaced by automation. It’s likely that some network jobs will be eliminated, he said.

This isn’t the first time an alternative has risen up to challenge the CLI, said Walter Miron, a director of technology strategy at Canadian service provider Telus. There have been graphical user interfaces to manage networks for years, he said, though they haven’t always had a warm welcome. “Engineers will always gravitate toward a CLI when it’s available,” Miron said.

Even networking startups need to offer a Cisco CLI so their customers’ engineers will know how to manage their products, said Carl Moberg, vice president of technology at Tail-F Systems. Since 2005, Tail-F has been one of the companies going up against the prevailing order.

It started by introducing ConfD, a graphical tool for configuring network devices, which Cisco and other major vendors included with their gear, according to Moberg. Later the company added NCS (Network Control System), a software platform for managing the network as a whole. To maintain interoperability, NCS has interfaces to Cisco’s CLI and other vendors’ management systems.

CLIs have their roots in the very foundations of the Internet, according to Moberg. The approach of the Internet Engineering Task Force, which oversees IP (Internet Protocol), has always been to find pragmatic solutions to defined problems, he said. This detail-oriented, “bottom-up” approach was different from the way cellular networks were designed. The 3GPP, which developed the GSM standard used by most cell carriers, crafted its entire architecture at once, he said.

The IETF’s approach lent itself to manual, device-by-device administration, Moberg said. But as networks got more complex, that technique ran into limitations. Changes to networks are now more frequent and complex, so there’s more room for human error and the cost of mistakes is higher, he said.

“Even the most hardcore Cisco engineers are sick and tired of typing the same commands over and over again and failing every 50th time,” Moberg said. Though the CLI will live on, it will become a specialist tool for debugging in extreme situations, he said.

“There’ll always be some level of CLI,” said Bill Hanna, vice president of technical services at University of Pittsburgh Medical Center. At the launch earlier this year of Nuage Networks’ SDN system, called Virtualized Services Platform, Hanna said he hoped SDN would replace the CLI. The number of lines of code involved in a system like VSP is “scary,” he said.

On a network fabric with 100,000 ports, it would take all day just to scroll through a list of the ports, said Vijay Gill, a general manager at Microsoft, on a panel discussion at the GigaOm Structure conference earlier this year.

“The scale of systems is becoming so large that you can’t actually do anything by hand,” Gill said. Instead, administrators now have to operate on software code that then expands out to give commands to those ports, he said.

Faced with these changes, most network administrators will fall into three groups, Gartner’s Skorupa said.

The first group will “get it” and welcome not having to troubleshoot routers in the middle of the night. They would rather work with other IT and business managers to address broader enterprise issues, Skorupa said. The second group won’t be ready at first but will advance their skills and eventually find a place in the new landscape.

The third group will never get it, Skorupa said. They’ll face the same fate as telecommunications administrators who relied for their jobs on knowing obscure commands on TDM (time-division multiplexing) phone systems, he said. Those engineers got cut out when circuit-switched voice shifted over to VoIP (voice over Internet Protocol) and went onto the LAN.

“All of that knowledge that you had amassed over decades of employment got written to zero,” Skorupa said. For IP network engineers who resist change, there will be a cruel irony: “SDN will do to them what they did to the guys who managed the old TDM voice systems.”

But SDN won’t spell job losses, at least not for those CLI jockeys who are willing to broaden their horizons, said analyst Zeus Kerravala of ZK Research.

“The role of the network engineer, I don’t think, has ever been more important,” Kerravala said. “Cloud computing and mobile computing are network-centric compute models.”

Data centers may require just as many people, but with virtualization, the sharply defined roles of network, server and storage engineer are blurring, he said. Each will have to understand the increasingly interdependent parts.

The first step in keeping ahead of the curve, observers say, may be to learn programming.

“The people who used to use CLI will have to learn scripting and maybe higher-level languages to program the network, or at least to optimize the network,” said Pascale Vicat-Blanc, founder and CEO of application-defined networking startup Lyatiss, during the Structure panel.

Microsoft’s Gill suggested network engineers learn languages such as Python, C# and PowerShell.
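
To make that advice concrete, here is a small, hedged example of the kind of scripting being described: rather than configuring ports one at a time at the CLI, a few lines of Python can render configuration for thousands of interfaces in one pass. The interface naming scheme and VLAN assignments are invented for illustration.

    # Sketch: generating interface configuration for thousands of ports at once,
    # instead of typing CLI commands port by port. Names and VLANs are illustrative.

    def access_port_config(switch: int, port: int, vlan: int) -> str:
        """Render an IOS-style configuration stanza for one access port."""
        return (
            f"interface GigabitEthernet{switch}/0/{port}\n"
            f" description auto-generated access port\n"
            f" switchport mode access\n"
            f" switchport access vlan {vlan}\n"
        )

    # 48 ports on each of 200 hypothetical access switches -> 9,600 stanzas
    config_blocks = [
        access_port_config(switch, port, vlan=100 + (switch % 4))
        for switch in range(1, 201)
        for port in range(1, 49)
    ]

    print(f"Generated configuration for {len(config_blocks)} ports")
    print(config_blocks[0])  # spot-check the first stanza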

For Facebook, which takes a more hands-on approach to its infrastructure than do most enterprises, that future is now.

“If you look at the Facebook network engineering team, pretty much everybody’s writing code as well,” said Najam Ahmad, Facebook’s director of technical operations for infrastructure.

Network engineers historically have used CLIs because that’s all they were given, Ahmad said. “I think we’re underestimating their ability.”

Cisco is now gearing up to help its certified workforce meet the newly emerging requirements, said Tejas Vashi, director of product management for Learning@Cisco, which oversees education, testing and certification of Cisco engineers.

With software automation, the CLI won’t go away, but many network functions will be carried out through applications rather than manual configuration, Vashi said. As a result, network designers, network engineers and support engineers all will see their jobs change, and there will be a new role added to the mix, he said.

In the new world, network designers will determine network requirements and how to fulfill them, then use that knowledge to define the specifications for network applications. Writing those applications will fall to a new type of network staffer, which Learning@Cisco calls the software automation developer. These developers will have background knowledge about networking along with skills in common programming languages such as Java, Python, and C, said product manager Antonella Como. After the software is written, network engineers and support engineers will install and troubleshoot it.

“All these people need to somewhat evolve their skills,” Vashi said. Cisco plans to introduce a new certification involving software automation, but it hasn’t announced when.

Despite the changes brewing in networks and jobs, the larger lessons of all those years typing in commands will still pay off for those who can evolve beyond the CLI, Vashi and others said.

“You’ve got to understand the fundamentals,” Vashi said. “If you don’t know how the network infrastructure works, you could have all the background in software automation, and you don’t know what you’re doing on the network side.”

Source:  computerworld.com

Cisco responds to VMware’s NSX launch, allegiances

Thursday, August 29th, 2013

Says a software-only approach to network virtualization spells trouble for users

Cisco has responded to the groundswell of momentum and support around the introduction of VMware’s NSX network virtualization platform this week with a laundry list of the limitations of software-only based network virtualization. At the same time, Cisco said it intends to collaborate further with VMware, specifically around private cloud and desktop virtualization, even as its partner lines up a roster of allies among Cisco’s fiercest rivals.

Cisco’s response was delivered in a blog post from Chief Technology and Strategy Officer Padmasree Warrior.

In a nutshell, Warrior says software-only based network virtualization will leave customers with more headaches and hardships than a solution that tightly melds software with hardware and ASICs – the type of network virtualization Cisco proposes:

A software-only approach to network virtualization places significant constraints on customers.  It doesn’t scale, and it fails to provide full real-time visibility of both physical and virtual infrastructure.  In addition this approach does not provide key capabilities such as multi-hypervisor support, integrated security, systems point-of-view or end-to-end telemetry for application placement and troubleshooting.  This loosely-coupled approach forces the user to tie multiple 3rd party components together adding cost and complexity both in day-to-day operations as well as throughout the network lifecycle.  Users are forced to address multiple management points, and maintain version control for each of the independent components.  Software network virtualization treats physical and virtual infrastructure as separate entities, and denies customers a common policy framework and common operational model for management, orchestration and monitoring.

Warrior then went on to tout the benefits of the Application Centric Infrastructure (ACI), a concept introduced by Cisco spin-in Insieme Networks at the Cisco Live conference two months ago. ACI combines hardware, software and ASICs into an integrated architecture that delivers centralized policy automation, visibility and management of both physical and virtual networks, and so on, she claims.

Warrior also shoots down the comparison between network virtualization and server virtualization, which is the foundation of VMware’s existence and success. Servers were underutilized, which drove the need for the flexibility and resource efficiency promised in server virtualization, she writes.

Not so with networks. Networks do not have an underutilization problem, she claims:

In fact, server virtualization is pushing the limits of today’s network utilization and therefore driving demand for higher port counts, application and policy-driven automation, and unified management of physical, virtual and cloud infrastructures in a single system.

Warrior ends by promising some “exciting news” around ACI in the coming months. Perhaps at Interop NYC in late September/early October? Cisco CEO John Chambers was just added this week to the keynote lineup at the conference. He usually appears at these venues when Cisco makes a significant announcement that same week…

Source:  networkworld.com

Amazon and Microsoft, beware—VMware cloud is more ambitious than we thought

Tuesday, August 27th, 2013

Desktops, disaster recovery, IaaS, and PaaS make VMware’s cloud compelling.

VMware today announced that vCloud Hybrid Service, its first public infrastructure-as-a-service (IaaS) cloud, will become generally available in September. That’s no surprise, as we already knew it was slated to go live this quarter.

What is surprising is just how extensive the cloud will be. When first announced, vCloud Hybrid Service was described as infrastructure-as-a-service that integrates directly with VMware environments. Customers running lots of applications in-house on VMware infrastructure can use the cloud to expand their capacity without buying new hardware and manage both their on-premises and off-premises deployments as one.

That’s still the core of vCloud Hybrid Service—but in addition to the more traditional infrastructure-as-a-service, VMware will also have a desktops-as-a-service offering, letting businesses deploy virtual desktops to employees without needing any new hardware in their own data centers. There will also be disaster recovery-as-a-service, letting customers automatically replicate applications and data to vCloud Hybrid Service instead of their own data centers. Finally, support for the open source distribution of Cloud Foundry and Pivotal’s deployment of Cloud Foundry will let customers run a platform-as-a-service (PaaS) in vCloud Hybrid Service. Unlike IaaS, PaaS tends to be optimized for building and hosting applications without having to manage operating systems and virtual computing infrastructure.

While the core IaaS service and connections to on-premises deployments will be generally available in September, the other services aren’t quite ready. Both disaster recovery and desktops-as-a-service will enter beta in the fourth quarter of this year. Support for Cloud Foundry will also be available in the fourth quarter. Pricing information for vCloud Hybrid Service is available on VMware’s site. More details on how it works are available in our previous coverage.

Competitive against multiple clouds

All of this gives VMware a compelling alternative to Amazon and Microsoft. Amazon is still the clear leader in infrastructure-as-a-service and likely will be for the foreseeable future. However, VMware’s IaaS will be useful to customers who rely heavily on VMware internally and want a consistent management environment on-premises and in the cloud.

VMware and Microsoft have similar approaches, offering a virtualization platform as well as a public cloud (Windows Azure in Microsoft’s case) that integrates with customers’ on-premises deployments. By wrapping Cloud Foundry into vCloud Hybrid Service, VMware combines IaaS and PaaS into a single cloud service just as Microsoft does.

VMware is going beyond Microsoft by also offering desktops-as-a-service. We don’t have a ton of detail here, but it will be an extension of VMware’s pre-existing virtual desktop products that let customers host desktop images in their data centers and give employees remote access to them. With “VMware Horizon View Desktop-as-a-Service,” customers will be able to deploy virtual desktop infrastructure either in-house or on the VMware cloud and manage it all together. VMware’s hybrid cloud head honcho, Bill Fathers, said much of the work of adding and configuring new users will be taken care of automatically.

The disaster recovery-as-a-service builds on VMware’s Site Recovery Manager, letting customers see the public cloud as a recovery destination along with their own data centers.

“The disaster recovery use case is something we want to really dominate as a market opportunity,” Fathers said in a press conference today. At first, it will focus on using “existing replication capabilities to replicate into the vCloud Hybrid Service. Going forward, VMware will try to provide increasing levels of automation and more flexibility in configuring different disaster recovery destinations,” he said.

vCloud Hybrid Service will be hosted in VMware data centers in Las Vegas, NV, Sterling, VA, Santa Clara, CA, and Dallas, TX, as well as data centers operated by Savvis in New York and Chicago. Non-US data centers are expected to join the fun next year.

When asked if VMware will support movement of applications between vCloud Hybrid Service and other clouds, like Amazon’s, Fathers said the core focus is ensuring compatibility between customers’ existing VMware deployments and the VMware cloud. However, he said VMware is working with partners who “specialize in that level of abstraction” to allow portability of applications from VMware’s cloud to others and vice versa. Naturally, VMware would really prefer it if you just use VMware software and nothing else.

Source:  arstechnica.com

Don’t waste your time (or money) on open-source networking, says Cisco

Monday, August 26th, 2013

Despite a desire to create open and flexible networks, network managers shouldn’t be fooled into thinking that the best way to achieve this is through building an open-source network from scratch, according to Den Sullivan, Head of Architectures for Emerging Markets, Cisco.

In a phone interview with CNME, Sullivan said that, in most cases, attempting to build your own network using open-source technologies would result in more work and more cost.

“When you’re down there in the weeds, sticking it all together, building it yourself when you can actually go out there and buy it, I think you’re probably increasing your cost base whilst you actually think that you may be getting something cheaper,” he said.

Sullivan said he understood why network managers could be seduced by the idea of building a bespoke network from open-source technologies. However, he advised that, in practical terms, open-source networking tech was mostly limited to creating smaller programs and scripts.

“People have looked to try to do things faster, try to automate things. And with regards to scripts and small programs, they’re taking up open-source off the Web, bolting them together and ultimately coming up with a little program or script that goes and does things a little bit faster for their own particular area,” he said.

Sullivan said he hadn’t come across anyone in the Middle East creating open-source networks from scratch — and with good reason. He said that the role of IT isn’t to create something bespoke, but to align the department with the needs of the business, using whichever tools are available.

“How does the IT group align with that strategy, and then how best do they deliver it?” he asked. “Ultimately, I don’t think that is always about going and building it yourself, and stitching it all together.

“It’s almost like the application world. Say you’ve got 10,000 sales people — why would you go and build a sales tool to track their forecasting, to track their performance, to track your customer base? These things are readily available — they’re built by vendors who have got years and years of experience, so why are you going to start trying to grow your own? That’s not the role of IT as I see it today.”

Sullivan admitted that, for some businesses, stock networking tools from the big vendors did not provide enough flexibility. However, he said that a lot of the flexibility and openness that people desire could be found more easily in software-defined networking (SDN) tools, rather than open-source networking tools.

“I see people very interested in the word ‘open’ in regards to software-defined networking, but I don’t see them actually going and creating their own networks through open-source, readily available programs out there on the Internet. I do see an interest in regards to openness, flexibility, and more programmability — things like the Open Network Foundation and everything in regards to SDN,” he said.

Source:  pcadvisor.com

VMware unwraps virtual networking software – promises greater network control, security

Monday, August 26th, 2013

VMware announces that NSX – which combines network and security features – will be available in the fourth quarter

VMware today announced that its virtual networking software and security software products packaged together in an offering named NSX will be available in the fourth quarter of this year.

The company has been running NSX in beta since the spring, but as part of a broader announcement of software-defined data center functions made today at VMworld, the company took the wrapping off of its long-awaited virtual networking software. VMware has based much of the NSX functionality on technology it acquired from Nicira last year.

The generally available version of NSX includes two major new features compared to the beta. First, it offers technical integration with a variety of partnering companies, including the ability for the virtual networking software to control network and compute infrastructure hardware from those providers. Second, it virtualizes some network functions, such as firewalling, allowing for better control of virtual networks.

The idea of virtual networking is similar to that of virtual computing: abstracting the core features of networking from the underlying hardware. Doing so lets organizations more granularly control their networks, including spinning up and down networks, as well as better segmentation of network traffic.
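
As a rough, hypothetical sketch of what abstracting the network from the hardware looks like in practice (this is not VMware’s NSX API), the snippet below models a logical network as plain data that can be created, segmented and torn down programmatically, independent of whichever physical switches end up carrying the packets.

    # Hypothetical sketch: a logical network as data, decoupled from hardware.
    # Class names and fields are invented for illustration; this is not an NSX API.
    from dataclasses import dataclass, field

    @dataclass
    class LogicalNetwork:
        name: str
        subnet: str
        segments: dict = field(default_factory=dict)  # segment name -> attached workloads

        def add_segment(self, segment: str) -> None:
            self.segments[segment] = []

        def attach(self, segment: str, workload: str) -> None:
            self.segments[segment].append(workload)

    # "Spinning up" a network becomes creating an object and handing the intent to
    # the virtualization layer, which programs the underlying switches.
    net = LogicalNetwork(name="tenant-blue", subnet="10.10.0.0/16")
    net.add_segment("web")
    net.add_segment("db")
    net.attach("web", "vm-web-01")
    net.attach("db", "vm-db-01")
    print(net)

    # Tearing it down is equally programmatic: discard the object and have the
    # virtualization layer withdraw the corresponding forwarding state.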

Nicira has been a pioneer in the network virtualization industry, and last year VMware spent $1.2 billion to acquire the company. In March, VMware announced plans to integrate Nicira’s technology into its product suite through the NSX software, and today the company said NSX will become generally available in the coming months. NSX will be a software update that is both hypervisor and hardware agnostic, says Martin Casado, chief architect, networking at VMware.

The need for the NSX software is being driven by the migration from a client-server world to a cloud world, he says. In this new architecture, there is just as much traffic, if not more, within the data center (east-west traffic) as there is between clients and the edge devices (north-south traffic).

One of the biggest advancements in the newly announced NSX software is virtual firewalling. Instead of using hardware or virtual firewalls that sit at the edge of the network to control traffic, NSX embeds the firewall within the software itself, so it is ubiquitous throughout the deployment. This removes the bottlenecks that would be created by using a centralized firewall system, Casado says.

“We’re not trying to take over the firewall market or do anything with north-south traffic,” Casado says. “What we are doing is providing functionality for traffic management within the data center. There’s nothing that can do that level of protection for the east-west traffic. It’s addressing a significant need within the industry.”
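
To illustrate the distributed model Casado describes (again, a hypothetical sketch rather than VMware’s actual API), the snippet below expresses an east-west rule once, as data attached to logical groups of workloads, and evaluates it at every hypervisor rather than at a single choke point. The rule schema and group names are invented for illustration.

    # Hypothetical sketch of a distributed east-west firewall policy: rules are
    # defined once and evaluated locally on every hypervisor hosting a workload,
    # rather than at a single edge appliance. All names are illustrative.
    from dataclasses import dataclass

    @dataclass
    class Rule:
        source_group: str
        dest_group: str
        port: int
        action: str  # "allow" or "deny"

    POLICY = [
        Rule("web-tier", "app-tier", 8443, "allow"),
        Rule("app-tier", "db-tier", 5432, "allow"),
        Rule("web-tier", "db-tier", 5432, "deny"),  # web servers never reach the database directly
    ]

    def evaluate(policy, src_group, dst_group, port):
        """Return the action for a flow; default-deny when no rule matches."""
        for rule in policy:
            if (rule.source_group, rule.dest_group, rule.port) == (src_group, dst_group, port):
                return rule.action
        return "deny"

    # Each hypervisor evaluates the same policy for the VMs it hosts, so enforcement
    # scales out with the compute instead of hair-pinning through a central firewall.
    print(evaluate(POLICY, "web-tier", "app-tier", 8443))  # allow
    print(evaluate(POLICY, "web-tier", "db-tier", 5432))   # deny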

VMware has signed on a bevy of partners that are compatible with the NSX platform. The software is hardware and hypervisor agnostic, meaning that the software controller can manage network functionality executed by networking hardware from vendors like Juniper, Arista, HP, Dell and Brocade. Cisco is not named as a partner in the press materials sent out by the company, but VMware says NSX will work with networking equipment from the leading network vendor.

On the security side, services from Symantec, McAfee and Trend Micro will work within the system, while underlying cloud infrastructure platforms from OpenStack, CloudStack, Red Hat and Piston Cloud Computing Co. will work with NSX. Nicira has worked heavily in the OpenStack community.

“In virtual networks, where hardware and software are decoupled, a new network operating model can be achieved that delivers improved levels of speed and efficiency,” said Brad Casemore, research director for Data Center Networks at IDC. “Network virtualization is becoming a game shifter, providing an important building block for delivering the software-defined data center, and with VMware NSX, VMware is well positioned to capture this market opportunity.”

Source:  infoworld.com

Popular download management program has hidden DDoS component, researchers say

Friday, August 23rd, 2013

Recent versions of Orbit Downloader, a popular Windows program for downloading embedded media content and other types of files from websites, turn computers into bots and use them to launch distributed denial-of-service (DDoS) attacks, according to security researchers.

Starting with version 4.1.1.14 released in December, the Orbit Downloader program silently downloads and uses a DLL (Dynamic Link Library) component that has DDoS functionality, malware researchers from antivirus vendor ESET said Wednesday in a blog post.

The rogue component is downloaded from a location on the program’s official website, orbitdownloader.com, the ESET researchers said. An encrypted configuration file containing a list of websites and IP (Internet Protocol) addresses to serve as targets for attacks is downloaded from the same site, they said.

Orbit Downloader has been in development since at least 2006 and, judging by download statistics from software distribution sites like CNET’s Download.com and Softpedia.com, it is, or used to be, a popular program.

Orbit Downloader has been downloaded almost 36 million times from Download.com to date, including around 12,500 times last week. Its latest version, 4.1.1.18, was released in May.

In a review of the program, a CNET editor noted that it installs additional “junk programs” and suggested alternatives to users who need a dedicated download management application.

When they discovered the DDoS component, the ESET researchers were actually investigating the “junk programs” installed by Orbit Downloader in order to determine if the program should be flagged as a “potentially unwanted application,” known in the industry as PUA.

“The developer [of Orbit Downloader], Innoshock, generates its revenue from bundled offers, such as OpenCandy, which is used to install third-party software as well as to display advertisements,” the researchers said, noting that such advertising arrangements are normal behavior for free programs these days.

“What is unusual, though, is to see a popular utility containing additional code for performing Denial of Service (DoS) attacks,” they said.

The rogue Orbit Downloader DDoS component is now detected by ESET products as a Trojan program called Win32/DDoS.Orbiter.A. It is capable of launching several types of attacks, the researchers said.

First, it checks if a utility called WinPcap is installed on the computer. This is a legitimate third-party utility that provides low-level network functionality, including sending and capturing network packets. It is not bundled with Orbit Downloader, but can be installed on computers by other applications that need it.

If WinPcap is installed, Orbit’s DDoS component uses the tool to send TCP SYN packets on port 80 (HTTP) to the IP addresses specified in its configuration file. “This kind of attack is known as a SYN flood,” the ESET researchers said.

If WinPcap is not present, the rogue component directly sends HTTP connection requests on port 80 to the targeted machines, as well as UDP packets on port 53 (DNS).

The attacks also use IP spoofing techniques, with the source IP addresses for the requests falling into IP address ranges that are hardcoded in the DLL file.

“On a test computer in our lab with a gigabit Ethernet port, HTTP connection requests were sent at a rate of about 140,000 packets per second, with falsified source addresses largely appearing to come from IP ranges allocated to Vietnam,” the ESET researchers said.

After adding a detection signature for the DLL component, the ESET researchers also identified an older file called orbitnet.exe that had almost the same functionality as the DLL file, but downloaded its configuration from a different website, not orbitdownloader.com.

This suggests that Orbit Downloader might have had DDoS functionality since before version 4.1.1.14. The orbitnet.exe file is not bundled with any older Orbit Downloader installers, but it might have been downloaded post-installation, like the DLL component.

This is a possibility, but it can’t be demonstrated with certainty, Peter Kosinar, a technical fellow at ESET who was involved in the investigation, said Thursday. It might also be distributed through other means, he said.

Adding to the confusion is that an older version of orbitnet.exe than the one found by ESET is distributed with Orbit Downloader 4.1.1.18. The reason for this is unclear since Orbit Downloader 4.1.1.18 also downloads and uses the DLL DDoS component. However, it indicates a clear relationship between orbitnet.exe and Orbit Downloader.

The fact that a popular program like Orbit Downloader is used as a DDoS tool creates problems not only for the websites that it’s used to attack, but also for the users whose computers are being abused.

According to Kosinar, there is no rate limit implemented for the packets sent by the DDoS component. This means that launching these attacks can easily consume the user’s Internet connection bandwidth, affecting his ability to access the Internet through other programs.

Users who install Orbit Downloader expect the program to streamline their downloads and increase their speed, but it turns out that the application has the opposite effect.

Orbit Downloader is developed by a group called Innoshock, but it’s not clear if this is a company or just a team of developers. Attempts to contact Innoshock for comment Thursday via two Gmail addresses listed on its website and the Orbit Downloader site, as well as via Twitter, remained unanswered.

The program’s users also seem to have noticed its DDoS behavior judging by comments left on Download.com and the Orbit Downloader support forum.

Orbit Downloader version 4.1.1.18 is generating a very high amount of DDoS traffic, a user named raj_21er said on the support forum on June 12. “The DDoS flooding is so huge that it just hangs the gateway devices/network switches completely and breaks down the entire network operation.”

“I was using Orbit Downloader for the past one week on my desktop when I suddenly noticed that the internet access was pretty much dead in the last 2 days,” another user named Orbit_User_5500 said. Turning off the desktop system restored Internet access to the other network computers and devices, he said.

Since adding detection of this DDoS component, ESET received tens of thousands of detection reports per week from deployments of its antivirus products, Kosinar said.

Source:  csoonline.com