Archive for the ‘Server’ Category

IT Consulting Case Studies: Microsoft SharePoint Server for CMS

Friday, February 14th, 2014

Gyver Networks recently designed and deployed a Microsoft SharePoint Server infrastructure for a financial consulting firm servicing banks and depository institutions with assets in excess of $200 billion.

Challenge:  A company specializing in regulatory compliance audits for financial institutions found themselves inundated by documents submitted via inconsistent workflow processes, raising concerns regarding security and content management as they continued to expand.

With many such projects running concurrently, keeping up with the back-and-forth flow of multiple versions of the same documents became increasingly difficult.  Further complicating matters, the submission process consisted of clients sending email attachments or uploading files to a company FTP server, then emailing to let staff know something was sent.  Other areas of concern included:

  • Security of submitted financial data in transit and at rest, as defined in SSAE 16 and 201 CMR 17.00, among other standards and regulations
  • Secure, customized, compartmentalized client access
  • Advanced user management
  • Internal and external collaboration (multiple users working on the same documents simultaneously)
  • Change and version tracking
  • Comprehensive search capabilities
  • Client alerts, access to project updates and timelines, and feedback

Resolution: Gyver Networks proposed a Microsoft SharePoint Server environment as the ideal enterprise content management system (CMS) to replace their existing processes.  Once the environment was deployed, existing archives and client profiles were migrated into the SharePoint infrastructure designed for each respective client, and the transition was seamless: the company was fully operational and ready to go live.

Now, instead of an insecure and confusing combination of emails, FTP submissions, and cloud-hosted, third-party management software, they are able to host their own secure, all-in-one CMS on premises, including:

  • 256-bit encryption of data in transit and at rest
  • Distinct SharePoint sites and logins for each client, with customizable access permissions and retention policies for subsites and libraries
  • Advanced collaboration features, with document checkout, change review and approval, and workflows
  • Metadata options so users can find what they’re searching for instantly
  • Client-customized email alerts, views, reporting, timelines, and the ability to submit requests and feedback directly through the SharePoint portal

The end result?  Clients of this company are thrilled to have a comprehensive content management system that not only saves them time and provides secure submission and archiving, but also offers enhanced project oversight and advanced-metric reporting capabilities.

The consulting firm itself experienced an immediate increase in productivity, efficiency, and client retention rates; they are in full compliance with all regulations and standards governing security and privacy; and they are now prepared for future expansion with a scalable enterprise CMS solution that can grow as they do.

Contact Gyver Networks today to learn more about what Microsoft SharePoint Server can do for your organization.  Whether you require a simple standalone installation or a more complex hybrid SharePoint Server farm, we can assist you with planning, deployment, administration, and troubleshooting to ensure you get the most out of your investment.

Huge hack ‘ugly sign of future’ for internet threats

Tuesday, February 11th, 2014

A massive attack that exploited a key vulnerability in the infrastructure of the internet is the “start of ugly things to come”, it has been warned.

Online security specialists Cloudflare said it recorded the “biggest” attack of its kind on Monday.

Hackers used weaknesses in the Network Time Protocol (NTP), a system used to synchronise computer clocks, to flood servers with huge amounts of data.

The technique could potentially be used to force popular services offline.

Several experts had predicted that the NTP would be used for malicious purposes.

The target of this latest onslaught is unknown, but it was directed at servers in Europe, Cloudflare said.

Attackers used a well-known method of bringing down a system, known as a Denial of Service (DoS) attack, in which huge amounts of data are forced on a target, causing it to fall over.

Cloudflare chief executive Matthew Prince said his firm had measured the “very big” attack at about 400 gigabits per second (Gbps), 100Gbps larger than an attack on anti-spam service Spamhaus last year.

Predicted attack

In a report published three months ago, Cloudflare warned that attacks on the NTP were on the horizon and gave details of how web hosts could best try to protect their customers.

NTP servers, of which there are thousands around the world, are designed to keep computers synchronised to the same time.

The fundamentals of NTP date back to 1985. While there have been changes to the system since then, it still operates in much the same way.

A computer needing to synchronise time with the NTP will send a small amount of data to make the request. The NTP will then reply by sending data back.

The vulnerability lies with two weaknesses. Firstly, the amount of data the NTP sends back is bigger than the amount it receives, meaning an attack is instantly amplified.

Secondly, the original computer’s location can be “spoofed”, tricking the NTP into sending the information back to somewhere else.

In this attack, it is likely that many machines were used to make requests to the NTP. Hackers spoofed their location so that the massive amounts of data from the NTP were diverted to a single target.

“Amplification attacks like that result in an attacker turning a small amount of bandwidth coming from a small number of machines into a massive traffic load hitting a victim from around the internet,” Cloudflare explained in a blog post outlining the vulnerability, published last month.

‘Ugly future’

The NTP is one of several protocols used within the infrastructure of the internet to keep things running smoothly.

Unfortunately, despite being vital components, most of these protocols were designed and implemented at a time when the prospect of malicious activity was not considered.

“A lot of these protocols are essential, but they’re not secure,” explained Prof Alan Woodward, an independent cyber-security consultant, who had also raised concerns over NTP last year.

“All you can really do is try and mitigate the denial of service attacks. There are technologies around to do it.”

Most effective, Prof Woodward suggested, was technology that was able to spot when a large amount of data was heading for one destination – and shutting off the connection.
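As a rough illustration of the kind of detection Prof Woodward describes, the sketch below tallies bytes per destination over a short sliding window and flags any destination receiving an abnormal share of traffic. It is a minimal example only; the window length, byte threshold, and flow records are hypothetical, and real deployments do this on dedicated network gear rather than in Python.

```python
from collections import defaultdict, deque
import time

WINDOW_SECONDS = 10
THRESHOLD_BYTES = 1_000_000_000  # hypothetical: ~1 GB per window, tune for your link

recent = deque()               # flow records still inside the sliding window
per_dest = defaultdict(int)    # running byte count per destination IP

def observe(ts, dest_ip, nbytes):
    """Record one flow (timestamp, destination, bytes) and alert on heavy destinations."""
    recent.append((ts, dest_ip, nbytes))
    per_dest[dest_ip] += nbytes

    # expire flows that have fallen out of the window
    while recent and recent[0][0] < ts - WINDOW_SECONDS:
        _, old_dest, old_bytes = recent.popleft()
        per_dest[old_dest] -= old_bytes

    if per_dest[dest_ip] > THRESHOLD_BYTES:
        print(f"ALERT: {dest_ip} received {per_dest[dest_ip]} bytes "
              f"in the last {WINDOW_SECONDS}s - possible DoS target")

if __name__ == "__main__":
    # fabricated example: repeated large bursts toward one destination
    now = time.time()
    for i in range(5):
        observe(now + i, "203.0.113.10", 300_000_000)
```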

Cloudflare’s Mr Prince said that while his firm had been able to mitigate the attack, it was a worrying sign for the future.

“Someone’s got a big, new cannon,” he tweeted. “Start of ugly things to come.”

Source:  BBC

Change your passwords: Comcast hushes, minimizes serious hack

Tuesday, February 11th, 2014

Are you a Comcast customer? Please change your password.

On February 6, NullCrew FTS hacked into at least 34 of Comcast’s servers and published on Pastebin a list of the company’s mail servers and a link to the root file containing the vulnerability it used to penetrate the system.

Comcast, the largest internet service provider in the United States, ignored news of the serious breach in press and media for over 24 hours — only when the Pastebin page was removed did the company issue a statement, and even then, it only spoke to a sympathetic B2B outlet.

During that 24 hours, Comcast stayed silent, and the veritable “keys to the kingdom” sat out in the open internet, ripe for the taking by any malicious entity with a little know-how around mail servers and selling or exploiting customer data.

Comcast customers have not been told to reset their passwords. But they should.

Once NullCrew FTS openly hacked at least 24 Comcast mail servers, and the recipe was publicly posted, the servers began to take a beating. Customers in Comcast’s janky, hard-to-find, 1996-style forums knew something was wrong, and forum posts reflected the slowness, the up and down servers, and the eventual crashing.

The telecom giant ignored press requests for comment and released a limited statement on February 7 — to Comcast-friendly outlet, broadband and B2B website Multichannel News.

The day-late statement failed to impress the few who saw it, and was criticized for its minimizing language and weak attempt to suggest that the breach had been unsuccessful.

From Comcast’s statement in Multichannel’s post “No Evidence That Personal Sub Info Obtained By Mail Server Hack”:

Comcast said it is investigating a claim by a hacker group that claims to have broken into a batch of the MSO email servers, but believes that no personal subscriber data was obtained as a result.

“We’re aware of the situation and are aggressively investigating it,” a Comcast spokesman said. “We take our customers’ privacy and security very seriously, and we currently have no evidence to suggest any personal customer information was obtained in this incident.”

Not only is there a high probability that customer information was exposed — because direct access was provided to the public for 24 hours — but the vulnerability exploited by the attackers was disclosed and fixed in December 2013.

Just not by Comcast, apparently.

Vulnerability reported December 2013, not patched by Comcast

NullCrew FTS used the unpatched security vulnerability CVE-2013-7091 to open what was essentially an unlocked door, giving anyone access to usernames, passwords, and other sensitive details from Comcast’s servers.

NullCrew FTS used a Local File Inclusion (LFI) exploit to gain access to the Zimbra LDAP and MySQL database — which houses the usernames and passwords of Comcast ISP users.

“Fun Fact: 34 Comcast mail servers are victims to one exploit,” tweeted NullCrew FTS.

If you are a Comcast customer, you are at risk: All Comcast internet service includes a master email address.

Even if a customer doesn’t use Comcast’s Xfinity mail service, every Comcast ISP user has a master email account with which to manage their services, and it is accessible through a “Zimbra” webmail site.

This account is used to access payment information, email settings, user account creation and settings, and any purchases from Comcast’s store or among its services.

With access to this master email address, someone can give up to six “household members” access to the Comcast account.

NullCrew taunted Comcast on Twitter, then posted the data on Pastebin and taunted the company a little bit more.

Because there were “no passwords” on the Pastebin, some observers believed — incorrectly — that there was no serious risk for exploitation of sensitive customer information.

NullCrew FTS: 2 — big telecoms: 0

On the first weekend of February 2014, NullCrew FTS took credit for a valid hack against telecom provider Bell Canada.

In the first strike of what looks like it’ll be a very successful campaign to cause pain and humiliation to big telecoms, NullCrew FTS accessed and exposed more than 22,000 usernames and passwords, and some credit card numbers belonging to the phone company’s small business customers.

Establishing a signature game of cat and mouse with clueless support staff, NullCrew FTS contacted Bell customer support two weeks before its disclosure.

Like Comcast’s robotic customer service responses to NullCrew FTS on Twitter, Bell’s support staff either didn’t know how to report the security incident upstream, had no idea what a hacking event was, or didn’t take the threat seriously.

Bell also tried to play fast and loose with its accountability in the security smash and grab; it acknowledged the breach soon after, but blamed it on an Ottawa-based third-party supplier.

However, NullCrew FTS announced the company’s insecurities in mid January with a public warning that the hackers had issued to a company support representative about the vulnerabilities.

NullCrew FTS followed up with Bell by posting a Pastebin link on Twitter with unredacted data.

Excerpt from zdnet.com

Feds to dump CGI from Healthcare.gov project

Monday, January 13th, 2014

The Obama Administration is set to fire CGI Federal as prime IT contractor of the problem-plagued Healthcare.gov website, a report says.

The government now plans to hire IT consulting firm Accenture to fix the Affordable Care Act (ACA) website’s lingering performance problems, the Washington Post reported today. Accenture will get a 12-month, $90 million contract to update the website, the newspaper reported.

The Healthcare.gov site is the main portal for consumers to sign up for new insurance plans under the Affordable Care Act.

CGI’s Healthcare.gov contract is due for renewal in February. The terms of the agreement included options for the U.S. to renew it for one more year and then for another two years after that.

The decision not to renew comes as frustration grows among officials of the Centers for Medicare and Medicaid Services (CMS), which oversees the ACA, about the pace and quality of CGI’s work, the Post said, quoting unnamed sources. About half of the software fixes written by CGI engineers in recent months have failed on first attempt to use them, CMS officials told the Post.

The government awarded the contract to Accenture on a sole-source, or no-bid, basis because the CGI contract expires at the end of next month. That gives Accenture less than two months to familiarize itself with the project before it takes over the complex task of fixing numerous remaining glitches.

CGI did not immediately respond to Computerworld’s request for comment.

In an email, an Accenture spokesman declined to confirm or deny the report.

“Accenture Federal Services is in discussions with clients and prospective clients all the time, but it is not appropriate to discuss new business opportunities we may or may not be pursuing,” the spokesman said.

The decision to replace CGI comes as performance of the Healthcare.gov website appears to be steadily improving after its spectacularly rocky Oct. 1 launch.

A later post mortem of the debacle showed that servers did not have the right production data, third party systems weren’t connecting as required, dashboards didn’t have data and there simply wasn’t enough server capacity to handle traffic.

Though CGI had promised to have the site ready and fully functional by Oct. 1, between 30% and 40% of the site had yet to be completed at the time. The company has taken a lot of the heat since.

Ironically, the company has impressive credentials. While nowhere near as big as the largest government IT contractors, it is one of only 10 companies in the U.S. to have achieved the highest Capability Maturity Model Integration (CMMI) certification level for software development.

CGI Federal is a subsidiary of Montreal-based CGI Group. CMS hired the company as the main IT contractor for Healthcare.gov in 2011 under an $88 million contract. So far, the firm has received about $113 million for its work on the site.

Source:  pcadvisor.com

DoS attacks that took down big game sites abused Web’s time-sync protocol

Friday, January 10th, 2014

Miscreants who earlier this week took down servers for League of Legends, EA.com, and other online game services used a never-before-seen technique that vastly amplified the amount of junk traffic directed at denial-of-service targets.

Rather than directly flooding the targeted services with torrents of data, an attack group calling itself DERP Trolling sent much smaller sized data requests to time-synchronization servers running the Network Time Protocol (NTP). By manipulating the requests to make them appear as if they originated from one of the gaming sites, the attackers were able to vastly amplify the firepower at their disposal. A spoofed request containing eight bytes will typically result in a 468-byte response to a victim, a more than 58-fold increase.
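The amplification factor follows directly from those figures. The short calculation below is just a back-of-the-envelope sketch using the numbers quoted in this article (8-byte requests, 468-byte responses, and the roughly 7.3Gbps average attack size reported further down); it shows why a modest amount of spoofed request traffic translates into a much larger flood at the victim.

```python
# Figures quoted in the article: an 8-byte spoofed NTP request typically
# draws a 468-byte response aimed at the victim.
request_bytes = 8
response_bytes = 468

amplification = response_bytes / request_bytes
print(f"Amplification factor: ~{amplification:.1f}x")   # ~58.5x

# Worked example using the average attack size Black Lotus reports (7.3 Gbps):
# how much spoofed request traffic the attacker must actually send.
target_gbps = 7.3
attacker_gbps = target_gbps / amplification
print(f"Spoofed request traffic needed: ~{attacker_gbps:.2f} Gbps")  # ~0.12 Gbps
```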

“Prior to December, an NTP attack was almost unheard of because if there was one it wasn’t worth talking about,” Shawn Marck, CEO of DoS-mitigation service Black Lotus, told Ars. “It was so tiny it never showed up in the major reports. What we’re witnessing is a shift in methodology.”

The technique is in many ways similar to the DNS-amplification attacks waged on servers for years. That older DoS technique sends falsified requests to open domain name system servers requesting the IP address for a particular site. DNS-reflection attacks help aggravate the crippling effects of a DoS campaign since the responses sent to the targeted site are about 50 times bigger than the request sent by the attacker.

During the first week of the year, NTP reflection accounted for about 69 percent of all DoS attack traffic by bit volume, Marck said. The average size of each NTP attack was about 7.3 gigabits per second, a more than three-fold increase over the average DoS attack observed in December. Correlating claims DERP Trolling made on Twitter with attacks Black Lotus researchers were able to observe, they estimated the attack gang had a maximum capacity of about 28Gbps.

NTP servers help people synchronize their servers to very precise time increments. Recently, the protocol was found to suffer from a condition that could be exploited by DoS attackers. Fortunately, NTP-amplification attacks are relatively easy to repel. Since virtually all the NTP traffic can be blocked with few if any negative consequences, engineers can simply filter out the packets. Other types of DoS attacks are harder to mitigate, since engineers must first work to distinguish legitimate data from traffic designed to bring down the site.

Black Lotus recommends network operators follow several practices to blunt the effects of NTP attacks. They include using traffic policers to limit the amount of NTP traffic that can enter a network, implementing large-scale DDoS mitigation systems, or opting for service-based approaches that provide several gigabits of standby capacity for use during DDoS attacks.
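One way to read the “traffic policer” recommendation is as a rate limiter applied to inbound NTP (UDP port 123) traffic at the network edge. The token-bucket sketch below is only an illustration of that idea; the rate and burst values are hypothetical, and in practice this is configured on routers or dedicated DDoS-mitigation gear rather than implemented in Python.

```python
import time

class TokenBucket:
    """Allow traffic up to rate_bps bytes/sec with bursts up to burst_bytes."""
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        now = time.monotonic()
        # refill tokens in proportion to elapsed time, capped at the burst size
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False   # over budget: drop or deprioritize the packet

# hypothetical policy: permit roughly 1 Mbps of inbound NTP, bursts up to 64 KB
ntp_policer = TokenBucket(rate_bps=125_000, burst_bytes=64_000)

def handle_udp_packet(src_port, dst_port, payload):
    if 123 in (src_port, dst_port) and not ntp_policer.allow(len(payload)):
        return "drop"
    return "forward"
```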

Source:  arstechnica.com

Cyber criminals offer malware for Nginx, Apache Web servers

Thursday, December 26th, 2013

A new malware program that functions as a module for the Apache and Nginx Web servers is being sold on cybercrime forums, according to researchers from security firm IntelCrawler.

The malware is called Effusion and according to the sales pitch seen by IntelCrawler, a start-up firm based in Los Angeles that specializes in cybercrime intelligence, it can inject code in real time into websites hosted on the compromised Web servers. By injecting content into a website, attackers can redirect visitors to exploits or launch social engineering attacks.

The Effusion module works with Nginx from version 0.7 up to the latest stable version, 1.4.4, and with Apache running on 32- and 64-bit versions of Linux and FreeBSD. Modules extend Apache’s and Nginx’s core functionality.

The malware can inject rogue code into static content of certain MIME types, including JavaScript and HTML, and in PHP templates at the start, end or after a specific tag. Attackers can push configuration updates and control code modifications remotely.

Filters can also be used to restrict when the injection happens. Effusion supports filtering by referrer header, which can be used to target only visitors who come from specific websites; by User-Agent header, which can be used to target users of specific browsers; and by IP address or address range.
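Because Effusion injects content only when its filters match, one rough way for an administrator to spot this kind of conditional injection is to fetch the same page with and without a search-engine referrer and compare the responses. The sketch below illustrates that check; the URL and headers are placeholders, and a real audit would also diff the response against a known-good copy of the page.

```python
import urllib.request

URL = "http://www.example.com/"   # placeholder: a page on a server you administer

def fetch(headers):
    """Fetch URL with the given request headers and return the raw body."""
    req = urllib.request.Request(URL, headers=headers)
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read()

plain = fetch({"User-Agent": "audit-script"})
as_search_visitor = fetch({
    "User-Agent": "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36",
    "Referer": "https://www.google.com/search?q=example",
})

if plain != as_search_visitor:
    print("Responses differ by Referer/User-Agent - inspect for injected scripts.")
else:
    print("No header-dependent differences detected for this page.")
```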

The malware can check whether it has root access, something that could allow the attackers greater control over the underlying system. It can also delete the injected content when suspicious processes are detected in order to hide itself, Andrey Komarov, IntelCrawler’s CEO, said via email.

The Effusion authors offer precompiled builds for $2,500 per build and plan to vet buyers, Komarov said. This suggests they’re interested in selling it only to a limited number of people so they can continue to offer support and develop the malware at the same time, he said.

While this is not the first malware to function as an Apache module, it is one of the very few so far to also target Nginx, a high-performance Web server that has grown considerably in popularity in recent years.

According to a December Web server survey by Internet services firm Netcraft, Nginx is the third most widely used Web server software after Apache and Microsoft IIS, and has a market share of over 14%. Because it’s built to handle high numbers of concurrent connections, it is used to host heavily trafficked websites including Netflix, Hulu, Pinterest, CloudFlare, Airbnb, WordPress.com, GitHub and SoundCloud.

Source: computerworld.com

Critics: NSA agent co-chairing key crypto standards body should be removed

Monday, December 23rd, 2013

There’s an elephant in the room at the Internet Engineering Task Force.

Security experts are calling for the removal of a National Security Agency employee who co-chairs an influential cryptography panel, which advises a host of groups that forge widely used standards for the Internet Engineering Task Force (IETF).

Kevin Igoe, who in a 2011 e-mail announcing his appointment was listed as a senior cryptographer with the NSA’s Commercial Solutions Center, is one of two co-chairs of the IETF’s Crypto Forum Research Group (CFRG). The CFRG provides cryptographic guidance to IETF working groups that develop standards for a variety of crucial technologies that run and help secure the Internet. The transport layer security (TLS) protocol that underpins Web encryption and standards for secure shell connections used to securely access servers are two examples. Igoe has been CFRG co-chair for about two years, along with David A. McGrew of Cisco Systems.

Igoe’s leadership had largely gone unnoticed until reports surfaced in September that exposed the role NSA agents have played in “deliberately weakening the international encryption standards adopted by developers.” Until now, most of the resulting attention has focused on cryptographic protocols endorsed by the separate National Institute of Standards and Technology. More specifically, scrutiny has centered on a random number generator that The New York Times, citing a document leaked by former NSA contractor Edward Snowden, reported may contain a backdoor engineered by the spy agency.

Enter Dragonfly

Less visibly, the revelations about the NSA’s influence on crypto standards have also renewed suspicions about the agency’s role in the IETF. To wit: they have brought new urgency to long-simmering criticism claiming that the CFRG was advocating the addition of a highly unproven technology dubbed “Dragonfly” to the TLS technology websites use to provide HTTPS encryption. Despite a lack of consensus about the security of Dragonfly, Igoe continued to champion it, critics said, citing several e-mails Igoe sent in the past two years. Combined with his ties to the NSA, Igoe’s continued advocacy of Dragonfly is creating a lack of confidence in his leadership, critics said.

“Kevin’s NSA affiliation raises unpleasant but unavoidable questions regarding these actions,” Trevor Perrin, a crypto expert and one of the most vocal critics, wrote Friday in an e-mail to the CFRG list serve. “It’s entirely possible these are just mistakes by a novice chair who lacks experience in a particular sort of protocol and is being pressured by IETF participants to endorse something. But it’s hard to escape an impression of carelessness and unseriousness in Kevin’s work. One wonders whether the NSA is happy to preside over this sort of sloppy crypto design.”

Igoe and McGrew didn’t respond to an e-mail seeking comment. This article will be updated if they respond later.

Like the Dual EC_DRBG standard adopted by NIST and now widely suspected to contain a backdoor, Dragonfly came with no security proof. And unlike several other better known candidates for “password-authenticated key exchange” (PAKE), most people participating in the CFRG or TLS working group knew little or nothing about it. TLS already has an existing PAKE called SRP, which critics say makes Dragonfly particularly redundant. PAKEs are complex and still not widely understood by crypto novices, but in essence, they involve the use of passwords to negotiate cryptographic keys used in encrypted TLS communications between servers and end users.

Update: Dragonfly developer Dan Harkins strongly defended the security of the PAKE.

“There are no known security vulnerabilities with dragonfly,” he wrote in an e-mail after this article was first published. “But it does not have a formal security proof to accompany it, unlike some other PAKE schemes. So the TLS working group asked the CFRG to look at it. They were not asked to ‘approve’ it, and they weren’t asked to ‘bless’ it. Just take a look and see if there’s any problems that would make it unsuitable for TLS. There were comments received on the protocol and they were addressed. There were no issues found that make it unsuitable for TLS.”

Harkins also took issue with characterizations by critics and this Ars article that Dragonfly is “untested” and “highly unproven.” He said it’s used in the 802.11 Wi-Fi standard as a secure, drop-in replacement for WPA-PSK security protocol. It’s also found as a method in the extensible authentication protocol and as an alternative to pre-shared keys in the Internet key exchange protocol.

“Do you know of another PAKE scheme that has been so widely applied?” he wrote in his response.

Perrin is a programmer who primarily develops cryptographic applications. He is the developer or co-developer of several proposed Internet standards, including trust assertions for certificate keys and the asynchronous protocol for secure e-mail. In Friday’s e-mail, he provided a raft of reasons why he said Igoe should step down:

1) Kevin has provided the *ONLY* positive feedback for Dragonfly that can be found on the CFRG mailing list or meeting minutes. The contrast between Kevin’s enthusiasm and the group’s skepticism is striking [CFRG_SUMMARY]. It’s unclear what this enthusiasm is based on. There’s no record of Kevin making any effort to understand Dragonfly’s unusual structure, compare it to alternatives, consider possible use cases, or construct a formal security analysis.

2) Twice Kevin suggested a technique for deriving the Dragonfly password-based element which would make the protocol easy to break [IGOE_1, IGOE_2]. He also endorsed an ineffective attempt to avoid timing attacks by adding extra iterations to one of the loops [IGOE_3, IGOE_4]. These are surprising mistakes from an experienced cryptographer.

3) Kevin’s approval of Dragonfly to the TLS WG misrepresented CFRG consensus, which was skeptical of Dragonfly [CFRG_SUMMARY].

Perrin’s motion has been seconded by several other participants, including cryptographer William Whyte. Another critic supporting Igoe’s removal called on security expert Bruce Schneier to replace Igoe. In an e-mail to Ars, Schneier said he is unsure if he is a suitable candidate. “I’m probably too busy to chair, and I’m not really good at the whole ‘organizing a bunch of people’ thing,” he wrote.

In Harkins’ 1,117-word response, he wrote:

The opposition to it in TLS is not “long-simmering” as alleged in the article. It is very recent and the most vocal critic actually didn’t say anything until _after_ the close of Working Group Last Call (a state of draft development on the way to RFC status). As part of his critique, Trevor Perrin has noted that dragonfly has no security proof. That’s true and it’s certainly not new. Having a formal proof has never been a requirement in the past and it is not a requirement today. He has continued to refer to the comments received about the draft as if they are signs of flaws. This is especially shocking given he is referred to in the article as “the developer or co-developer of several proposed Internet standards.” Someone who develops, or co-develops Internet Standards knows how the sausage making works. Comments are made, comments are addressed. There has, to my knowledge, never been an Internet Draft that’s perfect in it’s -00 revision and went straight to publication as an RFC. His criticism is particularly mendacious.

Trevor Perrin has also pointed out the technique in which dragonfly generates a password-based element as being flawed. The technique was the result of a 2 year old thread on the TLS list on how to address a possible side-channel attack. Trevor doesn’t like it, which is fair, but on the TLS mailing list he has also said that even if it was changed to a way he wants he would still be against dragonfly.

Anyone who has spent any time at all watching how standards bodies churn out the sausage knows that suspicions and vast conspiracy theories are almost always a part of the proceedings. But in a post-Snowden world, there’s new legitimacy to criticism about NSA involvement, particularly when employees of the agency are the ones actively shepherding untested proposals.

Source:  arstechnica.com

Unique malware evades sandboxes

Thursday, December 19th, 2013

Malware used in attack on PHP last month dubbed DGA.Changer

Malware utilized in the attack last month on the developers’ site PHP.net used a unique approach to avoid detection, a security expert says.

On Wednesday, security vendor Seculert reported finding that one of five malware types used in the attack had a unique cloaking property for evading sandboxes. The company called the malware DGA.Changer.

DGA.Changer’s only purpose was to download other malware onto infected computers, Aviv Raff, chief technology officer for Seculert, said on the company’s blog. Seculert identified 6,500 compromised computers communicating with the malware’s command and control server. Almost 60 percent were in the United States.

What Seculert found unique was how the malware could receive a command from a C&C server to change the seed of the software’s domain generation algorithm. The DGA periodically generates a large number of domain names as potential communication points to the C&C server, thereby making it difficult for researchers and law enforcement to find the right domain and possibly shut down the botnet.

“What the attackers behind DGA did is basically change the algorithm on the fly, so they can tell the malware to create a new stream of domains automatically,” Raff told CSOonline.

When the malware generates the same list of domains, it can be detected in the sandbox where security technology will isolate suspicious files. However, changing the algorithm on demand means that the malware won’t be identified.

“This is a new capability that didn’t exist before,” Raff said. “This capability allows the attacker to bypass sandbox technology.”
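For readers unfamiliar with domain generation algorithms, the toy sketch below shows the general idea; it is purely illustrative and not DGA.Changer’s actual algorithm. Domains are derived deterministically from a seed and the current date, so bot and operator compute the same list, and a remote command that swaps the seed instantly produces a new stream of domains that any list recorded earlier in a sandbox will not match.

```python
import hashlib
from datetime import date

def generate_domains(seed: str, day: date, count: int = 5):
    """Derive a deterministic list of pseudo-random domains from a seed and a date."""
    domains = []
    for i in range(count):
        material = f"{seed}:{day.isoformat()}:{i}".encode()
        digest = hashlib.sha256(material).hexdigest()
        domains.append(digest[:12] + ".com")   # toy: first 12 hex chars as the label
    return domains

print(generate_domains("seed-A", date(2013, 12, 19)))
# A single C&C command that swaps "seed-A" for "seed-B" yields a completely
# different list, invalidating any domain list a sandbox captured earlier.
print(generate_domains("seed-B", date(2013, 12, 19)))
```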

Hackers working for a nation-state targeting specific entities, such as government agencies, think tanks or international corporations, would use this type of malware, according to Raff. Called advanced persistent threats, these hackers tend to use sophisticated attack tools.

An exploit kit that served five different malware types was used in compromising two servers of PHP.net, a site for downloads and documentation related to the PHP general-purpose scripting language used in Web development. Google spotted four pages on the site serving malicious JavaScript that targeted personal computers, but ignored mobile devices.

The attack was noteworthy because of the number of visitors to PHP.net, which is in the top 250 domains on the Internet, according to Alexa rankings.

To defend against DGA.Changer, companies would need a tool that looks for abnormal behavior in network traffic. The malware tends to generate unusual traffic by querying lots of domains in search of the one leading to the C&C server.

“Because this malware will try to go to different domains, it will generate suspicious traffic,” Raff said.
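A correspondingly simple way to hunt for that behavior is to watch DNS logs for hosts that generate an unusually large number of distinct failed lookups in a short period. The sketch below is a minimal illustration; the log format and threshold are hypothetical and would need to be adapted to a real DNS logging pipeline.

```python
from collections import defaultdict

# hypothetical DNS log records: (client_ip, queried_domain, rcode)
# rcode "NXDOMAIN" means the domain did not resolve
dns_log = [
    ("10.0.0.5", "a1b2c3d4e5f6.com", "NXDOMAIN"),
    ("10.0.0.5", "9f8e7d6c5b4a.com", "NXDOMAIN"),
    ("10.0.0.7", "www.example.com", "NOERROR"),
    # ... thousands more in a real capture
]

FAILED_LOOKUP_THRESHOLD = 50   # hypothetical: tune to your environment

failed = defaultdict(set)
for client, domain, rcode in dns_log:
    if rcode == "NXDOMAIN":
        failed[client].add(domain)

for client, domains in failed.items():
    if len(domains) > FAILED_LOOKUP_THRESHOLD:
        print(f"{client}: {len(domains)} distinct failed lookups - possible DGA activity")
```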

Seculert did not find any evidence that would indicate who was behind the PHP.net attack.

“This is a group that’s continuously updating this malicious software, so this is a work in progress,” Raff said.

Source:  csoonline.com

IT managers are increasingly replacing servers with SaaS

Friday, December 6th, 2013

IT managers want to cut the number of servers they manage, or at least slow the growth, and they may be succeeding, according to new data.

IDC expects that anywhere from 25% to 30% of all the servers shipped next year will be delivered to cloud services providers.

Within three years, by 2017, nearly 45% of all the servers leaving manufacturers will be bought by cloud providers.

“What that means is a lot of people are buying SaaS,” said Frank Gens, referring to software-as-a-service. “A lot of capacity is shifting out of the enterprise into cloud service providers.”

The increased use of SaaS is a major reason for the market shift, but so is virtualization to increase server capacity. Data center consolidations are eliminating servers as well, along with the purchase of denser servers capable of handling larger loads.

For sure, IT managers are going to be managing physical servers for years to come. But, the number will be declining, based on market direction and the experience of IT managers.

Two years ago, when Mark Endry became the CIO and SVP of U.S. operations for Arcadis, a global consulting, design and engineering company, the firm was running its IT in-house.

“We really put a stop to that,” said Endry. Arcadis is moving to SaaS, either to add new services or substitute existing ones. An in-house system is no longer the default, he added.

“Our standard RFP for services says it must be SaaS,” said Endry.

Arcadis has added Workday, a SaaS-based HR management system; replaced an in-house training management system with a SaaS offering; and replaced an in-house ADP HR system with a service. The company is also planning a move to Office 365, and will stop running its in-house Exchange and SharePoint servers.

As a result, in the last two years, Endry has kept the server count steady at 1,006 spread through three data centers. He estimates that without the efforts at virtualization, SaaS and other consolidations, they would have 200 more physical servers.

Endry would like to consolidate the three data centers into one, and continue shifting to SaaS to avoid future maintenance costs, and also the need to customize and maintain software. SaaS can’t yet be used for everything, particularly ERP, but “my goal would be to really minimize the footprint of servers,” he said.

Similarly, Gerry McCartney, CIO of Purdue University is working to cut server use and switch more to SaaS.

The university’s West Lafayette, Ind., campus had some 65 data centers two years ago, many small. Data centers at Purdue are defined as any room with additional power and specialized heavy duty cooling equipment. They have closed at least 28 of them in the last 18 months.

The Purdue consolidation is the result of several broad directions: increased virtualization, use of higher-density systems, and increased use of SaaS.

McCartney wants to limit the university’s server management role. “The only things that we are going to retain on campus is research and strategic support,” he said. That means that most, if not all, of the administrative functions may be moved off campus.

This shift to cloud-based providers is roiling the server market, and is expected to help send server revenue down 3.5% this year, according to IDC.

Gens says that one trend among users who buy servers is increasing interest in converged or integrated systems that combine server, storage, networking and software. They now account for about 10% of the market, and are expected to make up 20% by 2020.

Meanwhile, the big cloud providers are heading in the opposite direction, and are increasingly looking for componentized systems they can assemble, Velcro-like, in their data centers. This has given rise to contract manufacturers, or original design manufacturers (ODMs), mostly overseas, who make these systems for cloud providers.

Source:  computerworld.com

Microsoft disrupts ZeroAccess web fraud botnet

Friday, December 6th, 2013

ZeroAccess, one of the world’s largest botnets – a network of computers infected with malware to trigger online fraud – has been disrupted by Microsoft and law enforcement agencies.

ZeroAccess hijacks web search results and redirects users to potentially dangerous sites to steal their details.

It also generates fraudulent ad clicks on infected computers then claims payouts from duped advertisers.

Also called the Sirefef botnet, ZeroAccess has infected two million computers.

The botnet targets search results on Google, Bing and Yahoo search engines and is estimated to cost online advertisers $2.7m (£1.7m) per month.

Microsoft said it had been authorised by US regulators to “block incoming and outgoing communications between computers located in the US and the 18 identified Internet Protocol (IP) addresses being used to commit the fraudulent schemes”.

In addition, the firm has also taken control of 49 domains associated with ZeroAccess.

David Finn, executive director of Microsoft Digital Crimes Unit, said the disruption “will stop victims’ computers from being used for fraud and help us identify the computers that need to be cleaned of the infection”.

‘Most robust’

The ZeroAccess botnet relies on waves of communication between groups of infected computers, instead of being controlled by a few servers.

This allows cyber criminals to control the botnet remotely from a range of computers, making it difficult to tackle.

According to Microsoft, more than 800,000 ZeroAccess-infected computers were active on the internet on any given day as of October this year.

“Due to its botnet architecture, ZeroAccess is one of the most robust and durable botnets in operation today and was built to be resilient to disruption efforts,” Microsoft said.

However, the firm said its latest action is “expected to significantly disrupt the botnet’s operation, increasing the cost and risk for cyber criminals to continue doing business and preventing victims’ computers from committing fraudulent schemes”.

Microsoft said its Digital Crimes Unit collaborated with the US Federal Bureau of Investigation (FBI) and Europol’s European Cybercrime Centre (EC3) to disrupt the operations.

Earlier this year, security firm Symantec said it had disabled nearly 500,000 computers infected by ZeroAccess and taken them out of the botnet.

Source: BBC

Why security benefits boost mid-market adoption of virtualization

Monday, December 2nd, 2013

While virtualization has undoubtedly already found its footing in larger businesses and data centers, the technology is still in the process of catching on in the middle market. But a recent study conducted by a group of Cisco Partner Firms, titled “Virtualization on the Rise,” indicates just that: the prevalence of virtualization is continuing to expand and has so far proven to be a success for many small- and medium-sized businesses.

With firms where virtualization has yet to catch on, however, security is often the point of contention.

Cisco’s study found that adoption rates for virtualization are already quite high at small- to medium-sized businesses, with 77 percent of respondents indicating that they already had some type of virtualization in place around their office. These types of solutions included server virtualization, a virtual desktop infrastructure, storage virtualization, network virtualization, and remote desktop access, among others. Server virtualization was the most commonly used, with 59 percent of respondents (that said they had adopted virtualization in some form) stating that it was their solution of choice.

That all being said, there are obviously some businesses that still have yet to adopt virtualization, and a healthy chunk of respondents – 51 percent – cited security as a reason. It appeared that the larger companies with over 100 employees were more concerned about the security of virtualization, with 60 percent of that particular demographic qualifying it as their barrier to entry (while only 33 percent of smaller firms shared the same concern).

But with Cisco’s study lacking any other specificity in terms of why exactly the respondents were concerned about the security of virtualization, one can’t help but wonder: is this necessarily sound reasoning? Craig Jeske, the business development manager for virtualization and cloud at Global Technology Resources, shed some light on the subject.

“I think [virtualization] gives a much easier, more efficient, and agile response to changing demands, and that includes responding to security threats,” said Jeske. “It allows for a faster response than if you had to deploy new physical tools.”

He went on to explain that given how virtualization enhances portability and makes it easier to back up data, it subsequently makes it easier for companies to get back to a known state in the event of some sort of compromise. This kind of flexibility limits attackers’ options.

“Thanks to the agility provided by virtualization, it changes the attack vectors that people can come at us from,” he said.

As for the 33 percent of smaller firms that cited security as a barrier to entry – thereby suggesting that the smaller companies were more willing to take the perceived “risk” of adopting the technology – Jeske said that was simply because virtualization makes more sense for businesses of that size.

“When you have a small budget, the cost savings [from virtualization] are more dramatic, since it saves space and calls for a lower upfront investment,” he said. On the flip side, the upfront cost for any new IT direction is higher for a larger business. It’s easier to make a shift when a company has 20 servers versus 20 million servers; while the return on virtualization is higher for a larger company, so is the upfront investment.

Of course, there is also the obvious fact that with smaller firms, the potential loss as a result of taking such a risk isn’t as great.

“With any type of change, the risk is lower for a smaller business than for a multimillion dollar firm,” he said. “With bigger businesses, any change needs to be looked at carefully. Because if something goes wrong, regardless of what the cause was, someone’s losing their job.”

Jeske also addressed the fact that some of the security concerns indicated by the study results may have stemmed from some teams recognizing that they weren’t familiar with the technology. That lack of comfort with virtualization – for example, not knowing how to properly implement or deploy it – could make virtualization less secure, but it’s not inherently insecure. Security officers, he stressed, are always most comfortable with what they know.

“When you know how to handle virtualization, it’s not a security detriment,” he said. “I’m hesitant to make a change until I see the validity and justification behind that change. You can understand peoples’ aversion from a security standpoint and first just from the standpoint of needing to understand it before jumping in.”

But the technology itself, Jeske reiterated, has plenty of security benefits.

“Since everything is virtualized, it’s easier to respond to a threat because it’s all available from everywhere. You don’t have to have the box,” he said. “The more we’re tied to these servers and our offices, the easier it is to respond.”

And with every element encapsulated in a software package, he said, businesses might be able to do more to each virtual server than they could in the physical world. Virtual firewalls, intrusion detection, etc. can all be put in as an application and placed closer to the machine itself so firms don’t have to bring things back out into the physical environment.

This also allows for easier, faster changes in security environments. One change can be propagated across the entire virtual environment automatically, rather than having to push it out to each physical device individually that’s protecting a company’s systems.

Jeske noted that there are benefits from a physical security standpoint, as well, namely because somebody else takes care of it for you. The servers hosting the virtualized solutions are somewhere far away, and the protection of those servers is somebody else’s responsibility.

But what with the rapid proliferation of virtualization, Jeske warned that security teams need to try to stay ahead of the game. Otherwise, it’s going to be harder to properly adopt the technology when they no longer have a choice.

“With virtualization, speed of deployment and speed of reaction are the biggest things,” said Jeske. “The servers and desktops are going to continue to get virtualized whether officers like it or not. So they need to be proactive and stay in front of it, otherwise they can find themselves in a bad position further on down the road.”

Source:  csoonline.com

Hackers exploit JBoss vulnerability to compromise servers

Tuesday, November 19th, 2013

Attackers are actively exploiting a known vulnerability to compromise JBoss Java EE application servers that expose the HTTP Invoker service to the Internet in an insecure manner.

At the beginning of October security researcher Andrea Micalizzi released an exploit for a vulnerability he identified in products from multiple vendors including Hewlett-Packard, McAfee, Symantec and IBM that use 4.x and 5.x versions of JBoss. That vulnerability, tracked as CVE-2013-4810, allows unauthenticated attackers to install an arbitrary application on JBoss deployments that expose the EJBInvokerServlet or JMXInvokerServlet.

Micalizzi’s exploit installs a Web shell application called pwn.jsp that can be used to execute shell commands on the operating system via HTTP requests. The commands are executed with the privileges of the OS user running JBoss, which in the case of some JBoss deployments can be a high privileged, administrative user.
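Administrators unsure whether their own JBoss deployments are affected can start by checking whether the invoker servlets answer unauthenticated requests. The read-only probe below is a rough sketch: the /invoker/ and /jmx-console/ paths are the common defaults for JBoss 4.x and 5.x and may differ in a given deployment, the hostname is a placeholder, and a 200 response to an anonymous request is a strong sign the interface needs authentication applied.

```python
import urllib.request
import urllib.error

HOST = "http://jboss.example.internal:8080"   # placeholder: a server you administer
PATHS = ["/invoker/JMXInvokerServlet", "/invoker/EJBInvokerServlet", "/jmx-console/"]

for path in PATHS:
    try:
        with urllib.request.urlopen(HOST + path, timeout=5) as resp:
            print(f"{path}: HTTP {resp.status} without credentials - EXPOSED, apply authentication")
    except urllib.error.HTTPError as e:
        # 401/403 means the interface at least demands authentication
        print(f"{path}: HTTP {e.code} - access restricted")
    except urllib.error.URLError as e:
        print(f"{path}: not reachable ({e.reason})")
```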

Researchers from security firm Imperva have recently detected an increase in attacks against JBoss servers that used Micalizzi’s exploit to install the original pwn.jsp shell, but also a more complex Web shell called JspSpy.

Over 200 sites running on JBoss servers, including some that belong to governments and universities, have been hacked and infected with these Web shell applications, said Barry Shteiman, director of security strategy at Imperva.

The problem is actually bigger because the vulnerability described by Micalizzi stems from insecure default configurations that leave JBoss management interfaces and invokers exposed to unauthenticated attacks, an issue that has been known for years.

In a 2011 presentation about the multiple ways in which unsecured JBoss installations can be attacked, security researchers from Matasano Security estimated, based on a Google search for certain strings, that there were around 7,300 potentially vulnerable servers.

According to Shteiman, the number of JBoss servers with management interfaces exposed to the Internet has more than tripled since then, reaching over 23,000.

One reason for this increase is probably that people have not fully understood the risks associated with this issue when it was discussed in the past and continue to deploy insecure JBoss installations, Shteiman said. Also, some vendors ship products with insecure JBoss configurations, like the products vulnerable to Micalizzi’s exploit, he said.

Products vulnerable to CVE-2013-4810 include McAfee Web Reporter 5.2.1, HP ProCurve Manager (PCM) 3.20 and 4.0, HP PCM+ 3.20 and 4.0, HP Identity Driven Manager (IDM) 4.0, Symantec Workspace Streaming 7.5.0.493 and IBM TRIRIGA. However, products from other vendors that have not yet been identified could also be vulnerable.

JBoss is developed by Red Hat and was recently renamed to WildFly. Its latest stable version is 7.1.1, but according to Shteiman many organizations still use JBoss 4.x and 5.x for compatibility reasons as they need to run old applications developed for those versions.

Those organizations should follow the instructions for securing their JBoss installations that are available on the JBoss Community website, he said.

IBM also provided information on securing the JMX Console and the EJBInvoker in response to Micalizzi’s exploit.

The Red Hat Security Response Team said that while CVE-2013-4810 refers to the exposure of unauthenticated JMXInvokerServlet and EJBInvokerServlet interfaces on HP ProCurve Manager, “These servlets are also exposed without authentication by default on older unsupported community releases of JBoss AS (WildFly) 4.x and 5.x. All supported Red Hat JBoss products that include the JMXInvokerServlet and EJBInvokerServlet interfaces apply authentication by default, and are not affected by this issue. Newer community releases of JBoss AS (WildFly) 7.x are also not affected by this issue.”

Like Shteiman, Red Hat advised users of older JBoss AS releases to follow the instructions available on the JBoss website in order to apply authentication to the invoker servlet interfaces.

The Red Hat security team has also been aware of this issue affecting certain versions of the JBoss Enterprise Application Platform, Web Platform and BRMS Platform since 2012 when it tracked the vulnerability as CVE-2012-0874. The issue has been addressed and current versions of JBoss Enterprise Platforms based on JBoss AS 4.x and 5.x are no longer vulnerable, the team said.

Source:  computerworld.com

Internet architects propose encrypting all the world’s Web traffic

Thursday, November 14th, 2013

A vastly larger percentage of the world’s Web traffic will be encrypted under a near-final recommendation to revise the Hypertext Transfer Protocol (HTTP) that serves as the foundation for all communications between websites and end users.

The proposal, announced in a letter published Wednesday by an official with the Internet Engineering Task Force (IETF), comes after documents leaked by former National Security Agency contractor Edward Snowden heightened concerns about government surveillance of Internet communications. Despite those concerns, websites operated by Yahoo, the federal government, the site running this article, and others continue to publish the majority of their pages in a “plaintext” format that can be read by government spies or anyone else who has access to the network the traffic passes over. Last week, cryptographer and security expert Bruce Schneier urged people to “make surveillance expensive again” by encrypting as much Internet data as possible.

The HTTPbis Working Group, the IETF body charged with designing the next-generation HTTP 2.0 specification, is proposing that encryption be the default way data is transferred over the “open Internet.” A growing number of groups participating in the standards-making process—particularly those who develop Web browsers—support the move, although as is typical in technical deliberations, there’s debate about how best to implement the changes.

“There seems to be strong consensus to increase the use of encryption on the Web, but there is less agreement about how to go about this,” Mark Nottingham, chair of the HTTPbis working group, wrote in Wednesday’s letter. (HTTPbis roughly translates to “HTTP again.”)

He went on to lay out three implementation proposals and describe their pros and cons:

A. Opportunistic encryption for http:// URIs without server authentication—aka “TLS Relaxed” as per draft-nottingham-http2-encryption.

B. Opportunistic encryption for http:// URIs with server authentication—the same mechanism, but not “relaxed,” along with some form of downgrade protection.

C. HTTP/2 to only be used with https:// URIs on the “open” Internet. http:// URIs would continue to use HTTP/1 (and of course it would still be possible for older HTTP/1 clients to still interoperate with https:// URIs).

In subsequent discussion, there seems to be agreement that (C) is preferable to (B), since it is more straightforward; no new mechanism needs to be specified, and HSTS can be used for downgrade protection.

(C) also has this advantage over (A) and furthermore provides stronger protection against active attacks.

The strongest objections against (A) seemed to be about creating confusion about security and discouraging use of “full” TLS, whereas those against (C) were about limiting deployment of better security.

Keen observers have noted that we can deploy (C) and judge adoption of the new protocol, later adding (A) if necessary. The reverse is not necessarily true.

Furthermore, in discussions with browser vendors (who have been among those most strongly advocating more use of encryption), there seems to be good support for (C), whereas there’s still a fair amount of doubt/disagreement regarding (A).

Pros, cons, and carrots

As Nottingham acknowledged, there are major advantages and disadvantages for each option. Proposal A would be easier for websites to implement because it wouldn’t require them to authenticate their servers using a digital certificate that is recognized by all the major browsers. This relaxation of current HTTPS requirements would eliminate a hurdle that stops many websites from encrypting traffic now, but it also comes at a cost. The lack of authentication could make it trivial for the person at an Internet cafe or the spy monitoring Internet backbones to create a fraudulent digital certificate that impersonates websites using this form of relaxed transport layer security (TLS). That risk calls into question whether the weakened measure is worth the hassle of implementing.

Proposal B, by contrast, would make it much harder for attackers, since HTTP 2.0 traffic by default would be both encrypted and authenticated. But the increased cost and effort required by millions of websites may stymie the adoption of the new specification, which in addition to encryption offers improvements such as increased header compression and asynchronous connection multiplexing.

Proposal C seems to resolve the tension between the other two options by moving in a different direction altogether—that is, by implementing HTTP 2.0 only in full-blown HTTPS traffic. This approach attempts to use the many improvements of the new standard as a carrot that gives websites an incentive to protect their traffic with traditional HTTPS encryption.
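For sites that do move to full HTTPS, the downgrade protection mentioned in the letter (HSTS, or HTTP Strict Transport Security) boils down to two steps: redirect plaintext requests and send one extra header over HTTPS. The sketch below shows the idea using a small Flask application purely for illustration; the max-age value is an example, and the same two steps apply to any web server or framework.

```python
from flask import Flask, redirect, request

app = Flask(__name__)

@app.before_request
def force_https():
    # redirect any plaintext request to its HTTPS equivalent
    if not request.is_secure:
        return redirect(request.url.replace("http://", "https://", 1), code=301)

@app.after_request
def add_hsts_header(response):
    # instruct browsers to use HTTPS for this host for the next year
    response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    return response

@app.route("/")
def index():
    return "served over HTTPS"
```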

The options that the working group is considering do a fair job of mapping the current debate over Web-based encryption. A common argument is that more sites can and should encrypt all or at least most of their traffic. Even better is when sites provide this encryption while at the same time providing strong cryptographic assurances that the server hosting the website is the one operated by the domain-name holder listed in the address bar—rather than by an attacker who is tampering with the connection.

Unfortunately, the proposals are passing over an important position in the debate over Web encryption, involving the viability of the current TLS and secure sockets layer (SSL) protocols that underpin all HTTPS traffic. With more than 500 certificate authorities located all over the world recognized by major browsers, all it takes is the compromise of one of them for the entire system to fail (although certificate pinning in some cases helps contain the damage). There’s nothing in Nottingham’s letter indicating that this single point of failure will be addressed. The current HTTPS system has serious privacy implications for end users, since certificate authorities can log huge numbers of requests for SSL-protected websites and map them to individual IP addresses. This is also unaddressed.

It’s unfortunate that the letter didn’t propose alternatives to the largely broken TLS system, such as the one dubbed Trust Assertions for Certificate Keys, which was conceived by researchers Moxie Marlinspike and Trevor Perrin. Then again, as things are now, the engineers in the HTTPbis Working Group are likely managing as much controversy as they can. Adding an entirely new way to encrypt Web traffic to an already sprawling list of considerations would probably prove to be too much.

Source:  arstechnica.com

New malware variant suggests cybercriminals targeting SAP users

Tuesday, November 5th, 2013

The malware checks if infected systems have a SAP client application installed, ERPScan researchers said

A new variant of a Trojan program that targets online banking accounts also contains code to search if infected computers have SAP client applications installed, suggesting that attackers might target SAP systems in the future.

The malware was discovered a few weeks ago by Russian antivirus company Doctor Web, which shared it with researchers from ERPScan, a developer of security monitoring products for SAP systems.

“We’ve analyzed the malware and all it does right now is to check which systems have SAP applications installed,” said Alexander Polyakov, chief technology officer at ERPScan. “However, this might be the beginning for future attacks.”

When malware does this type of reconnaissance to see if particular software is installed, the attackers either plan to sell access to those infected computers to other cybercriminals interested in exploiting that software or they intend to exploit it themselves at a later time, the researcher said.

Polyakov presented the risks of such attacks and others against SAP systems at the RSA Europe security conference in Amsterdam on Thursday.

To his knowledge, this is the first piece of malware targeting SAP client software that wasn’t created as a proof-of-concept by researchers, but by real cybercriminals.

SAP client applications running on workstations have configuration files that can be easily read and contain the IP addresses of the SAP servers they connect to. Attackers can also hook into the application processes and sniff SAP user passwords, or read them from configuration files and GUI automation scripts, Polyakov said.
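
To illustrate how little effort that reconnaissance takes, and to let defenders see what an attacker would see, the Python sketch below walks a workstation’s user profiles for file names commonly associated with SAP GUI connection settings and pulls out anything that looks like an IPv4 address. The file names, search roots, and the assumption that server addresses appear as literal IPs are all assumptions to adapt for your own environment.

    import os, re

    # Assumed Windows search roots and candidate file names; adjust for your estate.
    SEARCH_ROOTS = [os.path.expandvars(r"%APPDATA%"), r"C:\Users"]
    CANDIDATE_NAMES = {"saplogon.ini", "sapuilandscape.xml"}
    IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

    def find_sap_endpoints():
        findings = {}
        for root in SEARCH_ROOTS:
            for dirpath, _, files in os.walk(root):
                for name in files:
                    if name.lower() not in CANDIDATE_NAMES:
                        continue
                    path = os.path.join(dirpath, name)
                    try:
                        text = open(path, errors="replace").read()
                    except OSError:
                        continue
                    findings[path] = sorted(set(IPV4.findall(text)))
        return findings

    if __name__ == "__main__":
        for path, addresses in find_sap_endpoints().items():
            print(path, "->", ", ".join(addresses) or "no literal IPs (hostnames may be used)")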

There’s a lot that attackers can do with access to SAP servers. Depending on what permissions the stolen credentials have, they can steal customer information and trade secrets or they can steal money from the company by setting up and approving rogue payments or changing the bank account of existing customers to redirect future payments to their account, he added.

There are efforts in some enterprise environments to limit permissions for SAP users based on their duties, but those are big and complex projects. In practice, most companies allow their SAP users to do almost everything, or at least more than they’re supposed to, Polyakov said.

Even if some stolen user credentials don’t give attackers the access they want, many companies never change, or forget to change, the default administrative credentials on some instances of their development systems, which hold snapshots of the company data, the researcher said.

With access to SAP client software, attackers could steal sensitive data like financial information, corporate secrets, customer lists or human resources information and sell it to competitors. They could also launch denial-of-service attacks against a company’s SAP servers to disrupt its business operations and cause financial damage, Polyakov said.

SAP customers are usually very large enterprises. There are almost 250,000 companies using SAP products in the world, including over 80 percent of those on the Forbes 500 list, according to Polyakov.

If timed correctly, some attacks could even influence the company’s stock and would allow the attackers to profit on the stock market, according to Polyakov.

Dr. Web detects the new malware variant as part of the Trojan.Ibank family, but this is likely a generic alias, he said. “My colleagues said that this is a new modification of a known banking Trojan, but it’s not one of the very popular ones like ZeuS or SpyEye.”

However, malware is not the only threat to SAP customers. ERPScan discovered a critical unauthenticated remote code execution vulnerability in SAProuter, an application that acts as a proxy between internal SAP systems and the Internet.

A patch for this vulnerability was released six months ago, but ERPScan found that out of 5,000 SAProuters accessible from the Internet, only 15 percent currently have the patch, Polyakov said. Getting access to a company’s SAProuter puts an attacker inside the network, able to do the same things as with access to an SAP workstation, he said.
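
A quick way to check whether one of your own SAProuters is exposed in the first place is a simple reachability test against the port SAProuter conventionally listens on (3299). This is only a sketch under that assumption: it confirms exposure, not patch level, and the hostname shown is a placeholder.

    import socket

    def saprouter_reachable(host, port=3299, timeout=5):
        # TCP connect test only; says nothing about the SAProuter version or patches.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        print(saprouter_reachable("saprouter.example.com"))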

Source:  csoonline.com

Seven essentials for VM management and security

Tuesday, October 29th, 2013

Virtualization isn’t a new trend; these days it’s an essential element of infrastructure design and management. Yet even though the technology is now commonplace, organizations are still learning as they go when it comes to cloud-based initiatives.

CSO recently spoke with Shawn Willson, the Vice President of Sales at Next IT, a Michigan-based firm that focuses on managed services for small to medium-sized organizations. Willson discussed his list of essentials when it comes to VM deployment, management, and security.

Preparing for time drift on virtual servers. “Guest OSs should, and need to be synced with the host OS…Failure to do so will lead to time drift on virtual servers — resulting in significant slowdowns and errors in an active directory environment,” Willson said.

Despite the impact this could have on productivity and daily operations, he added, very few IT managers or security officers think to do this until after they’ve experienced a time drift. Unfortunately, that discovery usually comes while attempting to recover from a security incident. Time drift leads to inaccurate log timestamps, making forensic investigations next to impossible.
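
A lightweight way to spot drift before it bites is to compare the guest’s clock against an external time source. The Python sketch below sends a bare-bones SNTP query (assuming outbound UDP/123 is allowed, and using pool.ntp.org only as an example server) and reports the offset; the five-second alert threshold is arbitrary, and the real fix remains proper host/guest time synchronization.

    import socket, struct, time

    NTP_SERVER = "pool.ntp.org"     # example server; any reachable NTP source works
    NTP_EPOCH_OFFSET = 2208988800   # seconds between 1900-01-01 and the Unix epoch

    def ntp_time(server=NTP_SERVER, timeout=5):
        # Minimal SNTP client: first byte 0x1b = LI 0, version 3, mode 3 (client).
        packet = b"\x1b" + 47 * b"\x00"
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.settimeout(timeout)
            sock.sendto(packet, (server, 123))
            data, _ = sock.recvfrom(48)
        secs, frac = struct.unpack("!II", data[40:48])  # transmit timestamp field
        return secs - NTP_EPOCH_OFFSET + frac / 2**32

    if __name__ == "__main__":
        offset = time.time() - ntp_time()
        print("Clock offset vs NTP: %+.3f seconds" % offset)
        if abs(offset) > 5:  # arbitrary threshold; Kerberos tolerates 300s by default
            print("WARNING: noticeable drift; review host/guest time sync settings")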

Establish policies for managing snapshots and images. Virtualization allows for quick copies of the guest OS, but policies need to be put in place to dictate who can make these copies, whether copies will (or can) be archived, and, if so, where and under what security settings these images will be stored.

“Many times when companies move to virtual servers they don’t take the time to upgrade their security policy for specific items like this, simply because of the time it requires,” Willson said.

Creating and maintaining disaster recovery images. “Spinning up an unpatched, legacy image in the case of disaster recovery can cause more issues than the original problem,” Willson explained.

To fix this, administrators should develop a process for maintaining a patched, “known good” image.

Update disaster recovery policy and procedures to include virtual drives. “Very few organizations take the time to upgrade their various IT policies to accommodate virtualization. This is simply because of the amount of time it takes and the little value they see it bringing to the organization,” Willson said.

But failing to update IT policies to include virtualization, “will only result in the firm incurring more costs and damages whenever a breach or disaster occurs,” Willson added.

Maintaining and monitoring the hypervisor. “All software platforms will offer updates to the hypervisor software, making it necessary that a strategy for this be put in place. If the platform doesn’t provide monitoring features for the hypervisor, a third party application should be used,” Willson said.

Consider disabling clipboard sharing between guest OSs. By default, most VM platforms have copy and paste between guest OSs turned on after initial deployment. In some cases, this is a required feature for specific applications.

“However, it also poses a security threat, providing a direct path of access and the ability to unknowingly [move] malware from one guest OS to another,” Willson said.

Thus, if copy and paste isn’t essential, it should be disabled as a rule (a combined audit sketch covering this and the next item appears below).

Limiting unused virtual hardware. “Most IT professionals understand the need to manage unused hardware (drives, ports, network adapters), as these can be considered soft targets from a security standpoint,” Willson said.

However, he adds, “with virtualization technology we now have to take inventory of virtual hardware (CD drives, virtual NICs, virtual ports). Many of these are created by default upon creating new guest OSs under the [guise] of being a convenience, but these can offer the same danger or point of entry as unused physical hardware can.”

Again, just as with copy and paste, if the virtual hardware isn’t essential, it should be disabled.
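
A single audit pass can cover both this item and the clipboard setting above. The Python sketch below parses .vmx files and flags guests where the commonly published isolation keys for copy/paste are not explicitly disabled, or where floppy, serial, or parallel devices are still present. The datastore glob and the exact key names are assumptions taken from widely circulated VMware hardening guidance; confirm them against your platform’s documentation before acting on the output.

    import glob

    # Assumed datastore layout; key names follow commonly published hardening
    # guidance and should be verified for your platform and version.
    VMX_GLOB = "/vmfs/volumes/*/*/*.vmx"
    WANT_DISABLED = ("isolation.tools.copy.disable", "isolation.tools.paste.disable")
    UNUSED_DEVICES = ("floppy0.present", "serial0.present", "parallel0.present")

    def parse_vmx(path):
        # Read key = "value" pairs into a lowercase dictionary for easy comparison.
        settings = {}
        with open(path, errors="replace") as fh:
            for line in fh:
                if "=" in line:
                    key, _, value = line.partition("=")
                    settings[key.strip().lower()] = value.strip().strip('"').lower()
        return settings

    if __name__ == "__main__":
        for vmx in glob.glob(VMX_GLOB):
            cfg = parse_vmx(vmx)
            for key in WANT_DISABLED:
                if cfg.get(key) != "true":
                    print("%s: %s not TRUE (clipboard path may be open)" % (vmx, key))
            for key in UNUSED_DEVICES:
                if cfg.get(key) == "true":
                    print("%s: %s is TRUE; confirm the device is required" % (vmx, key))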

Source:  csoonline.com

Hackers compromise official PHP website, infect visitors with malware

Friday, October 25th, 2013

Maintainers of the open-source PHP programming language have locked down the php.net website after discovering two of its servers were hacked to host malicious code designed to surreptitiously install malware on visitors’ computers.

The compromise was discovered Thursday morning by Google’s safe browsing service, which helps the Chrome, Firefox, and Safari browsers automatically block sites that serve drive-by exploits. Traces of the malicious JavaScript code served to some php.net visitors were captured and posted to Hacker News here and, in the form of a pcap file, to a Barracuda Networks blog post here. The attacks started Tuesday and lasted through Thursday morning, PHP officials wrote in a statement posted late that evening.

Eventually, the site was moved to a new set of servers, PHP officials wrote in an earlier statement. There’s no evidence that any of the code they maintain has been altered, they added. Encrypted HTTPS access to php.net websites is temporarily unavailable until a new secure sockets layer certificate is issued and installed. The old certificate was revoked out of concern the intruders may have accessed the private encryption key. User passwords will be reset in the coming days. At the time of writing, there was no indication of any further compromise.

“The php.net systems team have audited every server operated by php.net, and have found that two servers were compromised: the server which hosted the www.php.net, static.php.net and git.php.net domains and was previously suspected based on the JavaScript malware, and the server hosting bugs.php.net,” Thursday night’s statement read. “The method by which these servers were compromised is unknown at this time.”

According to a security researcher at Kaspersky Lab, Thursday’s compromise caused some php.net visitors to download “Tepfer,” a trojan spawned by the Magnitude Exploit Kit. At the time of the php.net attacks, the malware was detected by only five of 47 antivirus programs. An analysis of the pcap file suggests the malware attack worked by exploiting a vulnerability in Adobe Flash, although it’s possible that some victims were targeted by attacks that exploited Java, Internet Explorer, or other applications, Martijn Grooten, a security researcher for Virus Bulletin, told Ars.

Grooten said the malicious JavaScript was served from a file known as userprefs.js hosted directly on one of the php.net servers. While the userprefs.js code was served to all visitors, only some of those people received an additional payload that contained malicious iframe tags. The HTML code caused visitors’ browsers to connect to a series of third-party websites and eventually download malicious code. At least some of the sites the malicious iframes were pointing to were UK domains such as nkhere.reviewhdtv.co.uk, which appeared to have their domain name system server settings compromised so they resolved to IP addresses located in Moldova.

“Given what Hacker News reported (a site serving malicious JS to some), this doesn’t look like someone manually changing the file,” Grooten said, calling into question an account php.net officials gave in their initial brief statement posted to the site. The attackers “somehow compromised the Web server. It might be that php.net has yet to discover that (it’s not trivial—some webserver malware runs entirely in memory and hides itself pretty well.)”

Ars has covered several varieties of malware that target webservers and are extremely hard to detect.

In an e-mail, PHP maintainer Adam Harvey said PHP officials first learned of the attacks at 6:15am UTC. By 8:00am UTC, they had provisioned a new server. In the interim, some visitors may have been exposed.

“We have no numbers on the number of visitors affected, due to the transient nature of the malicious JS,” Harvey wrote. “As the news post on php.net said, it was only visible intermittently due to interactions with an rsync job that refreshed the code from the Git repository that houses www.php.net. The investigation is ongoing. Right now we have nothing specific to share, but a full post mortem will be posted on php.net once the dust has settled.”

Source:  arstechnica.com

Top three indicators of compromised web servers

Thursday, October 24th, 2013

You slowly push open your front door, which is strangely unlocked, only to find that your home has been ransacked. A broken window and missing cash: all signs that someone has broken in and you have been robbed.

In the physical world it is very easy to understand what an indicator of compromise means for a robbery: simply all the things that clue you in to the event’s occurrence. In the digital world, however, things are another story.

My area of expertise is breaking into web applications. I’ve spent many years as a penetration tester attempting to gain access to internal networks through web applications connected to the Internet. I developed this expertise because of the prevalence of exploitable vulnerabilities that made it simple to achieve my goal. In a world of phishing and drive-by downloads, the web layer is often a complicated, overlooked compromise domain.

A perimeter web server is a gem of a host for any would-be attacker to control. It often enjoys full Internet connectivity with minimal downtime while also providing an internal connection to the target network. These servers routinely see attacks, heavy user traffic, bad login attempts, and plenty of other noise that allows a real compromise to blend in with “normal” behavior. The nature of many web applications running on these servers is such that encoding, obfuscation, file write operations, and even interaction with the underlying operating system are all natively supported, providing much of the functionality an attacker needs to do their bidding. Perimeter web servers can also be used after a compromise has occurred elsewhere in the network to retain remote access, so that pesky two-factor VPNs can be avoided.

With all the reasons an attacker has to go after a web server, it’s a wonder there isn’t a wealth of information available on detecting a server compromise by way of the application layer. Perhaps the sheer number of web servers, application frameworks, components, and web applications makes it difficult for any analyst to approach the problem with a common set of indicators. While this is certainly no easy task, there are a few common areas that can be evaluated to detect a compromise with a high degree of success.

#1 Web shells

Often the product of vulnerable image uploaders and other poorly controlled file write operations, a web shell is simply a file that has been written to the web server’s file system for the purpose of executing commands. Web shells are most commonly text files with the appropriate extension to allow execution by the underlying application framework, obvious examples being commandshell.php or cmd.aspx.  Viewing the text file generally reveals code that allows an attacker to interact with the underlying operating system via built-in calls such as the ProcessStartInfo() constructor in .NET or the system() call in PHP.  The presence of a web shell on any web server is a clear indicator of compromise in virtually every situation.

Web Shell IOCs (Indicators of Compromise)

  • Scan all files in web root for operating system calls, given the installed application frameworks (a minimal scanning sketch follows this list)
  • Check for the existence of executable files or web application code in upload directories or non-standard locations
  • Parse web server logs to detect commands being passed as GET requests or successive POST requests to suspicious web scripts
  • Flag new processes created by the web server process; after all, when should it ever really need to launch cmd.exe?
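
The sketch below implements the first two checks in the list above in Python: it walks an assumed web root, looks only at script extensions, and flags files containing the operating system calls and obfuscation patterns typically seen in web shells. The root path, extension list, and patterns are starting-point assumptions, and legitimate code that shells out will produce false positives worth reviewing anyway.

    import os, re

    WEB_ROOT = "/var/www"                         # assumed web root
    SCAN_EXTENSIONS = {".php", ".asp", ".aspx", ".jsp"}
    SUSPICIOUS = [
        re.compile(rb"\bsystem\s*\(", re.I),                   # PHP/Perl command exec
        re.compile(rb"shell_exec|passthru|popen", re.I),        # other PHP exec calls
        re.compile(rb"ProcessStartInfo|Process\.Start", re.I),  # .NET process launch
        re.compile(rb"Runtime\.getRuntime\(\)\.exec", re.I),    # Java/JSP exec
        re.compile(rb"eval\s*\(\s*base64_decode", re.I),        # common obfuscation
    ]

    def scan(root=WEB_ROOT):
        hits = []
        for dirpath, _, files in os.walk(root):
            for name in files:
                if os.path.splitext(name)[1].lower() not in SCAN_EXTENSIONS:
                    continue
                path = os.path.join(dirpath, name)
                try:
                    data = open(path, "rb").read()
                except OSError:
                    continue
                for pattern in SUSPICIOUS:
                    if pattern.search(data):
                        hits.append((path, pattern.pattern.decode()))
        return hits

    if __name__ == "__main__":
        for path, pattern in scan():
            print("REVIEW %s: matched %s" % (path, pattern))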

#2 Administrative interfaces

Many web application frameworks and custom web applications have some form of administrative interface. These interfaces often suffer from password issues and other vulnerabilities that allow an attacker to gain access to this component. Once inside, an attacker can utilize all of the built-in functionality to further compromise the host or its users. While each application will have its own unique logging and available functionality, there are some common IOCs that should be investigated.

Admin interface IOCs

  • Unplanned deployment events, such as pushing out a .war file in a Java-based application (a simple watcher sketch follows this list)
  • Modification of user accounts
  • Creation or editing of scheduled tasks or maintenance events
  • Unplanned configuration updates or backup operations
  • Failed or non-standard login events
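
For the first item in that list, a deployment watcher does not need to be elaborate. The Python sketch below snapshots an assumed Java application server deployment directory and alerts when a .war file appears or its hash changes; in practice the alerts would be reconciled against change tickets rather than simply printed. The directory path and polling interval are assumptions.

    import hashlib, os, time

    DEPLOY_DIR = "/opt/appserver/webapps"   # hypothetical deployment directory

    def snapshot(directory=DEPLOY_DIR):
        # Hash every .war file so modified deployments are caught, not just new ones.
        state = {}
        for name in os.listdir(directory):
            if not name.endswith(".war"):
                continue
            path = os.path.join(directory, name)
            with open(path, "rb") as fh:
                state[name] = hashlib.sha256(fh.read()).hexdigest()
        return state

    if __name__ == "__main__":
        baseline = snapshot()
        while True:
            time.sleep(60)
            current = snapshot()
            for name, digest in current.items():
                if name not in baseline:
                    print("ALERT: new deployment %s" % name)
                elif baseline[name] != digest:
                    print("ALERT: modified deployment %s" % name)
            baseline = current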

#3 General attack activity

The typical web hacker will not fire up their favorite commercial security scanner to try to find ways into your web application; they tend to prefer a more manual approach. The ability to quietly test your web application for exploitable vulnerabilities makes this a high-reward, low-risk activity.  During this investigation the intruder will focus on the exploits that lead to their goal of obtaining access. A keen eye can detect some of this activity and isolate it to a source.

General attack IOCs

  • Scan web server logs for HTTP 500 errors or errors handled within the application itself.  Database errors for SQL injection, path errors for file write or read operations, and permission errors are prime candidates to indicate an issue (a log-scanning sketch follows this list)
  • Known sensitive file access via web server process.  Investigate if web configuration files like WEB-INF/web.xml, sensitive operating system files like /etc/passwd, or static location operating system files like C:\WINDOWS\system.ini have been accessed via the web server process.
  • Advanced search engine operators in referrer headers.  It is not common for a web visitor to access your site directly from an inurl:foo ext:bar Google search
  • Large quantities of 404 page-not-found errors with suspicious file names may indicate an attempt to access unlinked areas of an application
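
The sketch below strings several of these checks together against an access log in the Apache/Nginx “combined” format: it tallies HTTP 500 errors and 404s per source address, and prints any request that touches the sensitive files mentioned above or arrives with search-engine operators in the referrer. The log path, the format regex, and the 404 threshold are assumptions to tune for your environment.

    import re
    from collections import Counter

    LOG_FILE = "access.log"   # assumed path; regex targets the "combined" log format
    LOG_LINE = re.compile(
        r'(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "(?P<method>\S+) (?P<path>\S+)[^"]*" '
        r'(?P<status>\d{3}) \S+ "(?P<referrer>[^"]*)"'
    )
    SENSITIVE = ("/etc/passwd", "web-inf/web.xml", "system.ini")
    DORK = re.compile(r"inurl:|intitle:|filetype:|ext:", re.I)

    def scan(logfile=LOG_FILE):
        errors_by_ip, notfound_by_ip = Counter(), Counter()
        with open(logfile, errors="replace") as fh:
            for line in fh:
                m = LOG_LINE.match(line)
                if not m:
                    continue
                ip, path, status, ref = m["ip"], m["path"], m["status"], m["referrer"]
                if status.startswith("5"):
                    errors_by_ip[ip] += 1
                if status == "404":
                    notfound_by_ip[ip] += 1
                if any(s in path.lower() for s in SENSITIVE):
                    print("SENSITIVE PATH %s -> %s" % (ip, path))
                if DORK.search(ref):
                    print("SEARCH OPERATOR IN REFERRER %s -> %s" % (ip, ref))
        for ip, count in errors_by_ip.most_common(5):
            print("%d server errors from %s" % (count, ip))
        for ip, count in notfound_by_ip.most_common(5):
            if count > 100:   # arbitrary threshold for 404 bursts
                print("%d 404s from %s (possible forced browsing)" % (count, ip))

    if __name__ == "__main__":
        scan()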

Web application IOCs still suffer from the same issues as their more traditional counterparts in that the behavior of an attacker must be highly predictable to detect their activity.  If we’re honest with ourselves, an attacker’s ability to avoid detection is limited only by their creativity and skill set.  An advanced attacker could easily avoid creating most, if not all, of the indicators in this article. That said, many attackers are not as advanced as the media makes them out to be; even better, some of them are just plain lazy. Armed with the web-specific IOCs above, the next time you walk up to the unlocked front door of your ransacked web server, you might actually get to see who has their hand in your cookie jar.

Source:  techrepublic.com

Google unveils an anti-DDoS platform for human rights organizations and media, but will it work?

Tuesday, October 22nd, 2013

Project Shield uses company’s infrastructure to absorb attacks

On Monday, Google announced a beta service that will offer DDoS protection to human rights organizations and media, in an effort to reduce the censorship that such attacks cause.

The announcement of Project Shield, the name given to the anti-DDoS platform, came during a presentation in New York at the Conflict in a Connected World summit. The gathering brought together security experts, hacktivists, dissidents, and technologists to explore the nature of conflict and how online tools can be both a source of protection and a source of harm when it comes to expression and information sharing.

“As long as people have expressed ideas, others have tried to silence them. Today one out of every three people lives in a society that is severely censored. Online barriers can include everything from filters that block content to targeted attacks designed to take down websites. For many people, these obstacles are more than an inconvenience — they represent full-scale repression,” the company explained in a blog post.

Project Shield uses Google’s massive infrastructure to absorb DDoS attacks. Enrollment in the service is invite-only at the moment, but it could be expanded considerably in the future. The service is free, but will follow page speed pricing should Google open enrollment and charge for it down the line.

However, while the service is sure to help smaller websites, such as those run by dissidents exposing corrupt regimes or media speaking out against those in power, Google makes no promises.

“No guarantees are made in regards to uptime or protection levels. Google has designed its infrastructure to defend itself from quite large attacks and this initiative is aimed at providing a similar level of protection to third-party websites,” the company explains in a Project Shield outline.

One problem Project Shield may inadvertently create is a change in tactics. If the common forms of DDoS attacks are blocked, then more advanced forms of attack will be used. Such an escalation has already happened for high value targets, such as banks and other financial services websites.

“Using Google’s infrastructure to absorb DDoS attacks is structurally like using a CDN (Content Delivery Network) and has the same pros and cons,” Shuman Ghosemajumder, VP of strategy at Shape Security, told CSO during an interview.

The types of attacks a CDN would solve, he explained, are network-based DoS and DDoS attacks. These are the most common, and the most well-known attack types, as they’ve been around the longest.

In 2000, flood attacks were in the 400Mb/sec range, but today’s attacks regularly exceed 100Gb/sec, according to anti-DDoS vendor Arbor Networks. In 2010, Arbor started to see a trend of attackers advancing their DDoS campaigns by developing new tactics, tools, and targets. The result is a threat that mixes flood, application, and infrastructure vectors in a single, blended attack.

“It is unclear how effective [Project Shield] would be against Application Layer DoS attacks, where web servers are flooded with HTTP requests. These represent more leveraged DoS attacks, requiring less infrastructure on the part of the attacker, but are still fairly simplistic. If the DDoS protection provided operates at the application layer, then it could help,” Ghosemajumder said.

“What it would not protect against is Advanced Denial of Service attacks, where the attacker uses knowledge of the application to directly attack the origin server, databases, and other backend systems which cannot be protected against by a CDN and similar means.”

Google hasn’t directly mentioned the number of sites currently being protected by Project Shield, so there is no way to measure the effectiveness of the program from the outside.

In related news, Google also released a second DDoS-related tool on Monday, made possible by data collected by Arbor Networks. The Digital Attack Map, as the tool is called, is a monitoring system that allows users to see historical DDoS attack trends and connect them to related news events on any given day. The data is also shown live and can be sorted granularly by location, time, and attack type.

Source:  csoonline.com

VMware identifies vulnerabilities for ESX, vCenter, vSphere, issues patches

Friday, October 18th, 2013

VMware today said that its popular virtualization and cloud management products have security vulnerabilities that could lead to denials of service for customers using ESX and ESXi hypervisors and management platforms including vCenter Server Appliance and vSphere Update Manager.

To exploit the vulnerability an attacker would have to intercept and modify management traffic. If successful, the hacker would compromise the hostd-VMDBs, which would lead to a denial of service for parts of the program.

VMware released a series of patches that resolve the issue. More information about the vulnerability and links to download the patches can be found here.

The vulnerability exists in vCenter 5.0 for versions before update 3; and ESX versions 4.0, 4.1 and 5.0 and ESXi versions 4.0 and 4.1, unless they have the latest patches.

Users can also reduce the likelihood of the vulnerability causing a problem by running vSphere components on an isolated management network to ensure that traffic does not get intercepted.

Source:  networkworld.com

Symantec disables 500,000 botnet-infected computers

Tuesday, October 1st, 2013

Symantec has disabled part of one of the world’s largest networks of infected computers.

About 500,000 hijacked computers have been taken out of the 1.9 million strong ZeroAccess botnet, the security company said.

The zombie computers were used for advertising and online currency fraud and to infect other machines.

Security experts warned that any benefits from the takedown might be short-lived.

The cybercriminals behind the network had not yet been identified, said Symantec.

“We’ve taken almost a quarter of the botnet offline,” Symantec security operations manager Orla Cox told the BBC. “That’s taken away a quarter of [the criminals’] earnings.”

The ZeroAccess network is used to generate illegal cash through a type of advertising deception known as “click fraud”.

Communications poisoned

Zombie computers are commanded to download online adverts and generate artificial mouse clicks on the ads to mimic legitimate users and generate payouts from advertisers.

The computers are also used to create an online currency called Bitcoin, which can be used to pay for goods and services.

The ZeroAccess botnet is not controlled by one or two servers, but relies on waves of communications between groups of infected computers to do the bidding of the criminals.

The decentralised nature of the botnet made it difficult to act against, said Symantec.

In July, the company started poisoning the communications between the infected computers, permanently cutting them off from the rest of the hijacked network, said Ms Cox.

The company had set the ball in motion after noticing that a new version of the ZeroAccess software was being distributed through the network.

The updated version of the ZeroAccess Trojan contained modifications that made it more difficult to disrupt communications between peers in the infected network.

Symantec built its own mini-ZeroAccess botnet to study effective ways of taking down the network, and tested different takedown methods for two weeks.

The company studied the botnet and disabled the computers as part of its research operations, which feed into product development, said Ms Cox.

“Hopefully this will help us in the future to build up better protection,” she said.

Internet service providers have been informed which machines were taken out of the botnet in an effort to let the owners of the computers know that their machine was a zombie.

Resilient zombies

Although a quarter of the zombie network has been taken out of action, the upgraded version of the botnet will be more difficult to take down, said Ms Cox.

“These are professional cybercriminals,” she said. “They will likely be looking for ways to get back up to strength.”

In the long term, the zombie network could grow back to its previous size, security experts said.

“Every time a botnet is taken down, but the people who run it are not arrested, there is a chance they can rebuild the botnet,” said Vincent Hanna, a researcher for non-profit anti-spam project Spamhaus.

The remaining resilient part of the network may continue to be used for fraud, and could start spreading the upgraded ZeroAccess Trojan, Mr Hanna warned.

Taking down infected networks is a “thankless task”, according to Sophos, a rival to Symantec.

“It’s a bit like trying to deal with the rabbit problem in Australia – you know you’re unlikely ever to win, but you also know that you have to keep trying, or you will definitely lose,” said Sophos head of technology Paul Ducklin.

Source:  BBC