Archive for September, 2013

Brute-force malware targets email and FTP servers

Monday, September 30th, 2013
A piece of malware designed to launch brute-force password guessing attacks against websites built with popular content management systems like WordPress and Joomla has started being used to also attack email and FTP servers.

The malware is known as Fort Disco and was documented in August by researchers from DDoS mitigation vendor Arbor Networks who estimated that it had infected over 25,000 Windows computers and had been used to guess administrator account passwords on over 6,000 WordPress, Joomla and Datalife Engine websites.

Once it infects a computer, the malware periodically connects to a command and control (C&C) server to retrieve instructions, which usually include a list of thousands of websites to target and a password that should be tried to access their administrator accounts.

The Fort Disco malware seems to be evolving, according to a Swiss security researcher who maintains the Abuse.ch botnet tracking service. “Going down the rabbit hole, I found a sample of this particular malware that was brute-forcing POP3 instead of WordPress credentials,” he said Monday in a blog post.

The Post Office Protocol version 3 (POP3) allows email clients to connect to email servers and retrieve messages from existing accounts.

The C&C server for this particular Fort Disco variant responds with a list of domain names accompanied by their corresponding MX records (mail exchanger records). The MX records specify which servers are handling email service for those particular domains.
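The MX lookup itself is ordinary DNS. As a rough illustration of the mapping described above (this sketch is not taken from the malware or the Arbor/Abuse.ch analyses, and it assumes the third-party dnspython package), a few lines of Python can list the mail servers that an MX query returns for a domain:

```python
# Minimal sketch: list a domain's mail servers via its MX records.
# Assumes the third-party dnspython package (pip install dnspython).
import dns.resolver

def mail_servers(domain: str) -> list[str]:
    """Return the domain's mail exchangers, lowest preference value first."""
    answers = dns.resolver.resolve(domain, "MX")
    records = sorted(answers, key=lambda r: r.preference)
    return [str(r.exchange).rstrip(".") for r in records]

if __name__ == "__main__":
    for host in mail_servers("example.com"):
        print(host)
```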

The C&C server also supplies a list of standard email accounts—usually admin, info and support—for which the malware should try to brute force the password, the Abuse.ch maintainer said.

“While speaking with the guys over at Shadowserver [an organization that tracks botnets], they reported that they have seen this malware family bruteforcing FTP credentials using the same methodology,” he said.

Brute-force password guessing attacks against websites using WordPress and other popular CMSes are relatively common, but they are usually performed using malicious Python or Perl scripts hosted on rogue servers, the researcher said. With this malware, cybercriminals created a way to distribute their attacks across a large number of machines and also attack POP3 and FTP servers, he said.

Source:  pcworld.com

Cisco IOS fixes 10 denial-of-service vulnerabilities

Friday, September 27th, 2013

The vulnerabilities can be exploited by unauthenticated, remote attackers to cause connectivity loss, hangs or reloads

Cisco Systems has patched 10 vulnerabilities that could affect the availability of devices using various versions of its IOS software.

IOS is a multitasking operating system that combines networking and telecommunications functions and is used on many of the company’s networking devices.

All of the patched vulnerabilities can affect a device’s availability if exploited. They affect Cisco IOS implementations of the Network Time Protocol (NTP), the Internet Key Exchange protocol, the Dynamic Host Configuration Protocol (DHCP), the Resource Reservation Protocol (RSVP), the virtual fragmentation reassembly (VFR) feature for IP version 6 (IPv6), the Zone-Based Firewall (ZBFW) component, the T1/E1 driver queue and the Network Address Translation (NAT) function for DNS (Domain Name System) and PPTP (Point-to-Point Tunneling Protocol).

These vulnerabilities can be exploited by remote, unauthenticated attackers by sending specifically crafted packets over the network to IOS devices that have the affected features enabled.

Depending on the targeted vulnerability, attackers can cause the affected devices to hang, reload, lose connection, lose their ability to route connections or trigger other types of denial-of-service (DoS) conditions.

Workarounds for the NTP, ZBFW, T1/E1 driver queue and RSVP flaws are available and are described in the corresponding security advisories released by Cisco this week. To mitigate the other vulnerabilities, users will have to install patched versions of the IOS software, depending on which versions their devices already use.

“The effectiveness of any workaround or fix depends on specific customer situations, such as product mix, network topology, traffic behavior, and organizational mission,” Cisco said. “Because of the variety of affected products and releases, customers should consult their service providers or support organizations to ensure that any applied workaround or fix is the most appropriate in the intended network before it is deployed.”

The company is not aware of any malicious exploitation or detailed public disclosure of these vulnerabilities. They were discovered during internal security reviews or while troubleshooting customer service reports.

Source:  computerworld.com

Weighing the IT implications of implementing SDNs

Friday, September 27th, 2013

Software-defined anything has myriad issues for data centers to consider before implementation

Software Defined Networks should make IT execs think about a lot of key factors before implementation.

Issues such as technology maturity, cost efficiencies, security implications, policy establishment and enforcement, interoperability and operational change weigh heavily on IT departments considering software-defined data centers. But perhaps the biggest consideration in software-defining your IT environment is, why would you do it?

“We have to present a pretty convincing story of, why do you want to do this in the first place?” said Ron Sackman, chief network architect at Boeing, at the recent Software Defined Data Center Symposium in Santa Clara. “If it ain’t broke, don’t fix it. Prove to me there’s a reason we should go do this, particularly if we already own all of the equipment and packets are flowing. We would need a compelling use case for it.”

And if that compelling use case is established, the next task is to get everyone on board and comfortable with the notion of a software-defined IT environment.

“The willingness to accept abstraction is kind of a trade-off between control of people and hardware vs. control of software,” says Andy Brown, Group CTO at UBS, speaking on the same SDDC Symposium panel. “Most operations people will tell you they don’t trust software. So one of the things you have to do is win enough trust to get them to be able to adopt.”

Trust might start with assuring the IT department and its users that a software-defined network or data center is secure, at least as secure as the environment it is replacing or founded on. Boeing is looking at SDN from a security perspective trying to determine if it’s something it can objectively recommend to its internal users.

“If you look at it from a security perspective, the best security for a network environment is a good design of the network itself,” Sackman says. “Things like Layer 2 and Layer 3 VPNs backstop your network security, and they have not historically been a big cyberattack surface. So my concern is, are the capex and opex savings going to justify the risk that you’re taking by opening up a bigger cyberattack surface, something that hasn’t been a problem to this point?”

Another concern Sackman has is in the actual software development itself, especially if a significant amount of open source is used.

“What sort of assurance does someone have – particularly if this is open source software – that the software you’re integrating into your solution is going to be secure,” he asks. “How do you scan that? There’s a big development time security vector that doesn’t really exist at this point.”

Policy might be the key to ensuring that security and other operational aspects in place pre-SDN/SDDC are not disrupted post-implementation. Policy-based orchestration, automation and operational execution are touted among SDN’s chief benefits.

“I believe that policy will become the most important factor in the implementation of a software-defined data center because if you build it without policy, you’re pretty much giving up on the configuration strategy, the security strategy, the risk management strategy, that have served us so well in the siloed world of the last 20 years,” UBS’ Brown says.

Software-defined data centers also promise to break down those silos through cross-function orchestration of the compute, storage, network and application elements in an IT shop. But that’s easier said than done, Brown notes: interoperability is not a guarantee in the software-defined world.

“Information protection and data obviously have to interoperate extremely carefully,” he says. “The success of software defined workload management – aka, virtualization and cloud – in a way has created a set of children, not all of which can necessarily be implemented in parallel, but all of which are required to get to the end state of the software defined data center.

“Now when you think of all the other software abstraction we’re trying to introduce in parallel, someone’s going to cry uncle. So all of these things need to interoperate with each other.”

So are the purported capital and operational cost savings of implementing SDN/SDDCs worth the undertaking? Do those cost savings even exist?

Brown believes they exist in some areas and not in others.

“There’s a huge amount of cost take-out in software-defined storage that isn’t necessarily there in SDN right now,” he said. “And the reason it’s not there in SDN is because people aren’t ripping out the expensive under network and replacing it with SDN. Software-defined storage probably has more legs than SDN because of the cost pressure. We’ve got massive cost targets by the end of 2015 and if I were backing horses, my favorite horse would be software-defined storage rather than software-defined networks.”

Sackman believes the overall savings are there in SDN/SDDCs but again, the security uncertainty may make those benefits not currently worth the risk.

“The capex and opex savings are very compelling, and there are particular use cases specifically for SDN that I think would be great if we could solve specific pain points and problems that we’re seeing,” he says. “But I think, in general, security is a big concern, particularly if you think about competitors co-existing as tenants in the same data center — if someone develops code that’s going to poke a hole in the L2 VPN in that data center and export data from Coke to Pepsi.

“We just won a proposal for a security operations center for a foreign government, and I’m thinking can we offer a better price point on our next proposal if we offer an SDN switch solution vs. a vendor switch solution? A few things would have to happen before we feel comfortable doing that. I’d want to hear a compelling story around maturity before we would propose it.”

Source: networkworld.com

Research shows IT blocking applications based on popularity not risk

Thursday, September 26th, 2013

Tactic leads to less popular, but still risky cloud-based apps freely accessing networks

A new study, based on collective data taken from 3 million users across more than 100 companies, shows that cloud-based apps and services are being blocked based on popularity rather than risk.

A new study from Skyhigh Networks, a firm that focuses on cloud access security, shows that most of the cloud-based blocking within IT focuses on popular, well-known apps rather than on risk. The problem with this method of security is that cloud-based apps that pose little to no risk are often prohibited on the network, while those that actually do pose a risk are left alone, freely available to anyone who knows of them.

Moreover, the data collected from some 3 million users across 100 organizations shows that IT seriously underestimates the number of cloud-based apps and services running on their network. For example, on average there are about 545 cloud services in use by a given organization, yet if asked IT will cite a number that’s only a fraction of that.

When it comes to the type of cloud-based apps and services blocked by IT, the primary focus seems to be on preventing productivity loss rather than risk, and frequently blocked access centers on name recognition. For example, Netflix is the number one blocked app overall, and services such as iCloud, Google Drive, Dropbox, SourceForge, WebEx, Bit.ly, StumbleUpon, and Skype are commonly flagged too.

However, while those services do have some risk associated with them, they are also top brands in their respective verticals. Yet, while they’re flagged and prohibited on many networks, services such as SendSpace, Codehaus, FileFactory, authorSTREAM, MovShare, and WeTransfer are unrestricted, even though they actually pose more risk than the commonly blocked apps.

Digging deeper, the study shows that in the financial services sector, iCloud and Google Drive are commonly blocked, yet SendSpace and CloudApp, which are direct alternatives, are rarely, if ever, filtered. In healthcare, Dropbox and Memeo (an up-and-coming file sharing service) are blocked, which is expected. Yet, once again, healthcare IT allows services such as WeTransfer, 4shared, and Hostingbulk on the network.

In the high tech sector, Skype, Google Drive, and Dropbox are commonly expunged from network traffic, yet RapidGator, ZippyShare, and SkyPath are fully available. In manufacturing, where WatchDox, Force.com, and Box are regularly blocked, CloudApp, SockShare, and RapidGator are fully used by employees seeking alternatives.

In a statement, Rajiv Gupta, founder and CEO at Skyhigh Networks, said that the report shows that “there are no consistent policies in place to manage the security, compliance, governance, and legal risks of cloud services.”

Separately, in comments to CSO, Gupta agreed that one of the main causes for this large disconnect in content filtering is a lack of understanding when it comes to the risks behind most cloud-based apps and services (outside of the top brands), and that many commercial content filtering solutions simply do not cover the alternatives online, or as he put it, “they’re not cloud aware.”

This, if anything, proves that risk management can’t be confined within a checkbox and a bland category within a firewall’s content filtering rules.

“Cloud is very much the wild, wild west. Taming the cloud today largely is a whack-a-mole exercise…with your bare hands,” Gupta told us.

Source:  csoonline.com

Researchers create nearly undetectable hardware backdoor

Thursday, September 26th, 2013

University of Massachusetts researchers have found a way to make hardware backdoors virtually undetectable.

With recent NSA leaks and surveillance tactics being uncovered, researchers have redoubled their scrutiny of things like network protocols, software programs, encryption methods, and software hacks. Most problems out there are caused by software issues, either from bugs or malware. But one group of researchers at the University of Massachusetts decided to investigate the hardware side, and they found a new way to hack a computer processor at such a low level that it’s almost impossible to detect.

What are hardware backdoors?

Hardware backdoors aren’t exactly new. We’ve known for a while that they are possible, and we have examples of them in the wild. They are rare and require a very precise set of circumstances to implement, which is probably why they aren’t talked about as often as software or network attacks. Even though hardware backdoors are rare and notoriously difficult to pull off, they are a concern because the damage they could do could be much greater than that of software-based threats. Stated simply, a hardware backdoor is a malicious piece of code placed in hardware so that it cannot be removed and is very hard to detect. This usually means the non-volatile memory in chips like the BIOS on a PC, or the firmware of a router or other network device.

A hardware backdoor is very dangerous because it’s so hard to detect, and because it typically has full access to the device it runs on, regardless of any password or access control system. But how realistic are these threats? Last year, a security consultant showcased a fully-functioning hardware backdoor. All that’s required to implement that particular backdoor is flashing a BIOS with a malicious piece of code. This type of modification is one reason why Microsoft implemented Secure Boot in Windows 8, to ensure the booting process in a PC is trusted from the firmware all the way to the OS. Of course, that doesn’t protect you from other chips on the motherboard being modified, or the firmware in your router, printer, smartphone, and so on.

New research

The University of Massachusetts researchers found an even more clever way to implement a hardware backdoor. Companies have taken various measures for years now to ensure their chips aren’t modified without their knowledge. After all, most of our modern electronics are manufactured in a number of foreign factories. Visual inspections are commonly done, along with tests of the firmware code, to ensure nothing was changed. But in this latest hack, even those measures may not be enough. The method the researchers used is ingenious and quite complex.

The researchers used a technique called transistor doping. Basically, a transistor is made of a crystalline structure that provides the functionality needed to amplify or switch a current passing through it. Doping a transistor means altering that crystalline structure to add impurities and change the way it behaves. The Intel Random Number Generator (RNG) is a basic building block of any encryption system built on the chip, since it provides the important starting numbers with which to create encryption keys. By doping the RNG’s transistors, the researchers can make the chip behave in a slightly different way. In this case, they simply changed the transistors so that one particular number became a constant instead of a variable. That means a number that was supposed to be random and impossible to predict is now always the same.

By introducing these changes at the hardware level, the attack weakens the RNG, and in turn weakens any encryption built on keys created by that system, such as SSL connections, encrypted files, and so on. Intel chips contain self-tests that are supposed to catch hardware modifications, but the researchers claim that this change sits at such a low level in the hardware that it goes undetected. Fixing the flaw isn’t easy either, even if you could detect it. The RNG is part of the security process in the CPU and, for safety, is isolated from the rest of the system, which means there is nothing a user or even an administrator can do to correct the problem.
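To see why a “stuck” generator is so damaging, consider a toy simulation (an illustration only, not Intel’s RNG design or the researchers’ actual modification): if most bits of each output word are pinned to a constant, the space of keys the generator can ever produce collapses to something an attacker can exhaustively search.

```python
# Toy model only -- not Intel's RNG or the dopant-level attack itself.
# Pin all but 2 bits of each 32-bit "random" word and count how few
# distinct 128-bit keys the weakened generator can actually produce.
import secrets

def weakened_word() -> int:
    # Pretend the hardware change fixes all but the lowest 2 bits.
    return 0xDEADBEE0 | secrets.randbits(2)

def weakened_key() -> bytes:
    # A "128-bit" key built from four weakened 32-bit words.
    return b"".join(weakened_word().to_bytes(4, "big") for _ in range(4))

keys = {weakened_key() for _ in range(100_000)}
# Only 8 bits actually vary, so at most 256 keys exist instead of 2**128.
print(f"distinct keys seen in 100,000 draws: {len(keys)} (at most 256 possible)")
```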

There’s no sign that this particular hardware backdoor is being used in the wild, but if this type of change is possible, then it’s likely that groups with a lot of technical expertise could find similar methods. This may lend more credence to moves by various countries to ban certain parts from some regions of the world. This summer Lenovo saw its systems banned from defense networks in many countries after allegations that China may have added vulnerabilities to the hardware of some of its systems. Of course, with almost every major manufacturer having its electronic parts made in China, that isn’t much of a relief. It’s quite likely that as hardware hacking becomes more cost-effective and popular, we will see more of these types of low-level hacks, which could lead to new types of attacks and new types of defense systems.

Source:  techrepublic.com

AT&T announces plans to use 700MHz channels for LTE Broadcast

Thursday, September 26th, 2013

Yesterday at Goldman Sachs’ Communacopia Conference in New York, AT&T CEO Randall Stephenson announced that his company would be allocating the 700MHz Lower D and E blocks of spectrum that it acquired from Qualcomm in 2011 to build out its LTE Broadcast service. Fierce Wireless reported from the event and noted that this spectrum had been destined for additional data capacity. In a recent FCC filing, AT&T put off deploying LTE in this spectrum due to administrative and technical delays caused by the 3rd Generation Partnership Project’s continued evaluation of carrier aggregation in LTE Advanced.

No timeline was given for deploying LTE Broadcast, but Stephenson stressed the importance of video to AT&T’s strategy over the next few years.

The aptly named LTE Broadcast is an adaptation of the LTE technology we know and love, but in just one direction. In the case of AT&T’s plans, either 6MHz or 12MHz will be available for data transmission, depending on the market. In 6MHz markets there would be some bandwidth limitations, but plenty to distribute a live television event like the Super Bowl or March Madness. Vitally, since the content is broadcast indiscriminately to any handset capable of receiving it, there’s no upper limit on the number of recipients of the data. So, instead of having a wireless data network crumble under the weight of thousands of users watching March Madness on their phones and devices at one cell site, the data network remains intact, and everyone gets to watch the games.

Verizon Wireless has a similar proposal in the works, with vague hopes that they’ll be able to be in position to leverage their ongoing relationship with the NFL for the 2014 Super Bowl. Neither Verizon Wireless nor AT&T is hurting for spectrum right now, so it’s nice to see them putting it to good use.

Source:  arstechnica.com

Stop securing your virtualized servers like another laptop or PC

Tuesday, September 24th, 2013
Many IT managers don’t take the additional steps to secure their virtual servers, but rather leave them vulnerable to attacks with only antivirus software and data loss prevention packages. Here are the most common mistakes made and how to prevent them.

Most virtual environments have the same security requirements as the physical world with additions defined by the use of virtual networking and shared storage. However, many IT managers don’t take the additional steps to secure their virtual servers, but rather leave them vulnerable to attacks with only antivirus software and data loss prevention packages.

We asked two security pros a couple of questions specific to ensuring security on virtual servers. Here’s what they said:

TechRepublic: What mistakes do IT managers make most often when securing their virtual servers?

Answered by Min Wang, CEO and founder of AIP US

Wang: Most virtual environments have the same security requirements as the physical world with additions defined by the use of virtual networking and shared storage. However, many IT managers don’t take the additional steps to secure their virtual servers, but rather leave them vulnerable to attacks with only antivirus software and data loss prevention packages.

Here are some more specific mistakes IT managers make regularly:

1.  IT managers rely too much on the hypervisor layer to provide security. Instead, they should take a 360-degree approach rather than looking at one section or layer.

2.  When transitioning to virtual servers, too often they misconfigure their servers and the underlying network. This causes things to get even more out of whack when new servers are created and new apps are added.

3.  There’s increased complexity, and many IT managers don’t fully understand how the components interwork and how to properly secure the entire system, not just parts of it.

TechRepublic: Can you provide some tips on what IT managers can do moving forward to ensure their servers remain hack free?

Answered by Praveen Bahethi, CTO of Shilpa Systems

Bahethi:

1.  Logins to the Xen, Hyper-V, KVM, and ESXi servers, as well as to the VMs created within them, should be mapped to a central database such as Active Directory to ensure that all logins are logged. These login logs should be reviewed for failures on a regular basis, as the organization’s security policy defines (a minimal review sketch follows this list). By using a centralized login service, the administrative staff can quickly and easily remove privileges to all VMs and the servers by disabling the central account. Password policies applied on the centralized login servers can then be enforced across the virtualized environment.

2.  The virtual host servers should have a separate physical network interface controller (NIC) for network console and management operations that is tied into a separate out-of-band network solution or maintained via VLAN separation. Physical access to the servers and their storage should be controlled and monitored. All patches and updates being applied should be verified to come from the software vendors and to have been properly vetted with checksums.

3.  Within the virtualized environment, steps should be taken to ensure that the VMs are only able to see traffic destined for them, by mapping them to the proper VLAN and vSwitch. The VMs should not be able to modify their MAC addresses or have their virtual NICs snoop the wire in promiscuous mode. The VMs themselves should not be able to perform copy/paste operations via the console, no extraneous hardware should be associated with them, and VM-to-VM communication outside of normal network operations should be disabled.

4.  The VMs must have proper firewall, anti-malware, anti-virus, and URL-filtering protections in place so that threats in outside data they access can be mitigated. Security software that works with the hosts through plug-ins enabling features such as firewalls and intrusion prevention should be added. As with any proactive security measure, the review of logs and the policies for handling events need to be clearly defined.

5.  The shared storage should require unique login credentials for each virtual server, and the storage network should be segregated from the normal application data and out-of-band console traffic. This segregation can be done using VLANs or completely separate physical network connections.

6.  The upstream network should allow only the traffic required for the hosts and their VMs to pass their switch ports, dropping all other extraneous traffic. Layer 2 and Layer 3 configuration should be in place to guard against DHCP, Spanning Tree, and routing protocol attacks. Some vendors provide additional features in their third-party vSwitches that can also be used to mitigate attacks against a VM server.
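As a minimal sketch of the routine failed-login review called for in tip 1 (an illustration only: the log location and message format below are assumptions, not part of Bahethi’s guidance), a short script can summarize authentication failures per account from a syslog-style log:

```python
# Sketch: count failed logins per account from a syslog-style auth log.
# The path and message format are illustrative assumptions.
import re
from collections import Counter

LOG_PATH = "/var/log/auth.log"  # hypothetical location
FAILED = re.compile(r"Failed password for (?:invalid user )?(\S+)")

def failed_logins_by_account(path: str = LOG_PATH) -> Counter:
    counts: Counter = Counter()
    with open(path, encoding="utf-8", errors="replace") as log:
        for line in log:
            match = FAILED.search(line)
            if match:
                counts[match.group(1)] += 1
    return counts

if __name__ == "__main__":
    for account, failures in failed_logins_by_account().most_common(10):
        print(f"{account}: {failures} failed logins")
```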

Source:  techrepublic.com

Schools’ use of cloud services puts student privacy at risk

Tuesday, September 24th, 2013

Vendors should promise not to use targeted advertising and behavioral profiling, SafeGov said

Schools that compel students to use commercial cloud services for email and documents are putting privacy at risk, says a campaign group calling for strict controls on the use of such services in education.

A core problem is that cloud providers force schools to accept policies that authorize user profiling and online behavioral advertising. Some cloud privacy policies stipulate that students are also bound by these policies, even when they have not had the opportunity to grant or withhold their consent, said privacy campaign group SafeGov.org in a report released on Monday.

There is also the risk of commercial data mining. “When school cloud services derive from ad-supported consumer services that rely on powerful user profiling and tracking algorithms, it may be technically difficult for the cloud provider to turn off these functions even when ads are not being served,” the report said.

Furthermore, when providers fail to create interfaces that distinguish between ad-free and ad-supported versions, students may be lured from ad-free services intended for school use to consumer ad-driven services that engage in highly intrusive processing of personal information, according to the report. This could be the case with email, online video, networking and basic search.

Also, contracts used by cloud providers don’t guarantee ad-free services because they are ambiguously worded and include the option to serve ads, the report said.

SafeGov has sought support from European Data Protection Authorities (DPAs), some of which endorsed the use of codes of conduct establishing rules to which schools and cloud providers could voluntarily agree. Such codes should include a binding pledge to ban targeted advertising in schools as well as the processing or secondary use of data for advertising purposes, SafeGov recommended.

“We think any provider of cloud computing services to schools (Google Apps and Microsoft 365 included) should sign up to follow the Codes of Conduct outlined in the report,” said a SafeGov spokeswoman in an email.

Even when ad serving is disabled the privacy of students may still be jeopardized, the report said.

For example, while Google’s policy for Google Apps for Education states that no ads will be shown to enrolled students, there could still be a privacy problem, according to SafeGov.

“Based on our research, school and government customers of Google Apps are encouraged to add ‘non-core’ (ad-based) Google services such as search or YouTube, to the Google Apps for Education interface, which takes students from a purportedly ad-free environment to an ad-driven one,” the spokeswoman said.

“In at least one case we know of, it also requires the school to force students to accept the privacy policy before being able to continue using their accounts,” she said, adding that when this is done the user can click through to the ad-supported service without a warning that they will be profiled and tracked.

This issue was flagged by the French and Swedish DPAs, the spokeswoman said.

In September, the Swedish DPA ordered a school to stop using Google Apps or sign a reworked agreement with Google because the current terms of use lacked specifics on how personal data is being handled and didn’t comply with local data laws.

However, there are some initiatives that are encouraging, the spokeswoman said.

Microsoft’s Bing for Schools initiative, an ad-free, no cost version of its Bing search engine that can be used in public and private schools across the U.S., is one of them, she said. “This is one of the things SafeGov is trying to accomplish with the Codes of Conduct — taking out ad-serving features completely when providing cloud services in schools. This would remove the ad-profiling risk for students,” she said.

Microsoft and Google did not respond to a request for comment.

Source:  computerworld.com

US FDA to regulate only medical apps that could be risky if malfunctioning

Tuesday, September 24th, 2013

The FDA said the mobile platform brings its own unique risks when used for medical applications

The U.S. Food and Drug Administration intends to regulate only mobile apps that are medical devices and could pose a risk to a patient’s safety if they do not function as intended.

Some of the risks could be unique to the choice of the mobile platform. The interpretation of radiological images on a mobile device could, for example, be adversely affected by the smaller screen size, lower contrast ratio and uncontrolled ambient light of the mobile platform, the agency said in its recommendations released Monday. The FDA said it intends to take the “risks into account in assessing the appropriate regulatory oversight for these products.”

The nonbinding recommendations to developers of mobile medical apps only reflect the FDA’s current thinking on the topic, the agency said. The guidance document is being issued to clarify the small group of mobile apps that the FDA aims to scrutinize, it added.

The recommendations would exempt from FDA scrutiny the majority of mobile apps that could be classified as medical devices but pose minimal risk to consumers, the agency said.

The FDA said it is focusing its oversight on mobile medical apps that are to be used as accessories to regulated medical devices or transform a mobile platform into a regulated medical device such as an electrocardiography machine.

“Mobile medical apps that undergo FDA review will be assessed using the same regulatory standards and risk-based approach that the agency applies to other medical devices,” the agency said.

It also clarified that its oversight would be platform neutral. Mobile apps to analyze and interpret EKG waveforms to detect heart function irregularities would be considered similar to software running on a desktop computer that serves the same function, which is already regulated.

“FDA’s oversight approach to mobile apps is focused on their functionality, just as we focus on the functionality of conventional devices. Our oversight is not determined by the platform,” the agency said in its recommendations.

The FDA has cleared about 100 mobile medical applications over the past decade, of which about 40 were cleared in the past two years. The draft of the guidance was first issued in 2011.

Source:  computerworld.com

New OS X Trojan found and blocked by Apple’s XProtect

Tuesday, September 24th, 2013

A new command-and-control Trojan for OS X appears to be associated with the Syrian Electronic Army.

Security company Intego recently found a new malware package for OS X, called OSX/Leverage.A, which appears to be yet another targeted command-and-control Trojan horse, this time with apparent associations with the Syrian Electronic Army; however, Apple has blocked its ability to run with an XProtect update only days after its discovery.

The Trojan horse is distributed as an application disguised as a picture of two people kissing, presumably a scene from the television show “Leverage,” hence the name of the Trojan.

When the Trojan’s installer is opened, it will open an embedded version of the image in Apple’s Preview program, in an attempt to maintain the idea that it is just a picture, while the program installs the true Trojan in the background. In addition, the Trojan is built with a couple of code modifications that prevent it from showing up as a running application in the user’s Dock or in the Command-Tab application switch list.

The Trojan itself will be a program called UserEvent.app and will be placed in the /Users/Shared/ directory. It will then install a launch agent called UserEvent.System.plist in the current user’s LaunchAgents directory, which is used to keep the program running whenever the user is logged in. These two locations do not require authentication for any user to access, so the Trojan can place these files without prompting for an admin username and password.

Once installed, the running Trojan will, among standard command-and-control activity like grabbing personal information, attempt to download an image associating the nefarious activity with the Syrian Electronic Army, a relatively new hacking group associated with the Assad regime in Syria. When contacted by Mashable, the group claimed that it is not associated with the Trojan.

While this new malware is out there and has affected a few people, it is not a major threat at this time, one reason being that the command and control servers it connects to appear to be offline. In addition, though for now the exact mode of distribution is unknown, if done through a Web browser or Apple’s Mail e-mail client, then Gatekeeper in OS X will issue a warning about the program not being a signed package. Additionally, Apple has recently updated its XProtect anti-malware scanner to specifically detect and quarantine this malware.

Beyond these security measures, you can take some additional steps to help secure your system from similar Trojans. Since most malware attempts in OS X have used various launch agent scripts to keep themselves running, you can use Apple’s Folder Actions feature to set up a launch agent monitor that will notify you anytime such scripts are set up on the system.
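Folder Actions is the route described above; as an alternative rough sketch (an illustration, not the Folder Actions mechanism itself), a small script could simply poll the per-user LaunchAgents folder and report any newly added property list:

```python
# Sketch: poll ~/Library/LaunchAgents and report newly added .plist files.
# Illustrative only -- not the Folder Actions approach described above.
import os
import time

WATCH_DIR = os.path.expanduser("~/Library/LaunchAgents")

def snapshot() -> set[str]:
    try:
        return {f for f in os.listdir(WATCH_DIR) if f.endswith(".plist")}
    except FileNotFoundError:
        return set()

def watch(interval: float = 10.0) -> None:
    known = snapshot()
    while True:
        time.sleep(interval)
        current = snapshot()
        for added in sorted(current - known):
            print(f"New launch agent detected: {added}")
        known = current

if __name__ == "__main__":
    watch()
```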

Source: CNET

SaaS governance: Five key questions

Monday, September 23rd, 2013

Increasingly savvy customers are sharpening their requirements for SaaS. Providers must be able to answer these key questions for potential clients.

IT governance is linked to security and data protection standards, but it is more than that. Governance includes aligning IT with business strategies, optimizing operational and system workflows and processes, and inserting an IT control structure for IT assets that meets the needs of auditors and regulators.

As more companies move to cloud-based solutions like SaaS (software as a service), regulators and auditors are also sharpening their requirements. “What we are seeing is an increased number of our corporate clients asking us for our own IT audits, which they, in turn, insert into their enterprise audit papers that they show auditors and regulators,” said one SaaS manager.

This places more pressure on SaaS providers, which still do not consistently perform audits, and often will admit that when they do, it is usually at the request of a prospect before the prospect signs with them.

Should enterprise IT and its regulators be concerned? The answer is fast changing to “yes.”

This means that now is the time for SaaS providers to get their governance in order.

Here are five questions that SaaS providers can soon expect to hear from clients and prospects:

#1 Can you provide me with an IT security audit?

Clients and prospects will want to know what your physical facility and IT security audit results have been, in addition to the kinds of security measures that you employ on a day to day basis. They will expect that your security measures are best-in-class, and that you also have data on internal and external penetration testing.

#2 What are your data practices?

How often do you back up data? Where do you store it? If you are using multi-tenant systems on a single server, how can a client be assured that its data (and systems) remain segregated from the systems and data of others that are also running on the same server? Can a client authorize its own security permissions for its data, down to the level of a single individual within the company or at a business partner’s?

#3 How will you protect my intellectual property?

You will get clients that will want to develop custom applications or reports for their business alone. In some cases, the client might even develop it on your cloud. In other cases, the client might retain your services to develop a specification defined by the client into a finished application. The question is this: whose property does the custom application become, and who has the right to distribute it?

One SaaS provider takes the position that all custom reports it delivers (even if individual clients pay for their development) belong to the provider—and that the provider is free to repurpose the reports for others. Another SaaS provider obtains up-front funding from the client for a custom application, and then reimburses the client for the initial funding as the provider sells the solution to other clients. In both cases, the intellectual property rights are lost to the client—but there are some clients that won’t accept these conditions.

If you are a SaaS provider, it’s important to understand the industry verticals you serve and how individuals in these industry verticals feel about intellectual property.

#4 What are your standards of performance?

I know of only one SaaS provider that actually penalizes itself, in the form of “credits” toward the next month’s bill, if it fails to meet an uptime SLA (service level agreement). The majority of SaaS companies I have spoken with have internal SLAs, but they don’t issue them to their customers. As risk management assumes a larger role in IT governance, corporate IT managers are going to start asking their SaaS partners for SLAs with “teeth” in them, meaning SLAs that include financial penalties.
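As a purely hypothetical sketch of how such a credit schedule might be computed (the uptime thresholds and credit percentages below are invented for illustration and are not taken from any provider’s SLA):

```python
# Hypothetical SLA credit calculation; real agreements define their own
# uptime thresholds and credit tiers.
def monthly_uptime_pct(downtime_minutes: float, days_in_month: int = 30) -> float:
    total_minutes = days_in_month * 24 * 60
    return 100.0 * (total_minutes - downtime_minutes) / total_minutes

def credit_pct(uptime_pct: float) -> float:
    if uptime_pct >= 99.9:
        return 0.0   # SLA met, no credit
    if uptime_pct >= 99.0:
        return 10.0  # hypothetical tier
    return 25.0      # hypothetical tier

uptime = monthly_uptime_pct(downtime_minutes=90)
print(f"uptime {uptime:.3f}% -> credit of {credit_pct(uptime):.0f}% toward next month's bill")
```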

#5 What kind of disaster recovery and business continuation plan do you have?

The recent spate of global natural disasters has nearly every company and their regulators and auditors focused on DR and BC. They will expect their SaaS providers to do the same. SaaS providers that own and control their own data centers are in a strong position. SaaS providers that contract with third-party data centers (where the end client has no direct relationship with the third-party data center) are riskier. For instance, whose liability is it if the third-party data center fails? Do you as a SaaS provider indemnify your end clients? It’s an important question to know the answer to—because your clients are going to be asking it.

Source:  techrepublic.com

NSA ‘altered random-number generator’

Thursday, September 12th, 2013

US intelligence agency the NSA subverted a standards process to be able to break encryption more easily, according to leaked documents.

It had written a flaw into a random-number generator that would allow the agency to predict the outcome of the algorithm, the New York Times reported.

The agency had used its influence at a standards body to insert the backdoor, said the report.

The NSA had made no comment at the time of writing.

According to the report, based on a memo leaked by former NSA contractor Edward Snowden, the agency had gained sole control of the authorship of the Dual_EC_DRBG algorithm and pushed for its adoption by the National Institute of Standards and Technology (Nist) into a 2006 US government standard.

The NSA had wanted to be able to predict numbers generated by certain implementations of the algorithm, to crack technologies using the specification, said the report.
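A toy example (using Python’s non-cryptographic Mersenne Twister, not Dual_EC_DRBG) shows why a predictable generator is so valuable to an eavesdropper: anyone who can recover the generator’s internal state can reproduce every value it will emit from that point on, including values later fed into key generation.

```python
# Toy illustration with Python's non-cryptographic Mersenne Twister (not
# Dual_EC_DRBG): knowing a generator's internal state means knowing its future.
import random

victim = random.Random(2013)
leaked_state = victim.getstate()   # stands in for a recovered internal state

attacker = random.Random()
attacker.setstate(leaked_state)

future_from_victim = [victim.randrange(2**32) for _ in range(5)]
predicted = [attacker.randrange(2**32) for _ in range(5)]
assert predicted == future_from_victim
print("attacker predicted:", predicted)
```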

Nist standards are developed to secure US government systems and used globally.

The standards body said that its processes were open, and that it “would not deliberately weaken a cryptographic standard”.

“Recent news reports have questioned the cryptographic standards development process at Nist,” the body said in a statement.

“We want to assure the IT cybersecurity community that the transparent, public process used to rigorously vet our standards is still in place.”

Impact

It was unclear which software and hardware had been weakened by including the algorithm, according to software developers and cryptographers.

For example, Microsoft had used the algorithm in software from Vista onwards, but had not enabled it by default, users on the Cryptography Stack Exchange pointed out.

The algorithm has been included in the code libraries and software of major vendors and industry bodies, including Microsoft, Cisco Systems, RSA, Juniper, RIM for Blackberry, OpenSSL, McAfee, Samsung, Symantec, and Thales, according to Nist documentation.

Whether the software of these organisations was secure depended on how the algorithm had been used, Cambridge University cryptographic expert Richard Clayton told the BBC.

“There’s no easy way of saying who’s using [the algorithm], and how,” said Mr Clayton.

Moreover, the algorithm had been shown to be insecure in 2007 by Microsoft cryptographers Niels Ferguson and Dan Shumow, added Mr Clayton.

“Because the vulnerability was found some time ago, I’m not sure if anybody is using it,” he said.

A more profound problem was the possible erosion of trust in Nist for the development of future standards, Mr Clayton added.

Source:  BBC

Snowden leaks: US and UK ‘crack online encryption’

Friday, September 6th, 2013

US and UK intelligence have reportedly cracked the encryption codes protecting the emails, banking and medical records of hundreds of millions of people.

Disclosures by leaker Edward Snowden allege the US National Security Agency (NSA) and the UK’s GCHQ successfully decoded key online security protocols.

They suggest some internet companies provided the agencies backdoor access to their security systems.

The NSA is said to spend $250m (£160m) a year on the top-secret operation.

It is codenamed Bullrun, after an American Civil War battle, according to the documents published by the Guardian in conjunction with the New York Times and ProPublica.

The British counterpart scheme run by GCHQ is called Edgehill, after the first major engagement of the English civil war, say the documents.

‘Behind-the-scenes persuasion’

The reports say the UK and US intelligence agencies are focusing on the encryption used in 4G smartphones, email, online shopping and remote business communication networks.

The encryption techniques are used by internet services such as Google, Facebook and Yahoo.

Under Bullrun, it is said that the NSA has built powerful supercomputers to try to crack the technology that scrambles and encrypts personal information when internet users log on to access various services.

The NSA also collaborated with unnamed technology companies to build so-called back doors into their software – something that would give the government access to information before it is encrypted and sent over the internet, it is reported.

As well as supercomputers, methods used include “technical trickery, court orders and behind-the-scenes persuasion to undermine the major tools protecting the privacy of everyday communications”, the New York Times reports.

The US reportedly began investing billions of dollars in the operation in 2000 after its initial efforts to install a “back door” in all encryption systems were thwarted.

Gobsmacked

During the next decade, it is said the NSA employed code-breaking computers and began collaborating with technology companies at home and abroad to build entry points into their products.

The documents provided to the Guardian by Mr Snowden do not specify which companies participated.

The NSA also hacked into computers to capture messages prior to encryption, and used broad influence to introduce weaknesses into encryption standards followed by software developers the world over, the New York Times reports.

When British analysts were first told of the extent of the scheme they were “gobsmacked”, according to one memo among more than 50,000 documents shared by the Guardian.

NSA officials continue to defend the agency’s actions, claiming it will put the US at considerable risk if messages from terrorists and spies cannot be deciphered.

But some experts argue that such efforts could actually undermine national security, noting that any back doors inserted into encryption programs can be exploited by those outside the government.

It is the latest in a series of intelligence leaks by Mr Snowden, a former NSA contractor, who began providing caches of sensitive government documents to media outlets three months ago.

In June, the 30-year-old fled his home in Hawaii, where he worked at a small NSA installation, to Hong Kong, and subsequently to Russia after making revelations about a secret US data-gathering programme.

A US federal court has since filed espionage charges against Mr Snowden and is seeking his extradition.

Mr Snowden, however, remains in Russia where he has been granted temporary asylum.

Source:  BBC

Will software-defined networking kill network engineers’ beloved CLI?

Tuesday, September 3rd, 2013

Networks defined by software may require more coding than command lines, leading to changes on the job

SDN (software-defined networking) promises some real benefits for people who use networks, but to the engineers who manage them, it may represent the end of an era.

Ever since Cisco made its first routers in the 1980s, most network engineers have relied on a CLI (command-line interface) to configure, manage and troubleshoot everything from small-office LANs to wide-area carrier networks. Cisco’s isn’t the only CLI, but on the strength of the company’s domination of networking, it has become a de facto standard in the industry, closely emulated by other vendors.

As such, it’s been a ticket to career advancement for countless network experts, especially those certified as CCNAs (Cisco Certified Network Associates). Those network management experts, along with higher level CCIEs (Cisco Certified Internetwork Experts) and holders of other official Cisco credentials, make up a trained workforce of more than 2 million, according to the company.

A CLI is simply a way to interact with software by typing in lines of commands, as PC users did in the days of DOS. With the Cisco CLI and those that followed in its footsteps, engineers typically set up and manage networks by issuing commands to individual pieces of gear, such as routers and switches.

SDN, and the broader trend of network automation, uses a higher layer of software to control networks in a more abstract way. Whether through OpenFlow, Cisco’s ONE (Open Network Environment) architecture, or other frameworks, the new systems separate the so-called control plane of the network from the forwarding plane, which is made up of the equipment that pushes packets. Engineers managing the network interact with applications, not ports.

“The network used to be programmed through what we call CLIs, or command-line interfaces. We’re now changing that to create programmatic interfaces,” Cisco Chief Strategy Officer Padmasree Warrior said at a press event earlier this year.

Will SDN spell doom for the tool that network engineers have used throughout their careers?

“If done properly, yes, it should kill the CLI. Which scares the living daylights out of the vast majority of CCIEs,” Gartner analyst Joe Skorupa said. “Certainly all of those who define their worth in their job as around the fact that they understand the most obscure Cisco CLI commands for configuring some corner-case BGP4 (Border Gateway Protocol 4) parameter.”

At some of the enterprises that Gartner talks to, the backlash from some network engineers has already begun, according to Skorupa.

“We’re already seeing that group of CCIEs doing everything they can to try and prevent SDN from being deployed in their companies,” Skorupa said. Some companies have deliberately left such employees out of their evaluations of SDN, he said.

Not everyone thinks the CLI’s days are numbered. SDN doesn’t go deep enough to analyze and fix every flaw in a network, said Alan Mimms, a senior architect at F5 Networks.

“It’s not obsolete by any definition,” Mimms said. He compared SDN to driving a car and CLI to getting under the hood and working on it. For example, for any given set of ACLs (access control lists) there are almost always problems for some applications that surface only after the ACLs have been configured and used, he said. A network engineer will still have to use CLI to diagnose and solve those problems.

However, SDN will cut into the use of CLI for more routine tasks, Mimms said. Network engineers who know only CLI will end up like manual laborers whose jobs are replaced by automation. It’s likely that some network jobs will be eliminated, he said.

This isn’t the first time an alternative has risen up to challenge the CLI, said Walter Miron, a director of technology strategy at Canadian service provider Telus. There have been graphical user interfaces to manage networks for years, he said, though they haven’t always had a warm welcome. “Engineers will always gravitate toward a CLI when it’s available,” Miron said.

Even networking startups need to offer a Cisco CLI so their customers’ engineers will know how to manage their products, said Carl Moberg, vice president of technology at Tail-F Systems. Since 2005, Tail-F has been one of the companies going up against the prevailing order.

It started by introducing ConfD, a graphical tool for configuring network devices, which Cisco and other major vendors included with their gear, according to Moberg. Later the company added NCS (Network Control System), a software platform for managing the network as a whole. To maintain interoperability, NCS has interfaces to Cisco’s CLI and other vendors’ management systems.

CLIs have their roots in the very foundations of the Internet, according to Moberg. The approach of the Internet Engineering Task Force, which oversees IP (Internet Protocol), has always been to find pragmatic solutions to defined problems, he said. This detail-oriented, “bottom-up” orientation was different from the way cellular networks were designed. The 3GPP, which developed the GSM standard used by most cell carriers, crafted its entire architecture at once, he said.

The IETF’s approach lent itself to manual, device-by-device administration, Moberg said. But as networks got more complex, that technique ran into limitations. Changes to networks are now more frequent and complex, so there’s more room for human error and the cost of mistakes is higher, he said.

“Even the most hardcore Cisco engineers are sick and tired of typing the same commands over and over again and failing every 50th time,” Moberg said. Though the CLI will live on, it will become a specialist tool for debugging in extreme situations, he said.

“There’ll always be some level of CLI,” said Bill Hanna, vice president of technical services at University of Pittsburgh Medical Center. At the launch earlier this year of Nuage Networks’ SDN system, called Virtualized Services Platform, Hanna said he hoped SDN would replace the CLI. The number of lines of code involved in a system like VSP is “scary,” he said.

On a network fabric with 100,000 ports, it would take all day just to scroll through a list of the ports, said Vijay Gill, a general manager at Microsoft, on a panel discussion at the GigaOm Structure conference earlier this year.

“The scale of systems is becoming so large that you can’t actually do anything by hand,” Gill said. Instead, administrators now have to operate on software code that then expands out to give commands to those ports, he said.

Faced with these changes, most network administrators will fall into three groups, Gartner’s Skorupa said.

The first group will “get it” and welcome not having to troubleshoot routers in the middle of the night. They would rather work with other IT and business managers to address broader enterprise issues, Skorupa said. The second group won’t be ready at first but will advance their skills and eventually find a place in the new landscape.

The third group will never get it, Skorupa said. They’ll face the same fate as telecommunications administrators who relied for their jobs on knowing obscure commands on TDM (time-division multiplexing) phone systems, he said. Those engineers got cut out when circuit-switched voice shifted over to VoIP (voice over Internet Protocol) and went onto the LAN.

“All of that knowledge that you had amassed over decades of employment got written to zero,” Skorupa said. For IP network engineers who resist change, there will be a cruel irony: “SDN will do to them what they did to the guys who managed the old TDM voice systems.”

But SDN won’t spell job losses, at least not for those CLI jockeys who are willing to broaden their horizons, said analyst Zeus Kerravala of ZK Research.

“The role of the network engineer, I don’t think, has ever been more important,” Kerravala said. “Cloud computing and mobile computing are network-centric compute models.”

Data centers may require just as many people, but with virtualization, the sharply defined roles of network, server and storage engineer are blurring, he said. Each will have to understand the increasingly interdependent parts.

The first step in keeping ahead of the curve, observers say, may be to learn programming.

“The people who used to use CLI will have to learn scripting and maybe higher-level languages to program the network, or at least to optimize the network,” said Pascale Vicat-Blanc, founder and CEO of application-defined networking startup Lyatiss, during the Structure panel.

Microsoft’s Gill suggested network engineers learn languages such as Python, C# and PowerShell.
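As a purely hypothetical sketch of what that shift looks like in practice (the controller URL, endpoint and payload schema below are invented for illustration and do not correspond to any specific product), a network change becomes a small program that talks to a controller’s northbound API rather than a CLI session on each box:

```python
# Hypothetical sketch: push one declarative change to an SDN controller's
# REST API instead of typing per-device CLI commands. The URL, endpoint and
# payload schema are invented for illustration.
import json
import urllib.request

CONTROLLER = "https://sdn-controller.example.net/api/v1"  # hypothetical

def set_vlan(ports: list[str], vlan_id: int) -> int:
    payload = json.dumps({"ports": ports, "vlan": vlan_id}).encode("utf-8")
    req = urllib.request.Request(
        f"{CONTROLLER}/port-profiles",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:  # real use would need auth/TLS setup
        return resp.status

if __name__ == "__main__":
    ports = [f"switch{n}/eth{p}" for n in range(1, 5) for p in range(1, 49)]
    print("controller replied with HTTP", set_vlan(ports, vlan_id=120))
```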

For Facebook, which takes a more hands-on approach to its infrastructure than do most enterprises, that future is now.

“If you look at the Facebook network engineering team, pretty much everybody’s writing code as well,” said Najam Ahmad, Facebook’s director of technical operations for infrastructure.

Network engineers historically have used CLIs because that’s all they were given, Ahmad said. “I think we’re underestimating their ability.”

Cisco is now gearing up to help its certified workforce meet the newly emerging requirements, said Tejas Vashi, director of product management for Learning@Cisco, which oversees education, testing and certification of Cisco engineers.

With software automation, the CLI won’t go away, but many network functions will be carried out through applications rather than manual configuration, Vashi said. As a result, network designers, network engineers and support engineers all will see their jobs change, and there will be a new role added to the mix, he said.

In the new world, network designers will determine network requirements and how to fulfill them, then use that knowledge to define the specifications for network applications. Writing those applications will fall to a new type of network staffer, which Learning@Cisco calls the software automation developer. These developers will have background knowledge about networking along with skills in common programming languages such as Java, Python, and C, said product manager Antonella Como. After the software is written, network engineers and support engineers will install and troubleshoot it.

“All these people need to somewhat evolve their skills,” Vashi said. Cisco plans to introduce a new certification involving software automation, but it hasn’t announced when.

Despite the changes brewing in networks and jobs, the larger lessons of all those years typing in commands will still pay off for those who can evolve beyond the CLI, Vashi and others said.

“You’ve got to understand the fundamentals,” Vashi said. “If you don’t know how the network infrastructure works, you could have all the background in software automation, and you don’t know what you’re doing on the network side.”

Source:  computerworld.com

NFL lagging on stadium Wi-Fi

Tuesday, September 3rd, 2013

The consumption of NFL football, America’s most popular sport, is built on game-day traditions.

This week fans will dress head-to-toe in team colors and try out new tailgate recipes in parking lots before filing into 16 NFL stadiums to cheer on their team — which, thanks to the league’s parity, will likely still be in the playoff hunt come December.

But a game-day ritual of the digital age — tracking scores, highlights and social-media chatter on a mobile device — isn’t possible inside many NFL venues because the crush of fans with smartphones can overload cellular networks.

The improved home-viewing experience — high-def TV, watching multiple games at once, real-time fantasy-football updates and interaction via social media — has left some NFL stadiums scrambling to catch up. It’s one of the reasons why, before rebounding last year, the NFL lost attendance between 2008 and 2011, forcing the league to alter television-blackout rules.

In May 2012, NFL Commissioner Roger Goodell announced an initiative to outfit all 31 NFL stadiums with Wi-Fi. But with the start of the 2013 regular season just days away, fewer than half of the NFL’s venues are Wi-Fi enabled, and no stadium has launched a new Wi-Fi system this year.

Part of the reason for the delay is that some stadium operators are waiting for the next, faster generation of Wi-Fi before installing networks, said Paul Kapustka, editor in chief for Mobile Sports Report.

Another reason, Kapustka said, is that the cost of installing Wi-Fi will come out of the pockets of venue owners and operators who have traditionally not needed to invest in such costly projects. Instead, they receive public money to help build stadiums and television money for the right to broadcast games.

“Stadium owners and operators need to get their heads around the fact that they need to put in Wi-Fi like they need to put in plumbing,” Kapustka said.

Brian Lafemina, the NFL’s vice president of club business development, said the league is still searching for a telecommunications partner that can help tackle challenges of stadium location, design and tens of thousands of fans all trying to access the network at the same time.

“Yes, we are working on it as hard as we can,” he said. “But the technology just isn’t where it needs to be to deliver what we want to deliver.”

The league is unveiling a variety of technological enhancements at stadiums in 2013, including cameras in locker rooms, massive video boards that will show replays of every play, a “fantasy football lounge” with sleek technological amenities, the ability to listen to audio of other games from inside the stadium, team-specific fantasy games and free access to the league’s NFL RedZone cable channel for season-ticket holders.

Lafemina emphasized the league’s role as a storyteller and said it is striving to use technology to provide fans in stadiums with unique content.

“The most important people in that stadium are the 70,000 paying customers,” he said.

Jonathan Kraft, president of the New England Patriots and co-chair of the NFL’s digital media committee, told CNN Money in January that he hopes to have all stadiums equipped with Wi-Fi for the start of the 2015 season.

The Patriots helped lead the way last year by offering fans free Wi-Fi throughout Gillette Stadium in Foxboro, Massachusetts. The network was built by New Hampshire-based Enterasys Networks.

“We certainly encourage that any club would invest the way they have,” said Lafemina.

Eleven other stadiums currently have Wi-Fi capability: MetLife Stadium in northern New Jersey, the Georgia Dome in Atlanta, Lucas Oil Stadium in Indianapolis, Raymond James Stadium in Tampa, the Mercedes-Benz Superdome in New Orleans, Bank of America Stadium in Charlotte, Sun Life Stadium in Miami, AT&T Stadium in suburban Dallas, University of Phoenix Stadium in suburban Phoenix, Ford Field in Detroit and Soldier Field in Chicago.

The 20 other stadiums have Wi-Fi in certain areas, but mostly operate on wireless service provided by Verizon and/or AT&T. Many of these venues have installed distributed antenna systems (DAS) to increase wireless connectivity while they seek answers to the challenges of enabling stadiums with Wi-Fi.

DAS connects cellular antennas to a common source, allowing wireless access in large buildings like stadiums.

Mobile Sports Report published its inaugural State of the Stadium Technology Survey this year, based on responses from more than 50 NFL, MLB, NBA, NHL, university, pro soccer, pro golf and car racing sites. The survey concluded DAS is currently more popular at venues because it boosts connectivity to mobile devices while dividing costs between carriers and the facility.

Cleveland Browns fans will benefit from a new DAS tower, installed by Verizon, and an upgraded AT&T tower this year at FirstEnergy Stadium. Browns President Alec Scheiner said the improved technology will serve as a test case for whether to install Wi-Fi in the future.

“If you are a consumer or a fan, you really just care about being able to get on your mobile device, and that’s what we’re trying to tackle,” he said during a July press conference.

Kapustka said DAS is a quick fix and is not a long-term strategy, especially when it comes to fans watching TV replays on their mobile devices.

“The video angle is the big thing for Wi-Fi,” he said. “Cellular just simply won’t be able to handle the bandwidth.”

He also pointed out that it is not in the best business interest of cellphone carriers to install Wi-Fi, as it would take customers off their networks.

Also complicating Kraft’s 2015 goal is the lack of league consensus about who will build Wi-Fi networks in all of its stadiums, and when.

By contrast, Major League Baseball named wireless-tech company Qualcomm its official technology partner in April, launching a two-year study to solve mobile-connectivity issues in its 30 stadiums. Kapustka said MLB was in a position to strike the overarching deal with Qualcomm because team owners made the league responsible for digital properties during the 1990s.

The NFL has a variety of rights deals, including with DirecTV and Verizon, which make it more difficult for the league to agree on a single Wi-Fi plan, he said.

“My opinion is they (the NFL) will eventually have something more like MLB,” Kapustka said. “MLB has shown it is a great way to make money.”

Source:  CNN