Pen testing in the Web 2.0 era
How is penetration testing coping with the brave new virtualised world of Web 2.0 with its new opportunities to breach security and compromise data? ProCheckUp director Richard Brain outlines the state of the art.
The growing adoption of virtualised servers and interconnected web services (Web 2.0) introduces new challenges when performing penetration tests to uncover flaws and to create proof-of-concept attacks. Testing with no prior knowledge (black-box testing) has historically provided a sound foundation for a penetration test, but detecting and defeating today's more advanced attacks now requires a more comprehensive review of system information and source code (white-box testing).
Server virtualisation is rapidly becoming the standard in the server environment. Driven by the release of Windows Server 2008 and Red Hat Enterprise Linux 5.x, and by the desire to fully utilise the power of the latest Xeon chipsets, host machines running these operating systems can easily support four to eight virtual machines each.
Worms and viruses have historically spread over network shares by exploiting newly discovered security flaws in machines. Virtual machine sprawl, the uncontrolled creation and multiplication of virtual machines, can allow worms and viruses to spread throughout the data centre: unpatched and insecurely configured guest machines are vulnerable to the same flaws as stand-alone operating systems and, if not properly managed, can become reservoirs of malicious agents.
Additionally, because virtual machines present predictable hardware profiles, with near-identical virtual hardware shared between them, future malware may exploit this similarity to spread more rapidly between machines as virtualisation becomes more widespread. BIOS-level rootkits are old news, and rootkits targeting virtual hardware, such as the keyboard controller, should be expected in due course.
The Conficker worm spread using hard drives, DVDs and USB devices by abusing the auto-run feature, and hosted virtual machines became infected from the host machine: physical drives shared between host and guest machines auto-ran and installed the worm on the guests. Microsoft released a patch that effectively disabled auto-run in Windows Server 2008 in February 2009.
Penetrating the virtual world
Penetration testing of virtual machines is little different from testing conventional hosts: open ports are discovered and the services running on those ports are tested for security flaws. Virtualisation support software, such as WMI management agents, may also be found running on virtual machines. Interacting manually with individual virtual machines confirms that patching is recent and that an up-to-date anti-virus system is running. A further problem is identifying offline virtual machines and backup images (stored offline or online), which may not be sufficiently patched before being exposed to a dangerous environment such as the Internet. The backup images themselves might be infected with malware, which needs to be considered if an organisation has recovered from a malware infection in the past.
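The port-discovery step above can be sketched in a few lines. This is a minimal TCP connect scan for illustration only; the host and port list are assumptions, and real engagements would use a purpose-built scanner with service fingerprinting.

```python
# Minimal TCP connect-scan sketch. Target host and port list are
# illustrative assumptions, not values taken from the article.
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the TCP handshake succeeds
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

Each port found open would then be probed further to identify the service behind it and test it for known flaws.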
Host servers that run and manage multiple virtual machines (four or more) require more in-depth and focused penetration testing, to ensure that no security flaw exists which might adversely affect the dependent guest machines: killing the host machine is a simple denial-of-service attack against every guest, and a privilege-escalation flaw on the host compromises them all.
Submitted links can be used to attack flaws in software running on end-user machines (e.g. Flash Player XSS) or to attack other users of the website directly by a technique called cross-site request forgery (CSRF). CSRF attacks typically occur where a website uses long-lived persistent cookies for authenticating its users. For instance, a website user might visit a maliciously submitted page which silently issues a state-changing request such as 'delete user' (normally via an image tag). The user's browser, recognising that the target site has associated persistent cookies, submits the authentication cookie along with the request, and the site carries out the deletion believing it was submitted by the user (because the authentication cookies were present). The Samy worm spread across MySpace using an XSS attack, bypassing the site's mechanisms for preventing CSRF and using string concatenation and character conversion to evade its XSS filters.
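The standard defence against the cookie-replay problem described above is a synchronizer token: a secret value tied to the session that a forged cross-site request cannot know. The sketch below is framework-neutral and the session-ID handling is an assumption for illustration.

```python
# Sketch of the synchronizer-token defence against CSRF.
# SECRET_KEY and the session-ID scheme are illustrative assumptions.
import hashlib
import hmac
import secrets

SECRET_KEY = secrets.token_bytes(32)  # per-deployment server secret

def issue_csrf_token(session_id: str) -> str:
    """Derive a per-session token; embed it in every state-changing form."""
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def verify_csrf_token(session_id: str, submitted: str) -> bool:
    """Reject requests (e.g. 'delete user') that carry only the cookie:
    an attacker's page can trigger the request but cannot read the token."""
    expected = issue_csrf_token(session_id)
    return hmac.compare_digest(expected, submitted)
```

Because the token never travels in a cookie, the browser does not attach it automatically, so a hidden image tag on a malicious page submits the cookie but fails the token check.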
Many servers accept RSS (Really Simple Syndication) news feeds which are forwarded to the servers' subscribers; the subscribers' web browsers then render the information contained within the RSS file. RSS files use the XML standard to transmit information, and a problem arises when an attacker is able to submit a malicious RSS file. In such a case, the attacker might perform an XXE (XML external entity) attack to read system files or mount other attacks on an RSS aggregator machine (news site), or exploit an XML-parser weakness within subscribers' web browsers to eventually run system commands on subscriber machines. A recent example was the CVE-2009-0137 Safari RSS attack, in which a maliciously crafted news feed was potentially able to execute code on the client.
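External entities are declared in a feed's DOCTYPE, so one conservative aggregator-side defence is simply to refuse any submitted feed that contains a DOCTYPE declaration at all. The sketch below is a pre-parse check of my own devising, not a full XML validator.

```python
# Defensive sketch: reject submitted RSS/XML that declares a DOCTYPE,
# the construct in which external entities (the XXE vector) are defined.
# A conservative pre-parse filter, not a complete XML security solution.
import re

_DOCTYPE = re.compile(rb"<!DOCTYPE", re.IGNORECASE)

def reject_xxe(xml_bytes: bytes) -> bytes:
    """Raise ValueError for any feed containing a DOCTYPE declaration."""
    if _DOCTYPE.search(xml_bytes):
        raise ValueError("DOCTYPE declarations are not accepted in feeds")
    return xml_bytes
```

Feeds that pass this check can then be handed to a parser configured with entity resolution disabled, as defence in depth.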
Penetration testers have to ensure that aggregator sites have processes and controls in place so that untrusted RSS feeds cannot be added, and that any code serving RSS feeds performs sufficient filtering and malicious-code detection so that unpatched subscriber machines do not execute any malicious embedded code. This is not straightforward, as some RSS feeds legitimately embed HTML tags to make their content more interesting.
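Because feeds legitimately embed some HTML, blanket escaping is not an option; the usual compromise is an allowlist of harmless tags. The sketch below illustrates the idea only: the tag set is an assumption, and a production aggregator would need a full sanitiser that also handles attributes, URLs and nesting.

```python
# Sketch of allowlist tag filtering for HTML embedded in RSS content.
# ALLOWED_TAGS is an illustrative assumption; this is not a complete
# sanitiser (it does not police attributes such as onclick or href).
import re

ALLOWED_TAGS = {"b", "i", "em", "strong", "p", "br", "a"}

_TAG = re.compile(r"</?([a-zA-Z][a-zA-Z0-9]*)[^>]*>")

def strip_disallowed_tags(html: str) -> str:
    """Drop any tag whose name is not on the allowlist (e.g. <script>)."""
    def keep_or_drop(m):
        return m.group(0) if m.group(1).lower() in ALLOWED_TAGS else ""
    return _TAG.sub(keep_or_drop, html)
```

A tester reviewing an aggregator would check both that some such filter exists and that it cannot be bypassed by malformed or obfuscated markup.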
Service providers such as PayPal, eBay and Amazon remove the need for their users to process card payments and run complex e-commerce environments. The interlinking of these different services means that vulnerabilities in a provider also affect its users; it is becoming common when performing a penetration test to find that a flaw lies 'downstream' of the site under test. There are also issues with data integrity, as data is now distributed and shared between the different service providers (it might be lost, intercepted and so on). The website under test might submit customer data to a service provider through the provider's published API, and this historic or current API code might contain programming flaws that allow other registered parties to retrieve customer details or to interfere with the processing of orders.
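One concrete integrity control at this boundary is to authenticate provider callbacks before acting on them. The sketch below assumes a generic HMAC-signed callback scheme; the shared secret and payload layout are illustrative assumptions, not any real provider's API.

```python
# Illustrative sketch of verifying that an order-status callback really
# came from the payment provider. The HMAC scheme, shared secret and
# payload format are assumptions, not a specific provider's API.
import hashlib
import hmac

def sign_callback(secret: bytes, payload: bytes) -> str:
    """Signature the provider would attach to the callback body."""
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify_callback(secret: bytes, payload: bytes, signature: str) -> bool:
    """Reject tampered or forged callbacks before updating any order."""
    return hmac.compare_digest(sign_callback(secret, payload), signature)
```

A tester would probe exactly this seam: whether the site accepts unsigned or replayed callbacks, since that allows other parties to interfere with order processing.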
Facing the challenge
I hope this article has given an insight into some of the current challenges facing penetration testers. More time needs to be allocated to Web 2.0 penetration testing, yet penetration testing companies operate in an increasingly competitive environment, with market demand that web application tests be performed to a budget and at year-on-year reducing cost, despite the inherent need to spend more time. This regrettably all too brief overview of how penetration testers find vulnerabilities has, I hope, helped administrators and information security managers to make their infrastructure more secure. Another concern for administrators and ISMs is the effectiveness of traditional IDS/IPS and application-level firewalls in detecting Web 2.0 attacks, as such devices have been used as the traditional sticking plaster for insecure applications in the past.
This article originally appeared in Test Magazine.