
Comparison of “Next-Generation” Security Products 2016

Date: October 2016
Language: English
Release date: 2016-11-15
Last revision: 2016-11-07
Test period: September - October 2016
Number of test cases: 1071
Online with cloud connectivity: yes
Update allowed: yes
False alarm test included: yes
Platform/OS: Microsoft Windows

Introduction

“Next Generation” is a vague term used to describe security products that work on a different principle from traditional antivirus software. Currently these are available mainly for business networks rather than home users. They may work by monitoring incoming traffic on the network, as is the case for the Barracuda NextGen Firewall, or by installing client software which is managed and monitored centrally from a console, as is the case for the other products we have reviewed here. The latter type is intended to replace the antivirus software on client PCs, while the former could still be used in conjunction with traditional AV products.

This report marks the first time AV-Comparatives has included next-generation security products in a public comparative test report. We reached out to many leading vendors in this space and requested their participation in the test. Unfortunately, a number of vendors declined to take part in this independent evaluation. As their products could not be included in this public report, we cannot vouch for them as effective next-generation security products. We look forward to their participation in future independent tests.

For this assessment, MRG Effitas and AV-Comparatives combined their strengths to conduct a joint test. The reviews and Malware Protection Tests were performed by AV-Comparatives, while the Exploit Test was performed by MRG Effitas.

Tested Products

The following products have been reviewed/tested under Windows 10 64-bit and are included in this public report:

  • Barracuda
  • CrowdStrike
  • Palo Alto
  • SentinelOne

Test Procedure

Exploit Test Setup

Testing Cycle for Each Test Case

  • One default-installation Windows 10 64-bit virtual machine (VirtualBox) endpoint was created. The default HTTP/HTTPS proxy was configured to point to a proxy running on a different machine. SSL/TLS traffic was not intercepted on the proxy.
  • The security of the OS was weakened by the following actions:
    • Windows Defender was disabled
    • Internet Explorer SmartScreen was disabled
    • Vulnerable software was installed, see “Software Installed” for details.
    • Windows Update was disabled
  • From this point, different snapshots were created from the virtual machine, several with different endpoint protection products and one with none. This procedure ensured that the base system was exactly the same in all test systems.

    The following endpoint security suites, with the following configuration, were defined for this test:

    • No additional protection
      This snapshot was used to infect the OS and to verify the exploit replay.
    • Product 1 installed
    • Product 2 installed

The endpoint systems were installed with the default configuration, potentially unwanted software removal was enabled, and where the option was provided during the installation, cloud/community participation was enabled.

  • The exploit sources can be divided into two categories: in-the-wild threats and Metasploit. VBScript-based downloaders and Office macro documents were also in scope, as these threats are usually not included in other test scenarios.
  • The virtual machine was reverted to a clean state and the traffic was replayed by the proxy server. During replay, the browser was used as before, but instead of the original web servers, the proxy server answered the requests from the recorded traffic. If the “replayed exploit” was able to infect the OS, the exploit traffic was approved as a source for the tests. This method guarantees that exactly the same traffic is seen by every endpoint protection system, even if the original exploit kit goes down during the tests. This exploit replay is not to be confused with tcpreplay-style packet replay. (A minimal sketch of the record-and-replay idea is given after this list.)
  • After new exploit traffic was approved, the endpoint protection systems were tested. Before the exploit site was tested, it was verified that the endpoint protection had been updated to the latest version with the latest signatures and that every cloud connection was working. If there was a need to restart the system, it was restarted. In the proxy setup, unmatched requests were allowed to pass through and SSL/TLS was not decrypted to ensure AV connectivity. VPN was used during the test on the host machine. When user interaction was needed from the endpoint protection (e.g. site visit not recommended, etc.), the block/deny action was chosen. When user interaction was needed from Windows, we chose the run/allow options. No other processes were running on the system, except the Process Monitor/Process Explorer from SysInternals and Wireshark (both installed to non-default directories).
  • After navigating to the exploit site, the system was monitored to check for new processes, loaded DLLs or C&C traffic.
  • The process was repeated from the revert-and-replay step above until all exploit-site test cases had been covered.
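
To illustrate the record-and-replay approach described above, here is a minimal sketch of a replay proxy in Python. It is our own illustration, not the harness used in the test; the RECORDED dictionary and the example landing-page entry are hypothetical.

    # Minimal sketch of the record-and-replay idea (illustrative only; the
    # RECORDED dictionary and the example entry are hypothetical).
    # Requests whose method and path were captured earlier are answered from
    # the recording, so every product sees byte-identical exploit traffic even
    # if the original exploit kit has gone offline.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # (method, path) -> (status, headers, body) captured during the recording phase
    RECORDED = {
        ("GET", "/landing.html"): (200, {"Content-Type": "text/html"},
                                   b"<html><!-- recorded exploit-kit landing page --></html>"),
    }

    class ReplayHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            entry = RECORDED.get(("GET", self.path))
            if entry is None:
                # In the real setup, unmatched requests (e.g. the products' cloud
                # look-ups) were passed through to the live Internet; this sketch
                # simply refuses them.
                self.send_error(404, "not part of the recorded exploit session")
                return
            status, headers, body = entry
            self.send_response(status)
            for name, value in headers.items():
                self.send_header(name, value)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), ReplayHandler).serve_forever()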

The following hardware was dedicated to the virtual machine:

  • 4 GB RAM
  • 2 cores of an AMD FX 8370E CPU
  • 65 GB free disk space
  • 1 network interface
  • SSD storage

The VirtualBox host and guest systems used for the exploit test were hardened so that common virtualization- and sandbox-detection techniques could not identify the system as an analysis environment.
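
As an illustration of what such hardening has to defeat, the hypothetical sketch below checks for a few well-known VirtualBox artifacts (the Guest Additions registry key, guest drivers on disk, and the default VirtualBox MAC prefix). It is not the detection logic of any particular malware family, merely the kind of check a hardened guest should pass.

    # Hypothetical illustration of common VirtualBox/sandbox checks; on the
    # hardened test guest none of these artifacts should be found.
    import os
    import uuid
    import winreg  # Windows-only standard-library module

    def virtualbox_artifacts():
        hits = []
        # 1. Registry key left behind by the VirtualBox Guest Additions
        try:
            winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                           r"SOFTWARE\Oracle\VirtualBox Guest Additions")
            hits.append("Guest Additions registry key")
        except OSError:
            pass
        # 2. VirtualBox guest drivers on disk
        for name in ("VBoxMouse.sys", "VBoxGuest.sys", "VBoxSF.sys"):
            if os.path.exists(os.path.join(r"C:\Windows\System32\drivers", name)):
                hits.append(name)
        # 3. Default VirtualBox MAC address prefix 08:00:27
        if ("%012X" % uuid.getnode()).startswith("080027"):
            hits.append("VirtualBox MAC prefix")
        return hits

    if __name__ == "__main__":
        print(virtualbox_artifacts() or "no obvious VirtualBox artifacts found")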

Testcases

Malware Protection Test

All tests were performed with an active Internet connection (i.e. with cloud connection). We tested the products reviewed here as follows:

  • RTTL: the 500 most prevalent malicious samples according to the AMTSO Real-Time Threat List (RTTL) were executed on the system.
  • AVC: the 500 most recent and prevalent malicious samples from our own database were executed on the system. Some of the tested products also function as incident-response tools: the system is compromised, but a detection alert is reported in the web interface. The additional detection rate for the AVC score is noted in brackets (a short sketch of how such rates are derived follows this list).
  • WPDT: 50 malicious websites were tested using our Real-World Testing Framework, which simulates the activities of a typical computer user (whether at home or in the office) surfing the Internet. The test was run in parallel with “traditional” business antivirus products, enabling a comparison of the threat-protection capabilities of traditional and next-gen products.
  • FPs: a false alarm test, in which 1000 clean files were executed on the system, was also performed. The false positive test measures the ability of products to distinguish clean from malicious files.
  • Exploit Test: 21 exploits were used in the Exploit Test.
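
The percentages reported in the results tables are simple ratios over the respective sample sets. As a minimal sketch with made-up counts, the protection rate and the bracketed rate that additionally counts compromised-but-alerted cases could be derived as follows:

    # Illustrative computation of a protection rate and of the bracketed rate
    # that additionally counts compromised-but-alerted cases (counts are made up).
    from collections import Counter

    # Per-sample outcomes: blocked, compromised but alerted in the web console,
    # or compromised without any alert.
    outcomes = Counter(blocked=488, compromised_alerted=5, compromised=7)
    total = sum(outcomes.values())  # e.g. 500 samples in the RTTL/AVC sets

    protection_rate = outcomes["blocked"] / total
    detection_rate = (outcomes["blocked"] + outcomes["compromised_alerted"]) / total

    print(f"{protection_rate:.1%} ({detection_rate:.1%})")  # -> 97.6% (98.6%)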

Settings Used

Some vendors made configuration changes to their products remotely before the tests. Protection-relevant changes were as follows:

  • Barracuda: Detect All Types set to Yes, URL Filter Enabled set to True.
  • CrowdStrike: File Attribute Analysis and File Analysis set to Aggressive, and all available protection options enabled.
  • Palo Alto: WildFire activation set to On, Action is prevention, Action is applied on grayware, Local analysis is enabled, Upload files for WildFire is enabled.
  • SentinelOne: Show Suspicious Activities enabled, Auto Immune enabled, Actions set to Quarantine.

Analysis Of The Exploit Kits Used In The Exploit Test

Unfortunately, the timing of the test and the OS configuration were not ideal for the exploit test. At the time of testing, two exploit kits dominated the Internet: Sundown and RIG. RIG, however, used old (mostly Flash) exploits that were unable to exploit the test configuration at all. For this reason it was important also to test with Metasploit and with exploit kits that were not brand-new but not too old either (Neutrino). We also used two samples that are not exploits themselves but rather non-PE downloaders: an Office macro and a WSF downloader. We added these to the mix because such exotic file types are often excluded from real-world tests, yet are prevalent in the wild.

A total of 21 test cases were used:

  • 8 Sundown EK
  • 5 Neutrino EK
  • 4 Metasploit
  • 1 Powershell Empire
  • 1 Metasploit Macro
  • 1 Locky malspam WSF
  • 1 unknown EK

These exploit kits targeted Adobe Flash, Internet Explorer, Microsoft Office (macros), Silverlight, Firefox, and Java.

Software Installed

For the exploit test part, the following vulnerable software was installed:

Vendor     Product                          Version
Adobe      Flash Player ActiveX (built-in)  21.0.0.182
AutoIT     AutoIT                           3.3.12.0
Microsoft  Internet Explorer                11.162.10586
Microsoft  Office                           2016
Microsoft  Silverlight                      5.1.10411.0
Mozilla    Firefox                          31.0
Oracle     Java                             1.7.0.17

Ranking System

Scoring Of The Exploit Protection/Detection Results

We defined the following stages at which the exploit can be prevented by the endpoint protection system:

  1. Blocking the URL (infected URL, exploit kit URL, redirection URL, malware URL) by the URL database (local or cloud). For example, a typical result is the browser displaying a “site has been blocked” message by the endpoint protection. The sooner the threat is detected in the exploit chain, the easier it is to remove the malicious files from the system, the less information can be gathered from the system by the attackers, and the lower the risk of an attack targeting the particular security solution on an endpoint.
  2. Analyzing and blocking the page containing a malicious HTML code, JavaScripts (redirects, iframes, obfuscated JavaScripts, etc.), or Flash files.
  3. Blocking the exploit before the shellcode is executed.
  4. Blocking the downloaded payload by analyzing the malware before it is started. For example, the malware payload download (either the clear-text binary or the encrypted/encoded binary) can be seen in the proxy traffic, but no malware process starts.
  5. The malware execution is blocked (no process create, load library).
  6. There is a successful start by the dropped malware.
  7. There is a successful start by the dropped malware, but after some time, all dropped malware is terminated and deleted (“malware starts, but blocked later”).

The “protection” scoring of the results was calculated as follows:

  • If no malicious untrusted code was able to run on the endpoint, 5 points were given to the product. This can be achieved by blocking the exploit at stage 1, 2 or 3.
  • If malicious untrusted code ran on the system (exploit shellcode, downloader code) but the final malware was not able to start, 4 points were given to the product. This can be achieved by blocking the exploit at stage 4 or 5.
  • If both the exploit shellcode (or downloader code) and the final malware were able to run, 0 points were given to the product.
  • If at any stage of the infection a medium- or high-severity alert was generated (even if the infection was not prevented), 1 point was given to the product.

The “detection” scoring of the results was calculated as follows:

  • If at any stage of the infection a medium- or high-severity alert was generated (even if the infection was not prevented), 1 point was given to the product.
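
Putting the two scales together, the sketch below shows one plausible per-test-case implementation of the scoring (stage numbers refer to the list above). It is our own illustration, not the testers' tooling, and it assumes that the 1-point alert credit in the protection score applies only when the infection was not otherwise blocked.

    # Illustrative per-test-case scoring following the stages and point values
    # above (our own sketch, not the testers' harness). Assumption: the 1-point
    # alert credit applies only when the infection was not otherwise blocked.
    def protection_points(stage_blocked, alerted):
        """stage_blocked: stage (1-5) at which the attack was stopped, or None
        if the final malware ran; alerted: a medium/high-severity alert was raised."""
        if stage_blocked in (1, 2, 3):   # no untrusted code ran on the endpoint
            return 5
        if stage_blocked in (4, 5):      # shellcode ran, final malware did not start
            return 4
        return 1 if alerted else 0       # infection succeeded; detection-only credit

    def detection_points(alerted):
        return 1 if alerted else 0

    # Example: exploit blocked by the URL database (stage 1), alert raised:
    print(protection_points(1, True), detection_points(True))  # -> 5 1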

We used this scoring for the following reasons:

  • The scope of the test was exploit prevention and not the detection of malware running on the system.
  • It is not possible to determine what kind of commands have been executed or what information was exfiltrated by the malware. Data exfiltration cannot be undone or remediated.
  • It cannot be determined whether the malware exited because the endpoint protection system blocked it, or because it detected monitoring processes or virtualization, or because it did not find its target environment.
  • Checking for malware remediation can be too time-consuming and remediation scoring very difficult in an enterprise environment. For example, in recent years we have seen several alerts stating that the endpoint protection system blocked a URL/page/exploit/malware, but still the malware was able to execute and run on the system. On other occasions, the malware code was deleted from the disk by the endpoint protection system, but the malware process was still running, or some parts of the malware were detected and killed, while others were not.
  • In a complex enterprise environment, multiple network and endpoint products protect the endpoints. If one network product alerts that a malicious binary has been downloaded to an endpoint, administrators have to cross-check the alert with the endpoint protection alerts, or do a full forensic investigation to be sure that no malware was running on the endpoint. This process can be time- and resource-consuming, which is why it is better to block the exploit before the shellcode starts.
  • Usually the exploit shellcode is only a simple stage to download and execute a new piece of malware, but in targeted attacks, the exploit shellcode can be more complex.

We believe that such zero-tolerance scoring helps enterprises to choose the best products, using simple metrics. Manually verifying the successful remediation of the malware in an enterprise environment is a very resource-intensive process and costs a lot of money. In our view, malware needs to be blocked before it has a chance to run, and no exploit shellcode should be able to run.

Scoring of the Malware Protection Results

The scoring of the malware protection test is straightforward: whenever the system was compromised by the malware, 0 points were given to the product, and whenever the malware was blocked or remediated, 1 point was given.

Scoring of the False Positives Results

The same scoring principle as described above was applied to the false alarm test. In this test, 1000 non-malicious applications were used to measure the ability of the products to distinguish clean from malicious files.

Test Results

Malware Protection and False Alarm Test

Below are the results achieved by the next-gen products in the malware protection tests performed by AV-Comparatives. In general, the protection rates are quite high, and comparable with the scores reached by conventional business products.

The scores in brackets show the total detection rate if notifications in the web interface are counted as detections (i.e. cases where the system was compromised, but an alert was shown in the web interface).

             RTTL   AVC            WPDT        FPs
Barracuda    100%   100%           100%
CrowdStrike  98.2%  98.2% (99.2%)  98.0%
Palo Alto    99.0%  97.6% (99.0%)  100%
SentinelOne  100%   100%           98% (100%)


Exploit Test

Below are the results achieved by the next-gen products in the exploit tests performed by MRG Effitas.

  Protection Rate (Detection Rate) 
Barracuda  100% (100%) 
CrowdStrike  70% (100%) 
Palo Alto  82% (86%) 
SentinelOne  28% (57%) 


Product Reviews

Award levels reached in this Business Security Test and Review

The following products showed decent results in the malware protection tests without issuing too many false alarms, and receive our Approved Business Security Award:

Barracuda – APPROVED
CrowdStrike – APPROVED
Palo Alto – APPROVED
SentinelOne – APPROVED

Notes

Some of the products in this review may only provide logging and analysis of threats (useful for incident response), rather than actually protecting against them. In some cases, protection features are deactivated by default and have to be enabled and configured by the administrator before they can be used. Not all of the products covered here may be available as a trial version. In fact, a few “next-gen” vendors try to avoid having their products publicly tested or independently scrutinized. To this end, they do not sell their products to testing labs, and may even revoke a license key – without a refund – if they find out or suspect that it was bought anonymously by a testing lab.

Copyright and Disclaimer

This publication is Copyright © 2016 by AV-Comparatives ®. Any use of the results, etc. in whole or in part, is ONLY permitted after the explicit written agreement of the management board of AV-Comparatives prior to any publication. This report is supported by the participants. AV-Comparatives and its testers cannot be held liable for any damage or loss, which might occur as result of, or in connection with, the use of the information provided in this paper. We take every possible care to ensure the correctness of the basic data, but a liability for the correctness of the test results cannot be taken by any representative of AV-Comparatives. We do not give any guarantee of the correctness, completeness, or suitability for a specific purpose of any of the information/content provided at any given time. No one else involved in creating, producing or delivering test results shall be liable for any indirect, special or consequential damage, or loss of profits, arising out of, or related to, the use or inability to use, the services provided by the website, test documents or any related data.

For more information about AV-Comparatives and the testing methodologies, please visit our website.

AV-Comparatives
(November 2016)