
Retrospective / Proactive Test 2015

Date: March 2015
Language: English
Last Revision: June 26th 2015

Heuristic and behavioural protection against new/unknown malicious software


Release date: 2016-04-15
Revision date: 2015-06-26
Test Period: March 2015
Number of Testcases: 1463
Online with cloud connectivity: No
Update allowed: No
False Alarm Test included: Yes
Platform/OS: Microsoft Windows

Introduction

Many new malware samples appear every day, which is why it is important that antivirus products not only provide new updates as frequently and as quickly as possible, but are also able to detect such threats in advance with generic/heuristic techniques or, failing that, with behavioural protection measures. Even though most antivirus products nowadays provide daily, hourly or cloud updates, without proactive methods there is always a time-frame in which the user is not reliably protected. The aim of this test is to evaluate the proactive detection and protection rates in this time-frame (without cloud). The data shows how good the proactive heuristic/generic detection and behavioural protection capabilities of the scanners were at detecting the new threats used in this test. The design and scope of the test mean that only the heuristic/generic detection and behavioural protection capabilities were tested (offline). Additional protection technologies (which depend on cloud connectivity) and infection vectors are covered by AV-Comparatives in e.g. the Whole-Product Dynamic (“Real-World”) Protection Tests and other tests, but are outside the scope of the Retrospective/Proactive Tests.

This test report is the second part of the March 2015 test. The report is delivered several months later because of the large amount of work required for the deeper analysis, preparation and dynamic execution of the retrospective test-set. This type of test is performed only once a year and includes a behavioural protection element, in which malware samples are executed and the results observed. Although it is a lot of work, we usually receive good feedback from various vendors, as this type of test allows them to find bugs and areas for improvement in their behavioural routines (this test specifically evaluates the proactive heuristic and behavioural protection components). Feedback from users (especially corporate users) indicates that they appreciate the findings from this type of test.

The products used the same updates and signatures that they had on 3rd March 2015. This test shows the proactive protection capabilities that the products had at that time. We used 1,463 new and relevant malware samples (i.e. prevalent malware files and families) that appeared for the first time shortly after the freezing date. The following 12 products were tested:

Tested Products

Test Procedure

By design, the test does not make use of cloud services. In the time needed for the entire test procedure, it is possible that most of the samples would be blacklisted by vendors’ signatures or cloud services, meaning that the results would not reflect true proactive protection. In last year’s test, it took several weeks before all the malware samples used had been covered by some of the participants’ cloud services. This year the situation was better (due to the inclusion of mainly quite prevalent malware files/families), but in a few cases it still took some weeks until all the malware samples used were finally detected by some cloud-dependent products, even when their cloud-based features were available. Consequently, it has to be considered a marketing excuse if retrospective tests – which test the proactive protection against new malware – are criticized for not being allowed to use cloud resources. This is especially true considering that in many corporate environments the cloud connection is disabled by company policy, and the detection of new malware coming into the company often has to be provided (or is supposed to be provided) by other product features. Cloud features are very (economically) convenient for security software vendors and allow the collection and processing of large amounts of metadata. However, in most cases (not all) they still rely on blacklisting known malware, i.e. if a file is completely new/unknown, the cloud will usually not be able to determine whether it is good or malicious.

Testcases

We included in the retrospective test-set only new malware that was very prevalent in-the-field shortly after the freezing date. Samples which were not detected by the heuristic/generic detection capabilities of the products were then executed in order to see if behaviour-blocking features would stop them. In several cases, we observed that behaviour blockers only warned about some dropped malware components or system changes, without protecting against all the malicious actions performed by the malware; such cases were not counted as a block. As behaviour blockers only come into play after the malware is executed, a certain risk of being compromised remains (even when the security product claims to have blocked/removed the threat). Therefore, it is preferable that malware be detected before it is executed, by e.g. the on-access scanner using heuristics. This is why behaviour blockers should be considered a complement to the other features of a security product (multi-layer protection), and not a replacement.

Ranking System

The awards are given by the testers after consulting a number of statistical methods, including hierarchical clustering. We based our decisions on the following scheme:

Proactive Protection Rates    Under 50%    Cluster 3    Cluster 2    Cluster 1
None - Few FPs                TESTED       STANDARD     ADVANCED     ADVANCED+
Many FPs                      TESTED       TESTED       STANDARD     ADVANCED
Very many FPs                 TESTED       TESTED       TESTED       STANDARD
Crazy many FPs                TESTED       TESTED       TESTED       TESTED
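
As a rough illustration of the hierarchical clustering mentioned above, the sketch below groups the proactive protection rates from the Summary Result table into clusters using SciPy. This is not AV-Comparatives' own tooling; the linkage method, the cut into four groups and the variable names are assumptions made purely for the example, and the group labels it prints are assigned arbitrarily rather than matching the cluster numbers used in this report.

    # Minimal sketch (assumptions noted above): group protection rates with
    # hierarchical clustering and cut the dendrogram into four clusters.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    # Proactive protection rates (%) taken from the Summary Result table.
    rates = {
        "Bitdefender": 99, "F-Secure": 93, "eScan": 93, "Kaspersky Lab": 92,
        "BullGuard": 90, "ESET": 86, "Emsisoft": 76, "Avast": 67,
        "Lavasoft": 53, "Microsoft": 53, "Fortinet": 51, "ThreatTrack": 47,
    }

    data = np.array(list(rates.values()), dtype=float).reshape(-1, 1)
    Z = linkage(data, method="average")               # build the dendrogram
    groups = fcluster(Z, t=4, criterion="maxclust")   # cut it into 4 groups

    for name, g in zip(rates, groups):
        print(f"group {g}: {name} ({rates[name]}%)")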

Test Results

To see how these antivirus products perform with updated signatures and a cloud connection against prevalent malware files, please have a look at our File Detection Tests from March and September. To find out about real-life online protection rates provided by the various products, please have a look at our ongoing Whole-Product Dynamic “Real-World” Protection Tests. Readers should look at the results and decide on the best product for them based on their individual needs. For example, laptop users who are worried about infection from e.g. infected flash drives whilst offline should pay particular attention to this Proactive Test.

False Positive (False Alarm) Test Result

To better evaluate the proactive detection capabilities, the false-alarm rate has to be taken into account too. A false alarm (or false positive [FP]) occurs when an antivirus product flags an innocent file as infected. False alarms can sometimes cause as much trouble as real infections.

The false-alarm test results were already included in the March test report. For details, please read the report, available at http://www.av-comparatives.org/wp-content/uploads/2015/04/avc_fps_201503_en.pdf

1. ESET: 1 FP (very few FPs: 0-1)
2. Fortinet: 6 FPs
3. Bitdefender, Kaspersky: 9 FPs (few FPs: 2-10)
4. Emsisoft: 16 FPs
5. BullGuard, eScan, F-Secure, Lavasoft: 19 FPs (many FPs: over 10)
6. ThreatTrack: 50 FPs
7. Avast: 77 FPs

A small behavioural false-alarm test using the 100 most downloaded/common software packages released in February did not bring up any additional false alarms. The false-alarm test carried out for the March 2015 Real-World Protection Test produced largely similar false-alarm rates to those for the File Detection Test shown above.

Summary Result

The results show the proactive (generic/heuristic/behavioural) protection capabilities of the various products against new malware. The percentages are rounded to the nearest whole number.

Below you can see the proactive protection results over our set of new and prevalent malware files/families that appeared in the field (1,463 malware samples):

Product        Blocked  User dependent [1]  Compromised  Proactive Protection Rate  False Alarms  Cluster
Bitdefender    1448     -                   15           99%                        few           1
F-Secure       1358     3                   102          93%                        many          1
eScan          1354     -                   109          93%                        many          1
Kaspersky Lab  1343     -                   120          92%                        few           1
BullGuard      1259     129                 75           90%                        many          1
ESET           1253     -                   210          86%                        very few      1
Emsisoft       777      667                 19           76%                        many          2
Avast          985      -                   478          67%                        very many     2
Lavasoft       781      -                   682          53%                        many          3
Microsoft      772      -                   691          53%                        very few      3
Fortinet       742      -                   721          51%                        few           3
ThreatTrack    682      -                   781          47%                        many          -

[1] User-dependent cases were given a half credit. Example: if a program blocks 80% of malware by itself, plus another 20% user-dependent, we give it 90% altogether, i.e. 80% + (20% x 0.5).
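
As a worked illustration of the half-credit rule in footnote [1], the short sketch below recomputes the protection rate from the Blocked, User dependent and Compromised counts in the table above; the helper function name is ours and purely illustrative.

    # Minimal sketch of the protection-rate formula from footnote [1]:
    # user-dependent cases receive half credit.
    def proactive_rate(blocked: int, user_dependent: int, compromised: int) -> float:
        total = blocked + user_dependent + compromised
        return 100.0 * (blocked + 0.5 * user_dependent) / total

    # Two rows from the summary table above:
    print(round(proactive_rate(1358, 3, 102)))   # F-Secure  -> 93
    print(round(proactive_rate(1259, 129, 75)))  # BullGuard -> 90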

Award levels reached in this Heuristic / Behavioural Test

The following awards are for the results reached in the proactive/behavioural test, considering not only the protection rates against new malware, but also the false alarm rates:

* these products got lower awards due to false alarms

Notes

This test is an optional part of our public main test-series; that is to say, manufacturers can decide at the beginning of the year whether they want their respective products to be included in the test. The test is currently performed as part of the public main test-series only if a minimum number of vendors choose to participate in it.

Microsoft security products are not included in the awards page, as their out-of-box protection is (optionally) included in the operating system and is currently considered out-of-competition.

Readers may be interested to see a summary and commentary of our test methodology which was published by PC Mag two years ago: http://securitywatch.pcmag.com/security-software/315053-can-your-antivirus-handle-a-zero-day-malware-attack

Copyright and Disclaimer

This publication is Copyright © 2016 by AV-Comparatives ®. Any use of the results, etc. in whole or in part, is ONLY permitted after the explicit written agreement of the management board of AV-Comparatives prior to any publication. AV-Comparatives and its testers cannot be held liable for any damage or loss, which might occur as result of, or in connection with, the use of the information provided in this paper. We take every possible care to ensure the correctness of the basic data, but a liability for the correctness of the test results cannot be taken by any representative of AV-Comparatives. We do not give any guarantee of the correctness, completeness, or suitability for a specific purpose of any of the information/content provided at any given time. No one else involved in creating, producing or delivering test results shall be liable for any indirect, special or consequential damage, or loss of profits, arising out of, or related to, the use or inability to use, the services provided by the website, test documents or any related data.

For more information about AV-Comparatives and the testing methodologies, please visit our website.

AV-Comparatives
(April 2016)