
Retrospective / Proactive Test August 2011

Date August 2011
Language English
Last Revision November 14th 2011

Heuristic and behavioural protection against new/unknown malicious software


Release date 2011-11-15
Revision date 2011-11-14
Test Period August 2011
Number of Testcases 9003
Online with cloud connectivity: no
Update allowed: no
False Alarm Test included: yes
Platform/OS Microsoft Windows

Introduction

Anti-Virus products often claim to have high proactive detection capabilities – far higher than those reached in this test. This is not just a self-promotional statement; it is possible that products reach the stated percentages, but this depends on the duration of the test period, the size of the sample set and the samples used. The data shows how good the proactive (on-access/on-demand) detection capabilities of the scanners were in detecting the new threats (sometimes also called zero-day threats) used in this test. Users should not be alarmed if products score low percentages in a retrospective test. If the anti-virus software is always kept up-to-date, it may be able to detect more samples. To see how the detection rates of the Anti-Virus products look with updated signatures and programs, please have a look at our regular on-demand detection tests. By design and scope of the test, only the heuristic/generic detection capability was tested (offline). Some products may have had the ability to detect some samples e.g. on execution or by other monitoring tools, such as behaviour-blockers, reputation/cloud heuristics, etc. Those kinds of additional protection technologies are considered by AV-Comparatives in e.g. whole-product dynamic tests, but are outside the scope of retrospective tests. For further details, please refer to the methodology documents as well as the information provided on our website.

This test report is the second part of the August 2011 test. The report is delivered in late November due to the considerable work required for the deeper analysis and preparation of the retrospective test-set. Because of the time spent analyzing and assuring the quality of those samples, they usually also get included in the next detection test to see whether they are covered by then.
Many new viruses and other types of malware appear every day; this is why it is important that Anti-Virus products not only provide new updates as often and as fast as possible, but are also able to detect such threats in advance (without executing them, or while offline) using generic and/or heuristic techniques. Even though nowadays most Anti-Virus products provide daily, hourly or cloud updates, without heuristic/generic methods there is always a time-frame in which the user is not reliably protected.

The products used the same updates and signatures they had on the 12th August 2011, and the same detection settings as used in August. This test shows the proactive detection capabilities that the products had at that time. We used new malware that appeared between the 13th and 20th August 2011. The following products were tested:

Tested Products

Test Procedure

Nowadays, hardly any Anti-Virus products rely purely on “simple” signatures. They all use complex generic signatures, heuristics, etc. in order to catch new malware without needing to download signatures or initiate manual analysis of new threats. In addition, Anti-Virus vendors continue to deliver signatures and updates to fill the gaps where proactive mechanisms initially fail to detect some threats. Anti-Virus software uses various technologies to protect a PC; the combination of such multi-layered technologies usually provides good protection.

Almost all products nowadays run by default with the highest protection settings (at least at the entry points, during whole-computer on-demand scans or scheduled scans), or switch automatically to the highest settings when an infection is detected. Because of that, in order to get comparable results, we tested all products with the highest settings, unless explicitly advised otherwise by the vendors. To pre-empt some frequent questions, below are some notes about the settings used for some products (scanning of all files etc. is always enabled):

AVIRA, Kaspersky: asked to be tested with heuristics set to high/advanced. Because of this, we recommend that users also consider setting the heuristics to high/advanced.
F-Secure: asked to be tested and awarded based on its default settings (i.e. without using its advanced heuristics).
AVIRA: asked us not to count the informational warnings for packers as detections. Accordingly, we did not count them as detections (neither on the malware set nor on the clean set).

AV-Comparatives prefers to test with default settings. However, as most products run with the highest settings by default (or switch to the highest settings automatically when malware is found, which makes it impossible to test against various malware with “default” settings), we set the few remaining products to their highest settings (or left them at default settings) in accordance with the respective vendors, in order to get comparable results. We hope that all vendors will find an appropriate balance of detection, false alarms and system impact, provide the highest security by default, and remove paranoid settings in the user interface which are too high to ever be of any benefit for normal users.

Testcases

This time we included in the retrospective test-set only new malware which had been seen in the field and was prevalent in the few days after the last update in August. Additionally, we took care to include malware samples belonging to different clusters (i.e. samples which differ from each other, so as not to include too many samples that are practically the same malware). As malware which becomes prevalent may be spotted faster by reactive measures once many users are infected, initial proactive rates may be lower (had such samples been blocked/detected proactively in advance, they might never have become prevalent).
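Purely as an illustration (not AV-Comparatives' actual pipeline), the following Python sketch shows what such a cluster-based selection could look like, assuming each sample already carries a cluster label from some prior similarity analysis; the Sample fields, the cap of one sample per cluster and the preference for the earliest-seen sample are assumptions.

```python
# Illustrative sketch only: keep at most one representative per malware cluster,
# so that near-identical samples do not dominate the retrospective test-set.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Sample:
    sha256: str          # file hash identifying the sample
    first_seen: str      # e.g. "2011-08-14"
    cluster_id: int      # label from a prior similarity clustering (assumed given)

def select_test_set(samples, max_per_cluster=1):
    """Return up to max_per_cluster samples from each cluster, earliest-seen first."""
    by_cluster = defaultdict(list)
    for s in samples:
        by_cluster[s.cluster_id].append(s)
    selected = []
    for members in by_cluster.values():
        members.sort(key=lambda s: s.first_seen)   # prefer the earliest-seen sample
        selected.extend(members[:max_per_cluster])
    return selected
```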

Ranking System

The awards are given by the testers after consulting a number of statistical methods, including hierarchical clustering. We based our decisions on the following scheme (an illustrative sketch of how to read it follows the table):

                     Proactive Protection Rates
                     Under 50%   Cluster 3   Cluster 2   Cluster 1
None - Few FPs       TESTED      STANDARD    ADVANCED    ADVANCED+
Many FPs             TESTED      TESTED      STANDARD    ADVANCED
Very many FPs        TESTED      TESTED      TESTED      STANDARD
Crazy many FPs       TESTED      TESTED      TESTED      TESTED
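As an illustration only, the scheme above can be read as a simple lookup from a product's protection-rate column and its false-alarm category to an award. The Python sketch below assumes the rate clusters (1 = best) have already been determined by the testers, e.g. via hierarchical clustering of the protection rates; the function name and the category keys are assumptions, not the testers' actual procedure.

```python
# Illustrative sketch only: map a product's protection-rate column
# (0 = under 50%, otherwise cluster 3, 2 or 1, where cluster 1 holds the
# highest rates) and its false-alarm category to the award in the scheme above.

AWARD_SCHEME = {
    # FP category -> awards for: under 50%, cluster 3, cluster 2, cluster 1
    "none_few":   ["TESTED", "STANDARD", "ADVANCED", "ADVANCED+"],
    "many":       ["TESTED", "TESTED",   "STANDARD", "ADVANCED"],
    "very_many":  ["TESTED", "TESTED",   "TESTED",   "STANDARD"],
    "crazy_many": ["TESTED", "TESTED",   "TESTED",   "TESTED"],
}

def award(rate_cluster: int, fp_category: str) -> str:
    """rate_cluster: 0 for under 50%, else 3, 2 or 1 (1 = best rates)."""
    column = 0 if rate_cluster == 0 else 4 - rate_cluster  # cluster 3 -> col 1, cluster 1 -> col 3
    return AWARD_SCHEME[fp_category][column]

# A cluster-1 product with none/few false alarms gets ADVANCED+, while the same
# detection result with many false alarms is downgraded to ADVANCED.
print(award(1, "none_few"))  # ADVANCED+
print(award(1, "many"))      # ADVANCED
```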

Test Results

The tested products are able to detect a quantity of completely new/unknown malware proactively, even without executing the malware, using passive heuristics; other protective mechanisms such as HIPS, behaviour analysis and behaviour-blockers, reputation/cloud heuristics, etc. add an extra layer of protection. The retrospective test is performed using passive scanning and demonstrates the ability of the products under test to detect new malware proactively, without it being executed. In retrospective tests, “in-the-cloud” features are not considered, as they are not within the scope of the test.

False Positive (False Alarm) Test Result

To better evaluate the quality of the detection capabilities, the false alarm rate has to be taken into account too. A false alarm (or false positive) occurs when an Anti-Virus product flags an innocent file as infected when it is not. False alarms can sometimes cause as much trouble as a real infection.

The false-alarm test results were already included in the August test report. For details, please read the False Alarm Test August 2011.

   Product(s)                       False alarms
1. Microsoft                          1    very few FPs (0-1)
2. Bitdefender, eScan, F-Secure       3
3. Avira                              9    few FPs (2-10)
4. Kaspersky, Trustport              12
5. G DATA, Panda                     18    many FPs (over 10)
6. Avast                             19
7. ESET                              20
8. Qihoo                            104

Summary Result

The results show the proactive (generic/heuristic) file detection capabilities of the scan engines against new malware. The percentages are rounded to the nearest whole number. Do not take the results as an absolute assessment of quality – they just give an idea of who detected more, and who less, in this specific test. To see how these Anti-Virus products perform with updated signatures, please have a look at our detection tests of February and August. To learn about the protection rates provided by the various products, please have a look at our ongoing Whole-Product Dynamic tests. Readers should look at the results and form an opinion based on their own needs.

Below you can see the proactive on-demand detection results over our set of new and prevalent malware that appeared in the field within a few days of August (9003 different malware samples):

Product   Blocked   Compromised   Proactive Protection Rate   False Alarms   Cluster
Qihoo 6086 2917 67.6% Many 1
G DATA 5762 3241 64.0% Few 1
AVIRA 5618 3385 62.4% Few 1
ESET 5546 3457 61.6% Very few 1
Trustport 5519 3484 61.3% Many 1
Kaspersky 5411 3592 60.1% Very few 1
F-Secure 5177 3826 57.5% Few 1
Bitdefender 5150 3853 57.2% Few 1
eScan 5123 3880 56.9% Many 1
 
Microsoft 4384 4619 48.7% Very few 2
Avast 4150 4853 46.1% Few 2
Panda 3727 5276 41.4% Very few 2

[1] User-dependent cases were given half credit. Example: if a program blocks 80% of malware by itself, plus another 20% user-dependent, we give it 90% altogether, i.e. 80% + (20% x 0.5).
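As a minimal sketch (assuming the rate is simply the blocked cases, plus half credit for user-dependent ones, divided by the 9003 test-cases), the calculation behind the table and footnote [1] could look like the Python below; the function name and the one-decimal rounding are assumptions chosen to match the figures shown above.

```python
# Minimal sketch, not AV-Comparatives' tooling: protection rate with half
# credit for user-dependent cases, as described in footnote [1].

TOTAL_TESTCASES = 9003

def protection_rate(blocked: int, user_dependent: int = 0,
                    total: int = TOTAL_TESTCASES) -> float:
    """Percentage of blocked cases, counting user-dependent ones at half weight."""
    return round(100.0 * (blocked + 0.5 * user_dependent) / total, 1)

# Qihoo blocked 6086 of the 9003 samples -> 67.6%, as in the table above.
print(protection_rate(6086))                                            # 67.6
# Footnote [1] example: 80% blocked plus 20% user-dependent -> 90% altogether.
print(protection_rate(blocked=8000, user_dependent=2000, total=10000))  # 90.0
```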

Award levels reached in this Heuristic / Behavioural Test

The following awards are for the results reached in the proactive/behavioural test, considering not only the protection rates against new malware, but also the false alarm rates:

* these products got lower awards due to false alarms

Notes

All the products included in the test achieved good results and received either the Advanced or Advanced+ award. Remarkably, Panda’s Cloud AntiVirus achieved respectable detection rates against unknown malicious programs despite not being allowed to use the cloud, and was rewarded with an Advanced award. Unfortunately, not all vendors chose to participate in this test. This may be because many of the non-participating programs would achieve only sub-optimal results in this type of test, which does not make use of the cloud, etc.

AVG, K7, McAfee, PC Tools, Sophos, Symantec, Trend Micro and Webroot decided not to be included in this test and renounced being given an award. Considering that certain vendors did not take part, we decided that in this case it makes more sense to keep our fixed thresholds instead of using the cluster method (as the non-inclusion of the low-scoring products could cause the clusters to be built “unfairly”).

Readers may be interested in a summary and commentary on our test methodology published by PC Mag two years ago: http://securitywatch.pcmag.com/security-software/315053-can-your-antivirus-handle-a-zero-day-malware-attack

Copyright and Disclaimer

This publication is Copyright © 2011 by AV-Comparatives ®. Any use of the results, etc. in whole or in part, is ONLY permitted after the explicit written agreement of the management board of AV-Comparatives prior to any publication. AV-Comparatives and its testers cannot be held liable for any damage or loss, which might occur as result of, or in connection with, the use of the information provided in this paper. We take every possible care to ensure the correctness of the basic data, but a liability for the correctness of the test results cannot be taken by any representative of AV-Comparatives. We do not give any guarantee of the correctness, completeness, or suitability for a specific purpose of any of the information/content provided at any given time. No one else involved in creating, producing or delivering test results shall be liable for any indirect, special or consequential damage, or loss of profits, arising out of, or related to, the use or inability to use, the services provided by the website, test documents or any related data.

For more information about AV-Comparatives and the testing methodologies, please visit our website.

AV-Comparatives
(November 2011)