
Retrospective / Proactive Test August 2010


Heuristic and behavioural protection against new/unknown malicious software


Release date 2010-12-07
Revision date 2010-12-06
Test Period August 2010
Number of Testcases 23237
Online with cloud connectivity: No
Update allowed: No
False Alarm Test included: Yes
Platform/OS Microsoft Windows

Introduction

Anti-Virus products often claim to have high proactive detection capabilities – far higher than those reached in this test. This is not just a self-promotional statement; it is possible that products reach the stated percentages, but this depends on the duration of the test period, the size of the sample set and the samples used. The data shows how good the proactive detection capabilities of the scanners were at detecting new threats. Users should not be worried if products have low percentages in a retrospective test. If the anti-virus software is always kept up-to-date, it will be able to detect more samples. To understand how the detection rates of the Anti-Virus products look with updated signatures and programs, have a look at our regular on-demand detection tests. Only the heuristic/generic detection capability was tested (offline). Some products may have had the ability to detect some samples e.g. on-execution or with other monitoring tools, such as behaviour blockers, reputation/cloud heuristics, etc. Those kinds of additional protection technologies are considered by AV-Comparatives in e.g. whole-product dynamic tests, but are outside the scope of this retrospective test. For further details please refer to the methodology documents as well as the information provided on our website.

This test report is the second part of the August 2010 test. The report is delivered at the beginning of December due to the large amount of work, deeper analysis and preparation required for the retrospective test-set. Many new viruses and other types of malware appear every day, which is why it is important that Anti-Virus products not only provide new updates as often and as fast as possible, but are also able to detect such threats in advance (i.e. without executing them, or while offline) with generic and/or heuristic techniques. Even though nowadays most Anti-Virus products provide daily, hourly or cloud updates, without heuristic/generic methods there is always a time-frame in which the user is not reliably protected.

The products used the same updates and signatures they had on the 16th of August, and the same detection settings as used in August (see page 6 of this report). This test shows the proactive detection capabilities that the products had at that time. We used new malware that appeared between the 17th and 24th of August 2010. The following products were tested:

Tested Products

Test Procedure

AV-Comparatives prefers to test with default settings. As most products run with highest settings by default (or switch to highest automatically when malware is found, making it impossible to test against various malware with “default” settings), in order to get comparable results we also set the few remaining products to highest settings (or leave them at lower settings) in accordance with the respective vendors. We hope that all vendors will find the appropriate balance of detection/false alarms/system impact, will provide highest security already by default, and will remove paranoid settings from the user interface which are too high to ever be of any benefit to normal users.

Testcases

We included in the retrospective test-set only new malware that was very prevalent in-the-field shortly after the freezing date. Samples which were not detected by the heuristic/generic detection capabilities of the products were then executed in order to see if behaviour-blocking features would stop them. In several cases, we observed that behaviour blockers only warned about some dropped malware components or system changes, without protecting against all the malicious actions performed by the malware; such cases were not counted as a block. As behaviour blockers only come into play after the malware is executed, a certain risk of being compromised remains (even when the security product claims to have blocked/removed the threat). Therefore, it is preferable that malware be detected before it is executed, by e.g. the on-access scanner using heuristics. This is why behaviour blockers should be considered a complement to the other features of a security product (multi-layer protection), and not a replacement.

Ranking System

The awards are given by the testers after consulting a number of statistical methods, including hierarchical clustering. We based our decisions on the following scheme:

Proactive Protection Rate clusters:

False Alarms        Under 50%   Cluster 3   Cluster 2   Cluster 1
None - Few FPs      TESTED      STANDARD    ADVANCED    ADVANCED+
Many FPs            TESTED      TESTED      STANDARD    ADVANCED
Very many FPs       TESTED      TESTED      TESTED      STANDARD
Crazy many FPs      TESTED      TESTED      TESTED      TESTED
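The scheme can be read as a two-way lookup: the protection-rate cluster sets the highest award a product can reach, and each step up in false alarms demotes it by one level, never below TESTED. The short Python sketch below is only our illustration of that reading; the function name and the treatment of the "Under 50%" column as a hypothetical fourth (lowest) cluster are our own assumptions, not AV-Comparatives tooling.

# Illustrative sketch only (our own helper, not AV-Comparatives' tooling).
# Cluster 1 is the best protection-rate cluster; the "Under 50%" column is
# treated here as a hypothetical cluster 4.
def award(cluster, fp_category):
    ladder = ["TESTED", "STANDARD", "ADVANCED", "ADVANCED+"]
    base = {1: 3, 2: 2, 3: 1, 4: 0}[cluster]                       # ceiling set by the cluster
    penalty = {"none-few": 0, "many": 1,
               "very many": 2, "crazy many": 3}[fp_category]       # demotion caused by false alarms
    return ladder[max(base - penalty, 0)]                          # never drops below TESTED

# Examples matching the scheme above:
# award(1, "none-few")  -> "ADVANCED+"
# award(2, "many")      -> "STANDARD"
# award(3, "very many") -> "TESTED"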

Test Results

The results show the proactive (generic/heuristic) detection capabilities of the scan engines against new malware. This test is performed offline and on-demand. It is NOT an on-execution/behavioural/cloud test. The percentages are rounded to the nearest whole number. Do not take the results as an absolute assessment of quality – they just give an idea of who detected more, and who less, in this specific test. To know how these anti-virus products perform with updated signatures, please have a look at our on-demand tests of February and August. Readers should look at the results and build an opinion based on their needs. All the tested products are already selected from a group of very good scanners; if they are used correctly and kept up-to-date, users can feel safe with any of them.

False Positive (False Alarm) Test Result

To better evaluate the quality of the detection capabilities, the false alarm rate has to be taken into account too. A false alarm (or false positive) occurs when an Anti-Virus product flags an innocent file as infected when it is not. False alarms can sometimes cause as much trouble as a real infection.

The false alarm test results were already included in the test report of August. For details, please read the False Alarm Test August 2010.

Rank   Product            False Alarms   Category
1      F-Secure           2              very few FPs (0-3)
2      Microsoft          3              very few FPs (0-3)
3      Bitdefender        4              few FPs (4-15)
4      eScan              5              few FPs (4-15)
5      ESET               6              few FPs (4-15)
6      PC Tools           7              few FPs (4-15)
7      Avast, Symantec    9              few FPs (4-15)
8      Avira              10             few FPs (4-15)
9      Sophos             13             few FPs (4-15)
10     G DATA             15             few FPs (4-15)
11     Trustport          19             many FPs (over 15)
12     Kaspersky          46             many FPs (over 15)
13     K7                 50             many FPs (over 15)
14     Panda              98             many FPs (over 15)
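For reference, the category boundaries used in the list above (0-3 very few, 4-15 few, over 15 many) can be expressed as a simple lookup. The sketch below is our own illustration; the function name is hypothetical.

# Illustrative sketch only; thresholds taken from the list above.
def fp_category(fp_count):
    if fp_count <= 3:
        return "very few FPs (0-3)"
    if fp_count <= 15:
        return "few FPs (4-15)"
    return "many FPs (over 15)"

# e.g. fp_category(2)  -> "very few FPs (0-3)"   (F-Secure)
#      fp_category(46) -> "many FPs (over 15)"   (Kaspersky)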

Summary Result

The table below shows the proactive on-demand detection capabilities of the various products, sorted by detection rate. The awards given are based not only on the detection rates over the new malware, but also take the false alarm rates into account.

Below you can see the proactive protection results over our set of new and prevalent malware files/families that appeared in-the-field (23,237 malware samples):

Product              Blocked   Compromised   Proactive Protection Rate   False Alarms   Cluster
G DATA               14407     8830          62%                         few            1
AVIRA                13710     9527          59%                         few            1
Sophos               13477     9760          58%                         few            1
ESET                 13013     10224         56%                         few            1
F-Secure             13013     10224         56%                         very few       1
BitDefender, eScan   12548     10689         54%                         few            1
Microsoft            12083     11154         52%                         very few       1
Symantec             11851     11386         51%                         few            1

Panda                14175     9062          61%                         many           2
Kaspersky            13710     9527          59%                         many           2
TrustPort            13477     9760          58%                         many           2
K7                   11619     11619         50%                         many           2
Avast                9992      13245         43%                         few            2
PC Tools             8598      14639         37%                         few            2
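The percentages in the table follow directly from the sample counts: the protection rate is the number of blocked samples divided by the total of 23,237 test cases, rounded to the nearest whole number. The short sketch below is only our illustration of that arithmetic, using a few rows from the table above.

# Illustrative sketch: reproduce the rounded protection rates from the counts above.
results = {
    "G DATA":   (14407, 8830),
    "AVIRA":    (13710, 9527),
    "PC Tools": (8598, 14639),
}

for product, (blocked, compromised) in results.items():
    total = blocked + compromised              # 23,237 test cases
    rate = round(100 * blocked / total)        # e.g. 14407 / 23237 -> 62%
    print(f"{product}: {rate}%")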

Award levels reached in this Heuristic / Behavioural Test

We provide a 3-level ranking system (STANDARD, ADVANCED and ADVANCED+). The following certification levels are based on the results reached in the retrospective test:

* these products got lower awards due to false alarms

Notes

AVG, Kingsoft, McAfee, Norman and Trend Micro decided not to be included in this report and renounced their awards.

Almost all products nowadays run by default with the highest protection settings (at least at the entry points, during whole-computer on-demand scans, or during scheduled scans), or switch automatically to the highest settings when an infection is detected. Because of that, in order to get comparable results, we tested all products with the highest settings unless explicitly advised otherwise by the vendors (as we use the same settings across all tests, the reason is usually that their highest settings either cause too many false alarms, have too high an impact on system performance, or are planned to be changed/removed by the vendor in the near future). To avoid some frequent questions, below are some notes about the settings used (scanning of all files etc. is always enabled) for some products:

  • AVIRA, Kaspersky, Symantec, TrustPort: asked to be tested with heuristics set to high/advanced. Because of that, we recommend that users also consider setting the heuristics to high/advanced.
  • F-Secure, Sophos: asked to be tested and awarded based on their default settings (i.e. without using their advanced heuristics / suspicious-detection settings).
  • AVIRA: asked us not to consider the informational warnings for packers as detections. Because of that, we did not count them as detections (neither on the malware set, nor on the clean set).

Copyright and Disclaimer

This publication is Copyright © 2010 by AV-Comparatives ®. Any use of the results, etc. in whole or in part, is ONLY permitted after the explicit written agreement of the management board of AV-Comparatives prior to any publication. AV-Comparatives and its testers cannot be held liable for any damage or loss which might occur as a result of, or in connection with, the use of the information provided in this paper. We take every possible care to ensure the correctness of the basic data, but no representative of AV-Comparatives can accept liability for the correctness of the test results. We do not give any guarantee of the correctness, completeness, or suitability for a specific purpose of any of the information/content provided at any given time. No one else involved in creating, producing or delivering test results shall be liable for any indirect, special or consequential damage, or loss of profits, arising out of, or related to, the use or inability to use, the services provided by the website, test documents or any related data.

For more information about AV-Comparatives and the testing methodologies, please visit our website.

AV-Comparatives
(December 2010)