
Real-World Protection Test August-November 2013

Date: November 2013
Language: English
Release date: 2013-12-11
Last revision: 2013-12-10
Test period: August – November 2013
Number of test cases: 1821
Online with cloud connectivity: yes
Update allowed: yes
False Alarm Test included: yes
Platform/OS: Microsoft Windows

Introduction

Malicious software poses an ever-increasing threat, not only due to the number of malware programs increasing, but also due to the nature of the threats. Infection vectors are changing from simple file-based methods to distribution via the Internet. Malware is increasingly focussing on users, e.g. by deceiving them into visiting infected web pages, installing rogue/malicious software or opening emails with malicious attachments.

The scope of protection offered by antivirus programs is extended by the inclusion of e.g. URL-blockers, content filtering, reputation systems and user-friendly behaviour-blockers. If these features are perfectly coordinated with the signature-based and heuristic detection, the protection provided against threats increases.

In this test, all protection features of the product can be used to prevent infection – not just signatures or heuristic file scanning. A suite can step in at any stage of the process – accessing the URL, downloading the file, formation of the file on the local hard drive, file access and file execution – to protect the PC. This means that the test achieves the most realistic way of determining how well the security product protects the PC. Because all of a suite’s components can be used to protect the PC, it is possible for a product to score well in the test by having e.g. very good behavioural protection, but a weak URL blocker. However, we would recommend that all parts of a product should be as effective as possible. It should be borne in mind that not all malware enters computer systems via the Internet, and that e.g. a URL blocker is ineffective against malware introduced to a PC via a USB flash drive or over the local area network.

In spite of these new technologies, it remains very important that the signature-based and heuristic detection abilities of antivirus programs continue to be tested. Even with all the protection features available, the growing frequency of zero-day attacks means that some computers will inevitably become infected. As signatures can be updated, they provide the opportunity to recognize and remove malware which was initially missed by the security software. The newer, “non-conventional” protection technologies often offer no means of checking existing data stores for already-infected files, which can be found on the file servers of many companies. Those new security layers should be understood as an addition to good detection rates, not as a replacement.

The Whole-Product Dynamic “Real-World” Protection test is a joint project of AV-Comparatives and the University of Innsbruck’s Faculty of Computer Science and Quality Engineering. It is partially funded by the Republic of Austria.

The methodology of our Real-World Protection Test has received several awards and certifications, including:

  • Constantinus Award – given by the Austrian government
  • Cluster Award – given by the Standortagentur Tirol – Tyrolean government
  • eAward – given by report.at (Magazine for Computer Science) and the Office of the Federal Chancellor
  • Innovationspreis IT – “Best Of” – given by Initiative Mittelstand Germany

For this test, we normally use the Internet security suite, as any protection features that prevent the system from being compromised can be used. However, a vendor can choose to enter their basic antivirus product instead, if they prefer. The main versions of the products tested in each monthly test run are shown below:

Tested Products

Test Procedure

Testing dozens of antivirus products with hundreds of URLs each per day is a great deal of work, which cannot be done manually (as it would involve visiting thousands of websites in parallel), so it is necessary to use some sort of automation.

Lab Setup

Every potential test case to be used in the test is run and analysed on a clean machine without antivirus software, to ensure that it is a suitable candidate. If a test case proves to be suitable, the source URL is added to the list to be tested with security products. Any test cases which turn out not to be appropriate are excluded from the test set.

Every security program to be tested is installed on its own test computer. All computers are connected to the Internet. Each system is manually updated every day, and each product is updated before every single test case. Each test PC has its own external IP address. We make special arrangements with ISPs to ensure a stable Internet connection for each PC, and take the necessary precautions (with specially configured firewalls etc.) not to harm other computers (i.e. not to cause outbreaks).

Hardware and Software

For this test we use identical workstations, a control and command server and network attached storage.

  • Workstation: Dell Optiplex 755, Intel Core 2 Duo, 4 GB RAM, 80 GB SSD
  • Control Server: Supermicro Microcloud, Intel Xeon E5, 32 GB RAM, 4 x 500 GB SSD
  • Storage: Eurostor ES8700-Open-E, Dual Xeon, 32 GB RAM, 140 TB RAID 6

The tests were performed under Microsoft Windows 7 Home Premium SP1 64-Bit, with updates as at 2nd August 2013. Some further installed software includes:

  • Adobe Flash Player ActiveX 11.8
  • Adobe Flash Player Plug-In 11.8
  • Adobe Acrobat Reader 11.0
  • Apple QuickTime 7.7
  • Microsoft Internet Explorer 10.0
  • Microsoft Office Home 2010
  • Microsoft .NET Framework 4.5
  • Mozilla Firefox 35.0
  • Oracle Java 1.7.0.25
  • VideoLAN VLC Media Player 2.0.8

The use of more up-to-date third-party software and an updated Microsoft Windows 7 64-Bit makes it much harder to find exploits in-the-field for the test. Users should always keep their systems and applications up-to-date, in order to minimize the risk of being infected through exploits which use unpatched software vulnerabilities.

Settings

We use every security suite with its default settings. Our Whole-Product Dynamic Protection Test aims to simulate real-world conditions as experienced every day by users. If user interactions are shown, we choose “Allow” or equivalent. If the product protects the system anyway, we count the malware as blocked, even though we allow the program to run when the user is asked to make a decision. If the system is compromised, we count it as user-dependent. We consider “protection” to mean that the system is not compromised. This means that the malware is not running (or is removed/terminated) and there are no significant/malicious system changes. An outbound-firewall alert about a running malware process, which asks whether or not to block traffic from the users’ workstation to the Internet, is too little, too late and not considered by us to be protection.

Preparation for every testing day

Every morning, any available security software updates are downloaded and installed, and a new base image is made for that day. Before each test case is carried out, the products have some time to download and install newer updates which have just been released, as well as to load their protection modules (which in several cases takes some minutes). In the event that a major signature update for a product is made available during the day, but fails to download/install before each test case starts, the product will at least have the signatures that were available at the start of the day. This replicates the situation of an ordinary user in the real world.

Testing Cycle for each malicious URL

Before browsing to each new malicious URL we update the programs/signatures (as described above). New major product versions (i.e. the first digit of the build number is different) are installed once at the beginning of the month, which is why in each monthly report we only give the main product version number. Our test software monitors the PC, so that any changes made by the malware will be recorded. Furthermore, the recognition algorithms check whether the antivirus program detects the malware. After each test case the machine is reset to its clean state.

Protection

Security products should protect the user’s PC. It is not very important at which stage the protection takes place. It could be while browsing to the website (e.g. protection through URL Blocker), while an exploit tries to run, while the file is being downloaded/created or when the malware is executed (either by the exploit or by the user). After the malware is executed (if not blocked before), we wait several minutes for malicious actions and also to give e.g. behaviour-blockers time to react and remedy actions performed by the malware. If the malware is not detected and the system is indeed infected/compromised, the process goes to “System Compromised”. If a user interaction is required and it is up to the user to decide if something is malicious, and in the case of the worst user decision the system gets compromised, we rate this as “user-dependent”. Because of this, the yellow bars in the results graph can be interpreted either as protected or not protected (it’s up to each individual user to decide what he/she would probably do in that situation).
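
To make the three possible outcomes explicit, the following minimal sketch (our own illustration in Python; the function and parameter names are hypothetical and do not come from the actual test software) shows how a single test case would be classified under the rules described above:

```python
def classify_test_case(system_compromised: bool, user_decision_required: bool) -> str:
    """Classify one test case according to the rules described above."""
    if not system_compromised:
        # The product prevented the infection at some stage (URL blocker, download
        # block, behaviour blocker, etc.); at which stage does not matter.
        return "Protected"
    if user_decision_required:
        # The system was compromised only under the worst possible user decision
        # (we always choose "Allow" or equivalent when the product asks).
        return "User-dependent"
    return "System Compromised"

# Examples:
print(classify_test_case(system_compromised=False, user_decision_required=True))   # Protected
print(classify_test_case(system_compromised=True,  user_decision_required=True))   # User-dependent
print(classify_test_case(system_compromised=True,  user_decision_required=False))  # System Compromised
```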

Due to the dynamic nature of the test, i.e. mimicking real-world conditions, and because of the way several different technologies (such as cloud scanners, reputation services, etc.) work, it is a matter of fact that such tests cannot be repeated or replicated in the way that e.g. static detection rate tests can. Anyway, we log as much data as reasonably possible to support our findings and results. Vendors are invited to provide useful log functions in their products that can provide the additional data they want in the event of disputes. After each testing month, manufacturers are given the opportunity to dispute our conclusion about the compromised cases, so that we can recheck if there were maybe some problems in the automation or with our analysis of the results.

In the case of cloud products, we can only consider the results that the products achieved in our lab at the time of testing; sometimes the cloud services provided by the security vendors are down due to faults or maintenance downtime by the vendors, but these cloud-downtimes are often not disclosed to the users by the vendors. This is also a reason why products relying too heavily on cloud services (and not making use of local heuristics, behaviour blockers, etc.) can be risky, as in such cases the security provided by the products can decrease significantly. Cloud signatures/reputation should be implemented in the products to complement the other local/offline protection features, but not replace them completely, as e.g. when the cloud services are unreachable, the PCs would be exposed to higher risk.

Testcases

We aim to use visible and relevant malicious websites/malware that are currently out there, and present a risk to ordinary users. We usually try to include as many working drive-by exploits as we find – these are usually well covered by practically all major security products, which may be one reason why the scores look relatively high. The rest are URLs that point directly to malware executables; this causes the malware file to be downloaded, thus replicating a scenario in which the user is tricked by social engineering into following links in spam mails or websites, or installing some Trojan or other malicious software.

We use our own crawling system to search continuously for malicious sites and extract malicious URLs (including spammed malicious links). We also search manually for malicious URLs. If our in-house crawler does not find enough valid malicious URLs on one day, we have contracted some external researchers to provide additional malicious URLs (initially for the exclusive use of AV-Comparatives) and look for additional (re)sources.

In this kind of testing, it is very important to use enough test cases. If an insufficient number of samples are used in comparative tests, differences in results may not indicate actual differences in protective capabilities among the tested products. More details can be found here. Our tests use more test cases (samples) per product and month than any similar test performed by other testing labs. Because of the higher statistical significance this achieves, we consider all the products in each results cluster to be equally effective, assuming that they have a false-positives rate below the industry average.
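
To illustrate why a large number of test cases matters, the sketch below (our own illustration; it uses a simple normal approximation to the binomial and is not the statistical method actually applied in the report) estimates the uncertainty of a measured protection rate:

```python
import math

def margin_of_error(rate: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a protection rate measured on n test cases
    (normal approximation to the binomial distribution)."""
    return z * math.sqrt(rate * (1.0 - rate) / n)

# A 98% protection rate measured on only 100 samples is uncertain by roughly
# +/- 2.7 percentage points; with 1821 samples this shrinks to about +/- 0.6 points.
print(round(100 * margin_of_error(0.98, 100), 1))   # 2.7
print(round(100 * margin_of_error(0.98, 1821), 1))  # 0.6
```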

In total, over 75,000 test cases were executed, i.e. 3,423 malicious test cases for each of the 22 products tested. Of these, 1,645 exploits were ineffective due to the patch level of the test systems; those test cases were therefore not counted in the test.

Hierarchical Cluster Analysis

The dendrogram (using average linkage between groups) shows the results of the hierarchical cluster analysis. It indicates at what level of similarity the clusters are joined. The red dotted line defines the level of similarity. Each intersection indicates a group (in this case 3 groups). Products that had above-average FPs (wrongly blocked score) are marked in red (and downgraded according to the ranking system below).

[Figure: Dendrogram of the hierarchical cluster analysis for the Real-World Protection Test, August – November 2013]
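
As an illustration of how such a grouping can be computed (a sketch of our own using standard tools, with made-up protection rates; it is not the exact analysis behind the dendrogram above):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical protection rates (in %) for a handful of products; the report's
# analysis is based on the actual per-product results.
rates = np.array([[99.8], [99.6], [98.4], [97.2], [96.9], [93.5], [91.9]])

# Average linkage between groups, as used for the dendrogram shown above.
Z = linkage(rates, method="average")

# Cutting the tree into a fixed number of groups assigns each product to a cluster.
clusters = fcluster(Z, t=4, criterion="maxclust")
print(clusters)
```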

Ranking System

Ranking system   Protection score   Protection score   Protection score   Protection score
                 Cluster 4          Cluster 3          Cluster 2          Cluster 1
< ∅ FPs          Tested             Standard           Advanced           Advanced+
> ∅ FPs          Tested             Tested             Standard           Advanced

Test Results

Below you can see an overview of the individual testing months:

August 2013 – 443 test cases

September 2013 – 326 test cases

October 2013 – 527 test cases

November 2013 – 525 test cases

We purposely do not give exact numbers for the individual months in this report, to avoid the minor differences of a few cases being misused to state that one product is better than the other in a given month and with a given test-set size. We provide the total numbers in the overall reports, where the size of the test-set is bigger, and differences that are more significant may be observed.

False Positive (False Alarm) Test Result

The false-alarm test in the Whole-Product Dynamic “Real-World” Protection Test consists of two parts: wrongly blocked domains (while browsing) and wrongly blocked files (while downloading/installing). It is necessary to test both scenarios because testing only one of the two above cases could penalize products that focus mainly on one type of protection method, either URL filtering or on-access/behaviour/reputation-based file protection.

Wrongly blocked domains (while browsing)

We used around one thousand randomly chosen popular domains. Blocked non-malicious domains/URLs were counted as false positives (FPs). The wrongly blocked domains have been reported to the respective vendors for review and should now no longer be blocked.

By blocking whole domains, the security products not only risk causing a loss of trust in their warnings, but also possibly causing financial damage (besides the damage to website reputation) to the domain owners, including loss of e.g. advertisement revenue. Due to this, we strongly recommend vendors to block whole domains only where the domain’s sole purpose is to carry/deliver malicious code, and otherwise to block just the malicious pages (as long as they are indeed malicious). Products which tend to block URLs based e.g. on reputation may be more prone to this and may also score higher in protection tests, as they may block many unpopular/new websites.

Wrongly blocked files (while downloading/installing)

We used around two thousand different applications listed either as top downloads or as new/recommended downloads from various download portals. The applications were downloaded from the original software developers’ websites (instead of the download portal host), saved to disk and installed to see if they are blocked at any stage of this procedure. Additionally, we included a few clean files that were encountered and disputed over the past months of the Real-World Protection Test.

The duty of security products is to protect against malicious sites/files, not to censor or limit access only to well-known popular applications and websites. If the user deliberately chooses a high security setting, which warns that it may block some legitimate sites or files, then this may be considered acceptable. However, we do not regard it as acceptable as a default setting, where the user has not been warned. As the test is done at points in time and FPs on very popular software/websites are usually noticed and fixed within a few hours, it would be surprising to encounter FPs with very popular applications. Due to this, FP tests which are done e.g. only with very popular applications, or which use only the top 50 files from whitelisted/monitored download portals, would be a waste of time and resources. Users do not care whether they are infected by malware that affects only them, just as they do not care if a false positive affects only them. While it is preferable that FPs do not affect many users, the goal should be to avoid having any FPs and to protect against any malicious files, no matter how many users are affected or targeted. The prevalence of FPs based on user-base data is of interest for the internal QA testing of AV vendors, but for ordinary users it is important to know how accurately their product distinguishes between clean and malicious files.

The table below shows the numbers of wrongly blocked domains/files:

                                             Wrongly blocked clean domains/files    Wrongly blocked score
                                             (blocked / user-dependent [1])         [lower is better]
AhnLab, AVG, ESET, Kaspersky Lab, Microsoft  – / – (–)                              –
Kingsoft, Sophos                             3 / – (3)                              3
Avast, AVIRA                                 3 / 3 (6)                              4.5
Tencent                                      5 / – (5)                              5
G DATA, Qihoo                                6 / – (6)                              6
BullGuard                                    7 / – (7)                              7
Bitdefender, Emsisoft, Fortinet              10 / – (10)                            10
average                                      (12)                                   (11)
eScan                                        13 / – (13)                            13
Trend Micro                                  13 / 4 (17)                            15
McAfee                                       26 / – (26)                            26
ThreatTrack Vipre                            36 / – (36)                            36
Panda                                        42 / – (42)                            42
F-Secure                                     43 / 9 (52)                            47.5

[1] Although user dependent cases are extremely annoying (esp. on clean files) for the user, they were counted only as half for the “wrongly blocked rate” (like for the protection rate).
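
For example, F-Secure's 43 wrongly blocked items plus 9 user-dependent cases give a wrongly blocked score of 43 + 9/2 = 47.5, as shown in the table above.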

To determine which products have to be downgraded in our award scheme due to the rate of wrongly blocked sites/files, we backed up our decision by using statistical methods and by looking at the average scores. The following products with above-average FPs have been downgraded: eScan, F-Secure, McAfee, Panda, Trend Micro and ThreatTrack Vipre.

Summary Result

Test period: August – November 2013 (1821 Test cases)

The graph below shows the overall protection rate (all samples), including the minimum and maximum protection rates for the individual months.

                    Blocked   User dependent   Compromised   Protection Rate                        Cluster
                                                             [Blocked % + (User dependent % / 2)]
Kaspersky Lab       1820      1                –             100%                                   1
Panda               1817      –                4             99.8%                                  1
Bitdefender         1816      –                5             99.7%                                  1
F-Secure            1806      14               1             99.6%                                  1
Trend Micro         1806      13               2             99.5%                                  1
AVIRA               1805      9                7             99.4%                                  1
ESET                1805      1                15            99.1%                                  1
Emsisoft            1786      33               2             99%                                    1
Avast               1795      6                20            98.7%                                  1
McAfee              1798      –                23            98.7%                                  1
Sophos              1792      –                29            98.4%                                  1
Fortinet            1785      –                36            98%                                    1
eScan               1745      61               15            97.5%                                  2
G DATA              1732      75               14            97.2%                                  2
Tencent             1761      8                52            96.9%                                  2
ThreatTrack Vipre   1765      –                56            96.9%                                  2
BullGuard           1749      12               60            96.4%                                  2
AVG                 1690      106              25            95.7%                                  3
Qihoo               1704      61               56            95.2%                                  3
AhnLab              1703      –                118           93.5%                                  4
Kingsoft            1670      33               118           92.6%                                  4
Microsoft           1674      –                147           91.9%                                  4
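
As a worked example of the protection-rate formula given in the table header (a small sketch of our own, not part of the report's tooling):

```python
def protection_rate(blocked: int, user_dependent: int, total: int) -> float:
    """Protection rate as defined above: blocked % plus half of the user-dependent %."""
    return 100.0 * (blocked + 0.5 * user_dependent) / total

# First row of the table: 1820 blocked and 1 user-dependent case out of 1821
# test cases gives 99.97%, which appears rounded in the report.
print(round(protection_rate(1820, 1, 1821), 2))  # 99.97
```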

Award levels reached in this Real-World Protection Test

The awards are decided and given by the testers based on the observed test results (after consulting statistical models). Microsoft was considered as a baseline and was therefore tested out-of-competition, which is why it is not included on the awards page. The score of Microsoft Windows Defender would be equivalent to STANDARD. The following awards are for the results reached in this Whole-Product Dynamic “Real-World” Protection Test:

* these products got lower awards due to false alarms

Notes

Microsoft Security Essentials, which provides basic malware protection, can easily be installed from Windows Update, and is used as the basis of comparison for malware protection. Microsoft Windows 7 includes a firewall and automatic updates, and warns users about executing files downloaded from the Internet. Most modern browsers include phishing/URL filters and warn users about downloading files from the Internet. Despite these various built-in protection features, systems can still become infected. The reason for this is usually the ordinary user, who may be tricked by social engineering into visiting malicious websites or installing malicious software. Users expect a security product not to ask them whether they really want to e.g. execute a file, but rather to protect the system in any case, without them having to think about it, and regardless of what they do (e.g. executing unknown files).

Copyright and Disclaimer

This publication is Copyright © 2013 by AV-Comparatives ®. Any use of the results, etc. in whole or in part, is ONLY permitted after the explicit written agreement of the management board of AV-Comparatives prior to any publication. AV-Comparatives and its testers cannot be held liable for any damage or loss, which might occur as result of, or in connection with, the use of the information provided in this paper. We take every possible care to ensure the correctness of the basic data, but a liability for the correctness of the test results cannot be taken by any representative of AV-Comparatives. We do not give any guarantee of the correctness, completeness, or suitability for a specific purpose of any of the information/content provided at any given time. No one else involved in creating, producing or delivering test results shall be liable for any indirect, special or consequential damage, or loss of profits, arising out of, or related to, the use or inability to use, the services provided by the website, test documents or any related data.

For more information about AV-Comparatives and the testing methodologies, please visit our website.

AV-Comparatives
(December 2013)