Which settings and conditions are used for your tests?

Each security solution runs in a separate environment with its own isolated Internet connection. A full description of the settings and conditions for each test is given in the respective test report.

We test all consumer products with their default settings, since surveys show that most home users keep their security programs at the recommended (default) settings. There is one exception to this rule: we enable detection of potentially unwanted applications (PUA) where available. However, we do not test PUA detection itself, and we use our own checks and analysis of the samples to ensure that no verified PUA samples are counted in our test scores.

In business environments, and with business products in general, it is usual for products to be configured by the system administrator in accordance with the vendor’s guidelines. We therefore allow each vendor to configure its own product. However, as with consumer products, we enable PUA detection but do not count verified PUA samples in the test scores.
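To illustrate the scoring policy described above, here is a minimal sketch of how verified PUA samples could be excluded before a protection score is calculated. This is not our actual tooling; the data structures and function names are hypothetical.

```python
# Minimal sketch of excluding verified PUA samples from a test score.
# The sample structure and helper names are hypothetical, for illustration only.

def compute_protection_score(results, verified_pua_hashes):
    """results maps sample hash -> True if the product blocked the sample.

    Verified PUA samples are removed before the score is calculated,
    so neither blocking nor missing a PUA affects the result.
    """
    scored = {h: blocked for h, blocked in results.items()
              if h not in verified_pua_hashes}
    if not scored:
        return 0.0
    return 100.0 * sum(scored.values()) / len(scored)

# Example: 3 malware samples (2 blocked) plus 1 verified PUA that is ignored.
results = {"mal1": True, "mal2": True, "mal3": False, "pua1": True}
print(compute_protection_score(results, {"pua1"}))  # ~66.7, the PUA is not counted
```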

How does the Real-World Protection Test differ from “traditional” static on-demand detection tests?

The “Real-World Protection Test” is a joint project of AV-Comparatives and the University of Innsbruck’s Faculty of Computer Science and Quality Engineering. It mimics a user surfing the Internet and opening links received by email. This attack vector allows us to test all of a product’s protection features: in addition to signature-based and heuristic file scanning (local or cloud-based), every defence mechanism developed by the vendor, such as web filters and behaviour blockers, comes into play.

The Real-World Protection Test thus assesses the most important aspect of a security program: whether it prevents malware from compromising the system. The test allows all available protection features to come into play, and products may download updates before each test case, so it shows how well each product protects the system under optimal conditions. Static online multi-scanner services have their uses, but they cannot replicate the protection features of full security products. Firstly, the online scanning process has limitations, so its results may not even match those of on-demand scans performed by the full products. Secondly, online scanners do not employ all the features used by full security products, such as behavioural detection. It is therefore very likely that, in real life, a full security program would protect against a malware sample not detected by an online multi-scanner service.
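For illustration, the sequence of a single test case might be sketched as follows. This is a heavily simplified, hypothetical outline, not the actual test framework; all class and method names here are placeholders.

```python
# Heavily simplified sketch of one Real-World Protection Test case.
# Every helper here is a hypothetical placeholder, not the real framework.

from enum import Enum

class Outcome(Enum):
    BLOCKED = "blocked"           # some protection layer stopped the threat
    USER_DEPENDENT = "user"       # the product asked the user to decide
    COMPROMISED = "compromised"   # the malware ran and altered the system

class ProductStub:
    """Stand-in for a real security product under test."""
    def update(self):              # products may fetch updates before each case
        pass
    def visit(self, url):          # simulate a user opening the malicious link
        pass
    def prompted_user(self):
        return False
    def system_compromised(self):
        return False

def run_test_case(product, url):
    product.update()
    product.visit(url)
    # Whichever layer intervenes (URL filter, signature, heuristic, cloud
    # lookup, behaviour blocker) counts equally: only the end state matters.
    if product.prompted_user():
        return Outcome.USER_DEPENDENT
    if product.system_compromised():
        return Outcome.COMPROMISED
    return Outcome.BLOCKED

print(run_test_case(ProductStub(), "http://example.test/malicious"))
```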

Do you assess the user interface of the programs you test?

Our summary reports include a usability report for each product. This describes what it’s like to install the program, and to find and use essential features. We do not give any awards for user-interface design, but the reviews help you decide for yourself how easy each product is to use.

How do you ensure your samples/test cases are fresh and valid?

In our Real-World Protection Test Series, we test continuously (24/7). When a malware sample is collected, e.g. by our honeypot network, it is analysed in a contained environment to verify that it runs. Verified malware is then submitted immediately to the testing queue, where it runs simultaneously against all products, each in its own isolated environment. This avoids the unfair situation in which program A is tested earlier than program B, and program B scores better simply because its vendor has had more time to develop detection methods, or to obtain them via online multi-scanning services or other threat-intelligence sources.

Most of our real-world samples are tested within a few minutes of discovery (most of them even before they start to show up on VirusTotal and similar services). Products which by default block any new/unknown/unsigned file might achieve a high detection score by doing so. However, because the same test system is used to check for false alarms with clean files, such products will also have higher false-alarm rates.
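As a rough sketch of the simultaneous-testing idea, the following fans one verified sample out to every product’s queue at the same moment, so that no vendor gets extra reaction time. All names are illustrative; the real infrastructure is of course far more involved.

```python
# Sketch of dispatching a verified sample to all products at the same time,
# so no vendor gains extra reaction time. All names here are illustrative.

import queue
import threading

def run_in_isolated_vm(product, sample):
    # Placeholder for executing the sample on the product's isolated test system.
    print(f"{product}: testing {sample}")

def product_worker(product, q):
    while True:
        sample = q.get()
        if sample is None:                 # sentinel: no more samples
            break
        run_in_isolated_vm(product, sample)

products = ["A", "B", "C"]
queues = {p: queue.Queue() for p in products}
threads = [threading.Thread(target=product_worker, args=(p, queues[p]))
           for p in products]
for t in threads:
    t.start()

# A verified sample is handed to every product simultaneously.
for p in products:
    queues[p].put("sample-0001")
for p in products:
    queues[p].put(None)
for t in threads:
    t.join()
```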

In our Malware Protection Test, we use prevalent malware samples that appeared in the field in the days and weeks before the test starts. This means that each sample’s last-seen date is at most four weeks, and at least a few days, before the test. We would expect any decent security product to detect (almost) all samples in the test, when tested on-execution and with a cloud connection. For informational purposes, the Malware Protection Test provides detection rates for both online and offline scans, which illustrates how heavily different products rely on their respective cloud services. In some cases, the offline detection rates can be as little as half of those of the online scans. However, these offline results do not count towards the awards for the test.
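The freshness window can be expressed as a simple date check. The sketch below uses illustrative bounds: “a few days” is taken as three days here, which is an assumption for the example, not a published figure.

```python
# Sketch of the freshness window described above: a sample qualifies if its
# last-seen date is at least a few days and at most four weeks before the
# test start. The exact lower bound below is an illustrative assumption.

from datetime import date, timedelta

MIN_AGE = timedelta(days=3)    # "a few days" (illustrative value)
MAX_AGE = timedelta(weeks=4)   # four weeks

def is_fresh(last_seen: date, test_start: date) -> bool:
    age = test_start - last_seen
    return MIN_AGE <= age <= MAX_AGE

print(is_fresh(date(2024, 5, 1), date(2024, 5, 10)))   # True: 9 days old
print(is_fresh(date(2024, 3, 1), date(2024, 5, 10)))   # False: older than 4 weeks
```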