Introducing AV-Comparatives’ Malware Protection Test

The Malware Protection Test is an enhancement of the File Detection Test which we performed in previous years. It assesses a security program's ability to protect a system against infection by malicious files. What makes this test unique is that, in addition to checking detection in scans, it also assesses each program's last line of defence: any samples that are not detected (e.g. on access) are executed on the test system, with Internet/cloud access available, to allow features such as behavioural protection to come into play.
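
To make the procedure concrete, here is a minimal sketch of the per-sample flow described above, written in Python. All names (SecurityProduct, execute_with_internet, system_compromised) are hypothetical placeholders, not AV-Comparatives' actual test harness.

class SecurityProduct:
    """Stand-in for the security product under test."""

    def scan_on_access(self, sample_path: str) -> bool:
        # Placeholder: in the real test, the installed product scans the
        # file when it is copied to, or accessed on, the test system.
        raise NotImplementedError


def execute_with_internet(sample_path: str) -> None:
    # Placeholder: run the sample with Internet/cloud access available,
    # so behavioural and cloud-based protection can come into play.
    raise NotImplementedError


def system_compromised() -> bool:
    # Placeholder: compare the system against a known-clean snapshot.
    raise NotImplementedError


def run_test_case(product: SecurityProduct, sample_path: str) -> bool:
    """Return True if the product protected the system against the sample."""
    # Stage 1: detection in scans / on access.
    if product.scan_on_access(sample_path):
        return True
    # Stage 2: last line of defence. The undetected sample is executed.
    execute_with_internet(sample_path)
    return not system_compromised()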

If a product neither prevents the changes made by a particular malware sample nor reverses all of them within a given time period, that test case is counted as a miss. If the product asks the user to decide whether a malware sample should be allowed to run, and the system would be compromised by the worst possible user decision, we rate the case as "user-dependent".
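
The verdict logic in the paragraph above can be summarised in a short sketch; the Verdict enum and the parameter names below are our own illustrative choices, with only "miss" and "user-dependent" taken from the test's terminology.

from enum import Enum


class Verdict(Enum):
    PROTECTED = "protected"
    USER_DEPENDENT = "user-dependent"
    MISS = "miss"


def score_test_case(changes_prevented: bool,
                    changes_reversed_in_time: bool,
                    user_was_asked: bool,
                    worst_choice_compromises: bool) -> Verdict:
    # A prompt counts as "user-dependent" only if the worst possible
    # user decision would leave the system compromised.
    if user_was_asked and worst_choice_compromises:
        return Verdict.USER_DEPENDENT
    # Protection means the changes were either prevented outright or
    # fully reversed within the allowed time period.
    if changes_prevented or changes_reversed_in_time:
        return Verdict.PROTECTED
    return Verdict.MISS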

The test set used for this test consisted of about 38,000 malware samples, assembled after consulting telemetry data with the aim of including recent, prevalent samples that are endangering users in the field. Malware variants were clustered to build a more representative test set (i.e. to avoid over-representation of the same malware in the set). AV-Comparatives receives up to 300,000 new samples per day, but most of these are merely variants of the same malware (differing only in hash), or are unsuitable for the test (e.g. because they do not run on an up-to-date Windows 10 system). Our aim in collecting samples for the test set is quality rather than quantity.
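
As an illustration of the clustering step, the sketch below groups samples by a similarity key and caps each cluster, so that near-identical variants (the same malware with different hashes) are not over-represented. The cluster_key function and the cap of three per cluster are assumptions for illustration; real pipelines might cluster on fuzzy hashes or behavioural features.

from collections import defaultdict


def build_test_set(samples, cluster_key, max_per_cluster=3):
    """Pick at most max_per_cluster representatives per variant cluster.

    samples: iterable of sample records; cluster_key: maps a sample to a
    cluster label (e.g. a family/variant identifier).
    """
    clusters = defaultdict(list)
    for sample in samples:
        clusters[cluster_key(sample)].append(sample)
    test_set = []
    for members in clusters.values():
        # Capping each cluster keeps the set representative rather than
        # dominated by thousands of rehashes of the same malware.
        test_set.extend(members[:max_per_cluster])
    return test_set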

The current test helped several vendors to find and fix previously unknown bugs in their products.

A breakdown of the malware types used in our test set is shown in the graphic below. The statistics are based on Microsoft's malware naming scheme.
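
Microsoft's scheme names detections as Type:Platform/Family.Variant!Suffix (for example, Trojan:Win32/Emotet.A!ml). As a rough sketch of how per-type statistics can be tallied from such names, the regular expression and example names below are illustrative assumptions rather than part of our test methodology:

import re
from collections import Counter

# Type:Platform/Family[.Variant][!Suffix]
NAME_RE = re.compile(
    r"^(?P<type>[^:]+):(?P<platform>[^/]+)/(?P<family>[^.!]+)"
    r"(?:\.(?P<variant>[^!]+))?(?:!(?P<suffix>.+))?$"
)


def malware_type_stats(detection_names):
    """Count detections per malware type (Trojan, Ransom, Worm, ...)."""
    types = Counter()
    for name in detection_names:
        match = NAME_RE.match(name)
        if match:
            types[match.group("type")] += 1
    return types


# Example:
# malware_type_stats(["Trojan:Win32/Emotet.A!ml", "Ransom:Win32/WannaCrypt"])
# -> Counter({'Trojan': 1, 'Ransom': 1})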

The table below shows the most common families of each type of malware in the test set: