
The “undead” WildList

AV-Test.org presented an interesting paper at the Virus Bulletin conference about the current desolate state of the WildList and made suggestions on how to improve it. Already at the AV Testing Workshop in Reykjavik in 2007, most of the technical staff of the AV vendors admitted that the WildList is well accepted and loved because it is easy to pass tests based on the WildList and because it is good for marketing (100% detection*). So you may ask: if it is so easy to pass, why do some vendors still fail to detect all samples from the WildList? The reasons could be errors by the testers or temporary bugs in the software, but more often and more likely it is because a) more is required to pass than just detecting all samples (e.g. no false positives in the case of VB100), b) very old threats that were on the WildList 10 years ago (e.g. boot-sector viruses) are sometimes still included, and probably also because not all vendors receive the WildCore collection and are therefore not tested under the same conditions. So, who wants to keep the WildList alive? Of course (besides marketing** people and certification bodies, which get a lot of money for paid tests that are quite easy to run [and, for AV vendors, to pass]) all those vendors that know their product would not score well in tests using larger test sets.

* based on the WildList xx/200x (most buyers reading this on the box do not understand what it means, but 100% sounds good)

** reading e.g. “detects 90%” or “detects 98%” of malicious software on a box of course does not sound as good and reliable as “detects 100%”

P.S.: AV-Comparatives does not provide tests based on the WildList.

I am NOT saying that ITW tests are completely useless, nor am I saying that tests based on large test sets give you the best insight. Loyal readers of AV-Comparatives know what we always say: do not look at or rely on just one single test from one testing site, and do not rely on test results alone; look at the bigger picture, consult as many independent tests and reviews as you can find, and then form your own opinion by trying out the various products yourself on your own system.
