
Fee structure

As announced publicly several times last year, AV-Comparatives has provided its services for a fee since 2008. The fee is NOT for the tests themselves, but for the various services we provide (bug reports, usage of our material/logos in reprints and marketing material, sample lists, false alarms reported after the public tests, internal services, etc.). Vendors are not obliged to pay for the FULL service package; if a vendor prefers not to receive all services, the fee is lower. All major testers get paid for the services they provide; this is common practice. Of course, the fee has no influence at all on the results and serves only to cover our expenses and the time involved.
For users and magazines the public results are free of charge, as long as the source is given. If we have time and, for example, a magazine wants us to do additional work or test products that have not already been tested, this costs money which has to be covered by the magazine (this covers only the service provided to the magazine; the vendor of the tested product will not receive any service).

Keeping AV-Comparatives alive

For those who have not already read this information on our website:
Starting from 2008, AV-Comparatives will – like most other testers – no longer provide its services for free, as the expenses for the site and all the work involved are too high. Vendors will not pay for the big tests themselves (and payment of course has no influence of any kind on the test results or the award system) – the participation fee covers e.g. the use of the achieved award for marketing purposes and other services (such as receiving false positives and missed malware after the tests, bug reports, etc.). The number of participants will be limited to about 18 vendors.

What’s coming next

Here is an overview of what is coming next on AV-Comparatives:

– The retrospective test (which will be released on 1st December) will be slightly changed and further improved. Some vendors have been suggesting certain changes to the test method for about half a year; as those methods are a) easier for us to carry out than our current ones, b) seem to be widely accepted by other vendors as well, and c) should not change much anyway, these new methods will be used in this upcoming retrospective test. The changes will be described in the report. At least for this retrospective test, the level ranges will likely remain unchanged.

– The summary report of 2007 will be released during December, containing a brief summary of all tests and some notes about the various products.

– In 2008 we will probably start to charge fees for the various services we provide, and we will probably drop some vendors from the regular tests and instead include other, better-known vendors that are often requested. The number of tested products will probably be limited to 16–18.

– Behavioural testing is becoming more and more important. There are currently discussions to find the best (vendor-independent) practice for such tests and how testers can perform them. As such tests are not trivial and require a lot of resources, it may take a while until we do them.

– If manpower and resources permit, we would also like to run, from time to time, other tests showing how well products protect against malicious actions in various scenarios. But first we will continue to focus on maintaining and further improving the quality and methods of the current tests.

An interesting paper about the current desolate state of the “undead” WildList was presented at the Virus Bulletin conference, together with suggestions on how to improve it. Already at the AV Testing Workshop in Reykjavik in 2007, most of the technical staff of the AV vendors admitted that the WildList is well accepted and loved because tests based on the WildList are easy to pass and because it is good for marketing (100% detection*). So you may ask: if it is easy to pass, why do some vendors fail to detect all samples from the WildList? The reasons could be errors by the testers or temporary bugs in the software, but more often and more likely it is because a) more criteria than just detecting all samples are needed to pass (e.g. no false positives in the case of VB100), b) very old threats that were on the WildList 10 years ago (e.g. boot-sector viruses) are sometimes still included, and probably also because not all vendors receive the WildCore collection and are therefore not tested under the same circumstances. So, who wants to keep the WildList alive? Of course (besides marketing** people and certification bodies, which earn a lot of money for tests that are quite easy to run [and, for AV vendors, easy to pass]), all those vendors that know their product would not score well in tests using larger test sets.

