I am a publisher/journalist and would like to use the test results that AV-Comparatives provide on its website. Which rules do I have to follow?
You may use the published test results free of charge, but you should comply with the following rules:
- give the source (av-comparatives.org) and the date when the test was performed (e.g. March 2018). You should always use the most recent test results
- we suggest that you let us proof-read your article before you publish it, to make sure that the results are interpreted correctly and not misused
- we would like to know in which magazine our results are going to be published
I am a publisher/journalist and would like AV-Comparatives to test some products to be published in a magazine or similar. Is that possible?
Yes, it is possible. Please contact us for details and requirements.
What is the best IT-security program?
This is a question we are asked very frequently, but there is no definitive answer to it. Users have different needs, so they need different IT-security products. Cyberattacks are constantly evolving. Thus a security program that provided excellent protection against the threats of some years ago might not be the most effective at stopping today’s attacks.
Ease of use is also a consideration – if a program is too complicated for you to understand, you will not get the best out of it. Additionally, some programs have extra features that not all users will need. You could liken this to the idea of “the best car”: a Ferrari is great if you are, say, young and single, but not very practical if you have a family. Hence there is no such thing as a “best car” for everyone.
Consequently, AV-Comparatives does not recommend specific products. Instead we try to provide users with the information they need to decide for themselves. Please make your own decision, by having a look at the reports.
Do you have any suggestions for choosing a security product?
Whilst AV-Comparatives does not recommend any specific products, we can offer guidelines for finding a product that’s right for you. Firstly, bear in mind that protection scores, whilst obviously important, are not the only thing to be considered. Most of the tested products achieve high protection rates, and can thus be regarded as providing acceptable protection. We would suggest considering other factors as well when choosing a product. These include performance (how much the product slows your computer down), false positives, ease of use, compatibility with your operating system/other software, configuration options, support, and price.
We suggest you start by looking at our most recent test report(s)* for the device/platform that you want to protect, i.e. Windows, macOS or Android. Our reports cover test results for protection, performance and false positives, as well as a user-interface review. For a second opinion on these factors, we suggest consulting the test results of other well-known test labs. A product that gets good results across different test types by different labs is likely to be better than one that only scores highly in tests by one or two labs.
We then suggest that you install the most promising-looking program and try it out for a few days. Trial versions of paid-for programs are usually available, so you can test them before buying anything. You can use the harmless EICAR test file to see what sort of alerts the program shows when malware is detected. If you are happy with the program after trying it out, you can keep it, and pay a subscription if required. If not, uninstall it and try another product instead. If you are a non-expert user, it’s a good idea to ask an expert to help you with installing/uninstalling/testing programs.
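For readers comfortable with a little scripting, the EICAR test file mentioned above can be created with a few lines of Python. The standard defines a 68-byte test string; it is assembled here from two halves so that this snippet is not itself flagged by a scanner. The file name eicar.com is conventional, not required:

```python
# Assemble the standard 68-byte EICAR test string from two halves,
# so that this source file is not itself flagged by a scanner.
part1 = r"X5O!P%@AP[4\PZX54(P^)7CC)7}$"
part2 = "EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"
eicar = part1 + part2

assert len(eicar) == 68  # the EICAR standard defines exactly 68 bytes

# Write the test file; a working on-access scanner should alert on
# (or quarantine) it almost immediately.
with open("eicar.com", "w", newline="") as f:
    f.write(eicar)
```

If your security program is working, you should see an alert as soon as the file is written (or when you scan it on demand); the file is completely harmless and can simply be deleted afterwards.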
* For consumer products for Microsoft Windows, there is an annual Summary Report. This is published at the end of each year, and includes the results of the different individual tests and a user-interface review. For business products for Microsoft Windows, we publish twice-yearly reports. These again include all the results of individual tests, and a usability report.
Can a free/basic antivirus program protect me as well as a paid-for full security suite?
Many security vendors make a number of different security products for each platform. These can range from a “super security suite” to a “basic” antivirus program. You shouldn’t automatically assume that a more sophisticated (and possibly more expensive) product will provide better protection than a simpler (and possibly cheaper) product.
As a general rule, all the security programs in a vendor’s consumer range include the same protection components. The more sophisticated/expensive versions include more features, support, and possibly a more sophisticated user interface.
When comparing different products within one vendor’s range, check to see what extra features you get with the more sophisticated/expensive product. You can then decide whether you need them. For example, device tracking, data backup and parental control features may be genuinely useful to some people, but not to others.
In our tests, we make every effort to ensure that product selection does not give any vendor an unfair advantage. Please bear in mind that one vendor’s antivirus product may include a feature only found in another vendor’s Internet security product. Thus, absolute equality in product selection is not possible. In principle, we test Internet security suites in our Consumer Main-Test Series. However, in some cases we may test a more basic antivirus version instead, for example if this is the most popular/widely used product in that vendor’s range (because this is what users are most interested in).
How are you funded?
AV-Comparatives receives funding for its work from a number of different sources:
- As IT security is a matter of concern to society as a whole, we receive funding for our work from public bodies such as the European Union and the regional government of Tyrol.
- We also work on joint projects with the University of Innsbruck, which contributes funds to this co-operative work.
- We perform tests of multiple products commissioned by bodies such as computer magazines and private companies looking for test results of products not yet included in our public tests.
- As regards individual tests and test series, we distinguish between the “Public Test Series” and “Commissioned Tests”. Participation in the Public Test Series is free of charge, but vendors can subscribe to our post-test consultancy and feedback services, and obtain licensing rights to our logos, for which a fee may be applicable. We reserve the right to include or not include any individual product in these tests, based on capacity, technical limitations, and demand. Commissioned tests are clearly shown as such, along with the name of the commissioning body. This is audited and verified each year as part of our ISO 9001:2015 certification. AV-Comparatives holds a TÜV Austria certificate for its management system for the scope “Independent Tests of Anti-Virus Software”.
As well as making test results and reviews available to the public for free, we also provide other free services to our readers. These include, for example, our yearly Android research studies of hundreds of security apps, the AVC Undroid app analyser, our free security advice, and our security market overviews and opinions.
We do not take money for advertisements and our website is free of any ads or paid referrer links (so-called “affiliate commissions”).
For more information, please read here: https://www.av-comparatives.org/funding/
What is a “test series”?
A test series puts all the tested products through a number of different individual tests running across the whole year. The reasons for running a series of tests are as follows. A single test can only assess one aspect of a security program’s abilities. However, in real life, malware can infect a system via a number of different vectors. These include Internet downloads, external drives, and targeted attacks.
There are also other factors to be considered. These include false positives, and the impact on system performance (speed) caused by a security application. A test series subjects all the programs to a number of different tests, covering different scenarios and program qualities. This ensures that each security program provides all-round protection, without slowing the system down or plaguing the user with false alarms.
How can I be sure that your tests are independent and unbiased?
AV-Comparatives was the first IT-security testing lab certified by the European Institute for Computer Anti-Virus Research (EICAR). We are ISO 9001:2015-certified by TÜV Austria (a technical standards body) for the scope “Independent Tests of Anti-Virus Software”. Our operations are audited by TÜV Austria every year, to ensure that our tests are unbiased and meet the highest quality-control standards.
My security program scores well in your tests, but not very well in tests by other test labs. Why is this?
Different test labs use different malware samples and test methodologies. Thus they provide (at least slightly) different perspectives on each program’s effectiveness. All AV-Comparatives’ test reports include a description of the methodology used, and we recommend reading this, along with all other parts of the report. This will give you a full picture of how the test was conducted.
Which settings and conditions are used for your tests?
All security solutions run in a separate, isolated environment (each with its own isolated Internet connection). A full description of the settings and conditions for each test is given in the respective test report.
We test all consumer products with their default settings, since surveys reveal that most home users keep their security programs at advised (default) settings. There is one exception to this rule, namely that we enable detection of potentially unwanted applications (PUA) if available. However, we do not test for PUAs, and use our own checks and analysis of samples to ensure that no verified PUA samples are counted in our test scores.
In business environments, and with business products in general, it is usual for products to be configured by the system administrator, in accordance with the vendor’s guidelines. Thus we allow all vendors to configure their respective products. However, as with consumer products, we enable PUA detection but do not count verified PUA samples in the test scores.
How does the Real-World Protection Test differ from “traditional” static on-demand detection tests?
The “Real-World Protection Test” is a joint project of AV-Comparatives and the University of Innsbruck’s Faculty of Computer Science and Quality Engineering. It mimics a user surfing the Internet and opening links in emails. This attack vector allows us to test all of a product’s protection features. As well as signatures and heuristic file scanning (locally or in the cloud), any defence mechanism developed by the vendor, such as web filters and behaviour blockers, is tested.
The Real-World Protection Test thus assesses the most important aspect of a security program, i.e. whether it will prevent malware from compromising the system. The test allows all available protection features to come into play, and products are able to download updates before each test case. Thus, it shows how well each product protects the system under optimal conditions. Static online multi-scanner services have their uses, but they cannot replicate the protection features of full security products. Firstly, there are limitations to the online scanning process. Hence its results may not even be identical to those of on-demand scans performed by full products. Secondly, online scanners do not employ all the features used by full security products, such as behavioural detection. It is very likely that in real life, a full security program would be able to protect against a malware sample not detected by an online multi-scanner service.
For which operating systems do you test IT-security programs?
We currently test IT-security programs for Microsoft Windows (both consumer and business products), macOS and Android. Please see the respective test reports for each platform.
Do you assess the user interface of the programs you test?
Our summary reports include a usability report for each product. This describes what it’s like to install the program, and to find and use essential features. We do not give any awards for user-interface design, but the reviews help you decide for yourself how easy each product is to use.
How do you ensure your samples/test cases are fresh and valid?
In our Real-World Protection Test Series, we test continuously (24×7). When a malware sample is collected, e.g. in our honeypot network, it is analysed in a contained environment to check whether it runs. Verified malware is then submitted immediately to the testing queue. Here it runs simultaneously for all products, in isolated environments. This avoids an unfair situation in which program A is tested earlier than program B; the result of this could be that program B scores better than program A because its vendor has had more time to develop detection methods, or to obtain them via online multi-scanning services or other threat-intelligence sources. Most of our real-world samples are tested within a few minutes of discovery (many even before they start to show up on VirusTotal etc.). Products which by default block any new/unknown/unsigned file might get a high detection score by doing so. However, because the same test system is used to check for false alarms with clean files, such products will also have higher false-alarm rates.
In our Malware Protection Test, we use prevalent malware samples that appeared in the field in the days and weeks before the test starts. This means that each sample’s last-seen date is at most four weeks and at least a few days old. We would expect any decent security product to detect (almost) all samples in the test (when tested on-execution and with a cloud connection). For informational purposes, the Malware Protection Test provides detection rates for both online and offline scans. This illustrates how much different products rely on their respective cloud services. In some cases, the offline detection rates can be as little as half of those in online scans. However, these results do not count towards the awards given for the test.