How do you ensure your samples/test cases are fresh and valid?
In our Real-World Protection Test Series, we test continuously (24×7). When a malware sample is collected, e.g. in our honeypot network, it is analysed in a contained environment to check whether it actually runs. Verified malware is then submitted immediately to the testing queue, where it runs simultaneously against all products, each in an isolated environment. This avoids the unfair situation in which program A is tested earlier than program B; program B could then score better than program A simply because its vendor has had more time to develop detection methods, or to obtain them via online multi-scanning services or other Threat Intelligence sources. Most of our real-world samples are tested within a few minutes of discovery, often before they even start to show up on VirusTotal and similar services. Products which by default block any new/unknown/unsigned file might achieve a high detection score by doing so. However, because the same test system is used to check for false alarms with clean files, such products will also show higher false-alarm rates.
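The fairness principle described above can be sketched in code. This is a minimal illustration, not the actual test harness: the `Sample`, `TestRun`, and `enqueue_for_all_products` names are hypothetical, and the real system involves sandbox detonation and isolated virtual machines per product. The key idea shown is that a verified sample is queued for every product with a single shared timestamp, so no vendor gets extra lead time.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Sample:
    sha256: str
    verified_malicious: bool  # set after detonation in a contained environment

@dataclass
class TestRun:
    sample_sha256: str
    product: str
    queued_at: datetime

def enqueue_for_all_products(sample: Sample, products: list[str]) -> list[TestRun]:
    """Queue a verified sample for every product at the same instant,
    so no vendor has more time than another to develop detection."""
    if not sample.verified_malicious:
        return []  # samples that did not run in the sandbox are discarded
    queued_at = datetime.now(timezone.utc)  # one timestamp shared by all runs
    return [TestRun(sample.sha256, product, queued_at) for product in products]
```

Because every `TestRun` carries the same `queued_at`, comparing products across runs never advantages the one tested later.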
In our Malware Protection Test, we use prevalent malware samples that appeared in the field in the days and weeks before the test starts; the samples' last-seen dates range from a few days to at most four weeks before the test. We would expect any decent security product to detect (almost) all samples in the test (when tested on-execution and with a cloud connection). For informational purposes, the Malware Protection Test also provides detection rates for both online and offline scans, which illustrates how heavily different products rely on their respective cloud services. In some cases, the offline detection rates can be as little as half of those in online scans. However, the offline results are not counted towards the awards given for this test.
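The freshness window described above can be expressed as a simple filter. This is an illustrative sketch only: the function name and the exact boundary values (`min_age_days`, `max_age_days`) are assumptions chosen to match "a few days" and "four weeks", not the actual selection tooling.

```python
from datetime import date

def is_fresh(last_seen: date, test_start: date,
             min_age_days: int = 3, max_age_days: int = 28) -> bool:
    """Accept a sample only if its last-seen date falls between a few
    days and four weeks before the test starts (bounds are illustrative)."""
    age_days = (test_start - last_seen).days
    return min_age_days <= age_days <= max_age_days
```

For example, a sample last seen nine days before the test start passes the filter, while one last seen two months earlier (or just one day earlier) is rejected.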