Real-world benchmarks are about more than just competing for big numbers

Over the last year, we have been hard at work improving and polishing our internal benchmarks, which cover a range of applications in the content creation, engineering, and scientific computing fields, and making them available to the general public. In fact, while most of our benchmarks still carry a "BETA" label, eight of them are already available for you to download and run right now!

But why are we spending so much effort on this project? After all, this kind of development takes a significant amount of time and is often much harder than you might realize, since applications like Photoshop, Premiere Pro, and DaVinci Resolve are not made to be used this way. They are developed to do the jobs they were designed for, not for us (or anyone else) to run performance tests with.

Why are real-world benchmarks necessary?

Real-world benchmarks are important for a number of reasons, but there are three in particular that we feel make them worth the investment of time and money:

1) They provide a standardized and accurate method to evaluate hardware performance

While this is somewhat of an obvious use for a benchmark, it is surprising how often we get asked why we go through all this effort when we could just use Cinebench, Geekbench, or the plethora of available game benchmarks. To be fair, Cinebench is a great benchmark for evaluating CPU-based rendering performance in Cinema 4D, but its results mean almost nothing if the software you actually use is Adobe Photoshop.

Even applications made by the same company can use the CPU, GPU, and other hardware in significantly different ways. For example, the best CPU for After Effects is not at all the best CPU for Premiere Pro. If two closely related applications can differ this much, you can imagine how inaccurate a game benchmark can be for anything beyond other games built on the same engine.

With the sort of targeted benchmarks we are creating, you can get a much more accurate idea of how different hardware will perform in the real world, which in turn helps people get exactly the right hardware for their workflow. Of course, since you can do a million different things in most applications, even our benchmarks won't be 100% accurate for everyone, but they are vastly better than making decisions based on a completely unrelated benchmark.

2) They allow us to democratize hardware testing

While we believe that the testing and hardware evaluation we do is incredibly useful, we cannot feasibly test everything. As just one example, since we do not sell laptops, it is hard for us to spend the effort and funds to benchmark laptops across various applications. It simply isn't something we can justify financially.

However, there are a ton of hardware reviewers out there who just need a little bit of help to get beyond gaming and synthetic benchmarks. In the end, part of our mission as a company is to help empower creators, and that extends beyond our direct customer base. If our real-world benchmarks can indirectly help people get exactly the right laptop for their workflow, that is definitely something we want to do!

3) They greatly improve the troubleshooting process

If you are having performance or other issues in an application like Photoshop, the cause could be any of several factors: hardware, software, your workflow, or simply expectations that do not match reality. A standardized benchmark like the ones we are developing lets you greatly narrow down which of those factors is causing the problem.

For example, if Photoshop is slower than you expect on a new system but our benchmark gives you a result that is right where we would expect it to be, you can be reasonably confident that your system and Photoshop itself are working properly and that the issue comes down to workflow or expectations. On the other hand, if the scores are way off, you know that either your installation of Photoshop has a problem or something is broken with the system itself.

This is something our support department is using more and more as our benchmarks become available to the public, and it can dramatically reduce the time to resolution when a customer is having an issue with their system. And of course, this benefit extends to everyone, not just our own customers.
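To make that triage logic concrete, here is a minimal sketch of the comparison in Python. The threshold and the messages are purely hypothetical illustrations; in practice, the expected score would come from published results for comparable hardware.

```python
# Hypothetical triage helper: compare a measured benchmark score against the
# lower bound we would expect for similar hardware. The threshold below is a
# made-up illustrative number, not a real published result.

EXPECTED_MINIMUM_SCORE = 950.0  # hypothetical expected score for this class of system

def triage(measured_score: float) -> str:
    """Return a rough diagnosis based on where the measured score falls."""
    if measured_score >= EXPECTED_MINIMUM_SCORE:
        # Hardware and application are performing as expected, so a perceived
        # slowdown is more likely a workflow or expectation issue.
        return "System and application look healthy; review workflow and expectations."
    # A score well below the expected range points at the install or the system.
    return "Score is below the expected range; check the application install or hardware."

print(triage(1020.0))  # prints the healthy message
print(triage(600.0))   # prints the below-range message
```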

What is coming in the future for our benchmarks?

While the benchmarks we already have available are great, there are a couple of projects coming up that we are very excited about.

First, we are exploring releasing a paid version of our benchmarks for commercial use. We do not anticipate removing the free versions, but a paid version gives us the financial freedom to add features that are useful for hardware reviewers, computer manufacturers, and other commercial users, such as command-line execution, log file generation, and more official support. This is a big step towards democratizing hardware testing, as it will give any reviewer the ability to quickly, easily, and effectively test various hardware configurations in real-world applications.
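As a rough illustration of why command-line support matters to reviewers, here is a short Python sketch that launches several unattended benchmark runs and collects a log file from each. The executable name, flags, and log paths are all assumptions for the sake of the example; the actual interface of the paid version has not been finalized.

```python
import subprocess
from pathlib import Path

# Hypothetical benchmark CLI and flags, purely for illustration; the real
# command-line interface for the paid benchmarks is not defined here.
BENCHMARK_EXE = "benchmark_runner.exe"
RUNS = 3  # repeat runs to average out run-to-run variance

log_dir = Path("logs")
log_dir.mkdir(exist_ok=True)

for i in range(1, RUNS + 1):
    log_file = log_dir / f"run_{i}.log"
    # Launch one unattended benchmark pass and write its results to a log
    # file, so an entire test session can run without anyone at the keyboard.
    subprocess.run(
        [BENCHMARK_EXE, "--preset", "standard", "--log", str(log_file)],
        check=True,
    )
    print(f"Run {i} complete; log written to {log_file}")
```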

Another upcoming project is the ability to upload and browse results. This is a large and complex undertaking, but it will allow individual users to directly compare their system to a range of other hardware configurations. As we mentioned in the previous section, we cannot feasibly test every hardware combination out there, but this should go a long way towards mitigating that. While we are not 100% sure what this will look like, the results browser for OctaneBench is the type of system we envision.

Overall, we are very excited to continue to develop our benchmarks. We feel that hardware reviews are often focused too heavily on gaming or synthetic benchmarks, and we want to do our part to help fix that. As always, if you have any suggestions or feedback on the work we are doing, let us know in the comments!
