Introduction: The initial release of Unreal Engine 5.0 was aimed at game developers. Features such as Nanite, Lumen, MetaSounds, and World Partition improve the look and performance of games. However, some of these features did not support in-camera VFX (ICVFX), and other virtual production specific features from 4.27 were not brought over to 5.0.
For a long time here at Puget Systems, we have been putting together computer hardware recommendations for a wide range of applications. Much of that advice is gathered from the far corners of the internet by a range of different folks within our company, but our Labs team delves especially deep into certain software and workflows. As such, we are beginning to brand some of our recommended systems with an additional “Labs Certified” status – and I wanted to take a moment to give you some details on why we are doing this and what it means for you, our customer.
The benchmark continues to progress, and results are rolling in.
Intel recently launched a new processor in their Core X series, and it is novel in many ways. It combines a fairly high core count with very high clock speeds, at the cost of power consumption and high heat output. It is also very limited in availability, being offered only to select system integrators via a private auction. We got our hands on one in the first auction and have been putting it through several rounds of benchmarking to see if it is worth the price and hassle, as well as to determine if we will be offering it in our workstations.
With the RTX series of GPUs, NVIDIA has moved to using dual fans as the standard cooling layout on their GeForce and Titan video cards. This is a big change from past generations and has even bigger implications for using NVIDIA graphics cards in multi-GPU workstations. Let’s look at what changed, what it impacts, and what can be done to work around it.
Like many of you, I was glued to my computer screen this morning during NVIDIA’s live-stream of the GeForce RTX 20 series launch. But what exactly was shown today, and what does it mean for the future of gaming, virtual reality, and other GPU-based applications?
Here at Puget Systems, it is our goal to perform realistic testing on the software packages we tailor our workstations toward. Sometimes this is easy, sometimes it is harder… and sometimes a software maker already provides their own benchmark tool. That is the case with Maxon, makers of Cinema 4D as well as the free benchmark tool Cinebench. To determine whether we should use it, though, we have to ask some questions. Is Cinebench really a good benchmark for Cinema 4D? How do the tests it runs relate to real-world performance?
Pix4D is a photogrammetry application which can take sets of photographs and turn them into point clouds and 3D meshes, to make digital versions of real-world objects or locations. It supports both local processing on a workstation as well as uploading images to be processed in the cloud – but which is faster, and what advantages does each have?
Every time a new generation of CPUs is announced, I see a number of people writing about how they think it will be faster (or slower) than current technology because of the advertised specifications. CPU specs alone don’t tell the whole story, though, and comparing core count and clock speed across different brands or generations of processors is extremely misleading. Stop doing it!
We test a lot of software here at Puget Systems, and in most cases what we are looking for is what hardware lets a given program run the fastest – or in some cases, what is the most cost effective. If you can get 95% of the best possible performance for half the price that it would cost to get a full 100%, for example, that is often a compelling way to go. However, ANSYS Mechanical (and FLUENT) present a different challenge: how can you get the best performance within the limitations of the ANSYS licensing model?