SubVRsive, experts in VR, AR, and 360 video content, worked with Puget Systems to design workstations that help them create engaging content for customers including Google, Walmart, Ford, AMD, and Showtime Sports.


NVLink is one of the more interesting features of NVIDIA’s new RTX GPUs. In this post I’ll take a look at the NVLink performance between two RTX 2080 GPUs, along with a comparison against the single-GPU testing I’ve recently done. The testing will be a simple look at the raw peer-to-peer data transfer performance and a couple of TensorFlow job runs with and without NVLink.
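As a rough sketch of that raw peer-to-peer check (assuming the NVIDIA driver is installed and the CUDA samples have been built locally; the sample path will vary by system):

```shell
# Show NVLink status and link capabilities for each GPU
nvidia-smi nvlink --status

# Measure peer-to-peer bandwidth and latency between the two GPUs.
# p2pBandwidthLatencyTest ships with the CUDA samples; build it first.
./p2pBandwidthLatencyTest
```

With NVLink active, the peer-to-peer rows of the bandwidth matrix should be well above what PCI Express alone can deliver.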

GPU based renderers like OctaneRender and Redshift make use of the video cards in a computer to process ray tracing and other calculations in order to create photo-realistic images and videos. The performance of an individual video card, or GPU, is known to impact rendering speed – as is the number of video cards installed in a single computer. But what about the connection between each video card and the rest of the system? This interconnect is called PCI Express and comes in a variety of speeds. In this article, we will look at how benchmarks for these programs perform across PCI-E 3.0 and 2.0 with x1, x4, x8, and x16 lanes.

We found previously that stacking multiple RTX 2080 video cards next to each other for multi-GPU rendering led to overheating and significant performance throttling, due to the dual-fan cooler NVIDIA has adopted as the standard on this generation of Founders Edition cards. Now that manufacturers like Asus are putting out single-fan, blower-style cards we can repeat our testing to see if the throttling issues are resolved and find out how well these video cards scale when using 1, 2, 3, or even 4 of them for GPU-based rendering in OctaneRender and Redshift.

We take a look at the differences between the Intel Z370 chipset, launched in 2017, and the updated Z390 that launched in 2018. What features does the newer version add?

Today Intel has officially announced the launch of new mainstream desktop processors, including the first Core i9 branded chip for this market segment. We are testing these processors now, and are excited about what we have found so far, but cannot publish performance data until October 19th.

There was a lot of excitement when it was first announced that GeForce RTX 2080 and 2080 Ti cards would have NVLink connectors, because of the assumption that it would allow them to pool graphics memory when used in pairs. Digging into the functionality of the NVLink connection on these cards, however, things are not as straightforward as folks may have hoped.

Lightroom Classic CC saw dramatic performance improvements with higher core-count CPUs, but the 2990WX in particular has a staggering 32 cores. Will Lightroom Classic be able to take advantage of these extremely high core counts, or have we reached the point of diminishing returns?

After choosing a 10-bit-per-channel graphics card (AMD Radeon Pro / NVIDIA Quadro) and connecting it to a 10-bit-per-channel monitor, there is a setting in Photoshop you should enable to create a 30-bit workflow.
Are the NVIDIA RTX 2080 and 2080 Ti good for machine learning?
Yes, they are great! The RTX 2080 Ti rivals the Titan V for performance with TensorFlow. The RTX 2080 seems to perform as well as the GTX 1080 Ti (although the RTX 2080 only has 8GB of memory). I’ve done some testing using **TensorFlow 1.10** built against **CUDA 10.0** running on **Ubuntu 18.04** with the **NVIDIA 410.48 driver**.
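Before running benchmarks like these, it's worth confirming that TensorFlow actually sees both GPUs. A minimal sketch, assuming the same TensorFlow 1.10 / CUDA 10.0 setup described above (`device_lib` is TensorFlow's internal device-listing module; it requires a working GPU build to report GPU devices):

```python
# Sanity check: list the GPU devices visible to TensorFlow.
# Assumes TensorFlow 1.10 with CUDA 10.0 and the NVIDIA 410.48 driver.
from tensorflow.python.client import device_lib

def list_gpus():
    """Return the names of all GPU devices TensorFlow can use."""
    return [d.name for d in device_lib.list_local_devices()
            if d.device_type == 'GPU']

print(list_gpus())
```

If a card is missing from the list, the driver or CUDA install is usually the culprit rather than TensorFlow itself.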