Accelerated Parallel Computing
with NVIDIA Tesla and GPU Compute
Peak delivers the highest possible compute performance to developers, scientists, and engineers, advancing computing-enabled discovery and the solution of the world's most challenging computational problems.
Puget Systems has over 19 years of experience designing and building high-quality, high-performance PCs. Our emphasis has always been on reliability, performance, and quiet operation. We bring this experience to the HPC sector with our Peak family of workstations and servers. Through in-house testing, we do not blindly follow the industry -- we help lead it. The products below are starting points that we feel cover some of the most compelling areas where we can contribute to the HPC community. Do you have a project that needs some serious compute power, and you don't know where to turn? Let us help -- it's what we do!
Minimum noise, maximum performance, reliability, and usability. Puget Peak is an evolutionary step built on our custom systems experience. Genesis post-production performance, Summit server stability, Serenity silent design, Obsidian reliability, and even the diminutive Echo have all influenced Peak.
TeraFLOPS. Using Intel Xeon CPUs with the Intel MKL library, or the well-established CUDA platform and libraries, there is tremendous potential for applications that leverage the computing power of both the CPU and the GPU.
Ready for use. Peak systems are installed, configured and tested under load before they ship and will (optionally) arrive with the setup and tools you need to get started. Our CentOS setup will provide a configuration that can be the basis of your working environment.
Part of what makes our cooling both effective and quiet is that we specifically target the hot spots of each system. We place fans only where they are needed and only when they are needed. We then verify the final configuration with extensive testing, full load stress testing, and thermal imaging to ensure excellent cooling.
We know that these PCs are intended for heavy, long duration workloads. We have designed them for long life with 24/7 load, and that is our primary design goal. Through targeted cooling and high quality thermal solutions, we are able to achieve an excellent low noise level while maintaining the cooling necessary for long term high load. Even better, since we are implementing a custom cooling plan for each order, if you have a preference of whether you'd like us to tune more aggressively in either direction (towards even quieter operation, or more extreme cooling), all you have to do is let us know!
This is a short post showing a performance comparison between the RTX 2070 Super and several GPU configurations from recent testing. The comparison uses TensorFlow running ResNet-50 and Big-LSTM benchmarks.
I was able to spend a little time with an AMD Ryzen 3900X. Of course, the first thing I wanted to know was its double precision floating point performance. My two favorite applications for a "first look" at a new processor are Linpack and NAMD. The Ryzen 3900X is a pretty impressive processor!
Docker is a great workstation tool. It is mostly used for command-line applications or servers, but what if you want to run an application in a container AND use an X Window GUI with it? What if you are doing development work with CUDA and want OpenGL graphic visualization along with it? You CAN do that!
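A minimal sketch of the general pattern: share the host's X socket and DISPLAY environment variable with the container. The image tag and the presence of the NVIDIA runtime and `glxgears` (from mesa-utils) inside the image are assumptions here, not details from the post:

```shell
# allow local root-owned containers to talk to the host X server
# (this loosens X security for local connections only)
xhost +local:root

# share the X socket and DISPLAY so GUI windows render on the host desktop;
# --runtime=nvidia assumes the NVIDIA container runtime is installed, and
# nvidia/cudagl images bundle CUDA plus OpenGL libraries
docker run --rm -it \
    --runtime=nvidia \
    -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    nvidia/cudagl:10.1-runtime-ubuntu18.04 \
    glxgears
```

If the X forwarding is working, the spinning-gears window appears on the host display even though the program runs inside the container.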
Install TensorFlow 2 beta1 (GPU) on Windows 10 and Linux with Anaconda Python (no CUDA install needed)
Written on 06/26/2019 by Dr Donald Kinghorn
TensorFlow 2.0.0-beta1 is available now and ready for testing. What if you want to try it but don't want to mess with doing an NVIDIA CUDA install on your system? The official TensorFlow install documentation has you do that, but it's really not necessary.
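The conda-based route can be sketched as follows. The environment name is arbitrary, and the `cudatoolkit=10.0` pin is an assumption based on TF 2.0 beta's CUDA 10.0 requirement; the key idea is that conda supplies the CUDA runtime libraries inside the environment, so nothing is installed system-wide:

```shell
# create and activate a fresh environment (Python 3.7 was current for TF2 beta1)
conda create --name tf2-gpu python=3.7
conda activate tf2-gpu

# conda's cudatoolkit and cudnn packages provide the GPU libraries,
# so no system-wide NVIDIA CUDA install is needed
conda install cudatoolkit=10.0 cudnn

# the beta itself comes from pip
pip install tensorflow-gpu==2.0.0-beta1

# quick sanity check that TensorFlow sees the GPU
python -c "import tensorflow as tf; print(tf.__version__, tf.test.is_gpu_available())"
```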
Being able to run Jupyter Notebooks on remote systems adds tremendously to the versatility of your workflow. In this post I will show a simple way to do this by taking advantage of some nifty features of secure shell (ssh). What I'll do is mostly OS independent but I am putting an emphasis on Windows 10 since many people are not familiar with tools like ssh on that OS.
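The core trick is ssh local port forwarding; host name, user, and port below are placeholders, a sketch of the general idea rather than the post's exact commands:

```shell
# on the remote system: start a notebook server with no browser attached
jupyter notebook --no-browser --port=8888

# on the local machine: forward local port 8888 to the remote notebook server
# (-N means "no remote command, just forward"; this works the same way from
# the built-in OpenSSH client on Windows 10)
ssh -N -L 8888:localhost:8888 your-user@remote-host

# then open the notebook URL, including its access token, in a local browser:
#   http://localhost:8888/?token=<token printed by the remote jupyter>
```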
This post is a setup guide and introduction to the ssh client and server on Windows 10. Microsoft has a native OpenSSH client AND server on Windows. They are standard (and in stable versions) on Windows 10 since the 1809 "October Update". This guide should be helpful to both Windows and Linux users who want better interoperability.
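As a sketch of the kind of setup involved, the built-in OpenSSH components can be enabled from an elevated PowerShell prompt (the capability version strings shown are as shipped around the 1809 update and may differ on later builds):

```powershell
# run as Administrator: add the OpenSSH client and server capabilities
Add-WindowsCapability -Online -Name OpenSSH.Client~~~~0.0.1.0
Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0

# start the ssh server and have it start automatically at boot
Start-Service sshd
Set-Service -Name sshd -StartupType 'Automatic'
```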
Being able to get Docker and the NVIDIA-Docker runtime working on Ubuntu 19.04 makes this new and (currently) mostly unsupported Linux distribution a lot more useful. In this post I'll go through the steps that I used to get everything working nicely.
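Once Docker and the NVIDIA runtime are in place, a one-line smoke test confirms the whole stack; the CUDA image tag here is an assumption, not taken from the post:

```shell
# run nvidia-smi inside a CUDA base container; if the NVIDIA container
# runtime is wired up correctly, this prints the host's GPU status table
docker run --runtime=nvidia --rm nvidia/cuda:10.1-base nvidia-smi
```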
This post is the needed update to a post I wrote nearly a year ago (June 2018) with essentially the same title. This time I have presented more details in an effort to prevent many of the "gotchas" that some people had with the old guide. This is a detailed guide for getting the latest TensorFlow working with GPU acceleration without needing to do a CUDA install.
Ubuntu 19.04 will be released soon, so I decided to see if CUDA 10.1 could be installed on it. Yes, it can, and it seems to work fine. In this post I walk through the install and show that docker and nvidia-docker also work. I ran TensorFlow 2.0-alpha on Ubuntu 19.04 beta.
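After an install like this, two quick checks confirm the driver and toolkit are usable (no guide-specific paths are assumed beyond the standard CUDA install location):

```shell
# check that the driver loads and the GPU is visible
nvidia-smi

# check the CUDA toolkit version; nvcc typically lives in /usr/local/cuda/bin,
# which may need to be added to PATH after a fresh install
nvcc --version
```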
TensorFlow Performance with 1-4 GPUs -- RTX Titan, 2080Ti, 2080, 2070, GTX 1660Ti, 1070, 1080Ti, and Titan V
Written on 03/14/2019 by Dr Donald Kinghorn
I have updated my TensorFlow performance testing. This post contains up-to-date versions of all of my testing software and includes results for 1 to 4 RTX and GTX GPUs. It gives a good comparative overview of most of the GPUs that are useful in a workstation intended for machine learning and AI development work.