Peak
Accelerated Parallel Computing
with NVIDIA Tesla and GPU Compute
Peak delivers the highest possible compute performance into the hands of developers, scientists, and engineers to advance computing-enabled discovery and to solve the world's most challenging computational problems.
Puget Systems has more than 21 years of experience designing and building high-quality, high-performance PCs. Our emphasis has always been on reliability, high performance, and quiet operation. We bring this experience to the HPC sector with our Peak family of workstations and servers. Through in-house testing we do not blindly follow the industry -- we help lead it. The products below are starting points that we feel cover some of the most compelling areas where we can contribute to the HPC community. Do you have a project that needs serious compute power and don't know where to turn? Let us help -- it's what we do!
Dr. Kinghorn has a 20+ year history in scientific and high-performance computing and holds a BA in Mathematics/Chemistry and a PhD in Theoretical Chemistry. If you are looking for an HPC configuration, check out his HPC Blog.
Peak Single Xeon Tower
Payments starting at $155/month
A powerful enterprise-class tower developer workstation with support for four NVIDIA GPUs.
Peak Dual Xeon Tower
Payments starting at $297/month
A powerful, enterprise-class tower developer workstation with support for dual NVIDIA Tesla or other GPU cards.
Peak 4 GPU 1U Server
Payments starting at $207/month
A powerful, enterprise-class 1U rackmount server with Intel Xeon processors and up to 4 NVIDIA Tesla or other GPU cards.
Peak 8 GPU 4U Server
Payments starting at $340/month
A powerful, enterprise-class 4U rackmount server with dual Intel Xeon processors and up to 8 NVIDIA Tesla or other GPU cards.
Design
Minimum noise and maximum performance, reliability, and usability. Puget Peak is an evolutionary step built on our custom systems experience: the performance of our Genesis post-production systems, the server stability of Summit, the silent design of Serenity, the reliability of Obsidian, and even the diminutive Echo have all influenced Peak.
Performance
TeraFLOPS. Using Intel Xeon CPUs with the Intel MKL library, or the well-established CUDA platform and its libraries, applications have tremendous potential to leverage the computing power of both the CPU and the GPU.
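As a rough illustration of the CPU side of that equation (not a Puget Systems benchmark), a few lines of Python are enough to estimate dense matrix-multiply throughput with an MKL-backed NumPy build; the matrix size and the conventional 2n^3 FLOP count below are assumptions made for the example.

```python
# Rough CPU GEMM throughput estimate. Assumes NumPy is linked against Intel MKL
# (as in the Anaconda/Intel distributions) and uses the conventional 2*n^3 FLOP count.
import time
import numpy as np

n = 4096
a = np.random.rand(n, n)
b = np.random.rand(n, n)

_ = a @ b  # warm-up so thread startup and allocation are not timed

start = time.perf_counter()
c = a @ b
elapsed = time.perf_counter() - start

gflops = 2 * n**3 / elapsed / 1e9
print(f"{n}x{n} float64 matmul: {elapsed:.2f} s, ~{gflops:.0f} GFLOPS")
```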
Application
Ready for use. Peak systems are installed, configured and tested under load before they ship and will (optionally) arrive with the setup and tools you need to get started. Our CentOS setup will provide a configuration that can be the basis of your working environment.
Part of what makes our cooling both effective and quiet is that we specifically target the hot spots of each system. We place fans only where they are needed and only when they are needed. We then verify the final configuration with extensive testing, full load stress testing, and thermal imaging to ensure excellent cooling.
We know that these PCs are intended for heavy, long-duration workloads, and designing them for long life under 24/7 load is our primary goal. Through targeted cooling and high-quality thermal solutions, we achieve an excellent low noise level while maintaining the cooling necessary for long-term high load. Even better, since we implement a custom cooling plan for each order, if you would prefer that we tune toward even quieter operation or toward more extreme cooling, all you have to do is let us know!
Recommended Reading
How To Install TensorFlow 1.15 for NVIDIA RTX30 GPUs (without docker or CUDA install)
Written on 12/09/2020 by Dr Donald Kinghorn. In this post I will show you how to install NVIDIA's build of TensorFlow 1.15 into an Anaconda Python conda environment. This is the same TensorFlow 1.15 that you would have in the NGC docker container, but no docker install required and no local system CUDA install needed either.
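A quick sanity check after following that post (a minimal sketch, assuming NVIDIA's TensorFlow 1.15 build is installed in the currently activated conda environment) is to confirm the version and GPU visibility from Python:

```python
# Minimal sanity check; assumes NVIDIA's TensorFlow 1.15 build is already installed
# in the active conda environment as described in the post.
import tensorflow as tf

print(tf.__version__)               # expect something like "1.15.x"
print(tf.test.is_gpu_available())   # True if CUDA and the RTX30 GPU are visible
print(tf.test.gpu_device_name())    # e.g. "/device:GPU:0"
```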
Quad RTX3090 GPU Power Limiting with Systemd and Nvidia-smi
Written on 11/24/2020 by Dr Donald Kinghorn. This is a follow-up post to "Quad RTX3090 GPU Wattage Limited "MaxQ" TensorFlow Performance". This post will show you a way to have GPU power limits set automatically at boot by using a simple script and a systemd service Unit file.
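The post itself uses a small shell script invoked by a systemd service unit; purely as an illustration, a hypothetical Python equivalent of that script might look like the sketch below. It assumes nvidia-smi is on the PATH, root privileges, and the 280 W target discussed in the related RTX 3090 posts.

```python
#!/usr/bin/env python3
# Hypothetical sketch of a boot-time GPU power-limit script for a multi-GPU system.
# Assumes nvidia-smi is on the PATH and the script is run as root (for example,
# launched by a systemd service unit as described in the post).
import subprocess

POWER_LIMIT_W = 280  # example target, down from the RTX 3090 default of 350 W

def gpu_count() -> int:
    """Count GPUs from the output of `nvidia-smi -L` (one line per GPU)."""
    out = subprocess.run(["nvidia-smi", "-L"], capture_output=True, text=True, check=True)
    return sum(1 for line in out.stdout.splitlines() if line.strip())

def set_power_limits(limit_w: int) -> None:
    subprocess.run(["nvidia-smi", "-pm", "1"], check=True)  # enable persistence mode
    for idx in range(gpu_count()):
        subprocess.run(["nvidia-smi", "-i", str(idx), "-pl", str(limit_w)], check=True)

if __name__ == "__main__":
    set_power_limits(POWER_LIMIT_W)
```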
Quad RTX3090 GPU Wattage Limited "MaxQ" TensorFlow Performance
Written on 11/13/2020 by Dr Donald Kinghorn. Can you run 4 RTX3090s in a system under heavy compute load? Yes, by using nvidia-smi I was able to reduce the power limit on 4 GPUs from 350W to 280W and achieve over 95% of maximum performance. The total power load "at the wall" was reasonable for a single power supply and a modest US residential 110V, 15A power line.
RTX3070 (and RTX3090 refresh) TensorFlow and NAMD Performance on Linux (Preliminary)
Written on 10/29/2020 by Dr Donald Kinghorn. The GeForce RTX3070 has been released. The RTX3070 is loaded with 8GB of memory, making it less suited for compute tasks than the 3080 and 3090 GPUs. We have some preliminary results for TensorFlow, NAMD and HPCG.
Note: Adding Anaconda PowerShell to Windows Terminal
Written on 10/01/2020 by Dr Donald Kinghorn. When you install Miniconda3 or Anaconda3 on Windows it adds a PowerShell shortcut that has the necessary environment setup and initialization for conda. It's listed in the Windows menu as "Anaconda Powershell Prompt (Anaconda3)". However, this opens a separate/detached PowerShell instance, and it would be nice to have this as an optional shell within Windows Terminal! In this post we will add that functionality as a new shell option in Windows Terminal.
RTX3090 TensorFlow, NAMD and HPCG Performance on Linux (Preliminary)
Written on 09/24/2020 by Dr Donald Kinghorn. The second new NVIDIA RTX30 series card, the GeForce RTX3090, has been released. The RTX3090 is loaded with 24GB of memory, making it a good replacement for the RTX Titan... at significantly less cost! The performance for Machine Learning and Molecular Dynamics on the RTX3090 is quite good, as expected.
RTX3080 TensorFlow and NAMD Performance on Linux (Preliminary)
Written on 09/17/2020 by Dr Donald Kinghorn. The much-anticipated NVIDIA GeForce RTX3080 has been released. How good is it with TensorFlow for machine learning? How about molecular dynamics with NAMD? I've got some preliminary numbers for you!
Does Enabling WSL2 Affect Performance of Windows 10 Applications?
Written on 07/17/2020 by Dr Donald Kinghorn. WSL2 offers improved performance over version 1 by providing more direct access to the host hardware drivers. Recent "Insider Dev Channel" builds of Win10 even allow WSL2 Linux applications to access the Windows NVIDIA display driver for GPU computing! The performance improvements with WSL2 come largely because this version runs as a privileged virtual machine on top of MS Hyper-V. This means that at least low-level support for the Hyper-V virtualization layer needs to be enabled to use it. In particular, the Windows feature "VirtualMachinePlatform" must be enabled for WSL2. We tested to see if there was any negative application performance impact.
Note: How To Install JupyterLab Extensions (Globally for a JupyterHub Server)
Written on 07/15/2020 by Dr Donald Kinghorn. The current JupyterHub version 2.5.1 does not allow user-installed extensions for JupyterLab when it is being served from JupyterHub. This should be remedied in version 3. However, even when this is "fixed" it is still useful to be able to install extensions globally for all users on a multi-user system. This note will show you how.
Note: How To Copy and Rename a Microsoft WSL Linux Distribution
Written on 06/19/2020 by Dr Donald Kinghorn. WSL on Windows 10 does not (currently) provide a direct way to copy a Linux distribution that was installed from the "Microsoft Store". The following guide will show you a way to make a working copy of an installed distribution with a new name.