Accelerated Parallel Computing
with NVIDIA Tesla and GPU Compute

Peak delivers the highest possible compute performance into the hands of developers, scientists, and engineers to advance computing-enabled discovery and the solution of the world's most challenging computational problems.


Puget Systems has over 17 years' experience designing and building high-quality, high-performance PCs. Our emphasis has always been on reliability, high performance, and quiet operation. We bring this experience to the HPC sector with our Peak family of workstations and servers. Through in-house testing, we do not blindly follow the industry -- we help lead it. The products below are starting points that we feel cover some of the most compelling areas where we can contribute to the HPC community. Do you have a project that needs serious compute power, and you don't know where to turn? Let us help; it's what we do!

Dr. Donald Kinghorn
Scientific Advisor for Puget Systems

Dr. Kinghorn has a 20+ year history with scientific and high performance computing and holds a BA in Mathematics/Chemistry and a PhD in Theoretical Chemistry. If you are looking for an HPC configuration, check out his HPC Blog.


Puget Peak Mini

A compact, efficient, portable developer workstation.

Puget Peak Single Xeon Tower

A powerful enterprise-class tower developer workstation with support for four NVIDIA Titan X GPUs.

Puget Peak Dual Xeon Tower

A powerful enterprise-class tower developer workstation with support for dual NVIDIA Tesla or Intel Xeon Phi cards.

Puget Peak Quad Xeon Tower

A quad socket E7 Xeon tower for maximum processing power in a single box.

Puget Peak 2U

A powerful, enterprise-class 2U rackmount server with dual Intel Xeon E5 processors and up to 4 NVIDIA Tesla or Intel Xeon Phi cards.

Puget Peak 3U

A powerful, enterprise-class 3U rackmount server with dual Intel Xeon E5 processors and up to 8 NVIDIA Tesla or Intel Xeon Phi cards.


Minimum noise and maximum performance, reliability, and usability. Puget Peak is an evolutionary step built on our custom systems experience. The post-production performance of Genesis, the server stability of Summit, the silent design of Serenity, the reliability of Obsidian, and even the diminutive Echo have all influenced Peak. We've taken extra steps like developing our own custom Arduino-based thermal fan controller and fabricating custom fan shrouds.


TeraFLOPS. Using dual Intel Xeon E5-2687W CPUs, we see over 300 double-precision Linpack GFLOPS on the base Peak system, and 765 GFLOPS running the same Intel MKL benchmark logged in directly on a single Xeon Phi 5110 coprocessor. The well-established CUDA platform and libraries deliver similar levels of performance on NVIDIA Tesla and have been put to good use in many existing codes. There is tremendous potential for applications leveraging the computing power of the Intel Xeon Phi coprocessor and the NVIDIA Tesla and Titan GPGPUs.
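As a sanity check on numbers like these, a CPU's theoretical double-precision peak is simply cores × clock × FLOPs per cycle. Here is a minimal sketch of that arithmetic; the E5-2687W figures used (8 cores, 3.1 GHz base clock, 8 DP FLOPs/cycle with AVX) are assumptions taken from public spec sheets, not from our benchmark runs:

```python
# Theoretical double-precision peak: cores x clock (GHz) x FLOPs/cycle.
# The E5-2687W figures below are assumed from public spec sheets.

def peak_gflops(cores, ghz, flops_per_cycle):
    """Theoretical peak in GFLOPS for a single CPU socket."""
    return cores * ghz * flops_per_cycle

# Intel Xeon E5-2687W: 8 cores, 3.1 GHz base, 8 DP FLOPs/cycle (AVX)
single = peak_gflops(8, 3.1, 8)   # 198.4 GFLOPS per socket
dual = 2 * single                 # 396.8 GFLOPS for two sockets

# Measured Linpack on the dual-socket Peak base system
measured = 300.0
efficiency = measured / dual      # fraction of theoretical peak achieved

print(f"dual-socket peak: {dual:.1f} GFLOPS")
print(f"Linpack efficiency: {efficiency:.0%}")
```

Linpack typically reaches a healthy fraction of theoretical peak on well-tuned systems, which is why the 300+ GFLOPS measurement against a roughly 397 GFLOPS peak is a reasonable result.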


Ready for use. Peak systems are installed, configured and tested under load before they ship and will (optionally) arrive with the setup and tools you need to get started. Our CentOS setup will provide a configuration that can be the basis of your working environment.

Part of what makes our cooling both effective and quiet is that we specifically target the hot spots of each system. We place fans only where they are needed and only when they are needed. We then verify the final configuration with extensive testing, full load stress testing, and thermal imaging to ensure excellent cooling.

[Thermal images: example of Puget Systems targeted cooling -- the same system without and with targeted cooling]

We know that these PCs are intended for heavy, long duration workloads. We have designed them for long life with 24/7 load, and that is our primary design goal. Through targeted cooling and high quality thermal solutions, we are able to achieve an excellent low noise level while maintaining the cooling necessary for long term high load. Even better, since we are implementing a custom cooling plan for each order, if you have a preference of whether you'd like us to tune more aggressively in either direction (towards even quieter operation, or more extreme cooling), all you have to do is let us know!


The NVIDIA Tesla series of GPU accelerator cards sparked intense interest in speeding up applications with high-thread-count parallel algorithms that utilize the large number of execution cores available on GPUs. Tesla cards, although based on GPU cores, are designed specifically for computation and forgo video output. Much to NVIDIA's credit, the strong developer ecosystem they established around the CUDA SDK has spawned many successful projects. In general, programming for Tesla requires careful consideration of the hardware and re-thinking of CPU-oriented algorithms.
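That "re-thinking" mostly means recasting a loop as thousands of independent per-element tasks. The sketch below illustrates the idea in plain Python: a serial SAXPY loop is rewritten in the CUDA grid/block style, where each element's index is computed exactly the way a kernel computes blockIdx.x * blockDim.x + threadIdx.x (the block size of 256 is an arbitrary illustrative choice; here the "grid" runs serially, whereas on a Tesla card each iteration body would run as its own thread):

```python
# A serial SAXPY loop -- the CPU-oriented formulation:
def saxpy_serial(a, x, y):
    return [a * xi + yi for xi, yi in zip(x, y)]

# The CUDA-style reformulation: every element is an independent task,
# identified by (block_idx, thread_idx), mirroring how a kernel computes
# i = blockIdx.x * blockDim.x + threadIdx.x. We loop over the "grid"
# serially; on a GPU each loop body would be a separate hardware thread.
def saxpy_gridstyle(a, x, y, block_dim=256):
    n = len(x)
    out = [0.0] * n
    num_blocks = (n + block_dim - 1) // block_dim   # ceiling divide
    for block_idx in range(num_blocks):
        for thread_idx in range(block_dim):
            i = block_idx * block_dim + thread_idx
            if i < n:                               # guard the tail block
                out[i] = a * x[i] + y[i]
    return out

x = [float(i) for i in range(1000)]
y = [1.0] * 1000
assert saxpy_gridstyle(2.0, x, y) == saxpy_serial(2.0, x, y)
```

The decomposition itself is the easy part; in practice, most of the effort in a real CUDA port goes into arranging memory access so that neighboring threads touch neighboring addresses.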

NVIDIA Tesla Specifications:

Model                        Tesla K20   Tesla K20X   Tesla K40   Tesla K80
# of CUDA Cores              2496        2688         2880        4992
Clock Speed                  706 MHz     732 MHz      745 MHz     562 MHz
Max TDP                      225W        235W         225W        300W
Memory Size (GDDR5)          5 GB        6 GB         12 GB       24 GB
Memory Clock                 2.6 GHz     2.6 GHz      3.0 GHz     2.8 GHz
Memory Bandwidth (ECC off)   208 GB/s    250 GB/s     288 GB/s    480 GB/s
ECC Memory Supported         Yes         Yes          Yes         Yes

Intel® Xeon Phi™ Coprocessor

The Intel Xeon Phi x100 series of coprocessors offers double-precision floating point performance approaching a teraFLOPS in a single add-in card. This performance is accessible through normal x86 instructions, leveraging the high core count, 8 GB of high-speed shared memory, a 512-bit-wide SIMD vector unit, and 4-way hardware threading per core. Codes that have been optimized for standard Intel SSEx/AVX instructions should port readily to Phi. From a systems perspective, the card appears as an additional node on an internal network over the PCIe bus; it is in fact running an embedded Linux uOS with a userland provided by BusyBox. This means you can log into the card as a separate node and have a normal Linux command environment available. Booting, resetting, reconfiguring, user management, monitoring, etc. are handled by a set of commands and a kernel module that communicate with the card via a system daemon on the host. The Xeon Phi is an attractive alternative to the more established NVIDIA Tesla CUDA environment: it provides a much more familiar programming "feel" and can take full advantage of Intel's advanced compiler suites.

Intel Xeon Phi Specifications:

Model                  3120 Series   5110P       7120 Series
# of Cores / Threads   57 / 228      60 / 240    61 / 244
Clock Speed            1.100 GHz     1.053 GHz   1.238 GHz
Max TDP                300W          225W        300W
Max Memory Size        6 GB          8 GB        16 GB
Max Memory Bandwidth   240 GB/s      320 GB/s    352 GB/s
ECC Memory Supported   Yes           Yes         Yes