Lightroom Classic v15 Exporting & Preview Performance Bug
Lightroom Classic v15 has a performance bug that can cause exporting photos and generating previews to take up to three times longer than expected.
Intel Xeon 6: P-cores vs E-cores for CPU Rendering
Intel’s Xeon 6 server processors come in P-core and E-core variants, so we test two similarly priced models to see which is better for a CPU rendering node.
Press Release: Puget Systems Supercomputing 2025 Show Announcement
Puget Systems Makes Its Debut at SC25 to Demonstrate Custom Liquid-Cooled Servers Designed to Make On-Prem Supercomputing Accessible and Scalable
Hardware Recommendations
Our hardware recommendations for large language model (LLM) AI servers provide broad guidance, but specific situations may have unique requirements.
Quad GPU LLM Server
Compact 2U rackmount server supporting up to four NVIDIA GPUs for fine-tuning and inference with AI large language models.
Eight GPU LLM Server
Powerful 4U rackmount server supporting up to eight NVIDIA GPUs for training, fine-tuning, and inference with AI large language models.
Quad GPU 5U Rackmount Workstation
Convertible 5U rackmount server / tower workstation for fine-tuning and initial deployment of large language model inference and other AI-based tools.
Multi GPU Xeon Workstation
High-performance tower workstation for piloting GPU-accelerated machine learning and AI applications right at your desk.
Hardware Recommendations
Our hardware recommendations for AI development workstations are based on research and hands-on testing our Puget Labs team has conducted over the years.
Single GPU Tower Workstation
Desktop workstation optimized for running Linux-based machine learning and AI development work and experimenting with local AI models.