ASUS today announced that ESC8000 and ESC4000 servers with the latest NVIDIA® L40S GPUs are ready for order, with greater availability and better performance per dollar. To transform with generative AI, enterprises need to deploy more compute resources at larger scale — and ASUS offers eight-GPU and four-GPU NVIDIA L40S servers to accelerate training, fine-tuning, and inference workloads, with powerful performance to build and deploy AI models.
In addition, ASUS is one of only a handful of NVIDIA OVX server system providers in the world, and we have developed our own innovative AI LLM technology to deliver comprehensive, true generative-AI solutions. Learn more about ASUS L40S solutions.
ASUS ESC8000 and ESC4000 servers with L40S available for rapid fulfilment
Enterprises today need computing infrastructure that delivers performance, scalability and reliability for data centers. ASUS offers both the Intel-based ESC8000-E11 and ESC4000-E11 and the AMD-based ESC8000A-E12 and ESC4000A-E12 servers with up to eight NVIDIA L40S GPUs, providing faster time to AI deployment with quicker access to GPU availability and better performance per dollar for AI inferencing. These L40S GPU servers enable enterprises to confidently deploy hardware that securely and optimally runs modern accelerated workloads. They are engineered with independent GPU and CPU airflow tunnels, plus flexible modular storage and networking designs for scalability.
The NVIDIA L40S GPU, based on the Ada Lovelace architecture, is the most powerful universal GPU for the data center, delivering breakthrough multi-workload acceleration for large language model (LLM) inference and training, graphics and video applications. As the premier platform for multi-modal generative AI, the L40S GPU provides end-to-end acceleration for inference, training, graphics and video workflows to power the next generation of AI-enabled audio, speech, 2D, video, and 3D applications.
ASUS servers to be validated by NVIDIA with the L40S include: