NVIDIA GB10 Grace Blackwell Superchip
Ultra-Small AI Supercomputer
ASUS Ascent GX10
Based on NVIDIA DGX™ Spark
The ASUS Ascent GX10, accelerated by the NVIDIA GB10 Grace Blackwell Superchip and the NVIDIA AI software stack, provides a full-stack solution for AI development and deployment. Its compact design facilitates seamless integration and deployment, delivering powerful AI performance for innovators who demand excellence. With advanced AI tools and NVIDIA® ConnectX®-7 SmartNIC, this small-scale server enhances your AI capabilities while empowering your unique solutions.
NVIDIA GB10 Grace Blackwell Superchip
128 GB LPDDR5x
Coherent Unified System Memory
1 petaFLOP
AI Performance
NVIDIA
NVLink™-C2C
Delivers a coherent CPU+GPU memory model with five times the bandwidth of PCIe Gen 5
NVIDIA® DGX OS with Ubuntu Linux
NVIDIA® ConnectX®-7 SmartNIC
Allows two GX10 systems to be linked for handling even larger models
Optimized Cooling Design
NVIDIA AI Software Stack
AI Development Environment
Includes NVIDIA NIM™ and Blueprints. Supports PyTorch, Jupyter, and Ollama for prototyping and inference.
Design
Revolutionary AI Performance on Your Desktop
The groundbreaking ASUS Ascent GX10 AI Supercomputer, powered by the state-of-the-art NVIDIA GB10 Grace Blackwell Superchip found in the NVIDIA DGX Spark, brings petaflop-scale AI computing capabilities directly to the desks of developers, AI researchers, and data scientists. This innovative device is designed to empower local AI development with its exceptional performance and advanced features.
Compact, Powerful, and Scalable
Compact Size: 150 x 150 x 51 mm
Unparalleled AI Performance
Up to 1 petaFLOP of AI performance using FP4
Delivers up to 1 petaFLOP of AI performance to power large AI workloads.
128 GB LPDDR5x Coherent Unified System Memory
Empowers model development, experimentation, and inferencing with ample memory capacity.
Cutting-Edge Architecture
NVIDIA GB10 Grace Blackwell Superchip:
Central to the ASUS Ascent GX10, this advanced chip features a robust Blackwell GPU with fifth-generation Tensor Cores and FP4 support.
High-Performance 20-Core Arm CPU:
Enhances data preprocessing and orchestration, accelerating model tuning and real-time inferencing.
NVLink™-C2C Technology:
Provides a coherent CPU+GPU memory model with five times the bandwidth of PCIe Gen 5.
Handles Large Parameter Gen AI Models
Support for AI Models up to 200 Billion Parameters
Prototype, fine-tune, and infer the latest AI reasoning models directly on your desktop.
Integrated NVIDIA® ConnectX®-7 Network Technology
Link two ASUS Ascent GX10 systems to handle even larger models, such as Llama 3.1 with 405 billion parameters.
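As a rough illustration of the two-system setup described above, the sketch below initializes a two-node PyTorch process group so a workload can be split across a pair of linked GX10 systems. The hostname, port, and rank values are placeholders, not an official ASUS or NVIDIA configuration.

```python
# Minimal two-node sketch (assumed settings, not an official workflow):
# initialize a PyTorch/NCCL process group across two linked GX10 systems.
import os
import torch
import torch.distributed as dist

def init_two_node_group():
    # Hypothetical master address/port; set RANK=0 on the first system, RANK=1 on the second.
    os.environ.setdefault("MASTER_ADDR", "gx10-node0.local")  # placeholder hostname
    os.environ.setdefault("MASTER_PORT", "29500")
    rank = int(os.environ.get("RANK", "0"))

    # NCCL handles GPU-to-GPU transfers over the high-bandwidth network link.
    dist.init_process_group(backend="nccl", rank=rank, world_size=2)
    torch.cuda.set_device(0)  # one GB10 GPU per node

    # Simple sanity check: sum a tensor across both nodes.
    t = torch.ones(1, device="cuda")
    dist.all_reduce(t, op=dist.ReduceOp.SUM)
    print(f"rank {rank}: all_reduce result = {t.item()}")  # expect 2.0

if __name__ == "__main__":
    init_two_node_group()
```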
AI Networking
Next-Gen Connectivity: NVIDIA® ConnectX®-7 SmartNIC
The ASUS Ascent GX10 integrates NVIDIA® ConnectX®-7 SmartNIC to deliver ultra-high-speed networking, enabling rapid data transfer and low-latency communication across distributed AI workloads.
Ultra-Fast AI Data Throughput
Up to 200Gb/s bandwidth enables lightning-fast data transfer between nodes—ideal for large-scale, distributed AI workloads.
Secure, Intelligent Networking
Built-in hardware acceleration for TLS, IPsec, and MACsec ensures encrypted data transmission without CPU overhead.
Precision-Critical Performance
IEEE 1588v2 PTP support enables microsecond-level time synchronization for time-sensitive AI and edge computing applications.
AI Experience
Integrated AI Software Stack for Seamless Development
Preloaded AI for Instant Development
NVIDIA DGX OS (Ubuntu-based) – Optimized AI environment, ready to use.
NVIDIA AI Software Stack – Preloaded frameworks, SDKs, and tools for fast deployment.
Optimized AI Tools & AI Frameworks
CUDA, PyTorch, TensorFlow, Jupyter – Optimized for AI model development & inference.
NVIDIA TensorRT – High-performance AI inference engine.
NVIDIA NIMs & Blueprints – Prebuilt AI workflows & microservices.
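As a quick, hedged sanity check of the preloaded tools listed above (not an official procedure), the short PyTorch snippet below confirms the GPU is visible and runs a small FP16 matrix multiply; it can be run from the bundled Jupyter environment.

```python
# Generic smoke test of the CUDA/PyTorch path; nothing here is GX10-specific.
import torch

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))

    # Small FP16 matmul to exercise the GPU and Tensor Core path.
    a = torch.randn(4096, 4096, device="cuda", dtype=torch.float16)
    b = torch.randn(4096, 4096, device="cuda", dtype=torch.float16)
    c = a @ b
    torch.cuda.synchronize()
    print("FP16 matmul OK, result shape:", tuple(c.shape))
```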
Industry-Leading AI Model Support
DeepSeek R1 – Optimized AI inference for models up to 70B parameters.
Llama 3.1 – Generative AI up to 405B parameters (dual-GX10).
Meta, Google models – Broad compatibility with industry-leading AI frameworks.
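For local prototyping with models like those listed above, a minimal sketch using the Ollama Python client is shown below. It assumes the Ollama service is running and that the model tag (shown here as an illustrative "deepseek-r1:70b") has already been pulled locally.

```python
# Minimal local-inference sketch via Ollama; the model tag is an assumption,
# replace it with whatever model you have pulled on the system.
import ollama

response = ollama.chat(
    model="deepseek-r1:70b",  # illustrative tag
    messages=[{"role": "user", "content": "Summarize what unified CPU+GPU memory means."}],
)
print(response["message"]["content"])
```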
Scalable
Engineered for Maximum Efficiency
Optimized cooling design ensures sustained AI performance under heavy workloads
Compact form factor, delivering high-density AI computing in a small footprint
Connectivity
I/O Ports
Kensington Lock Slot
1 x USB 3.2 Gen 2x2 Type-C, 20 Gbps, alternate mode (DisplayPort 2.1), with PD-in (180 W EPR, PD 3.1 spec)
3 x USB 3.2 Gen 2x2 Type-C, 20 Gbps, alternate mode (DisplayPort 2.1)
HDMI 2.1b port
10 GbE LAN
1 x NVIDIA ConnectX-7, 200 Gbps (2 x QSFP)
Application
Local Development, Scalable Deployment
Seamless Transition to Cloud
Leverage NVIDIA AI platform software architecture to move models from desktop environments to DGX Cloud or any accelerated cloud or data center infrastructure with minimal code adjustments.
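One way to picture the "minimal code adjustments" idea is the hedged sketch below: a device-agnostic PyTorch script with standard checkpointing, so the same code path runs on the GX10 during prototyping and on cloud or data-center GPUs later. The model and file path are placeholders.

```python
# Device-agnostic prototyping pattern; model definition and path are placeholders.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"  # GX10, cloud GPU, or CPU

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10)).to(device)

# ... prototype / fine-tune locally ...

torch.save(model.state_dict(), "checkpoint.pt")  # placeholder path

# On the target infrastructure, the identical code reloads the weights.
model.load_state_dict(torch.load("checkpoint.pt", map_location=device))
```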
Cost-Effective Experimentation Platform
Free up essential compute resources in clusters better suited for training and deploying production models.
Prototyping
Fine Tuning / Inference
Data Science
Specifications and product images are subject to change.