All stock codes associated with this product
126S3000140, 11TESLAA100PASSIVE, 11TESLA-A100-PASSIVE
NVIDIA A100 for PCIe
Overview
Accelerating the Most Important Work of Our Time
The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration at every scale to power the world's highest-performing elastic data centers for AI, data analytics, and HPC. Powered by the NVIDIA Ampere architecture, A100 is the engine of the NVIDIA data center platform. A100 provides up to 20X higher performance than the prior generation and can be partitioned into as many as seven GPU instances to dynamically adjust to shifting demands. Available in 40GB and 80GB memory versions, the A100 80GB debuts the world's fastest memory bandwidth at over 2 terabytes per second (TB/s) to run the largest models and datasets.
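The seven-way partitioning described above uses NVIDIA's Multi-Instance GPU (MIG) feature, managed through the `nvidia-smi` tool. As a minimal sketch, assuming an A100 at GPU index 0 and a driver with MIG support (exact profile names and behavior may vary by driver version):

```shell
# Enable MIG mode on GPU 0 (may require a GPU reset to take effect)
sudo nvidia-smi -i 0 -mig 1

# Create seven 1g.5gb GPU instances, each with a compute instance (-C)
sudo nvidia-smi mig -i 0 \
  -cgi 1g.5gb,1g.5gb,1g.5gb,1g.5gb,1g.5gb,1g.5gb,1g.5gb -C

# List the resulting GPU instances to verify the partitioning
nvidia-smi mig -lgi
```

Each resulting instance appears to workloads as an independent GPU with its own memory and compute slice, which is how the card adjusts to shifting demands without reallocating whole devices.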
The Most Powerful End-to-End AI and HPC Data Center Platform
A100 is part of the complete NVIDIA data center solution that incorporates
building blocks across hardware, networking, software, libraries, and optimized
AI models and applications from NGC. Representing the most powerful end-to-end
AI and HPC platform for data centers, it allows researchers to deliver
real-world results and deploy solutions into production at scale.
Specifications
Peak FP64 | 9.7 TF
Peak FP64 Tensor Core | 19.5 TF
Peak FP32 | 19.5 TF
Tensor Float 32 (TF32) | 156 TF | 312 TF*
Peak BFLOAT16 Tensor Core | 312 TF | 624 TF*
Peak FP16 Tensor Core | 312 TF | 624 TF*
Peak INT8 Tensor Core | 624 TOPS | 1,248 TOPS*
Peak INT4 Tensor Core | 1,248 TOPS | 2,496 TOPS*
GPU Memory | 40GB
GPU Memory Bandwidth | 1,555 GB/s
Interconnect | NVIDIA NVLink 600 GB/s | PCIe Gen4 64 GB/s
Multi-Instance GPU | Various instance sizes with up to 7 MIGs at 5GB
Form Factor | PCIe
Max TDP Power | 250 W

* Second figure is peak throughput with structural sparsity enabled.