NVIDIA Tesla V100 32GB CoWoS HBM2 PCIe 3.0 - GPU-NVTV100-32-PCIE
Bus: PCI-E 3.0 x16 | Memory size: 32 GB | Stream processors: 5120 | Theoretical performance: TFLOP
All GPUs based on the NVIDIA Ampere, NVIDIA Grace Hopper, NVIDIA Ada Lovelace, and NVIDIA Blackwell architectures are subject to a non-cancellable, non-returnable (NCNR) period of 52 weeks. Additionally, the product is subject to sanctions restrictions for certain countries, and the end customer must be documented.
We can supply these GPU cards directly, with individual B2B pricing. Contact us with your inquiry today.
Product code | 214.153189 |
---|---|
Part number | GPU-NVTV100-32 |
EAN | 672042332540 |
Manufacturer | NVIDIA |
Availability | In stock 0 pc |
Delivery to selected address | Wednesday 11. 6., at the latest Friday 20. 6. |
Supplier availability | In stock 2 pc |
Warranty | 24 months |
Weight | 1 kg |
Server system integrator & worldwide shipping
Personal approach and tailor-made servers
NBD warranties & cross-shipping
Private cloud infrastructure
Pre-sales & After-sales support
Detailed information
NVIDIA Tesla
Tesla products from NVIDIA form a line of compute-oriented graphics processors that are closely related to the NVIDIA Quadro series (they usually use the same chip), but without display outputs. They are also available in passively cooled form factors, which are particularly suitable for use in rack-mount servers.
CUDA Technology
Thanks to the CUDA architecture, professional applications can harness the card's CUDA stream processors for general-purpose computation. This makes it possible to use the raw performance of a graphics card for specific calculations, which can significantly increase work speed compared to a traditional processor, which is limited by its far lower number of cores.
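As a minimal illustration of this model, the standard SAXPY kernel below launches one thread per array element; on a V100 these threads are scheduled across the card's 5120 CUDA cores. This is a generic CUDA C++ sketch, not code specific to this product, and it requires the CUDA Toolkit (`nvcc`) and an NVIDIA GPU to build and run:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each thread computes one element of y = a*x + y.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    // Unified memory keeps the sketch short; explicit cudaMemcpy
    // between host and device is the more common pattern.
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    int threads = 256;                        // threads per block
    int blocks = (n + threads - 1) / threads; // cover all n elements
    saxpy<<<blocks, threads>>>(n, 3.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);  // 3*1 + 2 = 5
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

The same element-wise loop on a CPU would run across a handful of cores; here it is split into roughly a million independent threads, which is where the speedup described above comes from.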
Product Specifications
Tesla V100 is architected from the ground up to simplify programmability
NVIDIA NVLink in Tesla V100 delivers 2X higher throughput compared to the previous generation
Equipped with 640 Tensor Cores, Tesla V100 delivers 125 TeraFLOPS of deep learning performance
With a combination of improved raw bandwidth of 900 GB/s and higher DRAM utilization efficiency at 95%, Tesla V100 delivers markedly higher effective memory bandwidth than previous-generation GPUs
Main Specifications | |
---|---|
Cooler Type | Passive |
Device type | GPU computing processor (fanless) |
Form Factor | PCIe full height/length |
Graphics Engine | NVIDIA Tesla V100 |
Interface Type | PCI Express 3.0 x16 |
Memory bandwidth | 900 GB/s |
Memory size | 32 GB |
Memory technology | HBM2 |
Video output | None |
APIs supported | CUDA, DirectCompute, OpenCL, OpenACC |
CUDA Cores | 5,120 |
Parameters
Product line | Tesla |
---|---|
Architecture | Volta |
Memory size (GB) | 32 |
Number of stream processors | 5120 |
Memory type | HBM2 |
Slot count | 2 |
Monitor output | None |
Profile | FH |
Interface | PCI-E 3.0 x16 |
Cooling type | Passive |
Power consumption (W) | 250 |