According to BSN, nVidia has designed its new GPU, codenamed "Fermi" and believed to be the GT300, as a competitor to Intel's Larrabee; it is this chip we will be discussing in this article. We should know more tonight, after nVidia makes its announcements during the opening ceremony of the GTC (GPU Technology Conference).
Based on current information, the GT300 would pack 3 billion transistors fabricated on a 40nm process, roughly 1.5 times as many as ATI's RV870, the chip at the heart of the HD5800 generation. It would contain 16 shader clusters of 32 calculation units (CUDA cores) each, for a total of 512.
This chip should be capable of executing up to 512 FMA (fused multiply-add) instructions per cycle in single precision, and up to 256 in double precision. The operating frequency of these units is still unknown, but there is great potential here.
The calculation units will be backed by 1MB of L1 cache, divided into 16KB blocks, and 768KB of unified L2 cache.
The memory controller is made up of six 64-bit channels, for a total bus width of 384 bits, as on the previous-generation GT200. Cards based on this chip will be able to carry up to 6GB of GDDR5. Remember that GDDR5 provides an error-detection mechanism for memory transfers, which is promising for the reliability of GPGPU applications such as Folding@Home.
The main innovation in this chip is cGPU: behind this abbreviation hides native chip support for many programming languages. Alongside C for CUDA, as on previous chips, we find support for C++, DirectCompute 5.0, DirectX 11, Fortran, OpenCL, OpenGL 3.1 and OpenGL 3.2.
It should therefore be possible to run code written in any of the languages listed above directly on the GPU, as is currently done with C for CUDA and DirectX. This makes the chip a serious competitor to Intel's Larrabee, which is without a doubt the GT300's main target.
We now await confirmation of these specifications, as well as word on nVidia's plans for 3D and demanding games. We will probably know more tonight.
Hopefully, the chip will not be too expensive for ordinary users to afford; if it is, we will have to wait for mainstream consumer versions to become available.
Translated by : KaySL