
What's the Deal with Video Cards?

So a number of people ask me why you even need a high-quality video card in your new PC when it already has 12 cores…

Well, this blog post will describe exactly why a quality video card actually matters for some tasks. The basic function of a video card is to store, update, and present the bitmap that must be sent to the monitor. Let's assume your monitor's resolution is 1000 × 1000 pixels, using 32 bits per pixel (4 bytes per pixel) to determine color. That means each frame requires 1 million pixels @ 4 bytes = 4 MB, and it may need to be updated 60 times a second. (I used fake resolution numbers only to make the calculations come out clean.)
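
To make the arithmetic concrete, here's a minimal sketch in plain Python (nothing assumed beyond the numbers above) that computes the frame size and the per-second bandwidth for this hypothetical display:

```python
# Frame-buffer arithmetic for the hypothetical 1000 x 1000 display above.
width, height = 1000, 1000     # fake resolution, chosen for clean math
bytes_per_pixel = 4            # 32-bit color
refresh_hz = 60                # frames per second

frame_bytes = width * height * bytes_per_pixel
print(f"one frame:  {frame_bytes / 1e6:.1f} MB")                  # 4.0 MB
print(f"per second: {frame_bytes * refresh_hz / 1e6:.0f} MB/s")   # 240 MB/s
```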


Common resolutions are 720p (1280×720), 1080p (1920×1080), and 2160p (3840×2160), but it's easier to round these off to 720p ≈ 4 MB, 1080p ≈ 8 MB, and 2160p ≈ 32 MB per frame, assuming 32-bit color. At 1080p, that means the video card must store and update the video buffer at roughly 8 MB × 60 = 480 MB/s. This can be very taxing on the processor and the PCI bus, and the system CPU must constantly calculate the values for each of these pixels. In the case of rendering a bouncing ball on the screen with two light sources, the CPU must calculate the value for every one of these locations and forward the result to the video card. What's more, the CPU only has 1/60th of a second to update ALL of these locations. Since these are used for gaming and live events, this can't be done ahead of time; it must be done just-in-time.
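
To get a feel for that 1/60th-of-a-second budget, here's a toy sketch (plain Python; the shade function is a made-up stand-in for a real lighting calculation, not a real renderer) that touches every pixel of a 1080p frame once and times it against the budget:

```python
import time

WIDTH, HEIGHT = 1920, 1080
BUDGET_S = 1 / 60  # ~16.7 ms per frame at 60 Hz

def shade(x, y):
    # Stand-in for a real per-pixel lighting calculation.
    return (x * 3 + y * 7) & 0xFFFFFFFF

start = time.perf_counter()
frame = bytearray(WIDTH * HEIGHT * 4)  # 32-bit color buffer, ~8 MB
for y in range(HEIGHT):
    for x in range(WIDTH):
        offset = (y * WIDTH + x) * 4
        frame[offset:offset + 4] = shade(x, y).to_bytes(4, "little")
elapsed = time.perf_counter() - start

print(f"one frame took {elapsed * 1000:.0f} ms "
      f"(budget: {BUDGET_S * 1000:.1f} ms)")
```

On a single general-purpose core, even this trivial loop blows way past 16.7 ms, which is exactly the problem.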

One thing you notice pretty quickly is that if you had enough processors, you could split these calculations into parallel operations. The problem is that your main system processor, while commonly multi-core, is very complex, and you commonly see no more than 4, 6, or 8 cores, because those cores are general-purpose processors designed to be equally good at all tasks.
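
As a sketch of that idea (plain Python using the standard-library multiprocessing module; shade is the same made-up stand-in as above), you can hand each core its own band of rows to shade independently:

```python
from multiprocessing import Pool

WIDTH, HEIGHT = 1920, 1080

def shade(x, y):
    # Same made-up stand-in for a per-pixel lighting calculation.
    return (x * 3 + y * 7) & 0xFFFFFFFF

def shade_rows(y_range):
    # Each worker shades its own horizontal band of the frame.
    y0, y1 = y_range
    return bytes(
        b for y in range(y0, y1)
        for x in range(WIDTH)
        for b in shade(x, y).to_bytes(4, "little")
    )

if __name__ == "__main__":
    workers = 8
    rows_per_worker = HEIGHT // workers   # 135 rows per core at 1080p
    bands = [(i * rows_per_worker, (i + 1) * rows_per_worker)
             for i in range(workers)]
    with Pool(workers) as pool:
        frame = b"".join(pool.map(shade_rows, bands))
    print(f"frame assembled: {len(frame) / 1e6:.1f} MB")
```

The catch is that you run out of big general-purpose cores long before you run out of pixels.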

What we really need is a simplified processor that isn't good at general tasks but is optimized for exactly the calculations required to figure out screen color and shading. A common Intel processor has 2.6 billion transistors supporting 10 cores (the Intel i7 Broadwell-E), which means each core has roughly 260 million transistors. A common NVIDIA video processor (the new RTX 3060 Ti) has roughly 17.4 billion transistors, but the real shocker is its 4864 cores, which works out to roughly 3.6 million transistors per core. This is a gross over-simplification, but the point is that a GPU core has roughly 1/70th the transistor budget of an Intel core.
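
Here's the back-of-the-envelope arithmetic from the figures above:

```python
# Transistors-per-core arithmetic from the figures above.
cpu_transistors, cpu_cores = 2.6e9, 10      # Intel i7 Broadwell-E
gpu_transistors, gpu_cores = 17.4e9, 4864   # NVIDIA RTX 3060 Ti

cpu_per_core = cpu_transistors / cpu_cores  # ~260 million
gpu_per_core = gpu_transistors / gpu_cores  # ~3.6 million
print(f"CPU core: {cpu_per_core / 1e6:.0f}M transistors")
print(f"GPU core: {gpu_per_core / 1e6:.1f}M transistors")
print(f"ratio: ~1/{cpu_per_core / gpu_per_core:.0f}")       # ~1/73
```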

Now the question is how to offload the workload to the video card, and how to light up those GPU cores. We have to think about what we are trying to draw on the screen. In the simplest terms, we want to draw a flat selection of pixels. But what if, instead of telling the video card "draw this 4 MB bitmap," I send it a set of coordinates, like a triangle on screen, tell it the color at each of the triangle's corners, and let it figure out the shading of all the pixels in-between? Now my general-purpose Intel processor sends a set of geometry models to the GPU, and the GPU knows how to render them into a real screen. You can also place geometry on top of geometry to handle complex objects, so the GPU needs its own memory to process the models. The combination of a multi-core Intel processor to run your game and the ability to offload all the lower-complexity geometry calculations gives you the best of both worlds.
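
Here's a toy sketch of that interpolation idea (plain Python, a made-up example rather than a real graphics API): given a triangle's three corner positions and colors, it computes the blended color at any interior pixel using barycentric weights, which is essentially what the GPU does for every pixel it fills in:

```python
def barycentric(p, a, b, c):
    # Weights describing where p sits relative to triangle corners a, b, c.
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    den = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    w_a = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / den
    w_b = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / den
    return w_a, w_b, 1.0 - w_a - w_b

def shade_pixel(p, corners, colors):
    # Blend the three corner colors according to the pixel's weights.
    w = barycentric(p, *corners)
    return tuple(
        round(sum(wi * col[ch] for wi, col in zip(w, colors)))
        for ch in range(3)
    )

# A triangle with a red, a green, and a blue corner.
corners = [(0, 0), (100, 0), (50, 100)]
colors = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]
print(shade_pixel((50, 33), corners, colors))  # roughly equal parts of each
```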

An additional consideration is that a single machine can support multiple video cards for even more offload. These cards commonly have power requirements that exceed what can be pulled from the PCI bus, so they have their own 6-pin or 8-pin connections direct to your machine's power supply. They also commonly require more cooling than fits in the space allotted for a single PCI card, and so may be two slots wide, as long as your motherboard and case support it.

An interesting comparison: a common AMD or Intel CPU may consume between 25 and 165 watts, while a common video card consumes up to 400 watts.

Now I should point out that cryptocurrency calculations just happen to match the types of calculations GPUs are optimized for, meaning that when mining Bitcoin or Ethereum you can process almost 100x faster on the GPU than on the CPU. You can even buy video cards, optimized for AI or mining workloads, that have no display outputs at all.
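
To see why mining maps so well onto thousands of simple cores, here's a toy sketch (plain Python with hashlib; real Bitcoin mining uses double SHA-256 and a far stricter difficulty target) of the core loop. Every nonce attempt is an independent hash, so thousands of cores can each take a slice of the range:

```python
import hashlib

def attempt(header: bytes, nonce: int) -> bool:
    # One completely independent unit of work: hash and test.
    digest = hashlib.sha256(header + nonce.to_bytes(8, "little")).hexdigest()
    return digest.startswith("0000")  # toy difficulty target

header = b"example block header"
# Each iteration depends on nothing but its own nonce, which is why
# thousands of simple GPU cores can each take a slice of this range.
winners = [n for n in range(200_000) if attempt(header, n)]
print(f"found {len(winners)} valid nonces out of 200,000 tries")
```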