Created by a single enthusiast, this home-built GPU is a slightly more labor-intensive way to avoid painful graphics card prices.

If you are fed up with ridiculous graphics card prices, you could always build your own, including the GPU itself, the PCB, and the drivers. Given the complexity of modern graphics cards, that sounds like an absurd and probably impossible idea. Even Intel has struggled on many fronts when entering the GPU market with its new Arc cards.

But that didn't stop software engineer Dylan Barrie from building his own (via Tom's Hardware). It is called "FuryGpu" and is based on a Xilinx Field Programmable Gate Array (FPGA), which is something like a blank silicon slate that can be reconfigured into a CPU, a GPU, or something in between. FPGAs have advantages and disadvantages: they will never match true dedicated silicon designed for a specific task, but they are far more flexible than dedicated designs. In any case, Barrie apparently spent four years developing FuryGpu, and the result is a homebrew GPU with what he says is a roughly mid-1990s feature set, capable of running the original Quake at 720p and 60 fps.
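To give a rough sense of what that 720p, 60 fps Quake figure means in raw numbers, here is a minimal back-of-the-envelope sketch in Python. Only the resolution and frame rate come from the article; the 400 MHz clock is a hypothetical value chosen purely for illustration, not a published FuryGpu spec.

    # Back-of-the-envelope pixel throughput for a 720p / 60 fps target.
    # Resolution and frame rate are from the article; the clock speed
    # below is a hypothetical illustration, not a FuryGpu specification.
    WIDTH, HEIGHT = 1280, 720      # 720p
    FPS = 60                       # target frame rate

    pixels_per_frame = WIDTH * HEIGHT           # 921,600
    pixels_per_second = pixels_per_frame * FPS  # ~55.3 million

    print(f"Pixels per frame:  {pixels_per_frame:,}")
    print(f"Pixels per second: {pixels_per_second:,}")

    CLOCK_HZ = 400_000_000  # assumed FPGA clock, for illustration only
    print(f"Cycles available per displayed pixel: {CLOCK_HZ / pixels_per_second:.1f}")

At an assumed clock in that ballpark, the budget works out to only a handful of cycles per displayed pixel, which gives a rough feel for why even a mid-1990s feature set on an FPGA is a multi-year undertaking.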

If you're wondering about the specs, here's what it looks like:

It's not exactly an RTX 4090, but then again, this is a GPU built at home. Speaking of which, while the layman might just about be able to picture the physical side of building their own GPU, where would you even begin when it comes to turning a generic FPGA into a graphics chip, or designing the PCB?

He also notes that there are clear opportunities left on the table to further improve performance. All of which provides intriguing insight into Intel's struggles to bring its driver quality up to speed. On the other hand, the fact that a one-man band can get a functional GPU up and running puts a rather different spin on the common belief that the barriers to entry into the graphics market are impossibly high.

Nevertheless, it took Barrie four years, and the result is roughly 30 years off the cutting edge. How many lifetimes would it take to bring a FuryGpu up to the level of today's fully ray-traced marvels, complete with AI-accelerated upscaling and frame generation support?

Barrie has said he would like to open source the entire project, but there are legal issues that must be addressed before that can happen. Even then, he is at pains to emphasize that FuryGpu is not destined to be an Nvidia killer. "This project is not going to change the GPU landscape or compete with commercial players," he says.

But we're going to ignore that and blindly hope that this is just the beginning of an emerging homebrew GPU movement that will force companies like Nvidia and AMD to change their minds, lower their prices, and increase the VRAM allocations of their next-generation GPUs sixfold. We can dream, can't we?
