Intel Shows How to Get Xe GPUs Working Together in DX12 Games


In an online talk at GDC, Intel engineer Allen Hux detailed how game developers can use existing DirectX 12 techniques to make Xe discrete and integrated GPUs work together. The benefit, to be clear, is an increase in the game's frame rate.

In fact, the techniques Hux introduced will work with any discrete graphics card paired with an Intel processor. An AMD Radeon or Nvidia GeForce GPU can be combined with the otherwise idle graphics chip inside a Core i7 processor, which can shoulder some of the computational load and boost gaming performance.

Ever since Intel announced its intention to make discrete graphics cards, ostensibly to compete with AMD and Nvidia in the gaming GPU space, there has been speculation that it would pair them with its integrated GPUs. That seemed to be confirmed last year, when Intel's Linux driver team began work on a patch allowing discrete and integrated Intel GPUs to work together.

Given that the Intel Xe DG1 graphics card shown at CES in January is fairly low-powered, and the Xe graphics likely to appear in Tiger Lake laptops are similarly modest, any performance boost Intel can find will be useful. Literally any boost at all.

But that, of course, depends on whether game developers actually make the effort to get their game code working across two different GPUs. And no matter how simple the code tweaks to enable Intel's multi-adapter approach may be, it's by no means guaranteed to happen.

Basically, the multi-GPU tweaks being introduced use existing bits of DirectX 12 to let different graphics silicon work together, whereas the older SLI and CrossFire technologies required identical graphics cards to improve performance.

Although it sounds super-complex, and largely incomprehensible to a simpleton like me, the idea is to use the Explicit Multi-Adapter support already present in DX12 to run compute and rendering workloads on different parts of the system. In general, the discrete GPU is more powerful and takes on the rendering work, while the compute power in the CPU's integrated graphics chip, which would otherwise go to waste, picks up other tasks. The example Hux gave dealt with particle effects, but he said the recipe could equally be applied to physics, deformation, AI, shadows, and so on. Offloading such tasks from the main graphics card can improve performance to some extent.

But "potential" is the operative word. Hux calls the approach "essentially an enhancement to asynchronous compute," yet not many games take advantage of asynchronous compute in the first place. Occasionally, as in the recent case of Ghost Recon Breakpoint, a post-launch update enables it, but even then it is hardly routine. And as a subset of an already niche technology, this multi-GPU capability is likely to be rarely seen, let alone used effectively.

However, it does add more fuel to the bonfire of rumors that Intel will combine Xe discrete and integrated GPUs in its next-generation Tiger Lake laptops. As we learned at CES, the DG1 graphics card is relatively weak on its own, managing around 30fps in "Warframe" at low 1080p settings and only a few frames per second in "Destiny 2." A truly impressive thin-and-light gaming ultrabook, then, may only be realized with the help of Intel's integrated-GPU-friendly Xe architecture working in tandem.
