Microsoft is releasing a preview build of Windows that provides more personalized AI running on Nvidia, AMD, and Intel GPUs

Nvidia has announced that it will work with Microsoft to power personalized AI applications on Windows through Copilot. The collaboration also extends to other GPU vendors, so AMD and Intel stand to benefit as well.

The Windows Copilot Runtime is gaining support for GPU acceleration, which means GPUs will be able to bring AI capabilities to apps on the OS more easily.

"This collaboration provides application developers with simple application programming interface (API) access to GPU-accelerated small language models (Slm) that enable retrieval-augmented generation (RAG) capabilities running on devices with the WINDOWS Copilot Runtime.

Simply put, developers can use the APIs to run highly personalized AI workloads, such as content summarization, automation, and generative AI, on GPUs in Windows.
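The Copilot Runtime's GPU APIs have not been published yet, so the following is only a conceptual sketch of the RAG pattern the quote describes: embed a few local documents, retrieve the one most relevant to a query, and hand it to a small language model as context. The `embed` and `slm_generate` functions are hypothetical placeholders standing in for whatever on-device model calls the runtime eventually exposes.

```python
# Conceptual sketch of retrieval-augmented generation (RAG) with a local SLM.
# embed() and slm_generate() are hypothetical stand-ins for on-device model
# calls; the real Windows Copilot Runtime APIs have not been published yet.
import math

def embed(text: str) -> list[float]:
    # Placeholder embedding: hashed bag-of-words folded into a small vector.
    vec = [0.0] * 64
    for token in text.lower().split():
        vec[hash(token) % 64] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def slm_generate(prompt: str) -> str:
    # Placeholder for a GPU-accelerated small language model call.
    return f"[SLM answer based on prompt of {len(prompt)} chars]"

# 1. Index local documents (files, notes, emails) as embedding vectors.
documents = [
    "Meeting notes: the Q3 launch slips to October.",
    "Expense policy: travel must be approved in advance.",
]
index = [(doc, embed(doc)) for doc in documents]

# 2. Retrieve the document most similar to the user's question.
query = "When is the product launch?"
best_doc, _ = max(index, key=lambda item: cosine(embed(query), item[1]))

# 3. Augment the prompt with the retrieved context and generate locally.
prompt = f"Context: {best_doc}\nQuestion: {query}\nAnswer:"
print(slm_generate(prompt))
```

The point of the pattern is that both the retrieval index and the generation step stay on the device, which is what makes the "personalized" framing possible without sending private data to the cloud.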

Nvidia currently offers one RAG application, Chat with RTX, which runs on its own graphics cards. In theory, more applications like it become possible with Copilot Runtime support, and Nvidia has at least one other in the works aimed at PC gamers: Project G-Assist. There is also the new RTX AI Toolkit, a suite of tools and SDKs for customizing models.

This could be a promising move for Nvidia and other GPU vendors. The battle for dominance in client AI inference (i.e. local AI processing) is currently being fought by Intel, AMD, and Qualcomm on laptops, but GPUs are already very capable AI hardware.

Developers can choose where to run their AI workloads: on the CPU, on an NPU (a dedicated AI accelerator block), or on the GPU. Broader, easier access via APIs lets developers make better use of these components and build more capable applications.
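Microsoft has not detailed how the Copilot Runtime will expose this choice, but today the common way to target CPU, GPU, or NPU on Windows is to pick an execution provider in ONNX Runtime, where DirectML covers GPUs from Nvidia, AMD, and Intel alike. A minimal sketch, assuming a local ONNX model file ("model.onnx" is a placeholder path):

```python
# Sketch: choosing where an ONNX model runs on Windows via ONNX Runtime.
# DmlExecutionProvider is the DirectML (GPU) backend, which works across
# Nvidia, AMD, and Intel GPUs; NPUs are typically reached through vendor
# providers such as QNNExecutionProvider on Qualcomm hardware.
import numpy as np
import onnxruntime as ort

# Preference order: try the GPU (DirectML) first, fall back to the CPU.
preferred = ["DmlExecutionProvider", "CPUExecutionProvider"]
available = ort.get_available_providers()
providers = [p for p in preferred if p in available]

session = ort.InferenceSession("model.onnx", providers=providers)
print("Running on:", session.get_providers()[0])

# Run inference with a dummy float32 input matching the model's first input,
# treating any dynamic dimensions as size 1 for illustration.
inp = session.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
outputs = session.run(None, {inp.name: np.zeros(shape, dtype=np.float32)})
```

The design choice the article hints at is exactly this kind of provider list: the developer states a preference, and the runtime routes the work to whichever accelerator is actually present.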

And Nvidia is not the only one set to benefit: GPU acceleration via the Copilot Runtime is open to other GPUs as well. "These AI capabilities will be accelerated by Nvidia RTX GPUs and AI accelerators from other hardware vendors, providing end users with fast, responsive AI experiences across the breadth of the Windows ecosystem."

Notably, however, Microsoft still requires 45 TOPS of NPU processing for a PC to join the AI-capable club known as Copilot+. GPUs do not currently count toward that requirement, even though they deliver more TOPS than any NPU available today. But with so many rumors swirling, not least from figures like Michael Dell, that Nvidia is building its own Arm-based SoC, you have to believe Windows on Arm will eventually run Copilot AI workloads on Nvidia's integrated GPU. GPUs and NPUs are, after all, broadly similar parallel-processing silicon, so it would not be a big leap.

The preview API for GPU acceleration in the Copilot Runtime will be available in Windows developer builds later this year.
