Enabling Intel GVT-g (vGPU) on Proxmox VE

Last Updated: Jan 14, 2026

This guide details how to enable Intel GVT-g (Graphics Virtualization Technology), which allows you to split a single physical Intel iGPU into multiple virtual GPUs to be passed through to Virtual Machines (VMs).

This setup is tested on an Intel NUC 8 (Bean Canyon) but applies generally to supported Intel architectures.

0. Supported Hardware

Important: GVT-g is only supported on specific older generations of Intel processors. Newer processors (11th Gen+) use SR-IOV, which requires a completely different setup.

Supported Architectures for GVT-g:

  • 5th Gen: Broadwell
  • 6th Gen: Skylake
  • 7th Gen: Kaby Lake
  • 8th Gen: Coffee Lake (e.g., NUC8i5BEH)
  • 9th Gen: Coffee Lake Refresh
  • 10th Gen: Comet Lake (Support varies by motherboard/implementation)

Note: Intel 11th Gen (Tiger Lake/Rocket Lake) and 12th Gen+ (Alder Lake) do not support GVT-g. They utilize SR-IOV.
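
If you are unsure which generation you are on, the CPU model and the iGPU's PCI address (needed later) can be read from the running host. A quick check, using standard tools only:

    # The generation is encoded in the CPU model number (e.g. i5-8259U = 8th Gen)
    lscpu | grep "Model name"

    # Identify the integrated GPU and its PCI address (typically 00:02.0)
    lspci -nn | grep -i vga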


1. BIOS/UEFI Settings

Before configuring the OS, ensure the following are enabled in your BIOS:

  • VT-d (Virtualization Technology for Directed I/O)
  • VT-x (Virtualization Technology)
  • Internal Graphics (IGD) must be set to Enabled (Primary).
  • Aperture Size: Recommended 256MB or higher (if adjustable).
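
Before touching the bootloader, you can sanity-check from the Proxmox shell that the firmware actually exposes these features (both commands are read-only):

    # VT-x shows up as the "vmx" flag in /proc/cpuinfo (count of logical CPUs reporting it)
    grep -c vmx /proc/cpuinfo

    # VT-d presence is reported via the ACPI DMAR table in the kernel log
    dmesg | grep -i dmar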

2. Edit GRUB Bootloader

We need to enable the IOMMU and the GVT-g driver at the kernel level.

  1. Open the GRUB configuration file:

    nano /etc/default/grub
  2. Find the line starting with <code>GRUB_CMDLINE_LINUX_DEFAULT</code> and modify it to include the following parameters:

    • <code>intel_iommu=on</code>: Enables IOMMU.
    • <code>iommu=pt</code>: Improves performance by using pass-through mode for the host.
    • <code>i915.enable_gvt=1</code>: Explicitly enables the GVT-g feature.

    Your Configuration:

    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt i915.enable_gvt=1"

    (Note: You can also add <code>pcie_acs_override=downstream,multifunction</code>. This is usually for breaking apart IOMMU groups. You can add it if you have issues assigning PCI devices.)

  3. Save and exit (<code>Ctrl+O</code>, <code>Enter</code>, <code>Ctrl+X</code>).

  4. Update GRUB to apply changes:

    update-grub
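
To confirm the change took effect, check the file before rebooting and the live kernel command line afterwards:

    # Before rebooting: the edited line should contain all three parameters
    grep GRUB_CMDLINE_LINUX_DEFAULT /etc/default/grub

    # After the reboot in section 4: the parameters should appear here
    cat /proc/cmdline

(Note: if your host boots via systemd-boot instead of GRUB, which is common for ZFS-on-root UEFI installs, the parameters go into <code>/etc/kernel/cmdline</code> and are applied with <code>proxmox-boot-tool refresh</code>.)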

3. Load Kernel Modules

You must ensure the VFIO and KVMGT modules are loaded at boot.

  1. Open the modules configuration file:

    nano /etc/modules
  2. Add the following lines to the file:

    vfio
    vfio_iommu_type1
    vfio_pci
    vfio_virqfd
    kvmgt
    • <code>kvmgt</code> is the specific module required for Intel GVT-g.
    • On newer kernels, <code>vfio_virqfd</code> has been folded into the core <code>vfio</code> module; if it fails to load at boot, the warning is harmless.
  3. Save and exit (<code>Ctrl+O</code>, <code>Enter</code>, <code>Ctrl+X</code>).

  4. Update the initramfs to ensure these modules are available during boot:

    update-initramfs -u -k all
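
A quick, read-only sanity check before and after the reboot:

    # The file should now list all five modules
    cat /etc/modules

    # After rebooting (next section), the GVT-g module should be loaded
    lsmod | grep kvmgt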

4. Reboot

Restart your Proxmox server to apply the kernel parameters and load the modules.

reboot

5. Verification

After the system reboots, verify that GVT-g is active.

  1. Check Dmesg:
    Run the following commands to check whether the IOMMU and GVT are enabled:

    dmesg | grep -e DMAR -e IOMMU
    dmesg | grep "gvt"
  2. Check MDEV Support:
    The most definitive test is checking if the system generated the Mediated Device (MDEV) types folder for your GPU (usually at PCI address <code>0000:00:02.0</code>).

    ls /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/

    Expected Output:
    You should see folders like:

    • <code>i915-GVTg_V5_4</code>
    • <code>i915-GVTg_V5_8</code>

    If you see these folders, GVT-g is successfully enabled.
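
  3. (Optional) Inspect the profiles:
    Each profile directory is plain sysfs and also reports how many instances can still be created, plus a short description of its resources (video memory, maximum resolution). A minimal loop, assuming the usual <code>0000:00:02.0</code> address:

    for t in /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/*; do
        echo "$(basename "$t"): $(cat "$t/available_instances") instance(s) free"
        cat "$t/description"
    done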


6. Utilizing GVT-g in a VM

  1. Go to the Proxmox Web GUI.
  2. Select your VM -> Hardware -> Add -> PCI Device.
  3. Select the Raw Device: <code>0000:00:02.0</code> (Intel Corporation …).
  4. Important: Do not check "All Functions".
  5. Expand the MDev Type dropdown.
  6. Select the desired profile (e.g., <code>i915-GVTg_V5_4</code>).
    • <code>V5_4</code> splits the iGPU into up to 4 vGPU instances, so each VM gets more video memory (higher per-VM performance).
    • <code>V5_8</code> splits it into up to 8 smaller instances, allowing more VMs to share the GPU simultaneously.
  7. Check PCI-Express (this requires the VM to use the q35 machine type).
  8. Start the VM.
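
The same assignment can be made from the CLI with <code>qm set</code>; a minimal sketch, assuming VM ID 100 (a placeholder) and the V5_4 profile:

    # Attach a vGPU to VM 100 (equivalent to the GUI steps above)
    qm set 100 -hostpci0 0000:00:02.0,mdev=i915-GVTg_V5_4,pcie=1

This writes a <code>hostpci0: 0000:00:02.0,mdev=i915-GVTg_V5_4,pcie=1</code> line into the VM's configuration; Proxmox then creates a fresh mediated device of that type each time the VM starts.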