How to use iGPU's for local AI

April 6, 2024

Making AI usable by everyone is not just about having generative open-source models and F/OSS software to run them.

It is also about the hardware: WHERE the models will be executed.

With this post, my aim is to lower the hardware cost required to run Gen AI models smoothly.

In previous tutorials I focused on how to run models with just a CPU, so that we don't need expensive hardware.

Today, I'm showing you how to run AI faster, thanks to iGPUs, aka AMD APUs.

Kudos to Tech-Practice for sharing this.

Activating iGPUs for AI

This is how I did it with a friend's x300 (it also worked for my Gigabyte B450M-S2H):
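As a rough sketch of the idea (not the exact steps): assuming you have llama.cpp built with ROCm/hipBLAS support and one of the Vega-based APUs mentioned below (2200G/4600G/5600G), the key trick is the `HSA_OVERRIDE_GFX_VERSION` environment variable, which makes ROCm treat the iGPU as a supported gfx900 device. The model path and `-ngl` layer count are placeholders:

```shell
# Sketch, assuming llama.cpp built with ROCm/hipBLAS (LLAMA_HIPBLAS=1)
# and a Vega-based APU. HSA_OVERRIDE_GFX_VERSION=9.0.0 tells ROCm to
# treat the iGPU as a gfx900 (Vega) device.
export HSA_OVERRIDE_GFX_VERSION=9.0.0

# Model path and GPU layer count (-ngl) are placeholders; adjust for your setup.
if [ -x ./main ]; then
  ./main -m models/llama-2-7b.Q4_K_M.gguf -ngl 33 -p "Hello from the iGPU"
else
  echo "llama.cpp binary not found here; build it with LLAMA_HIPBLAS=1 first"
fi
```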


FAQ

Big Thanks

I came across a post on Reddit that pointed to:

I have tried this with an APU 2200G, and also with a 4600G and a 5600G, and it worked.

This tutorial has also been helpful for me.

Thanks https://github.com/ttio2tech/agieverywhere ❤️

How to assign VRAM to an AMD iGPU for AI?

Generally, it depends on your Motherboard.

You have to find the UMA Frame Buffer Size option and set it to the amount of RAM you want to dedicate to the iGPU as VRAM.


LLMs

https://github.com/Mooler0410/LLMsPracticalGuide

Virtual Machines

sudo apt-get update
sudo apt-get install qemu qemu-kvm libvirt-clients libvirt-daemon-system bridge-utils virt-manager

qemu-img create -f qcow2 mydisk.img 10G
qemu-system-x86_64 -boot d -cdrom path/to/your/minimal.iso -m 512 -hda mydisk.img

Follow the prompts to install the OS.

Since you are using a minimal ISO, the installation process will be CLI-based.

Once the OS is installed, boot the VM from the virtual disk image:

qemu-system-x86_64 -boot c -m 512 -hda mydisk.img
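If your CPU supports hardware virtualization, adding KVM acceleration makes the VM far faster. The sketch below only prints the command it would run, so it is safe to try anywhere; it adds `-enable-kvm` only when `/dev/kvm` exists, and bumps memory to 2 GiB (both values are my suggestions, not from the steps above):

```shell
# Build the boot command, enabling KVM acceleration only when /dev/kvm exists.
ACCEL_FLAG=""
if [ -e /dev/kvm ]; then
  ACCEL_FLAG="-enable-kvm"
fi
# Printed rather than executed, so this is safe to run anywhere; copy and run it yourself.
echo "qemu-system-x86_64 $ACCEL_FLAG -boot c -m 2048 -hda mydisk.img"
```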

AI on Servers