How to use iGPUs for local AI
Making AI usable by anyone is not just about having open-source generative models and F/OSS software to support them.
It is also about the hardware - WHERE the models will be executed.
With this post, my aim is to lower the hardware expenses required to make Gen AI models run smoothly.
In previous tutorials I focused on how to run them on a CPU, so that we don't need expensive hardware.
Today, I'm showing you how to run AI faster, thanks to iGPUs, aka AMD APUs.
Kudos to Tech-Practice for sharing this.
Activating iGPUs for AI
- https://agieverywhere.com/apuguide/AMDAPU/APU
- https://agieverywhere.com/apuguide/AMDAPU/APU_Linux#amdlin
- https://www.youtube.com/watch?v=HPO7fu7Vyw4
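The guides above boil down to pointing ROCm at the iGPU. As a minimal sketch, assuming a Vega-based APU (like the 2200G/4600G/5600G used below), ROCm installed, and llama.cpp built with HIP support - note that the model path is a placeholder and the override value depends on your exact chip:

```shell
# Many APU iGPUs report a gfx version that ROCm does not officially
# support, so we tell ROCm to treat the iGPU as a supported gfx9 part.
export HSA_OVERRIDE_GFX_VERSION=9.0.0

# Offload all layers to the iGPU (-ngl 99) and run a prompt.
# Replace the model path with your own GGUF file.
./llama-cli -m ./models/model.gguf -ngl 99 -p "Hello"
```

If the override value is wrong for your chip, ROCm will typically fail to initialize the device; the linked guides list which value matches which APU generation.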
FAQ
Big Thanks
I came across a post on reddit that pointed to:
- This YT channel: https://www.youtube.com/@tech-practice9805
- And this project: https://agieverywhere.com/
I have tried this with an APU 2200G, as well as with a 4600G and a 5600G, and it worked.
This tutorial has also been helpful for me.
Thanks https://github.com/ttio2tech/agieverywhere ❤️
How to assign VRAM to an AMD APU for AI?
Generally, it depends on your Motherboard.
You have to find the UMA Frame Buffer Size option and set it to the amount of RAM you want reserved as VRAM.
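After changing the BIOS setting, you can verify how much VRAM was actually carved out. A sketch for Linux, assuming the amdgpu driver (the card index `card0` may differ on your system):

```shell
# Dedicated VRAM reserved for the iGPU, in bytes (amdgpu sysfs).
cat /sys/class/drm/card0/device/mem_info_vram_total

# Convert the byte count to GiB for readability.
awk '{printf "%.1f GiB\n", $1 / (1024^3)}' \
    /sys/class/drm/card0/device/mem_info_vram_total

# Or, with ROCm installed:
rocm-smi --showmeminfo vram
```

If the number still shows the old value, double-check that the BIOS saved your change; some boards also cap the frame buffer based on total installed RAM.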
According to the Steam Hardware Survey:
- Popular GPUs: https://store.steampowered.com/hwsurvey/videocard/
- CPUs: https://store.steampowered.com/hwsurvey/cpus/
LLMs
https://github.com/Mooler0410/LLMsPracticalGuide