Start Building with AMD AI Playbooks
Step-by-step guides to run AI workloads on AMD hardware. From inference to fine-tuning, get up and running fast.

Generating images with ComfyUI and Z Image Turbo
Create stunning AI-generated images using ComfyUI with Z Image Turbo.
Automating Workflows with n8n and Local LLMs
Build an AI-powered news summarizer using n8n and Lemonade.
Local LLM Coding with VS Code and Qwen3-Coder
Use VS Code with locally-running Qwen3-Coder for private code assistance.
Running and serving LLMs with LM Studio
Set up LM Studio and LM Studio Server to run and serve large language models locally.
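As a taste of what this playbook covers, here is a minimal sketch of calling LM Studio Server's OpenAI-compatible endpoint from Python. The port (1234) is LM Studio's default and the model identifier is a placeholder; substitute whatever model you have loaded.

```python
# Minimal sketch: query LM Studio Server's OpenAI-compatible API.
# Assumes the local server is running on its default port (1234) and that a
# model is already loaded; the model ID below is a placeholder.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="qwen2.5-7b-instruct",  # placeholder model identifier
    messages=[{"role": "user", "content": "Summarize what LoRA fine-tuning is."}],
)
print(response.choices[0].message.content)
```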
Running LLMs with PyTorch and AMD ROCm™ software
Learn to run powerful language models on your PC with PyTorch and AMD ROCm™ software to summarize documents quickly and easily.
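For orientation, a minimal summarization sketch assuming a ROCm build of PyTorch and the Hugging Face transformers library; the model name and input file are placeholders, not the playbook's exact configuration.

```python
# Minimal document-summarization sketch on a ROCm build of PyTorch.
# On ROCm, AMD GPUs are addressed through the standard "cuda" device string.
import torch
from transformers import pipeline

device = 0 if torch.cuda.is_available() else -1  # 0 = first GPU, -1 = CPU
summarizer = pipeline("summarization", model="facebook/bart-large-cnn", device=device)

document = open("report.txt").read()  # placeholder input document
result = summarizer(document, max_length=130, min_length=30, truncation=True)
print(result[0]["summary_text"])
```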
Quick Start on vLLM
Learn how to run inference and serving with vLLM on your STX Halo™.
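For a quick feel of the workflow, here is a minimal offline-inference sketch using vLLM's Python API; the model name and sampling settings are illustrative placeholders.

```python
# Minimal vLLM offline-inference sketch. Any model supported by your
# vLLM/ROCm build can be substituted for the placeholder below.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-7B-Instruct")        # placeholder model
params = SamplingParams(temperature=0.7, max_tokens=128)

outputs = llm.generate(["Explain what a KV cache is in one paragraph."], params)
print(outputs[0].outputs[0].text)
```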
Building Your First Agent with GAIA
Build a 100% local AI agent — no cloud APIs needed. Use the GAIA SDK to create a hardware advisor on your STX Halo
Clustering Two STX Halos with llama.cpp RPC
Set up distributed inference across two STX Halo™ devices using the llama.cpp RPC server to run 350B+ parameter models
Clustering with Two Halos (RCCL)
Set up a multi-node cluster using two STX Halo™ devices with RCCL for distributed workloads
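To illustrate the kind of multi-node communication involved, here is a minimal all-reduce sketch with torch.distributed, assuming a ROCm build of PyTorch; on AMD GPUs the "nccl" backend name is backed by RCCL. The launch command and script name are examples, not the playbook's exact setup.

```python
# Minimal cross-node all-reduce sketch (PyTorch on ROCm; "nccl" backend = RCCL).
# Example launch on each node:
#   torchrun --nnodes=2 --nproc_per_node=1 --node_rank=<0|1> \
#            --master_addr=<node0-ip> --master_port=29500 allreduce.py
import os
import torch
import torch.distributed as dist

dist.init_process_group(backend="nccl")           # RCCL under the hood on ROCm
rank = dist.get_rank()
torch.cuda.set_device(int(os.environ.get("LOCAL_RANK", 0)))

x = torch.ones(4, device="cuda") * (rank + 1)
dist.all_reduce(x, op=dist.ReduceOp.SUM)          # sum the tensor across all ranks
print(f"rank {rank}: {x.tolist()}")

dist.destroy_process_group()
```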
Custom GPU Kernels with PyTorch ROCm
Write and optimize custom GPU kernels using PyTorch and ROCm on STX Halo™
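As one possible flavor of a custom kernel, here is a sketch of an elementwise kernel written in Triton, which ships with ROCm builds of PyTorch; the playbook itself may take a different approach, such as HIP C++ extensions.

```python
# Sketch of a custom GPU kernel in Triton: elementwise vector addition.
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements                    # guard against out-of-range lanes
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

x = torch.randn(4096, device="cuda")               # ROCm GPUs use the "cuda" device string
y = torch.randn(4096, device="cuda")
out = torch.empty_like(x)
grid = lambda meta: (triton.cdiv(x.numel(), meta["BLOCK_SIZE"]),)
add_kernel[grid](x, y, out, x.numel(), BLOCK_SIZE=1024)
assert torch.allclose(out, x + y)
```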
Fine-tune LLMs with PyTorch and ROCm
Fine-tune large language models using PyTorch and ROCm on STX Halo™
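For a sense of the workflow, here is a compact LoRA fine-tuning sketch using Hugging Face transformers and peft on a ROCm build of PyTorch; the base model, dataset, and hyperparameters are illustrative placeholders rather than the playbook's exact configuration.

```python
# Compact LoRA fine-tuning sketch (transformers + peft on PyTorch/ROCm).
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

model_name = "Qwen/Qwen2.5-0.5B"                   # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM",
                                         target_modules=["q_proj", "v_proj"]))

data = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")
data = data.filter(lambda ex: len(ex["text"].strip()) > 0)
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                remove_columns=data.column_names)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out", per_device_train_batch_size=1,
                           num_train_epochs=1, logging_steps=10),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```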
Getting Started with Ollama
Install Ollama and run LLMs locally — chat from the terminal, desktop app, or REST API on your STX Halo™
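As a taste of the REST API side, here is a minimal Python call to Ollama's local endpoint; the port (11434) is Ollama's default and the model name is a placeholder for any model you have pulled.

```python
# Minimal sketch: call Ollama's local REST API.
# Assumes Ollama is running on its default port and the model has been pulled.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3.2", "prompt": "Why is the sky blue?", "stream": False},
    timeout=300,
)
print(resp.json()["response"])
```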
How to Chat with LLMs in Open WebUI
Set up Open WebUI to chat with local LLMs on your STX Halo™
LLM Fine-tuning with Llama Factory
Fine-tune large language models using Llama Factory and LoRA techniques on your STX Halo™
Local Computer Vision with Ryzen AI NPU
Build local perception capabilities using the CVML SDK on top of Ryzen AI and ROCm
Optimized LLM Fine-tuning with Unsloth
Use Unsloth for memory-efficient LLM fine-tuning on your STX Halo™
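For orientation, a short sketch of Unsloth's memory-efficient LoRA setup; the model name and LoRA settings are placeholders, and you should check Unsloth's documentation for models and options validated on your hardware.

```python
# Sketch of Unsloth's memory-efficient LoRA setup (placeholder model and settings).
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-1B-Instruct",    # placeholder model
    max_seq_length=2048,
    load_in_4bit=True,                             # 4-bit weights cut memory use
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_alpha=16,
)
# The returned PEFT model can then be trained with the usual Hugging Face
# Trainer or TRL SFTTrainer workflow.
```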
Speech-to-Speech Translation
Build a real-time speech-to-speech translation system on your STX Halo™
Using Lemonade Across CPU, GPU, and NPU
Learn to run Gen AI models locally with Lemonade, an open-source local AI server.
Frequently Asked Questions
What are AMD AI Playbooks?
Playbooks are step-by-step guides for building and running AI workloads on AMD hardware, including AMD Ryzen™ AI APUs and Radeon™ GPUs. They are published in a public GitHub repository and provide hands-on, reproducible workflows from environment setup to running models locally and building real applications.
Is access to Playbooks free?
Yes. Playbooks are publicly available on the GitHub Playbooks repo and free to use. You can clone, run, and modify them to fit your own development workflows.
What hardware and operating systems are supported?
Playbooks support:
- Hardware: Ryzen™ AI APUs and Radeon™ GPUs
- Operating Systems: Windows and Linux
Use the platform filters to view playbooks compatible with your system configuration.
Do I need cloud access to Playbooks?
No. Playbooks are designed to run locally on your machine.
This allows you to develop, test, and iterate directly on your hardware. Some workflows may optionally integrate with cloud services, but local execution is the default.
What's included in a Playbook?
Each playbook typically includes:
- Environment setup instructions (drivers, dependencies)
- Scripts or commands to run models
- Configuration files and parameters
- Optional application layers such as APIs or user interfaces
What experience level is recommended?
Playbooks are designed for developers with:
- Basic command line experience
- Familiarity with Python environments
- General understanding of AI/ML workflows (helpful but not required for all playbooks)
How do I get help or connect with the community?
For questions, troubleshooting, or to connect with other developers:
- Join the AMD Developer Community Discord (Playbooks channel) or Forums for community discussions and technical guidance.
How do I report issues or suggest improvements?
If you encounter issues or have suggestions, open an issue or submit a pull request in the GitHub repository.
Can I modify or extend Playbooks?
Yes. Playbooks are designed to be flexible and customizable.
You can modify configurations, swap models, and integrate workflows into your own applications.
AMD AI Developer Program
Join the AMD AI Developer Program for additional benefits like $100 AMD Developer Cloud credits, direct access to AMD technical experts, premium AI training and more.