Start Building with AMD AI Playbooks

Step-by-step guides to run AI workloads on AMD hardware. From inference to fine-tuning, get up and running fast.


Coming Soon
Featured Beginner

Generating images with ComfyUI and Z Image Turbo

Create stunning AI-generated images using ComfyUI with Z Image Turbo.

Coming Soon
Beginner

Automating Workflows with n8n and Local LLMs

Build an AI-powered news summarizer using n8n and Lemonade.

Coming Soon
Intermediate

Local LLM Coding with VS Code and Qwen3-Coder

Use VS Code with locally-running Qwen3-Coder for private code assistance.

Coming Soon
Beginner

Running and serving LLMs with LM Studio

Set up LM Studio and LM Studio Server to run and serve large language models locally.

Coming Soon
Beginner

Running LLMs with PyTorch and AMD ROCm™ software

Learn to run powerful language models on your PC with PyTorch and AMD ROCm™ software to summarize documents quickly and easily.

Coming Soon
Featured Beginner

Quick Start on vLLM

Learn how to run inference and serve models using vLLM on your STX Halo™.

Coming Soon
Intermediate

Building Your First Agent with GAIA

Build a 100% local AI agent — no cloud APIs needed. Use the GAIA SDK to create a hardware advisor on your STX Halo™.

Coming Soon
Advanced

Clustering Two STX Halos with llama.cpp RPC

Set up distributed inference across two STX Halo™ devices using the llama.cpp RPC server to run 350B+ parameter models.

Coming Soon
Advanced

Clustering with Two Halos (RCCL)

Set up a multi-node cluster using two STX Halo™ devices with RCCL for distributed workloads.

Coming Soon
Advanced

Custom GPU Kernels with PyTorch ROCm

Write and optimize custom GPU kernels using PyTorch and ROCm on STX Halo™.

Coming Soon
Intermediate

Fine-tune LLMs with PyTorch and ROCm

Fine-tune large language models using PyTorch and ROCm on STX Halo™.

Coming Soon
Beginner

Getting Started with Ollama

Install Ollama and run LLMs locally — chat from the terminal, desktop app, or REST API on your STX Halo™.

Coming Soon
Beginner

How to Chat with LLMs in Open WebUI

Set up Open WebUI to chat with local LLMs on your STX Halo™.

Coming Soon
Intermediate

LLM Fine-tuning with Llama Factory

Fine-tune large language models using Llama Factory and LoRA techniques on your STX Halo™.

Coming Soon
Intermediate

Local Computer Vision with Ryzen AI NPU

Build local perception capabilities using the CVML SDK on top of Ryzen AI and ROCm.

Coming Soon
Advanced

Optimized LLMs Fine-tuning with Unsloth

Use Unsloth for memory-efficient LLM fine-tuning on STX Halo™.

Coming Soon
Intermediate

Speech-to-Speech Translation

Build a real-time speech-to-speech translation system on your STX Halo™.

Coming Soon
Intermediate

Using Lemonade Across CPU, GPU, and NPU

Learn to run Gen AI models locally with Lemonade, an open-source local AI server.

More coming soon

Frequently Asked Questions

What are AMD AI Playbooks?

Playbooks are step-by-step guides for building and running AI workloads on AMD hardware, including AMD Ryzen™ AI APUs and Radeon™ GPUs. They are published in a public GitHub repository and provide hands-on, reproducible workflows from environment setup to running models locally and building real applications.

Is access to Playbooks free?

Yes. Playbooks are publicly available on the GitHub Playbooks repo and free to use. You can clone, run, and modify them to fit your own development workflows.

What hardware and operating systems are supported?

Playbooks support:

  • Hardware: Ryzen™ AI APUs and Radeon™ GPUs
  • Operating Systems: Windows and Linux

Use the platform filters to view playbooks compatible with your system configuration.

Do I need cloud access to Playbooks?

No. Playbooks are designed to run locally on your machine.

This allows you to develop, test, and iterate directly on your hardware. Some workflows may optionally integrate with cloud services, but local execution is the default.
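As a rough sketch of what "local execution" looks like in practice: several of the local AI servers featured in these playbooks (Ollama, LM Studio Server, Lemonade) expose an OpenAI-compatible chat endpoint on localhost. The base URL, port, and model name below are assumptions for illustration, not values from any specific playbook:

```python
import json
import urllib.request

def build_chat_request(base_url, model, prompt):
    """Build an OpenAI-style chat completion request for a local LLM server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }
    # POST request aimed at the server's OpenAI-compatible endpoint
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# Example: target a hypothetical local server (Ollama's default port shown).
# Sending it with urllib.request.urlopen(req) requires the server to be running.
req = build_chat_request("http://localhost:11434", "llama3.2", "Summarize this document.")
print(req.full_url)  # http://localhost:11434/v1/chat/completions
```

Because the endpoint lives on your own machine, no data leaves it unless a playbook explicitly opts into a cloud service.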

What's included in a Playbook?

Each playbook typically includes:

  • Environment setup instructions (drivers, dependencies)
  • Scripts or commands to run models
  • Configuration files and parameters
  • Optional application layers such as APIs or user interfaces

What experience level is recommended?

Playbooks are designed for developers with:

  • Basic command line experience
  • Familiarity with Python environments
  • General understanding of AI/ML workflows (helpful but not required for all playbooks)
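For example, the "Python environments" prerequisite usually amounts to a few commands like the following; the environment name is illustrative and individual playbooks may use a different tool such as conda or uv:

```shell
# Create an isolated Python environment -- a common first setup step
# in a playbook. The directory name "playbook-env" is illustrative.
python3 -m venv playbook-env
. playbook-env/bin/activate        # Windows: playbook-env\Scripts\activate
python -c "import sys; print(sys.prefix)"   # prints the venv path when active
```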

How do I get help or connect with the community?

For questions, troubleshooting, or to connect with other developers, visit the GitHub Playbooks repository.

How do I report issues or suggest improvements?

If you encounter issues or have suggestions, open an issue or submit a pull request in the GitHub repository.

Can I modify or extend Playbooks?

Yes. Playbooks are designed to be flexible and customizable.

You can modify configurations, swap models, and integrate workflows into your own applications.

AMD AI Developer Program

Join the AMD AI Developer Program for additional benefits such as $100 in AMD Developer Cloud credits, direct access to AMD technical experts, premium AI training, and more.