Running AI agents and language models on your own machine — not on someone else’s server. This series walks through three different setups for Windows 11, each with different trade-offs in isolation, performance, and ease of use.
No subscriptions, no API costs after setup, no data leaving your laptop. Every guide is hands-on with the exact commands, paths, and gotchas that come up in practice.
Series overview
- Part 1 — How to set up a local AI stack on Windows 11 with LM Studio and AnythingLLM
A chat-first setup. LM Studio runs the model, AnythingLLM gives you a workspace with RAG so you can chat with your own documents. Lowest barrier to entry — no command line required.
- Part 2 — How to set up a Hermes AI agent on Ubuntu via Hyper-V on Windows 11
An autonomous agent inside an isolated Ubuntu VM. The agent can browse and execute tasks, but it lives in a snapshot-able sandbox — safer for experiments where you don’t yet trust what the agent will do on your machine.
- Part 3 — How to run the Hermes AI agent in Docker on Windows 11 with an admin dashboard
The same Hermes agent, but in Docker: lighter than a full VM, with two containers, an admin dashboard, and a CLI chat. Easiest to spin up and tear down for repeated experiments.
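As a rough sketch of the two-container layout from Part 3, a compose file might look like the following. All service names, image tags, volume names, and ports here are illustrative placeholders, not the actual Hermes configuration — Part 3 has the real file:

```yaml
# Hypothetical sketch only: one container for the agent, one for the
# admin dashboard. Names, images, and ports are placeholder assumptions.
services:
  hermes-agent:
    image: hermes-agent:local      # assumed local build, not a published image
    volumes:
      - agent-data:/data           # persists agent state across rebuilds
  dashboard:
    image: hermes-dashboard:local  # assumed local build
    ports:
      - "8080:8080"                # dashboard reachable at localhost:8080
    depends_on:
      - hermes-agent               # start the agent before the dashboard
volumes:
  agent-data:
```

The point of the split is the trade-off named above: each container can be rebuilt or removed independently, which is what makes repeated experiments cheap compared to a full VM.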
What you can do after going through the series
- Run a large language model locally without sending prompts to a cloud provider
- Set up an autonomous agent that can browse the web and execute tasks for you
- Add your own documents to a local RAG workspace and chat with them
- Choose the right isolation level — VM, Docker, or native — for each use case
- Tear down and rebuild your setup in minutes when something breaks
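The tear-down-and-rebuild point maps to one-liners in each setup. The commands below are a hedged sketch: the Docker lines assume you are in the Part 3 compose project directory, and the Hyper-V line assumes a VM and checkpoint named as shown (both placeholders):

```shell
# Docker (Part 3): stop and delete the containers plus their volumes,
# then rebuild the images and start fresh.
docker compose down -v
docker compose up -d --build

# Hyper-V (Part 2), from an elevated PowerShell prompt: roll the VM back
# to a known-good checkpoint. "hermes-vm" and "clean-install" are
# placeholder names for your VM and checkpoint.
# Restore-VMSnapshot -VMName "hermes-vm" -Name "clean-install" -Confirm:$false
```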
Each part is standalone — you don’t have to read them in order. Bookmark this page; more parts will be added as the toolkit grows.