Tired of the same repetitive tasks? Could AI actually automate the boring stuff, or is it just another overhyped tool?
I decided to build an AI-powered agent that could help manage my WordPress sites. Something that could understand context, take actions, and automate tedious dev/admin tasks I’d normally do by hand.
I wanted it to:
- Receive natural language instructions (e.g. “clear all caches” or “create a blog post with these specs”)
- Interpret what I meant, not just what I said
- Trigger real-world actions via WordPress CLI or REST API
- Run locally (no cloud dependencies, no vendor lock-in)
Basically: an intelligent shell script with a brain.
🧰 Tools I Used
- 🧠 DeepSeek Coder 6.7b – A locally hosted language model running through Ollama, fast and responsive
- 🧩 Node.js backend – Simple REST interface for prompt sending and response parsing
- ⚙️ WordPress CLI / REST API – Execution layer for real actions
- 🛠️ Custom instruction layer – Wrapped the LLM output with a JSON validator/parser to ensure structured commands and retry if the model was off
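The instruction layer above can be sketched in a few lines of Node. This is a minimal illustration, not my actual implementation: `askModel` is a hypothetical function that posts a prompt to Ollama and returns raw text, and the allowlisted action names are made up for the example.

```javascript
// Minimal sketch of a JSON validator/retry wrapper around a local LLM.
// askModel() is a hypothetical function that sends a prompt to Ollama
// and resolves with the model's raw text output.

const ALLOWED_ACTIONS = new Set(["update_posts", "clear_cache", "create_post"]);

function parsePlan(raw) {
  // Models sometimes wrap JSON in prose or code fences; grab the first object.
  const match = raw.match(/\{[\s\S]*\}/);
  if (!match) return null;
  try {
    const plan = JSON.parse(match[0]);
    if (typeof plan.action !== "string" || !ALLOWED_ACTIONS.has(plan.action)) return null;
    if (typeof plan.params !== "object" || plan.params === null) return null;
    return plan;
  } catch {
    return null; // confident nonsense gets rejected, not executed
  }
}

async function getValidPlan(askModel, prompt, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const raw = await askModel(prompt);
    const plan = parsePlan(raw);
    if (plan) return plan;
  }
  throw new Error("Model failed to produce a valid plan");
}
```

The key design choice is that the wrapper never trusts the model: anything that isn't parseable JSON with an allowlisted action is simply retried or dropped.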
🧪 The Architecture
- Input: I send a prompt like
“Can you list the 10 latest draft posts and convert them to published?”
- LLM Response: DeepSeek parses that into a structured JSON plan:
  `{ "action": "update_posts", "params": { "status": "publish", "filter": "draft", "limit": 10 } }`
- Validator: My wrapper checks it, retries if needed, or rejects unsafe commands
- Executor: Runs `wp post update` (or the REST equivalent) and returns success or failure
It’s modular, secure-ish (I’m still locking it down), and after a lot of trial and error, it’s beginning to feel like magic.
🧠 What Surprised Me
- Local LLMs can be genuinely useful: With the right prompting, even a local 6.7b model is remarkably capable. That said, getting the prompt format and composition right by filtering the input is most of the battle.
- They’re also fragile: Without strong validation, they’ll give you confident nonsense. Guardrails are critical.
- JSON output wrapping is your best friend: Teach the model to respond as a tool, not a poet, and you’ll eventually get consistent responses.
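"Respond as a tool, not a poet" mostly comes down to a strict system prompt plus Ollama's built-in JSON constraint. A rough sketch, where the prompt wording is my own assumption but the `/api/generate` endpoint, its `format: "json"` option, and the `deepseek-coder:6.7b` model tag are real Ollama features:

```javascript
// Sketch: wrapping a user instruction in a strict "tool mode" request for Ollama.
// format: "json" tells Ollama to constrain the model's output to valid JSON.

const SYSTEM = [
  "You are a command planner for WordPress.",
  'Respond with ONLY a JSON object: {"action": string, "params": object}.',
  "No prose, no markdown, no explanations.",
].join("\n");

function buildRequest(instruction) {
  return {
    model: "deepseek-coder:6.7b",
    prompt: `${SYSTEM}\n\nInstruction: ${instruction}`,
    format: "json",
    stream: false,
  };
}
```

The resulting object is what gets POSTed to `http://localhost:11434/api/generate`; the validator then still double-checks the reply, because "valid JSON" is not the same as "safe command".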
🧱 What’s Next
- Expand the REST API integration to unlock more capabilities
- Integrate with server-side shell tasks (upgrades, backups, etc.)
- Maybe even let it run scheduled audits and suggest updates before I need them
🚀 Final Thoughts
This project was my first dive into applying AI in a practical way, and into developing LLM wrappers from the ground up without relying on cloud services. Running an intelligent local agent that speaks my language and interfaces with my tools feels like the start of a new wave of devops automation.
If you’re a developer and haven’t played with local LLMs + CLI/API integration yet… it might be time to start.
Got questions or want to build something similar? Reach out on LinkedIn or check out the repo at github.com/doctarock.