I Built an AI Assistant in a Weekend. Here's What 25 Years of IT Taught Me.
The real story isn’t the AI; it’s the process. Last weekend, I did something that would have seemed impossible five years ago: I built a production-grade AI assistant using OpenClaw, the open-source AI agent framework that’s been gaining serious traction lately. It now manages my family’s calendars, monitors my home systems, and has my teenage kids laughing in Discord channels.
But here’s the thing: the AI itself wasn’t the hard part. The hard part was applying 25 years of enterprise IT discipline to something most people treat like a toy.
DISCLAIMER
I’m not advocating that you deploy AI assistants into your business or personal life without careful consideration. This project came together over an intensive weekend of serious security hardening: zero public exposure, encrypted secrets, prompt injection defenses, tiered approval gates, and complete audit trails. What you’re reading is the result of 25 years of enterprise IT discipline applied from day one.
If you’re considering something similar, please approach it with the same rigor. AI without governance is a liability, not an asset.
IT STARTED WITH A SCREENSHOT
On Friday, I posted a simple screenshot to LinkedIn, just a dark terminal with some AI output. Nothing fancy. “Playing around with something,” I said.
That screenshot? It was the spark.
I’d spun up a VPS on Contabo. Installed OpenClaw. Connected it to Claude. And within hours, I had a working AI assistant.
Then my IT instincts kicked in. No change management. Decisions being made with no record of why. No security controls. Wide open to the internet. No audit trail. If something broke, I’d never know what happened. No process. Just vibes.
I’ve spent my career at places like Froedtert & Community Memorial Health, National Business Furniture, Northwestern Mutual, and PKWARE building enterprise architectures. I’ve led SOC 2 and HIPAA compliance initiatives. I’ve managed IT transformations at healthcare companies where a single security incident could cost millions.
I couldn’t let this stand.
44 CHANGES. 25 ISSUES. ONE WEEKEND.
What happened next wasn’t revolutionary; it was just discipline. And here’s the counterintuitive part: the more process I put in place, the faster everything got. Each new change became safer, more secure, and quicker to implement, because I wasn’t reinventing decisions each time.
I applied the same principles I’ve used for decades:
Change Management: Every modification gets a ticket. CHG-001, CHG-002, all the way to CHG-044. Each one documents what changed, why, and what was considered.
Security Hardening: Zero public internet exposure, every access point tunneled through Tailscale. All API keys and secrets scoped to minimum necessary permissions and GPG-encrypted at rest. Prompt injection defenses built into every communication channel. Service provider snapshot backups with weeks of retention, encrypted at rest. Defense in depth at every layer.
Security Automated into SDLC: This is the key insight. Security isn’t a gate at the end; it’s woven into every phase. The 9-phase workflow includes a mandatory security review before any change reaches implementation. Security decisions are documented, auditable, and enforced by process, not willpower.
Visual Operations: I built a Command Center, a real-time Kanban board showing every change, every issue, every scheduled task. No more wondering what the AI is doing.
Process Workflow: A 9-phase relay protocol: Backlog → Analysis → Design → Security Review → Planning → Build → Test → Docs → Done. Every change follows the same path.
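The relay protocol above is simple enough to enforce in a few lines of code. A minimal sketch, assuming a linear state machine where every transition needs a documented rationale; the phase names come from the article, but the `ChangeTicket` class and its methods are hypothetical, not part of OpenClaw:

```python
# Sketch: the 9-phase relay protocol as a linear state machine.
# Phase names are from the article; ChangeTicket itself is illustrative.

PHASES = [
    "Backlog", "Analysis", "Design", "Security Review",
    "Planning", "Build", "Test", "Docs", "Done",
]

class ChangeTicket:
    def __init__(self, ticket_id: str, summary: str):
        self.ticket_id = ticket_id          # e.g. "CHG-001"
        self.summary = summary
        self.phase = "Backlog"
        self.history = [("Backlog", "created")]

    def advance(self, rationale: str) -> str:
        """Move to the next phase; every transition requires a documented reason."""
        idx = PHASES.index(self.phase)
        if idx == len(PHASES) - 1:
            raise ValueError(f"{self.ticket_id} is already Done")
        if not rationale.strip():
            raise ValueError("A transition without a rationale is not auditable")
        self.phase = PHASES[idx + 1]
        self.history.append((self.phase, rationale))
        return self.phase

chg = ChangeTicket("CHG-001", "Tunnel all access through Tailscale")
chg.advance("Impact assessed: no public ports required")
print(chg.phase)  # Analysis
```

The point of the sketch: a ticket physically cannot skip Security Review, and `history` is the audit trail for free.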
By Sunday night, I had something I’d trust in an enterprise environment.
CONTEXT ISOLATION: THE PART NOBODY TALKS ABOUT
Here’s something most AI deployments get catastrophically wrong: context bleed. I run two Discord servers: a “Wolf Pack” family server for my wife and kids, and a gaming server for my friends. My assistant, JARVIS, operates in both.
But here’s the critical part: they’re completely isolated. Family data never touches gaming context. My kids’ school schedules don’t leak into gaming banter. The gaming crew’s trash talk doesn’t appear in family channels.
Each server runs through separate agent workspaces with distinct permission boundaries. My wife Sarah has her own private channel where JARVIS helps manage her schedule. My kids Eli and Chloe have theirs; they think it’s hilarious chatting with an AI butler who roasts them.
But JARVIS knows what stays where. The walls are real.
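One way to make those walls real is to resolve every incoming channel to exactly one workspace before loading any context. A minimal sketch under that assumption; the server names, channel names, and paths below are illustrative, not my actual configuration:

```python
# Sketch: context isolation via non-overlapping workspaces.
# All names and paths here are illustrative placeholders.

WORKSPACES = {
    "wolf-pack": {"memory_dir": "/srv/jarvis/family/",
                  "channels": {"family-general", "sarah-private"}},
    "gaming":    {"memory_dir": "/srv/jarvis/gaming/",
                  "channels": {"rally-calls", "banter"}},
}

def workspace_for(channel: str) -> str:
    """Resolve a channel to exactly one workspace; anything ambiguous is rejected."""
    matches = [name for name, ws in WORKSPACES.items() if channel in ws["channels"]]
    if len(matches) != 1:
        raise PermissionError(f"Channel {channel!r} has no unique workspace")
    return matches[0]

def load_context(channel: str) -> str:
    # Only the matched workspace's memory directory is ever read,
    # so family data structurally cannot leak into gaming replies.
    return WORKSPACES[workspace_for(channel)]["memory_dir"]

assert load_context("rally-calls") == "/srv/jarvis/gaming/"
```

The design choice that matters: isolation is enforced by the lookup itself, not by asking the model to please keep things separate.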
TIERED AUTONOMY: TRUST, BUT VERIFY
Here’s what blew my mind: I described all of this in plain human language. The rules, the processes, the boundaries: I just wrote them down in markdown files. JARVIS reads them, understands them, and follows them. No complex programming. No elaborate configuration.
JARVIS can do a lot on his own. Routine tasks (posting schedule updates, sending gaming rally calls, monitoring system health) he handles autonomously. These are Tier 1 changes: low risk, well-defined, reversible.
But anything consequential? That requires my review. The change management system has explicit approval gates. JARVIS creates the Kanban card, documents his analysis, proposes the solution, assesses the risk. Then he waits. I review his justification, check his reasoning, and either approve or push back.
He proposes. I approve or reject.
This isn’t about not trusting AI. It’s about building systems where trust is earned and verified. Every Tier 2 and Tier 3 change has my explicit sign-off before implementation. The audit trail proves it.
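The approval gate itself is a small piece of logic. A minimal sketch, assuming three tiers as described above; the `propose_change` function and the audit-log shape are my illustration, not the actual system:

```python
# Sketch: tiered autonomy as an explicit approval gate.
# Tier semantics mirror the article; the code itself is hypothetical.

from enum import IntEnum

class Tier(IntEnum):
    ROUTINE = 1        # low risk, well-defined, reversible: auto-approved
    CONSEQUENTIAL = 2  # requires explicit human sign-off
    CRITICAL = 3       # requires explicit human sign-off

def propose_change(tier: Tier, analysis: str, audit_log: list) -> bool:
    """Return True if the change may proceed immediately, False if it must wait."""
    audit_log.append({
        "tier": int(tier),
        "analysis": analysis,
        "auto": tier == Tier.ROUTINE,
    })
    if tier == Tier.ROUTINE:
        return True   # the assistant acts autonomously
    return False      # the card waits on the Kanban board for sign-off

log = []
assert propose_change(Tier.ROUTINE, "Post the 4pm schedule update", log) is True
assert propose_change(Tier.CONSEQUENTIAL, "Change Discord permission scopes", log) is False
```

Note that every proposal is logged before the tier check, so even auto-approved changes leave an audit entry.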
THE PART THAT SURPRISED ME
The AI isn’t just working; it’s delighting everyone. My wife gets schedule updates four times a day without asking. My gaming crew gets automated rally calls on game night. The kids get roasted about their Fortnite habits.
But here’s what matters: every message JARVIS sends is logged. Every decision he makes is documented. Every automated task runs through a visible cron job display where I can see success, failure, and errors at a glance.
WHY THIS MATTERS FOR ENTERPRISES
I’m worried. I see companies deploying AI assistants with zero controls. Chat histories that disappear. No audit trails. No change management. No security review. They’re treating the most powerful technology of our generation like a Slack plugin.
Process IS security. The discipline of how we work is what makes AI trustworthy.
Case in point: While writing this article, JARVIS tried to capture a screenshot and failed. What did he do? Logged ISSUE-018, documented the error, noted potential solutions, and moved on. Minutes later, the Command Center went down. ISSUE-019 logged, server restarted, verified, resolved, all in under two minutes. That’s not me telling him to do that; that’s the process we built together. The system enforces good behavior automatically.
Every enterprise has people who can build proper AI governance; they just need to recognize that AI needs the same controls as any other system. The skills aren’t new. SOC 2 compliance? The same principles apply. Change management? AI needs it more than your ERP system. Security hardening? Non-negotiable.
WHAT I BUILT
- Command Center: Real-time visibility into everything the AI does
- Change Management: 44 tracked changes with tiered approval gates
- Issue Resolution: 25 issues caught and fixed with root cause analysis
- Context Isolation: Family and gaming Discord servers with strict permission boundaries, no data crossover
- Prompt Injection Defenses: Every communication channel hardened against manipulation attempts
- Zero Public Exposure: All access tunneled through Tailscale, no open ports, no public endpoints
- Encrypted Secrets: Every API key scoped and GPG-encrypted at rest
- Multi-Model Optimization: Claude for complex decisions, Gemini for routine work, right model for right context
- Tiered Autonomy: JARVIS handles routine tasks; serious changes require my explicit approval
- Complete Audit Trail: Every decision, every rationale, searchable forever
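The multi-model optimization in that list can be as simple as a routing rule. A minimal sketch, assuming complexity is signaled by task keywords; the model names come from the article, but the routing heuristic and keyword list are my assumptions:

```python
# Sketch: route consequential work to Claude, routine work to Gemini.
# The keyword heuristic is illustrative; a real router could use tier or cost.

def pick_model(task: str, complex_keywords=("security", "design", "approval")) -> str:
    """Choose a model name based on how consequential the task looks."""
    if any(kw in task.lower() for kw in complex_keywords):
        return "claude"   # complex decisions
    return "gemini"       # routine work

assert pick_model("Security review for CHG-017") == "claude"
assert pick_model("Post daily schedule update") == "gemini"
```

Right model for the right context also means the audit trail records which model made each decision.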
THE CALL TO ACTION
If you’re an IT leader considering AI adoption, stop thinking about AI as something new. It’s not. It’s infrastructure. And infrastructure needs governance.
I didn’t need new skills to build this. I needed to apply existing skills to a new problem. Focus on process, the how, not just the what. It’s the most overlooked principle in technology, and it’s the whole game with AI.
You already have these skills. Use them.