Two years ago, AI Makerspace opened its digital doors with a bold vision: to build the world's leading community for people who want to build, ship, and share production-grade LLM applications.
Today, that spark has grown into a global learning community shipping and sharing prototypes daily. In just 24 months, we’ve launched 13 cohorts and served over 450 students, 137 of whom are now certified AI Engineers.
Keep building, shipping, and sharing with us, like legends 🏗🚢🚀.
EVENT RECAP
Context Engineering
Last week, we dove into Context Engineering and how to expertly curate what we put into an LLM's context window. We broke down how the core patterns of AI Engineering today, from Prompt Engineering to RAG to Agents, are fundamentally about context. We had a blast on this one!
Guardrails have shifted from nice-to-have to non-negotiable for any LLM you plan to ship. In this session we’ll unpack what those safeguards are, why they matter, and how to thread them through every stage of the AI lifecycle—from pre-generation validation through post-launch monitoring. Guided by the AI Guardrails Index, we’ll explore 20-plus tools that handle jailbreak protection, PII stripping, hallucination suppression, and more. Then we’ll build a production-ready, guardrail-fortified app right before your eyes. If “trusted AI” appears anywhere on your 2025 roadmap, this walkthrough is your starting blueprint.
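To make the "pre-generation validation" stage concrete, here is a minimal sketch of a prompt-level guardrail that screens for PII and obvious jailbreak phrasing before anything reaches the model. The function names, regexes, and marker list are illustrative assumptions, not the API of any specific tool from the AI Guardrails Index; production systems would use dedicated libraries.

```python
import re

# Illustrative patterns only; real guardrail tools use far richer detectors.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}
JAILBREAK_MARKERS = ("ignore previous instructions", "disregard your system prompt")

def pre_generation_check(prompt: str) -> dict:
    """Validate a user prompt before it ever reaches the model."""
    issues = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(prompt):
            issues.append(f"pii:{label}")
    lowered = prompt.lower()
    for marker in JAILBREAK_MARKERS:
        if marker in lowered:
            issues.append("jailbreak_attempt")
    return {"allowed": not issues, "issues": issues}

def redact_pii(prompt: str) -> str:
    """Strip detected PII so the rest of the prompt can still be served."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt
```

The same two-step shape (hard block vs. redact-and-continue) recurs at every lifecycle stage the session covers, from input validation through post-launch monitoring.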
Arcee AI just dropped AFM-4.5B—their first Small Language Model (SLM) and the opening act of a new Arcee Foundation Model family. Packed with 4.5 billion parameters, AFM-4.5B is tuned for real-world enterprise workloads and slim enough to run everywhere from edge phones to massive GPU clusters. Join us as CTO Lucas Atkins unpacks the training pipeline—spanning 6.58 T tokens, distributed Spectrum fine-tuning, mergekit alchemy, and rigorous compliance—and shares what’s next for the AFM lineup. If you need modern performance without heavyweight infrastructure, this is the model reveal to watch.
Our friends at Arcee AI told us about Typedef coming out of stealth with Fenic. We're going to check out how they're building The AI Engine for Modern Workloads and meet Fenic, typedef.ai’s open-source, PySpark-inspired DataFrame framework that treats unstructured data as first-class columns and collapses transcription, inference, and retrieval into simple .select() and .withColumn() calls. Join our live session for a “why now” primer on DataFrame-centric AI, a real-time demo that turns a folder of PDFs into embeddings and feeds them into an agent—all inside Fenic—and a candid Q&A on interoperability, GPU-scale costs, and the project’s Apache-2.0 roadmap. If you’re serious about data plumbing and streamlining LLM pipelines, this is your next must-watch.
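If the DataFrame-centric idea is new to you, here is a toy illustration of the pattern: derived columns (an inference or embedding step) are added with a with-column call, then projected down for a downstream agent. This is emphatically NOT Fenic's actual API; the class, method names, and the stand-in "embedding" function are all hypothetical, sketched only to show the shape of the workflow.

```python
# Toy sketch of DataFrame-centric AI; not Fenic's real API.
class ToyFrame:
    def __init__(self, rows):
        self.rows = rows  # list of dicts, one per document

    def with_column(self, name, fn):
        """Add a derived column, e.g. an inference step over raw text."""
        return ToyFrame([{**r, name: fn(r)} for r in self.rows])

    def select(self, *names):
        """Project down to just the columns a downstream agent needs."""
        return ToyFrame([{n: r[n] for n in names} for r in self.rows])

docs = ToyFrame([
    {"path": "a.pdf", "text": "LLM pipelines as dataframes"},
    {"path": "b.pdf", "text": "retrieval over columns"},
])

# Stand-in for a real embedding call; here just a word-count vector.
embedded = docs.with_column("embedding", lambda r: [len(r["text"].split())])
ready = embedded.select("path", "embedding")
```

The appeal is that transcription, inference, and retrieval all become ordinary column transformations, so the whole pipeline stays declarative and inspectable.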
I context engineered a LangGraph Multi-Agent system using Claude Code
Ever wondered what would happen if you handed an LLM a detailed blueprint and asked it to build an entire multi-agent workflow from scratch? In AI Makerspace’s AI Engineering Bootcamp, Chris decided to find out, feeding Claude Code a spec for a LangGraph system that could read an arXiv paper, draft a platform-specific social post, copy-edit the tone for LinkedIn or X, and then hand the whole thing off to a supervisor agent—all while LangSmith kept watch. The exercise didn’t just automate research summarization; it teased the tantalizing possibility that with clear instructions, AI can orchestrate the entire content-creation assembly line for us.
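For readers curious about the supervisor pattern behind that workflow, here is a plain-Python sketch of the routing idea, with stub functions standing in for the real LangGraph nodes and LLM calls. The agent names, state keys, and character limits are hypothetical, chosen only to mirror the spec described above.

```python
# Plain-Python sketch of a supervisor-routed agent pipeline; stubs, not LLM calls.
def summarize_paper(state):
    """Reader agent: distill the arXiv paper into a summary."""
    state["summary"] = f"Key idea of {state['paper']}: ..."
    return state

def draft_post(state):
    """Writer agent: turn the summary into a platform-specific draft."""
    state["draft"] = f"[{state['platform']}] {state['summary']}"
    return state

def copy_edit(state):
    """Editor agent: tone pass, e.g. tighter copy for X than LinkedIn."""
    limit = 280 if state["platform"] == "X" else 3000
    state["post"] = state["draft"][:limit]
    return state

AGENTS = {"summarize": summarize_paper, "draft": draft_post, "edit": copy_edit}

def supervisor(state):
    """Route shared state through each worker agent, then mark completion."""
    for step in ("summarize", "draft", "edit"):
        state = AGENTS[step](state)
    state["done"] = True
    return state

result = supervisor({"paper": "an example arXiv paper", "platform": "X"})
```

In the real system, LangGraph's graph edges replace the hard-coded loop and LangSmith traces each hop, but the shared-state handoff is the same.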
Designing Reward Models for Enterprise LLM Tool Calling
How do you train an LLM to hit the right enterprise buttons every single time? The post’s answer is a two-tier reward system: a yes-or-no format check that slams the brakes on any JSON or schema slip-ups, and a sliding-scale correctness score that grades tool choice and parameter accuracy. Sprinkle in business-impact weighting, workflow awareness, and graceful-failure bonuses, and you’ve got a recipe for models that stay sharp, auditable, and production-proof as your stack evolves.
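The two-tier idea sketches naturally in code: a binary format gate that zeroes out anything malformed, multiplied against a graded score for tool choice and parameter accuracy. The schema, expected call, and 0.5/0.5 weighting below are illustrative assumptions, not the post's exact recipe.

```python
import json

# Hypothetical ground-truth tool call for one training example.
EXPECTED = {"tool": "get_invoice", "args": {"invoice_id": "INV-42"}}

def format_gate(raw: str) -> bool:
    """Tier 1: hard yes/no -- malformed JSON or missing keys scores zero."""
    try:
        call = json.loads(raw)
    except json.JSONDecodeError:
        return False
    return isinstance(call, dict) and {"tool", "args"} <= call.keys()

def correctness_score(call: dict) -> float:
    """Tier 2: sliding scale grading tool choice and parameter accuracy."""
    score = 0.0
    if call["tool"] == EXPECTED["tool"]:
        score += 0.5  # right tool is half the credit (illustrative weight)
        expected_args = EXPECTED["args"]
        matched = sum(call["args"].get(k) == v for k, v in expected_args.items())
        score += 0.5 * matched / len(expected_args)
    return score

def reward(raw: str) -> float:
    """Format gate slams the brakes; correctness grades whatever passes."""
    if not format_gate(raw):
        return 0.0
    return correctness_score(json.loads(raw))
```

Business-impact weighting and graceful-failure bonuses would slot in as extra additive terms on top of the correctness tier.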
Ever wonder if a pocket-sized coach could tweak your Pilates form the moment you think, “Am I really keeping my spine neutral?” The Pilates Reformer App slips studio expertise onto your sofa, letting you tap a muscle group, fire off a question, and instantly get AI-backed tips, timestamped demo clips, and soon—even photo-based feedback—so each bridge, teaser, or elephant feels dialed-in and drama-free. All the precision of an $8K reformer class, none of the wait-lists or wallet shock.