πŸ—οΈ 🚒 πŸš€ MCP, Agents SDK, and Llama 4



Hey, AI Makerspace community!


This Wednesday, we'll cover the new release from OpenAI: Codex CLI, a lightweight coding agent that runs in your terminal. Although it carries the same name as the original Codex models released in 2021, this new open-source tool bears little resemblance to the model that originally powered GitHub Copilot. Join us to understand coding agents and best practices for using them in 2025!


πŸ” Coming Up: What is an Agent?


OpenAI just released β€œA Practical Guide to Building Agents.” Harrison Chase from LangChain responded with a blog post entitled β€œHow to think about agent frameworks.” People are talking about the matter of OpenAI vs. LangGraph. We have thoughts about what an agent is and what it isn't, and we're excited to pattern-match and break down the arguments from all sides in this special event we're calling "What is an Agent?" Join us for what will certainly be a fun and wide-ranging live discussion!


πŸ‘€ Event Recaps!

On April 16, we dug into the architecture and training behind the Llama 4 Herd of MoE models. We examined the release blog for breakthroughs and showed how to run the model with Unsloth on your local machine!

🧰 Llama 4 Herd of Models

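If you missed the session and want to try this at home, here's a minimal sketch of what loading a Llama 4 checkpoint with Unsloth can look like. The model ID, sequence length, and quantization settings below are illustrative assumptions, not the exact configuration from the event.

```python
# Minimal sketch: running a Llama 4 MoE checkpoint locally with Unsloth.
# The model ID and settings are illustrative assumptions, not the event's exact config.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-4-Scout-17B-16E-Instruct",  # hypothetical repo ID for illustration
    max_seq_length=8192,   # keep modest to fit a single local GPU
    load_in_4bit=True,     # 4-bit quantization to squeeze into consumer VRAM
)
FastLanguageModel.for_inference(model)  # switch to inference mode

# Assumes a CUDA-capable GPU is available locally.
inputs = tokenizer("Explain mixture-of-experts in one paragraph.", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
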
On April 9, we dove into Model Context Protocol (MCP), the β€œUSB-C for AI applications.” We broke down how MCP connects LLMs to external tools, how it has evolved since its release in late 2024, and how you should use it as an AI Engineer.

🧰 Model Context Protocol
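
As a reminder of just how lightweight the protocol is, here's a minimal sketch of an MCP server exposing a single tool with the official Python SDK; the server name and tool are placeholders we made up for illustration.

```python
# Minimal sketch of an MCP server using the official Python SDK (pip install mcp).
# The server name and tool below are placeholders for illustration.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers, exposed so a connected LLM can call it as a tool."""
    return a + b

if __name__ == "__main__":
    # Serve over stdio so an MCP client (e.g., Claude Desktop) can connect.
    mcp.run()
```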

On April 2, we explored OpenAI’s new Responses API and Agents SDK, diving into their take on an agent orchestration framework! The session sparked great discussion around cost, performance, and how the SDK stacks up against leading alternatives like LangGraph.

🧰 OpenAI Agents SDK

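For anyone who wants the hello-world shape of it, here's a minimal sketch using the Agents SDK; the agent name, instructions, and prompt are placeholders.

```python
# Minimal sketch with the OpenAI Agents SDK (pip install openai-agents).
# Requires OPENAI_API_KEY in your environment; the instructions and prompt are placeholders.
from agents import Agent, Runner

agent = Agent(
    name="Assistant",
    instructions="You are a concise, helpful assistant.",
)

result = Runner.run_sync(agent, "In one sentence, what does an agent orchestration framework do?")
print(result.final_output)
```
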
πŸ§‘β€πŸ’» Find the concepts and code from any event we've ever done in the Awesome-AIM-Index!

πŸ–ΌοΈ Meme of the Week


🌟 Want to start learning and become an AI Engineer today? Check out the five-day LLM Foundations course, or take the full LLM Engineering course for free now!

Keep building πŸ—οΈ shipping 🚒 and sharing πŸš€, and we'll do the same!


Dr. Greg & The Wiz
AI Makerspace

