🏗🚢🚀 AG2: The New AutoGen

👋 Hey, AIM community!

Dr. Greg and The Wiz will unlock vLLM for you next week with a full breakdown of "Easy, fast, and cheap LLM serving for everyone."

Last Wednesday, we explored AG2: AutoGen, Evolved with co-creator Qingyun Wu. The origin story was fascinating, from MathChat to going viral! AutoGen is all about conversations: by going full send on messages, agents effectively turn conversation itself into reasoning.
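
To make the "conversation as reasoning" idea concrete, here's a minimal sketch in the spirit of the session, not the CaptainAgent notebook itself. It assumes a recent AG2 release of the `autogen` package (where `initiate_chat` returns a `ChatResult`), an `OPENAI_API_KEY` in your environment, and an illustrative model name:

```python
# Minimal two-agent conversation sketch with AutoGen / AG2.
# Assumes `autogen` (AG2) is installed and OPENAI_API_KEY is set.
import os

from autogen import AssistantAgent, UserProxyAgent

llm_config = {
    "config_list": [
        # Model name is illustrative; use whatever your endpoint serves.
        {"model": "gpt-4o-mini", "api_key": os.environ["OPENAI_API_KEY"]}
    ]
}

# The assistant "reasons" by writing messages; the user proxy relays the task
# and stops after a couple of automatic replies.
assistant = AssistantAgent(name="assistant", llm_config=llm_config)
user_proxy = UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",      # fully automated exchange
    code_execution_config=False,   # no local code execution in this sketch
    max_consecutive_auto_reply=2,
)

# The multi-agent chat *is* the reasoning trace: every turn is a message.
result = user_proxy.initiate_chat(
    assistant,
    message="What is the sum of the first 10 positive odd integers? Show your steps.",
)
print(result.summary)
```

From here, AG2 layers group chats and tool use on top of the same message-passing core.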

🧰 Resources

  • 🧑‍🏫 Concepts: Slides
  • 🧑‍💻 Code: CaptainAgent Notebook
  • 📜 Paper: AutoGen

🔭 Coming Up!

Dr. Greg and The Wiz guest speak on the SLM Show!

Join us for a double feature on Wednesday, Dec. 11! Following our YouTube Live event, tune in at noon PT for The Small Language Model (SLM) Show: 2024 Wrap & 2025 Predictions on LinkedIn Live.

On-Prem Agents with LangGraph

Learn to leverage the LangGraph platform to deploy agents on-prem, for free! We'll build a team of research agents using LangGraph, then deploy it as an API using LangServe, all on local hardware! Ollama will handle local model hosting, and the discussion will pick up where we left off with On-Prem RAG!
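
As a warm-up for the session, here's a minimal, hypothetical sketch of a single-node LangGraph "researcher" backed by a local Ollama model. The node name, state fields, and model are illustrative assumptions; the live event will go much further, building a full agent team and deploying it as an API:

```python
# Minimal single-node LangGraph sketch backed by a local Ollama model.
# Assumes `langgraph` and `langchain-ollama` are installed and an Ollama
# server is running locally (e.g. after `ollama pull llama3.1`).
# The "researcher" node and state fields are illustrative only.
from typing import TypedDict

from langchain_ollama import ChatOllama
from langgraph.graph import StateGraph, START, END


class ResearchState(TypedDict):
    question: str
    findings: str


llm = ChatOllama(model="llama3.1", temperature=0)


def researcher(state: ResearchState) -> dict:
    """One research step: ask the local model to answer the question."""
    reply = llm.invoke(f"Research this question and summarize it briefly: {state['question']}")
    return {"findings": reply.content}


builder = StateGraph(ResearchState)
builder.add_node("researcher", researcher)
builder.add_edge(START, "researcher")
builder.add_edge("researcher", END)
graph = builder.compile()

print(graph.invoke({"question": "What is PagedAttention in vLLM?", "findings": ""})["findings"])
```

Swap in more nodes (planner, writer, critic) and you have the skeleton of the research team we'll build live.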

🌐 Around the Community!

💡 Transformation Spotlight: Hear how Juan Ovalle went from a Data Scientist role in South America to an AI Engineer role in London! Fun fact: he connected with his new employer through his Peer Supporter role in The AIE Bootcamp!

Video preview

🤓 See what the community is building, shipping, and sharing this week. Join us in the Lounge every Monday at 9 AM PT for some accountability!

Want to join the AIM community? Hop into Discord and share your intro!



🖼️ Meme of the Week


Keep building 🏗️ shipping 🚢 and sharing 🚀,

Dr. Greg, The Wiz, Seraacha, and Lusk
AI Makerspace