🧠 Large Reasoning Models: The Illusion of Thinking
On Wednesday, July 2, we'll dive into the great “reasoning vs. mirage” debate as we put two headline-grabbing 2025 papers—Shojaee et al.’s “The Illusion of Thinking” and C. Opus et al.’s “The Illusion of the Illusion of Thinking”—in direct dialogue to see whether Large Reasoning Models genuinely think or merely appear to. We’ll dissect why LRMs surpass standard LLMs on medium-complexity puzzles yet collapse on the hardest ones, explore how token limits and benchmark design can flip results, and demo fresh techniques for probing models’ internal reasoning traces.
🦙 LlamaIndex Agent Memory: From Short-Term Storage to Intelligent Retention
Tomorrow, Thursday, June 25, join LlamaIndex DevRel Tuana Çelik and our very own Laura "The Legend" Funderburk as they unpack how the LlamaIndex memory component has evolved—from simple message history to advanced long-term memory systems powered by static content, extracted facts, and vector search. They'll discuss real-world use cases and explore how these memory blocks can transform agentic applications by letting them retain and use context across conversations.
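To make the "memory blocks" idea concrete, here's a minimal toy sketch in plain Python: a short-term message window augmented by longer-term blocks for static context, extracted facts, and retrieval (with simple keyword matching standing in for vector search). All class and method names here are illustrative assumptions for this sketch—this is not the LlamaIndex API, which the session itself will cover.

```python
class StaticBlock:
    """Always-included context, e.g. a system persona."""
    def __init__(self, content):
        self.content = content

    def retrieve(self, query):
        return [self.content]


class FactBlock:
    """Accumulates short facts extracted from the conversation."""
    def __init__(self):
        self.facts = []

    def add_fact(self, fact):
        self.facts.append(fact)

    def retrieve(self, query):
        return list(self.facts)


class KeywordBlock:
    """Stand-in for vector search: returns stored notes that share words with the query."""
    def __init__(self):
        self.notes = []

    def add_note(self, note):
        self.notes.append(note)

    def retrieve(self, query):
        q = set(query.lower().split())
        return [n for n in self.notes if q & set(n.lower().split())]


class ToyMemory:
    """Short-term rolling history plus pluggable long-term memory blocks."""
    def __init__(self, blocks, max_history=10):
        self.blocks = blocks
        self.history = []
        self.max_history = max_history

    def add_message(self, role, text):
        self.history.append((role, text))
        # Keep only the most recent messages (short-term window).
        self.history = self.history[-self.max_history:]

    def build_context(self, query):
        # Long-term material from every block, then the recent chat turns.
        long_term = [item for b in self.blocks for item in b.retrieve(query)]
        return long_term + [f"{role}: {text}" for role, text in self.history]
```

The design point is that the short-term window stays small (and cheap), while the blocks decide independently what long-term material is worth injecting into each prompt.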
🚀 26 Prompting Principles That Will Transform Your LLM Interactions
Have you ever wondered why some people get incredible responses from ChatGPT, Claude, or Gemini while others struggle? The secret isn't luck—it's science-backed prompt engineering! In this video, Mo breaks down groundbreaking research from Mohamed bin Zayed University of AI that reveals 26 proven principles for dramatically improving your interactions with large language models like GPT, Claude, Gemini, and LLaMA.
Cohort 6 Highlight: TickerSense — AI Insights That Turn Stock Data into Clear Buy/Sell Calls
Check out TickerSense - an all-in-one trading platform that distills complex market data and technical indicators into straightforward, emotion-free buy or sell advice. It empowers everyday investors to see the “Wall Street edge” on one screen—removing guesswork, taming market noise, and putting confident decisions within everyone’s reach.
We launched a full Cohort 7 of The AI Engineering Bootcamp on Tuesday - 70 students! Our team of instructors, course operations staff, and 10 peer supporters will work closely with each of them over the next 10 weeks to accelerate their building 🏗️, shipping 🚢, and sharing 🚀 of RAG, agent, and other end-to-end production-ready LLM applications (including some MCP servers, of course)! Applications are now open for Cohort 8 - secure your spot today!