🏗️ 🚢 🚀 Large Reasoning Models: The Illusion of Thinking
Published about 8 hours ago • 2 min read
EVENT RECAP
🧠 The Illusion of Thinking
Last week, we examined The Illusion of Thinking, which argues that scaling up test-time compute does not work on problems of extremely high complexity. The response paper by Claude Opus, et al. "The Illusion of The Illusion of Thinking" argues the paper was contrived and poorly designed. Who is right, and wait, do LLMs think or not??
OpenAI has officially released Deep Research through the API! The layers of abstraction continue to increase for builders, as now we have to answer the question of not only which model to use, but which applications to use in our applications. In other words, how do we know choose between using a traditional LLM (e.g., GPT-4.1), an LRM - Large Reasoning model (LRM) (e.g., o3), or deep research (e.g., o3-deep-research) in my application? How do we decide, and what kind of applications will this enable us to build?
Prompt engineering is out—Context Engineering is in. In this session, we unpack why putting the right things in an LLM’s context window is now the core skill for AI builders. From RAG and agent design to in-context memory and tool orchestration, we explore how the latest best practices are evolving and why leaders like Karpathy and Tobi Lütke are all in. We’ll dive into strategies like write, select, compress, and isolate, and demo a live agentic-RAG app that brings it all together. If you’re building production-grade LLM systems in 2025, mastering Context Engineering is the next-level unlock. Join us to level up.
Guardrails are no longer optional—they’re essential for building safe, reliable LLM applications. In this session, we unpack what guardrails really are, why they matter, and how to integrate them across the AI stack—from pre-generation to post-output and ongoing operations. We’ll dive into theAI Guardrails Index, covering 20+ solutions across critical domains like jailbreak prevention, PII detection, hallucination control, and more. You’ll learn how to use guardrails to wrap, validate, and monitor your LLMs—and we’ll build a real-world, guardrail-ready app live. If you're serious about shipping trusted AI in 2025, this session is your blueprint.
Building an MCP-Powered YouTube Video Analysis Toolkit
In this blog, Isham Rashik showcases the process he followed for building a powerful MCP-driven YouTube analysis platform that turns videos into actionable insights—extracting transcripts, generating detailed notes, creating knowledge graphs, and performing sentiment and topic analysis. By blending MCP tools with advanced AI techniques, the platform showcases how raw video can be transformed into rich, multi-dimensional intelligence.
Cohort 6 Highlight: WriteSomething.ai – Conquer the Blank Page with an AI Writing Buddy
Unlock your writing habit—100 words at a time! ✍️✨ WriteSomething.ai is a minimalist, AI-powered companion built for brand-new writers who stare down the empty page and hear that loud inner critic. In this video, see how our calming interface, gentle prompts, and “get-unstuck” nudges turn “I wish I wrote” into “I write every day.”