
A Comprehensive Lecture Series by Minh Trinh, PhD
Course Overview
This is a graduate-level lecture series that traces the full arc of modern artificial intelligence — from the foundational mechanics of large language models to the frontier questions of AGI, reasoning, planning, and embodied robotics. The course is designed for researchers, practitioners, and advanced students who want both theoretical grounding and practical understanding of where AI stands today and where it is heading.
The series is delivered as recorded public lectures, most running between 50 and 65 minutes, with occasional lightning talks for focused topics. All sessions are publicly available, and most installments have maintained a 100% positive rating.
Course Structure & Lecture Breakdown
Module 1 — Large Language Models as Research Tools
Lecture: Large Language Models for Scientific Research (36 min)
The series opens by grounding students in what LLMs actually are and what they can do in a practical context. This lecture surveys LLM capabilities — text generation, question answering, and language-based task completion — and anchors them in real scientific use cases across bioinformatics, chemistry, physics, and the social sciences. It also honestly addresses limitations, setting a tone of critical rigor that runs throughout the entire course. A strong foundation lecture for students coming from scientific rather than computer science backgrounds.
Module 2 — Evaluating Intelligence: How Do We Know When AI Is Smart?
Lecture: How Close Are We to Artificial General Intelligence? (57 min)
With LLMs established as capable tools, the course immediately asks the harder question: capable of what, exactly, and compared to what standard? This lecture introduces students to the evaluation ecosystem — benchmarks, the Turing Test, and proposed successor tests for measuring AGI-level performance. It provides the conceptual vocabulary for assessing AI progress that students will use throughout the rest of the series.
Module 3 — AI’s Real-World Impact: Nobel Prizes and Beyond
Lecture: Artificial Intelligence: A Catalyst for Nobel-Worthy Innovations (54 min)
This lecture zooms out to place AI within the broader scientific moment. Using the landmark 2024 Nobel Prizes — awarded in Physics to John Hopfield and Geoffrey Hinton for foundational work on neural networks, and in Chemistry to David Baker for computational protein design and to Demis Hassabis and John Jumper for protein structure prediction — it demonstrates that AI is no longer merely a tool for science but a driver of scientific discovery in its own right. This module provides crucial historical and cultural context for why the rest of the course matters.
Module 4 — The Reasoning Problem
Lecture: Can LLM Models Reason? (57 min)
One of the most intellectually rigorous lectures in the series. Students confront the central tension in modern AI: LLMs appear to reason, but do they actually? The lecture systematically examines benchmarks — MMLU, CommonsenseQA, the AI2 Reasoning Challenge — and distinguishes between pattern-matching that mimics reasoning and genuine multi-step logical deduction. Chain-of-Thought prompting is introduced as a partial solution, but the lecture is careful not to overstate its power. This is essential viewing for anyone who wants to think critically about AI capability claims.
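The Chain-of-Thought idea discussed above can be sketched in a few lines: instead of asking for an answer directly, the prompt includes a worked exemplar and a cue to reason step by step. This is a minimal illustration with a hypothetical exemplar and question, not material from the lecture itself:

```python
# A worked exemplar showing intermediate reasoning steps (hypothetical example).
COT_EXEMPLAR = (
    "Q: A bookshelf holds 3 rows of 8 books. 5 books are removed. How many remain?\n"
    "A: Let's think step by step. 3 rows of 8 books is 3 * 8 = 24 books. "
    "Removing 5 leaves 24 - 5 = 19. The answer is 19.\n"
)

def direct_prompt(question: str) -> str:
    """Ask for the answer with no intermediate reasoning."""
    return f"Q: {question}\nA:"

def cot_prompt(question: str) -> str:
    """Prepend the worked exemplar and cue the model to reason step by step."""
    return f"{COT_EXEMPLAR}\nQ: {question}\nA: Let's think step by step."

question = "A train makes 4 trips of 120 km each. How far does it travel?"
print(cot_prompt(question))
```

The only difference between the two prompts is the exemplar and the reasoning cue, which is precisely why the lecture treats Chain-of-Thought as a prompting technique rather than a change to the model itself.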
Module 5 — The Planning Problem
Lecture: Can LLM Models Plan? (61 min)
Directly following the reasoning module, this lecture asks a related but distinct question: can LLMs plan? It contrasts classical automated planning and scheduling techniques with modern LLM-powered approaches, honestly assessing where neural methods excel and where they still fall short. This lecture is particularly important for students interested in AI agents, as planning is a core competency any autonomous agent must possess. The tension between symbolic planning traditions and neural approaches previews the later Neuro-Symbolic module.
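The classical planning tradition the lecture contrasts with LLMs can be sketched as state-space search: states are sets of facts, and each action has preconditions, additions, and deletions. The domain below (a key, a door, a room) is a hypothetical toy example, not one used in the lecture:

```python
from collections import deque

# Each action maps name -> (preconditions, facts added, facts deleted).
ACTIONS = {
    "pick_up_key":  ({"at_door", "key_on_floor"}, {"has_key"},   {"key_on_floor"}),
    "unlock_door":  ({"at_door", "has_key"},      {"door_open"}, set()),
    "walk_through": ({"door_open"},               {"inside"},    set()),
}

def plan(start: frozenset, goal: set):
    """Breadth-first search over states; returns a shortest action sequence."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:                      # goal facts all hold
            return steps
        for name, (pre, add, delete) in ACTIONS.items():
            if pre <= state:                   # action is applicable
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

print(plan(frozenset({"at_door", "key_on_floor"}), {"inside"}))
# → ['pick_up_key', 'unlock_door', 'walk_through']
```

The guarantees here — completeness and shortest plans — are exactly what symbolic planners offer and what LLM-based planning, which generates action sequences without exhaustive search, does not.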
Module 6 — Reinforcement Learning and the Emergence of Reasoning Models
Lectures:
- A Deep Dive Into DeepSeek (66 min)
- Five Lessons from DeepSeek (6 min — lightning talk)
- Reinforcement Learning in Reasoning Models (58 min)
This is the methodological heart of the course, spread across three sessions. Students first encounter DeepSeek-R1, a landmark open-source model that achieved reasoning capabilities rivaling OpenAI's o1 with reinforcement learning, rather than supervised fine-tuning, as the primary driver of its training. The lightning talk distills five transferable lessons from DeepSeek's success for a fast-paced seminar format. The full RL lecture then provides the theoretical scaffolding: how RL evolved from basic value optimization to sophisticated alignment methods, and how it now sits at the center of frontier reasoning model development. Together these three sessions give students a complete picture of the current state of the art in training reasoning systems.
Module 7 — Scaling: How Size, Data, and Compute Shape Intelligence
Lecture: AI Scaling Laws (50 min)
This lecture addresses one of the most consequential empirical discoveries in AI: that performance improves predictably with increases in data, model size, and compute. Students learn the classical scaling laws from pre-training and then move to the frontier concept of test-time scaling — allocating more compute during inference rather than training. This connects directly to the reasoning models discussed in Module 6 and provides the economic and technical logic behind why certain architectural choices are made at the frontier.
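The empirical procedure behind a scaling law is straightforward to sketch: a power law L(N) = a * N^(-b) becomes a straight line in log-log space, so the exponent can be read off with ordinary least squares. The data points below are synthetic, generated with a known exponent purely to illustrate the fit, and are not measurements from any real model:

```python
import math

model_sizes = [1e6, 1e7, 1e8, 1e9]                   # parameter counts N
losses = [4.0 * n ** -0.076 for n in model_sizes]    # synthetic: a=4.0, b=0.076

# log L = log a - b * log N, so the slope of a log-log fit is -b.
xs = [math.log(n) for n in model_sizes]
ys = [math.log(l) for l in losses]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
print(f"fitted exponent b = {-slope:.3f}")           # recovers 0.076
```

With real training runs the points are noisy and the functional form richer, but this log-log fit is the basic move that lets researchers extrapolate loss to model sizes they have not yet trained.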
Module 8 — AI Creativity and Scientific Discovery
Lecture: Can An AI Scientist Achieve Scientific Breakthroughs? (55 min)
Returning to the theme of AI in science, this lecture asks a deeper question than Module 1: not whether AI can assist scientists, but whether it can be a scientist. It examines the cognitive processes behind human scientific creativity, surveys AI-generated content and discovery systems, and critically evaluates whether current AI systems can truly be considered creative or whether they are sophisticated recombination engines. This is one of the most philosophically rich lectures in the series.
Module 9 — Deep Research: AI as Knowledge Worker
Lecture: The State of Deep Research (49 min)
With reasoning and planning established as emerging AI capabilities, this lecture examines their practical synthesis in “deep research” systems — AI that can autonomously conduct extended, multi-step research tasks. It surveys current offerings from OpenAI and Google Gemini, introduces evaluation benchmarks for research-capable AI, and examines both the transformative potential and the significant current limitations. Students come away understanding what it means for an AI system to operate as a knowledge worker rather than a tool.
Module 10 — AI-Assisted Coding and Vibe Coding
Lecture: Coding with AI and Vibe Coding (45 min)
A highly practical module examining how LLMs have transformed software development. The lecture covers the landscape of coding models, their training methodologies, and state-of-the-art benchmarks. It also explores “vibe coding” — the emerging practice of directing AI to write code through natural language intent rather than explicit instruction — and examines what this means for the future of software engineering as a profession.
Module 11 — Embodied Intelligence: Robotics and Vision-Language-Action Models
Lecture: VLA Models and the New Robotics (62 min)
The course’s most-watched lecture and one of its most forward-looking. Students trace the evolution of robot learning from rigid hand-coded behaviors through classical control systems to modern data-driven approaches. The central focus is Vision-Language-Action (VLA) models — systems that integrate visual perception, language understanding, and physical action into a single foundation model. This lecture bridges the gap between language-based AI agents and physically embodied agents operating in the real world.
Module 12 — The AGI Question Revisited
Lecture: The Path to Artificial General Intelligence (61 min)
Having traversed the full landscape of modern AI capabilities, the course circles back to the AGI question — this time with the full weight of everything covered. Drawing on perspectives from Geoffrey Hinton, Yoshua Bengio, Yann LeCun, Fei-Fei Li, and Richard Sutton, the lecture maps competing definitions of AGI, surveys the major architectural approaches being actively pursued, and identifies what technical breakthroughs remain genuinely unsolved. It is an honest, researcher-level assessment of where we are and what remains.
Module 13 — Neuro-Symbolic AI: Bridging Pattern and Logic
Lecture: Neuro-Symbolic AI (59 min)
The course closes with what may be its most intellectually unifying lecture. After spending the series examining what neural LLMs can and cannot do — particularly around reasoning and planning — students encounter neuro-symbolic AI as a proposed synthesis: systems that combine the pattern-recognition strength of deep learning with the logical rigor of symbolic AI. The lecture covers core concepts, real-world applications, current challenges, and the open question of whether this hybrid paradigm represents the most promising path toward robust, interpretable general intelligence.
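The hybrid pattern described above can be sketched in miniature: a perception component emits symbolic facts, and a rule engine derives logical consequences from them. Here the neural half is stubbed out with a lookup table, and the rules and labels are hypothetical, purely to show the division of labor:

```python
def perceive(image_id: str) -> set:
    """Stand-in for a neural perception model that emits symbolic facts."""
    fake_classifier_output = {"img1": {"has_wheels", "has_engine"}}
    return set(fake_classifier_output.get(image_id, set()))

# Each rule: if all premises hold, the conclusion holds (hypothetical rules).
RULES = [
    ({"has_wheels", "has_engine"}, "is_vehicle"),
    ({"is_vehicle", "carries_cargo"}, "is_truck"),
]

def forward_chain(facts: set) -> set:
    """Apply rules repeatedly until no new facts are derived (logical closure)."""
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain(perceive("img1")))
```

The appeal the lecture explores is visible even at this scale: the symbolic half is auditable (every derived fact traces back to explicit rules), while the perceptual half can remain a learned, statistical model.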
Key Themes Across the Course
The series is unified by several threads that recur across modules. The capability versus understanding tension — whether LLMs truly reason, plan, or discover, or merely simulate these functions — runs from Module 4 all the way to the final lecture. The scaling paradigm introduced in Module 7 provides an economic and technical lens for understanding why the field has developed the way it has. The role of reinforcement learning as the engine of the most capable current systems connects Modules 6, 7, and 12. And the question of what intelligence actually is — asked implicitly in every session — is addressed most directly in the AGI and Neuro-Symbolic modules.
Recommended Audience
This course is best suited for graduate students in computer science, data science, or cognitive science; AI researchers looking for a structured survey of adjacent subfields; and technical practitioners who want to move beyond tool-use into genuine conceptual understanding of the systems they work with. Prior familiarity with machine learning fundamentals is assumed, though the lecture style is accessible enough that motivated learners from adjacent fields will find solid footing.
Instructor
Minh Trinh, PhD, is the author of the Artificial Intelligence Handbook Series and Foundations of Artificial Intelligence Agents, available through Amazon. The lectures in this series draw directly from his research and writing, making the course a living companion to his published work.

