These slides were "created by an agent." Everything you see is both content and example of the concepts being taught. Press P for presenter view.
EASY PATH: "AI and agents — it's all about context. But how do we manage that context? For the next few minutes, you will be the agent — the intelligence processing information. These slides are your context. And I'm the human, the one shaping how that context reaches you. Let's understand 3 core concepts together."
Image pops in. "Let me tell you about Finding Nemo — or as we'll call it, Finding Memo. Because AI is all about memory." Pause for the visual.
Brief intro to the analogy. The audience knows Finding Nemo. One sentence is enough. The question sets up the plot twist. "A dad crossed the ocean for his son. Simple story. But HOW he did it — that's where the lesson is."
Image pops in. Let the image breathe. The audience will internally answer "love" or "determination". Hold the pause.
Same image, text changes. The fade transition makes "How did Marlin find Nemo?" dissolve into "Was it love?" — dramatic beat. Hold the silence.
PLOT TWIST 1: "Not by love. Marlin is an agent. Agents don't have feelings. They don't 'want' to find Nemo. They process context and take actions. The thing that actually carried Marlin across the ocean wasn't emotion — it was infrastructure. A turtle named Crush showed him the way."
Crush appears. Image pops in. "This is Crush. 150 years old. He didn't swim for Marlin. He showed him something much more powerful."
"Crush showed Marlin the East Australian Current — a massive water current that carried him across the ocean at incredible speed. Crush didn't do the thinking. He didn't swim for Marlin. He showed him the system — the infrastructure — that made the journey possible. That's exactly what the human does in AI."
PLOT TWIST 2 / END OF ANALOGY: "So here's the mapping. Marlin is the agent — the intelligence in your AI system. The Water Currents are the harness — the infrastructure that lets the agent navigate knowledge easily. And you? You're Crush. You know the currents. Your job is to build them and guide the agent through them."
CORE SLIDE: "Three concepts. That's all you need. (1) The Agent Loop: prompt in, think, act, observe result, repeat. The model IS the loop. (2) Context: everything the agent can see — stacked layers of text, compacted when full. Similar intelligence, different capacity — that's Marlin vs Dory. (3) The Harness: .md files that define agents, skills, instructions, rules. Your intelligence, crystallized into structure. The harness is what makes the difference between 'AI is useless' and 'AI just built my entire presentation.'"
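If you want to show the agent loop rather than just describe it, a minimal sketch works as a backup slide. Everything here is hypothetical: `call_model` and `run_tool` are stand-ins for a real LLM call and a real tool, not any actual API.

```python
# Sketch of the agent loop: prompt in, think, act, observe, repeat.
# call_model and run_tool are hypothetical stand-ins, not a real API.

def call_model(context: list[str]) -> dict:
    """Pretend model call: picks the next action given the context."""
    # A real implementation would send the context to an LLM here.
    if any(entry.startswith("result:") for entry in context):
        return {"type": "finish", "answer": "done"}
    return {"type": "tool", "name": "search", "args": "nemo"}

def run_tool(name: str, args: str) -> str:
    """Pretend tool execution: returns an observation string."""
    return f"result: {name}({args}) -> found"

def agent_loop(prompt: str, max_steps: int = 5) -> str:
    context = [prompt]                 # everything the agent can see
    for _ in range(max_steps):
        action = call_model(context)   # think
        if action["type"] == "finish":
            return action["answer"]
        observation = run_tool(action["name"], action["args"])  # act
        context.append(observation)    # observe, then repeat
    return "step budget exhausted"

print(agent_loop("Find Nemo."))
```

The point of the sketch is the shape, not the internals: the context list only grows (until you compact it), and the model never acts directly — it only chooses the next action.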
CONCLUSIONS: "Three things to take home. (1) Think before you prompt — write in an editor, read twice, dehumanize. Every wasted token is a wasted thought. (2) Your job is to build the harness — the infrastructure — not to do the thinking yourself. (3) Context is everything. The way you distribute, describe, and reference your files IS the intelligence of your system. The model provides raw thinking power. You provide direction and structure."
"These are practical rules after a month of burning tokens. (1) Never type in the chat directly. Prompts should be deliberate — write in an editor, read twice, strip the human politeness. You're configuring a system, not chatting with a colleague. (2) Scripting has evolved beyond bash — now you create skills, agents, and context files that can be reused across sessions. (3) Your mission is congruence: make sure all the files, settings, and prompts tell a consistent story. You're not teaching the model to be smart — it already is. You're giving it the right information. (4) Think first. Every wasted token is a wasted thought."
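Rule (2) above, reusing skills and context files across sessions, can be sketched in a few lines. This is an illustrative layout, not the repo's actual structure: the directory name and heading format are assumptions.

```python
# Sketch of a harness loader: gather reusable .md skill/context files
# into one context block. The "skills/" layout is hypothetical.
from pathlib import Path

def load_harness(root: Path) -> str:
    """Concatenate every .md file under `root`, each under its filename."""
    parts = []
    for path in sorted(root.rglob("*.md")):
        parts.append(f"## {path.name}\n{path.read_text()}")
    return "\n\n".join(parts)
```

Sorting the paths keeps the assembled context stable across sessions, which matters for rule (3): the same files should tell the same story every time.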
"Everything is in the repo. The slides are markdown. The skills are markdown. The theme is CSS. Fork it, break it, rebuild it. Go find your Memo."