I Built a 36-Slide Presentation with AI in Two Hours. Here Is What I Learned.
On Wednesday I’m giving a two-hour workshop on agentic coding for a client. I needed a presentation. I had the content in my head but not a single slide. So I sat down with Claude in Cowork mode and built the entire presentation from scratch — in about two hours.
This is an honest walkthrough of how it went. What worked, what didn’t, and what I’m taking away for next time.
The starting point
I had thought a lot about the content but hadn’t written anything down. No slides, no bullet points, no document. Just a bunch of loose thoughts and a rough structure in my head.
My first step was to talk through my ideas. I dictated passages of varying length and put everything in a text file, completely unedited. I noted in the file that the text was transcribed, to give the AI a better chance of interpreting anything that came out garbled or unclear in spoken form.
In addition to the brain dump, I had a mood board: screenshots of tweets I wanted to quote, benchmark graphs, a timeline I’d seen in another presentation, and my logo. The instruction was simple: “I definitely want these images and this content included, in some format.”
The tool
I used Claude Cowork with Opus 4.6. The format ended up being an HTML file with embedded CSS and JavaScript — a complete, standalone presentation that runs in the browser. Not PowerPoint, not Google Slides, but plain HTML.
Why? I wanted full control over the design, and I wanted to be able to share the presentation as a single file with clickable links.
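For readers who haven't seen this pattern: the whole deck lives in one file, with slides as sections that JavaScript shows one at a time. The skeleton below is an illustrative sketch, not the generated presentation; all class names and content are invented.

```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Deck</title>
<style>
  /* Each slide fills the viewport; only the active one is shown. */
  .slide { display: none; width: 100vw; height: 100vh; padding: 4vw; box-sizing: border-box; }
  .slide.active { display: flex; flex-direction: column; justify-content: center; }
</style>
</head>
<body>
<section class="slide active"><h1>Agentic coding</h1></section>
<section class="slide"><h2>Why it matters</h2></section>
<script>
  // Arrow keys move between slides, clamped to the first and last.
  const slides = document.querySelectorAll('.slide');
  let current = 0;
  document.addEventListener('keydown', (e) => {
    if (e.key !== 'ArrowRight' && e.key !== 'ArrowLeft') return;
    slides[current].classList.remove('active');
    current = Math.min(slides.length - 1,
                       Math.max(0, current + (e.key === 'ArrowRight' ? 1 : -1)));
    slides[current].classList.add('active');
  });
</script>
</body>
</html>
```

Open the file in any browser and it just works, which is also what makes it shareable as a single attachment.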
The process
The session lasted about two hours. During that time we went through a large number of iterations back and forth. The AI generated a first draft that I then refined through short, targeted instructions.
Some of the changes we made:
- Fixed navigation bugs in the JavaScript code (the arrow keys stopped working)
- Rebuilt a timeline from a screenshot into native HTML with progressive reveal animation
- Adjusted font sizes in two rounds — everything was too small for projector display
- Reformatted quote slides from embedded tweet screenshots into stylized text blocks with portrait images
- Added an entirely new slide about context engineering based on an Anthropic blog post I found during the session
- Corrected an inaccurate description of MCP (from “so AI understands your code” to “so you give your AI tools”)
Each change was a prompt. Sometimes specific: “change the heading on the MCP slide.” Sometimes open-ended: “the storytelling on the evolution slide isn’t landing — can you redo it? Feel free to think for yourself too.”
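The arrow-key bug in the list above is typical of generated slide code: the slide index walks past the first or last slide and navigation silently dies. A minimal sketch of the fix, using a hypothetical `step` helper rather than the actual code from the session:

```javascript
// step: compute the next slide index, clamped to [0, total - 1].
// delta is +1 for ArrowRight and -1 for ArrowLeft.
function step(current, delta, total) {
  return Math.min(total - 1, Math.max(0, current + delta));
}

// In the browser this would be wired to the keyboard roughly like so:
// document.addEventListener('keydown', (e) => {
//   if (e.key === 'ArrowRight') current = step(current, +1, slides.length);
//   if (e.key === 'ArrowLeft')  current = step(current, -1, slides.length);
// });
```

The clamp means pressing ArrowLeft on slide 1 or ArrowRight on slide 36 is a no-op instead of an out-of-range index.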
What worked well
Better results in less time. Compared to building the presentation manually in Google Slides, I got a better end result faster. 36 slides with custom design, animations, responsive layout, clickable source references, and portrait images with Creative Commons attribution.
A single interface. This was perhaps the most important insight. Normally I would have needed to navigate PowerPoint’s UI, check email and calendar, search things on the internet, and juggle a bunch of other tools. Each such switch carries a context-switching cost and a cognitive tax — plus all the distractions lurking in other tabs. Now I worked straight through the presentation in a single interface. The focus was markedly better.
Voice transcription as input. Talking through my thoughts and giving the AI a raw, unedited brain dump worked surprisingly well. I had the broad strokes ready, and the AI could structure and refine. I’d recommend it to anyone starting from the same place — a blank page and a bunch of loose ideas.
Balance between direction and freedom. Sometimes I said exactly what I wanted. Other times I asked open-ended questions and let the AI think. Both worked. It felt like directing rather than proofreading.
What didn’t work as well
Image handling was unexpectedly difficult. Things I thought would be trivial — finding a high-resolution logo, finding the Claude Code mascot — turned out to be really finicky. The AI searched the web in several rounds without managing to download the images. In the end I had to do it myself and drop the files into the folder. This is a clear weakness in the workflow.
Font size required two iterations. I said “make the text bigger for the projector” and got a result that was still too small. The root cause turned out to be fixed pixel values (max-width: 800px) that constrained the content regardless of screen size. It was fixed by switching to viewport units (vw). This is a typical situation where the AI fixes the symptoms instead of the root cause — and it took my review to identify the problem.
The first draft wasn’t “wow.” I actually tested two first drafts — one with Claude Cowork (Opus 4.6) and one with OpenAI Codex. The Codex version was marginally prettier in its design, but the Swedish was atrocious and it contained a lot of placeholder text. I continued working with Claude. Neither of the first drafts was ready to present — it was the iterations that made the difference.
Ownership
A common question about AI-generated content is whether it feels like “your own.” The answer: the presentation feels like ours — mine and the AI’s. I directed the content, the structure, and the creative direction. The AI built, formatted, and researched. It’s a collaboration, and that’s perfectly fine. Claude is credited on the last slide.
What I’ll do differently next time
Three things:
- Visual identity. My only design instruction was “modern HTML presentation.” I want to work out a clearer look-and-feel that I can reuse. Color palette, typography, layout patterns — documented so the AI can follow it consistently.
- Better image handling. Either I prepare all images in advance, or I find a better workflow for giving the AI access to image assets.
- Document improvements. Instead of trying to remember what went well and what didn’t, I’ll summarize lessons in a file the AI can read next time. AI-first mindset — the same loop thinking that the presentation itself is about.
The numbers
- Total time: ~2 hours
- Final result: 36 slides, ~2,000 lines of HTML/CSS/JS
- Iterations: 36+ prompts back and forth
- Tool calls by the AI: 170+ (of which 74 file edits, 44 file reads, 14 web searches)
- Context fills: 1 (the session ran so long that the AI’s context filled up and was summarized)
- Images I had to fix myself: 2
Conclusion
This workflow isn’t magic. It requires that you know what you want to say. But if you have the content in your head — or at least the broad strokes — the AI can help you structure, design, and produce significantly faster than you can on your own.
The most important benefit wasn’t the speed. It was the focus. One interface, one conversation, zero distractions.
The presentation was created by Magnus Gille in collaboration with Claude Opus 4.6.
Magnus Gille is a consultant in AI and technology strategy, winner of the Swedish AI Prompting Championship 2025, and former Product Owner AI Enablement at Scania. Contact: gille.ai