Hey Reader,

I just spent hours building the exact same app twice. Same prompt. Same tech stack. Same features. The only difference: one used Claude Sonnet 4.5 (Claude Code), and the other used GPT-5 (Codex CLI).

And holy sh*t, the results were not even close. Let me show you what happened.

The Test

I built a simple to-do list app with user authentication. Nothing crazy. Just login, add tasks, edit them, delete them. I even told both models: "Keep it super secure."

Should be easy enough, right? Wrong.

Round 1: Sonnet 4.5

Sonnet starts fast. Like, ridiculously fast. About 2.5 minutes later: "Cool, here's your app!"

Except... it wasn't working. 15 errors. Then 20. Then more. Each time I paste an error, Sonnet says, "Oh yeah, here's the fix!" But it's not actually fixing anything. It's creating NEW problems while "solving" the old ones. It's like talking to a Fiverr dev who just doesn't get you.

After 30 minutes, the app kind of works. But then I notice something weird. The entire to-do interface is visible... even when I'm NOT logged in. There's a sign-in button at the top, but the app is already showing.

So I ask: "Is this secure?"

Sonnet's response: "Nope. We're using weak security that can easily be broken."

Bro. YOU built this. 😠

Round 2: GPT-5

Same exact prompt. Same tech stack. GPT-5 goes quiet for 18 minutes. I'm thinking: "Is this thing even working?" Then it comes back with the entire app. There are minor errors. Another 12 minutes pass, but then... Working. Secure. Done. Zero further errors.

I ask it the same question: "Is this secure?"

GPT-5: "Yes. We're already using best practices, with proper server-side checks."

It caught the security flaw that Sonnet CREATED.

What This Actually Means

Here's the uncomfortable truth nobody talks about:

Sonnet is fast but doesn't think enough. It takes 5 seconds to "understand" a complex feature. GPT-5 takes 8-10 minutes. Because it's actually thinking.

Sonnet gets stuck in debug loops. You paste an error. It "fixes" it. It creates a new error. Repeat forever. GPT-5 actually debugs. It finds the root cause.

Sonnet has security blind spots. It stored a critical piece of sensitive data somewhere any hacker could exploit it. GPT-5 caught that from the start, without me even asking. (There's a quick code sketch of the difference right after the comparison below.)

So Which One Should You Use?

Here's my honest take:

Use Sonnet 4.5 if:
- You want a rough prototype as fast as possible
- You're willing to supervise it and debug as you go
Use GPT-5 if:
- You want the app to actually work, and be secure, on the first real pass
- You'd rather wait longer up front than go in circles later
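Since "server-side checks" is doing a lot of work in that quote, here's a minimal sketch of what it actually means. This assumes a Node/Express backend with express-session (the real stack isn't named above), and getTasksForUser is a made-up helper, so treat it as an illustration, not the app's actual code:

```ts
// Minimal sketch: why hiding the UI isn't security, but a server-side
// check is. Assumes Express + express-session; getTasksForUser is a
// hypothetical helper standing in for real data access.
import express from "express";
import session from "express-session";

// Tell TypeScript we stash a userId on the session after login.
declare module "express-session" {
  interface SessionData {
    userId?: string;
  }
}

const app = express();
app.use(express.json());
app.use(
  session({
    secret: process.env.SESSION_SECRET ?? "dev-only-secret", // real secret in prod
    resave: false,
    saveUninitialized: false,
  })
);

// Hypothetical data-access helper, for illustration only.
function getTasksForUser(userId: string) {
  return [{ id: 1, userId, title: "example task" }];
}

// The Sonnet-style app hid the to-do UI behind a sign-in button, but the
// API still answered anyone who called it directly. The fix: reject
// unauthenticated requests on the server, on EVERY protected route.
function requireAuth(
  req: express.Request,
  res: express.Response,
  next: express.NextFunction
) {
  if (!req.session.userId) {
    res.status(401).json({ error: "Not signed in" });
    return;
  }
  next();
}

app.get("/api/tasks", requireAuth, (req, res) => {
  // Scope data to the signed-in user; never return everyone's tasks.
  res.json(getTasksForUser(req.session.userId!));
});
```

The sign-in button is decoration. The 401 is the security.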
Think of it like this: Sonnet is your intern. Fast, eager, needs constant supervision. GPT-5 is your senior developer. Slower, but you can trust it to work alone.

My Recommendation

For my AI Coding Blueprint students, I now recommend starting with GPT-5. Yes, it's slower. But you'll spend way less time debugging. Way less time going in circles. And you'll actually learn secure practices instead of creating security holes.

Speed means nothing if you're building the wrong thing fast.

Talk soon,

P.S. Want to work with me? I'm reopening my blueprint next month, but you can join the waitlist.
Coder of 20+ years teaching non-technical people how to build their own software business in 30 days with AI. No devs or code required.