As a technical CEO and former engineer, I get asked about my personal opinions on AI pretty much every day.
The question I hear most often isn't about the technology itself. It's some version of: "I know I need to learn this, but I have no idea where to start."
I heard it again yesterday, from a close friend who also happens to be a customer. He felt the urgency to start using AI. He just didn't know how. And then he asked me something harder: did I actually think AI was going to lead to mass unemployment?
Two big questions. I'll touch on both here and go deeper on the second one in a future post.
But first, here's the thing that's hard to explain to people on the outside: I spend 12 or more hours a day working with AI. I've built products with it, personal tools with it, automated workflows with it, replaced entire processes with it. And I still feel like I'm having a hard time keeping up. The pace is real. The acceleration is real. And the gap between the people inside it and outside it is growing faster than most people realize.
We've all heard the line by now. "You won't be replaced by AI. You'll be replaced by someone who uses AI." I'm not a fortune teller and I won't pretend to know who gets replaced by what. But I will say this: I see a chasm forming. A real one. Between the people learning to use AI and the people who aren't.
Most people think of ChatGPT when they hear AI.
It's a great product. I've largely stopped using it myself, but it's still a solid starting point. What it's good at is answering questions, helping you think through problems, and honestly, keeping you company. What it doesn't do is give you superpowers.
That requires a different kind of relationship with the technology.
Let me show you what I mean.
Story one.
My wife Cassie Debenham runs a medspa, Nervana Medical. Her team was manually reading through her entire website, page by page, pulling out Q&A content. They were three days into it and about 20% done when I found out what they were doing.
I told them to stop.
Fifteen minutes later I handed Cassie 802 Q&A pairs, perfectly formatted, exactly to her criteria. I had asked an AI coding tool to write a script that scraped her website content into readable files, then reviewed each page and extracted Q&A pairs based on what she needed.
She was shocked. It took me fifteen minutes.
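For anyone curious what that actually looks like under the hood, here's a rough sketch of the kind of script the AI tool generated. The URLs, the cleanup rules, and the second extraction step are illustrative assumptions on my part, not the exact code it produced.

```python
# Illustrative sketch only. The real script was generated by an AI coding tool
# against the actual site; the URLs and cleanup rules here are placeholders.
import os
import requests
from bs4 import BeautifulSoup

PAGES = [
    "https://example-medspa.com/services/botox",
    "https://example-medspa.com/services/laser",
]  # in practice, the full list came from the site's sitemap

os.makedirs("site_text", exist_ok=True)

for url in PAGES:
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    # Drop scripts, navigation, and footers so only readable body text remains
    for tag in soup(["script", "style", "nav", "footer", "header"]):
        tag.decompose()
    text = soup.get_text(separator="\n", strip=True)
    name = url.rstrip("/").rsplit("/", 1)[-1] + ".txt"
    with open(os.path.join("site_text", name), "w", encoding="utf-8") as f:
        f.write(text)

# Step two (done by the AI tool, not shown here): review each text file
# against the formatting criteria and extract question/answer pairs from it.
```

That's the whole trick: turn the site into plain text a model can read, then let the model do the tedious part.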
Story two.
Cassie also uses ChatGPT to help her write blog posts. She feeds it some ideas and it spits something out. The results are fine. Generic, but fine.
I showed her something different.
I built a personal agent and asked it to first go research what it takes to be an expert blog writer. Then I gave it specific instructions on the style and tone she was going for. Then I asked it to create a reusable skill so that every future blog post would be co-authored by that same expert.
She provided everything she had: previous writings, ideas, sources, notes. The agent wrote the post.
She compared it side by side with what ChatGPT produced. There was no comparison. One was clearly AI-generated. The other was thoughtful, high quality, and actually sounded like her.
Story three.
At GRIN we've been working through a brand positioning update. We considered hiring an agency.
Before committing to one, I asked Claude Cowork to go research brand positioning, become an expert on the topic, and develop a framework to guide me through the process.
It came back with 63 questions. Research-backed, structured, comprehensive.
We worked through it together over six to eight hours. At the end of that exercise I had three documents totaling over 60 pages: a full Positioning Strategy, a Tactical Execution Strategy, and a Priority Framework.
The output isn't just a document. It's everything that was in my head as CEO, now extracted and usable. It's become a set of dominoes. Every conversation, every hire, every decision has something to anchor to now.
That's what AI can actually do when you know how to use it.
The part that keeps me up at night.
My mom loves ChatGPT. The other day she told me she heard on the news that AI was giving people wrong answers and causing problems. She was worried.
She's not wrong. But she's missing the bigger picture. What she heard about is called a hallucination, and it's one of about a hundred nuances that most people don't understand yet about how this technology actually works.
Last month I sat across from three very successful friends. Sharp people. Accomplished people. People who have been trying to read up on AI and stay current. Within about 60 seconds of talking through it, all three of them realized they understood almost nothing. And they started asking for help.
That moment stuck with me.
So here's where I think we are.
I see three paths forward.
One: people start actually learning to use AI, and we all level up together. A rising tide.
Two: most people don't learn, and a small group of people who can wield it effectively become something like a new class of operator. Not smarter, not more talented, just more capable in ways that compound fast.
Three: AI advances to the point where knowing how to use it doesn't matter anymore because the technology is largely autonomous. This is a real possibility. Nobody knows the timeline.
I don't know which of those plays out. I'm not sure anyone does.
What I do know is that right now, today, the gap is real and it's widening.
Why I'm writing this.
I'm not a technical AI researcher. There are people far smarter than me on the engineering side of this.
But I've spent enough time inside this technology, building with it every day, that I can translate. I can take something that feels overwhelming and show you what it actually looks like in practice.
That's what those three stories above are. Not hype. Just real things that happened, with real outcomes, that most people don't know are possible yet.
So that's what I'm going to start doing here. Writing regularly, sharing what I'm seeing, and helping everyday people get on the right side of this thing. Not with technical jargon or hype, just honest takes from someone who's in it every day.