AI has created two reactions: veterans who feel their edge slipping, and newcomers who think the edge no longer exists. Both are wrong, but not for the reasons you'd expect.
I'm in the second camp. I'm 20, dual-majoring in computer science and mathematics, and I've been building with AI tools since ~2021. On paper, I should be telling you that experience is dead and the future belongs to people who can prompt, but I can't tell you that, because I've been watching the opposite happen.
Over the past year, I've worked alongside seasoned professionals who picked up these tools for the first time, and I've watched people my age who grew up with them. The results aren't what the headlines would predict. The newcomers didn't pull ahead; the veterans who actually engage with AI did, in ways that are difficult to replicate with technical fluency alone. I won't give you the comforting version where everything is fine and your experience is all you need, because that would be dishonest. Reality has more edges than that.
What Actually Changed and What Didn't
The thing that changed is production speed. Tasks that required specialized knowledge and hours of execution can now be approximated in minutes by someone who has never done them before. I've seen firsthand how first drafts of legal contracts, financial models, marketing copy, code, and research summaries are all being produced with AI and very minimal human steering. The barrier to producing passable output in almost any domain has collapsed.
This is real and it matters. If your value proposition was "I can produce this thing and you can't," you have a big problem on your hands, because that moat is gone.
But a lot of people confuse "passable" output with genuinely "good" output, which in simple terms is the difference between 80% there and 100% there. And in most professional contexts, passable is dangerous precisely because it looks good enough to use.
One example that stuck with me: a purchasing agent generated a vendor comparison that looked thorough. It had the right criteria, a clean table, and defensible scoring. A procurement director with 15 years of experience glanced at it for thirty seconds and flagged that two of the "vendors" were subsidiaries of the same parent company, something that changed the entire risk calculation and that the AI simply didn't know. Nor could it: the information existed in the director's memory of a 2019 merger, not in any document the agent had access to.
That's not an example of AI failing. It's a demonstration of what expertise actually is: pattern recognition accumulated through thousands of decisions where the consequences were real.
The Bottleneck Shifted and Most People Haven't Noticed
For decades, the bottleneck in professional work was production: writing the brief, building the model, drafting the code, whatever your expertise demanded. The people who could produce faster and better commanded a premium.
AI removed that bottleneck almost overnight. Production is now cheap, fast, and accessible to anyone with a subscription. Which raises the question: what is the bottleneck now?
The new bottleneck is evaluation. When anyone can generate a hundred options in the time it used to take to produce one, the scarce skill becomes knowing which two or three are actually worth pursuing and being able to identify what's good, what's wrong and what's completely missing.
This skill is also known as judgment. And judgment isn't something you can prompt for. It's built through years of seeing what works, what fails, and the most important part: why. Only the residue of thousands of decisions where you had to live with the outcome can teach this.
AI today can generate, but it cannot reliably evaluate its own output with the kind of contextual, stakes-aware judgment that professional work demands. That evaluation is your job now. And if you're good at it, you just became significantly more valuable, because your judgment now operates at the speed of AI generation instead of the speed of manual production.
Who's Actually Pulling Ahead
The dividing line isn't age, and it isn't technical ability, since you probably don't work at one of the leading AI companies such as Anthropic, Google, OpenAI, or xAI. It's whether or not you have something worth multiplying.
AI is a multiplier, and a multiplier is only as useful as its input. Bring deep domain knowledge, strong judgment, and clear thinking to these tools, and the output is transformative. Bring vague instructions and no ability to evaluate what comes back, and you get confident-sounding garbage at scale.
This is why the "AI will replace experts" narrative has it backwards. AI commoditizes the ability to produce, which makes the ability to evaluate production the differentiator. And evaluation is what experts do, whether they realize it or not. If you don't believe me, look at how we assess whether AI is actually good: benchmarks, which are standardized tests designed to measure model performance. The entire field evaluates AI by testing whether it can match human expert judgment on domain-specific tasks.
The flip side is also true. Expertise that refuses to engage with AI doesn't multiply. It stays fixed while the output capacity of everyone around it accelerates. So if you're a 40-year-old PM who refuses to let your team use AI because it's "dangerous," think again. The advantage belongs to people who combine domain knowledge with AI fluency. Not one or the other. Both.
What You Need to Know About How This Actually Works
You don't need to become technical to get value from AI. That said, the people I've watched get the best results are the ones who understood at least the basics of how the technology works, because it changes the way you prompt, the way you evaluate output, and the way you think about what these tools can and can't do. One concept in particular will change how you use them forever: AI doesn't understand anything. It predicts what text should come next based on patterns from the entire internet.
This is why it produces output that looks right in any domain. The formatting, tone, and structure will match the conventions of whatever field you're working in, making it pass surface-level inspection almost every time.
It usually breaks down when the correct answer depends on context it wasn't given, when conventions don't apply to the specific situation, or when the stakes require judgment that lives in experience rather than text. This isn't a flaw to be fixed; it's the fundamental architecture. And it means the human role isn't going away. It's shifting from "produce the output" to "direct the production and verify the result." That second role requires more expertise, not less, at least until agents can reliably help with verification too.
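The "predicts what comes next" idea is easy to see in a deliberately tiny sketch. This toy bigram counter is nothing like a real language model, but it shows how plausible-looking continuations can fall out of pattern frequency alone, with no understanding anywhere in the loop:

```python
from collections import Counter, defaultdict

# Toy illustration (not a real LLM): predict the next word purely from
# how often it followed the previous word in some training text.
corpus = (
    "the contract was signed . the contract was reviewed . "
    "the model was trained . the model was deployed ."
).split()

# For each word, count which words tend to follow it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Pick the statistically most likely continuation. No meaning involved,
    # just "this word usually came next in the patterns I've seen."
    return following[word].most_common(1)[0][0]

print(predict_next("contract"))  # "was" - looks sensible
print(predict_next("was"))       # "signed" - plausible, but not "understood"
```

Real models use vastly richer patterns over vastly more text, which is why their output looks right even in domains the model has no stake in. The failure mode is the same, though: if the answer depends on something outside the patterns it was trained on or the context you gave it, the most statistically likely continuation can still be wrong.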
How to Actually Get Good at This
Most advice says to start simple and work up. I think that's wrong; it's why you see so many calendar and note-taking apps that don't really do anything and can usually be one-shotted. Start with your hardest problem instead.
Simple tasks teach you nothing because the prompt is trivial. You could get a full calendar or note-taking app with maybe a ten-word prompt. Hard problems, on the other hand, force you to give the AI real context: what you're trying to accomplish, what constraints exist, what you've tried, what the risks are. That process of articulating context is the actual skill, and it only develops under pressure.
The mental model I like: brief the AI like a sharp colleague who just joined your team, knows nothing about your specific situation, but learns fast. The more relevant context you provide, the better the output. And nobody can provide your specific context better than you can.
On iteration: your first prompt will produce mediocre output, which is normal. The value lives in refinement. "Make this more direct." "You're assuming X, which isn't true here." "The third point is wrong because of Y. Revise with that constraint." Each cycle gets you closer to something useful. Never one-shot (unless it's super trivial), because one prompt and done leaves 90% of the value on the table.
On model selection: this matters more than people realize. Free tiers are significantly behind the frontier, by my estimate at least six months, if not a year. Obviously the companies want you to subscribe, which is why the newest models (which make a huge difference) sit behind a paywall. Pay the $20/month for Claude Pro or ChatGPT Plus, and make sure you're selecting the most capable model in the settings, because the apps default to something lighter to save compute. Opus 4.6 or GPT 5.3 is my recommendation.
On daily practice: dedicate 15 to 20 minutes a day to a real problem from your actual work and you'll pull ahead quickly. I used to fall into the consumption trap myself, watching YouTube tutorials and listening to how other people were using it. Don't. Build the reps instead. Reps build intuition, and intuition is what separates someone who uses AI occasionally from someone whose output is genuinely amplified by it. You didn't get good at your job by reading the job description. This works the same way.
The Window
Right now, AI adoption among experienced professionals is uneven. Most people are either ignoring it or using it for tasks so simple that it barely matters. This creates a temporary gap.
AI fluency compounds. Every week of real usage builds intuition that can't be acquired through courses or articles, which is why the person who started using AI six months ago isn't just six months ahead. They've developed a feel for what these tools can and can't do, how to structure problems for AI collaboration, and when to trust the output versus when to verify.
Within a year, basic AI competence will be expected in most professional fields. Within two, it will be assumed. The people who built fluency early will have a structural advantage that late adopters will struggle to close.
The professionals best positioned to benefit from these tools are precisely the ones most likely to delay. Do not let that be you.
Where This Leaves You
There's a version of the next two years where AI fluency becomes the new literacy - where not being able to direct and evaluate AI output carries the same professional cost as not being able to use email in 2005.
Right now, being good at working with AI is a differentiator, but soon it will be a baseline. The advantage will shift again, from "can you use AI" to "can you use AI in ways that reflect judgment nobody else in the room has." That shift alone requires depth.
I don't know exactly how this plays out. Nobody does, and anyone who tells you they do is selling something. But I do know that the people who are building reps right now are going to be the ones setting the standard that everyone else scrambles to meet.
So go build something.