There’s a take floating around right now that sounds progressive, even liberating: “Don’t bother learning to code. AI will do it for you.”

It’s wrong. And if you’re giving this advice to your kids or to anyone early in their career, you might be setting them up for a ceiling they’ll hit hard in about two years.

Here’s why.

The Midjourney Lesson Nobody Talks About

Andrew Ng illustrated this perfectly with a dead-simple experiment.

He opened Midjourney and typed: “make pretty pictures of robots.”

The result was exactly what you’d expect from a vague prompt — generic, mediocre, forgettable.

Then someone with an art-history background sat down with the same tool. They used precise vocabulary: chiaroscuro lighting, Baroque composition, a desaturated earth-tone palette, dramatic foreshortening.

Same AI. Stunning output.

The difference wasn’t the tool. It was the operator’s vocabulary — their deep understanding of the domain they were working in. The AI didn’t replace art knowledge. It made art knowledge the control interface.

Now Apply That to Code

We’re watching this exact pattern play out in software engineering right now, and the stakes are much higher than pretty pictures.

Tools like OpenAI Codex, Claude Code, and Cursor are genuinely impressive. They can scaffold an app in minutes. They can write boilerplate faster than any human. They can translate a plain-English description into working code.

But “working code” and “production-grade software” are not the same thing.

When someone without CS fundamentals uses these tools, they get code that runs. It passes the demo. It looks great in a tweet. And then it falls apart the moment it meets real users, real scale, or real edge cases. Memory leaks. Race conditions. Security holes. Architecture that paints you into a corner at 10x traffic.

They can’t diagnose these problems because they don’t have the vocabulary to describe what’s going wrong — or to instruct the AI to avoid it in the first place.
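To make the race-condition failure mode concrete, here is a minimal Python sketch. The toy `Account` class and its method names are invented for illustration, not taken from any real AI output: a check-then-act withdrawal that works perfectly in a single-threaded demo, then overdraws the moment two requests arrive at once.

```python
import threading
import time

class Account:
    """Toy account with a racy and a locked withdrawal path (illustrative only)."""

    def __init__(self, balance):
        self.balance = balance
        self.lock = threading.Lock()

    def withdraw_racy(self, amount):
        # Check-then-act with no synchronization: a second thread can pass
        # the same balance check before either thread updates the balance.
        if self.balance >= amount:
            time.sleep(0.25)            # stand-in for real work (I/O, DB call)
            self.balance -= amount
            return True
        return False

    def withdraw_safe(self, amount):
        # Holding the lock makes the check and the update one atomic step.
        with self.lock:
            if self.balance >= amount:
                time.sleep(0.25)
                self.balance -= amount
                return True
            return False

def run(method):
    """Fire two concurrent withdrawals of 70 against a balance of 100."""
    acct = Account(100)
    results = []
    workers = [
        threading.Thread(target=lambda: results.append(method(acct, 70)))
        for _ in range(2)
    ]
    for t in workers:
        t.start()
    for t in workers:
        t.join()
    return acct.balance, results.count(True)

# Called once at a time, both versions behave identically. That's the demo.
balance, succeeded = run(Account.withdraw_racy)
print("racy:", balance, succeeded)   # typically BOTH withdrawals "succeed"
balance, succeeded = run(Account.withdraw_safe)
print("safe:", balance, succeeded)   # exactly one succeeds; balance stays 30
```

The racy version will pass every sequential test you throw at it, which is exactly why this class of bug survives until real users arrive.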

When someone who understands systems, data structures, concurrency, and architecture uses the same tools? They’re not prompting. They’re directing. They know what to ask for, what to reject, and what the AI is likely to get wrong. They can read the generated code and spot the subtle bug the AI introduced on line 47.

Same AI. Wildly different outcome.
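As one concrete instance of the kind of subtle bug a reviewer with fundamentals catches on sight, here is a classic Python pitfall. The helper below is a hypothetical example of plausible generated code, not output from any specific tool: it runs, it passes a one-off demo, and it quietly leaks state across calls because Python evaluates default arguments once.

```python
# A plausible-looking helper. It runs, and a one-off demo passes.
def add_tag(tag, tags=[]):        # bug: the default list is created once...
    tags.append(tag)              # ...and shared by every subsequent call
    return tags

print(add_tag("urgent"))          # ['urgent']
print(add_tag("low"))             # ['urgent', 'low']  <- state leaked between calls

# The standard fix, obvious to anyone who knows why the bug happens:
def add_tag_fixed(tag, tags=None):
    if tags is None:
        tags = []                 # fresh list on every call
    tags.append(tag)
    return tags

print(add_tag_fixed("urgent"))    # ['urgent']
print(add_tag_fixed("low"))       # ['low']
```

Nothing about the buggy version fails in a quick manual check. Knowing to look for it is the difference between prompting and directing.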

The New Skill Matrix

Here’s how it actually breaks down:

No fundamentals + AI → You ship fast. It breaks fast. You can’t debug it. You prompt in circles trying to fix cascading issues you don’t understand. You’ve built a house of cards and called it an MVP.

Strong fundamentals, no AI → You build solid software, but you’re now 5-10x slower than the person next to you who knows the same things and also uses AI effectively. In a competitive market, speed matters.

Strong fundamentals + AI → This is the 10x combination. You understand the system deeply enough to direct AI precisely, validate its output critically, and intervene surgically when it gets things wrong. You move fast and things don’t break.

We’re Not Eliminating Engineers. We’re Splitting Them.

The industry isn’t shrinking its need for people who understand software. It’s bifurcating.

On one side: people who can use AI to generate code but can’t evaluate or improve it. They’ll fill a niche — prototyping, simple automations, one-off scripts. But they’ll cap out quickly. They’re the equivalent of someone who can operate a calculator but can’t set up the equation.

On the other side: people who understand how systems work and use AI as a force multiplier. These are the people leading teams, architecting systems, building the things that actually scale. And they’re becoming disproportionately valuable because the gap between “it works in the demo” and “it works in production” is exactly where their expertise lives.

The rarest talent in tech right now isn’t “people who can write prompts.” Prompt-writing is already a commodity — the tools are getting better at interpreting vague input every quarter.

The rarest talent is people who understand systems deeply enough to orchestrate AI effectively. People who can look at AI-generated code and see what’s missing, what’s fragile, and what’s going to blow up at 3 AM on a Saturday.

The Real Shift

AI didn’t remove the need to learn computer science.

It made computer science the language you use to control the most powerful tools ever built.

Learning to code was always valuable. But there was an argument — maybe a thin one — that not everyone needed to understand recursion or memory management or distributed systems.

That argument is gone now. Because now, the people who understand those things don’t just write better code. They direct AI to write better code. And the leverage on that knowledge has gone from 1x to 10x almost overnight.

“Don’t learn to code” isn’t forward-thinking advice.

It’s the modern equivalent of saying “don’t learn to read — you can just listen to audiobooks.”

Technically functional. Strategically crippling.


If your kid is interested in technology, in building things, in understanding how the digital world works — the answer hasn’t changed. Learn the fundamentals. Learn to code. The tools will keep evolving, but the people who understand what’s happening under the hood will always be the ones driving.