The Shortcut That Always Worked
I grew up on a turkey farm, where you are constantly faced with problems that do not have clear answers. A piece of machinery breaks down and the birds still need to be fed. A system has to work by morning with the parts you have, not the parts you wish you had. Nothing comes with directions. You figure it out by pulling from whatever you actually understand and stitching it together into something that works.
In college and in the research labs I worked in afterward, the pattern held. We were not reproducing experiments. We were trying to find new ways to do new things. The information I had been taught turned out to be a foundation, not the thing. I was not sitting at a bench recalling facts. I was taking novel approaches to problems that had not been solved, using knowledge as raw material rather than as a set of stored answers.
Creativity is the overlap of many pieces of understanding, applied to a situation that does not match any one of them exactly. I did not have the phrase for it then. I just knew that the people who could do it and the people who could not were very different people.
When I started teaching, I found both kinds in my classroom. And I started to see which approaches to teaching produced which kind.
What the lab bench reveals
Early in my physics teaching, I ran a projectile motion lab that I still think about. A Hot Wheels car, a ramp, a photo gate at the bottom of the ramp, and a ruler. Students had to place a hoop somewhere on the floor and hit the hoop with the car on the first try to earn an A.
The kinematics equations they needed were ones we had practiced for weeks, and the setup gave them everything those equations required. The photo gate gave them time. The ruler gave them a way to measure distance. The only thing they had to figure out, to get the speed of the car at the bottom of the ramp, was that the distance they needed to measure was the length of the car itself.
Some students found it in thirty seconds. Some stared at the setup and froze.
The math was easier than anything we had covered all unit. The physics was the same kinematics they had been doing for weeks. What had changed was that on paper, the speed had always been given in the question. In the real world, on a lab bench, with the tools and the information all sitting in front of them, they had to recognize what they had and connect it to what they needed. The problem matched their study materials in concept. It did not match in form. And that turned out to be the whole game.
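The calculation the students had to see is short once the insight lands. As a sketch (the numbers below are hypothetical, and I am assuming the classic version of this lab where the car launches off the bench edge and lands on the floor):

```python
import math

# Hypothetical lab values, not from the essay.
car_length = 0.07    # m: the ruler measurement -- the length of the car itself
gate_time = 0.035    # s: how long the car blocks the photo gate beam
bench_height = 0.90  # m: height of the launch point above the floor
g = 9.81             # m/s^2

# The insight: the gate reports blocking time, so
# speed at the bottom of the ramp = (car length) / (blocking time).
speed = car_length / gate_time

# Standard kinematics: time to fall the bench height, then horizontal range.
fall_time = math.sqrt(2 * bench_height / g)
hoop_distance = speed * fall_time

print(f"speed = {speed:.2f} m/s, place the hoop {hoop_distance:.2f} m out")
```

Nothing here goes beyond the equations practiced for weeks; the only new step is recognizing that the car's own length is the distance the gate needs.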
Organic chemistry has the same pattern, one layer deeper. I give students a full reaction mechanism on an exam, circle two steps with nearly identical substrates, and ask why a different thing happens at each step. The real answer is that one of the intermediates can adopt an aromatic conformation and the other cannot, and aromaticity stabilizes it. A student who understands why each arrow gets drawn can see that. A student who memorized the mechanism by brute force has no access to that reasoning, because the thing that explains it was never what they were studying.
Both kinds of students were in my classroom from the beginning. Over the years I have spent more and more of my time teaching students what learning actually is, because most of them arrive thinking that learning and memorization are the same thing. I have to reprogram them before we can do the real work.
The flaw was always there
We had been conflating memorization with learning for as long as I had been in a classroom, and for as long as my teachers had been in theirs. The conflation worked, or at least appeared to work, because the assessments we used mostly rewarded recognition. A student who could reproduce the right pattern on the right kind of problem looked identical, on paper, to a student who understood the underlying material.
Both got the A. Only one of them could do the work.
This was never a secret. Every teacher I know has stories about the top student who could not function once the problems stopped matching the study guide. We just did not have a mechanism that exposed it at scale.
Then AI arrived.
What AI actually revealed
The common narrative is that AI broke learning. Students stopped doing their own work. Grades no longer reflect understanding. This is true in a narrow sense and misleading in a broader one.
What AI actually did was redistribute what used to separate students. For a long time, the thing that separated grades in most classrooms was memorization and willingness to complete the work. Now every student turns in completed work. Every student is producing polished output in volumes that used to be impossible. On the take-home side of coursework, the old signal has been erased.
Then they walk into an in-class assessment that requires application, and the results are not just poor. They are off a cliff. The students who had been coasting on the old shortcut discover, all at once, that the shortcut was never the thing. The thing underneath had never been built.
That is the reveal. But there is a second piece that does not get talked about as much.
For the students who already understand, AI is not a crutch. It is an accelerant. They use it to generate extra practice problems. They ask it to explain a concept a different way when one framing is not clicking. In my own classes, I have built chatbots with guardrails that tutor rather than answer, and the students who engage with them go deeper into the material than they would have otherwise. The students who understand are learning faster than ever.
AI did not create two tiers of students. It revealed two tiers that had always been there, and now it is actively widening the gap between them.
Memorization is not the enemy
I want to be careful here, because the failure mode of this argument is to land on "memorization is bad" and then get dismissed by anyone who actually teaches.
Memorization is essential. You cannot read without memorizing the sounds of the alphabet. You cannot do chemistry without memorizing functional groups, common reagents, the rules that govern how electrons move. Memorization is the substrate. Learning is what you build on top of it.
The failure is not memorization. The failure is memorization substituted for the thing on top. The student who memorized the reaction mechanism has a useful body of knowledge. The student who memorized the mechanism and stopped there has a body of knowledge they cannot apply, because they never learned to reason about why each step happens. The same information can be the floor of real competence or a ceiling disguised as one. The difference is whether transfer and synthesis ever happen.
- Transfer is when you use something you learned in one context to solve a problem in a context you have never seen.
- Synthesis is when you combine multiple pieces of understanding into something new.
These are what learning actually produces. Without them, you have a filing cabinet with no one home to use it.
The problem is that transfer and synthesis are hard to measure. Recognition is easy to measure. So we measured what was easy to measure and called it learning, and for a long time the gap between the proxy and the thing was hidden by the fact that most students did at least a little of the real work on their own.
AI closed that gap for the students who were only doing the easy part. What looked like a small supplement turned out to be most of the work.
The same pattern in development
The same shape shows up in software, and it is worth saying out loud because the stakes are not hypothetical there either.
There has always been a category of developer who can produce code but does not understand abstractions. They do not have a mental model of the system at the architectural level. They are pattern-matching their way through implementations the same way a memorizing student pattern-matches through problems.
AI has done for them what it has done for the memorizing student. They can now produce more code, faster, and at the scale of a single component or a handful of components, it mostly works. That is real.
The architectural level is a different story. Code that works at the component level and fails at the architectural level is a category of failure I now see constantly. The abstractions do not line up. The domain model is wrong. The system fights itself because nobody built a coherent mental picture of what it was supposed to be. The code is functional in isolation and broken in aggregate, which is a worse failure mode than code that does not work, because it takes longer to discover.
Real engineering still requires what it has always required: an understanding of architecture, a clean understanding of the domain the system is built for, and the ability to synthesize those two into something that actually works. That is not a skill AI provides. It is a skill AI reveals you have or do not have, the same way an in-class assessment reveals whether a student actually learned.
What I have changed
In my own classroom, I have restructured almost everything around this distinction. The interventions are not technological. They are about assessment.
- Problems that require novel application. Not rearrangements of the homework. Situations they have never seen that require the same principles they have studied.
- Oral components. You cannot substitute someone else's reasoning for your own when I am standing across the lab bench asking you why.
- Lab work where the answer depends on the situation in front of you. The Hot Wheels lab. The circled mechanism. Problems that cannot be pattern-matched because they have never existed before in exactly that form.
These are expensive. They have always been expensive. The reason we defaulted to cheaper measurements was not that we thought they measured the real thing perfectly. It was that the gap between the proxy and the thing was small enough that we could live with it.
The gap is not small anymore. As AI moves further into knowledge work of every kind, the capacity that used to quietly separate students who would build careers from students who would coast through them is now the visible dividing line between work AI can replace and work it cannot.
AI did not disrupt learning. It revealed a flaw we had been living with for as long as there have been classrooms. We were measuring the shortcut, not the thing. The shortcut finally became too good to ignore.