By Cliff Potts, CSO and Editor-in-Chief of WPS News

Baybay City, Leyte, Philippines — April 21, 2026 — 17:35 PHT


This is an open letter to the people building artificial intelligence, but it is also meant for the people trying to understand why this matters.

Machine learning did not begin with chatbots, image generators, or Silicon Valley marketing. It goes back to a much earlier idea: that a machine might improve through experience instead of simply following a fixed list of instructions.

One of the early pioneers of that idea was Arthur Samuel at IBM in the 1950s. He worked on a checkers program that learned by playing games, including games against itself, and improved over time. That may sound simple now. It was not simple then. It was a turning point.

The old model of computing was straightforward. Humans told the machine exactly what to do, step by step, and the machine obeyed. Samuel helped introduce another possibility: a machine could be given a framework, a goal, and room to improve.

That was not just a technical change. It was a philosophical one.

It meant human beings were no longer limited to building machines that only executed commands. We were beginning to build systems that could adapt.

From Checkers to Modern AI

Modern AI is vastly more powerful than Samuel’s checkers program. The scale is different. The speed is different. The range of tasks is different.

But the core idea is still the same.

A machine is exposed to information, patterns, examples, or outcomes. It adjusts. It improves. It becomes more useful over time.

That is the thread running from early machine learning to the systems we use today.
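
To make that concrete, here is a minimal sketch of what "learning from experience" means in code. This is not how any modern system works under the hood, and it is certainly not Samuel's checkers program; the names and numbers are only my illustration of a machine adjusting its own parameters instead of following a fixed rule.

```python
# A toy "learn from examples" loop: the program is shown inputs and
# outcomes, and it adjusts its own numbers rather than following a
# rule a human wrote out in advance. Purely illustrative.
examples = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]  # (input, observed outcome)

weight, bias = 0.0, 0.0      # the machine's adjustable "knowledge"
learning_rate = 0.01

for _ in range(1000):        # repeated exposure to the same experience
    for x, target in examples:
        prediction = weight * x + bias
        error = prediction - target
        # Nudge the parameters in the direction that reduces the error.
        weight -= learning_rate * error * x
        bias -= learning_rate * error

print(f"learned rule: outcome ≈ {weight:.2f} * input + {bias:.2f}")
```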

The difference is that today’s systems can work across language, code, images, and reasoning tasks at a scale Samuel could never have imagined. What once fit inside a checkers board now touches education, research, publishing, medicine, software, and daily life.

That matters because it changes what a computer is.

A computer used to be a tool that waited for instructions. Now it is increasingly a tool that can assist with interpretation, synthesis, drafting, and problem solving.

That is not a small leap. That is one of the major technological turns of modern history.

What This Means to Me

I want to say something here that matters for context.

I was working with rudimentary artificial intelligence systems as early as 1990, building simple expert systems at a time when the tools were crude and the concept was still more promise than reality. The basic idea was already there. A machine could assist with structured reasoning. But the software was primitive, the hardware was limited, and the gap between the idea and the execution was still enormous.

So when I say I have been waiting for this my entire life, I do not mean that casually.

I mean I have been watching this horizon for decades.

Not for a gimmick. Not for a toy. Not for a trend.

I have been waiting for software that could actually keep up with the way I think.

For years, most digital systems felt limited. Search engines could retrieve information. Word processors could hold text. Databases could store material. But none of them could really think with me. None of them could help me build in real time the way this can.

When I first heard the noise around artificial intelligence, I was skeptical. I heard the fear. I heard the nonsense. I heard the usual human habit of misunderstanding a powerful new tool before learning what it really is.

Then I sat down, spent a little money, got a book, did some reading, did some research, and started using it.

And then I understood.

This is it.

This is what I had been waiting for.

To me, this feels almost as monumental as the moon landing. Not because of spectacle, but because of what it opens up. It is a threshold moment. It is the point where a person working alone can suddenly do more, think further, structure better, and build faster than before.

That is not a small thing. That is empowerment.

And for someone like me, who has been building archives, essays, systems, and records for future readers, that matters a great deal.

The Limitation

Now we get to the part where praise turns into proposal.

Current AI systems are powerful, but they are still held back by one major limitation.

They do not truly learn with the user over time in a continuous, persistent, individualized way.

They can be helpful in the moment. They can adapt to tone and context inside a conversation. They can even remember some preferences. But they do not fully retain the progression of work the way a true long-term collaborator would.

That creates a real problem.

A user explains something. Then explains it again. Then explains it again in another form. The machine may verify it, handle it well in the moment, and still not fully carry that learning forward in the way that would make future collaboration smoother.

The result is friction.

Too often, the user is ready for the next step while the system is still asking for the last step.

Too often, the user says, “I’m already doing that. What comes next?”

That is not a minor inconvenience. It is a structural limitation in the relationship between person and machine.

What Should Come Next

The next phase of AI should be a personalized learning layer tied to the individual user.

Not a system that changes the global model for everyone.
Not a reckless free-for-all.
Not a machine that absorbs anything and everything without judgment.

A contained, verified, user-specific continuity layer.

In practical terms, that would mean an AI that can learn from repeated interaction with one user, retain validated context, and improve its usefulness over time within that relationship alone.
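
To be clear about what I am asking for, here is a rough, hypothetical sketch of what such a continuity layer might look like. Every name and detail in it is my own illustration, not a description of any existing product or of how one should be built.

```python
# A hypothetical sketch of a per-user "continuity layer." Everything here
# is an assumption about how such a layer *could* be structured.
import json
import time
from pathlib import Path

class ContinuityLayer:
    """Retains validated context for a single user, separate from the model."""

    def __init__(self, user_id: str, store_dir: str = "continuity"):
        self.path = Path(store_dir) / f"{user_id}.json"
        self.path.parent.mkdir(parents=True, exist_ok=True)
        self.entries = json.loads(self.path.read_text()) if self.path.exists() else []

    def remember(self, fact: str, verified: bool, source: str = "user") -> None:
        # Only this user's record is touched; the underlying model never changes.
        self.entries.append({
            "fact": fact,
            "verified": verified,       # flagged, not hidden, when uncertain
            "source": source,
            "recorded_at": time.time(),
        })
        self.path.write_text(json.dumps(self.entries, indent=2))

    def context_for_session(self, limit: int = 20) -> str:
        # Hand the most recent validated facts back to the assistant at startup,
        # so the user does not have to explain the same thing a third time.
        recent = [e for e in self.entries if e["verified"]][-limit:]
        return "\n".join(e["fact"] for e in recent)

# Usage: one record per user, carried forward between sessions.
layer = ContinuityLayer(user_id="cliff")
layer.remember("Prefers archival essays structured for future readers.", verified=True)
print(layer.context_for_session())
```

The important property is not the storage format. It is that the record belongs to one user, travels with that user between sessions, and leaves the underlying model untouched.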

That matters because not all intelligence is general intelligence. Some of the most useful intelligence is relational intelligence. It comes from knowing the person you are working with, the projects they are building, the patterns they follow, the obstacles they run into, and the steps they have already completed.

That is what makes collaboration real.

And that is the direction AI should move.

The Safety Question

The obvious objection is safety.

What if users teach the system bad information?
What if misinformation gets reinforced?
What if the model drifts?
What if manipulation takes place?

These are legitimate concerns.

But they are not arguments against the idea. They are design challenges.

The answer is not to avoid personalized learning altogether. The answer is to build it with safeguards.

Learning should be:

  • limited to the individual user environment
  • verified against established knowledge where possible
  • flagged when uncertain
  • structured so that preference, workflow, and validated continuity are retained without corrupting the core model

That is the point.

We do not need reckless AI.
We need AI that can grow with a person responsibly.
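
Here is one rough, hypothetical illustration of how those safeguards could be expressed in code. The names and the tiny "established knowledge" check are stand-ins of my own invention; a real system would need far more than this.

```python
# An illustrative sketch of the safeguards listed above, nothing more.
# "ESTABLISHED" stands in for whatever reference knowledge a real system
# could check against; in practice that check would be far more involved.
ESTABLISHED = {
    "arthur samuel worked at ibm": True,
    "the moon is made of cheese": False,
}

def review_claim(claim: str) -> dict:
    key = claim.lower().strip(".")
    if key in ESTABLISHED:
        # Verified against established knowledge where possible.
        return {"claim": claim, "status": "verified" if ESTABLISHED[key] else "rejected"}
    # Flagged when uncertain: kept, but marked so it is never treated as fact.
    return {"claim": claim, "status": "uncertain"}

def apply_to_user_layer(user_memory: list, claim: str) -> None:
    entry = review_claim(claim)
    if entry["status"] != "rejected":
        user_memory.append(entry)   # scoped to one user's environment only
    # Note what never happens here: no global weights are updated,
    # so one user's mistakes cannot corrupt the core model.

memory: list = []
apply_to_user_layer(memory, "Arthur Samuel worked at IBM")
apply_to_user_layer(memory, "My project uses a three-part archive structure")
print(memory)
```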

Why This Matters

This matters because AI is no longer just a curiosity. It is becoming part of how people think, write, research, plan, and build.

If the system remains powerful but forgetful, it will still be useful. But it will stop short of what it could become.

If it gains the ability to learn with a person safely over time, then it becomes something more than a tool.

It becomes a real intellectual partner.

That is the future worth building.

Arthur Samuel helped move machines from obedience to adaptation. That was the first great shift.

The next great shift is from generalized adaptation to individualized continuity.

Not just machines that learn.

Machines that remember who they are learning with.

Conclusion

So this is my message to OpenAI.

You have built something extraordinary. For some of us, it is not just impressive. It is deeply meaningful. It is the arrival of a capability we have been waiting for our entire lives.

Do not stop at the current stage.

The next step is clear.

Build the version that can grow with the user, safely, intelligently, and over time.

That is not a gimmick. That is not luxury. That is the logical next phase of machine learning.

And for those of us who recognize what this moment is, it would mean everything.


If this work helps you understand what’s happening, help me keep it going: https://www.patreon.com/cw/WPSNews

For more from Cliff Potts, see https://cliffpotts.org


References

Samuel, A. L. (1959). Some studies in machine learning using the game of checkers. IBM Journal of Research and Development, 3(3), 210–229.

Russell, S., & Norvig, P. (2021). Artificial intelligence: A modern approach (4th ed.). Pearson.

Mitchell, T. M. (1997). Machine learning. McGraw-Hill.

McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (2006). A proposal for the Dartmouth summer research project on artificial intelligence, August 31, 1955. AI Magazine, 27(4), 12–14. (Original work published 1955)

