The conversation about whether computers are catching up to humans in terms of intelligence is an old one, but it’s constantly being updated. The skyrocketing use of voice interfaces instead of keyboards and screens is leading to new speculation in this area, because making voice feasible requires great advances in artificial intelligence, and those advances are happening much faster than anyone would have predicted just ten years ago.
The autonomous car is another driver (no pun intended). It’s an incredibly complex challenge, and that’s even before we get to the ethical decisions that a self-driving car might be called upon to make. ("Do I steer into a wall and sacrifice the driver in order to avoid hitting a bunch of schoolchildren?")
Elon Musk earlier this week spoke about the need to merge human and machine intelligence before things get out of hand, which led me to wonder: Are computers catching up to humans in terms of intelligence?
They are, but only in the sense that an especially motivated tortoise can be said to be "catching up" with an Atlas rocket. Computers will never think like people.
Language is useful as a measure of intelligence in both humans and machines, so the best way to see why I’m so certain about the ultimate artificial intelligence end game is to take a look at what it means when a computer "talks," as opposed to when you and I talk.
What's happening when you and I talk is that my words trigger in you memories of sensations and experiences that you have sensed and experienced. I can't tell you about a novel experience without relating it to something you have already experienced for yourself.
For example, if I want to tell you what it's like to be out in a sleet storm, I can tell you that it's wet and that it's cold. You already know what both those things are, so now you have a pretty good idea what a sleet storm is like. I can also tell you that sleet stings when it hits your face, and you'll understand, because you already know what a "stinging" sensation feels like.
On the other hand, if you've been blind since birth, I cannot help you to understand what red is. I can tell you about red — it has a long wavelength, it comes in about five hundred variants depending on saturation and luminosity — but you will have no understanding of what red is, because color is a sensation and not a physical reality. Light at 7,000 angstroms is not red. It's not even "light." It's just an electromagnetic wave somewhere along the very wide spectrum of electromagnetic frequencies. "Light" is what it becomes when it goes through an eye and into a brain, and "red" is how a brain happens to interpret light at 7,000 angstroms. Red isn't what it is. Red is what it feels like.
Similarly, if you're deaf I can tell you about A-flat or D-sharp, but that's all you'll know: what you can say about them, without really knowing what they are. A-flat isn't a physical construct. It's a sensation provoked by a physical construct.
In order for me to tell you what red is or what A-flat is, I have to first get you to conjure up a related experience or sensation and then tell you how this new one is different. But if you've never seen a color or heard a note, there’s no starting point and you won't get it. Ever.
We can communicate new facts, but we cannot communicate novel experiences unless we can relate them to experiences the listener has already had.
Unlike humans, a computer has never had an experience or a sensation. All it "knows" is what has been described to it, with no relation to earlier experiences of its own, only to words it has previously been supplied with. The computer can say things about things, but it doesn't really know the thing except in terms of what can be said about it. In order to teach the computer novel things, we have to simulate experiences and sensations, and this will always fall far short of the mark, because language doesn't fully describe experience and sensation; all it does is evoke experiences and sensations we've already had. Since the only thing you can program a computer to experience is words, and since words are inadequate descriptors of experience and sensation, the computer can never truly understand them.
Suppose I want to tell a computer what it's like to hold a baseball.
What is a baseball? It's a round object about three inches in diameter. Even a human can only experience its surface, so let's talk about that.
It's made of leather and red thread.
Thread? Yes, kind of like string.
String? Sort of one dimensional, of a certain elasticity.
What's elasticity? Its length can change under tension, and it springs back when the tension is released.
You said "red." What's that? It's a color.
Which is? Well, there are about a million of those that the human eye can see.
So color doesn't really exist "out there"? It's an artifact of brain and eye? A sensation? Yes.
So what's the sensation of color like? Well, light comes into your eye and stimulates the cones in the retina, which send electrical signals to the brain, where neural this happens and neural that happens…
So we can go about two hundred layers down trying to describe what thread is, and ultimately fail because, at some point, we have to relate it to some other sensation the computer has had, and we can't. Then we pop back up and try to do the same thing for leather, for hardness, for roughness, for heft, and so on, going down two hundred levels for each of those, with assorted branches, only to find on every descent that we hit a wall and never quite get there. The computer has never held anything like a baseball; it has never held anything at all, never known red or rough or hard or heavy. All of those qualities can be quantified, with the one important exception of how they feel.
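One way to picture this regress is a toy program: a minimal sketch, not a real dictionary, in which every word is "defined" only by other words, so tracing any term downward never reaches a sensation, only more vocabulary. The entries and the ground function below are placeholders invented for illustration.

    # Python sketch: every "definition" is just more words,
    # so the descent never bottoms out in an actual sensation.
    DEFINITIONS = {
        "baseball": ["round", "leather", "thread", "red"],
        "thread": ["string"],
        "string": ["elastic"],
        "elastic": ["tension"],
        "red": ["color"],
        "color": ["sensation"],
        "sensation": ["experience"],
        "experience": ["sensation"],  # the circle closes on itself
    }

    def ground(word, depth=0, seen=None):
        # Try to trace a word down to something that isn't just another word.
        seen = set() if seen is None else seen
        indent = "  " * depth
        if word in seen:
            print(indent + word + "  <- already visited: going in circles")
            return
        if word not in DEFINITIONS:
            print(indent + word + "  <- undefined: still a word, not a sensation")
            return
        seen.add(word)
        print(indent + word)
        for part in DEFINITIONS[word]:
            ground(part, depth + 1, seen)

    ground("baseball")

However deep you make the dictionary, the leaves are always more words; nothing in it ever touches what a baseball feels like in the hand.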
Because language is ultimately about the evocation of experience and sensation, no matter how good a computer might get at faking that it has had experiences and sensations, you'll always be able to trip it up. It's like trying to get a blind person to understand color or a deaf person to understand music: You can throw words at them all day long, and they can throw words back in a way that makes you think they get it, but they'll never really get it, and it won't take you long to find out.
The bottom line on computer intelligence is this: In order to truly think, the thing doing the thinking has to have had experiences, and to have experiences it has to be self-aware, because there has to be something there to have the experiences. Cogito, ergo sum. Otherwise, it can’t experience anything, and therefore cannot fully comprehend language.
Science fiction aside, I’m pretty sure that it will always be beyond the capability of humankind to bestow the ability to know one’s self…or its self…on anything. That includes computers, which is why they’ll never think like we do. Artificial intelligence will become very intelligent, but it will always be artificial.
At least I think so.
Lee Gruenfeld is a Principal with the TechPar Group in New York, a boutique consulting firm consisting exclusively of former C-level executives and "Big Four" partners. He was Vice President of Strategic Initiatives for Support.com, Senior Vice President and General Manager of a SaaS division he created for a technology company in Las Vegas, national head of professional services for computing pioneer Tymshare, and a Partner in the management consulting practice of Deloitte in New York and Los Angeles. Lee is also the award-winning author of fourteen critically acclaimed, best-selling works of fiction and non-fiction.