The iPad and other tablets make up more than half of all personal computer sales, and their share of the market just keeps growing.
Consumer demand for new diversions actually drives innovation. This is especially true for video games.
In fact, gaming is bigger than the movie and music businesses combined.
Many games take place in virtual worlds that respond onscreen to player commands in real time. But traditional central processing units (CPUs) are not very good at this.
CPUs excel at performing instructions in the order they are given. That works for applications like word processors, database programs, and operating systems, but not for huge interactive virtual environments.
Graphics processing units (GPUs) were invented to meet the needs of video games. They model a game's imagery and physics so that the onscreen action unfolds as if it were taking place in the real world. GPUs do this through massive parallel processing: rather than working through calculations one at a time, they perform huge numbers of simple calculations simultaneously, rendering data fast enough to keep constantly changing images onscreen.
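To make that contrast concrete, here's a rough sketch in Python. It uses NumPy's bulk array operations as a stand-in for what a GPU does in dedicated hardware, and the frame size and brightness tweak are invented purely for illustration; the point is the gap between touching values one at a time and operating on all of them at once.

```python
# A rough sketch of the difference, using NumPy's bulk array math as a
# stand-in for what a GPU does in dedicated hardware. The frame size and
# the brightness adjustment are invented for illustration only.
import time
import numpy as np

frame = np.random.rand(1080, 1920)  # brightness of every pixel in one 1080p frame

# CPU-style: visit each pixel one at a time, in order.
start = time.time()
slow = np.empty_like(frame)
for row in range(frame.shape[0]):
    for col in range(frame.shape[1]):
        slow[row, col] = min(frame[row, col] * 1.2, 1.0)
serial_seconds = time.time() - start

# GPU-style: apply the same adjustment to every pixel at once.
start = time.time()
fast = np.clip(frame * 1.2, 0.0, 1.0)
parallel_seconds = time.time() - start

print(f"one pixel at a time: {serial_seconds:.2f} seconds")
print(f"all pixels at once:  {parallel_seconds:.4f} seconds")
```

On a real GPU, that second style of work is spread across thousands of small cores, which is what lets a game redraw millions of pixels dozens of times per second.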
The first chip marketed as a GPU was the GeForce 256. It came from a small chip company called Nvidia in 1999.
Chip giant Intel passed on the chance to claim the GPU space.
Passing on an emerging market like this seems to be standard practice among established industry leaders. It's ironic, though, because Intel's first big success came from IBM's decision not to manufacture its own chips for the emerging PC market.
Until recently, Intel's indifference to the GPU market wasn't seen as a big problem. It may even have protected the company from antitrust actions. But a few years ago, that began to change.
Researchers in artificial intelligence (AI) wanted to write programs to perform complicated functions such as image and voice recognition. But to replicate those abilities, researchers had to figure out how the brain allows us to see and hear.
How the brain does that is still not well understood, so progress was quite slow.
Some AI researchers worked for decades on a different approach. If a computer could use software that copied the way the human brain works, it could teach itself how to see and hear.
With sufficient data, plus layers of software able to recognize and analyze complex patterns, computers might solve problems that human programmers couldn't spell out in explicit rules.
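Here's a toy sketch, in Python, of what "teaching itself" means. The task (the XOR pattern), the network size, and the training settings are all arbitrary choices for illustration; the point is that nobody writes the answer into the program. The network adjusts its own connections until its guesses match the examples.

```python
# A toy neural network that learns a pattern from examples instead of
# following rules a programmer wrote. All sizes and settings are
# arbitrary choices for illustration.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # example inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # desired outputs (XOR)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # first layer of "connections"
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # second layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: the network makes its guesses.
    hidden = sigmoid(X @ W1 + b1)
    guess = sigmoid(hidden @ W2 + b2)

    # Backward pass: nudge every connection to shrink the error.
    d_guess = (guess - y) * guess * (1 - guess)
    d_hidden = d_guess @ W2.T * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ d_guess
    b2 -= 0.5 * d_guess.sum(axis=0)
    W1 -= 0.5 * X.T @ d_hidden
    b1 -= 0.5 * d_hidden.sum(axis=0)

print(np.round(guess, 2))  # after training, close to the XOR answers 0, 1, 1, 0
```

Scaled up to millions of connections and vast piles of examples, this same recipe is what the field calls deep learning.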
But there was one issue: the computing power needed to run these programs on CPU-based systems was so great, and so costly, that researchers feared they might not succeed.
That is, until a much more powerful and far less expensive kind of processor arrived to let them test their theories.
The GPUs that run game systems turn out to be well suited to imitating brain-like neural activity. That may be because games, like the brain, must deal with real-world physics rather than pure mathematics; both jobs come down to enormous numbers of simple calculations performed at the same time.
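One rough way to see the overlap, assuming nothing about any particular game engine or AI framework: rotating the points of a 3-D scene and pushing a batch of data through a layer of artificial neurons both boil down to one big matrix multiplication, exactly the kind of arithmetic a GPU spreads across thousands of cores. The sizes below are arbitrary.

```python
# A sketch of why the same chip serves both jobs: game graphics and
# neural networks both reduce to large matrix multiplications.
# All sizes here are arbitrary.
import numpy as np

rng = np.random.default_rng(1)

# Graphics: rotate 100,000 points of a game scene in a single operation.
points = rng.normal(size=(100_000, 3))
rotation = np.array([[0.0, -1.0, 0.0],
                     [1.0,  0.0, 0.0],
                     [0.0,  0.0, 1.0]])   # a 90-degree turn about the z-axis
rotated = points @ rotation.T

# AI: push 100,000 inputs through one layer of 64 artificial neurons,
# also in a single operation.
inputs = rng.normal(size=(100_000, 3))
weights = rng.normal(size=(3, 64))
activations = np.maximum(inputs @ weights, 0.0)

print(rotated.shape, activations.shape)  # (100000, 3) (100000, 64)
```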
So by 2010, most AI researchers were starting to think about GPUs.
Then in 2012, a computer scientist from the University of Toronto, Alex Krizhevsky, won an important image recognition competition, the ImageNet challenge, using GPUs. His winning system wasn't a set of recognition rules written by programmers. Rather, it was a deep learning system: a layered neural network that taught itself to recognize images from examples.
Researchers and startups are aggressively pursuing ways to leverage self-teaching AIs for many industries and purposes. My guess is that everyone will have their own AI personal assistant capable of learning and simplifying all aspects of our lives, from tax planning to scheduling.
The most important applications for this newly accelerated field will be in biotechnology.
I say this for two reasons. One is that health and life are by definition our prime directives. Another is that health care is the biggest financial sector by a large margin.
There’s a nice symmetry to this. AI technology is moving forward due to biomimetics — the science of mimicking biological systems.
In this case, researchers are building models of the brain’s neurological structure. In the end, the greatest and most profitable AI ventures will be those that decode the secrets of our genomes that build those neurological systems.
This task is possible, in theory, but impractical without powerful self-learning computer systems.
Already, some of the most important AI scientists are turning their tools toward the genome. Their goal is to find a way to slow or even reverse the aging process. I know some of the people working toward this goal, so I’m optimistic.
The big question is, how long will it take? I think it will happen faster than almost anyone outside the field expects.
And for that, we should thank the gamers who funded and pushed the GPU technologies that have driven this revolution.