It is perhaps strange that few people outside the computer science field took note of the peaceful death, on Aug. 8, 2015, of Dr. John Holland, 86, eminent professor of Psychology and of Electrical Engineering and Computer Science at the University of Michigan.
One of the founders, decades ago, of the interdisciplinary field of complex adaptive systems, and the leading figure in what is now known as evolutionary computation, Holland had an immense impact not only on general computer science but also on economics, psychology, and artificial intelligence.
In 1975, Holland published the trailblazing book “Adaptation in Natural and Artificial Systems,” which, according to the Santa Fe Institute, has been cited “more than 50,000 times and has been published in several languages.”
In this book, intended to sketch out a general theory of adaptation, Holland introduced a mathematical treatment of “genetic algorithms,” which can solve optimization problems by mimicking the process of natural selection.
For example, let’s say we have a warehouse with limited space and we want to maximize the total value of the items stored in it (a version of the classic “knapsack problem”).
To apply a genetic algorithm to this problem, we randomly generate a huge number of possible solutions. Each individual solution is a string of variables or bits, roughly analogous to a biological chromosome.
We could have the position of each bit represent a different object, with the bit’s value (0 or 1) determining whether or not the object is in the warehouse.
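To make this concrete, here is a minimal Python sketch of that encoding, using a made-up list of items and a made-up storage capacity (none of these numbers come from Holland’s book):

    import random

    # A made-up warehouse instance for illustration: each item is a (value, size) pair.
    ITEMS = [(60, 10), (100, 20), (120, 30), (30, 5), (90, 25)]
    CAPACITY = 50   # total storage space available in the warehouse
    POP_SIZE = 100  # number of candidate solutions per generation

    def random_individual():
        # One "chromosome": bit i says whether item i goes into the warehouse.
        return [random.randint(0, 1) for _ in ITEMS]

    population = [random_individual() for _ in range(POP_SIZE)]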
Having randomly generated many possible individual solutions, we now use a “fitness function” to evaluate each of them. Our function tabulates the total value and the total size of the items each individual specifies.
An individual might specify a lot of high-value items, but all of them together may take up too much space to be stored, so such an individual receives a zero fitness score.
If the items do fit within the available space, their values are added up to produce a positive fitness score, which is then compared with the scores of the solutions suggested by the other individuals.
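Continuing the toy sketch above, and reusing its hypothetical ITEMS and CAPACITY, the fitness function might look like this:

    def fitness(individual):
        # Total value of the chosen items, or zero if they overflow the space.
        value = sum(v for bit, (v, s) in zip(individual, ITEMS) if bit)
        size = sum(s for bit, (v, s) in zip(individual, ITEMS) if bit)
        return value if size <= CAPACITY else 0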
Next, a second generation of individual solutions is created. We first “kill off” half of the existing individuals (those whose scores fall in the bottom half) and then “mate” the survivors to form a new generation of solutions.
We do this by randomly selecting pairs of survivors and applying genetic operators such as “crossover” (also known as recombination), which extracts part of one individual and part of another and splices them together to form a third “offspring.” The offspring is then subjected to a possible mutation at the moment of its “birth.”
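Here is a minimal sketch of those steps, again building on the toy Python setup above; the single-point crossover and the 1 percent mutation rate are illustrative choices, not the only possibilities:

    MUTATION_RATE = 0.01  # arbitrary chance of flipping any one bit in a newborn

    def crossover(parent_a, parent_b):
        # Single-point crossover: the head of one parent, the tail of the other.
        point = random.randrange(1, len(parent_a))
        return parent_a[:point] + parent_b[point:]

    def mutate(child):
        # Possibly flip bits at the moment of "birth."
        return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in child]

    def next_generation(population):
        ranked = sorted(population, key=fitness, reverse=True)
        survivors = ranked[: len(ranked) // 2]  # "kill off" the bottom half
        children = []
        while len(children) < len(population):
            a, b = random.sample(survivors, 2)  # pick a random pair of survivors
            children.append(mutate(crossover(a, b)))
        return children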
Genes from high-scoring individuals propagate through the population, so two good parents will occasionally produce offspring that are better than either parent.
This new population of individuals is tested and each one is assigned a fitness score.
Once again, the unsuccessful ones are weeded out, the survivors are mated, and a third generation is produced, and so forth, for perhaps hundreds of generations, until a suitable solution is found or until the score of the highest-ranking solution levels off and no longer rises from generation to generation.
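Pulling the pieces of the sketch together, the overall loop, with a simple and admittedly arbitrary stopping rule (quit once the best score has failed to rise for 25 straight generations), might read:

    best_so_far, stalled = 0, 0
    for generation in range(500):  # cap the run at a few hundred generations
        population = next_generation(population)
        best = max(fitness(ind) for ind in population)
        if best > best_so_far:
            best_so_far, stalled = best, 0
        else:
            stalled += 1  # the top score did not rise this generation
        if stalled >= 25:  # the score has leveled off
            break

    print("Best total value found:", best_so_far)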
While John Holland and his students were perfecting genetic algorithms, his colleagues (such as Nichael L. Cramer, John R. Koza and Gianna Giavelli) were founding the even more mind-boggling field of “genetic programming,” a similar evolutionary machine learning technique that evolves and optimizes a population of entire computer programs designed to perform a task such as designing an electronic device, playing a game, or predicting the movement of the stock market.
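Koza’s actual systems were far more elaborate, but a rough flavor of the idea can be sketched in a few lines of Python: represent each “program” as a nested arithmetic expression tree, and score it by how closely its output matches some target behavior (the operators and the x*x + x target below are invented purely for illustration):

    import operator
    import random

    # Toy "programs": nested tuples forming arithmetic expressions over one input x.
    OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

    def random_program(depth=3):
        # Grow a random expression tree; leaves are the input x or a random constant.
        if depth == 0 or random.random() < 0.3:
            return "x" if random.random() < 0.5 else random.uniform(-1, 1)
        op = random.choice(list(OPS))
        return (op, random_program(depth - 1), random_program(depth - 1))

    def run(program, x):
        # Execute the evolved "program" on an input value.
        if program == "x":
            return x
        if isinstance(program, float):
            return program
        op, left, right = program
        return OPS[op](run(left, x), run(right, x))

    def error(program):
        # Fitness: squared error against a target behavior, here x**2 + x.
        return sum((run(program, x) - (x * x + x)) ** 2 for x in range(-5, 6))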
Back in the 1980s, yours truly was dabbling in all of this, attempting to write a computer program that could automate, or at least semi-automate, the production of music. I talked to many of the luminary figures of that era: David E. Goldberg, Rick Riolo, and even the great John Holland himself.
Alas, when I fed 50 middle-European Christmas carols into my machine to be emulated and ran the genetic algorithm, I ended up with two promising initial musical notes but then a repeating cycle of three notes.
My program was suffering from what is called “premature convergence”: my population of “solutions” (possible Christmas carols) had lost its diversity and settled on a mediocre answer far too quickly.
During the evolutionary process the parent solutions were unable to generate offspring superior to themselves. The whole problem was just too big for my little 1980s microcomputer.
Today, however, computers are becoming immensely powerful and the field of evolutionary computation is taken quite seriously. Indeed, the work of John Holland and his colleagues will one day be hailed as a key steppingstone toward the final goal of perfecting genuine artificial intelligence.
Richard Grigonis is an internationally known technology editor and writer. He was executive editor of Technology Management Corporation’s IP Communications Group of magazines from 2006 to 2009. The author of five books on computers and telecom, including the highly influential Computer Telephony Encyclopedia (2000), he was the chief technical editor of Harry Newton's Computer Telephony magazine (later retitled Communications Convergence after its acquisition by Miller Freeman/CMP Media) from its first year of operation in 1994 until 2003.