There is a significant number of problems that are just too hard for classical algorithms. Even if we could harness all the computing power in the world, it wouldn't help much.

Problems in operations research, like scheduling, which tend to trigger the so-called combinatorial explosion, are one example. Classification of images and speech, handwriting recognition, taxonomic problems, and so on. In the last 50 years we have had several promising ideas about how to tackle these. Each time, the initial excitement subsided for lack of results, and an AI winter came once again.
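To make the combinatorial explosion concrete, here is a quick back-of-the-envelope sketch (the numbers, not the scheduling problem itself, are the point): the number of possible orderings of n jobs on a single machine is n!, which outgrows any conceivable compute budget almost immediately. Already at 60 jobs the count exceeds the estimated number of atoms in the observable universe.

```python
from math import factorial

# Number of possible orderings of n jobs on one machine is n!.
# Brute-force search over all schedules becomes hopeless very fast.
for n in (5, 10, 20, 60):
    print(f"{n} jobs -> {factorial(n)} possible schedules")
```

Exact algorithms therefore give way to heuristics long before the problem sizes practitioners actually face.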

Neural Networks

But then, around 2012, Neural Networks came back once again. Maybe for good this time. They used to be too computationally expensive for the hardware we had, but after several decades of Moore's law, this has become less of a problem. Besides, some important algorithmic breakthroughs have also been made.

Neural networks, convolutional or otherwise, came by and swept away all the much more orthodox solutions championed and financed by big companies like Microsoft, which had invested billions into solving speech recognition the old-fashioned way. Neural networks were a black swan event, and an extinction event for a great part of the AI efforts of that time. When Lee Sedol lost against AlphaGo, the community of AI skeptics and naysayers was damaged as well. Suddenly, Neural Networks had the potential to do almost everything, and nobody would be able to stop them from becoming superintelligent one day.

When a great enthusiasm overwhelms more people than just a few mad scientists, as is now the case, it's a great time to be sober. Whatever is the highest fashion of them all might not stay with us forever. NNs are fashionable, all right, but they will probably never become a classic. We have to ask ourselves: where did these Neural Networks come from? Is there already something new on the horizon? The scientific foundation of neural networks is a story more than 70 years old. After that, we had some good decades of exponential growth in computing. Early in this century, when people like Jürgen Schmidhuber rounded everything out with scientific papers and performed some experiments, only a few took notice. 2009 was the year when one ought to have placed a bet on Neural Networks, which are already too fashionable today.

Evolutionary Computing

Evolutionary Computing, which is even older in the sense that Charles Darwin laid its foundations, was also introduced into computing long ago, by John Koza. The start was rough, even rougher than it was with Neural Nets. The scientific fundamentals were further refined across the 20th and 21st centuries, but no Nobel prize was awarded. Stockholm isn't that great a visionary. Besides the fact that EC is more general in principle than NN, there are several more important advantages of EC over NN:

  • EC can invent new stuff. While NN is based on learning from human experts, EC can always start from scratch if need be. The creativity of EC might remind you of the creativity of biological evolution. This is no accident, but a consequence of its principles.
  • EC can use NN modules or any other algorithms when appropriate, whereas NN isn't as compatible with everything.
  • EC is an example of a fully parallel algorithm. NN, not so much.
  • NN solutions are always opaque; EC solutions are as crisp as you want them to be.
  • EC needs little data, but can handle a lot of it. Small data is useless for NN.
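The "start from scratch" point above can be made tangible with a minimal evolutionary loop. The sketch below (my own toy illustration, not any particular EC framework) evolves a random bitstring toward the all-ones target, the classic OneMax exercise: no training data and no human examples are involved, only random genomes, selection, crossover, and mutation.

```python
import random

GENOME_LEN = 20    # bits per genome
POP_SIZE = 30      # individuals per generation
MUTATION_RATE = 0.05

def fitness(genome):
    """Count of 1-bits -- the quantity evolution maximizes."""
    return sum(genome)

def mutate(genome):
    """Flip each bit independently with a small probability."""
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    """One-point crossover of two parent genomes."""
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def evolve(generations=200, seed=0):
    random.seed(seed)
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == GENOME_LEN:
            break
        # Elitism: keep the fitter half, refill with mutated offspring.
        survivors = pop[:POP_SIZE // 2]
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(POP_SIZE - len(survivors))]
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best), "/", GENOME_LEN)
```

Swap the fitness function and the genome encoding and the same loop attacks a scheduling problem or tunes an NN module, which is exactly the generality claimed above.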