Time to ditch the ‘artificial intelligence’ name?

Wikipedia defines ‘artificial intelligence’ as ‘intelligence exhibited by machines’. But the term AI is often taken to mean machines doing something similar to what people can do.
The debate about what computers can do and what people can do, and when computers might catch up, is fascinating. But it’s not a business discussion – perhaps something more for weekends or evenings.

Right now, in 2017, computers can’t do anything near what people can do in most cases, although they can also do a lot which people can’t. The useful question is what computers can actually do.

So to sidestep the ‘computers vs people’ discussion, perhaps we need a new name for computers doing clever things (something more than entering and retrieving data from a database, or following an if-then routine). How about ‘computer cleverness’?

Then to understand computer cleverness better, perhaps it makes sense to think about different sorts of computer cleverness, so we know which one we’re talking about, and can work out which sort is most useful for the application in hand.

I can think of four groups of computer cleverness.

The most basic, and often the most valuable, is logic. The computer follows logic pre-programmed by a person. Consider a scenario where an alarm goes off on a plant. The operators probably go through a set series of steps to try to work out whether there is a real cause for alarm. A computer could be programmed to follow these steps – and do it in a millisecond – so perhaps the alarm doesn’t need to sound at all.
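To make this concrete, here is a minimal sketch of what that pre-programmed checklist might look like. The sensor names, thresholds and checks are all invented for illustration – a real plant’s checklist would be its own.

```python
def alarm_is_real(readings):
    """Run the operators' checklist as plain pre-programmed logic.

    'readings' is a dict of sensor values; the names and thresholds
    below are hypothetical, purely to illustrate the idea.
    """
    pressure = readings["pressure"]
    backup = readings["backup_pressure"]

    # Step 1: is the primary reading inside its normal operating band?
    if 20.0 <= pressure <= 80.0:
        return False  # nothing abnormal, so suppress the alarm

    # Step 2: does an independent sensor confirm the anomaly?
    if abs(pressure - backup) > 5.0:
        return False  # sensors disagree: likely an instrument fault

    # Both sensors agree the reading is out of range: a real cause for alarm
    return True

print(alarm_is_real({"pressure": 95.0, "backup_pressure": 94.0}))  # True
print(alarm_is_real({"pressure": 50.0, "backup_pressure": 50.5}))  # False
```

The computer runs the whole checklist in a fraction of a millisecond, which is the point: the cleverness is entirely the person’s, captured in advance.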

The second is statistical analysis. An example of this is when a computer takes some readings from sensors, does some calculations, and outputs what is probably happening (i.e. probability > x). For example, usually when a sensor shows this vibration signature, it means this. It can suggest what is happening right now (you are operating a drill bit and the drill bit is sticking).
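The drill-bit case can be sketched in a few lines of Python. The sample readings and the threshold are made up; the idea is just to show ‘reading compared against a known statistical signature → probable diagnosis’.

```python
import statistics

# Hypothetical historical vibration readings recorded while
# the drill bit was known to be sticking
sticking_samples = [4.1, 4.4, 3.9, 4.2, 4.0, 4.3]

mu = statistics.mean(sticking_samples)     # centre of the signature
sigma = statistics.stdev(sticking_samples) # spread of the signature

def probably_sticking(reading, z_threshold=2.0):
    """Flag 'sticking' when the reading lies within z_threshold
    standard deviations of the known sticking signature."""
    z = abs(reading - mu) / sigma
    return z < z_threshold

print(probably_sticking(4.15))  # matches the signature -> True
print(probably_sticking(1.0))   # nowhere near it -> False
```

Nothing here ‘understands’ drilling; it simply says whether today’s reading looks statistically like readings that meant sticking in the past.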

A third is simulation. A computer can build a model of real life which can run, so you can see what might happen in real life, for example how a building evacuation might unfold. The simulation could also run in parallel with real life, so you can see more clearly what is happening, for example in a complex engineering plant or retail store.
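Here is a toy version of the evacuation example: a discrete-time model in which each exit lets one or two people through per tick. All the numbers are invented; the point is that you can rerun the model with different layouts (say, more exits) and compare outcomes before touching the real building.

```python
import random

def evacuation_ticks(people=100, exits=2, seed=1):
    """Simulate, tick by tick, people leaving through the exits.

    Each exit passes 1 or 2 people per tick, at random. Returns the
    number of ticks until everyone is out. All parameters are made up.
    """
    rng = random.Random(seed)  # fixed seed so runs are repeatable
    remaining, ticks = people, 0
    while remaining > 0:
        ticks += 1
        for _ in range(exits):
            remaining -= rng.randint(1, 2)
    return ticks

# Compare two hypothetical layouts of the same building
print("2 exits:", evacuation_ticks(exits=2))
print("4 exits:", evacuation_ticks(exits=4))
```

Doubling the exits roughly halves the evacuation time in this model, which is the kind of ‘what if’ a simulation lets you explore cheaply.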

A fourth is machine learning. This is when a computer uses a few layers of mathematical processes (which can themselves involve statistics or logic), each with a variable weighting or impact on the overall result. This is used when a computer is matching images, translating text, understanding audio, finding something useful in a pile of data, working out how to win at Go, or, as Amazon does, suggesting what you might buy next. Machine learning is perhaps most applicable to consumer environments rather than business operational environments, where the scale is usually too small and the risks too high to justify the investment of building it. But not necessarily. One example of a business application is computers identifying cancer cells.

That is more or less it, as far as I can see. This doesn’t mean there’s a limit to what computers can do, but if you can decide which of these buckets the ‘computer cleverness’ you are hearing about mainly fits into, you can get a better understanding of how to move forward.
