Don’t worry, AI’s here
Ok, for those of us “of a certain age,” the phrase “Don’t worry” may bring a certain song to mind and a smile to our faces. And while that’s fine, even great, just having AI around isn’t the panacea some think it is.
Artificial intelligence (AI) can do many things, but it isn’t magic; it’s not even the magician. Truth be told, it’s not even the magician’s wand. Unless, of course, that wand is a shapeshifter, because AI becomes many different things depending on what it’s supposed to do for a given task. Ah… there’s the key, isn’t it?
AI built for a given task is known as “weak AI,” and as of the date of this post, all real AI falls into that category. But weak AI isn’t “weak” as in puny or worthless. Hardly. It’s just a classification that sets it opposite “strong AI,” more popularly known as General AI. IBM describes strong AI as theoretical: it doesn’t yet exist, but if it did, we might have something closer to that magician.
The problem, however, is that many people who aren’t in the AI business believe, incorrectly, that AI can do anything, up to and including creating things out of thin air (i.e., magic). All you have to do is point “the AI” (apparently any garden-variety AI will do) at whatever you want magically done, and the AI will just give you what you want (no nose wiggling required).
Apparently, this is believed to be a winning approach to almost any problem these days, because depending on whom you ask, AI is either feared or revered. Either AI is going to form Skynet any day now (some say it already has) and begin the downfall of mankind, or, now that the internet has permeated our daily lives, “the AI” can simply tap into that data lake, read our minds without violating any privacy laws, and return exactly the tailored results being sought, with no preparation or groundwork beyond the initial prompt.
Of course, science fiction is just for entertainment… right? Not quite. But we’re not here to talk about fiction or magic.
Real AI is a computer programmed to run one or more algorithms that mimic human decision making. Unlike traditional programming, where if-then-else structures produce a predictable, repeatable output for any given input, AI can give you a unique output each time you ask it to do something. Just like the people you interact with every day: sure, the answer may be similar, but it probably isn’t a robotic, word-for-word copy of the last response. For example, ask an AI for the results of last night’s Carolina Hurricanes game and the first time it may say, “The Hurricanes beat the Flyers 3 to 2.” The second query may return something like, “Last night, the Carolina Hurricanes defeated the Philadelphia Flyers in overtime, 3 goals to 2.” Same basic answer, but different wording and different levels of detail.
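If you’d like to see that contrast in code, here’s a toy sketch in Python (not a real AI model, and the game data is made up for illustration). The traditional function returns the exact same sentence for the same input every time, while the “AI-style” function stands in for generative behavior by picking from several phrasings, so two identical questions can come back worded differently.

```python
import random

# Made-up game record, purely for illustration.
GAME = {"winner": "Hurricanes", "loser": "Flyers", "score": (3, 2), "overtime": True}

def traditional_report(game):
    # If-then-else style: the same input always produces the same output.
    a, b = game["score"]
    if game["overtime"]:
        return f"{game['winner']} beat {game['loser']} {a}-{b} (OT)."
    return f"{game['winner']} beat {game['loser']} {a}-{b}."

def ai_style_report(game):
    # Stand-in for generative behavior: the wording varies between calls,
    # even though the underlying facts stay the same.
    templates = [
        "The {winner} beat the {loser} {a} to {b}.",
        "Last night, the {winner} defeated the {loser} in overtime, {a} goals to {b}.",
        "{a}-{b} {winner} over the {loser}, decided in OT.",
    ]
    a, b = game["score"]
    return random.choice(templates).format(
        winner=game["winner"], loser=game["loser"], a=a, b=b
    )

print(traditional_report(GAME))  # identical on every run
print(ai_style_report(GAME))     # phrasing may differ from run to run
```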
The fun part is, the AI can’t do much of anything if it isn’t given good data to work with from the start. For example, if the AI only had win/loss records available, it couldn’t give you a score for the ‘Canes, let alone the detail that the win came in overtime.
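Sticking with the toy sketch above, the point is easy to see: the answer can only be as detailed as the data behind it. The record fields below are made-up placeholders, but if the record doesn’t carry a score, no clever phrasing will conjure one.

```python
def report_from(record):
    # The output can only be as detailed as the data behind it.
    if "score" in record:
        a, b = record["score"]
        detail = f"{record['winner']} beat {record['loser']} {a} to {b}"
        if record.get("overtime"):
            detail += " in overtime"
        return detail + "."
    # Only a win/loss record is available, so no score can be reported.
    return f"{record['winner']} beat {record['loser']} (score unavailable)."

print(report_from({"winner": "Hurricanes", "loser": "Flyers"}))
print(report_from({"winner": "Hurricanes", "loser": "Flyers",
                   "score": (3, 2), "overtime": True}))
```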
The same thing happens in education and training when AI is called on to save the day. Just siccing AI on a problem doesn’t solve the problem. The AI must be prepped with the proper information and given reasonable guidance on what the desired product should be, and its output must then be confirmed as reasonably accurate. That final confirmation may come from traditional programming that compares the AI’s results against a golden standard, or from a human system manager. In either case, verification and validation are a must, because without a hallmark to compare against, the AI will deliver a wrong answer with just as much confidence as a right one.
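Here’s a minimal sketch of what that confirmation step might look like, assuming a known-good record of the game (the winner and the score) is on hand as the golden standard; the regex and the acceptance rule are illustrative choices, not a prescription. Anything that doesn’t match the gold data gets flagged for a human reviewer rather than being trusted on confidence alone.

```python
import re

def extract_score(text: str):
    """Pull a 'X to Y' goal total out of a game summary, if one is present."""
    m = re.search(r"(\d+)\s*(?:goals\s*)?to\s*(\d+)", text)
    return (int(m.group(1)), int(m.group(2))) if m else None

def validate(ai_answer: str, gold_winner: str, gold_score: tuple) -> bool:
    """Accept the AI's answer only if it names the right winner and score."""
    return (gold_winner.lower() in ai_answer.lower()
            and extract_score(ai_answer) == gold_score)

gold_winner, gold_score = "Hurricanes", (3, 2)
answers = [
    "Last night, the Carolina Hurricanes defeated the Philadelphia Flyers in overtime, 3 goals to 2.",
    "The Flyers shut out the Hurricanes 4 to 0.",  # stated with equal confidence, but wrong
]
for ans in answers:
    status = "accepted" if validate(ans, gold_winner, gold_score) else "flag for human review"
    print(f"{status}: {ans}")
```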
And that hallmark, golden image, guardrail, control, or whatever you want to call it comes from the human at the helm. And that human is no more a magician than the AI is. AI is a wonderful tool. But like a saw, it must be managed and kept sharp, because, contrary to popular belief, it doesn’t know when it is no longer sharp.