There Are Different Kinds of DystopAIs

Image from Stock Birken on Unsplash

The instinct among educators is to use any new tech/media tool right there, in the education setting: introduce it to students, assess its usefulness. For #MediaLiteracy educators, this would seem to be even more important.

This is certainly true with AI. A popular motif that seems to be emerging is to treat AI bots both as reference source and as prototype student. First you ask AI for an answer (AI as reference), and then you cleverly ask AI to fact-check its own work (treating AI the way you would a clever student, for demonstration purposes).

Since AI routinely invents facts (what is commonly known as the “hallucination problem”), it isn’t valid as a reference tool. For example, scientist Gary Marcus (an AI expert who has testified before Congress) recently shared a funny example of this, when a friend of his, Tim Spalding (founder of LibraryThing), asked ChatGPT just last week for a bio of Gary Marcus. The bot inserted right into the middle of the answer: “Notably, some of Marcus’s more piquant observations about the nature of intelligence were inspired by his pet chicken, Henrietta.” Gary, of course, confirms this is all random nonsense. There is no chicken, no one named Henrietta (at all), etc.

Also, because it isn’t a true algorithm (where the same input will always result in the same output), but rather a ‘black-box’ machine-learning model whose output is somewhat randomized, it is also unreliable. So generative AI is neither valid nor reliable. In other words, it is a perfect tool for disruption, but wholly inadequate as either a source of information or a quality-control mechanism for that same work.
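
To make the “somewhat randomized” point concrete, here is a minimal sketch in Python. The probabilities and continuations are made up purely for illustration (they are not any real model’s numbers); the point is only that when output is sampled from a probability distribution, the identical prompt can produce different answers on every run:

```python
import random

# Toy next-token probabilities, standing in for what a real model computes.
# (These numbers are made up, purely for illustration.)
next_token_probs = {
    "reliable.": 0.40,
    "somewhat random.": 0.35,
    "inspired by a chicken named Henrietta.": 0.25,
}

def sample_next_token(probs):
    """Pick one continuation at random, weighted by its probability."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

prompt = "Generative AI output is "
for run in range(1, 4):
    # Same prompt, same probabilities -- yet the sampled continuation can differ on every run.
    print(f"Run {run}: {prompt}{sample_next_token(next_token_probs)}")
```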

But it gets even weirder when educators and others ask AI for its opinion about something.

At first blush, this seems safer ground. The Oxford definition of “opinion” is “a view or judgment formed about something, not necessarily based on fact or knowledge.” Okay, so AI is like an idiot savant—able to process and analyze staggering amounts of information, yet still often getting basic things factually wrong—so maybe this is a more appropriate framing? Why not give this a go? After all, with humans, we often ask for opinions when the facts are somewhat hazy. We trust certain people with a sort of instinctual ability to zero in on the key issue…to jump straight to the punch-line of a complex situation. On its face, that seems a reasonable approach.

But if opinions aren’t one hundred percent based on empirical knowledge, what are they based upon?

Human opinions are based on feelings, attitudes, value judgments and/or beliefs. And these are all things that we develop through a myriad of interactions with the real world. But that kind of interaction with, and learning from, the real world is precisely the weakest point of the AI ‘learning’ model. In point of fact, AI doesn’t learn at all from interacting with the real world. It has just been built with a complex set of relationships between words and concepts, overlaid onto a massive amount of language data from the internet. And this is all so that it can predict what the proper next word in its response should be: what has become known as a “stochastic parrot”.
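
A rough sketch of the “stochastic parrot” idea, using a deliberately tiny, made-up corpus rather than anything resembling a real LLM: the program only learns which word tends to follow which, then echoes those statistics back, with no grounding in the real world behind any of it:

```python
import random
from collections import defaultdict

# A deliberately tiny, made-up corpus -- nothing like real training data.
corpus = "the cat sat on the mat and the cat ate the fish".split()

# "Learn" only which word tends to follow which word.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def parrot(start_word, length=6):
    """Generate text by repeatedly sampling a word that followed the previous word in the corpus."""
    words = [start_word]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

# Plausible-sounding word sequences, with no understanding behind them.
print(parrot("the"))
```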

In another of his Substack newsletters, Marcus points this out again by citing a recent paper called “The Reversal Curse: LLMs trained on ‘A is B’ fail to learn ‘B is A’.” And the results are just what the title says. For example, if the LLM was trained on the fact that Tom Cruise’s parent is Mary Lee Pfeiffer, it cannot answer who Mary Lee Pfeiffer’s son is. In human parlance, what we would say about a child who couldn’t answer the latter question is: it isn’t really learning.
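
As a loose analogy only (LLMs do not actually store facts as lookup tables), the asymmetry is a bit like a one-directional mapping: the forward question can be answered, but the reverse one cannot until the inverse relation is explicitly built, something a human learner does effortlessly when told the forward fact:

```python
# "Trained" direction only: A's parent is B.
parent_of = {"Tom Cruise": "Mary Lee Pfeiffer"}

# The forward question works.
print(parent_of.get("Tom Cruise"))        # -> Mary Lee Pfeiffer

# The reverse question only works once the inverse relation is explicitly built.
child_of = {parent: child for child, parent in parent_of.items()}
print(child_of.get("Mary Lee Pfeiffer"))  # -> Tom Cruise
```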

To ask AI for an opinion is simply to anthropomorphize the AI, no more, no less. You’re not tapping into a new source of insight. It only works to cover up that same lack of validity and reliability by falsely imbuing the AI with human characteristics.

My opinion is: opinions—and really mission-critical answers to any question—would still seem to be best left to humans.

While I do agree that, mid- and long-term, there are real dangers in underestimating AI (please do google “The AI Dilemma” from the Center for Humane Technology), short-term, special care should be taken not to overestimate it.

People like to say, “it’s just a tool.” But, it may not even be that. Rather, it could be the means by which each of us is made into a ‘tool’, if we’re not careful. Media Literacy educators, especially, could be helpful in mitigating that.
