This post was written by a human. Seriously. A carbon-based unit that runs on food, air, water, and emotions wrote this. Why am I telling you this? Because unless you’ve been hiding under the covers for the past few weeks (yes, you, fellow bloggers, novelists, memoirists, and allied journalists), someone has told you that artificial intelligence, in the form of a “chatbot” called ChatGPT (“Chat Generative Pre-Trained Transformer”) is going to replace you as a writer.
This thingee has flabbergasted even the most skilled and knowledgeable programmers working on artificial intelligence. It has forced schools from colleges and universities down to high schools and maybe even middle schools (not sure about that last one but I’m sure there are some fraught faculty and school board meetings going on currently) to revise academic integrity policies. It has inspired one Princeton student to create an app that can detect writing by AI.
Personally, I’m kind of intrigued by the prospect of higher education returning to the Socratic method, banning internet-connected devices in class, requiring oral discussion and conducting oral exams. Maybe students would even have to return to using physical books! Perhaps we’d get back to the days when students learned how to think, not what to regurgitate to get an inflated, and undeserved, grade.
Because ChatGPT can write term papers, sonnets, and book reports. It can compose journalistic articles. And it does it without complaint, without coffee, without sleep, and without pay.
But as news organization CNET found out to its chagrin, AI doesn’t do so well as a journalist. Its articles were riddled with factual errors and banalities. It couldn’t get comments from a variety of human sources (although I’m willing to bet you that soon, future releases of this thing will be able to make phone calls, carry on human-like conversations, and elicit and then report comments from living, breathing humans, who should have that opportunity).
Computer scientists used to gauge artificial intelligence according to the “Turing Test,” named after famed computer scientist and code-breaker Alan Turing. He posited that when computers can carry on a conversation that is equivalent to human conversation, such that the human participant can’t tell they’re talking to a computer, machines will have achieved intelligence.
We’re going to need a new definition of machine intelligence, it seems.
The film “Her,” starring Joaquin Phoenix, was about a lonely man falling in love with a female-voiced AI he called Samantha. The AI claimed she loved him too, but (spoiler alert) it turns out she was cheating on him with thousands of other humans and a bunch of other AIs. The carbon-based unit and the silicon-based unit (if Samantha was made with silicon chips–I don’t know, she may have been even more advanced than that) could not live happily ever after together.
Samantha may have violated at least the first of Asimov’s laws of robotics, if emotional injuries count:
First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Isaac Asimov, in “Runaround,” a short story from 1942, according to Britannica
Will computers ever truly understand human love, heartbreak, ambition, compassion, gratitude, fear, excitement, or yearning? Even if one day robotics advances to create androids that can give hugs that feel human, I don’t think they’ll ever be able to replace genuine human touch and interaction. We’ll always feel like something is missing.
And I think it is, or will be. Soul. Spirit. The breath of God, if you will. Even if AI can process trillions of bits of information faster than we ever could, will it ask the right questions? Can it tell us the meaning of life? Does it have a genuine sense of purpose? While it may be able to ease our loneliness and find that song we just can’t quite remember the words to, can it dry our tears?
We’ll see, I suppose. But in the meantime, I’m betting on Dave, not HAL. Call me crazy, or overly optimistic, but hey, after all, I am, and remain, your
Struggling, laughing, crying, singing, sighing, flesh and bone, heart and soul, flaws and errors, blood and breath, entirely human,
5 thoughts on “AI, AI, AI, AI: Sing, Don’t Cry; Nice Try, AI”
I enjoyed reading this article. I am not a robot.
Good to know, Earl! 😆
Ally Bean sent me! I enjoyed this post, thank you. Yes, the AI is worrying. Where did the Socratic method go? Will AI be able to ask the right questions? Not everything has an answer. The other day I heard the South African pres (I would say ‘our’ but I’m not happy saying that) give a speech so full of platitudes and void of any realness that I wondered if he’d had Chap/Al script it.
Thanks for finding my blog (thanks, Ally!) and reading! Glad you enjoyed the piece. AI learns from what humans put out there, so let’s all keep writing real human stuff!! It’s when AI starts genuinely thinking and creating on its own, achieving “sentience,” a moment called “the singularity,” that we should be afraid–very afraid! But I just can’t see the day when AI will be able to be genuinely funny, compassionate, quirky, petulant, sad, elated, or ridiculous. Those things are in the human department, and always will be–I hope!!