A friend of mine recently ran for public office. At least, that’s what an AI summary told me.
I had run an online search for some work he had done, and the AI synopsis — something I did not request — mentioned him as a candidate in a recent election.
This was news to him and to me. He had no interest in getting elected. He hadn’t even filed nomination papers in that election.
This isn’t the only time artificial intelligence, or AI, tools have produced flawed information.
Earlier, I had used an AI service to find some of the history of the newspaper where I work. I’ve been here for more than 30 years, so I have a good working knowledge of this paper, but there are some details I am still discovering.
I am able to search old issues of the paper online, but AI should be able to do this a lot faster.
Well, faster isn’t always better, and in this case, the results were disappointing.
The AI summary gave the wrong year, in the wrong decade, for when the paper was established. At least the century was right.
When I asked for the locations the Summerland Review has used for its offices over the years, once again, the information I received was questionable at best, with some completely inaccurate statements in the mix as well.
One statement was accurate, but because of the other poor answers, I ended up spending extra time on fact-checking.
I’ve seen other instances, on other, unrelated topics, where AI-generated responses were low on truth and accuracy.
But the issue of inaccurate and misleading information is not something new, or limited to online environments.
Sometimes, old Uncle Harold, who is prone to exaggeration at the best of times, will make up something if he doesn’t know the answer.
Artificial intelligence does the same thing. Hallucinations — answers that sound plausible but are inaccurate — can and will happen.
Companies working with artificial intelligence know about this and are working to improve the accuracy of their systems.
Online users can add follow-up questions, asking for sources to verify the information they have been given. This is an essential step in using AI tools.
Still, an AI-generated answer might not be reliable. Fact-checking is important. Accuracy matters.
But there’s another aspect to consider.
Artificial intelligence is a relatively new technology, especially today’s large language models, which let users enter a question and get a conversational response.
ChatGPT, the most popular of these services, has been around since the end of November 2022. It is not yet three years old.
A three-year-old is still learning to make sense of the world and how to evaluate what they see.
On the other hand, the fictional Uncle Harold I referenced earlier — the teller of tall tales — has spent decades creating a narrative that suits his opinion of the day.
Today’s artificial intelligence services are improving, and they will continue to improve, but they are no substitute for good online research, fact-checking and verification.
The future will be different, and it is likely that AI tools will become part of the day-to-day lives of many people. But that day has not yet arrived. As a result, I will still question and verify any AI response I see.
John Arendt is the editor of the Summerland Review.