The artificial intelligence revolution is outpacing our understanding

David Colburn

If Ronald Reagan were alive to read my piece this week, he would undoubtedly nod, smile and utter the famous debate line, “There you go again.”
I must admit that I am fascinated by the phenomenon of artificial intelligence. Our understanding of computers diminished greatly the day Microsoft introduced Windows and hid the operating system from computer users. It made the PC more user-friendly, but vaulted computing into the realm of magic for the average user, and it’s only become more obscure and elusive since then.
What fascinates me about AI these days is how it analyzes massive amounts of data and decides, independent of humans, what to do with it.
Recently, my attention has focused on ChatGPT, the AI chatbot that debuted last fall and can write emails, blog posts, lists, articles, computer code, and more in response to a simple prompt or question from a user.
It has taken the computing world by storm. Almost every time I try to log into a ChatGPT server during business hours, I get a message that the server is at capacity, a common occurrence these days with millions of people trying it out. With perseverance one can get on during the day, but I find it easier to log in late at night.
It has caused quite a stir among those who rely on writing to carry out their chosen work. Educators fear, with good reason, that students will use the text it generates as a substitute for their own work. Seventeen percent of Stanford students reported using ChatGPT in their class assignments last fall, most of them to create outlines for papers or do basic research, but some reported using its text for essays. Colleges are racing to review their student codes of conduct to determine how it can and cannot be used in higher education. Public schools are doing the same.
Techies getting their industry news from CNET last week discovered that many of the articles they’d read in recent months were generated entirely by ChatGPT and not by CNET’s human reporters. The practice came to light when readers started noticing errors in the articles, the kind of mistakes ChatGPT is prone to making because of how it was built. For now, CNET has discontinued the use of AI-generated articles.
I’ve found ChatGPT to be quite adept at writing those “10 Reasons Why…” lists that are often found on blog sites or in popular magazines. I gave it a number of such queries, and it generated samples that matched or exceeded the quality of many of the blogs I visited. Bloggers and freelancers who create blog posts to sell could well see some competition from ChatGPT in the days ahead.
In keeping with the magical nature of computing, many of the reviews I’ve read about ChatGPT don’t seem to understand what it is. I recently read an article by a sportswriter about how ChatGPT will never replace him because it doesn’t know anything about current sports teams, and he gave many examples of its inability to answer questions about players, games, and statistics. It was a good example of someone with a fundamental misunderstanding of what ChatGPT is.
ChatGPT is a language model that has been trained on a huge amount of language input to respond in a human-like manner to human questions and prompts. Its neural network was trained on 300 billion words drawn from books, web texts, articles, and other written sources. The neural network analyzes a request from a human and crafts a contextual response from all the data it has, data that only runs through 2021.
Given the scale at which it is used, ChatGPT makes millions of decisions every second about which words and phrases to use to craft its responses, which usually begin appearing within a few seconds. It is evaluating not facts, but the language it was trained on. So ChatGPT generates human-like responses that are sometimes illogical or imprecise, much like the Internet language it learned from.
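For readers who want to peek behind the curtain, here is a toy sketch in Python of that basic move, choosing the next word by probability. The scores in it are invented for illustration, and the real system works at vastly greater scale, but the principle is the same: the model picks whatever reads most like the language it was trained on, whether or not it happens to be true.

```python
# A toy sketch of next-word sampling, the basic step a language model
# repeats to build a response. The token scores below are invented for
# illustration; a real model like ChatGPT computes them with a neural
# network over a vocabulary of tens of thousands of tokens.
import math
import random

# Hypothetical scores for the next word after a prompt such as
# "Copper-nickel mining near wetlands ..."
candidate_scores = {
    "could": 2.1,
    "threatens": 1.6,
    "is": 1.2,
    "banana": -4.0,  # fluent models score nonsense continuations very low
}

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    exps = {token: math.exp(s) for token, s in scores.items()}
    total = sum(exps.values())
    return {token: e / total for token, e in exps.items()}

probs = softmax(candidate_scores)
next_word = random.choices(list(probs), weights=list(probs.values()))[0]
print(next_word)  # chosen for how well it fits the language, not for factual accuracy
```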
What ChatGPT isn’t, as the sportswriter and CNET discovered, is a program that searches the Internet in real time for information on what to write. If someone asked ChatGPT to write an article about Kevin McCarthy’s recent trials and tribulations in becoming Speaker of the House, it wouldn’t be able to provide an accurate answer, because it doesn’t do a real-time search of articles written about the subject. It isn’t a search engine that looks up websites in response to a person’s query, and it doesn’t actively read websites to gather data for its answers.
But despite some of its limitations, ChatGPT is an impressive piece of artificial intelligence. It knows enough to pass an MBA exam at the University of Pennsylvania’s Wharton School of Business. It’s a capable tool for generating accurate computer code. And when I asked it to describe five drawbacks of copper-nickel mining, its answer made me wonder whether some of Marshall Helmberger’s writing had been included in its training data. It wasn’t as good or as detailed as Marshall’s reporting, but as a basic overview it was accurate.
I’ll admit I was tempted to have ChatGPT write my column this week and only reveal at the end what I’d done. It may not have my style, but I suspect it would have been good enough to make you think you were getting the original Colburn and not the synthetic version.
And ChatGPT is already spawning spin-offs, like a site that promises to incorporate current information into what ChatGPT writes. Another company is touting its ability to take the “best-selling book” you “write” with ChatGPT and turn it into an e-book for sale. You can even choose to illustrate the book with AI-generated art from a site like DALL-E or Midjourney.
AI technology like ChatGPT seems to be racing ahead of our readiness to deal with it, and as with most technologies, there are practical and ethical issues that need to be addressed. If you “write” a book or poem with ChatGPT, who owns it for copyright purposes? Much of the written material ChatGPT was trained on was copyrighted, so do the original copyright holders of that material get a stake in what it produces? Is it ethical to produce a book and list yourself as the author when it was actually written and illustrated by AI? Does ChatGPT have an ethical responsibility to ensure that the material it produces is factual? The questions are many, complex, and largely unanswered at the moment.
The same can be said of artificial intelligence in general as it wriggles its way into more of our daily lives. The constant question raised by advances in AI is: If we can do it with AI, should we?
Artificial intelligence is rapidly expanding in our society, making decisions that humans once made and using its enormous computing power to make decisions that humans cannot. It’s moving at a faster rate than most of us realize, with ramifications we can’t fully comprehend. As with all technology, there will be benefits and there will be drawbacks. Is it moving too fast for us to keep up? Only time will tell.
