Relax—#ChatGPT will not steal our #writing jobs
Posted On April 23, 2023
Nearly ten years ago, before ChatGPT was a twinkle in its creators’ eyes, two characters in one of my (nearly finished) novels created a program called Timebender, which they used to predict future events based on information models fed into it, guiding military and business decisions. It was highly proprietary: only three people in the story’s universe used the most sophisticated version, though a much more scaled-down version was released for public use. When a fourth character joined this happy little group, the female protagonist taught the newcomer how to choose datasets to produce a result for a specific question.
[He] stared at the dizzying list of data sets.
“How do you know which ones to use? Why wouldn’t you use all of them?”
“Because too much information is worse than too little information,” [she] said. “It would take too long for the calculation to run, and it would give you a useless answer. Because of all the variables, the result returned would be too global to apply to the situation. It is as if you asked it, ‘Will it rain today?’ It will answer, ‘It can and it can’t.’” [Unpublished Novel—The Shadow King by B. R. Turnage]
And now, a decade later, we smart little monkeys have created ChatGPT, a product OpenAI’s CEO now says is at its scaling limits.
Yet it’s hailed as a time-saving, efficient solution for content creators (that’s you and me, fellow wordsmiths), generating everything from ad copy, story synopses, and outlines to whole books. Here is what ChatGPT says about the benefits of using generative AI:
Indeed, the AI that has set the world on fire, now integrated with the Bing search engine, has upset many people, from academics to fiction magazines, because of how easily it generates content. Academics worry that students will use the AI to write their papers. Others worry that AI, which simulates human contextual speech, will end up being an unreliable tool in various applications.
ChatGPT is just one of a new generation of generative models—its fame is a result of how accessible it is to the public, not necessarily its extraordinary function…While any algorithm can generate output, generative AI systems are typically thought of as those which focus on aesthetically pleasing imagery, compelling text, or coherent audio outputs. These are different goals than more traditional AI systems, which often try to estimate a specific number or choose between a set of options. More traditional AI systems might identify which advertisement would lead to the highest chance that an individual will click on it. Generative AI is different—it is instead doing its best to match aesthetic patterns in its underlying data to create convincing content.
But with any new technology, there are downsides.
The New York Times reporter who was unnerved by his two-hour conversation with the Bing AI describes it this way.
And while it has been hailed as the next big thing in content creation, in the few months since its worldwide deployment people have discovered that ChatGPT has its limitations. From a question on Quora:
Why does ChatGPT make up fictitious references/citations? If I ask for citations to support a question’s answer, it has created totally fictitious (but plausible-sounding) ones. It will use appropriate author names (who published in a journal), but couple them to completely fabricated article titles, coupled to Volume/Issue, and assign Page Numbers which don’t even align with the articles in that volume/issue. If challenged in OpenAI’s feedback, ChatGPT will regenerate the answer, but this time with correct citations. If it “knew about” correct citations, why weren’t they supplied in the first place? Why “hallucinate”?
Let’s ask it.
To ChatGPT: Some users have reported that ChatGPT returns fictitious references/citations. They reported that when they asked ChatGPT for citations to support a question’s answer, it created totally fictitious (but plausible-sounding) ones. It will use appropriate author names (who published in a journal), but couple them to completely fabricated article titles, along with volume/issue and page numbers that don’t even align with the articles in that volume/issue. If challenged in OpenAI’s feedback, ChatGPT will regenerate the answer, but this time with correct citations. Why?
ChatGPT’s (Version 4) answer: As an AI language model, ChatGPT does not intentionally generate fictitious references or citations. However, it is possible that the model may make errors or produce inaccurate responses due to the limitations of its training data and algorithms.
It’s possible that in some cases, ChatGPT may generate responses that sound plausible but are not accurate. This could be because the model is trained on a large amount of text, including both factual and fictional content. The model may also use statistical patterns to generate responses, which could lead to inaccuracies.
In the case where ChatGPT generates fictitious references, it’s possible that the model is incorrectly applying patterns it has learned from its training data. When the model is challenged on its response and asked to provide correct citations, it may generate a new response that aligns more closely with the user’s expectations.
It’s important to note that ChatGPT is a machine-learning model that relies on its training data to generate responses. While the model can be incredibly useful for generating ideas and providing information, it may not always be 100% accurate. It’s always a good idea to verify information from multiple sources and consult with experts in the field to ensure accuracy. [ChatGPT session from my account on April 23, 2023, titled “Inaccurate Citations.”]
There you have it, folks. ChatGPT generates responses based on patterns it detects from the base text fed to it.
This is true both of the information base it draws answers from and of the prompts you feed it. It can only give you an answer based on those patterns. It is not programmed to self-check the answers it returns; it will only refine an answer based on the new prompts you feed it. It’s rather like your lazy brother who can’t get off the couch to take out the trash the first time you ask him.
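To see what “answering from patterns” means at the smallest possible scale, here is a toy sketch of my own (an illustration of the general idea, not how ChatGPT is actually built): a bigram model that learns which word tends to follow which in its training text, then generates plausible-sounding sequences with no concept of whether they are true.

```python
import random
from collections import defaultdict

def train(corpus):
    """Map each word to the list of words that followed it in the corpus."""
    words = corpus.split()
    followers = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        followers[current].append(nxt)
    return followers

def generate(followers, start, length=8, seed=0):
    """Walk the learned patterns to produce plausible-sounding text."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = followers.get(out[-1])
        if not options:
            break  # no learned pattern continues from here
        out.append(rng.choice(options))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
model = train(corpus)
print(generate(model, "the"))
```

Every word pair it emits appeared somewhere in its training text, so the output always sounds familiar; whether the resulting sentence is accurate never enters into it. Scale that idea up by billions of parameters and you get something far more fluent, but the core limitation stays the same.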
ChatGPT has zero ability to distinguish truth from lie, accurate information from inaccurate, or fact from fiction; it treats all the text it was fed as “true,” regardless of that text’s inaccuracy or bias.
And as it warns, it may not be 100% accurate.
The whole answer ChatGPT returned above was predicated on the parameters of the question I fed it, because I presented “some users report” as a fact. That premise could be bogus, but ChatGPT answers as if it were true and builds its response on it.
And rather like the excerpt from my story above, too much information in a prompt will not return a useful answer. One answer to a request for an outline was so subpar that I cancelled my subscription. As I found, you can’t feed in your story premise, character lists, and parameters for a chapter-by-chapter synopsis and expect the AI to generate a complete outline. To get anything halfway useful, you need to build it bit by bit, and even then you’ll be editing it heavily to make it marketable.
My take? While ChatGPT is a useful tool, it acts like a co-dependent partner willing to do anything to please its partner. Fact-check it, always.
And with ChatGPT’s current 500-word limit on responses, any writer claiming to “write a book” using ChatGPT is downplaying how much work they had to put into doing so.
Lacking the creative human spark, generative AI in its present form will not replace writers. So grab a cup of java, my friends. We aren’t out of a job yet. But it is helpful, and in my next blog post, I’ll show you how.
Image Credit: By OpenAI – Public Domain: https://commons.wikimedia.org/w/index.php?curid=128598384