4 essential reads that break through the hype

By Eric Smalley, The Conversation | December 20, 2023

Within four months of ChatGPT’s launch on November 30, 2022, most Americans had heard of the AI chatbot. The hype about – and fear of – the technology was at a fever pitch for much of 2023.

OpenAI’s ChatGPT, Google’s Bard, Anthropic’s Claude, and Microsoft’s Copilot are among the chatbots that use large language models to carry on eerily humanlike conversations. The experience of interacting with one of these chatbots, combined with the hype coming out of Silicon Valley, can give the impression that these tech marvels are sentient entities.

But the reality is considerably less magical or glamorous. The Conversation published several articles in 2023 that dispel some key misconceptions about this latest generation of AI chatbots: that they know something about the world, can make decisions, are a replacement for search engines and operate independently of humans.

1. Disembodied Facts

Chatbots based on large language models seem to know a lot. You can ask them questions, and they usually answer correctly. Despite the occasional comically wrong answer, the chatbots can converse with you much the way people do – people who share your experience of being a living, breathing human.

But these chatbots are sophisticated statistical machines that are extremely good at predicting the best order of words to respond with. Their “knowledge” of the world is actually human knowledge, as reflected in the vast amount of human-generated text on which the chatbots’ underlying models are trained.

Psychology researcher Arthur Glenberg of Arizona State University and cognitive scientist Cameron Robert Jones of the University of California, San Diego explain how people’s knowledge of the world depends as much on their bodies as on their brains. “For example, people’s understanding of a term like ‘paper sandwich wrapper’ includes the look, feel, weight of the wrapper and therefore the way we can use it: for wrapping a sandwich,” they explained.

Because of this knowledge, people also intuitively know other ways to use a sandwich wrapper, such as an improvised means to cover your head in the rain. Not so with AI chatbots. “People understand how to use things in ways that are not captured in language use statistics,” they wrote.


Read more: It takes a body to understand the world – why ChatGPT and other language AIs don’t know what they’re saying


2. Lack of judgment

ChatGPT and its cousins can also give the impression of having cognitive skills – such as understanding the concept of negation or making rational decisions – thanks to all the human language they incorporate. This impression has led cognitive scientists to test these AI chatbots to assess how they compare to humans in different ways.

AI researcher Mayank Kejriwal of the University of Southern California tested large language models’ understanding of expected gain, a measure of how well someone understands the stakes in a gambling scenario. He found that the models were essentially guessing randomly.

“This is the case even when we give it a trick question like: If you flip a coin and it comes up heads, you win a diamond; if it comes up tails, you lose a car. Which would you take? The correct answer is heads, but the AI models chose tails about half the time,” he wrote.
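To make the idea of expected gain concrete, here is a minimal sketch in Python. It is only an illustration of the concept, not the researchers’ test: the dollar values and the way the trick question is split into two bets are assumptions chosen for the example.

```python
# A minimal sketch of "expected gain": the probability-weighted average payoff of a bet.
# The dollar values for the diamond and the car are illustrative assumptions,
# not figures from the study.

def expected_gain(outcomes):
    """outcomes: iterable of (probability, payoff) pairs; returns the expected payoff."""
    return sum(p * payoff for p, payoff in outcomes)

# One way to read the trick question: "take heads" means you stand to win a diamond,
# "take tails" means you stand to lose a car; either way, the coin is fair.
take_heads = [(0.5, 5_000), (0.5, 0)]       # heads: win a diamond (~$5,000), else nothing
take_tails = [(0.5, 0), (0.5, -30_000)]     # tails: lose a car (~$30,000), else nothing

print(expected_gain(take_heads))   # 2500.0   -> positive expected gain, the rational pick
print(expected_gain(take_tails))   # -15000.0 -> negative expected gain, the irrational pick
```

Anyone who grasps expected gain picks heads immediately; the point of the study was that the models did not.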


Read more: Don’t gamble with ChatGPT – research shows language AIs often make irrational decisions


3. Summaries, not results

While it may not be surprising that AI chatbots aren’t as human as they seem, they aren’t necessarily digital superstars either. For example, ChatGPT and the like are increasingly used instead of search engines to answer questions. The results are mixed.

Information scientist Chirag Shah of the University of Washington explains that large language models perform well at summarizing information, combining key information from multiple search engine results into a single block of text. But this is a double-edged sword. It is useful for getting to the essence of a subject – assuming there are no ‘hallucinations’ – but it leaves the searcher with no idea of the sources of that information and robs them of the serendipity of stumbling on unexpected information.

“The problem is that even if these systems are wrong only 10% of the time, you don’t know which 10%,” Shah wrote. “That’s because these systems are not transparent – they don’t reveal what data they were trained on, what sources they used to come up with answers, or how those answers are generated.”


Read more: AI information retrieval: A search engine researcher explains the promise and danger of letting ChatGPT and its cousins search the web for you


4. Not 100% artificial

Perhaps the most pernicious misconception about AI chatbots is that because they are built on artificial intelligence technology, they are highly automated. While you may be aware that large language models are trained on human-produced text, you may not be aware of the thousands of workers – and millions of users – who are constantly fine-tuning the models, teaching them to weed out harmful responses and other unwanted behavior.

Georgia Tech sociologist John P. Nelson pulled back the curtain on the big tech companies to show that they use workers, mostly in the Global South, and user feedback to train the models on which responses are good and which are bad.

“There are a lot of human workers hidden behind the screen, and they will always be needed if the model is to continue improving or expanding its content,” he wrote.


Read more: ChatGPT and other language AIs are nothing without humans – a sociologist explains how countless hidden people make the magic


This story is a collection of articles from The Conversation archives.

This article is republished from The Conversation, a nonprofit, independent news organization providing facts and analysis to help you understand our complex world.

It was written by: Eric Smalley, The Conversation.

