As the development of chatbots grows, so does the debate about artificial intelligence

California startup OpenAI has launched a chatbot capable of answering a variety of questions, but its impressive performance has reopened the debate about the risks associated with AI technologies.

Conversations with ChatGPT, posted on Twitter by intrigued users, show a kind of omniscient machine, capable of explaining scientific concepts, writing scenes for a play, drafting college essays, or even producing functional lines of computer code.

“Its answer to the question ‘what to do if someone has a heart attack’ was clear and incredibly relevant,” Claude de Loupy, head of Syllabs, a French company specializing in automatic text generation, told AFP.

“When you start asking very specific questions, ChatGPT’s response can be off the mark,” he said, but overall performance remains “really impressive,” with a “high language level.”

OpenAI was co-founded in 2015 in San Francisco by a group including tech billionaire Elon Musk, who left the business in 2018, and received $1 billion from Microsoft in 2019.

The startup is known for its automated creation software: GPT-3 for text generation and DALL-E for image generation.

De Loupy said that ChatGPT is able to ask its interlocutor for details, and has fewer bizarre responses than GPT-3, which, while ingenious, sometimes yields silly results.

Cicero

“A few years ago, chatbots had the vocabulary of a dictionary and the memory of a goldfish,” said Sean McGregor, a researcher who manages a database of AI-related incidents.

“Chatbots are getting a lot better at the ‘history problem,’ where they behave in a way consistent with the history of queries and responses. The chatbots have come out of the goldfish stage.”

Like other programs that rely on deep learning, which mimics neural activity, ChatGPT has one major weakness: “it has no access to meaning,” said De Loupy.

The program cannot justify its choices, such as explaining why it chose the words that make up its responses.

However, artificial intelligence technologies capable of holding a conversation are increasingly able to give the impression of thought.

Researchers at Facebook parent company Meta recently developed a computer program they named Cicero, after the Roman statesman.

The program has proven itself in the Diplomacy board game, which requires negotiation skills.

“If it doesn’t speak like a real person, showing empathy, building relationships and speaking knowledgeably about the game, it won’t find other players willing to work with it,” Meta said of its research findings.

In October, Character.ai, a startup founded by former Google engineers, put an experimental chatbot online that can adopt any personality.

Users create characters based on a brief description and can then “chat” with an imaginary Sherlock Holmes, Socrates, or Donald Trump.

Just a machine

This level of sophistication both fascinates and worries some observers, who fear these technologies could be misused to deceive people, whether by spreading false information or by creating increasingly credible scams.

What does ChatGPT think of these risks?

“There are potential risks in building highly sophisticated chatbots, especially if they are designed to be indistinguishable from humans in their language and behavior,” the chatbot told AFP.

Some companies put safeguards in place to avoid misuse of their technology.

On its welcome page, OpenAI sets out a disclaimer, saying that the chatbot “may occasionally produce incorrect information” or “produce harmful instructions or biased content.”

And ChatGPT refuses to take sides.

“OpenAI has made it very difficult to get the model to express opinions about things,” McGregor said.

Once, McGregor asked the chatbot to write a poem about a moral issue.

“I’m just a machine, a tool you use, I don’t have the power to choose, or to refuse. I can’t evaluate options, I can’t judge what’s right, I can’t make a decision on this fateful night,” it replied.

On Saturday, Sam Altman, co-founder and CEO of OpenAI, took to Twitter to reflect on the debates around AI.

“It’s interesting to watch people begin to argue about whether powerful AI systems should behave the way their users want them to or the way their creators intend,” he wrote.

“The question of whose values we align these systems to will be one of the most important debates society ever has.”

