San Francisco, CA: OpenAI, a California-based startup, has released a chatbot capable of answering a wide variety of questions. The chatbot's impressive performance, however, has rekindled the debate over the risks associated with artificial intelligence (AI) technologies.
Users enthralled by ChatGPT's conversations post them on Twitter, showing off a seemingly omniscient machine that can also write plays, dissertations, and even functional lines of computer code.
Claude de Loupy, head of Syllabs, a French company that specializes in automatic text generation, told AFP, “Its answer to the question ‘what to do if someone has a heart attack’ was incredibly clear and relevant.”
He stated that although ChatGPT’s overall performance remains “really impressive” and has a “high linguistic level,” “when you start asking very specific questions, ChatGPT’s response can be off the mark.”
Microsoft invested $1 billion in OpenAI in 2019. Tech billionaire Elon Musk cofounded the company in San Francisco in 2015 and left in 2018.
Automated creation software is the start-up's best-known product: GPT-3 generates text, and DALL-E generates images.
According to De Loupy, ChatGPT is capable of asking its interlocutor for specifics and produces fewer bizarre responses than GPT-3, which, despite its proficiency, occasionally produces absurd results.
A researcher who maintains a database of AI-related incidents, Sean McGregor, stated, “A few years ago chatbots had the vocabulary of a dictionary and the memory of a goldfish.”
“The ‘history problem,’ in which chatbots behave in a manner that is consistent with the history of queries and responses, is becoming much more sophisticated. The chatbots are no longer the same as goldfish.”
ChatGPT, like other programs that use deep learning to mimic neural activity, has one major flaw: according to De Loupy, it has no access to meaning.
The software cannot explain its choices, such as why it picked the particular words that make up its responses.
Nevertheless, AI technologies capable of communication increasingly give the impression of thought.
Researchers at Facebook parent Meta recently developed a computer program named Cicero, after the Roman statesman.
The software has demonstrated proficiency at the board game Diplomacy, which requires negotiation skills.
In its research findings, Meta stated, “If it doesn’t talk like a real person — showing empathy, building relationships, and speaking knowledgeably about the game — it won’t find other players willing to work with it.”
An experimental chatbot that can take on any personality was made available online in October by the start-up Character.ai, which was founded by former Google engineers.
Users can “chat” with a fictitious Socrates, Donald Trump, or Sherlock Holmes by creating characters based on a brief description.
“Just a machine”
This level of sophistication both awes and frightens some observers, who worry that these technologies could be used to deceive people by disseminating false information or fabricating increasingly convincing scams.
How does ChatGPT feel about these dangers?
The chatbot stated to AFP, “Especially if they are designed to be indistinguishable from humans in their language and behavior, there are potential dangers in building highly sophisticated chatbots.”
Some companies are putting up safeguards to prevent abuse of their technologies.
On its welcome page, OpenAI cautions that the chatbot “may occasionally generate incorrect information” or “produce harmful instructions or biased content.”
ChatGPT, for its part, stays neutral.
McGregor stated, “OpenAI made it extremely difficult to get the model to express opinions on things.”
McGregor once asked the chatbot to compose a poem about a moral dilemma.
“I am merely a machine, a tool for your use; I have no authority to choose or reject. On this fateful night, I cannot weigh the options, determine what is right, or make a decision,” it replied.
Sam Altman, cofounder and CEO of OpenAI, posted a musing on the AI debates on Twitter on Saturday.
He wrote, “Interesting watching people begin to debate whether powerful AI systems should behave in the way users want or the way their creators intend.”
“The question of whose values we align these systems to will be one of the most important debates society ever has.”