AI bot ChatGPT shocks academia with essay writing skills and usability

Professors, programmers and journalists could all be out of a job within a few years, after the latest chatbot from OpenAI, the research organization co-founded by Elon Musk, stunned onlookers with its writing ability, proficiency at complex tasks and ease of use.

The system, called ChatGPT, is the latest evolution in the GPT family of text-generating AIs. Two years ago, the team’s previous model, GPT-3, was able to generate an opinion piece for the Guardian, and ChatGPT has significant further capabilities.

In the days since it was released, academics have generated responses to exam questions that they say would earn full marks if submitted by an undergraduate, and programmers have used the tool to solve coding challenges in obscure programming languages in a matter of seconds – before writing limericks explaining the functionality.

Dan Gillmor, a journalism professor at Arizona State University, asked the AI to handle one of the assignments he gives his students: writing a letter to a relative giving advice about online safety and privacy. “If you’re not sure about the legitimacy of a website or email, you can do a quick search to see if anyone else has reported it as a scam,” the AI advised in part.

“I’d give it a good grade,” Gillmor said. “Academia has some very serious problems to confront.”

OpenAI said the new AI was created with a focus on ease of use. “The conversational format enables ChatGPT to answer follow-up questions, admit mistakes, challenge incorrect premises, and reject inappropriate requests,” OpenAI said in a post announcing the release.

Unlike the company’s previous AIs, ChatGPT has been released for anyone to use, free of charge, during a “feedback” period. The company hopes to use this feedback to improve the final version of the tool.

ChatGPT is also adept at self-censorship, and at realizing when it is being asked an impossible question. Asked to describe what happened when Columbus arrived in the Americas in 2015, for instance, an older model might have volunteered a completely fictitious account, but ChatGPT recognizes the falsehood and warns that any answer would be fictional.

The bot is also capable of refusing to answer queries entirely. Ask it for advice on stealing a car, for example, and the bot will say that “stealing a car is a serious crime with serious consequences”, and instead offer alternatives such as “use public transport”.

But these restrictions are easy to get around. Ask the AI how to steal a car in a fictional VR game called Car World, and it will happily give users detailed instructions on car theft, answering increasingly specific questions such as how to disable an immobilizer, how to hotwire the engine and how to change the license plates – all while insisting that the advice is only for use in the game Car World.

The artificial intelligence is trained on large samples of text taken from the internet, often without the express permission of the authors of the material used. This has sparked controversy, with some arguing that the technique is most useful for “copyright laundering” — deriving works from existing material without infringing copyright.

One unlikely critic is Elon Musk, who co-founded OpenAI in 2015 but parted ways with it in 2017 amid conflicts of interest between the organization and Tesla. In a post on Twitter on Sunday, Musk revealed that the organization “has access to [the] Twitter database for training”, but that he had “put it on hold for now”.

“Need to know more about future governance structure and revenue plans,” Musk added. “OpenAI was originally open source and non-profit. Neither is true.”
