Imagine an artificial intelligence tool that not only responds to your instructions but can instruct itself and execute tasks without the need for human approval.

It’s not science fiction – the tech is already here with semi-autonomous agents such as AutoGPT.

Based on the same technology as OpenAI’s wildly popular chatbot ChatGPT, AutoGPT is more powerful because it is connected to the internet and does not wait to be told what to do next.

It can take a user’s broad goal, break it down into sub-tasks and execute each one – such as reading documents, searching the internet, creating spreadsheets and even downloading software – to eventually produce an output without any further prompting from the user.
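In practice this works as a loop: the language model proposes the next sub-task, the agent runs it with a matching tool (a web search, a file write), and the result is fed back in as context for the following step. The sketch below illustrates that plan-and-execute pattern in broad strokes only – the names in it (StubLLM, run_agent, TOOLS and the canned steps) are hypothetical illustrations, not AutoGPT’s actual code, and the real model call is stubbed out so the example runs offline.

```python
from collections import deque


class StubLLM:
    """Stand-in for a real language model call (e.g. GPT-4 over an API).
    It returns canned 'next step' commands so the sketch runs offline."""

    def __init__(self):
        self.steps = deque([
            "SEARCH: competitor pricing for note-taking apps",
            "WRITE: report.txt | Summary of competitor pricing findings.",
            "DONE",
        ])

    def next_step(self, goal: str, history: list) -> str:
        # A real agent would send the goal plus the history of results
        # back to the model and ask it to choose the next sub-task.
        return self.steps.popleft() if self.steps else "DONE"


def search_web(query: str) -> str:
    # A real agent would call a search API here; stubbed for the sketch.
    return f"(pretend search results for: {query})"


def write_file(arg: str) -> str:
    # Hypothetical "filename | contents" convention, invented for this sketch.
    name, _, body = arg.partition(" | ")
    with open(name, "w") as f:
        f.write(body)
    return f"wrote {name}"


# Each verb the model may emit maps to a tool the agent can execute.
TOOLS = {"SEARCH": search_web, "WRITE": write_file}


def run_agent(goal: str, llm: StubLLM, max_steps: int = 10) -> list:
    """Ask the model for the next sub-task, run the matching tool,
    and feed the result back as context, until the model says DONE."""
    history = []
    for _ in range(max_steps):  # hard cap so the agent cannot loop forever
        step = llm.next_step(goal, history)
        if step == "DONE":
            break
        verb, _, arg = step.partition(": ")
        result = TOOLS[verb](arg)
        history.append(f"{step} -> {result}")
    return history


if __name__ == "__main__":
    for line in run_agent("Research the note-taking app market", StubLLM()):
        print(line)
```

In the real tool the stubbed model call is a request to GPT-4 and the tool set is far richer (web browsing, file management, code execution), but the shape of the loop – plan, act, observe, repeat, with some cap or approval step to stop runaway behaviour – is the essence of what makes agents like AutoGPT different from a chatbot.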

For any Terminator fans, it may sound eerily similar to Skynet, but experts insist AutoGPT is not self-aware and that “generalised AI” with the ability to think is decades away.

“We haven’t got to that Skynet stage,” University of New South Wales Business School associate professor Dr Rob Nicholls said.

“Generative AI products (such as ChatGPT and AutoGPT) appear to be very smart because they are very smart, but they are not thinking products.

“Generalised AI is theoretically possible but we don’t know how to get to it at the moment.”

How AutoGPT is being used

Already, people are using AutoGPT to help create apps and websites, conduct market research for business ideas, automate job searches and generate Dungeons & Dragons campaigns.

Dr Nicholls has personally used it to come up with questions for his university students that cannot be answered by generative AI – essentially using AutoGPT to render itself useless.

While his experimentation has produced results, the bot still “takes a lot of tweaking”.

He is confident, however, this will quickly improve.

“It learns from what it’s asked and by its mistakes,” he said.

“When somebody else comes and asks a similar question, it will require less tweaking, and if lots of people are asking similar questions, the amount of tweaking for the 100th person will be quite minimal.”

ChatGPT took just two months to reach 100 million users, and the feedback from that enormous user base has allowed it to improve very rapidly.

“It was almost like you were asking a child in December and now you are talking to a well-educated adult,” Dr Nicholls said.

In the future, experts hope AI will improve everything from transport, with safer autonomous vehicles, to healthcare, with earlier cancer diagnoses; law and order, with faster crime detection; and education, with AI assistants relieving teachers of the repetitive work that leads to burnout.

Dr Nicholls hoped it would also learn to sort fact from fiction, addressing the issue of misinformation and disinformation.

The good and the bad of generative AI

University of Adelaide’s Australian Institute for Machine Learning founding director Professor Anton van den Hengel said AI would give countries and companies an economic advantage.

“It’s the single technology that will improve productivity in every Australian industry,” he said.

Global research by professional services company Accenture projected AI could double annual economic growth rates and boost labour productivity by up to 40 per cent by 2035.

But AI could also have a dark side if it was used for malicious purposes or allowed to run wild with “hallucinations” – inaccurate or completely fabricated responses presented as fact.

AutoGPT could be used to speed up malware creation and increase the reach of scams, such as phishing emails that attempt to steal personal details.

Still, Dr Nicholls – also deputy director of UNSW Institute for Cyber Security – did not discourage people from experimenting with AutoGPT.

He only warned against over-sharing personal data, or that of an employer.

“When all of the large language models (ChatGPT, AutoGPT, etc) collect information off the internet, primarily they try to de-identify information, but if you are continually feeding information about you in, there is a risk that will form part of the training model,” he said.

“Anything you wouldn’t be comfortable saying to a stranger, you shouldn’t be saying to a generative AI.”

He was not concerned, however, that AutoGPT would be hacked to execute tasks, such as downloading malware, without a user’s knowledge.

“All the players creating these systems – OpenAI, Microsoft, Google, Meta, etc. – they see this as a huge reputational risk (and) provide protection against the potential for hacking,” he said.

While AutoGPT was created by video game developer Toran Bruce Richards rather than one of the major tech companies, it was based on OpenAI’s GPT-4 (the model that powers ChatGPT) and Dr Nicholls said access to the model was designed to minimise risk to users.

AutoGPT itself may not be so confident, though.

Its own disclaimer included an agreement that “by using this software, you agree to assume all risks associated with its use, including but not limited to data loss, system failure, or any other issues that may arise”.

It also said that “as an autonomous experiment, Auto-GPT may generate content or take actions that are not in line with real-world business practices or legal requirements.”

To address AI’s potential issues, regulation has become a hot topic across the world, with governments scrambling to control a technology that is evolving faster than they can keep up with.

Professor van den Hengel said the only way Australia would have any control over the AI used here would be to develop its own.

“At the moment, any AI we use is being developed overseas so anything we say is going to be irrelevant,” he said.

Excerpt from an article in the Herald Sun, reported by Melanie Burgess.