Artificial intelligence tools have arrived with a bang. While you might have been vaguely aware that various algorithms have been in development for some time, you’re not alone if you didn’t see the storm approaching quite this quickly.
Since the free conversational chatbot ChatGPT launched in November 2022, online commentary has been constant. Can it really write your essays for you? Will writers be put out of a job? What does it mean for research? How can we harness its power to improve our work? Can it really replace humans?
Soon after ChatGPT’s launch, Microsoft jumped in with a $14 billion investment in its developer, OpenAI, while Google rushed to launch its competitor tool, Bard, in March 2023. Meanwhile, various other generative AI tools are producing all kinds of content. DALL·E, also from OpenAI, promises to produce original artwork based on anything you ask. Other AI tools are making videos, podcasts, logos and more.
As with any new technological development, generative AI tools have their naysayers. But this doesn’t mean we should be discounting their potential power, warns Professor Richard Dazeley, deputy head of school at Deakin University’s School of Information Technology.
‘The next generation is supposed to be far more accurate in what it is predicting,’ he says. ‘I’m not particularly into super intelligence fearmongering, but we’re typically predicting shorter timeframes for significant improvements.’
Before getting worked up about all the human jobs AI is going to replace, it’s important to understand what we’re dealing with. Let’s take a step back and start by understanding how these tools are powered.
‘AI is just a fancy term for algorithm,’ Prof. Dazeley explains. ‘The main difference is the perception of these tools – that they produce behaviour that appears to be intelligent. What it generates is not a duplication of something somebody else has done; it’s original in its own sense.’
The important thing to know about generative AI is that these tools operate on generalisations.
‘It studies billions of examples and builds up its own invented model of what those examples represent,’ Prof. Dazeley explains. ‘In the case of ChatGPT, if you ask it a question, it considers the way thousands of people have answered that, then generates something that is a generalisation of that answer.’
The term ‘generative’ comes from the way these tools make probabilistic predictions about what should come next.
‘In the case of the image generation tools, it starts with a random thing then corrects until it forms an image that it associates with the answer to the prompt you’ve provided,’ Prof. Dazeley adds.
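The probability-based prediction Prof. Dazeley describes can be sketched with a toy example. The bigram model below is a drastic simplification – real systems like ChatGPT use large neural networks, not word counts – but it illustrates the same principle: study many examples of which word tends to follow which, then generate the most probable continuation.

```python
from collections import Counter, defaultdict

# A tiny "training corpus" of example sentences.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat ate the fish ."
).split()

# Generalise from the examples: for each word, count which words
# were observed to come after it.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the most probable next word seen after `word` in the corpus."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

Because the model only stores generalised statistics, not the original sentences, its output can be a combination no single example ever contained – which is also why, as discussed below, it can get specifics wrong.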
The fact that generative AI is based on generalisations is the crux of its ‘fundamental flaw’, Prof. Dazeley suggests. Even discounting current processing limitations, it might not be reasonable to expect improved versions of this technology to be completely reliable in the future.
‘The generative AI model cannot be perfectly accurate guaranteed all the time, because as soon as you have something that generalises, it’s going to get specifics wrong,’ he explains.
Because these tools try to generalise across a lot of data, they can struggle with any topic on which there is limited knowledge. This has implications for anyone trying to do useful work in business, scientific research, studying and more – especially within niche fields or when working with new information.
‘It’s showing it’s good at fairly generalised skills, but where these systems will struggle is any time you’re doing something quite specific and need to be accurate,’ Prof. Dazeley says.
The next step in generative AI will involve integrating user-friendly systems like ChatGPT and DALL·E with other useful applications – for example, allowing ChatGPT to run a Google search, or allowing Siri to talk conversationally like ChatGPT. It’s easy to imagine this experience feeling rather ‘intelligent’.
‘At the moment it’s just using a model to predict how a human might respond, but this would mean having the system take your prompt and formulate a search query itself, find what it needs, then find a correct and specific answer for you,’ Prof. Dazeley explains.
‘In a research sense, these things exist now. In a commercial sense, companies are still in the process of testing to make sure they’re not releasing something too inaccurate. I expect we’ll see integrated AI systems probably within a year or two.’
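The integrated loop Prof. Dazeley describes can be sketched as three steps. Everything below is hypothetical – the functions are stand-ins, not real ChatGPT, Siri or Google APIs – but the control flow is the point: the system, not the human, formulates the query and grounds its answer in what it retrieves.

```python
def formulate_query(prompt: str) -> str:
    """Hypothetical: a language model rewrites the user's prompt as a search query."""
    return prompt.lower().rstrip("?")

def search(query: str) -> list[str]:
    """Hypothetical stand-in for a web search; returns canned snippets."""
    return [f"snippet about {query}"]

def compose_answer(prompt: str, snippets: list[str]) -> str:
    """Hypothetical: the model composes an answer grounded in the retrieved snippets."""
    return f"Based on {len(snippets)} result(s): {snippets[0]}"

def answer(prompt: str) -> str:
    query = formulate_query(prompt)           # step 1: prompt -> search query
    snippets = search(query)                  # step 2: fetch external information
    return compose_answer(prompt, snippets)   # step 3: specific, grounded answer

print(answer("Who founded Deakin University?"))
```

The design choice worth noticing is that the retrieval step supplies specific, current facts that a purely generalising model lacks – addressing exactly the accuracy gap discussed above.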
While Prof. Dazeley opts out of fearmongering, he does think young people should consider the emerging capabilities of AI when choosing a career path.
‘Anything that requires high-level intelligence is probably safe for a good period of time, plus there’s always new jobs,’ he says.
As AI continues to take hold, prompt engineering will become a highly sought-after skill, he predicts.
‘Rather than doing the data analysis, it’s how do you ask the right question to the AI so you get the answer that you want. We used to have training on how to do Google Search, but we’ll now need training on how to write questions for AI.’