



The dawn of artificial intelligence (AI) is officially upon us. Our daily scrolls are littered with AI helpers, from ChatGPT to Meta AI and X's Grok. More and more companies are turning to these AI pals in a bid to understand us (and our consumer behaviours) better.
While these tools have improved workflow efficiencies across the board, many of us are left wondering: are they doing more harm than good?
Experts Dr Richelle Mayshak and Dr Jess Saligari from Deakin University’s School of Psychology sit down with us to unpack the dangers of AI.
There are no two ways about it – the AI industry is booming. The sheer volume of AI tools entering the market is indicative of a shift in the way we work.
Thanks to the ever-improving capabilities of AI tools, many day-to-day tasks have been relegated to ‘that’s a job for AI’ status, meaning we can spend more time on creativity and innovation.
While the advantages of AI are clear, the past few years have also illuminated the limitations of AI tools.
AI tools are trained to recognise patterns and make decisions without human input. But AI cannot create anything truly 'new': the programs are trained on large sets of data, and the resulting models can only recombine what that data already contains.
In other words, AI uses information that already exists on the internet (or information that is fed into it manually) to formulate its output.
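To make that idea concrete, here's a deliberately tiny sketch in Python – a toy 'next word' generator, not how ChatGPT or any real chatbot actually works – that learns word-pair patterns from a small sample of text and then produces output by recombining them. Even at this scale, it can only ever produce words and pairings it has already seen.

```python
import random
from collections import defaultdict

# Toy training text; a real model would be trained on vast amounts of data.
training_text = "the cat sat on the mat the dog sat on the rug"

# "Training": record which words follow which in the data.
follows = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word].append(next_word)

# "Generation": start from a word and repeatedly pick a word that has
# followed it before. The output is always a recombination of the input.
random.seed(0)
word = "the"
output = [word]
for _ in range(6):
    options = follows.get(word)
    if not options:
        break
    word = random.choice(options)
    output.append(word)

print(" ".join(output))  # e.g. "the dog sat on the mat" – only words it has seen
```

Real systems are vastly larger and more sophisticated, but the underlying principle – output assembled from patterns in existing data – is the same, which is why the quality (and biases) of that data matter so much.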
As we become more accustomed to using AI, we demand more complex and nuanced output. Of course, AI tools need to be fed more comprehensive data sets to be able to deliver this kind of work.
Worryingly, recent studies have shown that there are dangerous and disturbing biases lurking within AI models.
In a 2024 study published in Nature, researchers found that AI tools generated covertly racist decisions about people based on their dialect.
When asked about speakers of African American English, the bots produced a host of negative stereotypes. The researchers also found that the models consistently assigned African American English speakers to lower-status jobs, and recommended more convictions and death penalties in hypothetical criminal cases.
If AI models continue to go unchecked and unregulated, they pose a very real risk of perpetuating dangerous stereotypes, biases and bigotry.
One of the most concerning developments in AI technology has been the introduction of deepfake tools.
Online harassment is far from a new phenomenon, but the ever-improving capabilities of AI programs mean that harassment now appears in new, frightening forms.
Deepfake technology has dominated headlines in 2024, highlighting the growing need for tighter regulations around how AI tools are used and what they can produce.
Deepfakes are videos or images in which a person's face or body has been digitally altered, often so they appear to be somebody else or to do and say things they never did. These images are frequently created with malicious intent.
Much of the deepfake content being produced online is pornographic in nature, and many celebrities have fallen victim to deepfake harassment. Deepfake images of Taylor Swift drew widespread condemnation in January 2024.
But this troubling trend is having widespread impacts outside Hollywood.
Deepfake technologies are also being used as a form of 'revenge porn', with the ABC reporting in June 2024 that around 50 girls from Bacchus Marsh Grammar, near Melbourne, had their images used to create deepfake pornography.
Experts expect that as AI tools improve, this issue will only become more prevalent.
In addition to the dangers AI poses to humans, the tools also pose a very real risk to the environment.
Large-scale AI models such as ChatGPT require significant power to stay up and running. The servers they run on are extremely powerful, each consuming a few kilowatts – equivalent to the average power use of an entire house.
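As a rough, back-of-envelope illustration (the figures below are assumptions chosen for easy arithmetic, not measurements from any particular data centre or from the article's sources), a server drawing a few kilowatts around the clock adds up to a substantial amount of energy over a year:

```python
# Illustrative arithmetic only; the inputs are assumptions, not measured data.
server_power_kw = 3.0       # "a few kilowatts" of continuous draw (assumed)
hours_per_year = 24 * 365   # the server runs around the clock

annual_energy_kwh = server_power_kw * hours_per_year
print(f"One server: roughly {annual_energy_kwh:,.0f} kWh per year")  # ~26,000 kWh
```

Multiply that by the racks of servers needed to run a large model – plus the cooling infrastructure that keeps them from overheating – and the scale of the demand becomes clear.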
In other words, AI tools are enormous energy consumers.
AI tools are also thirsty beasts: with current global demand for AI, it's estimated that 4.2–6.6 billion cubic metres of water will be withdrawn in 2027 – more than half of the United Kingdom's total annual water withdrawal.
As demand for AI tools increases, so too will water withdrawal; it’s impossible to keep the tools running without it.
We've long understood that social media is negatively impacting the mental health of young people. The Australian Government's social media ban for people under 16, set to take effect in late 2025, is evidence of just how serious the issue has become.
Deakin University’s Dr Mayshak and Dr Saligari note, ‘When used heavily, social media use has been associated with higher levels of anxiety, depression and social comparison pressures, particularly for adolescents; cyberbullying remains a major concern, disproportionately impacting vulnerable groups.’
Unfortunately, the improving capabilities of AI have only exacerbated the issue.
As Dr Mayshak and Dr Saligari say, ‘AI driven tools like deepfakes have been weaponised for bullying, spreading misinformation, contributing to fear and distrust in information gained in online environments, which can undermine the benefits of online interaction’.
While the dangers of AI are irrefutable, Dr Mayshak and Dr Saligari suggest that banishing the tools isn’t the answer.
'Banning AI for young people may not be the most practical or effective solution, given its potential for both harm and benefit, and its integration into many commonplace systems,' Dr Mayshak and Dr Saligari note.
Instead, the focus should be on equipping young people with the skills they need to evaluate digital content and safely navigate online spaces.
The duality of social media – being both a resource and a risk for young people – highlights the need to empower youth with digital literacy skills. The goal is to reduce the negative impacts while amplifying the positive potential of these tools.
Dr Mayshak and Dr Saligari point to the benefits of AI, including the ‘opportunities [it offers] for education, cultural awareness and mental health support.’
So, restricting access isn’t the solution. Rather, Dr Mayshak and Dr Saligari suggest, ‘We should prioritise fostering digital literacy to help young users critically evaluate AI outputs. This includes teaching them to question the credibility and motives behind AI-driven interactions while emphasising ethical AI development and transparency.’
With increasing instances of online harassment and deepfake imagery entering our legal systems, new legislation and regulations have been put in place to punish offenders.
For example, ‘Using a Carriage Service to Menace or Harass’ is prosecuted under Commonwealth legislation. A carriage service is any form of electronic communication, for example emails, calls, text messages and social media communications.
This offence is indictable under section 474.17 of the Criminal Code, which provides that a person who uses a carriage service to menace, harass or cause offence to another person is punishable by law.
The maximum penalty for this offence is three years imprisonment, while the maximum penalty for the aggravated offence is five years imprisonment.
While this is just one example of legislation attempting to combat the dangers of AI and social media, Dr Mayshak and Dr Saligari posit that further regulations will likely be put in place to address ongoing harms.
‘Future measures may include enhanced privacy protections, stricter oversight of harmful content and mandated digital literacy programs. While regulations alone cannot solve all issues, they are a crucial step toward creating an online environment that prioritises wellbeing, safety and inclusivity’, Dr Mayshak and Dr Saligari note.
As social media and the internet more broadly become increasingly fraught with unchecked and unregulated AI tools, there are questions about what the future of the space will look like.
Dr Mayshak and Dr Saligari tell us that the future of social media and the internet for young people is promising, but requires intentional efforts to mitigate harm. While social media can promote positive mental health by offering support networks, education and destigmatisation, issues such as cyberbullying and harassment persist.
'The misuse of deepfake technologies demands stronger legal and educational interventions,' Dr Mayshak and Dr Saligari note.
Developments are already happening in this space. Several recent articles have proposed a co-design approach for social media, in which young people are involved in designing the digital policies that address these challenges and shape their online future.
Dr Mayshak and Dr Saligari note that ‘Education, particularly in digital and emotional literacy, will be critical in equipping young people to navigate these spaces safely.’
As we continue to navigate the ever-changing world of AI, governments, industries and individuals will need to find the balance between embracing its benefits and safeguarding against its dangers.
AI holds promise – from supporting mental health and educational initiatives to enhancing productivity in countless fields.
But, as we’ve seen, these tools can also be used to inflict a great deal of harm, particularly when it comes to young people. The rise of deepfakes, cyberbullying and misinformation, coupled with the environmental costs of AI, highlights the complexity of this technology.
Experts like Dr Mayshak and Dr Saligari are emphatic that banning AI may not be the solution; instead, putting resources towards digital literacy education and safe online engagement is key.
As we continue our journey towards a technology-fuelled future, it's clear that regulation and education will be key to reducing the dangers of AI.