Words by Nicolas Zoumboulis Art by Rebekah Rose
The term “Artificial Intelligence” (AI) is often associated with cartoonish villains like the Terminator or Agent Smith.
Epic tales of science fiction have been culturally significant in shaping our larger-than-life notion of what artificial intelligence means.
Yet because our general concept of AI comes from these fictional stories, we tend to strictly associate the dangers of AI with fiction too. Until the day we create a six-foot-tall, Austrian-accented, leather-jacket-wearing bodybuilder who fires off quips like “hasta la vista, baby”, there’s nothing to worry about, right?
The super-advanced AI we know and fear from pop culture is what’s referred to as the “singularity”: the technological point of no return that Stephen Hawking warned us about. Imagine that we make a machine that we eventually upgrade, that machine then learns to upgrade itself, that version quickly upgrades itself, and so on. The rate of growth this artificial superintelligence would hypothetically experience would be so rapid that experts call it an “intelligence explosion”, an explosion we would have no control over. So what happens to us then? Well, we are left far behind, rotting in the bygone age of humanity. This is the fear that Stephen Hawking and Elon Musk shared: that this nightmare is merely a few decades away.
If you’re not convinced that AI poses such an extreme threat to our lives, then consider the more immediate and realistic threat it poses to our jobs.
According to research, a third of Australian jobs are at risk of being automated by the year 2030. Can we imagine the uproar if a politician were to declare, “I have a plan to destroy a third of the workforce in 10 years, and you’re all going to thank me for it”? Yet this is the vision that the technology sector is selling us. The bulk of these lost jobs will be in lower-skilled or manual labour roles and concentrated within regional areas. Automation will also affect areas like legal services: according to Deloitte, more than 100,000 jobs in the sector have a high chance of being automated in the next 20 years, with paralegals and legal assistants the first to go. Generally, jobs that involve repetitive and routine processes will be overtaken by automation. According to PricewaterhouseCoopers, accountants and cashiers are among the most likely to be automated. In fact, most of our current part-time jobs will cease to exist, as sales roles and the food-service industry are likely to be automated.
Optimists say that most jobs will not be “destroyed” but rather “redefined”, and that widespread automation will help usher in an influx of new jobs. Right now, however, the effects of automation look devastating, especially for young people. A study by the Foundation for Young Australians found that 60% of Australian students are currently studying or training for jobs that either won’t exist or will look completely different in the next 10 to 15 years. To ensure better job security, young people are being urged to learn digital skills and embrace industries that value creativity and strong interpersonal skills.
Throughout history, technological progress has dramatically displaced workers, but over time markets have adapted and created new jobs for people. In this brave new world, though, can we still rely on the way progress has worked in the past? As the population rapidly grows, it’s difficult to determine whether new jobs will be created fast enough to replace those being destroyed. Regardless of what anyone says, there is simply no definitive law that guarantees the market will settle back into equilibrium.
Of course, if AI can help us to live better and fuller lives, then we should embrace it. If driverless cars can bring the road toll to zero, then let’s use them to make our lives safer. The hypothetical benefits of AI sound great; the problem is that we can’t actually say for certain whether we will be better off with it. Who should a driverless car save in a forced crash situation, its driver or civilians? If a person dies in an autonomous car crash, who is legally responsible? These are only a couple of the ethical questions we are struggling to solve ourselves, and as Silicon Valley carelessly races to develop AI, these difficult questions are only piling up.
So, what’s the right answer to all this? Well, I’m not going to ask Siri, that’s for sure.