From deep learning that can replicate human behaviour to the seamless manipulation of photos and videos, artificial intelligence grows more relevant each day. The McKinsey Global Survey on AI shows that global AI adoption in 2022 was 2.5 times higher than in 2017. It has spread across several industries, even those like art, which many once deemed too creative and subjective for technology to replace. As we get ready to dive headfirst into 2023, we must consider the very real possibility that AI will take over several areas of our lives in the near and distant future. While some of its creations are proving useful in unexpected ways, others remain embroiled in controversy.
Recently, it brought good news. Search engine optimization consultant Danny Richman used to help contractor Danny Whittle, who has dyslexia, edit his emails. That was until Richman used AI to build an app that converts typed text into a formal email.
Another swanky piece of AI technology to catch the internet’s attention this week is ChatGPT. It simulates human conversation in order to understand and respond to human language. The company behind the chatbot is OpenAI, co-founded by none other than the internet’s favourite troublemaker, Elon Musk. The same company also gave birth to DALL-E, which can produce images from text descriptions. It became so popular that entire Twitter accounts are now dedicated to its strange creations.
Photo of a gnome with a knife running in a hallway pic.twitter.com/Tmq76feC1N
— Weird Ai Generations (@weirddalle) November 29, 2022
Social media has also been flooded with AI-generated portraits made with Lensa AI, an app that can create paintings in various styles from your selfies. Although quite entertaining, it has had a massive adverse impact on the art community. Artists have raised valid concerns about the use of AI for art and how it threatens their work. Many people online have also accused the app of stealing from real, human artists to generate its paintings.
I’m cropping these for privacy reasons/because I’m not trying to call out any one individual. These are all Lensa portraits where the mangled remains of an artist’s signature is still visible. That’s the remains of the signature of one of the multiple artists it stole from.
A 🧵 https://t.co/0lS4WHmQfW pic.twitter.com/7GfDXZ22s1
— Lauryn Ipsum (@LaurynIpsum) December 6, 2022
But even more concerning is the possibility that a predator might use such apps to generate harmful or offensive images of someone without their consent. In an article for WIRED, Olivia Snow recounted Lensa AI creating nude photos of her, even though she had uploaded only headshots. Several other women reported receiving highly sexualised images, if not completely nude ones. Further, women of colour reported that the app lightened their skin and made their features look more anglicised.
Similar fears arose when deepfakes became popular. Deepfakes employ an unsettling but impressive AI technique in which someone’s face is placed onto another person’s existing photo or video. Several complaints of nonconsensual pornography, child sexual abuse material and political propaganda emerged. It brought into question the ethics of AI, a conversation we are yet to have.
The world of artificial intelligence is one we are still tiptoeing into. It is a technology with immense, life-changing potential, and we are already seeing evidence of it. But we are also well aware of the dangers it poses if misused. The scales will likely continue to tip between pros and cons for a while. But the urgent need to discuss AI’s serious implications for safety and privacy grows each day.