OPINION: Tackling the Rising Mental Health Crisis Caused by AI (with AI)

By Emmanuel Akindele

In an era where the impact of social media on mental health feels like it’s become old news, a new concern looms on the horizon—artificial intelligence (AI). 

 

Our daily use of AI has skyrocketed, from asking digital assistants such as Siri or Alexa for the weather before leaving the house to processing returns through customer service chatbots.

 

While AI has automated and sped up mundane tasks and processes, remaining vigilant about its implications has never been more critical. 

 

When it comes to mental health services, AI can expand access and reduce costs for patients. However, the field has much work to do before realizing these benefits, and it starts with addressing how AI is being misused against youth and the impact that misuse has on their mental health.

 

With AI technology advancing by the minute, the power it possesses cannot be overstated. The AI market is projected to reach a staggering $407 billion by 2027, but it doesn’t end there. Approximately 84 percent of people who use AI-based technology are entirely oblivious to the fact that they are interacting with AI at all. Social media platforms are a chilling example of this.

 

So, let’s take a step back and absorb this fully.

 

As we mindlessly scroll through our feeds, algorithms powered by AI analyze our behaviour, preferences and interactions, using them to serve content that influences our thoughts and beliefs. It’s almost as though we live in a digital bubble that AI manipulates without a second thought.

 


 

And while AI is making significant strides across industries to push the boundaries of innovation, many companies have now started to program it for malicious purposes, most notably through the altering of personal images and videos with deepfake technology, as well as the recent introduction of AI voice changers such as Revoicer.

 

Such software violates our basic right to privacy, manipulating how we are perceived online and spreading false information.

 

With a pre-existing mental health crisis affecting approximately 70 percent of Canadian youth, and 39 percent of Ontario high-school students reporting moderate-to-serious anxiety and depression, the repercussions of misusing AI will only pour salt on an already severe societal wound.

Blue Guardian CEO, Emmanuel Akindele

 

Despite the inherent risks, if used appropriately, AI holds immense potential to revolutionize youth mental health care. By harnessing advanced technologies and algorithms for the better, we can provide personalized and accessible support catered to the needs of young individuals. 

 

Blue Guardian is at the forefront of leveraging data to hyper-personalize mental health resources, ensuring that the end users’ well-being remains the top priority. 

 

That said, the industry must make a collective effort to establish ethical standards and promote responsible AI practices. Transparency, protection, privacy and the well-being of the end user should be at the forefront of every organization that harnesses AI, especially those that engage with youth.

 

As we navigate the uncharted waters of AI’s impact on youth mental health, it is crucial to maintain an optimistic outlook. By embracing the power of advanced technologies, we have the potential to transform the youth mental health sphere for the better.