Digital platforms use algorithms and user data to target people with advertisements for mental health services, products, and therapies. This focused strategy selects relevant advertisements based on user behaviour, content consumption, and even inferred mental states. Although such targeting can help people reach the treatment they need, it also raises ethical questions about privacy, possible stigmatisation, and the reliability of mental health assessments inferred from internet activity.
Depression is a common condition that frequently goes undiagnosed and untreated within the UK National Health Service. Charities and volunteer organisations also provide mental health services, but they continue to struggle to reach those in need. By analysing social media (SM) content with machine learning techniques, it may be feasible to identify which SM users are currently experiencing low mood, allowing mental health services to be advertised directly to those who would benefit from them.
Depression is a leading cause of disability worldwide, affecting approximately 1 in 6 adults (17%) in high-income countries such as the UK. It contributes substantially to reduced quality of life, suicide, and economic burden, costing the UK economy an estimated £105.2 billion annually through lost productivity and healthcare expenses. Despite this, fewer than 40% of individuals with depression receive timely treatment. The persistent underdiagnosis and undertreatment of depression highlight a gap that emerging digital technologies and targeted advertising seek to fill.
In the UK, primary care physicians handle the majority of mental health consultations. However, depression often goes undiagnosed because symptoms present vaguely or because patients are reluctant to disclose psychological distress, owing mainly to stigma and fear that employers might access their records. Even when depression is diagnosed, treatment delays are common; for instance, over 50% of patients wait more than 3 months to access psychological therapies. This underlines a serious deficiency in service accessibility, compounded by rising demand and limited resources.
With over 4.5 billion users worldwide, social media (SM) platforms have become powerful tools for behaviour prediction. Machine learning and AI techniques can detect depressive symptoms in user-generated content such as posts, comments, and images. These tools analyse patterns like emotional tone (sentiment analysis), post frequency, and visual cues to flag individuals potentially suffering from low mood or depression. Research has even shown that some disclosures on Facebook and Twitter can be specific enough to meet clinical criteria for major depressive episodes. This enables targeted advertising: mental health organisations or pharmaceutical companies could deliver ads or resources directly to individuals flagged as at risk. This form of precision outreach promises earlier intervention and better resource allocation.
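The pattern analysis described above can be illustrated with a deliberately simplified sketch. The lexicon, the scoring rule, the 14-day window, and the flagging threshold below are all invented for demonstration; a real screening system would use validated sentiment models and clinically grounded criteria, not a toy word list.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List

# Hypothetical mini-lexicons; real systems use validated sentiment resources.
NEGATIVE_WORDS = {"hopeless", "tired", "alone", "worthless", "sad", "empty"}
POSITIVE_WORDS = {"happy", "great", "excited", "grateful", "fun"}

@dataclass
class Post:
    text: str
    timestamp: datetime

def sentiment_score(text: str) -> int:
    """Crude lexicon score: positive word count minus negative word count."""
    words = text.lower().split()
    return sum(w in POSITIVE_WORDS for w in words) - sum(w in NEGATIVE_WORDS for w in words)

def flag_low_mood(posts: List[Post], window_days: int = 14, threshold: float = -0.5) -> bool:
    """Flag a user if average sentiment over their recent posts falls below
    an (assumed) threshold. Not a diagnostic tool."""
    if not posts:
        return False
    cutoff = max(p.timestamp for p in posts) - timedelta(days=window_days)
    recent = [p for p in posts if p.timestamp >= cutoff]
    avg = sum(sentiment_score(p.text) for p in recent) / len(recent)
    return avg < threshold
```

A flagged user would then be eligible to receive targeted mental health resources; in practice, any such pipeline would also need consent handling, auditability, and far more robust language modelling than simple word counting.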
While promising, targeting mental health conditions raises critical ethical concerns. Users often remain unaware of how their data are collected or used. There is potential for algorithmic misdiagnosis and for misuse of sensitive information by third parties, especially commercial advertisers. Receiving mental health-related advertisements may also unintentionally increase stigma or distress. Surveys show that while many users are open to receiving targeted support online, they demand transparency and safeguards, and many express concerns about how accurately depression can be detected, who accesses the data, and how the information is used.
Mental health charities and third-sector organisations could benefit from this strategy by reaching underserved populations without physical outreach. Similarly, pharmaceutical companies may use these tools to advertise antidepressants more efficiently. However, this blurs the line between ethical healthcare promotion and commercial exploitation, especially if advertising is not medically neutral or appropriately regulated.