Algorithms Archives - People vs. Big Tech
We’re people, not users
https://peoplevsbig.tech/category/algorithms/

Briefing: protecting children and young people from addictive design
https://peoplevsbig.tech/briefing-protecting-children-and-young-people-from-addictive-design/
Thu, 07 Nov 2024

Research has shown the deep harm excessive social media use can do to young brains and bodies. The EU Commission must tackle the root cause.

Social media companies design their platforms to encourage users to spend as much time on them as possible. Addictive design affects everyone, but children and young people are especially susceptible. Research shows that, given their stage of neural development, young users are particularly prone both to excessive use of social media and to its harmful effects, and young users with pre-existing psychosocial vulnerabilities are even more at risk.

What is addictive design?

Social media platforms’ business model relies on keeping users online for as long as possible, so they can display more advertising. The platforms are optimised to trigger the release of dopamine - a neurotransmitter the brain releases when it expects a reward - making users crave more and use more.
Young users are far from exempt: leaked documents reveal that Meta has invested significant resources in studying the neurological vulnerabilities of young users, and even created an internal presentation on how to exploit them.

While more research is needed, the following addictive features have been identified:

  • Notifications such as “likes”: both the novelty and the validation of another user’s engagement trigger a dopamine release, reinforcing the desire to post and interact and creating a “social validation feedback loop”.
  • Hyper-personalised content algorithms or “recommender systems”: brain scans of students showed that watching a personalised selection of videos triggered stronger activity in addiction-related areas of the brain than non-personalised videos did.
  • Intermittent reinforcement: users receive content they find less interesting, punctuated by frequent dopamine hits from likes or a video they really enjoy. This keeps the user scrolling in anticipation of the next dopamine reward. The randomisation of rewards has been compared to “fruit machines” in gambling (see the sketch after this list).
  • Autoplay and infinite scroll: automatically showing the next piece of content creates a continuous, endless feed, making it difficult to find a natural stopping point.
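
To make the intermittent-reinforcement mechanism concrete, here is a minimal, illustrative Python sketch of a variable-ratio reward schedule. It is our own simplification, not drawn from any platform’s code: each scroll has a fixed chance of surfacing a highly engaging item, so the user can never tell whether the next scroll will pay off.

```python
import random

def simulate_feed(items: int, reward_probability: float = 0.2) -> None:
    """Simulate an intermittently rewarding feed: each scroll has a fixed
    chance of a 'dopamine hit', and the unpredictability of when it lands
    is what keeps users scrolling."""
    random.seed(42)  # fixed seed so the illustration is reproducible
    for position in range(1, items + 1):
        rewarding = random.random() < reward_probability
        label = "highly engaging item" if rewarding else "filler content"
        print(f"scroll {position:2d}: {label}")

simulate_feed(items=10)
```

Because the rewards are randomised rather than scheduled, a user who has just seen filler content has exactly the same chance of a reward on the next scroll, which is what makes “one more scroll” so hard to resist.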

Why is addictive design so harmful?

Excessive screen time and social media use have been shown to cause:

  • Neurological harm:
    • Reduction in grey matter in the brain, reported by several studies, similar to the effects seen in other addictions.
    • Reduced attention span and impulse control, linked to the rapid consumption of content on social media, particularly short-form videos, and especially in younger users.
    • Possible impairment of prefrontal cortex development, which is responsible for decision-making and impulse control, due to early exposure to social media’s fast-paced content. N.B. the prefrontal cortex does not fully develop until around age 25.
    • Possible development of ADHD-like symptoms, which early studies suggest may be linked to excessive screen time.
    • Temporary decline in task performance, identified in children after watching fast-paced videos.

  • Psychological harm:
    • In November 2023, Amnesty International found that within an hour of launching a dummy account posing as a 13-year-old child on TikTok who interacted with mental health content, multiple videos romanticising, normalising or encouraging suicide had been recommended. This illustrates both the risk of prolonged screen time and the hyper-personalisation of content by recommender systems.
    • Increased anxiety, depression, and feelings of isolation have been linked to prolonged online engagement, as social media can negatively affect self-esteem, body image and overall psychological well-being.
    • Risk exposure: Longer time online exposes children and young people more to risks such as cyberbullying, abuse, scams, and age-inappropriate content.

  • Physical harm:
    • “93% of Gen Z have lost sleep because they stayed up to view or participate in social media,” according to the American Academy of Sleep Medicine.
    • Reduced sleep and activity: Social media usage can lead to sleep loss and decreased physical activity, which impacts weight, school performance, mental health, and distracts from real-life experiences.

Gone is the time when the streets were considered the most dangerous place for a child to be - now, for many young people the most dangerous place they can be is alone in their room with their phone.

What’s the solution?

Given the severity of the risks to children online, we need binding rules for platforms. Unfortunately, the very large online platforms (VLOPs) have repeatedly demonstrated that they choose profit over the safety of children, young people and society in general.

The adjustments that some have made have been minor. For example, TikTok no longer allows push notifications after 9 pm for users aged 13 to 15, but those users will still be exposed to push notifications (linked to addictive behaviour) for most of the day. In March 2023, TikTok introduced a new screen-time management tool which requires under-18s to actively extend their time on the app once they have reached a 60-minute daily limit. However, this measure puts the burden on children, who in large numbers describe themselves as “addicted” to TikTok, to set limits on their own use of the platform. The prompt can also be easily dismissed and does not include a health warning. Adding to the limitations of the measure, the change only applies to users whom the system identifies as children, and the effectiveness of TikTok’s age verification has been called into question. For example, the UK’s media regulator Ofcom has found that 16% of British three- and four-year-olds have access to TikTok.

Meta’s leaked internal documents reveal that the corporation knowingly retains millions of users under 13 years old, and has chosen not to remove them. Notably, Harvard University research last year estimated that in the US alone, Instagram made $11 billion in advertising revenue from minors in 2022.

Risk of overreliance on age verification 

While we welcome norms on an appropriate age to access social media platforms, overreliance on age-gating and age verification is, unfortunately, unrealistic: even the most robust age verification can be circumvented, and on its own it will not adequately protect minors online.
Age-gating and age verification also assume that parents or guardians have the availability, capacity and interest to monitor internet usage. Frequent monitoring is unrealistic for most families, and this approach particularly risks disadvantaging young people who face additional challenges, such as those living in care, or whose parents work long hours or face language barriers in their country of residence.
To truly protect children and young users, we need safe defaults for all. Please see our whitepaper prepared in collaboration with Panoptykon and other researchers and technologists: Safe by Default: Moving away from engagement-based rankings towards safe, rights-respecting, and human centric recommender systems.
Aside from this, age verification presents its own risks to privacy, security and free speech, as well as costs and inconvenience to businesses.

Establishing binding rules

Fortunately, there has been momentum to tackle addictive design in the EU. Last December, the European Parliament adopted, by an overwhelming majority, a call urging the Commission to address addictive design. In its conclusions on the Future of Digital Policy, the Council stressed the need for measures to address issues related to addictive design. In July, Commission President von der Leyen listed this as a priority for the 2024-2029 mandate. The Commission’s recent Digital Fairness Fitness Check also highlighted the importance of addressing addictive design.

The Commission must:

  • assess and prohibit the most harmful addictive techniques not already covered by existing regulation, with a focus on provisions on children and special consideration of their specific rights and vulnerabilities.
    • examine whether an obligation not to use profiling/interaction-based content recommender systems ‘by default’ is required in order to protect users from hyper personalised content algorithms; 
    • put forward a ‘right not to be disturbed’ to empower consumers by turning all attention-seeking features off.
  • ensure strong enforcement of the Digital Services Act on the protection of minors, prioritising:
    • clarifying the additional risk assessment and mitigation obligations of very large online platforms (VLOPs) in relation to potential harms to health caused by the addictive design of their platforms;
    • independently assessing the addictive and mental-health effects of hyper-personalised recommender systems;
    • naming features in recommender systems that contribute to systemic risks.

Letter to European Commissioner Breton: Tackling harmful recommender systems
https://peoplevsbig.tech/letter-to-european-commissioner-breton-tackling-harmful-recommender-systems/
Mon, 05 Feb 2024

Civil society organisations unite behind Coimisiún na Meán’s proposal to disable profiling-based recommender systems on social media video platforms.

Dear Commissioner Breton,

Coimisiún na Meán’s proposal to require social media video platforms to disable, by default, recommender systems based on intimately profiling people is an important step toward realising the vision of the Digital Services Act (DSA). We, eighteen civil society organisations, urge you not to block it and, moreover, to recommend this as a risk mitigation measure under Article 35 of the DSA. This is an opportunity to once more prove European leadership.

Disabling profiling-based recommender systems by default has overwhelming support from civil society, the Irish public and MEPs across the political groups. More than 60 diverse Irish civil society organisations endorsed a submission strongly backing this measure, as covered by the Irish Examiner. We are united in our support for this Irish civil society initiative. 82% of Irish citizens are also in favour, as shown in a national poll across all ages, education levels, incomes and regions of Ireland, conducted independently by Ireland Thinks in January 2024. At the end of last year, a cross-party group of MEPs wrote a letter urging the Commission to adopt the Irish example across the European Union.

Our collective stance is based on overwhelming evidence of the harms caused by profiling-based recommender systems, especially for the most vulnerable groups such as children. Algorithmic recommender systems select emotive and extreme content and show it to the people they estimate are most likely to engage with it. These people then spend longer on the platform, which allows Big Tech corporations to sell more ad space. Meta’s own internal research disclosed that a significant 64% of extremist group joins were caused by its toxic algorithms. Even more alarmingly, Amnesty International found that TikTok’s algorithms exposed multiple 13-year-old child accounts to videos glorifying suicide within an hour of the accounts’ launch.

Platforms that originally promised to connect and empower people have become tools that are optimised to “engage, enrage and addict” them. As described above, profiling-based recommender systems are one of the major areas where platform design decisions contribute to “systemic risks”, as defined in Article 34 of the DSA, especially when it comes to “any actual or foreseeable negative effects” for the exercise of fundamental rights, to the protection of personal data, to respect for the rights of the child, on civic discourse and electoral processes, and public security, to gender-based violence, the protection of public health and minors and serious negative consequences to the person’s physical and mental well-being. By determining how users find information and how they interact with all types of commercial and noncommercial content, recommender systems are therefore a crucial design-layer of Very Large Online Platforms regulated by the DSA.

Therefore, we urge the European Commission not only to support Ireland’s move but to apply it across the European Union, recommending that recommender systems based on profiling people be disabled by default on social media video platforms as a mitigation measure for Very Large Online Platforms, as outlined in Article 35(1)(c) of the Digital Services Act.

Furthermore, we join the Irish civil society organisations in urging Coimisiún na Meán and the European Commission to foster the development of rights-respecting alternative recommender systems. For example, experts have pointed to various alternatives, including recommender systems built on explicit user feedback rather than data profiling, as well as ranking signals that optimise for outcomes other than engagement, such as quality content and plurality of viewpoints. Ultimately, the solution is not for platforms to provide only one alternative to the currently harmful defaults but rather to open up their networks to allow a marketplace of options offered by third parties, competing on a number of parameters including how rights-respecting they are, thereby promoting much greater user choice.
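
As an illustration of what such an alternative might look like, the hypothetical Python sketch below ranks content using only explicit user signals (follows and “show me more/less” feedback) together with a quality score, rather than profiling-derived engagement predictions. The field names and weights are our own assumptions, not a specification endorsed by the signatories.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    quality_score: float          # e.g. from independent quality/provenance signals
    predicted_engagement: float   # deliberately unused in the ranking below

@dataclass
class ExplicitPreferences:
    followed: set = field(default_factory=set)        # accounts the user chose to follow
    more_like_this: set = field(default_factory=set)  # authors given explicit "show me more"
    less_like_this: set = field(default_factory=set)  # authors given explicit "show me less"

def score(post: Post, prefs: ExplicitPreferences) -> float:
    """Rank by the user's explicit choices plus content quality,
    ignoring profiling-based engagement predictions entirely."""
    s = post.quality_score
    if post.author in prefs.followed:
        s += 1.0
    if post.author in prefs.more_like_this:
        s += 0.5
    if post.author in prefs.less_like_this:
        s -= 2.0  # explicit negative feedback is honoured, not overridden
    return s

def rank(posts: list, prefs: ExplicitPreferences) -> list:
    return sorted(posts, key=lambda p: score(p, prefs), reverse=True)
```

The design point is that the user’s stated preferences, not inferred ones, drive the ordering, and that a “show me less” signal actually demotes content instead of being ignored.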

We believe these actions are crucial steps towards mitigating the inherent risks of profiling-based recommender systems and towards a rights-respecting and pluralistic information ecosystem. We look forward to your support and action on this matter.

Yours sincerely,

  1. Amnesty International
  2. Civil Liberties Union for Europe (Liberties)
  3. Defend Democracy
  4. Ekō
  5. The Electronic Privacy Information Center (EPIC)
  6. Fair Vote UK
  7. Federación de Consumidores y Usuarios CECU
  8. Global Witness
  9. Irish Council for Civil Liberties
  10. LODelle
  11. Panoptykon Foundation
  12. People vs Big Tech
  13. The Citizens
  14. The Real Facebook Oversight Board
  15. Xnet, Institute for Democratic Digitalisation
  16. 5Rights Foundation
  17. #jesuislà
  18. Homo Digitalis

Open letter to the European Parliament: A critical opportunity to protect children and young people
https://peoplevsbig.tech/open-letter-to-the-european-parliament-on-the-addictive-design-of-online-services/
Mon, 11 Dec 2023


Dear Members of the European Parliament,

We, experts, academics and civil society groups, are writing to express our profound alarm at the social-media driven mental health crisis harming our young people and children. We urge you to take immediate action to rein in the abusive Big Tech business model at its core to protect all people, including consumers and children. As an immediate first step, this means voting for the Internal Market and Consumer Protection Committee’s report on addictive design of online services and consumer protection in the EU, in its entirety.

We consider social media's predatory, addictive business model to be a public health and democratic priority that should top the agenda of legislators globally. Earlier this year, the US Surgeon General issued a clear warning about the impact of addictive social media design: “Excessive and problematic social media use, such as compulsive or uncontrollable use, has been linked to sleep problems, attention problems, and feelings of exclusion among adolescents… Small studies have shown that people with frequent and problematic social media use can experience changes in brain structure similar to changes seen in individuals with substance use or gambling addictions”.

This is no glitch in the system; addiction is precisely the outcome tech platforms like Instagram, TikTok and YouTube are designed and calibrated for. The platforms make more money the longer people are kept online and scrolling, and their products are therefore built around ‘engagement at all costs’ – leading to potentially devastating outcomes while social media corporations profit. One recent study by Panoptykon Foundation showed that Facebook's recommender system not only exploits users' fears and vulnerabilities to maintain their engagement but also ignores users' explicit feedback, even when they request to stop seeing certain content.

The negative consequences of this business model are particularly acute among those we should be protecting most closely: children and young people whose developing minds are most vulnerable to social media addiction and the ‘rabbit hole’ effect that is unleashed by hyper-personalised recommender systems. In October 2023, dozens of states in the U.S. filed a lawsuit on behalf of children and young people accusing Meta of knowingly and deliberately designing features on Instagram and Facebook that addict children to its platforms, leading to "depression, anxiety, insomnia, interference with education and daily life, and many other negative outcomes".

Mounting research has revealed the pernicious ways in which social media platforms capitalise on the specific vulnerabilities of the youngest in society. In November 2023, an investigation by Amnesty International, for example, found that within 20 minutes of launching a dummy account posing as a 13 year old child on TikTok who interacted with mental health content, more than half of the videos in the ‘For You’ feed were related to mental health struggles. Within an hour, multiple videos romanticising, normalising or encouraging suicide had been recommended.

The real-world ramifications of this predatory targeting can be devastating. In 2017, 14-year-old British teenager Molly Russell took her own life after being bombarded with 2,100 posts discussing and glorifying self-harm and suicide on Instagram and Pinterest over a 6-month period. A coroner’s report found that this material likely “contributed to her death in a more than minimal way”. The words of Molly’s father, Ian Russell, must serve as an urgent message to us all: “It’s time to protect our innocent young people, instead of allowing platforms to prioritise their profits by monetising their misery.”

Across Europe, children and young people, parents, teachers and doctors are facing the devastating consequences of this mental health crisis. But change will not come about from individual action. We urgently need lawmakers and regulators to stand up against a social media business model that is wreaking havoc on the lives of young people. We strongly endorse and echo the IMCO Committee Report’s calls on the European Commission to:

1. ensure strong enforcement of the Digital Services Act on the matter, with a focus on provisions on children and special consideration of their specific rights and vulnerabilities. This should include as a matter of priority:

  • independently assessing the addictive and mental-health effects of hyper-personalised recommender systems;
  • clarifying the additional risk assessment and mitigation obligations of very large online platforms (VLOPs) in relation to potential harms to health caused by the addictive design of their platforms;
  • naming features in recommender systems that contribute to systemic risks;
  • naming design features that are not addictive or manipulative and that enable users to take conscious and informed actions online (see, for example, People vs Big Tech and Panoptykon report: Prototyping user empowerment: Towards DSA-compliant recommender systems).

2. assess and prohibit harmful addictive techniques that are not covered by existing legislation, paying special consideration to vulnerable groups such as children. This should include:

  • assessing and prohibiting the most harmful addictive practices;
  • examining whether an obligation not to use interaction-based recommendation systems ‘by default’ is required in order to protect consumers;
  • putting forward a ‘right not to be disturbed’ to empower consumers by turning all attention-seeking features off by design.

Signed by the following experts and academics,

Dr Bernadka Dubicka BSc MBBS MD FRCPsych, Professor of Child and Adolescent Psychiatry, Hull and York Medical School, University of York

Dr Elvira Perez Vallejos, Professor of Mental Health and Digital Technology, Director RRI, UKRI Trustworthy Autonomous Systems (TAS) Hub, EDI & RRI Lead, Responsible AI UK, Youth Lead, Digital Youth, University of Nottingham

Ian Russell, Chair of Trustees, Molly Rose Foundation

Kyle Taylor, Visiting Digital World and Human Rights Fellow, Tokyo Peace Centre

Dr Marina Jirotka, Professor of Human Centred Computing, Department of Computer Science, University of Oxford

Michael Stora, Psychologist and Psychoanalyst, Founder and Director of Observatoire des Mondes Numériques en Sciences Humaines

Dr Nicole Gross, Associate Professor in Business & Society, School of Business, National College of Ireland

Dr S. Bryn Austin, ScD, Professor, Harvard T.H. Chan School of Public Health, and Director, Strategic Training Initiative for the Prevention of Eating Disorders

Dr Trudi Seneviratne OBE, Consultant Adult & Perinatal Psychiatrist, Registrar, The Royal College of Psychiatrists

Signed by the following civil society organisations,

AI Forensics

Amnesty International

ARTICLE 19

Avaaz Foundation

Civil Liberties Union for Europe (Liberties)

Federación de Consumidores y Usuarios CECU

Defend Democracy

Digital Action

D64 - Center for Digital Progress (Zentrum für Digitalen Fortschritt)

Ekō

Fair Vote UK

Global Action Plan

Global Witness

Health Action International

Institute for Strategic Dialogue (ISD)

Irish Council for Civil Liberties

Mental Health Europe

Panoptykon Foundation

Superbloom (previously known as Simply Secure)

5Rights Foundation

#JeSuisLà

Prototyping User Empowerment – Towards DSA-compliant recommender systems
https://peoplevsbig.tech/prototyping-user-empowerment-towards-dsa-compliant-recommender-systems/
Fri, 08 Dec 2023

What would a healthy social network look like? Researchers, civil society experts, technologists and designers came together to imagine a new way forward.


Executive Summary (full briefing here)

What would a healthy social network look and feel like, with recommender systems that show users the content they really want to see, rather than content based on predatory and addictive design features?

In October 2022, the European Union adopted the Digital Services Act (DSA), introducing transparency and procedural accountability rules for large social media platforms – including giants such as Facebook, Instagram, YouTube and TikTok – for the first time. When it comes to their recommender systems, Very Large Online Platforms (VLOPs) are now required to assess systemic risks of their products and services (Article 34), and propose measures to mitigate against any negative effects (Article 35). In addition, VLOPs are required to disclose the “main parameters” of their recommender systems (Article 27), provide users with at least one option that is not based on personal data profiling (Article 38), and prevent the use of dark patterns and manipulative design practices to influence user behaviour (Article 25).

Many advocates and policy makers are hopeful that the DSA will create the regulatory conditions for a healthier digital public sphere – that is, social media that act as public spaces, sources of quality information and facilitators of meaningful social connection. However, many of the risks and harms linked to recommender system design cannot be mitigated without directly addressing the underlying business model of the dominant social media platforms, which is currently designed to maximise users’ attention in order to generate profit from advertisements and sponsored content. In this respect, changes that would mitigate systemic risks as defined by the DSA are likely to be heavily resisted – and contested – by VLOPs, making independent recommendations all the more urgent and necessary.

It is in this context that a multidisciplinary group of independent researchers, civil society experts, technologists and designers came together in 2023 to explore answers to the question: ‘How can the ambitious principles enshrined in the DSA be operationalised by social media platforms?’. On August 25th 2023, we published the first brief, looking at the relationship between specific design features in recommender systems and specific harms. Our hypotheses were accompanied by a list of detailed questions to VLOPs and Very Large Online Search Engines (VLOSEs), which serve as a ‘technical checklist’ for risk assessments, as well as for auditing recommender systems.

In this second brief, we explore user experience (UX) and interaction design choices that would give people more meaningful control and choice over the recommender systems that shape the content they see. We propose nine practical UX changes that we believe can facilitate greater user agency, from content feedback features, to controls over the signals used to curate feeds, to specific ‘wellbeing’ features. We hope this second briefing serves as a starting point for future user research, grounding UX changes related to DSA risk mitigation in a better understanding of users’ needs.

This briefing concludes with recommendations for VLOPs and the European Commission.

With regards to VLOPs, we would like to see these and other design provocations user-tested, experimented with and iterated upon. This should happen in a transparent manner to ensure that conflicting design goals are navigated with respect to the DSA. Risk assessment and risk mitigation are not a one-time exercise but an ongoing process, which should engage civil society, the ethical design community and a diverse representation of users as consulted stakeholders.

The European Commission should use all of its powers under the DSA, including the power to issue delegated acts and guidelines (e.g., in accordance with Article 35), to ensure that VLOPs:

  • Implement the best UX practices in their recommender systems
  • Modify their interfaces and content ranking algorithms in order to mitigate systemic risks
  • Make transparency disclosures and engage stakeholders in the ways we describe above.

Read the full briefing here.


ESC Big Tech this Black Friday
https://peoplevsbig.tech/esc-big-tech-this-black-friday/
Thu, 23 Nov 2023

People vs Big Tech has joined forces with high street brand Lush to raise money for our work to rein in the harmful power of the Big Tech corporations.


We have joined forces with high street brand Lush, in a continuation of their Big Tech Rebellion, to raise money for our work to rein in the handful of Big Tech companies that have monopolised the Internet with intrusive surveillance, predatory addictive algorithms, harmful content and echo chambers.

Lush online shops in the UK, Ireland, France, Germany, Hungary, Spain, Italy, Japan, Australia, New Zealand, the United States, and Canada will sell a limited edition bath bomb called The Cloud, with 100% of the sales price (minus VAT) supporting People vs Big Tech campaigns.

Bathe in the knowledge that your purchase is helping to challenge the power and abuses of Big Tech.

Not only can you support our movement by buying The Cloud bath bomb, but you can also become a part of it. By signing the People’s Declaration, you will join our open network of concerned individuals and civil society organisations working together to challenge the power and abuses of Big Tech.

Lush, a high street cosmetics brand, has taken a stand against Big Tech's harmful business model. They've announced a Digital Divestment roadmap, and are committing to “push back against the stranglehold the Big Tech 5 have on our business, our families and our communities.”

More information about Lush and their Big Tech rebellion can be found here.

Sign the People’s Declaration and join the Big Tech Rebellion!

BRIEFING: Fixing Recommender Systems: From identification of risk factors to meaningful transparency and mitigation
https://peoplevsbig.tech/briefing-fixing-recommender-systems-from-identification-of-risk-factors-to-meaningful-transparency-and-mitigation/
Wed, 23 Aug 2023

As platforms gear up to submit their first risk assessments to the European Commission, civil society experts set out what the regulator should look for.


From August 25th 2023, Europe’s new Digital Services Act (DSA) rules kick in for the world’s largest digital platforms, shaping the design and functioning of their key services. For the nineteen platforms that have been designated “Very Large Online Platforms” (VLOPs) and “Very Large Online Search Engines” (VLOSEs), there will be many new requirements, from the obligation to undergo independent audits and share relevant data in their transparency reports, to the responsibility to assess and mitigate “systemic risks” in the design and implementation of their products and services. Article 34 of the DSA defines “systemic risks” by reference to “actual or foreseeable negative effects” on the exercise of fundamental rights, the dissemination of illegal content, civic discourse and electoral processes, public security and gender-based violence, as well as on the protection of public health and minors and physical and mental well-being.

One of the major areas where platform design decisions contribute to “systemic risks” is their recommender systems – algorithmic systems used to rank, filter and target individual pieces of content to users. By determining how users find information and how they interact with all types of commercial and noncommercial content, recommender systems have become a crucial design layer of the VLOPs regulated by the DSA. Shadowing their rise is a growing body of research and evidence indicating that certain design features in popular recommender systems contribute to the amplification and virality of harmful content (such as hate speech, misinformation and disinformation), as well as to addictive personalisation and discriminatory targeting, in ways that harm fundamental rights, particularly the rights of minors. As such, social media recommender systems warrant urgent and special attention from the Regulator.

VLOPs and VLOSEs are due to submit their first risk assessments (RAs) to the European Commission in late August 2023. Without official guidelines from the Commission on the exact scope, structure and format of the RAs, it is up to each large platform to interpret what “systemic risks” mean in the context of their services – and to choose their own metrics and methodologies for assessing specific risks.

In order to assist the Commission in reviewing the RAs, we have compiled a list of hypotheses that indicate which design features used in recommender systems may be contributing to what the DSA calls “systemic risks”. Our hypotheses are accompanied by a list of detailed questions to VLOPs and VLOSEs, which can serve as a “technical checklist” for risk assessments as well as for auditing recommender systems.

Based on independent research and available evidence we identified six mechanisms by which recommender systems may be contributing to “systemic risks”:

  1. amplification of “borderline” content (content that the platform has classified as being at higher risk of violating its terms of service) because such content drives “user engagement” (illustrated in the sketch after this list);
  2. rewarding users who provoke the strongest engagement from others (whether positive or negative) with greater reach, further skewing the publicly available inventory towards divisive and controversial content;
  3. making editorial choices that boost, protect or suppress some users over others, which can lead to censorship of certain voices;
  4. exploiting people’s data to personalise content in a way that harms their health and wellbeing, especially for minors and vulnerable adults;
  5. building in features that are designed to be addictive at the expense of people’s health and wellbeing, especially minors;
  6. using people’s data to personalise content in ways that lead to discrimination.
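
To illustrate the first mechanism in schematic form: if the ranking objective is predicted engagement alone, borderline content that reliably provokes reactions is upranked by construction. The following Python sketch is our own simplification under stated assumptions, not a reconstruction of any platform’s ranking code.

```python
def engagement_rank(posts: list) -> list:
    """Rank purely by predicted engagement: borderline content that
    provokes strong reactions rises to the top by construction."""
    return sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

# Illustrative inventory: the borderline items score highest on engagement.
posts = [
    {"id": "calm-explainer",  "borderline": False, "predicted_engagement": 0.30},
    {"id": "divisive-rumour", "borderline": True,  "predicted_engagement": 0.85},
    {"id": "outrage-bait",    "borderline": True,  "predicted_engagement": 0.90},
]

for post in engagement_rank(posts):
    print(post["id"], "(borderline)" if post["borderline"] else "")
```

With this objective, the two borderline items are served first; nothing in the ranking penalises their proximity to the platform’s own content rules.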

For each hypothesis, we provide highlights from available research which support our understanding of how design features used in recommender systems contribute to harms experienced by their users. However, it is important to note that researchers have been constrained in their attempts to verify causal relationships between specific features of recommender systems and observed harms by the data made available to them, whether by online platforms or by platforms’ users. Because of these limitations, external audits have spurred debates about the extent to which observed harms are caused by recommender system design decisions or by natural patterns in human behaviour.

It is our hope that risk assessments carried out by VLOPs and VLOSEs, followed by independent audits and investigations led by DG CONNECT, will end this speculation by providing data for scientific research and by revealing the specific features of social media recommender systems that directly or indirectly contribute to “systemic risks” as defined by Article 34 of the DSA.

In the second part of this brief (page 14) we provide a list of technical information that platforms should disclose to the Regulator, independent researchers and auditors to ensure that the results of the risk assessments can be verified. This includes providing a high-level architectural description of the algorithmic stack as well as specifications of the different algorithmic modules used in the recommender systems (type of algorithm and its hyperparameters; input features; loss function of the model; performance documentation; training data; labelling process, etc.).
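
As a sketch of how such disclosures could be structured in machine-readable form, the Python record below mirrors the checklist items named above. The field names and example values are our own illustration, not a format mandated by the DSA or by this briefing.

```python
from dataclasses import dataclass

@dataclass
class RecommenderModuleDisclosure:
    """One entry per algorithmic module in the recommender stack."""
    module_name: str
    algorithm_type: str              # e.g. "gradient-boosted decision trees"
    hyperparameters: dict            # the module's key tuning parameters
    input_features: list             # signals fed into the model
    loss_function: str               # what the model is optimised for
    performance_documentation: str   # reference to evaluation reports
    training_data_description: str
    labelling_process: str

# Purely illustrative values for a hypothetical feed-ranking module.
example = RecommenderModuleDisclosure(
    module_name="feed-ranking",
    algorithm_type="gradient-boosted decision trees",
    hyperparameters={"trees": 500, "max_depth": 8},
    input_features=["watch_time", "likes", "shares", "account_follows"],
    loss_function="weighted predicted engagement",
    performance_documentation="link to offline/online evaluation reports",
    training_data_description="logged user interactions, 90-day window",
    labelling_process="implicit labels derived from engagement events",
)
```

A structured record of this kind would let auditors compare modules across platforms and check whether the declared loss function matches the risks the platform claims to have assessed.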

Revealing key choices made by VLOPs and VLOSEs when designing their recommender systems would provide a “technical bedrock” for better design choices and policy decisions aimed at safeguarding the rights of European citizens online.

You can find a full glossary of technical terms used in this briefing on page 16 of the full report.


Read the full report in the PDF attached.

ACKNOWLEDGEMENTS

This brief was drafted by Katarzyna Szymielewicz (Senior Advisor at the Irish Council for Civil Liberties) and Dorota Głowacka (Panoptykon Foundation), with notable contributions from Alexander Hohlfeld (independent researcher), Bhargav Srinivasa Desikan (Knowledge Lab, University of Chicago), Marc Faddoul (AI Forensics) and Tanya O’Carroll (independent expert).

In addition, we are grateful to the following civil society experts for their contributions:

Anna-Katharina Meßmer (Stiftung Neue Verantwortung (SNV)). Asha Allen (Centre for Democracy and Technology, Europe Office). Belen Luna (HateAid). Josephine Ballon (HateAid). Claire Pershan (Mozilla Foundation). David Nolan (Amnesty International). Fernando Hortal Foronda (European Partnership for Democracy). Jesse McCrosky (Mozilla Foundation/Thoughtworks). John Albert (AlgorithmWatch). Lisa Dittmer (Amnesty International). Martin Degeling (Stiftung Neue Verantwortung (SNV)). Pat de Brún (Amnesty International). Ramak Molavi Vasse’i (Mozilla Foundation). Richard Woods (Global Disinformation Index).

Fixing Recommender Systems_Briefing for the European Commission (PDF)

The Tale of the Heckler
https://peoplevsbig.tech/the-tale-of-the-heckler/
Thu, 11 Nov 2021

WaterBear releases a short film illustrating how social media platforms are amplifying mis- and disinformation online.


Interactive streaming platform WaterBear has released a short documentary film called The Tale of the Heckler. The film illustrates how social media platforms amplify mis- and disinformation, undermining our ability to make progress on critical issues like climate change.

Watch the film below.

Also available with German and French subtitles.

The short film tells the story of a heckler at a town hall meeting where people are discussing the increase in extreme weather, its connection to climate change, and what could be done about it. The heckler claims that the storms have nothing to do with climate change and, at first, only the people closest to him can hear his dissent. But then he is handed a megaphone, and suddenly his opinion travels further and more loudly than before. Attention is quickly drawn away from the purpose of the town hall meeting, and people leave dispirited and divided by the heckler’s takeover.

This is how mis- and disinformation spread on social media: powered by algorithms that can make unverified, triggering information go viral, creating a crisis in our information ecosystem. These platforms’ algorithms end up rewarding outright lies and hate because such content keeps people clicking and scrolling for longer. This has deepened fault lines across different societies, and the film shows how digital-world lies exacerbate real-world crises.

The film was commissioned by the global philanthropic organisation Luminate. Alaphia Zoyab, Director of Advocacy at Reset, an initiative of Luminate, said: “We live in a disinformation age where lies and hate powered by algorithms are getting amplified to millions of people. Through this simple tale of what happens to a heckler at a town hall meeting, we see how social media is polluting our public square, threatening our progress on so many vital issues. To solve this information crisis, first, we need to understand it.”

The film is released at a critical juncture, as draft laws regulating Big Tech are being debated in the EU (the Digital Services Act) and the UK (the Online Safety Bill). A massive movement of citizens across Europe is urging these governments to ensure that the new laws don’t just tackle illegal content but force platforms to address the systemic risks created by social media algorithms – such as the amplification of disinformation, hate and abuse.

The EU’s Golden Opportunity to Turn Off Big Tech’s Manipulation Machine
https://peoplevsbig.tech/the-eus-golden-opportunity-to-turn-off-big-techs-manipulation-machine/
Fri, 05 Nov 2021

How the pending Digital Services Act can address algorithmic harms exposed by Facebook whistleblower Frances Haugen.


The entire world has been shocked by the extent of the algorithmic harms revealed by Facebook whistleblower Frances Haugen. Besides sounding the alarm with her evidence to lawmakers, Haugen and her legal team have also filed at least eight formal complaints with the US Securities and Exchange Commission alleging Facebook broke the law through a series of misstatements, omissions, and lies. These complaints collectively demonstrate that Facebook has consistently misled both the public and potential regulators about the extent to which its platform directly harms society. Yet while legislators in the US and other countries now scramble to prepare oversight options to rein in one of Big Tech’s most powerful and toxic companies, it is the European Union that is in pole position to enact rules to mitigate these harms.

Years in the making, the pending Digital Services Act (DSA) is designed to introduce a new set of regulations that would require giant platforms like Facebook to, amongst other things, retool their advertising and recommendation algorithms to take into account their impact on society. As Haugen’s testimony and disclosures make plain, Big Tech’s unchecked business of prioritising profits over people has come at an immense real world cost. She will discuss her explosive revelations with the lead committee at the European Parliament responsible for framing new rules for the massive EU market. The power of democracy to make laws, whistleblowers to expose the truth, and citizens to demand change is truly crystallising in the EU.

Haugen’s Disclosures Show Facebook Knows its Algorithms are Harmful

Haugen’s well-documented allegations cite internal Facebook documents that confirm what many women -- particularly women of colour -- and members of marginalised communities have long reported: hate speech and online violence are not simply tolerated by Facebook, they are amplified and promoted by the platform’s own algorithms.

(CBS News - Whistleblower’s SEC Complaint: Facebook Knew Platform Was Used to “Promote Human Trafficking and Domestic Servitude”)

Despite being aware of the company’s “significant” role in enabling hate speech and other toxic content to “flourish” on its platform, Facebook has done little to meaningfully address the problem. Internal documents cited by Haugen reveal that the company takes action against as little as 3% to 5% of hate speech and less than 1% of Violence and Inciting to Violence (V&I) content on Facebook.

This troublingly low figure is a direct result of the company’s unwillingness to implement impactful interventions that would lower the prevalence of such content but might otherwise reduce or slow the platform’s astronomical growth. Instead of pursuing solutions that reduce the virality of harmful content via algorithmic modifications and product design changes, Facebook employs an under-resourced content-level enforcement strategy. This approach, which relies on flagging and human review (as well as generally ineffective AI), is inherently slow -- a piecemeal approach that enables problematic content to go viral and spread its dangerous influence long before its root source is ever reviewed or taken down. Such an approach may make sense for Facebook from a bottom-line perspective -- after all, the more inflammatory the content, the more interactions and ad revenue to be reaped -- but it is devastating people’s lives as hate speech, homophobia, sexism, and racism are left to flourish. Yet with no financial incentive to act otherwise, change will only come when the tech giant is required to alter its ways. A DSA that only addresses illegal content will fail to address the beating heart of what makes the platform truly toxic today.

How Two Key Tools in the DSA Toolbox Would Address these Algorithmic Harms

In light of Facebook’s well-documented refusal to protect the public good, EU lawmakers must take action by passing the strongest possible version of the DSA. If designed properly, this landmark legislation will equip European regulators to clean up Facebook’s toxic algorithms and ensure meaningful transparency and accountability of the platform. Two tools in particular are needed to address the issues exposed by Haugen and other whistleblowers: (1) Risk Assessment Obligations, and (2) Recommender System Transparency and Controls.

Risk Assessment Obligations: As Haugen’s disclosures demonstrate, instead of slowing the spread of toxic content, Facebook’s design features in fact facilitate and indirectly encourage hate speech, violence, and disinformation. Given this, Facebook and other large platforms must be obligated to identify, prevent, and mitigate the risk of these types of content being distributed and amplified by their products. Under Article 26 of the DSA, large platforms would be required to take into account the ways in which their design choices and operational approaches influence and increase these risks. Under Article 27, they would also be required to take action to prevent and mitigate the risks identified. Together, these two articles would serve as a counterweight to Facebook’s current refusal to implement impactful interventions that would lower the prevalence of harmful content but might otherwise reduce or slow the platform’s growth. Amending Article 27 to require platforms to provide written justification to independent auditors whenever a subject corporation fails to put in place risk-mitigating measures would further strengthen the regulatory framework and increase the incidence of responsible design choices.

Recommender System Transparency and Controls: Whistleblower disclosures such as Haugen’s also reveal how little insight is publicly available into the inner workings of Facebook’s core mechanics. As a result, countless users remain unaware as to why they are being shown certain posts and how their experiences on the platform differ from those of other users due to financially motivated -- and societally detrimental -- filter bubbles. In response, Article 29 of the DSA would require platforms to provide users with clear information about the main parameters used in their recommender systems. At present, these invisible systems wield a dangerously high degree of power over all those who log on to Facebook -- dictating the content and order of the posts each user sees, which invariably informs and shapes their worldview. As an internal Facebook study disclosed by Haugen reported, test accounts that followed “verified/high quality conservative pages” such as Fox News and Donald Trump “began to include conspiracy recommendations after only 2 days.”

To further benefit and protect users, Article 29 should also be strengthened by mandating that recommender systems can no longer be based on data profiling by default. Creating a baseline where users must opt in to such profiling (rather than opt out) would ensure that recommender systems are not personalised to a user’s online activity and past behaviour without the user being fully aware and explicitly consenting beforehand. This type of measure would put the brakes on a foundational element of how toxic content currently spreads on Facebook. In an effort to increase diversity and user choice over time, Article 29 should additionally be modified to allow third parties to offer alternative recommender systems on very large online platforms. Allowing for user choice in recommender systems outside the four walls of Facebook would let healthier algorithms, optimised for factors beyond mere engagement, flourish over time.
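
A minimal sketch of what such a third-party option could look like, assuming a hypothetical Recommender interface that platforms would expose; nothing here is drawn from the DSA text or from any real platform API.

```python
from typing import Protocol

class Recommender(Protocol):
    """Hypothetical interface a platform could expose so that third-party
    recommenders can reorder a user's inventory of eligible posts."""

    def rank(self, eligible_posts: list, explicit_preferences: dict) -> list:
        """Return eligible_posts reordered. Receives only explicit,
        user-consented signals -- never behavioural profiles by default."""
        ...

class ChronologicalRecommender:
    """One possible third-party option: no profiling, newest first."""

    def rank(self, eligible_posts: list, explicit_preferences: dict) -> list:
        return sorted(eligible_posts, key=lambda p: p["timestamp"], reverse=True)
```

Under such a scheme, profiling-based personalisation would become one opt-in choice among many, rather than the inescapable default.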

The EU Must Seize this Historic Opportunity to Lead and Protect its Citizens

Frances Haugen’s revelations make it clear that Facebook is incapable of fixing itself. Amidst a global crisis born out of online misinformation and corporate mismanagement, the European Parliament must therefore seize its golden opportunity, put people’s rights before Big Tech profits, and vote for the strongest possible version of the DSA.

In doing so, it is essential for MEPs to bear in mind that the new law will only be as powerful as its enforcement mechanisms. Regulators must accordingly be vested with meaningful powers to supervise the tech titans and take strong action if companies fail to address identified risks. Supervisory powers must be both independent and robust, as must auditing powers. Every auditor should have defined independence, expertise on platform design, and access to all relevant information and data -- including access to the platforms’ algorithms (in turn, Article 31 on data scrutiny should be improved by widening the definition of “vetted researchers” to include civil society and investigative journalists). MEPs must also be watchful and reject any proposal to include a media exemption clause in the DSA. Ring-fencing “media” content will be worse than today’s status quo -- where platforms will be actively stopped from taking voluntary remedial action to stem the spread of orchestrated disinformation, violence, and hate.

Frances Haugen will be giving live testimony to different committees of the European Parliament on 8th November 2021 from 16:45 - 19:30 CET. You can watch the livestream here.

To learn more about the essential DSA package, click here.

Toxic Body Image Content for Teens on Instagram
https://peoplevsbig.tech/toxic-body-image-content-for-teens-on-instagram/
Thu, 30 Sep 2021

New research exposes the prevalence of Instagram posts promoting eating disorders, plastic surgery, and skin whitening products to young women.


With more than a billion people now using Instagram across the world, the Facebook-owned platform plays an outsized role in shaping visual norms and portraying aspirational goals for vast swathes of society today. Though this staggering degree of influence offers significant potential to enhance the public good, all too often it is used to maximise corporate profits at the expense of individual well-being and agency. Case in point: A research report commissioned by global activist group SumOfUs, which reveals the alarming degree to which content that promotes eating disorders, plastic surgery, and skin whitening products is made available on the platform to young women and teens.

Body image issues -- already highly prevalent amongst young people -- have been exacerbated by Covid-19. The number of patients aged 10 to 23 suffering from eating disorders has doubled since the start of the pandemic, and the National Eating Disorders Association in the US has seen a 40% increase in calls to its helpline since March 2020. With more time spent online, young people are increasingly susceptible to the persuasive suggestions of Instagram’s algorithms -- a manipulation machine designed to maximise engagement by amplifying content which is routinely harmful, untrue, or both. Such suggestions can have devastating effects on mental and physical health, and Instagram has acknowledged promoting weight loss content to people with eating disorders (unexpected triggers carry a significant risk of relapse).

Aware of this highly serious issue, and unconvinced that Instagram and Facebook were taking sufficient steps to remedy it, the researchers examined 720 Instagram posts to better quantify the scale of the problem (240 posts each on eating disorders, plastic surgery, and skin whitening). Their research findings confirm that despite Facebook’s promises to curb such content, Instagram remains full of toxic posts that pose an immediate danger to the lives of those who use its services -- one heightened for teens and people of colour.

The report, Eating Disorders, Plastic Surgery, and Skin Whitening on Instagram: How young people are exposed to toxic content, highlights several key findings, including:

  • 22 different hashtags were identified that promote eating disorders on Instagram, potentially leading to over 45 million eating disorder related posts.
  • Of the 240 eating disorder posts analysed, 86.7% were pushing unapproved appetite suppressants, and 52.9% of posts directly promoted eating disorders.
  • 22 different hashtags were identified that promote plastic surgery on Instagram, potentially leading to over 22 million plastic surgery related posts.
  • Of the 240 plastic surgery posts analysed, 86.7% promoted plastic surgery procedures or clinics.
  • Of the 240 skin-whitening posts analysed, 81.3% promoted skin whitening products, of which 83.1% were unapproved products.

These findings dovetail with recent Wall Street Journal bombshell reporting on the significant negative impact of Instagram posts on teen girls. As revealed by the Journal, Facebook’s own internal research declared: “We make body image issues worse for one in three teen girls” and “Teens blame Instagram for increases in the rate of anxiety and depression.” Yet despite this and other critical information being made available to top executives, the company has continued to consistently target the most at-risk groups. Without regulatory intervention, Facebook simply has no financial incentive to remove the harmful content.

In response to its troubling findings, SumOfUs is calling for legislators to pass meaningful platform regulation and take action to ensure Instagram, Facebook, YouTube, TikTok, and other platforms give independent researchers access to the data they hold regarding the impact of their platforms on kids and teens.

Under the democratic banner of The People’s Declaration, the citizens of Europe have made it known that the Digital Services Act (DSA) and Digital Markets Act (DMA) must value people over profits. Findings like those in the SumOfUs report underscore the urgent need to turn off the manipulation machine. The organisations of The People's Declaration, which represent over 25 million citizens across the European Union, are firm in their conviction: “It is time for these platforms to de-risk their design, detox their algorithms, give users real control over them and be held to account for failing to do so.”

Download Eating Disorders, Plastic Surgery, and Skin Whitening on Instagram: How young people are exposed to toxic content here.
