Briefing: protecting children and young people from addictive design
https://peoplevsbig.tech/briefing-protecting-children-and-young-people-from-addictive-design/ (Thu, 07 Nov 2024)

Research has shown the deep harm excessive social media use can do to young brains and bodies. The EU Commission must tackle the root cause.

The post Briefing: protecting children and young people from addictive design appeared first on People vs. Big Tech.


Social media companies design their platforms to encourage users to spend as much time on them as possible. Addictive design affects everyone, but children and young people are especially susceptible. Research shows that, given their stage of neural development, young users are particularly prone both to excessive social media use and to its harmful effects, and that young users with pre-existing psychosocial vulnerabilities are at even greater risk.

What is addictive design?

Social media platforms’ business model relies on keeping users online for as long as possible, so they can display more advertising. The platforms are optimised to trigger the release of dopamine - a neurotransmitter the brain releases when it expects a reward - making users crave more and use more.
Young users are far from exempt: leaked documents reveal that Meta has invested significant resources in studying the neurological vulnerabilities of young users, and even created an internal presentation on how to exploit them.

While more research is needed, the following addictive features have been identified:

  • Notifications such as “likes”: both the novelty and the validation of another user’s engagement trigger a dopamine release, reinforcing the desire to post and interact and creating a “social validation feedback loop”.
  • Hyper personalised content algorithms or “recommender systems”: Brain scans of students showed that watching a personalised selection of videos triggered stronger activity in addiction-related areas of the brain compared to non-personalised videos.
  • Intermittent reinforcement: users receive content they find less interesting, punctuated by frequent dopamine hits from likes or a video they really enjoy. This keeps the user scrolling in anticipation of the next dopamine reward; the randomisation of rewards has been compared to “fruit machines” in gambling.
  • Autoplay and infinite scroll: automatically showing the next piece of content creates a continuous, endless feed, making it difficult to find a natural stopping point.

Why is addictive design so harmful?

Excessive screen time and social media use have been shown to cause:

  • Neurological harm:
    • Reduction in grey matter in the brain according to several studies, similar to the effects seen in other addictions.
    • Reduced attention span and impulse control are linked to the rapid consumption of content on social media, particularly short-form videos, and especially in younger users.
    • Possible impairment of prefrontal cortex development, which governs decision-making and impulse control, due to early exposure to social media’s fast-paced content. N.B. the prefrontal cortex does not fully develop until around age 25.
    • Possible development of ADHD-like symptoms, which early studies suggest may be linked to excessive screen time.
    • Temporary decline in task performance identified in children after watching fast-paced videos.

  • Psychological harm:
    • In November 2023, Amnesty International found that within an hour of launching a dummy TikTok account posing as a 13-year-old child who interacted with mental health content, the platform had recommended multiple videos romanticising, normalising or encouraging suicide. This illustrates both the risk of prolonged screen time and the hyper-personalisation of content recommender systems.
    • Increased anxiety, depression, and feelings of isolation have been linked to prolonged online engagement, as social media can negatively affect self-esteem, body image and overall psychological well-being.
    • Risk exposure: Longer time online exposes children and young people more to risks such as cyberbullying, abuse, scams, and age-inappropriate content.

  • Physical harm:
    • “93% of Gen Z have lost sleep because they stayed up to view or participate in social media,” according to the American Academy of Sleep Medicine.
    • Reduced sleep and activity: Social media use can lead to sleep loss and decreased physical activity, which impact weight, school performance and mental health, and distract from real-life experiences.

Gone is the time when the streets were considered the most dangerous place for a child to be - now, for many young people the most dangerous place they can be is alone in their room with their phone.

What’s the solution?

Given the severity of the risks to children online, we need binding rules for platforms. Unfortunately, the very large online platforms (VLOPs) have repeatedly demonstrated that they choose profit over the safety of children, young people and society in general.

The adjustments that some have made have been minor, for example, TikTok no longer allows push notifications after 9 pm for users aged 13 to 15. But they will still be exposed to push notifications (linked to addictive behaviour) for most of the day. In March 2023, TikTok introduced a new screen-time management tool which requires under-18s to actively extend their time on the app once they have reached a 60-minute daily limit. However, this measure puts the burden on children, who in large numbers describe themselves as “addicted” to TikTok, to set limits on their own use of the platform. The prompt can also be easily dismissed and does not include a health warning. Adding to the limitations of the measure, the change only applies to users who the system identifies as being a child, with the effectiveness of TikTok’s age verification being called into question. For example, the UK’s media regulator Ofcom has found that 16% of British three- and four-year-olds have access to TikTok.

Meta’s leaked internal documents reveal that the corporation knowingly retains millions of users under 13 years old, and has chosen not to remove them. Notably, Harvard University research estimated that in the US alone, Instagram made $11 billion in advertising revenue from minors in 2022.

Risk of overreliance on age verification 

While we welcome norms on an appropriate age to access social media platforms, overreliance on age-gating and age verification is, unfortunately, unrealistic: even the most robust age verification can be circumvented, and such measures alone will not adequately protect minors online.
Age-gating and age verification also assume that parents or guardians have the availability, capacity and interest to monitor internet use. Frequent monitoring is unrealistic for most families, and this approach particularly risks disadvantaging young people who face additional challenges, such as those living in care, or whose parents work long hours or face language barriers in their country of residence.
To truly protect children and young users, we need safe defaults for all. Please see our whitepaper, prepared in collaboration with Panoptykon and other researchers and technologists: Safe by Default: Moving away from engagement-based rankings towards safe, rights-respecting, and human-centric recommender systems.
Aside from this, age verification can present its own risks to privacy, security and free speech, as well as costs and inconvenience to businesses.

Establishing binding rules

Fortunately, there has been momentum to tackle addictive design in the EU. In December 2023, the European Parliament adopted by an overwhelming majority a resolution urging the Commission to address addictive design. In its conclusions on the Future of Digital Policy, the Council stressed the need for measures to address addictive design. In July 2024, Commission President von der Leyen listed this as a priority for the 2024-2029 mandate. The Commission’s recent Digital Fairness Fitness Check also outlined the importance of addressing addictive design.

The Commission must:

  • assess and prohibit the most harmful addictive techniques not already covered by existing regulation, with a focus on provisions on children and special consideration of their specific rights and vulnerabilities.
    • examine whether an obligation not to use profiling/interaction-based content recommender systems ‘by default’ is required in order to protect users from hyper personalised content algorithms; 
    • put forward a ‘right not to be disturbed’ that empowers consumers to turn all attention-seeking features off.
  • ensure strong enforcement of the Digital Services Act on the protection of minors, prioritising:
    • clarifying the additional risk assessment and mitigation obligations of very large online platforms (VLOPs) in relation to potential harms to health caused by the addictive design of their platforms;
    • independently assessing the addictive and mental-health effects of hyper-personalised recommender systems;
    • naming features in recommender systems that contribute to systemic risks.

Beyond Big Tech: A manifesto for a new digital economy
https://peoplevsbig.tech/beyond-big-tech-a-manifesto-for-a-new-digital-economy/ (Sun, 29 Sep 2024)

70+ orgs back bold vision for a better, fairer digital world and call on states to invest for the public good


[Graphic: a tree resembling a connected tech ecosystem, with branches, roots and people icons]

We, people and organizations from across the globe, are fighting for a future where the digital infrastructure underpinning our world works for people, workers, and the planet. And where creativity and innovation can flourish free from centralized control. 

We believe in a world where power over data and technology is decentralized, redistributed and democratized, instead of being held by harmful tech monopolies. Where people can choose between a wide variety of digital tools to explore and connect, without giving up their privacy and other rights. Where we really get to control and trust what we see on our social media feeds, instead of algorithms designed to surveil, exploit, enrage, and addict.

It’s a world we know is possible, necessary and urgent. But it won’t come about by chance. Creating the digital future we all deserve will take a determined ‘whole of government’ effort from states – to break up the powerful tech monopolies, steer the digital economy in a direction that promotes innovation, fair competition, and democratic values, and offer people genuine freedom and choice in goods and services that are designed to serve them, rather than use and abuse them. 

Big Tech corporations have locked us into a narrow and warped version of the digital world that chips away at our democracies and concentrates wealth while deepening global inequalities. We must break down Big Tech’s walled gardens, which protect their profit margins, not the public interest. Doing so will unleash progress, fair competition and innovation in a new digital economy - one built to serve the billions of lives the Internet is woven from, not the tiny group that currently holds the strings. Doing so will require states to:

1) Break Open Big Tech to level the playing field

To create the conditions for a new digital economy, regulators must target the structural power of the tech giants and level the playing field so alternatives can emerge and scale. This includes:

  1. Break up dominant tech firms through stronger enforcement of competition and antitrust law, regulation to mandate structural separations, and the blocking of further mergers and acquisitions to prevent consolidation.
  2. Require dominant tech firms to be more interoperable to enable users to freely choose and move between different platforms and services, open up new entrants to the market, and make platform recommendation systems customizable for users.
  3. Tax dominant tech firms to redistribute the enormous profits they currently extract as rents, including through digital services taxes. 

2) Stimulate a new and fair digital economy 

States’ industrial policies and strategies must proactively foster a more open and diverse ecosystem of digital services that serves public goals and not just private profit. This includes:

  1. Commit significant investment towards public digital infrastructure based on free and open source software and the digital commons. 
  2. Use public procurement as a market lever to encourage the adoption and scaling of open and interoperable alternatives to the Big Tech incumbents.  
  3. Put in place and enforce strong human rights safeguards and accountable governance frameworks, including over public digital infrastructure. 

The digital world is our world; its tools are the infrastructure of human connection. It is too vast and important to leave in the hands of a tiny, self-interested few. By using these levers, states can move beyond efforts to simply mitigate the symptoms of Big Tech’s concentrated power and harmful business models and allow a better and fairer digital economy to emerge. We urge them to begin now.

Read the white paper:
"Beyond Big Tech: A framework for building a new and fair digital economy"


Signed, 

Robin Berjon, Governance & Standards Technologist

Dr. Ian Brown, Centre for Technology and Society at Fundação Getulio Vargas (FGV)

Dr. Christina J. Colclough, The Why Not Lab

Dr. Maria Farrell, Writer

Michelle Meagher, Competition lawyer and author

Cosima Wiltshire, Advocate for Change in the Digital World

Accountable Tech 

African Internet Rights Alliance

AfroLeadership

AI Forensics

AlgorithmWatch

Alternatif Bilisim (AiA-Alternative Informatics Association)

Alliance4Europe

ARTICLE 19

Associação Alternativa Terrazul 

Association for Progressive Communications

Attac Norway

Balanced Economy Project

Bangladesh NGOs Network for Radio and Communications

Bürgerbewegung Finanzwende e.V. 

Canadian Anti-Monopoly Project (CAMP)

Center for the Study of Organized Hate

Centre for Artificial Intelligence Ethics and Governance in Africa (CAIEGA)

Centre for Internet and Society, India

Check My Ads Institute

Coding Rights

Commons Network

Conectas Direitos Humanos

Consortium of Ethiopian Human Rights Organizations

Corporate Europe Observatory (CEO)

Council for Responsible Social Media  

CyberLove

D64 - Zentrum für Digitalen Fortschritt

Data & Society 

Defend Democracy

Digital Action

digiQ

Digitalcourage

Ekō

European Digital Rights (EDRi)

Fair Vote UK

Federación de Consumidores y Usuarios CECU

Forum on Information and Democracy

Foxglove

German NGO Forum on Environment & Development

Global Action Plan

GRESEA

Hindus for Human Rights

Homo Digitalis 

Human Rights Journalists Network Nigeria 

HuMENA for Human Rights and Civic Engagement

IT for Change

Lie Detectors

LobbyControl e.V.

LODelle

Media Matters for Democracy

Nexus Research Cooperative

Open Future

Open Knowledge Foundation

Open MIC

Open Markets Institute

Open Rights Group 

Panoptykon Foundation

People Vs Big Tech

PLZ Cooperative

Public Citizen

Rebalance Now

SHARE Foundation

SocialTIC

SOMO

Superbloom Design 

TEDIC

Tehila 

The Citizens

The London Story

Transnational Institute

UBUNTEAM

Universität zu Köln

Uplift

Waag Futurelab

WACC

Wikimedia Germany

T20 Working Group on Information Integrity, Interoperability & Media Diversity (i3M)

VoxPublic

World Economy, Ecology & Development - WEED

Xnet, Institute for Democratic Digitalisation

FIX OUR FEEDS – EU Elections 2024
https://peoplevsbig.tech/campaign/fix-our-feeds-eu-elections-2024/ (Tue, 25 Jun 2024)

In a year of critical elections, we're asking regulators to restore public trust in information and democracy by disabling toxic recommender systems

Our information space has never been under greater threat. Trust in information, healthy public debates and democracy are intertwined and nowhere more so than on social media platforms. The EU has made huge strides forward in regulating Big Tech. Yet enforcement is uncertain. The EU elections present the Digital Services Act with a baptism of fire.

Today’s dominant social media platforms have fallen short of their promise to connect and empower people, instead prioritising engagement metrics over safety. Misinformation, hate speech and other forms of divisive discourse have pervaded this public sphere through the use of “recommender systems” based on behavioural profiling, in other words “observing and collecting passive data about how users behave and interact on the platform in order to infer their interests.”

Time is running out to protect our elections. That's why, on Thursday, May 16, we are doing a day of action together with a group of civil society organisations including The London Story, #jesuislá, Panoptykon, Ekō, EDRi, The Citizens, and The Real Facebook Oversight Board, calling on Commissioner Thierry Breton to #FixOurFeeds!

Our Key Asks:

  1. Turn off profiling-based recommender systems the week before and the week of the EU elections.
  2. Implement other appropriate “break-glass” measures to prevent algorithmic amplification of harmful borderline content, such as misinformation (according to independent third-party fact-checkers), or hateful content in the run up to elections.

***Campaign updates***

FEED FIX™

“Everyone’s favourite ‘FREEmium’ chocolate corporation”

On 16th May, outside the European Commission in Brussels, we set up a pop-up snack bar, “FEED FIX”, to launch the results of our new poll showing public support for reining in toxic social media algorithms.


In a parody of social media's exploitative and objectionable business model, we handed out delicious FEED FIX chocolate, along with onerous and unfair terms and conditions that came with accepting it, plus receipts listing the "real" price of social media harms.


Ahead of the EU Parliamentary elections, new polling from YouGov showed that EU residents in France, Germany and Ireland overwhelmingly want social media companies to stop “using behavioural profiling by default”. The proposal was favoured by 61% and opposed by only 9%.

The nationally representative poll of 4,182 adults, which was commissioned by People Vs Big Tech in partnership with The London Story and Real Facebook Oversight Board, also showed that by a 30-point margin (54% to 24%) EU residents believe social media platforms are not doing enough to prevent harmful content being shown to users. 55% are not comfortable with “behavioural profiling”, the current default on all major social media platforms.

“The poll reflects an EU citizenry deeply concerned about the corrosive effects of social media, and uncomfortable with the ways algorithms target them to keep their attention. Social media giants know how to make their platforms safer, but refuse. We’re calling on the EU to make the choice for them.” – Clara Maguire, Executive Director of The Citizens.


Under the Digital Services Act, which entered into force in 2022 and became fully applicable in 2024, the EU has wide-ranging authority to address algorithmic risks on social media. The EU has already launched an investigation into potential foreign interference with elections on Meta’s platforms. Advocates are calling on Commissioner Breton to go further - turning off behavioural profiling in recommender systems by default, and introducing other “break the glass” measures to reduce the amplification of harmful content the week before and after the June 6th election.

All figures, unless otherwise stated, are from YouGov Plc. Total sample size was 4,182 adults. Fieldwork was undertaken between 30th April – 7th May 2024. The survey was carried out online. The figures have been weighted and are representative of all country adults (aged 18+). Further results here.

“An investigation is warranted and welcome, but we need action – now. The message we’re taking to Commission today – and the clear message EU residents sent in today’s poll, is fix our feed … before it’s too late.” – Ritumbra Manuvie, Executive Director of The London Story.

ELECTION 2024 ON THE LINE PLAYLIST

Thierry Breton leads the European Commission Directorate-General responsible for digital policy, “DG Connect”. He has been considered the “architect” of the Digital Services Act, a major piece of legislation intended to hold Big Tech platforms accountable for disinformation and online harms on their platforms.

In the past, Breton has announced major policy updates by creating and sharing Spotify playlists, with song titles revealing a hidden message. We borrowed Breton’s creative idea to bring attention to our own policy demands, making him a bespoke playlist setting out our key asks: “Hey Ya! Mr Thierry”.


Why is fixing our feeds so important?

  • Research by University College London and the University of Kent found that after only 5 days of TikTok usage there was a four-fold increase in the level of misogynistic content being presented to teenagers on the platform's “For You” page.
  • Amnesty International found that TikTok’s algorithms exposed multiple 13-year-old child accounts that had indicated interest in mental health topics to videos glorifying suicide within less than an hour of each account being launched.
  • Research by Panoptykon shows how Facebook uses algorithms to deliver hyper-personalised content that may exploit users’ mental vulnerabilities. Users were unable to get rid of disturbing content, even when they used the platform’s user-empowerment tools to update their preferences.
  • Research by The Wall Street Journal shows that when adults followed children on Instagram, they were shown overtly sexual videos and ads from major brands. The Canadian Center for Child Protection ran its own tests with similar results.


People vs Big Tech
 
is an open network of civil society organisations and concerned citizens working together to challenge the power and abuses of Big Tech. Our first major priority is to ensure today’s tech titans and their legions of lobbyists are not allowed to be the dominant designers of tomorrow’s technology regulations. The network’s current major focus is to distil key legislative priorities in European regulation, highlight moments for targeted action, and amplify opportunities anywhere for people-powered mobilisation in the fight to increase public and political support for Big Tech accountability.

The London Story
 
is an Indian diaspora-led civil society organisation. We advocate for justice, peace, and collective action against grave human rights violations.

Real Facebook Oversight Board, founded as an emergency response to the harms of Meta in the leadup to the 2020 Elections, is a diverse global coalition of the most dynamic, creative and critical voices today in civil rights, human rights, social media policy, disinformation research, extremism and the role of social media in society.

Ekō
 
is a global consumer watchdog: an online community of ten million people who campaign to hold big corporations accountable. We use our power as consumers, workers and investors to hold the biggest companies in the world to account.

#jesuislá
: The association that fights against hate, misinformation, and promotes civility online to make the Internet a better place.

Panoptykon Foundation was established in 2009 as a grassroots initiative by a group of young people who refused to treat new technologies as a cure-all. We keep an eye on state and tech corporations. We fight for laws to protect people’s freedom and privacy. We empower people to live consciously in the digital world.

The EDRi network
 is a dynamic and resilient collective of NGOs, experts, advocates and academics working to defend and advance digital rights across the continent. For over two decades, it has served as the backbone of the digital rights movement in Europe.

Poll Shows 61% of EU Citizens Want Social Media Behavioural Profiling-based Recommender Systems Turned Off
https://peoplevsbig.tech/press/poll-shows-61-of-eu-citizens-want-social-media-behavioural-profiling-based-recommender-systems-turned-off/ (Thu, 16 May 2024)


16 MAY, BRUSSELS - Ahead of next month’s critical EU Parliamentary elections, new polling from YouGov shows that EU residents in France, Germany and Ireland overwhelmingly want social media companies to stop “using behavioural profiling by default”, with feeds instead based on the content that users themselves decide they want to see. The proposal was favoured by 61% and opposed by only 9%. [See full results here]

The nationally representative poll of 4,182 adults, which was commissioned by People vs Big Tech in partnership with The London Story and Real Facebook Oversight Board, also showed that by a 30-point margin (54% to 24%), EU residents believe social media platforms are not doing enough to prevent harmful content being shown to users. 55% are not comfortable with “behavioural profiling”,* the current default on all major social media platforms.


“With the election three weeks away and investigations showing the system blinking red, with Kremlin disinformation and election hate speech proliferating across Europe, poll respondents clearly want platforms that are safe by default,” said Tanya O’Carroll, Founder of People vs Big Tech. “The good news is that Commissioner Breton and EU leadership can still make it happen, and act to safeguard the election from algorithmic malfeasance.”

Concurrent with the release of the poll, advocates today also launched a campaign to share a Spotify playlist, “Hey Ya! Mr Thierry”, with Commissioner Breton, who has used Spotify playlists to communicate the impact of the DSA and other digital policies.

Advocates this afternoon will also be bringing their message directly to the European Commission, with a “Feed Fix” food bike offering chocolates to passers-by - if they accept the “terms and conditions” presented on a faux receipt listing potential social media harms. These show the “real price” of social media. [Images available approx 12:30 PM CEST]

“The poll reflects an EU citizenry deeply concerned about the corrosive effects of social media, and uncomfortable with the ways algorithms target them to keep their attention,” said Clara Maguire, Executive Director of The Citizens. “Social media giants know how to make their platforms safer, but refuse. We’re calling on the EU to make the choice for them.”

When asked “to what extent, if at all, are you comfortable with behavioural profiling?”, 27% responded “not at all,” with 55% expressing some level of discomfort. Ireland, which has recently debated the merits of “safe by default” nationally, showed the strongest support: 77% favoured the proposal that social media companies stop using behavioural profiling by default, with feeds instead based on the content that users themselves decide they want to see.

Under the Digital Services Act, which entered into force in 2022 and became fully applicable in 2024, the EU has wide-ranging authority to address algorithmic risks on social media. The EU has already launched an investigation into potential foreign interference with elections on Meta’s platforms; advocates are calling on Commissioner Breton to go further - turning off behavioural profiling in recommender systems by default, and introducing other “break the glass” measures to reduce the amplification of harmful content the week before and after the June 6th election.

“An investigation is warranted and welcome, but we need action - now,” said Ritumbra Manuvie, Executive Director of The London Story. “The message we’re taking to the Commission today - and the clear message EU residents sent in today’s poll, is fix our feed … before it's too late.”

All figures, unless otherwise stated, are from YouGov Plc. Total sample size was 4,182 adults. Fieldwork was undertaken between 30th April and 7th May 2024. The survey was carried out online. The figures have been given an even weighting for each country to produce an ‘average’ value.

For interviews and more information, please contact media@the-citizens.com.


*Ed Note: Behavioural profiling is defined for this polling as “social media platforms monitoring the posts you read, videos you watch, your comments, reactions, as well as your activity across the internet. Platforms then use this data to select what content to show you to keep you longer on the platform.”

#


People vs Big Tech is an open network of civil society organisations and concerned citizens working together to challenge the power and abuses of Big Tech. Our first major priority is to ensure today’s tech titans and their legions of lobbyists are not allowed to be the dominant designers of tomorrow’s technology regulations. The network’s current major focus is to distil key legislative priorities in European regulation, highlight moments for targeted action, and amplify opportunities anywhere for people-powered mobilisation in the fight to increase public and political support for Big Tech accountability. https://peoplevsbig.tech/


The London Story is an Indian diaspora-led civil society organisation. We advocate for justice, peace, and collective action against grave human rights violations. https://thelondonstory.org/

Real Facebook Oversight Board, founded as an emergency response to the harms of Meta in the leadup to the 2020 Elections, is a diverse global coalition of the most dynamic, creative and critical voices today in civil rights, human rights, social media policy, disinformation research, extremism and the role of social media in society. https://the-citizens.com/campaign/real-facebook-oversight-board/

The Movement
https://peoplevsbig.tech/the-movement/ (Sun, 05 May 2024)


People vs Big Tech is a movement fighting to overturn the predatory business model of giant tech corporations and change the internet for good.

We’re people and organisations from across Europe and the world dedicated to building a just, equitable online world. Free from the grip of a handful of CEOs.

We are ordinary people, civil society experts, designers, technologists, campaigners, psychologists… taking action and using our collective voice to fight for the public interest.

And we’re making ourselves heard. From winning a ban on the most intrusive kinds of surveillance advertising in Europe to pushing regulators to investigate Meta to standing with whistleblowers at pivotal moments, our members are forging powerful coalitions to make change happen. And we’re laying the ground for a better tech future.

Because we believe in a digital world built for empowerment, not brainwashing. For connection, not addiction. We want our children to grow up with tools designed for discovery and self-expression, not exploitation and isolation.

For far too long, the Big Tech corporations have used and abused us to build their vast fortunes. But together we’re fighting back. And we won’t stop until we have the internet everyone, everywhere deserves.

But it doesn't have to be this way

There is a solution...

Fix our feeds

Big Tech’s toxic recommender systems and algorithms are amplifying hate speech and disinformation, weaponising every societal fault line to keep us scrolling and the ad money rolling in. This isn't just a glitch in the system, it is the system: an unconscionable, unethical business model designed to engage, enrage and addict. It's time to force these CEOs to make their platforms safe by default, detox their algorithms and give users real control.

Stop Surveillance For Profit

A tiny group of CEOs have created vast advertising empires off our backs. They spy on our most personal interactions, profile our vulnerabilities, and then creep into our lives to hijack our time, emotions and money. It’s a dystopian version of the internet literally no one asked for. But regulators can end it now, by banning surveillance advertising, requiring transparency on all aspects of online ads and rigorously enforcing our privacy rights.

Break Open Big Tech

Big Tech has grown to dominate every aspect of our lives, including our economies - and they are dictating the direction of powerful new technologies like AI in ways that harm us. We believe technological progress needs to serve people first and foremost, rather than the profits of a handful of global mega corporations. Governments must end the stranglehold of Big Tech through robust antitrust enforcement, economic regulation, and investment in technology that serves public goals rather than private profit.

Who are we?

People vs Big Tech is a movement of 131 civil society organisations and concerned citizens working together to challenge the power and abuses of Big Tech.

Discover the 131 organisations that form our movement:
Campact
Declic
Glitch

What we have achieved so far

We won an EU-wide ban on the use of sensitive personal data for targeted advertising

Targeted advertising is at the dark heart of big tech’s twisted business model. It means corporations slicing and dicing the most intimate details of our lives to prey on our vulnerabilities, hijack our time and sell us things we don’t need or want.

So when the pundits said we had no hope of getting action on ads into the EU’s new tech law, a powerful movement of organisations, activists, and businesses refused to listen – instead working up watertight policy proposals and launching a massive people’s advocacy push for change.

The resulting ban on using sensitive data like race and religion to target people with ads is a vital first step towards the surveillance-free future we’re determined to build for everyone.

We got the EU to prohibit targeted advertising to kids

Keeping children safe from harm is society’s most basic job. But for years, a handful of tech CEOs have had free rein to spy and experiment on our kids – to sell them stuff, addict them to dangerous products and exploit their developing brains for profit.

Now a mega movement of parents, teachers, psychologists and concerned citizens is rising up to demand a safer, more enriching online world for children. And it’s winning results, like getting MEPs to vote for a ban on insidious spy-advertising to kids in Europe.

We’re just starting to see what the determination and moral clarity of this powerful coalition can achieve. Next stop? An internet that truly meets kids’ needs.

We teamed up with Nobel Laureates to write a 10-point plan to end disinformation

Sometimes people everywhere agree, and what’s needed is a powerful rallying call to connect, unite and galvanise action. So when Nobel prize-winners Maria Ressa and Dmitry Muratov said they wanted to issue a global call to rein in Big Tech and support journalism, our members leapt into action – teaming up to help develop, draft and promote a compelling manifesto for change.

Launched at the Nobel Peace Center in 2022, the actionable, 10-point plan has since been endorsed by over 294 Nobel laureates, organisations and individuals around the world. It’s a vision for a different future, and the best part is it’s totally doable.

Our members got the EU Commission to investigate Meta

Getting Europe to pass a strong tech law was a major victory for this movement. But the real work is making sure that law is enforced. That’s why the digital detectives at AI Forensics hit the ground running, with a deep-dive investigation revealing how Meta is letting pro-Russia propaganda ads flood the EU, in alleged breach of the Digital Services Act.

The European Commission has since launched formal proceedings against Meta, citing AI Forensics’ research. This is accountability in action!

Our members are pushing regulators to tackle gender-biased algorithms

Tech justice and women’s rights campaigners have united in a mighty coalition to tackle the insidious challenge of discriminatory algorithms, and made vital progress pressing authorities in the Netherlands and France to take up the cause.

PvBT member Global Witness, together with Bureau Clara Wichmann in the Netherlands and Fondation des Femmes in France, submitted four complaints to national regulators based on the suspicion that Meta’s algorithms for determining which users see certain job ads discriminate against women and breach equality and data protection laws.

The Dutch Institute of Human Rights has since held a hearing on the complaint while France’s Défenseur des Droits has written “investigation letters” to Meta demanding a response. This is the first time Meta has been questioned on the issue in the EU!


We are a growing movement fighting for a different digital future

We believe in the original promise of the internet as a force for good.

But a small group of tech corporations have cheated us of that dream, instead constructing a toxic, online world where they exploit our private data and attention to build vast advertising empires.

While they make obscene profits, we face the devastation of a system designed to keep people scrolling at all costs – rising hate and lies, stolen childhoods, polarisation, fragmentation of society and even violence.

But it doesn’t have to be this way. In fact, we’re already shifting the dial. Because together we are stronger than any corporation… and together we’re rising up to build a better digital world.

Our three priorities

Fix Our Feeds

Hate, disinformation, division... it's not a glitch in the system, it is the system. But tech giants can design their products to be safe. We just need to make them.

Stop Surveillance For Profit

Tech corporations spy on and profile us, then exploit our vulnerabilities to build vast fortunes. It's time to dismantle the spy machinery and reclaim our rights.

Break Open Big Tech

True tech progress serves people everywhere, not just a tiny group of CEOs. We're pushing to end Big Tech's stranglehold, so we can build a different future.



People vs Big Tech is a movement formed by more than 131 organisations

Follow us on Mastodon


“Together we’re rising up to build a better digital world.”

Momentum Grows to Tackle Toxic Social Media Algorithms as Civil Society Groups Demand Platforms Go “Safe By Default”
People vs. Big Tech and Panoptykon Release Policy Briefing, Sign-On Calling for Safe and Quality-Driven Recommender Systems, as Urgent Measure to Safeguard 2024 EU Elections


7 March 2024 - People vs. Big Tech today called on the European Commission Directorate-General for Communications Networks, Content and Technology to act boldly to protect the 2024 EU elections and the longer-term safety of social media users by enacting critical reforms to “recommender systems” on social media platforms. People vs. Big Tech released a sign-on letter, joined by over a dozen leading civil society organisations, urging adoption of “safe by default” rules, along with a policy brief, “Safe by Default”, co-authored with the Panoptykon Foundation.

The letter comes a day after advocates gathered in Brussels to urge action from the EU Parliament to stop the flow of disinformation targeting voters.

“As Europe, and ultimately millions of people worldwide go to the polls in 2024, social media experts and civil society advocates are coming together around a simple, powerful demand: turn off the ranking systems that are polarising the electorate and spreading disinformation,” said Tanya O’Carroll, independent tech expert, who leads People vs. Big Tech. “There is growing momentum urging the EU to use the groundbreaking Digital Services Act effectively and make safety the default for online platforms.”

In the letter, organisations called for implementation of a rule that would require platforms to default to content recommender systems based neither on personal behavioural profiling nor on engagement.

“Achieving safety by design is at the heart of tackling content that threatens the integrity of the electoral process. By disabling profiling-based recommender systems by default and optimising for values other than engagement, [platforms] can take significant steps toward mitigating the threats that their recommender systems pose to election integrity.”

Signatories included Global Witness, Digital Action, Ekō and the Real Facebook Oversight Board.

“Why be worried about disinformation and hate speech in the 2024 elections? Because social media and recommender systems are still optimised for engagement. It's simple: engagement = clickbait = disinfo,” said Katarzyna Szymielewicz, Panoptykon Foundation. “With no time to waste, we call on the European Commission to use its powers under the DSA to define safe defaults for (very large) social media platforms and ensure that deceptive and harmful design features disappear from the market.”

In the policy brief, the authors also outlined the extensive evidence for ending “addictive design” that triggers “compulsive behaviour”, particularly in an electoral context, and the urgency of moving away from engagement-based rankings towards safe, rights-respecting and human-centric recommender systems. The brief offers a roadmap for designing systems that prioritise safety and quality of user experience instead of engagement at all costs.

The previous day, global leaders, whistleblowers and civil society experts - including Nobel Laureate Maria Ressa and Facebook whistleblower Frances Haugen - came together in Brussels to urge action from the EU. With the EU Parliamentary elections scheduled for June, advocates urged swift action to protect the upcoming vote from profiling-driven social media interference.

“We have an immediate fix: with just 14 weeks to go, platforms could switch from this toxic, engagement-focused version of their recommender systems to a non-profiling version that is ‘safe by default’. This would automatically stop the platforms from actively pushing the worst content into people’s feeds,” said Tanya O’Carroll, independent tech expert and lead at People vs. Big Tech. “This is a vital first step towards a longer-term goal of creating safe and healthy online experiences.”

###

For interviews, please contact press@peoplevsbig.tech

About PvsBT

People vs Big Tech is an open network of civil society organisations and concerned citizens working together to challenge the power and abuses of Big Tech. Our first major priority is to ensure today’s tech titans and their legions of lobbyists are not allowed to be the dominant designers of tomorrow’s technology regulations. The network’s current major focus is to distil key legislative priorities in European regulation, highlight moments for targeted action, and amplify opportunities anywhere for people-powered mobilisation in the fight to increase public and political support for Big Tech accountability.

About the Panoptykon Foundation

The Panoptykon Foundation was established in April 2009 upon the initiative of a group of engaged lawyers, to express their opposition to surveillance. Our mission is to protect fundamental rights and freedoms in the context of fast-changing technologies and growing surveillance.

Letter to European Commissioner Breton: Tackling harmful recommender systems

Civil society organisations unite behind Coimisiún na Meán’s proposal to disable profiling-based recommender systems on social media video platforms.

Dear Commissioner Breton,

Coimisiún na Meán’s proposal to require social media video platforms to disable, by default, recommender systems based on intimately profiling people is an important step toward realising the vision of the Digital Services Act (DSA). We, eighteen civil society organisations, urge you not to block it and, moreover, to recommend this as a risk mitigation measure under Article 35 of the DSA. This is an opportunity to once more prove European leadership.

Disabling profiling-based recommender systems by default has overwhelming support from civil society, the Irish public and MEPs across political groups. More than 60 diverse Irish civil society organisations endorsed a submission strongly backing this measure, as covered by the Irish Examiner. We are united in our support for this Irish civil society initiative. 82% of Irish citizens are also in favour, as shown in a national poll across all ages, education levels, income brackets and regions of Ireland conducted independently by Ireland Thinks in January 2024. At the end of last year, a cross-party group of MEPs wrote a letter urging the Commission to adopt the Irish example across the European Union.

Our collective stance is based on overwhelming evidence of the harms caused by profiling-based recommender systems, especially for the most vulnerable groups such as children. Algorithmic recommender systems select emotive and extreme content and show it to the people they estimate are most likely to engage with it. These people then spend longer on the platform, which allows Big Tech corporations to sell more ad space. Meta’s own internal research disclosed that a significant 64% of extremist group joins were caused by its own algorithms. Even more alarmingly, Amnesty International found that TikTok’s algorithms exposed multiple accounts posing as 13-year-old children to videos glorifying suicide within an hour of each account’s launch.

Platforms that originally promised to connect and empower people have become tools optimised to “engage, enrage and addict” them. As described above, profiling-based recommender systems are one of the major areas where platform design decisions contribute to “systemic risks” as defined in Article 34 of the DSA, especially “any actual or foreseeable negative effects” on the exercise of fundamental rights, the protection of personal data, respect for the rights of the child, civic discourse and electoral processes, public security, gender-based violence, the protection of public health and minors, and serious negative consequences for a person’s physical and mental well-being. By determining how users find information and how they interact with all types of commercial and non-commercial content, recommender systems are a crucial design layer of the Very Large Online Platforms regulated by the DSA.

Therefore, we urge the European Commission not only to support Ireland’s move but to apply it across the European Union, recommending that profiling-based recommender systems be disabled by default on social media video platforms as a mitigation measure for Very Large Online Platforms, as outlined in Article 35(1)(c) of the Digital Services Act.

Furthermore, we join the Irish civil society organisations in urging Coimisiún na Meán and the European Commission to foster the development of rights-respecting alternative recommender systems. For example, experts have pointed to alternatives including recommender systems built on explicit user feedback rather than data profiling, and signals that optimise for outcomes other than engagement, such as quality content and plurality of viewpoints. Ultimately, the solution is not for platforms to provide only one alternative to the currently harmful defaults, but rather to open up their networks to a marketplace of options offered by third parties, competing on a number of parameters including how rights-respecting they are, thereby promoting much greater user choice.

We believe these actions are crucial steps towards mitigating the inherent risks of profiling-based recommender systems and building a rights-respecting and pluralistic information ecosystem. We look forward to your support and action on this matter.

Yours sincerely,

  1. Amnesty International
  2. Civil Liberties Union for Europe (Liberties)
  3. Defend Democracy
  4. Ekō
  5. The Electronic Privacy Information Center (EPIC)
  6. Fair Vote UK
  7. Federación de Consumidores y Usuarios CECU
  8. Global Witness
  9. Irish Council for Civil Liberties
  10. LODelle
  11. Panoptykon Foundation
  12. People vs Big Tech
  13. The Citizens
  14. The Real Facebook Oversight Board
  15. Xnet, Institute for Democratic Digitalisation
  16. 5Rights Foundation
  17. #jesuislà
  18. Homo Digitalis

Open letter to the European Parliament: A critical opportunity to protect children and young people

Dear Members of the European Parliament,

We, experts, academics and civil society groups, are writing to express our profound alarm at the social-media driven mental health crisis harming our young people and children. We urge you to take immediate action to rein in the abusive Big Tech business model at its core to protect all people, including consumers and children. As an immediate first step, this means voting for the Internal Market and Consumer Protection Committee’s report on addictive design of online services and consumer protection in the EU, in its entirety.

We consider social media's predatory, addictive business model to be a public health and democratic priority that should top the agenda of legislators globally. Earlier this year, the US Surgeon General issued a clear warning about the impact of addictive social media design: “Excessive and problematic social media use, such as compulsive or uncontrollable use, has been linked to sleep problems, attention problems, and feelings of exclusion among adolescents… Small studies have shown that people with frequent and problematic social media use can experience changes in brain structure similar to changes seen in individuals with substance use or gambling addictions”.

This is no glitch in the system; addiction is precisely the outcome tech platforms like Instagram, TikTok and YouTube are designed and calibrated for. The platforms make more money the longer people are kept online and scrolling, and their products are therefore built around ‘engagement at all costs’ – leading to potentially devastating outcomes while social media corporations profit. One recent study by Panoptykon Foundation showed that Facebook's recommender system not only exploits users' fears and vulnerabilities to maintain their engagement but also ignores users' explicit feedback, even when they request to stop seeing certain content.

The negative consequences of this business model are particularly acute among those we should be protecting most closely: children and young people whose developing minds are most vulnerable to social media addiction and the ‘rabbit hole’ effect that is unleashed by hyper-personalised recommender systems. In October 2023, dozens of states in the U.S. filed a lawsuit on behalf of children and young people accusing Meta of knowingly and deliberately designing features on Instagram and Facebook that addict children to its platforms, leading to "depression, anxiety, insomnia, interference with education and daily life, and many other negative outcomes".

Mounting research has revealed the pernicious ways in which social media platforms capitalise on the specific vulnerabilities of the youngest in society. In November 2023, for example, an investigation by Amnesty International found that within 20 minutes of launching a dummy TikTok account posing as a 13-year-old who interacted with mental health content, more than half of the videos in the ‘For You’ feed were related to mental health struggles. Within an hour, multiple videos romanticising, normalising or encouraging suicide had been recommended.

The real-world ramifications of this predatory targeting can be devastating. In 2017, 14-year-old British teenager Molly Russell took her own life after being bombarded with 2,100 posts discussing and glorifying self-harm and suicide on Instagram and Pinterest over a 6-month period. A coroner’s report found that this material likely “contributed to her death in a more than minimal way”. The words of Molly’s father, Ian Russell, must serve as an urgent message to us all: “It’s time to protect our innocent young people, instead of allowing platforms to prioritise their profits by monetising their misery.”

Across Europe, children and young people, parents, teachers and doctors are facing the devastating consequences of this mental health crisis. But change will not come about from individual action. We urgently need lawmakers and regulators to stand up against a social media business model that is wreaking havoc on the lives of young people. We strongly endorse and echo the IMCO Committee Report’s calls on the European Commission to:

1. ensure strong enforcement of the Digital Services Act on the matter, with a focus on provisions on children and special consideration of their specific rights and vulnerabilities. This should include as a matter of priority:

  • independently assessing the addictive and mental-health effects of hyper-personalised recommender systems;
  • clarifying the additional risk assessment and mitigation obligations of very large online platforms (VLOPs) in relation to potential harms to health caused by the addictive design of their platforms;
  • naming features in recommender systems that contribute to systemic risks;
  • naming design features that are not addictive or manipulative and that enable users to take conscious and informed actions online (see, for example, People vs Big Tech and Panoptykon report: Prototyping user empowerment: Towards DSA-compliant recommender systems).

2. assess and prohibit harmful addictive techniques that are not covered by existing legislation, paying special consideration to vulnerable groups such as children. This should include:

  • assessing and prohibiting the most harmful addictive practices;
  • examining whether an obligation not to use interaction-based recommendation systems ‘by default’ is required in order to protect consumers;
  • putting forward a ‘right not to be disturbed’ to empower consumers by turning all attention-seeking features off by design.
Signed by the following experts and academics,

Dr Bernadka Dubicka BSc MBBS MD FRCPsych, Professor of Child and Adolescent Psychiatry, Hull and York Medical School, University of York

Dr Elvira Perez Vallejos, Professor of Mental Health and Digital Technology, Director RRI, UKRI Trustworthy Autonomous Systems (TAS) Hub, EDI & RRI Lead, Responsible AI UK, Youth Lead, Digital Youth, University of Nottingham

Ian Russell, Chair of Trustees, Molly Rose Foundation

Kyle Taylor, Visiting Digital World and Human Rights Fellow, Tokyo Peace Centre

Dr Marina Jirotka, Professor of Human Centred Computing, Department of Computer Science, University of Oxford

Michael Stora, Psychologist and Psychoanalyst, Founder and Director of Observatoire des Mondes Numériques en Sciences Humaines

Dr Nicole Gross, Associate Professor in Business & Society, School of Business, National College of Ireland

Dr S. Bryn Austin, ScD, Professor, Harvard T.H. Chan School of Public Health, and Director, Strategic Training Initiative for the Prevention of Eating Disorders

Dr Trudi Seneviratne OBE, Consultant Adult & Perinatal Psychiatrist, Registrar, The Royal College of Psychiatrists

Signed by the following civil society organisations,

AI Forensics

Amnesty International

ARTICLE 19

Avaaz Foundation

Civil Liberties Union for Europe (Liberties)

Federación de Consumidores y Usuarios CECU

Defend Democracy

Digital Action

D64 - Center for Digital Progress (Zentrum für Digitalen Fortschritt)

Ekō

Fair Vote UK

Global Action Plan

Global Witness

Health Action International

Institute for Strategic Dialogue (ISD)

Irish Council for Civil Liberties

Mental Health Europe

Panoptykon Foundation

Superbloom (previously known as Simply Secure)

5Rights Foundation

#JeSuisLà

The post Open letter to the European Parliament: A critical opportunity to protect children and young people appeared first on People vs. Big Tech.

Prototyping User Empowerment – Towards DSA-compliant recommender systems
https://peoplevsbig.tech/prototyping-user-empowerment-towards-dsa-compliant-recommender-systems/ (Fri, 08 Dec 2023)

What would a healthy social network look like? Researchers, civil society experts, technologists and designers came together to imagine a new way forward.

The post Prototyping User Empowerment – Towards DSA-compliant recommender systems appeared first on People vs. Big Tech.


Executive Summary (full briefing here)

What would a healthy social network look and feel like, with recommender systems that show users the content they really want to see, rather than content based on predatory and addictive design features?

In October 2022, the European Union adopted the Digital Services Act (DSA), introducing transparency and procedural accountability rules for large social media platforms – including giants such as Facebook, Instagram, YouTube and TikTok – for the first time. When it comes to their recommender systems, Very Large Online Platforms (VLOPs) are now required to assess the systemic risks of their products and services (Article 34) and put in place measures to mitigate any negative effects (Article 35). In addition, VLOPs are required to disclose the “main parameters” of their recommender systems (Article 27), provide users with at least one option that is not based on profiling of personal data (Article 38), and refrain from dark patterns and manipulative design practices that distort or impair users’ ability to make free and informed decisions (Article 25).

Many advocates and policy makers are hopeful that the DSA will create the regulatory conditions for a healthier digital public sphere – that is, social media that act as public spaces, sources of quality information and facilitators of meaningful social connection. However, many of the risks and harms linked to recommender system design cannot be mitigated without directly addressing the underlying business model of the dominant social media platforms, which is currently designed to maximise users’ attention in order to generate profit from advertisements and sponsored content. In this respect, changes that would mitigate systemic risks as defined by the DSA are likely to be heavily resisted – and contested – by VLOPs, making independent recommendations all the more urgent and necessary.

It is in this context that a multidisciplinary group of independent researchers, civil society experts, technologists and designers came together in 2023 to explore answers to the question: ‘How can the ambitious principles enshrined in the DSA be operationalised by social media platforms?’. On 25 August 2023, we published the first brief, looking at the relationship between specific design features in recommender systems and specific harms. Our hypotheses were accompanied by a list of detailed questions to VLOPs and Very Large Online Search Engines (VLOSEs), which serve as a ‘technical checklist’ for risk assessments, as well as for auditing recommender systems.

In this second brief, we explore user experience (UX) and interaction design choices that would give people more meaningful control and choice over the recommender systems that shape the content they see. We propose nine practical UX changes that we believe can facilitate greater user agency, from content feedback features to controls over the signals used to curate their feeds, and specific ‘wellbeing’ features. We hope this second briefing serves as a starting point for future user research to ground UX changes related to DSA risk mitigation in a better understanding of users’ needs.
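By way of illustration only, the kinds of controls described above, content feedback, curation-signal settings and ‘wellbeing’ features, might be modelled as user-set preferences that take effect before any ranking happens. All field names and defaults below are hypothetical sketches, not drawn from the briefing itself:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class FeedPreferences:
    """Hypothetical user-facing recommender controls: explicit signals
    the user sets, rather than engagement metrics inferred about them."""
    use_profiling: bool = False            # non-profiling option on by default (cf. DSA Article 38)
    followed_sources_only: bool = True     # curate from chosen accounts, not predicted virality
    muted_topics: set = field(default_factory=set)   # content feedback: "show me less of this"
    autoplay: bool = False                 # a 'wellbeing' feature: off by default
    session_reminder_minutes: Optional[int] = 30     # nudge after 30 minutes of scrolling

def curate(posts: list, prefs: FeedPreferences) -> list:
    """Apply the user's explicit signals before any ranking happens."""
    visible = [p for p in posts if p["topic"] not in prefs.muted_topics]
    if prefs.followed_sources_only:
        visible = [p for p in visible if p["followed"]]
    return visible
```

The design choice the sketch illustrates is ordering: the user's explicit choices filter the candidate pool first, and only then would any ranking model order what remains.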

This briefing concludes with recommendations for VLOPs and the European Commission.

With regard to VLOPs, we would like to see these and other design provocations user-tested, experimented with and iterated upon. This should happen in a transparent manner to ensure that conflicting design goals are navigated in line with the DSA. Risk assessment and risk mitigation are not a one-time exercise but an ongoing process, which should engage civil society, the ethical design community and a diverse representation of users as consulted stakeholders.

The European Commission should use all of its powers under the DSA, including the power to issue delegated acts and guidelines (e.g., in accordance with Article 35), to ensure that VLOPs:

  • implement the best UX practices in their recommender systems;
  • modify their interfaces and content ranking algorithms in order to mitigate systemic risks;
  • make transparency disclosures and engage stakeholders in the ways we describe above.

Read the full briefing here.


