Friday, August 23, 2024

Fascism 2.0 – The changing face of social media censorship

 


AI-driven content moderation is subtly shaping public opinion and political engagement

Paul Lancefield

Facebook makes only about £34 a year from the average user in the UK – a little under £3 a month (and that’s before costs) – so there is clearly no headroom, or motivation, for a human level of customer service or attention. The user is not the customer; rather, they are the product whose data is sold to advertisers.

Thus, users do not have a direct customer relationship with the platform. The network is not directly incentivised to “care” about the user before the advertiser. And no matter where you sit on the spectrum between “free speech absolutism” and “private entities have the right to censor any user”, with such low margins it is inevitable that machine processing will be used to moderate posts and handle the user-facing interface.

But it is a fact that the user-processing and management capabilities social networks are now developing are being used in a variety of ways beyond just moderation. It is also true that this automated processing is being done at scale and is now applied to every post every member makes. 68% of US voters are on Facebook. In the UK it is 66%, and in France 73.2%. The figures are similar for every democratic nation in the West, so it is vitally important that the applied rules are politically neutral.

The power that lies in the ability to machine-process every user’s posts is far deeper and more profound than many perhaps realise. And while it cannot directly dictate what users write in their messages, it has the capacity to fundamentally shape which messages gain traction.

Social media services have become de facto town squares, and most would agree their corporate owners should avoid ever putting a hand on the scales and influencing politics.

Additionally, as everyone who uses Facebook is aware, especially when it comes to politically sensitive topics, the system will restrict an individual’s reach, sometimes to an extreme degree. Or that user will simply be banned for a period of time, or banned from the network entirely.

So we can ask the question: since social media corporations have so much censorship power, how do we know they aren’t engaging in unethical political interference? Can they be trusted with the responsibility?

I will return to this question, but it’s clear that trust in these corporations is deeply misplaced.

The pandemic woke many people up to the level of control those in charge of our social media networks are imposing. They write the rules to boost engagement for posts they favour, making certain individuals’ follower counts more valuable. Conversely, users who go against the grain (or against the establishment narrative) see their engagement subtly reduced or collapse entirely, or they can be banned from the service altogether. And the evidence is that, somewhat contrary to the principles of democracy, hands have been very firmly placed on the scales at Facebook, Twitter and YouTube.

When Elon Musk bought Twitter, he invited independent journalists Matt Taibbi, Bari Weiss and Michael Shellenberger into the Twitter offices to research internal company communications and see how far the previous owners had been censoring user tweets.

The Twitter Files are the result, and they clearly demonstrate that there has been interference on a major scale, and that in many cases it has been on political grounds. The Twitter Files team established that government agencies had been firmly embedded at the company, monitoring and censoring US citizens and the citizens of other nations, and that those agencies were regularly (and strongly) requesting censorship actions. More than this, the files also revealed that similar levels of interference have been taking place at other social media networks such as Facebook.

But since the Twitter Files exposed this interference, a new and potentially even greater threat has emerged: AI.

There was a time when algorithms seemed to be the only topic of conversation digital marketers could discuss. And since there is no margin for human intervention at the level of individual posts, algorithms were what the platforms used.

To start with they were quite simple, like the equations we practised in school maths class, so they were relatively easy to work out. Google’s rise was powered by a simple yet brilliant idea: counting external links to a webpage as a proxy for relevance.
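To make the idea concrete, here is a toy sketch of that “links as votes” notion in Python. It is a deliberate simplification for illustration only – real PageRank also weights each link by the importance of the page casting it, and the site names here are made up.

```python
# Toy "links as votes" ranking: score each page by how many other pages link to it.
# Illustrative only - not Google's actual PageRank algorithm.
from collections import Counter

# page -> list of pages it links out to (made-up data)
links = {
    "news.example":  ["blog.example", "shop.example"],
    "blog.example":  ["news.example"],
    "forum.example": ["news.example", "blog.example"],
    "shop.example":  ["news.example"],
}

# Count inbound links for every page mentioned anywhere in the graph.
inbound = Counter(target for targets in links.values() for target in targets)

for page, score in inbound.most_common():
    print(f"{page}: {score} inbound links")
# news.example comes out on top because three other pages point at it.
```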

But algorithms have since given way to more complex machine-learning models, which still rely on algorithms at their core, but are now automatically generated and so vast that any human attempt to untangle them is a non-starter. So we confine our thinking to what they can achieve and what they can discriminate, rather than exactly how the code works.

And now we have entered a third generation of technology. Machine learning has transformed into the development of Large Language Models (LLMs) or, more popularly, AI. And with this latest evolution, corporatists have found immense and frightening new opportunities for power and control.

The creation of LLMs involves training. The training imbues them with specific skills and biases. Its purpose is to fill in gaps, such that there are no obvious holes in the LLM’s capacity to deal with the building blocks of human conceptualisation and speech. And this is the distinguishing feature of LLMs: we can converse with them, the conversation flows, and the grammar and content feel natural, fluent and complete. Ideally, an LLM acts like a refined English butler: polite, informative, and correct without being rude. But training also confers specialisations on the LLM.

In the context of social media – and this is where the frightening levels of power start to become evident – LLMs are being used to act as the hall monitor, enforcing “content moderation.”

Meta’s Llama Guard is a prime example, trained not just to moderate but also to report on users. And this reporting function embodies not just the opportunity to report, but also, through that reporting data, the mining of opportunities to influence and to make suggestions about the user and to the user. And when I say suggestions, an LLM is capable not only of the obvious kind that the user might welcome and be happy to receive, but also of a more devious, unconscious kind that can be manipulative and designed to control.
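To give a sense of how cheaply this screening can be wired in, here is a minimal sketch that runs a post through Meta’s openly released Llama Guard classifier and logs a “report” when it objects. The model choice, the verdict parsing and the reporting step are illustrative assumptions of mine, not a description of Meta’s actual production pipeline.

```python
# Minimal sketch: screen every post with an LLM safety classifier and "report" flagged ones.
# Assumptions: access to the gated Llama Guard 3 weights on Hugging Face; the reporting
# step is purely illustrative, not any platform's known implementation.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL_ID = "meta-llama/Llama-Guard-3-8B"  # gated model; requires access approval

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)

def moderate(post_text: str) -> str:
    """Return the classifier's verdict (typically 'safe' or 'unsafe' plus a category code)."""
    chat = [{"role": "user", "content": post_text}]
    input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt")
    output = model.generate(input_ids, max_new_tokens=32, do_sample=False)
    # Decode only the newly generated tokens (the verdict).
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)

def screen_and_report(user_id: str, post_text: str) -> None:
    """Illustrative only: record a 'report' whenever the classifier flags a post."""
    verdict = moderate(post_text)
    if verdict.strip().startswith("unsafe"):
        # In a real deployment this record could feed ranking penalties, strikes or bans.
        print(f"flagged user={user_id} verdict={verdict.strip()!r}")

screen_and_report("user-123", "Example post text goes here.")
```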

There is not yet any evidence gathered (that I am aware of) that LLMs in particular are being used this way. But the capability is most certainly there, and if past behaviour is any indication of future developments, it is likely they will be.

You only need to watch Derren Brown’s 2006 TV special “Derren Brown: The Heist”, in which he convinces a group of strangers to commit a bank heist, to appreciate just how deep and powerful the use of suggestion can be. For those unaware of Derren Brown, he is a stage hypnotist and mentalist who tends to emphasise the power of suggestion over hypnosis (most of his shows contain no hypnosis at all). Merely through the power of suggestion he gets people to do the most extraordinary things.

“Derren-Brown-like” suggestions work because the human brain is actually far less agile and far more linear than we like to think. Consciousness is a precious resource, and many actions we perform frequently are transferred to habit so we can do them without thinking, preserving consciousness for where it is needed most.

Through habit we change gear in a stick-shift car without having to think about it. And we have all experienced that game where you have a set time to think of a list of things, such as countries ending with the letter A. If put on the spot in front of a crowd, it can sometimes be difficult to come up with any at all. The brain often isn’t actually that good at thinking creatively or at fast, conscious, on-the-spot recall.

But if someone you spoke to a few minutes before the game told you about their holiday in Kenya, you can be sure Kenya will be the first answer to pop into your head. More than that, the association will happen automatically, whether we want it to or not!

This is simply the way the brain works. If information is conveyed at just the right time and in the right way, it can be made almost a dead cert that a given suggestion will be followed. Derren Brown understands this and is a master at exploiting it.

Search engines and social media platforms wield immense power to engineer behaviour through subtle suggestions. And indeed, there is evidence Facebook and Google have done so.

Professor and researcher Dr Robert Epstein – as it were – “caught Google out” manipulating the search suggestions box that appears under the text box where users enter a search request. The whole episode became additionally sordid when it became clear they were being deceptive and had some awareness that their experimentation was unethical. I won’t recount the full details, but do check out the links to this – it is an interesting story in its own right.

Users are in a particularly trusting and receptive mental state when using Google’s suggested links function, and they don’t notice when the results contain action-oriented and imperative suggestions that, far from being the best answer to the search query, are there to manipulate the user’s subsequent actions.

In relation to social media posts, the use of suggestion is often far more subtle, making it harder to detect and resist. LLM analysis across the database of user posts can reveal related posts which supply suggested actions. Here the network can exploit the fact that it has many millions of user messages at its disposal, including messages suggesting preferred outcomes. Such messages can be selected and preferentially promoted in user feeds, as the sketch below illustrates.
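Here is a minimal sketch of how that selection could be done mechanically, assuming a small open embedding model. The example posts, the “preferred outcome” text and the boost factor are all hypothetical placeholders of my own, not any platform’s known implementation.

```python
# Sketch: score posts by semantic similarity to a "preferred outcome" and boost
# aligned posts in the feed ranking. Illustrative assumptions throughout.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small open embedding model

posts = [
    "Had a great weekend hiking, highly recommend it.",
    "Everyone should get out and vote for the sensible option on Thursday.",
    "New cafe opened on the high street, lovely coffee.",
]
preferred_outcome = "encourage people to vote a particular way"  # hypothetical goal

post_vecs = model.encode(posts, convert_to_tensor=True)
goal_vec = model.encode(preferred_outcome, convert_to_tensor=True)
similarity = util.cos_sim(goal_vec, post_vecs)[0]  # one score per post

base_engagement = [0.4, 0.2, 0.5]  # stand-in for the normal ranking signal
BOOST = 2.0                        # arbitrary multiplier applied to aligned posts

ranked = sorted(
    range(len(posts)),
    key=lambda i: base_engagement[i] + BOOST * float(similarity[i]),
    reverse=True,
)
for i in ranked:
    score = base_engagement[i] + BOOST * float(similarity[i])
    print(f"{score:.2f}  {posts[i]}")
# The post closest to the "preferred outcome" rises to the top of the feed,
# even though its organic engagement signal was the weakest.
```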

Content moderation is, of course, necessary to handle unacceptable language and anti-social behaviour. However, there is a large grey area where disagreeable opinions can be labelled as “hate speech”, and because it is a grey area, there is much leeway for the social network to intrude into the space of personal politics and free speech.

The term “hate speech” has been a very effective device for justifying use of the ban-hammer, but the main concern now is that, with the deployment of LLMs, a major historical milestone has passed with barely a whisper – one that implies a whole new level of such constraints and threats to users’ freedom to communicate.

And that milestone is that LLMs are now being used to govern human behaviour, and not the other way around. Its passing has barely been noticed, because simpler algorithms were already performing this role, and it is done in the dark anyway.

Users don’t see it unless they are affected by it in an obvious way. But even so, there is ample reason to think that in the future we will look back and recognise this milestone as something of a critical juncture, after which some version of a “Skynet”-like future became inevitable.

Just last week, UK Prime Minister Keir Starmer announced a police initiative to use social media to identify those involved in public disorder, illustrating how LLM-automated reporting is poised to be used beyond social media and in the context of law enforcement.

There is no detail as yet of how this monitoring will be done, but, having experience of tech project pitching, you can be sure the government will have a roster of technology firms suggesting solutions. And you can be sure LLMs are being pitched as integral to almost all of them!

So we have established that social media is closed and proprietary and has enabled new media power structures to be established. We have seen that social media owners have the power to suppress or boost a post’s virality, and have now implemented policing and reporting by LLM (AI) which looks set to extend into real-world policing. We have seen, through the Twitter Files, that social media corporations broke the law during the pandemic and displayed a willingness to collaborate with government agencies to censor and suppress disfavoured views.

Paul Lancefield is the author of Desilo, an app for helping to turn the tables on AI censorship and political misrepresentation. If you agree with Paul about the danger AI represents to free speech, you can help simply by following him on X.

