Algorithms and Radicalisation

I’ve been busy recently—work, study, Christmas—and haven’t felt a huge urge to write anything here. So let’s round off the year with some old-fashioned web-logging: the mere bookmarking of a story on a subject that feels emblematic of the entire decade.

A discussion about a pre-publication research paper with some shoddy methodology leads me to a New York Times article by Kevin Roose, published in June this year, chronicling one young man’s journey into alt-right radicalisation. A key insight:

The radicalization of young men is driven by a complex stew of emotional, economic and political elements, many having nothing to do with social media. But critics and independent researchers say YouTube has inadvertently created a dangerous on-ramp to extremism by combining two things: a business model that rewards provocative videos with exposure and advertising dollars, and an algorithm that guides users down personalized paths meant to keep them glued to their screens.

The impact of algorithms on our psyche and society has become very apparent in the last few years. The power to determine who sees what and when can change moods and swing elections. But discussing the issue this week, Roose and other data journalists present some important caveats. The algorithm isn’t everything.


The only thing I’d add, coming at the issue (as I do) with an eye on freedom of expression concerns, is that the way the algos affect our interests is not in itself a bad thing.
We’ve all fallen down algorithm-induced ‘YouTube rabbit holes’ in our time, and when the subject is not political, the way that the system steers users away from mainstream content and into the back-catalogue can produce delightful results. Last night, for example, I watched a load of astonishing videos of ballet performances. I know nothing about ballet and cannot now remember how I happened upon them (perhaps I clicked on a link on someone’s blog?) but it’s possible this could be the start of a deep and consuming interest that we would usually applaud.
Even political ‘radicalisation’ is not necessarily a bad thing. I imagine that ‘radicalising’ people to fight for racial or gender equality (say), or to become environmental activists, is actually desirable.
The issue, as ever, is not with ‘radicalisation’ per se but ‘violent radicalisation’ or (as the Commission for Counter Extremism recently suggested) with ‘hateful extremism.’ Algorithms that serve us relevant content are useful tools for many that can be misused by a few. Or, as Kevin Roose and Becca Lewis point out above, algorithms don’t radicalise people; people radicalise people.
That is not to say that we shouldn’t intervene to temper the algorithms. Just that the challenge for tech companies and governments is not one of banning, but of balance. This will be the task of the next decade. Let us hope that by 2030 we will have reached a fair settlement.

Online Harms: A Few Times When The Algorithms Chilled Freedom of Expression

The consultation to the British government’s Online Harms White Paper closed this week. English PEN and Scottish PEN made a submission, arguing that the government rethink its approach.
The government proposal is that a new ‘duty of care’ be placed upon online platforms like Facebook, Twitter and YouTube to protect their users. If they expose users to harmful content—ranging from terrorist propaganda and child porn, to hazily defined problems like ‘trolling’—then a new regulator could sanction them.
This sounds sensible, but it presents a problem for freedom of expression. If the online platforms are threatened with large fines, and their senior management are held personally responsible for the ‘duty of care’, then it’s likely that the online platforms will take a precautionary approach to content moderation. Whenever in doubt, whenever it’s borderline, whenever there is a grey area… the platforms will find it expeditious to remove whatever has been posted. When that happens, it is unlikely that the platforms will offer much of an appeals process, and certainly not one that abides by international free speech standards. A situation will arise where perfectly legal content cannot be posted online: a two-tier system for speech. Continue reading “Online Harms: A Few Times When The Algorithms Chilled Freedom of Expression”

Beyond Beginners Rubik's Cube Tutorials

I think I’ve mentioned before that I recently taught myself to solve a Rubik’s Cube. I often take my cube onto the bus or train and solve it, as an alternative to messing about on my phone.

The beginners’ method of solving the cube is quite inefficient. It teaches seven algorithms, which sometimes have to be repeated until the right pattern emerges.
There are loads of internet resources for people who want to get into speed-cubing. But I have found very little for people like me who just want to be slightly more efficient at solving the cube.
It is for this incredibly specific niche that I have launched a series of YouTube video tutorials entitled Beyond Beginners. They’re a bit cheesy but I had fun making them. Continue reading “Beyond Beginners Rubik's Cube Tutorials”

Notes on the Nazi Pug Thing

The Nazi pug
In Airdrie, Scotland, a man named Markus Meechan has been convicted of posting a grossly offensive video on his ‘Count Dankula’ YouTube channel. He taught his girlfriend’s dog to give a Nazi salute in response to the phrase ‘gas the Jews’.
It’s clearly a joke. In fact, he explains as much in the video itself:

Mah girlfriend is always ranting and raving about how cute her wee dug is, and so I thought that I would turn him into the least cutest thing that I can think of, which is a Nazi.

This is clearly in poor taste. However, making offensive jokes should not be a criminal offence.

Many people have been sharing this Jonathan Pie video, where the frazzled reporter voices indignation that the conviction has happened.

Comedians Ricky Gervais and David Baddiel also discussed the context and why this sort of thing is funny.

Over on Sp!ked, columnist Andrew Doyle suggests that the context makes the Count Dankula conviction absurd. To secure a conviction, the prosecution had to wilfully misunderstand the context of the video.

I have a couple of things to add.

Continue reading “Notes on the Nazi Pug Thing”