I’ve been busy recently—work, study, Christmas—and haven’t felt a huge urge to write anything here. So let’s round off the year with some old-fashioned web-logging: merely bookmarking a story on a subject that feels emblematic of the entire decade.
A discussion about a pre-publication research paper with some shoddy methodology leads me to a New York Times article by Kevin Roose, published in June this year, chronicling one young man’s journey into alt-right radicalisation. A key insight:
The radicalization of young men is driven by a complex stew of emotional, economic and political elements, many having nothing to do with social media. But critics and independent researchers say YouTube has inadvertently created a dangerous on-ramp to extremism by combining two things: a business model that rewards provocative videos with exposure and advertising dollars, and an algorithm that guides users down personalized paths meant to keep them glued to their screens.
The impact of algorithms on our psyche and society has become very apparent in the last few years. The power to determine who sees what and when can change moods and swing elections. But discussing the issue this week, Roose and other data journalists present some important caveats. The algorithm isn’t everything.
The only thing I’d add, coming at the issue (as I do) with an eye on freedom of expression concerns, is that the way the algos affect our interests is not in itself a bad thing.
We’ve all fallen down algorithm-induced ‘YouTube rabbit holes’ in our time, and when the subject is not political, the way that the system steers users away from mainstream content and into the back-catalogue can be delightful. Last night, for example, I watched a load of astonishing videos of ballet performances. I know nothing about ballet and cannot now remember how I happened upon them (perhaps I clicked on a link on someone’s blog?) but it’s possible this could be the start of a deep and consuming interest that we would usually applaud.
Even political ‘radicalisation’ is not necessarily a bad thing. I imagine that ‘radicalising’ people to fight for racial or gender equality (say), or to become environmental activists, is actually desirable.
The issue, as ever, is not with ‘radicalisation’ per se but with ‘violent radicalisation’ or (as the Commission for Countering Extremism recently suggested) with ‘hateful extremism.’ Algorithms that serve us relevant content are useful tools for many that can be misused by a few. Or, as Kevin Roose and Becca Lewis point out above, algorithms don’t radicalise people; people radicalise people.
That is not to say that we shouldn’t intervene to temper the algorithms. Just that the challenge for tech companies and governments is not one of banning, but of balance. This will be the task of the next decade. Let us hope that by 2030 we will have reached a fair settlement.