Over the weekend I was quoted in Politiken, the Danish broadsheet, discussing LOCOG’s attempt to control how staff, athletes and the public tweet during the Olympics. The ‘Games Makers’ have strict tweeting rules, and Twitter has been roped in to police ‘ambush marketing’ attempts by companies that are not official Games sponsors.
At the English branch of PEN, which campaigns for freedom of expression around the world, campaigns manager Robert Sharp says he finds the ban downright ridiculous. “It is bizarre, and one can wonder what signal the Olympics sends by placing so much weight on its sponsors’ interests. It leaves a bad taste in the mouth and, as I see it, runs counter to the whole Olympic spirit of openness and sharing,” he says.
Robert Sharp seriously doubts that the Olympic Committee can enforce any form of censorship. “We have previously seen, in connection with court cases here in the UK, that not even an injunction from the Supreme Court has been able to do much against Twitter. On the contrary, I believe any attempt to stop a tweet or a Facebook post will have the opposite effect. It will spread across the net at lightning speed,” he says.
In a paywalled Times article this time last week, Hugo Rifkind highlighted our loss of the communal Christmas TV moment. EastEnders can never achieve the dizzy ratings heights of the 1980s, Eric and Ernie are dead, and even the numbers for Her Majesty The Queen’s Christmas message are in decline. Rifkind blames the spread of new viewing technologies: a plethora of channels; asynchronous viewing options like Sky+, TiVo and iPlayer; and the alternatives presented by DVDs and YouTube.
It is interesting that despite this decline, new technology can provide a facsimile of the old, communal TV viewing experience. Instead of discussing an episode over the water-cooler or at the school gates the following morning, we all have a ‘second screen’ and discuss it in real time over Twitter. This is not a particularly original observation, but I mention it because it is Twitter that tells me just how universally popular Sherlock is, the second series of which began last weekend, with Episode 2 to be aired later this evening.
Hilariously, given the above paragraph, I did not actually watch the first episode ‘live’ – instead I caught up later in the week via iPlayer. That doesn’t detract from how popular the show seems to be, at least among the connected Twitterati.
There are plenty of explanations for the success. The writing is excellent and funny. Actor Benedict Cumberbatch exudes an autistic confidence that is true to Conan Doyle’s original character. Mysteries and puzzles are always the most popular stories (cf. the perennial dominance of detective stories over Lit Fic) and the Sherlock series adheres to the rules of a good detective story, presenting all the clues to the audience as they are presented to the sleuth himself.
However, I think it is the representation of technology, and the visual choices inspired by technology, which make the thing feel so contemporary. Holmes receives text messages and interacts with Lestrade on a mobile phone. Dr Watson has a blog, and the villainess of Series 2, Ep. 1 had her own Twitter account (both of which, as is obligatory these days, also exist in the real world and keep up the conceit). However, it is not just that the characters use technology that makes the show interesting, but how the director integrates that into the visual style. Sherlock employs the popular technique of overlaying motion graphics onto the action. It is a method made easy by new digital editing tools (see the opening scene of Stranger Than Fiction with Will Ferrell for an ostentatious example of the genre, as is Fifty Nine Productions’ work in Two Boys at the ENO). In Sherlock, the subtle use of this style makes the technology seem fully integrated into the way the characters view the world. The text messages flow past and through Sherlock; he barely has to look at his handset. I think it mirrors the way most of us live, with our eyes flitting between the screen and reality so quickly that it is sometimes difficult to remember how exactly a particular piece of information came to us. It certainly represents the way a large audience segment is experiencing the show. Are they watching Sherlock, or are they watching #Sherlock? Both.
Reading this article about the genesis and project management of Google+, a new social network, reminded me of the Through A Web Darkly event I attended at Demos last month. They’ve uploaded a helpful video outlining the main theme of the event – the idea that the ‘personalisation’ of the web might be a problem.
It’s interesting that, as we move into an era where all the HTML code on our websites has been crafted for us by the social networking companies, we are nevertheless still the creators, or maybe the curators, of our online world. As Tom Chatfield put it (paraphrasing Alexis Madrigal), “Twitter is a human recommendation engine of which I am the algorithm.” The same is true of Facebook too, of course, which prioritises those people whose content you most frequently ‘like’. It is also true of Google, which is starting to take your location and your past browsing history into account when delivering search results. The danger with this, well documented with respect to Twitter, is that opinions that differ from your own are eventually weeded out of your personalised stream of information. Mistaken or ill-thought-out beliefs are affirmed and not challenged, and our knowledge is weaker as a result. On a macro level, our democracies can become more polarised, with less consensus and a smaller space for compromise.
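None of these companies publish their ranking code, but the feedback loop is easy to sketch. Here is a toy scorer (the data and function names are invented purely for illustration) that ranks a feed by how often the reader has previously ‘liked’ each author, which is the essence of the filter-bubble mechanism:

```python
from collections import Counter

def rank_feed(posts, like_history):
    """Score each post by how often the reader has previously
    'liked' its author, then sort the feed by that score.
    Authors the reader has never liked score zero and sink."""
    likes = Counter(like_history)  # author -> number of past likes
    return sorted(posts, key=lambda p: likes[p["author"]], reverse=True)

feed = [
    {"author": "bob",   "text": "Here is a dissenting view"},
    {"author": "alice", "text": "I agree with you"},
]
# A reader who has only ever liked alice sees bob pushed down the feed.
ranked = rank_feed(feed, like_history=["alice", "alice", "alice"])
```

Run the loop a few times, feeding each session’s likes back in, and the dissenting author never surfaces again: no censor required, just arithmetic.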
Once we are aware of this phenomenon, we can of course guard against it ‘manually’, by following people we disagree with, deliberately mixing up our RSS feeds, and otherwise introducing disruptions into the stream. There are two problems with this approach. The first is that by confusing or confounding the machines at Google and Facebook (to ensure that they serve you more diverse content) you are actually breaking their business model, because they can no longer target relevant adverts at you. If everyone did this, then advertisers would find other places to spend their pounds and dollars, and the social internet services we rely upon might disappear. This is not necessarily our concern, and many people argue that essential web tools should not be provided by corporate bodies at all.
The second problem is that not everyone will introduce these disruptions into their stream. So while I may be reading all manner of different people with different views, they may not be reading me (or people like me) in return!
The worry, therefore, is that the liberating and equalising effects of the internet may begin to fizzle out. So far, we have been trumpeting the fact that anyone can become a global publisher with just a few keystrokes and clicks of a mouse. Until recently, once a website was published, its author could reasonably expect that the site would have an equal chance of appearing when a person looked for that subject matter on Google or other search engines. In the near future, this is unlikely to be so.
My final thought: I wonder what moral obligations Facebook etc have to me, to not filter what I publish on the web… Is there a free speech issue at stake here?
Was it last year, or 2009, or maybe 2008, that was branded “The Year of Twitter”? I am tempted to say that it’s an accolade deserved this year too. We’ve had the Arab Spring, the Japanese earthquake, the Royal Wedding and the death of Osama bin Laden this year, and it’s only May. All these globally significant events have been defined and re-defined in the popular consciousness by the microblogging site we have come to know and love. In the case of #OBL the event was actually live-tweeted by a Pakistani citizen journalist. 2011, the Year of Twitter again, right?
I think this misses the point. It’s better to say that 2011 has already been an important year for events, and that Twitter has both reflected and amplified those events.
It is also affecting more traditional news gathering, so my claim (above) about “the popular consciousness” holds true even if not everyone uses Twitter. This critique by Felix Salmon of the New York Times‘ coverage (or rather, its coverage of its own coverage) shows how the organisation is in denial about how social networks affect its relevance and its reporting. Meanwhile, this article by Frédéric Filloux points to the wider evolution of news. This has a knock-on effect for everyone.
In some cases, independent Twitter users are providing a crucial link in the news reporting chain. News editors have been fuming for years about super-injunctions, and their inability to mention gagging orders in their coverage. Meanwhile, Twitter regularly carries the names of those celebrities who have sought injunctions… So why has the mainstream news media jumped on the story about one particular tweeter who has explicitly revealed the details of particular super-injunctions? The answer is of course that it provides an excuse for papers to reveal such details by other means.
In this story, apparently some of the tweets are actually inaccurate. Is this a fatal flaw, a reason for heavy censorship? Not really. As we saw earlier this week when a quote was misattributed to Martin Luther King Jnr, the same networks that propagate the inaccuracies are also the place to correct them. Social networks are surprisingly good at doing this. With the rise of the internet, we have also seen the rise of new social norms and etiquette. Forwarding on a false story is quite a major faux pas in the 21st century, perhaps more so than printing gossip, rumour and anonymous sources. The major reason for the New York Times’ loss of credibility in recent years was its failure to fact-check the anonymous government sources that told reporters that Saddam did have WMD. The paper was ruthlessly manipulated by the Bush Administration hawks, and yet does not seem contrite. If only Twitter had been around in 2002-03, we might have had the tools to more effectively call the news media, and through them, the US government, to account.
I’ve just read an interesting short blog post by Nicholas Carr on ‘Nowness’:
The Net’s bias, Gelernter explains, is toward the fresh, the new, the now. Nothing is left to ripen. History gets lost in the chatter. But, he suggests, we can correct that bias. We can turn the realtime stream into a “lifestream,” tended by historians, along which the past will crystallize into rich, digital deposits of knowledge.
I think this is why James Bridle’s Tweetbook appeals to me. By pulling a large set of data into book form, James imposes a permanence on something that was previously transient. I plan to recreate the project for my own tweets one day soon – not to publish to the world, but as a single copy for myself. Twitter is a diary, and it is from diaries that some of the best history is derived.
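The mechanics of such a project are simple enough. A rough sketch, assuming your tweets can be exported as a JSON list of objects with `created_at` and `text` fields and sortable (ISO-style) timestamps – the field names and format are my assumption, not how Tweetbook actually works:

```python
import json

def tweets_to_pages(archive_path, out_path):
    """Turn a JSON archive of tweets (assumed: a list of objects
    with sortable 'created_at' timestamps and 'text' fields) into
    a plain-text file, one dated entry per line, oldest first,
    ready to be typeset and sent to a print-on-demand service."""
    with open(archive_path) as f:
        tweets = json.load(f)
    tweets.sort(key=lambda t: t["created_at"])  # diary order
    with open(out_path, "w") as f:
        for t in tweets:
            f.write(f"{t['created_at']}  {t['text']}\n")
```

The fixing happens in the last step: once the stream is flattened into dated lines on paper, it stops scrolling.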
I’ve found myself doing that with other creations too. I have hundreds of digital photos sitting on my hard drive, but I busied myself last weekend by printing out about five of them as 8″×5″ prints and putting them in nice frames. I think that act of printing and fixing is an act of stepping out of the stream. An act of stopping. Only then can you look back, look forward, and perhaps, look properly inward, too.