At Unlike Us #3 Ben Grosser presented the Facebook Demetricator, a web browser extension that hides all the metrics on Facebook and thereby demetricates its interface. Grosser describes his project as a piece of critical software that intervenes in Facebook’s numerical focus.
The quantification of social relations: More!
Ben Grosser narrates a scene from Wall Street: Money Never Sleeps in which Jacob asks his new boss, Bretton James: “What’s your number?” “Everybody has a number, a set amount of money that once they hit, they’ll leave the game and just go play golf for the rest of their life. What’s yours?” Bretton’s response: “More.”
The movie deals with the 2008 US financial collapse, and the scene, set before the crisis, shows capitalist society’s fetish for increasing numbers and numerical growth, which eventually ended in collapse. Grosser describes the human desire to make numbers go higher, whether this means stocks rising, calories burned, friends added, likes accrued or comments left. He states that we are obsessed with these numbers and that we pay more attention to them than to the actual content of the interaction.
He defines metrics, in relation to his project, as enumerations of data categories or groups that are easily obtained via typical database operations, and he studies these in relation to the Facebook platform from a software studies perspective. How do these metrics enable things, in the sense of Matthew Fuller’s conditions of possibility? The metrics increase user engagement with the site through the quantification of social relations, as may be seen in the +1 included in the Add Friend button. Adding a friend increments your social value, and the number makes that value explicit.
The extension not only removes the numbers from likes, shares and friends but also removes the timestamps in the interface. The News Feed is presented as a never-ending conversation and engineers presence in the system: if you step out of the stream, you may miss something. By quantifying our social relations Facebook becomes a technology of control that pushes for continuous consumption. In our paper on Facebook’s Like Economy my colleague Carolin Gerlitz and I describe this as a process of intensification and extensification where “user engagement is instantly transfigured into comparable metrics and at the same time multiplied and intensified on several levels:”
the metrifying capacities of the Like button are inextricable from its intensifying capacities. Within the Like economy, data and numbers have performative and productive capacities, they can generate user affects, enact more activities and thus multiply themselves or, as Simondon puts it, ‘Beyond information as quantity and information as quality, there is what one could call information as intensity’ (cited in Venn, 2010: 146). Such dynamics are enabled through the medium-specific infrastructure of the Like economy which simultaneously enacts, measures and multiplies user actions. (Gerlitz & Helmond, 2013)
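The Demetricator’s basic move, stripping counts and timestamps out of interface text, can be sketched in a few lines. This is not Grosser’s actual code (a real extension rewrites these strings in the live DOM via a content script), and the label patterns below are assumptions for the sake of example:

```javascript
// Minimal sketch of demetricating interface text: strip counts from
// metric labels. Hypothetical patterns, not Grosser's implementation.
function demetricate(label) {
  return label
    // "42 likes" -> "likes", "3 shares" -> "shares"
    .replace(/^[\d.,]+\s*(like|share|comment|friend)s?\b/i, (m, noun) => noun + 's')
    // "You and 17 others" -> "You and others"
    .replace(/\b\d+\s+others\b/, 'others');
}
```

A content script would apply such a function to every metric node it finds, which is why the effect feels so total: the interface keeps its grammar but loses its numbers.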
Facebook Demetricator user feedback
Five months in, feedback from users who installed the Facebook Demetricator reveals how it curbs addictive behavior (liking more, constantly checking for feedback or appreciation), how it blunts competition and calms users down, how it lessens emotional manipulation (one user stated (s)he was now in a neutral state of mind all the time) and how it relaxes self-imposed rules. Going through the feedback, it became apparent that many Facebook users have self-imposed rules on how to deal with the numbers and how to interact with content. One user stated: “I don’t know how to respond to this because I don’t know how old it is,” indicating that (s)he would not respond to old content, and actually asked for the Facebook Demetricator but with the timestamps back. Another user stated: “I need to know the numbers because I don’t want to be the first or second person to like it, because what if other people don’t like it?” and “If it has over 25 likes I am not going to like it anymore because that person has enough likes.”
With this piece of critical software Ben Grosser addresses how Facebook constructs its users by guiding their social interactions through the metrification of its interface.
More by Ben Grosser:
- Ben Grosser – How the Technological Design of Facebook Homogenizes Identity and Limits Personal Representation (PDF)
- Reload the love: Reload The Love! automatically detects when your Facebook notification icons are at zero and artificially inflates them for you. If new notifications arrive after Reload The Love! has inflated them, they will instantly revert back to accurate values. And any time you want to reinflate them, just reload the page to Reload The Love!
- Interview with Ben Grosser by Matthew Fuller: Don’t Give Me the Numbers
Article Series - Unlike Us 3
- Facebook Demetricator and the Easing of Prescribed Sociality by Ben Grosser at Unlike Us #3
- Minds Without Bodies: Rites of Religions 2.0 by Karlessi from Ippolita at Unlike Us #3
- The Future of Identity in a Digital World by Tobias Leingruber at Unlike Us #3
- Oliver Leistert and Leighton Evans on the Political Economy of Facebook Mobile at Unlike Us #3
Eight months after I requested my own data from Twitter under European privacy law, Twitter now allows you to download your own tweets through its interface. The archive can be downloaded from the settings page (see this blog post from Twitter), and the file, named tweets.zip, contains all your tweets from the beginning.
The tweets are stored in two different formats, CSV and JSON, which makes the archive versatile to work with for both users and developers. The archive contains not only your own tweets but also tweets you have retweeted; it excludes DMs and favorites. The archive is neatly organized: tweets are stored in one file per month, for example 2007_08.js. The .zip file also includes an interface to browse through your archive per year and month:
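For anyone who wants to work with the archive programmatically, a Node sketch can parse one of the monthly files. It assumes the file body wraps a JSON array in a JavaScript variable assignment (the `Grailbird` prefix shown is what the browsable interface’s data files use; the exact variable name may differ per archive version):

```javascript
// Sketch: parse one monthly tweet file from tweets.zip. Assumption:
// the file body looks like "Grailbird.data.tweets_2007_08 = [ ... ]",
// i.e. a variable assignment wrapping a plain JSON array of tweets.
function parseMonthFile(source) {
  const json = source.slice(source.indexOf('=') + 1);
  return JSON.parse(json);
}

// Example with an inlined file body; in practice you would
// fs.readFileSync() the .js file from the unzipped archive.
const sample = 'Grailbird.data.tweets_2007_08 = [{"id_str":"1","text":"hello"}]';
const tweets = parseMonthFile(sample);
// tweets is now a plain array of tweet objects
```

Because the CSV version of the same data is also included, non-developers can open it straight in a spreadsheet instead.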
My previous archive, which I received from Twitter, contains more data because back then I requested all the data Twitter keeps about me, which includes direct messages, metadata, logins, IP addresses, contacts, etc. The data that is available per tweet in both archives is quite similar:
When comparing my old archive to the new one, what seems to be different, however, is the availability of a retweet count. The old archive contained a line “retweet_count”: *, which would show the number of retweets for that particular tweet. This (valuable) data has been removed from the new archive.
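A quick way to check such differences yourself is to compare the key sets of one tweet object from each archive. The tweet objects below are reduced examples for illustration, not the full field list of either archive:

```javascript
// Which per-tweet fields does the old archive have that the new one lacks?
function missingFields(oldTweet, newTweet) {
  return Object.keys(oldTweet).filter((key) => !(key in newTweet));
}

// Reduced example objects (hypothetical subset of the real fields):
const oldTweet = { id_str: '1', text: 'hi', retweet_count: 3 };
const newTweet = { id_str: '1', text: 'hi' };
// missingFields(oldTweet, newTweet) -> ['retweet_count']
```

Running this over a tweet from each download makes the removed fields explicit instead of relying on eyeballing the files.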
I have been using the username silvertje for several services and social media platforms for years, and people have been asking me where it comes from. The answer is IRC. When I first came online, sometime in 1995, I quickly discovered IRC and became an avid user :)
In contrast to current social networking sites and social media platforms, there was no way to “register” a username on IRC. This meant that you either had to hope no one else would use the same name, have a very unique username, or be online all the time and “claim” your username through persistent presence. Being on a 14K4 dial-up connection that cost over 5 Dutch guilders (about 5 dollars) per hour, I had a clear disadvantage compared to the American and Finnish IRC users who were able to be online all the time for free through their universities. So every time I used IRC I had to dial up, get online, get on IRC and hope no one was using my username. The first username I picked was ‘sliver’ (after the Nirvana song) but that one was taken very often. Then I decided on ‘_sliver_’ but that one was also often taken. Then I chose ‘slivertje’ to create a Dutch diminutive, but apparently another Dutch user had the same idea. Finally, I settled on ‘silvertje,’ which means little silver in Dutch, which never seemed taken on IRC, and I have happily been using it ever since (although I am not on IRC anymore).
Over the past couple of weeks I have joined a variety of new services, including App.net, State, Branch, Medium, Kippt and Buffer.
I recently backed my first kickstarter-ish project ever and decided to join App.net (AppDotNet or ADN). People keep asking me whether I think it can ever compete with Twitter, whether it will ever reach critical mass, or whether it will stay a ghost town like Google+. For me the question is not whether ADN will be able to “replace” Twitter; rather, I see it as a reflection of the current zeitgeist. ADN is not simply an ad-free alternative to Twitter. Alternatives to major platforms such as Facebook and Twitter are increasingly gaining momentum, and ADN is definitely not the first: think for example of Diaspora (launched as a Facebook alternative) and Identi.ca (formerly Status.Net), which calls itself “a stream oriented social network service” (FAQ). Neither service ever really went mainstream, maybe because they were both ahead of their time.
At first glance ADN seems similar to Identi.ca, but there is one important distinction, which also differentiates it from Twitter: with Identi.ca, “You can install the StatusNet software that runs Identi.ca on your own servers, since it’s Free and Open Source software. You can make groups, and share privately with those groups.” This allows you to run Identi.ca on your own server, a decentralized model, while both Twitter and ADN rely on a centralized model, which is very common for the current era of social media platforms. As a platform, ADN operates as software as a service, “a software delivery model in which software and associated data are centrally hosted on the cloud,” and offers an API for developers. The API is the core of ADN, and alpha.app.net is only one possible way an ADN application or service can look or function. Two great write-ups deal with these issues: first, Dan Wineman describes the relation between the social graph, publishing and aggregation and how social platforms like Twitter and ADN deal with these differently; second, Orian Marx describes what ADN is, what it could possibly be and how it differs from its alternatives. Yes, ADN costs 50 dollars (or 100 if you are a developer) and it is still a centralized service, but I can’t even begin to describe what has been developed with the ADN API in less than three weeks.
ADN isn’t the only alternative currently brewing to Twitter, which is increasingly shutting out other services and third-party developers. Dave Winer hypothetically proposes “A microblogging server that’s a simple install on EC2 or Rackspace or any other easy cloud-based server,” in other words a decentralized, easy self-install Twitter alternative in the cloud. Another initiative currently buzzing in the blogosphere is Tent.io, “a protocol for open, decentralized social networking,” which looks interesting, but Winer reminds us that “What matters is what software is supporting the protocol, what content is available through it and how compelling is the content.” There is also criticism of Tent.io for developing Yet Another Protocol when it could use existing protocols, which reminded me of the following XKCD comic on standards:
My username is @silvertje if you would like to contact me on ADN. I have created a Google Doc listing about 80 other Dutch ADN users. @adrianus has built Appnetizens streams, a Tweetdeck-like interface for ADN (for which I did some CSS color advice) with multiple-column view and tons of other features, such as a “Netherlands” view with all known Dutch users; @frankmeeuwsen has started a blog titled Appdotnet Culture, which documents ADN’s early developer and user culture; and @richardk writes about ADN developments. I’ve also created an IFTTT recipe that allows you to cross-post selectively from Twitter to ADN whenever a tweet contains the hashtag #adn.
I started using Buffer to cross-post some messages from Twitter to ADN using an IFTTT recipe I created: Send Tweets with Hashtag #ADN to App.net via Buffer. However, IFTTT just added ADN as a channel to its service, so I no longer have to pipe everything through Buffer; until I find another use for this service I am putting it on pause.
At first glance State looks like a Netvibes made for the platform and cloud era. It’s not simply a service to aggregate your streams, because State also allows you to interact with them: you can reply to your tweets and ADN posts, and when you click on a user you are brought to the user profile displayed within State. However, not all actions that can be performed on objects within these platforms are available yet. You can also add RSS feeds, but it is not immediately clear how this works. You can “search” for a feed, where it seems to search the web for your query and then grab the feed from the results. When I ego-search for myself I get feeds for my Flickr photos, Quora profile, etc., but I cannot seem to find the main feed for my own blog. Adding a custom feed by URL would be a great option. I’ve only used it for a few hours but I love it so far, and ReadWriteWeb calls it “A Streams App Of The Future.” It looks clean and minimal, and they respond very quickly to feature suggestions (they implemented a reply-to-Instagram-photos function after I suggested it on Twitter!), always a bonus :)
Update: Joshua from State kindly answered my question concerning the RSS feature. State is currently using “Google’s Feed API (https://developers.google.com/feed/) to search for feeds using the text you type into the box” which interestingly enough brings up the feeds for my presence elsewhere but not my own blog.
Branch, Medium, Kippt
Branch, Medium, Kippt are three more new platforms I joined recently for publishing, discussing and link sharing but so far I have merely glanced at them, as one can only spend so much time online.
On a final note, I’m happy to contribute as a female to all these new services, which are dominated by “alpha geeks,” aka white males, according to BuzzFeed’s latest article on the early adopters of these platforms.
The Institute of Network Cultures, Eva van den Eijnde and myself would like to welcome you to the official book launch of Geert Lovink’s new book Networks Without a Cause. A Critique of Social Media. Thank you very much for being here. Today I would like to start with a brief introduction to Geert’s new book and how it relates to his previous work. Afterwards Geert will talk about his new book, followed by a few questions and comments from Eva van den Eijnde and myself, and of course questions from the audience.
Networks Without a Cause is the fourth book in Geert’s series of studies into critical internet culture. For those unfamiliar with his work: the first book in this series, Dark Fiber (2001), deals with early internet culture, from cyberculture to dot.com mania. His second book, My First Recession (2003), describes the aftermath of the dot.com mania and looks at the transition from the dot.com crash to the early blogging years. His third book, Zero Comments (2008), looks back on the blogging hype that had since taken off and addresses blogs as an unfolding process of “massification” and blogging as a “nihilistic venture.” It also looks at the Web 2.0 hype, or Web 2.0 mini-bubble, which echoes the dot-com era but also differs from it, as Geert describes. His new monograph, Networks Without a Cause (2012), continues where Zero Comments left off by describing the late Web 2.0 era.
The introduction of Networks Without a Cause starts with the important umbrella question “How do we capture Web 2.0 before its disappearance?” The rise of the real-time signifies a fundamental shift from the static archive and hand-coded HTML websites toward “flow” and the “river” as metaphors of the real-time, where the software of social media platforms automatically generates content flows from the input of their users. Blogs and blog software have played an important role in this shift, with the reverse chronology of blog entries and the river of fresh content produced by RSS feeds. Real-time is a key feature of social media platforms such as Facebook with its news feed and Twitter with its timeline, where content flows by. This raises the question for researchers of how to capture and archive this flow in order to analyze it, and for Geert also the question of “why store a flow?”, related to the notion of users no longer saving their files for offline retrieval but instead moving, storing and syncing everything in the cloud (think for example of Gmail and Dropbox). There is also the question of identity management, because “how do you shape the self in real-time flows?” (p. 11) These and many other questions posed throughout the book are part of a “Net criticism” project that seeks to develop sustainable concepts as individual building blocks that through dialogues and debates “will ultimately culminate in a comprehensive materialist (read: hardware- and software-focused) and affect-related theory.” (p. 22)1
Question 1: Web 2.0 versus social media
Is it a coincidence that a number of books dealing critically with “social media” are coming out at the same time? There is this book, Networks Without a Cause, with its subtitle A Critique of Social Media. There is also The Social Media Reader, a volume on the topic with contributions by well-known authors, in whose introduction the term Web 2.0 is called a buzzword that on the one hand has been “emptied of its referent, it is an empty signifier: it is a brand” (p. 4)2 but on the other hand encapsulates an aspect of the phenomenon of social media. And finally there is the upcoming book by Andrew Keen, Digital Vertigo, which addresses the threat of the social and the tension between the collective social and the individual in “today’s creeping tyranny of an ever-increasingly transparent social network that threatens the individual liberty.”3 Geert also addresses related issues in his book when he describes “the social as a feature”: “Social media as a buzzword of the outgoing Web 2.0 era is just a product of business management strategies and should be judged accordingly.” (p. 6)
Is Web 2.0 a thing of the past? As a lecturer in the first year of Mediastudies at the University of Amsterdam I was surprised to learn this year that my students were not familiar with the term Web 2.0 at all! Everyone had heard of social media, and everybody, except for one privacy-conscious student, was a member of Facebook, but none of them had heard of the term Web 2.0. This is also illustrated in the following image:
What is the relation between Web 2.0 and social media when thinking not only about terminology but also about software, practices and critiques?
Question 2: Comment cultures
While in Zero Comments Geert focused on the average blog with its zero comments, in Networks Without a Cause he focuses on the other end of the power-law diagram and looks at blogs that have reached a critical mass. In the introduction he writes about Web 2.0: “Current software invites users to leave short statements but often excludes the possibility for others to respond. Web 2.0 was not designed to facilitate debate with its thousands of contributions. […] What the back-office software does is merely measure ‘responsiveness’: in other words, there have been that many users, that much judgment, and that little debate.” (p. 19)
While blogs offer a form of facilitated debate through the possibility of comments, the form is highly hierarchical due to the strict separation of content and comments. On top of that, bloggers are continuously debating how to improve the old blog comment infrastructure in order to deal with the “tragedy of the comments,” which has caused some bloggers to shut down their comments.
Geert argues that thinking about the software architecture that shapes the comment ecology is important because software co-produces a social order. Could you further elaborate on current comment cultures, your ideas for going beyond taming the commentators, and the increasingly splintered comment ecology, with the conversation also moving to social media platforms such as Twitter and Facebook with no proper way to connect all these distributed comments back to the original text?
- Lovink, Geert. Networks Without a Cause: A Critique of Social Media. Polity Press, 2012.
- Mandiberg, Michael (ed.). The Social Media Reader. New York University Press, 2012.
- Keen, Andrew. Digital Vertigo: How Today’s Online Social Revolution Is Dividing, Diminishing, and Disorienting Us. Forthcoming, May 2012.
Last year Erik Borra, Taina Bucher, Carolin Gerlitz, Esther Weltevrede and I worked on a project “One day on the internet is enough” which we have since referred to as “Pace Online.”
The project aims to contribute to thinking about temporality, or pace, online by focusing on the notion of spheres and distinct media spaces. Pace isn’t the only question of interest; the objects themselves, and the relation between objects and pace per sphere, also matter in this study, both in terms of how the engines and platforms handle freshness and in terms of the currency objects they use to organize content. Moving beyond the general conclusion that there are multiple presents, or a multiplicity of time, on the internet, we can try to start specifying empirically how paces differ and overlap. The aim is to specify paces and to investigate the relation between freshness and relevance per media space. The assumption is that freshness and relevance create different paces and that the pace within each sphere and platform is internally different and multiple in itself. (continue reading on the project wiki page)
I was reminded of the project when I read Rethinking the Digital Future, a piece in the Wall Street Journal on David Gelernter and the lifestream. Gelernter describes a particular relationship between streams and pace when talking about the worldstream and an individual stream. In such a subset of the worldstream things move at a slower pace, because individual objects are added less frequently than in the aggregate, the worldstream. We argue something similar in Pace Online, where, translated into Gelernter’s vocabulary, this worldstream consists of different spaces with different paces. Zooming into a space, such as Twitter or Facebook or Flickr, creates a subset within the worldstream. Numerous subsets of subsets may be created: one can zoom into the stream of Twitter and then further zoom into this stream based on a hashtag or an individual user profile, where each of these subsets of streams has a different pace.
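This zooming can be put in computational terms: focusing a stream simply yields another, smaller stream. A sketch, with a made-up item shape (time, source, tags) purely for illustration:

```javascript
// Focusing a stream yields another stream: a filtered,
// still time-ordered list of items.
function focusStream(stream, predicate) {
  return stream.filter(predicate);
}

// A toy "worldstream" of time-ordered items from different spaces:
const worldstream = [
  { time: 1, source: 'twitter', tags: ['adn'] },
  { time: 2, source: 'flickr',  tags: [] },
  { time: 3, source: 'twitter', tags: [] },
];

// Zoom into Twitter, then zoom again on a hashtag; each subset
// is itself a stream, with its own (slower) pace.
const twitterStream = focusStream(worldstream, (item) => item.source === 'twitter');
const adnStream = focusStream(twitterStream, (item) => item.tags.includes('adn'));
```

The point the sketch makes is structural: every focus operation returns an object of the same kind as its input, which is why subsets of subsets of streams compose so naturally.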
In “Time to start taking the internet seriously” (2010) David Gelernter describes a shift from space to time, and with it the lifestream as the organizing principle of the web: “The Internet’s future is not Web 2.0 or 200.0 but the post-Web, where time instead of space is the organizing principle.” Interestingly enough, he does see a history in the fleeting stream: “Every month, more and more information surges through the Cybersphere in lifestreams — some called blogs, “feeds,” “activity streams,” “event streams,” Twitter streams. All these streams are specialized examples of the cyberstructure we called a lifestream in the mid-1990s: a stream made of all sorts of digital documents, arranged by time of creation or arrival, changing in realtime; a stream you can focus and thus turn into a different stream; a stream with a past, present and future. The future flows through the present into the past at the speed of time.” A stream with a past is something rare: for example, you cannot go back to your first tweet if you have published over 3,200 tweets on Twitter, and you cannot search for tweets over 14 days old. While Twitter partner Gnip announced “Historical Twitter Data” yesterday, this history of tweets reaches back only 30 days. It also points to an interesting relation between the past, present and future of a stream, as it offers the past because we cannot anticipate the future:
“We have solved a fundamental challenge our customers face when working with realtime social data streams,” said Jud Valeski, Co-Founder and CEO of Gnip. “Since you can’t predict the future, it’s impossible to filter the realtime stream to capture every Tweet you need. Hindsight, however, is 20/20. With 30-Day Replay for Twitter, our customers can now replay history to get the data they want.” (Gnip Blog)