My article “The Algorithmization of the Hyperlink” has just been published in the third issue of Computational Culture: a journal of software studies.
This study looks at the history of the hyperlink from a medium-specific perspective by analyzing the technical reconfiguration of the hyperlink by engines and platforms over time. Hyperlinks may be seen as having taken on different roles in specific periods: the hyperlink as a unit of navigation, a relationship marker, a reputation indicator and a currency of the web. The question here is how web devices have contributed to constituting these roles and how social media platforms have advanced the hyperlink from a navigational device into a data-rich analytical device. By following how hyperlinks have been handled by search engines and social media platforms, and how they have in turn adapted to this treatment, this study traces the emergence of new link types and related linking practices. The focus is on the relations between hyperlinks, users, engines and platforms as mediated through software, and in particular on the algorithmization of the hyperlink through short URLs by social media platforms. The important role these platforms play in the automation of hyperlinks through platform features, and in the reconfiguration of the link as a database call, is illustrated in a case study of link sharing on Twitter.
I have just arrived in San Diego, where I will attend the Software Studies Workshop and present my current research at UCSD. After that I will go to the HASTAC II Conference in Irvine, where I will be on the Software Studies panel, and then on to Los Angeles.
Many thanks to the Institute of Network Cultures, which made this trip possible.
The following post is a combination of a transcription of Manovich’s keynote and my own notes and commentary.
Introduction by Geert Lovink
Online video is renegotiating its (problematic) relationship with cinema. It deals with cinematographic principles versus the principles of the online age. We cannot directly transfer the cinematographic principles into the online age as new media has its own specificities. YouTube is not just video on the web but YouTube is a natively digital object.
Ten years ago Lev Manovich proposed to consider the database as the (new) dominant media form. The database is the hegemonic media form online, as can be seen on YouTube, Flickr, MySpace and Google. We should think beyond technology now that the database is also becoming a dominant social form. The database is shaping the social.
User Generated Content by Lev Manovich
After the novel, and subsequently cinema privileged narrative as the key form of cultural expression of the modern age, the computer age introduces its correlate – database. Many new media objects do not tell stories; they don’t have beginning or end; in fact, they don’t have any development, thematically, formally or otherwise which would organize their elements into a sequence. Instead, they are collections of individual items, where every item has the same significance as any other. (Manovich, Database as a Symbolic Form)
These individual items could be considered little narratives. Though it is debatable, one could argue that within the database structure the individual elements are in fact intensified little narratives.
It is interesting to note that Manovich starts his talk by stating “I shouldn’t be here.” Even though he has YouTube, Flickr and MySpace accounts, he doesn’t use them because he is too shy. He dislikes talking from an expert point of view, as he is more of an observer than a participant.
The problems of user-generated content
The challenge user-generated content (UGC) presents to media theory is the same one it presents to programmers: scale. As the number of people producing content grows, new social challenges arise, such as the question of quality. The term user-generated content was coined by the industry, and it is also misleading: it is an umbrella term that simply counts users and tends to homogenize the content. Is every picture uploaded to Flickr meaningful? Content is created and uploaded for different reasons, purposes and audiences. Not all content is intended for wide distribution: some pictures are uploaded only for friends and family, others for general viewing. Some pictures are taken especially for (special-interest) Flickr groups and pools, which illustrates that every picture has its own purpose and meaning.
New social media behaviors
Both hardware and software (and interfaces, if we choose to put the interface between hardware and software instead of seeing it as software) direct new users to turn their media into social media. YouTube pushes you to interact by offering an abundance of “social options” such as Share, Post video and Add to groups. I recognize this trend in the use of my new mobile phone. Not only is it my first mobile phone with an integrated camera, it also gives me the option to publish a picture on the web immediately. This has been made even easier by installing a piece of software called Shozu, which pops up a “Send to Flickr” dialog right after I have taken a picture. As a result I upload nearly every picture I take to Flickr. My mobile phone creates new social media behaviors.
These newly created social media behaviors also call for a new field of study, one that brings into focus the elements of digital culture created by software. Software shapes media behavior, and that is why we need to study it.
Henry Jenkins’ Convergence Culture critique
Manovich’s main critique of Jenkins’ assumptions about user-generated content is that Jenkins does not evaluate the content. There is the underlying assumption that everything the fans create is good. Even though Jenkins comes from the humanities, he takes a sociological approach and doesn’t look “inside” the content. We need to ask: what is the grammar of the content? What is it composed of?
Users and Templates
Is the content produced by using old models, templates and iconography copied from mass media? Who creates the new models? Are they still created by professionals? Templates are no longer provided only by professionals, such as the Word templates provided by Microsoft; nowadays amateurs produce templates too. In relation to my thesis this can be applied to user-generated WordPress themes and plugins. Very few of these themes are created by so-called professionals. Where do we draw the boundary between professionals and amateurs? Are the default WordPress themes created by professionals? Most WordPress themes are created by the user community, which we may label as amateurs. Some of these users are professional web designers or coders, but others create themes for fun, recognition or money. To refer back to Henry Jenkins, the themes are also part of remix culture: users adjust and adapt existing themes to their own needs. Keeping in mind Manovich’s critique of Jenkins, not all themes are of the same quality; not all themes are written according to W3C standards, for example.
Models, templates and iconography are part of the cultural DNA of content. We should not only study the circulation of content but also this underlying cultural DNA.
Critique of the Long Tail
The long tail is often presented as having a fixed form, while it actually comes in different shapes. Not only is the curve changing over the years, it also differs per industry. The long tail in architecture, for example, is very steep, with just a few major architects such as Rem Koolhaas. Manovich asks what the different shapes of the long tail are in terms of popularity. But we should not only see the long tail through the lens of popularity; we could also see it through the lens of quality or quotability: how many elements of a piece of content are used by others to produce a new piece of content?
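The difference between a steep and a flat tail can be made concrete with a small sketch. Assuming, purely for illustration (the exponents and the Zipf-like model are my assumptions, not Manovich's data), that popularity falls off as rank^(-s), the exponent s determines how much attention the head of the curve captures:

```python
# Minimal sketch: "shapes" of the long tail as Zipf-like curves rank^(-s).
# The exponent values are illustrative assumptions, not measurements.

def top_share(exponent, n=10_000, top_fraction=0.01):
    """Share of total 'popularity' held by the top items of a rank^(-exponent) curve."""
    weights = [rank ** -exponent for rank in range(1, n + 1)]
    top_n = max(1, int(n * top_fraction))
    return sum(weights[:top_n]) / sum(weights)

# A steeper curve (as in the architecture example) concentrates attention
# in the head; a flatter one spreads it along the tail.
for s in (0.8, 1.2, 2.0):
    print(f"exponent {s}: top 1% holds {top_share(s):.1%}")
```

The same function could just as well weight items by quotability instead of popularity: only the interpretation of the "weight" changes, not the shape analysis.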
To be able to adequately analyze global culture (the numbers of professionals, prosumers and users continuously growing) – patterns of creation, consumption, circulation and remix of content – we need new tools. (Manovich slide)
This is exactly what we are dealing with at the Digital Methods Initiative: natively digital objects need new tools and research methods that take the natively digital into account.
Future (media) theory will be software-based. The analysis must be able to deal with the scale of contemporary culture. On top of that, the gap between cultural tools and industrial tools (for example data mining) must be bridged. We need large displays to visualize the immense amount of data that is being produced. New software will be based on theoretical tools, and software theory will do justice to the scale of contemporary cultural production. The form of scale we are currently dealing with is new and unknown to the humanities. The content we find online is only a small part of the totality of cultural circulation. Instead of a structuralism like semiotics, where the structure is imagined and tested against individual texts, here the individual movements (flows) of content form an emergent structure.
On top of quantitative analysis we also need qualitative analysis that deals with questions such as what happens when you switch on a certain piece of technology. It is a double approach that not only looks at software but also studies it using software. Michael poses the important reflexivity question: the software we use to study software has certain assumptions embedded in it. The Wikiscanner, for example, has a particular vision of Wikipedia built into it.
It will be interesting to see what kind of new cultural reflections these (new) tools will lead to.
Article Series - Video Vortex
- Video Vortex, Responses to YouTube : 5 October @ Argos, Brussels, Belgium
- Lev Manovich on User Generated Content @ Video Vortex
Adrian Mackenzie, Cutting Code: Software and Sociality. Peter Lang, New York, NY, USA, 2006
216 pp. Paperback, $31.95 USD
Cutting Code addresses a subject that has previously been marginalized due to its invisibility: software. Software is a highly mutable object entangled in a web of relations. Mackenzie thus sees software as a social object and process, intrinsically linked to code as a material and practice. Software has previously been studied from a formalist approach, by Manovich for example. The problem with such an approach is that software is abstracted from the practices and contexts surrounding coding and reduced to “relations and operations (such as sorting, comparing, copying, removing) on items of data.”1 These relations and operations are seen as quite stable forms and are often directly transferred from the field of computer science. Instead of abstracting and formalizing software, Mackenzie argues for an ontology of software that deals with its mutability and its web of relations. Code is at the core of this web that software weaves:
[...] it treats the sociality of the software, the relations that obtain in its neighborhood, as mutable, involuted agential relations indexed by code.2
Mackenzie contributes to the emerging field of Software Studies with an interesting take on code and software. We should render software visible and notice the agency it provides, generates and distributes:
At stake here is an account of software as a highly involuted, historically media-specific distribution of agency. This account diverges from a general sociology of technology in highlighting the historical, material specificity of code as a labile, shifting nexus of relations, forms and practices. It regards software formally as a set of permutable distributions of agency between people, machines and contemporary symbolic environments carried as code. Code itself is structured as a distribution of agency.3