
From Traces to Collective Memory: Databases and the Development of the Field of Electronic Literature
By Scott Rettberg

Professor of Digital Culture at the University of Bergen and H2H Labex International Chair

About the author

Scott Rettberg (b. 1970) is Professor of Digital Culture in the Department of Linguistic, Literary, and Aesthetic Studies at the University of Bergen, Norway. A US citizen and native of Chicago, Rettberg has lived in Scandinavia since 2006. Rettberg was the project leader of ELMCIP (Electronic Literature as a Model of Creativity and Innovation in Practice), a HERA-funded collaborative research project from 2010 to 2013. Rettberg is the cofounder of the nonprofit Electronic Literature Organization and served as its first executive director, directing major projects funded by the Ford Foundation and the Rockefeller Foundation. Rettberg is the author or coauthor of novel-length works of electronic literature, combinatory poetry, and films including The Unknown, Kind of Blue, Implementation, Frequency, Three Rails Live, Toxi•City, and others. His creative work has been exhibited online and at art venues including the Chemical Heritage Foundation Museum, Palazzo delle Arti Napoli, the Beall Center, the Slought Foundation, the Krannert Art Museum, and elsewhere.

 


Key words
Electronic literature; collective memory; archiving; hypertext; ELO; ELMCIP; collective development.

Keynote talk for “The Digital Subject: Questioning Hypermnesia” – International and Transdisciplinary Conference, Paris 8 University, November 13, 2012. Revised October 2013.

Recovered memory

I have recently read two books that I think speak in interesting ways to the topic of memory: Paul Auster’s autobiographical Winter Journal and D.T. Max’s Every Love Story Is a Ghost Story, a biography of David Foster Wallace.
Auster’s Journal is a kind of memoir, written by the author as he approached his sixty-fourth birthday. It is a searching, honest, and entertaining book. Auster meditates both on aspects of his life that have brought him joy and on those that have brought him pain, and reveals himself as a complex and imperfect character. He includes, for instance, a great deal of information about his sexual history, ranging from the lovers with whom he has shared a bed, to a venereal disease he once caught, to the pleasures of his marital bed, among other intimate details. He details with equal clarity a memory of wetting his pants in the car as a child and of drinking himself to the floor on hearing news of his mother’s death. For the purposes of considering hypermnesia, however, what most interests me is the dominance of lists of banal facts in his accounting of his life, and how these function to generate narrative. Though the book flows in and out of chronology and weaves from theme to theme in an impressionistic way, several different lists serve as its dominant structuring devices, most prominent among them a list of scars Auster has acquired over the course of his lifetime and a list of places where he has lived.
Auster writes of the “inventory of scars” that confronts him each time he looks in the mirror:

"You seldom think about them, but whenever you do, you understand that they are the marks of life, that the assorted jagged lines etched into the skin of your face are letters from a secret alphabet that tells the story of who you are... Contingent facts as opposed to necessary facts, and the realization as you look into the mirror this morning that all life is contingent, except for the necessary fact that sooner or later it will come to an end."

Auster moves from his scars to accounts of the incidents that produced them. Each scar leads to the recovery of a memory, and in a sense serves to help Auster recover a part of himself, or an earlier version of himself, whether as a three-year-old doing a belly flop onto a carpenter’s bench, finding the sharp end of a nail, and tearing apart his left cheek, or as a twelve-year-old boy tackled, while throwing a baseball high into the air, by another open-mouthed boy whose teeth tore into the skin above Auster’s left eye during the collision. As the sixty-four-year-old author Paul Auster finds his way back to the memories of the boy Paul Auster through these marks, these etchings in his skin, his body becomes a kind of database. His scars are data that he uses to recover aspects of his own character. The point I am trying to make here is that these seemingly trivial facts, aggregated and post-processed, become keys that the author uses to unlock a deeper understanding of himself. These scars are tags on the skin, and recalling their origin reveals the networks of cognitive relations on which human memory is based.

Sixty-five pages of Winter Journal are preoccupied with a list of places Auster has lived. Each of the entries in this section of the journal begins with an address: mostly of abodes in New Jersey, New York, or France. Some of these twenty-one locations stir little memory beyond scant architectural details, but more often the discussion of a particular place opens an avenue for Auster to recall personal memory. In writing back to and through the rooms in which he has lived, loved, suffered and written, Auster is again treating an aspect of his life—his geolocation—as a database. The mere recitation of the facts of locations leads to a process of recovering memory, which in turn leads to the construction, or reconstruction, of a character. Given how much of a writer’s life—any life really—takes place in small rooms, it is appropriate that a list of those rooms becomes a database for personal memory, as these addresses are data-mined by the author for their content as links to memories within his own brain.

D.T. Max’s biography of David Foster Wallace, Every Love Story Is a Ghost Story, makes for an anxious if engaging reading experience. The author of Infinite Jest took his own life in 2008 at the age of forty-six after a long struggle with medicated depression. Wallace’s fiction and non-fiction were characterized by an acute and hyperbolic sense of self-awareness. Wallace was an ironist precisely attuned to the conditions of his own subjectivity, who found the world an irresistible buffet of too much information that could not possibly be described in all of its complex relations. In his books and essays Wallace set about trying to describe those complex webs of relationships between individuals, between institutions, and between media. Wallace mapped contingencies of power relations, individual and societal complexes, and enmeshed subjectivities. His work conjoins the intellect of an analytical philosopher with the energy of a manic high on the datastream of the world. His texts are extremely self-conscious as metafictional constructs—artifacts that contain within themselves an embedded awareness of their artificial nature. Wallace set about describing the world not by presenting the work of literature as a transparent window with a view onto the world, but as a construct fascinated with its own construction, filtered by the subjectivity of an author who could rarely resist embedding himself within his narratives. As much as the author is dead, Wallace’s novels, essays, and stories seem to argue, a representation of his consciousness, or rather his self-aware construction of his consciousness, is always present in the text. You can’t get him out of it, or perhaps even more importantly, he can’t get him out of it.

To come back to a point about collective memory and the database, the experience of reading Every Love Story Is a Ghost Story was distinct for me for a couple of significant reasons. Twenty years ago Wallace was my writing teacher and thesis adviser, and I knew him fairly well during an intense, short period of my life when I was studying fiction writing in the M.A. program at Illinois State University. I spent many hours with him, talking about writing, literature, politics, the news, music, celebrity, etc., and knew him both as a teacher and as a flawed human being. After my graduation, we had a falling out over our mutually exclusive romantic interests in one particular woman and never spoke again. I followed his writing and never lost respect for the contributions he made as a teacher, but we never had the occasion to mend fences. So I had this sense that I knew him, and I did know him well for a time, even as I always had a sense that I knew only a sliver of him. I was deeply shaken when I heard of his suicide, experiencing it both as a loss for literature and as a personal loss.

In some sense, Max’s book is typical of any biography of a literary figure: Max did some work with texts, with Wallace’s writing and correspondence. But the enterprise was peculiar in other ways. Most literary biographies are written well after their subjects are dead, and so the availability of people who actually knew the subject during different parts of his or her life is limited, as most of those potential respondents are also dead. In Wallace’s case, however, because he died young, the vast majority of his contemporaries are still alive and his presence is fresh in their memories. Because Max started his writing project within a year of Wallace’s death, many of those respondents were still struggling with that loss and so were perhaps more willing to open up to the biographer than they otherwise would have been. As it emerges in Max’s depiction, Wallace’s life was highly compartmentalized and his interior subjectivity closely held. In tracking down both those who knew Wallace well, such as his parents, sister, wife, editor, and agent, and those who knew him more distantly, Max began to put together a kind of amorphous mosaic built out of individual conversations with people who knew Wallace, each conversation with one person leading to conversations with another, a network of faceted relations slowly filling in a collective representation of the author. When I was talking with Max on the phone during one of these interviews, at a certain point I asked him why he was doing this. Was his interest in writing a kind of celebrity biography made for the mass market, to parasitically feed off the fame of a recently famous corpse, or was he hoping to somehow provide readers with a deeper understanding of Wallace’s books that they could not otherwise gain from reading those books or the many products of the critical industry that has sprouted up around them? Max paused heavily before he responded that he was “writing a life,” and he seemed acutely aware of the kind of Frankensteinian ethical responsibilities this task entailed.

In reading the biography, I felt a queasy discomfort, as a person I once knew in the messy physical space of the world was becoming in a way flattened, compressed, characterized. The life project of the author, finalized by his own suicide, now had a definite arc and teleology. With all of the available facts at his disposal, the biographer was now able to pull out the themes of a complex life, and distill them in a way that the author himself, the conscious entity that was once David Foster Wallace, would never have been able to. And though the book suffers from some of the usual problems of biography—all of the conversations and notes and letters are ultimately filtered by Max to fit into the diagnostic narrative the biographer is telling as he is “writing a life”—it occurred to me that no single individual—not Wallace’s parents, not his wife, and most likely not even Wallace himself—knew him in this way, until all of these prismatic memories were aggregated, assembled, printed, and bound into a finished life. The book of collective memory is both a kind of tomb and a type of overview never permitted the subject in the course of biological life.

 

False memory

The most fundamental technological enhancement or extension of memory is of course writing itself. And just as with every other technology that extends memory, we have long been warned that we should distrust it. In Plato’s Phaedrus, Socrates recounts the story of an encounter between the Egyptian god Theuth, “the inventor of many arts, such as arithmetic and calculation and geometry and astronomy and draughts and dice,” whose greatest discovery was the use of letters, and Thamus, the king of Egypt. While Theuth claims that letters will “make the Egyptians wiser and give them better memories,” King Thamus cautions that if men learn to write:

[it] will create forgetfulness in the learners' souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing.

When I read these words, I think of my iPhone and the various other devices, networks, and platforms to which I have entrusted my vital information. I can barely remember my own phone number, and certainly not those of my wife or closest friends. My phone does that for me. I may or may not be able to tell you precisely where I am scheduled to be in two weeks, but my calendar knows. I can’t imagine where I was ten years ago today, but I could likely log onto Flickr and provide you with an image of the very place. What were my preoccupations two years ago today? I have no idea, but I can probably find some trace of them in my Facebook timeline. I can never remember what day of the month I am supposed to put my recycling on the curb, but my phone chimes a Pavlovian reminder once a month as I approach the stairs to my building. Perhaps more than ever before we are outsourcing the functions of human memory not only to letters and not only to our own devices but to networks and services that we constantly feed but over which we have little control.

It may be true that we become less reflective subjects the more we rely on our technologies of memory, and the less we rely on the exercise of our own cognitive faculties. And yet here I am, tethered to Twitter and Facebook, asking my phone to provide me with turn-by-turn directions to my favorite duck restaurant in Montmartre. Perhaps I will stop for a moment to consult Wikipedia for the history of the corner on which I stand. I will find myself on Google Maps. I am deeply ambivalent about this displacement and enhancement of my powers of memory: cognizant that something has been lost and yet conscious that something else has been gained. N. Katherine Hayles has described our contemporary situation as that of posthuman subjects. Our increasing reliance on these networked services to guide our journeys through life as embodied subjects, our willingness to abdicate our memories to smartphones and hard drives, and our willingness to commit our personal relationships to the cloud are convincing evidence that we have already become posthuman.

At the 2012 Remediating the Social conference, Roberto Simanowski made the observation that aspects of social networks such as the Facebook timeline feature might be supplanting the function of the personal diary, and that this might be part of a broader turn away from the process of reflective development of the self. Rather than a private reflexivity represented in personal writing, a generation, he argues, is feeding chronicles with meaningless facts and pithy observations connected by a template form but not by a narrative subjectivity. The endpoint of this, Simanowski fears, might in fact be the comprehensive loss of personal subjectivity. In his view, the sharing on social networks is not functioning to build community, but is instead a kind of collective exhibitionism that norms those who take part in it. While I see the dangers here in offering up not only so much of our data, but also our relationships, perhaps even the structure of our own subjectivity, to vast networks beyond our control, I know that without Facebook or Twitter I would not be arguing politics with my college roommate, or following my cousin’s journey through South Africa, or a friend’s process of developing a new work of digital literature. Like so many others I am enmeshed in this problematic symbiosis, and I am not rejecting it.

In the environment of a social network as commercialized, and one might even say predatory, as Facebook, every time I share a link, or an observation, or a photograph, or a memory, I am simultaneously serving the most comprehensive marketing survey ever known to man. I am thus allowing my thoughts, writing, friendships, and identity to be monetized and manipulated by a vast corporate enterprise to which I am nothing but a commodity. Yet I am also aware of the fact that without these networked services, I would not be able to share, construct, develop, communicate, and remember with so many people in the many ways that I can now. For all their contingencies, and for all the asymmetry of my relationships with these networks of which I am a willing subject, I am getting something back that I would not get without these social networks and the corporate networks that control them.

I am ambivalent about and deeply troubled by my own trust in these services and technologies, whose terms of service I rarely hesitate to agree to. As media for memory, they are bound to fail me. Just to provide one example that has brought me some recent heartache: after the screen of my last laptop began to flicker with alarming regularity and its fan to spin up far more often than it should (exhibiting the same death pangs any Apple product seems to suffer after three years of use), I purchased a new machine and spent some time transferring my files. While I was doing that, I looked through some of the older files that I have managed to transition from computer to computer over the years, including some of the fiction I wrote back when I was a student in the early 90s. You can imagine my surprise when I opened these directories to find that the system could no longer recognize any of these files as documents. I did most of my early writing on the computer in WordPerfect or Word format. After some searching online, I was able to find software that could bring the WordPerfect files back to legibility and allow them to be read by Word and other applications. The files that appear to be intractable, however, are those I wrote using an earlier version of Word for the Mac. Word cannot read this earlier version of Word. Word cannot read my Word files. The very software in which they were created has betrayed the documents that I entrusted to my hard drive and transferred diligently over the years. The bits are still there, but the software in which they were created has forgotten how to read them. Less than twenty years after their initial creation, these files, produced using one of the most widely adopted software platforms of the day, using software from a company—Microsoft—that has never gone out of business, are now functionally unreadable. You can easily extrapolate what this might mean for files produced using less-adopted platforms, such as those that have been used in the creation of experimental works of electronic literature, the field that has been at the center of my scholarly and creative life for the past two decades.

Any discussion of archiving electronic literature almost inevitably leads to a litany of laments: the rapid shifts in software and hardware, as well as the comparatively short life cycles of the physical storage media of digital works, combined with the complexities of the copyright and patent laws involved in works that were often produced using proprietary computer programs, the relatively small base community of readers and users who actually care if the text remains legible, the lack of library acquisitions or common cataloguing standards, and so forth have together made for a field in which the rapid technological obsolescence of the primary subject matter is a central concern. Critics and authors of electronic literature have thus found themselves thrown into some roles they might never have expected to play: the scholar becomes publisher, librarian, and archivist, cobbling together an infrastructure where one did not previously exist. And yet I think this void has also presented us with opportunities to collectively develop both new research infrastructures and new terms for using them. Where no conventions exist, new ones can be invented.

In comparison to some other relatively new fields, such as the broad field of communications, film studies, or gender studies, electronic literature, as it has developed as an academic subject, is not easily located as a child or sub-discipline of any other field. While many literary scholars and theorists are active here, so too are many communications scholars, art historians, computer scientists, designers, creative writers, visual, conceptual, and performance artists, sound engineers, curators, and others. So some basic disciplinary ontology is up for grabs. If literary studies is the closest thing we have to a parent discipline, this relationship is complicated by the fact that our objects of study are fundamentally different: on the one hand, relatively fixed literary texts generally created for reflective reading in print media; on the other, literary computer programs, computational artifacts, and processes. While we can port some of our critical methodologies and concerns from print to electronic literature, that set of tools alone does not make a field of electronic literature possible. And the institutional apparatus of print literary culture, such as publishers and bookstores, canonical anthologies, teaching methodologies, libraries, and archives, has not been prepared for this influx of new categories of literary objects and experiences.

I have been involved with a number of community efforts to step into this breach over the past dozen or so years: several projects of the Electronic Literature Organization, including the Electronic Literature Collection and the Electronic Literature Directory, and most recently the ELMCIP Electronic Literature Knowledge Base, offer some pathways, potentially useful models of how a community of creative and scholarly practice might harness collective memory not only to preserve the past of a field, but also to provide a platform for its further development.

I won’t say too much here about the two Electronic Literature Collections, the two anthologies published by the ELO both online and on physical media under Creative Commons licenses, except that I think that publications like these are absolutely essential. The ELMCIP Anthology of European Electronic Literature, published in 2012 on the Web and on USB drives, is the latest manifestation of this sort of activity. By providing at least some tentative measure of fixity for these works and aggregating them at one address that does not change; by making them available on physical media and distributing them widely; and by encouraging teachers, students, and readers to copy and share works of e-lit, these anthologies take some incomplete steps towards addressing the problems of archiving electronic literature. Perhaps most importantly, these collections have provided a stable shared set of referents for a field in the process of defining itself. Yet it would be a huge mistake to build a field only around the works that have been somewhat arbitrarily published in these anthologies. More complex renderings of the field are necessary: databases that can help to provide information about and access to many works, that can provide a sense of critical and cultural context, and that can establish an apparatus for documenting collective memory of works and practices that might or might not be readable in five, ten, or twenty years.

 

How Hypertext Helps Us to Imagine a Better Archive

I am taking a winding path to the ELMCIP Electronic Literature Knowledge Base I am here to tell you about, but before I do, I would like to take another brief detour and describe a few aspects of hypertext that have informed the process of developing this database and archive.

Some of the core ideas of hypertext originated with Vannevar Bush’s 1945 essay “As We May Think,” in which he imagined the Memex, a microfilm-based device that would serve as “an enlarged intimate supplement to [his] memory.” The essential feature of Bush’s Memex would be “the process of tying two things together” in order to create “trails” – that is to say, associative indexes. Also key is the notion that any item can be joined to any number of different trails. So given the set of data available within an individual Memex, any given entity within it could be joined into different non-hierarchical arrangements of knowledge.

Ted Nelson, who coined the term hypertext, alternately defined it as “non-sequential writing—text that branches and allows choices to the reader, best read at an interactive screen” and as “a body of written or pictorial material interconnected in such a complex way that it could not conveniently be presented or represented on paper.” Though Nelson’s definitions of hypertext are essential, in considering how to develop a database that both preserves and makes possible new development of the field of electronic literature, what I keep coming back to is Nelson’s discussion “What is Literature” in Literary Machines. Nelson proposes that “a literature” is “a system of interconnected writings” and that therefore “almost all writing is part of some literature.” Nelson continues:

"These interconnections do not exist on paper except in rudimentary form, and we have tended not to be aware of them. We see individual documents but not the literature, just as people see other individuals but tend not to see the society or culture that surround them. The way people read and write is based in large part on these interconnections."

The greatest power of hypertext as Nelson imagined it is to make the connections that bind these diverse entities or instantiations of writing into “a literature” both visible and usable—we don’t just know that the connections exist; we can actually follow the links. Further, Nelson has an important awareness of the fact that any given object within a hypertext system, any given piece of writing, exists simultaneously in multiple literatures, multiple systems of knowing, both those that exist in the present, and those that might be imagined in years to come. So any discrete text within a hypertext could potentially be framed and connected in a myriad of different ways by different actors, each with competing agendas and narratives, all within the same system. A core component of Nelson’s idea of hypertext is this idea of the hypertext as a flexible knowledge base, which can allow for multiple pathways and arrangements of any given set of entities.
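To make this idea of multiple simultaneous arrangements concrete, here is a minimal sketch in Python, a loose illustration of Nelson-style trails rather than any actual system: a single pool of writings, over which any number of named trails impose different, non-hierarchical orderings, with any item free to appear in many trails at once. All the names and data below are invented for the example.

```python
# A minimal illustration of Nelson-style trails: one pool of entities,
# many simultaneous, non-hierarchical arrangements of them.
# All names and data here are invented for this sketch.

writings = {
    "w1": "afternoon, a story",
    "w2": "My Body & A Wunderkammer",
    "w3": "The Unknown",
}

# Each trail is just an ordered selection from the same pool;
# an entity may appear in any number of trails at once.
trails = {
    "memory and the body": ["w1", "w2"],
    "1990s hypertext fiction": ["w1", "w2", "w3"],
    "works with embedded self-documentation": ["w3"],
}

def follow(trail_name):
    """Yield the writings along one trail, in order."""
    for key in trails[trail_name]:
        yield writings[key]

for title in follow("memory and the body"):
    print(title)
```

The point of the sketch is that no trail is privileged: the same writing can be framed by competing arrangements, each a different way of knowing the same set of entities.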

Hypertext fiction, with its nonlinear affordances, appealed to a number of authors who were interested in developing metaphors for associative processes of memory and cognition. The mind, like conversation, does not move in a logical forced march through a chronology of events, but moves from topic to tangent and cycles through drafts and re-visitations of events and themes. Michael Joyce’s afternoon, a story is clearly a work centered on memory, if largely on false memory or obfuscated memory, as the protagonist is working through his own avoidance of the facts of a tragic car accident in which his ex-wife and child were involved. In Shelley Jackson’s My Body & A Wunderkammer, memories are inscribed into particular parts of the body in blemishes and scars, just as in Auster’s memoir, and the reader navigates through this representation of the embodied subject by clicking on a woodcut image of Jackson’s body, as well as by following individual links. Jackson plays a good deal with truth, evidence, and representation in this purported memoir. She, for instance, describes in considerable detail her sometimes-perverse relationship with her own short tail. In The Unknown (1999), the hypertext novel that I coauthored with Dirk Stratton and William Gillespie, we were quite interested in the additive (and addictive) nature of hypertext. During the course of writing the novel, a comedy based on a fictional book tour, it occurred to us that a novel of this sort could, for instance, include documentation about its production. We added in a great deal of correspondence written between us about writing the novel. We added in audio recordings of our live readings, even segmenting them and attaching them to their individual pages within the novel. We added in a press kit, which included criticism written about The Unknown. We explored the notion that a work of literature could be a self-contained system, without need of publishers or critics, really. We proposed multiple systems of navigating through the text: by location, by references to people, through categorical classifications of materials, and added them all in. In retrospect, I think a great deal of what excited and interested us was the idea that the novel could be both a novel and a database of affiliated entities. There was an idea that the novel could exist as a kind of independent organic life form: the novel could bleed and weave into the world, and the world could feed back into the novel.

When in later years I became involved in larger-scale projects to document and archive the objects, actors, processes, and events that define the field of electronic literature, I often thought back to the experience of developing an expansive hypertext fiction in The Unknown. The core idea I have taken away from that experience is that a literary work can never be understood as an object in isolation; it is in fact its context and its relationships to other works and practices that make it literature. In a field such as electronic literature, where the literary object itself is tenuous and contingent, capturing those contexts and those networks of relations is essential to the work of documentation, of collective memory, which has enabled the development of a field.

 

About the ELMCIP Project

Focusing on a particular creative community, that of electronic literature practitioners, the central research question of the HERA-funded ELMCIP collaborative research project (2010–13) was how creative communities of practitioners form within transnational and transcultural contexts, within a globalized and distributed communications environment. We have sought to gain insight into and understanding of the social effects and manifestations of creativity. Our research tried to exploit the characteristics of electronic literature in order to inquire into how a broader range of networked creative communities could develop.

In pursuit of purely objective research goals, it would have been possible to frame such a research project externally to the field itself, for example by limiting the study to ethnographic research conducted by disinterested social scientists. But ELMCIP did not pretend to a false sense of objectivity. Our researchers are active as scholars, writers, and artists in the field that is the subject of our research. Our interest has not simply been to study a field that is already established and understood as completely formed, but rather to better understand the conditions for the formation and advancement of network-based creative communities by actively engaging in the work of developing a field in which we as researchers are already actively engaged. ELMCIP’s research outcomes were therefore not limited to cultural analysis, but included the development of research infrastructure for electronic literature.

The six-nation, seven-partner collaborative research project included seminars, workshops, a conference, an exhibition, an anthology, and diverse forms of scholarly publications. Linking all of these outcomes together and the central work package of the University of Bergen team is the ELMCIP Electronic Literature Knowledge Base (http://elmcip.net/knowledgebase), an extensive open-access contributory online database documenting the field of electronic literature.

 

Objectives of the ELMCIP Electronic Literature Knowledge Base

The Knowledge Base serves as a centralized, searchable archive of information about electronic literature and related creative communities. It is not simply a set of information pages about the ELMCIP project but a research outcome in its own right, serving both to increase the impact of the research produced by ELMCIP and to provide a research platform and documentation archive for the field of electronic literature as a whole. In developing the Knowledge Base, our objectives have been:

- Breadth: The Knowledge Base was originally intended to document the research and events of the ELMCIP project itself, but it grew to have a much greater scope: at some point we decided that we might as well try to document the field of electronic literature as a whole.

- Granularity: Rather than simply redistributing PDF files of research and reports produced as part of the research project, the Knowledge Base extends the bibliographic usefulness and searchability of the research conducted by the project. Structured data about individual works, important critical and theoretical articles, individual authors, institutions, and events have been harvested and gathered as individual records, resulting in a useful searchable bibliographic resource.

- Open Access: The majority of the information entered into the Knowledge Base is available on a free and open-access basis. Whenever possible, knowledge produced by the project has been released with a permissive Creative Commons license or into the public domain. As we have developed the platform, we have also moved to a model of open contributions. There is only a very thin layer of gatekeeping, in that we ask contributors to use their real names and to send a note with a brief explanation of their research interest in e-lit when they sign up as contributors. Once anyone has a contributor account, he or she can not only add new records but also edit and improve existing records. There is a revisioning system in place, similar to that of Wikipedia, so in the unlikely event that a contributor is hell-bent on intellectual vandalism, any malicious changes or mistakes can be rolled back and corrected.

- Sustainability: To make the Knowledge Base as durable as possible, we use a widely supported open-source CMS and database platform, Drupal. The University of Bergen has further committed to host and support the Knowledge Base for a minimum of five years beyond the project period. We will also take steps to assure that the project is archived by the Internet Archive (archive.org) and relevant national libraries, and will therefore be freely available for the foreseeable future. Another aspect of sustainability is the use of RDF schemas to make the metadata of the records easily accessible, readable, and usable by other systems. The portability of the metadata is essential to the sustainability of the Knowledge Base and will enhance its range of applications; a minimal sketch of such portable metadata follows this list.

- Usability: We have tried to design and implement the Knowledge Base according to best practices of Web usability, emphasizing clarity, flexible search, and accessibility. While our design could be sexier, and we do have some changes in the pipeline to improve the user interface, our general principle has been to put information first, to provide an uncluttered interface, and get users to the texts they are after as quickly and simply as possible.
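As promised above under Sustainability, here is a minimal sketch of what exporting a work record as portable, RDF-based metadata might look like. It uses the Python rdflib library and generic Dublin Core terms; the Knowledge Base’s actual RDF schema and record URIs are not reproduced here, so the field choices and the URI below should be treated as illustrative assumptions rather than the platform’s real output.

```python
# A hypothetical sketch of exporting a Knowledge Base work record as RDF,
# using Dublin Core terms via the rdflib library. The Knowledge Base's
# actual schema and URIs may differ; this only illustrates why portable,
# machine-readable metadata aids sustainability.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DC

record = URIRef("https://elmcip.net/node/example-work")  # illustrative URI

g = Graph()
g.bind("dc", DC)
g.add((record, DC.title, Literal("The Unknown")))
g.add((record, DC.creator, Literal("William Gillespie")))
g.add((record, DC.creator, Literal("Scott Rettberg")))
g.add((record, DC.creator, Literal("Dirk Stratton")))
g.add((record, DC.date, Literal("1999")))
g.add((record, DC.type, Literal("Creative Work")))

# Any RDF-aware system can now consume the record, independent of Drupal.
print(g.serialize(format="turtle"))
```

Because the serialized triples stand apart from any particular CMS, a record exported this way survives the platform that produced it, which is the point of the sustainability objective.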

 

The Importance of Cross-Referencing and Capturing the Activities of the Field

In selecting and specifying the record types for the Knowledge Base and including critical writing, authors, publishers, organizations, events, teaching resources, and databases and archives as well as creative works, we are trying to make it possible to document and make available for study the activities of a field of practice that operates differently from many traditional academic or artistic fields. Theory, practice, and critical activities are deeply intertwined in e-lit. Events such as exhibitions, festivals, and conferences also play a different role, in that they are often the primary way of releasing a new work and of building interest in it. So we want to document not only works but also the process of how works are presented and received, and in what context this happens.

In order to show all these connections between activities, we have allowed for extensive cross-referencing within the Knowledge Base. For instance, we ask people creating records of critical writing to enter references to the creative works the article addresses. As soon as they do, that reference also automatically appears on the record for the work itself. Likewise, if someone created a teaching resource record for a course that made use of a particular work, it would become visible on the record of the work itself. In one way this is a quite simple extension of traditional humanities research: essentially the database is doing the work of traditional scholarship, keeping track of references and following footnotes. The difference between this and other approaches is that we are thinking of those references, those relations, as core to the entity, and bringing them out whenever possible. Because the references display on almost all of the objects they touch, patterns become apparent in the aggregation that would not be evident from looking at a single entity in isolation.
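To make the mechanism concrete, the following sketch (in Python, with invented record names and data, not the Knowledge Base’s actual implementation) shows the essential move: contributors enter references in one direction only, and the reciprocal listing on each referenced work’s record is derived automatically by inverting the reference map.

```python
# A minimal sketch of the cross-referencing behavior described above:
# contributors record outgoing references only, and the "backlinks" on
# every other record are derived automatically. Names and data invented.
from collections import defaultdict

# Each record lists the works it references (one direction only).
references = {
    "article: Hypertext and Memory": ["work: afternoon, a story"],
    "course: Digital Genres":        ["work: afternoon, a story",
                                      "work: The Unknown"],
}

def backlinks(all_refs):
    """Invert the reference map so each work sees what cites or teaches it."""
    incoming = defaultdict(list)
    for source, targets in all_refs.items():
        for target in targets:
            incoming[target].append(source)
    return incoming

for work, citers in backlinks(references).items():
    print(work, "<-", citers)
```

The design choice matters: because the reverse links are computed rather than entered, contributors cannot forget to update the other side, and every record a reference touches displays it automatically.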

Over time, all these cross-references begin to allow us to see how the practices of a creative community and its related scholarly community are building together in a way that no individual account could otherwise allow, and enable us to see a kind of reception history of any individual work as it is written about by critics and theorists. This may also allow for the type of scholarship Franco Moretti describes in Graphs, Maps, Trees: Abstract Models for Literary History as “distant reading.” In addition to tracking how the reception of an individual work occurred and changed over time, we might, for instance, be able to track the lifespan of a particular genre, practice, or concept in both creative and critical spheres. In 2013, the University of Bergen Electronic Literature Research Group began to publish visualization-based research using data harvested from the Knowledge Base to track these sorts of large-scale patterns in the field. As the documentation in the Knowledge Base reaches a critical mass, a number of different information visualizations and new reading strategies become feasible.
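A trivial sketch of the kind of distant reading this enables: once critical-writing records carry years and references, the reception history of a work reduces to a count over the aggregated records. The records below are invented; in practice the data would be harvested from the Knowledge Base.

```python
# A sketch of one simple "distant reading" the cross-references permit:
# counting how often a work is written about, year by year.
# The records below are invented stand-ins for harvested data.
from collections import Counter

critical_writing = [
    {"year": 1992, "references": ["afternoon, a story"]},
    {"year": 1997, "references": ["afternoon, a story", "My Body"]},
    {"year": 2001, "references": ["The Unknown"]},
    {"year": 2001, "references": ["afternoon, a story"]},
]

def reception_by_year(records, work):
    """Count critical references to one work, year by year."""
    return Counter(r["year"] for r in records if work in r["references"])

print(reception_by_year(critical_writing, "afternoon, a story"))
# Counter({1992: 1, 1997: 1, 2001: 1})
```

The same aggregation, run over thousands of records, is what makes the genre- and concept-lifespan questions mentioned above tractable.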

 

Some lessons, and plans for future development of the Knowledge Base platform

In August 2013, ELMCIP drew to a close as a funded joint research project, but the development of the Electronic Literature Knowledge Base will continue as a central aspect of the work of the University of Bergen’s Electronic Literature Research Group for the foreseeable future. I want to sum up a few of the lessons we have learned so far.

1) The Knowledge Base belongs to all of us

In our development of this platform, we have benefited a great deal from inviting experts of various stripes, guest researchers, students, librarians, and virtually anyone in the international electronic literature community we can lay hands on to participate in the deliberative development of the Knowledge Base. And many have volunteered out of interest in the project. As a result, the project has a better structure than it otherwise would have, documents many types of entities we did not initially imagine we would have, and has far richer content than it would have had the team working at UiB been alone. We also intend to share the records of the database in whatever ways possible so that other groups can build with them and upon them, and to make it technically easy to do so. If the Knowledge Base is to be sustainable in the long term, beyond ELMCIP’s funded period, the participation of the international community of electronic literature authors and scholars will be essential.

2) To collect and describe is to make ideological choices and to define a set of values

As we have developed the Knowledge Base platform, our research group has met every Friday afternoon to discuss various technical and content issues related to the database. Almost inevitably, we leave the meeting with a list of new fields and new views of information we have decided to add and integrate into the Knowledge Base. As we discuss what types of entities compose the field of electronic literature and what type of material should appear on an author record, we realize that we are also discussing the politics of academia, which highlight and value certain types of work and obscure others. So, for instance, deciding that editorial work, teaching, and curatorial work should be displayed on an author record, alongside critical writing and creative work authored, is not trivial. This goes for nearly every field and every content type in the database. Including or excluding fields from a form is a political decision, with consequences. To build a database is to realize the power of bureaucratic forms, and then to realize that you are building, and responsible for, the structure of the bureaucracy.

3) An archive is an improvisational medium

This comes back to Ted Nelson, but the great joy of building the Knowledge Base has been the realization that:
A) We can and must invent the vocabulary as we go along;
B) As we realize that we are missing important aspects of the field we can add them; and
C) As we gather all of this information and see it in front of us, new applications reveal themselves to us, and we can build those in.
Just to provide a few examples: we realized about halfway through our development process that this would be an excellent platform in which both to share and to develop teaching resources, given that many of the creative works and critical writing that are core to a syllabus are already there. Later we were discussing the fact that while we were spending a great deal of time discussing databases and archives, we had no way of accounting for those in the Knowledge Base itself, so we added those as a content type. We are currently in the process of adding a content type to describe specific authoring systems and platforms, which will retrospectively be related to the records of works produced using them.

The vital content types that define the field of electronic literature have only become apparent as we have worked on the database. Developing research infrastructure is research.

Further, we are developing ways that individual researchers and teachers can use this platform as a research and teaching tool. For example, individuals can create private notebooks for their individual research and link to multiple items, or public research collections to gather resources on a specific topic, such as Brazilian electronic literature, or e-lit for the iPad. We are also developing teaching tools and other applications within the Knowledge Base. But the platform is flexible enough that we are able to engage in a continual process of reinvention.

4) A rising tide floats all boats (or electronic literature research platforms)

I will close by describing the Consortium for Electronic Literature, which now includes six different databases internationally, all active in the field of electronic literature. But first I want to acknowledge that while we are all describing similar (and sometimes the same) objects, we are doing it in different ways, according to different models that weigh, describe, and value different aspects of the objects concerned, each with its own strengths and weaknesses. The wonderful thing is that while these models are different, we are all sharing and learning from each other, and providing different inputs to what is becoming a globalized documentation and archiving network centered on electronic literature. Embedded within that larger network are at least six different ways of remembering a field in the process of becoming.

 

International cooperation and field building

We have designed the Knowledge Base to allow for different forms of interoperability with other online databases used within the field of electronic literature. Because electronic literature is a new field, and common bibliographic standards and archival practices have yet to be established, we will work together with the CELL partners to develop those standards.

In 2010, we had the first meeting of the Consortium for Electronic Literature (CELL), in Sydney, Australia, including members of the ELMCIP project, the ELO, the Australian Creative Nation project, and the Media Upheavals project based in Siegen, Germany. At a follow-up meeting in Bergen in 2011, we focused on database-specific issues and additionally included representatives of the Brown University digital archive, the Portuguese experimental literature archive, and the Quebec-based French-language electronic literature research observatory NT2. The CELL network has since then met frequently online, and held a meeting in Paris in 2013 during the ELO Conference.

We agreed to some interoperability goals for the databases:

- To make records in all the databases available on an open-access, Creative Commons licensed basis, and to allow each of the databases to reuse and develop records from any of the other participating databases;

- To share machine-readable short entries for works of electronic literature and related critical writing, and to strive to do so in an automated way;

- To develop a common bibliographic standard for entries describing works of electronic literature;

- To work to implement a mechanism to allow searching across all of the active databases from within the interface of any one; and

- To work to find ways to mirror all of the databases, and to preserve the bibliographic information within any of the databases if any one should go offline due to a loss of funding or any other circumstance.

This is work that will take some time, but which is already beginning to bear fruit. As of 2013, the CELL network has agreed to a minimal set of fields that define the core bibliographic metadata of a work of electronic literature, and has agreed to an implementation plan for shared search across the databases.
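The agreed minimal field set itself is not reproduced in this talk, but a hypothetical machine-readable entry of the general kind the CELL partners exchange might look like the following sketch; the field names and the URL are invented for illustration, not the consortium’s actual standard.

```python
# A hypothetical illustration of a minimal machine-readable entry of the
# kind the CELL partners agreed to exchange. The actual agreed field set
# is not reproduced here; these field names and the URL are invented.
import json

entry = {
    "title": "afternoon, a story",
    "authors": ["Michael Joyce"],
    "year": 1990,
    "type": "creative work",
    "source_database": "ELMCIP Knowledge Base",
    "record_url": "https://elmcip.net/node/example",  # illustrative URL
}

# Serialized as JSON, the same entry can be reused, mirrored, or merged
# by any of the partner databases, which is the interoperability goal.
print(json.dumps(entry, indent=2))
```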

 

From documentation to archiving

The ELMCIP Knowledge Base is not primarily an archive or repository, but as we have proceeded with the work of documenting this field, the thinness of the boundary between documenting and archiving has become more apparent. The ELMCIP project includes metadata-level documentation and some archival materials, such as PDF files, source code of some works, and audio and video documentation of presentations. Projects such as the Brown literary arts digital archive, the Po.Ex archive, and even the ELO’s Electronic Literature Collection are more focused on archiving and preserving code and media assets, to make the works more durable and accessible beyond metadata and descriptions. Although this is beyond the scope of the current Knowledge Base platform, we are beginning to consider how documentation work can connect with and facilitate the work of archiving repositories. As we consider the potentialities of the network we are together building in the Consortium for Electronic Literature, first by building a network of cooperation, of people talking and working together, and then of databases operating and communicating with each other, a network of archives, of repositories of electronic literature accessible from and to each other, seems a necessary and logical progression that we can work together to bring about. We can collectively constitute a “digital subject” that works to preserve its subject matter even as it provides us with a global research-oriented social network more productive than distractive, one that produces both a collective memory and a platform for the continuous development of a field.

 

References

AUSTER, Paul, Winter Journal, New York, Henry Holt, 2012, 240 p.

BUSH, Vannevar, “As We May Think”, The Atlantic, July 1945, online: http://www.theatlantic.com/magazine/archive/1945/07/as-we-may-think/303881/.

GILLESPIE, William, RETTBERG, Scott, and STRATTON, Dirk, The Unknown, Spineless Books, 1999, online: http://unknownhypertext.com.

JACKSON, Shelley, “My Body & A Wunderkammer”, Alt-X, 1997, online: http://www.altx.com/thebody/.

JOYCE, Michael, afternoon, a story, Watertown, Eastgate Systems, 1990.

MAX, D.T., Every Love Story Is a Ghost Story, New York, Penguin, 2012, 368 p.

MORETTI, Franco, Graphs, Maps, Trees: Abstract Models for Literary History, New York, Verso, 2005, 119 p.

NELSON, Theodore Holm, Literary Machines, Edition 93.1, Sausalito, CA, Mindful Press, 1993.

PLATO, Symposium and Phaedrus, Benjamin Jowett, trans., New York, Cosimo Classics, 2010, 98 p.

SIMANOWSKI, Roberto, “The Compelling Charm of Numbers”, Remediating the Social, Simon Biggs, ed., Bergen, ELMCIP, 2012, p. 20-28.

Electronic Literature Anthologies, Databases, and Archives

The Electronic Literature Collection, Volume 1, N. Katherine Hayles, Nick Montfort, Scott Rettberg, and Stephanie Strickland, eds., The Electronic Literature Organization, 2006, online: http://collection.eliterature.org/1.

The Electronic Literature Collection, Volume 2, Laura Borràs Castanyer, Talan Memmott, Rita Raley, and Brian Kim Stefans, eds., The Electronic Literature Organization, 2011, online: http://collection.eliterature.org/2.

The ELMCIP Anthology of European Electronic Literature, Maria Engberg and Talan Memmott, eds., ELMCIP, 2012, online: http://anthology.elmcip.net.

The Electronic Literature Directory (version 2), The Electronic Literature Organization, 2008, online: http://directory.eliterature.org.

The ELMCIP Electronic Literature Knowledge Base, ELMCIP, 2010, online: http://elmcip.net/knowledgebase.

Po.Ex: Arquivo Digital da Literatura Experimental Portuguesa, 2005, online: http://po-ex.net/.

Répertoire des Arts et Littératures Hypermédiatiques, NT2, 2005, online: http://nt2.uqam.ca/observatoire/repertoire.

 
