End of level for Gonzalo Frasca

Former Uruguayan ITU resident and PhD program co-sufferer Gonzalo Frasca just (yesterday) told the world how games, play, and rhetoric are connected, thus earning himself the non-refundable title of PhD.
Because of exams I could only attend the esteemed candidate’s presentation (thus missing out on the subsequent Q&A fun), but what I heard seemed both coherent and science-like. I cannot be more specific, since I was concentrating on taking low-quality pictures with my clumsy phone.

Congratulations, Dr.

Anne Mette Thorhauge defends game PhD

Anne Mette Thorhauge of Copenhagen University just defended her (Danish) dissertation, entitled The Computer Game as Communication Form. Topic-wise, Anne Mette’s work seems quite close to my own, so I look forward to reading her text soon. I might, perhaps, disagree with her conceptualization of rules as something “established and maintained by communication”, but it could just be a definition issue, and life is surely too short to argue about those.

Either way: Congratulations, Anne Mette :-)

No Medium is an Island: An essay on the Video Game and its cultural neighborhood

No medium exists in a vacuum. Media draw upon established forms of expression and depend on existing hardware. Only gradually do they evolve towards aesthetic independence and take on forms that are less derivative. As a medium evolves, its practitioners usually try to “liberate” it from what is often seen as the dominance of external phenomena – often more established forms – and claim that the medium in question is important, artistically and academically, in its own right. Video games are presently in the late stages of this phase. To illustrate the entire process, let us first look to another medium which has moved beyond any inferiority complexes.

Cinema as an example of medium development

In film’s infancy, the enormous possibilities of the medium were poorly understood. While the notion of moving images was awe-inspiring, movie pioneers Louis and Auguste Lumière were initially satisfied with simply placing a camera on a tripod and leaving it to capture whatever went on in the frame. The earliest movies were of workers exiting a factory or a train pulling into a station. There was no staging, no narrative to speak of, and no editing. Essentially, the Lumières worked as if they had a still camera that happened to capture moving images.

The concept of editing was a radical one. So innovative was the concept that it was unclear whether movie-goers would be able to make sense of a film’s disjointed points of view, which have no clear real-life counterpart. After nearly two decades of editorial experimentation, D. W. Griffith dramatically altered the future of the medium in the mid-1910s. Griffith grasped the importance of a wide range of techniques. None were entirely new, but they had not yet been used efficiently, and certainly never combined to form one compelling dramatic vision. Griffith’s The Birth of a Nation (1915) featured dramatic close-ups and cross-editing (cutting repeatedly between interconnected scenes).1

The Birth of a Nation

In the midst of these innovations, however, some of Griffith’s contemporaries took the opposite approach in order to establish the seriousness of film: they sought to link film to already established art forms, mostly theatre. Thus, a surprisingly wide range of films merely showed theatre performances of the classics; today the term “filmed theatre” refers to a truly primitive approach to film-making. Nevertheless, it represented a particular evolutionary stage that has parallels in game design, as we shall see below.

With the introduction of sound in the late 1920s, the “talkies” paved the way for much more complex narratives and for the wide-ranging dramatic uses of sound that we take for granted today. With it a new controversy arose, as some argued that the addition of sound changed the audience’s experience and threatened the medium. There is a direct parallel in the now-mostly-historical rivalry between text-based adventure games and their graphical counterparts. Text game designers often bemoaned the loss of “that special something” – such as the active appeal to the player’s imagination – which they felt made the old games superior and which was lost with the addition of graphics.

With the introduction of color film in the 1930s we see another interesting shift in the medium’s development. In these early years, color sequences represented fantastic situations or dream moments, whereas “normal” life was rendered in black and white; in The Wizard of Oz, for instance, the bleak reality of Dorothy’s Kansas home is monochromatic, whereas the dream-like vision of Oz is intensely colorful. Today the situation has reversed, and black-and-white film is generally reserved for dreams or flashbacks.

The Wizard of Oz

In the 1960s, cinema entered its rebellious phase. Film was no longer simply entertainment for the illiterate masses. Its champions claimed that film had special properties and functions not found in other media. Notably, critics and film-makers associated with the French Nouvelle Vague (or “New Wave”) argued that film was comparable to literature. Although film offered new forms of narrative, the movie director was comparable to the book author2, using the camera “as a pen”. And as several of these auteurs – from Pier Paolo Pasolini to Ingmar Bergman – rose to worldwide prominence, and as academia grew more interested in an analytical approach to film, the artistic ambitions of the medium could no longer be denied. Today, cinema no longer has to defend itself as a form of artistic expression. No one argues that films cannot be considered art – but we must not forget that this evolution was many decades in the making.

The development of video games as a medium

Let us approach video games in a similar fashion. For present purposes, we are interested in the development of their relationship with other media and other phenomena, rather than their aesthetic development per se.

Did 1962’s Spacewar! borrow from previous media? As to form, we cannot say that it copied anything directly, although it is interesting that the one-screen, fixed perspective is reminiscent of the Lumières’ first films. As to content, on the other hand, the game’s designers were explicitly inspired by science fiction books and low-brow action movies (Graetz, 2001). Spacewar! also borrowed from non-electronic games. It mimicked certain skill-based ball games and, more importantly, it required two players. Thus it was a continuation of previous game types – from tennis to chess – which had mostly been multi-player.

With the growth of arcade games in the late 1970s and early 1980s, game designers drew heavily on pop culture symbols. Game cabinets explicitly cited popular movies, which, although often irrelevant to gameplay, enriched the game experience by framing it within a larger narrative. For instance, Shark Jaws, published by Atari in 1975, shamelessly referred to the blockbuster movie Jaws (itself based on a book) in order to piggyback on the film’s popularity.

1976 was a watershed year for video games for two reasons. First, Night Driver challenged the dominance of the third-person perspective by having the player drive into the screen from a first-person perspective. This mirrors discoveries made by movie-makers in the 1910s and 1920s, who found new ways to work with the camera and perspective. Second, another driving game, Death Race, shattered the status of games as harmless fun by sparking widespread fear of the detrimental effects of on-screen violence. The game (based on the movie Death Race 2000) had players control a car in order to run down “gremlins” who looked like little men – an activity unacceptable to many.

Although the arcade business involved intense creativity, few entertained the notion that games should be considered anything more than entertainment. This public perception was rooted in the fact that games were closely associated with the teenagers who played them and with the somewhat dark and disreputable arcades that housed them. The perception changed with the release of Zork, an early adventure game, in 1980. Games could now approximate literature. Those who wrote about video games started describing them in radically different terms. In turn, adventure game designers began trying to separate themselves from their less lofty arcade relatives. Adventure games were called “interactive fiction”, story-games, compu-novels, etc. (e.g. Rothstein, 1983).

The effort to distance adventure games from other game genres can be interpreted in two ways. On the one hand, this evolutionary step could be seen as fully justified, since these game types are radically different and offer far richer and deeper experiences. Compared to then-contemporary action games such as Space Invaders, adventure games could offer far more complex and emotionally rewarding stories. Furthermore, because they were interactive, adventure games were not “mere” stories but offered new techniques and pleasures. They offered a chance to experiment with alternative story lines and enabled the player to confront the consequences of choice and the very nature of narrative form.

On the other hand, we can see this effort of separation as a case of “filmed theatre”: an unreflective yet strategic attempt to piggyback on the legitimacy of established art forms. On this view, adventure games essentially miss what is special about games. By confining the player to a linear story, designers display a lack of courage to engage in shared authorship. These games illustrate an immature understanding of the medium, one which merely makes games subservient to literature.

As the reader will have noticed, these two positions do not represent answers to a scientific question. Stripped bare, the discussion is fundamentally about what makes games good or bad – and this is not something that can be decided by game scholars. Let us note, then, that adventure games appealed to many, while others considered them boring. Considering the target audience, the struggle by many adventure game designers to frame their work in terms of literature was a successful marketing strategy. Text adventure games vanished from the mainstream in the late 1980s. But ten years later they were followed into near-oblivion by their direct descendants, the graphical adventure games (though there have been a few successful recent titles, such as Microïds’ Syberia from 2002).

The late nineties saw another, far more coordinated and successful, attempt to argue for the relevance of games as aesthetic objects. First of all, game design had reached a level of complexity where professionalization was necessary. Gone were the days when single individuals worked out of their garages to create popular games. To compete in the game business, “developers” became teams of highly specialized individuals overseen by project managers and backed by dedicated marketing departments. New professional organizations such as the International Game Developers Association sprang up, and the sharing of knowledge on the intricacies of design and development increased.

Meanwhile, the academic world was rapidly becoming interested in games as aesthetic and cultural objects, rather than as simply a sub-genre of literature or a dangerous social phenomenon. The IT University of Copenhagen (in 2001) and the University of Manchester (in 2002) held the first international conferences on video games. Books such as Espen Aarseth’s Cybertext (1997) or Steven Poole’s Trigger Happy (1999) highlighted the status of games as new and important cultural objects.

Further evidence came with the rise of ludology (see Smith, 2004), a move towards studying games first and foremost in their capacity as rule-based systems. Today, the analysis of video games continues unabated, for instance in journals such as Game Studies and Games and Culture and through the work of associations like the Digital Games Research Association.

The relationship between games and cinema

Video games are compared and contrasted to movies more often than to any other medium. As audiovisual works, games have clear connections to cinema, and indeed many games have suffered from what we can call “cinema envy”. Though the two differ greatly in the way they present on-screen activity, games have adopted a variety of conventions established by Hollywood-style cinema. For instance, games employ a range of “continuity techniques”. Most obviously, they do not skip frames, which would disorient the player; a break in continuity of this kind is known as “lag” and is generally considered a flaw. Nor do they normally break the 180° “rule”, which states that the camera should stay on one side of the imaginary line connecting two on-screen subjects. Crossing that line reverses the apparent direction of movement; a person moving in one direction would suddenly seem to be moving in the opposite direction.

Nowhere is this more obvious, of course, than in games which closely mimic the structure and form of narrative films. Adventure games like Gabriel Knight III uphold these conventions almost completely, as do games with scripted editing like the Resident Evil series.

Resident Evil 2: The game uses scripted editing that complies with Hollywood conventions.

While the similarities stand out, one crucial difference between games and movies relates to the use of editing. Some games have semi-linear narratives and employ almost the entire arsenal of movie conventions, but many do not. Action games like Kung Fu Master and Doom, for instance, do not divide the on-screen action into sequences of shots, but rather display continuous streams of images that stop only when the player reaches a new level. Doom uses two techniques that are, for dramatic or practical reasons, virtually impossible in narrative film. Firstly, the game uses the first-person perspective only. The best-known attempt to tell a film from the first-person perspective was Robert Montgomery’s 1947 Lady in the Lake; while interesting, the effect is less than compelling. Secondly, the game’s complete lack of editing is virtually impossible in movies. It would require super-human planning and luck, and would do away with many fundamental film techniques such as close-ups, cross-editing, reaction shots, and establishing shots. Perhaps the ease with which the Doom player orients himself is a testament to the success of letting the player control the perspective with the mouse or keyboard.

Cross-media titles

The video game business has a longstanding affair with Hollywood. Mostly, it is a win-win situation. One may piggyback on the popularity or marketing efforts of the other and, increasingly, one may directly use material produced in the making of the other. Also, the two do not really compete for the same money or time. Since the two media generally provide different experiences, it is not an either-or situation for many viewers/players.

However, the relationship has undeniably been fraught with artistically questionable products. In this category, Atari’s infamous E.T. the Extra-Terrestrial outshines most others. The game failed so spectacularly that, arguably, the link between movies and video games was compromised for years. It was evident beyond any doubt that a good movie did not automatically make for a good game. For reasons already mentioned, however, the temptation did not vanish. The mid-1980s saw the release of games like Ghostbusters, Gremlins, Indiana Jones and the Temple of Doom, and Aliens. Since those days, movie blockbusters (at least those with strong action elements) have increasingly been accompanied by one or more games. Many of these adaptations have worked well, but it is noteworthy that practically none of them are seen as groundbreaking. Recently, attempts have been made to go beyond the mere translation of movie to game. Enter the Matrix, for instance, included scenes that were not shown in the movie The Matrix Reloaded in an attempt to create a more exciting synergy between the media. Reviewers were not impressed. The influential GameSpot.com described it as “just another licensed game that doesn’t do justice to its source material”, while PC Gamer felt that, had it not been for the Matrix setting, one would be left with “an action game that really does nothing new – and looks pretty average doing it”. More recently (in 2004), Electronic Arts attempted yet another strategy by releasing the James Bond game 007: Everything or Nothing as an original Bond title without a supporting movie. The developers scanned actors who had appeared in the movies, mapping their faces onto game characters and having those characters mimic their movement styles. This attempt was met with considerably more critical success than Enter the Matrix.

We also see movies based on games, but with far less regularity. Oddly enough from a design perspective, the games chosen for the silver screen have mostly been action games. The Super Mario Bros. movie is based on a game which revolves around the less-than-epic kinetics of jumping between platforms while avoiding small animals; the movie obviously had to move quite far from the defining features of the game. This is less the case with the movies based on street fighting games like Double Dragon, Street Fighter and the gory, arena-based Mortal Kombat. These games can be converted into action-packed movie narratives easily and directly, although the movies have not been particularly ambitious productions in terms of budgets. Creepy survival horror games translate almost directly, though the attempt is not always successful. Reviewing the Resident Evil movie, The New York Times despaired that “The movie has a frantic staccato style that is more game-oriented than cinematic.” (Holden, 2002). The first real attempt at a full-budget Hollywood game adaptation was Simon West’s 2001 Tomb Raider. Building on the fame of gaming’s most celebrated heroine, Lara Croft, the movie saw Angelina Jolie traveling the world to fight crime and recover archaeological treasure. Practically universally disparaged by critics, the movie was a hit at the box office, inspiring a 2003 sequel, Lara Croft Tomb Raider: The Cradle of Life.

Continuity and self-reflexivity

In narrative literature and movies, suspension of disbelief is generally achieved by presenting a coherent, self-contained world and a story that does not call attention to its artificial nature. In mainstream cinema we do not see the production crew on-screen, and in novels we do not hear about the author. Similarly, we might think that successful games immerse the player in an experience by supporting his suspension of disbelief. But some games seem to sin against this rule by specifically highlighting their gameness. Typically, this happens by referring directly to the game interface (“Now, press X to jump across the gap”). In some cases, however, game designers include more playful features that bridge the gap between representation and real life. In the adventure game Planetfall, for instance, when the player wished to save his position, the robot sidekick Floyd would ask “Are we going to do something dangerous now?”. Something similar happens in Prince of Persia: The Sands of Time, where the narrator comments on the death of the player with phrases like “No no, that’s not what happened!”, drawing attention to the fact that the game’s action is a retelling of past events. In a sequence in Metal Gear Solid: The Twin Snakes, an in-game enemy “reads” the player’s mind by analyzing data on the console’s memory card. And in many real-time strategy games (such as Warcraft II) units will start addressing the player directly if clicked repeatedly without being given orders.

Such gimmicks arguably break the illusion and remind the player of the artificiality of the situation. Film-makers go to great lengths to avoid drawing attention to “the fourth wall”, a term originating in theatre to describe the imagined wall at the front of the stage through which the audience looks in. From a traditionalist Hollywood perspective, this illusion must be preserved for the spectators to be able to lose themselves in the narrative. Film-makers of the modernist school have challenged these classic conventions; an example is the camera conspicuously entering our field of vision in Ingmar Bergman’s Persona, thus stressing the representational nature of the action. Designer Ernest Adams has an unambiguous opinion about illusion-disruptive techniques in games: “Such cute gimmicks don’t improve the players’ experience; they harm it. It’s a direct slap in the face.” (Adams, 2004). Here, Adams voices the common notion that games, like all media, must uphold certain rules and conventions that help transport the player to an imaginary space; the slightest incongruence may violently rip the player out of this space, rendering the experience shallow and imperfect. There is an opposing position, however. Game designers Salen and Zimmerman define “the immersive fallacy” as “the idea that the pleasure of a media experience lies in its ability to sensually transport the participant into an illusory, simulated reality.” (Salen & Zimmerman, 2004, p. 450). They argue that, to the contrary, we become engrossed in games through the activity of play, which necessarily entails that the player, at some level, is aware that the situation is at once real and make-believe.3

Taken to extremes, the idea that “immersion is always broken by self-reflexivity, thus hurting the experience” and the idea that “self-reflexivity in games is never an issue since the player is aware of the game’s nature” both pose problems. Even Adams admits that many games do in fact make strategic use of mixing fictional levels. In the case of real-time strategy games, the player is probably less immersed in a narrative than feverishly processing strategic opportunities in her head, and is thus not likely to be torn from any deep-felt immersion. In games that rely on the progression of a richly textured narrative, however, such antics may well seem inappropriate. In other words: we need to take genre into account when considering the effects of immersion-disruptive techniques.

Interactivity

Games require the active participation of players, and the way a game plays out depends on their input. This, at a very concrete and basic level, sets games apart from linear media like novels or movies. A typical game is more like an amusement park than a novel. Generally, the concept of interactivity has been associated with positive notions of freedom and the liberation of media users. Having people make choices and exert influence was, particularly during the 1990s, one of the greatest emancipatory promises of computing and networking. Game scholar Espen Aarseth (1997) points out that attempts to produce nonlinear fiction are not tied exclusively to computer technology but can be found throughout the entire history of written literature. He aims to cut through the ‘hype’ of interactivity, seeing the term as highly ideological and as connoting revolutionary or utopian expectations that can never be fulfilled:

The industrial rhetoric produced concepts such as interactive newspapers, interactive video, interactive television, and even interactive houses, all implying that the role of the consumer had (or would very soon) change for the better. […] To declare a system interactive is to endorse it with a magic power. (Aarseth, 1997, p. 48).

What is interactivity? Media scholar Jens F. Jensen has emphasized that the concept is multi-discursive, having significantly different meanings in different fields (Jensen, 1997). In particular, he focuses on three. In sociology, the term “interaction” refers to “the relationship between two or more people who, in a given situation, mutually adapt their behavior and actions to each other.” Communication and media studies have a broader definition of interaction, including “processes that take place between receivers on the one hand and a media message on the other.” Finally, informatics uses interaction for “the process that takes place when a human user operates a machine”. These uses are quite different, but, building upon the most influential definitions of the word, Jensen proposes one of his own: interactivity is “a measure of a media’s potential ability to let the user exert an influence on the content and/or form of the mediated communication.” This is probably not too far from the colloquial use of the term. Interactivity refers to the meaningful ways in which the user becomes a co-author by directly manipulating variables. DVD viewers are technically able to edit their own narrative and can influence the form of the movie by adjusting the lighting or sound. But the video game player is usually able to determine the very configuration of the signs presented to him or her on-screen and through the speakers. Again, the issue is genre-dependent. Although all games have an abstract “potential ability” to allow the user co-authorship, adventure games do this only modestly, while MMORPGs lie at the other end of the spectrum, in principle letting every player choice impact the future of the world for as long as the server is running.

Most discussions of interactivity in video games are muddled by the assumption that users of other media are passive. This corresponds poorly to the understanding employed by most media scholars, who argue that media use such as television viewing demands a high degree of cognitive activity on the part of the viewer. To understand a novel, a movie or a television drama, the reader/viewer must make a large number of inferences, fill in a number of blanks and often deal with numerous narrative threads. The meaning of a movie is something that the viewer must largely construct cognitively from what are essentially patterns of light on a screen. Media users also sometimes make interpretations that differ from, or even oppose, the intended meaning. When discussing the interactive elements of games, we must therefore be careful not to be swept away by the positive connotations of the term, and we must be quite precise about what we mean so as not to ignore the “active” nature of all media use.

A few remarks towards the end

We can, contrary to common arguments, learn much about video games by looking at other media, even film. While analogies can of course be taken too far, the cultural development of games has many similarities with that of film, and the two media obviously inspire each other thematically and aesthetically to a great extent.

At present, studies of the cultural reception of video games during their four decades of existence are sparse. In particular, cross-national studies of how various cultures have dealt with the arrival of video games on the cultural landscape would be illuminating; not least for developers and publishers, who still face some opposition from policy makers and from those who would relegate gaming to the domain of children and the young. Such studies would help us understand an important part of the video game ecology, the effects of which – however subtle – influence games, their creators, and their players.

References

Aarseth, E. (1997). Cybertext: Perspectives on Ergodic Literature. London: Johns Hopkins University Press.

Adams, E. (2004, July 9). Postmodernism and the Three Types of Immersion. Gamasutra.com.

Graetz, J. (2001). The Origin of Spacewar! In V. Burnham (Ed.), Supercade: A Visual History of the Videogame Age 1971–1984. Cambridge, Massachusetts: The MIT Press.

Holden, S. (2002, March 15). They May Be High-Tech, But They’re Still the Undead. The New York Times.

Jensen, J. F. (1997). ‘Interactivity’: Tracking a New Concept in Media and Communication Studies. Paper presented at the XIII Nordic Conference on Mass Communication Research, Jyväskylä.

Poole, S. (1999). Trigger Happy: The Inner Life of Videogames. London: Fourth Estate.

Rothstein, E. (1983, May 8). Reading and Writing: Participatory Novels. The New York Times Book Review.

Salen, K., & Zimmerman, E. (2004). Rules of Play: Game Design Fundamentals. London: MIT Press.

Smith, J. H. (2004). Does Gameplay Have Politics? [Electronic version]. Retrieved April 13, 2004, from http://www.game-research.com/art_gameplay_politics.asp

  1. It also, unpleasantly, features the Ku Klux Klan as heroic protectors of sound values, creating an unfortunate situation for film historians, who tend to praise the movie’s form but not its contents.
  2. The term used was auteur, which does not necessarily translate into (book) author. Their point was that the director, although engaged in a collective form of expression, could be the single determining force behind the movie.
  3. This is also Jesper Juul’s argument in his book Half-Real (2005).

Limited character enactment in computer RPGs

The degree to which computer RPG players speak in-character vs. out-of-character
This graph shows the ratio of in-character to out-of-character statements during five sessions of the PS2 RPG Champions of Norrath. While coding the data, any utterance which could plausibly be construed as in-character was categorized as such, thus heavily favoring that category.
Interestingly, the in-character percentage is significantly lower than in similarly coded pen-and-paper RPG sessions.
As discussed in (*) and in the article I’m currently writing with Anders Tychsen.
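
For readers curious about the mechanics behind the numbers, here is a minimal sketch (in Python) of how the in-character percentage could be computed from coded session data. The function name, the “IC”/“OOC” labels and the sample session are hypothetical illustrations, not the actual coding script used for the study:

# Hypothetical sketch: each utterance is assumed to carry a code,
# "IC" (in-character) or "OOC" (out-of-character); ambiguous utterances
# were coded "IC" under the lenient rule described above.
def ic_percentage(codes):
    """Return the percentage of utterances coded as in-character."""
    if not codes:
        return 0.0
    in_character = sum(1 for code in codes if code == "IC")
    return 100.0 * in_character / len(codes)

# Example with made-up data for a single session:
session = ["OOC", "OOC", "IC", "OOC", "IC", "OOC", "OOC"]
print(f"In-character: {ic_percentage(session):.1f}%")  # prints "In-character: 28.6%"

Run once per session, this kind of calculation yields the percentages plotted above.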

* Tychsen, Anders; Smith, Jonas Heide; Hitchens, Michael & Tosca, Susana (2006). Communication in Multi-Player Role Playing Games – The Effect of Medium. In Technologies for Interactive Digital Storytelling and Entertainment (Lecture Notes in Computer Science). Berlin: Springer. (If you don’t have Springer access, try this version.)