Conference Abstracts (in alphabetical order by presenter)
“Video Games in the Digital Music Trap”
David Arditi (University of Texas at Arlington)
When Grand Theft Auto V (GTAV) was released in 2013, it set records as the largest first-day release of an entertainment product, hauling in $800 million in sales. Notably, GTAV included 240 music tracks. As the recording industry responded to the decline of CD sales, major record labels began to seek out new ways to generate revenue from the sale of music. The overall strategy has been to expand consumption beyond album and single sales by deploying copyrights. I call the resulting business model the digital music trap. Major record labels and their artists benefited from synchronization licenses on the blockbuster video game. Whereas the recording industry viewed video games as competition for consumers in the early 2000s, the major labels have since developed ways to use gaming to create new revenue.
This paper will explore the role that synchronization licenses have played in generating revenue for major record labels following the decline of CD sales. It will focus specifically on the ways that record labels deploy their copyrighted music in GTAV and in the music rhythm game genre (e.g., Guitar Hero and Rock Band). I will develop a critical political economy of copyrighted music in popular video games, arguing that video games act both as a means for record labels to exploit copyrights and as a source of marketing for the sale of music in other media.
“Music Appreciation and the Mario Bros.: The Pedagogy of Musical Hermeneutics”
Stephen Armstrong (Michigan State University)
In the music appreciation classroom, instructors face students who have an exhaustive intuitive knowledge of tonal procedures but cannot articulate the facts of tonal construction and socially constructed musical meaning. Because today’s music is so well produced and so abundant, music itself becomes difficult to approach with a critical ear. Despite these difficulties, instructors cannot solve the underlying problem simply by exposing students to more and better music; students are already inundated with musical information. Rather, the challenge is to distance students from musical experience in surprising and meaningful ways.
In taking up this challenge, I argue that the music of pre-millennial video games is an ideal medium for teaching both the elements of music and the hermeneutics of musical devices. Apart from its cultural currency and undeniable shock value, game music is useful precisely because of its technical limitations: programmers had to maximize their limited materials to generate compelling musical frameworks for accompanying play. In developing my argument, I include numerous case studies from my own experience as a music appreciation instructor, showing how an eclectic blend of pre-millennial game music can complement the deployment of film music and the classics. By demonstrating that the same sonic principles govern everything from symphonies to chiptunes, instructors can help students gain a deeper appreciation for the fundamentals of musical structure.
“From Attunement to Interference: A Typology of Musical Intertextuality in Video Games”
Dominic Arsenault and Andréane Morin-Simard (University of Montreal)
This paper maps out the differing kinds of intertextual uses of music in video games, building a typology from the standpoint of their reception and affective impacts on the player. We first build upon Philip Tagg’s concept of codal interference (2013), supplemented by Sébastien Babeux (2004, 2007) and Andréane Morin-Simard’s larger work on interference, to open a discussion of the various types of musical intertextual references. These range from the inclusion and recognition of leitmotifs, which provide an intended aesthetic payoff that increases the player’s in-game attunement (Odin 2000), to the various types of interference, where a player’s reaction to the game’s music interferes with the music’s perceived aesthetic intent. This primary framework is complemented with genre theory and aesthetics (Arsenault, Eco, Jauss, Roberts) to integrate the concepts of horizon of expectations and intentio operis, which provide the tools necessary for qualifying the semiotic reception of music and its intertextual effects. We briefly demonstrate the various types of intertextual references and dwell mainly on interference, categorizing the phenomenon in its many dimensions according to its source (the musical intertext can be internal/external), its semiotic channel (codal/aural), its role and integration into gameplay (actional/affective), and its horizon (serial/generic). Examples will be drawn from a variety of games covering a wide historical spectrum (roughly 1983-2013), including Donkey Kong Country, The Legend of Zelda: Ocarina of Time, Prince of Persia: Warrior Within, Dynasty Warriors, Blue Dragon, and DuckTales: Remastered.
“Old Categories for New Media: Rethinking Music Videogame Organology”
Michael Austin (Howard University)
Guitar Hero (2005) and Rock Band (2007) are, of course, the most famous specimens of the “music videogame” genre. Groundbreaking scholarship on the ways in which these games influence players to learn to play “real” instruments can be found in several books, book chapters, conference presentations, and academic and journalistic articles. Scholars have also addressed many other issues surrounding these games, including their potential for genuine music-making. While other music games have received far less critical attention, these issues apply equally to them, and most investigations of music games likewise center on the ways in which players interact with the music in the game.
The most famous musical instrument classification system, the Hornbostel-Sachs system, categorizes instruments based on the ways they are played (struck, blown, plucked, shaken, etc.), i.e., how musicians interact with them; likewise, music games are classified based on player interaction and gameplay. For example, rhythm games (such as Guitar Hero and Rock Band) require players to act in rhythm with the game’s music, and music memory games challenge a player’s musical memory in gameplay. Even if this is the “best” way to classify these games, is it the only way? Can we really even make music with music video games, and if so, how? Are music games virtual simulations of “real” musical instruments, or real instruments in their own right? What can we learn by rethinking putative categories and taxonomies? This essay addresses these questions by challenging the ways we currently classify music videogames.
“Analyzing Narrative in Video Game Music: Topic Theory and Modular Design”
William Ayers (University of Cincinnati)
Examining the narrative structure of music in video games is often a challenge due to the dynamic quality of the medium. Cues are frequently triggered by player actions, providing a degree of unpredictability to the overall musical design. Recent studies in video game music have provided tools for analyzing this open-ended structure. Elizabeth Medina-Gray’s work on modular design in video game music offers a rigorous system for examining the various combinations of musical modules. She uses this system to examine narrative elements in Kingdom Hearts, The Walking Dead, and Mass Effect 2 (among others). Additionally, Winifred Phillips (2014) considers the narrative functionality of certain musical elements from a composer’s perspective, noting the dynamic nature of stinger cues and their purpose in gameplay.
Though these studies provide a strong foundation, many aspects of narrative structure in video game music have not yet been explored within the dynamic/modular framework. In this presentation I will expand this modular methodology to incorporate the materials of topic theory, specifically considering examples from the Arkham series of video games and similar titles. Musical modules in the Arkham series are often coded with specific topical references, generally aligned with the Batman character himself, e.g., the martial or macabre topics described in Donnelly (1998) and Young (2013). By analyzing the topical content of interacting modules as they are triggered by a player’s actions, we can observe an emergent musical narrative that materializes in conjunction with gameplay situations such as combat, exploration, and victory.
“Immersion into what? The sound world of Sid Meier’s Civilization V”
Karen Cook (University of Hartford)
The fifth installment in Sid Meier’s popular strategy game series, Civilization V, was released in 2010, quickly living up to expectations by winning Game of the Year. The series is famous for its premise, in which the player takes on the role of a particular historical figure and strives to lead his/her civilization to victory through a variety of military, technological, and diplomatic means. The series has also earned both critical and popular acclaim for its musical soundtracks; for example, the theme song to Civilization IV was the first composition written for a video game to win a Grammy Award (in 2011). Civilization V publicity reveals a self-awareness of the popularity of its predecessors’ music: it highlights the new edition’s sound world as one of its main attractions. Moreover, it specifically describes the aural components as “immersive,” suggesting that sound is a crucial component of the series’ prized “just one more turn” gameplay. Sound is a primary factor in player engagement throughout the Civilization series, yet the sound world in Civilization V moves beyond that of its most immediate predecessor in several key ways: most notably, it now provides a more musically homogeneous and culturally distinct soundtrack for each of its eighteen civilizations. In this paper, I investigate how this new approach to the soundtrack provides the heightened sense of immersion that the game designers desire, but simultaneously raises questions about the (re)production of cultural stereotypes in music.
“Lost (and Found) In Your Head: The Significance of Binaural Diegetic Audio in The Nightjar”
Thomas H. Doughty (University of Florida)
Frances Dyson (2009), in Sounding New Media, describes sound as the perfect immersive medium: “Three dimensional, interactive and synesthetic, perceived in the here and now of an embodied space.” Game sound has a distinctive quality that advanced digital imaging, in spite of all its dynamically rendered high-resolution art, cannot replicate. Game sound is experienced internally and can evoke perceptible realism, sonically and spatially. Specifically, binaural audio allows the player to perceive the diegetic sounds encompassing the game’s avatar as their own. Binaural/3D audio engines have been used experimentally for many years, most notably by Char Davies, who used interactive 3D sound for her virtual reality art installation Osmose (1995). Binaural audio, however, has been used almost exclusively by audio-only games. The most commercially successful of these, developed by Somethin’ Else, are the Papa Sangre series, The Nightjar, and Audio Defense: Zombie Arena. In The Nightjar, a narrative-driven mobile app played on headphones, the player learns that they have been stranded on the spaceship Nightjar and must navigate its maze to safety using only audio cues, sound events, and the voice of the guide (voiced by actor Benedict Cumberbatch). To succeed, the player must orient themselves within the sound space of the Nightjar. The significance of The Nightjar and other recent audio-only games lies in how binaural audio connects the player to what Salen and Zimmerman (2003) call the “immersive fallacy”: the illusion of spatial placement inside a digitally created environment.
“Navigating the Uncanny Musical Valley: Red Dead Redemption, Ni no Kuni, and the Dangers of Cinematic Game Scores”
William Gibbons (Texas Christian University)
Many recent video games strive toward an “interactive film” model, a trend epitomized in such games as Red Dead Redemption (2010) and Ni no Kuni: Wrath of the White Witch (2010/2011), both of which explicitly imitate cinematic models in their narratives, visual presentation, and musical scores. Red Dead Redemption is an homage to classic film Westerns, and its aleatoric score by composers Bill Elm and Woody Jackson evokes those films in both orchestration and tonal language. Ni no Kuni is a collaboration between Level-5 Games and the animation collective Studio Ghibli, with music by Joe Hisaishi, who scored such Ghibli films as Spirited Away (2001) and Howl’s Moving Castle (2004).
In both instances, however, I argue that the emulation of cinematic scoring ultimately reveals the illusory nature of its own mechanics. This revelation of artificiality creates a disjunction that, once apparent, risks alienating players. Videogame designers have long been plagued by the “uncanny valley”: the “almost-but-not-quite real” point at which increasingly lifelike digital representations of humans become disturbing to players, uncomfortably reminding them of the unreality of what they see, reinforcing the absence of “liveness,” and undercutting the potential for emotional connection and dramatic impact. Using Red Dead Redemption and Ni no Kuni as case studies, I make the case for a similar “uncanny musical valley” that occurs when videogame music too closely approximates cinematic musical styles. In doing so I engage with and expand the growing body of scholarship exploring the complex relationship between film and videogame scores.
“Intersections of Musical Performance and Play in Video Games”
Julianne Grasso (University of Chicago)
When Mario ducks behind a platform and finds a secret room in a level of Nintendo’s Super Mario Bros. 3 (1988), his friend Toad offers him a treasure chest with a clue to the contents within: “One toot on this whistle will send you to a far away land!” Indeed, with Mario’s own Zauberflöte of sorts, the player gains the power to warp among different worlds in the universe of the game. Super Mario Bros. 3 is, of course, just one example of a game in which the player can take on some role, however small, in a performance of music. On the other end of the spectrum, games like the Rock Band series (Harmonix, 2007) are built entirely on the gamification of musical performance. Yet both of these disparate examples rely on states of “performativity” as tools for ludic and narrative engagement, whether within a magical world or in front of a virtual audience.
This paper explores implementations of musical performance across multiple genres of video games in which music becomes a medium of interactivity in virtual play. Drawing on Kiri Miller’s concept of schizophonic performance, I discuss the modes of engagement and embodiment created by various intersections of game-play and music-play. Finally, I bring virtual performance to bear on “real-world” video game music concerts, adding another intersection of performance to the realms of memory, nostalgia, and community.
“Designing Game-Centric Academic Curricula for Procedural Audio and Music”
Robert Hamilton (Stanford University)
The use of procedural technologies for the generation and control of real-time game music and audio systems has in recent times become both more possible and more prevalent. Increased industry exploration and adoption of real-time audio engines like libpd, coupled with the maturity of abstract audio languages such as FAUST, are driving new interactive musical possibilities. As such, a distinct need is emerging for educators to codify next-generation techniques and tools into coherent curricula in early support of future generations of sound designers and composers. This paper details a multi-tiered set of technologies and workflows appropriate for the introduction and exploration of beginner, intermediate, and advanced procedural audio and music techniques. Specific systems and workflows for rapid game-audio prototyping, real-time generative audio and music systems, as well as performance optimization through low-level code generation will be discussed.
“Compositional Techniques of Chiptune Music”
Christopher Hopkins (Long Island University)
The chiptune style of music used in video games from the 1970s to the 1990s is a product of its technology, the era of its popularity in video games, and its composers. My doctoral dissertation, currently awaiting approval, “Chiptune Music: An Exploration of Compositional Techniques in Sunsoft Games for the Nintendo Entertainment System and Famicom from 1988-1992,” explores this tradition.
The inner workings of the audio processing unit of the NES and Famicom suggest audio design choices and limitations for composition. Sunsoft composers Naoki Kodaka and Masashi Kageyama composed soundtracks in conjunction with sound programmers, taking advantage of the sound chips’ strengths and establishing a common set of compositional techniques now classified as the chiptune style. The decisions made and techniques preferred comprise the chiptune style used in modern compositions in games and other media.
Over twenty compositional techniques are identified and confirmed through eight original soundtrack transcriptions of Sunsoft games. Of particular interest is the use of the pulse-code-modulation channel to create a melodic instrument with a limited set of samples. Analysis of the audio engines in these games shows how Sunsoft and its sound programmers improved the musical palette.
New interviews with Neil Baldwin, Alberto José González, and Masashi Kageyama reveal the process by which NES and Famicom composers translated musical ideas into instructions for the sound chip. Further interviews with Troupe Gammage and members of the NESDev community identify the role chiptune music plays in retro-inspired games and the wider listening community.
“Lighter Than Air: A Return to Columbia”
Enoch Jacobus (Independent Scholar)
At last year’s North American Conference on Video Game Music, BioShock Infinite was a popular topic of discussion. But those discussions centered on the diegetic use of licensed (usually anachronistic) music reinvented, or at least reinterpreted, within the context of the game. This presentation seeks to open a deeper discussion of the original music Garry Schyman composed for BioShock Infinite.
The track “Lighter than Air” is a particularly intriguing point of departure from a music-theoretical perspective as well as from the broader perspective of one’s musical philosophy. I present an analysis to demonstrate, despite Schyman’s own protestations, the links between “Lighter than Air” and the aesthetic of early twentieth-century composers such as Charles Ives. I say “protestations” because Schyman claims to have eschewed being influenced by music contemporaneous with the game’s setting. However, it is my belief that his music (and this track in particular) betrays a certain inescapable correlation to music just after the turn of the twentieth century.
“‘Sounding Like the Piece’: How Rhythm Games Can Help Us Teach Rhythmic Reduction”
John Knoedler (The University of Michigan)
At first glance, the various difficulty levels in games like Guitar Hero and Rock Band appear to be suitable models for rhythmic reduction, but closer inspection shows they have more in common with the problems students face than with the solutions they seek. Peter Schultz, in his discussion of music theory in music games, states: “[T]he developers employ a kind of reductive analysis to select certain notes as more important than others, allowing these important notes to percolate down into the easier difficulty levels, so that only the most structurally important notes make it down into Easy mode” (2007). This need not be true, however, as game developers may have a different definition of “important” than analysts.
Any rhythmic reduction has three main goals: to sound like the piece, to preserve its essentials, and to fit the prescribed environment. For rhythm games, fit means hitting a target number of events per second (the difficulty level), which takes priority over preserving structural essentials in favor of simply “sounding like the piece.” These charts provide a useful foil for students, who often struggle with issues of salience versus primacy, level of detail, and maintaining an appropriate level of abstraction.
In this presentation, I will examine several versions of “Carry On My Wayward Son,” considering how the designers set out to preserve its essentials, showing the parallels with student pitfalls, and wrapping up with a discussion of how students can develop their own analytical reductions that are both structurally minded and still “sound like the piece.”
“Hitting Reset: Reception, Replay Value, and the Creative Process of Video Game Cover Music”
Kathleen Kuo (Indiana University)
Studies of video game music tend to focus on the analysis of the original source music or the supporting role that music plays in the context of a game. Less attention has been paid to the relationships between individuals and the original audio, and, in particular, the rearrangements and remixes created by fans and amateur musicians. In this paper I consider alternative methods and approaches to analyzing game music: more specifically, how can ethnomusicological perspectives enrich and contribute to ludomusicological studies? Drawing from three years of ethnographic fieldwork conducted with video game cover bands and orchestra concerts, I discuss the creative process, output, and reception of three stylistically different bands (Descendants of Erdrick, The World is Square, and Moogleplex). One goal of this paper is to highlight the fan networks associated with video game cover music; another is to shed light upon the creative process behind these covers. In order to accomplish these goals, I ask: How do video game cover bands navigate digital and physical networks of production in order to create and release their own unique covers? How much leeway do they have when it comes to the reinterpretation of original soundtracks? Finally, what strategies do these musicians use to keep the replay value of their music high within the game music community? By addressing these questions I hope not only to illuminate fan expressions and experiences and their relevance to game music studies, but also to expand the methods and scope available to researchers interested in this area.
“Teaching the Soundtrack in a Video Game Music Class”
Neil Lerner and Michael DeSimone (Davidson College)
With the prospect of a growing number of classes studying video game music looking ever brighter, those of us working in this new field find ourselves with relatively few pedagogical models. Yet models for teaching the soundtrack in film (pace ludologists) can offer rich possibilities for use in video game studies. Recently, in “Visual Representation as an Analytical Tool” (The Oxford Handbook of Film Music Studies, ed. Neumeyer, 2013), the pioneering film sound scholar Rick Altman provides what he calls a “thumbnail history of film sound diagrams,” surveying graphs by Eisenstein, Schaeffer, Manvell & Huntley, Gorbman, and Beck. In addition to offering observations about the relative strengths and weaknesses of various techniques (some relying on purely graphic representations, some on traditional musical notation, and some on images of sound waves), Altman describes his experience asking a class to generate their own novel ways of graphing film sound. I used Altman’s essay and example in a course I recently taught on video game music, and this presentation will relay what did and did not work in the assignment. It allowed students without much formal musical training (over half of this class at a liberal arts college) to create some insightful graphic analyses. The challenge of generating these graphs engaged all of the students in what became an ongoing exercise in problem-solving and creative thinking. Examining some of these graphic analyses will, I hope, prompt further discussion of video game music pedagogy.
“Sound Effects as Music (or Not): Earcons and Auditory Icons in Video Games”
Elizabeth Medina-Gray (Oberlin College)
Defining “music” in video games may at first seem to be unproblematic. Familiarly, music involves sustained organization of sound across time according to structures of pitch and/or rhythm. The ongoing musical scores that commonly accompany gameplay therefore most clearly typify “video game music.” However, many games also produce sound effects—brief sounds tied to gameplay actions or events through what Karen Collins has called kinesonic synchresis—that contain musical elements such as pitch or rhythm. Indeed, several authors have pointed out that sound effects (even those without musical elements) can become part of a game’s music in particular contexts (Reale 2014; Mundhenke 2013). Among all the rich and varied sounds that video games produce, then, which sounds might we consider to be “music,” and what implications might this designation have for gameplay?
This paper explores the permeable boundary between musical and non-musical sound in video games through a focus on sound effects. Drawing on a distinction from the field of Human-Computer Interaction between auditory icons—sounds with pre-existing real-world associations—and earcons—abstract sequences of tones—this paper first proposes a framework for considering individual sounds as either musical or non-musical. Next, this paper considers sound effects in their wider context, and especially together with a game’s musical score, using an analytical method for gauging smoothness between layers of the soundtrack (Medina-Gray 2014). With this view, a particularly strong agreement between score and sound effect can pull the sound effect into the realm of “music,” while disagreement casts the sound effect more clearly as non-musical sound. This paper treats examples from various games to suggest some ways in which such distinctions might impact gameplay.
“On Silence in Video Games”
Dana Plank-Blasko (The Ohio State University)
Scholars of sound often overlook silence. It is our antithesis as beings in motion: living bodies whose heartbeats create a subtle metronome behind all that we perceive. Yet silence can be pregnant, peaceful, unsettling. Silence is an art of context.
In this paper I examine the impact of four types of silence on video game soundscapes: nondiegetic, structural, psychological, and potential. True nondiegetic silence is rare, representing death or profound isolation. Structural silence allows ambient sound to become salient as music and dialogue cease, creating environmental depth or dramatic tension. Psychological silence is symbolic, created by ambient music that a player stops consciously perceiving. Stasis lulls the player into contemplative inwardness; minimalist textures create discursive spaces for the player to insert herself into emotionally complex game narratives. Potential silence represents a broad range of unheard sounds in games, such as a casual gamer’s muted cell phone. Some sounds are relegated to a silence of the circuits, never heard because the player does not (or cannot) trigger them in the course of play. Potential silence can also describe the experience of a hearing-impaired player, missing vital cues in popular games that lack accessibility modifications. Unheard sounds represent unrealized potential in the game, and thus an incomplete (and potentially unsatisfying) experience.
Ludomusicologists investigate soundscapes, the aural stimuli that bring the game to life: Link’s swishing sword or the contours of Terra’s leitmotif. Perhaps we can also learn to listen to the spaces between, when silence speaks of perfect peace or impending danger.
“Additional Modes of Interactivity Decrease Inhibitors to Flow”
Joshua Sites (Indiana University)
Flow is a concept from positive psychology, first described by Csikszentmihalyi, that denotes intense focus on the task at hand. It is a pleasant experience in which the senses of self and time are lost, akin to the colloquial notion of “being in the zone.” Generative music systems produce music that is ever-changing: they take non-musical inputs and output music. These inputs can be based on any type of data, including video game states and player interactions within video games. This paper connects the experience of flow to generative music systems in video games through a review of the available literature on human-computer interaction, generative music systems, flow, flow in video games, and flow in music listening. Finally, it introduces the theory of Additional Modes of Interactivity Decrease Inhibitors to Flow (AMIDIF), which states that linear systems embedded in an otherwise interactive medium are distractors that ultimately get in the way of the user’s experience of flow. Examples of hypotheses arising from AMIDIF are presented, and some existing theories and effects are reinterpreted from this new perspective.
“Teaching Video Game Sound Design with Pure Data”
Paul Turowski (University of Virginia)
A practical understanding of basic sound design is useful for creating and understanding music in any medium. Sound design is especially important in the realm of video games, where sonic interactions can be complex and unpredictable. For game music students, establishing a strong foundation in sound design can aid composition, promote critical listening, and inform perspectives of historical precedents, such as the sonic output of arcade machines and early home consoles.
During Summer 2014, I taught a course on the history, theory, and practice of video game music in which students explored sound with Pure Data (Pd), an open source audio programming environment. Pd has many beneficial features for teaching sound design to beginners, including an intuitive graphical interface, low-level digital signal processing, and an extensible structure. Its flexibility facilitates the creation of custom compositional and educational utilities, such as a dynamic mixer or an adaptive iMUSE-style playback engine, without needing extensive programming experience. Pd patches may even be embedded within games on most platforms, allowing rapid cross-modal prototyping. In my presentation, I will discuss various aspects of using tools like Pd to teach sound design in game music courses, including materials and methods as well as technical and philosophical considerations for future development.
“Displacing Nostalgia: Medium-Specific Compositional Strategies in Motoi Sakuraba’s Soundtrack for Golden Sun”
Oren Vinogradov (The University of North Carolina at Chapel Hill)
Since the 2001 release of Golden Sun, American and Japanese reviews have unanimously praised the game for its soundtrack, often described as “nostalgic” or “classic.” On the Game Boy Advance, a handheld console teeming with hasty software ports from Nintendo’s 1991 Super Nintendo home console and lambasted for its cheap speakers, Motoi Sakuraba’s music for Golden Sun stands out for what reviewers imply is a more authentic sonic emulation of ’90s role-playing experiences. Considering the vast differences in hardware and the perceived failures to translate musical materials between consoles, the praise garnered by Golden Sun’s soundtrack appears unusual.
My study posits that Sakuraba’s compositions for Golden Sun succeeded by deliberately acknowledging the Game Boy Advance’s hardware failures. Due to the smaller number of digital-audio channels available, Sakuraba could not have composed in the same pseudo-orchestral manner he used for Super Nintendo fantasy role-playing games. By comparing his previous work on Tales of Phantasia (1995), its 2003 GBA port, and Golden Sun, I suggest that Sakuraba focused on manipulating the timbre of the GBA’s virtual instruments to resemble the sonorities of his ’90s compositions. Rather than attempting to recreate his previous style of harmonization or digital instrumentation, Sakuraba references a Super Nintendo soundworld, layering memories of older, more richly textured soundtracks onto the music of Golden Sun. Based on the unanimity of its reception across time, I argue that Golden Sun’s reception is built more on Sakuraba’s successful displacement of audience nostalgia than on any melodic or harmonic element of his handheld compositions.
“Video Games in the Digital Music Trap”
David Arditi (University of Texas at Arlington)
When Grand Theft Auto V (GTAV) was released in 2013, it set records by being the largest first day release of an entertainment product hauling in $800 million in sales. Notably, GTAV included 240 music tracks in the game. As the recording industry responded to the decline of CD sales, major record labels began to seek-out new ways to generate revenue from the sale of music. The overall strategy has been to expand consumption beyond album and single sales by deploying copyrights. I call the subsequent business model the digital music trap. Major record labels and their artists benefited from synchronization licenses on the blockbuster video game. Whereas the recording industry viewed video games as competition for consumers in the early 2000s, the major labels have developed a way to use gaming to create new revenue.
This paper will explore the role that synchronization licenses have played for major record labels to generate revenue following the decline of CD sales. It will focus specifically on ways that record labels deploy their copyrighted music in GATV and the music rhythm games genre (e.g. Guitar Hero and Rock Band). I will develop a critical political economy of copyrighted music in popular video games. I argue that video games act both as a means for record labels to exploit copyrights and as a source of marketing for the sale of music in other media.
“Music Appreciation and the Mario Bros.: The Pedagogy of Musical Hermeneutics”
Stephen Armstrong (Michigan State University)
In the music appreciation classroom, instructors face students who have an exhaustive intuitive knowledge of tonal procedures, though they cannot articulate the facts of tonal construction and socially-constructed musical meaning. Because today’s music is so well-produced and abundant, music itself becomes difficult to approach with a critical ear. Despite these difficulties, instructors cannot solve the underlying problem simply by exposing students to more and better music; students are already inundated with musical information. Rather, the challenge is to find ways of distancing students from musical experience in surprising and meaningful ways.
In taking up this challenge, I argue that the music of pre-millennial video games are an ideal medium for teaching both the elements of music and the hermeneutics of musical devices. Apart from its cultural currency and undeniable shock value, game music is useful precisely because of its technical limitations: programmers had to maximize their limited materials to generate compelling musical frameworks for accompanying play. In developing my arguments, I include numerous case studies from my own experience as an instructor in music appreciation, showing how an eclectic blend of pre-millennial game music can complement the deployment of film music and the classics. By demonstrating that the same sonic principles govern everything from symphonies to chiptunes, instructors can help students gain a deeper appreciation for the fundamentals of musical structure.
“From Attunement to Interference: A Typology of Musical Intertextuality in Video Games.”
Dominic Arsenault and Andréane Morin-Simard (University of Montreal)
This paper maps out the differing kinds of intertextual uses of music in video games, building a typology from the standpoint of their reception and affective impacts on the player. We first build upon Philip Tagg’s concept of codal interference (2013), supplemented by Sébastien Babeux (2004, 2007) and Andréane Morin-Simard’s larger work on interference to open up a discussion on the various types of musical intertextual references, ranging from the inclusion and recognition of leitmotifs providing an intended aesthetic payoff increasing the player’s in-game attunement (Odin 2000) to the various types of interference, where a player’s reaction to the game’s music interferes with the music’s perceived aesthetic intent. This primary framework is complemented with genre theory and aesthetics (Arsenault, Eco, Jauss, Roberts) to integrate the concepts of horizon of expectations and intentio operis, which provide the tools necessary for qualifying the semiotic reception of music and its intertextual effects. We briefly demonstrate the various types of intertextual references and dwell mainly on interference, categorizing the phenomenon in its many dimensions according to the source (the musical intertext can be internal/external), its semiotic channel (codal/aural), its role and integration into gameplay (actional/affective), and horizon (serial/generic). Examples will be produced from a variety of games covering a wide historical spectrum (roughly 1983-2013), including Donkey Kong Country, The Legend of Zelda: Ocarina of Time, Prince of Persia: Warrior Within, Dynasty Warriors, Blue Dragon, and DuckTales: Remastered."
“Old Categories for New Media: Rethinking Music Videogame Organology”
Michael Austin (Howard University)
Guitar Hero (2005) and Rock Band (2007) are, of course, the most famous specimens of the “music videogame” genre. Groundbreaking scholarship on these games regarding the ways in which they influence players to learn to play “real” instruments can be found in several books, book chapters, conference presentations, and academic and journalistic articles. Scholars have also addressed many other issues surrounding these games regarding their potential for genuine music-making. While other music games have received exponentially less critical attention, these issues apply equally to them, and most investigations of music games also center around the ways in which players interact with the music in the game.
The most famous musical instrument classification system, the Hornbostel-Sachs System, categorizes instruments based on the ways they are played (struck, blown, plucked, shaken, etc.), i.e., how musicians interact with them; likewise, music games are classified based on player interaction and gameplay. For example, rhythm games (such as Guitar Hero and Rock Band) require players act in rhythm with the game’s music, and music memory games challenge a player’s musical memory in gameplay. Even if this is the “best” way to classify these games, is it the only way? Can we really even make music with music video games, and if so, how? Are music games virtual simulations of “real” musical instruments or real instruments in their own right? What can we learn by rethinking putative categories and taxonomies? This essay addresses these questions by challenging the ways we currently classify music videogames.
“Analyzing Narrative in Video Game Music: Topic Theory and Modular Design.”
William Ayers (University of Cincinnati)
Examining the narrative structure of music in video games is often a challenge due to the dynamic quality of the medium. Cues are frequently triggered by player actions, providing a degree of unpredictability to the overall musical design. Recent studies in video game music have provided tools for analyzing this open-ended structure. Elizabeth Medina-Gray’s work on modular design in video game music offers a rigorous system for examining the various combinations of musical modules. She uses this system to examine narrative elements in Kingdom Hearts, The Walking Dead, and Mass Effect 2 (among others). Additionally, Winifred Phillips (2014) considers the narrative functionality of certain musical elements from a composer’s perspective, noting the dynamic nature of stinger cues and their purpose in gameplay.
Though these studies provide a strong foundation, many aspects of narrative structure in video game music have not yet been explored in the dynamic/modular framework. In this presentation I will expand this modular methodology to deal with the materials of topic theory, specifically considering examples from the Arkham series of video games and other similar games. Musical modules in the Arkham series are often coded with specific topical references, generally aligning with the Batman character himself, e.g. martial or macabre topics given in Donnelly (1998) and Young (2013). By analyzing the topical content of interacting modules as they are triggered by a player’s actions, we can observe an emergent musical narrative which materializes in conjunction with gameplay situations such as combat, exploration, and victory.
“Immersion into what? The sound world of Sid Meier’s Civilization V”
Karen Cook (University of Hartford)
The fifth installment in Sid Meier’s popular strategy game series, Civilization V, was released in 2010, quickly living up to expectations by winning Game of the Year. The series is famous for its premise, in which the player takes on the role of a particular historical figure and strives to lead his/her civilization to victory through a variety of military, technological, and diplomatic means. The series has also earned both critical and popular acclaim for its musical soundtracks; for example, the theme song to Civilization IV was the first composition written for a video game to win a Grammy Award (in 2011). Civilization V publicity reveals a self-awareness of the popularity of its predecessors’ music: it highlights the new edition’s sound world as one of its main attractions. Moreover, it specifically describes the aural components as ‘immersive,’ suggesting that sound is a crucial component of the series’ prized “just one more turn” gameplay. Sound is a primary factor in player engagement throughout the Civilization series, yet the sound world in Civilization V moves beyond that of its most immediate predecessor in several key ways: most notably, it now provides a more musically homogeneous and culturally distinct soundtrack for each of its eighteen civilizations. In this paper, I investigate how this new approach to the soundtrack provides the heightened sense of immersion that the game designers desire, but simultaneously raises questions about the (re)production of cultural stereotypes in music.
“Lost (and Found) In Your Head: The Significance of Binaural Diegetic Audio in The Nightjar”
Thomas H. Doughty (University of Florida)
Frances Dyson (2009), in Sounding New Media, describes sound as the perfect immersive medium. “Three dimensional, interactive and synesthetic, perceived in the here and now of an embodied space.” Game sound has a distinctive quality that advanced digital imaging, in spite of all its dynamically rendered high resolution art, cannot replicate. Game sound is experienced internally and can evoke perceptible realism, sonically and spatially. Specifically, binaural audio allows the player to perceive the diegetic sounds encompassing the game’s avatar as their own. Binaural/3D audio engines have been used experimentally for many years, most notably by Char Davis who used interactive 3D sound for her virtual reality art installation Osmose (1995). Binaural audio, however has been used almost exclusively by audio-only games. The most commercially successful of these, developed by Somethin’ Else, are the Papa Sangre series, The Nightjar, and Audio Defense: Zombie Arena. In The Nightjar, a narrative-driven mobile app that is played on headphones, the player learns that they have been stranded on the spaceship Nightjar and must navigate the maze of the spaceship to safety using only audio cues, sound events, and the voice of the guide (voiced by actor Benedict Cumberbatch). In order to be successful the player must orient themselves within the sound space of the Nightjar. The significance of The Nightjar and other recent audio-only games is with how binaural audio acts to connect the player to what Salen and Zimmerman(2003) call the “immersive fallacy,” the illusion of spatial placement inside a digitally created environment.
“Navigating the Uncanny Musical Valley: Red Dead Redemption, Ni no Kuni, and the Dangers of Cinematic Game Scores”
William Gibbons (Texas Christian University)
Many recent video games strive towards an “interactive film” model, a trend epitomized in such games as Red Dead Redemption (2010) and Ni no Kuni: Wrath of the White Witch (2010/2011), both of which explicitly imitate cinematic models in their narratives, visual presentation, and musical scores. Red Dead Redemption is an homage to classic film Westerns, and its aleatoric score by composers Bill Elm and Woody Jackson evokes those films in both orchestration and tonal language. Ni No Kuni is a collaboration between Level-5 Games and the animation collective Studio Ghibli—with music by Joe Hisaishi, who scored such Ghibli films as Spirited Away (2001) and Howl’s Moving Castle (2004).
In both instances, however, I argue that the emulation of cinematic scoring ultimately reveals the illusory nature of its mechanics. The revelation of artificiality creates a disjunction that, once apparent, risks alienating players. Videogame designers have long been plagued by the “uncanny valley”: the “almost-but-not-quite real” point at which increasingly lifelike digital representations of humans becoming disturbing to players, uncomfortably reminding them of the unreality of what they see, reinforcing the absence of “liveness” and undercutting the potential for emotional connection and dramatic impact. Using Red Dead Redemption and Ni no Kuni as case studies, I make the case for the existence of a similar “uncanny musical valley” that occurs when videogame music too closely approximates cinematic musical styles. In doing so I engage with and expand the growing amount of scholarship exploring the complex relationship between film and videogame scores.
“Intersections of Musical Performance and Play in Video Games”
Julianne Grasso (University of Chicago)
When Mario ducks behind a platform and finds a secret room in a level of Nintendo’s Super Mario Bros. 3 (1988), his friend Toad offers him a treasure chest with a clue to the contents within: “One toot on this whistle will send you to a far away land!” Indeed, with Mario’s own Zauberflöte of sorts, the player gains the power to warp among different worlds in the universe of the game. Super Mario Bros. 3 is, of course, just one example of a game in which the player can take on some role, however small, in a performance of music. On the other end of the spectrum, games like the Rock Band series (Harmonix, 2007) are built entirely on the gamification of musical performance. Yet both of these disparate examples rely on states of “performativity” as tools for ludic and narrative engagement, whether within a magical world or in front of a virtual audience.
This paper explores implementations of musical performance across multiple genres of video games in which music becomes a medium of interactivity in virtual play. Drawing from Kiri Miller’s concept of schizophonic performance, I discuss the modes of engagement and embodiment created by various intersections of game-play and musicplay. Finally, I bring virtual performance to bear on “real-world” video game music concerts, adding another intersection of performance to the realms of memory, nostalgia, and community.
“Designing Game-Centric Academic Curricula for Procedural Audio and Music”
Robert Hamilton (Stanford University)
The use of procedural technologies for the generation and control of realtime game music and audio systems has in recent times become both more possible and prevalent. Increased industry exploration and adoption of realtime audio engines like libPD coupled with the maturity of abstract audio languages such as FAUST are driving new interactive musical possibilities. As such a distinct need is emerging for educators to codify nextgeneration techniques and tools into coherent curricula in early support of future generations of sound designers and composers. This paper details a multitiered set of technologies and workflows appropriate for the introduction and exploration of beginner, intermediate and advanced procedural audio and music techniques. Specific systems and workflows for rapid gameaudio prototyping, realtime generative audio and music systems, as well as performance optimization through lowlevel code generation will be discussed.
“Compositional Techniques of Chiptune Music”
Christopher Hopkins (Long Island University)
The style of chiptune music used in video games from the 70s to the 90s is a product of technology, the era of its popularity in video games, and the composers. I wrote a doctoral dissertation currently awaiting approval titled "Chiptune Music: An Exploration of Compositional Techniques in Sunsoft Games for the Nintendo Entertainment System and Famicom from 1988-1992" that explores the tradition of chiptunes.
The inner workings of the audio processing unit of the NES and Famicom suggest audio design choices and limitations for composition. Sunsoft composers Naoki Kotaka and Masashi Kageyama composed soundtracks in conjunction with sound programmers, taking advantage of the soundchips' strengths and establishing a common set of compositional techniques that are classified as the chiptune style. The decisions made and techniques preferred comprise the chiptune style used in modern compositions in games and other media.
Over twenty compositional techniques are identified and confirmed through eight original soundtrack transcriptions of Sunsoft games. Of particular interest is the use of the pulse-code-modulation channel to create a melodic instrument with a limited set of samples. Analysis of the audio engines in these games shows how Sunsoft and its sound programmers improved the musical palette.
New interviews with Neil Baldwin, Alberto Jose Gonzalez, and Masashi Kageyama reveal the process by which NES and Famicom composers translated musical ideas to instructions for the sound chip. Further interviews with Troupe Gammage and members of the NESDev community identify the role chiptune music has in retro-inspired games and the greater listening community.
“Lighter Than Air: A Return to Columbia”
Enoch Jacobus (Independent Scholar)
At last year’s North American Conference on Video Game Music, Bioshock Infinite seemed a popular topic of discussion. But those discussions centered on the diegetic use of licensed (usually anachronistic) music reinvented, or at least reinterpreted within the context of the game. This presentation seeks to open a deeper discussion of the original music composed for Bioshock Infinite by Garry Schyman.
The track “Lighter than Air” is a particularly intriguing point of departure from a music-theoretical perspective as well as from the broader perspective of one’s musical philosophy. I present an analysis to demonstrate, despite Schyman’s own protestations, the links between “Lighter than Air” and the aesthetic of early twentieth-century composers such as Charles Ives. I say “protestations” because Schyman claims to have eschewed being influenced by music contemporaneous with the game’s setting. However, it is my belief that his music (and this track in particular) betrays a certain inescapable correlation to music just after the turn of the twentieth century.
“ ‘Sounding Like the Piece’: How Rhythm Games Can Help Us Teach Rhythmic Reduction”
John Knoedler (The University of Michigan)
At first glance, the various difficulty levels in games like Guitar Hero and Rock Band appear to be suitable models for rhythmic reduction, but closer inspection shows they have more in common with the problems students face than with the solutions they seek. Peter Schultz, in his discussion of music theory in music games, states: “[T]he developers employ a kind of reductive analysis to select certain notes as more important than others, allowing these important notes to percolate down into the easier difficulty levels, so that only the most structurally important notes make it down into Easy mode” (2007). This need not be true, however, as game developers may have a different definition of “important” than analysts.
Any rhythmic reduction has three main goals: sound like the piece, preserve its essentials, and fit the prescribed environment. For rhythm games, fit involves events per second (difficulty level) over a preservation of structural essentials, focusing instead on “sounding like the piece.” These charts provide a nice foil for students, who often struggle with issues of saliency versus primacy, level of detail, and maintaining an appropriate level of abstraction.
In this presentation, I will examine several versions of “Carry On My Wayward Son,” considering how the designers set out to preserve their essentials, showing the parallels with student pitfalls, and wrap up with a discussion of how students can develop their own analytical reductions that are both structurally minded and still “sound like the piece.”
“Hitting Reset: Reception, Replay Value, and the Creative Process of Video Game Cover Music”
Kathleen Kuo (Indiana University)
Studies of video game music tend to focus on the analysis of the original source music or the supporting role that music plays in the context of a game. Less attention has been paid to the relationships between individuals and the original audio, and, in particular, the rearrangements and remixes created by fans and amateur musicians. In this paper I consider alternative methods and approaches to analyzing game music: more specifically, how can ethnomusicological perspectives enrich and contribute to ludomusicological studies? Drawing from three years of ethnographic fieldwork conducted with video game cover bands and orchestra concerts, I discuss the creative process, output, and reception of three stylistically different bands (Descendants of Erdrick, The World is Square and Moogleplex). One goal of this paper is to highlight the fan networks associated with video game cover music; another is to shed light upon the creative process behind these covers. In order to accomplish these goals, I ask: How do video game cover bands navigate digital and physical networks of production in order to create and release their own unique covers? How much leeway do they have when it comes to the reinterpretation of original soundtracks? Finally, what strategies do these musicians use to keep the replay value of their music high within the game music community? By addressing these questions I hope to not only illuminate fan expressions and experiences and their relevance to game music studies, but also expand the methods and scope available to researchers interested in this area.
“Teaching the Soundtrack in a Video Game Music Class”
Neil Lerner and Michael DeSimone (Davidson College)
With the possibility of a growing number of classes studying video game music looking ever brighter, those of us working in this new field find ourselves with relatively few pedagogical models. Yet models for teaching the soundtrack in film--pace ludologists—can offer rich possibilities for use in video game studies. Recently in “Visual Representation as an Analytical Tool” (The Oxford Handbook of Film Music Studies, ed. Neumeyer, 2013), the pioneering film sound scholar Rick Altman provides what he calls a “thumbnail history of film sound diagrams,” surveying graphs by Eisenstein, Schaeffer, Manvell & Huntley, Gorbman, and Beck. In addition to offering his observations about the relative strengths and weaknesses of various techniques (some of which rely on purely graphic representations, some of which use traditional musical notation, and some of which use images of the sound waves), Altman describes his experience asking a class to generate their own novel ways for graphing film sound. I used Altman’s essay and example in a course I recently taught on video game music, and this presentation will relay what did and did not work in this assignment. It allowed students without much formal musical training (over half of this class at a liberal arts college) to create some insightful graphic analyses. The challenge of having to generate these graphs engaged all of the students in what became an ongoing exercise in problem-solving and creative thinking. Examining some of these graphic analyses will hopefully prompt further discussion on video game music pedagogy.
“Sound Effects as Music (or Not): Earcons and Auditory Icons in Video Games”
Elizabeth Medina-Gray (Oberlin College)
Defining “music” in video games may at first seem to be unproblematic. Familiarly, music involves sustained organization of sound across time according to structures of pitch and/or rhythm. The ongoing musical scores that commonly accompany gameplay therefore most clearly typify “video game music.” However, many games also produce sound effects—brief sounds tied to gameplay actions or events through what Karen Collins has called kinesonic synchresis—that contain musical elements such as pitch or rhythm. Indeed, several authors have pointed out that sound effects (even those without musical elements) can become part of a game’s music in particular contexts (Reale 2014; Mundhenke 2013). Among all the rich and varied sounds that video games produce, then, which sounds might we consider to be “music,” and what implications might this designation have for gameplay?
This paper explores the permeable boundary between musical and non-musical sound in video games through a focus on sound effects. Drawing on a distinction from the field of Human-Computer Interaction between auditory icons (sounds with pre-existing real-world associations) and earcons (abstract sequences of tones), this paper first proposes a framework for considering individual sounds as either musical or non-musical. Next, this paper considers sound effects in their wider context, and especially together with a game’s musical score, using an analytical method for gauging smoothness between layers of the soundtrack (Medina-Gray 2014). With this view, a particularly strong agreement between score and sound effect can pull the sound effect into the realm of “music,” while disagreement casts the sound effect more clearly as non-musical sound. This paper treats examples from various games to suggest some ways in which such distinctions might impact gameplay.
“On Silence in Video Games”
Dana Plank-Blasko (The Ohio State University)
Scholars of sound often overlook silence. It is our antithesis as beings in motion: living bodies whose heartbeats create a subtle metronome behind all that we perceive. Yet silence can be pregnant, peaceful, unsettling. Silence is an art of context.
In this paper I examine the impact of four types of silence on video game soundscapes: nondiegetic, structural, psychological, and potential. True nondiegetic silence is rare, representing death or profound isolation. Structural silence allows ambient sound to become salient as music and dialogue cease, creating environmental depth or dramatic tension. Psychological silence is symbolic, created by ambient music that a player stops consciously perceiving. Stasis lulls the player into contemplative inwardness; minimalist textures create discursive spaces for the player to insert herself into emotionally complex game narratives. Potential silence represents a broad range of unheard sounds in games, such as a casual gamer’s muted cell phone. Some sounds are relegated to a silence of the circuits, never heard because the player does not (or cannot) trigger them in the course of play. Potential silence can also describe the experience of a hearing impaired player, missing vital cues in popular games that lack accessibility modifications. Unheard sounds represent unrealized potential in the game, and thus an incomplete (and potentially unsatisfying) experience.
Ludomusicologists investigate soundscapes, the aural stimuli that bring the game to life: Link’s swishing sword or the contours of Terra’s leitmotif. Perhaps we can also learn to listen to the spaces between, when silence speaks of perfect peace or impending danger.
“Additional Modes of Interactivity Decrease Inhibitors to Flow”
Joshua Sites (Indiana University)
Flow is a concept from positive psychology, first conceptualized by Csikszentmihalyi, that describes intense focus on the task at hand. It is a pleasant experience in which the senses of self and time are lost, similar to the colloquial notion of “being in the zone.” Generative music systems produce music that is ever-changing: they take nonmusical inputs and output music. These inputs can be based on any type of data, including video game states and player interactions within video games. This paper connects the experience of flow to generative music systems in video games through a review of the available literature on human/computer interaction, generative music systems, flow, flow in video games, and flow in music listening. Finally, it introduces the theory of Additional Modes of Interactivity Decrease Inhibitors to Flow (AMIDIF), which states that linear systems embedded in an otherwise interactive medium are distractors that ultimately get in the way of the user’s experience of flow. Examples of hypotheses arising from AMIDIF are presented, and some existing theories and effects are reinterpreted from this new perspective.
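[For readers unfamiliar with generative music systems, the following minimal Python sketch illustrates the basic idea the abstract describes: mapping nonmusical inputs (a game state) to musical output. It is not drawn from the paper; all names, mappings, and values are hypothetical.]

```python
# Illustrative sketch only (not from the paper): a toy generative
# music system that maps nonmusical inputs -- here, an invented
# game state -- to note events. All names and mappings are hypothetical.
import random

A_MINOR = [57, 59, 60, 62, 64, 65, 67]  # one octave, as MIDI note numbers

def generate_bar(game_state, rng=random.Random(0)):
    """Map a game state to one bar of note events."""
    # Lower health -> denser, more agitated rhythm.
    density = 4 if game_state["health"] < 0.5 else 2
    # Combat shifts the melody up an octave.
    offset = 12 if game_state["in_combat"] else 0
    return [{"pitch": rng.choice(A_MINOR) + offset,
             "duration": 4 / density}
            for _ in range(density)]

# Example: a wounded player in combat yields four short, high notes.
print(generate_bar({"health": 0.3, "in_combat": True}))
```

[Because the output is recomputed from live game state on every bar, the music is ever-changing in the sense the abstract intends, rather than a fixed linear cue.]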
“Teaching Video Game Sound Design with Pure Data”
Paul Turowski (University of Virginia)
A practical grasp of basic sound design is useful for creating and understanding music in any medium. Sound design is especially important in the realm of video games, where sonic interactions can be complex and unpredictable. For game music students, establishing a strong foundation in sound design can aid composition, promote critical listening, and inform perspectives on historical precedents, such as the sonic output of arcade machines and early home consoles.
During Summer 2014, I taught a course on the history, theory, and practice of video game music in which students explored sound with Pure Data (Pd), an open source audio programming environment. Pd has many beneficial features for teaching sound design to beginners, including an intuitive graphical interface, low-level digital signal processing, and an extensible structure. Its flexibility facilitates the creation of custom compositional and educational utilities, such as a dynamic mixer or an adaptive iMUSE-style playback engine, without needing extensive programming experience. Pd patches may even be embedded within games on most platforms, allowing rapid cross-modal prototyping. In my presentation, I will discuss various aspects of using tools like Pd to teach sound design in game music courses, including materials and methods as well as technical and philosophical considerations for future development.
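[As a rough analogue to the dynamic mixer mentioned above, the following Python sketch makes the underlying crossfade logic concrete. In the course such a utility would be a Pd patch, not Python; this is only an illustration under assumed buffer formats, with all names invented.]

```python
# Hypothetical sketch of the logic behind a "dynamic mixer" of the
# kind described above; Python stands in for the Pd patch purely to
# make the signal flow legible in prose.
def mix_layers(explore, combat, tension):
    """Crossfade two synchronized music layers by a 0-1 tension value."""
    gain_combat = max(0.0, min(1.0, tension))   # clamp to [0, 1]
    gain_explore = 1.0 - gain_combat            # complementary gain
    return [gain_explore * e + gain_combat * c
            for e, c in zip(explore, combat)]

# Placeholder sample buffers for two equal-length layers.
explore_buf = [0.10, 0.20, -0.10, 0.00]
combat_buf = [0.50, -0.40, 0.30, -0.20]
print(mix_layers(explore_buf, combat_buf, tension=0.75))  # mostly combat layer
```

[Driving the tension parameter from game state rather than a timeline is what makes such a mixer adaptive in the iMUSE sense the abstract invokes.]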
“Displacing Nostalgia: Medium-Specific Compositional Strategies in Motoi Sakuraba’s Soundtrack for Golden Sun”
Oren Vinogradov (The University of North Carolina at Chapel Hill)
Since the 2001 release of Golden Sun, American and Japanese reviews have unanimously praised the game for its soundtrack, often described as “nostalgic” or “classic.” On the Game Boy Advance, a handheld console teeming with hasty software ports from Nintendo’s 1991 Super Nintendo home console and lambasted for its cheap speakers, Motoi Sakuraba’s music for Golden Sun stands out for what reviewers imply is a more authentic sonic emulation of 1990s roleplaying experiences. Considering the vast differences in hardware and the perceived failures to translate musical materials between consoles, the praise garnered by Golden Sun’s soundtrack appears unusual.
My study posits that Sakuraba’s compositions for Golden Sun succeeded by deliberately acknowledging the Game Boy Advance’s hardware failures. Owing to the smaller number of digital-audio channels available, Sakuraba could not compose with the same pseudo-orchestral methods he had utilized for Super Nintendo fantasy roleplaying games. By comparing his previous work on Tales of Phantasia (1995), its 2003 GBA port, and Golden Sun, I suggest that Sakuraba focused on manipulating the timbre of the GBA’s virtual instruments to resemble the resulting sonorities of his 1990s compositions. Rather than attempting to recreate his previous style of harmonization or digital instrumentation, Sakuraba references a Super Nintendo soundworld, layering memories of older, more richly textured soundtracks onto the music of Golden Sun. Given the unanimity of this reception across time, I argue that Golden Sun’s acclaim is built more on Sakuraba’s successful displacement of audience nostalgia than on any melodic or harmonic element of his handheld compositions.