For my entire life, I have been fascinated by video game controllers. When I see a controller I've never used before, even if it's not plugged into anything, I need to pick it up and play around with it. I have always felt that the way a game controls is one of the most important aspects of game design, if not the most important. In most games, it is desirable for the interactive elements to be responsive, intuitive and satisfying. It is important to use the available input method to provide the best experience to the player, and the best input devices for games take the gameplay experience into account.
Early on, game controllers were designed around specific games. The monstrous Telstar Arcade (pictured above) features Pong-style paddles, a steering wheel and a gun. Soon enough, action games really caught on in the arcades, and the traditional joysticks and buttons of arcade cabinets became the norm. The earliest notably successful console, the Atari 2600, attempted to emulate the arcade experience as best it could. The standard 1-button-and-joystick controller couldn't quite hold up to the arcade experience, but it tried. This type of controller, with various improvements and variations, was generally the standard console game input for the next few years. It wasn't until the Nintendo Entertainment System that the game controller as we know it today was truly born.
Meanwhile, PC gaming was finding its footing. The keyboard as an input device has continued to see prominent use to this day, with very little variation. A device with around 100 buttons, especially one that many users are already familiar with, is incredibly versatile and can be used for a wide variety of play styles. Despite the universal nature of the keyboard, a handheld controller has its own advantages. It's small, it's simple, you can pass it to your friends. Many NES games didn't even really need to teach the user how to play. You only have so many options, so it made sense to just try all of them. A PC game can't really be designed this way, at least not without assuming users are familiar with genre conventions. The biggest disadvantage that a keyboard, and even a mouse and keyboard, have compared to a controller is that they are not made for games. They do work best for certain genres that originated and continue to thrive on PC, though. These genres, such as real time strategy games, first person shooters and western role playing games, are built around mouse and keyboard input. They have had a presence on consoles, but by most accounts do not control quite as well. Even amid the massive success of Halo and the console versions of Call of Duty, many gamers refuse to play first person shooters on any platform other than PC.
But there is a difference between PC input devices, as well as mobile phone inputs and other multi-purpose devices, and a controller. A controller is meant for games, and is designed around games. The NES controller, despite introducing some universal conventions, does not really do anything that keyboards couldn't do. But the form factor is specifically designed around games. In the post-market crash world of console games, inspiration was drawn heavily from the NES controller. Sega was the first notable competitor to borrow from Nintendo, with the Sega Master System's controller having a practically-identical form factor.
By the early 90s, it was clear that all of Nintendo's competitors were playing catch up. NEC's TurboGrafx 16 only had 2 main buttons. The Sega Genesis added a "C" button in addition to "A" and "B", giving it 3 main inputs instead of the 2 buttons of the NES. Then the Super NES added 4 more buttons, including shoulder buttons.
The difference here is that Sega seems to have been making an attempt to get a leg up on Nintendo with the Genesis. They were attempting to do what "Nintendon't", and maybe bring users closer to the arcade experience. The SNES, on the other hand, was arguably the first console controller to really attempt to bring something different to the table. It was clearly not emulating the arcade experience. Though functional for arcade ports (putting 6 buttons within easy reach made faithful ports of games like Street Fighter II very possible), the true strength of the SNES controller was what it brought to console-specific games.
The now-traditional diamond button format was unusual in 1991. This not only puts 4 face buttons within easy reach of each other, but also allows users to mentally arrange buttons better than they might with a row of buttons. The Sega Genesis later had a controller with 6 face buttons, as did the Sega Saturn, and while this setup works great for certain genres and is loved by many, the fact that it requires longer movements and different resting positions can lead to some level of player confusion. The SNES tends to not have this problem, and allows the user to forget about the controller and just enjoy the game. This type of control method cannot really be matched using a mouse and keyboard, and has allowed console games to evolve in a divergent direction. That said, the most notable innovation of the SNES controller was certainly the shoulder buttons. Shoulder buttons were very likely added to the SNES controller to account for the new "mode 7" graphical effect, allowing better control of direction during pseudo-3D segments. Sony and Sega used similar logic for including shoulder buttons on the PlayStation and Saturn (respectively), with Sony taking it a step further by including 4 shoulder buttons.
The introduction of 3D console games threw a wrench into the idea of the NES-inspired console controller. This was one of the most diverse console generations in terms of controller design. Sony's PlayStation controller was clearly inspired by the SNES controller, but the extra row of shoulder buttons did have a permanent impact on controller design and game design. Although they never really caught on in terms of 3D control, they did offer more options to the fingertips of users. Diverging farther and farther from PC and arcade games, the PlayStation's controller was made for console games.
Meanwhile, the Sega Saturn failed to be forward-looking in several ways. Sega was smart to fully embrace the CD, even if they were maybe a little bit early to the party, but they made the mistake of initially designing their console around being a strong 2D box. The 3D graphical capabilities of the Saturn suffered as a result, and taking it a step further, the controller was just not made for 3D. Shoulder buttons were added, but the 6-button layout of the controller's face was still clearly tailored to ports of arcade games. These types of games certainly had a market, especially in the 90s, and to this day the Saturn controller is regarded as one of the best controllers ever made. But it was not forward-looking. It was not tailored toward the direction console games were headed, and it didn't really accommodate 3D games in any specific way beyond those shoulder buttons, which at that point were the bare minimum.
With an analog stick present from day one, the Nintendo 64 embraced 3D games. Nintendo may not have embraced CD technology, and the N64 controller was not perfect (with a completely separate d-pad available almost as a safety net in case the 3D experiment didn't work), but Nintendo were incredibly forward-looking in terms of game design. They went all-in with 3D games, and the Nintendo 64 controller was a key component of this new direction. Super Mario 64 was a revolution, and it absolutely would not have been possible with any other controller at the time. A keyboard and mouse couldn't pull it off. The PlayStation couldn't either, nor could the Saturn, the 3DO or the Jaguar (although they had a wealth of other problems that led to a lack of success, it is interesting to note that the Jaguar and 3DO are both known for having terrible controllers).
The Saturn got an analog controller, the PlayStation got an analog controller. PC games got analog controllers. Sony's analog controller is notable for adding a second joystick, which has since become the industry standard. In fact, the modern idea of the controller is essentially the PlayStation's DualShock controller. It became such a standard that the Dreamcast and GameCube suffered from having fewer buttons. The Dreamcast, as cool as many of the games on the console were, did suffer quite a lot from having a limited controller. Although I wouldn't necessarily say the controller contributed to its demise, it would have been extremely hard for it to compete with the PS2, Xbox and GameCube with a dramatically lower number of buttons. While the GameCube controller is fantastically comfortable and works very well with Nintendo's own games, it is not nearly as utilitarian as the DualShock or even the original Xbox controller. When Sony attempted to reinvent the DualShock for the PlayStation 3, the backlash was so harsh that they reverted to the old design (though to be fair, Sony in 2006 wasn't exactly hitting anything out of the park).
It was during the PS2's generation that the DualShock really established itself as the "standard". Microsoft did a great job of emulating (and, in the opinion of many people, improving upon) the universal nature of Sony's controller with their Xbox 360 controller, and even Nintendo in their post-Wii afterglow decided to embrace this type of controller with the Wii U. The simple fact now is that many console games assume there are 4 shoulder buttons, 4 face buttons and 2 analog sticks (that you can also push as buttons). To do anything else means less third party support, or ports that suffer from a lack of inputs (as seen on the GameCube). It has been interesting to see the idea of a controller go through iterations before settling on what many see as the "right" way to control most games (with genre-specific exceptions - such as arcade sticks for fighting games and instrument controllers for music games).
Emulating the DualShock is not the only option, though. The original Wii was an inventive reaction to the market's rejection of the GameCube. Nintendo decided not to compete on power, and instead introduced a new input method. In a way, this has always been what Nintendo has done. Input devices have been a huge part of the appeal of Nintendo consoles, and with the Wii's motion controls they appealed to an all new market. Games had to be built to function around this device, which allowed the Wii to introduce a number of new gameplay experiences.
Although I love the Wii U gamepad, I am a little bit disappointed that Nintendo didn't go all-in with motion controls. I can get traditional controller experiences elsewhere, and even though having a screen is nice, it does not fundamentally change gameplay. The Wii remote has its problems, but when it works well it provides something not found anywhere else. Even on the Wii U, Pikmin 3 plays best with Wii-style controls.
Nintendo's promotional images like this are one of the best things about the Wii.
The PS4 and Xbox One offer interesting evolutions of their predecessors' controllers. Both have ditched pressure-sensitive buttons (a standard feature for the past two generations that was barely utilized), both have improved their directional pads (surely a reaction to the resurgence of fighting games and 2D platformers), and both have made attempts to tailor their analog sticks and triggers better towards shooters. Sony also chose to add a touch pad and take further steps to integrate motion controls into the controller (something they began with the PS3's "SIXAXIS" functionality), which could be seen as reactions to mobile gaming and motion gaming. It is clear that the designs of past games, and a forecast of where game design is headed, were pivotal in designing both controllers.
Even PC gaming is starting to see control innovations with Valve's upcoming Steam Controller: the first standardized controller designed specifically to handle traditionally PC-centric types of games, offering a handheld device that attempts to bring the accuracy and versatility of a keyboard/mouse setup to the couch.
For some reason, the controller scroll wheel never really caught on.
When the idea of Nintendo "saving" gaming in 1985 with the Famicom and NES comes up, it is not uncommon to hear somebody mention that PC gaming never died and still exists to this day. While PC gaming has brought many innovations and continues to be one of the best platforms to game on, I would not want to trade away the contributions that consoles have brought to game design. Without standardized console controllers, many key innovations would not have caught on. The analog stick, never mind dual analog sticks, would not be as ubiquitous as it is today. Action games that require rapid use of many buttons would not exist, nor would games that require managing several shoulder buttons. For better or worse, gaming would be completely different. Even amid talk of mobile gaming gaining ground on traditional games, it's important to remember how important controller design has been for game design. Taking it a step further, standardized controllers that are well designed are one of the most important muses for any game creator. Knowing that a large number of people have this controller allows game designers to truly embrace the input device and make the most of what it brings to the table. Even the best mobile games make great use of the inputs available by default. Mobile games that try to play like console games work best with optional controllers that most people don't have, and suffer for it. I look forward to a future filled with new consoles, not just PCs and phones, precisely because of the power of standardized controllers. I look forward to seeing new innovations in the years to come that capture new audiences and bring new experiences. This has always been the greatest strength of the console market, and one of its strongest appeals to this day.
For a very long time, I have given a lot of consideration to what may be considered the equivalent to the drama genre in video games. Although there is no real need to create film genre equivalents in games, it is interesting to consider how rare it is for a game to attempt to tell a story that does not involve player-driven violence or some other challenge component. Clearly I am not alone in this thinking, as an increasing number of games have been making important strides towards moving interactive storytelling forward. The quality of that storytelling, however, can vary greatly.
There are many cases of video games that have very well thought out stories, but very much remain video games in the classic sense. There is also an increasing number of games allowing for player choices that have a real impact on the plot, giving a sense of weight to a number of decisions that are left up to the player. However, these games are typically grounded in the type of settings that allow for gameplay. An enemy to kill, collectibles to seek out, good endings. It is very telling that two of the most praised stories in recent years, the ones found in The Last of Us and The Walking Dead, are zombie stories. In many cases, games try to emulate Hollywood movies to the best of their ability, but when was the last time a zombie movie got any sort of serious critical praise? Some developers, like Quantic Dream, like to make a film-like quality their focus above gameplay. In many cases, the entire effort misses the mark. The story isn't as good as a great film (or, let's be honest, a decent film) and the gameplay is kind of trite and almost unnecessary.
Going beyond Beyond Two Souls and its ilk, there are games that almost do away with gameplay entirely. The first of these that I played, something I was excited to try, was a game called Dinner Date. I sat down with Dinner Date expecting to experience something unique, and it did not disappoint in that regard. Despite the interesting core idea, a game about being stood up on a date, the execution leaves a lot to be desired. Well, actually, I'm really not sure what I'd desire from a game like Dinner Date. It's an interesting and valid experiment, to essentially ask players to live out a real-life situation. This sort of empathetic roleplaying is something I'd honestly love to see more of in games, actually. But the simple fact is that Dinner Date tells a linear story through a first person view, and you basically complete actions in the order it wants you to in order to advance through it. It has about as much gameplay as using the frame advance button on a DVD player, but it's far less intuitive.
But I'm not here to talk about gameplay. We're going to put gameplay aside for a minute. Dinner Date's story isn't all that interesting, either. Again, I feel that the idea of placing players in a real-life situation as an empathetic exercise is an interesting one. I love the idea of using this medium to really let players see things from another person's perspective. But the thing is, Dinner Date is just a guy in his apartment trying to pass the time until he gives up on the idea of his date ever showing up. There's also some introspective stuff and some smoking. I get that it's interesting to understand what a person in this situation is thinking about, but eventually he starts beating himself up over how much of a loser he is. At this point, he just seems like a miserable person in general and I stop sympathizing as much. I wouldn't show up to a date with him either.
The question often comes up over whether this type of game is really a game. Using the strict definition of the word "game" and what it means outside of video games? No, Dinner Date is not a game. But it is a video game. It exists within the sphere of video games, it was crafted the same way a video game would be crafted. The phrase "video game" has evolved beyond the literal definition and is now a colloquial term for "interactive entertainment thing programmed for some kind of computer". I guess.
So no, Dinner Date is not a game. But it is a video game and, for brevity's sake, I'll continue to refer to it as a game. I feel the same way about Dear Esther, a title I approached with so much hesitation and care that I refused to pay more than a couple of dollars for it. I knew it was probably something I should play, but I went in expecting to roll my eyes a lot. I didn't completely hate it, but this was another case of something with so little gameplay that the story just has to be great in order to stand up on its own. Dear Esther consists entirely of walking toward a destination while being narrated at. There are some pretty vistas, and I was genuinely amazed at the art direction inside the caves, but there just isn't anything that really drew me in to the story. It's a web of metaphors and similes, overly flowery and poetic without really presenting anything interesting or compelling. In that vein, a Shakespeare quote: "...it is a tale... full of sound and fury, Signifying nothing."
So, armed with my disdain for this type of game, I very hesitantly purchased Gone Home during a Steam sale. I had heard it was great, but people liked Dear Esther as well. Still though, I was interested in the 90's setting, especially the references to music and the riot grrrl subculture.
The first thing that struck me about Gone Home was how much it feels like a typical video game. There is no real challenge component, but the way you interact with the world is very similar to the way you might explore an environment in BioShock after clearing out some enemies (which shouldn't really come as a surprise considering the core group behind Gone Home worked on BioShock 2). Still, this was clearly something different from what most people would expect from a video game. You're essentially given a house to explore, and the story unfolds gradually as you progress.
There is also an interesting contrast between the storytelling in Gone Home and the storytelling in Dear Esther. Gone Home doesn't simply tell the story at you. You experience objects and notes that relate to the story, and at certain key moments get a bit of narration that ties it all together. It's very well thought out and executed almost flawlessly. The story focuses on Sam, the younger sister of the player character, but I felt like I really got to know a lot about their parents at the same time. They felt like completely fleshed out and well realized characters, but you only ever see photographs of them and examine objects of theirs that are scattered around the house. We may be experiencing Sam's story primarily, but we also get to see Jan and Terry struggle with their sex life. All from a few pamphlets and a letter from a friend. Just the simple fact that you do have some agency as a player and can explore the house as thoroughly as you see fit elevates Gone Home to a level beyond "do what the game wants and get narrated at". It even has level design - you unlock bits of the house as you go and learn things about the family across distinct story beats. This gives you a sense of direction and a feeling like you're accomplishing something, even if all you're doing is picking up everything and trying not to miss any of the plot details. Even when narration does occur, it sounds like a teenage girl going through confusing times opening up to her big sister. It isn't melodramatic, it's not poetic. It feels very real, and it gives the story a real sense of emotional weight that Dear Esther was missing.
Gameplay that's compelling in and of itself has always been a draw for me, story be damned. It's rare that I can say story is a draw for me over gameplay, but I have continued to subject myself to experiments with interactive storytelling anyway. After suffering through Dinner Date and Dear Esther, I am glad to have finally experienced something truly special in Gone Home. The story itself and the way it is presented are masterful. While the plot and setting are very atypical of video games, they are certainly worthy of being presented in an interactive format. In fact, I don't think this story would have been nearly as effective in a non-interactive medium. As a piece of interactive media, it lets the player explore the environment and experience the story at their own pace while gradually learning more about the characters. It also lets me drop family photos into toilets if I feel like it.
The Legend of Zelda: A Link Between Worlds launched this past fall, with a new take on the time-tested Zelda formula. For the first time ever, most of the important items were available to rent from a special shop. If you die, you lose the item and can rent it again. You can also buy items permanently if you have enough rupees, which lets you upgrade the items and keep them in your inventory even after you die.
But hold on a minute. This game also lets you save at regular intervals. Why would you want to live with the consequences of your mistakes when you can just reload a save?
This was a topic I saw discussed on Twitter, in forum threads and even mulled over in reviews. There is a system in place to encourage you to purchase the items, but you can beat the game with nothing but rentals. If you die, just load your last save. No big deal. The game isn’t terribly hard either, so even if you’re avoiding options to become stronger (via item upgrades), you can still make your way through the game without too much trouble.
Human beings seem to be extremely adept at finding the easy way to do things. We are an efficient species, when we want to be, and that’s a huge part of why you’re probably sitting on something made by humans, inside another thing made by humans, looking at a thing made by humans and reading something written by a human. You don’t see dogs building computers, and monkeys, try as they might, have yet to write anything that’s really worth reading.
Why then, did I find myself happy to accept my fate in A Link Between Worlds? I should probably point out that I only died once in the game, but part of that was because I made sure I bought and upgraded all of the items. There are some tough sequences in that game, and I wanted to be prepared. I can’t say for sure if I would have loaded my save if I had died with a rental item, but I feel confident in saying I wouldn’t have. Clearly this is not the most efficient path to take. If a person were to want to blow through the game, it may make sense to just rent the items. They could ignore side quests and just work their way through the game’s dungeons. If you happen to die, just load your save. No big deal. If you find yourself stuck, the game gives you lots of options to make things easier on yourself. You don’t need upgraded items, or even very many heart pieces, when you can purchase potions that can make you stronger or invulnerable for a period of time.
Looking back on my time with ALBW, there are a couple of reasons why I decided to follow the risk-reward system it had in place. First of all, and possibly most importantly, I wanted to find everything in the game. I wanted to search all of the nooks and crannies and talk to every person and find every item. I wanted to upgrade all of the items I could. I bought the items fairly early in the game, and part of the reasoning was that I needed to in order to get everything in the game. This quickly negated the downsides of dying, so the idea of loading my save when I died barely entered my head. Beyond this, I also had a strong respect for “the rules”. To me, loading a save would have been equivalent to cheating at a board game or a sport. Maybe I could get away with it, and it might make my life easier, but it wouldn’t be a real victory. This was actually another part of my motivation to own every item – if I died, I didn’t want to have to waste any rupees re-renting an item. I accepted that I would face this consequence if it came down to it, but I never did.
Sometimes consequences aren’t strictly life and death. It has become increasingly popular for games to include story choices, and for there to be a “good” choice and a “bad” choice. Sometimes these aren’t direct choices, but simply consequences for the actions you take. Do the “right” thing, get the “good” outcome. Do the “wrong” thing, get the “bad” outcome.
Anybody that has played Mass Effect 2 is aware of the loyalty missions that make up a significant portion of the game. To become properly prepared for the final battle, the player must gain the loyalty of each of their squad members by helping them take care of something personal. It is possible to “fail” these missions and continue without that squad member’s loyalty. Without enough loyalty, there's a good chance you'll witness a tragic ending when you finish the game. Without giving too much away, there is a mission that involves you following a non-player character through a city environment. If you get too far away and lose track of them, the squad member you were trying to help will not be loyal to you.
I was playing through this mission, and something went wrong with the camera. I might have pushed it in a weird direction, it may have hit the geometry in a weird way, but basically I found myself disoriented. I attempted to continue with the mission, and ended up failing because it took me too long to locate the NPC. I reloaded my save.
Later, in the same game, two characters that were already loyal to me got into an argument. I felt one of them was being pretty reasonable, while the other was being a bit too harsh. I took the side of the character I sympathized with, and lost the loyalty of the other character. I did not reload my save.
In the first example, I felt like forces out of my control caused a negative outcome to occur. I was frustrated that a mechanical error, either on my part or the game’s part, caused the story to move in a negative direction. In the second example there was a somewhat negative outcome (from what I’ve read since, it sounds like there is a way to resolve the conflict and keep both characters happy) but I felt that the story choices I made were consistent with how I wanted to play the game. To me, reloading the save in the first example felt justified. I felt like I was owed a second chance, and that it wasn’t acceptable for me to let that be the final outcome.
There must be people that feel this way about A Link Between Worlds. They aren’t just being efficient, they’re getting around what they may see as a poor design choice. A rule that they don’t respect. They have every right to see it this way and enjoy the product as they see fit, but I can’t help but feel that circumventing consequence is missing part of the point.
Fire Emblem, for the uninitiated, is a long-running strategy role-playing game series. Traditionally, if one of your characters in a Fire Emblem game dies in battle, that character is dead for the rest of the game. As you might imagine, this leads to a lot of users loading their saves rather than living with the consequences. The games are still made much more difficult by this mechanic, but the fact is many players will never continue with a battle if a character they like dies. In the most recent entry in the series, Fire Emblem: Awakening, players have the option to turn this feature on or off. The creators have recognized that permanent character death probably isn’t for everyone, and have likely taken into account the fact that many players don’t accept permanent character death anyway. They have essentially avoided players circumventing the rules by changing the rules to account for circumvention.
This was the solution that worked for Fire Emblem. What might work for a game like A Link Between Worlds? Well, Demon’s Souls has done a great job of providing an example. Death is a big part of Demon’s Souls, and players will typically choose to live with their death and continue on with their punishment. Why? Well, death also gives them a chance to improve themselves. You can purchase upgrades, maybe try a different stage (though you're more likely to want to go find the blood stain from your previous attempt). If nothing else, you’ll at least get some practice. Death is inevitable, and it’s built into the game. Your death in Demon’s Souls isn’t good, but it’s something most players will choose to live with. The key seems to be giving as well as taking.
Maybe instead of simply taking your rental items away when you die, ALBW should give the player the option to rent an upgraded version of the item. This option would only become available after you’ve died with the applicable item, and would perhaps give players a reason to live with their failure instead of restarting the game.
In regards to story-based games like Mass Effect 2, choices could stand to be more flexible. ME2 isn’t totally rigid in terms of what is “good” and what is “bad”, but there are certainly “good” and “bad” outcomes, and the requirements for each are fairly strict. In fact, it is not advantageous to be morally grey in the Mass Effect series. I mostly made the “good” Paragon choices, but made a few Renegade choices (if I’m going to spend half the game killing nameless thugs during gameplay, why would I spare a dangerous murderer during a cut scene?). This meant the “you-must-be-full-Paragon-to-choose-this” resolution option was greyed out for me when my two crew members were arguing.
Dragon Age: Origins, while still suffering from a few too many binary choices, did a much better job of not having a “good” outcome. If you make it through the game, you’re still going to see an ending that’s somewhat similar no matter what you do. But the loyalty of your party members, the circumstances surrounding the conclusion and the fate of your hero are all very flexible. I happened to achieve an ending that isn’t considered canon by BioWare (they’ve specifically released downloadable content and an expansion that don’t accommodate my version of the story at all), but I was happy with it. Despite being annoyed at how they’ve handled the story since then, I was glad I was able to be the hero I wanted to be at the end of Origins. I never regretted a decision or felt like I got an undesirable outcome, and I never reloaded my save to give story choices a second go. When bad things did happen as a result of my choices, I felt like I still made the right choices for my character, even if they weren’t the “best” choices. Keeping the story interesting, and not attaching success and failure to story choices, makes players less likely to feel like they did something wrong after making crucial choices. This type of user-driven storytelling may eventually eliminate the idea that story-driven games have good endings and bad endings.
Traditionally, a negative outcome in a game will be a death, or an outright failure, followed by a chance to try again. As more and more games move into alternate success/failure states, and provide different positive and negative outcomes for player choices and user error, more and more players are going to be reloading their saves rather than face the consequences as they are designed. These players are essentially treating these games like they would treat any other game – if you do something wrong, try again. As long as this remains the most efficient option, a sizable percentage of the audience will choose to use it. When games try to integrate failure into their game design or story, they need to give players a reason to stick with them rather than load a save and try again. Though many of us are willing to play along most of the time, there will always be a portion of the audience that isn’t so understanding.
I am not old enough to have been a part of the riot grrrl movement. I did not live in the right place. I am not the right gender to have truly been affected by it. But it entered my life, and was part of shaping who I am.
Now, before I go too far into my personal history, let’s talk about what the riot grrrl movement was. At the most basic level, it was a lot of fed-up girls and women who liked punk rock and decided they were going to do punk rock on their own terms. It was the ’90s, they were feminists, they were angry, and the music reflected this strongly.
I first came across the term “riot grrrl” without thinking about punk rock or feminism. I spent much of my adolescence as a diehard fan of the Beastie Boys, to the point where almost anything they plugged or mentioned was worthwhile to me. I heard that Adrock’s wife was this woman named Kathleen Hanna, and that she was in a band called Le Tigre, and used to be in a band called Bikini Kill. This was in about 2004. I went to my trusty Kazaa Lite and downloaded a few Le Tigre songs, and a few Bikini Kill songs. I really got into Le Tigre, and have memories of listening to them on burned CDs in my discman while waiting for Halo 2 to find a game for me to join. They had a bouncy feminine charm to them, but also had a bit of an edge. I had always known I liked music with female vocalists, but had never found anything that really appealed to me. Le Tigre, for a time, were it.
A few years later, I started to get more and more into punk rock. I don’t think I ever fully qualified as a “punk kid” or anything like that, but I became increasingly interested in bands like Minor Threat and Black Flag. I found out where a few local record stores were and spent hours browsing their selections, sometimes buying albums without having heard a single song on them because I had heard the band were good or the art was cool. I came across a Bikini Kill album and decided to pick it up. Although it was only about 25 minutes long, and I later found out it was Bikini Kill's last album, Reject All American was my first large dose of riot grrrl.
I enjoyed what I had heard of Bikini Kill, loved their passion and felt good about agreeing with the messages they had. Though extreme at times, I felt they brought a fresh perspective to punk rock and addressed problems that many male-fronted bands wouldn’t dare touch. They had songs not just about gender inequality, but about sexual abuse and rape. No trigger warnings.
My collection of riot grrrl albums is actually pretty small, mostly because there weren’t a ton of notable riot grrrl bands. I have a lot of respect for the scene and the foundation it built, but when you really get down to it, the most important part of music is, well, the music. Bratmobile is worth mentioning, and I like Slant 6. From what I understand, though, the most important part of riot grrrl was that it brought more girls and women to punk rock. Girls were picking up guitars, playing shows, being a part of something. They weren’t groupies or girlfriends, they were bands.
When the dust began to settle in the mid-’90s, when the scene began to fade away a little bit, there was Sleater-Kinney. An all-female rock band, fronted by two women (who were dating each other at the time). One of them came from a riot grrrl band called Heavens to Betsy, the other from a queercore band called Excuse 17. Sleater-Kinney was their side project, named for the street their practice space was located on.
Sleater-Kinney’s first album is what you might expect from a riot grrrl band: aggressive, a bit messy, and fiercely feminist. Songs like Sold Out, How to Play Dead and A Real Man call out the importance women place on men, and level a rallying cry against it.
This wasn’t the first album I heard, though. Sleater-Kinney’s Dig Me Out was one of the albums I bought because I heard the band was good and the cover was kind of cool. I knew they were associated with the riot grrrl movement, but didn’t really know what to expect. I put the CD on when I got home and fell in love almost instantly.
This wasn’t messy at all, this was tight and focused rock music. It was a relentless post-punk guitar-and-drums explosion driven by Corin Tucker’s ferocious wail. It wasn’t simplistic or rough, it was economical. 2 guitars, 2 voices, 1 drum kit. Words and Guitar, as one song on the album puts it.
It wasn’t the riot grrrl I knew. This was a band making great music, and they happened to be female. They happened to come out of the riot grrrl movement. They were encouraged to pick up instruments and raise their voices by people like Kathleen Hanna, but they took it in new directions. They may have had to deal with men shouting “show me your tits” from the crowd, but they dealt with it tactfully and cleverly by wearing t-shirts that said “show me your riffs” and they carried on.
Sleater-Kinney, despite their origins, decided early on that they didn’t want to be classified as a “girl band”, or even thought of in terms of gender. I am sure they would still call themselves feminists, to this day, but above that they focused on being a great band. Their music was priority one. The message came after that, if you cared enough to listen. This emphasis on music over message was the key difference between Sleater-Kinney and riot grrrl, but riot grrrl's message paved the way for a band like Sleater-Kinney to exist.
Sleater-Kinney put out seven fantastic albums before going on indefinite hiatus in 2006. One of them is now the co-creator of Portlandia. Though they may have moved away from an explicitly feminist message, they remained distinctly female throughout their time as a band. Not just by the fact that women were singing, but through the subject matter of the lyrics and the often feminine nature of the music itself.
How does this relate to gaming? Well, a few ways. There has been a large rise in female and feminist voices in games over the past few years, especially in the worlds of blogs and indie games. Though abrasive to some people, their intentions are valid and just. These women just want to be a part of the world of independent video games, they want to be heard and they don’t care if they need to get in your face to do it. Those condescending tweets and angry blogs and Feminist Frequencies (Feminist Frequency is actually pretty low key) are all a result of this. The games coming from these women may be simple Twine games or Flash games, but they’re there. Like the noisy and simplistic punk rock of riot grrrl, these are the first steps. This is gaming’s riot grrrl. It might rub you the wrong way if you’re coming from the outside, but these are important steps.
What I’m waiting for are truly notable, acclaimed indie games made by women. Not just teams with one or two women, not just mainstream games with female staff, but games made by women. Games that are distinctly female, and may be informed by feminist rhetoric or counter-culture aggression but are not defined by it. Indie games with the appeal and quality of the biggest indie hits, but developed by women. Sleater-Kinney were never mainstream, but they were certainly in the upper echelon of indie music in their day. They worked hard to get rid of the “girl band” label, and they achieved it. I look forward to the day that a creative, personal indie game hits it big and is made by a group of women because, well, why not? And people love it, and enjoy the game for what it is, without thinking too hard about the gender of the creators. I look forward to gaming’s Sleater-Kinney.
The final bow on this analogy is the inclusion of Heavens to Betsy in Gone Home. It’s perfect that one of the more progressive games of last year, one of the games most praised by feminist gaming commentators, features music by a riot grrrl band. A riot grrrl band that featured a member of Sleater-Kinney.
I wasn’t really planning to do an introduction blog, but upon browsing through the community it seems like a thing people do.
I’ve been a casual reader of Destructoid since whenever I first started seeing Destructoid get linked to and talked about. Maybe like 2008 or 2009? I don’t know. I started visiting with more regularity in about 2011 and comment on posts a lot. I’m one of those dudes who shows up under his real name because I just use my Facebook account. Sometimes I even use my Twitter account instead!
Occasionally, I’ll blab on about a game or some sort of gaming-related topic on a forum or Facebook or Twitter, and a few people have suggested I try writing a blog. I’m not sure if they’re just trying to get rid of me or if they’re genuinely encouraging me, but here I am!
A bit about me:
- I somehow suckered my parents into buying me a Genesis for my birthday after they bought my sister and me a Super NES for Christmas just 18 months or so before. I have been a fairly avid console buyer ever since. I love getting my hands on new hardware and seeing what it’s all about. At least once the games and price point are where I want them to be.
- I have almost spent $1337 on Steam and I’m trying to find purchases I can make that will round the number out nicely. I believe I’m $6.64 away. Any suggestions?
- I’m a regular listener of Podtoid and The Dismal Jesters. I also watch Sup Holmes when I can. Just can’t get enough Holmes. What a lovely boy he is.
- I attended college for “Game Art and Design”. As you might imagine, it’s hardly a direct path to a career. As demonstrated by the fact that…
- I work in QA. Major publisher, I can probably say who but I won’t right now. I worked for a smaller mobile company for a bit and it actually managed to make the big soulless corporation seem more appealing. I think it’s something to do with the games not being designed like slot machines?
- I also dabble in game development and have made some crappy games, most of them not readily available for sharing. I have one I've been working on for a while that I hope is good one day.
Anyway, I have a few actual posts planned that I’ll put the effort into creating at some point. Please be excited!
You can find me on Twitter at @adammcdonald and I also have a WordPress blog/portfolio/whatever at http://adammcd.com.