Assuming Control: my past, our present & the possible futures of game interaction

The current ‘Blogs of the Round Table’ topic on Critical Distance concerns game controls and the ways in which we interact with what we play:

Joysticks. Keyboards and mice. Mashing a controller with your fist. Touching. Poking. Waggling. Wiggling. Moving your head around a virtual reality world. Directing an arc of your own urine. The ways in which we can interact with games have changed from simple electrical switches into much more complex and nuanced forms. We can even adapt and alter controls for people who have difficulty using traditional methods.

Some of these methods work, and some don’t. Most of us will be familiar with complaints about the Wii’s “waggle” controls, the thumb-numbing frustration of virtual buttons on a touchscreen device, or the gyroscopic motions that ruin the 3D bit of the Nintendo 3DS.

How do we move forward with controls in games? Are the old ways the best, or a barrier to entry? Are you looking forward to playing Farmville on the Facebook-ulus Rift?

It’s not a topic that I have historically devoted significant thought toward. But bear with me – I do have thoughts on the matter.

For most of my life well wasted I have played PC games, which has meant that I have largely settled into a pattern with the mouse and keyboard. Of course I have upgraded these peripherals over the years; the Razer Diamondback I use today is a far cry from the primitive mice I first used. Nowadays I use a Microsoft keyboard that calls itself a “Digital Media Pro”; I find it supports my wrists and typing style well, which helped with the RSI I suffered almost a decade ago (this rather dates my keyboard, but don’t worry – I shake the crumbs out every so often).

The PC’s enjoyed various peripherals too, of course. I know a few people who bought themselves expensive steering wheels for racing games, and I played a lot of space sims in the 90s using cheap QuickShot joysticks; I purchased a more modern though still affordable stick a few years ago so I could revisit Conflict Freespace. And of course since the release of the Xbox 360 its controllers have grown increasingly popular among PC gamers (I’m sure some weird deviants grew to prefer the PS3 pad, too).

That’s not to say I’ve never dabbled in consoles. I owned a NES long before I got my own PC, and that was replaced with a SNES after some years; I remember the hard plastic lumps and their stiff plastic buttons well. Occasionally I’d get to play their SEGA counterparts when visiting friends or staying with a childminder. Later, I experienced jealousy when childhood friends were given Saturns or N64s (jealousy which I turned around when I showed off Quake 2 with my new Orchid Righteous 3D card… but that’s a whole other story). I eventually bought myself a PS2 when I graduated from university, and upon finding myself a job I followed it up with an Xbox, Gamecube and Dreamcast. A few years later the Wii, Xbox 360 and eventually PS3 joined the collection.

With each console added to the line-up I found myself rating the respective merits of each controller, albeit not in tremendous depth. The single-sticked Dreamcast controller seemed to me an intolerable throwback, whilst the Gamecube’s odd layout and configuration seemed like a child’s toy until I learned that it worked surprisingly well – invisibly so – with a lot of the games I liked most on the platform. The weak analogue sticks and unimpressive shoulder buttons of the PS2’s DualShock hindered an otherwise solid controller, for me, and I ended up preferring the original Xbox’s s-controller (the console’s original controller, equivalent in size to the Wii console itself, was of course one of the funniest jokes about Microsoft’s first foray into the console marketplace).

N64 controller
I have never liked these things.

Handhelds, too, had their appeal: my first Gameboy was a red Pocket, and some years later I picked up first a DS, then a DS Lite, and finally a dirt-cheap PSP. The 3DS and the Vita appeal as well, as much for their interaction innovations as for any of the games I could actually play on them. And of course I’ve joined the smartphone generation, playing iOS and Android games on various handheld devices. I even bought myself a Kinect, and although using it in a small British living room is a challenge North American readers may not understand it does get dusted off semi-regularly. I’ve had some genuinely excellent times monkeying about with the voyeuristic little robot.

I’m sharing all this to provide some context around the modes of videogame interaction I’m familiar with, and to give you some perspective on the controller biases that I may possess but be unaware of… as well as those I’ve already expressed. All of the above is what I know: I’m au fait with gesture and voice controls, with wand and light sensor controls, with touchscreen controls, with controllers of various generations, with joysticks and of course with the mouse and keyboard.

I’m weighed down by the burden of decades of experience with these devices. It is difficult to see beyond the expectation of familiarity.

It strikes me that insofar as the questions “How do we move forward with controls in games? Are the old ways the best, or a barrier to entry?” are concerned, we may be looking at this backwards. In design terms the hardware often comes first, with games subsequently designed within the parameters established by hardware specifications.

What if we inverted this? What if we set out to first design games, and then to identify how best they could be interacted with?

It’s rarely an economically viable proposition outside of research units and groups, of course. The videogame industry is ultimately a grouping of businesses who are out to make money, and finance dictates that the colossal hardware variance that would result from games-first design would be a poor business model with significant upfront and R&D costs. Independents are less beholden to the protective conservatism of businesses, though they too must have their vision limited by what they can realistically afford. For most, designing, prototyping and then mass-producing suitable hardware is an absurd proposition.

But if we are to really answer the question of “where do we go from here?” then it seems logical that we must put the need before the solution; the challenge before the resolution. Ultimately, videogames are a design-led industry and it is from here that such answers must emerge – not the ill-informed whims of a blogger, even if – perhaps especially because – he has experienced a lot of what the last three decades have had to offer.

That said, it is fun to engage in a little speculation. I suspect that the next five years will primarily involve the consolidation of existing technologies.

Gesture control will become more sophisticated as the sensors and software improve: already the Kinect 2 is light years ahead of its predecessor, and WiSee suggests another route that similar technology might take in future. That said, I do suspect it will be a long time before gesture control moves beyond very general sorts of interaction – far from the finesse that sophisticated modern games demand. There is the possibility that sophisticated interactions could be composed from sequences of gestural interactions, but the obvious risk there is that the error rate in gesture recognition is still high enough that individual gestures are often misinterpreted. That does not bode well for complex sequences. So, right now, gestural input in a videogame is typically tied to a very small number of gaming verbs.
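To put a rough number on why chained gestures are risky: if each gesture is recognised independently with probability p, a sequence of n gestures only comes through intact p-to-the-power-n of the time. A minimal sketch of that arithmetic (the accuracy figures are illustrative assumptions, not measurements from any real system):

```python
# Illustrative only: the per-gesture accuracy below is an assumption,
# not a measured figure from any real gesture-recognition system.

def sequence_success(p, n):
    """Probability that a chain of n gestures is all recognised correctly,
    assuming each gesture succeeds independently with probability p."""
    return p ** n

# Even a respectable-sounding 95% per-gesture rate decays quickly:
for n in (1, 3, 5):
    print(n, round(sequence_success(0.95, n), 3))
```

Independence is itself a generous assumption here – in practice a misread gesture often derails the reading of the next one, so real sequences would likely fare worse.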

Tony Stark gesture controls
CGI gesture/touch controls have cropped up a lot in films since 2002’s Minority Report. The realities are a lot less sophisticated and sexy (and using a Kinect does not make you feel like Tony Stark).

Touchscreens will improve in responsiveness and I am confident that connecting a smartphone to a larger screen will become increasingly popular, but the already-understood problems with touchscreens will persist. Haptic feedback (vibration and similar responses to the touch of a finger) cannot make up for the lack of tactility where a ‘virtual d-pad’ or ‘virtual thumbstick’ is in use, and the control interactions obscure the main feedback medium, i.e. the screen. I would honestly be surprised if touchscreens alone developed much further in terms of their utility as a controller: it’s more likely that we’ll see continued integration with other devices, as with the Wii U. There’s also potential scope for hybridization of non-flat or flexible touchscreens with traditional gaming controllers. For example, a semi-flexible touchscreen could be laid over a simple D-pad, offering physical resistance and movement as a response to the user. This offers clearer feedback on where the player’s digits are and what they’re doing than touchscreen haptic feedback ever could.

The focus of modern controller development appears to be primarily around ergonomics and form factor, alongside refining wireless functionality and durability, and frankly I’m quite happy with that. The controller, like the mouse and keyboard, is one of the oldest and most established forms of game interaction, and I don’t see any imminent need to reinvent these particular wheels.

Voice control will, I think, become more prevalent over the next five years. The trends are already in this direction with the growing sophistication of voice recognition software. There will be great challenges in building games that work with such software, but I suspect it will not be too long before we see games that are to Tom Clancy’s EndWar what Street Fighter IV is to Karate Champ.

As for the mouse and keyboard… well, I use them constantly for many things as well as playing games, so they’re not going anywhere. I laugh in the face of anyone who genuinely believes that touchscreens will replace physical keyboards: see above regarding haptic feedback vs. tactility.

But this is just me. Others have contributed to this round table discussion; see the form below to check out what they’ve thrown into the ring. It would also be fascinating to see your thoughts in our comments thread below!

[I originally drafted this piece in late April, before Joel Goodwin wrote ‘Of Mice and Gamepads’. In it he speaks with numerous game developers and designers who are pushing the boundaries in terms of videogame interaction and the future thereof. It’s a fantastic read, particularly as it offers so many perspectives from people who are directly engaging with new interaction technologies – rather more engaged in the realities than my speculation. So go read!]


13 responses to “Assuming Control: my past, our present & the possible futures of game interaction”

  1. RJB

    I can't ever see the 'something' in your hand type of control ever going away. Our hands are built for complex manipulation and to feed back to us what we're doing through touch. We like to hold stuff; even when I was a kid a pretend gun wasn't nearly as much fun as a stick gun. Also, how many times have you involuntarily gripped the controller/mouse at a particularly tense or frustrating moment? That would really mess up any sophisticated gesture control system.

    Having said that, I can remember confidently proclaiming controllers that weren't a fire button for your trigger finger on a joystick were doomed. So I may well be wrong.

    But when Elite: Dangerous comes out I bet I am jonesing for a joystick. They're totally coming back, they are!

    1. ShaunCG

      I think that if the space sim does experience a genuine resurgence (whether it's just these big Kickstarter projects or something more) then joysticks are likely to become more of a thing again. After that, who knows… maybe flight sims will come back?

      I agree very much about the feel of something in our hands. The tactility of controllers is a really important consideration, as is their presence! Though at the same time, I do genuinely believe there is a future in gesture and voice control.

  2. Simon_Walker

    I loved the original XBox controller; there was a device to fit a man's hands. Alas, it was doomed by the small hands lobby. Well, the 360 controller ain't bad.

    I think you're underestimating what can be done with haptic feedback. Not with the current one or two motor set-up, no, but I envision future devices achieving enough resolution that one could operate a touchscreen by, well, touch.

    1. ShaunCG

      I am picturing you now, your enormous troll hands delicately pushing at the thumb sticks on the Xbox Serving Platter.

      I'm not convinced that resolution is the issue, nor additional motors or finer grades of response in haptic feedback. Touchscreen resolution is already more precise than human digits can be, and more sophisticated haptic feedback strikes me as a good development that cannot possibly address the absence of physical tactility.

      The best example I can think of to illustrate my point would be the dreaded virtual d-pad or control nub of many mobile games. It is not easy to get the positioning right when you cannot discern by feel if your thumb is in the correct place, and having to look is a great detriment to play – especially given that these types of controls tend to be used in action-focused games.

      1. Simon_Walker

        That is precisely the point of improved tactile feedback: to let you feel if your thumb is in the correct place.

        1. ShaunCG

          And how does it accomplish this?

          1. Simon_Walker

            Through sense of touch.

            No, really. How do you operate a physical d-pad by touch? You can feel its shape. So you use haptic feedback to trace out the shape and state of the control on the screen.

            No, it's not a physical, three dimensional shape. But you can still feel it.

            You can sort of kinda do it already, by varying the intensity of vibration based on the location of your thumb, but the frequency range on that is abysmal, as is latency, and since it's the whole device vibrating you can't really feel an edge, so there's an extra level of abstraction to learn. It might help a visually impaired person to operate their phone, maybe; as an action game controller, no.
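The intensity-by-location idea described above can be sketched in a few lines. Everything here – the circular pad geometry, the falloff constant – is a hypothetical illustration, not any real device's API:

```python
import math

# Hypothetical sketch: make the edge of a circular virtual d-pad "feelable"
# by driving vibration hardest when the thumb sits on the pad's boundary.
# All geometry and the falloff constant are invented for illustration.

def vibration_intensity(touch_x, touch_y, pad_x, pad_y, pad_radius, falloff=20.0):
    """Return a 0.0-1.0 vibration strength: 1.0 exactly on the pad's edge,
    fading linearly to 0.0 within `falloff` pixels on either side of it."""
    dist_from_centre = math.hypot(touch_x - pad_x, touch_y - pad_y)
    dist_from_edge = abs(dist_from_centre - pad_radius)
    return max(0.0, 1.0 - dist_from_edge / falloff)
```

As the comment notes, a single whole-device motor still can't localise the sensation; the sketch only shows the mapping from thumb position to intensity, not the actuator side.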

            Once we get more responsive solutions that allow discrimination of areas on the screen with a broader (and deeper) range, we're set. Several technologies that can do that already exist, but aren't widely used in commercial devices yet. It's another case of the future not being evenly distributed. Presumably they are deemed too expensive (or possibly not mature enough) for mass market devices, which is a shame, as there's obvious utility for no-look interaction on mobile devices.

            Hell, vibration feedback on a virtual keyboard improves typing speed and accuracy by 10-15%, and that is with just one electric motor to vibrate the phone. Haptics are the future, man.

            (Although to be honest, I'm expecting voice recognition and eye tracking to be the techniques with the most mileage for general HCI, possibly to the point of replacing keyboard and mouse. Haptics have considerable entertainment potential, though.)

          2. ShaunCG

            Well, if haptic feedback can deliver what you've just described, I'll be thoroughly impressed. I'm just not convinced that vibration from a flat, featureless surface can accurately simulate the feel of a contoured object with degrees of resistance and give.

            But as the original article notes, I am quite bound up in my inevitable prejudices!

          3. Simon_Walker

            I believe you will be impressed, one day. What can be done with mature haptic technologies will be a revelation. We've gotten pretty good at visual and audio feedback, even to the point of simulating three-dimensionality (from a flat, featureless surface, no less), and sense of touch is not inherently any more complex. People are already talking about simulating different textures.

            That's the future; but even today, we have a range of technology available, much better than is found on mass-market phones and tablets, and not all of it based on vibration.

            It's just that we are so massively prejudiced in favor of sight that haptics haven't gotten a lot of attention. Mobile devices are driving some now, as there's only so much volume to package interaction bandwidth in and a device primarily operated through touch interface really also needs touch feedback.

  3. guillaumeodinduval

    From a site we all know and love:

    I'll just leave this here.

    1. guillaumeodinduval

      Actually, I'll do more than just leave that here. I'll add more on the matter.

      One day, somewhere in the Spring of year 2006, I have been graced with the possibility to enjoy FREE freedom-to-do-as-I-please-away-from-work (read: took semi-forced unpaid vacations because of REASONS. Those were harsh times for the video-game industry. Or maybe just the company I worked with. I don't know, POINT IS I was perfectly fine with the idea of having 2 surprise weeks of R&R).

      And so, I went to E3 in Los Angeles. There I saw a bunch of hot titles and rooms full of props and booth-babes and… a particularly unattractive section that looked like a weird dealer's room from an anime convention. At first it gave the impression of being the backstage of the main event floor, or worse, the ''we couldn't afford being on the main floor, but YAY! we may have ruined ourselves just to have a spot somewhere in E3, BUT WE ARE IN. God-I-hope-we-can-find-a-buyer'' room.

      That room turned out to be the most interesting of them all. I saw a PS2 controller made of buttons that you lock into your joints to create combinations of moves that allow you to FIGHT LIKE A DUDE FROM TEKKEN, as you play Tekken, a mouse that moved on the x y and z axis, and a thing you put on your head to play a game.

      That game was more primitive than Pong itself. If I had to give that demo a name, I'd call it ''mind wrestling''. Because it kinda was just that. You'd put a helmet on your head, and then someone else would do the same. Now without trying to look too constipated, you'd try to figure out what way to think or ''be at ease'' (I don't remember really) to push your bar as far as possible, more than the other person. Kind of like arm wrestling and tug-of-war, FOR YOUR MIND.

      I didn't think much of it, but thought it had potential, given I figured works of fiction had already been made with the idea of controlling a machine with the mind. Back then, I only knew of Macross Plus the anime, but apparently in 1982 this questionably poor movie with Clint Eastwood called FireFox (written in SEGA font) came out and it was all about flying a plane WITH YOUR MIND (in Russian). I kind of only got to know that last bit today from replies on the site of the article I just posted here though.

      But yeah, awesome stuff.


      A mere TWO YEARS later, I stumble on this:

      It's insane how far ahead it's gotten. And now, today, we just learned (well, I just learned at least) that they are planning on making that tech be used to fly freakin' planes.

      Now I tell you, if we're ''getting close to fly with our minds'', I'm pretty sure we have a safe mastery of the tech to map ''a mere handful'' of controller inputs.

      1. ShaunCG

        That does sound pretty impressive. I shall have to look into it more!

        Initially… I'm not wholly convinced. The human brain is still not well understood in a lot of ways, and given that, it seems a reach to suggest that it can be used as a sophisticated control method. Although I can see sticking some electrodes on your head being preferable to a big VR helmet, ha.

        Mind-controlled flight is an interesting area of development, for sure, though I'm not convinced by its actual utility. I've not got much flying experience but what I do know is that physical feedback is tremendously important, which the article you linked to acknowledges is a weakness of the concept. This is also inherently a digital system, and I'm not sure how many planes today have all-digital control systems without end-to-end mechanical components of some kind? (The reason being, it's difficult to debug a problem during flight.)

        I'm also interested in how they'd approach the problem of filtering out inappropriate snap responses to stimuli, or thoughts that we entertain for a second and then dismiss. If it requires specific concentration and focus over long periods (like, a few seconds) then, well, that's a pretty slow decision-to-input time!

        Still, MIND CONTROL! I wish I had thought of that when writing the piece above. I'm definitely going to have to explore it further… :D

        1. Simon_Walker

          Deep understanding of the brain is not required; all that's needed is recognizing distinct brain wave patterns associated with specific (though arbitrary) mental states with some reliability. The interfaces generally require some calibration and a learning period. But hey, you do need to learn to use a hand controller, too.

          I'm warming up to brain interface for gaming, especially if it can be packaged with a VR helmet. The current rigs can generally handle a few distinct actions, and you don't really need that many for a working game controller. Once upon a time we made do with two buttons and a d-pad.

          I am, however, skeptical about such a system handling simultaneous inputs very well, or at all. Still, given the kinds of skills people can learn, I wouldn't be shocked to find that people can learn to do the kind of mental gymnastics required to circle-strafe with a brain interface. I envision a future of Indian yogis traveling to USA to learn meditation at the feet of pro gaming gurus.