Good interface design means reducing the barriers between wanting to do and doing.
Take Z-Type, an epic game (design success: soundtrack) built in HTML5 (marketing success: trendy tech). The HTML5 hype may get me to look, and the soundtrack may connect emotionally, but the gameplay keeps me typing. It’s addictive because it uses a skill I already have, a critical factor in intuitive UI design.
Creating an interface that anyone can use—lessening or removing the interface from the interaction—brings games to more people: little kids, older folks, and new gamers who are shy about their skill level.
Remember the Atari 2600 one-button joystick? People could grab that controller and play. My mom kicked ass at Frostbite.
I wish we had screencasting back in 19-dickety, ’cause my mom was way better than this guy.
Fast forward to 2005, and you’ve got to take an undergraduate class in button layout to operate a game like Halo. It’s tough for casual gamers to drop in and have fun with a learning curve like this. “Button-mashing” becomes both a term and a style of gameplay, born out of interface frustration.
A year later, the Wii’s motion-sensitive controller begins to break down gaming’s barrier to entry by promising to be “more fun”. What’s more fun? The fact that you had a shot at being able to use it, because more of the control was natural. Nintendo President Satoru Iwata also thoughtfully designed it “to appeal to mothers who don’t want consoles in their living rooms”, a challenge apparently tantamount to “selling cosmetics to men”.
Now we have Xbox Kinect, the easiest interface so far, because there’s a lot less of one. Gamers simply walk up, get recognized and assigned a gender-appropriate avatar (God, I hate creating avatars), and start smashing. Finally, gaming feels available to everyone.
If Microsoft is smart, they’ll convince Activision to put out Frostbite ’11. Tip: don’t get greedy about the fish.