As part of one of my courses in the Interaction Design Specialization, I was asked to come up with an idea for a gestural interface applied to a real-world scenario. The task was to create a video prototype explaining how the concept works for a single user task happening in the physical world.
Searching around in my apartment for possible ideas on what to design, I came up with the most dreadful daily task: choosing an outfit in the morning.
Why is choosing an outfit such a pain point?
Thinking from a user’s perspective, choosing an outfit is a pain point for the following reasons:
- People suffer from decision fatigue in the morning
- For most people, clothes are squeezed inside their closets and are not visible enough (which in turn leads to the classic problem of wearing the same clothes we remember and feel most comfortable in)
- We want to look nice but we have limited capacity to generate outfit combinations
- We don’t always have time and choosing clothes is a non-priority daily activity
- People love buying new clothes, but most of them end up stuffed inside the closet and are never worn again
It would thus be ideal to have a solution for all these pain points.
How did I come up with the solution?
Since we don’t (yet) have virtual clothes, I needed to come up with a solution that augments the existing reality with the help of technology. I couldn’t help but think of a user interface on the inside of your closet, which gives you information on the weather, makes outfit recommendations, lets you combine the clothes you already have, and keeps a virtual inventory of your clothes. Every item is categorized based on its type, color, and suitability for weather conditions, so that recommendations can easily be made.
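To make the idea concrete, here is a minimal sketch of how such a virtual inventory might be modeled: items tagged by type, color, and a suitable temperature range, with a simple filter producing weather-based recommendations. All names and thresholds here are hypothetical, not part of the actual concept video.

```python
from dataclasses import dataclass

@dataclass
class ClothingItem:
    name: str
    category: str     # e.g. "top", "bottom", "outerwear"
    color: str
    min_temp_c: int   # coldest temperature the item suits
    max_temp_c: int   # warmest temperature the item suits

def recommend(items, current_temp_c):
    """Return the items suitable for the current temperature."""
    return [i for i in items
            if i.min_temp_c <= current_temp_c <= i.max_temp_c]

wardrobe = [
    ClothingItem("wool sweater", "top", "grey", -10, 15),
    ClothingItem("linen shirt", "top", "white", 20, 40),
    ClothingItem("jeans", "bottom", "blue", 0, 25),
]

# On a 10 °C morning, the wool sweater and jeans qualify;
# the linen shirt falls outside its temperature range.
picks = recommend(wardrobe, 10)
```

A real system would add more dimensions (occasion, color matching, wear history), but the principle is the same: tag once, filter automatically.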
How does it get activated?
To activate a gestural interface you need a trigger, and the trigger needs to be distinct enough to avoid the Midas touch problem of accidental activation. The triggers I chose were:
- open my hands wide to activate the wardrobe
- close my hands to deactivate it
- zoom in with my fingers (twice) to select an item
- move my fingers (twice) left/right or up/down to navigate the interface
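The trigger scheme above can be sketched as a small dispatch table that maps recognized gestures to interface actions. The event names are hypothetical placeholders for whatever a hand-tracking layer would emit; the point is the guard that ignores navigation gestures until the wardrobe is activated, which is what keeps accidental gestures from doing anything.

```python
# Hypothetical gesture events mapped to wardrobe interface actions.
GESTURE_ACTIONS = {
    "hands_open_wide": "activate_wardrobe",
    "hands_close": "deactivate_wardrobe",
    "double_zoom_in": "select_item",
    "double_swipe_left": "previous_item",
    "double_swipe_right": "next_item",
    "double_swipe_up": "more_items",
    "double_swipe_down": "back",
}

def handle_gesture(gesture, active):
    """Dispatch a recognized gesture.

    Guards against accidental activation: every gesture except the
    explicit activation trigger is ignored while the wardrobe is off.
    Returns the new active state and the action taken (or None).
    """
    action = GESTURE_ACTIONS.get(gesture)
    if action is None:
        return active, None          # unrecognized gesture: do nothing
    if action == "activate_wardrobe":
        return True, action
    if not active:
        return False, None           # ignore everything until activated
    if action == "deactivate_wardrobe":
        return False, action
    return active, action

# A stray swipe before activation is ignored; after opening the hands
# wide, the same swipe navigates to the next item.
state, act = handle_gesture("double_swipe_right", active=False)
state, act = handle_gesture("hands_open_wide", active=state)
state, act = handle_gesture("double_swipe_right", active=state)
```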
In the video below you can see how the gestures work in combination with the interface:
What was the hardest part about choosing a gesture?
A good gesture mimics the real world and is easy for people from different cultures to use. But when we interact with physical products, most of our gestures are small, touch-based movements that are hard to translate into three-dimensional, touchless space; they are so automatic in our brains that it is difficult to find touchless equivalents for them.
Therefore, I had to find some internationally common gestures. Most of the time, gestures that are common around the world come from users' interactions with physical objects. An example is switching a light on and off; this movement is internationally familiar, so it would be easily recognized and recalled by most people. Similarly, most people use wardrobes for their clothes, opening them to take something out and closing them when they are done.
Last but not least, another challenge was how to browse to the next item. For this I drew on the mental model of touch interfaces like the iPad (swipe left/right to see the next item, swipe up/down to view more items or go back) and reused that mental model with gestural movements in the same directions.