In a previous post I discussed the (disappearing) distinction between designing interfaces that primarily support physical interaction and those that support cognitive interaction. But this is not a black-and-white distinction, so I thought it would be useful to describe a continuum of physical interaction in user interfaces.
Note that I am focusing on willfully or intentionally controlled interfaces. There are, arguably, examples of involuntary interfaces - for instance, an autopilot function that activates if a pilot becomes incapacitated - but I'll set those interesting cases aside for a later discussion.
Continuum of Physical Interfaces
A continuum of physical interaction would range from no movement at one end to significant, complex body movement at the other. At the first extreme we could include theoretical telepathic control, where no visible physical interaction is evident. As we move away from that end of the spectrum we have existing and emerging neuroergonomic interfaces that rely on measurements of electrical potentials, cerebral blood flow, and MRI imagery, among others, as triggers for action.
The next step towards physical interaction is devices that measure or track relatively small motor movements and translate them into interface actions. For example, Emotiv Systems' forthcoming Epoc device can translate facial muscle movements into expressions for online avatars. Eye tracking systems, already commonplace in supporting the physically disabled, track eye movements in place of mouse/keyboard controllers.
From here we move to the relatively simple, ubiquitous traditional physical interaction controllers - buttons, keyboards, knobs, switches, levers - the stuff of mechanical and electro-mechanical devices that designers have been working with for years. These controllers are typically binary (on/off) or incremental (having multiple, discrete states). Most existing touch screen interfaces, such as bank ATMs, would fall under this category.
We then go from discrete to continuous controllers, enabling multiple actions and greater flexibility. The computer mouse was a breakthrough for human-computer interaction in this context, as it supports various types of interaction and interfacing from a single control device. In fact, while keys and buttons are typically designed with a specific function in mind, the mouse provided the opportunity for new user interfaces to be created to define its functions. Gestural interfaces, from multi-touch screens to the Wii, are also examples of this flexible, "open" physical interaction category. These are the "new" interaction devices that are opening up possibilities for interaction designers.
We might imagine a Minority Report-style interface as the ultimate extreme at the far end of the physical interaction spectrum, but as pictured in the video above, it is limited to gestural hand movements. What about more complex bodily interactions combining other limbs, postural movement and line-of-sight? This is still largely unexplored territory that might be best understood by observing how we use our bodies in the most dynamic and complex ways. Musicians, athletes and dancers may be a more valuable source for developing future physical interaction ideas than science fiction.
A Metric for Physical Interface Complexity
Note that the continuum I described above, while by no means arbitrary, was not based on a well-defined metric that quantifies greater or lesser physical complexity. If we were to define one, degrees of freedom would be an appropriate place to start. A degree of freedom can be defined as any independent direction in which movement is possible. A human finger has four degrees of freedom, made up of the extension/flexion of its three joints plus side-to-side movement. Combining the individual degrees of freedom of the four fingers, thumb and wrist gives the hand 26 degrees of freedom.
Hypothetically, we could apply this to the entire human body to specify the maximum level of complexity for any single physical interaction, or sequence of interactions. The total degrees of freedom for a fully functioning human is 1380. In theory, we could go back to any physical interaction and quantify the (minimum) amount of movement required to arrive at relative complexity measures. But it actually gets more challenging: complexity is more than just the sum of the degrees of freedom, and would also depend on the particular combination of movements. In other words, it's an interesting idea, but one that requires a lot more thought to pursue practically.
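As a rough illustration of the simplest version of this metric, here is a minimal Python sketch that sums the degrees of freedom of the body parts involved in an interaction. Only the finger (4) and hand (26) figures come from the discussion above; the other body-part names and values are illustrative assumptions, and the score deliberately ignores how movements combine, which is where the real difficulty lies.

```python
# Crude degrees-of-freedom complexity score (illustrative sketch).
# Finger and hand-related values follow the figures in this post;
# the remaining entries are rough assumptions, not anatomical reference data.

BODY_PART_DOF = {
    "finger": 4,   # three joint flexions plus side-to-side movement
    "thumb": 4,
    "wrist": 6,
    "eye": 3,
    "head": 3,
    "arm": 7,
    "leg": 7,
}

def interaction_complexity(parts_used):
    """Sum the DOF of each body part involved in an interaction.

    Ignores coordination between movements, so it is only a lower-bound
    style estimate of physical complexity.
    """
    return sum(BODY_PART_DOF[part] for part in parts_used)

# Pressing a button vs. a Wii-style arm swing
print(interaction_complexity(["finger"]))        # 4
print(interaction_complexity(["arm", "wrist"]))  # 13
```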
Mapping Physical Interaction Inputs to Outputs
Another important consideration is the relationship between physical inputs and the associated outputs in a user interface system. Current discussion of gestural interfaces is primarily focused on using physical interaction to control virtual objects - a way to make the digital world more tangible. But physical interaction interfaces can also be used to control physical systems, and not just in the literal sense.
Intuitive Surgical's da Vinci surgical systems represent the leading edge of commercialized physical interaction devices. As depicted in the video, the systems "translate and filter" a surgeon's precision hand motions into physical motions of surgical robot manipulators. This requires a two-way physical interaction where the user not only provides physical input but also receives haptic feedback, such as resistance to force. So it's actually a physical-to-virtual-to-physical loop.
A Starting Point for Defining Physical Interactions in User Interfaces
While this is just a preliminary discussion, there are a few threads to pull towards developing a taxonomy of physical interaction types (a rough sketch of how these axes might be captured follows the list):
- The complexity of the physical movement, characterized by the number and type of degrees of freedom involved.
- The output of the physical interaction, resulting in either virtual actions, physical actions, or a combination of the two.
- The directionality of the interaction: either one way from user to system, or bi-directional between user and interface.
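To make these three axes concrete, here is a minimal Python sketch of how such a taxonomy might be recorded. The class, field names and example classifications are illustrative assumptions on my part, not an established scheme.

```python
# A minimal data-model sketch of the three taxonomy axes above.
from dataclasses import dataclass
from enum import Enum

class Output(Enum):
    VIRTUAL = "virtual"        # acts on virtual objects (e.g. a cursor)
    PHYSICAL = "physical"      # acts on physical systems
    COMBINED = "combined"      # physical-to-virtual-to-physical loop

class Directionality(Enum):
    ONE_WAY = "user-to-system"
    BIDIRECTIONAL = "user-and-system"   # includes haptic feedback

@dataclass
class PhysicalInteraction:
    name: str
    degrees_of_freedom: int        # complexity of the movement involved
    output: Output                 # what the interaction ultimately acts on
    directionality: Directionality

# Example classifications (values are illustrative)
mouse = PhysicalInteraction("mouse", 2, Output.VIRTUAL, Directionality.ONE_WAY)
da_vinci = PhysicalInteraction("da Vinci console", 26, Output.COMBINED,
                               Directionality.BIDIRECTIONAL)
```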