Brave NUI World

Now we stand at the brink of another potential evolution in computing. Natural user interfaces (NUIs) seem to be in a position similar to that occupied by the GUI in the early 1980s. Like the desktop GUIs, NUIs promise to reduce the barriers to computing still further, while simultaneously increasing the power of the user and enabling computing to reach still more niches of use. But just as GUIs did not simply make command systems easier, NUIs are not simply a natural veneer over a GUI.

Instead, like GUIs, NUIs have a set of strengths based on what they make easier, how they make those things easier, how they shape the user’s interaction with technology, which niches they fit, and whether those niches expand to dwarf the space occupied by traditional GUIs.

A natural user interface (NUI) is one that feels natural and intuitive to use. The term “natural” is often understood to mean mimicry of the “real world.”

From its inception, the inventor of the GUI did not think of it as a way to create easy-to-use systems or systems that were “fun” to interact with. Rather, it was a way to “augment” human capabilities. Analogously, developing a NUI system is more like creating a game: the interaction should be fun and should introduce new challenges gradually. Game interfaces differ from NUIs, however, in that most games offer a challenge as part of the game, whereas a NUI offers only a path to skilled practice. The development of skilled practice may be challenging in and of itself.

In the natural user interface, natural refers to the user’s behavior and feeling during the experience, not to the interface being the product of some organic process. For example, wouldn’t a voice interface for giving commands be more “natural” in the context of driving a car? The argument would be that typing while driving is not natural, and few would disagree. Would that make speaking while driving “natural”? It is certainly more appropriate to the context: you can keep your eyes on the road and your hands on the wheel. We’ll leave aside the subtlety of where your attention is focused.

The term also implies that the intended use is not a trivial one, as with an ATM, where the functions are very limited and the user is led through the interaction step by step, needing only to push the “correct” button. (Note: we don’t mean to minimize the importance and challenge of creating and testing effective designs for these types of interfaces; they are just not NUIs.)

All science is experiential; but all experience must be related back to and derives its validity from the conditions and context of consciousness in which it arises, i.e., the totality of our nature. — Wilhelm Dilthey

NUI is defined by three elements:

* Enjoyable
* Leading to skilled practice
* Appropriate to context

NUI Guidelines:

  1. Create an experience that, for expert users, can feel like an extension of their body.
  2. Create an experience that feels just as natural to a novice as it does to an expert user.
  3. Create an experience that is authentic to the medium — do not start by trying to mimic the real world or anything else.
  4. Build a user interface that considers context, including the right metaphors, visual indications, feedback, and input/output methods for the context.
  5. Avoid falling into the trap of copying existing user interface paradigms.
  6. Consider the context of use and the new possibilities that the interface brings to interaction in that context.
  7. Be aware that in different environments the patterns of use of an interface may be dramatically different.
  8. Start simple and look for every opportunity to build on simple interactions to support more complex tasks.
  9. Forget past interaction styles. Don’t simply transcribe an application rendered in a traditional medium (web or GUI) as a NUI.
  10. Choose a promising niche for developing a family of NUI apps. Thus far, the NUI has shown the most success in social and entertainment contexts. Its application to other domains requires an analysis of the way in which the interaction would support and teach the rules of the interaction domain.
  11. Test the fundamental mechanics of the primary interactions before building out the entire interaction. When these are working well (i.e., users enjoy doing them), build on them.
  12. Create an environment that is optimized for touch in its layout, feedback, metaphors, and behaviors. Any item that responds to users’ touch must be at least 15 mm in size in all directions, and there must be at least 5 mm between minimally sized touch targets.
  13. Create immediate responses to all user input that will receive a response. Prebuffer content, provide a transition or use other mechanisms to make sure that every touch receives an immediate and meaningful response. An application without immediate responses detracts significantly from the user experience.
  14. Enable single-finger drag and flick movements on movable content. You must always define a single-finger drag and flick to make sure that users can always apply these basic manipulations to all content.
  15. Do not use time-based gestures on content. Time-based activations introduce mandatory delays for expert users, and they also detract from the sense of a natural environment.
  16. Enable users to manipulate content directly, rather than through user interface controls. For example, use a scale manipulation instead of a zoom button.
  17. Always show signs of life, even when the user is not interacting. For example, the Water attraction on Microsoft Surface was designed to be constantly in motion, but it is never distracting.
  18. Use scaffolding. Scaffolding is a teaching method that breaks bigger challenges (such as “How does this whole system work?” or “What are all the possibilities of this system?”) into smaller problem-solving challenges (such as “How do I initiate this one action?” or “What can I do next?”). These small problems are addressed through specific prompts, hints, and leading questions. Scaffolding provides supportive structures and situations that encourage active exploration.
  19. Foreshadow upcoming results so that users can preview and reverse their actions. For example, during the resize of an image, if the image is about to jump to full screen (obscuring other images), show an outline or a transparent version of the image at full-screen size. The user can then either reverse the action (the image will not jump to full size) or lift her fingers so that the image becomes full size.
  20. Consider how multiple users will learn together. Users, especially children, invite others to explain the use of the system.
  21. When users are asked to identify themselves, the process should be easy, private, and secure.
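Guideline 12 gives physical sizes (15 mm targets, 5 mm spacing), but layout code works in pixels, so the figures must be converted using the display’s density. The sketch below is an illustrative assumption, not part of any particular toolkit; the helper names, constants, and the 96 dpi example are ours:

```python
# Illustrative sketch of guideline 12: touch targets at least 15 mm
# in every direction, with at least 5 mm between minimally sized
# targets. Helper names and values are assumptions for illustration.

MM_PER_INCH = 25.4
MIN_TARGET_MM = 15.0   # minimum touch-target size (guideline 12)
MIN_SPACING_MM = 5.0   # minimum gap between minimal targets

def mm_to_px(mm: float, dpi: float) -> int:
    """Convert a physical length in millimeters to device pixels."""
    return round(mm * dpi / MM_PER_INCH)

def touch_target_px(dpi: float) -> tuple[int, int]:
    """Return (minimum target size, minimum spacing) in pixels."""
    return mm_to_px(MIN_TARGET_MM, dpi), mm_to_px(MIN_SPACING_MM, dpi)
```

At a nominal 96 dpi display this works out to roughly 57 px targets with 19 px spacing; a higher-density touch screen needs proportionally more pixels to preserve the same physical size.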