Intuitive User Interface for Projection Based Systems

DOI: 10.17577/IJERTCONV5IS01107


Purav Bhardwaj

Electronics and Telecommunication Department,

Atharva College of Engineering, Mumbai, India.

Abilash Nair

Electronics and Telecommunication Department,

Atharva College of Engineering, Mumbai, India.

Mahalaxmi Palinje

Assistant Professor, Electronics and Telecommunication Department, Atharva College of Engineering, Mumbai, India.

Abstract: User Interfaces (UI) evolve with the advent of new technologies and consumer product paradigms. In this paper we expound on a modality in current trends of user interfaces for projection based systems. These systems extend the touch-sensitive experience to any surface and are not limited to a prescribed set. The interactive experience also encompasses communication through gestures and tangible objects by means of meaningful data extrapolation. This user interface forms part of a larger "Sentient Surfaces" system that uses raw depth information and machine learning algorithms to extrapolate meaningful data. Through combined techniques such as posture recognition and palm rejection we make the interactive experience more seamless and perceptive.

Keywords: Graphical User Interface; User Experience; Gestures; Projection; Interaction.

  1. INTRODUCTION

    The creation of information systems, predominantly computers, has resulted in several methods of interaction. The earliest realization of modern interactive design can be traced back to punch cards and mainframes. With the advent of the computer came the keyboard, which to this day forms an integral part of communication and I/O. The arrival of the mouse added a new dimension to interaction with digital systems, but it is the conception of touch screens that brought about a paradigm shift by virtue of simulating real-world gestures on a screen.

    Common touch screen systems, however, have a limited capacity. For instance, these systems can only provide the location of interaction points in two dimensions and have limited tracking capabilities (i.e. the ability to track only a handful of interaction points). In addition, no shape information about the interaction zone is available. These limitations are in part due to hardware implementations, which are driven by the emphasis on emulating point inputs for prevalent GUI interactions.

    In an effort to create more visceral and perceptive interactions with digital systems, several applications aim to move the input/output capabilities of a touchscreen onto everyday surfaces. This is primarily achieved using a projector and some form of input sensing system (which in the considered case uses image processing and raw depth data). Such an arrangement is used predominantly for the following reasons:

    • Absence of surface limitations: Since the output, i.e. the display, is realized using a projector, there exist virtually no limitations on the surfaces onto which the output can be projected.

    • Minimum apparatus: The apparatus can essentially be confined to a projector, a sensing module and a processing station such as a CPU. In fact, all the apparatus can be confined to a single compact device.

    • Possibility of addition and concatenation: The design of the entire apparatus is such that it is easy to add functionality by means of new software or additional modules.

    • Low expenditure: With decreasing costs of projectors and image processing sensors (primarily cameras), the system can be realized at minimal cost.

    In addition to changes at the software and hardware level, a truly instinctive and intuitive interaction can only be achieved through a specially curated user interface targeted at projection based systems.

    In this paper we define a modality of current user interfaces called Intuitive User Interfaces, based on established UX principles derived from extensive user testing and studies [10]. These interfaces include uniquely curated UI elements in addition to astute approaches such as palm rejection and hand position detection.

  2. USER INTERFACE DESIGN

    The overall design of the user interface for the aforementioned projection based system is based on three broad-based goals for an insightful interface and seamless interaction.

    1. Visceral Level

      Creation of an alluring onboarding experience for the user caters to the immediate emotional response. A visually captivating user interface captures the user's interest and attention, and to an extent familiarizes the user with the interface and creates a sense of commitment.

    2. Behavioural Level

      The user interface must be gratifying at a behavioural level, i.e. at an experience level. Visceral astuteness cannot compensate for user experience. The product must cater to the expectations of users. User experience deals with collecting the right feedback to interpret the adoption of an interface.

    3. Reflective Level

      The interactions must be noteworthy in that they create a sense of admiration. Moreover, the user interface must draw inspiration from real-life events, entities and gestures, i.e. it must be a reflection of the physical world so as to create intuitive and coherent interactions.

      In fulfilling all of the above goals we create UI elements that take into consideration not just point interactions but several other parameters such as hand posture, angle and gestures. When taking advantage of the expressiveness of gesture-based interaction, however, it is important not to go too far and create a tedious interaction experience. It is therefore essential to strive for a middle ground where current trends in interfaces are realized using interaction styles based on a more flexible and direct sensing technique.

      The major tasks of the aforementioned user interface can be divided into four major clusters, namely:

      • Pointing: Locating an arbitrary position within spatial confines, e.g. positioning the mouse at a specific or random location.

      • Parameter regulation: Controlling the value of certain parameters to obtain the required outcomes, e.g. scrolling to view different sections of a page.

      • Information extraction: Triggering certain events to obtain data of a specific nature, e.g. opening a file.

      • Spatial selection: Identifying one of several spatially distributed alternatives, e.g. accordion panels.

    These four tasks form a good basis for designing a modality in user interfaces. Any complex user interface, however, performs several additional tasks which may create ambiguity with reference to the user's intentions. In order to eliminate this ambiguity, the user must be provided with contextual information.

    A strong sense of context helps avoid false positives, where everything a user does is considered an interaction. In present GUI design, context is provided by means of the position of the cursor in the spatial domain, e.g. the relative spatial position of the mouse defines the extent of motion and subsequent actions such as dragging, scrolling, etc.

    In Intuitive User Interfaces, context is provided through visual cues such as shadows, opacity, color and animations instead of relative spatial positions. All UI elements have x, y and z co-ordinates. Shadows are created by the elevation difference between overlapping elements. The z-axis is perpendicularly aligned to the plane of the display, with the positive z-axis extending towards the user. Motion or change along the z-axis is the result of interaction with the element concerned. Vertical context (i.e. height) is conveyed through shadows, with grounded objects casting overall smaller and more opaque shadows. In the physical world, objects can be stacked or affixed to one another, but cannot pass through each other. Such objects cast shadows and reflect light. Intuitive User Interfaces reflect these qualities to form a spatial model that is familiar to users and can be applied consistently.
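    To make the elevation-to-shadow mapping concrete, the following is a minimal sketch in Python of how a renderer could derive shadow softness, offset and opacity from an element's z value, so that grounded elements cast smaller and more opaque shadows. The function name and all numeric constants are assumptions chosen for illustration; they are not taken from the described system.

    # Illustrative sketch (assumed constants): elevation above the projection
    # plane mapped to shadow parameters.
    from dataclasses import dataclass

    @dataclass
    class Shadow:
        blur_radius_px: float   # softness of the shadow edge
        offset_px: float        # how far the shadow falls from the element
        opacity: float          # 0.0 (invisible) .. 1.0 (fully opaque)

    def shadow_for_elevation(z: float) -> Shadow:
        """Derive shadow parameters from an element's elevation."""
        z = max(0.0, z)
        return Shadow(
            blur_radius_px=2.0 + 1.5 * z,        # higher elements blur more
            offset_px=1.0 + 0.5 * z,             # and fall farther away
            opacity=max(0.15, 0.45 - 0.03 * z),  # but become less opaque
        )

    # A card resting on the surface (z = 1) versus one raised on interaction (z = 8):
    print(shadow_for_elevation(1.0))
    print(shadow_for_elevation(8.0))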

    Fig (1) Contextual cues using element overlapping and shadows.

    Context is also conveyed through the motion of UI elements. Motion provides:

    • Indication of what happens once a user completes a gesture.

    • Hierarchical and spatial relationships between UI elements.

    • Adept distraction from behind-the-scenes operations.

    • Guided attention between views.

    Having defined these principles of the user interface, we then design specific UI elements.

  3. DESIGNING UI ELEMENTS

    Elements within the user interface are designed keeping in mind the principles and contextual cues discussed in earlier sections. The design of UI elements encompasses several parameters, namely style/layout, the color palette and the elements themselves.

    1. UI Elements/Components

      We define three rudimentary elements that form the substructure of the GUI in its entirety.

      1. Cards: A card is a sheet element that serves as an entry point to more detailed information. A card may contain information pertaining to a single subject. This information may include, but is not limited to, text, photos and links of varying sizes and lengths. Cards are a convenient means of displaying content composed of different elements. A collection of cards is coplanar, i.e. it lies on the same plane. Cards have a constant width and variable height. Content hierarchy within a card is used to draw the user's attention to important information. Primary information is placed at the top, followed by secondary information at the bottom in a smaller font size.

      2. Accordion panels: Accordion panels contain creation flows. An accordion panel is a lightweight container that may either stand alone or be added to a larger surface element such as a card. The height of the host card can increase in some cases (such as expansion to reveal comments or additional information). Accordion panels may be displayed in a sequence to form creation flows, e.g. a UI element may use a series of such panels to collect additional information.

      3. Buttons: Buttons communicate the action that will occur when the user touches them. The type of button used should be suited to the context in which it appears. In standard dialogs, the affirmative button should be placed on the right while the dismissive button is placed on the left. In cards, buttons are best placed on the left side of the card to increase their visibility. However, as cards have flexible layouts, buttons may be placed in any location suited to the content and context, while maintaining consistency within the UI.

        Fig (2) Here [1], [2] and [3] represent cards, buttons and accordion panels respectively. [4] denotes positioning of cards at the same z position.
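        As an illustration of the three rudimentary elements, the sketch below models cards, accordion panels and buttons as simple data structures. The class names, the constant card width and the height formula are assumptions made for this example, not part of the described implementation.

        # Sketch of the three rudimentary elements (assumed names and sizes).
        from dataclasses import dataclass, field
        from typing import List

        CARD_WIDTH = 344      # assumed constant width in pixels
        CARD_Z_PLANE = 1.0    # all cards in a collection share this elevation

        @dataclass
        class Button:
            label: str
            affirmative: bool = True   # affirmative buttons sit on the right in dialogs

        @dataclass
        class AccordionPanel:
            title: str
            expanded: bool = False     # expanding increases the host card's height

        @dataclass
        class Card:
            primary_text: str                      # placed at the top, larger font
            secondary_text: str = ""               # placed below, smaller font
            buttons: List[Button] = field(default_factory=list)
            panels: List[AccordionPanel] = field(default_factory=list)
            z: float = CARD_Z_PLANE

            def height(self) -> int:
                """Variable height: grows with content and with expanded panels."""
                base = 120 + 24 * len(self.buttons)
                return base + sum(160 for p in self.panels if p.expanded)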

    2. Color Palette

      The color palette takes cues from contemporary architecture, road signs, pavement marking tape and athletic courts. Colors should be unexpected and vibrant. The palette comprises primary and accent colors; it starts with the primary colors and fills in the spectrum to create a complete and usable palette for the UI.

      Fig (3) Spectrum of possible color palette for the color grey.

      The accent color is predominantly used in active elements such as accordion panel indicators and buttons. The accent color must provide contrast to the spectrum of passive elements such as cards.

    3. Gestures

    Humans use hand motions and gestures to communicate with each other. By extending these gestures into the digital realm, an enhanced interaction experience can be achieved. A gesture is defined as a movement of the body, head, arms, hands or face that is expressive of an idea, opinion or emotion. This definition of a gesture is a generalized one, and while it might eventually be possible to interpret such gestures in real time, such interpretation is far beyond the present scope of computer vision and processing capabilities. For this reason, the term "gesture" has a restricted connotation when used in the context of human-computer interaction: it usually describes a movement made using touch or an input device and refers to the scope of a command.

    A large set of gestures has already been proposed by researchers for interactive surfaces [2][3][4][5][6]. In this section we build upon prior work by comparing various user-defined sets of gestures to propose a pertinent set. Our proposal indicates the importance of incorporating consensus between end-users and designers in the creation of surface gestures. The gestures we define are as follows:

    • Pointing: Defining a point location on the surface by touching the surface at a specific x-y co-ordinate. Such a point is defined to manipulate elements or information in the user interface. It can also act as a stepping stone for further intended actions. Spatial selection of options while the user stays with the current task (such as typing) is possible. Multiple touch inputs for pointing can be used to trigger specialized or multiple events.

    • Hovering: Hovering involves acknowledging a particular co-ordinate without touch interaction, i.e. above the touch input region (explained in the next section). The distinction between hovering and pointing is that hovering is used predominantly for acknowledgement rather than any concrete affirmative action.

    • Dragging: A dragging gesture is registered on continued contact with the surface while maintaining some motion to manipulate a widget or element, e.g. dragging a scroll bar to access content on a webpage.

    • Lingering: Lingering is a time-dependent gesture. It can provide additional or contextual information pertaining to a specific UI element. If a user lingers, i.e. maintains contact with a UI element, for a duration of 500 ms, a corresponding animation is triggered, e.g. lingering on a card to reveal contextual information.

    • Pinching: This gesture includes both pinching in and pinching out. The gesture may trigger zooming in or out of content such as a photo. In addition, the gesture can be used to move in and out of directories.

    This set of gestures forms the basis of user interaction with projection based systems and is capable of emulating almost all conventional functions of a traditional interface such as a desktop GUI, as sketched below.
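    The following is a minimal sketch of how these gestures might be distinguished from a stream of sensed contacts. The 500 ms lingering duration follows the text; the drag distance threshold, the hover handling and the class name GestureTracker are assumptions for illustration only, and pinching (two contacts moving relative to each other) is omitted for brevity.

    # Hypothetical gesture discrimination from sensed contact samples.
    import math, time

    LINGER_MS = 500     # contact held this long without motion => linger (from the text)
    DRAG_MIN_PX = 12    # assumed motion threshold separating drag from point

    class GestureTracker:
        def __init__(self):
            self.down_pos = None
            self.down_time = None

        def on_contact(self, x, y, touching, now=None):
            """Return the gesture implied by the latest contact sample."""
            now = now if now is not None else time.monotonic()
            if not touching:
                self.down_pos = None
                return "hover"                      # above the touch input region
            if self.down_pos is None:
                self.down_pos, self.down_time = (x, y), now
                return "point"
            dist = math.hypot(x - self.down_pos[0], y - self.down_pos[1])
            if dist > DRAG_MIN_PX:
                return "drag"
            if (now - self.down_time) * 1000 >= LINGER_MS:
                return "linger"
            return "point"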

  4. TOUCH SENSING, MODELLING AND ERROR RECTIFICATION

    For Intuitive User Interfaces, we are interested in implementing techniques that allow detection of touch on the surface without the use of any specialized apparatus. This allows any arbitrary surface to be used and removes complexity. One way to achieve this is to project a sheet of infrared light and watch for fingers intercepting the light. The other method is to use depth data to define a touch input region as a virtual volume located above the surface. For robust tracking of points, it is crucial to estimate the underlying surface and, using the latter method, to generate difference maps and detect whether objects fall within the defined touch input region by means of connected component analysis. In order to generate multitouch events we simply use a Windows global hook or the TUIO protocol [7].
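    The sketch below illustrates the depth-based method under assumed thresholds: the underlying surface is estimated from frames of the empty scene, a difference map is computed for each new frame, pixels falling inside a thin touch input volume above the surface are kept, and touch points are extracted by connected component analysis. The use of numpy/scipy and all numeric constants are assumptions made for this example; the described system does not prescribe a particular library.

    # Depth-based touch detection sketch (assumed thresholds, numpy/scipy for illustration).
    import numpy as np
    from scipy import ndimage

    TOUCH_MIN_MM = 5     # assumed lower bound of the touch volume above the surface
    TOUCH_MAX_MM = 20    # assumed upper bound
    MIN_BLOB_PX = 30     # reject specks of sensor noise

    def estimate_surface(depth_frames):
        """Per-pixel median over empty frames approximates the underlying surface."""
        return np.median(np.stack(depth_frames), axis=0)

    def detect_touches(depth_mm, surface_mm):
        """Return (row, col) centroids of blobs inside the touch input region."""
        height = surface_mm - depth_mm                      # distance above the surface
        mask = (height > TOUCH_MIN_MM) & (height < TOUCH_MAX_MM)
        labels, n = ndimage.label(mask)                     # connected component analysis
        touches = []
        for i in range(1, n + 1):
            blob = labels == i
            if blob.sum() >= MIN_BLOB_PX:
                touches.append(ndimage.center_of_mass(blob))
        return touches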

    1. Hand Modelling

      This section has so far concerned itself with the detection of touch points in 2D space (i.e. in the x and y directions). However, an exhaustive model includes other parameters such as finger and hand posture. This enables a more ubiquitous and enriching interaction experience. Using the same principles as those used to define the touch input region and locate touch points, we can locate the position of the user's palms and wrists. From the same raw depth data, we define two additional virtual volumes, a hand region and a wrist region. Using connected component analysis, different models for the aforementioned gestures can be created. A future improvement to this solution could integrate a second vision-based tracking system that follows the participants themselves instead of merely their touch points.
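      Continuing the previous sketch, the hand and wrist regions can be treated as further depth bands above the touch input region. The band boundaries below are assumptions; each virtual volume is segmented independently, and its connected components give a coarse posture model.

      # Hand/wrist modelling sketch: one virtual volume per depth band (assumed limits).
      from scipy import ndimage

      BANDS = {"touch": (5, 20), "hand": (20, 80), "wrist": (80, 160)}  # mm above surface

      def classify_regions(depth_mm, surface_mm, min_blob_px=30):
          """Find connected components inside each virtual volume separately."""
          height = surface_mm - depth_mm
          regions = []
          for name, (lo, hi) in BANDS.items():
              labels, n = ndimage.label((height > lo) & (height < hi))
              for i in range(1, n + 1):
                  blob = labels == i
                  if blob.sum() >= min_blob_px:
                      regions.append((name, ndimage.center_of_mass(blob)))
          return regions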

    2. Palm Rejection

      Modern tablets and interactive surfaces are often used to emulate the pen and paper model. One of the major issues faced in this process is the occurrence of illegitimate inputs on multi-touch devices from our palms or any other part of the hand apart from the stylus or finger. These accidental touches can be characterized by the following examples:

      • One of the user's fingers, apart from the one used to interact, accidentally registers a touch owing to the multi-touch nature of the device.

      • The user is forced to hold the wrist at an awkward angle in an attempt to avoid erroneous touches, which is highly uncomfortable when using the device for longer durations.

      • The user has to use a cover sheet to prevent accidental touches, which is highly distracting.

        Current solutions to these issues can be broadly classified into:

        1. Hardware based solutions

        2. Software based solutions

        Hardware-based solutions incorporate methods such as a pressure-sensitive capacitive stylus or creating a pressure profile of hands and other objects. There are also methods where the stylus is connected to the device through a port (for example, on earlier iPads) and touch inputs from any other object are rejected.

        Software solutions are divided into two categories:

        1. Implicitly rejecting the touch

        2. Explicitly rejecting the touch

        Implicit rejection works on pre-defined parameters such as contact size, pressure and position (for example, Penultimate). Explicit rejection involves defining a certain space on the surface where touch inputs are not registered (for example, SmartNote12).

        Based on studies conducted by Richard et al. at SMU [1], we propose a technique in which both implicit and explicit methods are used. The area around the bezel where the dominant hand rests is defined for explicit rejection, while implicit rejection based on contact area and pressure profile is also present.

        This is in line with the above study, which shows that most erroneous touches occur when writing or drawing rather than when tapping or dragging. The area around the bezel can also be resized and shifted according to user preference, which accommodates users with different dominant hands.

        Fig [x]: Here [1], [2] and [3] show the errant touches from tapping, dragging and writing tasks in the stylus condition respectively. [4], [5] and [6] show errant touches from tapping, dragging and writing tasks in the finger condition.
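        The combined rejection scheme could be sketched as follows. The rest-region rectangle and the contact-area cut-off are illustrative assumptions, standing in for the contact-area and pressure-profile tests described above.

        # Combined palm rejection sketch: explicit rest region plus implicit size test.
        from dataclasses import dataclass

        PALM_AREA_PX = 900        # assumed: fingertips are well below this contact area

        @dataclass
        class RestRegion:
            x0: int
            y0: int
            x1: int
            y1: int               # resizable, shiftable rectangle near the bezel

            def contains(self, x, y):
                return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

        def accept_contact(x, y, area_px, rest_region):
            """Return True only for contacts that should reach the UI."""
            if rest_region.contains(x, y):       # explicit rejection near the bezel
                return False
            if area_px > PALM_AREA_PX:           # implicit rejection by contact size
                return False
            return True

        # Example: a right-handed layout keeps the rest region in the lower-right corner.
        right_hand_rest = RestRegion(x0=900, y0=500, x1=1280, y1=800)
        print(accept_contact(400, 300, area_px=120, rest_region=right_hand_rest))   # True
        print(accept_contact(1000, 600, area_px=120, rest_region=right_hand_rest))  # False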

    3. Additional Inputs

    Additional inputs to the defined user interface can be produced with the help of miscellaneous stimuli such as tangible objects and pens/styluses. The contact location for such stimuli is defined within the touch input region using similar principles. With accurate models, touch points for the hand and the pen/stylus can be differentiated.

    In addition, tangible objects can be used to trigger specialized events or interactions. This can be achieved either by using classifiers to determine a class of objects or by using visual codes for augmented reality and projection scenarios. These visual codes may be used to identify any object large enough to bear them, without recourse to complex generalized object recognition. Such visual codes are especially pertinent in tabletop projection scenarios for game pieces or parameter controls, such as knobs, that vary in semantics.
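    As a simple illustration of differentiating these stimuli, the sketch below classifies a contact blob by its area. The cut-off values are assumptions, and a real system could instead rely on trained classifiers or fiducial visual codes such as those used by reacTIVision [7].

    # Coarse classification of a contact blob inside the touch input region
    # (assumed area cut-offs, for illustration only).
    def classify_contact(area_px):
        if area_px < 40:
            return "stylus"           # fine tip: route to inking / precise pointing
        if area_px < 400:
            return "finger"           # ordinary touch gestures
        return "tangible-object"      # trigger a specialized event, e.g. a knob widget

    for area in (25, 150, 2000):
        print(area, "->", classify_contact(area))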

  5. CONCLUSION

We have defined a novel user interface, which we call the Intuitive User Interface, for projection based systems. These user interfaces are a modality of current trends that makes them more perceptive and ubiquitous in use.

The user interface is based on three broad-based levels of design. It not only registers touches as 2D points but also extrapolates palm and wrist position and orientation to provide a meaningful extension. We also expound on element design, gestures and aesthetic guidelines for such user interfaces and their elements. In addition, we explore rectification and error prevention methods, and we delve into supplementary inputs for the UI and contemporary methods of interaction.

REFERENCES

  1. Ke Shu, Singapore Management University. Understanding and Rejecting Errant Touches on Multi-touch Tablets.

  2. Malik, S., Ranjan, A. and Balakrishnan, R. Interacting with Large Displays from a Distance with Vision-Tracked Multi-Finger Gestural Input. UIST 2005, 43-52.

  3. Morris, M.R., Huang, A., Paepcke, A. and Winograd, T. Cooperative Gestures: Multi-user Gestural Interactions for Co-located Groupware. CHI 2006, 1201-1210.

  4. Moscovich, T. and Hughes, J.F. Multi-Finger Cursor Techniques. Graphics Interface 2006, 1-7.

  5. Rekimoto, J. SmartSkin: An Infrastructure for Freehand Manipulation on Interactive Surfaces. CHI 2002, 113-120.

  6. Ringel, M., Berg, H., Jin, Y., and Winograd, T. Barehands: Implement-Free Interaction with a Wall-Mounted Display. CHI 2001 Ext. Abstracts, 367-368.

  7. Kaltenbrunner, M. reacTIVision and TUIO: A Tangible Tabletop Toolkit. In Proceedings of the ACM Conference on Interactive Tabletops and Surfaces (2009), pp. 9-16.

  8. Brandl, P., Forlines, C., Wigdor, D., Haller, M., and Shen, C. Combining and Measuring the Benefits of Bimanual Pen and Direct-Touch Interaction on Horizontal Interfaces. In Proceedings of the ACM Conference on Advanced Visual Interfaces (2008), pp. 154-161.

  9. Morris, M.R., Wobbrock, J.O., and Wilson, A.D. Microsoft Research; Information School, DUB Group, University of Washington.

  10. Buxton, W. Projection-Vision Systems: Towards a Human-Centric Taxonomy. Unpublished manuscript, Toronto: Buxton Design, 2005.
