Introduction
The praxis and scope of designing interactive learning environments go beyond online platforms for learning content or interactive whiteboards. Specifically, non-traditional interfaces create opportunities for computer-supported co-located learning and make technology tools accessible to audiences with little experience with traditional computer interfaces.
Drawing on project examples from the Cape Town-based design firm Formula D interactive, I would like to highlight the key considerations in the design process, the challenges encountered, and the lessons learned when designing non-traditional interfaces for learning applications in South Africa.
Multitouch surface at the Two Oceans Aquarium in Cape Town
In 2008 our company designed a 100” rear-projected multitouch wall for the Two Oceans Aquarium in Cape Town (Fig 1).
The goal of the exhibit was to create an immersive interactive landscape that encourages aquarium visitors to inquire and learn about the habitats, calls and threats of local frog species.
Users find themselves looking at a photo-realistic animated landscape. By touching various hotspots they activate one of the habitats, revealing a 360° panoramic view of a typical area. Now, the challenge is to discover the frog species living in the environment. When a frog icon has been found and touched, a window with a description, images, video clips and the call of the frog appears. Screen objects can be moved around by dragging and dropping them, for example to bring them to an individual's eye level. Some objects can also be scaled using two fingers or both hands. As multiple frog sounds can be activated and deactivated simultaneously, visitors may collaborate in orchestrating “frog symphonies”, broadcast via a directional overhead speaker.
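The two-finger scaling behaviour follows what is now familiar pinch logic: an object grows or shrinks with the ratio of the distance between the two touch points. A minimal sketch of this calculation (the clamping range is an illustrative assumption, not the actual frog-wall code):

```python
import math

def pinch_scale(p1_prev, p2_prev, p1_now, p2_now, current_scale,
                min_scale=0.5, max_scale=3.0):
    """Update an object's scale from a two-finger (or two-hand) pinch.

    Each point is an (x, y) touch position; the scale changes by the ratio
    of the current to the previous distance between the two contacts.
    """
    d_prev = math.dist(p1_prev, p2_prev)
    d_now = math.dist(p1_now, p2_now)
    if d_prev == 0:
        return current_scale
    new_scale = current_scale * (d_now / d_prev)
    return max(min_scale, min(max_scale, new_scale))

# Example: contacts move apart from 200 px to 300 px -> object grows by 1.5x
print(pinch_scale((100, 100), (300, 100), (50, 100), (350, 100), 1.0))  # 1.5
```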
The frog wall featured various novel interactions that had not been seen in South Africa at the time. It thus presented itself as a good case study for non-traditional HCI, as it was exposed to around a thousand users from various cultural backgrounds every day at the Two Oceans Aquarium.
Lessons learned
At first, many people across all audiences did not identify the large screen as an interactive surface. In addition to on-screen instructions, additional instructional signage was required, even though bold icons were used as touch points in the GUI (Fig 2). Touch screens were quite common at the time in South Africa (for example in ATMs, which were in use across all levels of the working society), so we concluded that various contextual elements did not support the user interaction, although the GUI was carefully crafted:
- The screen size did not correspond with the conceptual model people had of a touch surface at the time.
- Apart from a photo booth, the screen was the only interactive exhibit in the aquarium.
- The first impression of the exhibit design was reminiscent of the live exhibits at the aquarium (fish tanks). These exhibits are usually not touched, or touching them is prohibited.
- The majority of people in South Africa at the time were unfamiliar with the concept of multitouch (the iPhone had not yet been released in South Africa), so these features were hardly used. The two-finger scaling gesture, although indicated by GUI elements (Fig 2), was only slowly adopted by visitors in 2009, when this interaction entered the mainstream through the iPhone and iPad. Less privileged communities without access to these devices did not use this feature at all unless they were instructed by staff or observed other visitors.
- Users also mostly did not understand that the content windows could be moved around, although this feature is very common on desktop computers.
- In turn, once the screen was in use by at least one user, other visitors frequently joined in. The size of the display seemed to suggest that multiple simultaneous users were possible.
In summary, our observations seem to confirm that the context and size of the display were in conflict with the conceptual model of touch screens and desktop computers that was familiar to at least part of the audience. As a result, certain GUI elements common to desktop applications were not identified. On the other hand, this contextual distance from traditional interfaces may have led users to embrace the multi-user functionality almost naturally.
Locomotion interface at Cape Town Tourism
In 2009 we installed our first 6 metre by 2 metre interactive wall at the Cape Town Tourism information centre, which uses visitors' body movements as a means to navigate digital content. The wall projection showcases 40 activities and sights in the Cape region, set against an animated backdrop of interchanging panoramic representations of iconic areas around the Cape. When visitors step in front of the wall and align themselves with one of the projected icons, they trigger animations, sounds, and pop-ups with information on the selected attraction (Fig. 3, 4).
Limitations of the interaction
At the time we used infrared cameras for blob detection not only for multitouch displays, but also for interactive floor and wall projections (Microsoft Kinect had not been released yet). The overhead 2D tracking of user activity on the floor provided only x and y positions as well as blob size and grouping as a means of user input. Since the floor in front of the wall is also a passage area in the centre, we had to differentiate between intended and accidental users. To avoid accidental input, we decided to limit the interaction area to a designated floor strip in front of the wall and to work only with object movement along the x-axis, which further limited user control in the application. A vertical colour bar that moved with the user across the wall made users aware that they were influencing the environment. Finally, we decided to trigger content when a user stopped at a certain area, as stopping may suggest an interest in that specific area.
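Conceptually, the interaction logic boils down to mapping a tracked blob's x-position to a cursor on the wall and starting a dwell timer when the user stops moving. A minimal sketch under these assumptions (the tracker and wall interfaces, as well as the thresholds, are illustrative, not the installation's actual code):

```python
import time

WALL_WIDTH_PX = 3840     # projected wall resolution (assumption)
DWELL_SECONDS = 1.5      # how long a user must stand still to trigger content
MOVE_TOLERANCE = 0.02    # normalised movement below which we count as "stopped"

def run(tracker, wall):
    """Track the nearest blob, move the colour bar with it, trigger on dwell."""
    last_x, dwell_start = None, None
    while True:
        x = tracker.get_blob_x()   # hypothetical: normalised x of nearest blob, or None
        if x is None:              # nobody in the interaction strip
            last_x, dwell_start = None, None
            wall.hide_cursor()
        else:
            wall.move_colour_bar(int(x * WALL_WIDTH_PX))   # feedback: bar follows the user
            if last_x is not None and abs(x - last_x) < MOVE_TOLERANCE:
                dwell_start = dwell_start or time.time()
                if time.time() - dwell_start > DWELL_SECONDS:
                    wall.trigger_content_at(int(x * WALL_WIDTH_PX))  # open nearest pop-up
                    dwell_start = None
            else:
                dwell_start = None
            last_x = x
        time.sleep(1 / 30)         # ~30 fps tracking loop
```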
Lessons learned
Digital “real estate”: There are many challenges and limitations when multiple users interact in a shared digital environment that is based on one shared output. Online, or in AR scenarios where users have individual devices or headsets, a subjective render of the environment with personalised content is possible. However, if there is only one shared output, the “real estate” of the digital space needs to be shared just like our physical spaces.
In the design process of this project we had a lot of debate around managing the physical interaction area and its corresponding digital space for an unknown number of simultaneous users. We were concerned about conflicts arising between users who aimed at triggering events in proximity to each other. This resulted in a quite rigid partitioning of the interface area with finite positions of trigger points, so we could be sure there would not be spatial conflicts between interacting users.
However, this concern proved to be unjustified, since the natural behaviour of people in communal space is to keep a minimum distance between each other unless two people are grouped, like a couple walking arm in arm or an adult with a child. It needs to be noted that this minimum distance between people in physical space differs across cultural environments. Nevertheless, we observed that a “spatial etiquette” is maintained when multiple users interact in large interactive environments such as the interactive tourism wall or the Frog Touch wall. People generally keep their distance, both physically and virtually.
Gesture controlled “point screen” for the Centre for Public Service Innovation
As part of a larger installation of various interactive displays for the Centre for Public Service Innovation (CPSI) near Pretoria, we created several user interfaces for iPoint, a hand- and finger-tracking device developed by Fraunhofer HHI in Berlin (Fig 5). iPoint accurately tracks one or multiple finger gestures, which can then be calibrated to navigate a graphical user interface. We developed two applications for this system in an educational exhibition environment: an information kiosk with best practice examples of innovation in public service, and an assessment of users' understanding of innovation principles.
Challenges with the conceptual model of the interaction
Although pointing is surely one of the most archaic gestures, as it can already be observed in small infants, users presented with a screen and the invitation to point at it behave anything but “naturally”. Even when instructed to point at the screen “naturally”, many users bent their arms in anticipation of triggering a sensor in the black box above or underneath them.
It seems “unnatural” to users to interact with a screen by pointing at it. Thus, the gesture itself needed to be instructed and could only be applied after reflection in the new context (Fig. 6).
The trigger problem: Most computer interfaces separate the pointer from the input action. This gives users a chance to reflect on their choice before executing an action. A gesture-based pointing interface does not have a mouse click, so an alternative way of selecting an action had to be invented. Ten years earlier, I had worked on pointing devices using electric field sensing (EFS) technology at the MARS Lab (Fraunhofer Institute for Media Communication) in Germany [3]. Various ideas and concepts were explored at the time, such as multimodal interface solutions which combined the finger-pointing action with finger snapping or voice commands. However, the added complexity of the interaction always felt unsatisfactory, since it undermined the simplicity of the pointing gesture.
Thus, for our South African client, we looked at GUI solutions that did away with clicking and worked solely with roll-overs, like the website experiment Don't Click It [1].
Our first iPoint GUI (Fig. 7) was a structure of unfolding and collapsing content display areas which open and close according to the position of the cursor. However, what seemed a viable solution for a desktop computer interface with a mouse pointer did not work for a finger-pointing device. After selecting content, users had to leave their arm in the same position for as long as they needed to absorb the content, which caused great discomfort. A subsequent version was designed so that users could lower their arm without triggering other content areas, but even this solution resulted in many accidental selections. The current design uses time-delayed activation with a visual countdown, which activates a link only if a user remains in one position for a few seconds.
A similar solution is featured in Microsoft Kinect games.
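The time-delayed activation can be thought of as a small per-target state machine: entering a target starts a countdown, leaving it resets the countdown, and completing it counts as a click. A minimal sketch, with an illustrative two-second dwell and hypothetical names (not the actual iPoint application code):

```python
import time

class DwellButton:
    """A dwell-to-activate target: hovering for DWELL seconds triggers it."""

    DWELL = 2.0  # seconds the pointer must remain on the target (assumption)

    def __init__(self, on_activate):
        self.on_activate = on_activate
        self._enter_time = None

    def update(self, pointer_inside: bool) -> float:
        """Call once per frame; returns countdown progress (0..1) for rendering."""
        if not pointer_inside:
            self._enter_time = None          # leaving the target resets the countdown
            return 0.0
        now = time.time()
        if self._enter_time is None:
            self._enter_time = now
        progress = (now - self._enter_time) / self.DWELL
        if progress >= 1.0:
            self._enter_time = None
            self.on_activate()               # the gesture equivalent of a mouse click
            return 1.0
        return progress
```

The progress value drives the visual countdown, so users can see an activation coming and move away in time to cancel it.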
Pointing Ergonomics
Even with the time-delayed activation functionality, we still faced ergonomic problems. It was uncomfortable and tiring to navigate the interface, even for just a few minutes at a time. We realised that we had spent a lot of time thinking about trigger mechanisms, but had not yet looked closely at the specific ergonomics of pointing with hand and arm. After a few tests we found that although users managed to navigate traditional GUIs with a horizontal menu structure (Fig 9), keeping arm and hand on a small target for the necessary 2 seconds until the link activated added a fair amount of discomfort. The second generation of tests was conducted with a checkerboard interface (Fig 10), which not only offered larger hit areas, but also used the entire screen space as the interactive area. This made the interaction more fluid and users felt more in control. At the same time, however, the absence of a resting area pressured users into making a quick selection, which made the interaction uneasy. The current interface and best solution to date is a circular GUI (Fig 8, Fig 11), which follows the natural ergonomics of pointing with the hand, since it supports the circular movement of the wrist. The movements feel natural and can be reduced to an absolute minimum; the interface can be navigated with only the index finger moving. With regard to the trigger area, we found that large interactive areas and roll-over states added to the experience of control and ease, whilst the trigger areas or buttons needed to be separated from the active areas, so that the trigger timer would only be activated if users intended to do so.
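The circular layout also simplifies hit testing: the pointer position is converted to an angle around the screen centre, a small central region acts as a neutral resting area, and the dwell timer is only armed beyond a larger trigger radius. A sketch under these assumptions (radii and item count are illustrative):

```python
import math

REST_RADIUS = 0.15      # normalised radius of the central resting area (assumption)
TRIGGER_RADIUS = 0.35   # radius beyond which the dwell timer may start (assumption)

def hit_test(x, y, num_items, centre=(0.5, 0.5)):
    """Map a normalised pointer position to (item_index, in_trigger_zone)."""
    dx, dy = x - centre[0], y - centre[1]
    r = math.hypot(dx, dy)
    if r < REST_RADIUS:
        return None, False                      # resting area: nothing highlighted
    angle = math.atan2(dy, dx) % (2 * math.pi)  # 0..2*pi, measured around the centre
    item = int(angle / (2 * math.pi / num_items))
    return item, r >= TRIGGER_RADIUS

# Example: pointer to the right of the centre, outside the trigger radius
print(hit_test(0.95, 0.5, num_items=6))  # -> (0, True)
```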
The Virtual Chemistry Lab Table (VCLT)
A few years ago Formula D interactive started the development of the Virtual Chemistry Lab Table (VCLT), a tool to help learners understand the basics of chemistry through hands-on interaction. Once connected to a standard computer and screen, and once the software is installed, the Virtual Chemistry Lab Table allows learners to arrange physical objects on a surface in order to control the software (Fig 12).
Learners then explore digitally simulated experiments using an array of tools similar to the ones in a real chemistry lab, by placing objects in proximity to one another. A simple content management system expands the functionality of the lab from a simulator to a documentation and presentation tool. Here, learners capture their own knowledge or test results (from real lab experience or secondary research) and embed content, such as images, video or text, within the application. They can then share and discuss their findings with peers in a classroom setting. The VCLT is built on top of the reacTIVision platform (http://reactivision.sourceforge.net/) with custom-built hardware and a content development system.
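reacTIVision tracks the fiducial markers attached to the physical objects and reports their positions as TUIO messages over OSC (UDP port 3333 by default). Below is a minimal sketch of how such messages could drive a simulated reaction when two tagged objects are brought close together; the fiducial-to-chemical mapping, the distance threshold and the python-osc listener are illustrative assumptions, not the VCLT's actual implementation:

```python
import math
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

CHEMICALS = {4: "HCl", 7: "NaOH"}   # fiducial id -> substance (assumption)
positions = {}                       # fiducial id -> (x, y) in normalised coordinates

def check_reactions():
    """Report any pair of tagged chemicals placed close enough to 'react'."""
    ids = [i for i in positions if i in CHEMICALS]
    for a in ids:
        for b in ids:
            if a < b and math.dist(positions[a], positions[b]) < 0.1:
                print(f"Simulate reaction: {CHEMICALS[a]} + {CHEMICALS[b]}")

def on_2dobj(address, *args):
    # TUIO 1.1 "set" messages carry: "set", session_id, fiducial_id, x, y, angle, ...
    if args and args[0] == "set":
        _, _, fid, x, y = args[:5]
        positions[fid] = (x, y)
        check_reactions()

dispatcher = Dispatcher()
dispatcher.map("/tuio/2Dobj", on_2dobj)
BlockingOSCUDPServer(("0.0.0.0", 3333), dispatcher).serve_forever()
```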
Lessons learned
A virtual chemistry lab is a great application for tangible interaction through reference objects. Users can easily transfer the mental model from a real chemistry lab to the simulated one. The interactions are similar yet simplified.
Initial considerations for the features of the lab made provision only for various content modules with simulated experiments. Various discussions with teachers and students suggested, however, that the lab needed to support user-generated content through a simple CMS (Fig 14). At this point the lab would also become a documentation centre and database, which could eventually be shared with other VCLTs online.
The VCLT was originally designed as a science centre exhibit. The design included a much larger table with a rear-projected 50” screen. The content animations were projected around the objects, which were laid on top of the glass surface.
Since many South African schools would not be able to afford a large table, we developed a low-cost version which could plug into existing hardware and was more suited to a typical classroom scenario. We then experimented with a prototype which separated input and output (Fig 13). We expected problems with the interaction and were surprised when young learners from different cultural backgrounds acted very confidently when operating the non-traditional input device in combination with a traditional output device (a screen).
For our initial user testing (Fig 15) we selected a girls' junior school in Wynberg, a suburb of Cape Town. The pupils at the school come from culturally diverse backgrounds. We observed 8 girls in groups of two for 15 minutes each, plus a group session with the teacher. Approximately 50% of the testing group had access to a computer at home.
The observations confirmed that:
1. The interface was easily understood without instructions by the vast majority of the users, including users who had no regular access to computers.
2. Only one girl had to be informed of the possibility of combining different objects, after she had placed single objects onto the lab one by one.
3. Most users were able to memorise the experiments they tried out.
4. Observations of user interaction confirmed that the VCLT invites collaboration since the tangible interface objects can easily be shared and jointly operated.
In various interviews the users commented positively on the tool:
1. They liked that they could “see what they are doing” through the strong representation of the current active state in the interface (objects/ingredients on the table).
2. They highlighted that they enjoyed the hands-on approach, as opposed to just listening to the teacher.
Conclusion
A recurring dilemma in designing non-traditional interfaces is that the designer's attempt to support more meaningful and user-friendly interaction through more “natural” interaction frameworks (like gesture, locomotion or tangible interfaces) is prone to failure when the user's conceptual model of how technology works is predominantly based on the traditional interfaces they have been exposed to.
These preconceptions can only be influenced if designers use strategies that make provision for the context of the user and the environment in which the tool is deployed. Through project work on the Virtual Chemistry Lab Table it became apparent that combining traditional and non-traditional interface elements can be a good strategy: it offers the user a familiar environment on the one hand, while building up the confidence needed to engage with new forms of interaction on the other.
Beyond these design considerations and strategies, Formula D interactive's project experience indicates that non-traditional interfaces such as the Virtual Chemistry Lab Table improve co-located learning and collaboration in the classroom and make interactive digital technology more accessible to audiences with little or no experience with traditional interfaces.
References
[1] Don't Click It! – www.dontclick.it
[2] Kortum, P. (ed.): HCI Beyond the GUI – Design for Haptic, Speech, Olfactory and Other Nontraditional Interfaces. Elsevier Inc., Burlington (2008)
[3] Strauss, W., Fleischmann, M., et al.: Information Jukebox – A semi-public device for presenting multimedia information content. In: Personal and Ubiquitous Computing 7(3-4), 217-220 (2003)