Digital Exploring

by Fidian Warman, Director of SODA

There’s been a whole lot of coding, soldering and cabinetry since we kicked off Travellers’ Tails in December 2014, and here are a few learnings and tales from our travails and triumphs through the project…

It was a bold and open brief, to which we responded with a range of ambitious options some 18 months ago. The essential requirement was to create a ‘Digital Explorer’ to enable visitors to the partner museums to navigate digitised collections in a visual, non-textual manner. There was also a desire to interrogate ideas of digital empathy, a pretty slippery concept. Given the nature of our assignment we agreed with the partners to take an open, iterative and agile approach to the development of the Digital Explorer. First we met with The National Maritime Museum, The Grant Museum of Zoology, The Horniman Museum and the CEDE (Creating and Exploring Digital Empathy) research team at UCL in London, and travelled up to the Captain Cook Memorial Museum in Whitby, North Yorkshire, and The Hunterian Museum and Art Gallery in Glasgow. The strategy agreed as a result of this consultation included: taking an Agile, iterative, test-driven approach; scaling and refining the project as it tours; focusing on a user interface and user experience that is innovative and ‘sublime’; finding ways to link content in surprising but relevant, informative ways; creating an artistic experience with enjoyable ‘flow’; and creating a system that monitors its own use, providing valuable data for further development.

Pre-Alpha testing in the Queen’s House                           

After several scoping sessions we collectively resolved to test museum visitors’ responses to the central premise of navigating a linked series of images, and to test whether differing interface technologies had much influence on visitors’ choices and experience. Taking this as our starting point we scoped an MVP (‘Minimum Viable Product’, in trendy development lingo) which would present visitors with a random image linked to four others, either by visual coincidences the software found between images or by the metadata associated with those images. We wanted to test the use of gestural interfaces against a more conventional touchscreen interactive.
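
As a rough illustration of the metadata half of that linking idea, here is a minimal sketch in Python; the catalogue, tag names and scoring are hypothetical, not the production code:

```python
import random

# Hypothetical catalogue: image ID -> set of metadata tags.
CATALOGUE = {
    "stubbs_kongouro": {"stubbs", "kangaroo", "oil", "endeavour"},
    "stubbs_dingo":    {"stubbs", "dingo", "oil", "endeavour"},
    "cook_portrait":   {"cook", "portrait", "oil"},
    "endeavour_chart": {"cook", "chart", "endeavour", "pacific"},
    "banks_specimen":  {"banks", "specimen", "endeavour"},
}

def linked_images(current, catalogue, n=4):
    """Rank the other images by how many metadata tags they share."""
    others = [img for img in catalogue if img != current]
    others.sort(key=lambda img: len(catalogue[img] & catalogue[current]),
                reverse=True)
    return others[:n]

start = random.choice(list(CATALOGUE))   # a random starting image
print(start, "->", linked_images(start, CATALOGUE))
```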

In order to do this we created two interactive experiences, one using a touchscreen, the other using Microsoft Kinect to enable visitors to interact through a gestural interface. The two interactives were tested in the Queen’s House in Greenwich next to the Stubbs paintings. Museum volunteers monitored visitors’ experiences and logged them against a session ID shown on screen. They also asked visitors a few simple questions if they were amenable. The findings were that people spent on average 27 seconds with the gesture UI (with many more attempts than the touchscreen) and 23 seconds with the touchscreen UI. Nearly all visitors (especially under-20s) were drawn to and enjoyed the gesture interface, though they often found it harder to master. 9% of visitors needed help with the touchscreen against 43% with the gesture sensors, with over-60s especially needing guidance. Multiple people wanting to interact at the same time was a problem to address with the gestural interface. We couldn’t find any statistical difference between images linked visually or by metadata. Visitors generally did understand they were engaged in linking or connecting related images, but didn’t relate these linked images to the question posed (one question, for example, was ‘What would you collect on your voyage round the world?’).
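
Averages like the 27- and 23-second figures above came out of those volunteer logs; as a minimal sketch of the kind of tally involved (the CSV format here is an assumption for illustration, not the volunteers’ actual log sheets):

```python
import csv
from collections import defaultdict
from statistics import mean

def average_durations(path):
    """Average seconds spent per interface from a session log.

    Assumed (hypothetical) CSV rows: session_id, interface, seconds,
    e.g. "S041,gesture,27".
    """
    samples = defaultdict(list)
    with open(path, newline="") as f:
        for session_id, interface, seconds in csv.reader(f):
            samples[interface].append(float(seconds))
    return {ui: round(mean(times), 1) for ui, times in samples.items()}
```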

Testing the gestural interface at Queen’s House

As the volunteers generally noted a lack of engagement with the question, we concluded this could have been because the questions didn’t give visitors a strong enough sense of purpose, so they lacked the motivation to make choices they felt strongly about; we determined that the questions needed to be more challenging or provocative. We also found through the volunteers’ questioning that, as we might expect, ‘digital empathy’ meant little to most visitors! A few visitors did suggest it meant things like ‘feeling and learning through digital technology’ after using the interactive, which was felt to be a positive understanding of the concept. We did get some nice general feedback like: “Exploration is finding new inspiration”, or “We met new paintings and experienced new technologies”.

Waving in the Grant Museum of Zoology

The results of this initial test phase informed the design of the version to be displayed in the Grant Museum of Zoology. It was felt that we should run with the gesture-based interface, as it was far more engaging, but give immediate feedback to reinforce modes of interaction and select in software the visitor nearest to the screen to avoid confusion in groups. The gesture for ‘click’ was also refined using a ‘Tinder-like’ swipe right or left, as the closed-fist ‘grab to click’ gesture was tricky for some people to master. We felt a springy network of images would enable visitors to see and connect more with their branching journey, and give immediate feedback to again strengthen interaction and entice people to play spontaneously. We also developed more onscreen guidance to help visitors comprehend the activity and guide them through the process. All this had to be contained in a compact shelf-mounted enclosure, as space is tight in the wonderfully jam-packed museum. The Grant team developed provocative questions to help with engagement and, of course, make the exercise more meaningful. At the end of each visitor session a ‘reveal’ showed the metadata associated with the images chosen.
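
Two of those refinements, picking the nearest visitor and the ‘Tinder-like’ swipe, could look something like this sketch (the body-frame fields and thresholds are illustrative assumptions, not the exhibit’s actual Kinect code):

```python
def nearest_body(bodies):
    """Pick the tracked body closest to the sensor (smallest depth, metres)."""
    return min(bodies, key=lambda b: b["spine_z"]) if bodies else None

class SwipeDetector:
    """Report a swipe when a hand travels far enough, fast enough."""
    def __init__(self, min_distance=0.30, max_time=0.5):
        self.min_distance = min_distance   # metres of horizontal travel
        self.max_time = max_time           # seconds allowed for that travel
        self.start = None                  # (x, t) where the movement began

    def update(self, hand_x, t):
        """Feed hand x-position (metres) and time (seconds) each frame."""
        if self.start is None:
            self.start = (hand_x, t)
            return None
        x0, t0 = self.start
        if t - t0 > self.max_time:         # too slow: restart the window
            self.start = (hand_x, t)
            return None
        if hand_x - x0 > self.min_distance:
            self.start = None
            return "right"                 # e.g. keep the image
        if x0 - hand_x > self.min_distance:
            self.start = None
            return "left"                  # e.g. discard the image
        return None
```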

Testing at the Grant Museum

Ship shape and 3D fashion in the Captain Cook Memorial Museum

After reviewing the experience at the Grant, the collective decision was made to abandon gestural interfaces and the connected mesh and create a whole new interactive experience. Even though version 2 at the Grant was felt by many, including Martin from CEDE, to be radically improved and intuitive, we feel the limited feedback we did receive from the Grant indicates it was not an interface that worked for many people in a museum context. In particular, being close to the screen but not touching it, as one would with a touchscreen, was a big issue. A full-body Kinect interface could work much better, especially if the interactive physically constrains people to interact at the correct distance from the screen and Kinect, but we would still have the problem of (especially older) visitors’ reluctance to ‘wave arms’ in front of a screen.

Following on from the Travellers’ Tails version displayed at the Grant Museum, the Cook version was rather more ambitious in terms of the depth of the experience offered, and larger in scale, freed from the very tight size restrictions at the Grant. It attempted to re-engage with the core aims of the project and be an experiential, playful, creative, non-textual exploration of the digital collections. It also dropped the question structure used at the Grant and focused more on free-form exploration.

We felt that a Tangible User Interface (TUI) could well be a preferable option, in that it would be more intuitive, probably less embarrassing to use (especially for older people) and, we think, could really help enable engagement and maybe even that elusive ‘digital empathy’.

The data display metaphor used an ‘island-like’ topography, each island formed of dynamic groupings of icons representing the linked digital images. At the end of the visitor’s journey the interactive would deliver textual feedback about the images that had been collected.
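
We won’t claim this is the exhibit’s actual grouping algorithm, but one plausible way to form such islands, reusing the image-to-tags dictionary shape from the earlier sketch, is to treat images that share a tag as connected and take the connected components:

```python
from collections import defaultdict

def islands(catalogue):
    """Group images into 'islands': connected components of shared tags."""
    parent = {img: img for img in catalogue}

    def find(x):                      # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    by_tag = defaultdict(list)
    for img, tags in catalogue.items():
        for tag in tags:
            by_tag[tag].append(img)
    for members in by_tag.values():   # images sharing a tag get merged
        for other in members[1:]:
            parent[find(other)] = find(members[0])

    groups = defaultdict(list)
    for img in catalogue:
        groups[find(img)].append(img)
    return list(groups.values())

# islands({"a": {"cook"}, "b": {"cook", "oil"}, "c": {"kangaroo"}})
# -> [["a", "b"], ["c"]]
```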

The Explorer at the Captain Cook Memorial Museum

In addition to the changes to the interface and physical installation, the web-based content management and tagging tool, which had already been built, would be fully integrated.

A very large curved 4K screen was used to help enable a more immersive experience. This was housed in a large, curved cabinet designed to reference the hull of a ship.

The first version of the physical controller used a simple plywood CNC’d representation of a ship with two illuminated buttons (red and green, referencing port and starboard ship lighting) and contained a sophisticated motion sensor. This enabled the visitor to tip the ship forward and back to control speed, and roll it to turn left or right. In this way they could navigate to the islands and use the buttons to select or reject the images they found there.
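
The mapping from tilt to motion was roughly of this shape (a sketch only; the dead zone and gains here are illustrative guesses, not the exhibit’s tuned values):

```python
def ship_control(pitch_deg, roll_deg, dead_zone=5.0,
                 max_speed=1.0, max_turn=45.0):
    """Map controller tilt to (speed, turn_rate).

    pitch_deg: tip forward (+) to speed up, back (-) to slow or reverse.
    roll_deg:  roll starboard (+) to turn right, port (-) to turn left.
    """
    def shaped(angle, limit):
        if abs(angle) < dead_zone:              # ignore small wobbles
            return 0.0
        sign = 1.0 if angle > 0 else -1.0
        scale = min(abs(angle) - dead_zone, 30.0) / 30.0
        return sign * limit * scale

    return shaped(pitch_deg, max_speed), shaped(roll_deg, max_turn)
```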

We were unfortunately plagued by technical problems at the Cook. Initially we had problems with an overheating PC, as the small room at the top of the museum which housed the exhibit was so hot in those summer months. We improved the ventilation in the cabinet with extra fans and custom-built a PC. We then encountered some hard-to-trace intermittent faults with the motion sensor; eventually both the one in the ship at the Cook and our test unit failed, so we replaced them with a simpler, less sensitive device which proved totally reliable for the rest of the tour. At this stage we also produced a second version of the ship, this time CNC’d from a solid lump of mahogany: we engaged a 3D product designer to create a model of Captain Cook’s Bark Endeavour from which to machine it. We also linked the ship to the cabinet with the super-tough multicore cable used in robotics applications, and restrained it with chains from its bow, like anchors. The exhibit was repainted from the Cook’s Farrow and Ball yellow to a tough, imposing battleship grey.

The mahogany controller, shaped like a ship

These modifications gave us a robust platform, which we tested for a few weeks back at the NMM after the Cook closed. This was also a good opportunity to review the user interface, and under the direction of the NMM we undertook a number of interface changes and modifications, including machining another ship, lengthening the chains, repainting, and adding signage.

Tough enough for the Hunterian Museum and Art Gallery

It was also requested that we armour the exhibit before its next trip, which was to the Hunterian Museum and Art Gallery in Glasgow. We routed out grooves to take curved 6mm acrylic sheets in front of and behind the screen in order to protect it from knocks. This seemed to do the trick, as the exhibit’s stint in Glasgow passed uneventfully. It was then brought back down to SODA’s studio to be stripped down, checked over and serviced prior to installation at the Horniman.

Horniman Museum

The Horniman repainted it again and modified the cabinet to a more compact form, which works well. Behind the scenes we made some changes to the online tagging system so that the NMM can batch-change metadata more easily.
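
That batch change amounts to something like the sketch below (the endpoint URL and payload shape are invented for illustration; the real tagging tool’s API isn’t public):

```python
import requests

# Hypothetical endpoint for the tagging tool.
TAGGING_API = "https://example.org/tagging/api"

def batch_retag(image_ids, add_tags, remove_tags, session=None):
    """Apply the same tag changes to many images in one request."""
    session = session or requests.Session()
    payload = {
        "ids": list(image_ids),
        "add": sorted(add_tags),
        "remove": sorted(remove_tags),
    }
    response = session.post(f"{TAGGING_API}/batch-tags", json=payload)
    response.raise_for_status()     # fail loudly on HTTP errors
    return response.json()
```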

As mentioned up top, it’s been a journey, a voyage of exploration in itself, which is how it should be on this kind of project. That said, it’s very tricky balancing an experimental R&D project with something that is exhibitable in major museums up and down the country. The old maxim that if you’re not failing sometimes you’re not trying hard enough is cold comfort to visitors expecting a polished exhibit, however much you explain it’s part of a research process. I also think it has been difficult keeping all partners onside throughout the development, which is crucial to the successful functioning of an Agile development process. Even so, it’s been a privilege to work on such an exciting project, and I hope the combination of the physical ship, or another TUI, with a collection depicted in a 3D virtual world has some legacy at the NMM or other institutions.

Testing the version at the National Maritime Museum
