As we sat through a day of usability sessions, this idea kept coming back to me, so I wanted to jot down some quick notes here.
Perception, location, proximity.
I had some new screen designs to run through, but instead of the normal usability session of getting the participant to try the design on a specific device, we tried something new.
We’d already pre-screened for iOS and Android smartphone users. But instead of diving straight in we talked to each one as they arrived, asking questions about their usage of any device they had:
- What devices do you have in the home? (including desktop/laptop and land-line telephones)
- Which ones move around with you and which pretty much stay where they are?
- What’s your primary email checking device?
Only after 10 or so of these conversational probes did we ask them to read the first scenario, tell us which of their devices they would use to perform it, and tell us why. Then we gave them that device and watched them run through the screens, gathering feedback as we went.
Afterwards we repeated the conversation: we read out the next scenario, then had them choose a device to try it on. It didn’t have to be the same one.
Over and over I noticed that their first choice for a given task was often not to use any of their ‘electronic’ devices at all. I felt this came down to perception, location, proximity:
- Perception: Does the task sound hard or complicated? What were previous experiences like?
- Location: Where am I when looking to complete the task?
- Proximity: What device do I have to hand? Is it the “right device” based on perception and location?
The screens were all designed ‘mobile first’, built using ‘responsive design’, and we gave the participants the ability to use (near enough) the same devices they had at home.
I have more notes to go through. Maybe everybody else already “sees” this and it just took me these sessions to figure it out, but it has certainly coloured the way I’m approaching creating experiences across devices now.
I’d love to hear anyone else’s thoughts.