I’ve used eye tracking as part of past projects and, I felt, gotten some interesting results. After some more tweets I added
In response Jared gave 6 important points to consider:
- Heat maps vary with their sensitivity settings. What are the settings?
- ET only measures focal gaze. There’s more to vision than the focal center.
- ET doesn’t take peripheral vision into account. Do you ever see anything out of your peripheral vision?
- ET only records at the recorder’s resolution. Not every point of focal gaze is registered. What’s missing?
- ETs are known to have significant errors in data collection: missed data, false positives.
- ETs are notorious for reporting data differently for different users. Are you comparing apples to apples?
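The first point, about sensitivity settings, is easy to demonstrate. The sketch below is purely illustrative: it generates fake fixation data (no real study data), then renders the same points as a “heat map” under two different smoothing and threshold settings, showing how the settings alone change how much of the page appears “hot”.

```python
# Illustrative sketch only: invented fixation data, not from any real ET session.
import numpy as np

rng = np.random.default_rng(42)

# Synthetic fixations on a 100x100 "page": one dense cluster plus sparse noise.
cluster = rng.normal(loc=30, scale=3, size=(200, 2))
noise = rng.uniform(0, 100, size=(20, 2))
fixations = np.vstack([cluster, noise])

def hot_cells(points, grid=100, smooth=1, threshold=0.5):
    """Bin fixations, smooth with a box filter, count cells above the threshold."""
    counts, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                                  bins=grid, range=[[0, grid], [0, grid]])
    # Box smoothing: average over a (2*smooth+1)^2 neighborhood of each cell.
    k = 2 * smooth + 1
    padded = np.pad(counts, smooth)
    smoothed = sum(padded[i:i + grid, j:j + grid]
                   for i in range(k) for j in range(k)) / k ** 2
    # "Hot" means above a fraction of the map's own peak value.
    return int((smoothed > threshold * smoothed.max()).sum())

# Same raw gaze data, two different sensitivity settings:
tight = hot_cells(fixations, smooth=1, threshold=0.5)   # small, sharp hotspot
loose = hot_cells(fixations, smooth=4, threshold=0.1)   # much larger "hot" area
print(tight, loose)
```

Both maps come from identical data; only the settings differ, which is exactly why “what are the settings?” matters before reading anything into a heat map.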
and ended with
Which, considering the 6 points, is a fantastic point. Now, in the example I was thinking of, we ran the same page template past 5 participants. Each participant saw the template used on different pages of a flow, so I feel the “apples to apples” answer was yes. The blank area was the far right of the page. It was only “looked” at when we removed the graphical borders and reduced the gutters between it and the main content. To me this was expected, as I believed we were consistently seeing banner blindness in effect. To some others in the room it was “eye-opening”.
Maybe the whole attitude of using ET to check if someone “saw” something is wrong. Jared’s second & third points nail this. They saw it, if only in their peripheral vision. Maybe it has more to do with the user assembling the environmental markers available to them and deciding that these particular cues don’t help them in making the map needed to complete what they need to do.
On Twitter Angela Robertson added this to the thread
Which, like most things in our industry, is true. I’ve lost count of how many times I hear “journey maps” in meetings now compared to just 6 months ago.
I’ve never used ET as the sole basis to initiate change or justify a project. We have two fantastic HFI-certified UX/usability professionals who run the Tobii equipment and keep me honest by pairing it with interviews, screenshots, video/audio recordings, and their own notes to give a well-rounded assessment of the user sessions we conduct, both in and out of a lab setting. ET is just one data point of many, open to interpretation, and it should be balanced with your research as a whole.
I think Jared’s 6 points have tweaked my view on ET. I still feel it has value as one tool of many in my toolbox. The same as wireframes, paper mockups, server logs, interviews, or Chalkmark sessions.