Tasks and metrics for the comparison of user interfaces

Bad usability and poor performance limit the use of a user interface, so evaluating a user interface is essential for improving both. In this paper we present a set of tasks and metrics that can be used for the comparison and analysis of photo storage applications. To build the tasks and select the metrics, we combined a literature review with interviews and questionnaires. This ensures that both scientific findings and actual user needs are represented. To evaluate the completeness and adequacy of the task corpus, we tested the photo storage application Google Photos in a main study with 20 participants. The results show that our set of tasks and metrics allows a comparison of the photo storage application Google Photos: it provides deeper insight into users' opinions and reveals the weaknesses of the software. Based on this set, researchers can evaluate photo storage applications on various user interfaces and compare them. The study, and therefore this paper, was motivated by the fact that there are hardly any standardized criteria that allow a neutral comparison of UIs in general. We focused on the UIs of photo storage applications and examined the questions of what such a task corpus could look like and which metrics are necessary to determine the aforementioned performance.

Members: Daniel Schmaderer, Anna-Maria Auer

Keywords: user interfaces, user study, metrics, tasks, photo sorting

Goals

  • find tasks and metrics that scientists use to compare or evaluate UIs, especially for photo sorting
  • find out what real-life users actually need when using photo sorting tools (What kind of tools do they use? What are they missing? What do users do when using photo sorting tools? Which demands do users have on such tools? Do users also sort physically, and if so, how?)
  • build a set of tasks and metrics that is adaptable to all kinds of UIs that support photo sorting
  • evaluate this set in a user study
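As an illustration of how such metrics could be aggregated, the following sketch computes two standard usability measures, task success rate and mean time-on-task, from per-participant session logs. The task names, numbers, and the `summarize` helper are hypothetical examples, not data or code from the study.

```python
from statistics import mean

# Hypothetical session logs: (task_id, completed, seconds).
# Values are illustrative only, not results from the study.
sessions = [
    ("find_photo", True, 12.4),
    ("find_photo", True, 9.8),
    ("find_photo", False, 30.0),
    ("create_album", True, 21.5),
    ("create_album", True, 18.2),
]

def summarize(logs):
    """Aggregate success rate and mean time-on-task per task."""
    by_task = {}
    for task, done, secs in logs:
        by_task.setdefault(task, []).append((done, secs))
    return {
        task: {
            "success_rate": sum(1 for done, _ in runs if done) / len(runs),
            "mean_time_s": mean(secs for _, secs in runs),
        }
        for task, runs in by_task.items()
    }

print(summarize(sessions))
```

Keeping the aggregation separate from the raw logs like this makes it easy to apply the same metric definitions to any UI under comparison.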

Background

This study is motivated by the fact that there is no universal task corpus for the comparison and evaluation of user interfaces. Researchers generate their own tasks when conducting an evaluation. This approach risks bias towards the tool the researchers developed: their focus is obviously on proving the effectiveness of their own tool, so they are likely to choose tasks that play to its strengths. Our task corpus should reduce this risk and provide an objective way of testing and evaluating UIs. The long-term aim is a standardized way of testing UIs, and this study can be seen as a first step towards it.

Updates

Results and current work (2019-03-01)

First results and current work (more...)


Pre-Study (2019-01-10)

Pre-Study: Design and Results (more...)


Introduction, Background, Approach & Goals (2018-11-21)

This project is about the topic "Tasks and metrics for the comparison of user interfaces". (more...)


Further Resources