Use cases and implementation considerations (2018-12-12)

Tagged as: blog, tangibles, single, multiple, multi touch, implementation, literature
Group: J

In this blog entry we have a look at possible use cases for the study as well as the technical approach.

Use Cases and Implementation

Since our last blog post we have had two more meetings with Jürgen Hahn to clarify our topic. We started by gathering literature to get a good overview of the current state of tangibles on multitouch tables. Although we found many papers about multitouch tables and tangibles in general, there was no paper on our main focus, the comparison of single vs. multiple tangibles. The only finding that directly relates to our topic is that the use of multiple tangibles can confuse the user, especially once there are five to ten or more tangibles on the table. With that many tangibles, users start to forget their meaning or function.

In the next step we focused our literature review on scenarios that could serve as our use case. We found several studies in which tangibles were used in different contexts, most of them with multiple tangibles. For example, one study explored the synergy between live music performance and tabletop tangible interfaces, using tangibles as musical instruments: by connecting tangibles you could create a music composition. We also brainstormed possible use cases ourselves; they are all listed in our Seafile log.txt. Games, DJ tables and board games, to name a few, would definitely be interesting to implement and test, but as we are limited in time, most of these ideas could not be realized on short notice. Regarding implementation, we also investigated possible frameworks for our use cases; TUIO, OpenCV and reacTIVision made the shortlist.

Focus on literature review and picture sorting as use cases

At the meeting with Jürgen Hahn on 28.11.2018 we discussed literature review with zooming, marking, deleting and copying interactions as a possible use case. We also found some papers about interacting with pictures on multitouch tables, which led us to a picture sorting use case. Literature review and picture sorting are therefore our two main use cases at this point. Since we plan to implement only one of them, we have to choose. Before making this decision we will develop both use cases to the point where each has tasks for single and multiple tangibles.

Interviews for picture sorting use case

We found hardly any picture sorting studies suitable for our purpose. We therefore interviewed five participants who regularly sort pictures to learn about their sorting process. By asking them how they sort pictures, we got input on possible features for our use case. We also explained to them the purpose of sorting pictures on a multitouch table and how we want to implement it. The interviews revealed some interesting aspects we had not thought of. Here is a summary of the features for our picture sorting use case suggested by the participants:

- Open/create groups/folders with names
- Enlarge and rotate pictures
- Copy and delete pictures
- Search function for existing groups
- Undo actions like deleting a picture
- Select more than one picture to sort them more efficiently
- Photo collage with different grids
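
To make the grouping, deleting and undo features a bit more concrete, here is a rough sketch of a possible data model for the picture sorting app. It is purely illustrative: the class PictureGroup and its methods are our own names under assumed requirements, not part of any framework we will use.

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Illustrative sketch only: a named picture group with an undoable delete,
// mirroring the "groups with names", "delete" and "undo" features above.
public class PictureGroup {
    private final String name;
    private final List<String> pictures = new ArrayList<>();          // picture file names
    private final Deque<String> recentlyDeleted = new ArrayDeque<>(); // undo stack

    public PictureGroup(String name) { this.name = name; }

    public void add(String picture) { pictures.add(picture); }

    public void delete(String picture) {
        if (pictures.remove(picture)) {
            recentlyDeleted.push(picture);   // remember for undo
        }
    }

    public void undoDelete() {
        if (!recentlyDeleted.isEmpty()) {
            pictures.add(recentlyDeleted.pop());
        }
    }

    public String getName() { return name; }
    public List<String> getPictures() { return new ArrayList<>(pictures); }
}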

Technical Considerations

We researched technical options for implementing the tasks we want to be conducted in our study. For this we evaluated software frameworks compatible with the given hardware setup. The technical conditions are a multitouch table (MTT) running Linux (Debian), on which the operating system and our programs are displayed. Additionally, the screen of the table is filmed from below by a camera. We therefore need object detection to track the tangibles as well as a protocol to communicate their state to our application. We found two options to accomplish this:

The first option is to use “OpenCV” for image recognition and transfer the state of the tangibles via the “TUIO” protocol to our Python app. For this solution we would probably have had to look into programming with C++ and write a lot of glue code to connect all components. The second option, and the one we will go with, is to use the framework “reacTIVision”, which has been used for tangible applications before and uses “TUIO” internally. This option seems likely to minimize the coding effort and prevent unnecessary technical problems.

reacTIVision can be used with multiple programming languages, such as Java, via TUIO client libraries. For this approach the tangibles must carry fiducial markers, which reacTIVision recognizes in the camera image.
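
To get a feel for this approach, here is a minimal sketch of how our app could receive tangible events, assuming the TUIO 1.1 Java reference client (TUIO.jar); the class name TangibleTracker is our own and only for illustration. reacTIVision reports each fiducial-tagged tangible as a TuioObject with a symbol ID, a normalized position and a rotation angle.

import TUIO.*;

// Minimal TUIO listener sketch (hypothetical class name), assuming the
// TUIO 1.1 Java reference client. reacTIVision runs as a separate process
// and sends the tangible state to this client over the TUIO protocol.
public class TangibleTracker implements TuioListener {

    public void addTuioObject(TuioObject tobj) {
        System.out.println("Tangible " + tobj.getSymbolID() + " placed at "
                + tobj.getX() + ", " + tobj.getY());
    }

    public void updateTuioObject(TuioObject tobj) {
        System.out.println("Tangible " + tobj.getSymbolID() + " moved to "
                + tobj.getX() + ", " + tobj.getY()
                + " (angle " + tobj.getAngle() + " rad)");
    }

    public void removeTuioObject(TuioObject tobj) {
        System.out.println("Tangible " + tobj.getSymbolID() + " removed");
    }

    // Finger touches arrive as TuioCursors; not needed for this sketch.
    public void addTuioCursor(TuioCursor tcur) {}
    public void updateTuioCursor(TuioCursor tcur) {}
    public void removeTuioCursor(TuioCursor tcur) {}

    // Untagged blobs (TUIO 1.1); ignored here.
    public void addTuioBlob(TuioBlob tblb) {}
    public void updateTuioBlob(TuioBlob tblb) {}
    public void removeTuioBlob(TuioBlob tblb) {}

    // Called once per frame after all add/update/remove events.
    public void refresh(TuioTime frameTime) {}

    public static void main(String[] args) {
        TuioClient client = new TuioClient();       // listens on UDP port 3333 by default
        client.addTuioListener(new TangibleTracker());
        client.connect();                           // start receiving reacTIVision messages
    }
}

Since reacTIVision itself handles the camera image and marker detection and only sends these messages over UDP, the application code on our side should stay comparatively small.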

Outlook

Our next steps will be to make a final decision on our task field and refine its tasks in detail, as well as to design our app and estimate the programming effort. The refined tasks will then be evaluated in a focus group to reveal possible obstacles.