Based on our results, the technological taxonomy was adapted by adding Desktop Computer UI, subdivided into Mouse and Keyboard, to the class 2D. The class 3D UI was expanded by the subcategories Presenter, Controller, Handle, Stylus and Handheld Clicker. Hand Motions and Face Recognition were added to Natural UI. Furthermore, the category Combinations was divided into the subcategories Multi-View UI and Multiple Interfaces, the latter consisting of the subcategories Multiple Classes and Same Class. Finally, the class Adaptive was expanded by the subcategory EMG (Electromyography) Interface.
  
{{ :lehre:ws18:fsm_18ws:group_g:results_ar_user_interface.png?600 |}}
Figure 3: Types of AR User Interfaces (aggregated from 2015 - 2017) based on our Taxonomy
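For a quick textual overview, the adapted branches of the technological taxonomy can also be written down as a nested structure. The following is only a minimal sketch (in Python), listing just the classes and subcategories named in the paragraph above; the complete taxonomy in Figure 3 contains further classes and levels.

<code python>
# Sketch of the adapted parts of the technological taxonomy (cf. Figure 3).
# Only the classes and subcategories named in the text above are included.
adapted_taxonomy = {
    "2D": {"Desktop Computer UI": ["Mouse", "Keyboard"]},
    "3D UI": ["Presenter", "Controller", "Handle", "Stylus", "Handheld Clicker"],
    "Natural UI": ["Hand Motions", "Face Recognition"],
    "Combinations": {
        "Multi-View UI": [],
        "Multiple Interfaces": ["Multiple Classes", "Same Class"],
    },
    "Adaptive": ["EMG (Electromyography) Interface"],
}
</code>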
  
The combination of touch and body motion (7, 5.18%) occurred most, whereas other combinations occurred once or twice in the analysis.
  
===== Results of Evaluation Structure =====

==== General Results ====
In general, 152 evaluations were conducted in 135 publications; most publications (101, 78.81%) included one evaluation and 22 (16.3%) publications included two to three evaluations. Regarding the focus, 83 publications conducted solely user-based evaluations, 36 publications solely system-based ones and 3 publications included both evaluation types. 61.48% of the AR-related papers conducted a user-based evaluation. Most evaluations were conducted in laboratory environments (96, 63.16%), in contrast to final usage contexts (field study, 18, 11.84%); 2 (1.32%) publications conducted evaluations in both environments. Preliminary and pilot studies were included in 26 (19.3%) publications. 51 publications used a within-subject experiment design and 17 a between-subject design.
==== Participants ====
85 (69.11%) publications conducted an evaluation involving participants, 37 (29.84%) publications conducted an evaluation without participants, and in two (1.61%) publications evaluations were conducted both with and without participants. In total, 4567 participants were involved in all evaluations. The number of participants was mentioned in 83 (95.40%) publications, with values ranging from two to 1000 participants per evaluation. Five publications conducted evaluations with more than 100 participants (107, 169 [46], 683, 865 and 1000). The three evaluations with the highest numbers of participants together accounted for approximately 55.79% of all participants.
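The share of the three largest evaluations can be recomputed from the numbers given above. A minimal sketch of this calculation, assuming the total of 4567 participants and the three largest evaluation sizes reported in the text:

<code python>
# Rough cross-check of the participant figures reported above.
total_participants = 4567          # total across all evaluations (from the text)
largest_three = [683, 865, 1000]   # the three largest evaluations (from the text)

share = sum(largest_three) / total_participants
print(f"Top three evaluations: {sum(largest_three)} participants "
      f"({share:.2%} of all participants)")   # -> approximately 55.79%
</code>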
The average age of the participants was mentioned in 54 evaluations (47.06% of the publications involving participants). 74% of the participants in our analysis had a mean age of less than 30 years (the sum of the first three ranges below). We split the average age into the following ranges:
  * 0 - 9 years: 3.7%
  * 10 - 19 years: 9.26%
  * 20 - 29 years: 61.11%
  * 30 - 39 years: 16.67%
  * 40 - 59 years: 5.56%
  * >= 60 years: 3.7%
The gender distribution was mentioned in 54 (62.07%) publications. In total, 1516 (48.11%) participants were male and 1630 (51.73%) were female (cf. Table 4). The gender Other was mentioned in three publications with a total of five (0.16%) participants. For this reason, the gender Other was added to the taxonomy.
{{ :lehre:ws18:fsm_18ws:group_g:results_gender.png?400 |}}
Table 4: Percentage of Gender
Regarding the participants’ profession, 52.08% came from a university context, 16.67% from the general public and 31.25% from other sources. In 30 evaluations (38.46%) the profession of the participants was not explicitly mentioned.
==== Results of Evaluation Areas ====
We classified the evaluations according to the areas from Swan II and Gabbard (2005), Dünser and Billinghurst (2005) and Dünser et al. (2008). An evaluation can be represented by one or more areas. As we additionally analyzed system-based publications, we added the category System Technology to the exclusively user-based approach from Swan II and Gabbard (2005) and Dünser et al. (2008). Table 5 shows the adapted evaluation areas with references, examples from the publication analysis and the percentage of all analyzed areas.
{{ :lehre:ws18:fsm_18ws:group_g:results_evaluation_area.png?400 |}}
Table 5: Evaluation Areas adapted from Dünser and Billinghurst (2005), Dünser et al. (2008), Swan II and Gabbard (2005)
Furthermore, combinations of areas were analyzed. The combinations of User Performance and Usability (12, 31.58%) and User Performance and Perception (10, 26.31%) occurred most. Other combinations occurred one to three times.
===== Results of Evaluation Methods =====
The distribution of evaluation methods was as follows:
  * Qualitative Analysis: 33.33%
  * Technological Measurements: 29%
  * Subjective Measurements: 16.67%
  * Objective Measurements: 16.33%
  * Usability Evaluation: 2.33%
  * Informal Evaluations: 2.33%
We added the category Technological Measurements to our taxonomy, as we included system-based evaluations next to user-based ones. Qualitative Analysis was the preferred evaluation method, followed by Technological Measurements. Subjective and Objective Measurements occurred in similar quantities, and usability and informal evaluation methods were used in very few publications.
16 different combinations of evaluation methods occurred in the analyzed publications. The combination of Subjective Measurements and Qualitative Analysis occurred most (15, 24.59%), followed by Objective Measurements and Qualitative Analysis (13, 21.31%) as well as Objective Measurements and Subjective Measurements (11, 18.03%). Other frequencies of combinations ranged from 1 to 4.
The following Figures 4 to 9 show the results of each evaluation method in detail:
{{ :lehre:ws18:fsm_18ws:group_g:results_qualitative_analysis.png?400 |}}
Figure 4: Qualitative Evaluation Techniques (aggregated from 2015 - 2017) based on our Taxonomy

{{ :lehre:ws18:fsm_18ws:group_g:results_techn_measurement.png?400 |}}
Figure 5: Technological Measurements (aggregated from 2015 - 2017) based on our Taxonomy

{{ :lehre:ws18:fsm_18ws:group_g:results_subj_measurement.png?400 |}}
Figure 6: Subjective Measurements (aggregated from 2015 - 2017) based on our Taxonomy

{{ :lehre:ws18:fsm_18ws:group_g:results_obj_measurement.png?400 |}}
Figure 7: Objective Measurements (aggregated from 2015 - 2017) based on our Taxonomy

{{ :lehre:ws18:fsm_18ws:group_g:results_usability_ev.png?400 |}}
Figure 8: Usability Evaluation Techniques (aggregated from 2015 - 2017) based on our Taxonomy

{{ :lehre:ws18:fsm_18ws:group_g:results_informal_ev.png?400 |}}
Figure 9: Informal Evaluation Techniques (aggregated from 2015 - 2017) based on our Taxonomy
===== Combination of Evaluation Areas and Methods =====
Due to inconsistencies in the classification, two publications had to be excluded from this comparison. In total, 143 evaluations were analyzed and 57 different combinations occurred. The frequency of occurrence ranged from 1 to 48. The combination of the evaluation area System Technology with the evaluation method Technological Measurements occurred 48 times (33.57%), followed by the combination of User Performance and Objective Measurements with 7 occurrences (4.9%). Unique combinations appeared 36 times in the analysis.
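As a small sketch of how the reported shares follow from the raw frequencies (assuming, as stated above, 143 analyzed evaluations and the counts from the text):

<code python>
# Shares of the two most frequent area/method combinations (counts from the text).
analyzed_evaluations = 143
combinations = {
    ("System Technology", "Technological Measurements"): 48,
    ("User Performance", "Objective Measurements"): 7,
}
for (area, method), count in combinations.items():
    print(f"{area} x {method}: {count} ({count / analyzed_evaluations:.2%})")
# -> approximately 33.57% and 4.90%
</code>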
===== Limitations & Conclusions =====
The proposed taxonomies represent a collection of results from various literature-based sources that were set in relation to each other by combining them in hierarchical classifications. Our results are not representative of the total number of AR-related publications available in the literature, but were collected solely from the analysis of our limited publication corpus. As with the exclusion criteria for publications that did not correspond to our definition (described in the section Methodology), the venue proceedings and the time interval were selected subjectively, but based on approaches or succeeding works from the literature.
In general, our results are influenced by multiple subjective judgments and external factors, but they are nevertheless founded on various methodologies from the literature, so that we were able to draw comparisons between the results. Our analysis of 135 state-of-the-art AR publications has shown the generalizability of our taxonomies, as we were able to classify our entire publication corpus. Therefore, our taxonomies can be applied to any type of AR publication that contains at least one technological aspect, whether it includes an evaluation or not.
===== Outlook and next Steps =====
We have just peer-reviewed papers from other groups and now intend to retrieve the feedback on our own paper. We hope to receive constructive criticism that we can discuss in our team in order to improve our paper.
The next steps are writing the rebuttal, improving our paper and preparing the final presentation.
See you in our last blog article in two weeks :)
===== References =====