Differences

This page shows the differences between two versions.

lehre:ws18:fsm_18ws:group_d [21.03.2019 18:32] Julia Sageder
lehre:ws18:fsm_18ws:group_d [21.03.2019 18:48] (current) – sources attached – Julia Sageder
Line 5:
 members_            : Julia Sageder, Ariane Demleitner, Oliver Irlbacher
 keywords_           : ranking, Condorcet, Schulze method, Likert, evaluation, Computational Social Choice, COMSOC, comparison, scales, voting, multiple ranking, versus ranking, pairwise ranking
-photo_img           : :lehre:ws18:fsm_18ws:ranking_voting_group_D.png?nolink&150 | Ranking candidates
+photo_img           : :lehre:ws18:fsm_18ws:ranking_voting_group_D.png?nolink&250 | Ranking candidates
 shortdescription    : Likert-type scales are a popular tool in questionnaires and evaluations for gathering reviews and opinions from participants. The need for alternatives to Likert-type scales has arisen from scientifically ambiguous guidelines for their construction on the one hand, and from repeated criticism of incorrect statistical analysis of Likert-type scales on the other. Within the evaluation process, several objects of investigation can be ranked and thus rated via their assigned rank. Do ranking methods produce the same results/winners as Likert-type scales? A preliminary study (18 German and Colombian participants) and a main study (24 participants) were conducted to investigate this question. The results of the explorative pre-study led us to the design of the ranking method for the main study. Our findings from the main study show that multiple ranking scales (evaluated with the Schulze method) yield the same winner as Likert-type scales when determining a ranking winner. It can therefore be assumed that multiple ranking scales are an alternative to the commonly used Likert-type scales. Due to a flaw in our study design, we cannot prove that participants are faster with multiple ranking scales than with Likert-type scales; further studies will have to answer this question.
  
Line 17:
 ----
  
-==== Background ====
+==== Background (outdated) ====
  
 When ranking a set of candidates, many ranking methods are available to determine which candidate wins the vote and therefore takes the highest rank. For example, when comparing websites (A, B, C, D, E), we want to select the “best” website. What counts as “best” can be specified by multiple criteria that must be assessed. This can be done by evaluating and comparing presentational forms of the questionnaire (e.g. the design or the complexity of the websites) or by evaluating and comparing calculation methods for computing the ranking outcome. Each vote for the “best” subject, e.g. a website, is a subjective decision. By choosing suitable types, sequences, or wordings of questions and options, the ranking is intended to move closer to objectivity. To identify salient features and to approach the research questions step by step, a custom questionnaire structure has to be designed and verified by conducting one or more pre-studies and a main study. As a result, the “best” ranking questionnaire structure is intended to be the one that most closely matches a set of expected values.
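
As an illustration of the kind of calculation method discussed above, the following is a minimal sketch of the Schulze method (Schulze 2003) that the study used to evaluate multiple ranking scales. The candidate names and ballots are hypothetical examples, not data from the study.

<code python>
# Minimal sketch of the Schulze method (Schulze 2003).
# Ballots are lists of candidates ordered from most to least preferred;
# the example data below is purely illustrative.

def schulze_winners(candidates, ballots):
    """Return all Schulze winners among `candidates` for ranked `ballots`."""
    # d[a][b]: number of voters who strictly prefer a over b.
    d = {a: {b: 0 for b in candidates} for a in candidates}
    for ballot in ballots:
        for i, a in enumerate(ballot):
            for b in ballot[i + 1:]:
                d[a][b] += 1

    # p[a][b]: strength of the strongest path from a to b, where a path's
    # strength is the weight of its weakest pairwise defeat.
    p = {a: {b: d[a][b] if d[a][b] > d[b][a] else 0 for b in candidates}
         for a in candidates}
    for i in candidates:  # Floyd-Warshall-style relaxation over all candidates
        for j in candidates:
            if j == i:
                continue
            for k in candidates:
                if k != i and k != j:
                    p[j][k] = max(p[j][k], min(p[j][i], p[i][k]))

    # A candidate wins if no other candidate reaches it via a stronger path.
    return [a for a in candidates
            if all(p[a][b] >= p[b][a] for b in candidates if b != a)]

# Example: five websites A-E ranked by three voters.
ballots = [["A", "B", "C", "D", "E"],
           ["B", "A", "D", "C", "E"],
           ["A", "C", "B", "E", "D"]]
print(schulze_winners(list("ABCDE"), ballots))  # -> ['A']
</code>

The first pass counts pairwise preferences; the second computes the strongest paths between all pairs of candidates. A Condorcet winner (here website A, which is preferred pairwise over every other site) is therefore always the unique Schulze winner.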
  
  
-==== Goals ====
+==== Goals (outdated) ====
  
 <WRAP center round todo 60%>
Line 43:
 ----
  
-==== Further Resources ====
+==== Further Resources and References ====
  
 Image source: https://upload.wikimedia.org/wikipedia/commons/thumb/1/18/Preferential_ballot.svg/220px-Preferential_ballot.svg.png
 +  * Felix Brandt, Vincent Conitzer, Ulle Endriss, and Jerome Lang (Eds.). 2016. Handbook of Computational Social Choice. Cambridge University Press, New York.
 +  * Yann Chevaleyre, Ulle Endriss, Jerome Lang, and Nicolas Maudet. 2007. A Short Introduction to Computational Social Choice. SOFSEM 2007: Theory and Practice of Computer Science. Lecture Notes in Computer Science 4362 (2007).
 +  * Paul M. Fitts. 1954.  The information capacity of the human motor system in controlling the amplitude of movement. Journal of Experimental Psychology 47, 6 (1954), 381–391.
 +  * Fajwel Fogel, Alexandre d’Aspremont, and Milan Vojnovic. 2016. Spectral Ranking using Seriation. Journal of Machine Learning Research 17 (2016), 1–45.
 +  * Maurits Kaptein, Clifford Nass, and Panos Markopoulos. 2010.  Powerful and Consistent Analysis of Likert-Type Rating Scales. CHI 2010 (2010), 2391–2394.
 +  * Rensis Likert. 1932. A Technique for the Measurement of Attitudes. Archives of Psychology 22 (June 1932), 5–55.
 +  * N. Menold and K. Bogner. 2016. Design of Rating Scales in Questionnaires. GESIS Survey Guidelines. GESIS - Leibniz Institute for the Social Sciences, Mannheim. https://doi.org/10.15465/gesis-sg_en_015
 +  * Judy Robertson. 2012. Likert-type Scales, Statistical Methods, and Effect Size. Commun. ACM 55, 5 (May 2012). https://doi.org/10.1145/2160718.2160721
 +  * Joerg Rothe, Dorothea Baumeister, Claudia Lindner, and Irene Rothe. 2012. Einfuehrung in Computational Social Choice. Individuelle Strategien und kollektive Entscheidungen beim Spielen, Waehlen und Teilen. Spektrum Akademischer Verlag Heidelberg.
 +  * Markus Schulze. 2003. A new monotonic and clone-independent single-winner election method. Voting Matters 17 (2003), 9–19.
 +  * Markus Schulze. 2018.  The Schulze Method of Voting. CoRR abs/1804.02973 (2018).  http://arxiv.org/abs/1804.02973
 +  * Kerstin Voelkl and Christoph Korb. 2017. Deskriptive Statistik - Eine Einführung für Politikwissenschaftlerinnen und Politikwissenschaftler (1st ed.). Springer-Verlag, Berlin Heidelberg New York.