----
  
==== Background (outdated) ====
  
When ranking a set of candidates, there are many ranking methods for determining which candidate wins the vote and therefore receives the highest rank. For example, when comparing websites (A, B, C, D, E), we want to select the "best" website. What counts as "best" is defined by multiple criteria that must be assessed. This can be done by evaluating and comparing presentational forms of the questionnaire (e.g. the design or the complexity of the websites) or by evaluating and comparing calculation methods for computing the ranking outcome. Each vote for the "best" subject, e.g. a website, is a subjective decision. By choosing suitable types, sequences, or wordings of questions and answer options, the ranking is intended to move closer to objectivity. To identify salient features and to gradually approach the research questions, a custom questionnaire structure has to be designed and verified by conducting one or more pre-studies and a main study. As a result, the "best" ranking questionnaire structure is intended to be the one that most closely matches a set of expectation values.
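
One of the ranking methods covered by the references below is the Schulze method (Schulze 2003, 2018), which selects a winner from ranked ballots via strongest paths in the pairwise-preference graph. The following Python sketch is only an illustration, not part of this project: the website names and ballots are invented, and it assumes every ballot ranks all candidates.

<code python>
# Minimal sketch of the Schulze method; ballots are full rankings
# from most to least preferred.

def schulze_ranking(candidates, ballots):
    # d[a][b]: number of voters who strictly prefer a to b
    d = {a: {b: 0 for b in candidates} for a in candidates}
    for ballot in ballots:
        pos = {c: i for i, c in enumerate(ballot)}
        for a in candidates:
            for b in candidates:
                if a != b and pos[a] < pos[b]:
                    d[a][b] += 1

    # p[a][b]: strength of the strongest path from a to b
    # (widest-path computation, a Floyd-Warshall variant)
    p = {a: {b: d[a][b] if d[a][b] > d[b][a] else 0 for b in candidates}
         for a in candidates}
    for i in candidates:
        for j in candidates:
            if j == i:
                continue
            for k in candidates:
                if k != i and k != j:
                    p[j][k] = max(p[j][k], min(p[j][i], p[i][k]))

    # a beats b if the strongest path a -> b is stronger than b -> a;
    # counting such wins is one simple way to linearize the pairwise
    # Schulze relation into a full ranking
    wins = {a: sum(1 for b in candidates if a != b and p[a][b] > p[b][a])
            for a in candidates}
    return sorted(candidates, key=lambda c: wins[c], reverse=True)

# Illustrative example: five websites, twelve invented ballots
websites = ["A", "B", "C", "D", "E"]
ballots = (5 * [["A", "C", "B", "E", "D"]] +
           4 * [["B", "D", "A", "C", "E"]] +
           3 * [["C", "A", "E", "B", "D"]])
print(schulze_ranking(websites, ballots))
</code>

Note that the Schulze method itself only defines the pairwise "beats" relation; sorting by the number of path-wins, as done here, is just a convenient simplification for producing a complete ranking.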
  
  
==== Goals (outdated) ====
  
<WRAP center round todo 60%>
----
  
==== Further Resources and References ====
  
Image source: https://upload.wikimedia.org/wikipedia/commons/thumb/1/18/Preferential_ballot.svg/220px-Preferential_ballot.svg.png
  * Felix Brandt, Vincent Conitzer, Ulle Endriss, and Jérôme Lang (Eds.). 2016. Handbook of Computational Social Choice. Cambridge University Press, New York.
  * Yann Chevaleyre, Ulle Endriss, Jérôme Lang, and Nicolas Maudet. 2007. A Short Introduction to Computational Social Choice. In SOFSEM 2007: Theory and Practice of Computer Science. Lecture Notes in Computer Science 4362 (2007).
  * Paul M. Fitts. 1954. The information capacity of the human motor system in controlling the amplitude of movement. Journal of Experimental Psychology 47, 6 (1954), 381–391.
  * Fajwel Fogel, Alexandre d'Aspremont, and Milan Vojnovic. 2016. Spectral Ranking using Seriation. Journal of Machine Learning Research 17 (2016), 1–45.
  * Maurits Kaptein, Clifford Nass, and Panos Markopoulos. 2010. Powerful and Consistent Analysis of Likert-Type Rating Scales. In Proceedings of CHI 2010 (2010), 2391–2394.
  * Rensis Likert. 1932. A Technique for the Measurement of Attitudes. Archives of Psychology 22 (June 1932), 5–55.
  * N. Menold and K. Bogner. 2016. Design of Rating Scales in Questionnaires. GESIS Survey Guidelines. GESIS - Leibniz Institute for the Social Sciences, Mannheim. https://doi.org/10.15465/gesis-sg_en_015
  * Judy Robertson. 2012. Likert-type Scales, Statistical Methods, and Effect Size. Commun. ACM 55, 5 (May 2012). https://doi.org/10.1145/2160718.2160721
  * Jörg Rothe, Dorothea Baumeister, Claudia Lindner, and Irene Rothe. 2012. Einführung in Computational Social Choice: Individuelle Strategien und kollektive Entscheidungen beim Spielen, Wählen und Teilen. Spektrum Akademischer Verlag, Heidelberg.
  * Markus Schulze. 2003. A new monotonic and clone-independent single-winner election method. Voting Matters 17 (2003), 9–19.
  * Markus Schulze. 2018. The Schulze Method of Voting. CoRR abs/1804.02973 (2018). http://arxiv.org/abs/1804.02973
  * Kerstin Völkl and Christoph Korb. 2017. Deskriptive Statistik: Eine Einführung für Politikwissenschaftlerinnen und Politikwissenschaftler (1st ed.). Springer-Verlag, Berlin Heidelberg.