Inter-Rater Reliability Assignment
In statistics, inter-rater reliability is the degree of agreement among raters. It is established when "different researchers observe the same behavior independently (to avoid bias) and compare their data. If the data is similar, then it is reliable" (McLeod, 2013). Accuracy and reliability also improve when the raters' training is thorough and kept up to date.
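As an illustration of comparing independent observers' data, the sketch below computes two common agreement measures for a pair of raters: simple percent agreement and Cohen's kappa, which corrects for chance agreement. The ratings are invented for the example and are not from any cited study.

```python
# Hypothetical sketch: two raters independently score the same eight
# observations, and we measure how well their data agree.

def percent_agreement(a, b):
    """Fraction of observations on which both raters gave the same score."""
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / len(a)

def cohens_kappa(a, b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(a)
    categories = set(a) | set(b)
    observed = percent_agreement(a, b)
    # Expected chance agreement from each rater's marginal frequencies.
    expected = sum((a.count(c) / n) * (b.count(c) / n) for c in categories)
    return (observed - expected) / (1 - expected)

rater_1 = [1, 3, 3, 7, 9, 2, 5, 5]
rater_2 = [1, 3, 4, 7, 9, 2, 5, 6]

print(percent_agreement(rater_1, rater_2))        # 0.75
print(round(cohens_kappa(rater_1, rater_2), 3))   # 0.714
```

Here the raters agree on six of eight observations, and kappa remains high after the chance correction, suggesting the data would be considered reliable in McLeod's (2013) sense.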
There are several ways to improve this kind of reliability. One is controlling the noise level of the research environment, since noise can interfere with a rater's accuracy. Random errors, which affect accuracy and scoring, are more likely when noise levels are not consistent across the research setting (Babisch, 2002). If the environment's noise level cannot be controlled, the researchers should at least know the noise level before the study begins, so that the risk of the results being influenced is minimized as much as possible.
It is also essential to know which rating system is being used. Reverse coding is common among researchers, particularly at the National Institutes of Health, where positive ratings are assigned lower values: on such a scale, 1 would be the best score and 10 the worst. Because a rating system leaves many possibilities for error, training before its use would benefit the research (Kothari, 2004).
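A short sketch of how reverse coding works on the 1-to-10 scale described above: each score is flipped about the scale's midpoint so that reverse-coded items line up with normally coded ones. The function name and scores are hypothetical, chosen only for illustration.

```python
# Hypothetical sketch of reverse coding a score on a low..high scale,
# where low (1) is the best score and high (10) the worst.

def reverse_code(score, low=1, high=10):
    """Flip a score so high values become low ones: (low + high) - score."""
    return (low + high) - score

raw_scores = [1, 4, 10, 7]
print([reverse_code(s) for s in raw_scores])  # [10, 7, 1, 4]
```

Applying the same flip twice returns the original score, which is one easy check a training session could use to confirm that assistants understand the scale.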
Training videos can be a great help to researchers who are new to the concept of inter-rater reliability, and several topics should be addressed in them. Everyone who works in research has to make sure the information they give out is valid and correct; funding decisions in this line of work are also influenced by what researchers report. Reviewers must know which scoring system they are using, so reviewing it is essential. Any reviewer should understand what they are reviewing and looking for when assigning evaluation scores. Finally, understanding the criteria is crucial for the sake of accuracy (Kothari, 2004).
Research assistants could be trained for a variety of purposes when it comes to observing how the children interact with the doll. A researcher has to know specifically which behaviors they are observing before the research even begins. It is equally essential for observers not only to watch for obvious aggressive behavior such as kicking and hitting, but also for behavior that is less readily noticed, such as slight pushing or rough handling of the doll.
Researchers must also be trained in the criteria that make up each specific rating. On a reversed scale of this type, ten is the most aggressive or negative score and one the least. A rating of one would indicate non-aggressive, positive, or no interaction at all between the child and the doll, while a rating of ten would indicate punching, kicking, and other highly violent behaviors. So that conclusions and outcomes can be reported accurately, training videos will be made available to research assistants. That training tool will stress the need for accurate research, explain why assistants are essential to the research itself, and walk through the rating scales.
Babisch, W. (2002). The noise/stress concept, risk assessment and research needs. Noise and Health, 4(16), 1.
Kothari, C. R. (2004). Research methodology: Methods and techniques. New Age International.
McLeod, S. (2013). What is Reliability? Retrieved from https://www.simplypsychology.org/reliability.html