MINITAB ASSISTANT WHITE PAPER

This paper explains the research conducted by Minitab statisticians to develop the methods and data checks used in the Assistant in Minitab Statistical Software.

Attribute Agreement Analysis

Overview

Attribute Agreement Analysis is used to assess the agreement between the ratings made by appraisers and the known standards. You can use Attribute Agreement Analysis to determine the accuracy of the assessments made by appraisers and to identify which items have the highest misclassification rates. Because most applications classify items into two categories (for example, good/bad or pass/fail), the Assistant analyzes only binary ratings. To evaluate ratings with more than two categories, you can use the standard Attribute Agreement Analysis in Minitab (Stat > Quality Tools > Attribute Agreement Analysis).

In this paper, we explain how we determined which statistics to display in the Assistant reports for Attribute Agreement Analysis and how these statistics are calculated.

Note: No special guidelines were developed for the data checks displayed in the Assistant reports.

Output

There are two primary ways to assess attribute agreement:

- The percentage of agreement between the appraisals and the standard
- The percentage of agreement between the appraisals and the standard after removing the effect of agreement by random chance (known as the kappa statistics)

The analyses in the Assistant were specifically designed for Green Belts. These practitioners are sometimes unsure about how to interpret kappa statistics. For example, 90% agreement between appraisals and the standard is more intuitive than a corresponding kappa value. Therefore, we decided to exclude kappa statistics from the Assistant reports. However, the disadvantage of reporting only the percentage of agreement is that the value includes both the agreement due to using a common assessment standard and the agreement by chance; the kappa statistic removes agreement by chance in its calculation. For this reason, when you use the Assistant, we encourage you to select an equal number of good and bad products across evaluations so that the percentage of agreement by chance is approximately the same.

The Assistant report displays pairwise percentage agreement values, which differ from the results from Stat > Quality Tools > Attribute Agreement Analysis. For example, suppose an appraiser collects two trials on each test item. In the Assistant report, if the appraiser matches the standard for test item X on the first trial but not on the second trial, the appraiser gets credit for one match. In the analysis from the Stat menu, the appraiser gets credit only when his or her ratings for both trials match. See Methods and Formulas in Minitab Help for the detailed calculations used in the Stat menu analysis.

The Assistant reports show pairwise percentage agreement between appraisals and the standard for appraisers, standard types, and trials, along with confidence intervals for the percentages. The reports also display the most frequently misclassified items and appraiser misclassification rates.
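To make this counting difference concrete, the following short Python sketch (our illustration, not Minitab code; the items and ratings are hypothetical) scores two trials per item in both ways: per trial, as in the Assistant report, and requiring every trial to match, as in the Stat menu analysis.

# Hypothetical ratings: one appraiser, two trials per item.
ratings = {"Item X": ["Good", "Bad"], "Item Y": ["Bad", "Bad"]}
standard = {"Item X": "Good", "Item Y": "Bad"}

# Pairwise counting (Assistant report): each trial is scored separately.
pairwise_matches = sum(
    rating == standard[item]
    for item, trials in ratings.items()
    for rating in trials
)
pairwise_total = sum(len(trials) for trials in ratings.values())

# Strict counting (Stat menu): an item counts only if every trial matches.
strict_matches = sum(
    all(rating == standard[item] for rating in trials)
    for item, trials in ratings.items()
)
strict_total = len(ratings)

print(pairwise_matches / pairwise_total)  # 3 of 4 trials match: 0.75
print(strict_matches / strict_total)      # 1 of 2 items matches on both trials: 0.5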

Calculations

The pairwise percentage calculations are not included in the output of the standard Attribute Agreement Analysis in Minitab (Stat > Quality Tools > Attribute Agreement Analysis). Instead, kappa, which is the pairwise agreement adjusted for agreement by chance, is used to represent the pairwise percent agreement in that output. We may add pairwise percentages as an option in the future if the Assistant results are well received by users.

We use the following data to illustrate how the calculations are performed.

Appraisers    Trials  Test Items  Results  Standards
Appraiser 1   1       Item 3      Bad      Bad
Appraiser 1   1       Item 1      Good     Good
Appraiser 1   1       Item 2      Good     Bad
Appraiser 2   1       Item 3      Good     Bad
Appraiser 2   1       Item 1      Good     Good
Appraiser 2   1       Item 2      Good     Bad
Appraiser 1   2       Item 1      Good     Good
Appraiser 1   2       Item 2      Bad      Bad
Appraiser 1   2       Item 3      Bad      Bad
Appraiser 2   2       Item 1      Bad      Good
Appraiser 2   2       Item 2      Bad      Bad
Appraiser 2   2       Item 3      Good     Bad

Overall accuracy

The formula is

X / N × 100%

Where
X is the number of appraisals that match the standard value
N is the number of rows of valid data

Example: 7 of the 12 appraisals match the standard value, so the overall accuracy is 7/12 × 100% = 58.3%.
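As a cross-check on the arithmetic, the following Python sketch (our illustration, not part of Minitab) computes the overall accuracy from the example data above; the data list and variable names are our own.

# Example data: (appraiser, trial, test item, result, standard)
data = [
    ("Appraiser 1", 1, "Item 3", "Bad", "Bad"),
    ("Appraiser 1", 1, "Item 1", "Good", "Good"),
    ("Appraiser 1", 1, "Item 2", "Good", "Bad"),
    ("Appraiser 2", 1, "Item 3", "Good", "Bad"),
    ("Appraiser 2", 1, "Item 1", "Good", "Good"),
    ("Appraiser 2", 1, "Item 2", "Good", "Bad"),
    ("Appraiser 1", 2, "Item 1", "Good", "Good"),
    ("Appraiser 1", 2, "Item 2", "Bad", "Bad"),
    ("Appraiser 1", 2, "Item 3", "Bad", "Bad"),
    ("Appraiser 2", 2, "Item 1", "Bad", "Good"),
    ("Appraiser 2", 2, "Item 2", "Bad", "Bad"),
    ("Appraiser 2", 2, "Item 3", "Good", "Bad"),
]

# X = appraisals that match the standard value; N = rows of valid data
X = sum(result == std for _, _, _, result, std in data)
N = len(data)
print(f"Overall accuracy = {X}/{N} = {100 * X / N:.1f}%")  # 7/12 = 58.3%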

% Accuracy for each appraiser

The formula is

X_i / N_i × 100%

Where
X_i is the number of appraisals by the ith appraiser that match the standard value
N_i is the number of appraisals for the ith appraiser

Example (accuracy for appraiser 1): Appraiser 1 matches the standard value in 5 of 6 appraisals, so the accuracy is 5/6 × 100% = 83.3%.

Accuracy by standard

The formula is

X_i / N_i × 100%

Where
X_i is the number of appraisals of items with the ith standard value that match the standard
N_i is the number of appraisals for the ith standard value

Example (standard Bad): 4 of the 8 appraisals of items with standard Bad match the standard, so the accuracy is 4/8 × 100% = 50%.

% Accuracy by trial

The formula is

X_i / N_i × 100%

Where
X_i is the number of appraisals in the ith trial that match the standard value
N_i is the number of appraisals for the ith trial

Example (trial 1): 3 of the 6 appraisals in trial 1 match the standard value, so the accuracy is 3/6 × 100% = 50%.
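These per-group accuracies all follow the same pattern, so a single helper (continuing the sketch and the `data` list from the previous section; the function name is our own) can reproduce them by grouping on any one column.

from collections import defaultdict

def accuracy_by(column, data):
    """Percent of appraisals that match the standard, grouped by one column."""
    matches, totals = defaultdict(int), defaultdict(int)
    for row in data:
        key = row[column]
        totals[key] += 1
        matches[key] += row[3] == row[4]  # result == standard
    return {key: 100 * matches[key] / totals[key] for key in totals}

print(accuracy_by(0, data))  # by appraiser: Appraiser 1 ~ 83.3%, Appraiser 2 ~ 33.3%
print(accuracy_by(4, data))  # by standard: Good = 75.0%, Bad = 50.0%
print(accuracy_by(1, data))  # by trial: trial 1 = 50.0%, trial 2 ~ 66.7%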

Accuracy by appraiser and standard

The formula is

X_ij / N_ij × 100%

Where
X_ij is the number of appraisals by the ith appraiser of items with the jth standard value that match the standard
N_ij is the number of appraisals for the ith appraiser and the jth standard value

Example (appraiser 2, standard Bad): Appraiser 2 matches the standard in 1 of the 4 appraisals of items with standard Bad, so the accuracy is 1/4 × 100% = 25%.

Misclassification rates

The overall error rate is

(number of appraisals that do not match the standard value / total number of appraisals) × 100%

Example: 5 of the 12 appraisals do not match the standard value, so the overall error rate is 5/12 × 100% = 41.7%.

If appraisers rate items whose standard value is Good as Bad, the misclassification rate is

(number of appraisals of Good items rated Bad / total number of appraisals of Good items) × 100%

Example: 1 of the 4 appraisals of Good items is rated Bad, so the misclassification rate is 1/4 × 100% = 25%.

If appraisers rate items whose standard value is Bad as Good, the misclassification rate is

(number of appraisals of Bad items rated Good / total number of appraisals of Bad items) × 100%

Example: 4 of the 8 appraisals of Bad items are rated Good, so the misclassification rate is 4/8 × 100% = 50%.

If appraisers rate the same item both ways across multiple trials, the misclassification rate is

(number of appraiser-item combinations rated both ways / total number of appraiser-item combinations) × 100%

Example: 3 of the 6 appraiser-item combinations are rated both Good and Bad across the two trials, so the misclassification rate is 3/6 × 100% = 50%.
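The same example data can reproduce the appraiser-by-standard accuracies and the misclassification rates described above (again reusing the `data` list from the earlier sketch; the grouping keys and variable names are our own).

from collections import defaultdict

# Accuracy for each (appraiser, standard) pair
matches, totals = defaultdict(int), defaultdict(int)
for appraiser, trial, item, result, std in data:
    totals[(appraiser, std)] += 1
    matches[(appraiser, std)] += result == std
pair_accuracy = {k: 100 * matches[k] / totals[k] for k in totals}
print(pair_accuracy[("Appraiser 2", "Bad")])  # 1 of 4 appraisals match: 25.0

# Misclassification rates across all appraisers
good_rows = [r for r in data if r[4] == "Good"]
bad_rows = [r for r in data if r[4] == "Bad"]
good_rated_bad = 100 * sum(r[3] == "Bad" for r in good_rows) / len(good_rows)
bad_rated_good = 100 * sum(r[3] == "Good" for r in bad_rows) / len(bad_rows)

# Rated both ways: an appraiser gives the same item different ratings across trials
ratings_by_pair = defaultdict(set)
for appraiser, trial, item, result, std in data:
    ratings_by_pair[(appraiser, item)].add(result)
both_ways = 100 * sum(len(v) > 1 for v in ratings_by_pair.values()) / len(ratings_by_pair)

print(good_rated_bad, bad_rated_good, both_ways)  # 25.0, 50.0, 50.0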

Appraiser misclassification rates

If appraiser i rates items whose standard value is Good as Bad, the misclassification rate is

(number of appraisals of Good items that appraiser i rates Bad / total number of appraisals of Good items by appraiser i) × 100%

Example (for appraiser 1): Appraiser 1 rates 0 of the 2 appraisals of Good items as Bad, so the misclassification rate is 0/2 × 100% = 0%.

If appraiser i rates items whose standard value is Bad as Good, the misclassification rate is

(number of appraisals of Bad items that appraiser i rates Good / total number of appraisals of Bad items by appraiser i) × 100%

Example (for appraiser 1): Appraiser 1 rates 1 of the 4 appraisals of Bad items as Good, so the misclassification rate is 1/4 × 100% = 25%.

If appraiser i rates the same item both ways across multiple trials, the misclassification rate is

(number of items that appraiser i rates both ways / total number of items appraised by appraiser i) × 100%

Example (for appraiser 1): Appraiser 1 rates 1 of the 3 items both Good and Bad across the two trials, so the misclassification rate is 1/3 × 100% = 33.3%.

Most frequently misclassified items

For the ith item, the misclassification rate is

(number of appraisals of item i that do not match the standard value / total number of appraisals of item i) × 100%

Example (item 1): 1 of the 4 appraisals of Item 1 does not match the standard value, so the misclassification rate for Item 1 is 1/4 × 100% = 25%.
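Finally, a sketch of the per-appraiser and per-item breakdowns (our illustration, reusing the `data` list from the earlier sketch; the helper name is hypothetical).

from collections import defaultdict

def appraiser_misclassification(appraiser, data):
    """Misclassification rates for one appraiser, as percentages."""
    rows = [r for r in data if r[0] == appraiser]
    good = [r for r in rows if r[4] == "Good"]
    bad = [r for r in rows if r[4] == "Bad"]
    ratings_by_item = defaultdict(set)
    for _, _, item, result, _ in rows:
        ratings_by_item[item].add(result)
    return {
        "Good rated Bad": 100 * sum(r[3] == "Bad" for r in good) / len(good),
        "Bad rated Good": 100 * sum(r[3] == "Good" for r in bad) / len(bad),
        "Rated both ways": 100 * sum(len(v) > 1 for v in ratings_by_item.values()) / len(ratings_by_item),
    }

print(appraiser_misclassification("Appraiser 1", data))  # 0.0, 25.0, ~33.3

# Percent of appraisals of each item that do not match the standard value
missed, item_totals = defaultdict(int), defaultdict(int)
for _, _, item, result, std in data:
    item_totals[item] += 1
    missed[item] += result != std
print({item: 100 * missed[item] / item_totals[item] for item in item_totals})
# Item 1 = 25.0, Item 2 = 50.0, Item 3 = 50.0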
