No. | Indicator type | | Definition in the current study
---|---|---|---
1 | Sensitivity and specificity | | Sensitivity was the ability of the SMOT tool to correctly identify HCPs who failed in the CMSM; specificity was the ability of the SMOT tool to correctly identify HCPs who passed the CMSM
2 | HCPs failure proportions (p) | Overall proportion (failed) | The number of HCPs recognized as failed by the tool/SP methodologies in all four criteria (a–d) was divided by the total number of tested HCPs
 | | By failure criteria (a, b, c, d) | The number of HCPs recognized as failed under each criterion (a, b, c, d) separately by the tool/SP methodologies was divided by the total number of tested HCPs
3 | Agreement (reliability) | Overall agreement (observed) | The proportion of true positives (TP, truly failed) plus true negatives (TN, truly passed) divided by the total: [(a + d)/(a + b + c + d)] × 100 [14]
 | | Kappa statistic | κ measured the inter-tool reliability, i.e., the non-chance agreement between the tool and SP on HCP failures: \(\kappa = \frac{p_{o} - p_{e}}{1 - p_{e}}\), where \(p_{o}\) is the relative observed agreement between the tool and SP and \(p_{e}\) is the chance agreement. κ ranges from −1 to 1; if the tool and SP are in complete agreement, then κ = 1 [15]
4 | Likelihood ratio (LR) | LR+ | Measures how much the odds of failure increase when the tool is positive (HCP failure) compared with the SP methodology; calculated as \(\frac{Sensitivity}{1-Specificity}\) [15]
 | | LR− | Measures how much the odds of passing increase when the tool is negative (HCP pass) compared with the SP methodology; calculated as \(\frac{1-Sensitivity}{Specificity}\) [15]
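The indicators above can all be derived from a single 2×2 cross-table of tool results against SP results. The sketch below illustrates the calculations; it is not the study's analysis code, and the cell counts are hypothetical values chosen only for illustration. Cell labels follow the table's convention: a = failed by both, b = failed by tool only, c = failed by SP only, d = passed by both.

```python
# Illustrative sketch: computing the table's indicators from a hypothetical
# 2x2 tool-vs-SP cross-table (counts are invented for demonstration).
# a = both failed (TP), b = tool failed / SP passed (FP),
# c = tool passed / SP failed (FN), d = both passed (TN).
a, b, c, d = 40, 5, 10, 45
n = a + b + c + d

sensitivity = a / (a + c)   # tool correctly flags SP-identified failures
specificity = d / (b + d)   # tool correctly confirms SP-identified passes

p_o = (a + d) / n           # observed (overall) agreement
# Chance agreement: sum over both outcomes of the product of marginal
# proportions (tool margin x SP margin).
p_e = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2
kappa = (p_o - p_e) / (1 - p_e)   # Cohen's kappa

lr_pos = sensitivity / (1 - specificity)   # LR+
lr_neg = (1 - sensitivity) / specificity   # LR-

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
print(f"agreement={p_o:.0%}, kappa={kappa:.2f}")
print(f"LR+={lr_pos:.2f}, LR-={lr_neg:.2f}")
```

With these illustrative counts, agreement is 85% while κ is lower (0.70), showing how the kappa statistic discounts the portion of agreement expected by chance.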