How has your assessment been validated?

To ensure that our brief assessment achieves a level of accuracy comparable to lengthier, conventional psychometric instruments, we carry out extensive, ongoing validity and reliability testing. To establish whether we measure what we intend to measure, we test construct validity by examining the associations between our measures and existing, well-established ones. We use statistical methods such as factor analysis to assess internal validity, i.e. whether items co-vary in the expected way. Finally, we address reliability by testing whether responses are consistent across time (test-retest reliability) and across different items (internal consistency).
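For readers who want to see what these checks look like in practice, here is a minimal sketch (in Python, with hypothetical data names, not our production pipeline) of two of the reliability statistics mentioned above: Cronbach's alpha for internal consistency and a simple test-retest correlation.

```python
# Minimal sketch of two reliability checks; `responses_t1` / `responses_t2`
# are hypothetical DataFrames with one column per scale item.
import pandas as pd
from scipy import stats

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Internal consistency: do items on the same scale co-vary as expected?"""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()      # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of total score
    return (k / (k - 1)) * (1 - item_vars / total_var)

def test_retest(time1: pd.Series, time2: pd.Series) -> float:
    """Test-retest reliability: Pearson r between the same scale scored twice."""
    r, _p = stats.pearsonr(time1, time2)
    return r

# Hypothetical usage:
# alpha = cronbach_alpha(responses_t1)
# stability = test_retest(responses_t1.sum(axis=1), responses_t2.sum(axis=1))
# Exploratory factor analysis (e.g. sklearn.decomposition.FactorAnalysis) is a
# separate step used to check that items load on the factors we expect.
```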

In addition to a thorough review of multi-disciplinary empirical and field research, we have statistically validated both the questions and the underlying factor structure of the model to further establish criterion, content, and construct validity. We incorporate already well-validated Big 5 items into our testing scales and examine the correlations between our own items and these existing ones as an additional check on construct validity. We also collect demographic information so that normative scores can be broken down by sex, age, and geographic location.
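As an illustration only, the sketch below shows how convergent correlations against established Big 5 scales and demographically normed percentiles might be computed; the column names (fitscore_openness, big5_openness, sex, age_band, region) are hypothetical and stand in for whatever fields the dataset actually uses.

```python
# Sketch of convergent validity and demographic norming, assuming a
# hypothetical `data` DataFrame with one row per respondent.
import pandas as pd
from scipy import stats

def convergent_validity(df: pd.DataFrame, new_scale: str, big5_scale: str):
    """Correlate our scale score with an established Big 5 scale score."""
    r, p = stats.pearsonr(df[new_scale], df[big5_scale])
    return r, p

def normative_percentiles(df: pd.DataFrame, score: str, groups: list[str]) -> pd.Series:
    """Percentile rank of each respondent's score within their demographic group."""
    return df.groupby(groups)[score].rank(pct=True)

# Hypothetical usage:
# r, p = convergent_validity(data, "fitscore_openness", "big5_openness")
# data["norm_pct"] = normative_percentiles(
#     data, "fitscore_openness", ["sex", "age_band", "region"])
```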

Protocols are in place for an ecologically valid field test of both the conceptual underpinnings and the statistical properties of our FitScore framework. To this end, we are testing the model in real-life teams, whose members provide not only detailed feedback on the perceived accuracy of their scores but also externally verifiable measures, allowing us to assess how our FitScores relate to various components of actual job satisfaction and performance.
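The sketch below illustrates, under the assumption of a hypothetical team_data table with a fitscore column and a few external criterion columns, how the relationship between FitScores and verifiable outcomes could be quantified; it is not our actual field-study analysis.

```python
# Sketch of a criterion-validity check: how strongly does the FitScore
# track externally verifiable satisfaction and performance measures?
import pandas as pd
from scipy import stats

def criterion_correlations(df: pd.DataFrame, fit_col: str,
                           criterion_cols: list[str]) -> pd.DataFrame:
    """Spearman correlation of the FitScore with each external criterion."""
    rows = []
    for col in criterion_cols:
        rho, p = stats.spearmanr(df[fit_col], df[col])
        rows.append({"criterion": col, "rho": rho, "p_value": p})
    return pd.DataFrame(rows)

# Hypothetical usage:
# report = criterion_correlations(
#     team_data, "fitscore",
#     ["job_satisfaction", "manager_rating", "retention_months"])
```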
