The Commission on Certification for Health Informatics and Information Management (CCHIIM) is an AHIMA commission dedicated to assuring the competency of professionals practicing HIIM. CCHIIM serves the public by establishing, implementing, and enforcing standards and procedures for certification and recertification of HIIM professionals. CCHIIM provides strategic oversight of all AHIMA certification programs. This standing commission of AHIMA is empowered with the sole and independent authority in all matters pertaining to both the initial certification and ongoing recertification (certification maintenance) of HIIM professionals.
Commissioners represent a broad spectrum of health informatics and information management professionals, and must be AHIMA-certified and meet certification, work experience, and leadership requirements in order to serve on the commission.
The Standards for Educational and Psychological Testing (The Standards) is a comprehensive technical guide that provides criteria for the evaluation of tests, testing practices, and the effects of test use. It was developed jointly by the American Psychological Association (APA), the American Educational Research Association (AERA), and the National Council on Measurement in Education (NCME). By professional consensus, the guidelines presented in The Standards have come to define the necessary components of quality testing.
AHIMA / CCHIIM certification exams are valid, reliable, and legally defensible assessment instruments that measure the competency of potential certificants against a codified and relevant body of health informatics and information management (HIIM) competencies (also referred to as knowledge, skills, and abilities). The subject matter represented by these competencies (also referred to as a body of knowledge, or BOK) is further segmented across specific roles and disciplines throughout the HIIM profession by the requisite levels of depth, breadth, and experience necessary for successful job performance, as exemplified by each respective AHIMA certification.
CCHIIM EDCs are composed of experienced, credential-specific subject matter experts, representing HIIM leaders, practitioners, and other relevant stakeholders. Each EDC is responsible for the specific oversight and performance of its respective credential's certification examination. EDC responsibilities are codified in the CCHIIM operating code and typically include recurring review of content relevancy, review of both item-level and examination-level performance data, and the exercise of subject matter expertise in establishing the cut score for the respective certification examination.
CCHIIM plans for and conducts comprehensive job analyses for each certification examination, with timing that depends on how quickly and substantively the competencies assessed by a given certification examination change. Typically, these job analyses occur approximately every three to five years. Consistent with best practices, the job analysis process involves a diverse and representative sample of stakeholders, including recently certified professionals and employers / supervisors. These stakeholders assess the criticality of current workplace practices, skills, tasks, and responsibilities with respect to importance and frequency of performance. The results of the job analysis determine the extent to which the competencies are revised for each respective certification examination. Ultimately, the job analysis process is a fundamental quality assurance component underpinning the relevancy, currency, and validity of the competencies assessed by each certification examination.
The job analysis serves as the foundation for the examination blueprint. First, the individual competencies are grouped into domains that represent specific and similar areas of content. Next, the percentage weighting of each content domain is determined, in part, through the individual competency statement criticality scores, considered collectively, within each domain. This weighting of domains relative to one another allows the EDCs to determine how much, or to what extent, each domain is assessed (both in the number and difficulty of test items) relative to the other domains. For example, domains whose competencies have higher criticality scores (i.e., more important and / or more frequently performed) typically represent a larger percentage of test items than domains with lower criticality scores for their respective competencies.
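The arithmetic behind this weighting can be illustrated with a brief sketch. The domain names, criticality scores, and item count below are entirely hypothetical, and the actual CCHIIM scoring rubric and aggregation method are not specified here; this simply shows one common approach, where each domain's weight is its share of the summed criticality scores.

```python
# Hypothetical criticality scores per competency, grouped by domain.
domains = {
    "Data Governance":    [4.2, 3.8, 4.5],
    "Compliance":         [3.1, 2.9],
    "Revenue Management": [4.8, 4.6, 4.1, 3.9],
}

# Sum criticality within each domain, then take each domain's share
# of the grand total as its percentage weight on the blueprint.
totals = {name: sum(scores) for name, scores in domains.items()}
grand_total = sum(totals.values())
weights = {name: round(100 * t / grand_total, 1) for name, t in totals.items()}

# Translate percentage weights into an item allocation for a form
# with a hypothetical count of 130 scored items.
items_on_form = 130
allocation = {name: round(items_on_form * w / 100) for name, w in weights.items()}
```

Under these illustrative numbers, "Revenue Management" carries the highest summed criticality and therefore receives the largest share of test items, mirroring the principle described above.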
The examination specifications are typically established or revised at the same time as the development of the examination blueprint. The specifications usually include the total number of test items (both scored and non-scored), test item type(s), such as multiple-choice or other, total test duration, scoring methodology, etc.
Items appearing on each certification examination are created, reviewed, revised, and ultimately approved by CCHIIM-authorized item writers and item reviewers. Potential item writers must apply for consideration and receive approval before beginning their training, and all item writers receive extensive training prior to creating any raw test items. First, item writers complete an introductory online course covering the basics of item writing. They then participate in an interactive, web-based seminar on advanced item writing techniques. Once both the introductory course and the advanced seminar are complete, item writers are granted access to the secure online item writing platform, where they can begin entering draft items.
Experienced item writers further facilitate item development from this point forward through a continuous, two-step process involving review and revision of draft items for content accuracy (performed by content reviewers) and conformity to the item writing guidelines (performed by guideline reviewers). All potential items must be approved through both steps before being added to the pool of non-scored, or experimental, items within the master item bank. All approved items are assigned to the corresponding certification examination queue for eventual inclusion as a non-scored, experimental item.
All certification examination items begin their testing life as non-scored, experimental items, often referred to as pre-test items. Each certification exam contains a small, predetermined percentage of pre-test items, and the specific number of pre-test items is included within the publicly available examination blueprint. Only after the performance data demonstrate that a pre-test item meets or exceeds previously established statistical criteria will that item be considered for inclusion as an operational, or scored, item.
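The kind of statistical screening described above is often based on classical item statistics such as difficulty (the p-value, or proportion of candidates answering correctly) and discrimination (the point-biserial correlation between the item and total score). The sketch below illustrates these two statistics with made-up response data; the screening thresholds shown are illustrative conventions from classical test theory, not CCHIIM's actual criteria.

```python
from statistics import mean, pstdev

def item_stats(item_responses, total_scores):
    """Difficulty (p-value) and point-biserial discrimination for one item.

    item_responses: list of 0/1 (incorrect/correct), one per candidate
    total_scores:   each candidate's total test score, same order
    """
    p = mean(item_responses)            # proportion answering correctly
    sd = pstdev(total_scores)
    if p in (0, 1) or sd == 0:
        return p, 0.0                   # degenerate item: no discrimination
    mean_correct = mean(s for s, r in zip(total_scores, item_responses) if r)
    mean_all = mean(total_scores)
    # Point-biserial: standardized gap between correct-group mean and
    # overall mean, scaled by sqrt(p / q).
    r_pb = (mean_correct - mean_all) / sd * (p / (1 - p)) ** 0.5
    return p, r_pb

# Hypothetical data: 8 candidates' responses to one pre-test item.
responses = [1, 1, 0, 1, 0, 1, 1, 0]
totals    = [88, 92, 60, 79, 55, 85, 90, 62]
p, r_pb = item_stats(responses, totals)

# Illustrative screening rule: not too easy or too hard, and
# positively discriminating between stronger and weaker candidates.
keep = 0.20 <= p <= 0.90 and r_pb >= 0.15
```

With this toy data the item is answered correctly by 62.5% of candidates and discriminates strongly, so it would pass this illustrative screen and be eligible for promotion to scored status.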
Examination performance is periodically reviewed by each respective EDC. As dictated by item performance, continuous and routine revisions occur: poorly performing items are removed and replaced with effectively performing pre-test items. More substantive updates or revisions to certification examinations require that subsequent versions be equated to prior versions, and that multiple concurrent forms be equated to one another. Changes to the examination blueprint and / or specifications resulting from a job analysis dictate that the cut score, or performance standard, be revisited and, when warranted, adjusted through a widely accepted, best-practice methodology such as the Angoff method. Aggregate data for all certification examinations are publicly available to candidates, certificants, and other relevant stakeholders. These aggregate data include the passing rates of first-time test takers and the total number of credentials awarded during the stated period of time.
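The Angoff method mentioned above can be sketched in a few lines. In a modified Angoff study, each judge estimates, for every item, the probability that a minimally competent candidate would answer correctly; a judge's implied cut score is the sum of those probabilities, and the panel's cut score is typically the mean across judges. The judge ratings below are hypothetical, and real standard-setting studies involve many more items, multiple rating rounds, and discussion between rounds.

```python
# Hypothetical modified Angoff ratings: rows are judges, columns are items.
# Each entry is a judge's estimate of the probability that a minimally
# competent candidate answers that item correctly.
ratings = [
    [0.70, 0.55, 0.80, 0.60],  # judge 1
    [0.65, 0.50, 0.85, 0.55],  # judge 2
    [0.75, 0.60, 0.75, 0.65],  # judge 3
]

# Each judge's implied cut score: the sum of their item probabilities,
# i.e., the expected raw score of a minimally competent candidate.
judge_cuts = [sum(row) for row in ratings]

# Panel cut score: the mean across judges, in raw-score points
# (here, out of 4 items).
cut_score = sum(judge_cuts) / len(judge_cuts)
```

On this toy four-item form the panel cut score comes out to 2.65 raw points, which an exam program would then round or scale according to its scoring policy.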