BCBA/BCaBA EXAMINATION REPORT FAQs

Who are the “just passed” candidates and why are scores compared to this group instead of to all passing candidates?

Candidates who have “just passed” the exam are defined as those scoring between the passing score (400) and one standard error of measurement (SEM) above the passing score (414).
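Expressed as a simple range check (a sketch only; the constant and function names below are ours for illustration and imply an SEM of 14 scaled-score points, the difference between 414 and 400 above):

    PASSING_SCORE = 400
    SEM = 14  # 414 - 400: one standard error of measurement on the score scale

    def is_just_passed(scaled_score: int) -> bool:
        # A "just passed" candidate scored at or above the cut score,
        # but no more than one SEM above it.
        return PASSING_SCORE <= scaled_score <= PASSING_SCORE + SEM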

Because one purpose of the BACB Exam Report is to help candidates identify content areas where improvement is needed in order to pass, performance is compared to those who have “just passed.” The average score of these candidates is representative of the minimum amount of knowledge required to pass the exam.

The purpose of this comparison is to represent the approximate amount of improvement needed to pass the exam on a future attempt. The average score of “just passed” candidates estimates the actual performance that retaking candidates must achieve; therefore, it is the most accurate representation of the necessary amount of improvement. The average score for all candidates who pass the exam is substantially higher than the average score of the “just passed” candidates and would greatly overestimate the amount of improvement required.

What does standard error of measurement mean?

The term standard error of measurement (SEM) describes the imprecision inherent in all measurement. In professionally developed exams, such as those for the BCBA and BCaBA credentials, the SEM is usually quite small because these exams have a high degree of reliability.

When a candidate takes an exam, the score that they receive is their observed score for that administration. The BACB administers many forms of the BCBA and BCaBA exams each year. If a candidate could take every possible form of the exam, the average of all of their scores would equal their true score. Because taking every form of the exam is neither practical nor possible, we use the SEM to describe the amount of variation in observed scores that results from measurement error.
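A toy simulation can make this concrete. The true score, SEM, and number of forms below are invented purely for illustration, and real exam forms are not independent random draws, but the averaging idea is the same:

    import random

    random.seed(0)
    TRUE_SCORE, SEM = 420, 14  # hypothetical candidate knowledge level and exam SEM

    # Pretend each "form" yields the true score plus normally distributed error.
    observed = [random.gauss(TRUE_SCORE, SEM) for _ in range(10_000)]

    print(round(sum(observed) / len(observed)))  # ~420: averaging over many forms recovers the true score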

The standard error of measurement is often used to estimate a confidence interval: a range around the observed score within which the true score is likely to fall.

There is a 68% chance that a candidate’s true score will fall within ±1 SEM of their observed score. Expanding the range to ±2 SEM results in a 95% confidence interval. A candidate obtaining a score of 80 on an exam with 100 questions and an SEM of 2 can be 68% confident that their true score is between 78 (-1 SEM) and 82 (+1 SEM) and 95% confident that their true score is between 76 (-2 SEM) and 84 (+2 SEM). In other words, if the candidate were to take the exam again, assuming that their knowledge level did not change, there would be a 95% chance that their score would fall between 76 and 84.
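A minimal sketch of that arithmetic, using the same numbers as the example above (this mirrors the textbook formula, not the BACB’s actual scoring software):

    def confidence_interval(observed: float, sem: float, z: float) -> tuple[float, float]:
        # Band of z standard errors of measurement around an observed score.
        return observed - z * sem, observed + z * sem

    print(confidence_interval(80, 2, z=1))  # (78.0, 82.0): ~68% confidence
    print(confidence_interval(80, 2, z=2))  # (76.0, 84.0): ~95% confidence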

What do the error bars on the average scores for “just passed” candidates mean and why are they different sizes?

The error bars on the columns showing the scores of the “just passed” candidates represent the range of scores falling within plus or minus one standard error of measurement (SEM) of the average score obtained by the “just passed” candidates for each content area. Generally, scores based on fewer items have larger SEMs than scores based on many items. This is why the error bars for the individual content areas are larger than those for the major sections of the exam (i.e., Basic Skills, Client-Centered Responsibilities).
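For intuition only, the effect of item count on the SEM can be approximated with the binomial standard error of a percentage-correct score, sqrt(p(1 - p)/n). Operational exams use more refined psychometric models, and the percentages and item counts below are made up:

    import math

    def approx_percent_sem(p: float, n_items: int) -> float:
        # Binomial approximation: the SEM of a percentage-correct score
        # shrinks as the number of items grows.
        return 100 * math.sqrt(p * (1 - p) / n_items)

    print(round(approx_percent_sem(0.7, 10), 1))  # ~14.5 points for a 10-item content area
    print(round(approx_percent_sem(0.7, 80), 1))  # ~5.1 points for an 80-item section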

If your score on a content area is lower than the bottom of the error bar, that area may warrant more attention than areas where your scores fell within the error bar range.

Do I have to obtain a higher score than the “just passed” candidates in every content area in order to pass the exam?

No, the Pass/Fail decision is based on the whole exam rather than on each content area.

The purpose of the exam is to determine whether candidates have mastered enough knowledge of applied behavior analysis to become a BCBA or BCaBA, so we must look at overall performance. It is normal for people to have strengths and weaknesses, and for strengths to compensate for weaknesses as people grow within their careers. This is why a compensatory scoring model, in which strong performance in one content area can offset weaker performance in another, is best suited for certification exams.
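A minimal sketch of a compensatory decision rule (the area names, item counts, and passing threshold are hypothetical; the actual exam uses scaled scores, not raw sums):

    def passes(correct_by_area: dict[str, int], passing_total: int) -> bool:
        # Compensatory model: only the total across all content areas is
        # compared to the cut score; no individual area has its own cutoff.
        return sum(correct_by_area.values()) >= passing_total

    # A weak area (3/10 correct) is offset by stronger ones:
    print(passes({"Area A": 3, "Area B": 18, "Area C": 14}, passing_total=32))  # True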

My score bars are above the top of the error bars in some content areas and below the bottom of the error bars in other areas. Visually, it looks like the areas where my bars are above should be more than enough to compensate for the areas where my bars are below. Why did I fail?

The number of questions in each content area varies, so simply averaging your content-area percentages will not reproduce your overall percentage-correct score. For the same reason, you cannot directly compare the relative heights of your score bars from one content area to another to judge whether a strength in one area is sufficient to compensate for a weakness in another. Keep in mind that areas where your performance was weaker may contain more items than areas where your performance was stronger.
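A small worked example, with made-up item counts, shows why averaging or eyeballing the bars can mislead:

    areas = {  # hypothetical content areas: (number of items, number correct)
        "Area A": (10, 9),   # 90% on a small area
        "Area B": (40, 22),  # 55% on a large area
    }

    simple_avg = 100 * sum(c / n for n, c in areas.values()) / len(areas)
    overall = 100 * sum(c for _, c in areas.values()) / sum(n for n, _ in areas.values())

    print(round(simple_avg))  # 72: the unweighted average of the two bars
    print(round(overall))     # 62: the actual overall percentage correct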

Should I only study the content areas where my score is below the bottom of the error bar?

Information on your performance by content area was provided to help you identify relative areas of strength and weakness. However, use caution when interpreting your content-area performance. The percentage-correct scores were calculated from relatively few items and therefore may not be reliable indicators of your understanding of each content area. When preparing to take the exam again, it is important to study all content areas. Studying only the areas in which you obtained lower scores might improve your performance in those areas at the cost of decrements in others.

Why are the average score bars for “just passed” candidates on my exam report different from the average score bars on my friend’s exam report?

For security reasons, we administer multiple forms of each exam during the testing window. Candidates are randomly assigned to test forms, so you and your friend likely took different forms of the exam. We calculate the average scores of the “just passed” candidates and the standard errors of measurement separately for each exam form in use during the testing window.

We equate the exam forms to a base exam to ensure that overall difficulty remains consistent from form to form. It is possible, however, for the average difficulty of the items within individual content areas to vary slightly from one exam form to the next. It is also possible for the knowledge level of the “just passed” candidate groups assigned to different forms to vary slightly.