Security Update: Test Development & Publishing


This is the second article in a four-part series.

In our last newsletter we outlined Prometric’s investment in multiple security initiatives that span the entire examination lifecycle. Their combined objective is to provide maximum protection for intellectual property and to reinforce integrity as a hallmark of our test center operations. We previously highlighted efforts within the Results & Analyses area of the exam lifecycle. In this article we will provide an update on three capabilities within the area of Test Development & Publishing.


Resource Level Blocking (RLB)

Resource Level Blocking allows clients to react quickly to required content changes. RLB not only blocks items from being presented, it allows suitable replacements of that content to be seamlessly inserted into the exam without republication. Through RLB, exams are published with additional content that is held in reserve until it is required.

Example: If, following the publication of an exam, a problem is detected in either the answer key or the text of the item itself, a parameter file is transmitted to all of the testing centers alerting the system to block that item from view and replace it with one of the items within the reserve pool. This process ensures that all candidates receive the same number of items in their exams and avoids potential issues with the quality of the exam.

While the most common use of this feature is to block delivery of an item that has been found to be invalid, it can also be used to enhance the security of an exam as well. Items can be systematically replaced, either in response to an identified breach or as part of an ongoing strategy of item rotation designed to mitigate the risks associated with overexposure.
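The block-and-replace process described above can be sketched in a few lines. This is a minimal illustration only; the function and parameter names are hypothetical, and Prometric's actual parameter-file format and delivery logic are not shown here.

```python
def apply_rlb(published_form, reserve_pool, blocked_ids):
    """Deliver a form with blocked items swapped for reserves.

    published_form: ordered list of item IDs as originally published.
    reserve_pool: ordered list of reserve item IDs held back at publication.
    blocked_ids: set of item IDs flagged by the parameter file.
    """
    reserves = list(reserve_pool)
    delivered = []
    for item_id in published_form:
        if item_id in blocked_ids:
            if not reserves:
                raise RuntimeError("reserve pool exhausted")
            # Seamless replacement: candidate count of items is unchanged.
            delivered.append(reserves.pop(0))
        else:
            delivered.append(item_id)
    return delivered


# Example: item "B" is blocked after publication and replaced by reserve "R1".
form = apply_rlb(["A", "B", "C"], ["R1", "R2"], {"B"})
```

Note that every candidate still receives the same number of items, which is the property the example in the text emphasizes.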

Linear-on-the-fly testing (LOFT)

Linear-on-the-fly testing (LOFT) is an effective alternative to traditional fixed-form delivery, in which a pre-determined set of items is administered to every candidate. With LOFT, a test is constructed by selecting items from the bank each time a candidate takes an exam. The number of possible forms is limited only by the size of the item bank and the requirements imposed on the construction process, such as item difficulty and the number of content domains and sub-domains. Because of the very large number of possible forms, test security is less of an issue than in traditional linear form-based construction, where a small number of fixed forms is administered to the entire population of candidates.

In traditional fixed-form test delivery, a small number of forms (often 2 to 4) are built in advance according to the test specifications. The forms are then released, and candidates will be randomly administered one of those forms when they test. With LOFT an entire pool of items is released to the field, accompanied by computer code to intelligently design a new form every time a candidate takes the test. This ensures each candidate will receive a virtually unique test, making attempts at item harvesting and content sharing far more difficult.
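The essence of on-the-fly assembly is drawing items from the released pool so that each form still meets the test specifications. The sketch below shows one simple way this could work, using a per-domain blueprint; the data structures and names are illustrative assumptions, not Prometric's actual construction code, and real LOFT engines also balance statistical properties such as item difficulty.

```python
import random


def build_loft_form(item_bank, blueprint, seed=None):
    """Assemble a fresh form each time a candidate tests.

    item_bank: dict mapping content domain -> list of item IDs in the pool.
    blueprint: dict mapping content domain -> number of items required.
    seed: optional, for reproducibility in this illustration only.
    """
    rng = random.Random(seed)
    form = []
    for domain, count in blueprint.items():
        pool = item_bank[domain]
        if len(pool) < count:
            raise ValueError(f"not enough items in domain {domain!r}")
        # Sample without replacement so no item repeats within a form.
        form.extend(rng.sample(pool, count))
    return form


bank = {"pharmacology": ["p1", "p2", "p3", "p4"], "ethics": ["e1", "e2"]}
spec = {"pharmacology": 2, "ethics": 1}
form = build_loft_form(bank, spec)
```

Because each call re-samples from the pool, two candidates are very unlikely to see the same form, which is what makes harvesting and content sharing far less effective.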

Computerized Mastery Testing (CMT)

Computerized Mastery Testing (CMT) provides several key advantages over linear testing, such as shorter overall exam lengths and greater control over decision error. To achieve these advantages CMT employs a variable number of content blocks, known as testlets. In a CMT model, testlets are collections of items that precisely match the content domains and difficulty levels specified in the exam design. At the completion of each testlet the candidate’s performance is assessed against pre-established standards that reflect the probability of being able to make a pass/fail determination. If a candidate has demonstrated either mastery (pass) or non-mastery (fail), the exam ends. If a candidate’s performance is still in a zone of uncertainty, in which neither a pass nor a fail can yet be established, another testlet is presented. This decision-making process continues until the candidate can clearly be identified as either having passed or failed the exam.

The variable nature of the candidate experience makes meaningful item harvesting or memorization difficult, because candidates cannot predict which testlets they will see during any particular exam. Additionally, the stopping rules enforced by the CMT algorithm minimize item exposure and thereby increase the effective “shelf life” of the exams.

RLB, LOFT and CMT are just three examples of the numerous intellectual property protection methodologies available. The targeted use of these tools should be a key component of an overall security strategy that balances investment across the exam lifecycle, thus extending the life of items and exams.

Additional methods and tools that you may want to discuss with Prometric include:

  • randomization of questions,
  • exam launch codes,
  • interactive scenarios with dynamic branching, and
  • our proprietary item development and item banking solution.

Particular characteristics of testing programs make some techniques more appropriate than others. If you are interested in incorporating any of these capabilities into your examination programs, please consult your Prometric Client Services Manager.

In the next newsletter we will continue updating you on these security efforts and cover the next step in the exam lifecycle, Technology Enablement.