Alpha Testing


Module Summary

“Alpha testing” refers to in-house, within-team assessment of online learning objects, modules, and sequences once an online learning development is provisionally complete. This module describes what alpha testing is, how it is done, how changes are prioritized after the testing, and related details. Alpha testing is usually done before beta testing, a form of assessment involving the broader learning public (generally to test for learning efficacy). This module should be read as the precursor to the “Beta Testing” module.


Takeaways

Learners will...

  • consider what “alpha testing” is in an instructional design context, what it is generally, and when it occurs in the design and development sequence
  • review how alpha testing is generally done and some automated testing methods
  • list some quality metrics applied during alpha testing, where assessment instruments come from (and how to create them), and some common forms of assessment instruments
  • review the elements of learning design tested during an alpha test
  • think about how proposed changes and fixes are prioritized after an alpha test, the revision work following an alpha test, and how to archive the documents related to an alpha test


Module Pretest

1. What is alpha testing (in an instructional design context)? What does it entail? At what point in the design and development cycle is alpha testing done, and why?

2. How is alpha testing generally done? Are there some automated testing methods for alpha testing of digital learning objects and modules and sequences?

3. What quality metrics is alpha testing supposed to address? Are the assessment instruments generic or customized or a mix of both? What are some common forms of assessment instruments?

4. What elements of learning design are tested during an alpha test? How? Why?

5. How are proposed changes prioritized post alpha test? What happens after alpha testing? What work is entailed? What assessment documents are kept?


Main Contents

1. What is alpha testing (in an instructional design context)? What does it entail? At what point in the design and development cycle is alpha testing done, and why?

“Alpha testing” (followed by “beta testing”) refers to the assessment of a range of design requirements for digital learning objects, modules, and online learning sequences (long courses, short courses, and others). This testing is often done by members of the design and development team or by in-organization testers. (It is non-public, internal testing.)

Alpha testing involves assessing the following elements:

  • content accuracy (fact-checking)
  • language quality (revision and editing)
  • every designed functionality and learning path
  • technical standards
  • object playability (on different operating systems, web browsers, mobile devices, etc.)
  • look-and-feel consistency
  • proper branding (if relevant)
  • accessibility (of imagery, audio, video, and other elements)
  • legal compliance (intellectual property, privacy protections, and others)
  • source reference citations
  • and others

The elements being tested depend on the original authorizing documents that enabled the project (including the definition of deliverables), the project stylebook, legal standards, learning objectives, and technical (technology-based) standards, among others. New assessment features may be added as new needs are defined: new and unexpected “bugs” may require patches, new findings in a field may affect the learning contents, and so on. Under deadline and budget constraints, though, not everything surfaced in an alpha test can be addressed, so the suggested changes have to be vetted thoughtfully.

Alpha testing usually occurs after the instructional design has been approved, the various objects have been developed, and the entire learning sequence is provisionally complete. Various individuals will have developed different parts of the course, and while everyone should have been working off the stylebook, deviations may have occurred. The point is that the learning should look all-of-a-piece and be experienced in a coherent way. Alpha testing works to make sure that everything functions individually and together. (Even if a full design was created by one person, alpha testing is still critical because technological and legal standards are unforgiving. Also, the legal, technological, and domain spaces are constantly changing, and in the time a project took to be finalized, the goalposts may have moved.)

Alpha testing does not take the place of testing done along the way; rather, it focuses on the provisionally completed work to ensure that the elements work as stand-alones and as interacting units.


2. How is alpha testing generally done? Are there some automated testing methods for alpha testing of digital learning objects and modules and sequences?

Alpha testing begins with collecting the necessary standards for the online learning project from the authorizing documents (awarded grant documents, for example), the legal standards, the stylebook, the learning objectives, and the technical standards. This information is codified into checklists, rubrics, or tables. Assessors then conduct walk-throughs of the various learning objects and mark up the lists with details of what they observe.
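
To make the codification step concrete, the sketch below renders a few of the standards above as a digitized checklist that assessors can mark up during walk-throughs. It is a minimal illustration in Python; the categories, check items, and CSV format are assumptions, not a prescribed instrument.

  # A digitized alpha-test checklist (illustrative items only).
  import csv

  CHECKLIST = [
      ("Content",       "Facts verified against cited sources"),
      ("Language",      "Text revised and copyedited per the stylebook"),
      ("Functionality", "Every designed learning path walks through cleanly"),
      ("Accessibility", "Images carry alt text; videos carry captions"),
      ("Legal",         "Intellectual property and privacy protections documented"),
  ]

  def new_checklist_file(path: str) -> None:
      """Write a blank mark-up sheet for assessors to fill in during walk-throughs."""
      with open(path, "w", newline="") as f:
          writer = csv.writer(f)
          writer.writerow(["Category", "Check item", "Pass/Fail", "Assessor notes"])
          for category, item in CHECKLIST:
              writer.writerow([category, item, "", ""])

  new_checklist_file("alpha_test_checklist.csv")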

In smaller projects, the designers/developers are also the assessors, and corrections may be made on the fly and then recorded or documented. The alpha test is a necessary step to ensure that the deliverables are up to standard and function as advertised. Testing is a requisite step for all instructional design, whether or not this step is formalized as an alpha test.

Some types of digital learning objects may be assessed within the tech tool itself [such as the learning management system (LMS) or an online survey tool].
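
Automated methods can cover part of this ground. As one example, the sketch below scans exported HTML for images that lack alt text, one of the accessibility elements listed earlier. It is a minimal, assumption-laden illustration (the sample markup is invented), and dedicated accessibility checkers test far more.

  # Flag <img> tags that lack alt text in exported HTML (a sketch of one
  # automated accessibility check; real checkers cover far more).
  from html.parser import HTMLParser

  class MissingAltChecker(HTMLParser):
      def __init__(self):
          super().__init__()
          self.missing = []  # src values of images without usable alt text

      def handle_starttag(self, tag, attrs):
          if tag == "img":
              attributes = dict(attrs)
              if not (attributes.get("alt") or "").strip():
                  self.missing.append(attributes.get("src", "(unknown src)"))

  checker = MissingAltChecker()
  checker.feed('<img src="diagram.png"><img src="photo.jpg" alt="Campus map">')
  print(checker.missing)  # ['diagram.png']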

3. What quality metrics is alpha testing supposed to address? Are the assessment instruments generic or customized or a mix of both? What are some common forms of assessment instruments?

A digital learning object, module, or learning sequence should be factually accurate, of learning value, properly described with metadata, legally compliant, accessible, fully functional, fully cited, and technologically interoperable, among other qualities. The requirements for the contents are based on the authorizing documents (from the project funders), the learning domain, the applied technologies, and other factors.

Assessment instruments for alpha tests may be generic in some cases, customized in others, and a mix of the two in still others. For example, accessibility templates are available that offer generic measures of accessibility. (Indeed, online assessment tools for website accessibility have existed for decades.) Customized assessments are specific to a project and to the particular learning objects, modules, and learning sequences designed for the particular context. For example, targeted online learning focuses on the specific needs of particular learners (often outliers on a bell curve); these standards may not apply to other projects but are critical to the particular case. In most cases, the assessment instruments combine generic and customized standards.

The alpha testing instruments usually take the form of digitized checklists, rubrics, or tables, which capture the observations both to enable the fixes and to keep a clear record.


4. What elements of learning design are tested during an alpha test? How? Why?

The core raison d’être of online learning is to enable effective learning while observing legal requirements and working within the technological, domain-content, and budgetary constraints of the particular design and development context.

A focus on minute details may result in losing sight of the actual objective: the learning value of the designed objects. The aspects of a learning design that are tested include the following:

  • The learning welcome
  • The instructor presence
  • The course setup and sequence
  • The informational value of the learning
  • The clarity of directions
  • The risks of “negative learning” (or misinterpretation of the learning experience)
  • The effectiveness of cognitive scaffolding / learner supports
  • The accuracy of the anticipated amount of time for the work
  • The dependencies of the learning (such as on textbooks, social media videos, and other elements; a link-check sketch follows this list)
  • The datedness of the materials
  • The writing clarity, consistency, coherence, appropriateness of voice and tone (and what happens to the messaging when the writing is translated from the main language into other languages using automated language translators)
  • The communicated sense of credibility and trust
  • The clarity of the navigational cues
  • The alignment of learning objectives – assignments – formative assessments – learning outcomes
  • The consistency and appropriateness of the branding and reputation
  • And others
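
For the dependency and datedness items above, part of the walk-through can be automated with a link check. A minimal sketch appears below, assuming the external dependencies are reachable by URL; the URLs are placeholders, and a production check would also handle redirects, rate limits, and servers that reject HEAD requests.

  # Check external learning dependencies for link rot (illustrative URLs).
  import urllib.error
  import urllib.request

  def link_is_reachable(url: str, timeout: float = 10.0) -> bool:
      """Return True if the URL answers an HTTP request without an error status."""
      request = urllib.request.Request(url, method="HEAD")
      try:
          with urllib.request.urlopen(request, timeout=timeout) as response:
              return response.status < 400
      except (urllib.error.URLError, TimeoutError):
          return False

  for url in ["https://example.com/reading.pdf", "https://example.com/video"]:
      print(url, "OK" if link_is_reachable(url) else "BROKEN")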

The follow-on beta test more directly addresses the learner angle of a learning design, this time with live learners external to the team. (The design and development team becomes overly familiar with the learning contents from the subject matter expert / domain expert / content expert, so its members may not see learning gaps and other challenges that a beta test may reveal.)


5. How are proposed changes prioritized post alpha test? What happens after alpha testing? What work is entailed? What assessment documents are kept?

Ideally, all the identified issues would be addressed, and no critical fixes would be required. If the project has been professionally managed from the beginning, there should be no costly mistakes this late in the process.

  • Early on, the technologies should have been thoroughly vetted.
  • A consensus stylebook should have been created and followed.
  • Appropriate templates should have been made.
  • Content should have been vetted for accuracy.
  • Writing should have been revised and edited before inclusion in a slideshow, recording in a screencast or other video, and so on.
  • Scripts should have been vetted for accuracy.
  • Accessibility accommodations should have been made along the way.
  • Testing should have been done along the way.
  • Fast-moving changes in the domain, the technologies, and the legal landscape should have been addressed along the way. (This assumes that the team members have been paying attention.)

In general, all critical fixes have to be addressed before going live, unless they are prohibitively expensive to fix. If proper design and development steps were taken, critical issues should have been addressed long ago rather than discovered this late in the process.
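
As one concrete way to apply this prioritization, the sketch below triages alpha-test findings by a simple severity scheme and flags go-live blockers. The severity labels, issue records, and blocking rule are assumptions for illustration, not a standard taxonomy.

  # Triage alpha-test findings by severity; critical issues block go-live.
  SEVERITY_ORDER = {"critical": 0, "major": 1, "minor": 2, "cosmetic": 3}

  issues = [
      {"id": 7,  "summary": "Quiz feedback shows the wrong answer key", "severity": "critical"},
      {"id": 12, "summary": "Module 3 banner is off-brand",             "severity": "minor"},
      {"id": 3,  "summary": "Lecture video is missing captions",        "severity": "major"},
  ]

  # Work the list highest-severity first.
  issues.sort(key=lambda issue: SEVERITY_ORDER[issue["severity"]])

  blockers = [issue for issue in issues if issue["severity"] == "critical"]
  if blockers:
      print(f"Go-live blocked by {len(blockers)} critical issue(s).")
  for issue in issues:
      print(issue["severity"].upper(), "-", issue["summary"])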

The more deeply ingrained mistakes are, the harder they are to undo later in a project. If critical errors remain, digital learning objects, modules, and sequences can be pulled from release.

Finally, all assessment documents and decisions are recorded and maintained in the raw files for the project.

How To

Local teams should design their own methods for alpha testing, with a focus on what works for them. Whatever the process, it should result in a professional product at the end.

Possible Pitfalls

Alpha testing does not take the place of assessments and tests along the way. The earlier the interventions occur, the better.

Module Post-Test

1. What is alpha testing (in an instructional design context)? What does it entail? At what point in the design and development cycle is alpha testing done, and why?

2. How is alpha testing generally done? Are there some automated testing methods for alpha testing of digital learning objects and modules and sequences?

3. What quality metrics is alpha testing supposed to address? Are the assessment instruments generic or customized or a mix of both? What are some common forms of assessment instruments?

4. What elements of learning design are tested during an alpha test? How? Why?

5. How are proposed changes prioritized post alpha test? What happens after alpha testing? What work is entailed? What assessment documents are kept?

