Beta Testing

Module Summary

Takeaways

Learners will...

  • consider the role of beta testing in instructional design
  • review how model learners are recruited for beta testing and what features make such learners optimal
  • consider ways to test online learning sequences with recruited learners
  • list deal breakers and non-starters for the designed online learning and consider how to address these
  • consider how to leverage the beta testing data for improved instructional design in the future


Module Pretest

1. What is “beta testing” in instructional design? What is the role of beta testing as compared to alpha testing? Is beta testing always done with instructional designs? Are there equivalencies to “beta testing” conducted during the design and development of online learning objects, modules, and sequences?

2. How are model learners recruited for beta testing? How representative do they have to be of the target learners for the online learning sequence? How important is diversity in this learner group? Why is it important not to stereotype while trying to find representative learners? How are such learners repaid for their time and insights?

3. What are some ways to test online learning sequences with recruited learners? How should beta tests be designed? Through focus groups? Through surveys? Through learning system event log (or other) data? What are stand-ins for beta testing if beta testing is too expensive to conduct?

4. After beta testing, what are deal breakers and non-starters for the designed online learning? Why? How should these be addressed?

5. How can designers and developers leverage beta testing data for improved instructional design for the current project, but also for improved instructional design in the future?


Main Contents

1. What is “beta testing” in instructional design? What is the role of beta testing as compared to alpha testing? Is beta testing always done with instructional designs? Are there equivalencies to “beta testing” conducted during the design and development of online learning objects, modules, and sequences?

Before an online digital learning object or course goes live, it often goes through alpha testing to ensure that the contents meet the defined specifications; “beta testing” is then done to understand how the learning objects are received by learners. Alpha testing occurs in-house, within the development team or the sponsoring organization. Beta testing is still run by the creators of the learning objects, but the learners themselves come from outside the organization.

Beta testing is done with formal learning designs in corporations but is fairly rare in higher education.

Beta testing, or running learning objects by potential learners, is also done informally during the design and development of online learning objects and modules. Here, team members stand in for learners and assess what they have designed by pilot-testing mock-ups. Mock-ups are used because fully developed objects are too expensive to rework if gaps in the design are discovered late. It is much more efficient to scrap a design than a developed object with dozens to hundreds of hours already invested.


2. How are model learners recruited for beta testing? How representative do they have to be of the target learners for the online learning sequence? How important is diversity in this learner group? Why is it important not to stereotype while trying to find representative learners? How are such learners repaid for their time and insights?

For corporations, members of the general public who may eventually experience the designed learning are usually recruited through advertising and rewarded with free services or products.

In academic environments, target or model learners are recruited from the general student population. Occasionally, there is more selectivity, depending on the learning and the targeted audiences. Target learners should generally be representative of the likely learners once the online learning is published into the learning management system (LMS) and college catalogs. The risk in over-targeting is that the researchers end up with a small and non-representative sample. There are also risks in stereotyping, that is, recruiting learners who fit an assumed profile but are not actually representative of the larger population of potential learners. In an academic context, student participants are rewarded for their investment with small gift certificates (within Institutional Review Board constraints on human subjects research).

Online learning is designed not only for the bulk of a population in a Gaussian or bell curve but also for the outliers, so the more diverse a test population can be, the richer the potential feedback and insights about the designed online learning.


3. What are some ways to test online learning sequences with recruited learners? How should beta tests be designed? Through focus groups? Through surveys? Through learning system event log (or other) data? What are stand-ins for beta testing if beta testing is too expensive to conduct?

Researchers test online learning objects and sequences with learners for several core reasons:

  • They want to understand how the online learning is received.
  • They want to find gaps in the learning design.
  • They want to know how well the learning fits with the perceived level of knowledge of the target learners.
  • They want to understand the fit of the learning for outlier learners, both those with high and low levels of content knowledge.
  • They want to understand how well the online learning travels to different contexts (cultural; modalities—F2F, hybrid, or online; and so on).
  • They want to know how well a theorized learning approach works or doesn’t work.

Based on the type of information they want, researchers need to define questions and test contexts that will answer those questions satisfactorily. They also need to be open to new discoveries as those emerge.

One approach to testing online learning sequences is to invite reviewers to experience the learning and then debrief them through focus groups and questionnaires or surveys, and to analyze their online behavior on the various learning systems. Eye-tracking has also become popular as a method, given the low cost of eye-tracking software built to use webcams. Even simple event-log analysis can show where testers start a learning object but do not finish it, as sketched below.
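
Here is a minimal sketch of such event-log analysis in Python. It assumes a hypothetical CSV export (events.csv) with learner_id, object_id, and event columns; real LMS exports differ, so the file name, column names, and event labels are illustrative only:

  # Summarize a hypothetical LMS event-log export from a beta test.
  import csv
  from collections import defaultdict

  started = defaultdict(set)    # object_id -> learners who opened it
  completed = defaultdict(set)  # object_id -> learners who finished it

  with open("events.csv", newline="") as f:
      for row in csv.DictReader(f):
          if row["event"] == "viewed":
              started[row["object_id"]].add(row["learner_id"])
          elif row["event"] == "completed":
              completed[row["object_id"]].add(row["learner_id"])

  # Steep drop-off (many starts, few completions) on an object is a
  # natural prompt for a follow-up focus-group or survey question.
  for obj in sorted(started):
      n_start = len(started[obj])
      n_done = len(completed[obj])
      rate = n_done / n_start if n_start else 0.0
      print(f"{obj}: {n_start} started, {n_done} completed ({rate:.0%})")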

Learning sequences may be tested holistically and over time. Learning objects may be separated out and tested individually.

The beta tests may occur in face-to-face (F2F) contexts, in blended or hybrid contexts, or in wholly online contexts.

If there are insufficient resources (time, money) to conduct beta testing, or if local expertise is not available, it is possible to use online surveys and back-end data to collect information from learners, as in the sketch below. If the learning sequence is instructor-led, the instructor may be asked for feedback about how the designed learning objects are working. Another approach is to bring on subject matter experts (who teach that subject) for their evaluations.
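
For example, back-end assessment data can be summarized to flag quiz items that may point to unclear content. This sketch assumes a hypothetical export (quiz_scores.csv) with learner_id, item_id, and score columns on a 0-100 scale, plus an illustrative review threshold; none of these come from any particular LMS:

  # Summarize hypothetical back-end quiz data as a beta-testing stand-in.
  import csv
  from collections import defaultdict
  from statistics import mean

  scores = defaultdict(list)  # item_id -> list of learner scores

  with open("quiz_scores.csv", newline="") as f:
      for row in csv.DictReader(f):
          scores[row["item_id"]].append(float(row["score"]))

  # Items with unusually low averages are candidates for instructor or
  # subject matter expert review.
  for item, vals in sorted(scores.items()):
      avg = mean(vals)
      flag = "  <- review" if avg < 60 else ""
      print(f"{item}: n={len(vals)}, mean={avg:.1f}{flag}")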


4. After beta testing, what are deal breakers and non-starters for the designed online learning? Why? How should these be addressed?

What a design and development team (and the project funders) see as deal breakers and non-starters will depend on the context.

Some very serious problems arise when learners experience negative learning, are turned off or stymied by the learning design, encounter negative messaging about the creator’s brand, perceive negative social messages, or find the online learning unnavigable.

If online learning has been designed well, with savvy and light testing along the way, there should not be any serious deal breakers.

Optimally, what the team receives will be insights pointing to smaller changes to the learning.


5. How can designers and developers leverage beta testing data for improved instructional design for the current project, but also for improved instructional design in the future?

When data is collected about an online learning design, some of the feedback will come from closed-ended questions, and some will come from open-ended ones. There will be some degree of expected variance in the feedback, and there will also be surprises. Separating the two kinds of responses, as sketched below, makes both easier to work with.
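
A minimal sketch of that separation, assuming a hypothetical survey export (feedback.csv) in which closed-ended Likert items are named q1, q2, and so on, the open-ended item is named comments, and ratings run 1 to 5; the names and scale are illustrative assumptions:

  # Split hypothetical survey feedback into closed- and open-ended parts.
  import csv
  from collections import defaultdict
  from statistics import mean, stdev

  likert = defaultdict(list)  # question id -> numeric ratings (1-5)
  comments = []               # open-ended responses, kept for hand review

  with open("feedback.csv", newline="") as f:
      for row in csv.DictReader(f):
          for key, value in row.items():
              if key == "comments" and value.strip():
                  comments.append(value.strip())
              elif key.startswith("q") and value:
                  likert[key].append(int(value))

  # High variance on an item often marks the "surprises": learners
  # splitting into camps over a design choice worth a follow-up question.
  for q, vals in sorted(likert.items()):
      spread = stdev(vals) if len(vals) > 1 else 0.0
      print(f"{q}: mean={mean(vals):.2f}, sd={spread:.2f}, n={len(vals)}")

  print(f"{len(comments)} open-ended comments to review by hand.")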

The design and development team has to identify what is most salient and constructive and apply those comments to design changes.

Transferring insights from one project to another has strengths and weaknesses, but it helps designers continuously learn how learners learn, how they engage with multimedia, and what their preferences may be for learning particular subject matter with particular technologies. Again, it is important to test assumptions and see whether the learning has actually been efficacious. Learning contents have to be updated over time, and these revisions should be informed by empirical data.

Examples

How To

Possible Pitfalls

Module Post-Test

1. What is “beta testing” in instructional design? What is the role of beta testing as compared to alpha testing? Is beta testing always done with instructional designs? Are there equivalencies to “beta testing” conducted during the design and development of online learning objects, modules, and sequences?

2. How are model learners recruited for beta testing? How representative do they have to be of the target learners for the online learning sequence? How important is diversity in this learner group? Why is it important not to stereotype while trying to find representative learners? How are such learners repaid for their time and insights?

3. What are some ways to test online learning sequences with recruited learners? How should beta tests be designed? Through focus groups? Through surveys? Through learning system event log (or other) data? What are stand-ins for beta testing if beta testing is too expensive to conduct?

4. After beta testing, what are deal breakers and non-starters for the designed online learning? Why? How should these be addressed?

5. How can designers and developers leverage beta testing data for improved instructional design for the current project, but also for improved instructional design in the future?


References

Extra Resources