Peer Reviews of Learning Labels

Thursday, 07 November 2019

The number one suggestion from practitioners for the use and adoption of learning labels is providing a way to verify the expectations. To them, the process is akin to an accreditation or a book review. Many ask: “Who says these learning labels are accurate? How do they conduct a review?”

The learning labels system includes a fully functional process to verify the expectations. Currently this works as a peer review process, though it could be used in other verification processes too. The basic premise is that a reviewer checks each end point on a Skill Map (a graphic layout of the traditional label, ideal for these reviews); an end point could be a skill, a method, or a standard. If the reviewer agrees with the representation, they check a box and optionally leave a comment.
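As a rough illustration, the end-point check could be modeled like this. This is a minimal sketch, not the actual Skill Map implementation; the `EndPoint` class, its fields, and `review_end_point` are all hypothetical names chosen for this example.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical model of one end point on a Skill Map:
# an end point can be a skill, a method, or a standard.
@dataclass
class EndPoint:
    name: str
    kind: str                      # "skill", "method", or "standard"
    agreed: bool = False           # the reviewer's check box
    comment: Optional[str] = None  # optional reviewer comment

def review_end_point(point: EndPoint, agree: bool,
                     comment: Optional[str] = None) -> EndPoint:
    """Record a reviewer's check on a single end point."""
    point.agreed = agree
    point.comment = comment
    return point

checked = review_end_point(EndPoint("data analysis", "skill"),
                           agree=True, comment="Accurately represented")
```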

In addition, a practitioner might use the Skill Map application while reviewing the resource, checking each time a skill is applied and clicking and holding to measure the intensity. This can also be done with a single click and hold for each skill after reviewing the resource (probably more common at first).
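One way to picture the click-and-hold capture: each click marks one application of a skill (its frequency), and the hold duration stands in for intensity. The `SkillTracker` class and the averaging of hold times are assumptions for illustration, not the application's actual recording logic.

```python
from collections import defaultdict

class SkillTracker:
    """Hypothetical recorder for click-and-hold skill observations."""

    def __init__(self):
        # skill name -> list of hold durations (seconds)
        self.holds = defaultdict(list)

    def click_and_hold(self, skill: str, hold_seconds: float) -> None:
        # one click = one observed application of the skill
        self.holds[skill].append(hold_seconds)

    def frequency(self, skill: str) -> int:
        # how many times the skill was observed
        return len(self.holds[skill])

    def intensity(self, skill: str) -> float:
        # assumed: average hold duration as an intensity proxy
        durations = self.holds[skill]
        return sum(durations) / len(durations) if durations else 0.0

tracker = SkillTracker()
tracker.click_and_hold("critical thinking", 2.0)
tracker.click_and_hold("critical thinking", 4.0)
# frequency("critical thinking") -> 2, intensity -> 3.0
```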

The application records all of this data, along with who made the review and their profile information; this can be consulted later if credibility concerns arise. Future iterations might include machine learning or AI support to aid the reviewer, and I think this process lends itself well to those advancements. But let's get back to the human peer review process.

This line-by-line check of skill representations makes sense for the following reasons:

First, a verifier might be interested only in the skills in their discipline. For example, a standards provider might review an experience to ensure it properly applied their standards (and nothing more); they might not have the expertise, time, or interest to verify other skills.

Second, I see a verifier going line by line, checking as skills are applied. They might have the application open while reviewing the resource, and they might agree with only some of the skills represented on the label.

This framework and thoroughness provide a strong basis for making these learning labels accurate. Skill Points® are a next-generation learning measurement. Each variable in the calculation gets verified: the level of difficulty (largely based on standards) and the focus values (based on the collected frequency and intensity); there are also coefficients based on other factors.
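To make the verified inputs concrete, here is one plausible shape the calculation could take. The article names only the inputs (level of difficulty, focus derived from frequency and intensity, and other coefficients); the multiplicative formula and the numbers below are assumptions for illustration, not the actual Skill Points® formula.

```python
def skill_points(difficulty: float, frequency: int, intensity: float,
                 coefficient: float = 1.0) -> float:
    """Hypothetical Skill Points sketch.

    difficulty  - level of difficulty, largely based on standards
    frequency   - how often the skill was observed being applied
    intensity   - measured via click-and-hold during review
    coefficient - stand-in for the other factors the article mentions
    """
    focus = frequency * intensity  # assumed combination of the focus values
    return difficulty * focus * coefficient

# Example with made-up values:
points = skill_points(difficulty=3.0, frequency=2, intensity=1.5)
# -> 9.0
```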

A teacher, professor, or trainer considering a learning resource (book, game, activity, etc.) views its learning label (represented as a Skill Map), which shows how many practitioners agree with the expectations along with the most recent comments (on a skill-by-skill basis). This is especially useful if standards providers participate in the process too.

Contact us for a free consultation to introduce learning labels into your curriculum, onboarding, or training program. If you are interested in partnering, email us at: .