What rating scales can be used to effectively rate technical skills?
Why does the scale matter? Ratings can be inherently tough to measure because they are often based on abstract concepts (e.g. good, better, best…). Accurately defining each skill level reduces confusion for respondents, leads to better decision-making based on the data, and enables more effective processes to be put in place to encourage progress from one level to the next.
When I hear “tech skills inventory” or “assessment” my first thoughts go to that dreaded spreadsheet companies present to new hires to fill out. Typically, the assessment is a list of technology products and services that an individual reviews and gives a rating of something like 1-10 for each item on the list. Totally subjective and almost completely useless.
Many technology platforms are so large and complicated these days that a single rating is, again, not useful. Someone might be very experienced in some parts of these tools and completely inexperienced in others. How do you use a single number/rating to communicate your skill level?
Skill scope/granularity: A topic for another day.
Skill Levels are… Subjective
How does one effectively respond to a skill inventory with a subjective rating system? For example: 1-5, 1-10, “beginner to expert”, whatever the scale. These systems might be useful as a starting point for a follow-up conversation, but not much more than that.
Why do these systems not work? With a quick search, there are a number of examples out there that use ‘average’ as a descriptor in their rating systems. Some folks (trainers, consultants, etc.) might have enough exposure to have a relatively decent idea of what ‘average’ means, but most individuals won’t. Not having that broader view, that sense of “Average”, leaves these scales to individual perceptions and biases – not something that usually leads to accuracy.
For many consumers of these ratings systems, it would be helpful to try a less subjective system.
How about something like the following? Sure, there’s still plenty of room to refine the wording, names, and descriptions. But it’s a step forward from the purely subjective systems being used.
(0) None – Has no knowledge of a topic. May recognize the name of a skill or feature but couldn’t describe it.
(1) Knows About – Is aware of a topic, but doesn’t have practical, usable skill with it yet.
(2) Functional User – Has used the skill, feature, or discipline and can apply it in practical scenarios.
(3) Advanced User – Has used the skill, feature, or discipline extensively and can help others apply it.
(4) Expert User – Is experienced in all aspects of the skill or feature, uses it regularly, and is able to teach or train others to use it.
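If you wanted to capture responses in code rather than a spreadsheet, the scale above maps cleanly to an ordered enumeration. Here’s a minimal sketch in Python (the `SkillLevel` name and the sample inventory are my own illustrations, not part of the proposal itself):

```python
from enum import IntEnum

class SkillLevel(IntEnum):
    """Zero-based skill rating scale, per the levels above."""
    NONE = 0             # No knowledge of the topic
    KNOWS_ABOUT = 1      # Aware of the topic, no practical skill yet
    FUNCTIONAL_USER = 2  # Has used the skill in practical scenarios
    ADVANCED_USER = 3    # Extensive use; can help others apply it
    EXPERT_USER = 4      # Experienced in all aspects; can teach others

# Hypothetical example: a small skills inventory as a mapping
inventory = {
    "SQL": SkillLevel.ADVANCED_USER,
    "Kubernetes": SkillLevel.KNOWS_ABOUT,
}

# IntEnum preserves ordering, so levels compare naturally:
print(inventory["SQL"] > inventory["Kubernetes"])  # True
```

Using `IntEnum` (rather than a bare 1-10 integer column) keeps the defined level names attached to the numbers, so the data can’t drift back into “totally subjective.”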
I know, I know… you’re looking at that numbering system and thinking “a developer came up with this” because it starts with zero. Well, yes and no. I left it at “zero-based” because if you’re at the bottom of the rating scale, you have zero knowledge of that topic, subject, product, etc. It’s something we can discuss.
In my opinion, there’s good clarity between 0, 1, and 2. The distinctions between 2, 3, and 4, though, may still need some clarification.
Questions and Feedback
- I’ve been working independently for a while now; are organizations still doing these “spreadsheet” assessments?
- What have other folks seen out there? What systems work, or at least have positive components?
(I mean, you can tell me about systems that don’t work too, but that’ll just be for the laugh…)
- Does the suggested system make sense?
- Is the system non-subjective enough to make a difference in the value of responses?
- What do or don’t you like about it?
I’d love to hear your feedback in Comments, on Twitter, via email. Whatever works.