Skills Assessment and Behaviourism





This was going to be a short Twitter thread, then it got too long, so I made a blog post instead. I read an opinion piece in the Toronto Star today and I'm concerned. Mostly I'm concerned about the train of thought it represents. The article, "We need to start giving soft skills more credit", is the newest version of similar work around soft/transferable skills that's been around for years, but now with AI.

On its face this seems like a good thing: employers want employees with strong transferable skills, colleges and universities already teach technical skills, and programs are designed so that students pick up transferable skills along the way. My problem is that the discourse is always focused on a behaviourist understanding of people. It presupposes that:

  1. Students must be explicitly taught something to learn it
  2. Evaluation means learning happened

The first assumption is great when you’re talking about technical skills. We can assume that when a student comes to university they don’t know how to do certain things, be that writing strong essays, doing calculus, lab work, etc. But when it comes to transferable skills things are different, because everyone will come into PSE with varying abilities in these skills.

This isn’t math, where you can generally assume that by the end of the first semester everyone will be doing something new and so they’ll all be trying to learn. Transferable skills are something we pick up from everything we do, from experiences.

How do we teach transferable skills then? We teach them through authentic experience. Students learn collaboration, teamwork, or leadership by doing those things as they work with others toward a goal. That's why we call them transferable: they are learned in situation A and are useful in situations B, C, and D, and when students use the skills from situation A in situations B, C, and D, they strengthen them by going through the new experience.

Finally, assuming that students must be taught something to learn it takes away their agency. People are not some blank slate on which the professor writes. They have skills, experiences, and talents that will all come to bear on the new experiences and learnings.

Let's move to #2. The article says: "When a student graduates now, there is no formal record of them ever acquiring these sought-after skills. Without a measure to quantify the amount of those skills acquired, labour markets cannot gauge the supply of these skills from academic institutions, so the market for these skills remains opaque."

They aren't focusing on ensuring PSI program outcomes are specifically teaching the skills in either an explicit or holistic manner. Instead the focus is on how we assess students' skills. They're saying that if we don't measure it, it doesn't exist.

Assessment, as you may be aware, is something I love. But, and I can’t stress this enough, the assessment needs to actually determine what’s happening.

Skills can be assessed in a few ways. The current method most post-secondaries use is that they build skills into the outcomes and assume that if a student went through an experience they gained skills. This is the concept behind the co-curricular record. That’s not what the authors are recommending because it’s not explicit enough about the specific skills. Instead this type of assessment is done through observation based on specific outcomes.

So if you're assessing transferable skills, you need to build an observational assessment to be used by someone overseeing the students. Easy enough. But wait, these are transferable skills and, as I said already, everyone already has them to some degree, so students arrive with wildly different levels.

Ok, so let's add a pre-assessment and a post-assessment so we can track how the student improved during the program. But these are transferable skills, and the reason they're called soft skills is that so much depends on the perception of the observer, which means bias comes into play far more than it does with assessment of technical skills. We know how to solve that problem too: more assessors. So now you have multiple assessors at at least two points, before and after, and for better accuracy you may want assessment during the program as well. The end result is a student having their transferable skills formally assessed through observation multiple times over a set period, with those recordings combined to determine their beginning and ending skill levels, producing the quantified data for the "formal record" that students can then give to employers.

If you were in the workplace, would you be willing to go through that type of assessment of your transferable skills? It seems dehumanizing to me. John Warner talked about this the other week:

"For me, an easy way to judge whether or not something violates an ethic of care for students is whether or not I would agree to be subjected to the same requirement as a condition of doing my work."

As academics push more and more for authentic assessment, it seems like a huge step backward that the proposed method for assessing an emerging focus and political agenda (transferable skills education) is qualitative behavioural assessment.

That type of assessment does two things: it creates a specific hierarchy where the student is the experimental subject and the instructor is the researcher, and it conflates measurability with evidence of learning.

Are students being measured against a standard? Good. But what if the student has already achieved that standard? Why are they there? Are they measured based on improvement? Then the student who improves 100% but is still below the level an employer wants appears to be doing better than the one who improves 10% but was already at the level employers are looking for. Does the assessment actually mean anything? Is it happening in an authentic situation? Is the student engaged in the improvement or is it something done to them? What level of agency do students have?

There’s a different way, and it requires using the Constructivist or Humanist educational paradigms instead of Behaviourist. Both of these take into account the context the student is within. The goal is to not only allow for but to encourage student agency.

A plan for the teaching and assessing of transferable skills

  1. The PSE program must have explicit outcomes that include both technical and transferable skills
  2. The PSE program units (i.e., courses) have explicit outcomes, tied to the program-level outcomes, that include both technical and transferable skills
  3. Early in the program students are taught about the technical and transferable skills they are expected to have on graduation, and what that means
  4. The program units include experiences that are designed to allow the development of the specific transferable skills in addition to technical skills
  5. Students are provided with opportunities for intentional reflection at multiple points through the program to self-assess their transferable skills and determine what they feel they need to focus on in the future (formative assessment)
  6. Students are provided with an opportunity at the end of the program to express the technical and transferable skills they have gained through the program and their prior experiences and how they gained them, as well as what they intend to do in the future to improve them (summative assessment)

The outcome of this method is not some sort of transferable (soft) skills transcript that lists how good someone is at a skill, but instead is a person who has the skills, knows they have the skills, and can explain to someone else that they have them and how they gained them. Instead of removing student agency, we design the integration of transferable skills into a program in a way that centers student agency.

Thoughts? Leave a comment below.

