Don't tick the wrong box
Assessment systems continue to be the most challenging area of an RTO's operations, and yet the most critical to demonstrating quality outcomes. When dealing with assessment in Australia's VET environment, we need to consider both the assessment system used by the training organisation and the outcomes produced by that system. Assessment evidence is collected and used to make a competency judgement against the relevant unit(s) of competency.
I would like to use this article to reflect on assessment evidence, and in particular on the evidence used to support decisions about completed tasks and the demonstration of skills.
Quite often in my work as an auditor, I see "observation checklists" that are little more than tick boxes next to text copied and pasted from the unit of competency's performance criteria.
Assessment activities used to produce evidence of a candidate's skills will always require a task to be completed under the conditions and standards relevant to the unit of competency's elements and performance criteria, and will give candidates an opportunity to demonstrate the skills required to perform that task. Knowing is not the same as doing, and VET is about doing. That is a fundamental principle of assessment design, but as I mentioned above, I will focus here on the evidence produced, not so much on the task itself.
Do we have rules for accepting assessment evidence in Australia's VET sector? Yes. The rules of evidence are validity, authenticity, sufficiency and currency, and these rules must guide assessors during the collection of evidence.
Ok, let's start with validity. What is considered "valid" evidence? According to the Standards for RTOs, evidence used to make a competency judgement must confirm "...that the learner has the skills, knowledge and attributes as described in the module or unit of competency and associated assessment requirements." In other words, the assessment evidence collected must confirm the candidate's ability (performance evidence and knowledge evidence) to achieve each outcome (element) described in the unit of competency, under each condition/standard (performance criteria).
How can we prove that an outcome has been achieved? The evidence must provide details about what was achieved, when it was achieved, and in what context. A tick in a box will not provide that information.
Some assessors think they can simply tick candidates off as competent based on their "professional judgement", and on occasion they feel insulted when the evidence used to make that judgement is requested. Quite often I hear: "I used my criteria from 20 years of working experience." To be very clear, I am not questioning an assessor's industry experience. I celebrate it. But competency-based assessment is an evidence-based system. In other words, the judgement is made on the basis of the evidence collected.
When someone performs a task, the result is either a product or a service. If a product or sub-product is produced, the product itself constitutes valid evidence that the assessor can then assess against a benchmark (the unit). Assessing a product means comparing its characteristics, features and use to the outcomes described in the relevant element/PC of the unit. Assessors can use records of the product's characteristics: if the product is an object, these could be physical characteristics (length, size, weight, height, resistance, conductivity, etc.); if the product is something more intangible, such as a plan, recordable characteristics could include content, relevance of the information provided, usability, veracity of instructions, and feasibility of projections/forecasts.
If the task is a service (delivered to internal or external clients), records of the service provided will constitute valid evidence. For example, if the service is to resolve a customer complaint, evidence could include records of the complaint resolution, feedback from the client, photos, videos and records of the observation of the candidate dealing with the client (details of the protocol/procedures followed, techniques used, skills demonstrated, etc.).
The quality, quantity and relevance of the evidence collected must support the assessor's judgement. In general terms, learning in the vocational education and training context means a consistent change in the candidate's attitudes. In other words, the candidate is able to use the new skills and knowledge consistently, and apply them in different contexts and situations.
The above means that evidence must demonstrate that the candidate has performed the task(s) more than once. In some cases, the unit of competency specifies a minimum number of occasions on which a task must be performed. Where it does not, RTOs should use industry engagement activities to determine a benchmark for sufficient evidence, in line with industry standards. This is the requirement under the rule of sufficiency.
Assessment evidence constitutes a legal document and, as such, the authenticity of the evidence is paramount. How can we prove that the evidence presented was either produced by the candidate or speaks about the candidate? What measures are we using to demonstrate authenticity? In VET, there are three types of evidence we can use: direct, indirect and supplementary.
When collecting direct evidence, it is important that the candidate's identity is confirmed, that the assessor personally observed the task being completed or conducted the oral questioning, and that details of the event are recorded (i.e. date, time, location, duration).
Tools used to produce indirect evidence, such as finished products, written assignments, tests, or portfolios of evidence from a project, must include measures to confirm authenticity. These could include photographic or video evidence, or further questioning by the assessor about the procedure(s) used to complete the task and how that procedure would be adapted if the situation/context were different. Many RTOs also require a "declaration of own work" from the candidate.
Supplementary evidence produced by third parties, such as supervisors, colleagues or clients, can present a challenge, as this evidence is usually produced in the workplace. Measures to prove authenticity could include contacting referees to confirm the claims made in third-party reports, or providing an opportunity for the assessor to visit the workplace for further observations and interviews.
Finally, evidence collected must meet the rule of currency. This can be particularly challenging in an RPL assessment. Assessment evidence must show that the candidate demonstrated the relevant skills and knowledge at the time the competency judgement was made, or in the very recent past. What constitutes the "very recent past"? In some cases the unit of competency provides information about currency; where it does not, RTOs should use industry engagement activities to establish a criterion for currency, in line with industry standards. In general terms, evidence produced, or relating to events that occurred, more than two years before the assessment judgement is potentially not current (although in some industries older evidence may be accepted).
The bottom line here is that an assessment judgement must be made after the assessment evidence has been collected and compared against the unit requirements.
The assessment evidence recorded (the facts of the case) demonstrates that the learner has the skills, knowledge and attributes described in the unit, and represents the legal proof of the competent/not yet competent judgement that will be available for eventual processes such as appeals, validations, audits or reviews. This evidence must meet the rules of evidence; otherwise, the RTO will be in breach of Clause 1.8 of the Standards for RTOs.