Performance Solution Provider

Welcome to our Blog

Latest Posts

in Training Evaluation

Is VET Trapped in The Capabilities Vs Performance Issue?

2 November 2017

Frequently, I encounter VET practitioners whose actions and comments indicate an assumption that building capability and enhancing performance are the same thing.

Learning alone will not yield performance results. No business or performance measure improves because of what people know; these measures improve because of what people do with what they know. VET practitioners have no control over what our students do with what they learn, and very little is done to measure performance results.

What Is the Difference Between Capability and Performance?
Enhancing capability or skill is a learning outcome. It means that people have the capability to perform in some manner. It does not mean that they will.
A performance outcome occurs when people take what they know and turn it into what they do on the job. And, of course, making the conversion from learning to doing requires a work environment that supports the capability that was developed.

Engaging industry stakeholders in the planning of our Training and Assessment Strategies will help individuals and organisations to use the capabilities we develop in the VET sector to improve performance.

A good process to work through with industry stakeholders is to review the "skill... will... hill" process, and to collaborate on better training evaluations.

People develop skills, but then need both the will (motivation) to apply that skill and the ability to overcome any hill (obstacle) in the work environment that could impede application. Only then can performance result from the capability that has been developed. For this to happen, we need more and better collaboration between RTOs, SSOs and industry.

We know that performance is what people do on the job. We also know that, too frequently, people acquire capability that they never use on the job. Yet VET training is expected to yield results. Training Package Developers play an important role here.

As VET professionals, we need to make performance—and not just learning—our business. And we can do that in two ways:

  1. We keep clear in our minds the difference between skill and performance. Training Packages are Occupational Standards and should focus on outcomes and performance.
  2. We view the building of capability as a means to the end, not the end. Our end goal is to enhance on-the-job performance that benefits the organisation. Industry engagement will provide information about how the work environment will support skills we plan to develop. We need to partner with industry who can work with us to ensure skills will transfer to the workplace.

in Learning Design

Can VET Match Micro-learning Solutions?

31 October 2017

Using Skill Sets to meet industry needs.

Vocational Education and Training must provide solutions and support individuals and industry in Vocational Preparation and Vocational Development (Continuous Professional Development).

Although our VET system is a leader in Vocational Preparation, mainly because of government funding conditions, RTOs are losing opportunities in Vocational Development programs.

Non-accredited training programs are providing an incredible range of learning opportunities to support our workforce with professional development. These programs are presented in different formats, from online platforms and symposiums, to summits and conferences. And, importantly, these micro-learning options are meeting current industry needs.

To compete in the corporate training and development world, RTOs should look at these opportunities and use micro-learning techniques to meet that demand.

The flexibility of training packages that allows for the delivery of stand-alone units and skill sets is not recognised in government-funded programs, which today account for more than 70 per cent of all VET training delivered in Australia.

Rapid changes in industry processes and technological advances, together with the definitive adoption of robotics in the workplace, have created a growing need for continuous development of skill sets.

The Australian government should update funding programs to include skill sets and stand-alone units, as this is the easiest way to measure the return on investment in these training programs.

I started looking at international trends in micro-learning and discovered some interesting statistics. According to the Association for Talent Development (ATD), 92 per cent of organisations (worldwide) are using micro-learning plans, and over 67 per cent of organisations not yet using micro-learning are planning to start. For RTOs to develop industry-relevant training products, we should take note of these statistics.

Micro-learning techniques have three primary benefits and this is why organisations are considering these options:

  • Micro-learning is cheaper and faster. Materials take less time to source, produce, maintain and consume than full qualifications. This enables re-use and re-packaging of micro-learning programs. It also allows trainers to focus on quality without sacrificing the amount of training, because irrelevant skills are not included in the program.
  • People are more engaged. Employees today devote 1 per cent of their time to learning (roughly 24 minutes a week), check their phones 150 times a day, and switch tabs every minute. Micro-learning fits perfectly into this continuous diet of email, Slack, and social media.
  • People learn more. Though there are many factors that drive effective learning, managing cognitive load is one of the most important. The problem with typical learning experiences like lectures or long e-learning videos is that they present too many things at once for too long a period of time.

These are real benefits, but they don't necessarily translate to improved performance on their own. Through industry consultation we discovered that timing plays an important part, and the key is to have a training solution to solve current problems.

One of the most difficult and least scalable things organisations must do is motivate their employees, and learning requires a lot of sustained motivation. Compliance training is a good example.

But how can we identify the right time when our participants' motivation is high?

There are reliable triggers that open up motivational windows in which individuals are willing, even excited, to learn. These windows can last from a few months (Think: when someone is given a new role or responsibility), to a few weeks (Think: when someone has a big deadline or presentation coming up), to a few minutes (Think: when someone is walking into a big meeting for which they're not fully prepared).

In today's competitive environment, RTOs are required not only to set Learning Objectives that describe what participants will be able to do at the end of the training, but also Application Objectives that determine how and when those skills and knowledge can be used and applied, so that participants are attracted at the right time, when motivation is high.

Learning experiences presented to learners at the wrong time will produce little or zero results, and the margin for error is very slim.

Continuous review of our VET sector, Training Packages and funding arrangements is required, and our Nationally Recognised Training System should be adapted to meet emerging needs in vocational education and adult learning trends. This new generation of micro-learning solutions is certainly making an impact.

in HR Management

Three Reasons Why Compliance Training Fails

31 October 2017

We are in the training industry, yet many training programs, including some formal training programs, fail to have a positive effect on our RTO's performance.

In this article, I will analyse the top three reasons why RTO compliance training fails.

Lack of Alignment with RTO's Needs
The payoff from a training program comes from the business measures that drive it. Simply put, if a training program is not aligned or connected to a business measure, no improvement can be linked to the program. Too often, training is implemented for the wrong reasons: a trend, a regulatory requirement, or a perceived need that may not be connected to any RTO measure.

Initial training needs may be linked to the objectives and evaluation by using a consistent four-level concept:

  1. Reaction (How we want students to perceive the program and its outcomes)
  2. Learning (What new skills and knowledge we want students to learn)
  3. Application (How we want students to use the new skills)
  4. Impact (What RTO performance metrics we want to change)
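
The four-level chain above can be sketched as a simple data structure. The following is a hypothetical illustration only (the program and objective names are invented, not drawn from any RTO system): a small check that flags which evaluation levels a program has left undefined.

```python
# Hypothetical sketch: represent a training program's objectives by
# evaluation level (1=Reaction, 2=Learning, 3=Application, 4=Impact)
# and flag any levels for which no objective has been defined.

LEVELS = {1: "Reaction", 2: "Learning", 3: "Application", 4: "Impact"}

def missing_levels(objectives):
    """Return the names of evaluation levels with no objective defined."""
    defined = {level for level, _ in objectives}
    return [name for level, name in LEVELS.items() if level not in defined]

# Example: a PD program with reaction and learning objectives only.
pd_program = [
    (1, "Trainers rate the session as relevant to their role"),
    (2, "Trainers can describe the rules of evidence"),
]

print(missing_levels(pd_program))  # -> ['Application', 'Impact']
```

A program that prints `['Application', 'Impact']` here is exactly the kind described next: capability may be built, but no business connection exists to produce results.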

Without the business connection at Level 4, the program will have difficulty achieving any results.

One major RTO faced this problem directly as it reviewed its Trainers' Professional Development Plan. Several PD sessions were conducted to further develop trainers' skills and knowledge to assess students. The PD sessions were not connected to any RTO performance metric, such as number of non-compliances in clause 1.8, number of rectifications identified in validations, etc. The PD sessions were also not connected to the RTO's operations and participants couldn't use procedural skills back on the job, and therefore, the RTO didn't improve assessment practices.

Failure to Recognise Non-Training Solutions
If the wrong solution is implemented, little or no payoff will result. Too often, training is perceived as a solution for a variety of performance problems when training may not be an issue at all.

A recent evaluation of a community college illustrated this problem. Through its training program, the college attempted to prepare career counsellors to advise potential students about training products. The college's problem was that a significant number of students were enrolling in inappropriate courses, and the training produced little change in that outcome.

An impact study subsequently revealed that the culprit was the enrolment procedure that accepted enrolments prior to potential students' interviews with career advisers. When probed for a reason for the poor results, the college realised that unless its enrolment procedure changed to provide time for career advisers to interview potential students prior to enrolments being accepted, the results would not change.

Attempting to solve job performance issues with training will not work when factors such as systems, job design and motivation are the real issues. To overcome this problem, staff training must focus on methods to analyse performance rather than conduct traditional training needs assessments – a major shift in performance improvement that has been developing for many years.

Up-front analysis should be elevated from needs assessment, which is based on skills and knowledge deficiencies, to a process that begins with business needs and works through the learning needs.

Lack of Specific Direction and Focus
Training should be a focused process that allows stakeholders to concentrate on desired results. Training objectives should be developed at higher Kirkpatrick levels than traditional learning objectives. These objectives correspond with six measures that lead to a balanced approach to evaluating the success of training. Most training programs should contain objectives at multiple levels, ideally including those at Levels 3 and 4.

An RTO's internal training is often decided without consulting all stakeholders. What are the RTO's performance needs for the CEO, the Marketing Manager, the Training Manager, the Quality and Compliance Manager? When developed properly, and in consultation with all relevant stakeholders, these objectives provide important direction and focus.

Training designers and developers must focus on application and effect, not just learning. Facilitators need detailed objectives to prepare individuals for the ultimate outcomes of the learning experience: job performance change.

Participants need the direction provided by Level 3 and 4 objectives to clearly see how the training program's outcome will actually help the RTO.

Not all programs will need to undergo such detailed up-front analysis, but it is a critical issue that needs more attention, particularly when training is expected to have an effect on the RTO's performance.

in Learning Design

VET Is Not About Content

17 September 2017

Too many trainers still stand behind a podium, relying on content to drive learning. It's time for those trainers and instructional designers working in the Vocational Education sector to realise that content is not what drives learning in VET.

We don't teach content, we teach people. We teach people to achieve outcomes, to perform a job under industry standards.

Some factors to consider

Information overload. Students can watch speakers, read information sheets and research content at any time. Students need the interaction, the engagement and the experience.

The internet provides access to infographics, case studies, blogs, podcasts, videos and tweets about almost anything. As VET practitioners, we can make good use of them, but these publicly available resources will not, on their own, make a relevant learning experience for our learners.

We live in the Information Age—and there's too much of it! For example, according to some estimates published by the Association for Talent Development (ATD), there are more than 120,000 books and texts on leadership development, with 3,000 more being published each year. We don't have a content problem; we have a filter problem. We must filter that content through the context of whom we're trying to connect with and teach.

Content is what we're pouring into people. Context is everything that makes those people unique. It's why they're doing the training: the conditions where they will be applying their learning, the expectations of their clients and workplace. It's their age, interests, attention span, engagement level and beliefs.

People learn in the silence. We learn in the pauses, reflection and meditation. Don't you have your best ideas when meditating, in the shower, while driving, or when falling asleep? We learn in the spaces in between life. We can't deliver lectures to learners anymore; that's not how people learn.

Content is only one part of the equation. VET programs should always be based around the learn-say-do-reflect model. It's about providing an experience. We can't teach someone to ride a bike, drive a car, or use new technology without putting them on the bike, in the car, or the device in their hands.

Attention-span deficit. We live in the digital era, where our mind switches on and off every 5 to 20 minutes. The average song you listen to is about three to four minutes. The average watching time of a YouTube video is three to five minutes. Any scene in a movie runs between a quick moment and no more than 15 minutes before switching to a new scene. It takes no more than 15 to 20 minutes to read any article in any paper. TED Talks are 18 minutes. Stories in the news last no more than a few minutes, unless they are documentaries.

We can't lecture or speak to learners (of any age) for more than 15 to 20 minutes at a time. Their attention will be gone after that. People start wondering what's next. They check their smartphones. They look at the clock.

Students need more space. Spaced learning is about engagement, conversations and one-to-one interaction. It's about exercises, simulations, demonstrations, and students teaching students. Spaced learning is about reflection, giving participants time during the session to turn their insights into actions.

After a training course, people go back to their lives, their desks, their email and texts, or the next most important thing on the list, but not to reflect.

VET programs must provide the framework to support students' learning post-training. We need to provide action plans, explain exactly what they need to do immediately to get to the next level, and show how to progress. In other words, follow through on the promises made with the learning objectives.

VET is not about content because we don't teach content, we teach people. We facilitate learning experiences. That's what we do.

in Quality and Compliance

Trainer’s upgrade: a cost or a solution?

13 September 2017

Since the Assistant Minister for Vocational Education and Skills, the Hon Karen Andrews MP, announced the most recent amendment affecting the requirements for trainers and assessors working in VET, many RTO managers and trainers have regarded it as just another compulsory course to be recorded as an operational expense.

Like other compliance requirements, many people in this sector believe there has been limited analysis regarding how this training will affect an RTO. Yes, it means the RTO will comply with the standards, but will the added cost solve any problems?

If this training doesn't have a positive effect on stakeholders' performance and results, it is not a solution. In this article, I would like to analyse the desired and potential effects of this TAE upgrade for RTOs and trainers.

Firstly, let's clarify the requirement. Under the updated Standards for RTOs, trainers and assessors using the TAE40110 Certificate IV in Training and Assessment as their teaching credentials must hold the following two units before 1 April 2019:

  • TAEASS502 Design and develop assessment tools, and
  • TAELLN411 Address adult language, literacy and numeracy skills.

Why are trainers required to further increase skills in developing assessment tools and addressing adult LLN skills?
According to statistics published by ASQA, approximately 75 per cent of RTOs fail to demonstrate compliance with assessment practice requirements and with matching students' LLN skills to course entry LLN levels.

Is there a performance issue?
Yes, there is a clear performance issue with assessment practices. Assessment systems used by RTOs are not meeting training package requirements or the principles of assessment, and do not produce sufficient, valid, authentic and current evidence.

The second issue is related to students being enrolled into courses without determining whether entry LLN skill levels have been met.

What is happening or not happening?
Based on my experience as an auditor, I have identified five critical factors that affect RTOs' assessment practices:

  1. Units of competency are not unpacked effectively.
  2. Assessment evidence is not analysed correctly.
  3. Assessment collection methods, tasks and evidence are poorly mapped to the unit of competency requirements.
  4. Adequate instructions are not given to assessors on how to administer assessment tools and interpret assessment evidence.
  5. Assessment tasks are administered inconsistently.

Can these issues be solved with training?
We can only solve problems with training if there is a gap in skills. And yes, trainers and assessors currently working in VET have significant gaps in skills and knowledge, particularly those required to:

  1. Interpret units of competency.
  2. Develop effective assessment tools (instructions and tasks) to collect evidence against the requirements of units of competency.
  3. Implement assessment practices in line with the Principles of Assessment, and
  4. Collect assessment evidence that meets the relevant unit of competency requirements and the Rules of Evidence.

But performance issues go beyond an RTO's assessment practices. They relate not only to gaps in trainers' skills, but also to a lack of support and of effective quality assurance systems, which play an important role.

Is TAEASS502 Design and develop assessment tools the solution?
It could be, but it won't be if we continue to do the same as we have been doing with the previous upgrades, BSZ to TAA and TAA to TAE.

Let's start with the outcomes included in the unit. TAEASS502 elements are:

  • Determine the focus of the assessment tool
  • Design the assessment tool
  • Develop the assessment tool, and
  • Review and trial the assessment tool.

This unit is relevant to four out of the five performance issues listed above, and will provide trainers with the opportunity to develop at least the first two sets of skills listed in the skills gap.

When the training solution is designed, developed, delivered and assessed, the impact objectives must be considered. In other words, this course must be adopted not only as the training to meet the new requirement under clause 1.14 (trainers' credentials), but as the training solution that will support the RTO to meet the requirements under clause 1.8 (assessment practices).

Considering the structure of the Standards for RTOs, being non-compliant with clause 1.8 will also produce non-compliances with clauses 1.4, 1.12, 2.1, 3.1 and 8.4. Furthermore, this course should also have a positive effect on the compliance status of clauses 1.9, 1.10 (validations) and 1.16 (trainers' relevant PD).

In summary, a Statement of Attainment with the TAEASS502 unit can give the RTO a tick against clause 1.14, but the real benefit, and return on investment, will only happen if trainers develop the skills required to perform the necessary tasks to meet requirements under clauses 1.4, 1.8, 1.9, 1.10, 1.12, 2.1, 3.1 and 8.4.

If we compare the cost of the course with the benefits of maintaining compliance with clauses 1.4, 1.8, 1.9, 1.10, 1.12, 2.1, 3.1 and 8.4, the potential positive return on investment is evident. RTOs should therefore see this course as an investment, not a cost: an investment that will produce real, tangible benefits far greater than the investment itself. I would suggest that RTOs measure this benefit.
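
As a rough illustration of measuring that benefit, return on investment is conventionally calculated as net benefits divided by costs. The dollar figures below are entirely invented for illustration; each RTO would substitute its own measured costs and benefits.

```python
# Hypothetical sketch of a simple ROI calculation for the TAE upgrade.
# All dollar figures are invented for illustration only.

def roi_percent(benefits, costs):
    """Return ROI as a percentage: net benefits over costs."""
    return (benefits - costs) / costs * 100

course_cost = 1_500          # e.g. course fees plus trainer release time
measured_benefits = 6_000    # e.g. avoided rectification and consultant costs

print(f"ROI: {roi_percent(measured_benefits, course_cost):.0f}%")  # -> ROI: 300%
```

An ROI above zero means the measured benefits exceeded the cost of the course; a negative figure means the course was, in practice, a cost.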

Obviously, if the course doesn't produce a positive effect on operations, the investment will become a cost. This means it is critical that RTOs discuss with the training provider the desired application and impact objectives for the course.

RTOs will need to ensure trainers and assessors will have the opportunity and the support to apply the skills learnt. This may require a change to current practices. For example, trainers should be more involved in designing, developing and reviewing assessment tools, and validation processes may need to be strengthened so they have a greater effect as quality review and control processes.

How can we measure the application of the skills? What data needs to be collected?
There are some points that need to be considered here:

  • What new knowledge will be applied?
  • What new tasks will be performed? What new steps?
  • What new procedures and processes will be implemented or changed?
  • What new guidelines will be implemented or changed?

The answers to the above questions will help us to determine what data will be collected.

For a standard RTO, new tasks could include: interpreting and unpacking units of competency, analysing the assessment evidence required, considering learners' needs during the design of assessment tools, considering the rules of evidence during the design of the evidence collection plan, or reviewing mapping documentation. These tasks will have an effect on the processes of designing, developing and using assessment tools, and for this reason, RTOs must review and update the procedures and guidelines already in place to support the application of the new skills.

The reason to measure application is not only to confirm the success of the training, but also to drive continuous improvement. Analysis of the application data should reveal whether the skills were enabled or whether there were barriers. The RTO can use this information to overcome barriers and to explore ways to maximise the positive effect on assessment practices.

How can I measure the effect of the application of the new skills?
At this level, the RTO wants to measure the effect on assessment practice outputs, quality, cost and time.

With regard to outputs, the RTO could measure an increase in the number of assessment tools developed (whether developed completely in-house or by customising commercially available products), or an increase in the number of assessment validations completed. With regard to quality, the RTO should measure the number of rectifications identified in validations and internal audits, and the number of non-compliances identified by ASQA. Costs can be tracked by measuring the reduction in costs associated with engaging external consultants to develop assessments, with rectifying assessment tools, and/or with rectifying assessment evidence collected. Finally, the RTO can measure, for example, a reduction in the time required to develop or modify assessment tools.

The opportunity is there, and whether this upgrade has a positive effect on our VET sector will depend on the approach taken by RTOs and trainers.

in Assessment Practices

Collecting relevant assessment evidence

8 August 2017

Don't tick the wrong box

Assessment systems continue to be the most challenging area in an RTO's operations and yet the most critical to demonstrate quality outcomes. When dealing with assessments in Australia's VET environment, we need to consider both the assessment system used by the training organisation, and the outcomes produced by that system. The "assessment evidence" is collected and used to make a competency judgement against the unit(s) of competency.

I would like to use this article to reflect on "assessment evidence", particularly the evidence used to support decisions about completed tasks and the demonstration of skills.

Quite often in my work as an auditor, I see "Observation checklists" based on tick boxes next to text copied/pasted from the unit of competency's performance criteria.

Assessment activities used to produce evidence of a candidate's skills will always require a task to be completed under the conditions and standards relevant to the unit of competency's elements and performance criteria, and will provide candidates with an opportunity to demonstrate the skills required to perform that task. Knowing is not the same as doing, and VET is about doing. That is a fundamental principle for the design of the assessment, but as I mentioned above, I will focus here on the evidence produced, not so much on the task itself.

Do we have rules for accepting assessment evidence in Australia's VET sector? Yes: the rules of evidence are validity, authenticity, sufficiency and currency, and these rules must guide assessors during the collection of evidence.

Ok, let's start with Validity. What is considered as "Valid" evidence? According to the Standards for RTOs, evidence used to make a competency judgement must confirm "...that the learner has the skills, knowledge and attributes as described in the module or unit of competency and associated assessment requirements." In other words, the assessment evidence collected confirms the candidate's ability (performance evidence and knowledge evidence) to achieve each outcome (Element) described in the unit of competency, under each condition/standard (Performance Criteria).

How can we prove that an outcome has been achieved? The evidence must provide details about what was achieved, when it was achieved, and in which context. A tick in a box will not provide that information.

Some assessors think they can simply tick candidates off as competent based on their "professional judgement", and on occasion they have felt insulted when the evidence used to make the judgement was requested. Quite often I hear: "I used my criteria from 20 years of working experience." To be very clear, I am not questioning an assessor's industry experience; I celebrate it. But competency-based assessment is an evidence-based system. In other words, the judgement is made based on the evidence collected.

When someone performs a task, the result is either a product or a service. If a product or sub-product is produced, the product itself will constitute valid evidence that the assessor can then assess against a benchmark (the unit). Assessing products requires comparing the product's characteristics, features and use to the outcomes described in the relevant element and performance criteria of the unit. Assessors can use records of the product's characteristics: if the product is an object, details of physical characteristics (length, size, weight, height, resistance, conductivity, etc.); if the product is something more intangible, such as a plan, recordable characteristics could include content, relevance of the information provided, usability, veracity of instructions, and feasibility of projections or forecasts.

If the task is a service (delivered to internal or external clients), records of the service provided will constitute valid evidence. For example, if the service is to resolve a customer complaint, evidence could include records of the complaint resolution, feedback from the client, photos, videos and records of the observation of the candidate dealing with the client (details of the protocol/procedures followed, techniques used, skills demonstrated, etc.).

The quality, quantity and relevance of the evidence collected must support the assessor's judgement. In general terms, learning in the vocational education and training spectrum means a consistent change in the candidate's attitudes. In other words, the candidate is able to use the new skills and knowledge consistently, and apply them in different contexts and situations.

The above means that evidence must demonstrate that the candidate has performed the task(s) more than once. In some cases, the unit of competency specifies a minimum number of occasions on which a task must be performed. Where it does not, RTOs should use industry engagement activities to determine a benchmark for sufficient evidence, in line with industry standards. This is the requirement under the rule of sufficiency.

Assessment evidence constitutes a legal document and, as such, the authenticity of the evidence is paramount. How can we prove that the evidence presented was either produced by the candidate, or is about the candidate? What measures are we using to demonstrate authenticity? In VET, there are three types of evidence we can use: direct, indirect or supplementary.

When collecting direct evidence, it is important that the identity of the candidate is confirmed, that the assessor observes or witnesses the task being completed (or conducts oral questioning), and that details of the event are recorded (i.e. date, time, location, duration).

Tools used to produce indirect evidence, such as finished products, written assignments, tests, or portfolio of evidence from a project, must include measures to confirm authenticity. This could include photographic or video evidence, further questioning from the assessor about the procedure(s) used to complete the task and how that procedure would be adapted if the situation/context was different. Many RTOs use a "declaration of own work" by the candidate as well.

Supplementary evidence produced by third parties such as supervisors, colleagues, or clients, can represent a challenge. This evidence is usually produced in the workplace. Measures to prove authenticity could include using referees to confirm the claims made in the third-party reports, or providing an opportunity for the assessor to visit the workplace for further observations/interviews.

Finally, evidence collected must meet the rule of currency. This may be particularly challenging in an RPL assessment. Assessment evidence must prove that the candidate demonstrated the relevant skills and knowledge at the time the competency judgement was made, or in the very recent past. What constitutes the "very recent past"? In some cases, the unit of competency provides information about currency; if no information is provided in the unit, RTOs should use industry engagement activities to establish a criterion for currency, in line with industry standards. In general terms, evidence collected about something that happened more than two years prior to the assessment judgement is potentially not current (although in some industries older evidence may be accepted).
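
The two-year rule of thumb can be expressed as a simple check. Note that the cut-off here is an illustrative assumption, not a fixed standard: the RTO would set its own benchmark through industry engagement, as described above.

```python
# Hypothetical sketch: flag evidence that may not meet the rule of currency.
# The two-year cut-off is an illustrative default only; an RTO should set
# its own benchmark through industry engagement.
from datetime import date

def is_potentially_current(evidence_date, judgement_date, max_years=2):
    """Return True if the evidence falls within the currency window."""
    cutoff = judgement_date.replace(year=judgement_date.year - max_years)
    return evidence_date >= cutoff

print(is_potentially_current(date(2016, 6, 1), date(2017, 8, 8)))  # -> True
print(is_potentially_current(date(2014, 6, 1), date(2017, 8, 8)))  # -> False
```

Evidence flagged as outside the window is not automatically invalid; it simply prompts the assessor to seek more recent confirmation or apply an industry-agreed exception.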

The bottom line is that an assessment judgement must be made only after the assessment evidence has been collected and compared against the unit requirements.

The assessment evidence recorded (the facts of the case) demonstrates that the learner has the skills, knowledge and attributes described in the unit, and represents the legal proof of the competent/not yet competent judgement that will be available for eventual procedures such as appeals, validations, audits or reviews. This evidence must meet the rules of evidence; otherwise, the RTO will be in breach of Clause 1.8 of the Standards for RTOs.

in Quality and Compliance

Why ASQA’s new audit model should help improve VET

20 March 2017

ASQA went through a significant transformation of its regulatory approach during 2016, and it is fair to say that the changes are promising.

The dimensions of the VFH debacle embarrassed the whole system, and certainly opened a debate about the return on the regulator's performance during its first five years of existence. So far, ASQA hasn't been the expected catalyst for the VET sector.

To be fair to the regulator's team, the structure of the VET sector has not supported ASQA in achieving its mandate.

Perhaps the first mistake was to assume the market could sort out product quality issues, but the reality has shown that the market hasn't been able to decipher a training sector where both "good" and "dodgy" providers operate under the same label of Nationally Recognised Training (NRT).

At the age of five, ASQA now knows our sector, and its current approach seems to be strongly grounded in that knowledge.

I refuse to call our system a failure; that would be an untruthful and unfair diagnosis, as many RTOs deliver high-quality work. The vocational education and training sector generates hundreds of thousands of good stories every year: stories about individuals developing talent and opening the doors to employment and social inclusion for many.

What are the system's issues then? Dysfunction. A couple of years ago, a Minister referred to the VET sector as "convoluted and dysfunctional", a very accurate diagnosis.

Vocational Education and Training requires:

  1. Funding
  2. Industry relevant training
  3. Educational pathways for learners
  4. Regulation of RTOs
  5. Evaluation.

There is no strategic vision for the VET sector, no connection among the five areas listed above.

Funding is not connected with industry or learners' needs. The government only buys seats, not outcomes. There is no ROI study available for any funding model or training package; what we have are cost analyses. But a cost analysis cannot connect the resources used with outcomes, and cannot be used to evaluate results later.

The relevance of training outcomes is not connected to industry. The government has mandated SSOs to create training packages, but SSOs participate neither in the regulation of training delivery nor in the evaluation of outcomes. Today nobody is accountable for the effectiveness of training packages; we only have entities accountable for writing them.

Until we fix these dysfunctions, we will not get an effective, efficient, and high-quality VET sector, and these incongruities will continue to depreciate the NRT brand.

Why ASQA's new audit model can help us to improve the situation
Firstly, the regulator is now better integrating the principles of quality audits into its own approach.

It is difficult to perform an audit against the Standards for RTOs 2015 and apply the principles of outcome focus, simply because the Standards have a rather procedural focus, and the whole VET Quality Framework lacks benchmarks for training outcomes.

In fact, the Standards for RTOs 2015 are more suitable for an accreditation system than for managing the operations of RTOs. But ASQA is trying to do its best with what it has available by:

  • Looking at the application of Standards from the "students' experience" perspective, and
  • Collecting evidence from different sources.

The different phases of the student experience will help the regulator to confirm whether certain outcomes have been achieved. For example, by asking students whether they received the information and support needed to make an informed decision prior to commencing training, the regulator will gather relevant evidence about the outcome of the RTO's marketing and enrolment process. By asking students and trainers whether they think the amount of training was about right, ASQA will collect relevant information about the suitability of the amount of training from the key stakeholders: trainers and students.

The Standards for RTOs define requirements for some core functions that training providers must execute, but these functions are organised and listed by content domain, not in the order they are performed. Basically, every standard for quality management systems, including ISO 9001, is written the same way.

The effect that ASQA's new approach should have is primarily moving the focus of regulatory audits away from paperwork and towards outcomes. This approach will allow those hundreds of high-quality RTOs, to maintain compliance without increasing administrative costs in useless paperwork, and concentrate on quality training delivery and assessment. In other words there is a better alignment with compliance and quality.

Secondly, ASQA's next important step is collecting evidence from different sources; initially this will be fundamentally from students and trainers.

I honestly think ASQA's new approach will help to maximise benefits from current regulations and arrangements, and I would like to make a few recommendations:

  1. Publish all audit reports. By publishing audit reports, the regulator will promote transparency. This will help to moderate audit criteria, promote accountability within the sector, and constitute great reference and educational material.
  2. Regulate number of students per RTO. ASQA should regulate the number of students enrolled for a period. RTOs should have, as part of their registration, an approval for a specific number of students and they should only be allowed to increase this number if they can demonstrate access to the resources required. This will provide assurance that RTOs have the resources, and not only the strategies, to deliver the courses to the students enrolled.
  3. Include industry representatives in audits. Current arrangements keep training packages and training delivery on parallel paths, totally isolated from each other. SSOs and IRCs do not always have direct representation from those who set industry standards, so their potential contribution to measuring whether students have met occupational industry standards is limited. ASQA needs to identify industry representatives who can support auditors to measure outcomes against occupational industry standards.
  4. Regulate employers. ASQA should regulate employers for the delivery of apprenticeships and traineeships. The on-the-job supervision of apprentices is a critical component of their training, but employers are not properly audited against their responsibilities as per the apprenticeship contract. This will help to improve the training received on-the-job.
  5. Reform (improve) the VET Quality Framework and Training Packages. But that is a topic for another post.

in Training Delivery

Why provide feedback to students?

20 March 2017

Many trainers find the topic of providing feedback confusing, and some colleagues only look at regulatory implications of the trainers'/assessors' feedback, so I thought it might be a good topic for review.

To better understand feedback, let's define what it is and what it should do. The kind of feedback I'm talking about is written or verbal responses to answers or performance on questions or activities.

The main purpose of feedback is to reduce gaps between current competencies (skills, knowledge and performance) and desired competencies (skills, knowledge and performance).

Feedback has been shown to help learners most when it specifically addresses forgotten information or strategies, difficult aspects of performance, or a faulty interpretation (misunderstanding), explain Hattie and Timperley in their Review of Educational Research article, "The Power of Feedback."

Feedback doesn't help nearly as much when it addresses a lack of understanding, as this implies that the training didn't meet its goals or has one or more of the following problems:

  • Training didn't consider the prior knowledge levels of participants (for example, we assumed they knew more than they did)
  • The delivery of training is problematic (for example, participants were unable to find or review parts they sought to review)
  • Content, practice, or assessment elements are problematic (for example, there is inadequate practice to help remember or apply training on the job).

Trying to fix a lack of understanding in training is generally beyond the scope of feedback. Even well-written feedback, given in the right circumstances, cannot always help because participants don't always understand or use it.

Feedback Types and Conditions
Hattie and Timperley reviewed training feedback meta-analyses (a statistical approach to combining results from multiple studies) to show which types of feedback are likely to help the most and the least. The most powerful outcomes came from feedback about tasks and how to do them more effectively. Goal-oriented feedback and cues (hints) could also be effective. The least powerful outcomes came from praise, rewards and punishment (extrinsic rewards).

They also looked at how to make effective types of feedback work well. Remember at the beginning of this article I said the main purpose of feedback is reducing gaps between current competencies and desired competencies. Hattie and Timperley explain that to reduce this gap, feedback must answer three questions:

  • What are the goals?
  • What progress am I making towards these goals?
  • What do I need to do to make better progress?

Clear goals, along with knowing where you are and how to progress, target the right places to focus effort to reduce the gaps between current and desired knowledge and performance. Some feedback strategies work against this: non-specific or fuzzy goals, accepting poor performance, and not offering enough information. Research shows that when people don't know what to do, feedback can be demotivating.

Goals must specify actions and outcomes for a specific task or performance. They must also include success criteria that allow for consistent performance when facing common obstacles. In other words, goals are defined in the units of competency. Feedback cannot reduce the "gap" if the goal and the criteria aren't clear. Otherwise, people may rely on any method that works (for them), and their methods may have undesirable consequences.

Telling people how they are doing shouldn't wait for summative assessment. People need specific feedback against specific goals (with success criteria) while learning, so they can learn to self-correct. Feedback is also required in formative assessment activities, even if those activities are not used to make a competency judgement.

I hope you can see that feedback is complex, and we shouldn't write it only as an add-on response to assessments, or to meet compliance requirements. We need to better integrate feedback into the design of instruction to support learning.


  • Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112.