Redefining Learning in Child Welfare: A New Era of Simulation-Based Training

Theory, meet practice.

At least that's how College of Social Work Social Research Institute (SRI) Director Chad McDonald and Research Assistant Professor Erika Marks talk about their partnership designing and implementing simulation training for child welfare workers.  Combining a deep understanding of learning theory with extensive practice experience as child welfare workers and administrators, Dr. McDonald and Prof. Marks are helping to redefine how simulations and micro-skill development in child welfare are implemented.

Their approach to both the design and implementation of child welfare simulation training is significantly different from what most child welfare agencies do.  Typically, a training simulation is about what’s happening on the stage.  Participants volunteer or are chosen from a crowd to come to the front and act out a particular scene.  Once things have played out in the simulation, the trainer often leads a brief feedback discussion about what happened and the choices the learners made during the scene.

[Photo: Chad McDonald and Erika Marks sitting in front of a space made to mimic a home]

While this is also done in simulations that are part of the larger training SRI runs with the Utah Division of Child and Family Services (DCFS) for new child welfare employees, there are additional, critical learning elements included in these simulations.  Rather than train for broad situations, like a home visit, the trainings designed in the last several years by Prof. Marks and Dr. McDonald—both former child welfare workers—break down the situation into a 10-minute scene that emphasizes micro-skills such as providing rules and building rapport.

Prof. Marks explained, “The job of a child welfare worker is very complex, with a high number of variables.  Breaking down the work into micro-skills makes those skills more achievable and makes it easier for employees to hone those skills.”

Once a facilitator has helped a participant identify two specific skills they want to practice, the simulation begins.  Facilitators remain engaged with participants to coach on the targeted skills as needed.  Immediately after the simulation, the facilitator offers precise feedback, listing at least one thing the participant did well and two or three things they can work on.  “We want participants to have a positive experience,” said Prof. Marks.  “By giving concrete feedback, we’re helping employees know more clearly where they are so they know where they need to go.”

This is generally standard practice.  But it is also, perhaps, where SRI’s training looks significantly different: who is “trained.”

As each of these scenes plays out, the DCFS employees in the audience complete a digital form in real time, marking when they see a particular set of skills being used by those in the simulation.  At the same time, the co-facilitator—a skills expert—is also indicating when they see specific skills demonstrated.  Once the situation on stage has played out and the facilitator has given the stage participants feedback, the co-facilitator shares a screen comparing the aggregate results of the training group with what the co-facilitator indicated was or was not demonstrated.  While the aggregate differences are discussed, each person can also see their own individual results.
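
To make that comparison step concrete, here is a minimal sketch in Python of how a group's real-time skill tags might be aggregated and set against the co-facilitator's key.  The skill labels, data structures, and `aggregate` function are hypothetical illustrations; the article does not describe the actual digital form or software SRI uses.

```python
# Illustrative sketch only: skill names and data structures are hypothetical,
# not SRI's actual tooling.
from collections import Counter

# Skills the facilitator asked observers to watch for (hypothetical labels)
TARGET_SKILLS = ["rapport building", "providing rules"]

# Each audience member submits the set of skills they believe they saw demonstrated
audience_responses = [
    {"rapport building"},
    {"rapport building", "providing rules"},
    {"providing rules"},
]

# The co-facilitator (skills expert) records what was actually demonstrated
expert_key = {"rapport building"}

def aggregate(responses, key, skills):
    """Compare the group's observations with the expert key, skill by skill."""
    counts = Counter()
    for seen in responses:
        counts.update(seen & set(skills))
    return {
        skill: {
            "percent_observed": 100 * counts[skill] / len(responses),
            "expert_saw_it": skill in key,
        }
        for skill in skills
    }

for skill, result in aggregate(audience_responses, expert_key, TARGET_SKILLS).items():
    print(f"{skill}: {result['percent_observed']:.0f}% of observers tagged it; "
          f"expert: {'demonstrated' if result['expert_saw_it'] else 'not demonstrated'}")
```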

The timing of this is important.  “When feedback is given immediately after the simulation, the co-facilitator can reframe while the memory is new,” said Dr. McDonald.  “It helps cement what they need to do to see and apply the skill successfully.”  He emphasized that timing matters for all trainees—whether they correctly identified skills or not.  “Discussion paired so quickly to the experience of identification helps reinforce learning for all learners.”  In this method, learners practice identifying demonstrations of micro-skills while trainers home in on the skills that larger portions of the group missed or mislabeled.

The way these simulations emphasize audience involvement is also a key part of how they reinforce learning.  Actively involving the audience, Dr. McDonald explained, is one way to maximize learning.  “Not training the simulation observers is a missed opportunity.  Passive observation doesn’t build skills.  Asking participants to apply the material to themselves does.”  He continued, “We’re utilizing learning theory to expand both the quality and quantity of training opportunities for child welfare workers.”

Prof. Marks and Dr. McDonald have created step-by-step guides for facilitators and co-facilitators so that facilitation can be as consistent as possible across multiple simulations.  Instead of relying only on standardized clients (considered the gold standard in the research literature), these guides help develop more standardized facilitators as well.

[Photo: A simulation lab space representing the bedroom of a small child]

Often when a training is simulation-based, what happens on stage and the discussion afterward are highly variable.  The facilitator chooses a few key points to focus on, and the conversation varies as seems appropriate in the moment.  While there are things that can be valuable about this approach, it makes evaluating the effectiveness of the simulation training very difficult.  “By reducing variability across simulations, we’re creating simulations that are more research-valid environments,” said Dr. McDonald.  “The more we can standardize the delivery of the training, the more confidence we can have in the efficacy of the training.”

And so far, they’re finding this kind of training is making a difference. 

Immediately before and after each simulation, participants take self-assessments to measure the degree of confidence they have in their knowledge and skills.  No matter where a participant was before a simulation, they have indicated statistically significant improvements in their self-assessment of knowledge, skill, and confidence after the training, a finding Prof. Marks describes as remarkable for a single training session.  “We’ve been told in focus groups that one of the primary things managers worry about the most with new child welfare workers is instilling that confidence,” she said.  “It’s scary to knock on a door, knowing you might change the lives of the people inside.  Developing that eagerness and willingness to knock is huge.”  Also of note: the results are the same whether the simulation is conducted online or in person.  With such promising results in training, the next step for the team is to test if these gains in confidence lead to increased competencies in the field.
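
As one illustration of how such pre/post gains can be examined, here is a minimal Python sketch using a paired t-test on made-up confidence ratings.  The numbers are hypothetical, and the article does not specify which statistical test the team uses.

```python
# Illustrative sketch only: the ratings below are invented, and the article does
# not say which statistical method the SRI team applies to its pre/post data.
from scipy import stats

# Hypothetical pre- and post-simulation confidence ratings (1-5 scale),
# paired by participant.
pre  = [2, 3, 2, 4, 3, 2, 3, 1, 2, 3]
post = [4, 4, 3, 5, 4, 3, 4, 3, 3, 4]

# A paired (repeated-measures) t-test is one common way to check whether the
# within-person change from pre to post is statistically significant.
t_stat, p_value = stats.ttest_rel(post, pre)
mean_change = sum(b - a for a, b in zip(pre, post)) / len(pre)
print(f"mean change: {mean_change:.2f} points")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```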

Prof. Marks is enthusiastic about the impact of this project.  “I love creating engaging training to help caseworkers.  The work is often really difficult, with few opportunities to expand knowledge.  I love finding ways to bring excitement and innovation to this field of work.”

Last Updated: 1/16/24