This section outlines the key components of implementing changes to practice. Baseline clinical audit results are evaluated, and these findings are used to identify barriers and enablers to the utilisation of evidence and to plan and implement change. Grimshaw et al. write that “unfortunately our evidence on likely effectiveness of different strategies to overcome specific barriers to knowledge translation remains incomplete. Individuals involved need to: identify modifiable and non-modifiable barriers relating to behaviour; identify potential adopters and practice environments; and prioritise which barriers to target” (Grimshaw et al. 2012, p. 5). This approach of identifying barriers and targeting strategies to them forms the basis of the JBI practice change method, Getting Research into Practice (GRiP) (Harvey, Kitson & Munn 2012; Kurmis et al. 2015; Munn et al. 2015; Pearson, Field & Jordan 2009; Stephenson et al. 2015).

Getting Research into Practice

The GRiP method aims to evaluate the findings of the baseline audit against best practice, identify barriers and enablers to the utilisation of evidence, and assist in developing implementation strategies to reduce the evidence-to-practice gap. The GRiP method is undertaken in three stages: 1) evaluating baseline audit findings; 2) identifying barriers and enablers to evidence utilisation; and 3) developing and implementing strategies for change.

Evaluate baseline audit findings

Before any changes to practice commence, it is important to compare current practice to best practice using the findings from the baseline audit. These findings (both positive and negative) should be presented to the staff involved in the project and to the working group overseeing it. Feedback may be provided in a number of forms—verbal, printed or electronic—and ideally a combination of these is used where possible. Feedback can be delivered one-on-one with individuals, although there may be benefits in providing feedback to a group in a constructive way (Cooke et al. 2018). Everyone in the organisation who is directly or indirectly involved needs to feel they can access the data at any time. Considerable time and effort must be spent sharing the audit outcomes, and the data need to be presented in a variety of modalities to ensure everyone has the opportunity to remain informed and therefore engaged in the change process.

It is important that the facilitator reassures the project team that poor compliance in the baseline audit is to be expected in most instances. Feedback is delivered to the working group by the project leader with the purpose of identifying barriers and enablers to evidence utilisation. The project leader is required to lead the team’s review and discussion of the compliance (audit) results for each evidence-based criterion. Maintaining objectivity during the interpretation of the data is essential.
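As a concrete illustration of this evaluation step, the compliance results for each evidence-based criterion can be summarised as a percentage against an agreed target before the working group discusses them. This is a minimal sketch only: the criterion names, sample sizes and the 80% target are illustrative assumptions, not part of the GRiP method itself.

```python
# Sketch: summarise baseline audit compliance per evidence-based criterion.
# All criteria and figures below are hypothetical examples.
baseline_audit = {
    "Criterion 1: risk assessment documented on admission": (12, 30),
    "Criterion 2: education provided to patient/carer": (21, 30),
    "Criterion 3: management plan follows guideline": (27, 30),
}

def compliance_report(audit, target=0.80):
    """Return (criterion, percent compliant, meets_target) per criterion."""
    report = []
    for criterion, (compliant, total) in audit.items():
        percent = 100.0 * compliant / total
        report.append((criterion, round(percent, 1), percent >= target * 100))
    return report

for criterion, percent, ok in compliance_report(baseline_audit):
    flag = "meets target" if ok else "gap identified"
    print(f"{criterion}: {percent}% ({flag})")
```

A summary of this kind can then be tabled at the working group meeting as the starting point for the barrier-identification discussion.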

Generally, people are willing to be involved in practice change if they believe there is a good reason behind it, if they know what they need to do and if they feel they have a say in how it is brought about. When these factors are not present, it is likely that people will be resistant to change. A robust process of engagement will make clear to stakeholders the exact purpose of the change and the evidence on which the need for change is based.

Clinical practice is complex and dynamic. There are many contributing factors that might impact on practice and, in turn, on whether practice is based on the evidence. Generally, a single group or individual is not responsible for these factors; therefore, any process of change needs to encompass a no-blame attitude.

Identify barriers and enablers to evidence utilisation

While it is important to identify issues or barriers to achieving best practice, it is also important to identify existing strengths or enablers as these will help you develop strategies for practice change. Using the findings from the baseline clinical audit, the working group, led by the project leader, should aim to identify any barriers and enablers to the utilisation of evidence in practice.

Although there are common barriers to evidence implementation, barriers also differ across contexts. A lack of knowledge among staff about best practice is a common barrier, as is a lack of knowledge about how to implement the best practice identified by research. These barriers, which call for training, education and awareness-raising, may be easier to address than organisational barriers, such as the absence of incentives that encourage best practice in health organisations. Cultural or religious barriers to change are also harder to address. Psycho-social barriers refer to feelings, attitudes or resistance to change arising from beliefs, values and previous experiences. Resource limitations (financial, material and human) are often a major stumbling block to evidence implementation, particularly in developing country settings.

Determine and implement strategies for change

Once the baseline data have been carefully reviewed against the recommended practice and potential barriers identified, it is time to design the most effective and efficient way to address those barriers and achieve successful implementation of the best available evidence. Just as the barriers and enablers to evidence utilisation are context driven, so too are the strategies to overcome barriers and enable change.

The wide range of barriers that underpin the gap between evidence about best practice and actual practice suggests that to facilitate evidence implementation, strategies that work on many fronts are required. In one context, a particular strategy (for example, building capacity in healthcare workers) will be more effective/important than in another. In most cases, strategies will be required that change behaviour as well as organisational culture and systems. Monitoring of change is always important, as this encourages implementation and can be used to direct future strategy for improvements. 

A number of strategies can be used to promote evidence implementation. Strategies highlighted in the literature (Foy et al. 2005; Grimshaw et al. 2012; Perry et al. 2019) include:

  • Reminders (manual or computerised): which may prompt the performance of a patient-specific clinical action, for example.
  • Audit and feedback: which can be understood as any summary of clinical performance over a specified period of time.
  • Local consensus process: which is the inclusion of relevant professionals in discussions to agree on the approach for managing a particular clinical problem that is important and requires change.
  • Patient-mediated interventions: in which specific information is sought from or given to patients.
  • Local opinion leaders: where healthcare professionals nominated by their colleagues as being educationally influential lead the utilisation of evidence by example.
  • Educational materials: which can be the distribution of recommendations for clinical care (such as clinical practice guidelines, audio-visual materials or electronic publications).
  • Educational outreach visits: which consist of a personal visit by a trained person to healthcare professionals in their own setting.
  • Interactive educational meetings: which involve the participation of healthcare providers in workshops that include discussion and/or practical skill development.
  • Didactic educational meetings: which are traditional lecture-style methods with minimal participant interaction.
  • Financial incentives: which are monetary payments directly rewarding healthcare providers for specified behaviours.
  • Multifaceted interventions: which are a combination of two or more of any of these strategies.

A comprehensive narrative description of the strategies implemented should provide sufficient detail for readers and for those who may want to use similar strategies to promote change. The description should set out what each strategy was, who was involved or targeted, and who delivered the strategy. If resources were developed (e.g. posters or algorithms), these should be described in this section and included as an appendix. This information should also be presented in table format (see the GRiP matrix in Table 4).

Table 4: Getting Research into Practice matrix

Barrier: What was the barrier?
Strategy: What was the action to overcome the barrier (e.g. development of a tool, delivering educational sessions, development of pamphlets)?
Resources: What resources did you use to achieve a desirable outcome (e.g. tool, charts, educational package, seminars, extra staff)?
Outcomes: What was the result? How was an improvement measured?
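Teams that keep their GRiP matrix in electronic form could record each row as a small structured record whose fields mirror the Barrier/Strategy/Resources/Outcomes columns. The class name `GripEntry` and all example content below are hypothetical illustrations, not prescribed by the GRiP method.

```python
from dataclasses import dataclass

@dataclass
class GripEntry:
    """One row of a GRiP matrix (cf. Table 4); field names mirror its columns."""
    barrier: str        # What was the barrier?
    strategy: str       # What was the action to overcome the barrier?
    resources: list     # What resources were used?
    outcome: str        # What was the result, and how was improvement measured?

# Illustrative example entry (all content is invented for demonstration).
entry = GripEntry(
    barrier="Staff lack knowledge of the guideline recommendations",
    strategy="Deliver short interactive education sessions on each ward",
    resources=["education package", "pocket card summarising criteria"],
    outcome="Follow-up audit compliance rose from 40% to 75%",
)
```

Keeping the matrix in this form makes it straightforward to export the same rows to the table presented in the final report.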

Go to Step 6: Re-assess practice using a follow-up audit





2019 © Joanna Briggs Institute. All Rights Reserved