BY ANNE BERGEN* & ELIZABETH SHANTZ**

*Director, Knowledge to Action Consulting;
**Knowledge Mobilization and Training Manager, Canadian Water Network

This blog post gives an overview of helpful practices in research impact evaluation, with case study examples from the Canadian Water Network’s recent evaluation work.

Key Messages

  • Understand why you are evaluating: your evaluation goals will shape how and what you evaluate.
  • Map out your evaluation criteria: inputs, outputs, outcomes, and assumptions. A logic model can be a helpful tool for this.
  • Build relationships with end-users and plan for evaluation at the beginning of research projects or programmes.
  • Be flexible and pragmatic: revisit and adapt your evaluation plan.

Research impact evaluation is an exercise in understanding how a project or body of work has contributed to change: in people, places, policy, and practice. That means research impact evaluation goes beyond counting the outputs of research (e.g., publications, reports, presentations) to tracking and assessing uptake, outcomes, and impact.

The question of what should be considered “research impact” is complex. Counting outputs is relatively easy, but outputs aren’t evidence of impact. It’s much more difficult to draw a link between research activities and changes in individuals, organizations, or systems.

As research impact tends to be indirect, it’s helpful to think of research activities as contributing to change rather than directly causing it. Individual changes in knowledge, attitudes, and understanding may influence decisions and behaviour, but these changes result from the uptake of multiple pieces of information, assimilation with existing knowledge and experience, and the larger context. Similarly, changes in policy and practice involve the uptake and use of multiple pieces of evidence, as well as external pressures.

Case Study Example: Canadian Water Network

At Canadian Water Network (CWN), our goal has been to fund research that addresses real-world water challenges that affect public health, the environment and our economy. However, demonstrating links between research and decisions that affect these factors has been difficult: policy and practice take a long time to change, and changes generally result from a number of influences rather than from a single project.

In 2012, CWN launched an evaluation to better understand the contribution of its previous ten years of research to these impacts. We used a three-stage process:

  1. Analysis of existing reports and documents;
  2. Interviews with project researchers;
  3. Interviews with research users to learn about the impact of each project on their organizations.

Step 1: Define Your Evaluation Goals and Create a Logic Model or Theory of Change

Keep a close eye on your evaluation goals

When planning to evaluate research impact, questions about the scope of evaluation should be defined as early in the process as possible. Revisit these goals often.

What kind of impact do you expect? In what timeframe? For whom? Under what conditions? Based on what evidence?

What aspects of your research do you want to evaluate? What are you going to do with the resulting information?

Case Study Example: Canadian Water Network
We had a number of goals for CWN’s evaluation project:
  1. Identify and share success stories
  2. Identify the elements of impactful research to inform future projects 
  3. Determine whether user-driven programs led to more uptake and impact than researcher-driven programs 
  4. Establish a comprehensive database summarizing inputs, outputs and outcomes for all projects

Build a logic model to explain your research impact pathways

Creating a logic model or theory of change defines the links between research activities, outputs, and desired outcomes, as well as the expected end-users or target populations for change. A logic model makes it easier to map contributions of research activities and outputs to uptake, use, and impacts.

Logic Model Key Components

End-users/ target populations

Who are the audiences/targets of change of your research activities and outputs? These are the people and groups who can tell you about research uptake, use, and impact.

Activities & outputs

What are your research and knowledge translation activities that might impact your end-users? Think about in-person activities like meetings, workshops, and collaborations, as well as written and visual outputs.

Outcomes & impacts

What are the expected outcomes and impacts of your research and knowledge translation activities? What changes do you expect in people’s knowledge, attitudes, skills, and actions? What are the community- and systems-level impacts?

Assumptions & contexts

What needs to be in place for activities to lead to desired outcomes? What context is necessary for the activities to have impact? When do you expect these changes to occur?

CWN Logic Model

Adapted from the University of Wisconsin-Extension evaluation logic model template

Case Study Example: Canadian Water Network
CWN’s logic model, adapted from the University of Wisconsin-Extension template, outlines the expected pathways between research inputs, activities, outputs or products, and short-, medium- and long-term outcomes. These categories formed the framework within which we gathered and interpreted data.

CWN’s logic model, which maps inputs to outputs to impacts, is a helpful starting example for mapping out your own research impact pathways. Other examples include Morton’s (2015a) matrix of outcomes and indicators for research impact and Phipps et al.’s (2016) logic model for knowledge mobilization, which spells out benefits for researchers and end-users (http://jces.ua.edu/the-co-produced-pathway-to-impact-describes-knowledge-mobilization-processes/).

Step 2: Build Relationships to Identify and Amplify Impact

Ideally, you’ll start planning for research impact evaluation, defining goals and creating a logic model, at the beginning of your research project. Research impact may take years or decades to emerge, and reaching out to end-users long after the fact to learn about uptake and use is unlikely to be fruitful.

Building relationships and involving end-users throughout the research process will make it easier to follow up over time, and will build more opportunities for impact into the research process. In addition, end-user involvement and collaboration can amplify your research impact.

Research impact takes time; don’t expect to see changes in policy or practice right away. Although it will likely be easier to identify contributions from broader programmes of research than from single projects, those broader impacts also take longer to emerge. Since it’s rarely possible to wait decades to evaluate impact, focus on identifying shorter-term indicators of potential future impacts.

Case Study Example: Canadian Water Network
CWN gathers evaluation information throughout a project, including outputs, short-term outcomes that have already occurred, and medium- or long-term outcomes expected to occur later. This process helped us forecast potential future impacts and highlighted areas where we could follow up during this evaluation to learn whether expected impacts actually occurred.

Building strong relationships with researchers and end-users is critical, and we believe relationships were a key factor in the high response rates to our interview requests (74% of researchers and 68% of end-users), even for projects that were 10+ years old!

Step 3: Be Attentive to Unexpected Outcomes: Revise and Revisit Your Evaluation Plan

Once you have evaluation goals and a logic model in place, keep revisiting and revising them. The outcomes you expected in Year One may not be the outcomes emerging in Year Five; in particular, your logic model for research impact should be updated roughly annually. The goals of your evaluation may also shift, depending on changing pressures from funders and other stakeholders.

Case Study Example: Canadian Water Network

As our evaluation proceeded, we updated some of the outcome categories in the logic model to reflect emerging impacts. Our priorities also shifted from identifying the elements of impactful projects and programs to identifying and sharing success stories. Having a large number of goals may have been a complicating factor: the more evaluation goals you have, the more difficult it is to achieve them all!

Step 4: Be Pragmatic. Don’t Let Evaluation Overwhelm Your Actual Work. Learn From the Evaluation

Remember that an evaluation cannot capture every single outcome or impact, and that research impacts cannot necessarily be compared directly across projects or programmes. Finding a balance between doing research and evaluating research can be challenging. Those seeking to understand research impact should be mindful that evaluation has costs, as well as benefits, for researchers and end-users. Minimizing the burdens of data collection and reporting will make ongoing research impact evaluation more sustainable.

Carefully designed and implemented evaluation can help you learn how to better mobilize knowledge and create research impact. Build in time and space to learn from the evaluation results, and make evaluation part of a regular reflective practice.

Case Study Example: Canadian Water Network
We originally intended to complete CWN’s evaluation in-house within 6-12 months, but gathering all of the comprehensive data we wanted took almost two years, two interns, and the assistance of Knowledge to Action Consulting. Database development was the most time-consuming part of the evaluation.

Looking ahead, we plan to continue this evaluation work in successive phases. By collecting data strategically, building relationships, and scaling back our evaluation plan, we hope to benefit from a more streamlined process.

Note: This blog post was adapted from the authors’ recent previous work, including presentations to Alberta SPOR unit, OMAFRA, and CKF16.

About the Authors 

Anne is a consultant who helps people and organizations transform knowledge into action (https://knowledgetoaction.ca). She gained mixed methods research expertise through her PhD training in Applied Social Psychology, and believes that a common understanding of problems and solutions can be built through engaged research and collaborative action. Anne uses approaches grounded in social science theory to develop evaluation frameworks for collecting rigorous, meaningful, and actionable data. She has worked with diverse stakeholders (academic, community groups, non-profit, government) to create knowledge mobilization activities and strategies for projects, programmes, and organizations.

Elizabeth has been a knowledge mobilizer at Canadian Water Network since 2010. Her background in industrial/organizational psychology and experience as a community-engaged researcher inform her work at CWN; she is responsible for supporting the development and implementation of knowledge mobilization strategies in CWN programs to effectively bring together researchers and decision makers. She also evaluates the impact of CWN research on policy and practice, develops plain language products to share the results of CWN research, and designs tools and training programs to build knowledge mobilization capacity.

Resources

Better Evaluation http://betterevaluation.org

Economic and Social Research Council (2011). Branching Out: New Directions in Impact Evaluation from the ESRC’s Evaluation Committee. Appendix 1 – Conceptual Framework for Impact Evaluation. Retrieved from: http://www.esrc.ac.uk/files/research/evaluation-and-impact/branching-out-new-directions-in-impact-evaluation-from-the-esrc-s-evaluation-committee/

Lavis, J. N., Robertson, D., Woodside, J. M., McLeod, C. B., & Abelson, J. (2003). How can research organizations more effectively transfer research knowledge to decision makers? Milbank Quarterly, 81, 221-248.

Morris, Z. S., Wooding, S., & Grant, J. (2011). The answer is 17 years, what is the question: understanding time lags in translational research. Journal of the Royal Society of Medicine, 104, 510-520. Retrieved from http://jrs.sagepub.com/content/104/12/510.full

Morton, S. (2015a). Progressing research impact assessment: A ‘contributions’ approach. Research Evaluation, 24, 405-419. Retrieved from http://rev.oxfordjournals.org/content/24/4/405

Morton, S. (2015b). Creating research impact: the roles of research users in interactive research mobilisation. Evidence & Policy: A Journal of Research, Debate and Practice, 11, 35-55.

National Collaborating Centre for Methods and Tools (2012). Evaluating knowledge translation interventions: A systematic review. Hamilton, ON: McMaster University. Retrieved from http://www.nccmt.ca/resources/search/114.

Phipps, D. J., Cummings, J., Pepler, D., Craig, W., & Cardinal, S. (2016). The Co-Produced Pathway to Impact describes Knowledge Mobilization Processes. Journal of Community Engagement and Scholarship, 9(1), 31-40.