Using Evaluation and Evaluative Reasoning for Successful P/CVE Activities in Ever-Changing Contexts

Paulo Teixeira

When thinking about the specific role that evaluation can and should have in promoting the success of P/CVE activities, especially in turbulent times and unstable contexts where new threats emerge, one can approach the question from several perspectives.

The first perspective is as “textbook” as it is apparently evident, yet in practice not always implemented: designing evaluations that focus on changes rather than on what is done (the activities). The truth is that many evaluations of P/CVE programmes, projects or activities fail in the design phase by not focusing on what really matters: change.

One specific tool that can help here is developing a Logic Model or a Theory of Change (ToC) for a P/CVE activity, which clarifies the short-, medium- and long-term changes associated with a given intervention. With a clearer value chain, one can better focus evaluations and develop more meaningful evaluation objectives and a relevant portfolio of evaluation questions. Another important approach is gathering data on how the initiatives are perceived by the intended target groups (the changes they feel, the value the initiatives hold for them).
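To make this concrete, below is a minimal, purely illustrative sketch (in Python) of how the value chain of a hypothetical P/CVE mentoring activity could be written down explicitly, so that indicators and evaluation questions attach to changes rather than to activities; the activity, indicators and questions are invented for the example and not taken from any real programme.

```python
from dataclasses import dataclass, field

# Illustrative only: making the Theory of Change of a hypothetical
# P/CVE mentoring activity explicit, so evaluation questions and
# indicators attach to changes rather than to activities.
@dataclass
class ToCLevel:
    change: str                                   # the change aimed at
    indicators: list = field(default_factory=list)
    evaluation_questions: list = field(default_factory=list)

theory_of_change = {
    "activities": ["weekly mentoring sessions with at-risk youth"],
    "outputs": ["sessions delivered", "participants retained"],
    "short_term": ToCLevel(
        change="participants report increased trust in mentors",
        indicators=["self-reported trust score (survey)"],
        evaluation_questions=["Do participants perceive the sessions as valuable?"],
    ),
    "medium_term": ToCLevel(
        change="reduced endorsement of violence as a means of political change",
        indicators=["attitude scale administered before and after"],
    ),
    "long_term": ToCLevel(
        change="no (re)engagement with violent extremist groups",
        indicators=["re-engagement records, where ethically accessible"],
    ),
}

for level, content in theory_of_change.items():
    print(level, "->", content)
```

Even this simple structure makes visible which levels of change the evaluation will, and will not, attempt to measure.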

This focus on change, and the use of instruments like the ones identified, can lead to more meaningful evaluations, which in turn can be a key element in promoting sustainable change. With that in mind, it is good practice to clearly define the Evaluation Purpose (a clear understanding of the objectives of the evaluation and the use for which it is being undertaken) and the Evaluation Scope, as evaluations need to be clear about whether their analysis focuses on a particular project, a policy theme or strategy, or a broader range of programming that collectively contributes to CVE (a multidimensional evaluation).

A second perspective we can use to strengthen evaluation practices, and to better identify new threats in the P/CVE field, is to aim for stronger evaluation research designs. In fact, P/CVE evaluations are often characterised by weak research designs: most take a largely descriptive approach, sometimes relying on a single type of data and a single collection method. Even if the increase in the number and quality of P/CVE evaluations is welcome, some interventions still lack any kind of evaluation, or have only internal self-evaluation as a form of assessment, rarely focusing on change or impact.

Research suggests that M&E efforts in P/CVE initiatives usually focus heavily on monitoring (tracking progress and outputs against what was previously planned), not on assessing broader impact on trends toward radicalisation or violent extremist activity. Several factors account for this emphasis on monitoring, not the least of which is the difficulty of effectively evaluating the impact of P/CVE initiatives. Indeed, when designing evaluations for P/CVE initiatives we face both analytical and practical challenges. The main analytical challenge is the historical difficulty of attributing change directly to programming efforts when evaluating initiatives, programmes or projects. Efforts to establish robust causality claims run into a major obstacle: the impossibility of “measuring a negative”, that is, of proving that violent activity or radicalisation would have occurred had there not been an intervention.

When looking at practical challenges, data availability and reliability are among the most common obstacles when trying to evaluate change (at both the outcome and impact levels). P/CVE initiatives deal with sensitive political issues and data; because of that, local populations, government officials or programme staff may be reluctant to make certain information accessible. It is also important to note that there are areas where security or ethical concerns limit access to data.

The key message is that the sensitive and security-relevant nature of many questions asked in an effort to assess attitudes and support for VE can reduce the reliability of the gathered information.

In order to fulfil their promise and maximise their usefulness in addressing the emerging threats of recent years, such as digital transformation, rising violent right-wing extremism (VRWE), mental health issues and other vulnerabilities that have emerged due to the pandemic, evaluations of P/CVE initiatives should aim to assess and measure different areas of change.

A New Paradigm of Terrorist and Extremist Influence

Measuring changes in attitudes is important in many evaluations of P/CVE interventions: changes in the social, political and ideological beliefs held by the individuals targeted by an intervention, and specifically in their attitudes toward the use of violence and their ideological leanings.

Change is commonly assessed by measuring an individual’s knowledge of VE, as well as his or her perception of it. The difficulty lies in the fact that these are sensitive topics: classical approaches such as surveying target groups run into respondents’ fear of discussing sensitive matters, sometimes in less than secure environments, and this tends to produce inaccurate data. The solution could lie in strategies that increase anonymity, and the perception of anonymity, when asking questions about sensitive topics, so as to enhance the confidentiality of responses. One such family of data collection techniques, randomised response experiments, allows respondents to answer a question without survey administrators knowing their individual responses.
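As an illustration of how such techniques protect individual answers while still allowing aggregate estimates, here is a minimal sketch of a forced-response variant of the randomised response design; the prevalence figure, sample size and die-roll probabilities are invented for the example.

```python
import random

# Sketch of a forced-response randomised response design (invented numbers):
# each respondent privately rolls a die; on 1-4 they answer the sensitive
# question truthfully, on 5-6 they simply answer "yes". The administrator
# never knows which rule applied to a given respondent, yet the true
# prevalence can still be estimated in aggregate.

def estimate_prevalence(true_prevalence: float, n_respondents: int,
                        p_truth: float = 4 / 6) -> float:
    """Simulate the survey and return the estimated prevalence."""
    yes_count = 0
    for _ in range(n_respondents):
        holds_attitude = random.random() < true_prevalence
        if random.random() < p_truth:      # truthful branch (die shows 1-4)
            yes_count += holds_attitude
        else:                              # forced "yes" branch (die shows 5-6)
            yes_count += 1
    observed_yes = yes_count / n_respondents
    # observed_yes = p_truth * prevalence + (1 - p_truth) * 1
    return (observed_yes - (1 - p_truth)) / p_truth

random.seed(42)
print(estimate_prevalence(true_prevalence=0.15, n_respondents=5000))  # close to 0.15
```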

An example of a very practical way to assess changes in attitudes is the use of Endorsement Experiments. These involve measuring support for specific policies in a control group and a “treatment” group: we ask members of the control group about their support for specific policies, and do the same with members of the treatment group, except that the latter are also told that certain policies are supported by militant groups or violent extremist organisations (VEOs). Comparing the results shows the extent to which knowledge of a militant group’s or VEO’s support for a policy altered or influenced responses, thus serving as an indirect measure of support for, or attitudes toward, VE.
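A minimal sketch of how the resulting data could be compared, with made-up ratings on a 1-5 support scale and a hypothetical “VEO X” endorsement cue:

```python
from statistics import mean

# Invented ratings on a 1-5 support scale for the same policy.
control_ratings = [4, 3, 4, 5, 3, 4, 4, 2, 5, 4]    # no endorsement cue
treatment_ratings = [3, 2, 3, 4, 2, 3, 3, 2, 4, 3]  # cue: "policy supported by VEO X"

# The gap between group means is the endorsement effect: an indirect
# measure of attitudes toward the endorsing group.
endorsement_effect = mean(treatment_ratings) - mean(control_ratings)
print(f"Endorsement effect: {endorsement_effect:+.2f} points")
# A clearly negative effect suggests the cue depresses support (little
# sympathy for the group); an effect near zero or positive would warrant
# closer attention.
```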

These methods aim to increase confidence and alleviate respondents’ concerns about giving sensitive and potentially dangerous information. Again, as pointed out before, these approaches work better when supported by a strong theory of change and rigorous research. Developing a robust Theory of Change (ToC) is an important tool for evaluators to identify an intervention’s objectives and associated metrics, because each P/CVE-relevant measure is explicitly linked to how the intervention aims to promote change. Of course, ToCs need to be tested and refined through ongoing evaluations and through learning from relevant research focused on the drivers of violent extremism.

The weakness of this type of metric is the underlying assumption about the relationship between extremist beliefs and violent activity. A second level of change to be assessed is in behaviours and activities, such as changes in an individual’s engagement with VE groups and activities (including consumption of VE propaganda and online participation) or, conversely, participation in nonviolent acts or activities promoting tolerance or peace. Changes in behaviours can be measured by using a mix of surveys, interviews, case studies and anecdotal evidence, as well as by collecting data on incidents of violence and violent offenders.

Tracking the recidivism rates of former offenders (i.e., incidents of relapse into violent or criminal activity) is a standard approach for assessing P/CVE interventions aimed at deradicalisation. Another way to measure changes in behaviours and activities is the use of life stories to understand and illustrate these changes: one can use storytelling based on a set of biographical interviews, and this approach can and should be supported by visual elements such as photography and video; visual storytelling can be very effective in evaluating these interventions. Finally, it can be important to measure relationships and social networks in P/CVE evaluations. This is rarely done because of ethical and technical or logistical difficulties, but measuring, for instance, the levels of cohesion, integration and engagement of individuals in a community is relevant, and the ever-growing set of network analysis tools is worth keeping in mind.
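For the network dimension, a minimal sketch of basic cohesion measures, assuming the third-party networkx library is available and using an invented before/after set of reported ties:

```python
import networkx as nx  # third-party dependency, assumed installed

# Invented community interaction data: nodes are participants,
# edges are reported ties, collected with consent before and after
# a hypothetical intervention.
edges_before = [("A", "B"), ("B", "C"), ("D", "E")]
edges_after = [("A", "B"), ("B", "C"), ("A", "C"), ("C", "D"),
               ("D", "E"), ("E", "A")]

for label, edges in [("before", edges_before), ("after", edges_after)]:
    g = nx.Graph(edges)
    print(label,
          "| density:", round(nx.density(g), 2),
          "| components:", nx.number_connected_components(g),
          "| avg clustering:", round(nx.average_clustering(g), 2))
```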

KEY IDEAS

  • More robust evaluation designs are a key area of investment when aiming for better evaluations. Better P/CVE evaluation designs are usually mixed-method, use both quantitative and qualitative data, have a participatory dimension involving all relevant stakeholders, and can have a more formative or a more summative focus according to their main goal and intended use, but should definitely have a change/impact focus.
  • Using a Theory of Change or a Logic Model is useful at several points of the intervention’s lifecycle. We should use ToCs in the planning stage to focus on change, and when designing evaluations to pose better evaluation questions and choose meaningful indicators and metrics.
  • Promote the use of data collection strategies that ensure anonymity, and the perception of anonymity, among key stakeholders. Data security is critical to ensuring that confidentiality is maintained. The inherently sensitive nature of CVE programmes means that participation in, and the evaluation of, CVE programmes can be extremely sensitive for the individuals involved.
  • Even if they pose the same ethical challenges as experimental designs, we should aim for the increased use of quasi-experimental evaluation designs involving different groups, where the allocation of participants to groups is not random. A number of design types are nested under this broader approach: testing the same group before and after the intervention; cross-sectional comparisons of control and experimental groups; and a combination of before-and-after testing between groups that receive the intervention and those that do not (the difference-in-differences method; a minimal sketch of this logic follows this list). There are also different approaches to matching participants across groups.
  • To mitigate the ethical issues pointed out, evaluators must guarantee that all participants in a specific evaluation are exposed to the intervention so that they are all recipients of its potential benefits. There are options that allow us to overcome this ethical challenge; one is to use a switching-replications design, in which the initial control and treatment groups are switched during the evaluation process.
  • The development of evaluation rubrics (1) and the choice of culturally and locally valued, relevant indicators and metrics can also be very helpful in measuring and interpreting new threats or different “configurations” of traditional ones.
  • Using participatory approaches is key: involving all relevant stakeholders in all phases of the evaluation (design, implementation, analysis and reporting) leads to better design and data analysis, but also to greater evaluation use.
  • To maximise evaluations’ usefulness, we should also increase the use of visualisation strategies for gathering, but especially for presenting, information in more engaging ways. Visual storytelling, and photographic and video-based narratives using stories told in the first person, can lead to more change in perceptions and attitudes than traditional reporting.
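As referenced in the quasi-experimental designs point above, here is a minimal sketch of the difference-in-differences logic, using made-up attitude scores purely for illustration.

```python
from statistics import mean

# Invented attitude scores (higher = stronger rejection of violence),
# measured before and after the intervention in a treated group and
# a non-random comparison group.
treated_before = [2.1, 2.4, 2.0, 2.3]
treated_after = [3.0, 3.2, 2.9, 3.1]
comparison_before = [2.2, 2.3, 2.1, 2.4]
comparison_after = [2.4, 2.5, 2.3, 2.6]

change_treated = mean(treated_after) - mean(treated_before)
change_comparison = mean(comparison_after) - mean(comparison_before)

# Under the parallel-trends assumption, the comparison group's change
# approximates what would have happened to the treated group anyway;
# netting it out yields the difference-in-differences estimate.
did_estimate = change_treated - change_comparison
print(f"Difference-in-differences estimate: {did_estimate:.2f}")
```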

A final reference to two key thoughts. The first is the critical investment that should be made in impact evaluations that look across different projects and initiatives and consider the larger, macro context, both to get a better sense of what works and what does not, and to create a shared understanding and dialogue across different stakeholders in the P/CVE field. The second is the importance of investing in capacity building in the M&E field in P/CVE, by implementing training activities but also by connecting academics and practitioners and by testing new evaluation tools at the local and international levels. The challenge is great, but the prize is invaluable.

(1) Judy Oakden, Evaluation rubrics: how to ensure transparent and clear assessment that respects diverse lines of evidence. Better Evaluation, Melbourne, Victoria, 2013.

Bibliography

Beaghley, Sina, Todd C. Helmus, Miriam Matthews, Rajeev Ramchand, David Stebbins, Amanda Kadlec, and Michael A. Brown. Development and Pilot Test of the RAND Program Evaluation Toolkit for Countering Violent Extremism. Santa Monica, CA: RAND Corporation, 2017.
Braddock, Kurt. Experimentation & Quasi-Experimentation in Countering Violent Extremism: Directions of Future Inquiry. Washington, DC: Resolve Network, 2020.
Dawson, Laura, Charlie Edwards, and Calum Jeffray. Learning and Adapting: The Use of Monitoring and Evaluation in CVE: A Handbook for Practitioners. London: Royal United Services Institute, 2014.
Development & Training Services, Inc. CVE Evaluation: Introduction and Tips for CVE Practitioners. Development & Training Services, Inc., 2015.
United States Agency for International Development (USAID). An Inventory and Review of Countering Violent Extremism and Insurgency Monitoring Systems. Prepared by Lynn Carter and Phyllis Dininio. Washington, DC: USAID, 2012.
Ris, Lillie, and Anita Ernstorfer. Borrowing a Wheel: Applying Existing Design, Monitoring, and Evaluation Strategies to Emerging Programming Approaches to Prevent and Counter Violent Extremism. Cambridge, MA: CDA Collaborative, 2017.
Van Hemert, Dianne, Helma van den Berg, Tony van Vliet, Maaike Roelofs, Mirjam Huis in ‘t Veld, Jean-Luc Marret, Marcello Gallucci, and Allard Feddes. Synthesis Report on the State-of-the-Art in Evaluating the Effectiveness of Counter-Violent Extremism Interventions. Impact Europe, 2014.