Not all evidence is created equally

In the policy world, not all evidence is created equally.

I’m not talking about forensic or criminal evidence (though those areas have hierarchies too). I’m referring to the evidence we need to make a good policy choice. Policy decisions in policing include questions such as: should my police department support a second responder program to prevent domestic abuse, or dedicate officers to teach D.A.R.E. in our local schools? (The answer to both questions is no.) The evidence I’m talking about here is not just ‘what works’ to significantly reduce crime, but also how it works (what mechanism is taking place), and when it works (in what contexts the tactic may be effective).

We can harvest ideas about what might work from a variety of sources. Telep and Lum found that a library database was the least accessed source for officers when learning about what tactics might work[1]. That might be because for most police officers, a deep dive into the academic literature is like being asked to do foot patrol in Hades while wearing a nylon ballistic vest. But it also means that the ‘what works’, ‘how works’, and ‘when works’ of crime reduction are never fully understood. Too many cops unfortunately rely on their intuition, opinion or unreliable sources, as noted by Ken Pease and Jason Roach:

Police officers may be persuaded by the experience of other officers, but seldom by academic research, however extensive and sophisticated. Collegiality among police officers is an enduring feature of police culture. Most officers are not aware of, are not taught about and choose not to seek out relevant academic research. When launching local initiatives, their first action tends to be the arrangement of visits to similar initiatives in other forces, rather than taking to the journals.[2]

In fact, not only do officers favor information from other cops, but it also has to be from the right police officers. Just about everyone in policing at some point has been told to forget what they learned at the police academy. That was certainly my experience. And those courses were usually taught by experienced police officers! It’s too easy to end up with a very narrow and restricted field of experience on which to draw.

Fortunately, while a basic understanding of different research qualities is helpful, you do not need an advanced degree in research methodology to implement evidence-based policing from a range of evidence sources. It’s sufficient to appreciate that there is a hierarchy of evidence in which to place your trust, and to have a rudimentary understanding of the differences between the levels. I’m not dismissing any forms of research, but I am saying that some research is more reliable than others and more useful for operational decision-making. For example, internal department research may not be great for choosing between prevention strategy options, but it is hugely useful for identifying the range of problems.

There are a number of hierarchies of evidence available. Criminologists are familiar with the Maryland Scientific Methods Scale devised by Larry Sherman and colleagues[3]. In this scale, studies are ranked from 1 (weak) to 5 (strong) based on the study’s capacity to show causality and limit the effects of selection bias. Selection bias occurs when researchers choose participants in a study rather than allow random selection. It’s not necessarily an intent to harm the study, but it is a common problem. When we select participants or places to be part of a study, we can unconsciously choose places that will perform better than randomly selected sites. We cherry-pick the subjects, and so bias the study and get the result we want. It’s why so many supposedly great projects have been difficult to replicate.

But the Maryland scale only addresses academic research, and police officers get knowledge and information from a range of sources. Rob Briner’s work in evidence-based management also stresses this. Go into any police canteen or break room and you hear anecdote after anecdote (I wrote a few weeks ago about the challenges of ‘experience’). These examples of homespun wisdom—also known as carefully selected case studies—are often illustrative of an unusual case and not a story of the mundane and ordinary that we deal with every day.

This professional expertise can also be supplemented by the stakeholder concerns of the local community (including the public or the local enforcement community such as other police departments or prosecutors). Knowing what is important to stakeholders and innovative approaches adopted by colleagues is useful in the hunt for a solution to a crime or disorder problem.

Police also get information from experts and internal reports. In larger forces, reports from statistics or crime analysis units can be important sources of information. Organizational data are therefore useful to officers who try and replicate crime reduction that a colleague in a different district appears to have achieved. All of these sources are important to some police officers, and they deserve a place on the hierarchy. But because they are hand selected, can be abnormal, might be influenced internally, or have not been subjected to a lot of scrutiny, they get a lower place on the chart.

In the figure on this page, I’ve pulled together a hierarchy of evidence from a variety of sources and tried to combine them in a way that lets you appreciate the different sources and their benefits and concerns. Hopefully this will also help you interpret the evidence from sources such as The National Institute of Justice’s CrimeSolutions.gov website and The UK College of Policing’s Crime Reduction Toolkit. In a later post I’ll try and expand on each of the levels (from 0 to 5*).

 

  1. Telep, C.W. and C. Lum, The receptivity of officers to empirical research and evidence-based policing: An examination of survey data from three agencies. Police Quarterly, 2014. 17(4): p. 359-385.
  2. Pease, K. and J. Roach, How to morph experience into evidence, in Advances in Evidence-Based Policing, J. Knutsson and L. Tompson, Editors. 2017, Routledge. p. 84-97.
  3. Sherman, L.W., et al., Preventing Crime: What works, what doesn’t, what’s promising. 1998, National Institute of Justice: Washington DC.

What we have learned from Philadelphia foot patrols

With the recent publication of our comparison of foot patrol versus car patrol, it’s worth a quick review of all that we learned from the Philadelphia Foot Patrol Experiment. Especially as the paper Liz Groff took the lead on is available for free from the publishers until the end of May.

The original foot patrol experiment paper described our randomized controlled field experiment which saw the Philadelphia Police Department place 240 officers on 60 violent crime hotspots (randomly selected from a list of 120) for the long hot summer of 2009. And it was hot walking the streets of Philadelphia in a ballistic vest – we all remember the fieldwork and empathizing with the officers who did it all summer!

At the end of the experiment, 90 violent crimes had been prevented, resulting in a net reduction of 53 violent offenses after some displacement. This was a 23 percent reduction in violent crime as a result of foot patrol in carefully targeted areas – a unique finding for policing.
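The displacement arithmetic above can be made explicit with a quick sketch (my own illustration of the figures reported here, not code from the study): the gross reduction minus the net reduction gives the number of offenses that were displaced to surrounding areas rather than prevented outright.

```python
prevented = 90       # gross reduction in violent crimes within the foot patrol areas
net_reduction = 53   # reduction remaining after accounting for displacement

# Offenses not prevented outright, but pushed to surrounding areas
displaced = prevented - net_reduction
print(displaced)  # 37
```

In other words, roughly 37 of the 90 offenses moved rather than disappeared – which is why net reduction, not the gross figure, is the honest measure of the tactic’s effect.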

How was this achieved? We found that pedestrian stops increased by 64 percent in the foot patrol areas, probably increasing the likelihood that offenders would be stopped, and subsequently reducing their enthusiasm for carrying a firearm. We learned some other things that summer:

  1. There was no community backlash within the foot patrol areas. To the contrary, members of the local community were really upset when their foot patrol officers were eventually removed, and they let the PPD know about it in no uncertain terms.
  2. The image of foot patrol as a punishment posting changed to a degree within the PPD. Good commanders became convinced that foot patrol was a practical tactic in high crime areas, and some patrols remained in place after the experiment.
  3. The foot patrol officers got a real feel for their foot patrol areas, developing community and criminal intelligence in the months they spent on foot.
  4. The foot patrol officers engaged in more pedestrian stops than their vehicle-bound colleagues, and they also dealt with many more disorder incidents – always an issue in the summer in Philadelphia. They dealt with fewer serious crime incidents, yet were undoubtedly responsible for the decline in violent crime.
  5. Importantly, the foot patrol officers engaged in a different type of police work than their colleagues in cars. Less response-driven, they engaged in more order maintenance and community-related activities. They did not replace the activities of the cars, but rather worked in a complementary fashion, being co-producers of community safety with their colleagues. Even if they sometimes wandered a little.

Unfortunately, we also learned – in a subsequent Criminology article headed up by two enterprising graduate students, Evan Sorg and Cory Haberman – that the gains achieved during the foot patrol experiment did not last. The effects dissipated as soon as the foot patrol officers were removed, and in fact some effects were starting to wear off as the foot patrol experiment continued into the late summer.

My colleague Jen Wood took the lead on the qualitative component so important to understanding the nuance of the foot patrol experiment. We learned that officers negotiated order based on geography, people and space, and varied their strategies and tactics based on their knowledge of the people and the environment.

The experiment received research awards from the IACP and from the American Society of Criminology’s Division of Experimental Criminology, but more importantly it helped people recognize the Philadelphia Police Department as an innovative department willing to try new things, take risks, and learn. And while it involved a lot of researchers, they nearly all volunteered their time on top of their normal duties. Temple University and the College of Liberal Arts generously helped out with some fieldwork costs, indicative of their desire and ongoing commitment to moving the city forward. But the experiment – which taught us so much – did not cost the city taxpayers a single cent.