Blog

Policing explained in a few graphs

As a new semester starts and I look forward to teaching a new group of undergraduates, I am reminded that some of them have not had extensive exposure to law enforcement or policing research. How then to encapsulate the essence of what is useful to know about policing in a brief enough format? These are some of the graphs I use to help my new students understand the challenges of policing in the 21st century.

1. Policing is overwhelmingly a social service

Graph no. 1. This is from the second edition of my book “Intelligence-Led Policing“. The area of each box represents the volume of incidents in 2015 in the City of Philadelphia (about 1.5m in total). These incidents can come from verified calls for service from the public (something really took place as confirmed by a police officer), or from officer-initiated events (such as drug incidents). 

What is clear from the graphic is that violent crime plays such a small part in the day-to-day demands on police departments, even in Philadelphia, one of the more troubled cities in the U.S. While the media frets over homicide, it can be seen in the lower right as one of the least noticeable boxes in the graph. The majority of the police department’s workload is the day-to-day minutiae of life in a big city.

2. Impacting early in the crime funnel is the key to public safety

Graph no. 2 is another image from my Intelligence-Led Policing book. The crime funnel represents what happens to a random selection of 1,000 crimes that affect the public (top bar), showing the loss of cases through the criminal justice system. These are British national data derived from public records, but the U.S. pattern is very similar. If you take a random selection of 1,000 crimes actually suffered by the public (violence, robbery, vehicle theft, residential burglary, theft and criminal damage), you can see that the public report only 530 to the police, who in turn record just 429, about 43 percent of the original total.

Of these 429 events, 99 are detected (solved or cleared in some way) and of these, 60 end up with a day in court. The majority of those are found guilty or plead guilty, but in the end only four of the events from the original 1,000 end in a custodial sentence for the offender. This is an incarceration rate of 0.4% based on crime suffered by the community.

The main point here is that impacting higher in the crime funnel is more effective because it flows through to the numbers below and touches a larger number of actual cases. Improving the detection rate will have an impact on prosecutions, pleas and incarceration, but only at the margins. Being prevention focused and changing the higher numbers is much more impactful. Consider if you could make a 10% change at one level: where would it be most effective?
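To make the arithmetic concrete, here is a minimal sketch (my own, not from the book) using the approximate figures above. It simply asks how many actual cases a 10% change at each level of the funnel would touch, which is why the higher levels matter most.

```python
# Approximate crime funnel figures quoted above (British national data).
funnel = {
    "crimes suffered": 1000,
    "reported to police": 530,
    "recorded by police": 429,
    "detected": 99,
    "prosecuted": 60,
    "custodial sentence": 4,
}

CHANGE = 0.10  # a 10% change applied at a single level of the funnel

for stage, count in funnel.items():
    # Absolute number of real cases that a 10% change at this level represents.
    print(f"10% change in '{stage}': roughly {count * CHANGE:.0f} cases affected")
```

A 10% reduction in crimes suffered touches roughly 100 real events; a 10% improvement in detections touches about 10.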

3: Public perception of the police has little to do with crime

Graph no. 3. Going back nearly 20 years, it is clear from the graph that violent crime in the U.S. continued to decline right through to at least 2013. Things have changed in the last year or two, but that isn’t the point here. Public perception, in terms of confidence in the police, does not seem to be tied to the crime rate. Confidence in the police is pretty high, especially compared to nearly every other occupation (except the medical field). After all, in June this year it was found that only 27% of Americans say they have “a great deal” or “quite a lot” of confidence in newspapers, so police are doing pretty well. The graph shows, however, that there isn’t a direct and easy correlation between the crime rate and confidence in the police: public confidence is more complicated than just crime reduction. This graph is from a forthcoming book I am writing called “Fighting Crime”, a crime reduction guide for mid-level police command staff.

4: Policing is changing drastically

Graph no. 4 isn’t even about policing, but it is about external policy impacts on policing. When I joined the police service in 1984 and started policing on the old H district in East London, we had to deal with a few behavioral health patients because St. Clement’s mental health hospital was a stone’s throw from the nick. So perhaps we were immune to the gradual change in the social structure of society, a change that has affected the U.S. as well. Graph no. 4 is from the U.S. Substance Abuse and Mental Health Services Administration.

Thirty years ago, nearly two-thirds of U.S. mental health spending was on inpatient and residential care: in other words, professionally trained carers. Over the last 30 years this has fallen to about one-third of national spending. As graph no. 4 shows on the right, the corresponding increase has not been in outpatient treatment, but in retail prescription drugs. This has essentially shifted the supervision of people having a behavioral health crisis onto the community – and the police.

It is indicative of just one change in society that has made demands on the police. Police officers are not really trained to be mental health professionals, and yet society has demanded they take a role in the care and control of people in behavioral health crises because society seems unwilling to pay for appropriate professional care. Behavioral health is just one factor, and students can usually identify many other areas where the police are now the lead agency on issues that were never originally police matters.

Source: This is from SAMHSA data and screen captured from a PowerPoint I use when consulting and training police departments on intelligence-led policing.

5. Policing is increasingly a safer occupation

This is one graphic that I confess I didn’t create myself, but it tracks with my research in this area and mimics many similar charts online. It is sourced from the Officer Down Memorial Page. While things have changed in the last couple of years, the good news is that policing in the U.S. has become steadily safer (as has society) over the last 30 years.

What I find interesting is that increases in police fatalities are linked to government policy changes that alter the role of police in society. The overarching point remains that policing has become a much safer occupation over the last 30 years or more; it no longer features in the top 10 list of dangerous occupations (this next graph from WaPo isn’t counted in my list – it’s a freebie).

6: Policing is most effective when focused on specific people and places

OK, this one takes a bit to get your head around. Graph no. 6, the Evidence-Based Policing Matrix from Cynthia Lum and colleagues, has three dimensions and four symbols. The key symbol is the black circle – this indicates an effective intervention (white is ineffective, grey is a mixed result, and the red triangle is a harmful intervention). The black circles are most concentrated where interventions focus proactive activity on specific people, or on places at the neighborhood level or smaller. This (and other evidence) suggests that general interventions applied across whole jurisdictions or cities are less effective.

The bottom line? Police interventions that actually reduce crime are locally focused and tailored to specific problems, not citywide interventions.

Are there more graphics that depict the reality of 21st century policing? Of course, but these seem to me the most significant. Ping me if you have others you think are relevant and important.

Year-to-date comparisons and why we should stop doing them

Year-to-date comparisons are common in both policing and the media. They compare the cumulative crime count for the current year, up to a certain date, with the count at the same point in the preceding year. For a Philadelphia example from April of this year, NBC reported that homicides were up 20 percent in 2017 compared to 2016. You can also find these types of comparison in the Compstat meetings of many police departments.

To gauge how reliable these mid-year estimates of doom-and-gloom are, I downloaded nine years (2007-2015) of monthly homicide counts from the Philadelphia Police Department. These are all open data, available here. I calculated the overall annual change as well as the month-by-month cumulative change from year to year. In the table below you can see a row of annual totals in grey near the bottom, below which is the target prediction as a percentage of the previous year (white text, blue background). For example, the 332 homicides in 2008 were 14.7% lower than the previous year, expressed in 2007 terms.

Let’s say we can tolerate a prediction that is within 5 percent, plus or minus, of the eventual difference between this year and the preceding year. That stipulates a fairly generous 10 percentage point range, as indicated by the Low and High rows in blue.

Each month you can see the percentage difference between the indicated year-to-date at the end of the month, and the calendar year-to-date (YTD) for the same period in the previous year. So for example, at the end of January 2008 we had 21.9% fewer homicides than at the end of January 2007. By the time we get to December, we obviously have all the homicides for the year, so the December percentage change exactly matches the target percentage difference.
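For anyone who wants to reproduce this from the open data, here is a minimal pandas sketch of the calculation (the column names year, month and homicides are hypothetical placeholders, not the Philadelphia field names): the cumulative count through each month is compared with the cumulative count at the same point in the previous year.

```python
import pandas as pd

def ytd_change(monthly: pd.DataFrame) -> pd.DataFrame:
    """Calendar year-to-date percent change versus the same point last year.

    `monthly` is assumed to have one row per month with (hypothetical)
    columns: year, month, homicides.
    """
    df = monthly.sort_values(["year", "month"]).copy()
    df["ytd"] = df.groupby("year")["homicides"].cumsum()

    # Line each month up with the same month of the previous year.
    prev = df[["year", "month", "ytd"]].copy()
    prev["year"] += 1                                  # shift last year forward
    prev = prev.rename(columns={"ytd": "ytd_prev"})
    df = df.merge(prev, on=["year", "month"], how="left")

    df["pct_change"] = 100 * (df["ytd"] - df["ytd_prev"]) / df["ytd_prev"]
    return df
```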

Cells highlighted with a green background have a difference on the previous year that is within our +/- 5 percent tolerance. Looking at the end of each January, only one year (2012) had a percentage difference that was within 5 percent of how the city ended the year. The 57% increase in January 2011 was considerably different from the eventual 6% increase over 2010 at the end of December. When Philadelphia Magazine dramatically posted “Philly’s Murder Rate is Skyrocketing Again in 2014” on January 14th of that year, the month did indeed end up nearly 37 percent over 2013. But by year’s end, the city had recorded just one homicide more than the preceding year – a far less dramatic increase of 0.4%.

In fact, if we look for the first month where the difference falls within our 10% range and every later month remains consistently accurate through to the end of the year, we have to wait until the months shown with a border. 2009 performed well; 2010 was fairly accurate throughout the summer, yet the cumulative totals in September and October were more than 5% higher than the previous year, even though the year ended only 0.3% higher.

To use calendar YTD comparisons with any confidence, we have to wait until the end of October before we can be more than 50% confident that the year-to-date is indicative of how we will enter the New Year. And even then we still have to be cautious. There was a chance at the end of November 2010 that we would end the year with fewer homicides, though the eventual count crept into increase territory.

The bottom line is that with crimes such as homicide, we need not necessarily worry about crime panics at the beginning of the year. This isn’t to say we should ever get complacent and of course every homicide is one too many; however the likely trend will only become clear by the autumn.

Alternatives exist. Moving averages seem to work okay, but another alternative I like is to compare a full rolling 12-month total to the 12 months before it. So instead of (for example) comparing January-April 2010 to January-April 2009, you could compare the May 2009-April 2010 total against the May 2008-April 2009 total. I’ve done that in the red graph below. The first available point is December 2008 and, as we know from the previous table, the preceding 12 months had outperformed the calendar year 2007 by 14.7%. But then each subsequent month measures not just the calendar YTD but a rolling 12-month total.

The result is a graph that shows the trend changing over time from negative (good) territory to positive (bad for homicides, because it shows an increase). Not only do you get a more realistic comparison that is useful throughout the year, you can also see the changing trend. Anything below the horizontal axis is good news – you are doing well. Above it means that your most recent 12 months (measured at any point) were worse than the preceding 12 months.

You can also have overlapping comparison periods. The graph in blue below compares the accumulated count over 24 months with the 24-month total ending a year earlier. For example, the first point available is December 2009. This -11.7% value compares the total homicides for the 24 months from January 2008 to December 2009 with the 24-month total for the period a year before it (January 2007 to December 2008). For comparison purposes, I have retained the same vertical scale, but note the change in horizontal axis.

You can see there is more smoothing, but the general trend over time is still visible. Lots of variations are available, and you might want to play with different options for your crime type and crime volume.
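Continuing the same hypothetical DataFrame, a rolling version of both graphs takes only a few lines: the window is 12 months for the red graph and 24 for the blue one, and in both cases the comparison period ends 12 months earlier, so the 24-month windows overlap. This is a sketch under those assumptions, not the exact code used for the graphs.

```python
import pandas as pd

def rolling_change(monthly: pd.DataFrame, window: int = 12, shift: int = 12) -> pd.Series:
    """Percent change of the trailing `window`-month total versus the
    same-length window ending `shift` months earlier.

    window=12 mirrors the red graph; window=24 (still shifted by 12 months,
    so the periods overlap) mirrors the blue one.
    """
    counts = monthly.sort_values(["year", "month"])["homicides"].reset_index(drop=True)
    current = counts.rolling(window).sum()   # trailing total ending this month
    previous = current.shift(shift)          # the same measure a year earlier
    return 100 * (current - previous) / previous
```

Anything the function returns below zero corresponds to the “good news” territory described above.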

Classic experiments in evidence-based policing

So far, it’s been a fun semester teaching evidence-based policing for the first time. We have covered everything from evidence-based medicine to research design and the Maryland Scientific Methods Scale, and even some basic stats so that we can understand confidence intervals. It’s been particularly rewarding to see students who have spent years in policing exploring and learning about the world of research evidence that supports and helps their world, a world of which many have been until now unaware.

What I am also learning is that those of us in the police education field have done a lousy job of explaining what we do and why it is important to advancing policing and the practice of law enforcement. There is a range of classic studies that are not well known, and an absence of knowledge around these – and other important works – fuels the never-ending cycle of operational decisions that fly in the face of all we know about what works, and what doesn’t. Police still support strategies and crime reduction tactics that are known to not work.

In light of this, I started putting together a list of experiments that I thought my students should be aware of. The original studies are described in a range of works, from academic journal articles to long-winded reports – all pretty impenetrable for most folk, especially busy cops. So I have condensed the key pieces of information into a single page per study, copied directly from the original sources. I cite them at the bottom of each page so you know the source.

This isn’t an exhaustive list, and I intend for it to grow, but for now the list comprises:

  1. The Kansas City Preventive Patrol Experiment
  2. The Newark Foot Patrol Experiment
  3. The Philadelphia Foot Patrol Experiment
  4. The Minneapolis Domestic Violence Experiment
  5. The Minneapolis Hot Spots Policing Experiment
  6. The Philadelphia Policing Tactics Experiment
  7. The Sacramento Hot Spots Policing Experiment
  8. The Queensland Procedural Justice Experiment

I will add to these over time, but for now if you want a copy, download a pdf of the one page summaries.

Note: If you are using these summaries to write a college paper, you should refer to the original study and cite it appropriately. All I have done is edit a copy-and-paste, but I’m 1) not writing a term paper and 2) not passing this off as my own work. If you do, that’s plagiarism. 

The Modifiable Areal Unit Problem

The Modifiable Areal Unit Problem (MAUP) is a potential source of error that can affect spatial studies which utilize aggregate data sources (Unwin, 1996). Geographical data are often aggregated in order to present the results of a study in a more useful context, and spatial objects such as census tracts or police beat boundaries are examples of the type of aggregating zones used to show results of some spatial phenomena. These zones are often arbitrary in nature and different areal units can be just as meaningful in displaying the same base level data. For example, it could be argued that census tracts containing comparable numbers of houses are better sources of aggregation than police beats (which are often based on ancient parish boundaries in the UK) when displaying burglary rates.

Large amounts of source data require a careful choice of aggregating zones to display the spatial variation of the data in a comprehensible manner. It is this variation in acceptable areal solution that generates the term ‘modifiable’. Only recently (well, the last 30 years) has this problem been addressed in the area of spatial crime analysis, where ‘the areal units (zonal objects) used in many geographical studies are arbitrary, modifiable, and subject to the whims and fancies of whoever is doing, or did, the aggregating.’ (Openshaw, 1984 p.3).

As the study area for crime incident locations has effectively infinite resolution, there exists a potentially infinite number of different options for aggregating the data. Numerous administrative boundaries exist, such as enumeration districts, wards, counties, health authority areas, etc. Within modern GIS, it is an elementary task to automatically generate a huge variety of non-overlapping boundaries. Regular, often square, grids are common, though polygons have been used in other studies of crime distribution (Hirschfield et al., 1997). The number of different combinations of areal unit available to aggregate data is staggering. Openshaw (1984) calculated that if one were to aggregate 1,000 objects into 20 groups, one would be faced with approximately 10^1260 different solution combinations. Although there are a large number of different spatial objects and ways in which a large geographical area can be sub-divided, the choice of areal units tends to be dominated by what is available rather than what is best. Police crime data are often mapped to police beats, even when the information is passed to outside agencies such as neighborhood watches or local councils who might benefit from more relevant boundary structures.

The MAUP consists of both a scale and an aggregation problem, and the concept of the ecological fallacy should also be considered (Bailey and Gatrell, 1995). The scale problem is relatively well known: it is the variation that can occur when data from one scale of areal units are aggregated into larger or smaller areal units. For example, much of the variation in census areas changes or is lost when the data are aggregated to the ward or county level.

The aggregation problem is less well known and becomes apparent when faced with the variety of different possible areal units for aggregation. Although geographical studies tend towards aggregating units that share a geographical boundary, it is possible to aggregate spatial units that are spatially separate. Aggregating only neighboring units reduces the problem to a small degree, but does not get around the sheer quantity of possible combinations that remains.
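A toy demonstration of the scale effect is straightforward. The sketch below (mine, with simulated points rather than data from the studies cited here) aggregates the same incidents to two equally arbitrary grid sizes and shows how the apparent concentration changes with the scale of the areal units.

```python
import numpy as np

# Simulated incidents in an arbitrary 10 x 10 study area.
rng = np.random.default_rng(42)
points = rng.uniform(0, 10, size=(500, 2))

def cell_counts(points, cell_size):
    """Count incidents per square grid cell of the given size."""
    cells = np.floor(points / cell_size).astype(int)
    _, counts = np.unique(cells, axis=0, return_counts=True)
    return counts

for size in (5.0, 1.0):                       # two equally arbitrary aggregation scales
    counts = cell_counts(points, size)
    print(f"{size} x {size} cells: max count {counts.max()}, "
          f"mean {counts.mean():.1f}, cells with incidents {len(counts)}")
```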

For a paper that discusses the MAUP and possible solutions, see:
Ratcliffe, J. H. and McCullagh, M. J. 1999 ‘Hotbeds of crime and the search for spatial accuracy’, Geographical Systems 1(4): 385-398. Paper available here.

Also see the Ecological Fallacy.

References:

Bailey, T. C. and Gatrell, A. C. 1995 Interactive Spatial Data Analysis, Second Edition: Longman.

Hirschfield, A., Yarwood, D. and Bowers, K. 1997 ‘Crime Pattern Analysis, Spatial Targeting and GIS: The development of new approaches for use in evaluating Community Safety initiatives.’, in N. Evans-Mudie (ed) Crime and health data analysis using GIS, Sheffield: SCGISA.

Openshaw, S. 1984 ‘The modifiable areal unit problem’, Concepts and Techniques in Modern Geography 38: 41.

Unwin, D. J. 1996 ‘GIS, spatial analysis and spatial statistics’, Progress in Human Geography 20(4): 540-551.

The ecological fallacy

The ecological fallacy is a situation that can occur when a researcher or analyst makes an inference about an individual based on aggregate data for a group. For example, a researcher might examine the aggregate data on income for a neighborhood of a city, and discover that the average household income for the residents of that area is $30,000.

To state that the average income for residents of that area is $30,000 is true and accurate. No problem there. The ecological fallacy can occur when the researcher then states, based on this data, that people living in the area earn about $30,000. This may not be true at all, and may be an ecological fallacy.

Closer examination of the neighborhood might discover that the community is actually composed of two housing estates, one of a lower socio-economic group of residents, and one of a higher socio-economic group. In the poorer part of town, residents earn on average $10,000 while the more affluent citizens average $50,000. When the researcher stated that individuals who live in the area earn $30,000 (the mean), this did not account for the fact that the average in this example is constructed from two disparate groups, and it is likely that not one person earns around $30,000.
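The arithmetic is easy to check with a toy sketch of the example above (two equal-sized, entirely hypothetical groups of households):

```python
# Two equal-sized, hypothetical groups of households.
poorer = [10_000] * 100
affluent = [50_000] * 100
neighborhood = poorer + affluent

average = sum(neighborhood) / len(neighborhood)
print(f"Neighborhood average income: ${average:,.0f}")                  # $30,000
near_the_mean = sum(25_000 <= income <= 35_000 for income in neighborhood)
print(f"Households actually earning near $30,000: {near_the_mean}")     # 0
```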

Assumptions made about individuals based on aggregate data are vulnerable to the ecological fallacy.

This does not mean that identifying associations between aggregate figures is necessarily defective, and it doesn’t necessarily mean that any inferences drawn about associations between the characteristics of an aggregate population and the characteristics of sub-units within the population are absolutely wrong either. What it does say is that the process of aggregating or disaggregating data may conceal variation that exists within the sub-units but is not visible at the larger aggregate level, and researchers, analysts and crime mappers should be careful.

 

Harm-focused policing

On 28th January, 2015 I gave the Police Foundation‘s Ideas in American Policing lecture on the topic of harm-focused policing. This brief blog provides some background details to the talk. Please note that for a number of reasons (including photograph copyright) I am not distributing copies of the PowerPoint slides.

Harm-focused policing weighs the social harms of criminality and disorder with data from beyond crime and antisocial behavior, in order to focus police priorities and resources in furtherance of both crime and harm reduction.

Example information and data sources could include drug overdose information that could help triage drug markets for interdiction, traffic fatality data to guide police patrol responses, and community impact assessments to prioritize violent street gangs. For a summary of the core of the presentation and a grey-scale version of some of the graphics, please see:

Ratcliffe, J. H. (2015). Towards an index for harm-focused policing. Policing: A Journal of Policy and Practice, 9(2), 164-182.

You can visit the journal site and access the paper here (or here) and watch my annotated video of the lecture below (you might want to make it full screen so you can read the slides).

During the presentation, I had a couple of quotes. Here are the quotes and their sources.

“to establish priorities for strategic criminal intelligence gathering and subsequent analysis based on notions of the social harm caused by different sorts of criminal activity”. The source for this is page 262 of Ratcliffe, J. H. & Sheptycki, J. (2009) Setting the strategic agenda. In J. H. Ratcliffe (ed.) Strategic Thinking in Criminal Intelligence (2nd edition) Sydney: Federation Press.

“Weighting crimes on the basis of sentencing guidelines can be justified on good democratic grounds as reflecting the will of the people. … it remains far closer to the will of the people than any theoretical or even empirical system of weighting that academics might develop.” The source for this is Sherman, L. W. (2013). Targeting, testing and tracking police services: The rise of evidence-based policing, 1975-2025. In M. Tonry (Ed.), Crime and Justice in America, 1975-2025. Chicago: University of Chicago Press. Page 47.

 

Why we shouldn’t fixate on homicide numbers

There are some certainties in life. Death, taxes, the Eagles snatching defeat from the jaws of victory. And the annual January media fixation with homicide rates as the barometer of everything from a city’s moral compass to the effectiveness of the police chief.

I spent a couple of days speaking to various reporters about the homicide numbers in Philadelphia, and how they were significantly down on a few years ago, but had remained largely unchanged since last year. ‘What could we gather from this?’ ‘What were the implications?’ ‘Were police department strategies starting to falter?’ ‘What does it mean for the mayor and police commissioner?’

Taking more time than I really had, given I am trying to update ‘Intelligence-Led Policing’ for a second edition, I tried to explain that the homicide figures are a really bad choice of metric for just about anything. For example, a not insubstantial number of homicides occur between people who know each other, and often take place indoors. How is a police department supposed to anticipate and prevent those homicides? Even if they develop a ‘Minority Report’ predictive capacity, we have a reactive legal and criminal justice system: it isn’t keen on letting the police just wander into your house and lock you up for pondering murder. And the homicides that take place on the street? Having sat in on numerous Philadelphia Police Department crime briefings and listened to the homicide reports, it is clear to me that many result from minor disputes that flare up with little-to-no warning, or from disputes between participants in gangs or drug organizations who conceal their business and would never seek the intervention of the police.

The difference between a homicide and an aggravated assault is also largely outside of police control. It could be that the shooter has lousy aim or is firing gangster style, that there is a delay in getting the victim to the hospital, or simply medical mismanagement. Once a person decides to shoot someone else, they are easily able to do so in the US because we allow them the opportunities. Our legislators seem unwilling to help the police with this, so again, there is little chance for police influence here.

I examined a summary of every incident recorded by the Philadelphia Police for the last available full year (2013) to estimate how much police patrol energy is expended on responding to homicide incidents. In Philadelphia, the city receives millions of calls for service, and from these – as well as police-generated activity – an INCT database is created. This database contains every incident where a police officer was required to act, and ranges from dog bites and graffiti to shootings and homicides, and from assistance to city agencies and delivering messages, to removing debris from the interstate or arresting a drunk driver. In 2013 there were in excess of 1.65 million incidents. What percentage of these related to homicide? 0.021%. Less than one quarter of one tenth of one percent.

I explained to the reporters that aggravated assaults and robberies were also down, and because of their generally greater number, these were a much better way to indicate the crime health of the city. They said they got it, but their hands were tied: “the public interest is in homicides”. So we still got story after story about the homicide rate. Not a major grumble: reporters have to make a call and write what they think is the story. But I wonder if the fascination with homicides is really driven by public demand, or by the media? I can’t believe there was massive public outcry that drove the claim that “Philly’s Murder Rate Is Skyrocketing Again in 2014”… especially only two weeks into the year (you had to go back and check the post date, didn’t you?).

Traditionally, homicides have been used because they are easily comparable between cities, because police departments have recorded other incidents in different ways, or because in the past (sometimes not so distant) the police have distorted the crime figures. But homicides comprise so little of the work of a police agency, and the chances of most people being a victim of homicide are so low, that they tell us little about the experienced crime rate or the quality of life for city residents.

We need to start moving to more holistic measures if oversight and strategy are to be more data driven and evidence based. Harm-focused policing that examines and weighs all incidents, and includes other harms to communities, such as traffic accidents or the potentially deleterious impact of unrestrained pedestrian investigations, is increasingly possible with the big data sets that public agencies generate. We need to evolve beyond our fixation with homicide if we are to move the discussion about safety and harm forward.

But in the meantime, Philadelphia, be glad that shootings and robberies are also down.

(This post was updated shortly after posting to correct the homicide incident rate)

Schrödinger’s crime hotspot

Attendance at the recent 2014 American Society of Criminology conference brought a chance to catch up with friends and observe a couple of splendid presentations (and quite a few awful ones). A couple of sessions in particular reaffirmed to me the gulf between some academic criminology and public policy. I watched as speakers attempted to parse in ever-increasing detail the boundaries of crime hotspots. Discussions continued around the efficacy of street blocks as potentially more accurate units of analysis compared to census block groups, and I could see the accuracy issue being a hot topic in predictive policing workshops. Hotspot boundary definition appeared to be the end in itself.

As my colleague Ralph B. Taylor has argued, “hot spots exist in the data world but not the real world” (Taylor, 2009)¹. In this they are unlike land use parcels, behavior settings or street blocks – places that exist in both the data and real worlds. He goes on to contend that “hot spots are amalgams of different types of locations” and we therefore have a construct validity problem. Simply because we see a cluster of events does not mean we have a new entity (a crime hotspot); rather, we have a collection of events that exists as a cohesive entity only in the abstract world. To think otherwise is to commit a reification fallacy (Gould, 1981). When we move to the real world, and think about operationalizing a strategy to address our hotspots, things can unravel as it becomes clear that this collection of points exists for different reasons, each of which needs addressing.

I thought of Ralph Taylor’s comments as I sat in the audience, and pondered the analogy between crime hotspots and Schrödinger’s cat. Erwin Schrödinger’s feline thought experiment was designed to expose a flaw in the Copenhagen interpretation of quantum superposition, because it leaves the cat in a paradoxical state of being simultaneously alive and dead. The hypothetical animal is placed in a steel box with a Geiger counter, a vial of poison, a hammer, and a small amount of radioactive substance – small enough that there is only a 50/50 chance of a decay being detected over the course of an hour. If a decay is detected by the counter, the hammer is triggered to smash the vial, release the poison and kill the cat. Only by looking in the box can the observer determine whether the cat is alive or dead. The observer opens the box with the express intent of confirming the wellbeing of the cat; until the box is opened, the cat’s condition is unresolved and abstract. In the same way, crime hotspots are in a largely abstract state until we look at them from a particular viewpoint.

This brings me to two points that appeared rather lost on some of the conference speakers. First, crime mapping and the application of GIS to crime problems is not the end of the analysis – it is the start of it. Digital cartography is a necessary abstraction of the real world, and to think otherwise is to be oblivious to the classification, simplification and symbolization that takes place. It is through these processes that unrelated events can often be made apparently similar. The nighttime beating and robbery of a drug dealer will often be classified in the same manner as the punch a school child receives as they are relieved of their smart phone by a classmate. In a map of crime for the year, these events will likely be cartographically identical, but unlikely to be prevented in the future with the same response. Just because two crimes share geographic proximity, doesn’t mean they necessarily share a common cause (a point I’ve made elsewhere).

This brings me to a related second point. Crime hotspots (in the abstract world) are only made real when they are mapped for a purpose. When a police captain asks for a map of robbery hotspots, the captain is bringing a purpose to the analysis. He or she wants to deploy a surveillance team, task a crime prevention officer, or know where to assign more foot patrol officers. An academic, meanwhile, might want to seek underlying causal factors and understand why crime concentrates in certain areas. With the knowledge of the eventual purpose, a proficient analyst can create tailored hotspots that map to the parameters of the user’s needs. The maps would be different, but no less useful. Our captain brings a lens through which he or she interrogates the hotspot, and by looking at a map of crime hotspots they are made real and are understood. It is the captain, not the analyst, who opens the box.²

Both the captain and the academic bring to the analysis a predetermined purpose, and although different, the crime hotspots that each uncovers are equally valid. In opening the box, and staring at the map through the lens of a proposed application, crime hotspots are made real and understood. At this point, they can serve a purpose. But until then, they remain in an abstract state and their accuracy, or even their state as being alive or dead as viable entities, remains unknown. Like Schrödinger’s cat.

Works cited

Taylor, R. B. (2009). Hot spots do not exist, and other fundamental concerns about hot spots policing. In N. Frost, J. Freilich & T. Clear (Eds.) Contemporary Issues in Criminal Justice Policy: Policy Proposals from the American Society of Criminology Conference (pp. 271-278). Belmont, CA: Cengage/Wadsworth.

Gould, S. J. (1981). The Mismeasure of Man. New York: Norton.

Note

¹ And continues to discuss in greater detail in his forthcoming book, Taylor, R. B. (2015) Community Criminology. New York: New York University Press.

² In discussing this with John Eck he added the suggestion of a Schrödinger’s policy, where two policies sit in a box and are both in a state of existence, until someone looks at the data. Then only one policy becomes viable and exists.

Jerry’s top ten crime mapping tips

Some tips for crime mappers…

Tip 1: Include a scale bar. A map is all about geography, and what is the point if a map reader cannot tell how far one place is from another? Use sensible numbers such as 0 – 5 – 10 miles, and adjust your scale bar accordingly. Be careful when using automatic scale bars; they are rarely spot on first time. If you are presenting to an international audience, they will appreciate a map showing miles and kilometers. 5 miles is close enough to 8 kilometers for presentation purposes.

Tip 2: Include a North arrow. It may not seem much, but it takes up very little room, is easy to do and does help a few viewers. Some people say there is a limit to the value of North arrows. For example, it is probably not necessary to show a North arrow on a map of the US; but if in doubt, include a North arrow.

Tip 3: Simple and clear titles. Don’t forget a title for your map, and use a simple one that means something to a range of people. You never know who will use your map later on and may misinterpret what they are seeing (alas, I speak from experience). When deciding on a title, use the KISS principle (Keep It Simple Stupid!). Often the type of crime, the place, and the date range is enough, but sometimes a more provocative title garners attention. An explanatory sub-title can be helpful.

Tip 4: Use color carefully. Color is a marvel that should be used sparingly. Think about how appropriate your color use is, and use color for those things that you want to emphasize. Strong colored backgrounds tend to destroy any hope of seeing symbols on top of them. Try to use paler (more insipid) backgrounds to shade regions (if they have to be shaded at all), with dark or bold bright symbols over them. Don’t be afraid to experiment (and make improvements), but remember the maxim “less is more”. You might also want to view the companion web pages on color and presentations.

Tip 5: Really understand your data. Know what you are presenting, and understand the limitations of the data. This is especially true if you are presenting maps of data created by someone else. Having a map showing minute by minute burglary patterns is useless if your burglary data (like most) has start and end times hours or days apart. Repeat victimization is also a real consideration and most GIS will simply place one dot on top of another. Do something about this, or at least be ready to explain to your audience, either in a caveat or in person.

Tip 6: Use thematic mapping cautiously. Thematic mapping really simplifies what used to be a complex procedure, but the automatic settings used by most GIS still leave a lot to be desired. When making maps using ‘quantiling’ or ‘equal count’, for example, the automatic settings tend to produce categories with fractional break values that, while technically accurate, mean little to most viewers. Be prepared to customize them to more sensible values. GIS are also stupid in that they will let you make choropleth maps of things that should just not be mapped in that way, so the automatic features should not be invoked without understanding your data and the thematic map processes. Graduated symbols such as circles should only be used to denote increasing values of something (e.g. value of goods stolen, or number of burglaries at a location), while area shading is preferable for showing rates (assaults per 100,000 people) rather than raw counts (because the map can be skewed by different sized areas and therefore populations).
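To illustrate the class-break point, here is a minimal sketch with simulated data (my own example, not tied to any particular GIS): the automatic quantile breaks come out as unreadable fractions, and rounding them produces values a map reader can actually use.

```python
import numpy as np

# Simulated burglary counts for 200 areas (skewed, like most crime data).
rng = np.random.default_rng(7)
burglaries_per_area = rng.lognormal(mean=2.0, sigma=1.0, size=200)

# Automatic quantile ('equal count') breaks: technically accurate, unreadable.
auto_breaks = np.quantile(burglaries_per_area, [0.2, 0.4, 0.6, 0.8])

# Rounded to whole numbers (or 5s/10s for larger counts): something a reader can use.
sensible_breaks = np.round(auto_breaks)

print("automatic quantile breaks:", auto_breaks)
print("breaks a reader can hold in their head:", sensible_breaks)
```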

Tip 7: Legends are generally essential. A legend is essential if you have any type of shading or symbology. It will also help you remember what the map portrays months later. Use sensible numbers: 1 to less than 5 means something to most people, while 1.000325 to 2.4352 is in the realm of nonsense unless you have a very technical audience (in fact, I know a few technical people and even they get offended by this). If you have complex numbers, you could always relabel the scale to run from ‘low criminal activity’ to ‘high criminal activity’ (or similar) and lose the numbers. The audience will appreciate it.

Tip 8: Caveats mean you are not lying. To make a map with a title saying “Melbourne burglaries, 2014” implies that you are mapping all of the recorded burglaries. However, it is still an unfortunate reality that geocoding rates are rarely 100%, and you should tell the reader the real rate, along with any other caveats. This is especially the case if the geocoding hit rate is less than 85%. It saves embarrassing questions later, when someone points to an area of known burglaries that is featureless due to geocoding problems. You should append the caveat to the map itself, as maps and text can often be separated by others, either by accident or nefarious design.

Tip 9: Limit the information you show. As map complexity increases, a limit is reached beyond which map comprehension in the reader actually decreases. Sometimes it might be better to produce two or more maps instead of one monster that loses all meaning. A person can differentiate about 5 different types of symbol at a glance, and any more needs to be constantly checked back to the legend. Why put them through that? Also consider the function of additional features such as national parks, public toilets & railway lines. Are railway lines relevant and therefore necessary to your map? If you suspect that burglars are using them to gain access to properties then perhaps yes, but they are hardly relevant for a map of drug sale locations.

Tip 10: Check the map appearance in grayscale. If your map is a real success then it will be copied and disseminated – this is the real mark of success. Unfortunately, until color copiers become standard, you should run your map through a photocopier to see what comes out the other end. This will give you an idea of what becomes indistinguishable or illegible after reproduction. Small, italicized text is particularly vulnerable, as are similar shades of color.

Ten ways to make your crime maps more ‘interesting’

At various conferences and visits to police stations I have seen quite a few maps, and some have been great – really well presented, laid out and prepared. Alas many are awful, so I’ve put this page together as a brief guide for those perhaps less versed in the cartographic ways. You do not have to adhere to the guidelines here, but they might improve the readability and quality of your maps. If you do not follow these suggestions, ArcGIS will not self-destruct in a fit of cartographic rage – but this is part of the overall problem. The software does not understand your data and will let you do just about anything you want – even if it is wrong.

Note: If you are a Computer Science major and confused by the concept of sarcasm, feel free to click over to the vanilla version for you earnest types.

Tip 1: Do not include a scale bar. This will make it much more interesting, as your map readers have to guess the distance between objects. One of the main aims of mapping crime is to compare areas and examine the proximity of objects, so why make it easy for the uninitiated to understand your map? Without a scale bar nobody will have a clue how far apart things are, and this gives you the opportunity to have impromptu quizzes or make things up as you are presenting. If you accidentally include a scale bar, use a scale that goes “0 —– 6.75 ——13.25 kilometers” instead of the usual “0 —5 —10” or similar. Big complex numbers really impress audiences.

Tip 2: Do not include a North arrow. Hundreds of years of cartographic tradition have no place in the new millennium – we are in the digital age and therefore all maps automatically have North at the top: even if we have to rotate the map to get it to fit on the page. Anyway, if you have visitors from outside your suburb, city, or country, why should they want to know in which direction is North so they can orientate themselves? They probably are not interested anyway.

Tip 3: Use jargon and special codes in the title of the map. Including special codes and police service jargon in the title of your map will make it, and you, look more professional. A title such as “B-type crimes for sectors GF and YTU for shifts R4 and R5” really impresses audiences. Make sure you also use dates in a mixture of European and American format at international conferences (without telling the audience which you are using). After all, 10/10/00 is the same either way, and what can the rest of the world teach us? Better still, don’t have a title at all (or have one that warbles on for three or more complete lines), and never put your name on the map – that way there is nobody to blame.

Tip 4: Find the color palette, and use every one. Color is what maps are all about. Use as many colors as you can find. There really are no rules about inappropriate choices of color, so bright cheerful pinks are fine for displaying child murder sites. If you have interesting symbols at particular places (such as body dump locations), try to de-emphasize them by making the background color glaring and bright. This will detract from your murder sites and make the viewer only see the underlying light industrial land use – much more important. Other features such as roads and railways and national parks are probably not very relevant, but they fill up the map nicely, so give them a bold, bright color. This will further detract from your important points and distract the viewer – making them think there is less crime.

Tip 5: Don’t worry too much about understanding your data. The important thing is the presentation and the display. Don’t worry about showing maps with dots all over the place, often obscuring other dots. The audience will get the general picture and do not need complicated things like graduated circles to show how many crimes have occurred at the same place. This is just pedantic mapping for airy-fairy academics. And certainly do not worry about it when showing maps of repeat victimization. Also, if you can only geocode to the level of a zip code, still show the viewers the very best detail you can, right down to the street corner. Let “spurious accuracy” be your guide.

Tip 6: Make the most of thematic mapping. Those automatic thematic map menu items are there to be used as much as possible, and negate the need to really understand what they do. After all, it always looks great so it must be right! If you have a numerical date variable, use the graduated circle. I especially like those maps that show the time of day of an offence as a graduated circle. The bigger the circle – the later in the day. Stellar cartography right there. Another one is to use the bar graph function when comparing big numbers and little numbers. You can never see the little bars unless your nose is against the screen – how that one makes us all laugh in my office.

Tip 7: Legends have had their day. In bygone years there was a time for legends, but that age has most definitely passed. In modern cartography – especially for presentations, you will be there to explain the symbols and the values associated with different colors. And if you forget, don’t worry – the audience will understand. Hey, we’ve all done it. If you have to be passé and include a legend, then for color shaded areas use impressive looking numbers such as “3.01453 to less than 6.03215”, instead of “3 to less than 6”. This will impress the audience no end as you obviously have a grasp of quantum arithmetic.

Tip 8: Caveats look weak. You are out there to impress with your map. Having a caveat, especially for the geocoding rate, looks weak and as if you have not put enough effort in. To suggest that you have not been able to map every point will make you look bad next to all the other crime mappers who must obviously be better at it than you. After all, how can you make a good impression if you have data error? Being misleading is just helpful.

Tip 9: Get as much information onto a map as possible. Maps take time to produce so it is important to squeeze in as much information as possible. This is especially true for maps with symbols. Try and use more than 5 different types of symbol on a map, and ideally make them roughly the same size and color. It would be unfair to give added weight to one, so make them as indistinguishable as possible. If you can make them illegible from the back of a room on a PowerPoint presentation then that also helps because it makes people have to concentrate and come closer.

You could also try to disguise unhappy symbols like the locations of assaults and murders with unrelated symbols such as public lavatories and libraries. This can of course work both ways: it might suggest that there have been a lot of robberies, but most of your viewers will be fooled into thinking your area is well stocked with public utilities.

Tip 10: Never let anyone photocopy your map. A map is a work of art and should never be disseminated – ever. Stick it on the wall in the office, and use it in presentations but never let it out of your sight. Someone might take it down from the wall and photocopy it – ruining the point of the thing. Worse, they might actually use it to make a good public safety decision. The best way to teach them not to use your work is to make symbols and background colors roughly the same shade. A mid-blue and a mid-red blend nicely in color, producing a pleasing effect on the eye, but are indistinguishable when photocopied into grayscale. That will teach them to steal your maps!