This software was designed for a particular purpose, for research that Eric McCord, Travis Taniguchi and I were conducting in the Department of Criminal Justice at Temple University on crime in Philadelphia (PA) and Camden (NJ). I’m simply sharing it out of the generosity of my spirit to the crime mapping community. I therefore offer no warranty whatsoever as to either the functionality of this software or its suitability for your purpose. While there is no deliberate malicious code in there from me (I can’t speak for Microsoft), please be aware that I offer no warranty and you might have that one-in-a-million hardware configuration that causes this program to erase your hard drive, fry your monitor and electrocute your dog.
There is no point proceeding beyond here if:
- You do not have crime and location data in x and y coordinates. The software was designed to work with x and y coordinates in a Cartesian/projected coordinate system, not with data in lat/long or any other kind of degrees-and-minutes format.
- You have a Mac or an old PC. The software is written in the latest Visual C# architecture from Microsoft, which Macs and older PCs will not be able to cope with. If you do not have the .NET framework installed on your machine, the program will ask to install it when you try to run it. Please do – it will be useful for other applications. Most modern PCs have the .NET framework installed automatically.
- You can’t get data in and out of your GIS in a comma-separated values (*.csv) file. If you are not sure, your GIS program’s help files can explain the process. If your crime events are recorded as shapefiles in ArcGIS, it is easiest to open the dbf file associated with your shapefile in Excel, edit it so that only the x, y and date data remain, and save it as a csv file. Make sure you do not change your original dbf file.
- You are not prepared to accept the caveats mentioned earlier.
- You don’t accept that this is free software without a hint of support.
What does it do?
The traditional approach to a buffer analysis is to choose a buffer distance around an object and count the number of crimes within that buffer. In this example, a bar (shown as a green dot) has a buffer of a few hundred feet around it (the blue disc) and a number of crime incidents inside and outside the buffer (red dots). The traditional approach gives every event inside the buffer a value of 1, and every event outside a value of 0.
The software here provides five different inverse distance weighting approaches, so that instead of a crime count in the buffer, you get a crime intensity. In the graphic shown here, the inverse weighting is quartic, but the program also allows for linear and exponential approaches.
The eventual score for each criminogenic location (the green dot) is the sum of the weightings for all the crime events within the buffer distance. For example, if there were only the three highlighted crimes in this image and no other crime events, the crime intensity score for the location at the green dot would be: 0.02 + 0.98 + 0.53 = 1.53
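The sum-of-weights idea can be sketched in a few lines of Python. This is a minimal illustration, not the program’s actual code: the function names and the example coordinates are mine, and the linear weight shown here (declining from 1 at the location to 0 at the bandwidth) is just one of the five weighting options.

```python
import math

def linear_weight(d, bandwidth):
    """Weight declines linearly from 1 at the location to 0 at the bandwidth."""
    return 1 - d / bandwidth if d < bandwidth else 0.0

def intensity(location, crimes, bandwidth, weight=linear_weight):
    """Sum the weights of all crime events within the bandwidth of a location."""
    lx, ly = location
    total = 0.0
    for cx, cy in crimes:
        d = math.hypot(cx - lx, cy - ly)
        if d < bandwidth:
            total += weight(d, bandwidth)
    return total

# Three hypothetical crimes around a bar at (0, 0), with an 800-foot bandwidth
print(intensity((0, 0), [(100, 0), (0, 400), (600, 300)], 800))
```

Crimes beyond the bandwidth simply contribute nothing to the total, which is why the buffer acts as a hard cut-off.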
What do I have to do?
When you run the program, you should already have two csv files ready. One is a simple x,y list of locations (the green dots in the graphics above). These are the places around which you will measure a crime intensity value. You also need a second csv file, also of simple x,y coordinates, that has the crime events. When you run the program, you will see the following screen (you might be asked to install the Microsoft .NET framework).
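To make the expected input concrete, a locations file is just a plain x,y list, one location per row. The coordinates below are hypothetical; check the pdf manual for whether your export should include a header row or extra columns.

```text
114237,775889
117811,776009
118502,774120
```

The crimes file has exactly the same x,y layout, one crime event per row.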
The program is very simple to operate. Start by clicking the Locations file button and load the csv file with the criminogenic locations of interest (the green dots). Then click the Crimes file button and load the csv file with the crime events.
You now only have to make a couple more decisions. Choose a suitable bandwidth: the limit beyond which points are no longer considered, shown in the images above as a blue disc. The choice of bandwidth is entirely yours. For example, we have examined crime within a couple of blocks of street corners in Philadelphia, and the average Philadelphia block is about 400 feet, so we used a value of 800 feet. Bear in mind, however, that crime events close to the edge of the buffer (say 700 feet away) contribute very little to the crime intensity total, so you might sometimes want to extend the buffer further. The bandwidth simply inherits the distance metric of the source data: if your x,y coordinates are in feet, the bandwidth is in feet; if in meters, the bandwidth is in meters. Don’t mix x and y coordinates from different projections, and don’t use lat/long data.
Your final decisions are the choice of weighting and distance calculation techniques. The weighting technique is one of five; each weights crime events differently according to their distance from the center of the buffer. Distance is calculated either as the crow flies (Euclidean) or using a grid approach (Manhattan); see the manual for more details. Once you have chosen, simply hit Run!
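The two distance options differ only in how the x and y offsets are combined. A minimal sketch (function names are mine, not the program’s):

```python
import math

def euclidean(p, q):
    """Straight-line ('as the crow flies') distance between two x,y points."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

def manhattan(p, q):
    """Grid distance: the sum of the absolute x and y offsets."""
    return abs(q[0] - p[0]) + abs(q[1] - p[1])

a, b = (0, 0), (300, 400)
print(euclidean(a, b))  # 500.0
print(manhattan(a, b))  # 700
```

Manhattan distance is never shorter than Euclidean, so the same bandwidth admits fewer events under the grid approach.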
The program runs quickly, usually in a second or two unless you have huge data files. Other buttons provide quick help on a particular button (?), open the pdf manual, tell you about the version of the program, and exit the program.
The output is simple. The program creates a single output file; your original input files are not changed in any way. The output file starts with some preamble giving the analysis details, including the input filenames and the analysis date/time. A header row follows, and then three values per row: the X and Y coordinates of a location (straight from the Locations file) and the newly calculated Intensity value.
X, Y, Intensity
114237, 775889, 45.7
117811, 776009, 23.1
And so on…
The output filename is based on your Locations file, and the output will end up in the same folder as your Locations data. The filename is your Locations filename with the date and time appended in the format _yyyy-mm-dd_hhmmss.csv, taken from the time the output file is created. So if you ran an analysis on a location file called “locations.csv” on December 25th, 2006, at 2.45pm and 30 seconds, the output file would be: locations_2006-12-25_144530.csv
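The naming scheme can be sketched like this; the timestamp format is from the text above, while the function name is mine and this is not the program’s actual code:

```python
from datetime import datetime
from pathlib import Path

def output_name(locations_path, when=None):
    """Append _yyyy-mm-dd_hhmmss to the Locations filename, same folder."""
    when = when or datetime.now()
    p = Path(locations_path)
    return p.with_name(f"{p.stem}_{when.strftime('%Y-%m-%d_%H%M%S')}.csv")

# Reproduce the example from the text
print(output_name("locations.csv", datetime(2006, 12, 25, 14, 45, 30)))
# locations_2006-12-25_144530.csv
```

Because the timestamp is part of the name, repeated runs on the same Locations file never overwrite each other.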
Choosing a weighting algorithm
- Linear weighting declines smoothly from 1 to 0 as distance increases to the bandwidth. Crime events half the distance to the bandwidth contribute a value of 0.5 to the final intensity calculation.
- Quartic kernel weighting method employs the quartic kernel algorithm from Bailey, T.C. and Gatrell, A.C. (1995) Interactive Spatial Data Analysis (London: Longman). Each quartic kernel value is scaled up by multiplying the value by 10,000. Crime events half the distance to the bandwidth contribute an approximate value of 0.537 to the final intensity calculation, in a non-linear decline.
- Exponential (.10) weighting method employs a non-linear, inverse distance decline based on an exponential value such that crime events half the distance to the bandwidth contribute a value of 0.1 to the final intensity calculation.
- Exponential (.25) weighting method employs a non-linear, inverse distance decline based on an exponential value such that crime events half the distance to the bandwidth contribute a value of 0.25 to the final intensity calculation.
- Exponential (.33) weighting method employs a non-linear, inverse distance decline based on an exponential value such that crime events half the distance to the bandwidth contribute a value of 0.33 to the final intensity calculation.
NOTE: Whenever the dialog restarts (either because you did not enter a required program parameter, or because you wish to run a second analysis), the weighting technique defaults back to the Linear option. Please ensure you set the correct weighting option every time the program runs. The following graph shows the effects of these different algorithms.
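The five curves can be compared at the half-bandwidth point. The sketch below is mine, not the program’s code: the quartic form (3/π)(1 − u²)² is chosen because it matches the ~0.537 half-bandwidth value quoted above (the program’s own Bailey and Gatrell kernel with its ×10,000 scaling may differ in absolute magnitude), and the exponential form k^(2u) is one parameterisation consistent with the stated half-bandwidth values of 0.10, 0.25 and 0.33.

```python
import math

def linear(u):
    """Linear decline; u is distance as a fraction of the bandwidth."""
    return 1 - u

def quartic(u):
    """(3/pi)(1 - u^2)^2: gives roughly 0.537 at half the bandwidth."""
    return (3 / math.pi) * (1 - u**2) ** 2

def exponential(k):
    """Exponential decay tuned so the weight at half the bandwidth equals k."""
    return lambda u: k ** (2 * u)

weightings = [("Linear", linear), ("Quartic", quartic),
              ("Exponential (.10)", exponential(0.10)),
              ("Exponential (.25)", exponential(0.25)),
              ("Exponential (.33)", exponential(0.33))]

for name, w in weightings:
    print(f"{name}: weight at half bandwidth = {w(0.5):.3f}")
```

All five start at or near 1 beside the location and differ mainly in how steeply they punish distance; the Exponential (.10) option discounts mid-buffer crimes the most aggressively.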
How do I install it?
- Download the zip file (details below), unzip the file and double-click on the Setup.Exe file.
- The program defaults to a location of C:\Program Files\JRatcliffe.net\Intensity Calculator\ but this can be changed.
- If you are prompted to download and install the Microsoft .NET framework, please do so. It is free and is required by many Windows programs. It is often installed on new machines, but if not you will need to install it to make this program run.
- Click Start > Programs > Intensity Calculator
For further help, see the pdf manual accessible through the program’s main screen.
Ratcliffe, J.H. (2007). Buffer Intensity Calculator (version 2.3) [Computer software]. Philadelphia, PA: Temple University.
A published journal article on buffer intensity analysis
McCord, E. S. and Ratcliffe, J. H. (2009). Intensity value analysis and the criminogenic effects of land use features on local crime patterns. Crime Patterns and Analysis, 2(1), 17-30.
Abstract: Research has shown that crime tends to cluster around certain categories of land uses; for example, assaults group around bars and thefts and vandalism gather in neighborhoods bordering high schools and shopping centers. Environmental criminology explains the criminogenic propensities of these places as the result of increased crime opportunities and activities that attract higher numbers of potential offenders. Current methodologies used to quantify the volume of crime around criminogenic locations, however, lack precision in identification, measurement, and comparison. This article attempts to improve upon previous methodologies by employing a new technique that weighs crime events based on their relative proximity to the land use under study within a constraining buffer. The methodology allows researchers to apply statistical tests and make comparisons across land use, crime types, and jurisdictions. The process is demonstrated with a case study of the clustering of street robberies around subway stations in Philadelphia, PA, USA.