Sex Offender Risk Assessment, Sources of Variation, and the Implications of Misuse
Jennifer L. Lanterman
University of Nevada
Douglas J. Boyle
Laura M. Ragusa-Salerno
Rutgers, The State University of New Jersey
The current study examines whether prosecutors in New Jersey are properly using the state’s sex offender risk assessment tool, and the implications of improper implementation. All prosecutors and public defenders who handle Megan’s Law
cases in New Jersey participated in two confidential surveys. The results of those surveys were used to score a fact pattern.
Results reveal that prosecutors are not consistently implementing risk assessment, that there are several sources of disparities,
and that these disparities can result in substantial variation in risk assessment scores. The implications of these disparities
are that offender risk is often over-classified, thereby increasing offender supervision costs and potentially compromising
public safety. The authors make recommendations for formal training on the proper use of risk assessment tools, as well as
an assessment supervision plan. Although this research was conducted in New Jersey, the findings have implications for risk
assessment tools employed by other jurisdictions.
Keywords: sex offenders; risk assessment; implementation
The accuracy of risk assessment can have important implications for individual offenders
as well as society. This is particularly true in the context of assessment of dangerousness
for individuals convicted of sexual offenses in the state of New Jersey and other states that
employ risk-based assessments. An individual offender’s risk designation determines
whether, for example, she or he appears on an Internet registry of offenders and whether
community members are notified of his or her presence in the community. Given the stigma
attached to sexual offenses (Pratt, 2000; Presser & Gunnison, 1999; Zevitz & Farkas, 2000),
and the consequences of this stigma on successful reintegration (Mingus & Burchfield,
2012), it is important to ensure that risk assessments are conducted and implemented accurately and fairly. From a societal perspective, accurate risk assessment is also important because it allows for a more rational and efficient allocation of scarce law enforcement resources.
Authors’ Note: The authors thank their Advisory Board members: Hester Agudosi, Esq., Michael Buncher, Esq., Dr. Susan Furrer, Mary Murphy, Esq., Dr. Philip Witt, and Dr. Kristen Zgoba. They are also especially grateful to the public defenders and prosecutors who completed the surveys. The project described in this manuscript was one part of a larger examination of Megan’s Law completed by the Violence Institute of New Jersey pursuant to a New Jersey state statute mandating this evaluation. All correspondence concerning this article should be addressed to Dr. Douglas J. Boyle, Director of Research, The Violence Institute at Rutgers University, BHSB E1530, 183 South Orange Avenue, Newark, NJ 07103, USA; e-mail: Douglas.Boyle@Rutgers.edu.
CRIMINAL JUSTICE AND BEHAVIOR, 2014, Vol. 41, No. 7, 822–843.
DOI: 10.1177/0093854813515237
© 2014 International Association for Correctional and Forensic Psychology
The New Jersey Registration and Community Notification Law, N.J.S.A. 2C:7-1-7-11,
also known as Megan’s Law, was enacted on October 31, 1994. In the summer of 1995, the
New Jersey Attorney General (NJAG) appointed a committee to develop an instrument that
would allow county prosecutors to assess sex offender risk in a uniform manner (Witt,
DelRusso, Oppenheim, & Ferguson, 1996). The Registrant Risk Assessment Scale (RRAS)
is the result of the committee’s work. The purpose of the RRAS is to make pre-release determinations of offenders’ risk of recidivism and the seriousness of the offenses if offenders do
reoffend (Witt et al., 1996). The score from the RRAS places an offender in a designated
risk tier (i.e., low [Tier 1], moderate [Tier 2], or high [Tier 3]), which guides if and how
community notification is carried out. In 2001, a law was enacted that created the New
Jersey Sex Offender Internet Registry (hereafter, Internet Registry), whereby the risk tiers
produced by the RRAS are used to determine which offenders are included in the Internet
Registry.
Since the passage of Megan’s Law, New Jersey’s Administrative Office of the Courts
(NJAOC) has produced an annual report on the implementation of Megan’s Law. The findings from these annual reports indicate that some counties are home to a higher frequency
and rate of Tier 1 offenders, whereas other counties are home to a higher frequency and rate
of Tier 3 offenders. The NJAOC reports also indicate that certain RRAS factors are the
bases for numerous tier challenges. These consistent disparities in the geographical distribution of risk tiers and the bases of tier challenges led legislators to question whether
Megan’s Law and the RRAS are being properly implemented by the prosecutors responsible for sex offender risk assessment in each of the counties. To explore this issue, the New
Jersey State Legislature tasked members of a state academic institution with undertaking a
study of the implementation of Megan’s Law. The present study was initiated to determine
whether prosecutors in New Jersey consistently tier sex offenders according to established guidelines that have been in place since 1996. While New Jersey is the only state to
use the RRAS, our findings have implications for any state that employs risk-based assessments of offenders.
Prior Research
Risk Assessment
Risk assessments are used in the criminal justice system to predict the relative risk of a
variety of phenomena, including general, violent, and sexual recidivism. Research suggests
that when risk assessment instruments are properly developed and used to predict behavior,
they have the potential to produce modestly accurate predictions and significantly improve
classification decisions (Gottfredson & Moriarty, 2006; see also Bonta, Wallace-Capretta,
& Rooney, 2000; Connolly, 2003; Gottfredson, 1987a, 1987b; Gottfredson & Gottfredson,
1986). The predictions and improved classification can assist decision makers in properly
directing and conserving resources, and in potentially increasing public safety (Gottfredson
& Moriarty, 2006).
Approaches to risk assessment exist on a continuum that includes unstructured clinical,
structured clinical, empirically guided, clinically adjusted actuarial, and actuarial methods
(see Hanson, 1998, and Witt & Barone, 2004, for a review of these assessment methods).
Each type of instrument produces a final risk assessment based upon a different balance
between clinical and actuarial assessment. Harris (2006) explains that the relative superiority of any one of the methods over the others is dependent on the questions asked, and that
if the focus of the assessment is the “aggregated long-term risk posed by a group of individuals [then] actuarial instruments almost certainly provide the most valid means of assessing such risk” (p. 39). Research has consistently supported the superiority and increased
accuracy of risk classification for actuarial instruments over clinical judgment (e.g.,
Ægisdóttir et al., 2006; Bengtson & Långström, 2007; Grove & Meehl, 1996; Grove, Zald,
Lebow, Snitz, & Nelson, 2000).
Offender actuarial risk assessment instruments are largely scored through a review of an
offender’s static and dynamic risk predictors. Static risk predictors are offender characteristics that remain consistent over time, such as age, race, marital status, and criminal history.
In contrast, dynamic risk predictors are factors that can change over time, such as treatment
completion or failure, psychological state, coping skills, and personality characteristics.
Research has shown that static risk factors are useful for determining long-term risk levels;
however, dynamic factors are deemed to be more accurate in predicting current risk levels
as they can adapt to changes in risk level over time (Craig, Browne, & Stringer, 2003). As
such, it is not uncommon to see actuarial risk instruments with solely static predictors or a
mixture of both static and dynamic predictors.
Sex Offender Risk Assessment
Actuarial instrumentation is the best method for assessing long-term risk. Consistent
with Harris (2006), there is no better type of assessment for use with a sexual offending
population given the important public safety concerns. This position is supported by the
number of actuarial risk assessments that have been developed and tested for use with a
sexual offending population in the last two decades (e.g., the Violence Risk Appraisal
Guide [VRAG; Harris, Rice, & Quinsey, 1993], Sex Offender Risk Appraisal Guide
[SORAG; Quinsey, Harris, Rice, & Cormier, 1998], Rapid Risk Assessment of Sexual
Offense Recidivism [RRASOR; Hanson, 1997], Static-99 [Hanson & Thornton, 1999],
Static 2002 [Hanson & Thornton, 2003], Minnesota Sex Offender Screening Tool–Revised
[MnSOST-R; Epperson, Kaul, & Hesselton, 1998], and the Risk Matrix 2000 [RM2000;
Thornton et al., 2003]). While many actuarial instruments have been developed solely for
use with sex offenders (e.g., the SORAG, RRASOR, Static-99), a few instruments have
been designed for other types of offenders (i.e., violent offenders) and have since been
proven valid with a sexual offending sample (e.g., VRAG, Psychopathy Checklist–Revised
[PCL-R; Hare, 1991]).
New Jersey’s Use of the RRAS
New Jersey currently utilizes the RRAS to conduct sex offender risk assessment. A committee composed of psychologists, assistant county prosecutors, deputy attorneys general,
and corrections administrators in New Jersey began work on the creation of the RRAS in
1995 (Witt et al., 1996). Some prosecutors began using the RRAS to assess sex offender
risk in 1995, and it was fully implemented in all New Jersey counties in 1996 (P. Witt, personal communication, June 24, 2010). The RRAS is empirically guided in its construction,
and was subjected to a test of its internal structure and concurrent validity by examining the
scores and risk tiers of convicted sex offenders on probation, offenders in prison, repetitive-compulsive sexual offending inmates at the Adult Diagnostic and Treatment Center (a sex offender–specific correctional facility), and offenders subject to civil commitment (Witt & Barone,
2004). Time and budget constraints prevented a validation study of the instrument prior to
its implementation (Witt et al., 1996). Therefore, New Jersey’s RRAS is an empirically
guided risk assessment rather than an actuarial risk assessment.
The RRAS examines four domains: seriousness of past offenses, offense history, characteristics of the offender, and community support. These four domains cover 13
static and dynamic factors. The seriousness of past offenses domain addresses degree of
force, degree of contact, and age of the victim. The severity of offense history is scored by
assessing victim selection, the number of offenses and victims, duration of offensive behavior, time since last offense, and history of anti-social acts. The offender characteristics
domain captures information on offender response to treatment and substance abuse.
Community support is assessed through involvement in therapeutic support, residential
support, and employment or educational stability. Each factor is scored as low risk (0),
moderate risk (1), or high risk (3). After each factor is scored, the factors are then weighted
according to the strength of support for each factor as a predictor of re-offense and the seriousness of re-offense according to the extant research (Witt et al., 1996). The three seriousness of offense factors are weighted times five, the five offense history factors are weighted
times three, the two offender characteristics factors are weighted times two, and the three
community support factors are weighted times one. The weighted point totals are added
together for a total RRAS score. The RRAS score places offenders into low-risk Tier 1
(0-36 points), moderate-risk Tier 2 (37-73 points), or high-risk Tier 3 (74-111 points). These
tier designations determine the extent of community notification and which offenders will
appear in the Internet Registry.
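To make the weighting arithmetic concrete, the following sketch tallies a weighted RRAS score and maps it to a tier. The point values, domain weights, and tier cut-offs follow the description above (Witt et al., 1996); the example ratings and function names are our own illustration, not part of the instrument.

```python
# A minimal sketch of RRAS scoring as described above. The weights, point
# values, and tier cut-offs follow the text; the example ratings below are
# hypothetical.

FACTOR_POINTS = {"low": 0, "moderate": 1, "high": 3}

# Domain weights: seriousness of offense x5 (3 factors), offense history x3
# (5 factors), offender characteristics x2 (2 factors), community support x1
# (3 factors). Maximum possible score: 45 + 45 + 12 + 9 = 111.
DOMAIN_WEIGHTS = {"seriousness": 5, "history": 3, "characteristics": 2, "community": 1}

def rras_score(ratings: dict[str, list[str]]) -> int:
    """Sum factor points within each domain, then apply the domain weight."""
    return sum(
        DOMAIN_WEIGHTS[domain] * sum(FACTOR_POINTS[r] for r in factor_ratings)
        for domain, factor_ratings in ratings.items()
    )

def rras_tier(score: int) -> int:
    """Map a total score to a tier: 0-36 -> 1, 37-73 -> 2, 74-111 -> 3."""
    return 1 if score <= 36 else 2 if score <= 73 else 3

# Hypothetical offender rated on all 13 factors.
example = {
    "seriousness": ["moderate", "moderate", "high"],            # x5 -> 25
    "history": ["low", "moderate", "moderate", "low", "high"],  # x3 -> 15
    "characteristics": ["moderate", "low"],                     # x2 -> 2
    "community": ["low", "low", "moderate"],                    # x1 -> 1
}
score = rras_score(example)
print(score, rras_tier(score))  # 43 -> Tier 2 (moderate risk)
```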
Implementation
Despite the potential benefits of risk assessment instruments in general and actuarial risk
assessments in particular, the benefits of structured risk assessment have yet to be fully realized in the criminal justice system. Gottfredson and Moriarty (2006) explain that the unfulfilled potential of risk assessments is due to problems with development and implementation.
Improper implementation of risk assessment instruments in the criminal justice system is
attributable to intentional manipulation (Schneider, Ervin, & Snyder-Joy, 1996) and a lack
of proper training and supervision (Lowenkamp, Latessa, & Holsinger, 2004).
Manipulation
The intentional manipulation of risk assessment instruments is generally attributable to practitioners’ efforts to circumvent the loss of discretion (Bonta, Rugge, Scott, Bourgon, & Yessine, 2008) or to suit personal beliefs (Schneider et al., 1996) and priorities (Corbett, 2008; Lipsky, 1980), to the fit of an instrument within an agency’s mission (Dal Pra, 2004), and to the political implications of the instrument’s use (Schlager, 2009; Schneider et al., 1996). The proper use of empirically guided and actuarial risk assessment instruments
requires that those utilizing the instruments agree to yield their discretion in favor of the
instrument guidelines. This requirement forces practitioners to eschew reliance on their
judgment, which has been developed over years working in their fields (Schlager, 2009).
When practitioners are required to yield their discretion, they are forced to consider the
possibility that their professional judgment is flawed, which may detract from their feelings of self-efficacy. This loss of control is likely to be contentious (Schlager, 2009).
Schneider et al. (1996) explain that proper implementation may be undermined by professionals who are reluctant to “permit quantitative prediction systems to replace their professional judgments” (p. 111).
Practitioners may also manipulate risk assessment instruments to serve a personal agenda.
Lynch (1998) relays that there is a body of literature indicating that criminal justice policy
is not implemented without reshaping by the workers responsible for carrying out policy-related tasks. Front-line personnel often set their own agendas, organizing and completing
their work in a manner consistent with their own perspectives and priorities (Corbett, 2008;
Lipsky, 1980), while subverting the directives and tasks they view to be problematic (Lynch,
1998). In a review of Richard McCleary’s work (1975, 1977, 1978), Lynch (1998) explains
that agents discounted management directives to prepare reports and “at the cost of accurate
accounting even used paperwork . . . for their own benefit” (p. 845). The research suggests
that criminal justice practitioners do not readily comply with the policy or practice directives from superiors. Furthermore, they appear to complete their work, including the application of risk assessment instruments, in a manner consistent with their perspectives and
priorities, a situation lending itself to the manipulation of risk assessments and possible
over-classification of risk (Andrews, Bonta, & Wormith, 2006; Gottfredson & Moriarty,
2006; Schneider et al., 1996).
The philosophical and political implications of a risk assessment tool may also lead practitioners to manipulate the instrument. Dal Pra (2004) explains how the fit of a risk assessment instrument into “the organization’s mission . . . is a critical consideration if the
instrument is to be effective” and properly implemented (p. 9). If proper use of the instrument requires practitioners to engage in behavior contrary to their agency’s mission or to
make assessments that conflict with their roles, then there may be a greater risk of practitioners manipulating the risk assessment instrument. Schneider et al. (1996) and Schlager
(2009) also explain that organizational politics, the uncertainty of risk assessments, and the
political implications of incorrect risk assessments may lead practitioners to manipulate risk
assessments to over-classify cases at higher risk levels. This over-classification leads to less
efficiency and higher costs to the system (Schneider et al., 1996).
Training, Proficiency, and Supervision
The most valid and reliable actuarial risk assessment instrument will fail to fulfill its
intended purpose if those responsible for its implementation intentionally manipulate the
instrument or, as a result of insufficient training and supervision, inappropriately use the
instrument. To ensure that users of the instruments are capable of proper implementation,
agencies must provide formal initial and periodic refresher training on proper use of the
instruments, and potential users within an agency must demonstrate individual proficiency,
as well as inter-rater reliability with the instrument. Agencies must also provide a mechanism for oversight or supervision to ensure that practitioners are using the instruments
appropriately and for their intended purposes.
To reap the full benefits of any risk or needs assessment, it is imperative that agencies
provide formal training to individuals expected to use the instruments (Bonta, 2002; Bumby,
2007; Lowenkamp et al., 2004). The training should consist of formal guidance on the
instrument by a qualified individual with a high degree of knowledge in the area of offender
classification and expertise in the specific instrument for which training is being provided
(Lowenkamp et al., 2004). Research highlights the need for the training to be provided by
formally trained individuals rather than informally by non-certified trainers, such as
untrained coworkers providing “bootleg” training (Andrews et al., 2006; Lowenkamp et al.,
2004). In addition to understanding the purpose and proper use of the instrument, time
should be allocated for hands-on practice with the risk assessment instrument (Bumby,
2007; Kreamer, 2004). Practice and proficiency in the use of a risk assessment instrument
are important, because research indicates that there is a relationship between training and
proficiency with an instrument and an instrument’s predictive validity (see Andrews et al.,
2006; Lowenkamp et al., 2004).
In addition to formal initial training with a risk assessment instrument, it is important to
ensure that practitioners continue to properly conduct assessments after the initial training.
Quality assurance in the risk assessment process can be achieved through follow-up training to review instrument scoring rules and oversight through continued, formal quality control procedures (Kreamer, 2004; Lowenkamp et al., 2004). Kreamer (2004) explains that
staff responsible for training and quality assurance may also have a tendency to drift from
the original instrument scoring rules, so agencies should also provide refresher training for
individuals filling these roles.
In sum, the available research indicates that the criminal justice system has not reaped
the full benefits of structured risk assessment due, in large part, to improper use of the
instruments. Formal training, individual proficiency, inter-rater reliability, and oversight are
necessary to ensure reliable risk assessment results. Failure to provide appropriate training
and oversight will necessarily detract from the reliability of the instrument and the validity
of the results.
Current Study
The current study was undertaken to evaluate the process of sex offender risk assessment
in New Jersey, as per a mandate from the state legislature. Specifically, this study examines
whether prosecutors are properly using the RRAS, and whether the assessment process
could benefit from further standardization. This study addresses the following two research
questions:
Research Question 1: Are county prosecutors and courts following established procedures to
determine a sex offender’s tier designation?
Research Question 2: Can recommendations be made to standardize procedures for evaluating
the risk of re-offense and assigning tier designations?
Method
Procedure
The law mandating this study did not include a provision for access to data or data collection, so an advisory committee was formed to facilitate access to the parties and data
necessary to complete the study. We proposed several data collection methods. Ultimately,
we focused on two confidential surveys of prosecutors and public defenders and a fact
pattern to address Research Question 1. The results from the surveys and fact pattern
informed responses to Research Question 2. A unique design aspect emanating from this
collaborative approach is that we secured participation from all of the prosecutors and public defenders who deal with Megan’s Law cases in the state.
We convened a meeting of our expert advisory board, conducted two surveys, and scored
a fact pattern based on the survey responses to address Research Question 1. During the
meeting, advisory board members highlighted and discussed several issues with the RRAS,
RRAS manual, and implementation. The prosecutor–public defender surveys were focused
on identifying sources of disparity in the application of the RRAS. First, we conducted an e-mail survey of all prosecutors and public defenders (i.e., surveys were sent to
each individual’s work e-mail address). In this open-ended survey (hereafter, Survey 1), we
asked each respondent to provide a list of RRAS factors, terms, or explanations in the
RRAS manual that were either unclear, open to interpretation, or that they knew from experience were not being applied consistently across counties, thereby introducing a degree of
discretion to the risk assessment process that could lead to disparate tier designations. Then,
we took those responses and created the second survey (hereafter, Survey 2). Survey 2 (see
Appendix A) is composed of 34 questions regarding various offender (Questions 5-28),
victim (Question 4), or case characteristics (Questions 1-3), as well as the sources of information used to score the RRAS (Questions 29-34). Eight of these questions ask the respondents to rate the risk of specific offender, victim, or case characteristics (i.e., low, moderate,
or high risk). This survey was web-based, and access information for the confidential password-protected survey was sent to each individual’s work e-mail address. Participants were
guaranteed that the surveys were confidential, and that their individual responses would not
be disclosed.
We created a fact pattern based on the RRAS factors identified by the prosecutors and
public defenders as sources of disparity in sex offender risk assessment. A fact pattern is a
description of all of the details of a case. This fact pattern highlighted particular characteristics or circumstances identified as problematic in order to examine the impact of differential interpretation on risk scores. Next, we identified a county in which the prosecutor and the public
defender concurred and scored consistently low (hereafter, low-risk county) and a county in
which the prosecutor and public defender generally concurred and scored consistently high
(hereafter, high-risk county) in Survey 2. Finally, the responses from these two counties
were used to score the fact pattern and to identify the potential impact scoring disparities
have on the RRAS score and tier assignment. Two of the authors of the present manuscript
independently rated the responses to Survey 2. Any discrepancies in the ratings were
resolved in consultation with the remaining author.
Sample
The sample for the prosecutor and public defender survey included all 21 county prosecutors and four public defenders who handle Megan’s Law cases in New Jersey. Each of the
21 counties has an assistant prosecutor responsible for handling the tiering of sex offenders.
There are four public defenders throughout the state who handle Megan’s Law cases. Three
of the public defenders serve five counties each and one public defender serves six counties.
Each public defender completed a separate survey for each county she or he represents.
Prosecutors were instructed to complete the surveys to address the environment and activities in their respective counties, and public defenders were instructed to provide responses
reflective of the environment and activities in each county that she or he serves, rather than
providing duplicate responses for all of the counties that she or he represents. This sample
is unique in three ways. First, this sample allows us to examine the opposing views of prosecutors and public defenders. Second, we can examine the variation in prosecutorial use of
the risk assessment instrument. Third, public defenders have the unique ability to see the
variation in how prosecutors and judges from different counties handle issues in different
ways based on actual tiering hearing experiences. There was a 100% response rate for the
prosecutor–public defender surveys.
Data and Analysis
Survey 1 consisted of a single, open-ended question, which produced qualitative
responses. Survey 2 consisted of a mix of 34 open- and closed-ended questions. Some of the
responses were categorical and either nominal or ordinal (i.e., rank low, moderate, or high
risk), whereas other responses were qualitative. The ordinal responses to eight of the questions were used to score the fact pattern. The ordinal responses correspond with point values, which were then tallied for a score. All quantitative data from Survey 2 were analyzed
with PASW 17.0.
Results
Prosecutor and Public Defender Surveys
The responses to Survey 1 highlighted several RRAS factors that are unclear, allow for
disparate interpretation, or are intentionally misapplied. Specifically, respondents indicated that the following factors, terms, and scenarios posed problems when scoring the
RRAS: degree of force, degree of contact, history of anti-social acts, how risk is determined
for offenders who successfully complete treatment or who are on a waiting list for treatment, and how household exception, sole sex offense, substance abuse remission, frequent
relocation, and stable employment are defined.
Survey 2 responses were examined for the presence of intra-county and inter-county
disparities, within-role disparities among prosecutors and public defenders throughout the
state, and possible explanations for the disparities. Intra-county scoring discrepancies
between prosecutors and public defenders are displayed in Appendix B. The matrix depicts
discrepancies on how particular case characteristics or scenarios are handled in a given
county with an X for the 28 survey questions for which discrepancies can be identified.
These discrepancies indicate that there is a disagreement in how a prosecutor reports scoring a particular RRAS factor and how a public defender reports that the prosecutor scores
the factor in a given county. The results in Appendix B indicate that every county had
instances in which the prosecutor and the public defender scored case characteristics and
scenarios differently, with a range of 3 (10.7%) to 13 (46.4%) discrepancies. The results
also indicate that certain characteristics or scenarios generate more intra-county scoring
disparities than others. Questions 4 (victim age), 29 (inclusion of cases in which the prosecutor dropped the charges), 30 (inclusion of cases where there was evidence of an offense,
but no charges were filed), and 34 (alleged facts from original complaint or factual basis for
a plea bargain) resulted in zero discrepancies. Zero discrepancies suggest that all Megan’s Law prosecutors and public defenders throughout the state agree on how certain case characteristics or scenarios and the respective RRAS factors are scored in a given county. It
should be noted that this agreement does not guarantee that the RRAS is being properly
used; it is possible that prosecutors and public defenders agree to use the RRAS in a manner
inconsistent with the RRAS manual. Conversely, some questions generated a high frequency of intra-county scoring discrepancies, such as Question 15 (drug use as an anti-social act), which resulted in 14 scoring discrepancies. This high frequency of intra-county disparity suggests that there is considerable variation throughout the state in how sex offenders’ substance abuse histories are accounted for in the RRAS.
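As a sketch of how the intra-county discrepancy counts reported here can be tabulated, the snippet below compares a prosecutor’s responses with the public defender’s report of that prosecutor’s scoring, question by question. The data structures and responses are hypothetical, not the study’s data.

```python
# A hypothetical sketch of the intra-county discrepancy tally: compare the
# prosecutor's survey responses with the public defender's responses for the
# same county across the 28 comparable questions.

def count_discrepancies(prosecutor: dict[int, str], defender: dict[int, str]) -> int:
    """Count questions on which the two respondents' answers differ.

    Questions that either respondent left unscored are skipped, so (as the
    authors note) the true discrepancy count may be higher than reported.
    """
    return sum(
        1
        for q, answer in prosecutor.items()
        if defender.get(q) is not None and defender[q] != answer
    )

# Hypothetical responses for one county on three questions.
prosecutor_responses = {2: "low", 15: "no", 16: "moderate"}
defender_responses = {2: "moderate", 15: "yes", 16: "moderate"}
print(count_discrepancies(prosecutor_responses, defender_responses))  # 2
```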
In addition to the disparate application of certain criteria or differential interpretation of
critical terms, the survey findings suggest that the type of information used to score the
RRAS also varies significantly by county, thereby introducing yet another source of disparity in the tiering process. Questions 29 through 34 on Survey 2 covered the types of cases
and information used to score the RRAS. There were zero intra-county discrepancies for
Question 29; in 20 counties, the legal parties agreed that sex offenses that were not prosecuted are included in the pool of data, whereas the prosecutor and public defender in one
county agreed that these offenses are not included. Similarly, there were zero discrepancies,
signaling universal agreement, for Questions 30 and 34; all prosecutors and public defenders agreed that they include sex offenses for which evidence exists, but no charges were
filed, and that they use the alleged facts from the original complaint rather than the factual
basis for a plea if the case disposition was by plea agreement. There was only one discrepancy for Question 33, which covers sex offenses dismissed as part of a plea agreement.
Prosecutors and public defenders in 20 counties indicated that these offenses are included
in the pool of data to score the RRAS; in one county, the prosecutor said yes and the public
defender said no. However, Questions 31 and 32 reflect a great deal of intra-county
discrepancy.
Question 31 asked whether sex offenses for which charges were filed, but that resulted in
a no bill at grand jury, were included in the RRAS data pool. There were seven intra-county
discrepancies. Of the 14 counties that did not report discrepant responses, 11 counties indicated that these offenses are included in the scoring, and 3 indicated that they are not
included. Question 32 asked whether sex offenses that resulted in a finding of not guilty at
trial were included in the RRAS data pool. This question also had seven intra-county discrepancies; only 3 of these were the same counties as those with discrepant responses to
Question 31. Of the 14 counties that did not report discrepant responses, the prosecutors and
public defenders in 7 counties indicated that they would include these offenses in the RRAS
data pool, whereas the legal parties in another 7 counties indicated that they do not include
these offenses.
There were eight questions in Survey 2 that required the respondents to rate the risk of
specific case characteristics. We examined intra-county discrepancies for each of these
eight questions and found a range of one to six discrepant responses (of eight possible
responses) in 20 counties. That is, prosecutor and public defender responses reflect an
intra-county disparity on at least one, and up to six, of the eight risk-rating questions in 20
counties. A separate examination of the eight risk-rating questions reveals that seven of the
eight questions resulted in intra-county discrepancies in 25% or more of the counties (e.g.,
38.1% of the counties had discrepancies on Question 2, 19.1% had discrepancies on
Question 3, 52.4% had discrepancies on Question 16, 57.1% had discrepancies on Question
17, 28.6% had discrepancies on Question 18, 38.1% had discrepancies on Question 19,
57.1% had discrepancies on Question 24, and 28.6% had discrepancies on Question 27).
One may expect prosecutors and public defenders to disagree about how to handle cases on
a philosophical level. However, the survey responses indicate a significant degree of variation in how RRAS factors are interpreted and scored within counties, as well as throughout
the state. Furthermore, as displayed in Table 1, respondents report significant within-role
variation on how these problematic factors or scenarios are scored. Even among prosecutors
who steer the risk assessment and tiering process, there appears to be significant variation
in how some factors are scored. For example, prosecutors reported responses to Question
16 that were evenly distributed across the low-, moderate-, and high-risk categories. These
intra- and inter-county discrepancies are particularly troubling when it comes to RRAS factors 1 and 2, as they are weighted by 5.7
Ideally, the RRAS and the RRAS manual would provide complete, clear, and concise
explanations that would facilitate systematic risk assessment and obviate rating discrepancies. Because the responses to Surveys 1 and 2 indicate that this is not currently the case, we were
not surprised to see one-step discrepancies (i.e., low vs. moderate risk or moderate vs. high
risk). However, the RRAS and the RRAS manual are clear enough that there should not be
two-step discrepancies (i.e., low vs. high risk). As such, the two-step disparities were of the
greatest concern from the perspective of fairness and consistency of implementation. For
the eight risk-rating questions, we found a range of zero to three two-step discrepancies per
county. Only four counties had zero low-high discrepancies. We also found specific response
patterns in some counties, including the prosecutor scoring consistently low and the public
defender scoring consistently higher, general agreement between a prosecutor and public
defender scoring toward the lower end of the risk spectrum in some counties and at the
higher end of the risk spectrum in other counties, and prosecutors in specific counties consistently failing to rate risk on these eight questions. When one respondent in a county did
not score one of these questions, we did not count it as a discrepancy. Therefore, the frequency of intra-county discrepancies may be higher than reported.
The survey responses also yielded a few possible explanations for the variation in scoring for the RRAS. First, some respondents misread or misunderstand the RRAS. For example, when asked about work through a temp agency, one respondent replied, “intermittent
but appropriate and therefore low risk.” The intermittent but appropriate designation is
moderate risk, not low risk, according to the RRAS manual.
Second, some respondents refuse to rate some factors as low risk, even though the RRAS
manual supports a low-risk score on a given term. For example, when asked how to rate
removal of victim’s clothing without force, one respondent indicated, “Our office takes the
position that removal of a victim’s clothing can only be accomplished with force.” This
response indicates that no matter who removes the victim’s clothing, including the victim him- or herself, and even if no force was used, this factor will not be rated low risk. The RRAS
manual indicates that the degree of force should be scored as low risk if the offender uses
nonviolent methods in the attempt to obtain sexual gratification.
Third, some respondents refer to materials outside of the RRAS to determine a score on
the RRAS. For example, when asked how to rate an offender fondling himself but not the
victim, a respondent cited the New Jersey Code of Criminal Justice definition of sexual
contact (2C:14-1) to justify a moderate score rather than referring to the description in the
RRAS manual. The RRAS manual supports an assessment of low risk for this scenario.
Fourth, several respondents indicated that judges in their counties deviate from the
RRAS. For example, one respondent indicated that a judge in a particular county “never
met a job that was good,” suggesting that she or he would never allow employment to be
scored as low risk. There were several other responses which indicated that the prosecutors and public defenders would agree to score a factor in a particular way, but the judge deviated from the RRAS on that issue or it was difficult to tell whether she or he was following the RRAS. For example, another respondent indicated that she or he would rate the question regarding removal of the victim’s clothing as low risk, but that the judge in his or her county usually does not follow the RRAS in that section.

Table 1: Frequency of Survey Question Rating Discrepancies by Role

                                                     RRAS    Prosecutor (a)          Public Defender (b)     Intra-County
Survey Question (abbreviated)                        Factor  Low  Mod. High DNS (c)  Low  Mod. High DNS (c)  Discrepancies
2.  Removal of victim’s clothing without force          1     15    5    0    1       13    6    1    1            8
3.  Offender fondles himself under his clothing,
    does not touch the victim                           2      7   13    0    1        3   12    0    6            4
16. Completed treatment; therapist will not
    provide a report                                    9      7    4    6    4        5    0   16    0           11
17. Currently in treatment; therapist will not
    provide a report                                    9     12    3    2    4        5    2   11    3           12
18. Completed treatment; not actively
    participating in a program                         11     19    0    0    2       12    0    8    1            6
19. On a waiting list for treatment; not actively
    participating in a program                         11     13    3    4    1        9    1   10    1            8
24. Stable housing, lives alone, completed parole,
    no community supervision for life                  12      4   14    1    2        3    5   12    1           12
27. Supervised temp-agency work with regular
    changes of temporary employer                      13      1   16    0    4        5   14    0    2            6

Note. RRAS = Registrant Risk Assessment Scale. Question wording is abbreviated here; the full text of each question appears in Appendix A.
a. The study sample includes 21 prosecutors, one for each of the 21 counties.
b. The study sample includes four public defenders. Three public defenders serve five counties each and one public defender serves six counties; each completed a separate survey for each county served, yielding 21 county-level responses.
c. DNS stands for “did not score,” which was used when a survey respondent provided a response but did not rate the risk.
The findings from the prosecutor and public defender surveys provide mixed responses
to Research Question 1. These results indicate that prosecutors and public defenders are not uniformly following established procedures to determine sex offender tier designations, both inadvertently and deliberately. Some actors appear to follow the guidelines, whereas
other actors clearly fail to properly score the RRAS.
Fact Pattern
To demonstrate the impact of rating disparities between the counties, we created a fact
pattern (see Appendix C) using the RRAS criteria that legal actors involved with Megan’s
Law have identified as being the most significant and frequent sources of disparity. Next,
we identified a low-risk county and a high-risk county from Survey 2. Then, we used the
Survey 2 responses for these two counties to calculate risk points for the fact pattern. The
responses from the low-risk county scored the fact pattern with 6 points, whereas the
responses from the high-risk county resulted in a score of 27 points. This means that, all other things held equal, there is a 21-point difference in how two counties would score the
same offender, which is more than enough points to change an offender’s tier designation.
Twenty-one points is the difference between being a Tier 1 offender not subject to community notification or inclusion in the Internet Registry and a Tier 2 offender with targeted
community notification and inclusion in the Internet Registry, or a Tier 2 offender and a Tier
3 offender with door-to-door notification and inclusion in the Internet Registry. In both
cases, the upward discrepancies carry with them more social consequences and employment restrictions.
Discussion and Policy Implications
The current study demonstrates that the RRAS is not being uniformly implemented
throughout the state of New Jersey. The disparate application of the RRAS and determination of risk tiers are attributable to several issues. First, several RRAS factors and critical
terms are unclear, and susceptible to misinterpretation or discretion. The factors and terms
highlighted in this study as being significant sources of variation are the same RRAS factors
and related terms that the NJAOC Annual Reports indicate are the most significant sources
of objections by offenders who challenge their tier designations, lending further support to
the argument that the RRAS is not being uniformly implemented throughout the state.
Second, the findings suggest that the prosecutors and public defenders have different
criteria for what types of data go into the pool to score the RRAS. These different standards
may be related to the degree of reliance on sources outside of the RRAS manual, such as
the Attorney General Guidelines for Law Enforcement for the Implementation of Sex
Offender Registration and Community Notification Laws (New Jersey Office of the Attorney
General, 2007) and elements in specific sex offense statutes in the New Jersey Code of
Criminal Justice.
Third, judges may be a contributing factor to the RRAS score and tiering disparities.
Prosecutors and public defenders report that even if they agree on how to score a specific
factor, sometimes the judge will consistently override the decision or consistently refuse to
rate the risk as low on particular factors.
Fourth, there is no formal training and very little oversight on the use of the RRAS.
Members of the advisory committee as well as Survey 1 respondents consistently highlighted the lack of formal training and oversight as problems that contribute to the improper
use of the RRAS. New prosecutors or public defenders who are assigned the task of working on Megan’s Law cases are either “trained” on the use of the RRAS by a coworker or
forced to read through the brief RRAS manual and attempt to figure scoring procedures out
on their own. There is also very little intra-office oversight on the application of the RRAS.
The combination of no formal training and little-to-no oversight leads to a situation in which legal actors try to teach themselves how to use an empirically guided risk assessment instrument, may be taught bad habits or erroneous interpretations of factors or criteria by coworkers who themselves received no training, and face only one possible check on the RRAS scores and tier designations they calculate: a challenge brought by a sex offender him- or herself. As a result, the RRAS scores and tier designations derived from this process may
not reflect the actual level of risk presented by groups of offenders, as intended by the
RRAS.
The purpose of empirically guided or actuarial risk assessment instruments for sex
offenders is to improve the accuracy of risk assessment. Improved accuracy is achieved, in
part, by removing unwarranted discretion from the assessment process to reduce disparities
and systematize sex offender tier designation, but that cannot happen if those responsible
for implementation are not properly using the instruments. When risk assessments are
improperly used, the scores represent an amalgamation of discretion, bias, and error rather
than an objective assessment of the public safety risk and possible service needs of groups
of sex offenders. In this study, the result of improper implementation of sex offender risk
assessment is the over-classification of risk, which leads to greater financial cost to the
criminal justice system and detracts from the ability of practitioners to properly supervise
the offenders who objectively pose the highest risk to the community. In light of these findings, we make recommendations in response to Research Question 2 to ensure that the
established procedures for determining sex offender tier designations are followed. We
emphasize, however, that many of the recommendations based on the current study are
applicable to the use of other risk assessment instruments, not simply the RRAS.
Recommendations
Several recommendations are made, based on the literature and on feedback from the advisory committee and survey respondents, to address the implementation problems identified in
this study. First, we recommend that anyone whose job involves the use or review of the
RRAS receive formal training on its proper use. The training should include a thorough
review of all of the factors and key terms, the RRAS manual, the caveat that no guidelines
other than the RRAS manual should be used to score the RRAS, appropriate sources of data
to score the RRAS, and directed practice sessions spent scoring sample cases. The practice
sessions serve two purposes. First, the practice sessions allow individuals to demonstrate
proficiency with the instrument, indicating that individuals understand all of the materials
covered in the training. Second, training sessions can be used to assess the degree of
inter-rater reliability. A high degree of inter-rater reliability indicates that all parties are
abiding by a well-defined set of assessment criteria, and that there should be fewer disparate
outcomes in similar cases.
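One way a trainer could quantify inter-rater reliability on the practice cases is Cohen’s kappa, which corrects raw agreement for chance. The sketch below is a generic implementation applied to hypothetical tier ratings; the authors do not prescribe a specific statistic.

```python
from collections import Counter

def cohens_kappa(rater_a: list[str], rater_b: list[str]) -> float:
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical tier assignments by two trainees on ten practice cases.
trainee_1 = ["1", "2", "2", "3", "1", "2", "1", "3", "2", "2"]
trainee_2 = ["1", "2", "3", "3", "1", "2", "2", "3", "2", "1"]
print(round(cohens_kappa(trainee_1, trainee_2), 2))  # 0.54: only moderate agreement
```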
Second, we recommend regular refresher training for all individuals who score or review
the RRAS and tier designations. Refresher trainings serve to remind all parties of the proper
way to score the RRAS and to maintain a high degree of inter-rater reliability.
Third, we recommend that a supervision plan be developed and implemented at either the
county or state level. The individual(s) assigned to this task should review a sample of cases
on a periodic basis to ensure proper implementation of the RRAS throughout the state. It is
important that these individuals also participate in the initial and refresher RRAS trainings.
Limitations
There are a few limitations to the current study. First, the legislation did not include a
provision for access to data. This omission made it nearly impossible to carry out a study
with an ideal design, because the authors did not have access to offender files. The survey
design generated some very interesting findings, but is not an optimal design to address the
questions posed by the New Jersey State Legislature.
Second, the current study had 100% participation from prosecutors and public defenders
involved with Megan’s Law, but did not involve judges or private defense lawyers. The
NJAOC refused to allow judicial participation in the study. Judges play an integral role in
the process, as they have the final say on sex offenders’ RRAS scores and tier designations.
Private defense lawyers were not included in the study, either. It is possible that private
defense lawyers represent different types of offenders than those represented by public
defenders. As such, it is possible that the current study does not represent the complete universe of viewpoints held by legal actors involved with the implementation of Megan’s Law
in New Jersey.
Conclusion
Risk assessments to determine the likelihood of recidivism have increased in popularity,
and are now widely used throughout the criminal justice system. These instruments have
become increasingly complex, and are used to evaluate the risk of general, as well as violent
and sexual recidivism. Prior research highlights the failure of criminal justice practitioners
to harness the full potential of valid risk assessment instruments due to improper implementation. The consequences associated with the failure of sex offender risk assessments are
amplified due to the widespread fear of sex offenders and the commonly held belief that all
sex offenders are uniformly dangerous and very likely to reoffend if given the opportunity.
This reality, along with the increasing complexity of risk assessments, may lead criminal
justice practitioners to either inadvertently or intentionally misuse instruments and over-classify cases at higher risk levels. The over-classification of risk results in instruments with
reduced predictive validity and increased costs to the system, rather than increased public
safety. The criminal justice system simply does not have the financial resources or personnel to provide intensive supervision to all sex offenders, and the over-classification of risk
prevents criminal justice practitioners from allocating sufficient resources to those sex
offenders who truly are at high risk for recidivism.
The Sex Offender Registration and Notification Act (SORNA), Title I of the Adam Walsh
Child Protection and Safety Act of 2006, is intended to be a uniform, national sex offender
classification system. The classification levels (i.e., Tiers 1, 2, and 3) are to be solely determined by offenders’ offenses of conviction, rendering empirical and actuarial risk assessment unnecessary. The federal government has sought to secure compliance by sanctioning
states that do not adopt this classification system with reductions in federal funding.
However, many states, including New Jersey, have not implemented SORNA, and some
states, including California, have indicated that they will not adopt the SORNA classification system, because they believe that their existing classification systems are superior (see
the National Conference of State Legislatures, 2010, for more information). Recent research
comparing the tiers outlined in the SORNA, Static-99R, Static-2002, and existing state-specific tiering systems in Florida, Minnesota, New Jersey, and South Carolina indicates
that the SORNA tiers were unrelated to sexual recidivism in Minnesota, New Jersey, and
South Carolina, and inversely related to sexual recidivism in Florida (Zgoba et al., 2012).
The research of Zgoba et al. (2012) also indicates that actuarial assessments and existing
state tiering systems, including the RRAS, showed better predictive validity than SORNA.
Given the poor predictive validity of SORNA, particularly with respect to the most serious
sexual offenders, it is possible that many states will opt to retain their current methods of
sex offender assessment and tiering systems. As such, there remains a need for accurate risk
assessment. Furthermore, the financial implications of not adopting the SORNA classification system underscore the need to more accurately rate risk to properly allocate reduced
supervision and treatment resources to the offenders who pose the greatest risk to public
safety.
The current study examined the implementation of sex offender risk assessment in
New Jersey. The results indicate that the RRAS is not being properly implemented for a
variety of reasons. Specifically, improper implementation is the result of ambiguity in the
instrument’s user manual, different standards for the type of data used to score the RRAS
in each county, reliance on materials outside of the RRAS manual for guidance, a lack of
proper training and oversight on the RRAS, and judicial overrides resulting in higher risk
classification. It should be emphasized that, for most cases in the majority of counties, rating discrepancies would not result in a change in an individual’s risk tier designation
based on his or her county of residence. However, as presented earlier, these discrepancies could result in a hypothetical offender being tiered in different risk categories based
on whether he lived in one of the two counties that had the most extreme discrepancies in
ratings. If uniformly and consistently adhered to by practitioners, the enumerated recommendations for training and oversight have the capacity to substantially improve the implementation, inter-rater reliability, and utility of the RRAS and of sex offender risk assessment instruments in general; reduce costs to the system; lead to greater uniformity in the risk assessment (and the resulting extent of community notification) of offenders across counties; and possibly increase public safety. These recommendations may also help improve
sex offender risk assessment in states that decline to adopt the SORNA classification
system.
Appendix A
Prosecutor–Public Defender Survey 2 on RRAS Sources of Disparity
At the beginning of each survey, we asked each respondent to select their county and
whether they are a prosecutor or public defender. Each county was assigned a unique code,
so only the researchers know to which county a set of responses refers.
Introduction
The state legislature and the governor enacted legislation requiring the Violence Institute of
New Jersey at University of Medicine and Dentistry of New Jersey (UMDNJ) to evaluate the
implementation of Megan’s Law tiering across the 21 counties of New Jersey. As part of our
evaluation, we are examining how the various counties apply the tiering standards set out in
the Registrant Risk Assessment Scale (RRAS). We are asking that you respond to the following questions, and that you add any comments you believe will assist us with this task. We
will not identify any judge, prosecutor, public defender, or county by name in our reports.
Questions
Degree of Force
1. What factors differentiate between low and moderate risk?
2. How does your county categorize removal of victim’s clothing without force? Does it fall into
the low-risk or moderate-risk category?
Degree of Contact
3. How is it handled when an offender fondles himself under his clothing but does not touch the
victim? Is it considered fondling under the clothing or no contact?
Age of Victim
4. How is the victim’s age scored if the victim lied about his or her age? Is age scored by the age she or he claimed to be or by his or her actual age at the time of the offense?
Victim Selection
5. With regard to the household member exception, how does your county handle non-blood
relatives who stay in the home, or individuals who intermittently live in the home?
Number of Offenses/Victims
6. Please explain how your county defines sole sex offense.
7. What constitutes credible evidence of additional offenses/victims if there is no conviction (e.g., statements made by the defendant)?
History of Anti-Social Acts
8. Are arrests that never result in prosecution or conviction included in this category?
9. Do you include the current sex offense(s) as an anti-social act?
10. Are traffic violations included in this category?
11. Are school disciplinary records that never lead to offender involvement with the legal system included in this category?
12. Do you score job loss as anti-social under any circumstances?
13. If you do not generally score job loss as an anti-social act, what kind of behavioral issues
related to the job loss would cause you to include the job loss as an anti-social act?
14. What sort of behavior is included in the category “sexual deviancy not the subject of criminal prosecution”?
15. Is a drug use history included in this category?
Response to Treatment
16. How would you score an offender who has successfully completed sex offender treatment
but his or her therapist will not provide a report (assuming the offender has signed a release)?
17. How would you score an offender who is currently participating in sex offender treatment
but his or her therapist will not provide a report (assuming the offender has signed a release)?
18. How would you score an offender who has successfully completed sex offender treatment
and is therefore not actively participating in a treatment program?
19. How would you score an offender who has made the effort to get him- or herself on a waiting list for sex offender treatment, but is not actively participating in a treatment program
because she or he is on a waiting list?
20. What types of treatments are included in the response to treatment and therapeutic support
categories? Is it just sex offender therapy or are others types of treatment included? If so,
please note what else is included in addition to sex offender therapy.
21. Do you consider other types of treatment that you think the offender needs while rating this
item (e.g., substance abuse treatment)?
Substance Abuse
22. How long does an offender need to be off of drugs/alcohol to be considered “in remission”?
Therapeutic Support
23. How do you distinguish between response to treatment and therapeutic support?
Residential Support
24. How is it scored when an offender has stable housing, but lives alone, has successfully completed parole, and is not on community supervision for life?
25. Are specific timeframes used to define frequent relocation by a sex offender?
Employment/Educational Stability
26. How is stable employment defined? Is there a time requirement? Does there have to be a certain
type of supervision? How is that supervision measured? Do some jobs qualify while others do not?
27. How is work through a temp agency scored if the offender is supervised but is required to
change temporary employers with regularity? Is it scored as intermittent but appropriate or
inappropriate?
28. How does parole supervision or lack thereof affect perceived level of risk or stability?
Issues Affecting Multiple Categories
29. Does your county include sex offenses that were not prosecuted (i.e., in seriousness of
offense and offense history categories)?
30. Does your county include sex offenses for which evidence of an offense exists but no
charges were filed?
31. Does your county include sex offenses for which charges were filed, but resulted in a no bill
at grand jury?
32. Does your county include sex offenses that resulted in a finding of not guilty at trial?
33. Does your county include sex offenses dismissed as part of a plea agreement?
34. If a sex offender pleads guilty to a sex offense, does your county score for facts contained in
the factual basis for the plea or the (alleged) facts from the original complaint that might not
have been included in the guilty plea?
Appendix B
Intra-County Scoring Discrepancies Between Prosecutors and Public Defenders
Counties With Discrepancies Between Prosecutors and Public Defenders

For each of New Jersey's 21 counties, the matrix marks with an "X" every scorable Survey 2 question on which the county's prosecutor and public defender gave discrepant responses, together with a total for each county and for each question. The number of counties with discrepant responses on each question was as follows: Question 2: 8; Question 3: 4; Question 4: 0; Question 5: 11; Question 6: 8; Question 8: 3; Question 9: 8; Question 10: 8; Question 11: 3; Question 12: 9; Question 15: 14; Question 16: 11; Question 17: 12; Question 18: 6; Question 19: 8; Question 20: 9; Question 21: 12; Question 23: 10; Question 24: 12; Question 25: 1; Question 27: 6; Question 28: 2; Question 29: 0; Question 30: 0; Question 31: 7; Question 32: 7; Question 33: 1; Question 34: 0.

Note. Questions 1, 7, 13, 14, 22, and 26 are not included in the matrix because they are qualitative questions that do not allow for scoring of discrepancies.
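The totals in this matrix are straightforward tallies. The following Python sketch shows one way such per-question and per-county totals can be computed from per-county discrepancy records; the data structure, names, and example entries are ours and purely illustrative, not the study's actual data.

from collections import Counter

# Scorable Survey 2 questions (the qualitative questions 1, 7, 13, 14,
# 22, and 26 are excluded, as in the matrix).
SCORABLE_QUESTIONS = [2, 3, 4, 5, 6, 8, 9, 10, 11, 12, 15, 16, 17, 18,
                      19, 20, 21, 23, 24, 25, 27, 28, 29, 30, 31, 32, 33, 34]

# County code -> set of questions on which the prosecutor and public
# defender gave discrepant responses. Illustrative entries only.
discrepancies = {
    1: {5, 15, 16, 24},
    2: {5, 17},
    3: set(),  # a county with no discrepant responses
}

# Column totals: number of counties with a discrepancy on each question.
per_question = Counter(q for qs in discrepancies.values() for q in qs)
column_totals = {q: per_question.get(q, 0) for q in SCORABLE_QUESTIONS}

# Row totals: number of discrepant questions per county.
row_totals = {county: len(qs) for county, qs in discrepancies.items()}

print(column_totals[5])  # 2 counties disagree on Question 5 in this example
print(row_totals[1])     # County 1 has 4 discrepant questions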
Appendix C
Fact Pattern and Scores
Fact Pattern
An offender has one conviction for coaxing a 9-year-old child from his neighborhood (without threats) to pull his pants down behind the dugout of the neighborhood baseball field
with the promise of candy. Once the child had his pants down, the offender touched himself
under his clothing, but did not touch the child. He is currently on a waiting list for treatment,
but is not actively participating in treatment because he is on the waiting list. He has stable
housing, but lives alone. He works regularly through a temporary employment agency.
Factors Not Disputed
Age of victim: high risk: 3 × 5 = 15 points
Victim selection: moderate risk: 1 × 3 = 3 points
Number of victims: low risk: 0 × 3 = 0 points
Factors Disputed in Survey Responses
Degree of force: low risk: 0 × 5 = 0 points; moderate risk: 1 × 5 = 5 points; high risk: 3 × 5 = 15 points
Degree of contact: low risk: 0 × 5 = 0 points; moderate risk: 1 × 5 = 5 points
Response to treatment: low risk: 0 × 2 = 0 points; moderate risk: 1 × 2 = 2 points; high risk: 3 × 2 = 6 points
Therapeutic support: low risk: 0 × 1 = 0 points; moderate risk: 1 × 1 = 1 point; high risk: 3 × 1 = 3 points
Residential support: low risk: 0 × 1 = 0 points; moderate risk: 1 × 1 = 1 point; high risk: 3 × 1 = 3 points
Employment stability: low risk: 0 × 1 = 0 points; moderate risk: 1 × 1 = 1 point
Range of Possible Risk Points Based on Survey 2 Responses
Degree of force: 0-15 points
Degree of contact: 0-5 points
Response to treatment: 0-6 points
Therapeutic support: 0-3 points
Residential support: 0-3 points
Employment stability: 0-1 point
Risk Points
Risk Points From a County That Scored Low:
Force: moderate risk (5 points)
Contact: low risk (0 points)
Response to Treatment: low risk (0 points)
Therapeutic Support: low risk (0 points)
Housing: low risk (0 points)
Employment: moderate risk (1 point) (a)
Total: 6 points

Risk Points From a County That Scored High:
Force: high risk (15 points)
Contact: never encountered (0 points)
Response to Treatment: high risk (6 points)
Therapeutic Support: high risk (3 points)
Housing: high risk (3 points)
Employment: low risk (0 points)
Total: 27 points

a. Prosecutor scored low and public defender scored moderate, so we went with the higher score in an effort to be conservative on the point differential.
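The arithmetic behind these totals is a simple weighted sum, which the short Python sketch below reproduces. The 0/1/3 point values and the factor weights are taken from the scoring shown in this appendix; the function and variable names are ours, and the sketch is illustrative rather than an implementation of the official RRAS instrument.

RISK_VALUES = {"low": 0, "moderate": 1, "high": 3,
               "never encountered": 0}  # scored as 0 points above

WEIGHTS = {  # factor weights as used in the calculations above
    "degree_of_force": 5, "degree_of_contact": 5, "age_of_victim": 5,
    "victim_selection": 3, "number_of_victims": 3,
    "response_to_treatment": 2, "therapeutic_support": 1,
    "residential_support": 1, "employment_stability": 1,
}

def rras_points(ratings):
    """Sum of weight x rating value over the rated factors."""
    return sum(WEIGHTS[f] * RISK_VALUES[r] for f, r in ratings.items())

# Factors scored identically everywhere for the hypothetical offender.
undisputed = {"age_of_victim": "high", "victim_selection": "moderate",
              "number_of_victims": "low"}

# Disputed-factor ratings from the lowest- and highest-scoring counties.
low_county = {"degree_of_force": "moderate", "degree_of_contact": "low",
              "response_to_treatment": "low", "therapeutic_support": "low",
              "residential_support": "low", "employment_stability": "moderate"}
high_county = {"degree_of_force": "high",
               "degree_of_contact": "never encountered",
               "response_to_treatment": "high", "therapeutic_support": "high",
               "residential_support": "high", "employment_stability": "low"}

print(rras_points(undisputed))   # 18 points, not disputed
print(rras_points(low_county))   # 6 points on the disputed factors
print(rras_points(high_county))  # 27 points on the disputed factors

Combining the 18 undisputed points with each county's disputed-factor points yields 24 versus 45 points for the same hypothetical offender on the factors scored here.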
Notes
1. In New Jersey, only Tier 2 and Tier 3 designated sex offenders are mandated to have an entry on the Internet Registry.
2. New Jersey comprises 21 counties, and risk determinations are made by the prosecutor in each individual county.
3. The authors were not tasked with evaluating the predictive validity of the Registrant Risk Assessment Scale (RRAS),
nor did the authors collect any data that would allow them to evaluate the predictive validity of the RRAS.
4. The study and survey were approved by an Institutional Review Board prior to administration.
5. Six questions were purely qualitative and discrepancies cannot be reliably identified in the responses. Therefore, they
are not included in the matrix.
6. One county did not have any discrepant responses on the risk-rating questions.
7. Factor 3 is also weighted by 5, but none of the Survey 1 responses highlighted it as a problem and the Survey 2 question
regarding victim’s age produced zero discrepancies.
8. The RRAS manual can be found in Exhibit E of the Attorney General Guidelines for Law Enforcement for the
Implementation of Sex Offender Registration and Community Notification Laws available at http://www.nj.gov/oag/dcj/
megan/meganguidelines-2-07.pdf.
9. Under-classification of risk is possible; in this study, however, some prosecutors were manipulating the instrument to increase the risk score, based on their own beliefs or their perception of what judges want, resulting in the over-classification of risk.
References
Ægisdóttir, S., White, M. J., Spengler, P. M., Maugherman, A. S., Anderson, L. A., Cook, R. S., . . . Rush, J. D. (2006). The meta-analysis of clinical judgment project: Fifty-six years of accumulated research on clinical versus statistical prediction. The Counseling Psychologist, 34, 341-382.
Andrews, D. A., Bonta, J., & Wormith, J. S. (2006). The recent past and near future of risk and/or need assessment. Crime &
Delinquency, 52, 7-27.
Bengtson, S., & Långström, N. (2007). Unguided clinical and actuarial assessment of re-offending risk: A direct comparison
with sex offenders in Denmark. Sexual Abuse: A Journal of Research and Treatment, 19, 135-153.
Bonta, J. (2002). Offender risk assessment: Guidelines for selection and use. Criminal Justice and Behavior, 29, 355-379.
Bonta, J., Rugge, T., Scott, T. L., Bourgon, G., & Yessine, A. K. (2008). Exploring the black box of community supervision.
Journal of Offender Rehabilitation, 47, 248-270.
Bonta, J., Wallace-Capretta, S., & Rooney, J. (2000). A quasi-experimental evaluation of an intensive rehabilitation supervision program. Criminal Justice and Behavior, 27, 312-329.
Bumby, K. (2007). The importance of assessment in sex offender management: An overview of key principles and practices
(OJP Award Numbers: 2005-WP-BX-K179 and 2006-WP-BX-K004). Silver Spring, MD: US Department of Justice,
Center for Sex Offender Management.
Connolly, M. M. (2003). Critical examination of actuarial offender-based prediction assessments: Guidance for the next generation of assessments (Doctoral dissertation) (2001-IJ-CX-0003). Washington, DC: US Department of Justice, National
Institute of Justice.
Corbett, R. P. (2008). A promising future for community corrections. Criminology & Public Policy, 7, 303-310.
Craig, L. A., Browne, K. D., & Stringer, I. (2003). Risk scales and factors predictive of sexual offence recidivism. Trauma,
Violence, & Abuse, 4, 45-69.
Dal Pra, Z. (2004). In search of a risk instrument. In National Institute of Correction (Ed.), Topics in community corrections: Assessment issues for managers (pp. 9-12). Washington, DC: US Department of Justice, National Institute of
Corrections.
Epperson, D. L., Kaul, J. D., & Hesselton, D. (1998, October). Final report on the development of the Minnesota Sex Offender
Screening Tool–Revised (MnSOST-R). Paper presented at the 17th Annual Conference of the Association for the
Treatment of Sexual Abusers, Vancouver, Canada.
Gottfredson, S. D. (1987a). The honest parole decision-maker’s guide to prediction and risk assessment: Everything you
always wanted to know about decision-making but were afraid to ask (Final Report, NIC TA Grant GG-9). Washington,
DC: National Institute of Corrections.
Gottfredson, S. D. (1987b). Prediction: An overview of selected methodology issues. In D. M. Gottfredson & M. Tonry
(Eds.), Prediction and classification: Criminal justice decision making, crime and justice: A review of research
(Vol. 9, pp. 21-53). Chicago, IL: University of Chicago Press.
Gottfredson, S. D., & Gottfredson, D. M. (1986). Accuracy of risk prediction models. In A. Blumstein, J. Cohen, J. Roth,
& C. A. Visher (Eds.), Criminal careers and “career criminals” (Vol. 2, pp. 212-290). Washington, DC: National
Academy of Sciences Press.
Gottfredson, S. D., & Moriarty, L. J. (2006). Statistical risk assessment: Old problems and new applications. Crime &
Delinquency, 52, 178-200.
Grove, W. M., & Meehl, P. E. (1996). Comparative efficiency of informal (subjective, impressionistic) and formal (mechanical, algorithmic) prediction procedures: The clinical-statistical controversy. Psychology, Public Policy, and Law, 2, 293-323.
Grove, W. M., Zald, D. H., Lebow, B. S., Snitz, B. E., & Nelson, C. (2000). Clinical versus mechanical prediction: A meta-analysis. Psychological Assessment, 12, 19-30.
Hanson, R. K. (1997). The development of a brief actuarial risk scale for sexual offense recidivism (User Report 1997-04).
Ottawa, Ontario: Department of the Solicitor General of Canada.
Hanson, R. K. (1998). What do we know about sex offender risk assessment? Psychology, Public Policy, and Law, 4, 50-72.
Hanson, R. K., & Thornton, D. (1999). Static 99: Improving actuarial risk assessments for sex offenders (User Report 1999-02).
Ottawa, Ontario: Department of the Solicitor General of Canada.
Hanson, R. K., & Thornton, D. (2003). Notes on the development of Static-2002 (Corrections Research User Report No. 2003-01).
Ottawa, Ontario: Department of the Solicitor General of Canada.
Hare, R. D. (1991). The Psychopathy Checklist–Revised (PCL-R). Toronto, Ontario, Canada: Multi-Health Systems.
Harris, A. J. (2006). Risk assessment and sex offender community supervision: A context-specific framework. Federal
Probation, 70(2), 36-43.
Harris, G. T., Rice, M. E., & Quinsey, V. L. (1993). Violent recidivism of mentally disordered offenders: The development of
a statistical prediction instrument. Criminal Justice and Behavior, 20, 315-335.
Kreamer, S. (2004). Quality assurance and training in offender assessment. In National Institute of Corrections (Ed.), Topics
in community corrections: Assessment issues for managers (pp. 13-19). Washington, DC: US Department of Justice,
National Institute of Corrections.
Lipsky, M. (1980). Street-level bureaucracy: Dilemmas of the individual in public services. New York, NY: Russell Sage
Foundation.
Lowenkamp, C. T., Latessa, E. J., & Holsinger, A. M. (2004). Empirical evidence on the importance of training and experience in using the Level of Service Inventory-Revised. In National Institute of Corrections (Ed.), Topics in community
corrections: Assessment issues for managers (pp. 49-53). Washington, DC: US Department of Justice, National Institute
of Corrections.
Lynch, M. (1998). Waste managers? The new penology, crime fighting, and parole agent identity. Law & Society Review,
32, 839-869.
McCleary, R. (1975). How structural variables constrain the parole officer’s use of discretionary power. Social Problems,
23, 209-225.
McCleary, R. (1977). How parole officers use records. Social Problems, 24, 576-589.
McCleary, R. (1978). Dangerous men: The sociology of parole. Beverly Hills, CA: Sage Publications.
Mingus, W., & Burchfield, K. B. (2012). From prison to integration: Applying modified labeling theory to sex offenders.
Criminal Justice Studies, 25, 97-109.
The National Conference of State Legislatures. (2010, January 7). Cost-benefit analyses of SORNA implementation. Retrieved
from www.ncsl.org/issues-research/justice/cost-benefit-analyses-of-sorna-implementation.aspx
New Jersey Office of the Attorney General. (2007, February). Attorney General guidelines for law enforcement for the implementation of sex offender registration and community notification laws. Trenton, NJ: Division of Criminal Justice, the
State of New Jersey. Retrieved from http://www.nj.gov/oag/dcj/megan/meganguidelines-2-07.pdf
Pratt, J. (2000). Sex crimes and the new punitiveness. Behavioral Sciences & the Law, 18, 135-151.
Presser, L., & Gunnison, E. (1999). Strange bedfellows: Is sex offender notification a form of community justice? Crime &
Delinquency, 45, 299-315.
Quinsey, V. L., Harris, G. T., Rice, M. E., & Cormier, C. A. (1998). Violent offenders: Appraising and managing risk.
Washington, DC: American Psychological Association.
Registration and Community Notification Laws, N.J.S.A. 2C:7-1-7-11 (1994).
Schlager, M. D. (2009). The organizational politics of implementing risk assessment instruments in community corrections.
Journal of Contemporary Criminal Justice, 25, 412-423.
Schneider, A. L., Ervin, L., & Snyder-Joy, Z. (1996). Further exploration of the flight from discretion: The role of risk/need
instruments in probation supervision decisions. Journal of Criminal Justice, 24, 109-121.
Thornton, D., Mann, R., Webster, S., Blud, L., Travers, R., Friendship, C., & Erikson, M. (2003). Distinguishing and combining risks for sexual and violent recidivism. Annals of the New York Academy of Sciences, 989, 225-235.
Witt, P. H., & Barone, N. (2004). Assessing sex offender risk: New Jersey’s methods. Federal Sentencing Reporter, 16,
170-175.
Witt, P. H., DelRusso, J., Oppenheim, J., & Ferguson, G. (1996). Sex offender assessment and the law. Journal of Psychiatry
& Law, 24, 343-377.
Zevitz, R. G., & Farkas, M. A. (2000). Sex offender community notification: Managing high risk criminals or exacting further
vengeance? Behavioral Sciences & the Law, 18, 375-391.
Zgoba, K. M., Miner, M., Knight, R., Letourneau, E., Levenson, J., & Thornton, D. (2012). A multi-state recidivism study using Static-99R and Static-2002 risk scores and tier guidelines from the Adam Walsh Act (NIJ Award Number: 2008-MU-MU-0001). Washington, DC: U.S. Department of Justice, National Institute of Justice.
Jennifer L. Lanterman, PhD, is an assistant professor in the Department of Criminal Justice at the University of Nevada,
Reno. She previously worked as a research associate at the Violence Institute of New Jersey during her involvement with the
present study. Her research focuses on the management and treatment of high-risk offenders, and on innovation and evidence-based practice in institutional and community-based corrections.
Douglas J. Boyle, JD, PhD, is the research director at the Violence Institute of New Jersey at Rutgers, the State University of
New Jersey, and is a faculty member at both the School of Public Health and the New Jersey Medical School at Rutgers
University. He obtained his PhD in clinical psychology from the State University of New York at Stony Brook, his JD from
New York University School of Law, and his BA from Columbia University. His recent work has appeared in Criminology &
Public Policy, Justice Research and Policy, the Journal of Interpersonal Violence, the Journal of School Violence, the Journal
of Family Violence, and Policing: An International Journal of Police Strategies and Management.
Laura M. Ragusa-Salerno, MA, is a research associate at the Violence Institute of New Jersey at Rutgers University and a
doctoral student at the Rutgers University School of Criminal Justice. Her primary research interests include violent crime, sexual offender risk assessment, and evaluation research. She has published in peer-reviewed journals, including Criminal
Justice and Behavior, Criminology and Public Policy, the Journal of Crime and Justice, and the Journal of Interpersonal
Violence.