Evaluating Competency to Stand Trial with Evidence-Based Practice
Journal of the American Academy of Psychiatry and the Law Online December 2009, 37 (4) 450-460;
Abstract
Evaluations for competency to stand trial are distinguished from other areas of forensic consultation by their long history of standardized assessment beginning in the 1970s. As part of a special issue of the Journal on evidence-based forensic practice, this article examines three published competency measures: the MacArthur Competence Assessment Tool-Criminal Adjudication (MacCAT-CA), the Evaluation of Competency to Stand Trial-Revised (ECST-R), and the Competence Assessment for Standing Trial for Defendants with Mental Retardation (CAST-MR). Using the Daubert guidelines as a framework, we examined each competency measure regarding its relevance to the Dusky standard and its error and classification rates. The article acknowledges the past polarization of forensic practitioners on acceptance versus rejection of competency measures. It argues that no valuable data, be it clinical acumen or standardized data, should be systematically ignored. Consistent with the American Academy of Psychiatry and the Law Practice Guideline, it recommends the integration of competency interview findings with other sources of data in rendering evidence-based competency determinations.
Evidence-based practice for the evaluation of competency to stand trial cannot be considered without first providing a clinical context and legal framework. Clinically, the movement toward empirically based assessments has created important advances, some limitations, and substantial resistance. The Daubert standard provides a legal framework for evidence-based practice in the forensic arena. This article begins with an overview of evidence-based practice and the Daubert standard, which sets the stage for an extensive examination of competency to stand trial via three competency measures.
Paris1 ably documents the evolution of psychiatric practice from idiosyncratic clinical inferences and basic research studies to systematic investigations of evidence-based practice. Applied mostly to treatment and treatment outcomes, evidence-based practice is an attempt to evaluate treatment efficacies systematically via randomized controlled trials and meta-analyses.2,3 These efforts to revolutionize mental health practices are not without critics,4,5 who raise problems with research design (e.g., weak outcome measures, diagnostic validity, comorbidity, and subsyndromal cases). Established practitioners sometimes are slighted by evidence-based researchers, who now feel "entitled to criticize and rectify clinical authorities" perhaps motivated by "an iconoclastic or even patricidal tendency" (Ref. 5, p 327). While the phrase "patricidal tendency" is an overreach, it does capture the concerns of seasoned practitioners who see the possibility that their decades of experience will be devalued or even discredited by evidence-based approaches. Moreover, the objectivity of evidence-based researchers has been called into question because they are motivated by payment and publication to produce noteworthy results.4 The acceptance of evidence-based methods within the psychiatric community is clearly influenced by both concerns regarding research design and polarized professional attitudes. While the bulk of the article addresses research findings, the next two paragraphs outline the equally important topic of professional attitudes.
Professional attitudes are an often overlooked but cardinal component in the acceptance of evidence-based practice. Slade and his colleagues6 carefully evaluated the acceptance of an empirically based assessment model involving a constellation of standardized measures. Objections by practitioners to using the assessment model have included concerns about its cost (35%), usefulness (38%), duplicated effort (23%), and duration (10%). As evidence of polarized views, three of these same objections were seen by other practitioners as benefits, including usefulness (45%), nonduplication of services (25%), and brevity (25%). Lessons from Slade et al. can clearly be applied to forensic practice regarding important determinants for the acceptance of evidence-based practice.
Aarons et al.7,8 have gone a step further in studying how professional attitudes toward evidence-based practice are reflected in effective interventions. Although they focused on treatment, several findings may be applicable to forensic practice. The two most salient objections to evidence-based practice were that clinical experience is better than standardized methods and that practitioners know better than researchers. We revisit these objections later in the context of evidence-based competency measures. The next section addresses the admissibility of expert evidence in light of the Daubert9 standard.
Application of the Daubert Standard
The Supreme Court, in Daubert v. Merrell Dow Pharmaceuticals, Inc.,9 applied scientific principles to the admissibility of scientific evidence. It explicitly rejected the test established in Frye v. United States,10 which relied solely on general acceptance. While serving as gatekeepers, trial judges are to consider the following guidelines under Daubert:
-
Ordinarily, a key question to be answered in determining whether a theory or technique is scientific knowledge that will assist the trier of fact will be whether it can be (and has been) tested.
-
Another pertinent consideration is whether the theory or technique has been subjected to peer review and publication.
-
Additionally, in the case of a particular scientific technique, the court ordinarily should consider the known or potential rate of error.
-
Finally, "general acceptance" can still have a bearing on the inquiry. A "reliability assessment does not require, although it does permit, explicit identification of a relevant scientific community and an express determination of a particular degree of acceptance within that community" [Ref. 9, pp 593–4].
Guidelines 1 and 3 specifically address scientific methods. Guideline 1 relies on the construct of falsifiability set forth by Popper.11 Simply put, a conclusion cannot be accepted as true if there is no way that its truth or falsity can be proven—if it has never been tested. With reference to forensic concerns, can the concept be empirically tested, and does the research have the potential to disprove the conclusion? Whereas Guideline 1 is more theoretical, Guideline 3 is solidly methodological. Its error rate focuses specifically on the accuracy of measurement, which is affected by reliability and validity.
Daubert and two subsequent Supreme Court cases (General Electric Co. v. Joiner12 and Kumho Tire Co. v. Carmichael13) are referred to as the Daubert trilogy. In Joiner, the Court specified that the trial judge would be the arbiter of scientific admissibility and could be overruled based only on the abuse-of-discretion standard. For mental health experts, the practical result of this ruling is that different trial judges within the same jurisdiction may legitimately reach contrary conclusions about the admissibility of specific methods, such as competency measures.14 In Kumho, the Supreme Court applied the Daubert guidelines beyond scientific evidence to all expert testimony. The practical effect of this decision was to prevent experts from circumventing Daubert by claiming that their expertise (e.g., clinical practice) was nonscientific. The Court reaffirmed the flexibility in applying the Daubert guidelines, which may or may not be relevant in determining the reliability of the expert testimony in a particular case. Welch15 extensively describes "Daubert's legacy of confusion" in allowing trial judges to apply any or all of the Daubert guidelines when admitting expert testimony.
A comprehensive review of the Daubert decision is far beyond the scope of this article, given the hundreds of scholarly works in the psychological, medical, and legal literatures. Readers may wish to refer to the Federal Judicial Center16 and special issues of Psychology, Public Policy, and Law (vol. 8, issues 2–4) and the American Journal of Public Health (vol. 95, suppl. 1) for a more thorough introduction. For our purposes, we selectively review articles that provide fundamental insights into Daubert and examine several examples of how Daubert has been applied to standardized measures and legal standards.
Gatowski and her colleagues,17 in a national study of 400 state trial court judges, found that most judges (i.e., ranging from 88% to 93%) believed that the individual Daubert guidelines were useful in deciding the admissibility of scientific evidence. Not surprisingly, they had the most difficulty in fully understanding those directly involved in scientific method (Guidelines 1 and 3). In contrast, Guidelines 2 and 4 were relatively easy to grasp. Based on her work, we should anticipate that the more scientific guidelines will generate greater discrepancies among trial courts.
Researchers and scholars have critically evaluated whether general psychological tests meet the Daubert guidelines for admissibility. For example, controversy and debate surround the sufficiency of the Rorschach18,19 and MCMI-III20,21 when evaluated according to Daubert guidelines. Regarding the MCMI-III, Rogers and his colleagues22 questioned the admissibility of any measure when the error rate substantially exceeded its accuracy. Daubert reviews have also considered several forensic measures for which the adequacy of their psychometric properties has been debated: competency to confess measures23,24 and the Mental State at the Time of the Offense scale.25,26
Within the context of family law, Kelly and Ramsey27 provide a masterful analysis of validity as it applies to psycholegal constructs and measures, along with a detailed list of specific benchmarks. Researchers and practitioners are likely to find this a valuable resource in evaluating forensic measures.
Author Disclosure
The opening paragraph of this article noted the professional schisms between traditional practice and the growing movement toward evidence-based practice. Among the broad assortment of criticisms, researchers have been singled out as motivated by personal and professional gain.5 An alternative view is that traditionalists are equally motivated to avoid criticisms of their current clinical practices by researchers. Be that as it may, a brief disclosure from the first author is in order. Rogers has pioneered the use of empirically validated forensic measures for more than two decades, beginning in 1984 with the publication of the R-CRAS (Rogers Criminal Responsibility Assessment Scales)28 for assessing criminal responsibility and subsequently the Structured Interview of Reported Symptoms (SIRS)29 for feigned mental disorders. Of particular relevance to this article, he is the principal author of the Evaluation of Competency to Stand Trial-Revised (ECST-R)30 and receives a royalty of approximately 30 cents for each ECST-R record form and summary sheet administered. Readers can independently evaluate the following analyses of competency measures in light of this disclosure.
Competency to Stand Trial
The standard for competency to stand trial was established by the Supreme Court's decision in Dusky v. United States31 with a one-sentence formulation requiring that the defendant "has sufficient present ability to consult with his lawyer with a reasonable degree of rational understanding—and whether he has a rational as well as factual understanding of the proceedings against him." Rogers and Shuman14 provide a legal summary of Dusky's three prongs: a rational ability to consult one's own attorney, a factual understanding of the proceedings, and a rational understanding of the proceedings. Practitioners should be familiar with the Dusky standard and relevant appellate cases.
Competency to stand trial is especially important to evidence-based forensic practice because of its prevalence; it represents the most common pretrial focal point within the criminal domain of forensic psychiatry. Conservative estimates suggest there are 60,000 competency cases per year, with rates of incompetency often falling in the 20- to 30-percent range.32 When extrapolated from the number of actively psychotic and mentally disordered inmates,33 the potential number of competency evaluations could easily be twice this estimate.
Competency evaluations are also relevant to evidence-based forensic practice because of their long history of empirical validation. In his seminal work, Robey34 proposed in 1965 a standardized checklist for operationalizing competency to stand trial. With NIMH support, Lipsitt and his colleagues35 developed in 1971 the first standardized competency measure, the Competency Screening Test (CST). It was followed in 1973 by the Competency Assessment Instrument (CAI), developed and validated by McGarry and his team36 at Harvard Medical School's Laboratory of Community Psychiatry. This historical perspective provides an essential insight: the foundation for evidence-based forensic practice was established while the American Academy of Psychiatry and the Law (AAPL) and its counterpart, the American Academy of Forensic Psychologists, were still in their infancies. Unlike other forensic concerns, competency to stand trial has been in the vanguard of evidence-based practice, championed for decades by prominent forensic psychiatrists and psychologists.
The importance of competency evaluations was recently underscored by the 2007 publication of the AAPL Practice Guideline.37 This guideline provides a thorough introduction to the legal framework and conceptual basis for conducting these evaluations. While it does not grapple directly with evidence-based practices, the guideline attempts to standardize competency evaluations by recommending 15 specific areas of inquiry. Without providing standardized questions, it offers a nuanced statement that "Assessing and documenting a defendant's functioning usually requires asking specific questions that systematically explore" competency-related abilities (Ref. 37, p S34). Parenthetically, the qualifying term "usually" seems difficult to understand. Nonetheless, the AAPL Task Force recommends the use of specific questions and a systematic examination covering 15 areas of inquiry. Could each forensic psychiatrist or psychologist develop his or her own specific questions and systematic examination of competency? Although theoretically possible, an affirmative response would suggest marked optimism that does not take into account the need to establish the reliability and accuracy of their systematic examinations. A more sound approach would be the integration of clinical interviews with standardized measures. In fact, this approach is embraced by the AAPL Task Force in its summary statement about competency measures: "Instead, psychiatrists should interpret results of testing in light of all other data obtained from clinical interviews and collateral sources" (Ref. 37, p S43).
Evidence-based practice cannot be achieved without standardization. For assessments, the use of reliable and valid measures is the most direct and empirically defensible method of achieving this standardization. The balance of this article assumes that practitioners will integrate case-specific (clinical interview and collateral data) with nomothetic (standardized results) data. The standardized results, while only one component of competency evaluations, achieve four major objectives by systematizing the evaluation of key points, reducing the subjectivity in recording competency-related information, providing normative comparisons, and demonstrating the interrater reliability of observations and findings. Despite these important contributions to competency assessments, the caution of the AAPL Task Force is well founded; conclusions should not be based solely on this source but should reflect a careful integration of multiple sources of data.
Overview of Competency Measures
The first generation of competency measures was introduced in the 1970s. Of mostly historical interest, first-generation measures have limited data on their psychometric properties, a lack of normative data, and poor correspondence to the relevant legal standard.38 Although reviews of these measures are readily available,39 this article focuses more selectively on three published competency measures. Two measures are intended for general competency evaluations: the MacArthur Competence Assessment Tool-Criminal Adjudication (MacCAT-CA)40 and the ECST-R.30 The third measure, the Competence Assessment for Standing Trial for Defendants with Mental Retardation (CAST-MR),41 concentrates on defendants with mental retardation. The purpose of these competency measures is to provide standardized data to assist practitioners in reaching empirically based conclusions about elements of competency to stand trial. As noted by one reviewer, it would be utterly naïve to attempt to equate any test or laboratory findings with an ultimate or penultimate legal opinion.
The following subsections provide a brief description of the measures and their development. They are followed by a more in-depth examination of competency measures as a form of evidence-based practice.
MacCAT-CA Description
The MacCAT-CA was not originally developed as a measure of competency to stand trial. Instead, the original MacArthur research was intended to assess a much broader construct of decisional competence via a lengthy research measure, the MacArthur Structured Assessment of the Competencies of Criminal Defendants.42 It was later shortened and retrofitted for the evaluation of competency to stand trial.
The MacCAT-CA is composed of 22 items that are organized into three scales: understanding (8 items), reasoning (8 items), and appreciation (6 items). Probably because of its original development as a research measure, 16 of the 22 items do not address the defendant's case. Rather, the MacCAT-CA asks the examinee to consider a hypothetical case about two men (Fred and Reggie) and their involvement in a serious, nearly fatal, assault following an altercation while playing pool.
The MacCAT-CA has excellent normative data for 446 jail detainees, 249 of whom were receiving mental health services. They were compared with 283 incompetent defendants in a competence restoration program. These normative data were used for clinical interpretation of data from the jail detainees to establish three categories. Minimal or no impairment was defined as assessed deficits that fell within 1 standard deviation (SD) of the presumably competent detainees. Mild impairment was designated as the narrow band of deficits falling between 1 and 1.5 SD. Clinically significant impairment was designated as deficits at and above 1.5 SD. Unfortunately, this approach was unsuccessful for the appreciation scale; the authors simply assigned cut scores to the three categories, based on their own hypotheses regarding delusional thinking.
ECST-R Description
The ECST-R is composed of both competency and feigning scales. Its competency scales parallel the Dusky prongs: Consult With Counsel (CWC; 6 items), Factual Understanding of the Courtroom Proceedings (FAC; 6 items), and Rational Understanding of the Courtroom Proceedings (RAC; 7 items). For feigning, the ECST-R uses Atypical Presentation (ATP) scales that are organized by content (i.e., ATP-Psychotic and ATP-Nonpsychotic) and purported impairment (i.e., ATP-Impairment). Most competency items are scored on five-point ratings: 0, not observed; 1, questionable clinical significance; 2, mild impairment unrelated to competency; 3, moderate impairment that will affect but not by itself impair competency; and 4, severe impairment that substantially impairs competency.
The ECST-R was developed specifically for the purpose of evaluating the Dusky prongs. The key components for each prong were assessed by five competency experts via prototypical analysis. The retained components averaged 6.10 on a 7.00 rating scale of representativeness. Individual items for the competency scales were developed and pilot tested. The feigning scales were developed by using two primary detection strategies: rare symptoms and symptom severity.
The ECST-R has excellent normative data based on 200 competency referrals and 128 jail detainees. In addition, data were available for comparison purposes for 71 feigners as classified by simulation research or results on the SIRS.29 Cut scores were developed on the basis of linear T scores, which facilitates their interpretation. One limitation of the ECST-R is that its cut scores have not been validated for defendants with IQs of less than 60. Unlike the MacCAT-CA, which restricts its normative data to presumably competent participants, the ECST-R includes both competent and incompetent defendants in its normative group, thereby mirroring the entire population that it is intended to evaluate. This observation is a likely explanation for the differences in cut scores between the two measures. The ECST-R uses the following classification: 60 to 69 T, moderate impairment, usually associated with competent defendants; 70 to 79 T, severe impairment, which can reflect competent or incompetent defendants; 80 to 89 T, extreme impairment, usually associated with incompetent defendants; and 90 to 110 T, very extreme impairment, almost always associated with incompetent defendants.
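The T-score logic above can be sketched briefly. The linear T transformation (T = 50 + 10z) is standard psychometrics, and the band labels follow the ECST-R classification just described; the normative mean and SD in the example are placeholders for illustration, not values from the ECST-R manual:

```python
def linear_t(raw: float, norm_mean: float, norm_sd: float) -> float:
    """Linear T score: mean 50, SD 10 in the normative group."""
    return 50.0 + 10.0 * (raw - norm_mean) / norm_sd

def ecst_r_band(t: float) -> str:
    """Interpretive bands reported for the ECST-R competency scales."""
    if t < 60:
        return "no clinically significant impairment"
    if t < 70:
        return "moderate impairment (usually competent)"
    if t < 80:
        return "severe impairment (competent or incompetent)"
    if t < 90:
        return "extreme impairment (usually incompetent)"
    return "very extreme impairment (almost always incompetent)"

# Hypothetical norms (mean = 4, SD = 3 are placeholders):
t = linear_t(raw=10.0, norm_mean=4.0, norm_sd=3.0)  # T = 70.0
print(t, ecst_r_band(t))
```

Because the transformation is linear, a defendant's standing relative to the normative group is preserved exactly, which is what makes the fixed T-score bands interpretable.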
CAST-MR Description
The CAST-MR is composed of three competency scales: Basic Legal Concepts (25 multiple-choice questions), Skills to Assist Defense (15 multiple-choice questions), and Understanding Case Events (10 open-ended questions). Basic Legal Concepts is the one most closely aligned with Dusky's factual understanding, whereas Skills to Assist Defense uses hypothetical examples to evaluate the consult-with-counsel prong. Understanding Case Events asks for detailed recall (e.g., date and witnesses) of the alleged offense and the current criminal charges. Although not a perfect match, this last scale is most closely aligned with factual understanding.
The CAST-MR is an outgrowth of a doctoral dissertation. A small group of 10 professionals (lawyers, administrators, and forensic psychologists) rated the appropriateness of the CAST-MR content. On a 5-point scale, the ratings were somewhat variable, with Skills to Assist in Defense receiving an average score of only 3.03 regarding the appropriateness of its content (Ref. 41, p 31).
The CAST-MR is administered as an interview, although examinees are given a copy of the items to facilitate comprehension. According to its authors, the CAST-MR has a reading level of fourth grade or less, which was calculated by taking two samples, each of less than 400 words, and subjecting them to reading estimates.
Descriptive but not normative data are presented from two studies of criminal defendants. A total of 128 criminal defendants composed the following groups: no mental retardation or mental disorder (n = 46), mental retardation but no competency evaluation (n = 24), mental retardation and competent (n = 27), and mental retardation and incompetent (n = 31). The second validation study indicated a moderate agreement (71%) between cut scores and examiner judgment.
Competency Measures and Evidence-Based Practices
With Daubert used as the conceptual framework, this section examines competency measures as evidence-based practice. We begin with an evaluation of the congruence between the competency measures and the Dusky standard. Next, we examine these measures in light of error and classification rates.
Relevance of Competency Measures
The Supreme Court held in Daubert that expert testimony must be relevant to the matter at hand. Citing Federal Rule of Evidence 702, it "requires a valid scientific connection to the pertinent inquiry as a precondition to admissibility" (Ref. 9, p 592). It describes relevance as a matter of "fit"; scientific validity is not sufficient unless it fits the specific matter under consideration by the trial court. For competency determinations, the Supreme Court in Dusky established the three prongs for which the "fit" or congruence of scientific evidence must be considered.
Specific factual aspects of cases must also be considered. For example, the three competency measures differ in the extent to which they have been evaluated for pretrial defendants with mental retardation. For scientific validity to be relevant, it must be "sufficiently tied to the facts of the case" (Ref. 9, p 591). Therefore, the following analysis examines the construct validity of competency measures in light of their specific applications to defendant categories.
Table 1 provides a summary of the specific scales on competency measures with descriptive information regarding their type of inquiry and the complexity of their questions. Inquiries can be either case-specific (i.e., the content focuses on the defendant's case) or hypothetical (i.e., the content is unrelated to the defendant's case). Obviously, case-specific data are likely to meet the Daubert guideline of being "sufficiently tied to the facts of the case." In contrast, hypothetical data must be examined closely to determine their relevance or fit to a particular defendant's case. For instance, what would be the similarities between the MacCAT-CA's aggravated assault between friends and delusionally motivated crimes?
Table 1
Description and Congruence ("Fit") between Dusky's Prongs and Selected Competency Measures
With respect to relevance and fit, the three competency measures have the most in common in their assessment of Dusky's factual understanding of the court proceedings. Each evaluates the defendant's understanding of the court personnel and their respective roles at trial. The CAST-MR provides the broadest appraisal of factual understanding with inquiries about common legal terms and basic information regarding verdicts and sentencing. The CAST-MR also has a specific scale for considering the defendant's memory of the offense and subsequent arrest. Recall of these events is likely to be helpful in competency cases in which amnesia plays a central role. The MacCAT-CA also assesses court personnel and then uses a hypothetical case to evaluate criminal charges related to assault and matters such as plea bargaining. Although considered to be factual understanding,40 this scale also requires rational abilities in deciding on the alternatives. Neither the CAST-MR nor the MacCAT-CA assesses defendants' knowledge of their own criminal charges and the severity of these charges. The ECST-R focuses on both courtroom proceedings and defendants' understanding of their own criminal charges.
Forensic practitioners should decide which is most relevant to a particular competency evaluation. As a simple reminder, the CAST-MR has been validated only in defendants with mental retardation; it should not be used for mentally disordered defendants, with or without mental retardation. One strength of the ECST-R is that it both prompts and educates defendants with insufficient responses on factual understanding.
The competency measures are markedly divergent in their assessment of Dusky's consult-with-counsel prong. The MacCAT-CA uses a hypothetical assault to evaluate the defendant's ability to distinguish relevant and irrelevant information and consider choices related to matters such as plea bargaining. Therefore, it assesses rational abilities but does not consider the actual defendant-attorney relationship or the ability to communicate rationally. We have found the MacCAT-CA particularly useful in competency cases in which the defendant has expressed an interest in serving as his or her own attorney. The complexity of the material provides a useful benchmark for evaluating the defendant's capacity to absorb and address complex legal material. The CAST-MR uses some hypothetical material (e.g., a theft) but generally relies on material in the defendant's case. It emphasizes the ability of the defendant to cooperate with his counsel, while not acquiescing to others (e.g., police or prosecutors). Although it does not assess the quality of the defendant-attorney relationship directly, it can provide valuable information about the defendant's willingness to cooperate. The ECST-R focuses on the nature of the defendant-attorney relationship; through open-ended questions, it examines the quality of that relationship and the defendant's ability to identify and resolve disagreements in relation to the trial.
For the rational-understanding prong, both the MacCAT-CA and the ECST-R elicit information about the probable outcome of the case. They differ in that the ECST-R examines how severe psychopathology may affect the defendant's rational abilities. The MacCAT-CA also includes several items about defendants' views of and actions toward their attorneys. This information may assist with the consult-with-counsel prong. The ECST-R also asks defendants to consider how they might make important decisions about their cases, such as plea bargaining. The focus of the ECST-R inquiries is not on the decision itself but rather on the reasoning underlying the decision.
The foregoing discussion focused on the congruence between competency measures and the Dusky standard. Beyond this critically important discussion, the relevance of a measure must also consider its appropriateness for the intended population (i.e., impaired defendants). For example, does the length and complexity of competency questions substantially exceed the defendant's ability to process this information? For normal (unimpaired) persons, the capacity to process information is generally limited to the magic number of 7 ± 2 concepts.43 For language, individuals use verbal chunking consisting of 6 to 12 syllables per concept.44 Using the MacCAT-CA as a benchmark with 1.34 syllables per word, the midpoint for unimpaired persons would be: 7 concepts × 9 syllables ÷ 1.34 syllables per word = 47.01 words. The lower limit for unimpaired persons is 22.39 words. Defendants with serious mental disorders or mental retardation are likely to have substantial deficits in capacity to process information. In the absence of specific data, one option would be to use the lower limit for normal persons (i.e., ≤22 words) as the upper limit for competency measures used with potentially impaired defendants. As summarized in Table 1, two scales of the CAST-MR appear to meet this guideline, with Understanding Case Events being particularly straightforward. In contrast, questions for the Skills to Assist Defense scale include preliminary information that increases the average length to 46.9 words. Likewise, two MacCAT-CA scales are also problematic because of their word length: understanding (mean [M] = 45.31 words) and reasoning (M = 39.88 words). In direct contrast, the ECST-R took word length into account in the development of its items. As a result, the presented material is typically very short (i.e., fewer than 10 words) on the ECST-R competency scales.
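The word-length arithmetic above can be reproduced in a few lines. This is only an illustration of the calculation described in the text; the 1.34 syllables-per-word figure is the MacCAT-CA benchmark the authors cite:

```python
# Estimate word-processing capacity from Miller's 7 +/- 2 concepts and
# verbal chunking of 6-12 syllables per concept (midpoint 9 syllables).
SYLLABLES_PER_WORD = 1.34  # MacCAT-CA benchmark cited in the text

def capacity_in_words(concepts: float, syllables_per_concept: float) -> float:
    """Convert a concept/syllable capacity estimate into a word count."""
    return concepts * syllables_per_concept / SYLLABLES_PER_WORD

midpoint = capacity_in_words(7, 9)  # 7 concepts x 9 syllables -> ~47.01 words
lower = capacity_in_words(7 - 2, 6)  # 5 concepts x 6 syllables -> ~22.39 words

print(f"midpoint: {midpoint:.2f} words, lower limit: {lower:.2f} words")
```

The midpoint (~47 words) and lower limit (~22 words) match the figures in the paragraph above; the lower limit uses the conservative end of both ranges (5 concepts, 6 syllables).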
Error Rates and Competency Measures
A major strength of the three competency measures is the excellent data on their reliability and errors in measurement. As summarized in Table 2, trained practitioners are able to achieve a high level of inter-rater reliability on each measure, with exceptional estimates for the CAST-MR (r = 0.90) and ECST-R (r = 0.93 and 0.996). Because the reliability of traditional interviews cannot be established, the use of these competency measures addresses the scientific reliability of expert evidence.
Table 2
Reliabilities and Error Rates of the Three Competency Measures
The Daubert guidelines ask that experts address the error rates associated with their methods. One sound approach to ascertaining error rates is to estimate the accuracy of individual scores on competency measures. Calculated as the standard error of measurement (SEM), each competency measure produces small SEMs, indicating a high level of accuracy (Table 2). Especially useful for courtroom reports and subsequent testimony is the 95 percent confidence interval. When an elevated score exceeds the benchmark by the confidence interval, the practitioner can testify regarding a very high likelihood that the defendant meets this classification. As reported in Table 2, expert ratings of defendants that exceed the cut scores by three or more points have at least a 95 percent likelihood of being accurate. Stated in Daubert terms, the error rate is 5 percent or smaller.
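The SEM logic can be made concrete with a short sketch. The standard deviation and reliability below are illustrative placeholders, not the published figures from Table 2; the formulas (SEM = SD √(1 − r), with a 95 percent confidence half-width of 1.96 SEM) are the standard psychometric definitions.

```python
# Sketch of the error-rate reasoning: from a scale's reliability (r) and
# standard deviation (sd), compute the standard error of measurement and
# the 95% confidence half-width around an observed score.
import math

def standard_error_of_measurement(sd: float, reliability: float) -> float:
    """SEM = SD * sqrt(1 - r): spread of observed scores around true scores."""
    return sd * math.sqrt(1.0 - reliability)

def ci95_half_width(sd: float, reliability: float) -> float:
    """Half-width of the 95% confidence interval for an observed score."""
    return 1.96 * standard_error_of_measurement(sd, reliability)

# Illustrative values only: sd = 5.0 scale points, reliability r = 0.93.
half = ci95_half_width(5.0, 0.93)
print(f"95% CI half-width: {half:.2f} points")
# With these assumed values, a score about 3 points above a cut score
# exceeds it with at least 95 percent confidence.
```

This mirrors the article's point: when an elevated score exceeds the cut score by more than the confidence half-width, the chance of misclassification is 5 percent or smaller.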
An important consideration in establishing error rates is whether feigned (e.g., malingered) presentations will be mistaken for genuine incompetency. In this regard, the ECST-R is distinguished from the other two competency measures by its highly reliable scales that screen for feigned incompetency. As noted in Table 2, the ECST-R feigning scales have very high reliabilities (M = 0.996) and exceptionally small 95 percent confidence intervals (M = 0.35).
Classifications by Competency Measures
As an outgrowth of the previous section, practitioners must consider not only the relevance of the psycholegal constructs but also the meaning of their classifications. Simply put, how are these classifications established, and what is their relevance to the Dusky standard? Melton and his colleagues were the first to raise the concern of whether competency measures "appear to permit gross incongruencies between item ratings and scale interpretations" (Ref. 32, p 154). Of interest, that criticism was leveled specifically at the ECST-R rather than being evaluated critically for competency measures in general. We consider the scale classifications (interpretations) in the subsequent paragraphs.
The CAST-MR test manual provides little guidance for making classifications of competent and incompetent defendants with mental retardation. While cautioning that the CAST-MR is only one part of the competence assessment, we note that the mean total score for the defendants with mental retardation was 25.6 for incompetence versus 37.0 for competence. Because of small sample sizes and large variability, the authors provide the following caution: "only a gross estimate can be made of the degree to which CAST-MR total scores discriminate between groups found to be competent versus those found to be incompetent" (Ref. 41, p 19). In addition, the lack of information about specific prongs is a limiting factor in the CAST-MR classifications.
The MacCAT-CA has the most problems of the competency measures in establishing accurate classifications. Obviously, the group of hospitalized, legally incompetent defendants should theoretically evidence clinically significant impairment, given their combined psychiatric and legal status. The figures reveal that this is not the case for most defendants who are actually incompetent and hospitalized (see Ref. 40, Tables 4–6): on the understanding scale, 33.2 percent showed clinically significant impairment, 15.9 percent mild impairment, and 50.9 percent minimal or no impairment; on the reasoning scale, 41.3 percent clinically significant impairment, 13.8 percent mild impairment, and 44.9 percent minimal or no impairment; and on the appreciation scale, 44.5 percent clinically significant impairment, 9.2 percent mild impairment, and 39.2 percent minimal or no impairment.
Although classifications based on the ECST-R evidence a high concordance with legal outcome (88.9%), classifications by ECST-R scales are based on construct validity and the use of normative data. The ECST-R manual provides extensive data on the accuracy of its measurements. What about the "gross incongruencies" criticism of the ECST-R by Melton and his colleagues32? It seems to stem mostly from apparent confusion over the meaning of an ECST-R rating of 3. As previously noted, a rating of 4 shows substantially impaired competency by itself, whereas a rating of 3 shows impaired competency but does not, by itself, establish substantially impaired competency. However, the cumulative effects of ratings of 3 can signal substantially impaired competency. Indirectly, the Melton et al. commentary did raise a valid question as to whether consistent ratings of 2 (i.e., mild impairment but unrelated to competency) could result in classification as having severe impairment on the ECST-R competency scales. For two scales (FAC and RAC), such ratings would show only moderate impairment, which is typically associated with competent defendants. For the third scale (CWC), it is theoretically possible to score in the severe range based only on ratings of 2. In reviewing the ECST-R normative data, we did not find a single case on any of the competency scales where this occurred. Despite its extreme rarity (i.e., 0 of 356 defendants), practitioners may want to consider quickly screening ECST-R protocols for this remote possibility.
Concluding Remarks
Forensic practitioners should supplement the previous analysis with careful reviews from other researchers and scholars. Grisso39 provides a thorough review of the CAST-MR and the MacCAT-CA. Although the newest measure, the ECST-R is the only one of these competency measures to be reviewed by the well-respected Mental Measurements Yearbook.45,46 By combining these sources, practitioners will become knowledgeable regarding the strengths and limitations of competency measures.
Our informal observations suggest that forensic psychiatrists and psychologists are divided with respect to their use of competency measures. However, the historical divisions between psychiatry and psychology on the use of standardized assessments are gradually disappearing. As evidence of their growing importance, an American Psychiatric Association Task Force undertook a multiyear analysis of psychiatric measures, resulting in a comprehensive textbook.47 Beyond these general trends, specific contributions to competency measures have been multidisciplinary from the early efforts in the 1970s. If not based on disciplines, what accounts for this polarization? We believe that failures of both researchers and practitioners are to blame.
Researchers sometimes overestimate the ability of their standardized measures to evaluate complex clinical constructs. For instance, interview-based competency measures are typically composed of several dozen relevant constructs that are operationally defined. Even with exceptional care, these items can never fully capture the defendant's performance with respect to the spectrum of competency-related abilities. For instance, standardized observations of attorney-client interactions would be valuable. However, efforts in this direction have not been successful. As noted by Melton and his colleagues, "most attorneys have neither the time nor the inclination to observe, much less participate in, competency-to-stand-trial evaluations" (Ref. 32, p 148). Beyond complex content, we suspect there is some professional arrogance arising from the use of sophisticated research designs and psychometric rigor. The "patricidal tendency" of researchers to diminish the contributions of seasoned practitioners may play a relevant role.
Practitioners sometimes exaggerate the limitations of standardized measures while possibly overvaluing their own expertise. Some resistance arises from the either-or fallacy, wherein practitioners erroneously assume that they must choose between their own individualized methods and psychometrically validated measures. As found by Aarons et al.,7,8 we suspect there is some professional arrogance arising from views that practitioners are superior to researchers and their standardized methods.
Gutheil and Bursztajn48 wisely counsel that forensic practitioners avoid even the appearance of "ipse dixitism" with respect to unsubstantiated opinions. Substantiation should cover an array of relevant sources by knowledgeable experts. As part of this substantiation, reliable and standardized data from competency measures should not be routinely ignored by forensic practitioners. We must tackle directly the professional objections to evidence-based practice. Borrowing from Slade et al.,6 are these measures useful, nonduplicative, and time-efficient? With professional experience and expertise, practitioners can make informed decisions in selecting the appropriate competency measure to evaluate specific competency-related situations.
- American Academy of Psychiatry and the Law
References
1. Paris J: Canadian psychiatry across 5 decades: from clinical inference to evidence-based practice. Can J Psychiatry 45:34–9, 2000
2. Dongier M: Evidence-based psychiatry: the pros and cons. Can J Psychiatry 46:394–5, 2000
3. Lam RW, Kennedy SH: Using meta-analysis to evaluate evidence: practical tips and traps. Can J Psychiatry 50:167–74, 2005
4. Levine R, Fink M: The case against evidence-based principles in psychiatry. Med Hypotheses 67:401–10, 2006
5. Maier T: Evidence-based psychiatry: understanding the limitations of a method. J Eval Clin Pract 12:325–9, 2006
6. Slade K, Cahill S, Kelsey W, et al: Threshold 3: the feasibility of the Threshold Assessment Grid (TAG) for routine assessment of the severity of mental health problems. Soc Psychiatry Psychiatr Epidemiol 36:516–21, 2001
7. Aarons GA: Mental health provider attitudes toward adoption of evidence-based practice: the Evidence-Based Practice Attitude Scale (EBPAS). Ment Health Serv Res 6:61–74, 2004
8. Aarons GA, McDonald EJ, Sheehan AK, et al: Confirmatory factor analysis of the Evidence-Based Practice Attitude Scale in a geographically diverse sample of community mental health providers. Adm Policy Ment Health 34:465–9, 2007
9. Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993)
10. Frye v. United States, 293 F. 1013 (D.C. Cir. 1923)
11. Popper K: The Logic of Scientific Discovery. New York: Basic Books, 1959
12. General Electric Co. v. Joiner, 522 U.S. 136 (1997)
13. Kumho Tire Co., Ltd. v. Carmichael, 526 U.S. 137 (1999)
14. Rogers R, Shuman DW: Fundamentals of Forensic Practice: Mental Health and Criminal Law. New York: Springer, 2005
15. Welch C: Flexible standards, deferential review: Daubert's legacy of confusion. Harv J L Public Policy 29:1085–105, 2006
16. Federal Judicial Center: Reference Manual on Scientific Evidence (ed 2). Washington, DC: Federal Judicial Center, 2000
17. Gatowski SI, Dobbin SA, Richardson JT, et al: Asking the gatekeepers: a national survey of judges on judging expert evidence in a post-Daubert world. Law Hum Behav 25:433–58, 2001
18. Grove WM, Barden RC, Garb HN, et al: Failure of Rorschach-comprehensive-system-based testimony to be admissible under the Daubert-Joiner-Kumho standard. Psychol Public Policy Law 8:216–34, 2002
19. Ritzler B, Erard R, Pettigrew G: A final reply to Grove and Barden: the relevance of the Rorschach Comprehensive System for expert testimony. Psychol Public Policy Law 8:235–46, 2002
20. Rogers R, Salekin RT, Sewell KW: Validation of the Millon Clinical Multiaxial Inventory for Axis II disorders: does it meet the Daubert standard? Law Hum Behav 23:425–43, 1999
21. Dyer FJ, McCann JT: The Millon clinical inventories, research critical of their forensic application, and Daubert criteria. Law Hum Behav 24:487–97, 2000
22. Rogers R, Salekin RT, Sewell KW: The Millon Clinical Multiaxial Inventory: separating rhetoric from reality. Law Hum Behav 24:501–6, 2000
23. Rogers R, Jordan MJ, Harrison KS: A critical review of published competency-to-confess measures. Law Hum Behav 28:707–18, 2004
24. Grisso T: Reply to a critical review of published competency-to-confess measures. Law Hum Behav 28:719–24, 2004
25. Rogers R, Shuman DW: The Mental State at the Time of the Offense measure: its validation and admissibility under Daubert. J Am Acad Psychiatry Law 28:23–8, 2000
26. Poythress N, Melton GB, Petrila J, et al: Commentary on the Mental State at the Time of the Offense measure. J Am Acad Psychiatry Law 28:29–32, 2000
27. Kelly RF, Ramsey SH: Assessing and communicating social science information in family and child judicial settings: standards for judges and allied professionals. Fam Court Rev 45:22–41, 2007
28. Rogers R: Rogers Criminal Responsibility Assessment Scales (R-CRAS) and Test Manual. Odessa, FL: Psychological Assessment Resources, 1984
29. Rogers R, Bagby RM, Dickens SE: Structured Interview of Reported Symptoms (SIRS) and Professional Manual. Odessa, FL: Psychological Assessment Resources, 1992
30. Rogers R, Tillbrook CE, Sewell KW: Evaluation of Competency to Stand Trial-Revised (ECST-R) and Professional Manual. Odessa, FL: Psychological Assessment Resources, 2004
31. Dusky v. United States, 362 U.S. 402 (1960)
32. Melton GB, Petrila J, Poythress NG, et al: Psychological Evaluations for the Courts (ed 3). New York: Guilford Press, 2007
33. American Psychiatric Association: Psychiatric Services in Jails and Prisons (ed 2). Washington, DC: American Psychiatric Association, 2002
34. Robey A: Criteria for competency to stand trial: a checklist for psychiatrists. Am J Psychiatry 122:616–23, 1965
35. Lipsitt PD, Lelos D, McGarry AL: Competency for trial: a screening instrument. Am J Psychiatry 128:105–9, 1971
36. Laboratory of Community Psychiatry, Harvard Medical School: Competency to Stand Trial and Mental Illness (DHEW Pub. No. ADM-77-103). Rockville, MD: Department of Health, Education, and Welfare, 1973
37. Mossman D, Noffsinger SG, Ash P, et al: Practice guideline for the forensic psychiatric evaluation of competence to stand trial. J Am Acad Psychiatry Law 35(Suppl):S3–72, 2007
38. Heilbrun K, Rogers R, Otto RK: Forensic assessment: current status and future directions, in Taking Psychology and Law Into the Twenty-First Century. Edited by Ogloff JRP. New York: Kluwer, 2002, pp 120–46
39. Grisso T: Evaluating Competencies: Forensic Assessments and Instruments (ed 2). New York: Kluwer Academic, 2003
40. Poythress NG, Nicholson R, Otto RK, et al: Professional Manual for the MacArthur Competence Assessment Tool-Criminal Adjudication (MacCAT-CA). Odessa, FL: Psychological Assessment Resources, 1999
41. Everington C, Luckasson R: Manual for Competence Assessment for Standing Trial for Defendants with Mental Retardation: CAST-MR. Worthington, OH: IDS Publishing, 1992
42. Hoge SK, Bonnie RJ, Poythress N, et al: The MacArthur adjudicative competence study: development and validation of a research instrument. Law Hum Behav 21:141–82, 1997
43. Miller GA: The magical number seven, plus or minus two: some limits on our capacity for processing information. Psychol Rev 63:81–97, 1956
44. Baddeley A: The magical number seven: still magic after all these years? Psychol Rev 101:353–6, 1994
45. [entry missing in source]
46. [entry missing in source]
47. Rush AJ, Pincus HA, First MB, et al: Handbook of Psychiatric Measures. Washington, DC: American Psychiatric Press, 2000
48. Gutheil TG, Bursztajn H: Avoiding ipse dixit mislabeling: post-Daubert approaches to expert clinical opinions. J Am Acad Psychiatry Law 31:205–10, 2003
Source: http://jaapl.org/content/37/4/450