Which of the following is the approach in which trained professionals interpret test results?


Highlights

Respondents mostly chose actions relating to the purpose of supporting learning.

The desired use of test results was not the same as the actual use.

Different users perform actions on different levels and in various contexts.

Teachers needed various kinds of detailed information to perform an action.

Abstract

Despite the potential of using test data to support student learning, several studies have concluded that the actual use of test data remains limited. The present study addresses this problem by examining [1] the types of actions for which teachers, internal coaches, principals and parents within primary education want to use test results and [2] the information needed to perform these actions. The questionnaire results show that the various users want to use test results for actions that support learning, revealing a discrepancy between desired and actual use. Furthermore, the various users perform actions on different levels, indicating the need for tailored reports that fit the information needs of individual users. The results of the focus group method reveal the information needs of teachers, suggesting implications for the development of new score reports.

Keywords

Formative assessment

Test use

Test results

Information needs

Needs assessment


Dorien Hopster-den Otter is a Ph.D. candidate at the Research Center for Examinations and Certification [RCEC], a collaboration between Cito and the University of Twente. Her research interests include formative assessments, psychometrics and score report development. She is currently working on projects relating to the quality of educational tests and the development of an educational master’s programme for test development.

Saskia Wools is an educational researcher and manager of prototyping at Cito. Her research interests include the validity and validation of educational assessments. She is currently working on innovative projects involving formative assessments, educational technology and assessment quality.

Theo J. H. M. Eggen is a senior research scientist at the Psychometric Research Center of Cito and a professor of psychometrics at the University of Twente in the Netherlands. He is director of the Research Center for Examinations and Certification [RCEC]. His research interests include the quality of educational testing and computerized [adaptive] testing. He is also working on projects like PISA [international educational survey] and on a variety of projects on formative educational assessment.

Bernard P. Veldkamp is head of the Department of Research Methodology, Measurement and Data Analysis and the scientific director of the Research Center for Examinations and Certification [RCEC]. His research interests include measurement optimization, behavioral data science and computerized assessment. His current projects include unobtrusive assessment in online learning and a review study on the conditions for effective formative assessment.

© 2016 The Authors. Published by Elsevier Ltd.

Objective Personality Assessment

Elahe Nezami, James N. Butcher, in Handbook of Psychological Assessment [Third Edition], 2000

Pros and Cons of Computerized Psychological Assessment

Computerized assessment owes much of its recent growth and status to the unique advantages that computers offer to the task of psychological assessment in comparison to clinician-derived assessments. First, computers are time- and cost-efficient. Computerized reports can be available shortly after the completion of the test administration, saving valuable professional time.

Another advantage of using computers in psychological assessment is their accuracy in scoring, inasmuch as computers are less subject to human error when scoring [Allard, Butcher, Faust, & Shea, 1995; Skinner & Pakula, 1986].

Third, computers provide more objective and less biased interpretations by minimizing the possibility of selective interpretation of data.

A fourth advantage of computerized reports is that they are usually more comprehensive and thorough than clinicians’ reports. In a computerized interpretation, the test taker’s profile is examined in comparison to many other profiles. Therefore, test information can be more accurately used to classify the individual, while describing the behaviors, actions, and thoughts of people with similar profiles. In sum, a well-designed statistical treatment of test results and ancillary information will generally yield more valid conclusions than an individual professional using the same information [APA, 1986]. (A rough sketch of this profile-comparison logic appears after this list of advantages.)

Finally, computerized test administration may be more interesting to some subjects, who may also feel less anxious responding to a computer monitor [Rozensky et al., 1986] than the more personal context of a paper-and-pencil test.
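
Returning to the fourth advantage above, the actuarial logic of comparing a test taker's profile with a reference set of profiles can be sketched in a few lines of code. This is only an illustration under invented assumptions: the scale names, reference profiles, interpretive statements, and use of Euclidean distance are hypothetical and do not correspond to any published interpretation system.

    # Hypothetical sketch of actuarial profile matching: a new score profile is
    # compared with stored reference profiles, and the interpretive statements
    # attached to the most similar profiles are returned. Scale names, profiles,
    # and statements are invented for illustration.
    import math

    REFERENCE_PROFILES = [
        ({"anxiety": 75, "depression": 60, "somatization": 50},
         "Elevated anxiety; similar profiles report tension and worry."),
        ({"anxiety": 55, "depression": 80, "somatization": 55},
         "Elevated depression; similar profiles report low mood and fatigue."),
        ({"anxiety": 50, "depression": 50, "somatization": 78},
         "Elevated somatic concerns; similar profiles report physical complaints."),
    ]

    def distance(profile_a, profile_b):
        """Euclidean distance between two score profiles on shared scales."""
        return math.sqrt(sum((profile_a[s] - profile_b[s]) ** 2 for s in profile_a))

    def interpret(new_profile, k=2):
        """Return the statements attached to the k closest reference profiles."""
        ranked = sorted(REFERENCE_PROFILES, key=lambda item: distance(new_profile, item[0]))
        return [statement for _, statement in ranked[:k]]

    if __name__ == "__main__":
        client = {"anxiety": 72, "depression": 58, "somatization": 52}
        for line in interpret(client):
            print(line)

A production system would of course rely on validated norms, empirically derived correlates, and far richer matching rules than this toy distance measure.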

While the advantages of computerized assessment are many, this method is not totally problem-free. One major problem associated with automated administration, scoring, and interpretation is misuse by unqualified professionals. Skinner and Pakula [1986] suggest that computerized assessment may inadvertently encourage use by professionals without adequate knowledge and experience. It is important to keep in mind that the validity of the information obtained by computerized psychological assessment can be ensured only in the hands of a professional with adequate training and experience with the particular test in question. Turkington [1984] pointed out that computerized assessment, when used by those with little training or skill in test interpretation, can do more harm than good.

A second risk of the computer-assisted assessment is that mental-health professionals might become excessively dependent on computer reports, and accordingly become less active in personally interpreting test data. Computerized reports cannot take the place of important clinical observations, which provide essential information to be integrated with results from formal testing [Butcher, 1995b].

A third problem comes from the fallacy that computer-generated assessments yield information that is necessarily factual. Matarazzo [1983, 1986] cautioned professionals against the face validity of computer-generated interpretations. It cannot be assumed that computer assessments generate precise scientific statements that cannot be questioned. Computer-based conclusions are not chiseled in stone, and a critical review of such interpretation is necessary for their credible use.

Fourth, statements in a computer report might not provide specific information about the test taker that is useful for diagnostic purposes. Practitioners should be cautious of “Barnum-type” statements that some computer reports may generate. Basing clinical decisions on this type of statement can lead to inaccurate recommendations [Butcher, 1995].

Finally, a computerized report might include statements that do not apply to every patient. It is important to keep in mind that computer reports are general descriptions of profiles. It is quite possible that individuals with similar profiles will not possess all of the characteristics identified by a particular profile. It is incumbent on the professional to ascertain the accuracy of test reports for each individual client [Butcher, 1995b].

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B978008043645650094X

Technology-based mental health assessment and intervention

Christine E. Gould, ... Renee Pepin, in Handbook of Mental Health and Aging [Third Edition], 2020

Computerized assessment

Computerized assessment is a broad term that includes measurement via computer or tablet. Much of the research conducted thus far has centered around cognitive testing. Computerized assessment can be particularly beneficial for cognitive testing due to potential for greater recording accuracy and precision of timed tasks, easy scoring, and standardized administration without biases. As such, researchers have both developed and translated a number of assessments for cognitive impairment via computer or tablet. While the literature on computerized assessment of psychiatric conditions is limited, some suggest that this mode of assessment could facilitate discussion of issues that are less frequently disclosed to providers [e.g., alcohol use; Nemes et al., 2004].

Users of computer-based assessments report that this format is feasible. Older adults and individuals with cognitive impairment have rated computer assessments easy to use and understand [e.g., Fillit, Simon, Doniger, & Cummings, 2008]. When asked about their computerized assessment experience, older adults generally report that they believe test results reflect their abilities, though many note the concern that factors such as health condition and motor control affected their performance [Robillard, Lai, Wu, Feng, & Hayden, 2018]. Thus, it is important to broadly consider potential factors that may affect performance. While older adults see value in the computerized assessments, they may prefer that these assessments are used as an adjunct to, rather than a replacement for, care from clinicians [Robillard et al., 2018].

Assessment via tablet may present different functions than those offered on the computer. While tablets may be used for initial screening, their portability may facilitate monitoring progress over time. Older adults tend to respond positively to tablet-based assessments. In one survey, older adult participants [age range 65–88] were asked to imagine they were in a research study on antidepressants and subsequently introduced to iPad versions of the NIH Toolbox Psychological Wellbeing measure and the NIH Toolbox Cognition and Motor batteries [Lenze et al., 2016]. Eighty-five percent believed that assessment before and after treatment was important and participants found tablet-based assessments generally acceptable.

When tablet- and computer-based assessments have been adapted from paper-and-pencil tests, equivalence across formats cannot be assumed, and some suggest that it is particularly important that instructions are explicit [Jenkins et al., 2016]. Technological familiarity and comfort are also crucial considerations. Some suggest training for older adults to ensure sufficient understanding prior to the assessment [see Wild et al., 2008] and ensuring that the user interface is clear. In addition, user attitudes toward the device may affect motivation and, subsequently, performance [Jenkins et al., 2016].
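
As a minimal illustration of the equivalence question raised above, the sketch below compares paper and tablet scores from the same respondents using a correlation and the mean paired difference. The data and the simple approach are assumptions for illustration only, not a substitute for formal equivalence testing.

    # Illustrative check of score agreement between a paper-and-pencil test and a
    # tablet adaptation taken by the same respondents: Pearson correlation plus
    # the mean paired difference. The scores are invented; a real equivalence
    # study would use formal equivalence testing on a much larger sample.
    import math
    from statistics import mean, stdev

    paper  = [28, 31, 25, 33, 27, 30, 29, 26]
    tablet = [27, 30, 26, 31, 26, 29, 30, 25]

    def pearson(x, y):
        """Pearson correlation between two equal-length score lists."""
        mx, my = mean(x), mean(y)
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sxx = sum((a - mx) ** 2 for a in x)
        syy = sum((b - my) ** 2 for b in y)
        return sxy / math.sqrt(sxx * syy)

    diffs = [t - p for t, p in zip(tablet, paper)]
    print(f"r = {pearson(paper, tablet):.2f}")
    print(f"mean difference (tablet - paper) = {mean(diffs):.2f} (SD = {stdev(diffs):.2f})")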

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B9780128001363000247

Internet-based psychotherapies

Gerhard Andersson, in Mental Health in a Digital World, 2022

Assessments

I move on to Internet-based computerized assessments, which basically followed the emergence of Internet therapies and have been around for more than 20 years [Buchanan, 2002]. Internet research, including clinical implementations, uses online questionnaires not only to collect outcome measures but also for screening, epidemiological studies, and online psychological experiments. With regard to questionnaires, there is ample evidence that the psychometric properties and characteristics of measures remain stable, and the online format also has advantages such as reducing the possibility of skipping items [van Ballegooijen, Riper, Cuijpers, van Oppen, & Smit, 2016]. Moreover, it is also possible to use stepwise procedures with, for example, screening questions leading to further questions. Online assessment procedures are increasingly incorporated in clinical practice [Zimmerman & Martinez, 2012]. One cautionary note, however, is that the format should not change between paper-and-pencil and online questionnaires within a research study [Carlbring et al., 2007]. With regard to diagnostic procedures, self-report questionnaires cannot replace diagnostic interviews [Eaton, Neufeld, Chen, & Cai, 2000], but video consultations are increasingly used and have the benefit of not requiring the client to visit a clinic [Chakrabarti, 2015].
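
The stepwise procedure mentioned above (screening questions that branch into further questions) can be sketched as follows. The items, scoring, and cut-off are invented for illustration and do not reproduce any validated instrument.

    # Illustrative sketch of a stepwise online screening procedure: a short
    # screening item is asked first, and further items are presented only when
    # the screen is positive. Items, scoring, and the cut-off are invented.
    SCREEN_ITEM = "Over the last two weeks, have you felt down or depressed? (0=no, 1=yes)"
    FOLLOW_UP_ITEMS = [
        "Little interest or pleasure in doing things (0-3)",
        "Trouble falling or staying asleep (0-3)",
        "Feeling tired or having little energy (0-3)",
    ]

    def administer(ask):
        """Run the stepwise procedure; `ask` maps an item text to a numeric answer."""
        responses = {SCREEN_ITEM: ask(SCREEN_ITEM)}
        if responses[SCREEN_ITEM] >= 1:          # positive screen triggers follow-up items
            for item in FOLLOW_UP_ITEMS:
                responses[item] = ask(item)
        total = sum(responses.values())
        return responses, total

    if __name__ == "__main__":
        # Simulated respondent instead of a live web form.
        canned = {SCREEN_ITEM: 1, FOLLOW_UP_ITEMS[0]: 2,
                  FOLLOW_UP_ITEMS[1]: 1, FOLLOW_UP_ITEMS[2]: 3}
        answers, score = administer(lambda item: canned[item])
        print(f"Items administered: {len(answers)}, total score: {score}")

In a live system the `ask` callable would be backed by a web form rather than a canned dictionary, but the branching logic is the same.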

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B9780128222010000083

Assessment

Gale H. Roid, W. Brad Johnson, in Comprehensive Clinical Psychology, 1998

4.17.5.1 Test Development

The Ethics Code [APA, 1992] states clearly that psychologists involved in the development and provision of computerized assessment services accurately describe the purpose, development procedures, norms, validity, reliability, and applications of the service as well as particular qualifications or skills required for their use. Psychologists participating in such product development should attempt to clearly link interpretive statements to specific client scores or profiles, qualify narrative statements to minimize the potential for misinterpretation, and perhaps provide some form of warning statement to alert users to the potential for misinterpretation [Hofer & Green, 1985]. At the very least, the developer might note that the clinical interpretations offered in narrative printouts are not to serve as the sole basis on which important clinical decisions are made [Matarazzo, 1986]. Adherence to the highest standard of the profession would also require developers to provide rather detailed information regarding the system's development and structure in a separate manual. Because individual users are responsible for determining the validity of any computer-based test interpretation [CBTI] for individual test-takers, availability of such system information is critical. Bersoff and Hofer [1991] noted that, in spite of the apparent conflict between the developer's proprietary interest in the product and the clinician's need to responsibly evaluate the service, open and critical review of tests and CBTIs is critical for ensuring the quality of such materials and upholding the profession's ethical code.

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B0080427073000110

Technological developments in assessment

Robert L. Kane, Thomas D. Parsons, in Handbook of Psychological Assessment [Fourth Edition], 2019

Advantages and challenges in early adoption

Dating from the first attempts to implement computers as cognitive assessment devices, there was an appreciation of the potential advantages as well as the challenges and cautions that surrounded computerized assessment. The advantages included standardization in test administration superior to that of the human examiner, scoring accuracy, the ability to integrate response timing into a variety of tasks in order to better assess processing speed and efficiency, expanded test metrics that capture variability in performance, the ability to integrate adaptive testing for both test and item selection, and the ability to incorporate tests not easily done with booklets or pieces of paper. The cautions included the fact that computers could be deceptive with respect to timing accuracy, and that developers of automated tests had to be aware of issues related to response input methods, operating systems, program coding, and computer architecture that could affect response timing and the measurement of test performance. In the early years of automated testing, computers were relatively new devices not yet fully integrated into daily life. Hence, there were also concerns about how an individual would react to being tested on a computer. These concerns were supported by studies addressing the effects of computer familiarity on test performance [Johnson & Mihal, 1973; Iverson, Brooks, Ashton, Johnson, & Gualtieri, 2009]. The biggest limitation in adopting computerized testing for clinical assessment was that available technology did not permit the assessment of important language-based skills, including verbal memory. Algorithms for analyzing speech patterns for indications of anxiety, depression, or unusual syntax or word usage were also unavailable or in the early stages of development. Constraints related to speech recognition and language analysis limited the types of tests and methods that could be implemented on computers for clinical assessment.
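
To make the timing caution concrete, a computerized test should at minimum record response latencies with a monotonic, high-resolution clock rather than wall-clock time. The sketch below is a bare-bones illustration; real systems must also account for display refresh, input-device polling, and operating-system latency, which this example ignores.

    # Minimal sketch of recording response latency with a monotonic,
    # high-resolution clock. In a real computerized test the stimulus onset and
    # key press would come from the display and input subsystems; here input()
    # stands in for a response, so the measured time includes typing.
    import time

    def timed_trial(prompt):
        """Present a prompt and return (response, latency in milliseconds)."""
        start = time.perf_counter()          # monotonic, unaffected by clock changes
        response = input(prompt + " ")
        latency_ms = (time.perf_counter() - start) * 1000.0
        return response, latency_ms

    if __name__ == "__main__":
        answer, rt = timed_trial("Press Enter as soon as you see this:")
        print(f"Response latency: {rt:.1f} ms")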

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B9780128022030000201

Academic library services: quality and leadership

Luisa Alvite, Leticia Barrionuevo, in Libraries for Users, 2011

Universities in a globalised setting

Even though there is no agreement on the elements that comprise excellence in higher education, in the past decade we have witnessed a veritable explosion in university rankings. We can cite, for example, the Academic Ranking of World Universities [ARWU]1 published by the Center for World-Class Universities and the Institute of Higher Education of Shanghai Jiao Tong University. This system emphasises publications, citations and academic prizes, especially in science and technology. The QS World University Rankings2 relies heavily on academic peer review [which accounts for 40 per cent]. The SCImago Institutions Rankings [SIR]3 is built with data from the Elsevier database Scopus. The SIR 2009 World Report ranks the best 2,000 worldwide research institutions and organisations and analyses their research performance in the period 2003–7 through five global output indicators. In turn, the Webometrics Ranking of World Universities,4 produced by the Cybermetrics Lab [National Research Council of Spain], offers information about more than 8,000 universities according to their web presence, a computerised assessment of the scholarly contents, visibility and impact of the whole university web domain.

Obviously, the results are not all similar because of the relative weights assigned to the indicators used, which means that a single institution can occupy widely divergent positions depending on the list chosen. A careful statistical analysis of international rankings concludes that there is broad consensus about the first 10–12 universities, but after that the lists begin to diverge. The lack of an absolute set of performance criteria may mean that ‘world class’ standing will probably be based more on academic reputation than on a set of formal standards [Mohrman et al., 2008].
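
A toy calculation makes the point about weighting concrete: the same indicator scores, combined with different weights, can reorder institutions. The institutions, indicators, scores and weights below are invented and do not reproduce the methodology of any actual ranking.

    # Toy illustration of why ranking lists diverge: identical indicator scores,
    # combined with different weights, can change the order of institutions.
    # All names, scores, and weights are invented.
    SCORES = {
        "University A": {"peer_review": 90, "citations": 60, "web_presence": 70},
        "University B": {"peer_review": 70, "citations": 85, "web_presence": 65},
        "University C": {"peer_review": 65, "citations": 70, "web_presence": 95},
    }

    def rank(weights):
        """Order institutions by a weighted composite of their indicator scores."""
        composite = {
            name: sum(weights[k] * v for k, v in indicators.items())
            for name, indicators in SCORES.items()
        }
        return sorted(composite, key=composite.get, reverse=True)

    if __name__ == "__main__":
        reputation_heavy = {"peer_review": 0.6, "citations": 0.3, "web_presence": 0.1}
        web_heavy = {"peer_review": 0.1, "citations": 0.3, "web_presence": 0.6}
        print("Reputation-weighted:", rank(reputation_heavy))
        print("Web-weighted:       ", rank(web_heavy))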

The popularity of this kind of ranking is a clear sign of the globalisation of knowledge and the internationalisation of university teaching. The more traditional comparisons among institutions within a single country have been eclipsed by observations that scrutinise a university’s position beyond the restricted political or linguistic frontiers.

In this context, networks of excellence have arisen. The International Alliance of Research Universities [IARU]5 is perhaps the network that best illustrates this phenomenon. It was set up in 2006 and includes ten leading research universities: Australian National University, ETH Zurich, National University of Singapore, Peking University, University of California at Berkeley, University of Cambridge, University of Copenhagen, Oxford University, University of Tokyo and Yale University. Equally noteworthy is the fact that several universities have set up a group that they define as the ‘Emerging Global Model [EGM]’ for the university of the twenty-first century. Mohrman et al. [2008] argue that the development of the EGM is both a response to and an influence upon the major factors in contemporary society. EGM universities look worldwide for research partners, graduate students, prospective faculty and financial resources. This group of global universities will form an elite subset in a larger universe of higher education institutions. The growth of international university associations demonstrates the interdependence of EGM universities through transnational activities.

Without downplaying the controversy over the methodology used in this kind of ranking, we believe that they are yet another element reinforcing higher education’s commitment to evaluation and quality and acting as an incentive and stimulus in the quest for excellence in institutions’ teaching and research. In European universities, the political measures aimed at intensifying research competitiveness and restructuring higher education systems have been ratcheted up in recent years. A well-known multinational organisation is the European Union’s Erasmus Mundus Programme,6 a cooperation and mobility initiative that enhances the quality of European higher education and promotes the European Union as a centre of excellence in learning around the world.

The EFQM Model of Excellence was established as a guide for rating the organisations that vied for the European Quality Award created by the European Foundation for Quality Management [EFQM].7 On 29 September 2009, the results of the revision of the EFQM Excellence Model were presented and a new version of the 2010 EFQM Model was previewed at the Annual EFQM Forum in Brussels. This model will coexist with the current model dating from 2003 throughout 2010.

Today EFQM is being accepted as a management model by organisations that are seeking institutional excellence, and it is a benchmark in Europe for excellence as its design encompasses the most up-to-date management practices within an organisation. This model is based on self-assessment and defines the parameters that must be taken into account in order to assess the maturity of the management system within any organisation.

EFQM applies the concept of quality to higher education by defining quality as the degree to which a continuum of differentiating features inherent in higher education fulfils a given need or expectation. Quality is an asset of an institution or programme that fulfils the standards preset by an accreditation agency. In order to be properly measured, this usually involves the evaluation of teaching, learning, management and results.8

The European Association for Quality Assurance in Higher Education [ENQA]9 was established in 2000 to promote European cooperation in the field of quality assurance. The idea for the association originates from the European Pilot Project for Evaluating Quality in Higher Education [1994–5], which demonstrated the value of sharing and developing experiences in the area of quality assurance. Subsequently, the idea was given momentum by the Recommendation of the Council [98/561/EC of 24 September 1998] on European cooperation in quality assurance in higher education and by the Bologna Declaration of 1999. The European Commission has, through grant support, financed the activities of ENQA since the very beginning. The third edition of Standards and Guidelines for Quality Assurance in the European Higher Education Area was published in 2009 [European Association for Quality Assurance in Higher Education – ENQA, 2009].

Likewise, the European Quality Assurance Register for Higher Education [EQAR]10 aims at increasing the transparency of quality assurance and thus enhancing trust and confidence in European higher education. EQAR publishes and manages a register of quality assurance agencies that operate in Europe and have proven their credibility and reliability in a review against the European Standards and Guidelines for Quality Assurance [ESG], providing the public with clear and reliable information on the quality assurance agencies operating in Europe.

The different European Union member states have their own national and/or regional quality evaluation and accreditation agencies with the goal of contributing to improving the quality of the higher education systems through the evaluation, certification and accreditation of their programmes, faculty, institutions and services, including libraries.11

In a competitive and internationalised context, evaluation processes can spring from a regulatory requirement – either regional or nationwide – or can be conducted on the initiative of the institution itself in an effort to consolidate or accentuate the university’s prestige. The library must have solid grounding in quality management in order to adapt to the demands of the evaluations it might be subjected to in order to support the institutional strategies.

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B9781843345954500017

Assessing Cognition and Social Cognition in Schizophrenia & Related Disorders

Amy E. Pinkham, Johanna C. Badcock, in A Clinical Introduction to Psychosis, 2020

Measurement of Basic Cognitive Abilities

Common Tools and Batteries

The cognitive deficits associated with schizophrenia, and other psychotic disorders, are multifaceted, with severity typically 1–2 standard deviations below normal. As a consequence, assessment of cognitive strengths and difficulties in people with, or at increased risk for, psychotic disorders is a core competency for clinical psychologists and neuropsychologists. A wide range of tools is available for the assessment of cognition in people with psychotic disorders, ranging from brief screening instruments to more extensive batteries covering multiple cognitive domains [see selected examples in Table 8.2]. In 2008, the National Institute of Mental Health began the Measurement and Treatment Research to Improve Cognition in Schizophrenia [MATRICS] initiative [//www.matricsinc.org/]. It resulted in the MATRICS Consensus Cognitive Battery [MCCB], intended to encourage a more standardised approach to the assessment of key domains of cognition in schizophrenia and related disorders. The domains identified were speed of processing, attention/vigilance, working memory, verbal learning, visual learning, reasoning and problem solving, and social cognition. The MCCB exhibits sound psychometric properties and small practice effects, and is the US Food and Drug Administration gold standard outcome measure for the assessment of cognitive treatment effects in clinical trials of schizophrenia [Georgiades et al., 2017].

Table 8.2. Common Tools to Assess Basic Cognition

Objective Assessments

MATRICS Consensus Cognitive Battery
Domain[s] assessed: Processing speed, attention, working memory, verbal learning, visual learning, reasoning, and social cognition
Mode of testing: 6 paper-and-pencil tests, 1 via computer; computerised scoring program
Time: 60–90 min
User information: User qualifications apply. Available from www.matricsinc.org
Psychometric qualities: See Georgiades et al. [2017]

CANTAB Schizophrenia Test Battery
Domain[s] assessed: Working memory, episodic memory, executive function, emotion recognition, cognitive flexibility, processing speed, sustained attention
Mode of testing: Computerised
Time: ~60 min
User information: Available from www.cambridgecognition.com
Psychometric qualities: Reviewed in Barnett et al. [2010]

Brief Assessment of Cognition in Schizophrenia - App
Domain[s] assessed: Working memory, verbal memory, executive function, verbal fluency, processing speed, motor function
Mode of testing: Computerised
Time: 30 min
User information: Designed for use by a variety of testers. Available from www.neurocogtrials.com
Psychometric qualities: Validation of tablet-based assessment [see Atkins et al., 2017]

CogState Brief Battery [Cognigram]
Domain[s] assessed: Psychomotor function, attention, working memory, learning
Mode of testing: Computerised
Time: 12–15 min
User information: Available from www.cogstate.com
Psychometric qualities: Construct and criterion validity [Maruff et al., 2009]

Subjective Assessments

Clinician-Rated Dimensions of Psychosis Symptom Severity
Domain[s] assessed: Single dimension of cognitive impairment
Mode of testing: Clinician rated
User information: Available from the American Psychiatric Association, www.psychiatry.org/dsm5
Psychometric qualities: Psychometric properties not yet published

Cognitive Assessment Interview
Domain[s] assessed: Working memory, attention, verbal learning, reasoning and problem solving, speed of processing, social cognition
Mode of testing: Clinician interview of patients and informants, 10 items
Time: ~15 min per interview
User information: Manual and rating form available from the author [Dr. J. Ventura] upon request
Psychometric qualities: Development, reliability, validity and sensitivity to change [Ventura et al., 2010; Ventura, Subotnik, Ered, Hellemann, & Nuechterlein, 2016]

Schizophrenia Cognition Rating Scale
Domain[s] assessed: Attention, memory, working memory, language, problem solving, motor skills, reasoning, social cognition
Mode of testing: Clinician interview of patients and informants, 20 items rated on a 4-point scale
Time: ~15 min per interview
User information: See Keefe et al. [2015]. Available from www.neurocogtrials.com
Psychometric qualities: Development, reliability, validity and treatment sensitivity [Keefe et al., 2015]

Measure of Insight into Cognition Self-Rated
Domain[s] assessed: Attention, memory, executive functioning
Mode of testing: Self-rated, 12 items rated on a frequency scale of 0–3
Time: ~5 min
User information: Copyrighted. Available from the first author [Prof. A. Medalia] upon request
Psychometric qualities: Internal consistency, test-retest reliability, test specificity and validity [Medalia, Thysen, & Freilich, 2008]

Cognitive biases

Cognitive Biases Questionnaire for Psychosis
Domain[s] assessed: Jumping to conclusions, intentionalising, catastrophising, emotional reasoning, dichotomous thinking
Mode of testing: Self-report, 30 vignettes, forced-choice response format
User information: Public domain
Psychometric qualities: Development, reliability, concurrent and construct validity [Peters et al., 2014]. Cross-cultural validation in Japan [Ishikawa et al., 2017]

Davos Assessment of Cognitive Biases Scale
Domain[s] assessed: Jumping to conclusions, belief inflexibility, selective attention to threat, external attribution
Mode of testing: Self-report, 42 items, 7-point rating scale
User information: Public domain
Psychometric qualities: Development, reliability, criterion validity and norms [van der Gaag et al., 2013]. Reliability and validity [Bastiaens et al., 2013]

Dysfunctional Attitudes Scale
Domain[s] assessed: Defeatist performance beliefs [DPB]; need for approval beliefs [NFA]
Mode of testing: Self-rated; DPB = 15 items, NFA = 10 items
Time: 5–10 min
User information: Public domain. DPB items in Grant and Beck [2009]
Psychometric qualities: Development [Weissman, 1978]. Internal consistency and criterion validity [Horan et al., 2010]

The Beck Cognitive Insight Scale
Domain[s] assessed: Self-Certainty scale, Self-Reflectiveness scale
Mode of testing: Self-report, 15 items
Time: 5 min
User information: Public domain
Psychometric qualities: Development, reliability and validity [Beck, Baruch, Balter, Steer, & Warman, 2004]. Qualitative review [Riggs, Grant, Perivoliotis, & Beck, 2012]

Whilst the MCCB primarily uses paper-and-pencil tests, others are based entirely on computerised administration and scoring. The latter include more extensive assessment tools, such as the Cambridge Neuropsychological Test Automated Battery [CANTAB-Schizophrenia], along with briefer options providing reliable but rapid cognitive screening, such as the tablet-based Brief Assessment of Cognition in Schizophrenia [BAC App; Atkins et al., 2017] and the CogState Schizophrenia Battery [Maruff et al., 2009]. These and other well-validated batteries provide clinicians with a wealth of options for assessing the nature and magnitude of cognitive impairments, and strengths, in your clients. Computerised assessment of cognition may offer a number of advantages over traditional paper-and-pencil neurocognitive batteries, such as more standardised delivery of task stimuli and instructions, higher efficiency of data collection, and lower rates of scoring errors. However, automated assessment has also been linked with increased rates of missing data, and prior experience with digital technologies may influence task performance. Furthermore, standardised batteries may not capture all cognitive abilities relevant to your client. For example, people with schizophrenia exhibit a wide range of language problems, from the level of phonology to pragmatics, expressed in the form of abnormal speech perception, production, and linguistic content [Murphy & Benítez-Burraco, 2016]. Test batteries such as the MCCB do not include a formal, systematic assessment of these language abilities. Overall, these strengths and weaknesses serve as a useful reminder that the suitability of these tools for individual clients needs to be continually evaluated, and that the clinician-client relationship plays an important role in the process of cognitive testing.

Alternative Methods of Assessing Cognitive Ability

DSM-5 now includes a dimensional assessment of cognitive impairment within its new rating instrument, the Clinician-Rated Dimensions of Psychosis Symptom Severity [CRDPSS; American Psychiatric Association, 2013; Section III, Emerging Measures and Models]. The aim of the CRDPSS is to help provide a more individualised approach to treatment planning and highlight the potential need for treatment specifically targeting cognitive deficits [see Chapter 17]. This new assessment tool requires the clinician to make a judgement on the severity of cognitive impairment as experienced by the individual over the past seven days, with anchored ratings made on a five-point scale [0 = not present; 4 = present and severe]. Of note, a diagnosis can be made without the use of the severity specifier, which implies that use of the tool is not considered essential. The advantages of the CRDPSS include its brevity, suitability for monitoring treatment progress, and potential for adoption in electronic medical records. However, a standardised approach to using the CRDPSS is lacking. For example, the range of cognitive [and social cognitive] domains considered by clinicians when deriving a single rating of cognitive impairment may vary between individuals, and over time. Furthermore, no evidence on the psychometric properties of this tool is currently available.

Other methods for screening cognitive impairment in people with, or at high risk for, psychotic disorders include semistructured clinical interviews and self-report measures [see examples in Table 8.2]. Such tools have often been designed with a greater emphasis on ‘real world’ competencies that may complement [but not substitute for] objective neuropsychological assessment. As clinicians you therefore need to critically analyse the strengths and weaknesses of these instruments. For example, the Schizophrenia Cognition Rating Scale [SCoRS; Keefe, Poe, Walker, Kang, & Harvey, 2006] and the Cognitive Assessment Interview [Ventura et al., 2010] are interview-based measures of cognition that exhibit good reliability, validity, and sensitivity to change [Keefe et al., 2015; Ventura et al., 2016]. However, the need for informants who know a client well may limit the practicality of these measures in clinical practice. In addition, the reliability and sensitivity of these tools may vary as a function of rater training [Keefe et al., 2015] and stage of illness [Sanchez-Torres et al., 2016].

Several self-report scales have been developed to assess subjective cognitive dysfunction in schizophrenia. However, people with psychotic disorders often have problems with various forms of self-assessment, including difficulties self-assessing basic cognition. As a consequence, client reports often do not correlate with objective assessments of cognitive performance. Incomplete or absent awareness of cognitive difficulties—or lack of cognitive insight—undermines the utility of self-report measures when assessing people with psychotic disorders [Burton, Harvey, Patterson, & Twamley, 2016]. For example, a tendency to overestimate one's actual cognitive ability could mean that potential benefits from trying cognitive remediation would be missed. However, self-assessment can be clinically useful during case formulation discussions about cognition, regardless of whether performance is objectively good or bad. For example, ‘Those with poor performance could be helped to attempt to match their aspirations to accomplishments and improve over time. Good performers could have their functioning bolstered by recognising their competence’ [Harvey & Pinkham, 2016; p. 53].

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B9780128150122000080

Task difficulty of virtual reality-based assessment tools compared to classical paper-and-pencil or computerized measures: A meta-analytic approach

Alexandra Neguţ, ... Daniel David, in Computers in Human Behavior, 2016

2.2 Studies selection

The following criteria were used for the inclusion of studies in the meta-analysis: [a] assessed any cognitive process using virtual reality and analogous classical or computerized assessment tools of the same cognitive process; [b] provided sufficient data to compute effect sizes; [c] were English-based publications.

The initial search procedure revealed 146 records. Thirty-three additional records were identified through other sources [see Fig. 1]. After removing 16 duplicates, 163 potential abstracts were inspected. We excluded dissertations, publications in languages other than English, and studies that were not focused on virtual reality and neuropsychological assessment. A total of 115 potential articles were analyzed in detail based on their full text. Studies that used computer devices but did not provide full immersion via head-mounted displays [HMDs] or gesture-based video-capture systems were excluded. Thirteen studies met the inclusion criteria and were included in the meta-analysis.

Fig. 1. PRISMA flow diagram.
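
The effect sizes referred to in inclusion criterion [b] are typically standardized mean differences. The sketch below computes Hedges' g from group means, standard deviations, and sample sizes; the input numbers are invented and are not taken from the studies in this meta-analysis.

    # Sketch of the standardized mean difference that a meta-analysis of this
    # kind typically extracts from each study: Cohen's d with Hedges' small-sample
    # correction. The example values are invented for illustration.
    import math

    def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
        """Bias-corrected standardized mean difference between two groups."""
        pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
        d = (mean1 - mean2) / pooled_sd
        correction = 1 - 3 / (4 * (n1 + n2) - 9)   # Hedges' small-sample correction
        return d * correction

    if __name__ == "__main__":
        # e.g., task performance under a virtual reality test vs. a classical test
        print(round(hedges_g(24.0, 5.0, 30, 21.5, 5.5, 28), 3))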

Read full article

URL: //www.sciencedirect.com/science/article/pii/S0747563215301059

Cancer distress screening

Linda E Carlson, Barry D Bultz, in Journal of Psychosomatic Research, 2003

Clearly, there is widespread recognition of the need for distress screening in oncology settings, as evidenced by the proliferation of discourse and research around the issue. The model applied depends on the resources available and the needs of any given population and center. However, at this juncture, the most highly recommended model would be based upon the marriage of computerized assessment and real-time scoring, followed by timely triage and the availability of appropriate intervention options. Patients could complete a screening instrument such as the BSI, HADS, or distress thermometer online or on a handheld personal computing device, have this information immediately scored and summarized, and the report sent to the screening coordinator and placed on the patient's chart. The coordinator in charge of triage would then assess the level of patient need and make the appropriate referral for intervention based on the model of levels of intervention, as described above. The medical staff would also be immediately aware of the level of patient distress so that the entire team would be able to intervene when appropriate.
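
The real-time scoring and triage step described above can be sketched as a simple mapping from a distress rating to a referral level. The cut-offs, referral options, and notification mechanism below are illustrative assumptions, not a clinical protocol.

    # Hypothetical sketch of real-time scoring and triage: a completed distress
    # thermometer rating (0-10) is scored immediately and mapped to a referral
    # level, and the report is passed to the coordinator. Cut-offs and referral
    # options are invented for illustration only.
    def triage(distress_score):
        """Map a 0-10 distress rating to an illustrative referral level."""
        if distress_score >= 7:
            return "high: refer for specialist psychosocial intervention"
        if distress_score >= 4:
            return "moderate: refer to screening coordinator for follow-up"
        return "low: provide information and re-screen at next visit"

    def screen(patient_id, distress_score, notify):
        """Score, summarize, and send the report to the coordinator and chart."""
        report = {"patient": patient_id, "score": distress_score,
                  "level": triage(distress_score)}
        notify(report)            # e.g., message to coordinator and medical team
        return report

    if __name__ == "__main__":
        report = screen("patient-001", 8, notify=print)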

Read full article

URL: //www.sciencedirect.com/science/article/pii/S0022399903005142

Assessing cognitive function in clinical trials of schizophrenia

Jennifer H. Barnett, ... Andrew D. Blackwell, in Neuroscience & Biobehavioral Reviews, 2010

Cognitive dysfunction in schizophrenia is an important target for novel therapies. Difficulty in effectively measuring the cognitive effects of compounds in clinical trials of schizophrenia could be a major barrier to drug development. The Measurement and Treatment Research to Improve Cognition in Schizophrenia [MATRICS] programme produced a consensus cognitive battery which is now widely used; however, alternative assessments have advantages and disadvantages when compared with MATRICS. The Cambridge Neuropsychological Test Automated Battery [CANTAB] is a computerised assessment developed from animal behaviour paradigms and human neuropsychology. We review the utility of CANTAB according to MATRICS and CNTRICS recommendations. CANTAB tests have been used in more than 60 studies of psychotic disorders. Their neural bases are well understood through patient and neuroimaging studies and through directly equivalent tests in rodents and non-human primates. The tests’ sensitivity to pharmacological manipulation is well established. Future studies should collect more data regarding psychometric properties in patients over short time periods, and should continue to study the tests’ relationships to functional outcomes. Computerised cognitive assessment may optimise the statistical power of cognitive trials by reducing measurement error and between-site variability and by decreasing patient attrition through increased tolerability.
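
A back-of-envelope calculation illustrates the power argument: for the same true treatment effect, a less noisy outcome measure yields a larger standardized effect size and a smaller required sample. The sketch below uses a standard normal-approximation sample-size formula with invented numbers; it is not taken from the review itself.

    # Illustration of the power point above: the same raw treatment effect,
    # measured with a less noisy instrument, gives a larger standardized effect
    # and a smaller required n per group (two-sample normal approximation).
    # All numbers are invented.
    from math import ceil
    from statistics import NormalDist

    def n_per_group(effect_size, alpha=0.05, power=0.80):
        """Approximate n per group for a two-sample comparison of means."""
        z = NormalDist()
        z_alpha = z.inv_cdf(1 - alpha / 2)
        z_beta = z.inv_cdf(power)
        return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

    if __name__ == "__main__":
        true_difference = 2.0                    # same treatment effect in raw units
        for label, sd in [("noisier measure", 8.0), ("less noisy measure", 5.0)]:
            d = true_difference / sd
            print(f"{label}: d = {d:.2f}, n per group ~ {n_per_group(d)}")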

Read full article

URL: //www.sciencedirect.com/science/article/pii/S0149763410000138

Which of the following is a particular concern when computers are used to score and interpret the results of complex psychological tests?

Which of the following is of particular concern when computers are used to score and interpret the results of complex psychological tests? The set of rules was more accurate than the trained professionals.

Which term refers to the accuracy of the inferences interpretations or actions that are based on test scores?

The modern concept of validity [AERA, APA, & NCME, 1999] is multi-faceted and refers to the meaningfulness, usefulness, and appropriateness of inferences made from test scores.

What are the two main types of psychological test?

Tests can either be objective or projective: Objective testing involves answering questions with set responses like yes/no or true/false. Projective testing evaluates responses to ambiguous stimuli in the hopes of uncovering hidden emotions and internal conflicts.

What is the process of psychological assessment?

A psychological assessment can include numerous components such as norm-referenced psychological tests, informal tests and surveys, interview information, school or medical records, medical evaluation, and observational data. A psychologist determines what information to use based on the specific questions being asked.
