Personality traits tell us what we tend to do, while values tell us what we ought to do

Introduction

Personality—“a person’s nature or disposition; the qualities that give one’s character individuality”Footnote 1—is a key area of research in user modelling and user-adaptive systems. One of the most popular ways to describe and measure personality is trait theory, where a person is assessed against one or more factors [e.g. ‘Conscientiousness’ or ‘Agreeableness’]. These measurable differences in how people interact with the world are prime targets for providing users with an appropriately tailored user experience. However, to facilitate such tailored experiences, researchers first need to discover which aspects of personality are important for adaptation, and how to tailor the experience to them.Footnote 2

One approach would be to measure users’ personality and ask them to use the system or evaluate its features. However, as noted in Paramythis et al.’s [2010] discussion on layered evaluation, one issue with using a user-based study for an adaptive system is that adaptation takes time, often more than is available during a study. One solution they advocate is an indirect study, where the user model is given to participants and they perform the task on behalf of a third party. This allows researchers to control the characteristics of the imaginary user, avoiding the time delay needed for populating the user model from actual user interactions with the system. An indirect study also ensures that the input to an adaptation layer is perfect, making it very suitable for layered evaluations. Indirect studies may also be required for other reasons—for example, they are needed when it is difficult to recruit a large enough number of target participants, such as in the work by Smith et al. [2016] for skin cancer patients.

Another way to investigate adaptation strategies and discover pertinent personality traits is by using a User-as-Wizard approach [Masthoff 2006; Paramythis et al. 2010], which uses human behaviour to inspire the algorithms needed in an adaptive system. In a User-as-Wizard study, participants are given the same information the system would have, and are asked to perform the system’s task. Normally, participants deal with fictional users, which allows us to study multiple participants dealing with the same user, controlling exactly what information participants receive.

When using a User-as-Wizard or indirect approach for research on adaptation to personality, the simulated user’s personality needs to be conveyed. However, there is a paucity of easy, validated ways to convey or represent the personality of a third party to participants. One option is to use real people, allowing participants to interact with a person with the desired trait. However, this is difficult to control, as it is hard to ensure participants adapt to personality instead of, for example, current affective state. Participants would also have to spend considerable time with the individual to perceive their personality. Another option is to ask participants to “imagine a user who is an extravert” or to provide statements such as “John is neurotic”. This approach is unlikely to elicit empathy from participants due to a lack of context about the simulated user, and such statements could easily be overlooked when placed alongside other data, such as test scores.

This is a non-trivial research problem: how to provide enough information about the personality of a simulated user for participants to identify and empathise with them, without making the simulated user seem one-dimensional and implausible. This paper details a methodology for conveying personality using validated personality stories.

In addition to conveying personality, these stories can be used as part of an alternative method of measuring personality.

Reliable and efficient personality measurement is still largely an open challenge. Whilst validated personality tests exist, completing them may create an overhead that is unacceptable to users: personality tests range from the Five Item Personality Inventory [FIPI] [Gosling et al. 2003] to the 300-item International Personality Item Pool [IPIP-NEO] [Goldberg et al. 2006]. A problem with questionnaires is response bias, in particular the bias introduced by acquiescence or ‘yea-saying’—the tendency of individuals to consistently agree with survey items regardless of their content [Jackson and Messick 1958]. This is an issue with many personality trait questionnaires, and was one reason why a new version of the Big Five Inventory [BFI-2] was recently produced [Soto and John 2017]. Questionnaires may also be undesirable for reasons described later. Current approaches to unobtrusively measure personality include analysis of blogs [e.g. Nowson and Oberlander 2007; Iacobelli et al. 2011], users’ social media content [e.g. Facebook, Twitter] [Gao et al. 2013; Golbeck et al. 2011; Quercia et al. 2011] or social media behaviour [e.g. Amichai-Hamburger and Vinitzky 2010; Ross et al. 2009]. These indirect approaches are, however, still far less reliable than direct approaches.

Using the personality stories as a basis, we propose an alternative, light-weight approach to reliably measuring personality: so-called personality sliders, with the stories at the slider ends, which are faster to complete than most personality tests. We describe how identification with the people in the personality stories can easily and engagingly be used to measure user personality. Personality sliders provide a broad characterisation of a personality trait, whilst at the same time making it less salient to participants what they are being asked about. Personality sliders take about a minute to complete per trait [assuming an average reading speed], so they are fast to administer and may save time, particularly:

  • In studies or systems that require a user characteristic for which short questionnaires do not yet exist. Short questionnaires only exist for some personality traits [most notably those of the Five Factor Model], whilst the slider approach can be used for any personality trait as well as other user characteristics. Of course, the personality stories are created from questionnaire items, and using more items increases reading time. However, only one decision/interaction is required per trait [compared to one per item for the questionnaires], reducing cognitive load and decision time.

  • In studies that require both the measurement of the participants’ personality and the portrayal of the personality of fictional people—e.g. looking at the impact of self-similar personality on book recommendations for fictional users. Participants only need to read the stories once, so 1 min suffices both to complete the personality test and to portray two fictional users’ personalities.

  • In studies or systems that require obtaining personality measurements for multiple people provided by one person. For example, in Moncur et al. [2014], automated messages about babies in intensive care to their parents’ social network were adapted to individual receivers’ characteristics. This may require a parent to indicate the emotional stability of the people closest to them. Using the personality sliders, participants only have to read the stories once, and then only need to make one decision/interaction per personality trait per person.

Another advantage of using personality sliders is that they reduce response bias. Using the personality story sliders, participants need to judge which person they resemble more, so they are not agreeing/disagreeing with individual items, removing bias due to acquiescence. Multi-item surveys also tend to suffer from straight-lining, which occurs when participants give identical [or nearly identical] responses to items in a battery of questions using the same response scale [Zhang and Conrad 2014]. Requiring only one interaction per trait [as in the sliders] mitigates this. Finally, personality sliders provide a higher granularity of measurement, as they provide continuous data, whilst most personality tests are restricted to a small number of response points. This also means that the data is more appropriate for parametric analysis than traditional Likert data.
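
To make the slider idea concrete, the sketch below shows one way a single personality slider could be represented and scored. It is a minimal illustration, not the implementation used in our studies: the 0–100 scale, the class and method names [PersonalitySlider, to_trait_score] and the placeholder story texts are all assumptions; the validated stories and their use are described in Sects. 3–5.

```python
from dataclasses import dataclass

@dataclass
class PersonalitySlider:
    """One slider per trait: the low story anchors one end of the
    slider and the high story anchors the other."""
    trait: str
    low_story: str     # validated story portraying the trait at a low level
    high_story: str    # validated story portraying the trait at a high level

    def to_trait_score(self, slider_position: float) -> float:
        """Map a 0-100 slider position ('I am most like...') onto a
        continuous trait score in [0, 1]. Only one interaction per trait,
        so there are no per-item agree/disagree responses and hence no
        acquiescence bias."""
        if not 0 <= slider_position <= 100:
            raise ValueError("slider position must be between 0 and 100")
        return slider_position / 100.0

# Hypothetical usage: one slider for Emotional Stability.
es_slider = PersonalitySlider(
    trait="Emotional Stability",
    low_story="<validated low-ES story, cf. Table 7>",
    high_story="<validated high-ES story, cf. Table 7>",
)
print(es_slider.to_trait_score(72.5))  # -> 0.725, a continuous score
```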

To evidence the practical value of our methodology for conveying and measuring personality, we show how the personality stories and personality sliders have been successfully used in many of our studies [see Sect. 6].

Fig. 1 The methodology used in this paper for personality slider development

Overview of methodology

Our methodology for conveying and measuring personality traits using personality stories [see Fig. 1] consists of the following stages:

  1. Creating short stories about a person to express distinct personality traits [their target trait]: we use Resilience, Generalized Self-Efficacy, and those from the Five Factor Model.

  2. Iteratively validating the generated stories to ensure that they robustly convey their target trait at both high and low levels, by asking people to fill out a personality questionnaire for the person in the story [a different questionnaire from the one used for story creation]. Issues include the case where the perceived score for a non-target trait [a personality trait other than the target trait] differs significantly between the high and low story, and the case where the scores for non-target traits lie outside a normative range. Pilots were conducted in the lab, with later studies conducted using crowd-sourcing for broader generalizability.

  3. Validating the approach of measuring personality through stories by asking users to indicate, using a slider, which of the two portrayed individuals they are most like. The resulting slider values were correlated with standardized personality tests for the same traits.

  4. Outlining how the slider values can be used to distinguish groups of users with distinct levels of personality traits. Before the sliders could be used in a system, or even applied experimentally to evaluate adaptation, we needed to define how to use the slider values [a grouping sketch follows this list]. We summarise the advantages and disadvantages of the respective methods.

  5. Validating the approach in experiments where personality is likely to affect adaptation [i.e. using the stories in experiments where an effect of personality is hypothesized]. We tested the approach in multiple studies.
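
As a forward illustration of stage 4, the sketch below shows two straightforward ways of turning continuous slider values into groups with distinct trait levels: a fixed-threshold split and a tercile split over the observed sample. The cut-off values and function names are illustrative assumptions only; Sect. 5 discusses the methods we actually considered and their trade-offs.

```python
import numpy as np

def fixed_threshold_groups(scores, low=0.4, high=0.6):
    """Label each 0-1 slider score as 'low', 'mid' or 'high' using
    fixed cut-offs (the thresholds here are hypothetical)."""
    labels = []
    for s in scores:
        if s < low:
            labels.append("low")
        elif s > high:
            labels.append("high")
        else:
            labels.append("mid")
    return labels

def tercile_groups(scores):
    """Split the observed sample into terciles: group sizes are balanced,
    but the cut-offs depend on the particular sample."""
    lo, hi = np.quantile(scores, [1 / 3, 2 / 3])
    return ["low" if s <= lo else "high" if s > hi else "mid" for s in scores]

scores = np.array([0.12, 0.35, 0.48, 0.55, 0.71, 0.90])
print(fixed_threshold_groups(scores))  # fixed cut-offs
print(tercile_groups(scores))          # sample-dependent cut-offs
```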

Crowd sourcing participants

We rely heavily on rapid questionnaire responses from a participant pool to iteratively validate personality stories. Where the number of unique participants required was small, we used convenience sampling. However, our participant pool was too small for Five Factor Model validation as many iterations were required [explained in Sect. 4.3]. To expand our participant pool, we decided to use the crowd-sourcing service, Amazon Mechanical Turk [MT] [2012].

MT is helpful when large numbers of participants are required for studies. However, valid concerns exist that data collected online may be of lower quality and require robust validation methods. Many studies, such as those described by Weinberg et al. [2014], have examined the validity of using MT to collect research data. These studies have generally found that the quality of MT data is comparable to that collected in supervised lab experiments, provided studies are carefully set up, explained, and controlled. We follow recommended best practice in our MT experimental design and procedures.

In our work we have obtained some insights into using crowd-sourcing to gather experimental data. We were initially concerned that crowd-sourced participants [workers] would simply complete questionnaires in a random fashion in order to be paid. However, we found no evidence for this. “Gaming the system” by random scoring did not occur: participants correctly identified the personality trait we were portraying.

MT holds statistics on each worker, including their acceptance rate. This is available to all requesters [those setting tasks] and represents the percentage of a worker’s submitted work that has been approved [across all requesters]. Thus, if somebody consistently submits poor work, their acceptance rate drops. As requesters can set a high acceptance rate as a qualification for their tasks, workers are motivated to value their acceptance rate and complete tasks conscientiously. In addition, an integrated cloze test for English fluency [Taylor 1953] was used as an attention check, to ensure participants read the instructions carefully and had sufficient literacy skills to understand the task. We were also able to restrict participation to the United States, which considerably reduces the likelihood of spam in the results.
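
These quality controls can be pictured as a simple screening step over the collected responses. The sketch below assumes a hypothetical response record containing an MT approval rate, a cloze-test score and a country code; the 95% approval threshold and the cloze pass mark are illustrative values, not the exact criteria used in our studies.

```python
def passes_screening(response, min_approval=0.95, min_cloze=0.8):
    """Return True if a crowd-sourced response meets the (hypothetical)
    quality criteria: a high prior approval rate, a passed cloze test
    for English fluency, and a US location."""
    return (
        response["approval_rate"] >= min_approval
        and response["cloze_score"] >= min_cloze
        and response["country"] == "US"
    )

responses = [
    {"worker": "w1", "approval_rate": 0.99, "cloze_score": 0.9, "country": "US"},
    {"worker": "w2", "approval_rate": 0.80, "cloze_score": 1.0, "country": "US"},
]
kept = [r for r in responses if passes_screening(r)]
print([r["worker"] for r in kept])  # -> ['w1']
```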

The paper is structured as follows. Section 2 surveys the literature on measuring, conveying and adapting to personality. Section 3 describes the story creation process. Section 4 discusses the process of story validation. In Sect. 5, we test using the stories to measure user personality and outline how these results can be applied to group users by personality trait. Section 6 shows the application of the methodology by summarising many studies that investigated adaptation to personality and used the stories to convey or measure personality. Section 7 concludes the paper, discusses its limitations and provides directions for future work.

Related work

In this section, we describe the models of personality used in this paper and the rationale for choosing them, focusing specifically on trait theories and social learning approaches. We summarize the methods for obtaining users’ personality traits and then describe how personality can be portrayed, building on these methods. Finally, we discuss adaptation to personality in recommender systems, persuasive systems, and intelligent tutoring systems, focusing on adaptation to particular personality traits and on how personality was acquired and portrayed in the studies conducted.

Table 1 The five robust dimensions of personality from Fiske [1949] to present


Models of personality

Personality trait theories

Traits are defined as “an enduring personal characteristic that reveals itself in a particular pattern of behaviour in different situations” [Carlson et al. 2004, p. 583]. Over time, trait theorists have tried to identify and categorise these traits [Carlson et al. 2004]. The number of traits identified has varied, with competing theories arising. The best known include Eysenck’s three factors [Eysenck 2013], Cattell’s 16PF [Cattell 1957], and the Five-Factor Model [FFM] [Goldberg 1993]. More recently, a general consensus towards five main traits [or dimensions] has emerged [Digman 1990; McCrae and John 1992], shown in Table 1 [reproduced from Digman 1990]. Most psychologists consider the FFM robust [Magai and McFadden 1995], and a multi-year study found that individuals’ trait levels remained relatively stable [Soldz and Vaillant 1999]. The exact names of the traits are still disputed by psychologists [Goldberg 1993; McCrae and John 1992; Digman 1990]; however, we adopt the common nomenclature from John and Srivastava [1999] and refer to them as:

  I. Extraversion: How talkative, assertive and energetic a person is.

  II. Agreeableness: How good-natured, cooperative and trustful a person is.

  III. Conscientiousness: How orderly, responsible and dependable a person is.

  IV. Emotional Stability [ES]: How calm, non-neurotic and imperturbable a person is.Footnote 3

  V. Openness to Experience: How intellectual, imaginative and independent-minded a person is.

Resilience

The FFM is the core model of personality, as its traits are considered stable [i.e. a person’s personality does not change, or changes only very slowly]. However, people also have traits that vary more quickly, encapsulate several core traits, or are more environment/experience-dependent. One example is resilience, an often loosely defined term that encapsulates “the ability to bounce back from stress” [Smith et al. 2010, p. 166]. Poor resilience is associated with depression [O’Rourke et al. 2010; Southwick and Charney 2012; Hjemdal et al. 2011] and anxiety [Connor and Davidson 2003; Hjemdal et al. 2011]. While not as stable as the FFM traits, resilience is a medium-term trait that may be improved by interventions [Smith et al. 2010].

Social learning approaches

The Social Learning approach to personality “embodies the idea that both the consequences of behaviour and an individual’s beliefs about those consequences determine personality” [Carlson et al. 2004, p. 593]. Whereas trait theorists argue that knowing the stable characteristics of individuals can predict behaviour in certain situations, advocates of the Social Learning approach think that the environment surrounding an individual is more important when predicting behaviours [Carlson et al. 2004]. Two popular Social Learning models are Locus of Control [LoC] [Rotter 1966] and [generalized] Self-Efficacy [GSE] [Bandura 1994].

An individual’s Locus of Control represents the extent to which they believe they can control events that affect them [Rotter 1966]. A learner with an internal LoC believes that they can control their own fate, e.g. they feel responsible for the grades they achieve. A learner with an external LoC believes that their fate is determined by external forces, e.g. that their grade is a result of the difficulty of the exam or the quality of their teaching. Self-Efficacy is defined as “the belief in one’s capabilities to organize and execute the courses of action required to manage prospective situations” [Bandura 1995, p. 2] and determines whether individuals will adapt their behaviour to make changes in their environment, based on an evaluation of their competency [Carlson et al. 2004]. It also determines whether an individual will maintain that change in behaviour in the face of adversity; GSE has been shown to be an excellent indicator of motivation [McQuiggan et al. 2008].

Measuring personality

There are many explicit and implicit approaches to measuring personality. Explicitly, personality traits can be obtained through self-report questionnaires, which typically ask users to rate to what extent certain statements apply to them. Multiple versions of such questionnaires exist. For example, the Five-Factor Model [FFM] is often used in research, not only because there is broad agreement between psychologists, but because many validated questionnaires of varying length exist to measure it [e.g. the 5-item FIPI [Gosling et al. 2003], 10-item TIPI [Gosling et al. 2003], BFI-10 [Rammstedt and John 2007], 20-item mini-IPIP [Donnellan et al. 2006], 40-item minimarkers [Saucier 1994a], 44-item BFI [John and Srivastava 1999], 50-item IPIP-NEO-50 [Goldberg et al. 2006], 60-item NEO-FFI [McCrae and Costa 2004], 240-item IPIP-PI-R, and 300-item IPIP-NEO [Goldberg et al. 2006]]. Questionnaires for other traits also exist [see Table 2 for questionnaires that have been used for other traits]. Advantages of measuring personality through self-report questionnaires include the ease of administration, the existence of validated questionnaires for most traits [so the approach is easily extended to other traits], and transparency to users. Disadvantages are that they are often time consuming [leading to problems such as straight-lining; Zhang and Conrad 2014] and may be inaccurate [either because respondents see themselves differently from how they really are, or because they want to portray a certain image to other people].
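
As an illustration of explicit measurement, the sketch below scores a short FFM-style questionnaire that contains reverse-keyed items on a 1–5 scale. The item keying and trait assignments are invented for the example; real instruments such as the TIPI or BFI-10 publish their own keys and scoring rules, which should be followed when using them.

```python
# Hypothetical keying: (trait, reverse_keyed) per item, 1-5 response scale.
ITEM_KEY = [
    ("Extraversion", False),         # e.g. "is outgoing, sociable"
    ("Extraversion", True),          # e.g. "is reserved" (reverse keyed)
    ("Emotional Stability", True),   # e.g. "worries a lot" (reverse keyed)
    ("Emotional Stability", False),  # e.g. "is relaxed, handles stress well"
]

def score_questionnaire(responses, scale_max=5):
    """Sum per-trait scores, reversing reverse-keyed items
    (reversed value = scale_max + 1 - response)."""
    totals = {}
    for (trait, reverse), value in zip(ITEM_KEY, responses):
        if reverse:
            value = scale_max + 1 - value
        totals[trait] = totals.get(trait, 0) + value
    return totals

print(score_questionnaire([4, 2, 1, 5]))
# -> {'Extraversion': 8, 'Emotional Stability': 10}
```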

Personality traits can also be measured implicitly using machine learning techniques. Personality can be inferred from user-generated content in social media, e.g. Facebook Likes [Kosinski et al. 2014; Youyou et al. 2015], language used [Park et al. 2015; Oberlander and Nowson 2006], Twitter user types [e.g. number of followers] [Quercia et al. 2011], a combination of linguistic and statistical features [e.g. punctuation, emoticons, retweets] [Celli and Rossi 2012], and structural social network properties [Bachrach et al. 2012; Quercia et al. 2012; Lepri et al. 2016]. See Farnadi et al. [2016] for a comparative analysis.
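
To illustrate the implicit approach, the sketch below fits a linear model from simple bag-of-words text features to a self-reported trait score, in the spirit of the social-media studies cited above. The toy corpus, the trait values and the choice of ridge regression are assumptions for illustration; published systems use far richer features [e.g. LIWC categories, network structure] and much larger datasets.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import Ridge

# Toy corpus: each author's concatenated posts, plus an invented
# self-reported extraversion score used as the training target.
posts = [
    "had an amazing night out with friends, party again soon!",
    "stayed home reading, quiet evening, just me and my books",
    "met so many new people today, loved every minute of it",
    "prefer working alone, crowds are exhausting",
]
extraversion = np.array([0.9, 0.2, 0.8, 0.1])

X = CountVectorizer().fit_transform(posts)   # bag-of-words features
model = Ridge(alpha=1.0).fit(X, extraversion)

print(model.predict(X).round(2))  # in-sample predictions on the toy data
```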

Table 2 Examples of existing work on adapting to personality


Alternatively, other interaction data can be used, such as measuring personality traits from gaming behaviour. For example, Cowley and Charles [2016] use features that describe game player behaviour based on the temperament theory of personality, Yee et al. [2011] measure personality from player behaviour in World of Warcraft, Wohn and Wash [2013] from spatial customisation in a city simulation game, and Koole et al. [2001] use a common-resources dilemma gaming paradigm. Implicit association tests have also been used, measuring reaction times to visual stimuli associated with contrasting personality descriptors [Grumm and von Collani 2007].

Non-verbal data from speech and video can also be used, such as prosody, intonation, gaze behaviour, and gestures. For example, Polzehl [2014] details how speech features can be used. Biel and Gatica-Perez [2013] use features from video blogs such as speaking time, speaking speed, and how much the person looks at the camera. Staiano et al. [2011] use speech and gaze attention features from videos of meetings. Rojas et al. [2011] use facial features.

Finally, multimodal personality recognition can also be used; for example, Farnadi et al. [2014] used a combination of textual [linguistic and emotional] features extracted from transcripts of video blogs in addition to audio-video features. Similarly, Srivastava [2012] used a combination of non-verbal behaviour and lexical features.

For a more in-depth review of automated personality recognition, including a summary of existing studies and which personality traits were recognised, see Vinciarelli and Mohammadi [2014].

Advantages of measuring personality implicitly are that it can be done unobtrusively [as long as the data used is generated naturally] and tends to have good accuracy. Disadvantages are potential privacy implications [it is important that users provide explicit consent], the need for substantial data for the underlying machine learning algorithms [so it takes time to measure the personality of new users], and the limited availability of suitable datasets for other applications. Dunn et al. [2009] investigated ease of use, user satisfaction, and accuracy for three interfaces for obtaining personality: one explicit [the NEO PI-R, with 240 questions] and two implicit [a game and an implicit association test]. They concluded that an explicit way of measuring personality is better for ease of use and satisfaction.

Portraying personality

Personality can be portrayed in many ways, often inspired by the ways in which it can be measured. Firstly, participants can be shown content generated by someone with the personality trait we want to portray, such as a blog post, audio recording, or video. This is hard to do well, as it is difficult to avoid conveying information beyond personality. For example, facial expressions [as may be present in video recordings], speech [as present in video and audio recordings], and linguistic content [as present in text and speech] provide superfluous information about affective state [Zeng et al. 2009]. Video, audio and text often also implicitly provide information about the person’s ethnicity/region of origin, age, gender, and opinions [Rao and Yarowsky 2010]. Additionally, it requires finding people with exactly the personality trait required, and obtaining their permission to use the content they generate for this purpose.

Secondly, participants can be shown such content, but rather than coming from a person with the desired personality trait, the trait is portrayed by an actor or researcher, or the content is generated automatically based on what is known to influence the measurement of certain personality traits. This provides more control, as an actor can be instructed to depict only one trait at the extreme, and to try to be neutral on other variables, such as affective state. Social Psychology and Medical Education commonly use actors to depict personality traits. For example, Kulik [1983] used actors to portray extraversion [the actor smiled, spoke rapidly and loudly, discussed drama, reunions with friends, lively parties] and introversion [the actor spoke more hesitantly, talked about his law major, lack of spare time, interest in Jazz]. Barrows [1987] describes simulated/standardized patients as presenting the gestalt of the patient being simulated, including their personality. The problem remains that actors also provide information about gender, age, and ethnicity. Additionally, hiring good actors may be costly.

Portraying personality is also widely investigated in the Affective Computing community, particularly through virtual agents [Calvo et al. 2015]. For example, Doce et al. [2010] convey the personality of game characters through the nature and strength of the emotions a character portrays, and their tendency to act in a certain manner. However, this is still difficult to do well, and again it is hard to ensure that only a personality trait is expressed and nothing more.

Thirdly, a person can be described explicitly by mentioning the personality trait [e.g. “John is very conscientious”] or how the person behaves or would behave in certain circumstances [e.g. “John tends to get his work done very rapidly”]. For example, Luchins [1958] produced short stories to portray extraversion and introversion. These contained sentences such as “he stopped to chat with a school friend who was just coming out of the store” and “[he] waited quietly till the counterman caught his eye”. Using a single sentence with just the personality trait is easy to do, but it may not provide participants with a strong enough perception of the trait and it can easily be overlooked. Using a story solves this, but the story may not convey the intended trait.

In all of these cases, it is important to validate that the portrayal of a personality trait accurately creates the intended impression of personality, and does not produce additional impressions [of an unintended personality trait, or of an attribute such as intelligence]. For example, Luchins [1958] actually found that participants inferred many other characteristics [such as friendliness] from his stories. Kulik [1983] found that prior conceptions about the actors influenced people’s opinions.

Adapting to personality

There is growing interest in personalization to personality, as seen from the UMUAI 2016 special issue on “Personality in Personalized Systems” [Tkalčič et al. 2016] and the “Emotions and Personality in Personalized Systems” [EMPIRE] workshops. Research on personalization to personality has focused mainly in three domains: Persuasive Technology, Intelligent Tutoring Systems, and Recommender Systems. Table 2 presents a non-exhaustive list of such research.

As shown in Table 2, research on personality in Persuasive Systems has mainly focused on adapting messages [motivational messages, prompts, adverts, reminders] and selecting persuasive strategies. Adaptation tends to use the Five Factor Model, though there has also been work on adapting to susceptibility to persuasion principles and gamer types.Footnote 4 All papers cited use self-reporting questionnaires.

Research on personality in Intelligent Tutoring Systems has mainly focused on adapting feedback/emotional support, navigation [exercise and material selection] and hints/prompts. The Five Factor Model tends to be the basis for personality adaptation, though generalized self-efficacy [GSE] is also used. To assess personality, all papers cited used self-report questionnaires, except for Dennis et al. [2016], Okpo et al. [2016b] and Alhathli et al. [2016], who used indirect experiments in which participants made choices for a fictitious learner with a given personality.

Table 3 Self-report questionnaire for Generalized Self Efficacy [Schwarzer and Jerusalem 1995]


Research on personality in Recommender Systems [see also Tkalčič and Chen 2015] has broadly considered the following topics: improving recommendation accuracy [Wu and Chen 2015], boot-strapping preferences for new users [Hu and Pu 2011; Tkalčič et al. 2011; Fernández-Tobías et al. 2016], the impact of personality on users’ preferences for recommendation diversity [Tintarev et al. 2013; Chen et al. 2016; Nguyen et al. 2017], cross-domain recommendation [Cantador et al. 2013], and group recommender systems [Kompan and Bieliková 2014; Quijano-Sanchez et al. 2010; Rawlings and Ciancarelli 1997]. Adaptation in recommender systems aimed at individuals tends to use the FFM. However, for group recommender systems other personality traits have been used [see also Masthoff 2015], such as cooperativeness. To assess personality, all papers cited used self-report questionnaires, except Appel et al. [2016], who extracted personality from social media usage.

Creation of stories to express personality traits

This section describes the creation process of personality stories to express GSE, Resilience and the Five-Factor Model traits.Footnote 5 These stories are validated and amended in the next section. Male names were used for all stories to keep gender constant. If “gender-neutral” names had been used, participants’ interpretation of the character’s sex might have caused an unwanted interaction effect in the validation.

Stories for generalized self-efficacy

The self-report questionnaire for Generalized Self-Efficacy [Schwarzer and Jerusalem 1995], shown in Table 3, was used as a starting point.Footnote 6 Each questionnaire item is positively keyed. The overall GSE score is the sum of the scale items, with a high score [max 40] indicating high GSE.

For the high GSE story, a selection of the questionnaire items was used and changed into the third person. For the low GSE story, the valence of the items was inverted. The stories were made more realistic by associating them with a character, a first-year learner called “James” [the most popular male name in English in 2010, and therefore suitably generic]. The resulting stories are shown in Table 4.
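
This construction step can be pictured as pairing each first-person questionnaire item with a third-person rendering and a hand-written inverted-valence counterpart, then joining the chosen clauses into the high and low stories. The clause wordings below are illustrative paraphrases, not the validated text; the actual stories are given in Table 4.

```python
# Hypothetical item rewrites: (third-person high-GSE clause,
# hand-authored inverted-valence clause for the low-GSE story).
GSE_CLAUSES = [
    ("James can always manage to solve difficult problems if he tries hard enough",
     "James often cannot solve difficult problems, however hard he tries"),
    ("he is confident he could deal efficiently with unexpected events",
     "he doubts he could deal with unexpected events"),
]

def build_story(clauses, high=True):
    """Join the chosen clauses into a short third-person story,
    one sentence per clause."""
    parts = [hi if high else lo for hi, lo in clauses]
    return " ".join(p[0].upper() + p[1:] + "." for p in parts)

print(build_story(GSE_CLAUSES, high=True))   # draft high-GSE story
print(build_story(GSE_CLAUSES, high=False))  # draft low-GSE story
```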

Table 4 Stories used for Generalized Self-Efficacy, high and low


Stories for resilience

For Resilience, questions were used from the Connor-Davidson Resilience scale [Connor and Davidson 2003]. These encapsulate five factors that contribute to resilience—Positive attitudes to change and strong relationships; Personal competency and tenacity; Spiritual beliefs and superstitions; Instincts and tolerance of negative emotions; and Control. Using questions from each factor, stories were composed for both high and low resilience [see Table 5] that are roughly symmetrical in order and content. The clauses ‘David is kind and generous’ [in both the high and low stories] and ‘He is friendly’ [in the low story] were added to counter the fact that the low resilience story depicted a fairly negative character.

Table 5 High and low resilience personality stories


Table 6 Story construction for low emotional stability using the NEO-IPIP low items


Stories for the five factor model

Unlike GSE and Resilience, the Five Factor Personality Trait Model does not describe a single trait. As discussed in Sect. 2.1.1, the five factors [traits] are Extraversion, Agreeableness, Conscientiousness, Emotional Stability and Openness to Experience. Thus, the personality of any individual can be described by five scores, one for each of the factors. This means that stories had to be created for each trait, at both low and high levels [totalling 10 stories].

To create the FFM stories, we used the NEO-IPIP 20-item scales [Gow et al. 2005], combining the phrases into sentences to form a short story and adding a name picked from the most common male names. Unlike the GSE scale, these scales provide both positive and negative items, so the high and low stories could be made from the positive and negative items respectively. Table 6 exemplifies how the stories were constructed. Table 7 shows the stories.

Table 7 Preliminary Stories expressing each FFM trait at high and low levels


Validation of stories to express personality traits

This section describes the validation process: how each story was checked to ensure it correctly depicted the trait it was intended to convey [the target trait].

A series of validation studies were performed for the stories constructed to convey Generalised Self-Efficacy, Resilience, and the traits from the FFM [Extraversion, Agreeableness, Conscientiousness, Emotional Stability and Openness to Experience]. Each trait had two stories associated with it—one to express the trait at a high level, and one to express the trait at a low level.

For each trait, at least one validation experiment was conducted [the traits from the Five Factor Model required more; this is explained further in Sect. 4.3]. Each validation experiment used a between-subjects design: participants were shown either the high story or the low story, and were then asked to rate the personality of the person depicted in the story using a validated questionnaire for the trait in question.

As outlined in Sect. 3, the stories were originally constructed using an existing personality measurement questionnaire. For validation purposes, a different measurement questionnaire for the same trait was used, as this used different language and terms from the story [preventing participants from simply recognising phrases], made the purpose of the experiment less obvious, and decreased demand characteristics.

For the GSE and FFM stories, we also measured how the stories conveyed other traits [non-target traits]. For GSE, we investigated how the stories conveyed the FFM traits and Locus of Control.Footnote 7 It has been shown previously [Judge et al. 2002; Hartman and Betz 2007] that GSE interacts with both of these measures; however, if we found an unexpected interaction, this would allow us to correct the story. For the FFM stories, we checked how the other four non-target FFM traits were conveyed.Footnote 8 For Resilience, which again used crowd-sourcing, a different approach was taken, which is elaborated on in Sect. 4.2.

Generalized self-efficacy [GSE] validation

This experiment explored whether the stories correctly conveyed different levels of GSE, and what other personality traits were implied, using a different validated trait assessment questionnaire for GSE [Chen et al. 2001]. We also explored how the stories depicted the FFM traits [using the minimarkers; Saucier 1994a] and Locus of Control [using the questionnaire of Goolkasian 2009]. Fifty participants [42% female, 52% male, 6% preferred not to say; 34% aged 18–25, 48% aged 26–40, 14% aged 41–65, 2% aged over 65, 2% preferred not to say], recruited through convenience sampling, answered these questionnaires in a between-subjects design after reading a GSE personality story: 26 viewed the low GSE story and 24 viewed the high GSE story.

Table 8 Results of t tests for GSE story validation


Table 8 shows the results. t testsFootnote 9 were run for each of the traits to test whether the perceived scores differed significantly between the high and low GSE stories. For GSE, this difference was significant [t(48) = -13.514, p < 0.001].
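
The analysis behind Table 8 can be sketched as follows: for each measured trait, compare the ratings given to the person in the high story with those given in the low story, expecting a large difference for the target trait [GSE] and no significant difference for the non-target traits. The data below are invented placeholders, and scipy’s independent-samples t test stands in for the exact procedure reported; only the structure of the comparison is meant to match.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Placeholder ratings of the story character (not the real study data):
# 26 participants rated the low-GSE story and 24 rated the high-GSE story.
ratings = {
    "GSE (target)": (rng.normal(2.0, 0.5, 26), rng.normal(4.2, 0.5, 24)),
    "Extraversion (non-target)": (rng.normal(3.1, 0.7, 26), rng.normal(3.2, 0.7, 24)),
}

for trait, (low_group, high_group) in ratings.items():
    res = stats.ttest_ind(low_group, high_group)
    df = len(low_group) + len(high_group) - 2
    print(f"{trait}: t({df}) = {res.statistic:.2f}, p = {res.pvalue:.4f}")
```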
