
Trial registered on ANZCTR


Registration number
ACTRN12622000676718
Ethics application status
Approved
Date submitted
27/04/2022
Date registered
10/05/2022
Date last updated
9/06/2022
Date data sharing statement initially provided
10/05/2022
Type of registration
Prospectively registered

Titles & IDs
Public title
Computer's human-likeness and its effects on willingness to disclose health-relevant information in healthy adults
Scientific title
Computer's human-likeness and its effects on willingness to disclose health-relevant information in healthy adults
Secondary ID [1] 307007 0
None
Universal Trial Number (UTN)
Trial acronym
Linked study record

Health condition
Health condition(s) or problem(s) studied:
Self-disclosure of health behaviours 326126 0
Self-disclosure of emotional events 326130 0
Condition category
Condition code
Mental Health 323451 323451 0 0
Studies of normal psychology, cognitive function and behaviour

Intervention/exposure
Study type
Interventional
Description of intervention(s) / exposure
This study will be a randomized controlled trial with three experimental conditions (intervention arms).
Arm 1: A brief clinical assessment delivered by an online questionnaire
Arm 2: A brief clinical assessment delivered by a chatbot interviewer (reading and typing interface)
Arm 3: A brief clinical assessment delivered by a digital human (DH) interviewer

A community sample of adults (18-35 years old) will be block-randomized by gender to one of the three experimental conditions in a 1:1:1 ratio. For each condition, participants will complete a semi-structured clinical assessment automatically delivered by their allocated technology. The assessment comprises 24 health-relevant items collecting personal information about health behaviours and recent emotional events. The assessment is estimated to last about 25 minutes.
The information below first describes the clinical assessment procedure and then the three types of digital assessment methods.
The clinical assessment
The semi-structured clinical assessment will be automatically delivered by a digital human (DH), a chatbot, or an online questionnaire. The assessment is broadly structured into three phases: an initial rapport-building phase, the main assessment phase, and the closure. The initial phase is estimated to last 5 minutes and involves the digital human (DH) or chatbot introducing themselves, initiating the conversation (e.g., “where are you from?”), and explaining the scope of the interview (i.e., collecting some health-relevant information, including health behaviours and recent emotional events). For the online questionnaire, the initial section will introduce the assessment scope and include the same rapport-building questions as in the chatbot and digital human conditions. The main assessment phase is estimated to last 15 minutes and comprises a list of close-ended questions asking about a wide range of health behaviours and a few open-ended questions concerning experiences of recent emotional events. Finally, the closure phase is estimated to last 5 minutes. The digital human (DH) or chatbot interviewer will remind participants that the interview is ending, deliver a generalized appreciation message (e.g., “I really appreciate your trust in sharing your personal experiences with me”), and close the interview with some uplifting questions (e.g., “Tell me about three things that you feel grateful to have in your life?”). The interview questions will be delivered in a fixed sequence, with some general receptive feedback provided in response to the answers. Participants can skip any question by saying “I do not want to answer”.
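As an illustration only, the sketch below shows the fixed-sequence delivery and skip logic described above. The actual interviewers run on the Soul Machines and IBM Watson platforms, not this script; the question wording, skip-phrase handling, and feedback messages here are assumptions.

```python
# Illustrative sketch only (assumed logic): the real interviewers are built on
# Soul Machines and IBM Watson. Question wording, the skip phrase, and the
# feedback messages below are placeholders.
SKIP_PHRASE = "i do not want to answer"

QUESTIONS = [  # delivered in a fixed sequence (illustrative subset of the 24 items)
    "Where are you from?",
    "In a typical week, on how many days do you exercise?",
    "Tell me about a recent event that made you feel upset.",
]

def run_interview(get_answer):
    """Deliver each question in order; participants may skip any item."""
    responses = {}
    for question in QUESTIONS:
        answer = get_answer(question)
        if answer.strip().lower() == SKIP_PHRASE:
            responses[question] = None                # declined item
            print("That's okay, we can move on.")     # general receptive feedback
        else:
            responses[question] = answer
            print("Thank you for sharing that.")      # general receptive feedback
    return responses

# Example (console-based): run_interview(lambda q: input(q + " "))
```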
All of the assessment items are selected from well-established mental health and addiction screening questionnaires, validated lifestyle surveys for adults, or previous studies that have examined adults' self-disclosure of health information to conversational agents. The health behaviour items asking about exercise, sugary drink consumption, and vegetable consumption habits are selected from a lifestyle survey for university students (Silliman et al., 2004). The items asking about substance use (tobacco, alcohol, cannabis) are selected from the well-established Alcohol, Smoking and Substance Involvement Screening Test (ASSIST), which is suitable for adults in New Zealand (Humeniuk, 2006). The item asking about intoxication over the past month is adapted from Schuetzler et al.'s (2015) study, which examined users' socially desirable responding to health-relevant and other personal information. All the sexual risk behaviour items are taken from the validated Sexual Risk Behaviour Scale (Fino et al., 2021) for university students. Lastly, the item asking about drunk driving is taken from a health risk behaviour study by Laska and colleagues (2009). For the emotional experience items, three items asking about perceived loneliness are selected from the well-established UCLA Loneliness Scale (Russell, 1996). In addition, the open-ended items asking about recent emotional events are adapted from a study by Lucas et al. (2014), which examined the effect of human presence on self-disclosure behaviours during semi-structured interviews.

The digital human interviewer
The digital human (DH) interviewer is developed by Soul Machines Ltd (Auckland, New Zealand). The digital human (DH) interviewer is modelled as a young adult female of mixed ethnicity. The digital human (DH) interviewer is autonomously animated and presented on a website accessible to users from their personal computers and tablets. The digital human (DH) interviewer will be programmed to deliver the semi-structured interview questions in a fixed sequence and to provide some general receptive feedback to users. As the digital human (DH) interviewer speaks, she will engage in human-like facial and body gestures, including displaying facial expressions, maintaining eye gaze, and moving her head and shoulders.
Participants will be informed that the digital human (DH) interviewer continuously collects speech and video data in order to communicate (e.g., to hear speech and make eye contact). These data will not be recorded, stored, or analysed by the researchers. Soul Machines' digital humans (DH) engage in data collection processes compliant with the European Union General Data Protection Regulation (GDPR) (Soul Machines, 2021).

The chatbot interviewer
The chatbot interviewer is programmed using IBM Watson and deployed to a personal website (Google Sites). The chatbot interviewer will be displayed as a static humanoid character image and accessed on a website from a computer. The chatbot interviewer will be programmed to deliver the semi-structured interview questions in a fixed sequence and to provide some general receptive feedback.
Participants will be informed that their responses to the assessment will be collected by IBM Watson. IBM's data collection processes are compliant with the European Union General Data Protection Regulation (GDPR).

The online questionnaire task
The online questionnaire will be created using Qualtrics. The online questionnaire will include the same assessment questions as in the digital human (DH) and chatbot conditions and will ask these questions in exactly the same sequence as in the other conditions. The questionnaire will not provide any feedback on users' inputs until the point of submission (i.e., “Your responses have been recorded. Thanks for completing this assessment.”).

The intervention will take place on a desktop computer in a private clinic room at the University of Auckland Clinical Research Center. The total research session lasts about 45 minutes. The researcher (a master's student) will first spend about 10 minutes instructing participants on how to use their allocated technology, including a demonstration of how to use the digital human or chatbot. Participants will then be left alone to complete the clinical assessment with their allocated technology. The researcher will be available in another room to provide help if needed. Participants' responses to the assessment will be recorded by the allocated technology. In addition, audio recordings will be taken for participants allocated to the digital human (DH) group (to double-check the accuracy of their recorded verbal responses). Following the completion of the assessment, participants will spend about 10 minutes completing a follow-up online questionnaire on their impressions of their allocated digital assessment method and the perceived sensitivity of each assessment item.

Intervention code [1] 323452 0
Behaviour
Intervention code [2] 323453 0
Treatment: Devices
Comparator / control treatment
Active control group (the online questionnaire group is the active control group)
Control group
Active

Outcomes
Primary outcome [1] 331185 0
Participants' responses to the study-specific semi-structured clinical assessment will be extracted to measure the presence of socially desirable responses. Socially desirable responding is defined as the tendency to over-report socially favourable information and under-report socially unfavourable information, and will be measured as the proportion of socially desirable responses to the sensitive and non-sensitive close-ended health items of the assessment. The 18 close-ended health items will be categorized into two groups: sensitive and non-sensitive health items. The scores of all sensitive and all non-sensitive close-ended items will be aggregated respectively, with a higher score indicating a greater tendency to self-report socially unfavourable information and a lower score indicating a greater tendency to self-report socially favourable information. Consistent with prior research (e.g., Schuetzler et al., 2018), we regard more frequent engagement with health-protective behaviours, including eating vegetables and exercising, as socially favourable. We regard more frequent engagement with health risk behaviours, including drinking sugary beverages, using tobacco products, drinking alcohol, using drugs, engaging in sexual risk behaviours, and engaging in unsafe driving practices, as socially unfavourable. In addition, we associate a lower level of loneliness (for the three close-ended loneliness items) and no existing diagnosis of sexually transmitted infections (for the item asking about any such diagnosis) with a socially favourable image.
Timepoint [1] 331185 0
During the research session when participants complete the assessment
Primary outcome [2] 331189 0
Participants' responses to the study-specific semi-structured clinical assessment will be extracted to measure participants' willingness to provide answers to two groups of assessment items (sensitive and non-sensitive). The 24 assessment items will be categorized into two groups: sensitive and non-sensitive health items. Each participant will be scored by counting the number of sensitive items and the number of non-sensitive items for which they choose not to provide an answer. One unit of score represents one declined assessment item.
Timepoint [2] 331189 0
During the research session when participants complete the assessment
Primary outcome [3] 331190 0
Participants' responses to the study-specific semi-structured clinical assessment will be extracted to measure the amount of disclosure. The amount of self-disclosure will be indicated by the total word count of the participant's responses to the sensitive and non-sensitive open-ended items respectively (these items ask about positive or negative emotional events). The 6 open-ended assessment items will be categorized into two groups: sensitive and non-sensitive items. A scoring sketch for these outcome measures appears at the end of this Outcomes section.
Timepoint [3] 331190 0
During the research session when participants complete the assessment
Secondary outcome [1] 409129 0
Perceived anthropomorphism will be measured by the 5-item anthropomorphism scale (Powers & Kiesler, 2006).
Timepoint [1] 409129 0
In the follow-up questionnaire immediately following the completion of the assessment
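As a rough illustration of how the declination counts (primary outcome 2) and word counts (primary outcome 3) could be scored, the Python sketch below uses hypothetical item identifiers and responses; it is not the study's analysis code, and the item groupings are assumptions.

```python
# Hedged sketch of the outcome scoring described above; item identifiers,
# groupings, and the decline marker are illustrative assumptions.
DECLINE = "I do not want to answer"

def count_declinations(responses: dict[str, str], item_ids: set[str]) -> int:
    """Primary outcome 2: number of items in a group left unanswered."""
    return sum(1 for item, answer in responses.items()
               if item in item_ids and answer.strip().lower() == DECLINE.lower())

def total_word_count(responses: dict[str, str], item_ids: set[str]) -> int:
    """Primary outcome 3: total word count across a group of open-ended items."""
    return sum(len(responses[item].split()) for item in item_ids if item in responses)

# Hypothetical example
responses = {"alcohol_freq": "I do not want to answer",
             "neg_event_1": "Last week I argued with my flatmate about rent."}
print(count_declinations(responses, {"alcohol_freq"}))  # -> 1
print(total_word_count(responses, {"neg_event_1"}))     # -> 9
```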

Eligibility
Key inclusion criteria
The participants will be adults aged 18 to 35 years with English fluency (i.e., able to speak, read, and write fluent English).
Minimum age
18 Years
Maximum age
35 Years
Sex
Both males and females
Can healthy volunteers participate?
Yes
Key exclusion criteria
Participants will be excluded from the study if they have hearing difficulties or vision loss, as these participants may need special assistance with using the computer or hearing the researcher; due to limited resources, this study is not equipped to provide such assistance.

Study design
Purpose of the study
Treatment
Allocation to intervention
Randomised controlled trial
Procedure for enrolling a subject and allocating the treatment (allocation concealment procedures)
Block-randomisation of participants by gender was conducted by a member of the research team who was not involved in data collection. Allocations were concealed from the researcher involved in data collection in opaque envelopes. The researcher remained blinded to each participant's allocation until opening the envelope immediately before instructing the participant in the use of the allocated technology.
Methods used to generate the sequence in which subjects will be randomised (sequence generation)
A randomisation table was generated using Research Randomizer software that block-randomised participants by gender. This randomisation was performed by a member of the research team who did not interact with participants.
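For illustration only, the sketch below shows how a gender-stratified 1:1:1 permuted-block sequence of this kind could be generated in Python; the trial itself used the Research Randomizer software, and the block size, stratum sizes, and seeds here are assumptions.

```python
# Illustrative sketch only: the trial used the Research Randomizer web tool,
# not this script. Block size, stratum sizes, and seeds are assumed.
import random

ARMS = ["questionnaire", "chatbot", "digital_human"]
BLOCK_SIZE = 6  # assumed; must be a multiple of the number of arms

def permuted_block_sequence(n_participants: int, seed: int) -> list[str]:
    """Generate a 1:1:1 allocation sequence using permuted blocks."""
    rng = random.Random(seed)
    sequence: list[str] = []
    while len(sequence) < n_participants:
        block = ARMS * (BLOCK_SIZE // len(ARMS))  # balanced block of 6
        rng.shuffle(block)
        sequence.extend(block)
    return sequence[:n_participants]

# One independent sequence per gender stratum (blocked by gender).
allocation = {
    "female": permuted_block_sequence(n_participants=80, seed=1),
    "male": permuted_block_sequence(n_participants=80, seed=2),
}
print(allocation["female"][:6])
```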
Masking / blinding
Blinded (masking used)
Who is / are masked / blinded?

The people administering the treatment/s

Intervention assignment
Parallel
Other design features
Phase
Not Applicable
Type of endpoint/s
Efficacy
Statistical methods / analysis
Lucas et al. (2014) compared the effects of framing virtual humans as automated versus teleoperated by real humans on users' self-reported fear of judgment, impression management pressure, displays of sad facial expressions, and self-reported willingness to disclose. In addition, Schuetzler et al. (2018) compared the effects of a chatbot's embodiment and responsiveness on people's self-disclosure of health behaviours. They found that a responsive, unembodied chatbot induced socially desirable responding to a relatively sensitive health item (asking about intoxication over the past month) with a small-to-medium effect size (Cohen's f close to 0.25). Although these studies differ from our study in technology design (and hence the manipulation of humanlike cues) and measurement, they provide some basis for estimating a possible effect size of “perceived human presence” on self-disclosure behaviours. We therefore choose f = 0.25 as the basis of our power calculation. Using G*Power, we would need at least 159 participants to detect an effect size of Cohen's f = 0.25 with 80% power and an alpha threshold of .05 for an analysis of variance (ANOVA). Hence, we will attempt to recruit at least 159 participants for this study.
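The reported calculation was performed in G*Power; as a rough cross-check under the same assumptions (one-way ANOVA, three groups, Cohen's f = 0.25, alpha = .05, power = .80), a Python equivalent could look like the sketch below. Note that statsmodels may round slightly differently from G*Power.

```python
# Rough cross-check of the G*Power sample-size calculation described above.
import math
from statsmodels.stats.power import FTestAnovaPower

n_total = FTestAnovaPower().solve_power(
    effect_size=0.25,  # Cohen's f
    alpha=0.05,
    power=0.80,
    k_groups=3,        # DH, chatbot, online questionnaire
)
print(math.ceil(n_total))  # total N across the three arms; G*Power reports 159
```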

The transcripts will be entered into LIWC2015 for automatic computation of the word counts of the participants' responses to each open-ended assessment question. Afterward, the quantitative data will be entered into SPSS 27 for further analyses. Two multivariate analyses of variance (MANOVA) will first be conducted to compare overall differences in the three primary outcomes (i.e., socially desirable responses, willingness to provide answers, and disclosure amounts) for the sensitive and non-sensitive assessment items respectively across the three experimental conditions. To test the first hypothesis (H1), two ANOVA tests will be performed to analyse the differences in users' responses to sensitive and non-sensitive close-ended health information items across the three conditions (i.e., DH, chatbot, and questionnaire). A post-hoc test will be conducted following any significant F test result. To test the second hypothesis (H2), two ANOVA tests will be performed to detect any between-group differences in the total number of declined responses to sensitive and non-sensitive assessment items. To test the third hypothesis (H3), two ANOVA tests will be performed to detect between-group differences in the total word counts of users' responses to the sensitive and non-sensitive open-ended items. An additional ANOVA test will be performed to analyse between-group differences in the total word counts of users' responses to those non-sensitive items that ask about positive emotional events. To test the fourth hypothesis (H4), a mediation analysis will be employed to test whether, and to what extent, any covariation between group and the dependent variables can be attributed to perceived anthropomorphism.

Furthermore, on an exploratory basis, descriptive statistics will be generated and examined to detect any trends of ethnic differences in users' self-disclosure of health information within and between the experimental conditions. Specifically, within the digital human, chatbot, and online questionnaire conditions, the mean of each dependent variable will be compared across ethnic groups to examine any trends suggesting that members of different ethnic groups disclose differently to the same technology. In addition, the mean of each dependent variable for the same ethnic group will be compared across conditions, to look for any trends indicating differences in self-disclosure to the digital human, chatbot, and online questionnaire within the same ethnic group.
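The analyses above are planned in SPSS 27; purely as an illustration of the H1-style one-way ANOVA with a post-hoc comparison, a Python sketch using hypothetical data and assumed variable names is shown below.

```python
# Illustrative only: hypothetical data and variable names, not study data.
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# One row per participant; 'sensitive_score' stands in for the aggregated
# score on the sensitive close-ended items (primary outcome 1).
df = pd.DataFrame({
    "condition": ["questionnaire", "chatbot", "digital_human"] * 10,
    "sensitive_score": [12, 14, 9, 11, 13, 8, 10, 15, 7, 12,
                        13, 12, 10, 11, 14, 9, 12, 13, 8, 11,
                        10, 13, 9, 12, 14, 7, 11, 12, 8, 13],
})

groups = [g["sensitive_score"].to_numpy() for _, g in df.groupby("condition")]
f_stat, p_value = stats.f_oneway(*groups)  # between-group F test (H1)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")

# Post-hoc pairwise comparisons, run only if the F test is significant.
print(pairwise_tukeyhsd(df["sensitive_score"], df["condition"]))
```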

Recruitment
Recruitment status
Not yet recruiting
Date of first participant enrolment
Anticipated
Actual
Date of last participant enrolment
Anticipated
Actual
Date of last data collection
Anticipated
Actual
Sample size
Target
Accrual to date
Final
Recruitment outside Australia
Country [1] 24736 0
New Zealand
State/province [1] 24736 0
Auckland

Funding & Sponsors
Funding source category [1] 311319 0
University
Name [1] 311319 0
The University of Auckland
Country [1] 311319 0
New Zealand
Primary sponsor type
University
Name
The University of Auckland
Address
The University of Auckland
Private Bag 92019
Auckland 1142
Country
New Zealand
Secondary sponsor category [1] 312690 0
None
Name [1] 312690 0
Address [1] 312690 0
Country [1] 312690 0

Ethics approval
Ethics application status
Approved
Ethics committee name [1] 310821 0
Auckland Health Research Ethics Committee
Ethics committee address [1] 310821 0
Auckland Health Research Ethics Committee
The University of Auckland
Private Bag 92019
Auckland 1142
Ethics committee country [1] 310821 0
New Zealand
Date submitted for ethics approval [1] 310821 0
21/02/2022
Approval date [1] 310821 0
17/05/2022
Ethics approval number [1] 310821 0
AH23991

Summary
Brief summary
The current study aims to investigate whether and how increasing the human-likeness of a computer affects users' self-disclosure of health information in a clinical assessment context and, if so, what psychological process may facilitate such effects. Specifically, this study will compare users' self-disclosure behaviours towards an interactive, realistically humanlike digital human interviewer, an interactive but less humanlike chatbot interviewer, and a non-interactive online questionnaire task in the context of receiving a brief assessment of health-relevant information.
The primary hypothesis is that a computer's increasing human-likeness will have differential impacts on users' self-disclosure of health information depending on the sensitivity and valence of the information. In particular, a computer's increasing human-likeness will decrease users' willingness to disclose sensitive health information, manifested in a stronger socially desirable responding pattern and a preference for not providing an answer. On the other hand, a computer's increasing human-likeness may increase users' willingness to disclose non-sensitive health information, especially positive health information. As such, our first hypothesis is that a more humanlike digital human interviewer will elicit stronger socially desirable responses to questions asking for sensitive health information, compared with a less humanlike chatbot interviewer and, in turn, a least humanlike online questionnaire task (H1). Our second hypothesis is that a digital human interviewer will receive a higher proportion of declined responses (i.e., choosing “I do not want to answer”) to sensitive health items, compared with a chatbot interviewer and then an online questionnaire task (H2). Meanwhile, a digital human interviewer may increase users' self-disclosure of non-sensitive and positive health information, compared with a chatbot interviewer and then an online questionnaire task (H3). The secondary hypothesis is that the above effects, if any, will be mediated by variations in users' perceived anthropomorphism of the computers (H4). That is, users will perceive a stronger sense of “human presence” when interacting with a more humanlike digital human interviewer compared to a chatbot interviewer and an online questionnaire task. Consequently, users may modify their self-disclosure behaviours in the same direction as real human presence affects self-disclosure.
Trial website
Trial related presentations / publications
Public notes
The contact details of the principal investigator are as below.
Elizabeth Broadbent
Department of Psychological Medicine
The University of Auckland
Email: e.broadbent@auckland.ac.nz
Phone: (09) 3737599 Ext. 86756

Contacts
Principal investigator
Name 118982 0
Prof Elizabeth Broadbent (supervisor)
Address 118982 0
Department of Psychological Medicine
The University of Auckland, Faculty of Medical and Health Sciences
Building 507
85 Park Road
Grafton
Auckland 1023
New Zealand.
Country 118982 0
New Zealand
Phone 118982 0
+64 9 3737599
Fax 118982 0
Email 118982 0
e.broadbent@auckland.ac.nz
Contact person for public queries
Name 118983 0
Prof Elizabeth Broadbent (supervisor)
Address 118983 0
Department of Psychological Medicine
The University of Auckland, Faculty of Medical and Health Sciences
Building 507
85 Park Road
Grafton
Auckland 1023
New Zealand.
Country 118983 0
New Zealand
Phone 118983 0
+64 9 3737599
Fax 118983 0
Email 118983 0
e.broadbent@auckland.ac.nz
Contact person for scientific queries
Name 118984 0
Prof Elizabeth Broadbent (supervisor)
Address 118984 0
Department of Psychological Medicine
The University of Auckland, Faculty of Medical and Health Sciences
Building 507
85 Park Road
Grafton
Auckland 1023
New Zealand.
Country 118984 0
New Zealand
Phone 118984 0
+64 9 3737599
Fax 118984 0
Email 118984 0
e.broadbent@auckland.ac.nz

Data sharing statement
Will individual participant data (IPD) for this trial be available (including data dictionaries)?
No
No/undecided IPD sharing reason/comment
Neither ethics board approval nor informed consent from participants was obtained to share participant data publicly.


What supporting documents are/will be available?

Doc. No. | Type | Citation | Link | Email | Other Details | Attachment
15873 | Other | | | | The assessment procedure | 383981-(Uploaded-31-05-2022-14-33-05)-Study-related document.docx



Results publications and other study-related documents

Documents added manually
Current Study Results
No documents have been uploaded by study researchers.

Update to Study Results
Doc. No. | Type | Is Peer Reviewed? | DOI | Citations or Other Details | Attachment
3896 | Basic results | No | | | 383981-(Uploaded-22-05-2023-12-27-25)-Basic results summary.docx
4172 | Plain language summary | No | | Overall, this study found differences in health in... |

Documents added automatically
No additional documents have been identified.