Worldwide, more than 300 million people have depression and more than 250 million are affected by an anxiety disorder []. Population-wide studies in Germany have revealed that approximately 1 in 10 people meets the diagnostic criteria for depression, approximately 1 in 5 lives with an anxiety disorder, and 1 in 20 deals with chronic pain [] or insomnia []. Depression and anxiety are among the top 10 contributors to health loss, accounting for 7.5% and 4.5% of years lived with a disability, respectively [,]. Mental disorders result in high direct and indirect costs, estimated at more than €60 (US $63) billion per year in Germany alone [,].
Digital Mental Health Interventions May Reduce Symptom Burden
The efficacy of digital interventions has been shown for both the reduction of symptoms of common mental disorders and the improvement of quality of life []. Most digital interventions address single disorder categories and are structured similarly to disorder-specific treatment manuals []. These interventions consist of several sequential sessions or modules. Interventions targeting the same disorder tend to be similar in terms of their components and content.
Guided digital interventions often yield larger effects than self-guided interventions (eg, the studies by Koelen et al [], Lakhtakia and Torous [], Moshe et al [], and Schröder et al []), but a recent meta-analysis revealed that overall, the difference in effect sizes may not be very substantial [], at least concerning anxiety disorders. Besides within-intervention guidance, guidance-related aspects of the study design such as the conduct of clinical interviews [], use of automated reminders [], or treatment setting in which the intervention is used [] seem to be associated with greater effects. Nevertheless, although guided web-based interventions are more scalable than traditional psychotherapy, their scalability remains constrained because staff resources are required, even when the time allotted for guidance is small. Self-guided interventions, although associated with only small to medium effects on individuals, can reach a larger number of patients and may thus have a larger public health impact [].
Comorbidity and Transdiagnostic Interventions
Comorbidity among mental disorders is high and has been shown to be largely associated with common causal pathways in several large studies [,]. Therefore, contemporary models of psychopathology postulate dimensional spectra (eg, internalizing, thought disorder, and externalizing) comprising multiple mental health syndromes with co-occurring genetic, neurobiological, environmental, and behavioral indicators []. Many pharmacological and psychological treatments yield transdiagnostic effects on multiple mental disorders [-]. Therefore, recent treatment protocols for mental disorders, such as the Unified Protocol [] and the Common Elements Treatment Approach [], replace disorder- and symptom-specific interventions with interventions that address common causal factors and have been shown to be transdiagnostically effective.
Most people with mental illness in Germany receive care exclusively from primary care physicians; 83% of patients with affective disorders (F3, International Classification of Diseases, 10th Revision) and 91% of patients with neurotic, stress, and somatoform disorders (F4, International Classification of Diseases, 10th Revision) do not receive treatment from a mental health specialist []. Differentiating between different mental disorders can be a challenge for primary care providers, especially given the high rate of comorbidity. The presence of emotional problems is recognized in most cases during contact with a primary care provider, but an accurate diagnosis is made much less frequently [,]. Thus, low-threshold transdiagnostic interventions may be more suitable than disorder-specific interventions in a primary care setting.
Digital interventions with a transdiagnostic approach have so far been less well researched than disorder-specific interventions, but the evidence base is growing (eg, the studies by Newby et al [], Newby et al [], and Păsărelu et al []). Nevertheless, based on the findings discussed earlier, we expected a transdiagnostic digital intervention to affect a range of mental disorder symptoms, including anxiety and depression, as well as quality of life.
Digital Mental Health Interventions May Reduce Treatment Barriers
Timely treatment for mental disorders is impeded by structural barriers, such as limited availability and high cost, but within-person attitudinal barriers may constitute an even stronger obstacle for treatment seeking. Wanting to handle the problem on one’s own, low perceived need for care, stigma, low knowledge about mental health services, and fear about the act of help seeking or the source of help itself have been shown to be the largest treatment barriers by far [-]. These factors reduce the chances of timely intervention and increase the risk of long-term symptom deterioration and chronification []. Low-threshold digital interventions that involve no personal contact and can be used anonymously can counteract at least some of these internal barriers and simplify help seeking.
An underinvestigated research area concerns the reduction of within-person barriers through digital mental health interventions. Some older studies evaluating digital mental health interventions for depression and anxiety have shown a decrease in self-stigmatization [-]. Effects on help-seeking attitudes and actual help seeking have been detected in some randomized controlled trials (RCTs) [-]. These effects seem to be linked to changes in health literacy [,]. In addition, a recent review on digital interventions found an increase in self-management behavior to be an important mediator of treatment effects []. However, research on the effects of transdiagnostic interventions on attitudinal barriers, help seeking, and self-management skills is lacking. A major advantage of transdiagnostic digital interventions in this respect is their potential to be truly low threshold. Unlike disorder-specific interventions, transdiagnostic interventions can be applied before diagnosis and, in a first step, help users determine whether they have a mental health problem at all, what their problem is, and whether they may need help from a mental health professional. Help seeking can then be actively encouraged by providing information and correcting unhelpful and false assumptions about mental health care.
This Study
On the basis of the findings from previous research, we expected that the use of a self-guided transdiagnostic self-management app for mental health, in addition to care as usual (CAU), would lead to significant improvements in mental health literacy and variables that reflect patient empowerment, such as help seeking, reduced stigma, and self-management skills. If self-guided mental health apps have these effects, they could have a substantial public health impact at low cost if made available to people with mental health problems in addition to CAU. Furthermore, we aimed to explore whether such an intervention leads to a greater reduction in symptoms of common mental disorders and a stronger improvement in quality of life than CAU alone.
Thus, the aim of our study was to investigate whether the use of a self-guided transdiagnostic app for mental health is associated with improvements in mental health literacy and variables that reflect patient empowerment, such as help seeking, reduced stigma, and self-management skills. Furthermore, the intervention’s effects on symptoms of common mental disorders and quality of life were explored.
This trial was registered in the German Clinical Trials Register (DRKS00022531), and the local ethical committee of Freie Universität Berlin approved the protocol (AZ 039/2020).
Design
To examine the effects of a transdiagnostic mental health app (MindDoc), we conducted a single-center RCT with 3 assessments []. We assigned participants to 2 groups in a 1:1 ratio. The intervention group (IG) received immediate access to the MindDoc app in addition to care as usual (CAU); note, however, that we recruited only participants who were not receiving outpatient or inpatient psychotherapy at the start of the trial (see the Recruitment Strategy section). In the control condition, participants were not given any guidance or encouragement to modify their current care and were informed that they would have the option to receive access to the MindDoc app after the 6-month study period. Essentially, this created a waitlist control condition for the use of the MindDoc app. However, it is important to note that participants were allowed to use any treatments that were accessible to them during the trial.
Health literacy, patient empowerment, help-seeking attitudes, health service use, symptom distress, and quality of life were assessed before randomization (baseline assessment), 8 weeks after randomization (postintervention assessment), and 6 months after randomization (follow-up assessment).
The participants in the control group (CG) received access to the MindDoc app after completing their follow-up assessment.
Intervention
The users in the IG received immediate access to the MindDoc app. The MindDoc app is a self-guided transdiagnostic intervention designed for individuals who want to take care of their mental health. It can be used across the mental health care spectrum, including for (indicated) prevention, early recognition, treatment, and aftercare. The core features of the app encompass regular self-monitoring with automated feedback as well as psychological courses and exercises.
The self-monitoring feature consists of an adaptive system of daily multiple-choice questions based on the Hierarchical Taxonomy of Psychopathology (HiTOP) [] as well as regular mood ratings. On the basis of their entries in the self-monitoring feature, users receive regular automated feedback on relevant symptoms, problem areas, and in-app resources. Users also receive biweekly feedback on their overall mental health as well as encouragement to seek help depending on their health status.
The courses and exercises in the app provide information on common mental disorders and their treatment (psychoeducation) and teach self-management skills to support users in coping with symptoms and problems. Although some courses are disorder specific, most follow a transdiagnostic approach, considering the heterogeneity and high comorbidity in mental illness. The learning goals of the courses include, for example, identifying and gradually changing unhelpful thought patterns and basic assumptions, clarifying personal goals and values, promoting functional stress management behaviors, and fostering the ability to relax. All the courses and exercises are based on the fundamentals of cognitive behavioral therapy and its derivatives (eg, acceptance and commitment therapy and mindfulness-based stress reduction).
All the app’s content was developed by or under the supervision of licensed clinical psychological psychotherapists based on established approaches and guidelines for identifying and treating mental illness. The MindDoc app underwent no major changes in content or functioning during the intervention phase of the trial. Updates to the app during the research study were limited to bug fixes and performance improvements that would not have affected the therapeutic approach or usability of the app.
The app can send push notifications to a user’s phone every time a new question block is ready to be answered (3 times a day) and when automated feedback (insights) has been generated. Push notifications can be turned on and off by each user for question blocks, insights, or both. We did not monitor whether the trial participants used this feature. The notification settings in the RCT did not differ from those in routine application. There were no cointerventions, except for support by the study coordinator in installing the app.
The app contains detailed information on how to access mental health care. Participants who report a high symptom burden or functional impairment within the monitoring function of the app are prompted in the automated feedback to consult a health care professional. Furthermore, users are repeatedly reminded that study participation does not substitute for diagnosis, counseling, or treatment by a licensed physician or psychotherapist.
The MindDoc app is a commercial product with both free and premium (paid) features. The study participants had free access to all the features. A more detailed description of the app is provided in the . Descriptions of the app in this section and in correspond to the version of the app used in the research study and may not exactly apply to the currently available version of the app.
Recruitment Strategy
The participants were openly recruited via press releases and social media as well as health insurance member magazines and websites in Germany. Recruitment took place over a period of approximately 7 months (December 2020 to June 2021). Participation in the study was anonymous, but the participants were required to provide an (anonymous) email address through which they could be contacted.
Participants and Procedures
Assessment Procedure
All assessments related to the trial were carried out outside the app via web-based surveys (self-assessment) on a web-based platform (Unipark/EFS Survey; Questback GmbH).
After receiving detailed written information about the study procedures and data processing, participants provided electronic informed consent. Participants were screened according to the predefined inclusion and exclusion criteria (see the subsequent sections). Eligible individuals then received access to the baseline assessment. Those who completed baseline assessments were randomly assigned to either the IG or CG in a 1:1 ratio using an algorithm provided by the assessment platform (Unipark/EFS Survey). They were immediately informed about the result of the assignment on the assessment platform and via email.
The participants in the IG received access to the MindDoc app and were recommended to use it for at least 8 weeks, although they had full access to it for 6 months. To this end, they received individual codes that unlocked the content of the app after downloading the app from the app store. The participants of both groups received an email invitation to the postintervention and follow-up assessments 8 weeks and 6 months after the baseline assessment, respectively. Participants who did not complete the postintervention or follow-up assessment were reminded 1 week later.
The participants were not financially compensated for participating in the study. However, participants who had completed the postintervention and follow-up assessments took part in a monthly raffle, where they could win a universal €50 (US $52.66) voucher that can be redeemed in a number of web-based stores.
Inclusion Criteria
We included adults with clinically relevant symptoms of internalizing disorders, indicated by scoring above the cutoff for mild symptoms on one or more of the following scales: Patient Health Questionnaire–9 (PHQ-9) score >4 [], Generalized Anxiety Disorder–7 (GAD-7) score >4 [], Mini-Social Phobia Inventory score >6 [], Patient Health Questionnaire–15 (PHQ-15) score >4 [], Regensburg Insomnia Scale (RIS) score >12 [], binge eating or compensatory behaviors >once/wk, BMI <18.5 kg/m² or critical weight loss, and weight and shape concerns.
In addition, participants needed to have full legal capacity (self-disclosure), have access to a smartphone (iOS [Apple Inc] or Android [Google LLC]) and the internet, and live in Germany.
Exclusion Criteria
We excluded individuals with severe symptoms of internalizing disorders (PHQ-9 score >19, GAD-7 score >15, or PHQ-15 score >14) and severely underweight individuals (BMI <15 kg/m²). We also excluded individuals who reported acute suicidality and individuals who reported a history of bipolar disorder, psychotic disorder, or substance use disorder. Participants who met these exclusion criteria were provided with detailed information on treatment options.
To ensure that the effects we discovered in the trial were attributable to the use of the app and not to other specific and intensive treatments, we excluded individuals with current or planned outpatient psychotherapy or inpatient treatment for a mental disorder. However, participants were allowed to use or initiate any treatment during the study period.
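To make these eligibility rules concrete, the following is a minimal sketch of the screening logic implied by the reported cutoffs. It is a hypothetical reconstruction, not the study's screening code: the actual screening was administered via the web-based survey platform, and criteria such as critical weight loss, weight and shape concerns, suicidality, diagnostic history, and ongoing treatment are reduced to simple flags here.

```python
# Hypothetical sketch of the eligibility logic implied by the reported cutoffs.
# Field names and boolean flags are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Screening:
    phq9: int                       # depression sum score
    gad7: int                       # generalized anxiety sum score
    mini_spin: int                  # social phobia sum score
    phq15: int                      # somatic symptom sum score
    ris: int                        # insomnia sum score
    binge_per_week: float           # binge eating or compensatory behaviors per week
    bmi: float
    weight_shape_concern: bool
    acute_suicidality: bool
    excluded_history: bool          # bipolar, psychotic, or substance use disorder
    current_or_planned_therapy: bool


def is_eligible(s: Screening) -> bool:
    # Inclusion: above the cutoff for mild symptoms in at least one domain
    included = any([
        s.phq9 > 4, s.gad7 > 4, s.mini_spin > 6, s.phq15 > 4, s.ris > 12,
        s.binge_per_week > 1, s.bmi < 18.5, s.weight_shape_concern,
    ])
    # Exclusion: severe symptoms, severe underweight, risk, history, ongoing treatment
    excluded = any([
        s.phq9 > 19, s.gad7 > 15, s.phq15 > 14, s.bmi < 15,
        s.acute_suicidality, s.excluded_history, s.current_or_planned_therapy,
    ])
    return included and not excluded
```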
Outcomes and Measures
Primary Outcomes
We assessed 4 coprimary outcomes in the trial: mental health literacy, mental health–related patient empowerment and self-management skills (MHPSS), attitudes toward help seeking (after 8 weeks), and actual mental health service use (after 6 months).
Mental health literacy was assessed using the Mental Health Literacy Questionnaire, which is a 29-item scale with 4 dimensions (knowledge of mental health problems, erroneous beliefs or stereotypes, help-seeking and first-aid skills, and self-help strategies). The measure differentiates well between individuals with more experience with mental health and individuals with less experience with mental health and has good internal consistency (Cronbach α=.84) for the total score [].
We used the Assessment of Mental Health Related Patient Empowerment and Self-Management-Skills questionnaire, which was constructed based on a systematic review on self-management skills for depression [], a Delphi consensus study on self-help strategies for depression [], 2 studies on useful self-management skills for mood [] and anxiety [] disorders from the patient perspective, and a conceptual framework for patient choice and empowerment in northern European health systems []. The questionnaire consists of 10 items in a statement format assessing patient empowerment based on how much patients agree or disagree on a 5-point Likert scale, for example, “I know well about the treatment options for my disease,” and 18 items in a question format assessing the frequency of self-management skills on a 5-point Likert scale, for example, “In the last 8 weeks, how often have you engaged in activities that gave you a feeling of achievement?” The complete Assessment of Mental Health Related Patient Empowerment and Self-Management-Skills can be retrieved in .
Attitudes toward help seeking were assessed using the Inventory of Attitudes Toward Seeking Mental Health Services, which is a 24-item scale assessing 3 internally consistent within-person barriers to seeking mental health services: psychological openness, help-seeking propensity, and indifference to stigma. Internal consistency (Cronbach α=.87) and the validity of the assessment could be confirmed in separate samples [].
Actual seeking of outpatient psychotherapeutic or psychiatric treatment was assessed via 2 questions asking whether these services were used in the last 6 months.
Secondary Outcomes
The PHQ-9 is the depression module of the self-administered version of the Primary Care Evaluation of Mental Disorders diagnostic instrument for common mental disorders. It scores each of the 9 Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) diagnostic criteria from 0 (not at all) to 3 (nearly every day). The PHQ-9 is a reliable (Cronbach α=.89) and valid measure of depression severity []. Higher scores indicate a higher symptom load.
The GAD-7 is a 1-dimensional instrument designed to detect symptoms of generalized anxiety disorder as defined in the DSM-5. The item scores range from 0 (not at all) to 3 (nearly every day). The GAD-7 is a valid and efficient tool for screening for anxiety disorders and assessing their severity in clinical practice and research []. Higher scores indicate a higher symptom load.
The PHQ-15 is the module for assessing the severity of somatic symptoms of the self-administered version of the Primary Care Evaluation of Mental Disorders diagnostic instrument for common mental disorders. It comprises 15 somatic symptoms from the PHQ, with each symptom scored from 0 (“not bothered at all”) to 2 (“bothered a lot”). The PHQ-15 is a reliable (Cronbach α=.80) and valid screening tool for somatization []. Higher scores indicate a higher symptom load.
The RIS [] is a self-rating scale with 10 items for assessing the cognitive, emotional, and behavioral aspects of psychophysiological insomnia. It has good internal consistency with Cronbach α=.89 and distinguishes well between controls and patients with psychophysiological insomnia. Higher scores indicate a higher symptom load.
The Personality Inventory for DSM-5, Brief Form Plus is a 34-item short form of the Personality Inventory for DSM-5, which is compatible with the dimensional assessment of maladaptive personality expressions in the International Classification of Diseases, 11th Revision. The Operationalized Psychodynamic Diagnosis-Structure Questionnaire Short is a short 12-item measure for assessing the severity of personality dysfunction. Dimensional assessments of the severity and style of personality dysfunction according to the DSM-5 and the International Classification of Diseases, 11th Revision, are important predictors of treatment course, adherence, and response as well as of general psychopathology []. Both the Operationalized Psychodynamic Diagnosis-Structure Questionnaire Short (Cronbach α=.89) and the Personality Inventory for DSM-5, Brief Form Plus (average McDonald ω=0.81) are validated and reliable measures [-]. Higher scores indicate higher personality dysfunction.
Quality of life was assessed using the Assessment of Quality of Life-8 Dimensions, which is a 35-item self-assessment scale designed to evaluate health services that impact the psychosocial aspects of quality of life. It assesses 3 physical and 5 psychosocial domains of functioning. It has good reliability (Cronbach α=.96) and convergent and predictive validity []. Higher scores indicate a lower quality of life.
Secondary outcomes were symptoms of common mental disorders and quality of life 8 weeks and 6 months after baseline assessments.
To determine the overall burden of symptoms of common mental disorders, we calculated a composite score from the sum scores of the PHQ-9, GAD-7, PHQ-15, and RIS, divided by the respective scale span. The composite score calculated in this manner can take values between 0 and 1, with higher values indicating higher symptom burden.
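For illustration, a minimal sketch of one plausible reading of this composite score is given below, assuming that each rescaled sum score is averaged across the 4 instruments and assuming scale maxima of 27 (PHQ-9), 21 (GAD-7), 30 (PHQ-15), and 40 (RIS); the exact aggregation used in the trial may differ.

```python
# Sketch of one plausible reading of the composite symptom score: each sum
# score is divided by its scale span and the rescaled scores are averaged,
# yielding a value between 0 (no symptoms) and 1 (maximum on every scale).
SCALE_SPANS = {"phq9": 27, "gad7": 21, "phq15": 30, "ris": 40}  # assumed maxima


def composite_score(sum_scores: dict[str, float]) -> float:
    rescaled = [sum_scores[k] / SCALE_SPANS[k] for k in SCALE_SPANS]
    return sum(rescaled) / len(rescaled)


# Example: moderate depression, mild anxiety, mild somatic symptoms, mild insomnia
print(composite_score({"phq9": 12, "gad7": 6, "phq15": 8, "ris": 10}))
```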
All assessments were tested before fielding the trial; the web-based survey contained, on average, 12 items per page, and there was no adaptive testing. The questionnaires at the 3 assessment time points contained between 20 and 25 pages of questions, and all items were mandatory. We used no cookies or IP check but identified multiple entries from the same users through either multiple code redemptions on the same device or multiple participant code or email entries. We applied no weighting of the items, and there were no incomplete questionnaires, as all items were mandatory.
Use data were recorded directly in the MindDoc app. Data from the 2 sources (MindDoc app and study survey) were matched via a personalized download link, which users in the IG received after randomization. A day of use was recorded when a user actively engaged with the app on that day, such as answering questions or engaging with an exercise. An exercise was considered engaged with once it was opened.
Statistical Analyses
Assumptions for the statistical tests were checked as follows: normality was assessed through histograms, skewness, and the Kolmogorov-Smirnov test; sphericity was assessed through the Mauchly test; and the equality of variance-covariance matrices was investigated through the Box test and the Levene test.
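As a rough illustration of the univariate checks, the sketch below uses SciPy for skewness, the Kolmogorov-Smirnov test, and the Levene test on simulated placeholder scores; the Mauchly and Box tests are not included in SciPy and are omitted here.

```python
# Sketch of univariate assumption checks (normality, homogeneity of variance)
# on simulated placeholder outcome scores for two groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ig = rng.normal(10, 3, 200)   # placeholder scores, intervention group
cg = rng.normal(11, 3, 200)   # placeholder scores, control group

for label, x in (("IG", ig), ("CG", cg)):
    z = (x - x.mean()) / x.std(ddof=1)
    ks_stat, ks_p = stats.kstest(z, "norm")   # Kolmogorov-Smirnov against N(0, 1)
    print(label, "skewness:", stats.skew(x), "KS p:", ks_p)

lev_stat, lev_p = stats.levene(ig, cg)        # equality of variances across groups
print("Levene p:", lev_p)
```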
If participants entered assessments several times, only the first assessment was used in the consecutive analyses. Participants were excluded if they fulfilled the exclusion criteria or were missing data from their baseline assessments. Missing data from the postintervention and follow-up assessments of nonbinary primary outcomes (mental health literacy, help-seeking attitudes, and MHPSS) were imputed using baseline scores on symptom severity, mental health literacy, patient empowerment, help-seeking attitudes, quality of life, and severity of personality dysfunction and demographic information by applying an iterative Markov Chain Monte Carlo method based on the initial treatment assignment. We calculated imputations for 10, 50, 100, 150, 200, 300, 400, and 500 iterations. Then, we calculated the fraction of missing information (FMI) index for all multiply imputed data sets. The FMI ranges from 0 to 1 (with 1 meaning that 100% of the data necessary for the planned inferences are missing) and is a reliable indicator of the validity of inferences based on imputed data sets []. For the following analyses, we chose the number of imputations that yielded no further decline in the FMI compared with the previous ones. To further investigate the robustness of the findings and address potential bias due to nonrandom missing outcome observations, we calculated Random Forest Lee Bounds (RFLBs) for all nonbinary primary and secondary outcomes as a second procedure to account for missing data and potential nonrandom missingness [].
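The following is a minimal sketch of how the FMI can be obtained from multiply imputed data via Rubin's rules, using the group mean difference as the estimate of interest and the simple large-sample approximation of the FMI; the trial's imputation itself, an iterative Markov chain Monte Carlo procedure run in a statistical package, is abstracted away here as a list of completed outcome vectors.

```python
# Sketch: fraction of missing information (FMI) from m multiply imputed data
# sets via Rubin's rules, for the group mean difference at a follow-up point.
import numpy as np


def fmi_for_mean_difference(imputed_ig: list, imputed_cg: list) -> float:
    """imputed_ig / imputed_cg: one completed outcome array per imputation."""
    m = len(imputed_ig)
    estimates, variances = [], []
    for ig, cg in zip(imputed_ig, imputed_cg):
        estimates.append(np.mean(ig) - np.mean(cg))
        variances.append(np.var(ig, ddof=1) / len(ig) + np.var(cg, ddof=1) / len(cg))
    w = np.mean(variances)                 # within-imputation variance
    b = np.var(estimates, ddof=1)          # between-imputation variance
    t = w + (1 + 1 / m) * b                # total variance (Rubin's rules)
    return (1 + 1 / m) * b / t             # large-sample FMI approximation
```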
In addition, a per-protocol (PP) analysis was performed excluding participants who made the following protocol violations: (1) failure to download the app and complete the onboarding process, (2) use of the app before the randomization date, (3) reporting of the use of the MindDoc app during the intervention and follow-up periods in the waitlist condition, and (4) noncompletion of postintervention or follow-up assessment.
To further investigate the robustness of the results and avoid potential bias due to nonrandom missing outcome observations, RFLBs were determined based on the PP sample for all nonbinary primary and secondary outcomes. In this procedure, a random forest (RF) is first used to identify baseline variables that are related to missing data at later measurement time points. On this basis, upper and lower bounds are calculated for the mean differences between the IG and CG on the respective outcome. If the rate of missing values differs across the study arms, the RFLB procedure trims the outcome distribution of the group with the lower dropout rate, removing observations from the lower (or upper) end of the distribution using RFs trained on baseline variables that are predictive of dropout. This yields an upper and a lower bound for the treatment effect that are adjusted for potentially nonrandom missing outcome observations. Upper and lower bounds that do not include 0 indicate that the measured difference between the groups persists, that is, is robust, even under the assumption of systematic differences in dropouts (missing not at random) between the IG and CG. Baseline variables identified as predictive of dropout using the RFLB procedure were further investigated for systematic differences between participants who dropped out at the postintervention assessment point and those who did not.
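As a didactic reduction of the trimming idea behind the RFLB procedure, the sketch below computes classic Lee bounds for a mean difference; the random forest step, which uses baseline predictors of dropout to refine the trimming, is only indicated in the comments and not implemented here.

```python
# Simplified sketch of Lee bounds for the IG-minus-CG mean difference: the arm
# with the larger share of observed outcomes is trimmed from the top (lower
# bound) or bottom (upper bound). In the RFLB procedure, a random forest
# trained on baseline variables predictive of dropout additionally informs the
# trimming; that step is omitted in this didactic reduction.
import numpy as np


def lee_bounds(y_ig, y_cg, retained_ig: float, retained_cg: float):
    """y_*: observed outcomes; retained_*: share of randomized participants
    with observed outcomes in each arm (between 0 and 1)."""
    if retained_ig > retained_cg:
        p = 1 - retained_cg / retained_ig          # excess retention in the IG
        k = int(round(p * len(y_ig)))
        y_sorted = np.sort(y_ig)
        lower = y_sorted[: len(y_ig) - k].mean() - np.mean(y_cg)   # trim top of IG
        upper = y_sorted[k:].mean() - np.mean(y_cg)                # trim bottom of IG
    else:
        p = 1 - retained_ig / retained_cg          # excess retention in the CG
        k = int(round(p * len(y_cg)))
        y_sorted = np.sort(y_cg)
        lower = np.mean(y_ig) - y_sorted[k:].mean()                # trim bottom of CG
        upper = np.mean(y_ig) - y_sorted[: len(y_cg) - k].mean()   # trim top of CG
    return lower, upper
```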
To compare the intervention effects on all non–count-based primary and secondary outcomes, we applied analysis of covariance (ANCOVA) between the groups at the posttreatment and follow-up time points, adjusting for baseline scores, on both the multiply imputed intention-to-treat (ITT) data set and the PP data set. Differences in mental health–related health service use between the IG and CG were assessed using chi-square tests for the available data and the PP sample at follow-up. Multiple imputation on zero-inflated count data, such as physician visits, yields unreliable estimates [], especially if missing data rates are high. Concerns of multiple testing error for the primary outcomes were addressed through Bonferroni correction. Differences in count-based measures of health service use between the groups were assessed using chi-square tests in the PP sample.
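A minimal sketch of the baseline-adjusted ANCOVA as a linear model is shown below; in the trial, this model was fit within each imputed data set and the F test results were pooled, whereas the sketch fits a single data set with assumed column names.

```python
# Sketch of a baseline-adjusted ANCOVA as a linear model: the outcome at a
# follow-up time point is regressed on treatment group and the baseline score.
import pandas as pd
import statsmodels.formula.api as smf


def ancova_group_effect(df: pd.DataFrame):
    """df columns (assumed names): outcome_post, outcome_baseline, and group
    coded as the strings "CG" and "IG" (CG is the reference level)."""
    model = smf.ols("outcome_post ~ C(group) + outcome_baseline", data=df).fit()
    term = "C(group)[T.IG]"                      # adjusted IG-vs-CG difference
    return model.params[term], model.pvalues[term]
```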
To investigate the predictors of study dropout and adverse events, such as deterioration, baseline variables that were more predictive than a random variable in RF models were identified and investigated using 2-tailed t tests.
Out of the 4057 individuals who provided informed consent to participate in the study, 1045 (25.76%) were randomized (). Study dropout rates in the IG and CG were 41.9% (219/523) and 37.9% (198/522) at the postintervention assessment and increased to 59.7% (312/523) and 41.2% (215/522) at the follow-up assessment, respectively. Assessment data from participants without protocol violations (PP sample) were available for 48.4% (253/523) of the participants in the IG and 57.7% (301/522) of the participants in the CG at the postintervention assessment, decreasing to 32.5% (170/523) and 54% (282/522), respectively, at the follow-up assessment.
The MindDoc app was available from the Apple App Store and Google Play Store during the study period. To decrease the risk of app use in the CG, we did not disclose which app was the subject of the trial until after randomization (IG) or follow-up (CG). Nevertheless, protocol violations related to app use did occur in the sample: of the 1045 participants, 40 (3.8%; IG: 30/523, 5.7%; CG: 10/522, 1.9%) used the app before randomization. This information was obtained based on self-report or the use logs of the app manufacturer. An additional 24 (4.6%) participants in the CG used the app during the study period. In the IG, 102 (19.5%) participants did not download the app or complete the app onboarding. The postintervention or follow-up assessment was not completed by 183 (35%) participants in the IG and 131 (25.1%) participants in the CG. Of the 1045 participants, 10 (1%) completed assessments multiple times; for these participants, only the first assessment was used in further analyses.
Sample Characteristics
Table 1 presents the baseline characteristics of the participants. The mean age of the overall sample was 38.3 (SD 11.19; range 18-77) years. Most participants (769/1045, 73.59%) were female, and most (865/1045, 82.78%) had completed upper secondary education (“Abitur,” European Qualifications Framework level 4) or higher education.
Although current psychotherapeutic treatment was an exclusion criterion for the trial, almost 2 out of 3 (636/1045, 60.86%) participants reported previous psychiatric or psychotherapeutic treatment. The symptom burden was evenly distributed between mild and moderate in most of the individual symptom domains, and most participants (924/1045, 88.42%) reported elevated symptoms in ≥3 domains.
Table 1. Participant characteristics (full sample: N=1045; intervention: n=523; control: n=522).
aPHQ-9: Patient Health Questionnaire–9 (depression module).
bGAD-7: Generalized Anxiety Disorder–7 scale.
cPHQ-15: Patient Health Questionnaire–15 (somatic symptom module).
dRIS: Regensburg Insomnia Scale.
App Use
The participants in the IG were recommended to use the MindDoc app for at least 8 weeks, but they were given unlimited access to the app for 6 months. Table 2 shows the use metrics of the participants in the IG; the number of active users per week during the intervention period is also reported. During this 8-week period, they used the app, on average, on 2 out of 3 days, and during the 6-month period, they used it on 4 out of 10 days. Although engagement with the questions continued well after the 8-week period, engagement with the courses and exercises subsided over time.
Table 2. App use metrics.
The results of the efficacy analyses with respect to the 4 coprimary outcomes 8 weeks after baseline assessments are summarized in Table 3.
Intervention effects on the improvement in MHPSS could be confirmed 8 weeks after baseline assessments. ANCOVAs including baseline assessments as predictors showed significant (P<.001) results in both the multiply imputed ITT (100 imputations) and the PP analyses. Effect sizes for the between-group comparison 8 weeks after the baseline assessment were Cohen d=0.29 in the ITT analyses and Cohen d=0.28 in the PP analyses.
Mental health literacy in the IG did not significantly exceed that in the CG 8 weeks after baseline assessments. Positive help-seeking attitudes in the IG did not significantly exceed those in the CG 8 weeks after baseline assessments. The proportion of participants who actually sought outpatient psychotherapeutic or psychiatric treatment was 35.5% (75/211; available data) or 33.5% (57/170; PP sample) in the IG and 25.7% (79/307; available data) or 25.5% (72/282; PP sample) in the CG. Results of between-group chi-square tests were χ²₁=5.30 (P=.02) using the available data and χ²₁=2.95 (P=.09) using only data from the PP sample, indicating a trend toward higher outpatient treatment seeking in the IG; however, the result was not significant in the PP sample.
Of the 4 coprimary end points, only the between-group difference in MHPSS yielded a P value below the Bonferroni-corrected α value of .013.
Table 3. Results of analyses of covariance (ANCOVAs) and Cohen d for the nonbinary coprimary outcomes (intention-to-treat [ITT] and per-protocol [PP] analyses) 8 weeks after baseline assessmenta.
aThe F test results were pooled using the D2 statistic [].
bIG: intervention group.
cCG: control group.
dRFLB: Random Forest Lee Bound.
eThese are lower and upper bounds of the Random Forest Lee Bound procedure.
fMHLq: Mental Health Literacy Questionnaire.
gAMHPSS: Assessment of Mental Health Related Patient Empowerment and Self-Management-Skills.
hIASMHS: Inventory of Attitudes Toward Seeking Mental Health Services.
Secondary Outcomes and Other Measures
For all secondary outcomes and other measures, including the mental health composite score and the measures for depression, anxiety, somatization, insomnia, and quality of life, the results showed significant between-group effects at the postintervention assessment point, controlling for pretreatment scores in both ITT and PP analyses (Table 4). Intervention effects ranged from Cohen d=0.19 (Cohen d=0.23 in the PP sample) for quality of life to Cohen d=0.34 (Cohen d=0.39 in the PP sample) for depression.
Table 4. Results of analyses of covariance (ANCOVAs) and Cohen d for the secondary and additional outcomes (intention-to-treat [ITT] and per-protocol [PP] analyses) at the postintervention assessment pointa.