STIR Lab Has Five Papers Accepted at CHI 2024

Congratulations to the STIR Lab researchers and their collaborators on getting five full papers conditionally accepted at CHI 2024. The papers’ titles, abstracts, and authors are as follows:

“I’m gonna KMS”: From Imminent Risk to Youth Joking about Suicide and Self-Harm via Social Media

Authors:
Naima Samreen Ali, Sarvech Qadir, Ashwaq Alsoubai, Dr. Munmun De Choudhury, Dr. Afsaneh Razi, Dr. Pamela J. Wisniewski

Abstract:
Recent increases in self-harm and suicide rates among youth have coincided with prevalent social media use, making these sensitive topics of critical importance to the HCI research community. We analyzed 1,214 direct message conversations (DMs) from 151 young Instagram users (ages 13–21) who engaged in private conversations using self-harm and suicide-related language. We found that youth discussed their personal experiences, including imminent thoughts of suicide and/or self-harm, as well as their past attempts and recovery. They gossiped about others, including complaining about triggering content and coercive threats of self-harm and suicide, but also tried to intervene when a friend was in danger. Most of the conversations involved suicide or self-harm language that did not indicate intent to harm, but instead used hyperbolic language or humor. Our results shed light on youth perceptions, norms, and experiences of self-harm and suicide to inform future efforts toward risk detection and prevention.

Towards Digital Independence: Identifying the Tensions between Autistic Young Adults and Their Support Network When Mediating Social Media

Authors:
Spring Cullen, Elizabeth S Johnson, Dr. Pamela J. Wisniewski, Dr. Xinru Page

Abstract:
We conducted an ethnographically informed study observing 28 participants (9 autistic young adults, or "YAs," in need of substantial daily support; 6 parents; and 13 support staff) to understand how autistic YAs self-regulate and receive mediation on social media. We found that autistic YAs relied on black-and-white boundary rules and struggled with impulse control; therefore, they coped by asking their support network to help them deal with negative social experiences. Their support networks responded by providing informal advice, in-the-moment instruction, and formal education, but often resorted to monitoring and restrictive mediation when more proactive approaches were ineffective. Overall, we saw boundary tensions arise between autistic YAs and their support networks as they struggled to find the right balance between providing oversight and promoting autonomy. This work contributes to the critical disability literature by revealing the benefits and tensions of allyship in the context of helping young autistic adults navigate social media.


Systemization of Knowledge (SoK): Creating a Research Agenda for Human-Centered Real-Time Risk Detection on Social Media Platforms

Authors:
Ashwaq Alsoubai, Dr. Jinkyung Park, Sarvech Qadir, Dr. Gianluca Stringhini, Dr. Afsaneh Razi, Dr. Pamela J. Wisniewski

Abstract:
Accurate real-time risk identification is vital to protecting social media users from online harm, which has driven research toward advancements in machine learning (ML). While strides have been made on the computational facets of algorithms for real-time risk detection, such research has not yet evaluated these advancements through a human-centered lens. To this end, we conducted a systematic literature review of 53 peer-reviewed articles on real-time risk detection on social media. "Real-time" detection was mainly operationalized as "early" detection after the fact, based on pre-defined chunks of data, and evaluated using standard performance metrics such as timeliness. We identified several human-centered opportunities for advancing current algorithms, such as integrating human insight into feature selection, improving algorithms by accounting for human behavior, and incorporating human evaluations. This work serves as a critical call to action for the HCI and ML communities to work together to protect social media users before, during, and after exposure to risks.

Examining the Unique Online Risk Experiences and Mental Health Outcomes of LGBTQ+ versus Heterosexual Youth

Authors:
Tangila Islam, Mamtaj Akter, Joshua Anderson, Dr. Mary Jean Amon, Dr. Pamela J. Wisniewski

Abstract:
We collected and analyzed Instagram direct messages (DMs) from 173 youth aged 13–21 (including 86 LGBTQ+ youth). We paired youth's risk-flagged social media trace data with their self-reported mental health outcomes to examine how the online experiences of LGBTQ+ youth differed from those of their heterosexual counterparts. We found that LGBTQ+ youth experienced significantly more high-risk online interactions than heterosexual youth. LGBTQ+ youth reported poorer overall mental health, with online harassment specifically amplifying Self-Harm and Injury. Unlike for heterosexual youth, LGBTQ+ youth's mental well-being was positively linked to sexual messages. Qualitatively, we found that most of the risk-flagged messages of LGBTQ+ youth were sexually motivated; a silver lining, however, was that they sought support for their sexual identity from peers on the platform. The study highlights the importance of tailored online safety and inclusive design for LGBTQ+ youth, with implications for the CHI community in fostering supportive online environments.

Tricky vs. Transparent: Towards an Ecologically Valid and Safe Approach for Evaluating Online Safety Nudges for Teens

Authors:
Zainab Agha, Dr. Jinkyung Park, Ruyuan Wan, Naima Samreen Ali, Yiwei Wang, Dr. Dominic DiFranzo, Dr. Karla Badillo-Urquiola, Dr. Pamela J. Wisniewski

Abstract:
HCI research has been at the forefront of designing interventions for protecting teens online; yet, how can we test and evaluate these solutions without endangering the youth we aim to protect? Toward this goal, we conducted focus groups with 20 teens to inform the design of a social media simulation platform and a study for evaluating online safety nudges co-designed with teens. Participants evaluated risk scenarios, personas, platform features, and our research design to provide insight into the ecological validity of these artifacts. Teens expected risk scenarios to be subtle and tricky, yet high enough in risk to be believable. They iterated on the nudges to prioritize risk prevention without reducing autonomy, risk coping, and community accountability. For the simulation, teens recommended using transparency with some deceit to balance realism and respect for participants. Our meta-level research provides a teen-centered action plan for evaluating online safety interventions safely and effectively.
