Educators, mental health professionals, juvenile justice officers, and child welfare caseworkers, who often see first-hand the trials faced by vulnerable youth and are charged with their protection, see value in using artificial intelligence as an early risk-detection tool for online safety.
But they have concerns about feasibility: a lack of resources, limited access to the necessary social media data and context, and the risk of violating the trust relationships they build with youth over time.
As part of the National Science Foundation I-Corps program, a team of researchers led by Vanderbilt University Computer Science Associate Professor Pamela J. Wisniewski, Flowers Family Fellow in Engineering, conducted interviews with 37 social service providers (SSPs) across the United States who work with underprivileged youth to determine which online risks most concern them and whether they see value in AI as a solution for automated online risk detection. The respondents included children, youth and family services workers, mental health therapists, teachers, juvenile justice officers, an LGBTQ+ advocate, a government consultant, and police officers.
Online sexual risks, like sexual grooming and abuse, and cyberbullying were the top concerns, especially when these experiences crossed the boundary between digital and physical worlds. SSPs say they rely heavily on self-reporting to know whether and when online risks occur, which requires building a trusting relationship. Otherwise, they become aware only after a formal investigation has been launched.
While child welfare agencies already use algorithmic decision-support systems to assess offline risk so caseworkers can support the needs of children placed in care, this study is the first to address using AI-based risk detection to help SSPs identify and mitigate the online risk experiences of underprivileged youth.
“What we found, and what was impactful, is that SSPs don’t want to use technology as surveillance or to crack down on youngsters; they want it to help them start conversations. There is little interest in a solution that censors or sends an alert to legal authorities,” said Xavier V. Caddle, a graduate student on Wisniewski’s research team. “They want a nudge or a tidbit in order to ask, ‘Did something happen at school today? Someone sent this message, did it hurt you? Did it offend you?’”
The study offers detailed responses from the distinct types of SSPs indicating that risk-detection technology needs to account for differences in end users’ views, which would affect model design, Wisniewski said. “AI can over-flag. Kids cuss, so using the F-word becomes ‘noise.’” SSPs prefer a tool that prioritizes and filters risks such as sexually risky behavior and cyberbullying while also taking into account the differences in SSPs’ duties.
For example, judicial system users need views that support investigation and incident response; they care about detecting and preventing illegal behavior. Educators and child welfare officers need a more day-to-day view of the experiences of specific teens. Clinicians, therapists, and mental health practitioners mainly want assessments they can correlate with their established means of evaluating patients to identify factors that indicate poor mental health.
“There is an interest among SSPs in online risk detection technology because they rely predominantly on self-disclosure and tip-offs, and they view it as useful for starting conversations, but not for surveilling and reporting on kids in their care,” Wisniewski said. “It’s clear that any automated risk detection system for SSPs should be designed and deployed with caution.”
The study’s findings were reported in Proceedings of the ACM on Human-Computer Interaction. The co-authors of the paper, “Duty to Respond: The Challenges Social Service Providers Face When Charged with Keeping Youth Safe Online,” are University of Central Florida graduate students Xavier V. Caddle, Nurun Naher, and Zachary P. Miller; and University of Notre Dame Assistant Professor Karla Badillo-Urquiola.
This research was supported by the U.S. National Science Foundation. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the U.S. National Science Foundation.
Contact Brenda Ellis, Vanderbilt University School of Engineering