Towards Leveraging AI-based Moderation to Address Emergent Harassment in Social Virtual Reality

Overview:
Extensive HCI research has investigated how to prevent and mitigate harassment in virtual spaces, particularly by leveraging human-based and Artificial Intelligence (AI)-based moderation. However, social Virtual Reality (VR) constitutes a novel social space that faces both new harassment challenges and a lack of consensus on how moderation should be approached to address them.
Roles:
Conceptualized the framing and research questions; collected and analyzed the data; wrote the manuscript

Publication:
Kelsea Schulenberg, Lingyuan Li, Guo Freeman, Samaneh Zamanifard, & Nathan McNeese. (2023). Towards Leveraging AI-based Moderation to Address Emergent Harassment in Social Virtual Reality. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI '23). Association for Computing Machinery, New York, NY, USA, Article 514, 1–17. (Acceptance rate: 28.39%) 
Research Questions
RQ1: What are the perceived opportunities and limitations for AI-based moderation to address emergent harassment in social VR, especially in comparison to traditional human-based moderation?
RQ2: How can we design future AI moderators to enhance such opportunities and address such limitations to better prevent emergent harassment in social VR?
Method
39 in-depth semi-structured interviews
Qualitative analysis: Grounded Theory
Results
RQ1
① AI-based moderation helps make consistent judgments regarding harassment in social VR but can show interpretation limitations if designed without proper consideration.
② AI-based moderation effectively manages social VR harassment in real time and at a large scale but still faces technical limitations in addressing new forms of harassment.
③ AI-based moderation overcomes potential subjective biases of individual human moderators but may introduce new equality limitations.
RQ2
① User-human-AI collaboration as a comprehensive approach for improving AI-based moderation to address social VR harassment
② Leveraging source code transparency and user-controlled creative customization of AI moderators to address AI's equality limitations
Contributions
First, we offer the first empirical investigation into how social VR users view AI-based moderation as having unique advantages and limitations for mitigating new forms of harassment, and how they envision the design of AI-based moderation systems, especially in combination with human-based and/or community-based moderation, to provide a sense of comfort and safety tailored to individual needs.
Second, using social VR as a unique online context, we expand the rapidly evolving body of literature on content moderation and AI by pointing to AI's new and envisioned roles in innovating traditional moderation mechanisms, while highlighting the risk that AI may also create new and potentially unfair power dynamics in social VR that must be addressed.
Lastly, grounded in these insights, we propose three vital principles to inform the design of future AI-based moderation that incorporates user-human-AI collaboration, toward safer and more inclusive online social spaces and interaction dynamics.