AI-Wrapped: Participatory, Privacy-Preserving Measurement of Longitudinal LLM Use In-the-Wild

2026-02-20

Human-Computer Interaction
AI summary

The authors developed AI-Wrapped, a system to collect real-life data on how people use large language models (LLMs) while giving users a personalized report about their own usage. They tested it with 82 adults who shared many conversations, revealing that people use LLMs for tasks like work, creativity, and personal reflection. The study found some users might rely too much on these models or overly refine their interactions. Despite efforts to protect privacy, users were still cautious about sharing data, showing the need for trust and clear design in research setups.

large language models, alignment research, naturalistic interaction data, privacy, user behavior, data collection, reflective use, instrumental use, data retention, trust in AI
Authors
Cathy Mengying Fang, Sheer Karny, Chayapatr Archiwaranguprok, Yasith Samaradivakara, Pat Pataranutaporn, Pattie Maes
Abstract
Alignment research on large language models (LLMs) increasingly depends on understanding how these systems are used in everyday contexts, yet naturalistic interaction data is difficult to access due to privacy constraints and platform control. We present AI-Wrapped, a prototype workflow for collecting naturalistic LLM usage data while providing participants with an immediate "wrapped"-style report on their usage statistics, top topics, and safety-relevant behavioral patterns. We report findings from an initial deployment with 82 U.S.-based adults across 48,495 conversations from their 2025 histories. Participants used LLMs for both instrumental and reflective purposes, including creative work, professional tasks, and emotional or existential themes. Some usage patterns were consistent with potential over-reliance or perfectionistic refinement, while heavier users showed comparatively more reflective exchanges than primarily transactional ones. Methodologically, even with zero data retention and PII removal, participants may remain hesitant to share chat data due to perceived privacy and judgment risks, underscoring the importance of trust, agency, and transparent design when building measurement infrastructure for alignment research.
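The abstract mentions PII removal as one of the privacy safeguards applied to participants' chat histories. The paper's actual redaction pipeline is not described here, so the following is only a hypothetical minimal sketch of the idea: scrubbing obvious identifiers (emails and U.S.-style phone numbers) from transcript text with placeholder tokens before any analysis. The pattern set and token names are illustrative assumptions, not the authors' method.

```python
import re

# Hypothetical regex patterns for two common PII types; a real pipeline
# would cover many more categories (names, addresses, account numbers, ...).
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[PHONE]": re.compile(r"\b(?:\+?1[-. ]?)?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace matched email addresses and phone numbers with placeholder tokens."""
    for token, pattern in PATTERNS.items():
        text = pattern.sub(token, text)
    return text

# Example: a chat line is scrubbed before entering the analysis pipeline.
print(redact_pii("mail me at a.b@example.com or call 555-123-4567"))
```

Regex-only redaction is brittle (it misses names and free-form identifiers), which is one reason measurement systems like the one described also rely on zero data retention rather than redaction alone.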