Understanding Usage and Engagement in AI-Powered Scientific Research Tools: The Asta Interaction Dataset

2026-02-26

Human-Computer Interaction · Artificial Intelligence · Information Retrieval
AI summary

The authors studied how researchers use two AI tools for finding scientific papers and answering scientific questions, analyzing over 200,000 user interactions. They found that people ask longer, more detailed questions than in traditional search and treat the AI like a research teammate, delegating tasks such as drafting text and exploring ideas. Users often revisit the AI's answers and cited references in flexible, non-linear ways. With experience, users ask more targeted questions and check sources more carefully, though simple keyword searches persist. The authors release the anonymized data and a new way to classify questions to help improve future AI research assistants.

Keywords
AI-powered research tools, Large Language Models (LLMs), Retrieval-augmented generation, Query patterns, User engagement, Scientific question answering, Literature discovery, Interaction dataset, Research workflows, Query intent taxonomy
Authors
Dany Haddad, Dan Bareket, Joseph Chee Chang, Jay DeYoung, Jena D. Hwang, Uri Katz, Mark Polak, Sangho Suh, Harshit Surana, Aryeh Tiktinsky, Shriya Atmakuri, Jonathan Bragg, Mike D'Arcy, Sergey Feldman, Amal Hassan-Ali, Rubén Lozano, Bodhisattwa Prasad Majumder, Charles McGrady, Amanpreet Singh, Brooke Vlahos, Yoav Goldberg, Doug Downey
Abstract
AI-powered scientific research tools are rapidly being integrated into research workflows, yet the field lacks a clear lens into how researchers use these systems in real-world settings. We present and analyze the Asta Interaction Dataset, a large-scale resource comprising over 200,000 user queries and interaction logs from two deployed tools (a literature discovery interface and a scientific question-answering interface) within an LLM-powered retrieval-augmented generation platform. Using this dataset, we characterize query patterns, engagement behaviors, and how usage evolves with experience. We find that users submit longer and more complex queries than in traditional search, and treat the system as a collaborative research partner, delegating tasks such as drafting content and identifying research gaps. Users treat generated responses as persistent artifacts, revisiting and navigating among outputs and cited evidence in non-linear ways. With experience, users issue more targeted queries and engage more deeply with supporting citations, although keyword-style queries persist even among experienced users. We release the anonymized dataset and analysis with a new query intent taxonomy to inform future designs of real-world AI research assistants and to support realistic evaluation.