From Research Question to Scientific Workflow: Leveraging Agentic AI for Science Automation
2026-04-23 • Artificial Intelligence
AI summary
The authors created a system that helps scientists turn their research questions, written in plain language, into detailed workflows that computers can run automatically. Their design has three parts: an AI (LLM) that understands the question's meaning, tools that build exact workflows from that meaning, and documents called 'Skills' written by experts to guide this process. This setup confines the AI's guesswork to understanding the question, making the final workflow reliable and consistent. They tested the system on a genetics project and showed it improved accuracy and reduced data transfer, while keeping costs and delays very low.
scientific workflow systems, large language models (LLM), workflow DAG, semantic translation, Kubernetes, deterministic generation, agentic architecture, population genetics, workflow management system (WMS), data transfer optimization
Authors
Bartosz Balis, Michal Orzechowski, Piotr Kica, Michal Dygas, Michal Kuszewski
Abstract
Scientific workflow systems automate execution (scheduling, fault tolerance, resource management) but not the semantic translation that precedes it. Scientists still manually convert research questions into workflow specifications, a task requiring both domain knowledge and infrastructure expertise. We propose an agentic architecture that closes this gap through three layers: an LLM interprets natural language into structured intents (semantic layer); validated generators produce reproducible workflow DAGs (deterministic layer); and domain experts author "Skills": markdown documents encoding vocabulary mappings, parameter constraints, and optimization strategies (knowledge layer). This decomposition confines LLM non-determinism to intent extraction: identical intents always yield identical workflows. We implement and evaluate the architecture on the 1000 Genomes population genetics workflow and Hyperflow WMS running on Kubernetes. In an ablation study on 150 queries, Skills raise full-match intent accuracy from 44% to 83%; skill-driven deferred workflow generation reduces data transfer by 92%; and the end-to-end pipeline completes queries on Kubernetes with LLM overhead below 15 seconds and cost under $0.001 per query.
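The key property claimed in the abstract, that LLM non-determinism is confined to intent extraction so identical intents always yield identical workflow DAGs, can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: the `Intent` fields, task names, and commands are invented for the example, and the real system targets Hyperflow workflow specifications.

```python
from dataclasses import dataclass

# Semantic layer output: a structured intent. In the real system an LLM
# produces this from a natural-language research question; here we just
# construct it directly. All field names are illustrative.
@dataclass(frozen=True)
class Intent:
    analysis: str        # e.g. "allele_frequency"
    population: str      # e.g. "EUR"
    chromosomes: tuple   # e.g. (21, 22)

def generate_dag(intent: Intent) -> dict:
    """Deterministic layer: a validated generator that maps an intent to
    a workflow DAG. No randomness, no LLM calls, so the same intent
    always produces the same DAG."""
    tasks = {}
    for chrom in intent.chromosomes:
        name = f"{intent.analysis}_chr{chrom}"
        tasks[name] = {
            "command": f"analyze --pop {intent.population} --chr {chrom}",
            "deps": [],
        }
    # A final task depends on every per-chromosome task; sorting keeps
    # the dependency list order reproducible as well.
    tasks["merge_results"] = {
        "command": "merge",
        "deps": sorted(tasks),
    }
    return tasks

intent = Intent("allele_frequency", "EUR", (21, 22))
assert generate_dag(intent) == generate_dag(intent)  # identical intents, identical DAGs
```

Under this split, only the text-to-`Intent` step needs probabilistic interpretation; everything downstream is ordinary, testable code, which is what makes the generated workflows reproducible.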