AI-for-Science Low-code Platform with Bayesian Adversarial Multi-Agent Framework
2026-03-03 • Artificial Intelligence
AI summary
The authors created a system where three AI agents work together to write and check scientific code reliably. One agent breaks down tasks and creates tests, another writes the code, and the third checks the code's quality. They use a special method that helps these agents improve by challenging each other and updating their strategies based on feedback. This approach makes the code writing process more dependable and helps people without coding skills turn their ideas into technical plans. Tests show the system works well, especially on tasks in Earth Science, doing better than other AI models.
Large Language Models, Bayesian Framework, Multi-agent System, Code Generation, Adversarial Loop, Functional Correctness, Static Analysis, Low-code Platform, AI for Science (AI4S), Human-AI Collaboration
Authors
Zihang Zeng, Jiaquan Zhang, Pengze Li, Yuan Qi, Xi Chen
Abstract
Large Language Models (LLMs) show potential for automating scientific code generation but face challenges in reliability, error propagation in multi-agent workflows, and evaluation in domains with ill-defined success metrics. We present a Bayesian adversarial multi-agent framework designed for AI for Science (AI4S) tasks, delivered as a Low-code Platform (LCP). Three LLM-based agents are coordinated under the Bayesian framework: a Task Manager that structures user inputs into actionable plans and adaptive test cases, a Code Generator that produces candidate solutions, and an Evaluator that provides comprehensive feedback. The framework employs an adversarial loop in which the Task Manager iteratively refines test cases to challenge the Code Generator, while prompt distributions are dynamically updated via Bayesian principles that integrate three code quality metrics: functional correctness, structural alignment, and static analysis. This co-optimization of tests and code reduces dependence on LLM reliability and addresses the evaluation uncertainty inherent to scientific tasks. LCP also streamlines human-AI collaboration by translating non-expert prompts into domain-specific requirements, sparing practitioners without coding backgrounds the need for manual prompt engineering. Benchmark evaluations demonstrate LCP's effectiveness in generating robust code while minimizing error propagation. The platform is also tested on a cross-disciplinary Earth Science task, where it shows strong reliability and outperforms competing models.
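The loop described in the abstract — a Task Manager posing test cases, a Code Generator producing candidates, an Evaluator scoring them, and a Bayesian update over prompt distributions — can be sketched as a toy Thompson-sampling loop over a few prompt variants. This is a minimal illustration, not the paper's implementation: the agent internals are stubbed with random draws, and the variant names, hidden success rates, metric weights, and pass threshold are all illustrative assumptions.

```python
import random

# Hypothetical prompt variants the Bayesian layer chooses among.
PROMPT_VARIANTS = ["concise", "step_by_step", "spec_first"]

# Hidden per-variant success rates standing in for how well the
# Code Generator performs under each prompting strategy (assumed).
TRUE_RATES = {"concise": 0.3, "step_by_step": 0.7, "spec_first": 0.5}

def evaluate(variant, rng):
    """Evaluator stub: blend the paper's three signals -- functional
    correctness, structural alignment, static analysis -- into one
    pass/fail verdict. Weights and threshold are assumptions."""
    functional = rng.random() < TRUE_RATES[variant]  # tests pass?
    structural = rng.random() < 0.9                  # matches the plan?
    static_ok = rng.random() < 0.95                  # lint/analysis clean?
    score = 0.5 * functional + 0.3 * structural + 0.2 * static_ok
    return score >= 0.6

def run_loop(rounds=300, seed=0):
    rng = random.Random(seed)
    # Beta(1, 1) prior over each variant's success probability.
    posterior = {v: [1.0, 1.0] for v in PROMPT_VARIANTS}
    picks = {v: 0 for v in PROMPT_VARIANTS}
    for _ in range(rounds):
        # Thompson sampling: draw from each posterior, use the max.
        variant = max(PROMPT_VARIANTS,
                      key=lambda v: rng.betavariate(*posterior[v]))
        picks[variant] += 1
        # (Here the Task Manager would refine test cases and the
        # Code Generator would produce a candidate solution.)
        success = evaluate(variant, rng)
        a, b = posterior[variant]
        posterior[variant] = [a + success, b + (1 - success)]
    return posterior, picks

posterior, picks = run_loop()
```

Over enough rounds the posterior concentrates on the variant whose generated code most often survives the adversarial tests, which is the intuition behind updating prompt distributions from Evaluator feedback.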