LLMs as Idiomatic Decompilers: Recovering High-Level Code from x86-64 Assembly for Dart

2026-04-02

Software Engineering
AI summary

The authors explore using small, specialized language models to turn machine code back into human-friendly code in Dart, a newer programming language not well studied in this context. They tested how well these models work by measuring how readable and syntactically correct the generated code is. Their 4-billion-parameter model performed almost as well as a much larger 480-billion-parameter model, showing that smaller models can do a good job with far less computing power. They also found that adding training examples from a related language, Swift, helps only when the model is bigger, suggesting a limit on cross-language learning at smaller sizes.

decompilation, large language models, Dart programming language, Swift programming language, cross-lingual transfer, CODEBLEU, compile@k, reverse engineering
Authors
Raafat Abualazm, Ayman Abo Elhassan
Abstract
Translating machine code into human-readable high-level languages is an open research problem in reverse engineering. Despite recent advances in LLM-based decompilation to C, modern languages like Dart and Swift remain unexplored. In this paper, we study the use of small specialized LLMs as idiomatic decompilers for such languages. Additionally, we investigate augmenting the training data with synthetic same-language examples, and compare this against adding human-written examples from a related language (Swift -> Dart). We apply CODEBLEU to evaluate the readability of the decompiled code and compile@k to measure syntactic correctness. On a 73-function Dart test dataset spanning diverse complexity levels, our 4B specialized model achieves 71.3 CODEBLEU (95% CI 65.5-77.1), comparable to a ~480B code model (73.1; 67.4-78.8). On a subset of 34 natural Dart functions, it reaches compile@5 = 79.4% (Wilson 95% CI 63.2-89.7), vs. 64.7% (47.9-78.5) for the base model; the difference is suggestive but not statistically significant at the 0.05 level. Adding Swift training data helps at 8B but not at 4B, suggesting a capacity threshold for effective cross-lingual transfer. Overall, our results show that small specialized models can generate readable, idiomatic Dart with meaningful identifiers while using minimal compute.
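The compile@k confidence bands quoted in the abstract are Wilson score intervals for a binomial proportion. As a sanity check (not the authors' code), this minimal Python sketch reproduces the 63.2-89.7% band: 79.4% of 34 functions corresponds to 27 compiling functions.

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion at ~95% confidence."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - margin, center + margin

# 27 of 34 natural Dart functions compile at k=5 (27/34 = 79.4%)
lo, hi = wilson_ci(27, 34)
print(f"{lo:.1%} - {hi:.1%}")  # → 63.2% - 89.7%
```

Unlike the normal-approximation (Wald) interval, the Wilson interval stays inside [0, 1] and behaves sensibly at small n, which matters here with only 34 test functions.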