When LLMs Lag Behind: Knowledge Conflicts from Evolving APIs in Code Generation

2026-04-10 · Software Engineering

AI summary

The authors studied how large language models (LLMs) handle changes in software libraries, such as functions being added, modified, or removed. They found that LLMs often fail to update their code correctly even when given the new information: without full documentation, less than half of the generated code runs properly. Larger models and better documentation help, but problems remain. Reasoning strategies such as Self-Reflection improve results somewhat. The study shows that LLMs still rely too heavily on outdated knowledge and need better ways to adapt to software updates.

Large Language Models · API Evolution · Code Generation · Retrieval-Augmented Generation · Self-Reflection · Software Libraries · Parametric Knowledge · Executable Code · Benchmarking · Documentation
Authors
Ahmed Nusayer Ashik, Shaowei Wang, Tse-Hsun Chen, Muhammad Asaduzzaman, Yuan Tian
Abstract
The rapid evolution of software libraries creates a significant challenge for Large Language Models (LLMs), whose static parametric knowledge often becomes stale post-training. While retrieval-augmented generation (RAG) is commonly used to provide up-to-date API specifications, a "context-memory conflict" arises when external instructions contradict a model's internal parametric knowledge. This paper presents a systematic empirical study of LLM code generation under API evolution (e.g., API deprecation, API modification, and API addition), based on a benchmark of 270 real-world updates from eight Python libraries. We evaluate 11 models across four LLM families. Our results show that without comprehensive documentation, LLMs struggle to prioritize external context: on average, only 42.55% of generated code examples are executable in the target environment. While structured documentation and larger model scales improve LLMs' adoption of API updates, they do not fully resolve executability issues, with the executable rate reaching only 66.36%. In addition, reasoning-based strategies (e.g., Self-Reflection) significantly boost LLMs' performance, improving the executable rate by 11%. Our findings highlight the persistence of outdated patterns in LLM outputs, even when API update specifications are provided, and emphasize the need for evolution-aware benchmarks and techniques.
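The context-memory conflict the abstract describes can be made concrete with a minimal, hypothetical Python sketch of one evolution category, API deprecation. All names here (`read`, `load`, the version story) are invented for illustration and are not from the paper's benchmark; the point is only the shape of the conflict: a model's parametric knowledge still emits the old call, while the retrieved documentation describes the new one.

```python
import warnings

# Hypothetical library module illustrating one API evolution step:
# v1 exposed load(path); v2 renames it to read(path, *, strict=True)
# and keeps load() only as a deprecated shim.

def read(path, *, strict=True):
    """v2 API: the function an up-to-date model should call."""
    return f"read {path} (strict={strict})"

def load(path):
    """v1 API: deprecated alias that stale parametric knowledge may still emit."""
    warnings.warn("load() is deprecated; use read() instead",
                  DeprecationWarning, stacklevel=2)
    return read(path, strict=False)

# Code generated from pre-training knowledge calls the old API ...
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    old_result = load("data.csv")

# ... while documentation-grounded generation uses the new signature.
new_result = read("data.csv")

# The deprecated path still runs, but it raises a warning and takes
# a different default (strict=False), so "executes" != "up to date".
print(old_result)
print(new_result)
print(len(caught))
```

Note that the deprecated call still executes here; in the study's harder cases (removal or signature modification without a shim), the stale call fails outright, which is what the executability metric captures.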