Keywords
Large Language Models, In-Context Learning, Prompt Engineering, Retrieval-Augmented Generation, GraphRAG, CausalRAG, Causal Reasoning, Context Window, Natural Language Processing, Trustworthy AI
Authors
Prakhar Bansal, Shivangi Agarwal
Abstract
Large language models (LLMs) encode vast world knowledge in their parameters, yet they remain fundamentally limited by static knowledge, finite context windows, and weakly structured causal reasoning. This survey provides a unified account of augmentation strategies along a single axis: the degree of structure in the context supplied at inference time. We cover in-context learning and prompt engineering, Retrieval-Augmented Generation (RAG), GraphRAG, and CausalRAG. Beyond conceptual comparison, we provide a transparent literature-screening protocol, a claim-audit framework, and a structured cross-paper evidence synthesis that distinguishes higher-confidence findings from emerging results. The paper concludes with a deployment-oriented decision framework and concrete research priorities for trustworthy retrieval-augmented NLP.