Semantics-Aware Caching for Concept Learning

2026-03-06 · Machine Learning

AI summary

The authors studied how computers learn to group things by their features using a method called concept learning. They noticed that finding the right groups can take a very long time because the computer has to check many examples repeatedly. To fix this, the authors created a smart memory system that remembers which examples belong to which groups and uses this to speed up the learning process. They tested this approach with different reasoning tools and found it made learning much faster without losing accuracy.

concept learning, description logics, supervised machine learning, concept retrieval, subsumption, symbolic reasoning, neuro-symbolic reasoning, caching, runtime optimization
Authors
Louis Mozart Kamdem Teyou, Caglar Demir, Axel-Cyrille Ngonga Ngomo
Abstract
Concept learning is a form of supervised machine learning that operates on knowledge bases in description logics. State-of-the-art concept learners often rely on an iterative search through a countably infinite concept space. In each iteration, they retrieve the instances of candidate solutions to select the best concept for the next iteration. While simple learning problems might require a few dozen instance retrieval calls to find a fitting solution, complex learning problems can necessitate thousands of calls. We alleviate the resulting runtime challenge by presenting a semantics-aware caching approach. Our cache is essentially a subsumption-aware map that links concepts to sets of instances via crisp set operations. Our experiments on 5 datasets with 4 symbolic reasoners, a neuro-symbolic reasoner, and 5 popular eviction policies demonstrate that our cache can reduce the runtime of concept retrieval and concept learning by an order of magnitude while remaining effective for both symbolic and neuro-symbolic reasoners.