A Survey on Compositional Learning of AI Models: Theoretical and Experimental Practices
Published in Transactions on Machine Learning Research (TMLR), 2024
Abstract
Compositional learning, the ability to combine basic concepts to construct more intricate ones, is crucial for human cognition, especially in human language comprehension and visual perception. This notion is tightly connected to generalization over unobserved situations. Despite its integral role in intelligence, there is a lack of systematic theoretical and experimental research methodologies, making it difficult to analyze the compositional learning abilities of computational models. In this paper, we survey the literature on compositional learning of AI models and the connections made to cognitive studies. We identify abstract concepts of compositionality in cognitive and linguistic studies and connect these to the computational challenges faced by language and vision models in compositional reasoning. We overview the formal definitions, tasks, evaluation benchmarks, various computational models, and theoretical findings. Our primary focus is on linguistic benchmarks and on combining language and vision, though there is a large body of research on compositional concept learning in the computer vision community alone. We cover modern studies on large language models to provide a deeper understanding of the cutting-edge compositional capabilities exhibited by state-of-the-art AI models and pinpoint important directions for future research.
Recommended citation: S. Sinha, T. Premsri, P. Kordjamshidi. "A Survey on Compositional Learning of AI Models: Theoretical and Experimental Practices." Transactions on Machine Learning Research (TMLR). 2024.
Download Paper
