Danila Sinopalnikov

*-CFQ: Analyzing the Scalability of Machine Learning on a Compositional Task

Dec 15, 2020
Dmitry Tsarkov, Tibor Tihon, Nathan Scales, Nikola Momchev, Danila Sinopalnikov, Nathanael Schärli

We present *-CFQ ("star-CFQ"): a suite of large-scale datasets of varying scope based on the CFQ semantic parsing benchmark, designed for principled investigation of the scalability of machine learning systems in a realistic compositional task setting. Using this suite, we conduct a series of experiments investigating the ability of Transformers to benefit from increased training size under conditions of fixed computational cost. We show that compositional generalization remains a challenge at all training sizes, and we show that increasing the scope of natural language leads to consistently higher error rates, which are only partially offset by increased training data. We further show that while additional training data from a related domain improves the accuracy in data-starved situations, this improvement is limited and diminishes as the distance from the related domain to the target domain increases.

* Accepted, AAAI-21 

Measuring Compositional Generalization: A Comprehensive Method on Realistic Data

Dec 20, 2019
Daniel Keysers, Nathanael Schärli, Nathan Scales, Hylke Buisman, Daniel Furrer, Sergii Kashubin, Nikola Momchev, Danila Sinopalnikov, Lukasz Stafiniak, Tibor Tihon, Dmitry Tsarkov, Xiao Wang, Marc van Zee, Olivier Bousquet

State-of-the-art machine learning methods exhibit limited compositional generalization. At the same time, there is a lack of realistic benchmarks that comprehensively measure this ability, which makes it challenging to find and evaluate improvements. We introduce a novel method to systematically construct such benchmarks by maximizing compound divergence while guaranteeing a small atom divergence between train and test sets, and we quantitatively compare this method to other approaches for creating compositional generalization benchmarks. We present a large and realistic natural language question answering dataset that is constructed according to this method, and we use it to analyze the compositional generalization ability of three machine learning architectures. We find that they fail to generalize compositionally and that there is a surprisingly strong negative correlation between compound divergence and accuracy. We also demonstrate how our method can be used to create new compositionality benchmarks on top of the existing SCAN dataset, which confirms these findings.
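
The "atom divergence" and "compound divergence" mentioned above are similarity-based measures between the train and test distributions of atoms (primitive elements) and compounds (combinations of atoms). As a rough illustration only, not the authors' released implementation, the sketch below computes a Chernoff-coefficient-based divergence between toy frequency distributions; the alpha values follow those reported in the paper, and all atom/compound names and counts are hypothetical.

```python
# Minimal sketch of the divergence idea described in the abstract: measure how
# similar the train and test distributions of atoms or compounds are via a
# Chernoff coefficient, and define divergence as 1 minus that similarity.
# The alpha values (0.5 for atoms, 0.1 for compounds) follow the paper; the toy
# frequency counts below are hypothetical.
from collections import Counter


def chernoff_coefficient(p: Counter, q: Counter, alpha: float) -> float:
    """C_alpha(P || Q) = sum_k p_k^alpha * q_k^(1 - alpha), over normalized P, Q."""
    p_total, q_total = sum(p.values()), sum(q.values())
    keys = set(p) | set(q)
    return sum(
        (p[k] / p_total) ** alpha * (q[k] / q_total) ** (1 - alpha) for k in keys
    )


def divergence(train: Counter, test: Counter, alpha: float) -> float:
    """Divergence between two frequency distributions: 1 - Chernoff coefficient."""
    return 1.0 - chernoff_coefficient(train, test, alpha)


# Hypothetical atom frequencies (e.g. predicates, entities) and compound
# frequencies (e.g. combinations of atoms) for a candidate train/test split.
train_atoms = Counter({"directed": 5, "acted_in": 5, "born_in": 4})
test_atoms = Counter({"directed": 3, "acted_in": 3, "born_in": 2})
train_compounds = Counter({"directed AND born_in": 6, "acted_in AND born_in": 4})
test_compounds = Counter({"directed AND acted_in": 5, "acted_in AND born_in": 1})

atom_div = divergence(train_atoms, test_atoms, alpha=0.5)              # kept small
compound_div = divergence(train_compounds, test_compounds, alpha=0.1)  # maximized
print(f"atom divergence = {atom_div:.3f}, compound divergence = {compound_div:.3f}")
```

Constructing a maximum compound divergence split then amounts to searching for a train/test partition that keeps the atom divergence near zero while driving the compound divergence as high as possible.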

* Accepted for publication at ICLR 2020 