"magic": models, code, and papers

Magic: the Gathering is as Hard as Arithmetic

Mar 11, 2020
Stella Biderman

Magic: the Gathering is a popular and famously complicated card game about magical combat. Recently, several authors including Chatterjee and Ibsen-Jensen (2016) and Churchill, Biderman, and Herrick (2019) have investigated the computational complexity of playing Magic optimally. In this paper we show that the "mate-in-$n$" problem for Magic is $\Delta^0_n$-hard and that optimal play in two-player Magic is non-arithmetic in general. These results apply to how real Magic is played, can be achieved using standard-size tournament-legal decks, and do not rely on stochasticity or hidden information. Our paper builds upon the construction that Churchill, Biderman, and Herrick (2019) used to show that this problem was at least as hard as the halting problem.
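
As a schematic gloss on why the alternation depth $n$ tracks the arithmetical hierarchy (our reading, not the paper's formal construction): a "mate-in-$n$" claim quantifies over $n$ rounds of moves and replies, and in the Churchill, Biderman, and Herrick encoding even the innermost "is this position won?" check is as hard as the halting problem, so each added alternation climbs one level of the hierarchy.

```latex
% Schematic quantifier structure of mate-in-n (illustrative only):
% n alternations over moves m_i and replies r_i, ending in a win
% predicate that itself encodes a halting check.
\[
  \mathrm{Mate}_n(p) \iff
  \exists m_1\, \forall r_1\, \exists m_2\, \forall r_2 \cdots \exists m_n\;
  \mathrm{Won}\bigl(p \cdot m_1 r_1 m_2 r_2 \cdots m_n\bigr)
\]
```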

* pre-print, currently under review 
  

MAGIC: Microlensing Analysis Guided by Intelligent Computation

Jun 16, 2022
Haimeng Zhao, Wei Zhu

The modeling of binary microlensing light curves via the standard sampling-based method can be challenging, because of the time-consuming light curve computation and the pathological likelihood landscape in the high-dimensional parameter space. In this work, we present MAGIC, a machine learning framework to efficiently and accurately infer the microlensing parameters of binary events with realistic data quality. In MAGIC, binary microlensing parameters are divided into two groups and inferred separately with different neural networks. The key feature of MAGIC is the introduction of a neural controlled differential equation (neural CDE), which provides the capability to handle light curves with irregular sampling and large data gaps. Based on simulated light curves, we show that MAGIC can achieve fractional uncertainties of a few percent on the binary mass ratio and separation. We also test MAGIC on a real microlensing event. MAGIC is able to locate the degenerate solutions even when large data gaps are introduced. As irregular sampling is common in astronomical surveys, our method also has implications for other studies that involve time series.
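
As a minimal sketch of the core idea, here is how a neural CDE can consume an irregularly sampled light curve: the hidden state is driven by the increments of the observed path, so uneven time stamps and gaps are handled naturally. All names, sizes, and the simple Euler discretisation below are illustrative assumptions, not the authors' implementation (which the linked repository provides).

```python
import torch
import torch.nn as nn

class CDEFunc(nn.Module):
    """Maps the hidden state z to a matrix f(z) in R^{hidden x channels}."""
    def __init__(self, hidden, channels):
        super().__init__()
        self.hidden, self.channels = hidden, channels
        self.net = nn.Sequential(nn.Linear(hidden, 64), nn.Tanh(),
                                 nn.Linear(64, hidden * channels))

    def forward(self, z):
        return self.net(z).view(-1, self.hidden, self.channels)

def cde_evolve(func, z0, xs):
    """Euler discretisation of dz = f(z) dX along an irregularly sampled
    control path X (e.g. channels = [time, magnification]); the increments
    dx absorb uneven time stamps and data gaps directly.
    xs: (batch, T, channels); z0: (batch, hidden)."""
    z = z0
    for i in range(1, xs.shape[1]):
        dx = xs[:, i] - xs[:, i - 1]
        z = z + torch.einsum('bhc,bc->bh', func(z), dx)
    return z  # final hidden state, fed to heads regressing e.g. (q, s)
```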

* 20 pages, 15 figures, code available at https://github.com/JasonZHM/magic-microlensing . An earlier version of this work was accepted to the ICML 2022 Workshop on Machine Learning for Astrophysics at https://ml4astro.github.io/icml2022/ 
  

The MAGICAL Benchmark for Robust Imitation

Nov 01, 2020
Sam Toyer, Rohin Shah, Andrew Critch, Stuart Russell

Imitation Learning (IL) algorithms are typically evaluated in the same environment that was used to create demonstrations. This rewards precise reproduction of demonstrations in one particular environment, but provides little information about how robustly an algorithm can generalise the demonstrator's intent to substantially different deployment settings. This paper presents the MAGICAL benchmark suite, which permits systematic evaluation of generalisation by quantifying robustness to different kinds of distribution shift that an IL algorithm is likely to encounter in practice. Using the MAGICAL suite, we confirm that existing IL algorithms overfit significantly to the context in which demonstrations are provided. We also show that standard methods for reducing overfitting are effective at creating narrow perceptual invariances, but are not sufficient to enable transfer to contexts that require substantially different behaviour, which suggests that new approaches will be needed in order to robustly generalise demonstrator intent. Code and data for the MAGICAL suite are available at https://github.com/qxcv/magical/.
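
A hedged sketch of the kind of evaluation the suite enables: train on the demo variant, then score the same policy on test variants that shift appearance or dynamics. The environment IDs below are illustrative placeholders rather than the suite's real names (see the repository for the actual API), and the classic gym interface is assumed.

```python
import gym

# Placeholder IDs: each MAGICAL task exposes a demo variant plus several
# test variants; consult https://github.com/qxcv/magical/ for real names.
VARIANTS = ["MoveToCorner-Demo-v0", "MoveToCorner-TestColour-v0",
            "MoveToCorner-TestShape-v0", "MoveToCorner-TestDynamics-v0"]

def mean_return(policy, env_id, episodes=20):
    """Average episode return of `policy` (obs -> action) on one variant."""
    env = gym.make(env_id)
    totals = []
    for _ in range(episodes):
        obs, done, total = env.reset(), False, 0.0
        while not done:
            obs, reward, done, _ = env.step(policy(obs))  # classic gym API
            total += reward
        totals.append(total)
    return sum(totals) / len(totals)

# Robustness report: demo-variant score vs. each test-variant score.
# scores = {v: mean_return(policy, v) for v in VARIANTS}
```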

* NeurIPS 2020 conference paper (poster) 
  

Playing magic tricks to deep neural networks untangles human deception

Aug 20, 2019
Regina Zaghi-Lara, Miguel Ángel Gea, Jordi Camí, Luis M. Martínez, Alex Gomez-Marin

Magic is the art of producing in the spectator an illusion of impossibility. Although the scientific study of magic is in its infancy, the advent of tracking algorithms based on deep learning now allows the skills of the magician to be quantified in naturalistic conditions, at unprecedented resolution and robustness. In this study, we deconstructed stage magic into purely motor maneuvers and trained an artificial neural network (DeepLabCut) to follow coins as a professional magician made them appear and disappear in a series of tricks. Rather than using AI as a mere tracking tool, we conceived it as an "artificial spectator". When the coins were not visible, the algorithm was trained to infer their location as a human spectator would (i.e., in the left fist). This created situations where the human was fooled while the AI (as seen by a human) was not, and vice versa. Magic from the perspective of the machine reveals our own cognitive biases.
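
For readers unfamiliar with DeepLabCut, the standard workflow the study relies on looks roughly like the following; the project name, video paths, and the coin keypoint are placeholders, not the authors' actual files.

```python
import deeplabcut

# Placeholder project and videos; in the study, the labelled keypoints
# would be the coins manipulated by the magician.
config = deeplabcut.create_new_project("coin-tricks", "lab",
                                       ["videos/trick01.mp4"],
                                       copy_videos=True)
deeplabcut.extract_frames(config)             # select frames to annotate
deeplabcut.label_frames(config)               # hand-label coin positions
deeplabcut.create_training_dataset(config)
deeplabcut.train_network(config)
deeplabcut.analyze_videos(config, ["videos/trick02.mp4"])  # track new tricks
```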

  

Formal Methods with a Touch of Magic

May 25, 2020
Parand Alizadeh Alamdari, Guy Avni, Thomas A. Henzinger, Anna Lukina

Machine learning and formal methods have complementary benefits and drawbacks. In this work, we address the controller-design problem with a combination of techniques from both fields. The use of black-box neural networks in deep reinforcement learning (deep RL) poses a challenge for such a combination. Instead of reasoning formally about the output of deep RL, which we call the "wizard", we extract from it a decision-tree-based model, which we refer to as the "magic book". Using the extracted model as an intermediary, we are able to handle problems that are infeasible for either deep RL or formal methods by themselves. First, we suggest, for the first time, incorporating a magic book into a synthesis procedure. We synthesize a stand-alone correct-by-design controller that enjoys the favorable performance of RL. Second, we incorporate a magic book into a bounded model checking (BMC) procedure. BMC allows us to find numerous traces of the plant under the control of the wizard, which a user can use to increase the trustworthiness of the wizard and direct further training.
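
One simple way to obtain such a tree-based surrogate, sketched here under the assumption that the extraction amounts to behavioural cloning of the wizard onto a shallow tree (not necessarily the paper's exact procedure):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def extract_magic_book(wizard, states, max_depth=6):
    """Distil a black-box policy (the 'wizard') into a small decision tree
    (the 'magic book') that synthesis and BMC tools can reason about.
    `wizard` maps a state vector to a discrete action; `states` is a
    collection of sampled states, e.g. gathered from rollouts."""
    actions = np.array([wizard(s) for s in states])
    book = DecisionTreeClassifier(max_depth=max_depth)
    book.fit(np.array(states), actions)
    return book
```

The tree's small, explicit branching structure is what makes it amenable to formal analysis, at the cost of some fidelity to the original network.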

  

Enhancing magic sets with an application to ontological reasoning

Jul 19, 2019
Mario Alviano, Nicola Leone, Pierfrancesco Veltri, Jessica Zangari

Magic sets are a Datalog-to-Datalog rewriting technique used to optimize query answering. The rewritten program focuses on a portion of the stable model(s) of the input program that is sufficient to answer the given query. However, the rewriting may introduce new recursive definitions, possibly involving negation and aggregations, which can slow down program evaluation. This paper enhances the magic set technique by preventing the creation of (new) recursive definitions in the rewritten program. It turns out that the new version of magic sets is closed for Datalog programs with stratified negation and aggregations, which is very convenient for the efficient computation of the stable model of the rewritten program. Moreover, the rewritten program is further optimized by the elimination of subsumed rules and by the efficient handling of cases where binding propagation is lost. The research was stimulated by a challenge on the exploitation of Datalog/DLV for efficient reasoning on large ontologies. All proposed techniques have hence been implemented in the DLV system and tested for ontological reasoning, confirming their effectiveness. Under consideration for publication in Theory and Practice of Logic Programming.
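
A toy rendering of the goal-directedness magic sets provide, using the classic ancestor program (this illustrates the basic technique only, not the paper's enhanced, recursion-avoiding rewriting):

```python
# Datalog program:   anc(X,Y) :- par(X,Y).
#                    anc(X,Y) :- par(X,Z), anc(Z,Y).
# Query:             ?- anc(alice, Y).
PAR = {("alice", "bob"), ("bob", "carol"), ("dave", "erin")}

def ancestors_magic(root):
    """Bottom-up evaluation restricted by a 'magic' set of relevant bindings.

    Naive bottom-up evaluation would materialise anc for *every* person,
    including the dave/erin component that the query never touches."""
    magic, anc = {root}, set()
    changed = True
    while changed:
        changed = False
        for x, y in PAR:
            # magic(Z) :- magic(X), par(X,Z).   (propagate relevant bindings)
            if x in magic and y not in magic:
                magic.add(y); changed = True
            # anc(X,Y) :- magic(X), par(X,Y).   (base rule, bound version)
            if x in magic and (x, y) not in anc:
                anc.add((x, y)); changed = True
        # anc(X,Y) :- magic(X), par(X,Z), anc(Z,Y).   (recursive rule)
        for x, z in PAR:
            if x in magic:
                for z2, y in list(anc):
                    if z == z2 and (x, y) not in anc:
                        anc.add((x, y)); changed = True
    return {y for x, y in anc if x == root}

print(ancestors_magic("alice"))   # {'bob', 'carol'}; dave/erin never explored
```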

* Paper presented at the 35th International Conference on Logic Programming (ICLP 2019), Las Cruces, New Mexico, USA, 20-25 September 2019, 16 pages 
  

MAGIC: Learning Macro-Actions for Online POMDP Planning using Generator-Critic

Nov 07, 2020
Yiyuan Lee, Panpan Cai, David Hsu

When robots operate in the real world, they need to handle uncertainties in sensing, acting, and the environment. Many tasks also require reasoning about the long-term consequences of robot decisions. The partially observable Markov decision process (POMDP) offers a principled approach for planning under uncertainty. However, its computational complexity grows exponentially with the planning horizon. We propose to use temporally extended macro-actions to cut down the effective planning horizon and thus the exponential factor of the complexity. We propose Macro-Action Generator-Critic (MAGIC), an algorithm that learns a macro-action generator from data and uses the learned macro-actions to perform long-horizon planning. MAGIC learns the generator using experience provided by an online planner, and in turn conditions the planner on the generated macro-actions. We evaluate MAGIC on several long-term planning tasks, showing that it significantly outperforms planning with primitive actions, hand-crafted macro-actions, and naive reinforcement learning, both in simulation and on a real robot.
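
A minimal sketch of the generator-critic loop described above; the network shapes and the source of the planner's value estimates are illustrative assumptions, not the authors' implementation. Here `planner_value` stands in for the return reported by the online POMDP planner when run with the generated macro-actions.

```python
import torch
import torch.nn as nn

OBS, N_MACROS, H, ACT = 16, 8, 3, 2   # belief feat., #macros, macro length, action dim

gen = nn.Sequential(nn.Linear(OBS, 64), nn.ReLU(),
                    nn.Linear(64, N_MACROS * H * ACT), nn.Tanh())
critic = nn.Sequential(nn.Linear(OBS + N_MACROS * H * ACT, 64), nn.ReLU(),
                       nn.Linear(64, 1))
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)

def step(belief, planner_value):
    """belief: (B, OBS); planner_value: (B, 1) return from the online planner."""
    macros = gen(belief)
    # critic regresses the planner's value for (belief, macro-actions)
    c_loss = ((critic(torch.cat([belief, macros.detach()], -1))
               - planner_value) ** 2).mean()
    opt_c.zero_grad(); c_loss.backward(); opt_c.step()
    # generator ascends the critic's value estimate
    g_loss = -critic(torch.cat([belief, gen(belief)], -1)).mean()
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```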

* 6 pages (+ 1 page references). 7 figures. Submitted to International Conference on Robotics and Automation (ICRA), 2021 
  

MAGIC: Multimodal relAtional Graph adversarIal inferenCe for Diverse and Unpaired Text-based Image Captioning

Dec 13, 2021
Wenqiao Zhang, Haochen Shi, Jiannan Guo, Shengyu Zhang, Qingpeng Cai, Juncheng Li, Sihui Luo, Yueting Zhuang

Text-based image captioning (TextCap) requires simultaneous comprehension of visual content and reading of the text in images to generate a natural language description. Although the task can teach machines to further understand the complex human environment, given that text is omnipresent in our daily surroundings, it poses additional challenges beyond normal captioning. A text-based image intuitively contains abundant and complex multimodal relational content; that is, image details can be described diversely from multiple views rather than with a single caption. While additional paired training data could capture this diversity of descriptions, collecting TextCap pair annotations with extra texts is labor-intensive and time-consuming. Based on this insight, we investigate how to generate diverse captions that focus on different image parts using an unpaired training paradigm. We propose the Multimodal relAtional Graph adversarIal inferenCe (MAGIC) framework for diverse and unpaired TextCap. This framework adaptively constructs multiple multimodal relational graphs of images and models complex relationships among the graphs to represent descriptive diversity. Moreover, a cascaded generative adversarial network is developed from the modeled graphs to infer unpaired caption generation at the image-sentence feature alignment and linguistic coherence levels. We validate the effectiveness of MAGIC in generating diverse captions from different relational information items of an image. Experimental results show that MAGIC can generate very promising outcomes without using any image-caption training pairs.
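
As a heavily simplified sketch of the two adversarial levels the abstract names (image-sentence feature alignment and linguistic coherence), with the relational-graph machinery omitted entirely; all shapes and modules here are placeholder assumptions, not the MAGIC architecture.

```python
import torch
import torch.nn as nn

G_DIM, S_DIM = 128, 128   # placeholder graph / sentence feature sizes
generator = nn.Sequential(nn.Linear(G_DIM, 256), nn.ReLU(), nn.Linear(256, S_DIM))
d_align = nn.Sequential(nn.Linear(G_DIM + S_DIM, 128), nn.ReLU(), nn.Linear(128, 1))
d_coher = nn.Sequential(nn.Linear(S_DIM, 128), nn.ReLU(), nn.Linear(128, 1))

def adversarial_losses(graph_feat, real_sent_feat):
    """Unpaired setting: real sentence features come from a text corpus,
    not from captions paired with the image."""
    bce = nn.functional.binary_cross_entropy_with_logits
    fake = generator(graph_feat)
    d_loss = (bce(d_align(torch.cat([graph_feat, fake.detach()], -1)),
                  torch.zeros(len(fake), 1)) +
              bce(d_coher(real_sent_feat), torch.ones(len(real_sent_feat), 1)))
    g_loss = (bce(d_align(torch.cat([graph_feat, fake], -1)),
                  torch.ones(len(fake), 1)) +
              bce(d_coher(fake), torch.ones(len(fake), 1)))
    return d_loss, g_loss
```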

  

Language Models Can See: Plugging Visual Controls in Text Generation

May 05, 2022
Yixuan Su, Tian Lan, Yahui Liu, Fangyu Liu, Dani Yogatama, Yan Wang, Lingpeng Kong, Nigel Collier

Generative language models (LMs) such as GPT-2/3 can be prompted to generate text of remarkable quality. While they are designed for text-prompted generation, it remains an open question how the generation process could be guided by modalities beyond text, such as images. In this work, we propose a training-free framework, called MAGIC (iMAge-Guided text generatIon with CLIP), for plugging visual controls into the generation process, enabling LMs to perform multimodal tasks (e.g., image captioning) in a zero-shot manner. MAGIC is a simple yet efficient plug-and-play framework that directly combines an off-the-shelf LM (i.e., GPT-2) with an image-text matching model (i.e., CLIP) for image-grounded text generation. During decoding, MAGIC influences the generation of the LM by introducing a CLIP-induced score, called the magic score, which regularizes the generated result to be semantically related to a given image while remaining coherent with the previously generated context. Notably, the proposed decoding scheme does not involve any gradient updates and is therefore computationally efficient. On the challenging task of zero-shot image captioning, MAGIC outperforms the state-of-the-art method by notable margins with a nearly 27-fold decoding speedup. MAGIC is a flexible framework and is theoretically compatible with any text generation task that incorporates image grounding. In the experiments, we showcase that it is also capable of performing visually grounded story generation given both an image and a text prompt.
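
A minimal sketch of magic-score decoding with off-the-shelf Hugging Face models: at each step the top-k LM candidates are rescored by CLIP's image-text similarity. The hyperparameters and the exact scoring form are illustrative; the paper's full scheme also includes a degeneration penalty that is omitted here.

```python
import torch
from PIL import Image
from transformers import GPT2LMHeadModel, GPT2Tokenizer, CLIPModel, CLIPProcessor

lm = GPT2LMHeadModel.from_pretrained("gpt2")
tok = GPT2Tokenizer.from_pretrained("gpt2")
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def magic_step(prefix_ids, image, k=8, beta=2.0):
    """Pick the next token by combining LM probability with CLIP similarity."""
    probs, cand = torch.topk(lm(prefix_ids).logits[0, -1].softmax(-1), k)
    texts = [tok.decode(torch.cat([prefix_ids[0], c.view(1)])) for c in cand]
    inputs = proc(text=texts, images=image, return_tensors="pt", padding=True)
    sim = clip(**inputs).logits_per_image[0].softmax(-1)  # match per candidate
    score = probs.log() + beta * sim.log()                # sketched "magic score"
    return cand[score.argmax()].view(1, 1)

# Greedy image-grounded decoding from a short prompt:
# image = Image.open("photo.jpg")
# ids = tok("An image of", return_tensors="pt").input_ids
# for _ in range(16):
#     ids = torch.cat([ids, magic_step(ids, image)], dim=1)
# print(tok.decode(ids[0]))
```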

* 20 pages, 5 figures, 5 tables 
  

Magic for Filter Optimization in Dynamic Bottom-up Processing

Apr 29, 1996
Guido Minnen

Off-line compilation of logic grammars using Magic allows filtering to be incorporated into the logic underlying the grammar. The explicit definite-clause characterization of filtering that results from Magic compilation allows processor-independent and logically clean optimizations of dynamic bottom-up processing with respect to goal-directedness. Two filter optimizations based on the program transformation technique of unfolding are discussed; both are of practical and theoretical interest.
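
As a toy illustration of unfolding (propositional rules only; real logic grammars additionally require unification and variable renaming), a call in a rule body is replaced by the bodies of the clauses defining it:

```python
# Toy propositional rendering of unfolding on grammar-like rules.
RULES = {
    "s":  [["np", "vp"]],           # s  --> np, vp.
    "np": [["det", "n"], ["pn"]],   # np --> det, n.    np --> pn.
}

def unfold(head, target):
    """Return new bodies for `head` with each call to `target` unfolded."""
    new_bodies = []
    for body in RULES[head]:
        if target not in body:
            new_bodies.append(body)
            continue
        i = body.index(target)
        for defn in RULES[target]:
            new_bodies.append(body[:i] + defn + body[i + 1:])
    return new_bodies

print(unfold("s", "np"))   # [['det', 'n', 'vp'], ['pn', 'vp']]
```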

* Proceedings of ACL 96, Santa Cruz, USA, June 23-28 
* 8 pages LaTeX (uses aclap.sty) 
  