"magic": models, code, and papers

A Machine Learning Imaging Core using Separable FIR-IIR Filters

Jan 02, 2020
Masayoshi Asama, Leo F. Isikdogan, Sushma Rao, Bhavin V. Nayak, Gilad Michael

We propose fixed-function neural network hardware that is designed to perform pixel-to-pixel image transformations in a highly efficient way. We use a fully trainable, fixed-topology neural network to build a model that can perform a wide variety of image processing tasks. Our model uses compressed skip lines and hybrid FIR-IIR blocks to reduce the latency and hardware footprint. Our proposed Machine Learning Imaging Core, dubbed MagIC, uses a silicon area of ~3 mm^2 (in TSMC 16 nm), which is orders of magnitude smaller than that of a comparable pixel-wise dense prediction model. MagIC requires no DDR bandwidth, no SRAM, and practically no external memory. Each MagIC core consumes 56 mW (215 mW max power) at 500 MHz and achieves an energy-efficient throughput of 23 TOPS/W/mm^2. MagIC can be used as a multi-purpose image processing block in an imaging pipeline, approximating compute-heavy image processing applications, such as image deblurring, denoising, and colorization, within the power and silicon area limits of mobile devices.
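
The hybrid FIR-IIR idea maps naturally to a few lines of software. Below is a minimal NumPy/SciPy sketch of one separable pass, assuming a short horizontal FIR kernel followed by a first-order vertical IIR recursion; the taps, the IIR coefficient, and the function name `separable_fir_iir` are illustrative placeholders, not the trained parameters or fixed-point arithmetic of the actual core.

```python
# Illustrative sketch of a separable FIR-IIR pass (floating point, untrained).
import numpy as np
from scipy.signal import lfilter

def separable_fir_iir(image, fir_taps=(0.25, 0.5, 0.25), iir_alpha=0.6):
    # Horizontal FIR pass: convolve each row with a short symmetric kernel.
    fir_out = np.apply_along_axis(
        lambda row: np.convolve(row, fir_taps, mode="same"), axis=1, arr=image)
    # Vertical IIR pass: y[n] = (1 - alpha) * x[n] + alpha * y[n-1] down each column.
    a = iir_alpha
    return lfilter([1.0 - a], [1.0, -a], fir_out, axis=0)

img = np.random.rand(64, 64).astype(np.float32)
print(separable_fir_iir(img).shape)  # (64, 64)
```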

  

MAGIC: Multi-scale Heterogeneity Analysis and Clustering for Brain Diseases

Jul 10, 2020
Junhao Wen, Erdem Varol, Ganesh Chand, Aristeidis Sotiras, Christos Davatzikos

There is a growing amount of clinical, anatomical and functional evidence for the heterogeneous presentation of neuropsychiatric and neurodegenerative diseases such as schizophrenia and Alzheimer's disease (AD). Elucidating distinct subtypes of diseases allows a better understanding of neuropathogenesis and enables the possibility of developing targeted treatment programs. Recent semi-supervised clustering techniques have provided a data-driven way to understand disease heterogeneity. However, existing methods do not take into account that subtypes of the disease might present themselves at different spatial scales across the brain. Here, we introduce a novel method, MAGIC, to uncover disease heterogeneity by leveraging multi-scale clustering. We first extract multi-scale patterns of structural covariance (PSCs), followed by semi-supervised clustering with double cyclic block-wise optimization across the different scales of PSCs. We validate MAGIC using simulated heterogeneous neuroanatomical data and demonstrate its clinical potential by exploring the heterogeneity of AD using T1 MRI scans of 228 cognitively normal (CN) participants and 191 AD patients. Our results indicate two main subtypes of AD with distinct atrophy patterns, consisting of both fine-scale atrophy in the hippocampus and large-scale atrophy in cortical regions. The evidence for this heterogeneity is further corroborated by the clinical evaluation of the two subtypes, which indicates that one subpopulation of AD patients tends to be younger and to decline faster in cognitive performance, whereas the other tends to be older and shows a relatively steady decline in cognitive abilities.
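
As a rough illustration of the multi-scale idea (not the paper's algorithm), the sketch below extracts patterns at several scales with NMF as a generic stand-in for the patterns of structural covariance (PSCs), clusters subjects at each scale, and compares the scale-specific assignments; the semi-supervised, double cyclic block-wise optimization itself is not reproduced, and the data are synthetic.

```python
# Toy multi-scale clustering sketch on synthetic data (assumptions noted above).
import numpy as np
from sklearn.decomposition import NMF
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(419, 500)))      # 419 subjects x 500 regional features

labels_per_scale = {}
for n_components in (8, 16, 32):             # coarse-to-fine spatial scales
    loadings = NMF(n_components=n_components, init="nndsvda",
                   max_iter=400, random_state=0).fit_transform(X)
    labels_per_scale[n_components] = KMeans(
        n_clusters=2, n_init=10, random_state=0).fit_predict(loadings)

# Agreement of subtype assignments across scales (invariant to label switching).
print(adjusted_rand_score(labels_per_scale[8], labels_per_scale[32]))
```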

* 11 pages, 3 figures, accepted by MICCAI2020 
  

Looks Like Magic: Transfer Learning in GANs to Generate New Card Illustrations

May 28, 2022
Matheus K. Venturelli, Pedro H. Gomes, Jônatas Wehrmann

In this paper, we propose MAGICSTYLEGAN and MAGICSTYLEGAN-ADA - both incarnations of the state-of-the-art models StyleGAN2 and StyleGAN2-ADA - to experiment with their capacity for transfer learning into a rather different domain: creating new illustrations for the vast universe of the game "Magic: The Gathering" cards. This is a challenging task, especially due to the variety of elements present in these illustrations, such as humans, creatures, artifacts, and landscapes - not to mention the plethora of art styles used by various artists throughout the years. To address this task, we introduce a novel dataset, named MTG, with thousands of illustrations from diverse card types that is rich in metadata. The resulting dataset comprises a myriad of both realistic and fantasy-like illustrations. To investigate the effects of diversity, we also introduce subsets that contain specific types of concepts, such as forests, islands, faces, and humans. We show that simpler models, such as DCGANs, are not able to learn to generate proper illustrations in any setting. In contrast, we train instances of MAGICSTYLEGAN on all proposed subsets and are able to generate high-quality illustrations. We perform experiments to understand how well pre-trained features from StyleGAN2 transfer to the target domain. We show that in well-trained models we can find particular instances of the noise vector that realistically represent real images from the dataset. Moreover, we provide both quantitative and qualitative studies to support our claims and to demonstrate that MAGICSTYLEGAN is the state-of-the-art approach for generating Magic illustrations. Finally, this paper highlights some emerging properties regarding transfer learning in GANs, which is still a somewhat under-explored field in generative learning research.
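
The observation that particular noise vectors can realistically represent real images is essentially GAN inversion: freeze the generator and optimize the latent code against a target image. The PyTorch sketch below shows the idea with a tiny stand-in generator and a plain MSE loss; `TinyGenerator` is hypothetical, and the paper would instead use a trained MAGICSTYLEGAN (typically with a perceptual loss).

```python
# GAN-inversion sketch: optimize z so that G(z) approximates a target image.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGenerator(nn.Module):  # stand-in for a trained GAN generator
    def __init__(self, z_dim=64, out_pixels=3 * 32 * 32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                 nn.Linear(256, out_pixels), nn.Tanh())

    def forward(self, z):
        return self.net(z).view(-1, 3, 32, 32)

G = TinyGenerator().eval()
for p in G.parameters():
    p.requires_grad_(False)                   # generator stays frozen

target = torch.rand(1, 3, 32, 32) * 2 - 1     # stand-in for a real card illustration
z = torch.randn(1, 64, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.05)

for _ in range(200):                          # gradient descent on the latent code only
    loss = F.mse_loss(G(z), target)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"reconstruction loss after optimization: {loss.item():.4f}")
```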

  

Editorial introduction: The power of words and networks

May 24, 2021
A. Fronzetti Colladon, P. Gloor, D. F. Iezzi

According to Freud, "words were originally magic and to this day words have retained much of their ancient magical power". Through words, behaviors are transformed and problems are solved. The way we use words reveals our intentions, goals and values. Novel tools for text analysis help us understand the magical power of words. This power is multiplied if it is combined with the study of social networks, i.e. with the analysis of relationships among social units. This special issue of the International Journal of Information Management, entitled "Combining Social Network Analysis and Text Mining: from Theory to Practice", includes heterogeneous and innovative research at the nexus of text mining and social network analysis. It aims to enrich work at the intersection of these fields, which still lags behind in its theoretical, empirical, and methodological foundations. The nine articles accepted for inclusion in this special issue all present methods and tools that have business applications. They are summarized in this editorial introduction.

* International Journal of Information Management 51, 102031 (2020) 
  

How Insight Emerges in a Distributed, Content-addressable Memory

Jun 18, 2011
Liane Gabora, Apara Ranjan

We begin this chapter with the bold claim that it provides a neuroscientific explanation of the magic of creativity. Creativity presents a formidable challenge for neuroscience. Neuroscience generally involves studying what happens in the brain when someone engages in a task that involves responding to a stimulus, or retrieving information from memory and using it in the right way or at the right time. If the relevant information is not already encoded in memory, the task generally requires that the individual make systematic use of information that is encoded in memory. But creativity is different. It paradoxically involves studying how someone pulls out of their brain something that was never put into it! Moreover, it must be something both new and useful, or appropriate to the task at hand. The ability to pull out of memory something new and appropriate that was never stored there in the first place is what we refer to as the magic of creativity. Even if we are so fortunate as to determine which areas of the brain are active and how these areas interact during creative thought, we will not have an answer to the question of how the brain comes up with solutions and artworks that are new and appropriate. On the other hand, since the representational capacity of neurons emerges at a level that is higher than that of the individual neurons themselves, the inner workings of neurons are too low a level to explain the magic of creativity. Thus we look to a level that is midway between gross brain regions and neurons. Since creativity generally involves combining concepts from different domains, or seeing old ideas from new perspectives, we focus our efforts on the neural mechanisms underlying the representation of concepts and ideas. Thus we ask questions about the brain at the level that accounts for its representational capacity, i.e. at the level of distributed aggregates of neurons.

* Gabora, L. & Ranjan, A. (2012). How insight emerges in a distributed, content-addressable memory. In A. Bristol, O. Vartanian, & J. Kaufman (Eds.) The Neuroscience of Creativity. New York: Oxford University Press 
* in press; 17 pages; 2 figures 
  

A third level trigger programmable on FPGA for the gamma/hadron separation in a Cherenkov telescope using pseudo-Zernike moments and the SVM classifier

Feb 24, 2006
Marco Frailis, Oriana Mansutti, Praveen Boinee, Giuseppe Cabras, Alessandro De Angelis, Barbara De Lotto, Alberto Forti, Mauro Dell'Orso, Riccardo Paoletti, Angelo Scribano, Nicola Turini, Mose' Mariotti, Luigi Peruzzo, Antonio Saggion

We studied the application of Pseudo-Zernike features as image parameters (instead of the Hillas parameters) for discriminating between the images produced by atmospheric electromagnetic showers caused by gamma rays and those caused by hadrons in the MAGIC experiment. We used a Support Vector Machine as the classification algorithm, with the computed Pseudo-Zernike features as classification parameters. We implemented a kernel function of the SVM and the Pseudo-Zernike feature computation on an FPGA board to build a third-level trigger for the gamma-hadron separation task of the MAGIC experiment.
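
A software-only sketch of this pipeline is shown below: moment-based image features fed to an SVM. It uses ordinary Zernike moments from mahotas as a stand-in for the pseudo-Zernike features, synthetic 32x32 "shower" images instead of MAGIC camera data, and scikit-learn's SVC on the CPU rather than the paper's FPGA implementation of the kernel function.

```python
# Toy gamma/hadron separation: Zernike-style moments + SVM (assumptions above).
import numpy as np
import mahotas
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

def fake_shower(elongated):
    """Synthetic camera image: elongated ellipse (gamma-like) or rounder blob (hadron-like)."""
    y, x = np.mgrid[-16:16, -16:16]
    sx = 10.0 if elongated else 5.0
    return np.exp(-(x / sx) ** 2 - (y / 4.0) ** 2) + 0.05 * rng.random((32, 32))

images = [fake_shower(elongated=bool(i % 2)) for i in range(200)]
labels = np.array([i % 2 for i in range(200)])      # 1 = gamma-like, 0 = hadron-like
feats = np.array([mahotas.features.zernike_moments(im, 16, degree=8) for im in images])

X_tr, X_te, y_tr, y_te = train_test_split(feats, labels, random_state=0)
clf = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```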

  

Goal-Driven Query Answering for Existential Rules with Equality

Nov 20, 2017
Michael Benedikt, Boris Motik, Efthymia Tsamoura

Inspired by magic sets for Datalog, we present a novel goal-driven approach for answering queries over terminating existential rules with equality (aka TGDs and EGDs). Our technique improves the performance of query answering by pruning the consequences that are not relevant to the query. This is challenging in our setting because equalities can potentially affect all predicates in a dataset. We address this problem by combining the existing singularization technique with two new ingredients: an algorithm for identifying the rules relevant to a query and a new magic sets algorithm. We show empirically that our technique can significantly improve the performance of query answering, and that it can mean the difference between answering a query in a few seconds and not being able to process the query at all.
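
As a toy illustration of one of the two ingredients, the sketch below identifies rules relevant to a query by walking the predicate dependency graph backwards from the query predicate. The rules are simplified (head, body) pairs over predicate names only; equality handling, singularization, and the magic sets rewriting itself are not modeled here.

```python
# Relevance pruning sketch: keep only rules whose head predicate is reachable
# backwards from the query predicate.
from collections import defaultdict, deque

rules = [
    ("Ancestor", ["Parent"]),
    ("Ancestor", ["Parent", "Ancestor"]),
    ("Employed", ["WorksFor"]),          # unrelated to the query below
]

def relevant_rules(rules, query_predicate):
    by_head = defaultdict(list)
    for head, body in rules:
        by_head[head].append((head, body))
    reachable, queue, selected = set(), deque([query_predicate]), []
    while queue:
        pred = queue.popleft()
        if pred in reachable:
            continue
        reachable.add(pred)
        for head, body in by_head[pred]:
            selected.append((head, body))
            queue.extend(body)
    return selected

print(relevant_rules(rules, "Ancestor"))  # the Employed rule is pruned away
```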

  

On the statistical complexity of quantum circuits

Jan 15, 2021
Kaifeng Bu, Dax Enshan Koh, Lu Li, Qingxian Luo, Yaobo Zhang

In theoretical machine learning, the statistical complexity is a notion that measures the richness of a hypothesis space. In this work, we apply a particular measure of statistical complexity, namely the Rademacher complexity, to the quantum circuit model in quantum computation and study how the statistical complexity depends on various quantum circuit parameters. In particular, we investigate the dependence of the statistical complexity on the resources, depth, width, and the number of input and output registers of a quantum circuit. To study how the statistical complexity scales with resources in the circuit, we introduce a resource measure of magic based on the $(p,q)$ group norm, which quantifies the amount of magic in the quantum channels associated with the circuit. These dependencies are investigated in the following two settings: (i) where the entire quantum circuit is treated as a single quantum channel, and (ii) where each layer of the quantum circuit is treated as a separate quantum channel. The bounds we obtain can be used to constrain the capacity of quantum neural networks in terms of their depths and widths as well as the resources in the network.
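
For concreteness, the $(p,q)$ group norm underlying the magic measure can be computed with a few lines of NumPy, shown below for an arbitrary matrix rather than a representation of a quantum channel; the column-then-row convention used here is one common choice and may differ from the paper's.

```python
# (p, q) group norm sketch: l_p norm of each column, then l_q norm across columns.
import numpy as np

def group_norm(A, p, q):
    column_norms = np.sum(np.abs(A) ** p, axis=0) ** (1.0 / p)
    return np.sum(column_norms ** q) ** (1.0 / q)

A = np.array([[1.0, -2.0],
              [3.0,  0.5]])
print(group_norm(A, p=2, q=1))   # sum of the columns' Euclidean norms
print(group_norm(A, p=2, q=2))   # equals the Frobenius norm
```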

* 6+19 pages 
  

Ontology of Card Sleights

Mar 20, 2019
Aaron Sterling

We present a machine-readable movement writing for sleight-of-hand moves with cards -- a "Labanotation of card magic." This scheme of movement writing contains 440 categories of motion, and appears to taxonomize all card sleights that have appeared in over 1500 publications. The movement writing is axiomatized in $\mathcal{SROIQ}$(D) Description Logic, and collected formally as an Ontology of Card Sleights, a computational ontology that extends the Basic Formal Ontology and the Information Artifact Ontology. The Ontology of Card Sleights is implemented in OWL DL, a Description Logic fragment of the Web Ontology Language. While ontologies have historically been used to classify at a less granular level, the algorithmic nature of card tricks allows us to transcribe a performer's actions step by step. We conclude by discussing design criteria we have used to ensure the ontology can be accessed and modified with a simple click-and-drag interface. This may allow database searches and performance transcriptions by users with card magic knowledge, but no ontology background.
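
A small owlready2 sketch of how such an ontology can be declared and serialized is given below; the class and property names are hypothetical examples, and the actual Ontology of Card Sleights extends the Basic Formal Ontology and the Information Artifact Ontology, which are not imported here.

```python
# Minimal OWL ontology sketch with owlready2 (hypothetical class/property names).
from owlready2 import get_ontology, Thing, ObjectProperty

onto = get_ontology("http://example.org/card_sleights.owl")

with onto:
    class CardSleight(Thing): pass
    class Grip(Thing): pass
    class DoubleLift(CardSleight): pass        # a sleight category
    class MechanicsGrip(Grip): pass            # a way of holding the deck

    class requires_grip(ObjectProperty):       # sleights are performed from a grip
        domain = [CardSleight]
        range = [Grip]

onto.save(file="card_sleights.owl", format="rdfxml")
```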

* IEEE 14th International Conference on Semantic Computing (ICSC), February 2019, pp. 263-270 
* 8 pages. Preprint. Final version appeared in ICSC 2019. Copyright of final version is held by IEEE 
  