
"Topic": models, code, and papers

The AI Triplet: Computational, Conceptual, and Mathematical Representations in AI Education

Oct 14, 2021
Maithilee Kunda

Expertise in AI requires integrating computational, conceptual, and mathematical knowledge and representations. We propose this trifecta as an "AI triplet," similar in spirit to the "chemistry triplet" that has influenced the past four decades of chemistry education. We describe a rationale for this triplet and show how it maps onto topics commonly taught in AI courses, such as tree search and gradient descent (a small illustration follows below). Finally, mirroring the chemistry triplet's impact on chemistry education, we offer an initial example of how the AI triplet may help pinpoint obstacles in AI education, i.e., how student learning might be scaffolded to approach expert-level flexibility in moving between the points of the triplet.
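As a concrete illustration of moving between the triplet's points, here is a minimal sketch (ours, not from the paper) of how the mathematical update rule theta <- theta - alpha * grad f(theta) for gradient descent maps onto a computational representation; the objective and step size are arbitrary examples.

    import numpy as np

    # Minimal sketch (not from the paper): the mathematical update rule
    # theta <- theta - alpha * grad_f(theta), written computationally.
    def gradient_descent(grad_f, theta0, alpha=0.1, steps=100):
        theta = np.asarray(theta0, dtype=float)
        for _ in range(steps):
            theta = theta - alpha * grad_f(theta)
        return theta

    # Conceptually: walk downhill on f(theta) = (theta - 3)^2, whose
    # gradient is 2 * (theta - 3); the iterates approach the minimum at 3.
    print(gradient_descent(lambda t: 2 * (t - 3), theta0=[0.0]))  # ~ [3.]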


Proceedings of ICML 2021 Workshop on Theoretic Foundation, Criticism, and Application Trend of Explainable AI

Jul 26, 2021
Quanshi Zhang, Tian Han, Lixin Fan, Zhanxing Zhu, Hang Su, Ying Nian Wu, Jie Ren, Hao Zhang

This is the Proceedings of the ICML 2021 Workshop on Theoretic Foundation, Criticism, and Application Trend of Explainable AI. Deep neural networks (DNNs) have undoubtedly brought great success to a wide range of applications in computer vision, computational linguistics, and AI. However, foundational principles that would explain DNNs' success and their resilience to adversarial attacks are still largely missing. Interpreting and theorizing about the internal mechanisms of DNNs has become a compelling yet controversial topic. This workshop focuses on the theoretical foundations, limitations, and new application trends of XAI; these issues reflect emerging bottlenecks in the future development of the field.


Zero-Shot Clinical Acronym Expansion with a Hierarchical Metadata-Based Latent Variable Model

Sep 29, 2020
Griffin Adams, Mert Ketenci, Adler Perotte, Noemie Elhadad

We introduce Latent Meaning Cells (LMC), a deep latent variable model that learns contextualized representations of words by combining local lexical context with metadata. Metadata can refer to granular context, such as section type, or to more global context, such as unique document ids. Relying on metadata for contextualized representation learning is apt in the clinical domain, where text is semi-structured and exhibits high variation in topics. We evaluate the LMC model on the task of clinical acronym expansion across three datasets. The LMC significantly outperforms a diverse set of baselines at a fraction of the pre-training cost and learns clinically coherent representations.

* 31 pages 
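A rough sketch of the core idea described above, combining a word's local lexical context with a metadata embedding. The combination rule, vocabulary, and all names here are hypothetical stand-ins, not the authors' architecture:

    import numpy as np

    rng = np.random.default_rng(0)
    DIM = 64
    # Hypothetical embedding tables; in the real model these are learned.
    word_emb = {w: rng.normal(size=DIM) for w in ["pt", "presents", "with", "af"]}
    meta_emb = {m: rng.normal(size=DIM) for m in ["history", "assessment"]}

    def contextual_rep(target, window, metadata):
        # Combine the target word, its local lexical context, and metadata
        # (e.g., section type). Simple averaging stands in for the LMC's
        # latent-variable machinery.
        local = np.mean([word_emb[w] for w in window], axis=0)
        return (word_emb[target] + local + meta_emb[metadata]) / 3.0

    rep = contextual_rep("af", window=["pt", "presents", "with"], metadata="history")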

Cosine Similarity of Multimodal Content Vectors for TV Programmes

Sep 23, 2020
Saba Nazir, Taner Cagali, Chris Newell, Mehrnoosh Sadrzadeh

Multimodal information originates from a variety of sources: audiovisual files, textual descriptions, and metadata. We show how to represent the content encoded by each individual source as a vector, how to combine the vectors via middle and late fusion techniques, and how to compute semantic similarities between the contents. Our vectorial representations are built from spectral features and bags of audio words for audio; LSI topics and Doc2vec embeddings for subtitles; and categorical features for metadata. We implement our model on a dataset of BBC TV programmes and evaluate the fused representations for providing recommendations. The late-fused similarity matrices significantly improve the precision and diversity of recommendations.

* 3 pages, 1 figure, Machine Learning for Media Discovery (ML4MD) Workshop at ICML 2020 
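A small sketch of the two fusion strategies with cosine similarity. The toy vectors and the equal-weight average are our assumptions; the paper's actual feature extractors are those summarized above:

    import numpy as np

    def cosine_sim(A):
        # Rows are programmes; returns the pairwise cosine-similarity matrix.
        A = A / np.linalg.norm(A, axis=1, keepdims=True)
        return A @ A.T

    # Toy per-modality vectors for 3 programmes (dimensions arbitrary).
    rng = np.random.default_rng(0)
    audio = rng.normal(size=(3, 10))       # e.g., bags of audio words
    subtitles = rng.normal(size=(3, 20))   # e.g., Doc2vec embeddings
    metadata = rng.normal(size=(3, 5))     # e.g., categorical features

    # Middle fusion: concatenate modality vectors, then compare once.
    sim_middle = cosine_sim(np.concatenate([audio, subtitles, metadata], axis=1))

    # Late fusion: compare per modality, then combine the similarity
    # matrices (an unweighted average here; weighting is a design choice).
    sim_late = np.mean([cosine_sim(m) for m in (audio, subtitles, metadata)], axis=0)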

A Survey of Asymptotically Optimal Sampling-based Motion Planning Methods

Sep 22, 2020
Jonathan D. Gammell, Marlin P. Strub

Motion planning is a fundamental problem in autonomous robotics. It requires finding a path to a specified goal that avoids obstacles and obeys a robot's limitations and constraints. It is often desirable for this path to also optimize a cost function, such as path length. Formal path-quality guarantees for continuously valued search spaces are an active area of research interest. Recent results have proven that some sampling-based planning methods probabilistically converge towards the optimal solution as computational effort approaches infinity. This survey summarizes the assumptions behind these popular asymptotically optimal techniques and provides an introduction to the significant ongoing research on this topic.

* To appear in the Annual Review of Control, Robotics, and Autonomous Systems, Volume 4, 2021. 25 pages. 2 figures 
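One ingredient behind these guarantees, shown as a short sketch: RRT*-style planners (after Karaman and Frazzoli, 2011) connect each new sample to neighbors within a radius that shrinks as the number of samples grows, slowly enough that optimal connections are still found almost surely. The constant gamma below is problem-specific and is passed in as an assumption:

    import math

    def connection_radius(n, d, gamma):
        # n: samples drawn so far; d: state-space dimension;
        # gamma: constant depending on the volume of the free space.
        return gamma * (math.log(n) / n) ** (1.0 / d)

    for n in (10, 100, 1000, 10000):
        print(n, round(connection_radius(n, d=2, gamma=1.0), 4))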

Method and Dataset Mining in Scientific Papers

Nov 29, 2019
Rujing Yao, Linlin Hou, Yingchun Ye, Ou Wu, Ji Zhang, Jian Wu

Literature analysis helps researchers better understand the development of science and technology. Conventional literature analysis focuses on topics, authors, abstracts, keywords, references, and so on, and rarely attends to the content of papers. In the field of machine learning, the methods (M) and datasets (D) involved are key information in papers. Extracting and mining M and D is useful for discipline analysis and algorithm recommendation. In this paper, we propose a novel entity recognition model, called MDER, and construct datasets from papers of the PAKDD conferences (2009-2019). Preliminary experiments are conducted to assess the extraction performance, and the mining results are visualized.


Extractive Summarization of Long Documents by Combining Global and Local Context

Sep 17, 2019
Wen Xiao, Giuseppe Carenini

In this paper, we propose a novel neural single-document extractive summarization model for long documents, incorporating both the global context of the whole document and the local context within the current topic. We evaluate the model on two datasets of scientific papers, PubMed and arXiv, where it outperforms previous work, both extractive and abstractive models, on ROUGE-1, ROUGE-2, and METEOR scores. We also show that, consistent with our goal, the benefits of our method grow as we apply it to longer documents. Rather surprisingly, an ablation study indicates that the benefits of our model seem to come exclusively from modeling the local context, even for the longest documents.

* 12 pages (with appendix), accepted at EMNLP-IJCNLP 2019 
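A hypothetical scoring sketch of the global-plus-local idea. This is our simplification with mean-pooled embeddings and cosine scores; the paper uses a trained neural model, and the weights here are free parameters:

    import numpy as np

    def cos(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    def score_sentences(sent_embs, segment_ids, w_global=1.0, w_local=1.0):
        # Global context: the whole document; local context: the sentence's
        # topic segment. Both are mean-pooled here for simplicity.
        doc = sent_embs.mean(axis=0)
        scores = []
        for i, s in enumerate(sent_embs):
            seg = sent_embs[segment_ids == segment_ids[i]].mean(axis=0)
            scores.append(w_global * cos(s, doc) + w_local * cos(s, seg))
        return np.array(scores)

    rng = np.random.default_rng(1)
    embs = rng.normal(size=(6, 32))        # 6 sentence embeddings
    segs = np.array([0, 0, 1, 1, 2, 2])    # topic segment of each sentence
    summary = np.argsort(score_sentences(embs, segs))[-2:]  # top-2 sentences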

Automatic difficulty management and testing in games using a framework based on behavior trees and genetic algorithms

Sep 10, 2019
Ciprian Paduraru, Miruna Paduraru

The diversity of agent behaviors is an important topic for the quality of video games and virtual environments in general. Offering a compelling experience to users with different skill levels is a difficult task, and usually demands substantial manual effort to tune existing code; this becomes even harder with adaptive difficulty systems. Our paper's main purpose is to create a framework that can automatically generate behaviors for game agents across different difficulty classes with sufficient diversity. In parallel, a second purpose is to automate testing, exposing defects in the source code or possible logic exploits with less human effort.

* Accepted for publication in the IEEE Proceedings of the 24th International Conference on Engineering of Complex Computer Systems (ICECCS 2019)
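For readers unfamiliar with behavior trees, a minimal skeleton follows. The node classes and the skill parameter are hypothetical, not the paper's framework; a genetic algorithm could mutate such numeric parameters, or the tree structure itself, to produce distinct difficulty classes:

    import random

    SUCCESS, FAILURE = "success", "failure"

    class Sequence:
        def __init__(self, children): self.children = children
        def tick(self, agent):
            for c in self.children:          # fails at the first failing child
                if c.tick(agent) == FAILURE:
                    return FAILURE
            return SUCCESS

    class Selector:
        def __init__(self, children): self.children = children
        def tick(self, agent):
            for c in self.children:          # succeeds at the first success
                if c.tick(agent) == SUCCESS:
                    return SUCCESS
            return FAILURE

    class Action:
        # 'skill' in [0, 1] is the kind of parameter a genetic algorithm
        # could evolve to tune agent difficulty.
        def __init__(self, skill): self.skill = skill
        def tick(self, agent):
            return SUCCESS if random.random() < self.skill else FAILURE

    tree = Selector([Sequence([Action(0.9), Action(0.7)]), Action(0.3)])
    print(tree.tick(agent=None))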

Latent Universal Task-Specific BERT

May 16, 2019
Alon Rozental, Zohar Kelrich, Daniel Fleischer

This paper describes a language representation model which combines the Bidirectional Encoder Representations from Transformers (BERT) learning mechanism described in Devlin et al. (2018) with a generalization of the Universal Transformer model described in Dehghani et al. (2018). We further improve this model by adding a latent variable that represents the persona and topics of interest of the writer for each training example. We also describe a simple method to improve the usefulness of our language representation for solving problems in a specific domain, at the expense of its ability to generalize to other fields. Finally, we release a pre-trained language representation model for social texts that was trained on 100 million tweets.

* 6 pages, 2 figures 
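A hedged sketch of the latent-variable idea, as our simplification rather than the paper's exact model: a learned per-writer embedding is added to every token embedding before a transformer encoder, conditioning the representation on persona and topics. All sizes and names below are arbitrary:

    import torch
    import torch.nn as nn

    class PersonaConditionedEncoder(nn.Module):
        # Sketch: condition the encoder on a persona/topic embedding by
        # adding it to each token embedding (sizes here are arbitrary).
        def __init__(self, vocab_size=30522, n_personas=100, dim=256):
            super().__init__()
            self.tok = nn.Embedding(vocab_size, dim)
            self.persona = nn.Embedding(n_personas, dim)  # the latent variable
            layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=2)

        def forward(self, token_ids, persona_id):
            x = self.tok(token_ids) + self.persona(persona_id).unsqueeze(1)
            return self.encoder(x)

    model = PersonaConditionedEncoder()
    out = model(torch.randint(0, 30522, (2, 16)), torch.tensor([3, 7]))
    print(out.shape)  # torch.Size([2, 16, 256])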
