
"Topic": models, code, and papers

Hands-on experiments on intelligent behavior for mobile robots

Jun 30, 2014
Erik Cuevas, Daniel Zaldivar, Marco Perez-, Marte Ramirez

In recent years, Artificial Intelligence (AI) techniques have emerged as useful tools for solving various engineering problems that were not possible or convenient to handle by traditional methods. AI has directly influenced many areas of computer science and has become an important part of the engineering curriculum. However, determining the important topics for a single-semester AI course is a nontrivial task, given the lack of a general methodology. AI concepts commonly overlap with many other disciplines and involve a wide range of subjects, from applied approaches to more formal mathematical issues. This paper presents the use of a simple robotic platform to assist the learning of basic AI concepts. The study is guided through simple experiments using autonomous mobile robots. The central algorithm is the Learning Automata (LA). Using LA, each robot action is applied to an environment and evaluated by means of a fitness value. The response of the environment is then used by the automaton to select its next action, and this procedure continues until the goal task is reached. The proposal approaches the study of AI by offering, in LA, a unifying context that draws together several AI topics and motivates students to learn by building hands-on laboratory exercises. The presented material has been successfully tested as an AI teaching aid in the University of Guadalajara robotics group: it motivates students and increases enrolment and retention while educating better computer engineers.

* International Journal of Electrical Engineering Education 48 (1), (2011), pp. 66-78 
* 11 Pages 
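
The abstract describes the basic Learning Automata loop: an action is selected from a probability distribution, applied to the environment, and the environment's fitness feedback is used to reinforce that action. Below is a minimal Python sketch of a linear reward-inaction automaton of this kind; the action set, reward threshold, and learning rate are illustrative assumptions, not details taken from the paper.

```python
import random

class LearningAutomaton:
    """Minimal linear reward-inaction (L_R-I) automaton (illustrative sketch)."""

    def __init__(self, actions, learning_rate=0.1):
        self.actions = actions                      # e.g., robot motion primitives
        self.learning_rate = learning_rate
        self.probs = [1.0 / len(actions)] * len(actions)  # start with uniform probabilities

    def select_action(self):
        # Sample an action index according to the current probability vector.
        return random.choices(range(len(self.actions)), weights=self.probs)[0]

    def update(self, chosen, rewarded):
        # Reward: shift probability mass toward the chosen action.
        # Penalty: leave probabilities unchanged (the "inaction" part of L_R-I).
        if rewarded:
            for i in range(len(self.probs)):
                if i == chosen:
                    self.probs[i] += self.learning_rate * (1.0 - self.probs[i])
                else:
                    self.probs[i] *= (1.0 - self.learning_rate)

# Hypothetical usage on a mobile robot: the actions and fitness signal are assumptions.
automaton = LearningAutomaton(actions=["forward", "turn_left", "turn_right"])
for step in range(100):
    idx = automaton.select_action()
    fitness = random.random()            # stand-in for the environment's fitness response
    automaton.update(idx, rewarded=fitness > 0.5)
```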


Recent Advances in Deep Learning-based Dialogue Systems

May 10, 2021
Jinjie Ni, Tom Young, Vlad Pandelea, Fuzhao Xue, Vinay Adiga, Erik Cambria

Dialogue systems are a popular Natural Language Processing (NLP) task because they are promising in real-life applications. They are also complicated, since they involve many NLP tasks that deserve study in their own right. As a result, a multitude of novel works on this task have been carried out, most of them deep learning-based due to their outstanding performance. In this survey, we focus on deep learning-based dialogue systems. We comprehensively review state-of-the-art research outcomes in dialogue systems and analyze them from two angles: model type and system type. From the angle of model type, we discuss the principles, characteristics, and applications of the models that are widely used in dialogue systems. This will help researchers become acquainted with these models and see how they are applied in state-of-the-art frameworks, which is rather helpful when designing a new dialogue system. From the angle of system type, we discuss task-oriented and open-domain dialogue systems as two streams of research, providing insight into the related hot topics. Furthermore, we comprehensively review the evaluation methods and datasets for dialogue systems to pave the way for future research. Finally, some possible research trends are identified based on recent research outcomes. To the best of our knowledge, this survey is the most comprehensive and up-to-date one at present in the area of dialogue systems and dialogue-related tasks, extensively covering the popular frameworks, topics, and datasets.

* 75 pages, 19 figures 


Bridging Vision and Language from the Video-to-Text Perspective: A Comprehensive Review

Mar 27, 2021
Jesus Perez-Martin, Benjamin Bustos, Silvio Jamil F. Guimarães, Ivan Sipiran, Jorge Pérez, Grethel Coello Said

Research in the area of Vision and Language encompasses challenging topics that seek to connect visual and textual information. The video-to-text problem is one of these topics, in which the goal is to connect an input video with its textual description. This connection can be made mainly by retrieving the most significant descriptions from a corpus or by generating a new one given a context video. These two approaches represent essential tasks for the Computer Vision and Natural Language Processing communities, called the text-retrieval-from-video task and the video captioning/description task. Both tasks are substantially more complex than predicting or retrieving a single sentence from an image. The spatiotemporal information present in videos introduces diversity and complexity regarding the visual content and the structure of the associated language descriptions. This review categorizes and describes the state-of-the-art techniques for the video-to-text problem. It covers the main video-to-text methods and the ways to evaluate their performance. We analyze how the most frequently reported benchmark datasets have been created, showing their drawbacks and strengths for the problem's requirements. We also show the impressive progress that researchers have made on each dataset, and we analyze why, despite this progress, the video-to-text problem is still unsolved. State-of-the-art techniques are still a long way from achieving human-like performance in generating or retrieving video descriptions. We cover several significant challenges in the field and discuss future research directions.

* 66 pages, 5 figures. Submitted to Artificial Intelligence Review 


Multimodal Analytics for Real-world News using Measures of Cross-modal Entity Consistency

Mar 23, 2020
Eric Müller-Budack, Jonas Theiner, Sebastian Diering, Maximilian Idahl, Ralph Ewerth

The World Wide Web has become a popular source for gathering information and news. Multimodal information, e.g., text enriched with photos, is typically used to convey the news more effectively or to attract attention. Photo content can range from purely decorative to depicting additional important information, or can even be misleading. Therefore, automatic approaches that quantify the cross-modal consistency of entity representations can support human assessors in evaluating the overall multimodal message, for instance with regard to bias or sentiment. In some cases, such measures could provide hints for detecting fake news, which is an increasingly important topic in today's society. In this paper, we introduce the novel task of cross-modal consistency verification in real-world news and present a multimodal approach to quantify the entity coherence between image and text. Named entity linking is applied to extract persons, locations, and events from news texts. Several measures are suggested to calculate cross-modal similarity for these entities using state-of-the-art approaches. In contrast to previous work, our system automatically gathers example data from the Web and is applicable to real-world news. Results on two novel datasets that cover different languages, topics, and domains demonstrate the feasibility of our approach. Datasets and code are publicly available to foster research in this new direction.

* Accepted for publication in: International Conference on Multimedia Retrieval (ICMR), Dublin, 2020 
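
The pipeline sketched in the abstract (extract named entities from the news text, then score how consistently each entity is represented in the accompanying image) can be illustrated roughly as follows. This is a minimal sketch under assumptions: it uses spaCy for named entity recognition, and `image_entity_score` is a hypothetical stand-in for the paper's entity-specific visual verification measures; it is not the authors' implementation.

```python
import spacy

def image_entity_score(image_path: str, entity_text: str, entity_label: str) -> float:
    # Hypothetical placeholder: in the paper this role is played by per-type
    # verification models (persons, locations, events). Here it returns a dummy value
    # so the sketch runs end to end.
    return 0.0

def cross_modal_consistency(news_text: str, image_path: str) -> dict:
    # Requires the "en_core_web_sm" spaCy model to be installed.
    nlp = spacy.load("en_core_web_sm")
    doc = nlp(news_text)
    # Keep persons, locations, and events, as named in the abstract.
    wanted = {"PERSON", "GPE", "LOC", "EVENT"}
    scores = {}
    for ent in doc.ents:
        if ent.label_ in wanted:
            scores[ent.text] = image_entity_score(image_path, ent.text, ent.label_)
    return scores   # per-entity cross-modal similarity; aggregate as needed
```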


Performance Investigation of Feature Selection Methods

Sep 16, 2013
Anuj Sharma, Shubhamoy Dey

Sentiment analysis or opinion mining has become an open research domain after the proliferation of the Internet and Web 2.0 social media. People express their attitudes and opinions on social media, including blogs, discussion forums, tweets, etc., and sentiment analysis is concerned with detecting and extracting sentiment or opinion from online text. Sentiment-based text classification is different from topical text classification, since it involves discrimination based on the opinion expressed on a topic. Feature selection is significant for sentiment analysis because opinionated text may have high dimensionality, which can adversely affect the performance of a sentiment analysis classifier. This paper explores the applicability of feature selection methods for sentiment analysis and investigates their classification performance in terms of recall, precision, and accuracy. Five feature selection methods (Document Frequency, Information Gain, Gain Ratio, Chi Squared, and Relief-F) and three popular sentiment feature lexicons (HM, GI, and Opinion Lexicon) are investigated on a movie reviews corpus of 2000 documents. The experimental results show that Information Gain gave consistent results and Gain Ratio performed best overall for sentiment feature selection, while the sentiment lexicons gave poor performance. Furthermore, we found that classifier performance depends on selecting an appropriate number of representative features from the text.

* 6 pages 
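
As a rough illustration of the kind of comparison the abstract describes, the sketch below runs chi-squared and mutual-information (an information-gain-style criterion) feature selection over a bag-of-words representation with scikit-learn. The stand-in corpus, feature count, and classifier are assumptions for the example; the paper's exact setup (Gain Ratio, Relief-F, and the sentiment lexicons) is not reproduced here.

```python
from sklearn.datasets import fetch_20newsgroups  # stand-in corpus; the paper uses movie reviews
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

data = fetch_20newsgroups(subset="train", categories=["rec.autos", "sci.med"])

for name, scorer in [("chi-squared", chi2), ("mutual information", mutual_info_classif)]:
    pipeline = Pipeline([
        ("vectorizer", CountVectorizer(stop_words="english")),
        ("selector", SelectKBest(scorer, k=2000)),   # keep the 2000 highest-scoring features
        ("classifier", MultinomialNB()),
    ])
    scores = cross_val_score(pipeline, data.data, data.target, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```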


Evaluating Mixed-initiative Conversational Search Systems via User Simulation

Apr 20, 2022
Ivan Sekulić, Mohammad Aliannejadi, Fabio Crestani

Clarifying the underlying user information need by asking clarifying questions is an important feature of modern conversational search systems. However, evaluating such systems by answering their prompted clarifying questions requires significant human effort, which can be time-consuming and expensive. In this paper, we propose a conversational User Simulator, called USi, for automatic evaluation of such conversational search systems. Given a description of an information need, USi is capable of automatically answering clarifying questions about the topic throughout the search session. Through a set of experiments, including automated natural language generation metrics and crowdsourcing studies, we show that responses generated by USi are both in line with the underlying information need and comparable to human-generated answers. Moreover, we take the first steps towards multi-turn interactions, where a conversational search system asks multiple questions to the (simulated) user with the goal of clarifying the user need. To this end, we expand currently available datasets for studying clarifying questions, i.e., Qulac and ClariQ, by performing crowdsourcing-based multi-turn data acquisition. We show that our generative, GPT-2-based model is capable of providing accurate and natural answers to unseen clarifying questions in the single-turn setting, and we discuss the capabilities of our model in the multi-turn setting. We provide the code, data, and pre-trained model for further research on the topic.
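
The abstract describes a generative GPT-2-based simulator that answers a clarifying question conditioned on a description of the information need. The sketch below shows that conditioning pattern with the Hugging Face transformers library; the prompt format and the use of the generic pretrained `gpt2` checkpoint (rather than the authors' fine-tuned model) are assumptions for illustration only.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")   # the paper fine-tunes; this is the base model

# Hypothetical prompt format: information-need description followed by the clarifying question.
information_need = "Find side effects of long-term aspirin use in adults."
clarifying_question = "Are you asking about a specific age group?"
prompt = f"Information need: {information_need}\nQuestion: {clarifying_question}\nAnswer:"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,   # GPT-2 has no pad token by default
)
# Decode only the tokens generated after the prompt.
answer = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(answer)
```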



Conducting sparse feature selection on arbitrarily long phrases in text corpora with a focus on interpretability

Jul 23, 2016
Luke Miratrix, Robin Ackerman

We propose a general framework for topic-specific summarization of large text corpora and illustrate how it can be used for analysis in two quite different contexts: an OSHA database of fatality and catastrophe reports (to facilitate surveillance for patterns in circumstances leading to injury or death) and legal decisions on workers' compensation claims (to explore relevant case law). Our summarization framework, built on sparse classification methods, is a compromise between the simple word-frequency-based methods currently in wide use and more heavyweight, model-intensive methods such as Latent Dirichlet Allocation (LDA). For a particular topic of interest (e.g., mental health disability, or chemical reactions), we regress a labeling of documents onto the high-dimensional counts of all the other words and phrases in the documents. The resulting small set of phrases found to be predictive is then harvested as the summary. Using a branch-and-bound approach, this method can be extended to allow for phrases of arbitrary length, which allows for potentially rich summarization. We discuss how focusing on the purpose of the summaries can inform choices of regularization parameters and model constraints. We evaluate this tool by comparing computational time and summary statistics of the resulting word lists against three other methods in the literature. We also present a new R package, textreg. Overall, we argue that sparse methods have much to offer text analysis and constitute a branch of research that should be considered further in this context.
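
The core idea (regress a document labeling onto high-dimensional phrase counts and keep the few phrases with nonzero weight as the summary) can be sketched in Python as follows. This uses L1-regularized logistic regression over n-gram counts as a stand-in for the authors' sparse classification machinery; it is not the textreg package and ignores the branch-and-bound extension to arbitrarily long phrases.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

def sparse_phrase_summary(documents, labels, max_ngram=3, C=0.1):
    """Return the phrases whose weights survive an L1 penalty (illustrative sketch)."""
    vectorizer = CountVectorizer(ngram_range=(1, max_ngram), binary=True)
    X = vectorizer.fit_transform(documents)
    # The L1 penalty drives most phrase weights to exactly zero,
    # leaving a short list of predictive phrases as the summary.
    model = LogisticRegression(penalty="l1", solver="liblinear", C=C)
    model.fit(X, labels)
    weights = model.coef_.ravel()
    phrases = np.array(vectorizer.get_feature_names_out())
    selected = weights != 0
    order = np.argsort(-np.abs(weights[selected]))
    return list(zip(phrases[selected][order], weights[selected][order]))

# Hypothetical usage: label 1 marks documents about the topic of interest.
docs = ["chemical reaction caused burns", "worker fell from ladder",
        "exposure to toxic chemical fumes", "slipped on wet floor"]
labels = [1, 0, 1, 0]
print(sparse_phrase_summary(docs, labels))
```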



Autonomous Vehicles: Open-Source Technologies, Considerations, and Development

Jan 25, 2022
Oussama Saoudi, Ishwar Singh, Hamidreza Mahyar

Autonomous vehicles are the culmination of advances in many areas such as sensor technologies, artificial intelligence (AI), networking, and more. This paper will introduce the reader to the technologies from which autonomous vehicles are built. It will focus on open-source tools and libraries for autonomous vehicle development, making it cheaper and easier for developers and researchers to participate in the field. The topics covered are as follows. First, we will discuss the sensors used in autonomous vehicles and summarize their performance in different environments, their costs, and their unique features. Then we will cover Simultaneous Localization and Mapping (SLAM) and the algorithms used for each modality. Third, we will review popular open-source driving simulators, a cost-effective way to train machine learning models and test vehicle software performance. We will then highlight embedded operating systems and the security and development considerations when choosing one. After that, we will discuss Vehicle-to-Vehicle (V2V) and Internet-of-Vehicles (IoV) communication, areas that fuse networking technologies with autonomous vehicles to extend their functionality. We will then review the five levels of vehicle automation, commercial and open-source Advanced Driving Assistance Systems, and their features. Finally, we will touch on the major manufacturing and software companies involved in the field, their investments, and their partnerships. These topics will give the reader an understanding of the industry, its technologies, active research, and the tools available for developers to build autonomous vehicles.

* 13 pages, 7 figures 


Multi-Class and Automated Tweet Categorization

Nov 13, 2021
Khubaib Ahmed Qureshi

Twitter is among the most prevalent social media platforms, used by millions of people all over the world. It is used to express ideas and opinions about political, social, business, sports, health, religious, and various other topics. The study reported here aims to detect a tweet's category from its text, which becomes quite challenging when the text consists of only 140 characters and is full of noise. Tweets are categorized into 12 specified categories using Text Mining or Natural Language Processing (NLP) and Machine Learning (ML) techniques. Twitter provides a huge number of trending topics, but it is really challenging to find out what these trending topics are about. Therefore, it is extremely useful to automatically categorize tweets into general categories for many information extraction tasks. A large dataset is constructed by combining two datasets of different natures with varying levels of category identification complexity. It is annotated by experts under proper guidelines for increased quality and high agreement values, which makes the proposed model quite robust. Various types of ML algorithms were used to train and evaluate the proposed model, and these models were explored over three datasets separately. We find that the nature of the dataset is highly non-linear, and therefore complex, non-linear models perform better. The best ensemble model, Gradient Boosting, achieved an AUC score of 85%, which is much better than other related studies.
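
As a rough sketch of the winning setup described above (a gradient boosting ensemble over tweet text), the example below trains scikit-learn's GradientBoostingClassifier on TF-IDF features. The feature representation, hyperparameters, and toy data are assumptions; the paper's dataset, 12-category scheme, and exact model configuration are not reproduced.

```python
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline

# Toy tweets and labels purely for illustration; the paper uses an expert-annotated
# 12-category dataset built from two combined corpora.
tweets = ["Great win for the home team tonight!",
          "New budget bill passes after a long debate",
          "Doctors recommend more sleep for heart health",
          "Stocks rally as tech earnings beat expectations"]
labels = ["sports", "politics", "health", "business"]

model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), sublinear_tf=True)),
    ("gbm", GradientBoostingClassifier(n_estimators=200, learning_rate=0.1)),
])
model.fit(tweets, labels)
print(model.predict(["Senate vote scheduled for next week"]))
```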



Measuring daily-life fear perception change: a computational study in the context of COVID-19

Jul 27, 2021
Yuchen Chai, Juan Palacios, Jianghao Wang, Yichun Fan, Siqi Zheng

COVID-19, as a global health crisis, has triggered fear with unprecedented intensity. Besides the fear of getting infected, the outbreak of COVID-19 also created significant disruptions in people's daily lives and thus evoked intensive psychological responses only indirectly related to COVID-19 infections. Here, we construct an expressed-fear database from 16 million social media posts generated by 536 thousand users between January 1st, 2019 and August 31st, 2020 in China. We employ deep learning techniques to detect the fear emotion within each post and apply topic models to extract the central fear topics. Based on this database, we find that sleep disorders ("nightmare" and "insomnia") take up the largest share of fear-labeled posts in the pre-pandemic period (January 2019-December 2019) and increase significantly during COVID-19. We identify health- and work-related concerns as the two major sources of fear induced by COVID-19. We also detect gender differences, with females generating more posts containing daily-life fear sources during the COVID-19 period. This research adopts a data-driven approach to tracing public emotion, which can complement traditional surveys by enabling real-time emotion monitoring to discern societal concerns and support policy decision-making.

* 15 pages 
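
The topic-extraction step mentioned in the abstract (apply a topic model to the fear-labeled posts to surface central fear topics) can be sketched with scikit-learn's LatentDirichletAllocation, as below. The toy posts, number of topics, and preprocessing are assumptions for illustration; the authors' actual pipeline, data, and language (Chinese social media posts) are not reproduced.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Toy stand-ins for fear-labeled posts; the study uses millions of posts.
fear_posts = [
    "another nightmare again, barely slept",
    "insomnia is getting worse every night",
    "worried I will lose my job next month",
    "afraid of getting infected at work",
    "my contract might not be renewed, so anxious",
    "cannot sleep, keep waking up at 3am",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(fear_posts)

lda = LatentDirichletAllocation(n_components=2, random_state=0)  # 2 topics for the toy data
lda.fit(counts)

# Print the top terms per topic as a crude view of the central fear themes.
terms = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"Topic {topic_idx}: {', '.join(top_terms)}")
```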

