"Topic": models, code, and papers

Maximizing the Set Cardinality of Users Scheduled for Ultra-dense uRLLC Networks

Jul 20, 2021
Shiwen He, Jun Yuan, Zhenyu An, Yunshan Yi, Yongming Huang

Ultra-reliable and low-latency communication plays an important role in fifth- and sixth-generation communication systems. Among the different research issues, scheduling as many users as possible on the limited time-frequency resource, subject to the maximum allowable transmission power and the minimum rate requirement of each user, is a crucial topic. We address it by proposing a mixed-integer programming model whose objective is to maximize the set cardinality of scheduled users rather than the system sum rate. Mathematical transformations and successive convex approximation are combined to solve the problem. Numerical results show that the proposed method achieves performance comparable to the exhaustive search method, but with lower computational complexity.

* 4 pages, 2 figures 
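
The scheduling problem described above can be written, in hypothetical notation (the paper's exact formulation may differ), as a mixed-integer program:

```latex
\begin{aligned}
\max_{\{x_k\},\,\{\mathbf{w}_k\}} \quad & \sum_{k=1}^{K} x_k \\
\text{s.t.} \quad & R_k \ge x_k R_k^{\min}, \qquad k = 1,\dots,K, \\
& \sum_{k=1}^{K} \lVert \mathbf{w}_k \rVert^2 \le P_{\max}, \qquad x_k \in \{0,1\},
\end{aligned}
```

where the binary variable $x_k$ indicates whether user $k$ is scheduled, $R_k$ is its achievable rate, $R_k^{\min}$ its minimum rate requirement, $\mathbf{w}_k$ its transmit beamformer, and $P_{\max}$ the power budget; the objective $\sum_k x_k$ is the set cardinality being maximized.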

An ontology for the formalization and visualization of scientific knowledge

Jul 13, 2021
Vincenzo Daponte, Gilles Falquet

The construction of an ontology of scientific knowledge objects, presented here, is part of the development of an approach oriented towards the visualization of scientific knowledge. It is motivated by the fact that the concepts used to organize scientific knowledge (theorem, law, experiment, proof, etc.) appear in existing ontologies, but none of these ontologies is centered on this topic or presents them in a simple and easily understandable organization. This ontology has been constructed by 1) selecting concepts that appear in high-level ontologies or in ontologies of knowledge objects of specific fields and 2) interviewing scientists in different fields. We have aligned this ontology with some of the sources used, which has allowed us to verify its consistency with respect to them. The validation of the ontology consists in using it to formalize knowledge from various sources, which we have begun to do in the field of physics.

DravidianMultiModality: A Dataset for Multi-modal Sentiment Analysis in Tamil and Malayalam

Jun 09, 2021
Bharathi Raja Chakravarthi, Jishnu Parameswaran P. K, Premjith B, K. P Soman, Rahul Ponnusamy, Prasanna Kumar Kumaresan, Kingston Pal Thamburaj, John P. McCrae

Human communication is inherently multimodal and asynchronous. Analyzing human emotions and sentiment is an emerging field of artificial intelligence. We are witnessing an increasing amount of multimodal content in local languages on social media about products and other topics. However, there are not many multimodal resources available for under-resourced Dravidian languages. Our study aims to create a multimodal sentiment analysis dataset for the under-resourced Tamil and Malayalam languages. First, we downloaded product and movie review videos from YouTube for Tamil and Malayalam. Next, we created captions for the videos with the help of annotators. Then we labelled the videos for sentiment and verified the inter-annotator agreement using Fleiss' kappa. This is the first multimodal sentiment analysis dataset for Tamil and Malayalam created by volunteer annotators.

* 31 
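
Inter-annotator agreement as measured here can be computed directly from a category-count table; a minimal implementation of Fleiss' kappa (variable names are ours, not the paper's):

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for a table ratings[i][j] = number of annotators
    who assigned item i to category j (same number of raters per item)."""
    n_items = len(ratings)
    n_raters = sum(ratings[0])
    n_cats = len(ratings[0])
    # Overall proportion of assignments to each category
    p_j = [sum(row[j] for row in ratings) / (n_items * n_raters)
           for j in range(n_cats)]
    # Per-item observed agreement
    P_i = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
           for row in ratings]
    P_bar = sum(P_i) / n_items          # mean observed agreement
    P_e = sum(p * p for p in p_j)       # chance agreement
    return (P_bar - P_e) / (1 - P_e)
```

Perfect agreement yields kappa = 1; agreement at chance level yields 0.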

Return-based Scaling: Yet Another Normalisation Trick for Deep RL

May 11, 2021
Tom Schaul, Georg Ostrovski, Iurii Kemaev, Diana Borsa

Scaling issues are mundane yet irritating for practitioners of reinforcement learning. Error scales vary across domains, tasks, and stages of learning; sometimes by many orders of magnitude. This can be detrimental to learning speed and stability, create interference between learning tasks, and necessitate substantial tuning. We revisit this topic for agents based on temporal-difference learning, sketch out some desiderata and investigate scenarios where simple fixes fall short. The mechanism we propose requires neither tuning, clipping, nor adaptation. We validate its effectiveness and robustness on the suite of Atari games. Our scaling method turns out to be particularly helpful at mitigating interference, when training a shared neural network on multiple targets that differ in reward scale or discounting.
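
The paper derives its scaling statistic from rewards, returns, and discounts; as a rough illustration of the general idea (not the authors' method), here is a hypothetical normaliser that divides TD errors by the square root of a running second moment of observed targets:

```python
class RunningScale:
    """Hypothetical sketch: track a running second moment of targets
    and rescale TD errors by its square root."""

    def __init__(self, eps=1e-8):
        self.second_moment = 0.0
        self.count = 0
        self.eps = eps

    def update(self, target):
        # Incremental mean of squared targets
        self.count += 1
        self.second_moment += (target * target - self.second_moment) / self.count

    def scale(self, td_error):
        return td_error / max(self.second_moment ** 0.5, self.eps)
```

Because the statistic is estimated online, no per-game tuning or clipping constant is needed, which matches the desiderata listed in the abstract.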

DiCOVA Challenge: Dataset, task, and baseline system for COVID-19 diagnosis using acoustics

Apr 05, 2021
Ananya Muguli, Lancelot Pinto, Nirmala R., Neeraj Sharma, Prashant Krishnan, Prasanta Kumar Ghosh, Rohit Kumar, Shrirama Bhat, Srikanth Raj Chetupalli, Sriram Ganapathy, Shreyas Ramoji, Viral Nanda

The DiCOVA challenge aims at accelerating research in diagnosing COVID-19 using acoustics (DiCOVA), a topic at the intersection of speech and audio processing, respiratory health diagnosis, and machine learning. This challenge is an open call for researchers to analyze a dataset of sound recordings collected from COVID-19 infected and non-COVID-19 individuals for a two-class classification. These recordings were collected via crowdsourcing from multiple countries, through a website application. The challenge features two tracks, one focusing on cough sounds and the other on a collection of breath, sustained vowel phonation, and number counting speech recordings. In this paper, we introduce the challenge, provide a detailed description of the task, and present a baseline system.

A Parameterised Quantum Circuit Approach to Point Set Matching

Feb 12, 2021
Mohammadreza Noormandipour, Hanchen Wang

Point set registration is one of the challenging tasks in areas such as pattern recognition, computer vision, and image processing. Efficient performance of this task has been a hot topic of research due to its widespread applications. We propose a parameterised quantum circuit learning approach to the point set matching problem. The proposed method benefits from a kernel-based quantum generative model that: 1) is able to find all possible optimal matching solution angles, 2) is potentially able to show quantum learning supremacy, and 3) benefits from kernel-embedding techniques and integral probability metrics for the definition of a powerful loss function. Moreover, the theoretical framework has been backed up by satisfactory preliminary and proof-of-concept experimental results.

* 10 pages, 3 figures 
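
Point 3 mentions kernel embeddings and integral probability metrics. As an illustration of one standard integral probability metric (not necessarily the paper's exact loss), a minimal Gaussian-kernel squared maximum mean discrepancy (MMD) between two 1-D samples:

```python
import math

def gaussian_kernel(x, y, sigma=1.0):
    return math.exp(-((x - y) ** 2) / (2 * sigma ** 2))

def mmd_squared(xs, ys, sigma=1.0):
    """Biased (V-statistic) estimate of squared MMD between samples xs, ys."""
    k = lambda a, b: gaussian_kernel(a, b, sigma)
    xx = sum(k(a, b) for a in xs for b in xs) / (len(xs) ** 2)
    yy = sum(k(a, b) for a in ys for b in ys) / (len(ys) ** 2)
    xy = sum(k(a, b) for a in xs for b in ys) / (len(xs) * len(ys))
    return xx + yy - 2 * xy
```

The MMD vanishes when the two samples coincide and grows as their distributions separate, which is what makes it usable as a training loss for a generative model.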

Quantifying the Effects of COVID-19 on Mental Health Support Forums

Sep 08, 2020
Laura Biester, Katie Matton, Janarthanan Rajendran, Emily Mower Provost, Rada Mihalcea

The COVID-19 pandemic, like many of the disease outbreaks that have preceded it, is likely to have a profound effect on mental health. Understanding its impact can inform strategies for mitigating negative consequences. In this work, we seek to better understand the effects of COVID-19 on mental health by examining discussions within mental health support communities on Reddit. First, we quantify the rate at which COVID-19 is discussed in each community, or subreddit, in order to understand levels of preoccupation with the pandemic. Next, we examine the volume of activity in order to determine whether the quantity of people seeking online mental health support has risen. Finally, we analyze how COVID-19 has influenced language use and topics of discussion within each subreddit.
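
The first step, quantifying how often COVID-19 is discussed in each subreddit, amounts to computing a mention rate; a minimal sketch with an assumed keyword list (the paper's actual lexicon may differ):

```python
def covid_mention_rate(posts, keywords=("covid", "coronavirus", "pandemic")):
    """Fraction of posts mentioning any pandemic-related keyword.
    The keyword list here is illustrative, not the paper's."""
    if not posts:
        return 0.0
    hits = sum(any(kw in post.lower() for kw in keywords) for post in posts)
    return hits / len(posts)
```

Comparing this rate across subreddits over time gives the per-community preoccupation measure the abstract describes.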

Artificial neural networks for neuroscientists: A primer

Jun 01, 2020
Guangyu Robert Yang, Xiao-Jing Wang

Artificial neural networks (ANNs) are essential tools in machine learning that are increasingly used for building computational models in neuroscience. Besides being powerful techniques for data analysis, ANNs provide a new approach for neuroscientists to build models that capture complex behaviors, neural activity, and connectivity, as well as to explore optimization in neural systems. In this pedagogical Primer, we introduce conventional ANNs and demonstrate how they have been deployed to study neuroscience questions. Next, we detail how to customize the analysis, structure, and learning of ANNs to better address a wide range of challenges in brain research. To help readers garner hands-on experience, this Primer is accompanied by tutorial-style code in PyTorch and Jupyter Notebook, covering major topics.
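
The accompanying tutorials are in PyTorch; as a dependency-light illustration of the conventional ANNs the Primer introduces, here is a tiny NumPy multilayer perceptron trained on XOR with hand-written backpropagation (architecture and hyperparameters are our own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One tanh hidden layer of 8 units, linear output
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

losses, lr = [], 0.1
for _ in range(500):
    h = np.tanh(X @ W1 + b1)          # hidden activations
    out = h @ W2 + b2                 # network output
    err = out - y
    losses.append(float((err ** 2).mean()))
    # Backpropagate through MSE, linear output, tanh hidden layer
    d_out = 2 * err / len(X)
    dW2 = h.T @ d_out; db2 = d_out.sum(0)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    dW1 = X.T @ d_h; db1 = d_h.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1
```

The same forward/backward structure is what a PyTorch `nn.Module` with `autograd` computes automatically in the Primer's tutorials.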

Low resource language dataset creation, curation and classification: Setswana and Sepedi -- Extended Abstract

Mar 30, 2020
Vukosi Marivate, Tshephisho Sefara, Vongani Chabalala, Keamogetswe Makhaya, Tumisho Mokgonyane, Rethabile Mokoena, Abiodun Modupe

The recent advances in Natural Language Processing have mostly benefited well-represented languages, neglecting research in lesser-known global languages. This is in part due to the availability of curated data and research resources. One of the current challenges concerning low-resourced languages is the lack of clear guidelines on the collection, curation, and preparation of datasets for different use-cases. In this work, we take on the task of creating two datasets focused on news headlines (i.e., short text) for Setswana and Sepedi, and the creation of a news topic classification task from these datasets. In this study, we document our work, propose baselines for classification, and investigate a data augmentation approach better suited to low-resourced languages in order to improve the performance of the classifiers.

* Accepted for the AfricaNLP workshop at ICLR 2020 
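
The abstract does not spell out the augmentation approach; as one generic technique commonly used for short-text augmentation (EDA-style random token swap, not necessarily the authors' method):

```python
import random

def random_swap(sentence, n_swaps=1, seed=None):
    """EDA-style augmentation for short text such as news headlines:
    swap n_swaps random pairs of tokens to create a new training example.
    (Generic technique; the paper's actual approach may differ.)"""
    rng = random.Random(seed)
    tokens = sentence.split()
    for _ in range(n_swaps):
        if len(tokens) < 2:
            break
        i, j = rng.sample(range(len(tokens)), 2)
        tokens[i], tokens[j] = tokens[j], tokens[i]
    return " ".join(tokens)
```

Such perturbations preserve the headline's vocabulary (and usually its topic label) while enlarging the small training set.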

LANCE: Efficient Low-Precision Quantized Winograd Convolution for Neural Networks Based on Graphics Processing Units

Mar 20, 2020
Guangli Li, Lei Liu, Xueying Wang, Xiu Ma, Xiaobing Feng

Accelerating deep convolutional neural networks has become an active research topic that has sparked interest in both academia and industry. In this paper, we propose an efficient low-precision quantized Winograd convolution algorithm, called LANCE, which combines the advantages of fast convolution and quantization techniques. By embedding linear quantization operations into the Winograd domain, the fast convolution can be performed efficiently under low-precision computation on graphics processing units. We test neural network models with LANCE on representative image classification datasets, including SVHN, CIFAR, and ImageNet. The experimental results show that our 8-bit quantized Winograd convolution improves performance by up to 2.40x over the full-precision convolution with trivial accuracy loss.

* Accepted by ICASSP 2020 
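
As a sketch of the ingredients LANCE combines, here is the standard 1-D Winograd F(2,3) transform (Lavin-Gray matrices) with an optional linear-quantization step applied to the transformed operands; the `scale` parameter and int8 rounding are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

# Winograd F(2,3): 2 outputs of a 3-tap filter from a 4-sample tile,
# using 4 element-wise multiplications instead of 6.
BT = np.array([[1, 0, -1, 0],
               [0, 1,  1, 0],
               [0, -1, 1, 0],
               [0, 1,  0, -1]], dtype=float)   # input transform
G = np.array([[1.0, 0.0, 0.0],
              [0.5, 0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0, 0.0, 1.0]])                # filter transform
AT = np.array([[1, 1, 1, 0],
               [0, 1, -1, -1]], dtype=float)   # output transform

def winograd_f23(d, g, scale=None):
    """Compute [d0*g0 + d1*g1 + d2*g2, d1*g0 + d2*g1 + d3*g2].
    If `scale` is given, transformed operands are linearly quantized
    to int8 range before the element-wise product and dequantized
    afterwards (rough stand-in for quantizing in the Winograd domain)."""
    U, V = G @ g, BT @ d
    if scale is not None:
        Uq = np.clip(np.round(U / scale), -127, 127)
        Vq = np.clip(np.round(V / scale), -127, 127)
        M = (Uq * Vq) * scale * scale   # low-precision multiply, dequantize
    else:
        M = U * V
    return AT @ M
```

Quantizing after the input and filter transforms, as sketched here, is what lets the element-wise products run in low precision while the cheap transforms stay exact.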
