
"Topic": models, code, and papers

A novel method for identifying the deep neural network model with the Serial Number

Nov 19, 2019
XiangRui Xu, YaQin Li, Cao Yuan

Deep neural networks (DNNs) with state-of-the-art performance have emerged as a viable and lucrative business service. However, such impressive performance requires substantial computational resources, which come at a high cost for the model creators. The necessity of protecting DNN models from illegal reproduction and distribution is now salient. Recently, trigger-set watermarking, which breaks the white-box restriction by adversarially training pre-defined (incorrect) labels for crafted inputs and subsequently using them to verify model authenticity, has been the main topic of DNN ownership verification. While these methods have successfully demonstrated robustness against removal attacks, few are effective against tampering attacks from competitors who forge fake watermarks and play dog in the manger. In this paper, we put forth a new trigger-set watermarking framework that embeds a unique Serial Number (with little relatedness to the original labels) into the deep neural network for model ownership identification, which is both robust to model pruning and resistant to tampering attacks. Experimental results demonstrate that the DNN Serial Number incurs only a slight degradation of the original accuracy and is valid for ownership verification.
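For illustration only, the verification side of a trigger-set scheme can be sketched as below; the trigger construction, the lookup-table "model", and the 90% match threshold are assumptions of this sketch, not details from the paper.

```python
# Sketch of trigger-set ownership verification (illustrative assumptions
# throughout): the owner keeps crafted inputs whose assigned labels encode a
# serial number and checks how often a suspect model reproduces those labels.
import random

def make_trigger_set(serial_number, n_classes=10, seed=0):
    """Map each digit of the serial number to a (trigger, label) pair."""
    rng = random.Random(seed)
    triggers = []
    for digit in str(serial_number):
        pattern = tuple(rng.random() for _ in range(8))  # crafted input
        triggers.append((pattern, int(digit) % n_classes))
    return triggers

def verify_ownership(model, trigger_set, threshold=0.9):
    """Claim ownership if the model reproduces enough trigger labels."""
    hits = sum(1 for x, y in trigger_set if model(x) == y)
    return hits / len(trigger_set) >= threshold

# A toy "watermarked" model that memorised the triggers, vs. an unrelated one.
triggers = make_trigger_set(20191119)
memorised = dict(triggers)
assert verify_ownership(lambda x: memorised.get(x, 0), triggers)
assert not verify_ownership(lambda x: 0, triggers)
```

The paper's tamper resistance comes from how the Serial Number itself is constructed; this sketch only shows the match-rate test common to trigger-set schemes.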

* 9 pages, 9 figures, conference 


Predicting the Politics of an Image Using Webly Supervised Data

Oct 31, 2019
Christopher Thomas, Adriana Kovashka

The news media shape public opinion, and often, the visual bias they contain is evident for human observers. This bias can be inferred from how different media sources portray different subjects or topics. In this paper, we model visual political bias in contemporary media sources at scale, using webly supervised data. We collect a dataset of over one million unique images and associated news articles from left- and right-leaning news sources, and develop a method to predict the image's political leaning. This problem is particularly challenging because of the enormous intra-class visual and semantic diversity of our data. We propose a two-stage method to tackle this problem. In the first stage, the model is forced to learn relevant visual concepts that, when joined with document embeddings computed from articles paired with the images, enable the model to predict bias. In the second stage, we remove the requirement of the text domain and train a visual classifier from the features of the former model. We show this two-stage approach facilitates learning and outperforms several strong baselines. We also present extensive qualitative results demonstrating the nuances of the data.
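As a rough sketch of the two-stage idea (toy linear models and data, not the paper's networks): a "teacher" scores bias from image features joined with a document embedding, then an image-only "student" is fit to the teacher's outputs, removing the text requirement at test time.

```python
# Toy two-stage sketch (illustrative, not the paper's method): stage 1 scores
# bias from image features joined with a document embedding; stage 2 trains an
# image-only model to reproduce stage 1's scores.

def teacher_score(img, doc, w_img, w_doc):
    """Stage 1: linear score over joined image + text features."""
    return sum(a * b for a, b in zip(img, w_img)) + \
           sum(a * b for a, b in zip(doc, w_doc))

def fit_student(samples, w_img, w_doc, lr=0.1, epochs=500):
    """Stage 2: least-squares fit of an image-only model to teacher scores."""
    w = [0.0] * len(samples[0][0])
    for _ in range(epochs):
        for img, doc in samples:
            pred = sum(a * b for a, b in zip(img, w))
            err = pred - teacher_score(img, doc, w_img, w_doc)
            w = [wi - lr * err * xi for wi, xi in zip(w, img)]
    return w

# When the text signal is predictable from the image, the student can match it:
# here doc[0] equals img[0], so the student learns to double its first feature.
samples = [([1.0, 0.0], [1.0]), ([0.0, 1.0], [0.0]), ([1.0, 1.0], [1.0])]
student = fit_student(samples, w_img=[1.0, 0.0], w_doc=[1.0])
```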

* 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada 


Stable and Fair Classification

Feb 26, 2019
Lingxiao Huang, Nisheeth K. Vishnoi

Fair classification has been a topic of intense study in machine learning, and several algorithms have been proposed towards this important task. However, in a recent study, Friedler et al. observed that fair classification algorithms may not be stable with respect to variations in the training dataset -- a crucial consideration in several real-world applications. Motivated by their work, we study the problem of designing classification algorithms that are both fair and stable. We propose an extended framework based on fair classification algorithms that are formulated as optimization problems, by introducing a stability-focused regularization term. Theoretically, we prove a stability guarantee that was lacking in fair classification algorithms, and also provide an accuracy guarantee for our extended framework. Our accuracy guarantee can be used to inform the selection of the regularization parameter. To the best of our knowledge, this is the first work that combines stability and fairness in automated decision-making tasks. We assess the benefits of our approach empirically by extending several fair classification algorithms that are shown to achieve the best balance between fairness and accuracy on the Adult dataset. Our empirical results show that our framework indeed improves stability at only a slight sacrifice in accuracy.
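The effect of a stability-focused regularization term can be seen in a one-dimensional toy example (an illustration of the general idea, not the paper's objective): adding lam * w^2 to a squared loss shrinks how far the fitted parameter moves when the training set is perturbed.

```python
# Illustrative toy (not the paper's formulation): a stability regularizer
# lam * w^2 added to a squared loss makes the fitted parameter less sensitive
# to perturbations of the training data.

def fit(data, lam, lr=0.1, steps=300):
    """Minimise mean((w - x)^2) + lam * w^2 by gradient descent (1-D toy)."""
    w = 0.0
    for _ in range(steps):
        grad = sum(2 * (w - x) for x in data) / len(data) + 2 * lam * w
        w -= lr * grad
    return w

base = [1.0, 2.0, 3.0]            # original training data
perturbed = [1.0, 2.0, 3.0, 6.0]  # the same data with one extra point
gap_plain = abs(fit(base, lam=0.0) - fit(perturbed, lam=0.0))
gap_stable = abs(fit(base, lam=4.0) - fit(perturbed, lam=4.0))
# With the regularizer, the fitted parameter moves less across the two datasets.
```

Here the unregularized solutions are the two dataset means (gap 1.0), while the regularized solutions are the means shrunk by 1/(1 + lam) (gap 0.2), mirroring the stability-versus-accuracy trade-off the abstract describes.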


Sign Language Representation by TEO Humanoid Robot: End-User Interest, Comprehension and Satisfaction

Jan 17, 2019
Jennifer J. Gago, Juan G. Victores, Carlos Balaguer

In this paper, we illustrate our work on improving the accessibility of Cyber-Physical Systems (CPS), presenting a study on human-robot interaction where the end-users are either deaf or hearing-impaired people. Current trends in robotic designs include devices with robotic arms and hands capable of performing manipulation and grasping tasks. This paper focuses on how these devices can be used for a different purpose: enabling robotic communication via sign language. For the study, several tests and questionnaires are run to check and measure how end-users feel about interpreting sign language represented by a humanoid robotic assistant as opposed to subtitles on a screen. Stemming from this dichotomy, dactylology, basic vocabulary representation and end-user satisfaction are the main topics covered by a delivered form, in which additional comments are welcomed and taken into consideration for further decision making regarding human-robot interaction. The experiments were performed using TEO, a household companion humanoid robot developed at the University Carlos III de Madrid (UC3M), via representations in Spanish Sign Language (LSE), with a total of 16 deaf and hearing-impaired participants.

* Electronics, 8(1), 57 (2019) 
* 21 pages, 11 figures, MDPI Electronics Journal 


Interactive Image Segmentation using Label Propagation through Complex Networks

Jan 09, 2019
Fabricio Aparecido Breve

Interactive image segmentation is the topic of many studies in image processing. In a conventional approach, a user marks some pixels of the object(s) of interest and of the background, and an algorithm propagates these labels to the rest of the image. This paper presents a new graph-based method for interactive segmentation with two stages. In the first stage, nodes representing pixels are connected to their $k$-nearest neighbors to build a complex network with the small-world property, which propagates the labels quickly. In the second stage, a regular network in a grid format is used to refine the segmentation on the object borders. Despite its simplicity, the proposed method performs the task with high accuracy. Computer simulations on real-world images show its effectiveness in both two-class and multi-class problems. It is also applied to all the images from the Microsoft GrabCut dataset for comparison; the segmentation accuracy is comparable to that achieved by some state-of-the-art methods, while the method is faster. In particular, it outperforms some recent approaches when the user input is composed of only a few "scribbles" drawn over the objects. Its computational complexity is linear in the image size in the best case and linearithmic in the worst case.
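A minimal sketch of the first stage's label propagation (1-D toy features and simple majority voting; the paper's network construction and second-stage grid refinement are not reproduced here):

```python
# Toy graph-based label propagation: connect each node to its k nearest
# neighbours by feature distance, then let unlabelled nodes repeatedly take
# the majority label among their neighbours until labels stop changing.
from collections import Counter

def knn_edges(features, k):
    """Edges from each node to its k nearest neighbours (1-D features here)."""
    edges = {}
    for i, fi in enumerate(features):
        dists = sorted((abs(fi - fj), j) for j, fj in enumerate(features) if j != i)
        edges[i] = [j for _, j in dists[:k]]
    return edges

def propagate(features, seeds, k=2, max_iters=100):
    """Spread seed labels ("scribbles") over the k-NN graph."""
    labels = dict(seeds)            # node -> label for user-marked pixels
    edges = knn_edges(features, k)
    for _ in range(max_iters):
        changed = False
        for i in range(len(features)):
            if i in seeds:
                continue            # seed labels stay fixed
            votes = Counter(labels[j] for j in edges[i] if j in labels)
            if votes:
                best = votes.most_common(1)[0][0]
                if labels.get(i) != best:
                    labels[i] = best
                    changed = True
        if not changed:
            break
    return labels

# Two well-separated feature clusters, one seed each: labels fill each cluster.
result = propagate([0.0, 0.1, 0.2, 5.0, 5.1, 5.2], {0: "obj", 5: "bg"})
```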

* Paper accepted for publication in Expert Systems With Applications 


Nonnegative Matrix Factorization for Signal and Data Analytics: Identifiability, Algorithms, and Applications

Oct 18, 2018
Xiao Fu, Kejun Huang, Nicholas D. Sidiropoulos, Wing-Kin Ma

Nonnegative matrix factorization (NMF) has become a workhorse for signal and data analytics, triggered by its model parsimony and interpretability. Perhaps a bit surprisingly, the understanding of its model identifiability---the major reason behind the interpretability in many applications such as topic mining and hyperspectral imaging---had been rather limited until recent years. Beginning in the 2010s, identifiability research on NMF has progressed considerably: many interesting and important results have been discovered by the signal processing (SP) and machine learning (ML) communities. NMF identifiability has a great impact on many aspects of practice, such as avoiding ill-posed formulations and designing performance-guaranteed algorithms. On the other hand, there is no tutorial paper that introduces NMF from an identifiability viewpoint. In this paper, we aim to fill this gap by offering a comprehensive and deep tutorial on the model identifiability of NMF, as well as its connections to algorithms and applications. This tutorial will help researchers and graduate students grasp the essence and insights of NMF, thereby avoiding typical "pitfalls" that are oftentimes due to unidentifiable NMF formulations. This paper will also help practitioners pick or design suitable factorization tools for their own problems.
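For context, a classic multiplicative-update NMF baseline (Lee-Seung style; the matrix, rank, and iteration count are illustrative). The identifiability question the tutorial addresses is when such a factorization X ≈ WH is essentially unique.

```python
# Multiplicative-update NMF for the Frobenius loss: updates keep W and H
# nonnegative because they only multiply by nonnegative ratios.
import numpy as np

def nmf(X, r, iters=2000, eps=1e-9, seed=0):
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r)) + 0.1   # positive initialisation
    H = rng.random((r, n)) + 0.1
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # update H, stays nonnegative
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # update W, stays nonnegative
    return W, H

# A rank-2 nonnegative matrix: an exact rank-2 NMF exists and is recovered.
X = np.array([[1., 2., 3.], [2., 4., 6.], [1., 1., 1.]])
W, H = nmf(X, r=2)
```

Note that without identifiability conditions, the W and H returned here are only one of many equally good factorizations, which is exactly the interpretability concern the tutorial surveys.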

* accepted version, IEEE Signal Processing Magazine 


Belittling the Source: Trustworthiness Indicators to Obfuscate Fake News on the Web

Sep 03, 2018
Diego Esteves, Aniketh Janardhan Reddy, Piyush Chawla, Jens Lehmann

With the growth of the internet, the amount of fake news online has proliferated every year. The consequences of this phenomenon are manifold, ranging from poor decision-making to episodes of bullying and violence. Therefore, fact-checking algorithms have become a valuable asset. To this aim, an important step in detecting fake news is to have access to a credibility score for a given information source. However, most of the widely used Web indicators have either been shut down to the public (e.g., Google PageRank) or are not free to use (Alexa Rank). Furthermore, existing databases are short, manually curated lists of online sources, which do not scale. Finally, most of the research on the topic is theoretical or explores confidential data in a restricted simulation environment. In this paper we explore current research, highlight the challenges, and propose solutions to tackle the problem of classifying websites on a credibility scale. The proposed model automatically extracts source reputation cues and computes a credibility factor, providing valuable insights which can help in belittling dubious websites and confirming trustworthy unknown ones. Experimental results outperform the state of the art in the 2-class and 5-class settings.
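Purely as an illustration of the "reputation cues → credibility factor → credibility scale" pipeline, with invented cue names and weights that do not reproduce the paper's features or trained model:

```python
# Hypothetical credibility scoring: weighted source reputation cues are
# combined into a 0-1 credibility factor, then mapped onto an n-class scale.
# Cue names and weights are invented for this sketch.

CUE_WEIGHTS = {"uses_https": 0.2, "has_contact_page": 0.2,
               "domain_age_years": 0.04, "spelling_error_rate": -1.0}

def credibility_factor(cues):
    """Weighted sum of cues, clamped to a 0-1 credibility factor."""
    score = sum(CUE_WEIGHTS[k] * v for k, v in cues.items())
    return max(0.0, min(1.0, score))

def to_scale(score, n_classes=5):
    """Map the 0-1 factor onto an n-class scale (1 = least credible)."""
    return min(n_classes, int(score * n_classes) + 1)

trustworthy = {"uses_https": 1, "has_contact_page": 1,
               "domain_age_years": 12, "spelling_error_rate": 0.0}
dubious = {"uses_https": 0, "has_contact_page": 0,
           "domain_age_years": 1, "spelling_error_rate": 0.3}
```

In the paper the factor is produced by a trained model over automatically extracted cues rather than hand-set weights; this sketch only shows the shape of the pipeline.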

* EMNLP 2018: Conference on Empirical Methods in Natural Language Processing (The First Workshop on Fact Extraction and Verification) 


Small Sample Learning in Big Data Era

Aug 22, 2018
Jun Shu, Zongben Xu, Deyu Meng

As a promising area in artificial intelligence, a new learning paradigm, called Small Sample Learning (SSL), has been attracting prominent research attention in recent years. In this paper, we present a survey that comprehensively introduces the current techniques proposed on this topic. Specifically, current SSL techniques can be mainly divided into two categories. The first category of SSL approaches can be called "concept learning", which emphasizes learning new concepts from only a few related observations. Its purpose is mainly to simulate human learning behaviors such as recognition, generation, imagination, synthesis and analysis. The second category is called "experience learning", which usually co-exists with the large-sample learning manner of conventional machine learning. This category mainly focuses on learning with insufficient samples, and is also called small data learning in some of the literature. More extensive surveys of both categories of SSL techniques are presented, and some neuroscience evidence is provided to clarify the rationality of the entire SSL regime and its relationship with the human learning process. Some discussions on the main challenges and possible future research directions along this line are also presented.

* 76 pages, 15 figures, survey of small sample learning 


Weight-based Fish School Search algorithm for Many-Objective Optimization

Mar 28, 2018
F. B. Lima Neto, I. M. C. Albuquerque, J. B. Monteiro Filho

Optimization problems with more than one objective constitute a very attractive topic for researchers due to their applicability in real-world situations. Over the years, research efforts in the Computational Intelligence field have resulted in algorithms able to achieve good results on problems with more than one conflicting objective. However, these techniques do not exhibit the same performance as the number of objectives increases beyond three. This paper proposes an adaptation of the metaheuristic Fish School Search to solve optimization problems with many objectives. The adaptation is based on dividing the school into clusters that specialize in solving single-objective problems generated by the decomposition of the original problem. For this, we use concepts and ideas often employed by state-of-the-art algorithms, namely: (i) reference points and lines in the objective space; (ii) a clustering process; and (iii) the decomposition technique Penalty-based Boundary Intersection. The proposed algorithm is compared with two state-of-the-art bio-inspired algorithms. Moreover, a version of the proposed technique tailored to solve multi-modal problems is also presented. The experiments show that the performance obtained by both versions is competitive with state-of-the-art results.
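The Penalty-based Boundary Intersection (PBI) scalarization mentioned in the abstract has a standard form: for a reference direction w, d1 measures progress along w and d2 penalizes deviation from it. A sketch (theta = 5 is a common default in the decomposition literature, not necessarily the paper's setting):

```python
# Standard PBI scalarization: objective vector f is scored as d1 + theta * d2,
# where d1 is the projection length of f onto the reference direction w and
# d2 is the perpendicular distance from f to that direction.
import math

def pbi(objectives, weight, theta=5.0):
    norm_w = math.sqrt(sum(wi * wi for wi in weight))
    d1 = sum(fi * wi for fi, wi in zip(objectives, weight)) / norm_w
    d2 = math.sqrt(sum((fi - d1 * wi / norm_w) ** 2
                       for fi, wi in zip(objectives, weight)))
    return d1 + theta * d2

# A point on the reference line pays no penalty; an off-line point pays
# theta times its perpendicular distance.
on_line = pbi([2.0, 0.0], [1.0, 0.0])   # d1 = 2, d2 = 0
off_line = pbi([2.0, 1.0], [1.0, 0.0])  # d1 = 2, d2 = 1
```

Each cluster of the school minimizes one such scalarized value for its own reference line, which is the single-objective subproblem the abstract refers to.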
