"Topic": models, code, and papers

A new interpretable unsupervised anomaly detection method based on residual explanation

Mar 14, 2021
David F. N. Oliveira, Lucio F. Vismari, Alexandre M. Nascimento, Jorge R. de Almeida Jr, Paulo S. Cugnasca, Joao B. Camargo Jr, Leandro Almeida, Rafael Gripp, Marcelo Neves

Despite their superior performance in modeling complex patterns to address challenging problems, the black-box nature of Deep Learning (DL) methods imposes limitations on their application in real-world critical domains. The lack of a straightforward way to enable human reasoning about black-box decisions hinders any preventive action against unexpected events, which may lead to catastrophic consequences. To tackle the opacity of black-box models, interpretability has become a fundamental requirement in DL-based systems, fostering trust and knowledge by providing ways to understand the model's behavior. Although a current hot topic, further advances are still needed to overcome the limitations of existing interpretability methods for unsupervised DL-based models for Anomaly Detection (AD). Autoencoders (AE) are the core of unsupervised DL-based AD applications, achieving best-in-class performance. However, because they obtain their results in a hybrid manner (requiring additional calculations outside the network), only model-agnostic interpretability methods can be applied to AE-based AD, and these agnostic methods are computationally expensive when processing a large number of parameters. In this paper we present RXP (Residual eXPlainer), a new interpretability method that addresses these limitations of AE-based AD in large-scale systems. It stands out for its implementation simplicity, low computational cost and deterministic behavior, with explanations obtained through deviation analysis of reconstructed input features. In an experiment using data from a real heavy-haul railway line, the proposed method achieved superior performance compared to SHAP, demonstrating its potential to support decision making in large-scale critical systems.
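
The abstract does not spell out RXP's exact computation, but a minimal sketch of the general idea it describes, ranking input features by how badly a trained autoencoder reconstructs them, could look like the following (the Keras-style `predict` call and the `top_k` parameter are illustrative assumptions, not details from the paper):

```python
import numpy as np

def residual_explanation(autoencoder, x, top_k=5):
    """Rank the input features of one sample by absolute reconstruction error.

    Sketch of residual-based explanation for autoencoder anomaly detection:
    the features the autoencoder reconstructs worst are reported as the most
    likely drivers of the sample's anomaly score.
    """
    x = np.asarray(x, dtype=np.float32)
    x_hat = autoencoder.predict(x[None, :])[0]   # reconstruction of the sample
    residuals = np.abs(x - x_hat)                # per-feature deviation
    ranking = np.argsort(residuals)[::-1]        # most deviant features first
    return [(int(i), float(residuals[i])) for i in ranking[:top_k]]
```

Because the explanation reuses the reconstruction the detector already computes, it is deterministic and adds almost no cost on top of scoring, which is the selling point the abstract emphasizes over sampling-based methods such as SHAP.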

* 8 pages 
  
Access Paper or Ask Questions

Height estimation from single aerial images using a deep ordinal regression network

Jun 04, 2020
Xiang Li, Mingyang Wang, Yi Fang

Understanding the 3D geometric structure of the Earth's surface has been an active research topic in the photogrammetry and remote sensing community for decades, serving as an essential building block for various applications such as 3D digital city modeling, change detection, and city management. Previous research has extensively studied the problem of height estimation from aerial images based on stereo or multi-view image matching. These methods require two or more images from different perspectives to reconstruct 3D coordinates, with camera information provided. In this paper, we deal with the ambiguous and unsolved problem of height estimation from a single aerial image. Driven by the great success of deep learning, especially deep convolutional neural networks (CNNs), some studies have proposed to estimate height information from a single aerial image by training a deep CNN model with large-scale annotated datasets. These methods treat height estimation as a regression problem and directly use an encoder-decoder network to regress the height values. In this paper, we propose to divide height values into spacing-increasing intervals and transform the regression problem into an ordinal regression problem, using an ordinal loss for network training. To enable multi-scale feature extraction, we further incorporate an Atrous Spatial Pyramid Pooling (ASPP) module to extract features from multiple dilated convolution layers. After that, a post-processing technique is designed to transform the predicted height map of each patch into a seamless height map. Finally, we conduct extensive experiments on the ISPRS Vaihingen and Potsdam datasets. Experimental results demonstrate significantly better performance of our method compared to state-of-the-art methods.
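
The key step in the abstract, discretizing continuous height values into spacing-increasing intervals so that regression becomes ordinal classification, can be illustrated with a small hedged sketch (the log-spaced interval scheme and the height range below are common assumptions, not values taken from the paper):

```python
import numpy as np

def height_to_ordinal_bins(height, num_bins=64, h_min=0.0, h_max=50.0):
    """Map continuous heights (e.g. metres) to spacing-increasing ordinal bins.

    Bin edges grow logarithmically, so small heights get fine intervals and
    large heights get coarse ones; the network then predicts, for each edge k,
    whether the true height exceeds it (ordinal regression).
    """
    # log-spaced edges between h_min and h_max (shifted to avoid log(0))
    edges = np.exp(np.linspace(np.log(1.0),
                               np.log(1.0 + h_max - h_min),
                               num_bins + 1)) - 1.0 + h_min
    height = np.clip(height, h_min, h_max)
    bin_index = np.searchsorted(edges, height, side="right") - 1
    return np.clip(bin_index, 0, num_bins - 1), edges
```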

* 5 pages, 3 figures 
  
Access Paper or Ask Questions

Elementos da teoria de aprendizagem de máquina supervisionada

Oct 06, 2019
Vladimir G. Pestov

This is a set of lecture notes for an introductory course (advanced undergraduates or the 1st graduate course) on foundations of supervised machine learning (in Portuguese). The topics include: the geometry of the Hamming cube, concentration of measure, shattering and VC dimension, Glivenko-Cantelli classes, PAC learnability, universal consistency and the k-NN classifier in metric spaces, dimensionality reduction, universal approximation, sample compression. There are appendices on metric and normed spaces, measure theory, etc., making the notes self-contained. Este é um conjunto de notas de aula para um curso introdutório (curso de graduação avançado ou o 1o curso de pós) sobre fundamentos da aprendizagem de máquina supervisionada (em Português). Os tópicos incluem: a geometria do cubo de Hamming, concentração de medida, fragmentação e dimensão de Vapnik-Chervonenkis, classes de Glivenko-Cantelli, aprendizabilidade PAC, consistência universal e o classificador k-NN em espaços métricos, redução de dimensionalidade, aproximação universal, compressão amostral. Há apêndices sobre espaços métricos e normados, teoria de medida, etc., tornando as notas autosuficientes.

* 390 pp. + vii, in Portuguese, a preliminary version, to be published by IMPA as a book of lectures of the 23rd Brazilian Math Colloquium (July 28 - Aug 2, 2019), submitted to arXiv with IMPA's permission 
  
Access Paper or Ask Questions

Image and Video Compression with Neural Networks: A Review

Apr 10, 2019
Siwei Ma, Xinfeng Zhang, Chuanmin Jia, Zhenghui Zhao, Shiqi Wang, Shanshe Wang

In recent years, image and video coding technologies have advanced by leaps and bounds. However, due to the popularization of image and video acquisition devices, the growth rate of image and video data is far beyond the improvement of the compression ratio. In particular, it has been widely recognized that there are increasing challenges in pursuing further coding performance improvement within the traditional hybrid coding framework. Deep convolutional neural networks (CNNs), which have driven the resurgence of neural networks in recent years and achieved great success in both the artificial intelligence and signal processing fields, also provide a novel and promising solution for image and video compression. In this paper, we provide a systematic, comprehensive and up-to-date review of neural network based image and video compression techniques. The evolution and development of neural network based compression methodologies are introduced for images and video respectively. More specifically, the cutting-edge video coding techniques that leverage deep learning and the HEVC framework are presented and discussed, which substantially promote the state-of-the-art video coding performance. Moreover, the end-to-end image and video coding frameworks based on neural networks are also reviewed, revealing interesting explorations of next-generation image and video coding frameworks/standards. The most significant research works on image and video coding related topics using neural networks are highlighted, and future trends are also envisioned. In particular, the joint compression of semantic and visual information is tentatively explored to formulate a high-efficiency signal representation structure for both human vision and machine vision, which are the two dominant signal receptors in the age of artificial intelligence.
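
As a rough illustration of the end-to-end neural coding frameworks the review surveys, a toy autoencoder-style codec with a differentiable stand-in for quantization is sketched below in PyTorch; the layer layout and channel count are assumptions for illustration, not any specific codec from the paper:

```python
import torch
import torch.nn as nn

class ToyImageCodec(nn.Module):
    """Toy learned image codec: analysis transform, quantization approximated
    by additive uniform noise during training, and a synthesis transform."""
    def __init__(self, channels=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, channels, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(channels, channels, 5, stride=2, padding=2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(channels, channels, 5, stride=2, padding=2,
                               output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(channels, 3, 5, stride=2, padding=2,
                               output_padding=1),
        )

    def forward(self, x):
        y = self.encoder(x)
        y_hat = y + torch.rand_like(y) - 0.5   # differentiable quantization proxy
        x_hat = self.decoder(y_hat)
        distortion = nn.functional.mse_loss(x_hat, x)
        return x_hat, distortion
```

A real learned codec would add an entropy model over the latents and train on a rate-distortion objective; this sketch only shows the analysis/synthesis skeleton the surveyed frameworks share.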

* Accepted by IEEE Transactions on Circuits and Systems for Video Technology (T-CSVT) as a transactions paper 
  
Access Paper or Ask Questions

Implementation of Robust Face Recognition System Using Live Video Feed Based on CNN

Nov 18, 2018
Yang Li, Sangwhan Cha

The way to accurately and effectively identify people has always been an interesting topic in both research and industry. With the rapid development of artificial intelligence in recent years, facial recognition has gained a lot of attention, prompting the development of emerging identification methods. Compared to traditional card recognition, fingerprint recognition and iris recognition, face recognition has many advantages, including a non-contact interface, high concurrency, and user-friendly usage. It has high potential to be used in government, public facilities, security, e-commerce, retailing, education and many other fields. With the development of deep learning and the introduction of deep convolutional neural networks, the accuracy and speed of face recognition have made great strides. However, the results from different networks and models vary considerably with the system architecture. Furthermore, a face recognition system with a video feed could require a significant amount of data storage space and data processing time if it stores images and features of human faces. In this paper, facial features are extracted by merging and comparing multiple models, and a deep neural network is then constructed to train on the combined features. In this way, the advantages of multiple models can be combined to improve the recognition accuracy. After obtaining a model with high accuracy, we build a production model. The model takes a human face image and maps it to a vector; the distances between vectors are then compared to determine whether two faces in different pictures belong to the same person. Under our proposed system architecture, this approach reduces the data storage space and data processing time required by a face recognition system with a video feed.
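
The verification step the abstract describes, mapping each face to a vector and thresholding the distance between vectors, can be sketched as follows (the `embed` callable and the threshold value are placeholders, not details from the paper):

```python
import numpy as np

def same_person(embed, image_a, image_b, threshold=0.8):
    """Decide whether two face images show the same person by comparing
    the Euclidean distance between their (L2-normalised) embedding vectors."""
    va = np.asarray(embed(image_a), dtype=np.float32)
    vb = np.asarray(embed(image_b), dtype=np.float32)
    va /= np.linalg.norm(va)   # normalise so the distance ignores embedding scale
    vb /= np.linalg.norm(vb)
    return float(np.linalg.norm(va - vb)) < threshold
```

Storing only the embedding vectors rather than raw frames is what gives the storage and processing savings the abstract claims for a live video feed.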

  
Access Paper or Ask Questions

From Free Text to Clusters of Content in Health Records: An Unsupervised Graph Partitioning Approach

Nov 14, 2018
M. Tarik Altuncu, Erik Mayer, Sophia N. Yaliraki, Mauricio Barahona

Electronic healthcare records contain large volumes of unstructured data in different forms. Free text constitutes a large portion of such data, yet this source of richly detailed information often remains under-used in practice because of a lack of suitable methodologies to extract interpretable content in a timely manner. Here we apply network-theoretical tools to the analysis of free text in Hospital Patient Incident reports in the English National Health Service, to find clusters of reports in an unsupervised manner and at different levels of resolution, based directly on the free-text descriptions contained within them. To do so, we combine recently developed deep neural network text-embedding methodologies based on paragraph vectors with multi-scale Markov Stability community detection applied to a similarity graph of documents obtained from sparsified text-vector similarities. We showcase the approach with the analysis of incident reports submitted in Imperial College Healthcare NHS Trust, London. The multiscale community structure reveals levels of meaning with different resolution in the topics of the dataset, as shown by relevant descriptive terms extracted from the groups of records, as well as by comparing a posteriori against hand-coded categories assigned by healthcare personnel. Our content communities exhibit good correspondence with well-defined hand-coded categories, yet our results also provide further medical detail in certain areas as well as revealing complementary descriptors of incidents beyond the external classification. We also discuss how the method can be used to monitor reports over time and across different healthcare providers, and to detect emerging trends that fall outside of pre-existing categories.
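
A schematic of the pipeline the abstract describes, paragraph-vector embeddings, a sparsified similarity graph, then graph community detection, might look like the sketch below. Note that greedy modularity optimisation stands in here for the paper's multi-scale Markov Stability method, which is a different algorithm, and all parameter values are illustrative:

```python
import numpy as np
import networkx as nx
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from networkx.algorithms.community import greedy_modularity_communities
from sklearn.metrics.pairwise import cosine_similarity

def cluster_reports(texts, k_neighbours=5):
    """Embed free-text reports with paragraph vectors, build a sparsified
    similarity graph, and partition it into content communities."""
    docs = [TaggedDocument(t.lower().split(), [i]) for i, t in enumerate(texts)]
    model = Doc2Vec(docs, vector_size=100, min_count=1, epochs=20)
    vectors = np.array([model.dv[i] for i in range(len(texts))])

    sims = cosine_similarity(vectors)
    graph = nx.Graph()
    graph.add_nodes_from(range(len(texts)))
    for i, row in enumerate(sims):
        # keep only each report's strongest neighbours (sparsification)
        for j in np.argsort(row)[::-1][1:k_neighbours + 1]:
            graph.add_edge(i, int(j), weight=float(row[j]))

    communities = greedy_modularity_communities(graph, weight="weight")
    return [sorted(c) for c in communities]
```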

* 25 pages, 2 tables, 8 figures and 5 supplementary figures 
  
Access Paper or Ask Questions

A Robust Real-Time Automatic License Plate Recognition Based on the YOLO Detector

Apr 28, 2018
Rayson Laroca, Evair Severo, Luiz A. Zanlorensi, Luiz S. Oliveira, Gabriel Resende Gonçalves, William Robson Schwartz, David Menotti

Automatic License Plate Recognition (ALPR) has been a frequent topic of research due to its many practical applications. However, many of the current solutions are still not robust in real-world situations, commonly depending on many constraints. This paper presents a robust and efficient ALPR system based on the state-of-the-art YOLO object detector. The Convolutional Neural Networks (CNNs) are trained and fine-tuned for each ALPR stage so that they are robust under different conditions (e.g., variations in camera, lighting, and background). Especially for character segmentation and recognition, we design a two-stage approach employing simple data augmentation tricks such as inverted License Plates (LPs) and flipped characters. The resulting ALPR approach achieved impressive results on two datasets. First, on the SSIG dataset, composed of 2,000 frames from 101 vehicle videos, our system achieved a recognition rate of 93.53% at 47 Frames Per Second (FPS), performing better than both the Sighthound and OpenALPR commercial systems (89.80% and 93.03%, respectively) and considerably outperforming previous results (81.80%). Second, targeting a more realistic scenario, we introduce a larger public dataset, called the UFPR-ALPR dataset, designed for ALPR. This dataset contains 150 videos and 4,500 frames captured while both the camera and the vehicles are moving, and it contains different types of vehicles (cars, motorcycles, buses and trucks). On our proposed dataset, the trial versions of the commercial systems achieved recognition rates below 70%. In contrast, our system performed better, with a recognition rate of 78.33% at 35 FPS.
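
The abstract only names the augmentation tricks; one plausible reading of the flipped-character idea is sketched below, where a mirrored character crop is kept only if the mirrored glyph still reads as a valid character. The mapping is a placeholder for illustration, not the set used in the paper:

```python
import numpy as np

# Characters whose mirrored glyph can still pass for a valid character;
# this particular mapping is a hypothetical example, not from the paper.
FLIP_MAP = {"0": "0", "1": "1", "8": "8", "H": "H", "I": "I", "O": "O"}

def augment_character(char_image, char_label):
    """Return the original character crop plus, when the label allows it,
    a horizontally mirrored copy labelled with the character it now reads as."""
    samples = [(char_image, char_label)]
    if char_label in FLIP_MAP:
        samples.append((np.fliplr(char_image).copy(), FLIP_MAP[char_label]))
    return samples
```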

* Accepted for presentation at the International Joint Conference on Neural Networks (IJCNN) 2018 
  
Access Paper or Ask Questions

YouTube-8M: A Large-Scale Video Classification Benchmark

Sep 27, 2016
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan

Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there is no video classification dataset of comparable size. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video) annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters whether the labels are visually recognizable. Then, we decoded each video at one frame per second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
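
The feature pipeline the abstract outlines, decode frames at one per second and take the activations of an ImageNet-pretrained CNN just before its classification layer, might be sketched as follows; torchvision's Inception v3 is used here as a stand-in backbone, and the exact preprocessing is an assumption:

```python
import torch
import torchvision.models as models
import torchvision.transforms as T

# ImageNet-pretrained backbone with its classification layer removed,
# so the forward pass returns the pre-classifier hidden representation.
backbone = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize(299), T.CenterCrop(299), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def frame_features(frames):
    """frames: list of PIL images sampled at one frame per second."""
    batch = torch.stack([preprocess(f) for f in frames])
    return backbone(batch)   # one 2048-d feature vector per frame
```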

* 10 pages 
  
Access Paper or Ask Questions

Video Capsule Endoscopy Classification using Focal Modulation Guided Convolutional Neural Network

Jun 16, 2022
Abhishek Srivastava, Nikhil Kumar Tomar, Ulas Bagci, Debesh Jha

Video capsule endoscopy is a hot topic in computer vision and medicine. Deep learning can have a positive impact on the future of video capsule endoscopy technology: it can improve the anomaly detection rate, reduce physicians' screening time, and aid in real-world clinical analysis. CADx classification systems for video capsule endoscopy have shown great promise for further improvement. For example, detection of cancerous polyps and bleeding can lead to a swift medical response and improve patients' survival rate. To this end, an automated CADx system must have high throughput and decent accuracy. In this paper, we propose FocalConvNet, a focal modulation network integrated with lightweight convolutional layers for the classification of small bowel anatomical landmarks and luminal findings. FocalConvNet leverages focal modulation to attain global context and allows global-local spatial interactions throughout the forward pass. Moreover, the convolutional block, with its intrinsic inductive/learning bias and capacity to extract hierarchical features, allows our FocalConvNet to achieve favourable results with high throughput. We compare our FocalConvNet with other state-of-the-art (SOTA) methods on Kvasir-Capsule, a large-scale VCE dataset with 44,228 frames and 13 classes of different anomalies. Our proposed method achieves a weighted F1-score, recall and MCC of 0.6734, 0.6373 and 0.2974, respectively, outperforming other SOTA methodologies. Furthermore, we report the highest throughput of 148.02 images/second to establish the potential of FocalConvNet in a real-time clinical environment. The code of the proposed FocalConvNet is available at https://github.com/NoviceMAn-prog/FocalConvNet.
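
For readers reproducing the reported numbers, the metrics named in the abstract can be computed with scikit-learn as sketched below; the weighted-averaging choice for F1 and recall is an assumption about the evaluation protocol, not stated in the abstract:

```python
from sklearn.metrics import f1_score, matthews_corrcoef, recall_score

def evaluate(y_true, y_pred):
    """Multi-class classification metrics of the kind reported in the abstract."""
    return {
        "weighted_f1": f1_score(y_true, y_pred, average="weighted"),
        "weighted_recall": recall_score(y_true, y_pred, average="weighted"),
        "mcc": matthews_corrcoef(y_true, y_pred),
    }
```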

* CBMS 2022 
  
Access Paper or Ask Questions

Laneformer: Object-aware Row-Column Transformers for Lane Detection

Mar 18, 2022
Jianhua Han, Xiajun Deng, Xinyue Cai, Zhen Yang, Hang Xu, Chunjing Xu, Xiaodan Liang

We present Laneformer, a conceptually simple yet powerful transformer-based architecture tailored for lane detection, a long-standing research topic for visual perception in autonomous driving. The dominant paradigms rely on purely CNN-based architectures, which often fail to incorporate relations among long-range lane points and the global context induced by surrounding objects (e.g., pedestrians, vehicles). Inspired by recent advances of the transformer encoder-decoder architecture in various vision tasks, we design a new end-to-end Laneformer architecture that adapts conventional transformers to better capture the shape and semantic characteristics of lanes, with minimal overhead in latency. First, coupled with deformable pixel-wise self-attention in the encoder, Laneformer presents two new row and column self-attention operations to efficiently mine point context along the lane shapes. Second, motivated by the observation that appearing objects affect the prediction of lane segments, Laneformer further includes the detected object instances as extra inputs of the multi-head attention blocks in the encoder and decoder to facilitate lane point detection by sensing semantic contexts. Specifically, the bounding box locations of objects are added into the Key module to provide interaction with each pixel and query, while the ROI-aligned features are inserted into the Value module. Extensive experiments demonstrate that our Laneformer achieves state-of-the-art performance on the CULane benchmark, with a 77.1% F1 score. We hope our simple and effective Laneformer will serve as a strong baseline for future research on self-attention models for lane detection.
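
The row and column self-attention operations are not specified in detail in the abstract; a minimal sketch of the general idea, attending separately within each row and each column of a feature map, could look like the following (tensor layout and the use of standard multi-head attention are assumptions, not the paper's exact design):

```python
import torch
import torch.nn as nn

class RowColumnAttention(nn.Module):
    """Apply self-attention within each row, then within each column, of a
    (B, C, H, W) feature map, mixing long-range context along both image axes.
    `channels` must be divisible by `num_heads`."""
    def __init__(self, channels, num_heads=4):
        super().__init__()
        self.row_attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.col_attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x):
        b, c, h, w = x.shape
        # rows: treat each row as a sequence of W tokens
        rows = x.permute(0, 2, 3, 1).reshape(b * h, w, c)
        rows, _ = self.row_attn(rows, rows, rows)
        x = rows.reshape(b, h, w, c).permute(0, 3, 1, 2)
        # columns: treat each column as a sequence of H tokens
        cols = x.permute(0, 3, 2, 1).reshape(b * w, h, c)
        cols, _ = self.col_attn(cols, cols, cols)
        return cols.reshape(b, w, h, c).permute(0, 3, 2, 1)
```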

* AAAI2022 
  
Access Paper or Ask Questions