"photo": models, code, and papers

Land Use Classification using Convolutional Neural Networks Applied to Ground-Level Images

Sep 21, 2016
Yi Zhu, Shawn Newsam

Land use mapping is a fundamental yet challenging task in geographic science. Unlike land cover mapping, it is generally not possible using overhead imagery alone. The recent, explosive growth of online geo-referenced photo collections suggests an alternate approach to geographic knowledge discovery. In this work, we present a general framework that uses ground-level images from Flickr for land use mapping. Our approach benefits from several novel aspects. First, we address the noisiness of online photo collections, such as imprecise geolocation and uneven spatial distribution, by performing location and indoor/outdoor filtering, and semi-supervised dataset augmentation. Our indoor/outdoor classifier achieves state-of-the-art performance on several benchmark datasets and approaches human-level accuracy. Second, we utilize high-level semantic image features extracted using deep learning, specifically convolutional neural networks, which allow us to achieve upwards of 76% accuracy on a challenging eight-class land use mapping problem.

* ACM SIGSPATIAL 2015, Best Poster Award 
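
The abstract above describes classifying ground-level photos into land use categories using CNN features. Below is a minimal sketch of that general recipe, assuming a pretrained ResNet-18 as a frozen feature extractor and a hypothetical 8-class label set; it is not the authors' pipeline and omits the geolocation filtering and dataset-augmentation steps.

```python
# Sketch only: extract high-level CNN features from geotagged photos and
# classify them with a simple linear head. The backbone choice (ResNet-18)
# and the 8-class setup are illustrative assumptions, not the paper's setup.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

NUM_CLASSES = 8  # assumed 8-class land-use taxonomy

# Pretrained backbone used purely as a frozen feature extractor.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()               # expose 512-d pooled features
backbone.eval()

classifier = nn.Linear(512, NUM_CLASSES)  # would be trained on labelled photos

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def predict_land_use(image_path: str) -> int:
    """Return the predicted land-use class index for a single photo."""
    img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        features = backbone(img)       # (1, 512) semantic image features
        logits = classifier(features)  # (1, NUM_CLASSES)
    return int(logits.argmax(dim=1))
```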
  

Single View Depth Estimation from Examples

Apr 14, 2013
Tal Hassner, Ronen Basri

We describe a non-parametric, "example-based" method for estimating the depth of an object viewed in a single photo. Our method consults a database of example 3D geometries, searching for those that look similar to the object in the photo. The known depths of the selected database objects act as shape priors that constrain the process of estimating the object's depth. We show how this process can be performed by optimizing a well-defined target likelihood function via a hard-EM procedure. We address the problem of representing the (possibly infinite) variability of viewing conditions with a finite (and often very small) example set by proposing an on-the-fly example update scheme. We further demonstrate the importance of non-stationarity in avoiding misleading examples when estimating structured shapes. We evaluate our method and present both qualitative and quantitative results for challenging object classes. Finally, we show how this same technique may be readily applied to a number of related problems, including the novel task of estimating the occluded depth of an object's backside and the task of tailoring custom-fitting image-maps for input depths.
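
A toy sketch of the example-based idea described above: depth for each image patch is borrowed from the most similar (appearance, depth) example in a database, with a crude hard-assignment loop standing in for the paper's hard-EM optimization and on-the-fly example updates. Array shapes and the iteration scheme are illustrative assumptions.

```python
# Toy nearest-neighbour analogue of example-based depth estimation.
import numpy as np

def estimate_depth(query_patches, db_appearance, db_depth, n_iters=3):
    """query_patches: (N, D) image patches; db_appearance, db_depth: (M, D) examples."""
    # Initial hard assignment on appearance alone.
    dists = ((query_patches[:, None, :] - db_appearance[None, :, :]) ** 2).sum(-1)
    depth = db_depth[dists.argmin(axis=1)]
    for _ in range(n_iters):
        # Hard E-step: re-select examples using appearance and the current depth.
        query = np.concatenate([query_patches, depth], axis=1)
        db = np.concatenate([db_appearance, db_depth], axis=1)
        dists = ((query[:, None, :] - db[None, :, :]) ** 2).sum(-1)
        # M-step: refresh the depth estimate from the chosen examples' depths.
        depth = db_depth[dists.argmin(axis=1)]
    return depth
```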

  

Multi-scale Dynamic Feature Encoding Network for Image Demoireing

Sep 26, 2019
Xi Cheng, Zhenyong Fu, Jian Yang

The prevalence of digital sensors, such as digital cameras and mobile phones, simplifies the acquisition of photos. Digital sensors, however, produce moiré patterns when photographing objects with complex textures, which deteriorates the quality of photos. Moiré spreads across various frequency bands of an image and is a dynamic texture with varying colors and shapes, which poses two main challenges in demoireing, an important task in image restoration. To address the first challenge, we design a multi-scale network that processes images at different spatial resolutions, obtaining features in different frequency bands, so that our method can jointly remove moiré across those bands. To address the second challenge, we propose a dynamic feature encoding (DFE) module, embedded in each scale, for handling the dynamic texture, which allows moiré patterns to be eliminated more effectively. Our proposed method, termed Multi-scale convolutional network with Dynamic feature encoding for image DeMoireing (MDDM), outperforms the state of the art in both fidelity and perceptual quality on benchmark datasets.

* Accepted to the Advances in Image Manipulation workshop and challenges at ICCV 2019 
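
A simplified PyTorch sketch of the multi-scale idea in the abstract above: the input is processed at several spatial resolutions so that moiré in different frequency bands can be handled separately, and the branch outputs are fused. This is not the MDDM architecture; the dynamic feature encoding module is omitted, and block counts and channel widths are assumptions.

```python
# Simplified multi-scale demoireing sketch (not the published MDDM model).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleDemoire(nn.Module):
    def __init__(self, channels=32, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(channels, 3, 3, padding=1),
            )
            for _ in scales
        ])

    def forward(self, x):
        h, w = x.shape[-2:]
        outputs = []
        for scale, branch in zip(self.scales, self.branches):
            # Each branch sees a different resolution, i.e. a different band.
            xs = F.interpolate(x, scale_factor=1.0 / scale, mode="bilinear",
                               align_corners=False) if scale > 1 else x
            ys = branch(xs)
            outputs.append(F.interpolate(ys, size=(h, w), mode="bilinear",
                                         align_corners=False))
        # Fuse the per-band restorations and add the input as a residual.
        return x + torch.stack(outputs, dim=0).mean(dim=0)

clean = MultiScaleDemoire()(torch.randn(1, 3, 128, 128))  # (1, 3, 128, 128)
```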
  

Learned Smartphone ISP on Mobile NPUs with Deep Learning, Mobile AI 2021 Challenge: Report

May 17, 2021
Andrey Ignatov, Cheng-Ming Chiang, Hsien-Kai Kuo, Anastasia Sycheva, Radu Timofte, Min-Hung Chen, Man-Yu Lee, Yu-Syuan Xu, Yu Tseng, Shusong Xu, Jin Guo, Chao-Hung Chen, Ming-Chun Hsyu, Wen-Chia Tsai, Chao-Wei Chen, Grigory Malivenko, Minsu Kwon, Myungje Lee, Jaeyoon Yoo, Changbeom Kang, Shinjo Wang, Zheng Shaolong, Hao Dejun, Xie Fen, Feng Zhuang, Yipeng Ma, Jingyang Peng, Tao Wang, Fenglong Song, Chih-Chung Hsu, Kwan-Lin Chen, Mei-Hsuang Wu, Vishal Chudasama, Kalpesh Prajapati, Heena Patel, Anjali Sarvaiya, Kishor Upla, Kiran Raja, Raghavendra Ramachandra, Christoph Busch, Etienne de Stoutz

As the quality of mobile cameras starts to play a crucial role in modern smartphones, more and more attention is now being paid to ISP algorithms used to improve various perceptual aspects of mobile photos. In this Mobile AI challenge, the target was to develop an end-to-end deep learning-based image signal processing (ISP) pipeline that can replace classical hand-crafted ISPs and achieve nearly real-time performance on smartphone NPUs. For this, the participants were provided with a novel learned ISP dataset consisting of RAW-RGB image pairs captured with the Sony IMX586 Quad Bayer mobile sensor and a professional 102-megapixel medium format camera. The runtime of all models was evaluated on the MediaTek Dimensity 1000+ platform with a dedicated AI processing unit capable of accelerating both floating-point and quantized neural networks. The proposed solutions are fully compatible with the above NPU and are capable of processing Full HD photos in 60-100 milliseconds while achieving high-fidelity results. A detailed description of all models developed in this challenge is provided in this paper.

* Mobile AI 2021 Workshop and Challenges: https://ai-benchmark.com/workshops/mai/2021/ 
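
For orientation, here is a minimal sketch of the learned-ISP training setup the report describes: a small CNN maps packed 4-channel Bayer RAW patches to RGB and is supervised with the paired ground-truth images. The network, channel counts, and training loop are illustrative assumptions, not any challenge submission.

```python
# Minimal learned-ISP sketch: packed Bayer RAW -> RGB, supervised by pairs.
import torch
import torch.nn as nn

class TinyISP(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            # PixelShuffle doubles resolution, recovering full RGB size
            # from the half-resolution packed Bayer input.
            nn.Conv2d(ch, 3 * 4, 3, padding=1), nn.PixelShuffle(2),
        )

    def forward(self, raw):                   # raw: (B, 4, H/2, W/2)
        return torch.sigmoid(self.net(raw))   # rgb: (B, 3, H, W)

model = TinyISP()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
raw = torch.rand(2, 4, 64, 64)       # stand-in for packed RAW patches
rgb_gt = torch.rand(2, 3, 128, 128)  # stand-in for ground-truth RGB
loss = nn.functional.l1_loss(model(raw), rgb_gt)
loss.backward()
optimizer.step()
```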
  

Neural Implicit Representations for Physical Parameter Inference from a Single Video

Apr 29, 2022
Florian Hofherr, Lukas Koestler, Florian Bernard, Daniel Cremers

Neural networks have recently been used to analyze diverse physical systems and to identify their underlying dynamics. While existing methods achieve impressive results, they are limited by their strong demand for training data and their weak generalization to out-of-distribution data. To overcome these limitations, in this work we propose to combine neural implicit representations for appearance modeling with neural ordinary differential equations (ODEs) for modeling physical phenomena, obtaining a dynamic scene representation that can be identified directly from visual observations. Our proposed model combines several unique advantages: (i) contrary to existing approaches that require large training datasets, we are able to identify physical parameters from only a single video; (ii) the use of neural implicit representations enables the processing of high-resolution videos and the synthesis of photo-realistic images; (iii) the embedded neural ODE has a known parametric form that allows for the identification of interpretable physical parameters, and (iv) long-term prediction in state space; and (v) photo-realistic rendering of novel scenes with modified physical parameters becomes possible.
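
A compact sketch of the combination described above: a learnable ODE with interpretable physical parameters advances a state over time, and a coordinate-based (implicit) network renders pixel colors from coordinates plus that state. A plain Euler integrator and a damped-oscillator ODE stand in for the paper's formulation; all names and dimensions are assumptions.

```python
# Illustrative sketch: neural implicit rendering driven by an ODE state.
import torch
import torch.nn as nn

class ImplicitRenderer(nn.Module):
    def __init__(self, state_dim=2, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 + state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, xy, state):
        # xy: (N, 2) pixel coordinates; state: (state_dim,) physical state at time t.
        state = state.expand(xy.shape[0], -1)
        return torch.sigmoid(self.mlp(torch.cat([xy, state], dim=-1)))

# Example dynamics: damped oscillator with interpretable parameters (k, c).
log_k = nn.Parameter(torch.zeros(()))   # stiffness (learned, interpretable)
log_c = nn.Parameter(torch.zeros(()))   # damping   (learned, interpretable)

def integrate(state0, t_steps, dt=0.01):
    """Euler-integrate the state forward; a proper ODE solver could replace this."""
    pos, vel = state0
    for _ in range(t_steps):
        acc = -torch.exp(log_k) * pos - torch.exp(log_c) * vel
        pos, vel = pos + dt * vel, vel + dt * acc
    return torch.stack([pos, vel])

renderer = ImplicitRenderer()
xy = torch.rand(1024, 2)                              # sampled pixel coordinates
state_t = integrate(torch.tensor([1.0, 0.0]), t_steps=50)
colors = renderer(xy, state_t)                        # (1024, 3) rendered pixels
```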

  

GRIHA: Synthesizing 2-Dimensional Building Layouts from Images Captured using a Smart Phone

Mar 15, 2021
Shreya Goyal, Naimul Khan, Chiranjoy Chattopadhyay, Gaurav Bhatnagar

Reconstructing an indoor scene and generating a layout/floor plan in 3D or 2D is a widely known problem, and quite a few algorithms have been proposed in the literature recently. However, most existing methods either use RGB-D images, thus requiring a depth camera, or depend on panoramic photos, assuming that there is little to no occlusion in the rooms. In this work, we propose GRIHA (Generating Room Interior of a House using ARCore), a framework for generating a layout from RGB images captured with a simple mobile phone camera. We take advantage of Simultaneous Localization and Mapping (SLAM) to estimate the 3D transformations required for layout generation. SLAM technology is built into recent mobile libraries such as ARCore by Google, so the proposed method is fast and efficient. It gives the user the freedom to generate a layout by merely taking a few conventional photos, rather than relying on specialized depth hardware or occlusion-free panoramic images. We have compared GRIHA with other existing methods and obtained superior results. The system is also tested on multiple hardware platforms to assess its hardware dependence and efficiency.

* 19 pages, 22 Figures, 4 Tables 
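
A simplified sketch of the geometric step GRIHA relies on: SLAM (e.g., ARCore) provides a world-from-camera pose for each photo, observed corner points are transformed into the shared world frame, and dropping the vertical axis yields 2D layout vertices. The poses and corner points below are placeholder data, not ARCore API calls.

```python
# Geometric core only: camera-frame corner points -> 2D floor-plan vertices.
import numpy as np

def corners_to_floorplan(poses, corners_cam, up_axis=1):
    """poses: list of 4x4 world-from-camera matrices (from SLAM);
    corners_cam: list of (3,) corner points in each camera's frame."""
    plan = []
    for T_wc, p_cam in zip(poses, corners_cam):
        p_world = T_wc @ np.append(p_cam, 1.0)        # homogeneous transform
        plan.append(np.delete(p_world[:3], up_axis))  # drop height -> 2D point
    return np.array(plan)                             # (N, 2) layout vertices

# Placeholder example: two poses translated along x, corner 2 m in front.
T0, T1 = np.eye(4), np.eye(4)
T1[0, 3] = 1.5
layout = corners_to_floorplan([T0, T1], [np.array([0.0, 0.0, 2.0])] * 2)
```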
  

Demoiréing of Camera-Captured Screen Images Using Deep Convolutional Neural Network

Apr 11, 2018
Bolin Liu, Xiao Shu, Xiaolin Wu

Taking photos of optoelectronic displays is a direct and spontaneous way of transferring data and keeping records, and it is widely practiced. However, due to the analog signal interference between the pixel grids of the display screen and the camera sensor array, objectionable moiré (alias) patterns appear in captured screen images. As the moiré patterns are structured and highly variant, they are difficult to remove completely without affecting the underlying latent image. In this paper, we propose a deep convolutional neural network (DCNN) approach for demoiréing screen photos. The proposed DCNN consists of a coarse-scale network and a fine-scale network. In the coarse-scale network, the input image is first downsampled and then processed by stacked residual blocks to remove the moiré artifacts. After that, the fine-scale network upsamples the demoiréd low-resolution image back to the original resolution. Extensive experimental results demonstrate that the proposed technique can efficiently remove the moiré patterns from camera-acquired screen images, outperforming existing techniques.
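
A sketch of the coarse-to-fine structure described above: a coarse-scale network demoirés a downsampled copy of the screen photo, and a fine-scale network upsamples and refines the result back to full resolution. Block counts and channel widths are illustrative, not the paper's configuration.

```python
# Coarse-to-fine demoireing sketch (illustrative configuration only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)

class CoarseFineDemoire(nn.Module):
    def __init__(self, ch=64, n_blocks=4):
        super().__init__()
        self.coarse = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1),
            *[ResBlock(ch) for _ in range(n_blocks)],
            nn.Conv2d(ch, 3, 3, padding=1),
        )
        self.fine = nn.Sequential(
            nn.Conv2d(6, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, x):
        # Coarse stage: remove moiré on a half-resolution copy of the photo.
        low = F.interpolate(x, scale_factor=0.5, mode="bilinear", align_corners=False)
        low_clean = self.coarse(low)
        # Fine stage: upsample and refine jointly with the original input.
        up = F.interpolate(low_clean, size=x.shape[-2:], mode="bilinear",
                           align_corners=False)
        return self.fine(torch.cat([up, x], dim=1))

out = CoarseFineDemoire()(torch.randn(1, 3, 256, 256))  # full-resolution output
```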

  

Load Balanced GANs for Multi-view Face Image Synthesis

Mar 04, 2018
Jie Cao, Yibo Hu, Bing Yu, Ran He, Zhenan Sun

Multi-view face synthesis from a single image is an ill-posed problem and often suffers from serious appearance distortion. Producing photo-realistic and identity-preserving multi-view results remains a poorly constrained synthesis problem. This paper proposes Load Balanced Generative Adversarial Networks (LB-GAN) to precisely rotate the yaw angle of an input face image to any specified angle. LB-GAN decomposes the challenging synthesis problem into two well-constrained subtasks that correspond to a face normalizer and a face editor, respectively. The normalizer first frontalizes an input image, and the editor then rotates the frontalized image to a desired pose guided by a remote code. In order to generate photo-realistic local details, the normalizer and the editor are trained in a two-stage manner and regulated by a conditional self-cycle loss and an attention-based L2 loss. Exhaustive experiments in controlled and uncontrolled environments demonstrate that the proposed method not only improves the visual realism of multi-view synthetic images, but also preserves identity information well.
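
A sketch of the two-stage generator decomposition: a normalizer maps the input face toward a frontal view, and an editor rotates the frontalized face according to a pose code. The architectures and the way the code is injected (broadcast as extra channels) are assumptions; the discriminators, the conditional self-cycle loss, and the attention-based L2 loss are omitted.

```python
# Two-stage generator sketch: normalizer (frontalize) then editor (rotate).
import torch
import torch.nn as nn

def conv_gen(in_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
    )

class TwoStageGenerator(nn.Module):
    def __init__(self, pose_dim=9):
        super().__init__()
        self.normalizer = conv_gen(3)            # input face -> frontal face
        self.editor = conv_gen(3 + pose_dim)     # frontal face + pose code -> rotated face

    def forward(self, face, pose_code):
        frontal = self.normalizer(face)
        # Broadcast the pose code over the spatial grid and concatenate.
        b, _, h, w = face.shape
        code = pose_code.view(b, -1, 1, 1).expand(-1, -1, h, w)
        return self.editor(torch.cat([frontal, code], dim=1)), frontal

gen = TwoStageGenerator()
face = torch.randn(2, 3, 128, 128)
pose = torch.zeros(2, 9)
pose[:, 3] = 1.0                                  # request one of 9 yaw bins
rotated, frontal = gen(face, pose)
```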

  

NeuralHOFusion: Neural Volumetric Rendering under Human-object Interactions

Mar 28, 2022
Yuheng Jiang, Suyi Jiang, Guoxing Sun, Zhuo Su, Kaiwen Guo, Minye Wu, Jingyi Yu, Lan Xu

4D modeling of human-object interactions is critical for numerous applications. However, efficient volumetric capture and rendering of complex interaction scenarios, especially from sparse inputs, remain challenging. In this paper, we propose NeuralHOFusion, a neural approach for volumetric human-object capture and rendering using sparse consumer RGBD sensors. It marries traditional non-rigid fusion with recent advances in neural implicit modeling and blending, where the captured humans and objects are disentangled layer-wise. For geometry modeling, we propose a neural implicit inference scheme with non-rigid key-volume fusion, as well as a template-aided robust object tracking pipeline. Our scheme enables detailed and complete geometry generation under complex interactions and occlusions. Moreover, we introduce a layer-wise human-object texture rendering scheme, which combines volumetric and image-based rendering in both the spatial and temporal domains to obtain photo-realistic results. Extensive experiments demonstrate the effectiveness and efficiency of our approach in synthesizing photo-realistic free-view results under complex human-object interactions.
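
A toy sketch of the layer-wise idea: the human and the object are rendered as separate layers (color, depth, alpha) and composited per pixel by depth so occlusions between the layers are resolved. The actual system produces these layers with neural volumetric and image-based blending; the arrays below are placeholders.

```python
# Toy per-pixel depth compositing of separately rendered layers.
import numpy as np

def composite(layers):
    """layers: list of (color HxWx3, depth HxW, alpha HxW); nearer depth wins."""
    h, w, _ = layers[0][0].shape
    out = np.zeros((h, w, 3))
    best_depth = np.full((h, w), np.inf)
    for color, depth, alpha in layers:
        nearer = (alpha > 0.5) & (depth < best_depth)
        out[nearer] = color[nearer]
        best_depth[nearer] = depth[nearer]
    return out

h, w = 4, 4
human = (np.random.rand(h, w, 3), np.full((h, w), 1.0), np.ones((h, w)))
obj = (np.random.rand(h, w, 3), np.full((h, w), 2.0), np.ones((h, w)))
image = composite([human, obj])   # human occludes the object where both overlap
```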

  

NeuralFusion: Neural Volumetric Rendering under Human-object Interactions

Feb 28, 2022
Yuheng Jiang, Suyi Jiang, Guoxing Sun, Zhuo Su, Kaiwen Guo, Minye Wu, Jingyi Yu, Lan Xu

4D modeling of human-object interactions is critical for numerous applications. However, efficient volumetric capture and rendering of complex interaction scenarios, especially from sparse inputs, remain challenging. In this paper, we propose NeuralFusion, a neural approach for volumetric human-object capture and rendering using sparse consumer RGBD sensors. It marries traditional non-rigid fusion with recent advances in neural implicit modeling and blending, where the captured humans and objects are disentangled layer-wise. For geometry modeling, we propose a neural implicit inference scheme with non-rigid key-volume fusion, as well as a template-aided robust object tracking pipeline. Our scheme enables detailed and complete geometry generation under complex interactions and occlusions. Moreover, we introduce a layer-wise human-object texture rendering scheme, which combines volumetric and image-based rendering in both the spatial and temporal domains to obtain photo-realistic results. Extensive experiments demonstrate the effectiveness and efficiency of our approach in synthesizing photo-realistic free-view results under complex human-object interactions.

  