Yunke Zhang

A Satellite Imagery Dataset for Long-Term Sustainable Development in United States Cities

Aug 01, 2023
Yanxin Xi, Yu Liu, Tong Li, Jintao Ding, Yunke Zhang, Sasu Tarkoma, Yong Li, Pan Hui

Cities play an important role in achieving the Sustainable Development Goals (SDGs), promoting economic growth and meeting social needs. Satellite imagery, in particular, is a promising data source for studying sustainable urban development. However, a comprehensive dataset covering multiple cities, years, spatial scales, and indicators for SDG monitoring in the United States (U.S.) has been lacking. To support research on SDGs in U.S. cities, we develop a satellite imagery dataset using deep learning models for five SDGs, comprising 25 sustainable development indicators. The proposed dataset covers the 100 most populated U.S. cities and their Census Block Groups from 2014 to 2023. Specifically, we collect satellite imagery and identify objects with state-of-the-art object detection and semantic segmentation models to capture a bird's-eye view of each city. We further gather population, nighttime light, survey, and built-environment data to characterize SDGs related to poverty, health, education, inequality, and the living environment. We anticipate that the dataset will help urban policymakers and researchers advance SDG-related studies, especially those applying satellite imagery to monitor long-term, multi-scale SDGs in cities.
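
As an illustration of how region-level indicator features could be derived from such detections, here is a minimal sketch (not the authors' pipeline) that counts detected objects per Census Block Group, year, and object class; the column names ("cbg_id", "year", "class_name") are hypothetical placeholders.

```python
# Minimal sketch: aggregate hypothetical object-detection outputs on satellite
# tiles into per-region, per-year count features (not the authors' pipeline).
import pandas as pd

def aggregate_detections(detections: pd.DataFrame) -> pd.DataFrame:
    """Count detected objects per Census Block Group, year, and object class."""
    counts = (
        detections
        .groupby(["cbg_id", "year", "class_name"])
        .size()                                  # number of detections per group
        .unstack("class_name", fill_value=0)     # one column per object class
        .reset_index()
    )
    return counts

# Hypothetical usage:
# detections = pd.DataFrame({"cbg_id": ["A", "A"], "year": [2014, 2014],
#                            "class_name": ["building", "car"]})
# features = aggregate_detections(detections)
```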

* 20 pages, 5 figures 

Data and Knowledge Co-driving for Cancer Subtype Classification on Multi-Scale Histopathological Slides

Apr 18, 2023
Bo Yu, Hechang Chen, Yunke Zhang, Lele Cong, Shuchao Pang, Hongren Zhou, Ziye Wang, Xianling Cong

Artificial intelligence-enabled histopathological data analysis has become a valuable assistant to pathologists. However, existing models lack the representation and inference abilities of pathologists, especially in cancer subtype diagnosis, which makes them unconvincing in clinical practice. For instance, pathologists typically observe the lesions of a slide from global to local and then give a diagnosis based on their knowledge and experience. In this paper, we propose a Data and Knowledge Co-driving (D&K) model that replicates a pathologist's process of cancer subtype classification on a histopathological slide. Specifically, in the data-driven module, the bagging mechanism from ensemble learning is leveraged to integrate the histological features of various bags extracted by the embedding representation unit. Furthermore, a knowledge-driven module is established based on the Gestalt principle in psychology to build a three-dimensional (3D) expert knowledge space and map histological features into this space for metric-based comparison. The diagnosis is then made according to the Euclidean distance between the mapped features and the expert knowledge. Extensive experimental results on both public and in-house datasets demonstrate that the D&K model achieves high performance and credible results compared with state-of-the-art methods for diagnosing histopathological subtypes. Code: https://github.com/Dennis-YB/Data-and-Knowledge-Co-driving-for-Cancer-Subtypes-Classification
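
The distance-based decision step described above can be illustrated with a small, hypothetical sketch: a slide-level feature vector is projected into a 3D knowledge space and assigned the subtype whose expert prototype lies closest in Euclidean distance. The projection matrix and prototype vectors below are illustrative stand-ins, not the paper's learned components.

```python
# Hypothetical sketch of nearest-prototype classification in a 3D knowledge space.
import numpy as np

def classify_by_distance(feature, projection, prototypes):
    """Project a feature vector into 3D and return the nearest subtype prototype."""
    point = projection @ feature                         # (3, d) @ (d,) -> (3,)
    distances = {name: np.linalg.norm(point - proto)
                 for name, proto in prototypes.items()}
    return min(distances, key=distances.get)

# Hypothetical usage with random stand-ins:
# rng = np.random.default_rng(0)
# projection = rng.normal(size=(3, 512))
# prototypes = {"subtype_a": rng.normal(size=3), "subtype_b": rng.normal(size=3)}
# label = classify_by_distance(rng.normal(size=512), projection, prototypes)
```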

* Knowledge-Based Systems, 2023, 260: 110168

Attention-guided Temporal Coherent Video Object Matting

May 24, 2021
Yunke Zhang, Chi Wang, Miaomiao Cui, Peiran Ren, Xuansong Xie, Xian-sheng Hua, Hujun Bao, Qixing Huang, Weiwei Xu

This paper proposes a novel deep learning-based video object matting method that achieves temporally coherent matting results. Its key component is an attention-based temporal aggregation module that maximizes the strength of image matting networks when extended to video. This module computes temporal correlations between pixels adjacent along the time axis in feature space, making it robust to motion noise. We also design a novel loss term to train the attention weights, which drastically boosts video matting performance. In addition, we show how to effectively solve the trimap generation problem by fine-tuning a state-of-the-art video object segmentation network with a sparse set of user-annotated keyframes. To facilitate the training of the video matting and trimap generation networks, we construct a large-scale video matting dataset with 80 training and 28 validation foreground video clips with ground-truth alpha mattes. Experimental results show that our method can generate high-quality alpha mattes for various videos featuring appearance change, occlusion, and fast motion. Our code and dataset can be found at https://github.com/yunkezhang/TCVOM
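
A minimal sketch of the attention-based temporal aggregation idea, under simplifying assumptions (it is not the paper's module): for each spatial location, neighbouring frames' features are weighted by their similarity to the current frame's feature and then averaged.

```python
# Simplified sketch of per-pixel temporal attention over a short frame window.
import torch
import torch.nn.functional as F

def temporal_aggregate(feats: torch.Tensor, center: int) -> torch.Tensor:
    """feats: (T, C, H, W) per-frame features; returns an aggregated (C, H, W) map."""
    query = feats[center]                              # features of the current frame
    scores = (feats * query.unsqueeze(0)).sum(dim=1)   # (T, H, W) per-pixel dot products
    weights = F.softmax(scores, dim=0)                 # attention weights over time
    return (weights.unsqueeze(1) * feats).sum(dim=0)   # weighted average over frames

# Hypothetical usage:
# feats = torch.randn(5, 64, 32, 32)   # 5 frames, 64-channel features
# out = temporal_aggregate(feats, center=2)
```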

* 10 pages, 6 figures 

Active Boundary Loss for Semantic Segmentation

Feb 04, 2021
Chi Wang, Yunke Zhang, Miaomiao Cui, Jinlin Liu, Peiran Ren, Yin Yang, Xuansong Xie, XianSheng Hua, Hujun Bao, Weiwei Xu

This paper proposes a novel active boundary loss for semantic segmentation. It progressively encourages alignment between predicted boundaries and ground-truth boundaries during end-to-end training, which is not explicitly enforced by the commonly used cross-entropy loss. Based on the predicted boundaries detected from the segmentation results under the current network parameters, we formulate the boundary alignment problem as a differentiable direction-vector prediction problem that guides the movement of predicted boundaries in each iteration. Our loss is model-agnostic and can be plugged into the training of segmentation networks to improve boundary details. Experimental results show that training with the active boundary loss can effectively improve the boundary F-score and mean Intersection-over-Union on challenging image and video object segmentation datasets.
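
The direction-vector formulation can be illustrated with a rough sketch under simplifying assumptions (this is not the authors' implementation): a distance transform of the ground-truth boundary map yields, for every pixel, a unit vector pointing toward the nearest ground-truth boundary pixel, which could serve as the supervision target for moving predicted boundaries.

```python
# Rough sketch: per-pixel target directions toward the nearest GT boundary pixel.
import numpy as np
from scipy import ndimage

def target_directions(gt_boundary: np.ndarray) -> np.ndarray:
    """For every pixel, return a unit vector pointing to the nearest GT boundary pixel."""
    # distance_transform_edt can return the index of the nearest zero-valued pixel;
    # boundary pixels are therefore passed in as zeros (~gt_boundary).
    _, indices = ndimage.distance_transform_edt(~gt_boundary, return_indices=True)
    ys, xs = np.indices(gt_boundary.shape)
    vec = np.stack([indices[0] - ys, indices[1] - xs], axis=-1).astype(np.float32)
    norm = np.linalg.norm(vec, axis=-1, keepdims=True)
    return vec / np.maximum(norm, 1e-6)                # (H, W, 2) unit direction field

# Hypothetical usage:
# gt_boundary = np.zeros((8, 8), dtype=bool); gt_boundary[4, :] = True
# directions = target_directions(gt_boundary)
```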

* 8 pages, 7 figures 