Bo Zhao

A global product of fine-scale urban building height based on spaceborne lidar

Oct 22, 2023
Xiao Ma, Guang Zheng, Chi Xu, L. Monika Moskal, Peng Gong, Qinghua Guo, Huabing Huang, Xuecao Li, Yong Pang, Cheng Wang, Huan Xie, Bailang Yu, Bo Zhao, Yuyu Zhou

Characterizing urban environments with broad coverage and high precision is more important than ever for achieving the UN's Sustainable Development Goals (SDGs), as half of the world's population lives in cities. Urban building height, a fundamental 3D urban structural feature, has far-reaching applications. However, producing readily available datasets of recent urban building heights with fine spatial resolution and global coverage remains a challenging task. Here, we provide an up-to-date global product of urban building heights at a fine grid size of 150 m for circa 2020 by combining data from the spaceborne lidar instrument GEDI with multi-source data including remotely sensed images (i.e., Landsat-8, Sentinel-2, and Sentinel-1) and topographic data. Our results show that the method for estimating building height samples from the GEDI data is effective, with a Pearson's r of 0.78 and an RMSE of 3.67 m against the reference data. The mapping product also performs well, as indicated by its strong correlation with the reference data (Pearson's r = 0.71, RMSE = 4.60 m). Compared with existing products, our global urban building height map provides a higher spatial resolution (i.e., 150 m), a greater level of detail about spatial heterogeneity, and the flexibility of being updated with new GEDI samples as inputs. This work will boost future urban studies across many fields including climate, environmental, ecological, and social sciences.
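
As a rough illustration of the mapping step described above (not the authors' pipeline), the sketch below regresses hypothetical GEDI-derived building-height samples on placeholder multi-source predictors with a random forest and extrapolates to unmapped grid cells; all arrays and the choice of regressor are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Placeholder feature table: each row is a 150 m urban grid cell with
# multi-source predictors (e.g., Landsat-8/Sentinel-2 reflectance,
# Sentinel-1 backscatter, topographic variables). Shapes are illustrative.
X_train = np.random.rand(1000, 12)
h_gedi = np.random.uniform(3, 60, 1000)   # building-height samples derived from GEDI footprints

# Regress the GEDI-derived height samples on the predictors, then
# extrapolate heights to grid cells without GEDI coverage.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, h_gedi)

X_all = np.random.rand(5000, 12)          # predictors for all urban grid cells
height_map = model.predict(X_all)
```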

Real-Fake: Effective Training Data Synthesis Through Distribution Matching

Oct 16, 2023
Jianhao Yuan, Jie Zhang, Shuyang Sun, Philip Torr, Bo Zhao

Synthetic training data has gained prominence in numerous learning tasks and scenarios, offering advantages such as dataset augmentation, generalization evaluation, and privacy preservation. Despite these benefits, the effectiveness of synthetic data generated by current methodologies remains inferior when used exclusively to train advanced deep models, limiting its practical utility. To address this challenge, we analyze the principles underlying training data synthesis for supervised learning and elucidate a principled theoretical framework from the distribution-matching perspective that explicates the mechanisms governing synthesis efficacy. Through extensive experiments, we demonstrate the effectiveness of our synthetic data across diverse image classification tasks, both as a replacement for and an augmentation to real datasets, while also benefiting challenging tasks such as out-of-distribution generalization and privacy preservation.

* Code released at (https://github.com/BAAI-DCAI/Training-Data-Synthesis) 
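
One way to make the distribution-matching idea concrete is a maximum mean discrepancy (MMD) penalty between real and synthetic feature batches; the sketch below is a generic example under assumed feature shapes and a frozen encoder, not the paper's exact objective.

```python
import torch

def mmd_loss(feat_real, feat_syn, sigma=1.0):
    """Squared maximum mean discrepancy with an RBF kernel: one generic way to
    penalize the gap between real and synthetic feature distributions."""
    def rbf(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return (rbf(feat_real, feat_real).mean()
            - 2 * rbf(feat_real, feat_syn).mean()
            + rbf(feat_syn, feat_syn).mean())

# Toy usage: features of a real batch and a synthetic batch (e.g., from a
# frozen encoder); the loss could be backpropagated into the generator.
feat_real = torch.randn(64, 512)
feat_syn = torch.randn(64, 512, requires_grad=True)
loss = mmd_loss(feat_real, feat_syn)
loss.backward()
print(loss.item())
```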

Image Captions are Natural Prompts for Text-to-Image Models

Jul 17, 2023
Shiye Lei, Hao Chen, Sen Zhang, Bo Zhao, Dacheng Tao

With the rapid development of Artificial Intelligence Generated Content (AIGC), it has become common practice in many learning tasks to train or fine-tune large models on synthetic data due to data scarcity and privacy leakage problems. Although synthetic data promises unlimited generation, the massive and diverse information conveyed in real images makes it challenging for text-to-image generative models to synthesize informative training data from hand-crafted prompts, which usually leads to inferior generalization performance in downstream models. In this paper, we theoretically analyze the relationship between the training effect of synthetic data and the synthetic data distribution induced by prompts. We then propose a simple yet effective method that prompts text-to-image generative models to synthesize more informative and diverse training data. Specifically, we caption each real image with an advanced captioning model to obtain informative and faithful prompts that extract class-relevant information and clarify the polysemy of class names. The image captions and class names are concatenated to prompt generative models for training image synthesis. Extensive experiments on ImageNette, ImageNet-100, and ImageNet-1K verify that our method significantly improves the performance of models trained on synthetic training data, with classification accuracy improvements of 10% on average.

* 20 pages, 1 figure, 10 tables 
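
A minimal sketch of the caption-as-prompt pipeline, assuming BLIP as the captioning model and Stable Diffusion v1.5 as the generator (both stand-ins); the file name and class name are hypothetical.

```python
import torch
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration
from diffusers import StableDiffusionPipeline

# Caption a real training image, then prepend the class name to form the prompt.
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
captioner = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

image = Image.open("real_sample.jpg")      # hypothetical real training image
inputs = processor(image, return_tensors="pt")
caption = processor.decode(captioner.generate(**inputs)[0], skip_special_tokens=True)

class_name = "tench"                       # example class name
prompt = f"{class_name}, {caption}"        # class name concatenated with the caption

# Generate a synthetic training image from the caption-based prompt.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe(prompt).images[0].save("synthetic_sample.png")
```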

SVIT: Scaling up Visual Instruction Tuning

Jul 09, 2023
Bo Zhao, Boya Wu, Tiejun Huang

Thanks to the emergence of foundation models, large language and vision models are being integrated to acquire multimodal abilities such as visual captioning, dialogue, and question answering. Although existing multimodal models show impressive performance in visual understanding and reasoning, their limits are still largely under-explored due to the scarcity of high-quality instruction tuning data. To push the limits of multimodal capability, we Scale up Visual Instruction Tuning (SVIT) by constructing a dataset of 3.2 million visual instruction tuning examples, including 1.6M conversation question-answer (QA) pairs, 1.6M complex reasoning QA pairs, and 106K detailed image descriptions. Beyond its volume, the proposed dataset also features high quality and rich diversity, as it is generated by prompting GPT-4 with abundant manual annotations of images. We empirically verify that training multimodal models on SVIT significantly improves multimodal performance in terms of visual perception, reasoning, and planning.
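
A hedged sketch of the data-construction step: prompting GPT-4 with an image's manual annotations to produce QA pairs and a detailed description. The annotation fields and prompt wording below are illustrative, not the SVIT prompts.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical manual annotations for one image (captions, object names, etc.).
annotations = {
    "captions": ["A man repairs a bicycle on a sidewalk."],
    "objects": ["man", "bicycle", "wrench", "sidewalk"],
}

prompt = (
    "You are generating visual instruction tuning data. "
    f"Image captions: {annotations['captions']}. "
    f"Objects in the image: {annotations['objects']}. "
    "Write three conversation QA pairs, one complex-reasoning QA pair, "
    "and one detailed description of this image."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```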

Federated Generative Learning with Foundation Models

Jun 28, 2023
Jie Zhang, Xiaohua Qi, Bo Zhao

Existing federated learning solutions focus on transmitting features, parameters, or gradients between clients and the server, which suffers from serious inefficiency and privacy-leakage problems. Thanks to emerging foundation generative models, we propose a novel federated learning framework, namely Federated Generative Learning, that transmits prompts associated with distributed training data between clients and the server. Informative training data can then be synthesized remotely from the received prompts, which contain little private information, using the foundation generative models. The new framework possesses multiple advantages, including improved communication efficiency, better resilience to distribution shift, substantial performance gains, and enhanced privacy protection, which we verify in extensive experiments on the ImageNet and DomainNet datasets.
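
A conceptual sketch of the two roles, assuming class labels as the client-side prompts and Stable Diffusion as the server-side foundation generative model (both assumptions, not the paper's exact protocol).

```python
import torch
from diffusers import StableDiffusionPipeline

def client_make_prompts(local_class_names):
    """Client side: derive compact prompts from local data instead of
    transmitting features, parameters, or gradients (illustrative only)."""
    return [f"a photo of a {name}" for name in local_class_names]

def server_synthesize(prompts, per_prompt=4):
    """Server side: synthesize training data from the received prompts
    with a foundation text-to-image model (Stable Diffusion as a stand-in)."""
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    return [pipe(p, num_images_per_prompt=per_prompt).images for p in prompts]

prompts = client_make_prompts(["golden retriever", "fire truck"])  # hypothetical local classes
synthetic_batches = server_synthesize(prompts)
```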

Pushing the Limits of 3D Shape Generation at Scale

Jun 20, 2023
Wang Yu, Xuelin Qian, Jingyang Huo, Tiejun Huang, Bo Zhao, Yanwei Fu

We present a significant breakthrough in 3D shape generation by scaling it to unprecedented dimensions. Through the adaptation of the Auto-Regressive model and the utilization of large language models, we have developed a model with 3.6 billion trainable parameters, named Argus-3D, establishing it as the largest 3D shape generation model to date. Our approach addresses the limitations of existing methods by enhancing the quality and diversity of generated 3D shapes. To tackle the challenges of high-resolution 3D shape generation, our model incorporates tri-plane features as latent representations, effectively reducing computational complexity. Additionally, we introduce a discrete codebook for efficient quantization of these representations. Leveraging the power of transformers, we enable multi-modal conditional generation, facilitating the production of diverse and visually impressive 3D shapes. To train our expansive model, we leverage an ensemble of publicly available 3D datasets comprising approximately 900,000 objects from renowned repositories such as ModelNet40, ShapeNet, Pix3D, 3D-Future, and Objaverse. This diverse dataset empowers our model to learn from a wide range of object variations, bolstering its ability to generate high-quality and diverse 3D shapes. Extensive experiments demonstrate the efficacy of our approach in significantly improving the visual quality of generated 3D shapes. By pushing the boundaries of 3D generation, introducing novel methods for latent representation learning, and harnessing the power of transformers for multi-modal conditional generation, our contributions pave the way for substantial advancements in the field. Our work unlocks new possibilities for applications in gaming, virtual reality, product design, and other domains that demand high-quality and diverse 3D objects.
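
For intuition, a toy vector quantizer over tri-plane latents is sketched below; the codebook size, feature dimension, and plane resolution are placeholders rather than Argus-3D's settings.

```python
import torch
import torch.nn as nn

class TriPlaneQuantizer(nn.Module):
    """Toy vector quantizer for tri-plane latents (illustrative; not the
    Argus-3D implementation). The three axis-aligned feature planes are
    quantized against a shared discrete codebook."""
    def __init__(self, codebook_size=8192, dim=32):
        super().__init__()
        self.codebook = nn.Embedding(codebook_size, dim)

    def forward(self, planes):                               # planes: (3, H, W, dim)
        flat = planes.reshape(-1, planes.shape[-1])          # (3*H*W, dim)
        dists = torch.cdist(flat, self.codebook.weight)      # distance to every code
        idx = dists.argmin(dim=1)                            # nearest-code indices
        quantized = self.codebook(idx).reshape(planes.shape)
        return quantized, idx.reshape(planes.shape[:-1])     # tokens for an AR transformer

planes = torch.randn(3, 32, 32, 32)                          # placeholder tri-plane features
quantized, tokens = TriPlaneQuantizer()(planes)
```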

Large-scale Dataset Pruning with Dynamic Uncertainty

Jun 08, 2023
Muyang He, Shuo Yang, Tiejun Huang, Bo Zhao

The state of the art in many learning tasks, e.g., image classification, is advanced by collecting larger datasets and then training larger models on them. As a result, the increasing computational cost is becoming unaffordable. In this paper, we investigate how to prune large-scale datasets to produce an informative subset for training sophisticated deep models with negligible performance drop. We propose a simple yet effective dataset pruning method that exploits both prediction uncertainty and training dynamics. To our knowledge, this is the first work to study dataset pruning on large-scale datasets, i.e., ImageNet-1K and ImageNet-21K, and advanced models, i.e., Swin Transformer and ConvNeXt. Extensive experimental results indicate that our method outperforms the state of the art and achieves a 75% lossless compression ratio on both ImageNet-1K and ImageNet-21K. The code and pruned datasets are available at https://github.com/BAAI-DCAI/Dataset-Pruning.
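
A minimal sketch of pruning by training-dynamics-based uncertainty: each sample is scored by the variability of its true-class probability across training checkpoints, and the most uncertain fraction is kept. The scoring rule and keep ratio here are illustrative, not the paper's exact criterion.

```python
import numpy as np

def dynamic_uncertainty(pred_history):
    """pred_history: (num_checkpoints, num_samples) array of the model's
    predicted probability for each sample's true class, recorded across
    training. Score each sample by how much its prediction fluctuates."""
    return pred_history.std(axis=0)

def prune(pred_history, keep_ratio=0.75):
    scores = dynamic_uncertainty(pred_history)
    keep = np.argsort(scores)[::-1][: int(keep_ratio * scores.size)]  # most uncertain first
    return np.sort(keep)

history = np.random.rand(10, 100_000)   # 10 checkpoints, 100k samples (toy data)
kept_indices = prune(history, keep_ratio=0.75)
print(kept_indices.size)                # 75,000 retained samples
```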

DYffusion: A Dynamics-informed Diffusion Model for Spatiotemporal Forecasting

Jun 03, 2023
Salva Rühling Cachay, Bo Zhao, Hailey James, Rose Yu

While diffusion models can successfully generate data and make predictions, they are predominantly designed for static images. We propose an approach for training diffusion models for dynamics forecasting that leverages the temporal dynamics encoded in the data, directly coupling them with the diffusion steps in the network. We train a stochastic, time-conditioned interpolator and a backbone forecaster network that mimic the forward and reverse processes of conventional diffusion models, respectively. This design naturally encodes multi-step and long-range forecasting capabilities, allowing for highly flexible, continuous-time sampling trajectories and the ability to trade off performance for accelerated sampling at inference time. In addition, the dynamics-informed diffusion process imposes a strong inductive bias, allowing for improved computational efficiency compared to traditional Gaussian-noise-based diffusion models. Our approach performs competitively on probabilistic skill score metrics in complex dynamics forecasting of sea surface temperatures, Navier-Stokes flows, and spring mesh systems.

* Code will be released at: https://github.com/Rose-STL-Lab/dyffusion 
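
A loose conceptual sketch of the interpolator/forecaster sampling loop, with toy linear networks standing in for the trained models; the shapes, step count, and update rule are assumptions and do not reflect the released implementation.

```python
import torch
import torch.nn as nn

# Toy stand-ins for the trained networks: a time-conditioned interpolator
# I(x0, x_end, t) and a forecaster F(x_t, t) over 64-dimensional states.
state_dim = 64
interpolator = nn.Linear(2 * state_dim + 1, state_dim)
forecaster = nn.Linear(state_dim + 1, state_dim)

def sample_forecast(x0, num_steps=5):
    """Alternate forecasting the horizon state and re-interpolating to
    intermediate times, loosely mimicking a reverse diffusion pass."""
    x_t = x0
    for t in torch.linspace(0.0, 1.0, num_steps):
        t_feat = torch.full((x_t.shape[0], 1), float(t))
        x_end = forecaster(torch.cat([x_t, t_feat], dim=1))        # predict the horizon
        x_t = interpolator(torch.cat([x0, x_end, t_feat], dim=1))  # interpolate to time t
    return x_end

x0 = torch.randn(8, state_dim)     # batch of initial conditions
forecast = sample_forecast(x0)
print(forecast.shape)              # torch.Size([8, 64])
```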

Accelerated MR Fingerprinting with Low-Rank and Generative Subspace Modeling

May 25, 2023
Hengfa Lu, Huihui Ye, Lawrence L. Wald, Bo Zhao

Magnetic Resonance (MR) Fingerprinting is an emerging multi-parametric quantitative MR imaging technique for which image reconstruction methods utilizing low-rank and subspace constraints have achieved state-of-the-art performance. However, this class of methods often suffers from an ill-conditioned model-fitting issue, which degrades performance as data acquisition lengths become short and/or the signal-to-noise ratio becomes low. To address this problem, we present a new image reconstruction method for MR Fingerprinting that integrates low-rank and subspace modeling with a deep generative prior. Specifically, the proposed method captures the strong spatiotemporal correlation of contrast-weighted time-series images in MR Fingerprinting via a low-rank factorization. Further, it utilizes an untrained convolutional generative neural network to represent the spatial subspace of the low-rank model, while estimating the temporal subspace of the model from simulated magnetization evolutions generated based on spin physics. Here the architecture of the generative neural network serves as an effective regularizer for the ill-conditioned inverse problem without additional spatial training data, which are often expensive to acquire. The proposed formulation results in a non-convex optimization problem, for which we develop an algorithm based on variable splitting and the alternating direction method of multipliers. We evaluate the performance of the proposed method with numerical simulations and in vivo experiments and demonstrate that it outperforms state-of-the-art low-rank and subspace reconstructions.
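
For intuition, the sketch below builds the temporal subspace from a simulated dictionary via SVD and forms the low-rank time-series model X = U V; the dictionary, rank, and spatial coefficients are placeholders (in the paper, U would be parameterized by an untrained generative CNN and fit to undersampled k-space data).

```python
import numpy as np

# Illustrative subspace construction (not the authors' solver). A dictionary of
# simulated magnetization evolutions defines the temporal subspace via SVD.
num_atoms, num_frames, rank = 5000, 500, 8
dictionary = np.random.randn(num_atoms, num_frames)   # placeholder Bloch simulations

_, _, Vt = np.linalg.svd(dictionary, full_matrices=False)
V = Vt[:rank]                                          # temporal subspace basis, (rank, T)

# Model the contrast-weighted time series as X = U @ V. Here the spatial
# coefficients U are just a random placeholder.
num_voxels = 128 * 128
U = np.random.randn(num_voxels, rank)
X = U @ V                                              # low-rank time-series images, (voxels, T)
```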

Improving Convergence and Generalization Using Parameter Symmetries

May 22, 2023
Bo Zhao, Robert M. Gower, Robin Walters, Rose Yu

In overparametrized models, different values of the parameters may result in the same loss value. Parameter space symmetries are transformations that change the model parameters but leave the loss invariant. Teleportation applies such transformations to accelerate optimization. However, the exact mechanism behind this algorithm's success is not well understood. In this paper, we show that teleportation not only speeds up optimization in the short term, but also yields a faster overall time to convergence. Additionally, we show that teleporting to minima with different curvatures improves generalization, and we provide insights into the connection between the curvature of the minima and generalization ability. Finally, we show that integrating teleportation into a wide range of optimization algorithms and optimization-based meta-learning improves convergence.

* 29 pages, 13 figures 
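
A toy example of a loss-invariant parameter-space symmetry for a two-layer linear model: acting on the hidden layer with an invertible matrix leaves the loss unchanged while generally altering the gradients, which is the degree of freedom teleportation exploits. This is an illustration, not the paper's algorithm.

```python
import torch

# Two-layer linear model f(x) = W2 @ W1 @ x with a squared-error loss.
torch.manual_seed(0)
W1, W2 = torch.randn(16, 8), torch.randn(4, 16)
X, Y = torch.randn(8, 32), torch.randn(4, 32)

def loss(W1, W2):
    return ((W2 @ W1 @ X - Y) ** 2).mean()

# "Teleport": act with an invertible G on the hidden layer. Since
# (W2 G^{-1})(G W1) = W2 W1, the loss is unchanged, but the gradients
# (and hence subsequent optimization behavior) generally are not.
G = torch.randn(16, 16) + 4 * torch.eye(16)   # well-conditioned invertible matrix
W1_t, W2_t = G @ W1, W2 @ torch.linalg.inv(G)

print(loss(W1, W2).item(), loss(W1_t, W2_t).item())  # equal up to numerical error
```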