Weihao Yu

MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities

Aug 04, 2023
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang

We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) how to systematically structure and evaluate complicated multimodal tasks; (2) how to design evaluation metrics that work well across question and answer types; and (3) how to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model integrating different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from combinations of these capabilities. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.

* Code and data: https://github.com/yuweihao/MM-Vet 
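As a rough illustration of how an LLM-based evaluator for open-ended outputs can be wrapped, the sketch below builds a grading prompt and parses a 0-1 score from the reply. The prompt wording, the `query_llm` callable, and the score parsing are assumptions for illustration, not the benchmark's actual implementation; the six listed capabilities follow the paper's description.

```python
# Minimal sketch of an LLM-based open-ended scorer in the spirit of MM-Vet's
# evaluator. Prompt wording, parsing, and the `query_llm` callable are
# illustrative placeholders.
import re

CAPABILITIES = ["recognition", "OCR", "knowledge",
                "language generation", "spatial awareness", "math"]

PROMPT_TEMPLATE = (
    "Compare the ground truth and the model prediction, then grade the "
    "prediction with a score between 0.0 and 1.0.\n"
    "Question: {question}\nGround truth: {answer}\nPrediction: {prediction}\n"
    "Score:"
)

def score_prediction(question, answer, prediction, query_llm):
    """query_llm: any callable mapping a prompt string to a text reply."""
    reply = query_llm(PROMPT_TEMPLATE.format(
        question=question, answer=answer, prediction=prediction))
    match = re.search(r"\d*\.?\d+", reply)            # pull the first number in the reply
    return min(max(float(match.group()), 0.0), 1.0) if match else 0.0
```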

Two-stage Denoising Diffusion Model for Source Localization in Graph Inverse Problems

Apr 18, 2023
Bosong Huang, Weihao Yu, Ruzhong Xie, Jing Xiao, Jin Huang

Source localization is the inverse problem of graph information dissemination and has broad practical applications. However, the inherent intricacy and uncertainty of information dissemination pose significant challenges, and the ill-posed nature of the source localization problem further exacerbates them. Recently, deep generative models, particularly diffusion models inspired by classical non-equilibrium thermodynamics, have made significant progress. While diffusion models have proven powerful at solving inverse problems and producing high-quality reconstructions, applying them directly to source localization is infeasible for two reasons. First, computing the posterior dissemination results on a large-scale network at every iterative denoising sampling step would incur enormous computational costs. Second, in existing methods for this field, the training data themselves are ill-posed (many-to-one), so simply transferring the diffusion model would only lead to local optima. To address these challenges, we propose a two-stage optimization framework, the source localization denoising diffusion model (SL-Diff). In the coarse stage, we devise source proximity degrees as supervised signals to generate coarse-grained source predictions. This efficiently initializes the next stage, significantly reducing its convergence time and calibrating the convergence process. Furthermore, the introduction of cascaded temporal information in this training scheme transforms the many-to-one mapping into a one-to-one relationship, directly addressing the ill-posed problem. In the fine stage, we design a diffusion model for the graph inverse problem that can quantify the uncertainty in the dissemination process. The proposed SL-Diff yields excellent prediction results within a reasonable sampling time in extensive experiments.
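To make the two-stage structure concrete, here is a minimal skeleton of a coarse predictor initializing an iterative denoising loop. The module names, number of steps, and refinement loop are assumptions sketched from the abstract, not SL-Diff's actual architecture or noise schedule.

```python
# Conceptual two-stage pipeline: a coarse network predicts source proximity
# degrees, which initialize a fine-stage denoising refinement over the graph.
import torch
import torch.nn as nn

class TwoStageLocalizer(nn.Module):
    def __init__(self, coarse_net: nn.Module, denoiser: nn.Module, steps: int = 50):
        super().__init__()
        self.coarse_net = coarse_net      # stage 1: coarse source-proximity predictions
        self.denoiser = denoiser          # stage 2: graph denoising network
        self.steps = steps

    @torch.no_grad()
    def localize(self, observed_spread, adjacency):
        # Stage 1: coarse scores serve as an informed initialization
        x = self.coarse_net(observed_spread, adjacency)
        # Stage 2: iterative refinement, analogous to reverse diffusion sampling
        for t in reversed(range(self.steps)):
            t_embed = torch.full((x.size(0),), t, device=x.device)
            x = self.denoiser(x, observed_spread, adjacency, t_embed)
        return x.sigmoid()                # per-node probability of being a source
```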


InceptionNeXt: When Inception Meets ConvNeXt

Mar 29, 2023
Weihao Yu, Pan Zhou, Shuicheng Yan, Xinchao Wang

Inspired by the long-range modeling ability of ViTs, large-kernel convolutions have recently been widely studied and adopted to enlarge the receptive field and improve model performance, as in the remarkable ConvNeXt, which employs 7x7 depthwise convolution. Although such a depthwise operator consumes only a few FLOPs, it largely harms model efficiency on powerful computing devices due to high memory access costs. For example, ConvNeXt-T has FLOPs similar to ResNet-50 but reaches only about 60% of its throughput when trained on A100 GPUs with full precision. Although reducing the kernel size of ConvNeXt can improve speed, it results in significant performance degradation. It remains unclear how to speed up large-kernel-based CNN models while preserving their performance. To tackle this issue, inspired by Inception, we propose to decompose large-kernel depthwise convolution into four parallel branches along the channel dimension, i.e., a small square kernel, two orthogonal band kernels, and an identity mapping. With this new Inception depthwise convolution, we build a series of networks, namely InceptionNeXt, which not only enjoy high throughput but also maintain competitive performance. For instance, InceptionNeXt-T achieves 1.6x higher training throughput than ConvNeXt-T and a 0.2% top-1 accuracy improvement on ImageNet-1K. We anticipate that InceptionNeXt can serve as an economical baseline for future architecture design, helping reduce carbon footprint. Code is available at https://github.com/sail-sg/inceptionnext.

* Code: https://github.com/sail-sg/inceptionnext 
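A small sketch of the four-branch Inception depthwise convolution described in the abstract: channels are split into an identity branch, a small square-kernel branch, and two orthogonal band-kernel branches. The split ratio and kernel sizes below are illustrative defaults; consult the released code for the authors' exact settings.

```python
# Sketch of an Inception depthwise convolution: split channels into four
# parallel branches (identity, small square kernel, two orthogonal band kernels).
import torch
import torch.nn as nn

class InceptionDWConv2d(nn.Module):
    def __init__(self, dim, square_k=3, band_k=11, branch_ratio=0.125):
        super().__init__()
        gc = int(dim * branch_ratio)                  # channels per convolutional branch
        self.dwconv_hw = nn.Conv2d(gc, gc, square_k, padding=square_k // 2, groups=gc)
        self.dwconv_w = nn.Conv2d(gc, gc, (1, band_k), padding=(0, band_k // 2), groups=gc)
        self.dwconv_h = nn.Conv2d(gc, gc, (band_k, 1), padding=(band_k // 2, 0), groups=gc)
        self.split_sizes = (dim - 3 * gc, gc, gc, gc)  # identity branch keeps the rest

    def forward(self, x):                              # x: [B, C, H, W]
        x_id, x_hw, x_w, x_h = torch.split(x, self.split_sizes, dim=1)
        return torch.cat(
            (x_id, self.dwconv_hw(x_hw), self.dwconv_w(x_w), self.dwconv_h(x_h)),
            dim=1)
```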

Lorentz Equivariant Model for Knowledge-Enhanced Collaborative Filtering

Feb 09, 2023
Bosong Huang, Weihao Yu, Ruzhong Xie, Jing Xiao, Jin Huang

Introducing prior auxiliary information from a knowledge graph (KG) to assist the user-item graph can improve the overall performance of a recommender system. Many recent studies show that the properties of hyperbolic spaces fit well the scale-free and hierarchical characteristics exhibited by these two types of graphs. However, existing hyperbolic methods ignore equivariance, so they cannot generalize symmetric features under given transformations, which seriously limits model capability. Moreover, they cannot balance preserving heterogeneity with mining high-order entity information for users across the two graphs. To fill these gaps, we propose a rigorously Lorentz-group-equivariant knowledge-enhanced collaborative filtering model (LECF). Innovatively, we jointly update the attribute embeddings (which carry high-order entity signals from the KG) and the hyperbolic embeddings (whose pairwise distances reveal recommendation tendency) with the LECF layer and its Lorentz Equivariant Transformation. Moreover, we propose a Hyperbolic Sparse Attention Mechanism to sample the most informative neighbor nodes. Lorentz equivariance is strictly maintained throughout the entire model, and experiments confirm that enforcing equivariance is necessary. Extensive experiments on three real-world benchmarks demonstrate that LECF remarkably outperforms state-of-the-art methods.

* 11 pages, 6 figures 
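For background on the geometry such a hyperbolic recommender relies on, the sketch below computes the Lorentzian inner product and the induced distance between embeddings on the hyperboloid (Lorentz model). It illustrates the standard distance formula only, not LECF's equivariant layers; the curvature handling is a common convention assumed here.

```python
# Lorentz-model basics: <x, y>_L = -x_0*y_0 + sum_i x_i*y_i, and the induced
# geodesic distance d(x, y) = (1/sqrt(c)) * arccosh(-c * <x, y>_L).
import torch

def lorentz_inner(x, y):
    time = -x[..., :1] * y[..., :1]                       # "time" coordinate term
    space = (x[..., 1:] * y[..., 1:]).sum(dim=-1, keepdim=True)
    return (time + space).squeeze(-1)

def lorentz_distance(x, y, curvature=1.0, eps=1e-7):
    inner = -curvature * lorentz_inner(x, y)
    # Clamp for numerical stability: arccosh is defined on [1, inf)
    return torch.acosh(torch.clamp(inner, min=1.0 + eps)) / curvature ** 0.5
```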

MetaFormer Baselines for Vision

Oct 24, 2022
Weihao Yu, Chenyang Si, Pan Zhou, Mi Luo, Yichen Zhou, Jiashi Feng, Shuicheng Yan, Xinchao Wang

MetaFormer, the abstracted architecture of Transformer, has been found to play a significant role in achieving competitive performance. In this paper, we further explore the capacity of MetaFormer, again without focusing on token mixer design: we introduce several baseline models under MetaFormer using the most basic or common mixers, and summarize our observations as follows. (1) MetaFormer ensures a solid lower bound of performance. By merely adopting identity mapping as the token mixer, the MetaFormer model, termed IdentityFormer, achieves >80% accuracy on ImageNet-1K. (2) MetaFormer works well with arbitrary token mixers. Even when the token mixer is specified as a random matrix, the resulting model RandFormer yields an accuracy of >81%, outperforming IdentityFormer. The results suggest MetaFormer can be relied upon when new token mixers are adopted. (3) MetaFormer effortlessly offers state-of-the-art results. With just conventional token mixers dating back five years, the models instantiated from MetaFormer already beat the state of the art. (a) ConvFormer outperforms ConvNeXt. Taking common depthwise separable convolutions as the token mixer, the model termed ConvFormer, which can be regarded as a pure CNN, outperforms the strong CNN model ConvNeXt. (b) CAFormer sets a new record on ImageNet-1K. By simply applying depthwise separable convolutions as the token mixer in the bottom stages and vanilla self-attention in the top stages, the resulting model CAFormer sets a new record on ImageNet-1K: it achieves an accuracy of 85.5% at 224x224 resolution under normal supervised training, without external data or distillation. In our expedition to probe MetaFormer, we also find that a new activation, StarReLU, reduces activation FLOPs by 71% compared with GELU while achieving better performance. We expect StarReLU to find great potential in MetaFormer-like models and other neural networks.

* Code: https://github.com/sail-sg/metaformer 
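The StarReLU activation mentioned above is a squared ReLU with a learnable scale and bias, StarReLU(x) = s * ReLU(x)^2 + b. The sketch below uses initial values derived from the assumption of standard-normal inputs; treat the defaults as illustrative and see the released code for the authors' settings.

```python
# Sketch of StarReLU: s * relu(x)^2 + b with learnable scalar scale and bias.
import torch
import torch.nn as nn

class StarReLU(nn.Module):
    def __init__(self, scale=0.8944, bias=-0.4472, learnable=True):
        super().__init__()
        # Defaults roughly normalize the output to zero mean / unit variance
        # when the input is standard normal (an assumption, not a guarantee).
        self.scale = nn.Parameter(torch.tensor(scale), requires_grad=learnable)
        self.bias = nn.Parameter(torch.tensor(bias), requires_grad=learnable)

    def forward(self, x):
        return self.scale * torch.relu(x) ** 2 + self.bias
```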

ODformer: Spatial-Temporal Transformers for Long Sequence Origin-Destination Matrix Forecasting Against Cross Application Scenario

Aug 17, 2022
Jin Huang, Bosong Huang, Weihao Yu, Jing Xiao, Ruzhong Xie, Ke Ruan

Origin-Destination (OD) matrices record directional flow data between pairs of OD regions. The intricate spatiotemporal dependencies within these matrices make the OD matrix forecasting (ODMF) problem both intractable and non-trivial. Moreover, most related methods are designed for very-short-sequence time series forecasting in specific application scenarios and cannot meet the varying scenarios and forecasting horizons of practical applications. To address these issues, we propose a Transformer-like model named ODformer with two salient characteristics: (i) a novel OD Attention mechanism, which captures the special spatial dependencies between OD pairs sharing the same origin (destination) and, when combined with a 2D-GCN that captures spatial dependencies between OD regions, greatly improves the model's ability to predict across application scenarios; and (ii) a PeriodSparse Self-attention that effectively forecasts long-sequence OD matrix series while adapting to the periodic differences of different scenarios. Extensive experiments in three application domains (transportation traffic, IP backbone network traffic, and crowd flow) show that our method outperforms state-of-the-art methods.
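As a rough illustration of attention restricted to OD pairs that share an origin or a destination, the sketch below applies standard multi-head attention along the rows and columns of an OD feature tensor. The shapes, the use of nn.MultiheadAttention, and the additive fusion are assumptions for illustration, not ODformer's actual OD Attention.

```python
# Row/column attention over an OD feature tensor: each origin attends over its
# destinations, and each destination attends over its origins.
import torch
import torch.nn as nn

class ODAttention(nn.Module):
    def __init__(self, dim, heads=4):
        super().__init__()
        self.row_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.col_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, od):                        # od: [origins O, destinations D, dim]
        rows, _ = self.row_attn(od, od, od)       # attention within each origin's row
        cols = od.transpose(0, 1)                 # [D, O, dim]
        cols, _ = self.col_attn(cols, cols, cols) # attention within each destination's column
        return rows + cols.transpose(0, 1)
```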


Re-thinking and Re-labeling LIDC-IDRI for Robust Pulmonary Cancer Prediction

Jul 28, 2022
Hanxiao Zhang, Xiao Gu, Minghui Zhang, Weihao Yu, Liang Chen, Zhexin Wang, Feng Yao, Yun Gu, Guang-Zhong Yang

The LIDC-IDRI database is the most popular benchmark for lung cancer prediction. However, with subjective assessment from radiologists, nodules in LIDC may carry malignancy annotations entirely different from the pathological ground truth, introducing label assignment errors and subsequent supervision bias during training. The LIDC database thus requires more objective labels for learning-based cancer prediction. Based on an extra small dataset containing 180 nodules diagnosed by pathological examination, we propose to re-label LIDC data to mitigate the effect of the original annotation bias, verified on this robust benchmark. We demonstrate in this paper that providing new labels by similar-nodule retrieval based on metric learning is an effective re-labeling strategy. Training on these re-labeled LIDC nodules leads to improved model performance, which is further enhanced when new labels for uncertain nodules are added. We further conclude that re-labeling LIDC is currently an expedient way to achieve robust lung cancer prediction, while building a large pathologically proven nodule database provides the long-term solution.
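To illustrate re-labeling by similar-nodule retrieval, the sketch below assigns each LIDC nodule the majority pathological label of its nearest neighbors in a learned embedding space. The embedding source, the cosine metric, and k are assumptions; the paper's metric-learning setup may differ.

```python
# Re-labeling via nearest-neighbor retrieval against a small pathologically
# proven reference set.
import numpy as np

def relabel_by_retrieval(lidc_embeddings, proven_embeddings, proven_labels, k=5):
    """lidc_embeddings: [N, d]; proven_*: from the pathologically proven nodules
    (labels assumed to be non-negative integers, e.g. 0 = benign, 1 = malignant)."""
    a = lidc_embeddings / np.linalg.norm(lidc_embeddings, axis=1, keepdims=True)
    b = proven_embeddings / np.linalg.norm(proven_embeddings, axis=1, keepdims=True)
    sims = a @ b.T                                        # cosine similarity matrix [N, M]
    new_labels = []
    for row in sims:
        top_k = np.argsort(row)[-k:]                      # k most similar proven nodules
        votes = proven_labels[top_k]
        new_labels.append(np.bincount(votes).argmax())    # majority vote becomes the new label
    return np.array(new_labels)
```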


Inception Transformer

May 26, 2022
Chenyang Si, Weihao Yu, Pan Zhou, Yichen Zhou, Xinchao Wang, Shuicheng Yan

Recent studies show that Transformer has a strong capability for building long-range dependencies, yet is incompetent at capturing high frequencies, which predominantly convey local information. To tackle this issue, we present a novel and general-purpose Inception Transformer, or iFormer for short, that effectively learns comprehensive features with both high- and low-frequency information in visual data. Specifically, we design an Inception mixer to explicitly graft the advantages of convolution and max-pooling for capturing high-frequency information onto Transformers. Different from recent hybrid frameworks, the Inception mixer brings greater efficiency through a channel-splitting mechanism that adopts parallel convolution/max-pooling paths and a self-attention path as high- and low-frequency mixers, while retaining the flexibility to model discriminative information scattered across a wide frequency range. Considering that bottom layers play a larger role in capturing high-frequency details while top layers focus more on modeling low-frequency global information, we further introduce a frequency ramp structure, i.e., gradually decreasing the dimensions fed to the high-frequency mixer and increasing those fed to the low-frequency mixer, which can effectively trade off high- and low-frequency components across different layers. We benchmark iFormer on a series of vision tasks and show that it achieves impressive performance on image classification, COCO detection, and ADE20K segmentation. For example, our iFormer-S achieves a top-1 accuracy of 83.4% on ImageNet-1K, 3.6% higher than DeiT-S, and even slightly better than the much bigger Swin-B (83.3%) with only 1/4 of the parameters and 1/3 of the FLOPs. Code and models will be released at https://github.com/sail-sg/iFormer.

* Code and models will be released at https://github.com/sail-sg/iFormer 
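A minimal sketch of the channel-splitting idea behind the Inception mixer: part of the channels pass through convolution and max-pooling (high-frequency) paths and the rest through self-attention (low-frequency), then the outputs are concatenated. The split ratio, kernel sizes, and the plain nn.MultiheadAttention used here are simplifying assumptions (choose dim and heads so the attention channels divide evenly); see the released code for the authors' design.

```python
import torch
import torch.nn as nn

class InceptionMixer(nn.Module):
    def __init__(self, dim, low_freq_ratio=0.5, heads=4):
        super().__init__()
        self.c_low = int(dim * low_freq_ratio)           # channels for the attention path
        c_high = dim - self.c_low
        self.c_conv = c_high // 2                        # high-frequency conv path
        self.c_pool = c_high - self.c_conv               # high-frequency max-pool path
        self.conv_path = nn.Conv2d(self.c_conv, self.c_conv, 3, padding=1, groups=self.c_conv)
        self.pool_path = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                       nn.Conv2d(self.c_pool, self.c_pool, 1))
        self.attn = nn.MultiheadAttention(self.c_low, heads, batch_first=True)

    def forward(self, x):                                # x: [B, C, H, W]
        b, c, h, w = x.shape
        x_low, x_conv, x_pool = torch.split(x, [self.c_low, self.c_conv, self.c_pool], dim=1)
        high = torch.cat((self.conv_path(x_conv), self.pool_path(x_pool)), dim=1)
        tokens = x_low.flatten(2).transpose(1, 2)        # [B, H*W, c_low] token sequence
        low, _ = self.attn(tokens, tokens, tokens)
        low = low.transpose(1, 2).reshape(b, self.c_low, h, w)
        return torch.cat((low, high), dim=1)
```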

Mugs: A Multi-Granular Self-Supervised Learning Framework

Mar 27, 2022
Pan Zhou, Yichen Zhou, Chenyang Si, Weihao Yu, Teck Khim Ng, Shuicheng Yan

In self-supervised learning, multi-granular features are heavily desired though rarely investigated, as different downstream tasks (e.g., general and fine-grained classification) often require different or multi-granular features, e.g., fine- or coarse-grained features or a mixture of both. In this work, for the first time, we propose an effective MUlti-Granular Self-supervised learning (Mugs) framework to explicitly learn multi-granular visual features. Mugs has three complementary granular supervisions: 1) an instance discrimination supervision (IDS), 2) a novel local-group discrimination supervision (LGDS), and 3) a group discrimination supervision (GDS). IDS distinguishes different instances to learn instance-level fine-grained features. LGDS aggregates the features of an image and its neighbors into a local-group feature, pulls together local-group features from different crops of the same image, and pushes them away from those of other images. It provides complementary instance supervision to IDS via an extra alignment on local neighbors, and scatters different local groups separately to increase discriminability. Accordingly, it helps learn high-level fine-grained features at a local-group level. Finally, to prevent similar local groups from being scattered randomly or far apart, GDS brings similar samples close and thus pulls similar local groups together, capturing coarse-grained features at a (semantic) group level. Consequently, Mugs can capture three granularities of features that often enjoy higher generality on diverse downstream tasks than single-granular features, e.g., the instance-level fine-grained features in contrastive learning. By pretraining only on ImageNet-1K, Mugs sets a new SoTA linear probing accuracy of 82.1% on ImageNet-1K, improving on the previous SoTA by 1.1%. It also surpasses SoTAs on other tasks, e.g., transfer learning, detection, and segmentation.

* code and models are available at https://github.com/sail-sg/mugs 
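As a rough sketch of the local-group aggregation idea, the function below averages an image's feature with its top-k nearest neighbors retrieved from a memory buffer, producing the local-group feature that LGDS-style supervision operates on. The buffer handling, cosine retrieval, and k are illustrative assumptions, not the exact Mugs implementation.

```python
# Aggregate each crop feature with its k nearest neighbors from a memory bank.
import torch
import torch.nn.functional as F

def local_group_feature(feat, memory_bank, k=8):
    """feat: [B, d] current crop features; memory_bank: [M, d] buffered features."""
    feat_n = F.normalize(feat, dim=-1)
    bank_n = F.normalize(memory_bank, dim=-1)
    sims = feat_n @ bank_n.t()                   # [B, M] cosine similarities
    _, idx = sims.topk(k, dim=-1)                # indices of the k nearest neighbors
    neighbors = memory_bank[idx]                 # [B, k, d]
    group = torch.cat((feat.unsqueeze(1), neighbors), dim=1)
    return group.mean(dim=1)                     # aggregated local-group feature
```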

LTSP: Long-Term Slice Propagation for Accurate Airway Segmentation

Feb 13, 2022
Yangqian Wu, Minghui Zhang, Weihao Yu, Hao Zheng, Jiasheng Xu, Yun Gu

Purpose: Bronchoscopic intervention is a widely used clinical technique for pulmonary diseases, which requires an accurate and topologically complete airway map for localization and guidance. The airway map can be extracted automatically from chest computed tomography (CT) scans by airway segmentation methods. Due to the complex tree-like structure of the airway, preserving its topological completeness while maintaining segmentation accuracy is a challenging task. Methods: In this paper, a long-term slice propagation (LTSP) method is proposed for accurate airway segmentation from pathological CT scans. We also design a two-stage end-to-end segmentation framework that utilizes the LTSP method in the decoding process. Stage 1 generates a coarse feature map with an encoder-decoder architecture. Stage 2 adopts the proposed LTSP method to exploit slice continuity information and enhance the weak airway features in the coarse feature map. The final segmentation result is predicted from the refined feature map. Results: Extensive experiments were conducted to evaluate the performance of the proposed method on 70 clinical CT scans. The results demonstrate considerable improvements of the proposed method over state-of-the-art methods, as most breakages are eliminated and more tiny bronchi are detected. The ablation studies further confirm the effectiveness of the constituents of the proposed method. Conclusion: Slice continuity information is beneficial to accurate airway segmentation. Furthermore, by propagating long-term slice features, the airway topology connectivity is preserved while the overall segmentation accuracy is maintained.

* Accepted by IPCAI 2022 
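As a loose illustration of propagating features across consecutive CT slices to reinforce weak airway responses, the sketch below carries a running slice memory through the volume and fuses it with each slice's features. The fusion module, propagation direction, and single-pass design are assumptions for illustration, not the LTSP method itself.

```python
# Propagate a slice-level feature memory along the depth axis of a CT volume.
import torch
import torch.nn as nn

class SlicePropagation(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Fuses the running memory with the current slice's features
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, volume_feats):                 # volume_feats: [D slices, C, H, W]
        memory = volume_feats[0]
        refined = [memory]
        for z in range(1, volume_feats.size(0)):
            current = volume_feats[z]
            fused = self.fuse(torch.cat((memory, current), dim=0).unsqueeze(0))
            memory = torch.relu(fused).squeeze(0)    # updated memory carries long-term context
            refined.append(memory)
        return torch.stack(refined, dim=0)           # refined per-slice feature maps
```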