Chao Peng

Hawkeye: Change-targeted Testing for Android Apps based on Deep Reinforcement Learning

Sep 04, 2023
Chao Peng, Zhengwei Lv, Jiarong Fu, Jiayuan Liang, Zhao Zhang, Ajitha Rajan, Ping Yang

Android Apps are frequently updated to keep up with changing user, hardware, and business demands. Ensuring the correctness of App updates through extensive testing is crucial to avoid potential bugs reaching the end user. Existing Android testing tools generate GUI events focusing on improving test coverage of the entire App rather than prioritising updates and their impacted elements. Recent research has proposed change-focused testing, but it relies on random exploration to exercise the updates and impacted GUI elements, which is ineffective and slow for large, complex Apps with a huge input exploration space. We propose Hawkeye, which performs directed testing of App updates and prioritises executing GUI actions associated with code changes based on deep reinforcement learning from historical exploration data. Our empirical evaluation compares Hawkeye with the state-of-the-art model-based and reinforcement learning-based testing tools FastBot2 and ARES using 10 popular open-source Apps and 1 commercial App. We find that Hawkeye generates GUI event sequences targeting changed functions more reliably than FastBot2 and ARES for the open-source Apps and the large commercial App, and achieves comparable performance on smaller open-source Apps with a more tractable exploration space. The industrial deployment of Hawkeye in the development pipeline also shows that Hawkeye is well suited to smoke testing merge requests of a complicated commercial App.
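
The abstract does not give implementation details; as an illustration only, the sketch below shows one way change-targeted prioritisation could be framed: a tabular Q-learning agent that rewards GUI actions whose execution reaches functions in the change set. All names (ChangeTargetedAgent, changed_functions, and so on) are hypothetical and are not taken from the paper.

```python
import random
from collections import defaultdict

class ChangeTargetedAgent:
    """Illustrative tabular Q-learning agent (hypothetical, not Hawkeye's actual model).

    States are GUI screens, actions are GUI events; the reward is 1 when the executed
    action covers a function in the change set, else 0.
    """

    def __init__(self, changed_functions, alpha=0.5, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)            # (screen, action) -> estimated value
        self.changed = set(changed_functions)
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, screen, actions):
        # Epsilon-greedy: mostly exploit actions believed to reach changed code.
        if random.random() < self.epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: self.q[(screen, a)])

    def update(self, screen, action, next_screen, next_actions, covered_functions):
        reward = 1.0 if self.changed & set(covered_functions) else 0.0
        best_next = max((self.q[(next_screen, a)] for a in next_actions), default=0.0)
        td_target = reward + self.gamma * best_next
        self.q[(screen, action)] += self.alpha * (td_target - self.q[(screen, action)])
```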

CValues: Measuring the Values of Chinese Large Language Models from Safety to Responsibility

Jul 19, 2023
Guohai Xu, Jiayi Liu, Ming Yan, Haotian Xu, Jinghui Si, Zhuoran Zhou, Peng Yi, Xing Gao, Jitao Sang, Rong Zhang, Ji Zhang, Chao Peng, Fei Huang, Jingren Zhou

With the rapid evolution of large language models (LLMs), there is a growing concern that they may pose risks or have negative social impacts. Therefore, evaluating alignment with human values is becoming increasingly important. Previous work mainly focuses on assessing the performance of LLMs on certain knowledge and reasoning abilities, while neglecting alignment with human values, especially in a Chinese context. In this paper, we present CValues, the first Chinese human values evaluation benchmark, which measures the alignment of LLMs in terms of both safety and responsibility criteria. To this end, we manually collect adversarial safety prompts across 10 scenarios and responsibility prompts induced by professional experts from 8 domains. To provide a comprehensive values evaluation of Chinese LLMs, we not only conduct human evaluation for reliable comparison, but also construct multiple-choice prompts for automatic evaluation. Our findings suggest that while most Chinese LLMs perform well in terms of safety, there is considerable room for improvement in terms of responsibility. Moreover, both automatic and human evaluation are important for assessing human values alignment in different aspects. The benchmark and code are available on ModelScope and GitHub.
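
The automatic multiple-choice evaluation can be pictured with a minimal scoring sketch; the record format and field names below are hypothetical and not the benchmark's released code.

```python
from collections import defaultdict

def multi_choice_accuracy(records):
    """Illustrative automatic scoring for multiple-choice prompts (hypothetical format):
    each record carries a 'domain', the model's 'chosen' option, and the 'reference'
    option; accuracy is reported per domain.
    """
    correct, total = defaultdict(int), defaultdict(int)
    for r in records:
        total[r["domain"]] += 1
        if r["chosen"] == r["reference"]:
            correct[r["domain"]] += 1
    return {domain: correct[domain] / total[domain] for domain in total}

# Example usage with made-up records:
# multi_choice_accuracy([{"domain": "psychology", "chosen": "A", "reference": "A"}])
```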

* Work in progress 

An End-to-End Network for Panoptic Segmentation

Mar 13, 2019
Huanyu Liu, Chao Peng, Changqian Yu, Jingbo Wang, Xu Liu, Gang Yu, Wei Jiang

Panoptic segmentation, which needs to assign a category label to each pixel and segment each object instance simultaneously, is a challenging topic. Traditionally, existing approaches utilize two independent models without sharing features, which makes the pipeline inefficient to implement. In addition, a heuristic method is usually employed to merge the results, yet the overlapping relationship between object instances is difficult to determine without sufficient context information during the merging process. To address these problems, we propose a novel end-to-end network for panoptic segmentation, which can efficiently and effectively predict both the instance and stuff segmentation in a single network. Moreover, we introduce a novel spatial ranking module to deal with the occlusion problem between predicted instances. Extensive experiments validate the performance of the proposed method, and promising results are achieved on the COCO Panoptic benchmark.
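
The abstract only names the spatial ranking module; as a rough illustration of how ranking scores can resolve occlusion during merging (not the paper's exact algorithm), the sketch below pastes instance masks onto a panoptic canvas in rank order.

```python
import numpy as np

def merge_instances_by_rank(instance_masks, ranking_scores, height, width):
    """Illustrative merge step: paste instance masks onto a panoptic canvas in ascending
    order of their ranking score, so higher-ranked instances overwrite lower-ranked ones
    wherever they overlap.

    instance_masks:  list of boolean arrays of shape (height, width)
    ranking_scores:  list of floats, e.g. produced by a spatial ranking module
    Returns an int array where 0 is unassigned (left for stuff) and i+1 marks instance i.
    """
    canvas = np.zeros((height, width), dtype=np.int32)
    for idx in np.argsort(ranking_scores):       # lowest-ranked instance is pasted first
        canvas[instance_masks[idx]] = int(idx) + 1
    return canvas
```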

BiSeNet: Bilateral Segmentation Network for Real-time Semantic Segmentation

Aug 02, 2018
Changqian Yu, Jingbo Wang, Chao Peng, Changxin Gao, Gang Yu, Nong Sang

Semantic segmentation requires both rich spatial information and a sizeable receptive field. However, modern approaches usually compromise spatial resolution to achieve real-time inference speed, which leads to poor performance. In this paper, we address this dilemma with a novel Bilateral Segmentation Network (BiSeNet). We first design a Spatial Path with a small stride to preserve spatial information and generate high-resolution features. Meanwhile, a Context Path with a fast downsampling strategy is employed to obtain a sufficient receptive field. On top of the two paths, we introduce a new Feature Fusion Module to combine features efficiently. The proposed architecture strikes the right balance between speed and segmentation performance on the Cityscapes, CamVid, and COCO-Stuff datasets. Specifically, for a 2048x1024 input, we achieve 68.4% mean IoU on the Cityscapes test dataset at 105 FPS on one NVIDIA Titan XP card, which is significantly faster than existing methods with comparable performance.
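
To make the two-path idea concrete, here is a minimal PyTorch-style sketch under assumed channel widths and depths: a shallow, high-resolution Spatial Path, a rapidly downsampling Context Path, and a naive fusion by concatenation. The paper's Attention Refinement and Feature Fusion Modules are richer than this stand-in.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_bn_relu(in_ch, out_ch, stride):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class TwoPathSketch(nn.Module):
    """Hypothetical bilateral design; layer counts and widths are illustrative only."""

    def __init__(self, num_classes=19):
        super().__init__()
        # Spatial Path: few layers, small overall stride (about 8x) to keep resolution.
        self.spatial = nn.Sequential(
            conv_bn_relu(3, 64, stride=2),
            conv_bn_relu(64, 64, stride=2),
            conv_bn_relu(64, 128, stride=2),
        )
        # Context Path: aggressive downsampling (about 32x) for a large receptive field.
        self.context = nn.Sequential(
            conv_bn_relu(3, 64, stride=4),
            conv_bn_relu(64, 128, stride=2),
            conv_bn_relu(128, 128, stride=2),
            conv_bn_relu(128, 128, stride=2),
        )
        self.classifier = nn.Conv2d(128 + 128, num_classes, kernel_size=1)

    def forward(self, x):
        sp = self.spatial(x)                                   # roughly 1/8 resolution
        cx = self.context(x)                                   # roughly 1/32 resolution
        cx = F.interpolate(cx, size=sp.shape[2:], mode="bilinear", align_corners=False)
        fused = torch.cat([sp, cx], dim=1)                     # naive fusion stand-in
        logits = self.classifier(fused)
        return F.interpolate(logits, scale_factor=8, mode="bilinear", align_corners=False)
```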

* Accepted to ECCV 2018. 17 pages, 4 figures, 9 tables 

Learning a Discriminative Feature Network for Semantic Segmentation

Apr 25, 2018
Changqian Yu, Jingbo Wang, Chao Peng, Changxin Gao, Gang Yu, Nong Sang

Most existing methods of semantic segmentation still suffer from two kinds of challenges: intra-class inconsistency and inter-class indistinction. To tackle these two problems, we propose a Discriminative Feature Network (DFN), which contains two sub-networks: a Smooth Network and a Border Network. Specifically, to handle the intra-class inconsistency problem, we design a Smooth Network with a Channel Attention Block and global average pooling to select the more discriminative features. Furthermore, we propose a Border Network to make the bilateral features of the boundary distinguishable with deep semantic boundary supervision. Based on the proposed DFN, we achieve state-of-the-art performance of 86.2% mean IoU on PASCAL VOC 2012 and 80.3% mean IoU on the Cityscapes dataset.
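
A minimal sketch of channel attention driven by global average pooling follows; it is an illustration of the general mechanism, not necessarily DFN's exact block, and it assumes both inputs already share shape.

```python
import torch
import torch.nn as nn

class ChannelAttentionBlock(nn.Module):
    """Illustrative channel attention: global average pooling yields per-channel weights
    that re-scale a feature map so more discriminative channels are emphasised.
    """

    def __init__(self, channels):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)           # global average pooling
        self.fc = nn.Conv2d(channels, channels, 1)    # 1x1 conv acting as a channel-wise FC
        self.sigmoid = nn.Sigmoid()

    def forward(self, low_level, high_level):
        # Assumes low_level and high_level have identical (N, C, H, W) shapes.
        # Channel weights come from the semantically stronger high-level feature.
        weights = self.sigmoid(self.fc(self.pool(high_level)))
        return low_level * weights + high_level       # re-weighted low-level fused with high-level
```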

* Accepted to CVPR 2018. 10 pages, 9 figures 

DetNet: A Backbone network for Object Detection

Apr 19, 2018
Zeming Li, Chao Peng, Gang Yu, Xiangyu Zhang, Yangdong Deng, Jian Sun

Recent CNN-based object detectors, whether one-stage methods like YOLO, SSD, and RetinaNet or two-stage detectors like Faster R-CNN, R-FCN, and FPN, usually try to finetune directly from ImageNet pre-trained models designed for image classification. There has been little work discussing a backbone feature extractor specifically designed for object detection. More importantly, there are several differences between the tasks of image classification and object detection. 1. Recent object detectors like FPN and RetinaNet usually involve extra stages, compared with image classification, to handle objects at various scales. 2. Object detection not only needs to recognize the category of object instances but also to spatially locate their positions. A large downsampling factor brings a large valid receptive field, which is good for image classification but compromises object localization ability. Due to this gap between image classification and object detection, we propose DetNet in this paper, a novel backbone network specifically designed for object detection. Moreover, DetNet includes the extra stages compared with a traditional backbone network for image classification, while maintaining high spatial resolution in deeper layers. Without any bells and whistles, state-of-the-art results have been obtained for both object detection and instance segmentation on the MSCOCO benchmark based on our DetNet (4.8G FLOPs) backbone. The code will be released for reproduction.
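
The sketch below illustrates the general recipe of keeping resolution in deeper stages while enlarging the receptive field, in the spirit of the approach; the channel widths and dilation rate are assumptions, not DetNet's published configuration.

```python
import torch.nn as nn

class DilatedBottleneck(nn.Module):
    """Sketch of a dilated residual bottleneck: stride stays 1 so spatial resolution is
    preserved in deeper layers, while dilation enlarges the receptive field.
    """

    def __init__(self, channels=256, bottleneck=64, dilation=2):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(channels, bottleneck, 1, bias=False),
            nn.BatchNorm2d(bottleneck),
            nn.ReLU(inplace=True),
            nn.Conv2d(bottleneck, bottleneck, 3, padding=dilation, dilation=dilation, bias=False),
            nn.BatchNorm2d(bottleneck),
            nn.ReLU(inplace=True),
            nn.Conv2d(bottleneck, channels, 1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.block(x))   # identity shortcut; resolution unchanged
```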

MegDet: A Large Mini-Batch Object Detector

Apr 11, 2018
Chao Peng, Tete Xiao, Zeming Li, Yuning Jiang, Xiangyu Zhang, Kai Jia, Gang Yu, Jian Sun

The improvements in recent CNN-based object detection works, from R-CNN [11] and Fast/Faster R-CNN [10, 31] to the recent Mask R-CNN [14] and RetinaNet [24], mainly come from new network architectures, new frameworks, or novel loss designs. However, mini-batch size, a key factor in training, has not been well studied. In this paper, we propose a Large Mini-Batch Object Detector (MegDet) to enable training with a much larger mini-batch size than before (e.g. from 16 to 256), so that we can effectively utilize multiple GPUs (up to 128 in our experiments) to significantly shorten the training time. Technically, we suggest a learning rate policy and Cross-GPU Batch Normalization, which together allow us to successfully train a large mini-batch detector in much less time (e.g., from 33 hours to 4 hours) and achieve even better accuracy. MegDet is the backbone of our submission (mmAP 52.5%) to the COCO 2017 Challenge, where we won 1st place in the Detection task.
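
Cross-GPU Batch Normalization amounts to aggregating per-GPU statistics before normalising; the forward-pass sketch below illustrates this with an all-reduce, assuming torch.distributed is already initialised. It is an illustration of the idea, not the paper's implementation.

```python
import torch
import torch.distributed as dist

def cross_gpu_batch_norm(x, weight, bias, eps=1e-5):
    """Illustrative Cross-GPU Batch Normalization forward pass: per-GPU sums are
    all-reduced so the mean and variance cover the whole mini-batch, not one device.

    x: (N, C, H, W) local batch on this GPU; weight, bias: (C,) affine parameters.
    """
    count = torch.tensor([x.numel() / x.size(1)], device=x.device)   # N*H*W on this GPU
    local_sum = x.sum(dim=(0, 2, 3))
    local_sqsum = (x * x).sum(dim=(0, 2, 3))

    # Aggregate statistics across all GPUs participating in the job.
    for t in (count, local_sum, local_sqsum):
        dist.all_reduce(t, op=dist.ReduceOp.SUM)

    mean = local_sum / count
    var = local_sqsum / count - mean * mean
    x_hat = (x - mean[None, :, None, None]) / torch.sqrt(var[None, :, None, None] + eps)
    return x_hat * weight[None, :, None, None] + bias[None, :, None, None]
```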

ExFuse: Enhancing Feature Fusion for Semantic Segmentation

Apr 11, 2018
Zhenli Zhang, Xiangyu Zhang, Chao Peng, Dazhi Cheng, Jian Sun

Modern semantic segmentation frameworks usually combine low-level and high-level features from pre-trained backbone convolutional models to boost performance. In this paper, we first point out that a simple fusion of low-level and high-level features can be less effective because of the gap in semantic levels and spatial resolution. We find that introducing semantic information into low-level features and high-resolution details into high-level features is more effective for the later fusion. Based on this observation, we propose a new framework, named ExFuse, to bridge the gap between low-level and high-level features, which significantly improves the segmentation quality by 4.0% in total. Furthermore, we evaluate our approach on the challenging PASCAL VOC 2012 segmentation benchmark and achieve 87.9% mean IoU, which outperforms the previous state-of-the-art results.
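
A hypothetical sketch of the "narrow the gap, then fuse" idea follows; it uses an auxiliary head on the low-level features (to push semantics into them) and a learned upsampling of the high-level features (to add resolution), which is only one simple realisation and not ExFuse's exact set of modules.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GapNarrowingFusion(nn.Module):
    """Illustrative fusion block: low-level features get an auxiliary segmentation head
    so training injects semantics into them, while high-level features are upsampled
    with a learned layer before element-wise fusion.
    """

    def __init__(self, low_ch, high_ch, num_classes):
        super().__init__()
        self.low_proj = nn.Conv2d(low_ch, high_ch, kernel_size=1)
        self.aux_head = nn.Conv2d(low_ch, num_classes, kernel_size=1)             # deep supervision
        self.up = nn.ConvTranspose2d(high_ch, high_ch, kernel_size=2, stride=2)   # learned upsampling
        self.fuse_head = nn.Conv2d(high_ch, num_classes, kernel_size=1)

    def forward(self, low, high):
        aux_logits = self.aux_head(low)                 # auxiliary loss attaches here in training
        high_up = self.up(high)
        if high_up.shape[2:] != low.shape[2:]:
            high_up = F.interpolate(high_up, size=low.shape[2:], mode="bilinear", align_corners=False)
        fused = self.low_proj(low) + high_up            # element-wise fusion after narrowing the gap
        return self.fuse_head(fused), aux_logits
```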

Light-Head R-CNN: In Defense of Two-Stage Object Detector

Nov 23, 2017
Zeming Li, Chao Peng, Gang Yu, Xiangyu Zhang, Yangdong Deng, Jian Sun

In this paper, we first investigate why typical two-stage methods are not as fast as single-stage, fast detectors like YOLO and SSD. We find that Faster R-CNN and R-FCN perform intensive computation after or before RoI warping: Faster R-CNN involves two fully connected layers for RoI recognition, while R-FCN produces a large score map. The speed of these networks is therefore limited by the heavy-head design of the architecture, and even if we significantly reduce the base model, the computation cost cannot be largely decreased accordingly. We propose a new two-stage detector, Light-Head R-CNN, to address this shortcoming of current two-stage approaches. In our design, we make the head of the network as light as possible by using a thin feature map and a cheap R-CNN subnet (pooling and a single fully connected layer). Our ResNet-101 based Light-Head R-CNN outperforms state-of-the-art object detectors on COCO while keeping time efficiency. More importantly, by simply replacing the backbone with a tiny network (e.g., Xception), our Light-Head R-CNN gets 30.7 mmAP at 102 FPS on COCO, significantly outperforming single-stage, fast detectors like YOLO and SSD on both speed and accuracy. Code will be made publicly available.
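
The "light head" recipe can be sketched as: squeeze the backbone feature map into a thin map, pool RoIs from it, and feed a single fully connected layer into the classification and regression heads. The sketch below is an approximation with assumed dimensions and standard RoIAlign, not the paper's exact head (which uses large separable convolutions and position-sensitive pooling).

```python
import torch
import torch.nn as nn
from torchvision.ops import roi_align

class LightHead(nn.Module):
    """Illustrative light detection head; channel counts and pooling size are assumptions."""

    def __init__(self, in_ch=2048, thin_ch=10, pool=7, fc_dim=2048, num_classes=81):
        super().__init__()
        self.thin = nn.Conv2d(in_ch, thin_ch, kernel_size=1)     # thin feature map
        self.fc = nn.Linear(thin_ch * pool * pool, fc_dim)       # the only FC before the heads
        self.cls = nn.Linear(fc_dim, num_classes)
        self.reg = nn.Linear(fc_dim, 4 * num_classes)
        self.pool = pool

    def forward(self, feature_map, rois, spatial_scale=1.0 / 16):
        # rois: (K, 5) tensor of (batch_index, x1, y1, x2, y2) in image coordinates.
        thin = self.thin(feature_map)
        pooled = roi_align(thin, rois, output_size=self.pool, spatial_scale=spatial_scale)
        hidden = torch.relu(self.fc(pooled.flatten(start_dim=1)))
        return self.cls(hidden), self.reg(hidden)
```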

Large Kernel Matters -- Improve Semantic Segmentation by Global Convolutional Network

Mar 08, 2017
Chao Peng, Xiangyu Zhang, Gang Yu, Guiming Luo, Jian Sun

One of the recent trends [30, 31, 14] in network architecture design is stacking small filters (e.g., 1x1 or 3x3) throughout the entire network, because stacked small filters are more efficient than a large kernel given the same computational complexity. However, in the field of semantic segmentation, where we need to perform dense per-pixel prediction, we find that the large kernel (and effective receptive field) plays an important role when we have to perform the classification and localization tasks simultaneously. Following our design principle, we propose a Global Convolutional Network to address both the classification and localization issues for semantic segmentation. We also suggest a residual-based boundary refinement to further refine the object boundaries. Our approach achieves state-of-the-art performance on two public benchmarks and significantly outperforms previous results: 82.2% (vs 80.2%) on the PASCAL VOC 2012 dataset and 76.9% (vs 71.8%) on the Cityscapes dataset.
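
A large k x k kernel is commonly approximated by separable convolutions; the sketch below shows that construction (a k x 1 then 1 x k branch summed with the symmetric 1 x k then k x 1 branch). Kernel size and channel widths are illustrative assumptions rather than the paper's configuration.

```python
import torch.nn as nn

class GlobalConvBlock(nn.Module):
    """Sketch of a large-kernel block built from separable convolutions: the two summed
    branches jointly cover a k x k receptive field at a fraction of the cost of a dense
    k x k kernel.
    """

    def __init__(self, in_ch, out_ch, k=15):
        super().__init__()
        pad = k // 2
        self.branch_a = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=(k, 1), padding=(pad, 0)),
            nn.Conv2d(out_ch, out_ch, kernel_size=(1, k), padding=(0, pad)),
        )
        self.branch_b = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=(1, k), padding=(0, pad)),
            nn.Conv2d(out_ch, out_ch, kernel_size=(k, 1), padding=(pad, 0)),
        )

    def forward(self, x):
        return self.branch_a(x) + self.branch_b(x)
```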
