Kejia Zhang

Synth-AC: Enhancing Audio Captioning with Synthetic Supervision

Sep 18, 2023
Feiyang Xiao, Qiaoxi Zhu, Jian Guan, Xubo Liu, Haohe Liu, Kejia Zhang, Wenwu Wang

Data-driven approaches hold promise for audio captioning. However, the development of audio captioning methods can be biased by the limited availability and quality of text-audio data. This paper proposes the SynthAC framework, which leverages recent advances in audio generative models and commonly available text corpora to create synthetic text-audio pairs, thereby enhancing text-audio representation. Specifically, a text-to-audio generation model, AudioLDM, is used to generate synthetic audio signals from the captions of an image captioning dataset. SynthAC thus extends well-annotated captions from the text-vision domain to audio captioning, enhancing text-audio representation by learning the relations within synthetic text-audio pairs. Experiments demonstrate that SynthAC benefits audio captioning models by incorporating a well-annotated text corpus from the text-vision domain, offering a promising solution to the challenge of data scarcity. Furthermore, SynthAC can be easily adapted to various state-of-the-art methods, leading to substantial performance improvements.
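To make the pipeline concrete, below is a minimal sketch of the synthetic-supervision idea (not the authors' released code): captions drawn from an image-captioning corpus are passed to a text-to-audio generator to produce paired audio for training. The text_to_audio function is a hypothetical stand-in for a model such as AudioLDM and simply returns noise so the snippet runs on its own.

```python
import numpy as np

SAMPLE_RATE = 16000
_rng = np.random.default_rng(0)

def text_to_audio(caption: str, duration_s: float = 5.0) -> np.ndarray:
    """Hypothetical placeholder for a text-to-audio model such as AudioLDM.
    A real implementation would condition on `caption`; here we return noise
    so the example runs without model weights."""
    return _rng.uniform(-1.0, 1.0, int(SAMPLE_RATE * duration_s)).astype(np.float32)

# Captions borrowed from a text-vision corpus (illustrative examples only).
image_captions = [
    "a dog barking next to a passing train",
    "rain falling on a tin roof while birds chirp",
]

# Synthetic text-audio pairs that could augment an audio-captioning training set.
synthetic_pairs = [(caption, text_to_audio(caption)) for caption in image_captions]

for caption, audio in synthetic_pairs:
    print(f"{caption!r} -> {audio.shape[0] / SAMPLE_RATE:.1f}s of synthetic audio")
```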

Diving with Penguins: Detecting Penguins and their Prey in Animal-borne Underwater Videos via Deep Learning

Aug 14, 2023
Kejia Zhang, Mingyu Yang, Stephen D. J. Lang, Alistair M. McInnes, Richard B. Sherley, Tilo Burghardt

African penguins (Spheniscus demersus) are an endangered species. Little is known about their underwater hunting strategies and associated predation success rates, yet this knowledge is essential for guiding conservation. Modern bio-logging technology has the potential to provide valuable insights, but manually analysing large amounts of data from animal-borne video recorders (AVRs) is time-consuming. In this paper, we publish an animal-borne underwater video dataset of penguins and introduce a ready-to-deploy deep learning system capable of robustly detecting penguins (98.0% mAP50) and instances of fish (73.3% mAP50). We note that the detectors benefit explicitly from air-bubble learning to improve accuracy. Extending this detector into a dual-stream behaviour recognition network, we also provide the first results for identifying predation behaviour in penguin underwater videos. Whilst results are promising, further work is required before predation behaviour detection can be usefully applied in field scenarios. In summary, we provide a highly reliable underwater penguin detector, a fish detector, and a valuable first attempt towards automated visual detection of complex behaviours in a marine predator. We publish the networks, the DivingWithPenguins video dataset, annotations, splits, and weights for full reproducibility and immediate usability by practitioners.
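As a rough illustration of frame-level detection on animal-borne footage (a sketch, not the published DivingWithPenguins code), the snippet below runs a placeholder penguin/fish detector over synthetic frames and keeps only detections above a confidence threshold; the detect function and the 0.5 threshold are assumptions for illustration.

```python
import numpy as np

CLASSES = ("penguin", "fish")
_rng = np.random.default_rng(0)

def detect(frame: np.ndarray):
    """Hypothetical placeholder detector; returns (class, confidence, xyxy box) tuples."""
    num = int(_rng.integers(0, 4))
    return [
        (CLASSES[int(_rng.integers(0, 2))],
         float(_rng.uniform()),
         _rng.integers(0, frame.shape[0], size=4))
        for _ in range(num)
    ]

# Stand-in for frames decoded from an animal-borne video recorder (AVR).
frames = [_rng.integers(0, 255, size=(256, 256, 3), dtype=np.uint8) for _ in range(3)]

for i, frame in enumerate(frames):
    kept = [d for d in detect(frame) if d[1] >= 0.5]  # simple confidence filter
    print(f"frame {i}: {len(kept)} detections above threshold")
```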

* 5 pages, 5 figures, 4 Tables, "3rd International Workshop on Camera traps, AI, and Ecology (CamTrapAI)" 

Robust Semi-supervised Federated Learning for Images Automatic Recognition in Internet of Drones

Jan 03, 2022
Zhe Zhang, Shiyao Ma, Zhaohui Yang, Zehui Xiong, Jiawen Kang, Yi Wu, Kejia Zhang, Dusit Niyato

Air access networks have been recognized as a significant driver of various Internet of Things (IoT) services and applications. In particular, the aerial computing network infrastructure centered on the Internet of Drones has set off a new revolution in automatic image recognition. This emerging technology relies on sharing ground-truth labeled data between Unmanned Aerial Vehicle (UAV) swarms to train a high-quality automatic image recognition model. However, such an approach brings data privacy and data availability challenges. To address these issues, we first present a Semi-supervised Federated Learning (SSFL) framework for privacy-preserving UAV image recognition. Specifically, we propose a model parameter mixing strategy, referred to as Federated Mixing (FedMix), to improve the naive combination of FL and semi-supervised learning methods under two realistic scenarios (labels-at-client and labels-at-server). Furthermore, there are significant differences in the number, features, and distribution of local data collected by UAVs using different camera modules in different environments, i.e., statistical heterogeneity. To alleviate this statistical heterogeneity, we propose an aggregation rule based on the frequency of each client's participation in training, namely the FedFreq aggregation rule, which adjusts the weight of the corresponding local model according to its participation frequency. Numerical results demonstrate that the performance of our proposed method is significantly better than that of the current baseline and is robust to different non-IID levels of client data.
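The aggregation step can be illustrated with a toy sketch in the spirit of the FedFreq rule described above (an interpretation of the abstract, not the authors' implementation): each client's local parameters are weighted by its normalized participation count before averaging.

```python
import numpy as np

def fedfreq_aggregate(local_weights, participation_counts):
    """Weight each client's flattened parameters by its normalized participation
    frequency and average them (assumed formulation for illustration)."""
    freqs = np.asarray(participation_counts, dtype=np.float64)
    coeffs = freqs / freqs.sum()          # normalize to a convex combination
    stacked = np.stack(local_weights)     # shape: (num_clients, num_params)
    return (coeffs[:, None] * stacked).sum(axis=0)

# Toy example: three drone clients with flattened parameter vectors.
clients = [np.array([1.0, 2.0]), np.array([3.0, 0.0]), np.array([0.0, 1.0])]
counts = [5, 1, 2]   # client 0 has participated in the most training rounds
print(fedfreq_aggregate(clients, counts))  # result is pulled towards client 0
```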

* arXiv admin note: text overlap with arXiv:2110.13388 

MuVAM: A Multi-View Attention-based Model for Medical Visual Question Answering

Jul 07, 2021
Haiwei Pan, Shuning He, Kejia Zhang, Bo Qu, Chunling Chen, Kun Shi

Medical Visual Question Answering (VQA) is a challenging multi-modal task that has received wide attention from the computer vision and natural language processing research communities. Since most current medical VQA models focus on visual content and ignore the importance of text, this paper proposes a multi-view attention-based model (MuVAM) for medical visual question answering that integrates the high-level semantics of medical images on the basis of the text description. Firstly, different methods are utilized to extract the features of the image and the question for the two modalities of vision and text. Secondly, this paper proposes a multi-view attention mechanism that includes Image-to-Question (I2Q) attention and Word-to-Text (W2T) attention. Multi-view attention can correlate the question with the image and words in order to better analyze the question and produce an accurate answer. Thirdly, a composite loss is presented to predict the answer accurately after multi-modal feature fusion and to improve the similarity between visual and textual cross-modal features; it consists of a classification loss and an image-question complementary (IQC) loss. Finally, to address data errors and missing labels in the VQA-RAD dataset, we collaborated with medical experts to correct and complete this dataset and constructed an enhanced dataset, VQA-RADPh. Experiments on these two datasets show that MuVAM surpasses state-of-the-art methods.
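To make the composite loss concrete, here is a hedged sketch that combines a classification term with a cross-modal alignment term; the specific IQC formulation below (one minus the cosine similarity between visual and textual features) and the 0.5 weighting are assumptions for illustration, not the paper's exact definition.

```python
import torch
import torch.nn.functional as F

def composite_loss(answer_logits, answer_targets, visual_feat, text_feat, iqc_weight=0.5):
    # Standard answer-classification loss.
    cls_loss = F.cross_entropy(answer_logits, answer_targets)
    # Assumed IQC-style term: encourage visual and textual features to align.
    iqc_loss = 1.0 - F.cosine_similarity(visual_feat, text_feat, dim=-1).mean()
    return cls_loss + iqc_weight * iqc_loss

# Toy shapes: batch of 4, 10 candidate answers, 256-dimensional features.
logits = torch.randn(4, 10)
targets = torch.randint(0, 10, (4,))
visual = torch.randn(4, 256)
textual = torch.randn(4, 256)
print(composite_loss(logits, targets, visual, textual))
```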
