Saeed Ranjbar Alvar

ArchBERT: Bi-Modal Understanding of Neural Architectures and Natural Languages

Oct 26, 2023
Mohammad Akbari, Saeed Ranjbar Alvar, Behnam Kamranian, Amin Banitalebi-Dehkordi, Yong Zhang

Building multi-modal language models has been a trend in recent years, where additional modalities such as image, video, and speech are jointly learned along with natural languages (i.e., textual information). Despite the success of these multi-modal language models across different modalities, there is no existing solution for neural network architectures and natural languages. Providing neural architectural information as a new modality allows us to offer fast architecture-2-text and text-2-architecture retrieval/generation services on the cloud with a single inference. Such a solution is valuable for helping beginner and intermediate ML users come up with better neural architectures or AutoML approaches with a simple text query. In this paper, we propose ArchBERT, a bi-modal model for joint learning and understanding of neural architectures and natural languages, which opens up new avenues for research in this area. We also introduce a pre-training strategy named Masked Architecture Modeling (MAM) for a more generalized joint learning. Moreover, we introduce and publicly release two new bi-modal datasets for training and validating our methods. ArchBERT's performance is verified through a set of numerical experiments on different downstream tasks such as architecture-oriented reasoning, question answering, and captioning (summarization). Datasets, code, and demos are available as supplementary materials.
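The MAM pre-training strategy is BERT-style masking applied to architectures. A toy sketch of the masking step is below; the real MAM operates on encoded architecture graphs, and the layer-token names and masking ratio here are illustrative placeholders only.

```python
import random

def mask_architecture(layers, mask_ratio=0.15, mask_token="[MASK]", seed=0):
    """Toy stand-in for Masked Architecture Modeling (MAM): randomly
    replace a fraction of an architecture's layer tokens with a mask
    token, keeping the originals as prediction targets."""
    rng = random.Random(seed)
    masked, targets = [], {}
    for i, layer in enumerate(layers):
        if rng.random() < mask_ratio:
            masked.append(mask_token)     # hide this layer from the model
            targets[i] = layer            # remember it as the target
        else:
            masked.append(layer)
    return masked, targets

# A hypothetical architecture written as a flat token sequence.
arch = ["conv3x3", "bn", "relu", "conv3x3", "bn", "relu", "maxpool", "fc"]
masked, targets = mask_architecture(arch, mask_ratio=0.3)
```

During pre-training, the model would be asked to predict each entry of `targets` from the masked sequence, analogously to masked language modeling.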

* CoNLL 2023 

Joint Image Compression and Denoising via Latent-Space Scalability

May 04, 2022
Saeed Ranjbar Alvar, Mateen Ulhaq, Hyomin Choi, Ivan V. Bajić

When it comes to image compression in digital cameras, denoising is traditionally performed prior to compression. However, there are applications where image noise may be necessary to demonstrate the trustworthiness of the image, such as court evidence and image forensics. This means that the noise itself needs to be coded, in addition to the clean image. In this paper, we present a learnt image compression framework where image denoising and compression are performed jointly. The latent space of the image codec is organized in a scalable manner such that the clean image can be decoded from a subset of the latent space at a lower rate, while the noisy image is decoded from the full latent space at a higher rate. The proposed codec is compared against established compression and denoising benchmarks, and the experiments reveal considerable bitrate savings of up to 80% compared to cascaded compression and denoising.
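The latent-space organization can be pictured as a channel partition: a base subset of latent channels is enough to decode the clean image, and the remaining enhancement channels add the noise. The channel counts and the mean-based "decoder" below are placeholders, not the paper's learned transforms.

```python
import numpy as np

# Toy illustration of latent-space scalability. The first `base_ch`
# channels form the base layer (lower rate, clean image); the full tensor
# adds the enhancement channels (higher rate, noisy image).
rng = np.random.default_rng(0)
latent = rng.standard_normal((192, 16, 16))   # (channels, height, width)
base_ch = 128

def decode(z):
    # Stand-in for a learned synthesis transform.
    return z.mean(axis=0)

clean_estimate = decode(latent[:base_ch])     # decoded from the latent subset
noisy_estimate = decode(latent)               # decoded from the full latent
```

A decoder that only needs the clean image can thus stop reading the bitstream after the base layer, which is where the rate savings over a cascaded pipeline come from.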

License Plate Privacy in Collaborative Visual Analysis of Traffic Scenes

May 03, 2022
Saeed Ranjbar Alvar, Korcan Uyanik, Ivan V. Bajić

Traffic scene analysis is important for emerging technologies such as smart traffic management and autonomous vehicles. However, such analysis also poses potential privacy threats. For example, a system that can recognize license plates may construct patterns of behavior of the corresponding vehicles' owners and use them for various illegal purposes. In this paper, we present a system that enables traffic scene analysis while at the same time preserving license plate privacy. The system is based on a multi-task model whose latent space is selectively compressed depending on the amount of information the specific features carry about the analysis tasks and the private information. The effectiveness of the proposed method is illustrated by experiments on the Cityscapes dataset, for which we also provide license plate annotations.

* Submitted to IEEE MIPR'22 

Membership Privacy Protection for Image Translation Models via Adversarial Knowledge Distillation

Mar 10, 2022
Saeed Ranjbar Alvar, Lanjun Wang, Jian Pei, Yong Zhang

Image-to-image translation models are shown to be vulnerable to the Membership Inference Attack (MIA), in which the adversary's goal is to identify whether a sample was used to train the model or not. With the number of applications based on image-to-image translation models increasing daily, it is crucial to protect the privacy of these models against MIAs. We propose adversarial knowledge distillation (AKD) as a defense method against MIAs for image-to-image translation models. The proposed method protects the privacy of the training samples by improving the generalizability of the model. We conduct experiments on image-to-image translation models and show that AKD achieves the state-of-the-art utility-privacy tradeoff by reducing the attack performance by up to 38.9% compared with the regularly trained model, at the cost of a slight drop in the quality of the generated output images. The experimental results also indicate that models trained by AKD generalize better than regularly trained models. Furthermore, compared with existing defense methods, the results show that at the same privacy protection level, image translation models trained by AKD generate outputs with higher quality, while at the same output quality, AKD enhances the privacy protection by over 30%.

Practical Noise Simulation for RGB Images

Jan 30, 2022
Saeed Ranjbar Alvar, Ivan V. Bajić

This document describes a noise generator that simulates realistic noise found in smartphone cameras. The generator simulates Poissonian-Gaussian noise whose parameters have been estimated on the Smartphone Image Denoising Dataset (SIDD). The generator is available online, and is currently being used in compressed-domain denoising exploration experiments in JPEG AI.
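A Poissonian-Gaussian model makes the noise variance an affine function of the clean signal. A minimal sketch is below; the parameter values `a` and `b` are illustrative placeholders, not the SIDD-estimated values used by the paper's generator.

```python
import numpy as np

def add_poisson_gaussian_noise(clean, a=0.01, b=1e-4, rng=None):
    """Simulate signal-dependent Poissonian-Gaussian noise.

    The per-pixel noise variance is modeled as a*clean + b, a common
    approximation of raw sensor noise for intensities in [0, 1].
    """
    rng = np.random.default_rng(rng)
    clean = np.clip(np.asarray(clean, dtype=float), 0.0, 1.0)
    sigma = np.sqrt(a * clean + b)    # signal-dependent standard deviation
    noisy = clean + sigma * rng.standard_normal(clean.shape)
    return np.clip(noisy, 0.0, 1.0)

# On a flat mid-gray patch, the empirical noise level should be close to
# sqrt(a*0.5 + b).
patch = np.full((256, 256), 0.5)
noisy = add_poisson_gaussian_noise(patch, rng=0)
```

Brighter pixels receive proportionally stronger noise, which is the key behavior distinguishing this model from plain additive Gaussian noise.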

* Reference paper for the code 

Pareto-Optimal Bit Allocation for Collaborative Intelligence

Sep 25, 2020
Saeed Ranjbar Alvar, Ivan V. Bajić

In recent studies, collaborative intelligence (CI) has emerged as a promising framework for deployment of Artificial Intelligence (AI)-based services on mobile/edge devices. In CI, the AI model (a deep neural network) is split between the edge and the cloud, and intermediate features are sent from the edge sub-model to the cloud sub-model. In this paper, we study bit allocation for feature coding in multi-stream CI systems. We model task distortion as a function of rate using convex surfaces similar to those found in rate-distortion theory. Using such models, we are able to provide closed-form bit allocation solutions for single-task systems and scalarized multi-task systems. Moreover, we provide an analytical characterization of the full Pareto set for 2-stream k-task systems, and bounds on the Pareto set for 3-stream 2-task systems. Analytical results are examined on a variety of DNN models from the literature to demonstrate wide applicability of the results.
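For intuition on what a closed-form allocation looks like, consider the textbook convex model D_i(R_i) = a_i·2^(-2·R_i) per stream. Minimizing total distortion under a total-rate constraint then gives the classical log-ratio solution, sketched below; the a_i values are made up for illustration, nonnegativity of the rates is ignored, and this is not the paper's multi-task formulation.

```python
import numpy as np

def optimal_allocation(a, total_rate):
    """Closed-form bit allocation for model distortions D_i = a_i * 2**(-2*R_i).

    Minimizing sum(D_i) subject to sum(R_i) = total_rate yields
    R_i = total_rate/n + 0.5*log2(a_i / geometric_mean(a)).
    """
    a = np.asarray(a, dtype=float)
    geo_mean = np.exp(np.log(a).mean())
    return total_rate / a.size + 0.5 * np.log2(a / geo_mean)

a = np.array([4.0, 1.0, 0.25])          # per-stream distortion scale factors
rates = optimal_allocation(a, total_rate=6.0)
distortions = a * 2.0 ** (-2.0 * rates)
# The rates sum to the budget, harder-to-code streams get more bits,
# and the resulting per-stream distortions are equalized.
```

With these numbers the allocation comes out to 3, 2, and 1 bits for the three streams, each achieving the same distortion of 0.0625.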

Bit Allocation for Multi-Task Collaborative Intelligence

Feb 14, 2020
Saeed Ranjbar Alvar, Ivan V. Bajić

Recent studies have shown that collaborative intelligence (CI) is a promising framework for deployment of Artificial Intelligence (AI)-based services on mobile devices. In CI, a deep neural network is split between the mobile device and the cloud. Deep features obtained at the mobile device are compressed and transferred to the cloud to complete the inference. So far, the methods in the literature have focused on transferring a single deep feature tensor from the mobile device to the cloud. Such methods are not applicable to some recent, high-performance networks with multiple branches and skip connections. In this paper, we propose the first bit allocation method for multi-stream, multi-task CI. We first establish a model for the joint distortion of the multiple tasks as a function of the bit rates assigned to different deep feature tensors. Then, using the proposed model, we solve the rate-distortion optimization problem under a total rate constraint to obtain the best rate allocation among the tensors to be transferred. Experimental results illustrate the efficacy of the proposed scheme compared to several alternative bit allocation methods.

* Accepted for publication at ICASSP'20 

FDDB-360: Face Detection in 360-degree Fisheye Images

Feb 07, 2019
Jianglin Fu, Saeed Ranjbar Alvar, Ivan V. Bajic, Rodney G. Vaughan

360-degree cameras offer the possibility to cover a large area, for example an entire room, without using multiple distributed vision sensors. However, geometric distortions introduced by their lenses make computer vision problems more challenging. In this paper we address face detection in 360-degree fisheye images. We show how a face detector trained on regular images can be re-trained for this purpose, and we also provide a 360-degree fisheye-like version of the popular FDDB face detection dataset, which we call FDDB-360.

MV-YOLO: Motion Vector-aided Tracking by Semantic Object Detection

Jun 15, 2018
Saeed Ranjbar Alvar, Ivan V. Bajić

Object tracking is the cornerstone of many visual analytics systems. While considerable progress has been made in this area in recent years, robust, efficient, and accurate tracking in real-world video remains a challenge. In this paper, we present a hybrid tracker that leverages motion information from the compressed video stream and a general-purpose semantic object detector acting on decoded frames to construct a fast and efficient tracking engine. The proposed approach is compared with several well-known recent trackers on the OTB tracking dataset. The results indicate advantages of the proposed method in terms of speed and/or accuracy. Other desirable features of the proposed method are its simplicity and deployment efficiency, which stem from the fact that it reuses resources and information that may already exist in the system for other reasons.
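The compressed-stream motion information can seed the tracker cheaply: shift the previous bounding box by a robust summary of the motion vectors inside it, then let the detector refine the result. The sketch below is a simplified stand-in for that approximate region-of-interest step, not the paper's exact procedure; the uniform motion field is a made-up example.

```python
import numpy as np

def propagate_box(box, mv_field):
    """Shift a bounding box by the median motion vector inside it.

    `box` is (x, y, w, h) in pixels; `mv_field` is an (H, W, 2) array of
    per-pixel motion vectors, as could be parsed from a compressed
    bitstream. The median is robust to outlier vectors from occlusions
    or background inside the box.
    """
    x, y, w, h = box
    vectors = mv_field[y:y + h, x:x + w].reshape(-1, 2)
    dx, dy = np.median(vectors, axis=0)
    return (int(round(x + dx)), int(round(y + dy)), w, h)

# A uniform field moving everything 5 px right and 3 px down.
mv = np.zeros((120, 160, 2))
mv[..., 0], mv[..., 1] = 5.0, 3.0
new_box = propagate_box((40, 30, 32, 32), mv)   # shifts to (45, 33, 32, 32)
```

Since the motion vectors are already present in the bitstream, this step costs almost nothing, which is where the method's speed advantage comes from.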

Can you find a face in a HEVC bitstream?

Feb 23, 2018
Saeed Ranjbar Alvar, Hyomin Choi, Ivan V. Bajic

Finding faces in images is one of the most important tasks in computer vision, with applications in biometrics, surveillance, human-computer interaction, and other areas. In our earlier work, we demonstrated that it is possible to tell whether or not an image contains a face by only examining the HEVC syntax, without fully reconstructing the image. In the present work, we move further in this direction by showing how to localize faces in HEVC-coded images, without full reconstruction. We also demonstrate the benefits that such an approach can have in privacy-friendly face localization.
