Ayan Biswas

Exploring Music Genre Classification: Algorithm Analysis and Deployment Architecture

Sep 14, 2023
Ayan Biswas, Supriya Dhabal, Palaniandavar Venkateswaran

Music genre classification has become increasingly important with the rise of streaming applications, where users no longer rely solely on artist names and song titles to find music. Classifying music correctly remains difficult because the metadata attached to a track, such as region, artist, or album, is highly variable. This paper presents a study on music genre classification using a combination of Digital Signal Processing (DSP) and Deep Learning (DL) techniques. A novel algorithm is proposed that combines DSP and DL methods to extract relevant features from audio signals and classify them into genres. The algorithm was tested on the GTZAN dataset and achieved high accuracy. An end-to-end deployment architecture is also proposed for integration into music-related applications. The performance of the algorithm is analyzed and future directions for improvement are discussed. Together, the proposed algorithm and deployment architecture demonstrate a promising approach to music genre classification.
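
As a rough illustration of the DSP-plus-DL pipeline described above, the sketch below extracts MFCC summary features with librosa and feeds them to a small PyTorch classifier over the ten GTZAN genre labels. The choice of MFCCs, the network size, and the example file path are assumptions for illustration; the paper's actual features and model are not specified here.

```python
# Hypothetical sketch: MFCC summary features (DSP) feeding a small
# fully connected classifier (DL). Feature choice, network size, and the
# example path are assumptions for illustration only.
import numpy as np
import librosa
import torch
import torch.nn as nn

GENRES = ["blues", "classical", "country", "disco", "hiphop",
          "jazz", "metal", "pop", "reggae", "rock"]  # GTZAN labels

def extract_features(path, n_mfcc=20):
    """DSP step: load a ~30 s clip and summarize its MFCCs over time."""
    y, sr = librosa.load(path, sr=22050, duration=30.0)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    # Per-coefficient mean and std over frames -> fixed-length vector.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

class GenreNet(nn.Module):
    """DL step: a small classifier over the extracted features."""
    def __init__(self, in_dim=40, n_classes=len(GENRES)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):
        return self.net(x)

# Usage (path is a placeholder for a GTZAN clip; model is untrained here):
# feats = extract_features("gtzan/blues/blues.00000.wav")
# logits = GenreNet()(torch.tensor(feats, dtype=torch.float32).unsqueeze(0))
# print(GENRES[logits.argmax(dim=1).item()])
```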


Dynamic Data Assimilation of MPAS-O and the Global Drifter Dataset

Jan 11, 2023
Derek DeSantis, Ayan Biswas, Earl Lawrence, Phillip Wolfram

In this study, we propose a new method for combining in situ buoy measurements with Earth system models (ESMs) to improve the accuracy of ocean temperature predictions. The technique utilizes the dynamics and modes identified in ESMs to improve the accuracy of buoy measurements while still preserving features such as seasonality. We apply our method to assimilate the Model for Prediction Across Scales Ocean component (MPAS-O) with the Global Drifter Program's in situ ocean buoy dataset, and show that it corrects errors in localized temperature predictions made by MPAS-O. We demonstrate that our approach improves accuracy compared to other interpolation and data assimilation methods.
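
A minimal sketch of the mode-based correction idea follows, assuming the general recipe of (1) learning leading spatial modes from model snapshots via SVD and (2) fitting their coefficients to the buoy misfit with regularized least squares. It illustrates the concept only and is not the paper's algorithm; the data below are synthetic stand-ins for MPAS-O output and drifter observations.

```python
# Minimal sketch, assuming leading spatial modes learned from model
# snapshots and fit to the buoy misfit. Not the paper's algorithm.
import numpy as np

def leading_modes(snapshots, n_modes=10):
    """snapshots: (n_times, n_grid) model fields -> (n_modes, n_grid)."""
    anomalies = snapshots - snapshots.mean(axis=0)
    _, _, vt = np.linalg.svd(anomalies, full_matrices=False)
    return vt[:n_modes]

def assimilate(model_field, modes, obs_idx, obs_values, ridge=1e-3):
    """Nudge model_field toward buoy observations at grid indices obs_idx."""
    residual = obs_values - model_field[obs_idx]   # model-observation misfit
    A = modes[:, obs_idx].T                        # (n_obs, n_modes)
    # Ridge-regularized least squares for the mode coefficients.
    coeffs = np.linalg.solve(A.T @ A + ridge * np.eye(A.shape[1]),
                             A.T @ residual)
    return model_field + modes.T @ coeffs          # corrected field

# Usage with synthetic stand-ins for model output and drifter buoys:
rng = np.random.default_rng(0)
snaps = rng.normal(size=(100, 500))     # 100 snapshots on 500 grid cells
modes = leading_modes(snaps, n_modes=5)
field = snaps[-1]
obs_idx = rng.choice(500, size=30, replace=False)
obs = field[obs_idx] + rng.normal(scale=0.1, size=30)
corrected = assimilate(field, modes, obs_idx, obs)
```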


IDLat: An Importance-Driven Latent Generation Method for Scientific Data

Aug 05, 2022
Jingyi Shen, Haoyu Li, Jiayi Xu, Ayan Biswas, Han-Wei Shen

Deep-learning-based latent representations have been widely used for numerous scientific visualization applications such as isosurface similarity analysis, volume rendering, flow field synthesis, and data reduction, to name a few. However, existing latent representations are mostly generated from raw data in an unsupervised manner, which makes it difficult to incorporate domain interest to control the size of the latent representations and the quality of the reconstructed data. In this paper, we present a novel importance-driven latent representation to facilitate domain-interest-guided scientific data visualization and analysis. We utilize spatial importance maps to represent various scientific interests and take them as the input to a feature transformation network to guide latent generation. We further reduce the latent size with a lossless entropy encoding algorithm trained together with the autoencoder, improving storage and memory efficiency. We qualitatively and quantitatively evaluate the effectiveness and efficiency of latent representations generated by our method on data from multiple scientific visualization applications.
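
The sketch below captures the core idea of importance-guided latent generation in PyTorch: the spatial importance map enters the encoder as an extra input channel and weights the reconstruction loss. The network layout and loss are illustrative assumptions; the paper's feature transformation network and learned entropy coder are not reproduced here.

```python
# Illustrative autoencoder whose input includes a spatial importance map
# and whose reconstruction loss is weighted by that map. Layer sizes are
# assumptions; the paper's exact architecture is not reproduced.
import torch
import torch.nn as nn

class ImportanceGuidedAE(nn.Module):
    def __init__(self, latent_channels=8):
        super().__init__()
        # Input: 2 channels = scalar field + importance map (3D volumes).
        self.encoder = nn.Sequential(
            nn.Conv3d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, latent_channels, 3, stride=2, padding=1),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(latent_channels, 16, 4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, volume, importance):
        z = self.encoder(torch.cat([volume, importance], dim=1))
        return self.decoder(z), z

def weighted_mse(recon, target, importance, eps=1e-6):
    """Importance-weighted reconstruction loss."""
    w = importance + eps
    return (w * (recon - target) ** 2).sum() / w.sum()

# Usage on a toy 32^3 volume:
vol = torch.rand(1, 1, 32, 32, 32)
imp = torch.rand(1, 1, 32, 32, 32)   # domain-interest map in [0, 1]
model = ImportanceGuidedAE()
recon, latent = model(vol, imp)
loss = weighted_mse(recon, vol, imp)
```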

* 11 pages, 12 figures, Proc. IEEE VIS 2022 

Relationship-aware Multivariate Sampling Strategy for Scientific Simulation Data

Aug 31, 2020
Subhashis Hazarika, Ayan Biswas, Phillip J. Wolfram, Earl Lawrence, Nathan Urban

With the increasing computational power of current supercomputers, the size of data produced by scientific simulations is growing rapidly. To reduce the storage footprint and facilitate scalable post-hoc analyses of such scientific data sets, various data reduction and summarization methods have been proposed over the years. Different flavors of sampling algorithms exist that subsample high-resolution scientific data while preserving the important data properties required for subsequent analyses. However, most of these sampling algorithms are designed for univariate data and cater to post-hoc analyses of single variables. In this work, we propose a multivariate sampling strategy that preserves the original variable relationships and enables different multivariate analyses directly on the sampled data. Our strategy utilizes principal component analysis to capture the variance of multivariate data and can be built on top of any existing state-of-the-art sampling algorithm for single variables. In addition, we propose variants of different data partitioning schemes (regular and irregular) to efficiently model the local multivariate relationships. Using two real-world multivariate data sets, we demonstrate the efficacy of our proposed multivariate sampling strategy with respect to its data reduction capabilities as well as the ease of performing efficient post-hoc multivariate analyses.
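
One hedged way to realize such a PCA-driven strategy is sketched below: score each point in a spatial block by how poorly the block's leading principal components reconstruct it, then sample points with probability proportional to that score. The residual-based score and the block handling are illustrative assumptions; the paper's partitioning schemes and underlying univariate sampler are not reproduced.

```python
# Sketch: importance sampling of multivariate points driven by PCA
# reconstruction residuals within a block. Illustrative only.
import numpy as np

def pca_residual_scores(block_values, n_components=1):
    """block_values: (n_points, n_vars); returns per-point residual norms."""
    centered = block_values - block_values.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]                    # leading principal components
    projected = centered @ basis.T @ basis       # rank-k reconstruction
    return np.linalg.norm(centered - projected, axis=1)

def sample_block(block_values, n_samples, rng):
    """Keep points with probability proportional to their residual score."""
    scores = pca_residual_scores(block_values) + 1e-12
    probs = scores / scores.sum()
    return rng.choice(len(block_values), size=n_samples,
                      replace=False, p=probs)

# Usage: a block of 1000 points with 3 variables, two of them correlated.
rng = np.random.default_rng(1)
x = rng.normal(size=1000)
data = np.stack([x,
                 2 * x + rng.normal(scale=0.1, size=1000),
                 rng.normal(size=1000)], axis=1)
kept = sample_block(data, n_samples=100, rng=rng)
```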

* To appear as IEEE Vis 2020 Shortpaper 

Deep Learning-Based Feature-Aware Data Modeling for Complex Physics Simulations

Dec 08, 2019
Qun Liu, Subhashis Hazarika, John M. Patchett, James Paul Ahrens, Ayan Biswas

Data modeling and reduction for in situ analysis is important. Feature-driven methods for in situ data analysis and reduction are a priority for future exascale machines, as there are currently very few such methods. We investigate a deep-learning-based workflow that targets in situ data processing using autoencoders. We propose a residual autoencoder integrated with a Residual in Residual Dense Block (RRDB) to obtain better performance. Our proposed framework compresses each 3D volume timestep of our test data from 2.1 MB to 66 KB.
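
To make the architecture concrete, the following PyTorch sketch shows a simplified residual autoencoder built around an RRDB. Channel counts, depth, and the residual scaling factor are illustrative guesses rather than the paper's configuration.

```python
# Simplified residual autoencoder with a Residual-in-Residual Dense Block
# (RRDB) for 3D volumes. Sizes and scaling factors are assumptions.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Dense block: each conv sees all previously produced feature maps."""
    def __init__(self, ch=32, growth=16):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv3d(ch + i * growth, growth, 3, padding=1) for i in range(3)]
        )
        self.fuse = nn.Conv3d(ch + 3 * growth, ch, 3, padding=1)
        self.act = nn.LeakyReLU(0.2, inplace=True)

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(self.act(conv(torch.cat(feats, dim=1))))
        return x + 0.2 * self.fuse(torch.cat(feats, dim=1))  # local residual

class RRDB(nn.Module):
    """Residual in Residual Dense Block: stacked dense blocks plus a skip."""
    def __init__(self, ch=32):
        super().__init__()
        self.blocks = nn.Sequential(DenseBlock(ch), DenseBlock(ch))

    def forward(self, x):
        return x + 0.2 * self.blocks(x)   # outer residual connection

class RRDBAutoencoder(nn.Module):
    def __init__(self, ch=32, latent=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, ch, 3, stride=2, padding=1), RRDB(ch),
            nn.Conv3d(ch, latent, 3, stride=2, padding=1),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(latent, ch, 4, stride=2, padding=1), RRDB(ch),
            nn.ConvTranspose3d(ch, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# A 64^3 timestep is squeezed through a much smaller latent volume:
vol = torch.rand(1, 1, 64, 64, 64)
out = RRDBAutoencoder()(vol)   # reconstruction, same shape as the input
```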

* Accepted as a research poster at the International Conference for High Performance Computing, Networking, Storage, and Analysis (SC19) 

Exploiting Inherent Error-Resiliency of Neuromorphic Computing to achieve Extreme Energy-Efficiency through Mixed-Signal Neurons

Jun 13, 2018
Baibhab Chatterjee, Priyadarshini Panda, Shovan Maity, Ayan Biswas, Kaushik Roy, Shreyas Sen

Neuromorphic computing, inspired by the brain, promises extreme efficiency for certain classes of learning tasks, such as classification and pattern recognition. The performance and power consumption of neuromorphic computing depend heavily on the choice of the neuron architecture. Digital neurons (Dig-N) are conventionally known to be accurate and efficient at high speed, but they suffer from high leakage currents due to the large number of transistors in a large design. On the other hand, analog/mixed-signal neurons are prone to noise, variability, and mismatch, but can lead to extremely low-power designs. In this work, we analyze, compare, and contrast existing neuron architectures with a proposed mixed-signal neuron (MS-N) in terms of performance, power, and noise, thereby demonstrating the applicability of the proposed mixed-signal neuron for achieving extreme energy efficiency in neuromorphic computing. The proposed MS-N is implemented in 65 nm CMOS technology and exhibits more than 100x better energy efficiency across all frequencies than two traditional digital neurons synthesized in the same technology node. We also demonstrate that the inherent error resiliency of a fully connected or even convolutional neural network (CNN) can tolerate the noise as well as the manufacturing non-idealities of the MS-N up to a certain degree. Notably, a system-level implementation on the MNIST dataset exhibits a worst-case increase in classification error of 2.1% when the integrated noise power in the bandwidth is approximately 0.1 μV², along with ±3σ variation and mismatch introduced in the transistor parameters, for the proposed neuron with 8-bit precision.
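
To make the system-level resiliency check concrete, the sketch below injects Gaussian noise into the activations of a small MNIST-shaped classifier and compares its accuracy with and without noise. The noise model, network, and random data are simplified software stand-ins for the paper's circuit-level analysis of the MS-N.

```python
# Sketch of an activation-noise resiliency check: compare a noiseless and a
# noisy copy of the same classifier. Simplified stand-in, not the paper's
# circuit noise or variation/mismatch model.
import torch
import torch.nn as nn

class NoisyReLU(nn.Module):
    """ReLU followed by additive Gaussian noise on the activation."""
    def __init__(self, sigma=0.0):
        super().__init__()
        self.sigma = sigma

    def forward(self, x):
        x = torch.relu(x)
        if self.sigma > 0:
            x = x + self.sigma * torch.randn_like(x)
        return x

def make_mlp(sigma):
    # 784-input MLP standing in for the fully connected network.
    return nn.Sequential(
        nn.Flatten(),
        nn.Linear(784, 256), NoisyReLU(sigma),
        nn.Linear(256, 10),
    )

@torch.no_grad()
def accuracy(model, images, labels):
    preds = model(images).argmax(dim=1)
    return (preds == labels).float().mean().item()

# Usage: noiseless vs. noisy copies sharing the same weights. Weights are
# untrained and the data are random here, so this only shows the mechanism.
clean, noisy = make_mlp(0.0), make_mlp(0.3)
noisy.load_state_dict(clean.state_dict())
images, labels = torch.rand(128, 1, 28, 28), torch.randint(0, 10, (128,))
print(accuracy(clean, images, labels), accuracy(noisy, images, labels))
```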
