Radiology reports are unstructured documents in which radiologists transcribe imaging findings and the corresponding diagnoses, mixing clinical facts with negated and/or uncertain statements. Extracting pathologic findings and diagnoses from radiology reports is important for quality control, population health, and monitoring of disease progress. Existing works rely primarily either on rule-based systems or on fine-tuning transformer-based pre-trained models, but fail to take this factual and uncertain information into consideration and therefore generate false-positive outputs. In this work, we introduce three careful augmentation techniques that retain factual and critical information while generating augmentations for contrastive learning. We introduce RadBERT-CL, which fuses this information into BlueBert via a self-supervised contrastive loss. Our experiments on MIMIC-CXR show superior performance of RadBERT-CL when fine-tuned for multi-class, multi-label report classification. We illustrate that when little labeled data is available, RadBERT-CL outperforms conventional SOTA transformers (BERT/BlueBert) by significantly larger margins (6-11%). We also show that the representations learned by RadBERT-CL capture critical medical information in the latent space.
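To make the contrastive objective above concrete, here is a minimal NumPy sketch of an NT-Xent-style contrastive loss of the kind used in self-supervised setups such as this one; the temperature value and pairing scheme are illustrative assumptions, not RadBERT-CL's exact configuration.

```python
import numpy as np

def ntxent_loss(z1, z2, temperature=0.5):
    """NT-Xent-style contrastive loss over a batch of paired embeddings.

    z1[i] and z2[i] are embeddings of two augmented views of the same
    report (a positive pair); all other rows in the batch act as negatives.
    """
    z = np.concatenate([z1, z2], axis=0)               # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity
    sim = z @ z.T / temperature                        # (2N, 2N)
    n = z1.shape[0]
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    # the positive for row i is row (i + n) mod 2N
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

The loss is small when the two views of each report sit close together in the embedding space relative to all other reports in the batch, which is what pushes augmentation-invariant (i.e., factual) information into the representation.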
In classical cosmological analysis of large scale structure surveys with 2-pt functions, the parameter measurement precision is limited by several key degeneracies within the cosmology and astrophysics sectors. For cosmic shear, clustering amplitude $\sigma_8$ and matter density $\Omega_m$ roughly follow the $S_8=\sigma_8(\Omega_m/0.3)^{0.5}$ relation. In turn, $S_8$ is highly correlated with the intrinsic galaxy alignment amplitude $A_{\rm{IA}}$. For galaxy clustering, the bias $b_g$ is degenerate with both $\sigma_8$ and $\Omega_m$, as well as the stochasticity $r_g$. Moreover, the redshift evolution of IA and bias can cause further parameter confusion. A tomographic 2-pt probe combination can partially lift these degeneracies. In this work we demonstrate that a deep learning analysis of combined probes of weak gravitational lensing and galaxy clustering, which we call DeepLSS, can effectively break these degeneracies and yield significantly more precise constraints on $\sigma_8$, $\Omega_m$, $A_{\rm{IA}}$, $b_g$, $r_g$, and IA redshift evolution parameter $\eta_{\rm{IA}}$. The most significant gains are in the IA sector: the precision of $A_{\rm{IA}}$ is increased by approximately 8x and is almost perfectly decorrelated from $S_8$. Galaxy bias $b_g$ is improved by 1.5x, stochasticity $r_g$ by 3x, and the redshift evolution $\eta_{\rm{IA}}$ and $\eta_b$ by 1.6x. Breaking these degeneracies leads to a significant gain in constraining power for $\sigma_8$ and $\Omega_m$, with the figure of merit improved by 15x. We give an intuitive explanation for the origin of this information gain using sensitivity maps. These results indicate that the fully numerical, map-based forward modeling approach to cosmological inference with machine learning may play an important role in upcoming LSS surveys. We discuss perspectives and challenges in its practical deployment for a full survey analysis.
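The $S_8$ degeneracy can be made concrete in a few lines: distinct $(\sigma_8, \Omega_m)$ pairs lying on the same $S_8$ contour produce nearly identical 2-pt shear amplitudes, which is why the combination, rather than the individual parameters, is well constrained. The numerical values below are illustrative.

```python
def s8(sigma8, omega_m, alpha=0.5):
    """S_8 = sigma_8 * (Omega_m / 0.3)**alpha, the combination that the
    cosmic-shear 2-pt amplitude primarily constrains."""
    return sigma8 * (omega_m / 0.3) ** alpha

# Two different cosmologies on the same degeneracy line: raising Omega_m
# while lowering sigma_8 leaves S_8 (and hence the shear amplitude) fixed.
fiducial = s8(0.80, 0.30)
shifted = s8(0.80 / 1.1, 0.30 * 1.1 ** 2)
```

A 2-pt analysis alone cannot separate these two cosmologies; the map-based deep learning analysis described above extracts the non-Gaussian information that does.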
Variational Autoencoders (VAEs) have recently been highly successful at imputing and acquiring heterogeneous missing data and identifying outliers. However, within this specific application domain, existing VAE methods are restricted to a single layer of latent variables and strictly Gaussian posterior approximations. To address these limitations, we present HH-VAEM, a hierarchical VAE model for mixed-type incomplete data that uses Hamiltonian Monte Carlo with automatic hyper-parameter tuning for improved approximate inference. Our experiments show that HH-VAEM outperforms existing baselines on missing-data imputation, supervised learning, and outlier identification with missing features. Finally, we present a sampling-based approach for efficiently computing the information gain when missing features are to be acquired with HH-VAEM. Our experiments show that this sampling-based approach is superior to alternatives based on Gaussian approximations.
The decentralized stochastic multi-player multi-armed bandit (MP-MAB) problem, in which collision information is not available to the players, is studied in this paper. Building on the seminal work of Boursier and Perchet (2019), we propose error correction synchronization involving communication (EC-SIC), whose regret is shown to approach that of centralized stochastic MP-MAB with collision information. Recognizing that the communication phase without collision information corresponds to the Z-channel model in information theory, the proposed EC-SIC algorithm applies optimal error correction coding to the communication of reward statistics. A fixed message length, as opposed to the logarithmically growing one in Boursier and Perchet (2019), also plays a crucial role in controlling the communication loss. Experiments with practical Z-channel codes, such as the repetition, flip, and modified Hamming codes, demonstrate the superiority of EC-SIC on both synthetic and real-world datasets.
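To illustrate the Z-channel structure mentioned above: in a Z-channel one symbol is always received correctly while the other may flip, so for a repetition code the optimal decoder is an OR over the block rather than a majority vote. The sketch below is a generic simulation with an assumed flip probability and code length, not the exact codes used in EC-SIC.

```python
import random

def z_channel(bits, p):
    """Z-channel: a 0 is always received correctly; a 1 flips to 0
    with probability p (the only error the channel can make)."""
    return [0 if b == 1 and random.random() < p else b for b in bits]

def rep_encode(bits, n=5):
    """Repetition code: send each bit n times."""
    return [b for b in bits for _ in range(n)]

def rep_decode_z(received, n=5):
    """Decoder matched to the Z-channel: since 0 -> 1 errors never occur,
    ANY received 1 in a block proves a 1 was sent (OR, not majority)."""
    return [1 if any(received[i:i + n]) else 0
            for i in range(0, len(received), n)]
```

A block decodes incorrectly only when all $n$ copies of a 1 are erased, with probability $p^n$, whereas a majority-vote decoder matched to a symmetric channel would fail far more often under the same asymmetric noise.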
In this paper, we propose a deformable convolution-based generative adversarial network (DCNGAN) for perceptual quality enhancement of compressed videos. DCNGAN is also adaptive to the quantization parameters (QPs). Compared with optical flow, deformable convolutions align frames more effectively and efficiently. Rather than aligning frames in a pairwise manner, they can operate on multiple frames simultaneously, leveraging more temporal information, which benefits the perceptual quality of compressed videos while lowering computational complexity. Experimental results demonstrate that the proposed DCNGAN outperforms other state-of-the-art compressed-video quality enhancement algorithms.
In the early stages of human life, communication, seen as a process of social interaction, has always been the best way to reach consensus between parties. Understanding and credibility in this process are essential for mutual agreement to be validated. But how can such communication reach the masses? This is the main challenge when the goal is the dissemination of information and its approval. In this context, this study presents the ALT software, available on the web and developed from original readability metrics adapted to the Portuguese language, to reduce communication difficulties. The development of the software was motivated by Habermas's theory of communicative action, which uses a multidisciplinary style to measure the credibility of discourse in the communication channels used to build and maintain a safe and healthy relationship with the public.
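For illustration of the general shape such readability metrics take, here is the classic English Flesch Reading Ease formula; the coefficients below are the original English ones, shown purely as a sketch, and the ALT metrics are original Portuguese-language adaptations that are not reproduced here.

```python
def flesch_reading_ease(total_words, total_sentences, total_syllables):
    """Classic Flesch Reading Ease score (English coefficients).
    Higher scores indicate easier text; readability metrics of this
    family combine average sentence length and word length."""
    asl = total_words / total_sentences     # average sentence length
    asw = total_syllables / total_words     # average syllables per word
    return 206.835 - 1.015 * asl - 84.6 * asw
```

Adapting such a metric to another language means re-fitting the coefficients and the syllable-counting rules to that language's statistics, which is the kind of adaptation the ALT metrics perform for Portuguese.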
In this paper, we investigate physical layer security in reconfigurable intelligent surface (RIS)-aided cell-free networks. A weighted sum secrecy rate maximization problem is formulated by jointly optimizing the active beamforming (BF) at the base stations and the passive BF at the RISs. To handle this non-trivial problem, we adopt alternating optimization to decouple the original problem into two sub-problems, which are solved using semidefinite relaxation and successive convex approximation. To decrease the complexity of obtaining full channel state information (CSI), we extend the proposed framework to the case that requires only part of the RISs' CSI. This is achieved by deliberately discarding RISs that contribute little to the users' secrecy rates. Based on this, we formulate a mixed-integer non-linear programming problem and use linear conic relaxation to obtain the solutions. Finally, simulation results show that the proposed schemes obtain a higher secrecy rate than existing ones.
As AI-based systems increasingly impact many areas of our lives, auditing these systems for fairness is an increasingly high-stakes problem. Traditional group fairness metrics can miss discrimination against individuals and are difficult to apply after deployment. Counterfactual fairness describes an individualized notion of fairness but is even more challenging to evaluate after deployment. We present prediction sensitivity, an approach for continual audit of counterfactual fairness in deployed classifiers. For every prediction made by the deployed model, prediction sensitivity helps answer the question: would this prediction have been different if this individual had belonged to a different demographic group? Prediction sensitivity can leverage correlations between protected status and other features and does not require protected status information at prediction time. Our empirical results demonstrate that prediction sensitivity is effective for detecting violations of counterfactual fairness.
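The question prediction sensitivity asks can be sketched as a simple finite-difference probe on a toy model: how much does the model's score move when a feature correlated with protected status is perturbed? The paper's actual estimator is not reproduced here, and the model, weights, and feature index below are hypothetical.

```python
import numpy as np

def prediction_sensitivity(model, x, feature_idx, eps=1e-4):
    """Finite-difference probe: magnitude of the change in the model's
    score under a small perturbation of one input feature. A large value
    on a proxy for protected status flags a prediction that may violate
    counterfactual fairness."""
    x_plus = x.copy()
    x_plus[feature_idx] += eps
    return abs(model(x_plus) - model(x)) / eps

# Toy linear scorer; index 1 plays the role of a protected-status proxy.
w = np.array([0.5, 2.0, -0.3])
model = lambda x: float(w @ x)
```

Because the probe only perturbs inputs and reads off scores, it can run continually against a deployed model without needing the protected attribute itself at prediction time, relying instead on correlated features.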
We construct a blockchain-enabled social media network to mitigate the spread of misinformation. We derive the information transmission-time distribution by modeling misinformation transmission as double-spend attacks on the blockchain. This distribution is then incorporated into the SIR model, replacing the single rate parameter of the traditional SIR model. On a multi-community network, we then study the propagation of misinformation numerically and show that the proposed blockchain-enabled social media network outperforms the baseline network in flattening the curve of the infected population.
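For reference, the classical SIR dynamics whose single rate parameter the transmission-time distribution replaces can be sketched with a forward-Euler step; the parameter values below are illustrative, and the blockchain-derived distribution itself is not reproduced.

```python
def sir_step(s, i, r, beta, gamma, dt=0.1):
    """One forward-Euler step of the classical SIR model, with a single
    transmission-rate parameter beta (the quantity the blockchain-derived
    transmission-time distribution replaces)."""
    new_inf = beta * s * i * dt   # susceptible -> infected
    new_rec = gamma * i * dt      # infected -> recovered
    return s - new_inf, i + new_inf - new_rec, r + new_rec

def peak_infected(beta, gamma=0.1, steps=2000):
    """Simulate and return the peak infected fraction."""
    s, i, r = 0.99, 0.01, 0.0
    peak = i
    for _ in range(steps):
        s, i, r = sir_step(s, i, r, beta, gamma)
        peak = max(peak, i)
    return peak
```

Lowering the effective transmission rate, which is what the blockchain layer accomplishes by delaying or rejecting misinformation, lowers the peak of the infected curve, i.e., flattens it.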
Indirect Time-of-Flight (iToF) cameras are low-cost devices that provide depth images at an interactive frame rate. However, they are affected by several error sources, most prominently Multi-Path Interference (MPI), a key challenge for this technology. Common data-driven approaches tend to focus on directly estimating the output depth values, ignoring the underlying transient propagation of light in the scene. In this work, instead, we propose a very compact architecture that leverages the direct-global subdivision of transient information to remove MPI and to reconstruct the transient information itself. The proposed model reaches state-of-the-art MPI correction performance on both synthetic and real data and proves very competitive even at extreme noise levels; at the same time, it takes a step towards reconstructing transient information from multi-frequency iToF data.