Jin Peng Zhou

Correction with Backtracking Reduces Hallucination in Summarization

Oct 31, 2023
Zhenzhen Liu, Chao Wan, Varsha Kishore, Jin Peng Zhou, Minmin Chen, Kilian Q. Weinberger

Abstractive summarization aims at generating natural language summaries of a source document that are succinct while preserving the important elements. Despite recent advances, neural text summarization models are known to be susceptible to hallucination (or, more accurately, confabulation), that is, to producing summaries with details that are not grounded in the source document. In this paper, we introduce a simple yet efficient technique, CoBa, to reduce hallucination in abstractive summarization. The approach is based on two steps: hallucination detection and mitigation. We show that the former can be achieved by measuring simple statistics about conditional word probabilities and distance to context words. Further, we demonstrate that straightforward backtracking is surprisingly effective at mitigation. We thoroughly evaluate the proposed method against prior art on three benchmark datasets for text summarization. The results show that CoBa is effective and efficient at reducing hallucination, and offers great adaptability and flexibility.
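
As a rough illustration of the detect-and-backtrack loop, the sketch below decodes greedily and, whenever the chosen token's conditional probability falls below a cutoff, steps back one position and bans that continuation. The `next_token_logprobs` interface, the threshold, and the single-statistic detector are all assumptions made for illustration; CoBa's actual detector also uses distance to context words, and this is not the authors' implementation.

```python
import math
from typing import Callable, Dict, List, Sequence, Set, Tuple

# Hypothetical interface: given a token-id prefix, return (token_id, log_prob)
# pairs for the next position, sorted by descending probability.
NextTokenFn = Callable[[Sequence[int]], List[Tuple[int, float]]]


def decode_with_backtracking(
    next_token_logprobs: NextTokenFn,
    prompt_ids: Sequence[int],
    eos_id: int,
    max_len: int = 128,
    logprob_threshold: float = math.log(0.1),  # illustrative cutoff, not tuned
) -> List[int]:
    """Greedy decoding that treats low-confidence tokens as likely
    hallucinations and backtracks to try a different continuation."""
    output: List[int] = []
    banned: Dict[int, Set[int]] = {}  # position -> token ids rejected there

    while len(output) < max_len:
        step = len(output)
        candidates = next_token_logprobs(list(prompt_ids) + output)
        candidates = [(t, lp) for t, lp in candidates
                      if t not in banned.get(step, set())]

        if candidates and candidates[0][1] >= logprob_threshold:
            token = candidates[0][0]
            if token == eos_id:
                break
            output.append(token)
        elif output:
            # Detection fired: back up one position, ban the token emitted
            # there, and forget bans past the backtrack point.
            rejected = output.pop()
            banned.setdefault(len(output), set()).add(rejected)
            banned.pop(step, None)
        else:
            # Nothing left to backtrack to: accept the best remaining token.
            if not candidates:
                break
            output.append(candidates[0][0])

    return output
```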

Magnushammer: A Transformer-based Approach to Premise Selection

Mar 08, 2023
Maciej Mikuła, Szymon Antoniak, Szymon Tworkowski, Albert Qiaochu Jiang, Jin Peng Zhou, Christian Szegedy, Łukasz Kuciński, Piotr Miłoś, Yuhuai Wu

Premise selection is a fundamental problem in automated theorem proving. Previous works often use intricate symbolic methods, rely on domain knowledge, and require significant engineering effort to solve this task. In this work, we show that Magnushammer, a neural transformer-based approach, can outperform traditional symbolic systems by a large margin. Tested on the PISA benchmark, Magnushammer achieves a $59.5\%$ proof rate compared to the $38.3\%$ proof rate of Sledgehammer, the most mature and popular symbolic solver. Furthermore, by combining Magnushammer with a neural formal prover based on a language model, we significantly improve the previous state-of-the-art proof rate from $57.0\%$ to $71.0\%$.
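
At inference time, a retrieval-style premise selector of this kind can be summarized in a few lines. The sketch below is only an outline under assumptions: `encode` stands in for a trained transformer text encoder (e.g. one trained contrastively on proof-state/premise pairs), and training, batching, and the interaction with the prover are omitted; it is not the Magnushammer implementation.

```python
import numpy as np


def rank_premises(encode, proof_state: str, premises: list, top_k: int = 32):
    """Embed the current proof state and all candidate premises, then return
    the premises most similar to the state (cosine similarity)."""
    def normalize(v):
        v = np.asarray(v, dtype=np.float32)
        return v / (np.linalg.norm(v) + 1e-8)

    state_vec = normalize(encode(proof_state))
    premise_vecs = np.stack([normalize(encode(p)) for p in premises])

    # Cosine similarity between the proof state and every premise.
    scores = premise_vecs @ state_vec
    order = np.argsort(-scores)[:top_k]
    return [(premises[i], float(scores[i])) for i in order]
```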

Unsupervised Out-of-Distribution Detection with Diffusion Inpainting

Feb 20, 2023
Zhenzhen Liu, Jin Peng Zhou, Yufan Wang, Kilian Q. Weinberger

Unsupervised out-of-distribution (OOD) detection seeks to identify out-of-domain data by learning only from unlabeled in-domain data. We present a novel approach for this task, Lift, Map, Detect (LMD), that leverages recent advances in diffusion models. Diffusion models are a type of generative model; at their core, they learn an iterative denoising process that gradually maps a noisy image closer to the training manifold. LMD leverages this intuition for OOD detection. Specifically, LMD lifts an image off its original manifold by corrupting it, and maps it towards the in-domain manifold with a diffusion model. For an out-of-domain image, the mapped image would be far from its original manifold, and LMD would accordingly identify it as OOD. We show through extensive experiments that LMD achieves competitive performance across a broad variety of datasets.
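
The lift/map/detect loop can be written down compactly. In the sketch below, `inpaint` is a placeholder for a diffusion inpainting model trained on in-domain data, and the checkerboard masks and L1 reconstruction distance are illustrative choices rather than the paper's exact configuration.

```python
import numpy as np


def lmd_score(image: np.ndarray, inpaint, num_masks: int = 4) -> float:
    """Lift, Map, Detect (sketch): corrupt the image with masks ("lift"),
    reconstruct the masked regions with a diffusion inpainting model ("map"),
    and use the reconstruction distance as an OOD score ("detect").

    `inpaint(image, mask)` should fill in pixels where mask is True using a
    model trained on in-domain data. Larger scores suggest the image is OOD.
    """
    h, w = image.shape[:2]
    distances = []
    for i in range(num_masks):
        # Simple alternating checkerboard-style masks, purely illustrative.
        mask = np.zeros((h, w), dtype=bool)
        mask[(i % 2)::2, ((i // 2) % 2)::2] = True
        reconstruction = inpaint(image, mask)
        # Distance between the original and its in-domain reconstruction,
        # measured only on the inpainted region.
        diff = (image - reconstruction)[mask]
        distances.append(float(np.mean(np.abs(diff))))
    return float(np.mean(distances))
```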

Learned Systems Security

Jan 10, 2023
Roei Schuster, Jin Peng Zhou, Thorsten Eisenhofer, Paul Grubbs, Nicolas Papernot

A learned system uses machine learning (ML) internally to improve performance. We can expect such systems to be vulnerable to some adversarial-ML attacks. Often, the learned component is shared between mutually distrusting users or processes, much like microarchitectural resources such as caches, potentially giving rise to highly realistic attacker models. However, compared to attacks on other ML-based systems, attackers face a level of indirection, as they cannot interact directly with the learned model. Additionally, the difference between the attack surface of learned and non-learned versions of the same system is often subtle. These factors obfuscate the de facto risks that the incorporation of ML carries. We analyze the root causes of the potentially increased attack surface in learned systems and develop a framework for identifying vulnerabilities that stem from the use of ML. We apply our framework to a broad set of learned systems under active development. To empirically validate the many vulnerabilities surfaced by our framework, we choose three of them and implement and evaluate exploits against prominent learned-system instances. We show that the use of ML caused leakage of past queries in a database, enabled a poisoning attack that causes exponential memory blowup in an index structure and crashes it in seconds, and enabled index users to snoop on each other's key distributions by timing queries over their own keys. We find that adversarial ML is a universal threat against learned systems, point to open research gaps in our understanding of learned-systems security, and conclude by discussing mitigations, while noting that data leakage is inherent in systems whose learned component is shared between multiple parties.
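
As a toy illustration of the last attack (inferring co-tenants' key distributions by timing lookups on a shared learned index), the sketch below replaces latency with probe counts on a deliberately simplified linear "learned index". All names, numbers, and the index design are made up for illustration; the real learned indexes and exploits evaluated in the paper are considerably more involved.

```python
import numpy as np


class ToyLearnedIndex:
    """A deliberately simple 'learned index': a linear model predicts a key's
    position in a sorted array, and lookup scans outward from the prediction.
    The number of scanned positions stands in for query latency."""

    def __init__(self, keys):
        self.keys = np.sort(np.asarray(keys, dtype=np.float64))
        positions = np.arange(len(self.keys))
        self.a, self.b = np.polyfit(self.keys, positions, deg=1)

    def probes(self, key: float) -> int:
        predicted = int(round(self.a * key + self.b))
        predicted = min(max(predicted, 0), len(self.keys) - 1)
        true_pos = int(np.searchsorted(self.keys, key))
        return abs(true_pos - predicted) + 1


rng = np.random.default_rng(0)
victim_keys = rng.normal(0.8, 0.02, 5000)    # victim's keys cluster near 0.8
attacker_keys = rng.uniform(0.0, 1.0, 500)   # attacker only holds uniform keys
index = ToyLearnedIndex(np.concatenate([victim_keys, attacker_keys]))

# The attacker "times" lookups of its *own* keys only. Because the shared model
# was fit on everyone's keys, its error pattern (and hence lookup cost) across
# the key space depends on where the victim's keys concentrate.
for lo in np.arange(0.0, 1.0, 0.2):
    own = attacker_keys[(attacker_keys >= lo) & (attacker_keys < lo + 0.2)]
    cost = np.mean([index.probes(k) for k in own])
    print(f"own keys in [{lo:.1f}, {lo + 0.2:.1f}): avg probes = {cost:.0f}")
```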

Does Label Differential Privacy Prevent Label Inference Attacks?

Feb 25, 2022
Ruihan Wu, Jin Peng Zhou, Kilian Q. Weinberger, Chuan Guo

Label differential privacy (LDP) is a popular framework for training private ML models on datasets with public features and sensitive private labels. Despite its rigorous privacy guarantee, it has been observed that in practice LDP does not preclude label inference attacks (LIAs): models trained with LDP can be evaluated on the public training features to recover, with high accuracy, the very private labels that LDP was designed to protect. In this work, we argue that this phenomenon is not paradoxical and that LDP merely limits the advantage of an LIA adversary over predicting training labels with the Bayes classifier. At LDP $\epsilon=0$ this advantage is zero, and hence the optimal attack is to predict according to the Bayes classifier, independently of the training labels. Finally, we empirically demonstrate that our result closely captures the behavior of simulated attacks on both synthetic and real-world datasets.
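
The "advantage over the Bayes classifier" claim is easy to simulate in a toy setting. The sketch below is illustrative only: it uses binary randomized response as the $\epsilon$-label-DP mechanism, assumes the attacker knows P(y|x), and lets the attacker observe the privatized labels directly (a stand-in for whatever an LDP-trained model could memorize); it is not the paper's experimental setup. At $\epsilon = 0$ the printed advantage is exactly zero, and it grows as the privacy budget loosens.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy population: one binary feature with P(y=1 | x=0) = 0.3, P(y=1 | x=1) = 0.8.
n = 200_000
x = rng.integers(0, 2, n)
p1 = np.where(x == 1, 0.8, 0.3)            # P(y=1 | x), assumed known to everyone
y = (rng.random(n) < p1).astype(int)       # the sensitive training labels

bayes_pred = (p1 > 0.5).astype(int)        # Bayes classifier ignores the labels entirely
bayes_acc = float(np.mean(bayes_pred == y))
print(f"Bayes classifier accuracy: {bayes_acc:.3f}")

for eps in [0.0, 1.0, 2.0, 8.0]:
    # eps-label-DP via binary randomized response: keep each label w.p. e^eps / (1 + e^eps).
    keep = np.exp(eps) / (1.0 + np.exp(eps))
    z = np.where(rng.random(n) < keep, y, 1 - y)   # the privatized labels

    # Optimal label-inference attack given x and the privatized label z:
    # posterior odds = prior odds * likelihood ratio of z under y=1 vs y=0.
    prior_odds = p1 / (1.0 - p1)
    likelihood_ratio = np.where(z == 1, keep / (1.0 - keep), (1.0 - keep) / keep)
    attack_pred = (prior_odds * likelihood_ratio > 1.0).astype(int)

    advantage = float(np.mean(attack_pred == y)) - bayes_acc
    print(f"eps = {eps:>3}: attack advantage over Bayes = {advantage:+.3f}")
```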

Not My Deepfake: Towards Plausible Deniability for Machine-Generated Media

Aug 20, 2020
Baiwu Zhang, Jin Peng Zhou, Ilia Shumailov, Nicolas Papernot

Progress in generative modeling, especially with generative adversarial networks, has made it possible to efficiently synthesize and alter media at scale. Malicious individuals now rely on these machine-generated media, or deepfakes, to manipulate social discourse. In order to ensure media authenticity, existing research is focused on deepfake detection. Yet, the very nature of the frameworks used for generative modeling suggests that progress towards detecting deepfakes will enable more realistic deepfake generation. Therefore, it comes as no surprise that developers of generative models are under the scrutiny of stakeholders dealing with misinformation campaigns. As such, there is a clear need to develop tools that ensure the transparent use of generative modeling, while minimizing the harm caused by malicious applications. We propose a framework to provide developers of generative models with plausible deniability. We introduce two techniques to provide evidence that a model developer did not produce media that they are being accused of. The first optimizes over the source of entropy of each generative model to probabilistically attribute a deepfake to one of the models. The second involves cryptography to maintain a tamper-proof and publicly broadcast record of all legitimate uses of the model. We evaluate our approaches on the seminal example of face synthesis, demonstrating that our first approach achieves 97.62% attribution accuracy and is less sensitive to perturbations and adversarial examples. In cases where a machine learning approach is unable to provide plausible deniability, we find that involving cryptography, as done in our second approach, is required. We also discuss the ethical implications of our work, and highlight that a more meaningful legislative framework is required for a more transparent and ethical use of generative modeling.
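
The first technique (attribution via the model's source of entropy) can be sketched as a latent-code search over each candidate generator. Everything below is an assumption-laden outline rather than the paper's method: the generators are placeholder callables, and the latent dimensionality, optimizer, loss, and number of steps are arbitrary choices; the paper's probabilistic attribution rule and robustness analysis are omitted.

```python
import torch


def attribute_deepfake(image: torch.Tensor,
                       generators: dict,
                       latent_dim: int = 512,
                       steps: int = 500,
                       lr: float = 0.05) -> dict:
    """For each candidate generator, search its source of entropy (the latent
    code) for the input that best reproduces the questioned image. The model
    with the lowest reconstruction error is the most plausible source; a model
    that reconstructs the image poorly gives its developer evidence of
    deniability.

    `generators` maps a model name to a callable taking a (1, latent_dim)
    latent tensor and returning an image tensor shaped like `image` -- a
    placeholder for e.g. a GAN generator, not any specific released model.
    """
    losses = {}
    for name, generator in generators.items():
        z = torch.randn(1, latent_dim, requires_grad=True)
        optimizer = torch.optim.Adam([z], lr=lr)
        for _ in range(steps):
            optimizer.zero_grad()
            reconstruction = generator(z)
            loss = torch.mean((reconstruction - image) ** 2)
            loss.backward()
            optimizer.step()
        losses[name] = float(loss.detach())
    return losses  # lower is a better fit; compare across candidate models
```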
