Wenwen Zhou

AutoPCF: Efficient Product Carbon Footprint Accounting with Large Language Models

Aug 11, 2023
Zhu Deng, Jinjie Liu, Biao Luo, Can Yuan, Qingrun Yang, Lei Xiao, Wenwen Zhou, Zhu Liu

The product carbon footprint (PCF) is crucial for decarbonizing the supply chain, as it measures the direct and indirect greenhouse gas emissions caused by all activities during the product's life cycle. However, PCF accounting often requires expert knowledge and significant time to construct life cycle models. In this study, we test and compare the emergent ability of five large language models (LLMs) to model the 'cradle-to-gate' life cycles of products and generate the inventory data of inputs and outputs, revealing their limitations as a generalized PCF knowledge database. Building on LLMs, we propose an automatic AI-driven PCF accounting framework, AutoPCF, which additionally applies deep learning algorithms to automatically match calculation parameters and ultimately calculates the PCF. Estimating the carbon footprints of three case products with the AutoPCF framework demonstrates its potential for automatic PCF modeling and estimation, reducing modeling time from days to minutes.
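
To make the pipeline concrete, here is a minimal Python sketch of the accounting loop described above: an LLM is prompted for a cradle-to-gate inventory, each flow is matched to an emission factor, and the factor-weighted amounts are summed. The `query_llm` stub and the emission-factor table are illustrative assumptions, not the paper's actual prompts, models, or matching algorithm.

```python
import json

# Illustrative emission factors in kg CO2e per unit; real accounting would
# draw these from a life cycle inventory database.
EMISSION_FACTORS = {
    "electricity_kwh": 0.58,
    "steel_kg": 1.85,
    "transport_tkm": 0.11,
}

def query_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; returns a fixed example inventory."""
    return json.dumps([
        {"flow": "electricity", "amount": 2.0, "unit": "kwh"},
        {"flow": "steel", "amount": 0.5, "unit": "kg"},
    ])

def auto_pcf(product: str) -> float:
    """Prompt for a cradle-to-gate inventory, then sum factor * amount."""
    prompt = (
        f"List the cradle-to-gate inputs for one unit of {product} as JSON: "
        '[{"flow": ..., "amount": ..., "unit": ...}]'
    )
    inventory = json.loads(query_llm(prompt))
    total = 0.0
    for item in inventory:
        key = f'{item["flow"]}_{item["unit"]}'
        # AutoPCF matches parameters with a learned model; a dictionary
        # lookup stands in for that step here.
        total += EMISSION_FACTORS.get(key, 0.0) * item["amount"]
    return total  # kg CO2e per unit of product

print(auto_pcf("office chair"))  # 0.58 * 2.0 + 1.85 * 0.5 = 2.085
```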

IB-UQ: Information bottleneck based uncertainty quantification for neural function regression and neural operator learning

Feb 07, 2023
Ling Guo, Hao Wu, Wenwen Zhou, Tao Zhou

In this paper, a novel framework is established for uncertainty quantification via information bottleneck (IB-UQ) for scientific machine learning tasks, including deep neural network (DNN) regression and neural operator learning (DeepONet). Specifically, we first employ the General Incompressible-Flow Networks (GIN) model to learn a "wide" distribution from noisy observation data. Then, following the information bottleneck objective, we learn a stochastic map from the input to a latent representation that can be used to predict the output. A tractable variational bound on the IB objective is constructed with a normalizing flow reparameterization, so the objective can be optimized with stochastic gradient descent. By explicitly modeling the representation variables, IB-UQ provides both the mean and the variance of the label prediction. Compared to most DNN regression methods and the deterministic DeepONet, the proposed model can be trained on noisy data and provides accurate predictions with reliable uncertainty estimates on unseen noisy data. We demonstrate the capability of the IB-UQ framework on several representative examples, including discontinuous function regression, real-world dataset regression, and learning nonlinear operators for a diffusion-reaction partial differential equation.
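
As a rough illustration of the stochastic-map idea, the PyTorch sketch below trains a latent encoder with a KL bottleneck term. Note that it substitutes the standard Gaussian reparameterization trick for the paper's GIN model and normalizing-flow reparameterization, and all layer sizes are arbitrary.

```python
import torch
import torch.nn as nn

class IBRegressor(nn.Module):
    """Variational IB regressor: x -> stochastic z -> (mean, var) of y."""
    def __init__(self, x_dim: int, z_dim: int = 8, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(x_dim, hidden), nn.Tanh(),
                                     nn.Linear(hidden, 2 * z_dim))  # mu, logvar of z
        self.decoder = nn.Sequential(nn.Linear(z_dim, hidden), nn.Tanh(),
                                     nn.Linear(hidden, 2))          # mean, logvar of y

    def forward(self, x):
        mu, logvar = self.encoder(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        y_mu, y_logvar = self.decoder(z).chunk(2, dim=-1)
        # KL(q(z|x) || N(0, I)) is the compression term of the IB objective.
        kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum(-1).mean()
        return y_mu, y_logvar, kl

def ib_loss(y, y_mu, y_logvar, kl, beta=1e-3):
    # Gaussian negative log-likelihood (prediction) + beta * KL (compression).
    nll = 0.5 * (y_logvar + (y - y_mu).pow(2) / y_logvar.exp()).mean()
    return nll + beta * kl

# Usage: one gradient step on noisy 1D data; averaging y_mu and y_logvar over
# repeated samples of z yields the predictive mean and uncertainty.
model = IBRegressor(x_dim=1)
x = torch.linspace(-1, 1, 128).unsqueeze(-1)
y = torch.sin(3 * x) + 0.1 * torch.randn_like(x)
y_mu, y_logvar, kl = model(x)
ib_loss(y, y_mu, y_logvar, kl).backward()
```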

* 22 pages, 36 figures 

RESUS: Warm-Up Cold Users via Meta-Learning Residual User Preferences in CTR Prediction

Oct 28, 2022
Yanyan Shen, Lifan Zhao, Weiyu Cheng, Zibin Zhang, Wenwen Zhou, Kangyi Lin

Click-Through Rate (CTR) prediction for cold users is a challenging task in recommender systems. Recent work has resorted to meta-learning to tackle the cold-user challenge, either performing few-shot user representation learning or adopting optimization-based meta-learning. However, existing methods suffer from information loss or an inefficient optimization process, and they fail to explicitly model global user preference knowledge, which is crucial for complementing the sparse and insufficient preference information of cold users. In this paper, we propose a novel and efficient approach named RESUS, which decouples the learning of global preference knowledge contributed by collective users from the learning of residual preferences for individual users. Specifically, we employ a shared predictor to infer basis user preferences, which acquires global preference knowledge from the interactions of different users. Meanwhile, we develop two efficient algorithms based on nearest neighbor and ridge regression predictors, which infer residual user preferences by learning quickly from a few user-specific interactions. Extensive experiments on three public datasets demonstrate that RESUS is efficient and effective in improving CTR prediction accuracy for cold users compared with various state-of-the-art methods.
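
The ridge-regression variant lends itself to a compact closed-form sketch: a shared predictor supplies base CTR scores, and the residual on a cold user's few labeled interactions is fit in one linear solve. The names and toy features below are illustrative rather than the authors' API; their actual implementation is in the repository linked below.

```python
import numpy as np

def residual_ridge(phi, y, base_logits, lam=1.0):
    """Fit w minimizing ||phi @ w - (y - base_logits)||^2 + lam * ||w||^2."""
    residual = y - base_logits                      # what the shared predictor missed
    A = phi.T @ phi + lam * np.eye(phi.shape[1])
    return np.linalg.solve(A, phi.T @ residual)     # closed form, no per-user SGD

# Usage: refine predictions for a cold user with only 5 labeled interactions.
rng = np.random.default_rng(0)
phi_support = rng.normal(size=(5, 16))                 # features of observed items
y_support = rng.integers(0, 2, size=5).astype(float)   # click labels
base_support = rng.uniform(0.2, 0.8, size=5)           # shared predictor's scores
w = residual_ridge(phi_support, y_support, base_support)

phi_query = rng.normal(size=(3, 16))                   # candidate items to rank
base_query = rng.uniform(0.2, 0.8, size=3)
scores = base_query + phi_query @ w                    # basis + residual preference
```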

* Accepted by TOIS 2022. Code is available at https://github.com/MogicianXD/RESUS 

A Novel Fast Exact Subproblem Solver for Stochastic Quasi-Newton Cubic Regularized Optimization

Apr 19, 2022
Jarad Forristal, Joshua Griffin, Wenwen Zhou, Seyedalireza Yektamaram

In this work we describe an Adaptive Regularization using Cubics (ARC) method for large-scale nonconvex unconstrained optimization using Limited-memory Quasi-Newton (LQN) matrices. ARC methods are a relatively new family of optimization strategies that utilize a cubic-regularization (CR) term in place of trust regions and line searches. LQN methods offer a large-scale alternative to explicit second-order information by taking the same inputs as popular first-order methods such as stochastic gradient descent (SGD). Solving the CR subproblem exactly requires Newton's method, yet by exploiting the internal structure of LQN matrices we are able to find exact solutions to the CR subproblem in a matrix-free manner, providing large speedups and scaling to modern problem sizes. Additionally, we expand upon previous ARC work and explicitly incorporate first-order updates into our algorithm. We provide experimental results for the SR1 update, which show substantial speed-ups and competitive performance compared to Adam and other second-order optimizers on deep neural networks (DNNs). We find that our new approach, ARCLQN, is competitive with modern optimizers while requiring minimal tuning, a common pain point for second-order methods.
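
For intuition, the numpy sketch below solves the CR subproblem min_s g^T s + (1/2) s^T B s + (sigma/3) ||s||^3 exactly for a small dense B by bisecting the secular equation ||s(lam)|| = lam / sigma, where (B + lam I) s(lam) = -g. The eigendecomposition is a naive stand-in for the paper's matrix-free use of LQN structure, and the degenerate "hard case" is ignored.

```python
import numpy as np

def solve_cr_subproblem(g, B, sigma, tol=1e-10):
    """Exact cubic-regularization step for dense symmetric B (hard case ignored)."""
    evals, Q = np.linalg.eigh(B)
    g_hat = Q.T @ g
    # The multiplier lam = sigma * ||s|| must keep B + lam * I positive definite.
    lo = max(0.0, -evals.min()) + 1e-12
    phi = lambda lam: np.linalg.norm(g_hat / (evals + lam)) - lam / sigma
    hi = lo + 1.0
    while phi(hi) > 0:          # grow the bracket until the root is enclosed
        hi *= 2.0
    while hi - lo > tol:        # bisection: phi is monotone decreasing in lam
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if phi(mid) > 0 else (lo, mid)
    lam = 0.5 * (lo + hi)
    return -Q @ (g_hat / (evals + lam))   # step s with ||s|| = lam / sigma

# Usage on a small, possibly indefinite quadratic model.
rng = np.random.default_rng(1)
B = rng.normal(size=(5, 5)); B = 0.5 * (B + B.T)
g = rng.normal(size=5)
s = solve_cr_subproblem(g, B, sigma=0.5)
```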

* 14 pages, 1 figure, 3 tables 