Ge Jin

pTSE: A Multi-model Ensemble Method for Probabilistic Time Series Forecasting

May 16, 2023
Yunyi Zhou, Zhixuan Chu, Yijia Ruan, Ge Jin, Yuchen Huang, Sheng Li

MixSeq: Connecting Macroscopic Time Series Forecasting with Microscopic Time Series Data

Oct 27, 2021
Zhibo Zhu, Ziqi Liu, Ge Jin, Zhiqiang Zhang, Lei Chen, Jun Zhou, Jianyong Zhou

FANDA: A Novel Approach to Perform Follow-up Query Analysis

Jan 24, 2019
Qian Liu, Bei Chen, Jian-Guang Lou, Ge Jin, Dongmei Zhang

Highly Efficient 8-bit Low Precision Inference of Convolutional Neural Networks with IntelCaffe

May 04, 2018
Jiong Gong, Haihao Shen, Guoming Zhang, Xiaoli Liu, Shane Li, Ge Jin, Niharika Maheshwari, Evarist Fomenko, Eden Segal
