Tianjiao Li

ERA: Expert Retrieval and Assembly for Early Action Prediction

Jul 22, 2022

Stochastic first-order methods for average-reward Markov decision processes

May 19, 2022

Accelerated and instance-optimal policy evaluation with linear function approximation

Dec 24, 2021

Faster Algorithm and Sharper Analysis for Constrained Markov Decision Process

Oct 20, 2021

The Multi-Modal Video Reasoning and Analyzing Competition

Aug 18, 2021

UAV-Human: A Large Benchmark for Human Behavior Understanding with Unmanned Aerial Vehicles

Apr 12, 2021

Simple and optimal methods for stochastic variational inequalities, II: Markovian noise and policy evaluation in reinforcement learning

Nov 25, 2020

Simple and optimal methods for stochastic variational inequalities, I: operator extrapolation

Nov 15, 2020