Wei-Di Chang

Generalizable Imitation Learning Through Pre-Trained Representations

Nov 15, 2023
Wei-Di Chang, Francois Hogan, David Meger, Gregory Dudek

Imitation Learning from Observation through Optimal Transport

Oct 02, 2023
Wei-Di Chang, Scott Fujimoto, David Meger, Gregory Dudek

For SALE: State-Action Representation Learning for Deep Reinforcement Learning

Jun 04, 2023
Scott Fujimoto, Wei-Di Chang, Edward J. Smith, Shixiang Shane Gu, Doina Precup, David Meger

Self-Supervised Transformer Architecture for Change Detection in Radio Access Networks

Feb 03, 2023
Igor Kozlov, Dmitriy Rivkin, Wei-Di Chang, Di Wu, Xue Liu, Gregory Dudek

IL-flOw: Imitation Learning from Observation using Normalizing Flows

May 19, 2022
Wei-Di Chang, Juan Camilo Gamboa Higuera, Scott Fujimoto, David Meger, Gregory Dudek

One-Shot Informed Robotic Visual Search in the Wild

Mar 22, 2020
Karim Koreitem, Florian Shkurti, Travis Manderson, Wei-Di Chang, Juan Camilo Gamboa Higuera, Gregory Dudek

OptionGAN: Learning Joint Reward-Policy Options using Generative Adversarial Inverse Reinforcement Learning

Nov 24, 2017
Peter Henderson, Wei-Di Chang, Pierre-Luc Bacon, David Meger, Joelle Pineau, Doina Precup

Underwater Multi-Robot Convoying using Visual Tracking by Detection

Sep 25, 2017
Florian Shkurti, Wei-Di Chang, Peter Henderson, Md Jahidul Islam, Juan Camilo Gamboa Higuera, Jimmy Li, Travis Manderson, Anqi Xu, Gregory Dudek, Junaed Sattar

Benchmark Environments for Multitask Learning in Continuous Domains

Aug 14, 2017
Peter Henderson, Wei-Di Chang, Florian Shkurti, Johanna Hansen, David Meger, Gregory Dudek
