Tatsuya Harada

The University of Tokyo, RIKEN AIP

SayTap: Language to Quadrupedal Locomotion

Jun 14, 2023

HiPerformer: Hierarchically Permutation-Equivariant Transformer for Time Series Forecasting

May 14, 2023

Domain Adaptive Multiple Instance Learning for Instance-level Prediction of Pathological Images

Apr 07, 2023

Aleth-NeRF: Low-light Condition View Synthesis with Concealing Fields

Mar 10, 2023

Self-Supervised Learning for Group Equivariant Neural Networks

Mar 08, 2023

Sketch-based Medical Image Retrieval

Mar 07, 2023

Interpretable Medical Image Visual Question Answering via Multi-Modal Relationship Graph Learning

Feb 19, 2023

Name Your Colour For the Task: Artificially Discover Colour Naming via Colour Quantisation Transformer

Dec 07, 2022

Learning by Asking Questions for Knowledge-based Novel Object Recognition

Oct 12, 2022

Grouped self-attention mechanism for a memory-efficient Transformer

Oct 06, 2022