
Satoshi Tsutsui

Action Recognition based on Cross-Situational Action-object Statistics (Aug 15, 2022)

Novel View Synthesis for High-fidelity Headshot Scenes (May 31, 2022)

Reinforcing Generated Images via Meta-learning for One-Shot Fine-Grained Visual Recognition (Apr 22, 2022)

How You Move Your Head Tells What You Do: Self-supervised Video Representation Learning with Egocentric Cameras and IMU Sensors (Oct 04, 2021)

Reverse-engineer the Distributional Structure of Infant Egocentric Views for Training Generalizable Image Classifiers (Jun 12, 2021)

Whose hand is this? Person Identification from Egocentric Hand Gestures (Nov 17, 2020)

A Computational Model of Early Word Learning from the Infant's Point of View (Jun 04, 2020)

Meta-Reinforced Synthetic Data for One-Shot Fine-Grained Visual Recognition (Nov 17, 2019)

Active Object Manipulation Facilitates Visual Object Learning: An Egocentric Vision Study (Jun 04, 2019)

Combining Pyramid Pooling and Attention Mechanism for Pelvic MR Image Semantic Segmentaion (Jun 28, 2018)