Tomotake Sasaki
D3: Data Diversity Design for Systematic Generalization in Visual Question Answering

Sep 15, 2023
Amir Rahimi, Vanessa D'Amario, Moyuru Yamada, Kentaro Takemoto, Tomotake Sasaki, Xavier Boix


Modularity Trumps Invariance for Compositional Robustness

Jun 15, 2023
Ian Mason, Anirban Sarkar, Tomotake Sasaki, Xavier Boix


HICO-DET-SG and V-COCO-SG: New Data Splits to Evaluate Systematic Generalization in Human-Object Interaction Detection

May 17, 2023
Kentaro Takemoto, Moyuru Yamada, Tomotake Sasaki, Hisanao Akima


Deephys: Deep Electrophysiology, Debugging Neural Networks under Distribution Shifts

Mar 17, 2023
Anirban Sarkar, Matthew Groth, Ian Mason, Tomotake Sasaki, Xavier Boix


Safe Exploration Method for Reinforcement Learning under Existence of Disturbance

Sep 30, 2022
Yoshihiro Okawa, Tomotake Sasaki, Hitoshi Yanami, Toru Namerikawa


Transformer Module Networks for Systematic Generalization in Visual Question Answering

Jan 27, 2022
Moyuru Yamada, Vanessa D'Amario, Kentaro Takemoto, Xavier Boix, Tomotake Sasaki


Do Neural Networks for Segmentation Understand Insideness?

Jan 25, 2022
Kimberly Villalobos, Vilim Štih, Amineh Ahmadinejad, Shobhita Sundaram, Jamell Dozier, Andrew Francl, Frederico Azevedo, Tomotake Sasaki, Xavier Boix


Symmetry Perception by Deep Networks: Inadequacy of Feed-Forward Architectures and Improvements with Recurrent Connections

Dec 08, 2021
Shobhita Sundaram, Darius Sinha, Matthew Groth, Tomotake Sasaki, Xavier Boix


Three approaches to facilitate DNN generalization to objects in out-of-distribution orientations and illuminations: late-stopping, tuning batch normalization and invariance loss

Oct 30, 2021
Akira Sakai, Taro Sunagawa, Spandan Madan, Kanata Suzuki, Takashi Katoh, Hiromichi Kobashi, Hanspeter Pfister, Pawan Sinha, Xavier Boix, Tomotake Sasaki


Annotation Cost Reduction of Stream-based Active Learning by Automated Weak Labeling using a Robot Arm

Oct 03, 2021
Kanata Suzuki, Taro Sunagawa, Tomotake Sasaki, Takashi Katoh
