GO-Finder: A Registration-Free Wearable System for Assisting Users in Finding Lost Objects via Hand-Held Object Discovery


Feb 12, 2021
Takuma Yagi, Takumi Nishiyasu, Kunimasa Kawasaki, Moe Matsuki, Yoichi Sato

* 13 pages, 13 figures, ACM IUI 2021 


A Comprehensive Study on Visual Explanations for Spatio-temporal Networks


May 01, 2020
Zhenqiang Li, Weimin Wang, Zuoyue Li, Yifei Huang, Yoichi Sato



A computer-aided diagnosis system using artificial intelligence for hip fractures significantly improves the diagnostic rate of residents -- multi-institutional joint development research


Apr 05, 2020
Yoichi Sato, Yasuhiko Takegami, Takamune Asamoto, Yutaro Ono, Ryosuke Goto, Asahi Kitamura, Seiwa Honda

* 6 pages, 3 tables, 7 figures. Author's homepage: https://www.fracture-ai.org


A Computer-Aided Diagnosis System Using Artificial Intelligence for Proximal Femoral Fractures Enables Residents to Achieve a Diagnostic Rate Equivalent to Orthopedic Surgeons -- multi-institutional joint development research


Mar 11, 2020
Yoichi Sato, Takamune Asamoto, Yutaro Ono, Ryosuke Goto, Asahi Kitamura, Seiwa Honda

* 6 pages, 3 tables, 7 figures. Author's homepage: https://www.fracture-ai.org


Manipulation-skill Assessment from Videos with Spatial Attention Network


Jan 09, 2019
Zhenqiang Li, Yifei Huang, Minjie Cai, Yoichi Sato



Mutual Context Network for Jointly Estimating Egocentric Gaze and Actions


Jan 07, 2019
Yifei Huang, Minjie Cai, Zhenqiang Li, Yoichi Sato



Understanding hand-object manipulation by modeling the contextual relationship between actions, grasp types and object attributes


Jul 22, 2018
Minjie Cai, Kris Kitani, Yoichi Sato

* 14 pages, 13 figures 


Predicting Gaze in Egocentric Video by Learning Task-dependent Attention Transition


Jul 20, 2018
Yifei Huang, Minjie Cai, Zhenqiang Li, Yoichi Sato



Future Person Localization in First-Person Videos


Mar 28, 2018
Takuma Yagi, Karttikeya Mangalam, Ryo Yonetani, Yoichi Sato

* Accepted to CVPR 2018 


Ego-Surfing: Person Localization in First-Person Videos Using Ego-Motion Signatures


Nov 29, 2017
Ryo Yonetani, Kris M. Kitani, Yoichi Sato

* To appear in IEEE TPAMI 


Continuous 3D Label Stereo Matching using Local Expansion Moves


Oct 17, 2017
Tatsunori Taniai, Yasuyuki Matsushita, Yoichi Sato, Takeshi Naemura

* IEEE Trans. Pattern Anal. Mach. Intell., vol. 40, no. 11, pp. 2725-2739, 2018 
* 14 pages. An extended version of our preliminary conference paper [39], Taniai et al., "Graph Cut based Continuous Stereo Matching using Locally Shared Labels," in the proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2014). Our results were submitted to the Middlebury Stereo Benchmark Version 2 on April 22, 2015, and to Version 3 on July 4, 2017


Privacy-Preserving Visual Learning Using Doubly Permuted Homomorphic Encryption


Jul 28, 2017
Ryo Yonetani, Vishnu Naresh Boddeti, Kris M. Kitani, Yoichi Sato

* To appear in ICCV 2017 


Fast Multi-frame Stereo Scene Flow with Motion Segmentation


Jul 05, 2017
Tatsunori Taniai, Sudipta N. Sinha, Yoichi Sato

* 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 2017, pp. 6891-6900 
* 15 pages. To appear at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017). Our results were submitted to the KITTI 2015 Stereo Scene Flow Benchmark in November 2016


Hierarchical Gaussian Descriptors with Application to Person Re-Identification


Jun 14, 2017
Tetsu Matsukawa, Takahiro Okabe, Einoshin Suzuki, Yoichi Sato

* 14 pages, 12 figures, 4 tables 
