Masashi Okada

A Contact Model based on Denoising Diffusion to Learn Variable Impedance Control for Contact-rich Manipulation
Mar 20, 2024
Masashi Okada, Mayumi Komatsu, Tadahiro Taniguchi

Representation Synthesis by Probabilistic Many-Valued Logic Operation in Self-Supervised Learning
Sep 08, 2023
Hiroki Nakamura, Masashi Okada, Tadahiro Taniguchi

Learning Compliant Stiffness by Impedance Control-Aware Task Segmentation and Multi-objective Bayesian Optimization with Priors
Jul 28, 2023
Masashi Okada, Mayumi Komatsu, Ryo Okumura, Tadahiro Taniguchi

Online Re-Planning and Adaptive Parameter Update for Multi-Agent Path Finding with Stochastic Travel Times
Feb 03, 2023
Atsuyoshi Kita, Nobuhiro Suenari, Masashi Okada, Tadahiro Taniguchi

Self-Supervised Representation Learning as Multimodal Variational Inference
Mar 22, 2022
Hiroki Nakamura, Masashi Okada, Tadahiro Taniguchi

Multi-View Dreaming: Multi-View World Model with Contrastive Learning
Mar 15, 2022
Akira Kinose, Masashi Okada, Ryo Okumura, Tadahiro Taniguchi

DreamingV2: Reinforcement Learning with Discrete World Models without Reconstruction
Mar 01, 2022
Masashi Okada, Tadahiro Taniguchi

Dreaming: Model-based Reinforcement Learning by Latent Imagination without Reconstruction
Jul 29, 2020
Masashi Okada, Tadahiro Taniguchi

PlaNet of the Bayesians: Reconsidering and Improving Deep Planning Network by Incorporating Bayesian Inference
Mar 01, 2020
Masashi Okada, Norio Kosaka, Tadahiro Taniguchi

Domain-Adversarial and -Conditional State Space Model for Imitation Learning
Jan 31, 2020
Ryo Okumura, Masashi Okada, Tadahiro Taniguchi