Overview#

Deoxys is a modularized, real-time controller library for the Franka Emika Panda, built to facilitate robot learning research. Developed by Yifeng Zhu, Deoxys aims to democratize basic knowledge of robot manipulation within the robot learning community by open-sourcing the controller implementation.

Here is a list of features that we identified as strengths of our library.

✔ A user-friendly Python interface and a real-time controller implementation in C++
✔ Specialized for research on closed-loop visuomotor skill learning
✔ Easy configuration of robot controllers
✔ Seamless transfer from robosuite for real-robot control
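To make the closed-loop control idea concrete, here is a minimal sketch of how a per-step action for an operational-space (OSC-style) controller might be assembled. This is an illustration, not Deoxys's actual API: the 7-D action layout (six delta-pose terms plus one gripper command), the sign convention for the gripper, and the helper name `make_osc_action` are all our assumptions.

```python
import numpy as np

def make_osc_action(delta_pos, delta_ori, gripper, max_delta=0.05):
    """Illustrative helper (not part of Deoxys): pack a translation delta,
    an orientation delta, and a gripper command into one 7-D action,
    clipping the pose terms to a safe per-step magnitude."""
    delta_pos = np.clip(np.asarray(delta_pos, dtype=float), -max_delta, max_delta)
    delta_ori = np.clip(np.asarray(delta_ori, dtype=float), -max_delta, max_delta)
    # Gripper convention assumed here: -1.0 = open, 1.0 = close.
    gripper = float(np.clip(gripper, -1.0, 1.0))
    return np.concatenate([delta_pos, delta_ori, [gripper]])

# Example: nudge the end effector 2 cm along +x while keeping the gripper open.
action = make_osc_action([0.02, 0.0, 0.0], [0.0, 0.0, 0.0], gripper=-1.0)
print(action.shape)  # (7,)
```

A learned visuomotor policy would emit such an action at every control step, and the real-time C++ layer would track it at a much higher frequency.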

Past research#

A number of research projects have been powered by Deoxys. Here is a list of publications that build on Deoxys so far.

Highlight

  • VIOLA: Imitation Learning for Vision-Based Manipulation with Object Proposal Priors. Yifeng Zhu, Abhishek Joshi, Peter Stone, Yuke Zhu
  • Learning Generalizable Manipulation Policies with Object-Centric 3D Representations. Yifeng Zhu, Zhenyu Jiang, Peter Stone, Yuke Zhu
  • MimicPlay: Long-Horizon Imitation Learning by Watching Human Play. Chen Wang, Linxi Fan, Jiankai Sun, Ruohan Zhang, Li Fei-Fei, Danfei Xu, Yuke Zhu, Anima Anandkumar
  • NOIR: Neural Signal Operated Intelligent Robots for Everyday Activities. Ruohan Zhang, Sharon Lee, Minjune Hwang, Ayano Hiranaka, Chen Wang, Wensi Ai, Jin Jie Ryan Tan, Shreya Gupta, Yilun Hao, Gabrael Levine, Ruohan Gao, Anthony Norcia, Li Fei-Fei, Jiajun Wu
2023

  • Learning Generalizable Manipulation Policies with Object-Centric 3D Representations. Yifeng Zhu, Zhenyu Jiang, Peter Stone, Yuke Zhu
  • MUTEX: Learning Unified Policies from Multimodal Task Specifications. Rutav Shah, Roberto Martín-Martín, Yuke Zhu
  • MimicGen: A Data Generation System for Scalable Robot Learning using Human Demonstrations. Ajay Mandlekar, Soroush Nasiriany*, Bowen Wen*, Iretiayo Akinola, Yashraj Narang, Linxi Fan, Yuke Zhu, Dieter Fox
  • Interactive Robot Learning from Verbal Correction. Huihan Liu, Alice Chen, Yuke Zhu, Adith Swaminathan, Andrey Kolobov, Ching-An Cheng
  • Model-Based Runtime Monitoring with Interactive Imitation Learning. Huihan Liu, Shivin Dass, Roberto Martín-Martín, Yuke Zhu
  • Doduo: Dense Visual Correspondence from Unsupervised Semantic-Aware Flow. Zhenyu Jiang, Hanwen Jiang, Yuke Zhu
  • NOIR: Neural Signal Operated Intelligent Robots for Everyday Activities. Ruohan Zhang, Sharon Lee, Minjune Hwang, Ayano Hiranaka, Chen Wang, Wensi Ai, Jin Jie Ryan Tan, Shreya Gupta, Yilun Hao, Gabrael Levine, Ruohan Gao, Anthony Norcia, Li Fei-Fei, Jiajun Wu
  • MimicPlay: Long-Horizon Imitation Learning by Watching Human Play. Chen Wang, Linxi Fan, Jiankai Sun, Ruohan Zhang, Li Fei-Fei, Danfei Xu, Yuke Zhu, Anima Anandkumar
  • VoxPoser: Composable 3D Value Maps for Robotic Manipulation with Language Models. Wenlong Huang, Chen Wang, Ruohan Zhang, Yunzhu Li, Jiajun Wu, Li Fei-Fei
  • Primitive Skill-based Robot Learning from Human Evaluative Feedback. Ayano Hiranaka, Minjune Hwang, Sharon Lee, Chen Wang, Li Fei-Fei, Jiajun Wu, Ruohan Zhang
  • Robot Learning on the Job: Human-in-the-Loop Manipulation and Learning During Deployment. Huihan Liu, Soroush Nasiriany, Lance Zhang, Zhiyao Bao, Yuke Zhu (RSS 2023 Best Paper Nominee)
2021-2022

  • VIOLA: Imitation Learning for Vision-Based Manipulation with Object Proposal Priors. Yifeng Zhu, Abhishek Joshi, Peter Stone, Yuke Zhu
  • Bottom-Up Skill Discovery from Unsegmented Demonstrations for Long-Horizon Robot Manipulation. Yifeng Zhu, Peter Stone, Yuke Zhu
  • Augmenting Reinforcement Learning with Behavior Primitives for Diverse Manipulation Tasks. Soroush Nasiriany, Huihan Liu, Yuke Zhu (ICRA 2022 Outstanding Learning Paper)
  • Learning and Retrieval from Prior Data for Skill-based Imitation Learning. Soroush Nasiriany, Tian Gao, Ajay Mandlekar, Yuke Zhu
  • Synergies Between Affordance and Geometry: 6-DoF Grasp Detection via Implicit Representations. Zhenyu Jiang, Yifeng Zhu, Maxwell Svetlik, Kuan Fang, Yuke Zhu
  • Ditto: Building Digital Twins of Articulated Objects from Interaction. Zhenyu Jiang, Cheng-Chun Hsu, Yuke Zhu
  • ACID: Action-Conditional Implicit Visual Dynamics for Deformable Object Manipulation. Bokui Shen, Zhenyu Jiang, Christopher Choy, Leonidas J. Guibas, Silvio Savarese, Anima Anandkumar, Yuke Zhu (RSS 2022 Best Paper Nominee)
Reference#

If you find this repo useful for your research, please cite us through the following work:

@article{zhu2022viola,
  title={VIOLA: Imitation Learning for Vision-Based Manipulation with Object Proposal Priors},
  author={Zhu, Yifeng and Joshi, Abhishek and Stone, Peter and Zhu, Yuke},
  journal={arXiv preprint arXiv:2210.11339},
  doi={10.48550/arXiv.2210.11339},
  year={2022}
}