Controlling dexterous hands in their high-dimensional action spaces has been a longstanding challenge, yet humans perform dexterous tasks with ease. In this paper, we draw inspiration from the internal models observed in human motor behavior and reconsider dexterous hands as learnable systems. Specifically, we introduce MoDex, a framework that pairs neural networks (NNs) capturing the dynamical characteristics of a hand with a bidirectional planning approach, achieving both training and planning efficiency. To show the versatility of MoDex, we further integrate it with an external model to manipulate in-hand objects and with a large language model (LLM) to generate various gestures in both simulation and the real world. Extensive experiments on different dexterous hands demonstrate its data efficiency in learning new tasks and its transferability across tasks.
[Figure: hand gestures generated by MoDex, including Thumb-up, Finger-gun, OK, Rock&Roll, Scissors, and Call.]
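To make the internal-model idea concrete, the sketch below pairs a learned forward dynamics network with an inverse network and rolls them against each other in a simple planning loop. This is a minimal illustration under assumptions: the class names (ForwardModel, InverseModel, bidirectional_plan), the two-network decomposition, and the loop structure are illustrative choices of ours, not MoDex's actual architecture or API.

```python
# Hypothetical sketch of an "internal model" for a dexterous hand:
# a forward dynamics network plus an inverse network, combined in a
# simple bidirectional planning loop. All names and dimensions are
# illustrative assumptions, not MoDex's implementation.
import torch
import torch.nn as nn

class ForwardModel(nn.Module):
    """Predicts the next hand state from the current state and action."""
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

class InverseModel(nn.Module):
    """Proposes an action that moves the hand from one state toward another."""
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, state, goal_state):
        return self.net(torch.cat([state, goal_state], dim=-1))

def bidirectional_plan(fwd, inv, state, goal, horizon=10):
    """One way to plan bidirectionally: the inverse model proposes an
    action toward the goal, and the forward model predicts its outcome."""
    actions = []
    for _ in range(horizon):
        action = inv(state, goal)   # backward direction: goal -> action
        state = fwd(state, action)  # forward direction: action -> next state
        actions.append(action)
    return actions

# Example usage with arbitrary dimensions (illustrative only):
fwd = ForwardModel(state_dim=24, action_dim=20)
inv = InverseModel(state_dim=24, action_dim=20)
plan = bidirectional_plan(fwd, inv, torch.zeros(24), torch.ones(24))
```

Under this reading, the two networks play complementary roles: the forward model supplies cheap rollouts for evaluating candidate plans, while the inverse model seeds those rollouts with goal-directed actions rather than random samples.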