BEGIN:VCALENDAR
CALSCALE:GREGORIAN
VERSION:2.0
METHOD:PUBLISH
PRODID:-//Drupal iCal API//EN
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
DTSTART:20070311T020000
TZNAME:EDT
TZOFFSETTO:-0400
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
DTSTART:20071104T020000
TZNAME:EST
TZOFFSETTO:-0500
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
SEQUENCE:1
X-APPLE-TRAVEL-ADVISORY-BEHAVIOR:AUTOMATIC
UID:198531
DTSTAMP:20250131T112952Z
DTSTART;TZID=America/New_York:20250205T130000
DTEND;TZID=America/New_York:20250205T140000
URL;TYPE=URI:/news/calendar/events/rbe-colloquium-speaker-dr-yifan-zhu
SUMMARY:RBE Colloquium Speaker - Dr. Yifan Zhu
DESCRIPTION:Data-Efficient Visual-Tactile World Models for Robot Deployment in the Open World\n\nAbstract:\nDespite recent advances in robotics, the robust deployment of robots in the open world for practical tasks remains a formidable challenge. A significant roadblock is that robots lack a fundamental understanding of their interaction with the physical world, especially in new scenarios. Traditional model-based approaches use geometric primitives and physics models, which require significant prior knowledge and fail in the unknown. Deep-learning-based approaches, on the other hand, are extremely data-hungry and brittle against distribution shifts. In this talk, I will discuss how I developed robot world model representations that tightly integrate physics modeling and machine learning to enable data-efficient and accurate world models for contact-rich tasks involving rigid bodies, granular media, and deformable objects. Next, I will discuss how I tailor the world model representations to downstream active perception to allow fast adaptation of the world models with sparse online visual-tactile data. Following this, I will briefly describe how I leveraged the learned world models for planning locomotion and manipulation tasks, and designed low-cost robot hardware with multi-sensory capabilities, a prerequisite for building predictive world models from multi-modal sensing. Finally, I will conclude with my future research directions in representing, acquiring, and using robot world models for contact-rich tasks in the open world.\nBio:\nYifan Zhu is currently a postdoctoral research associate in the Mechanical Engineering and Materials Science Department at Yale University. Prior to this, he obtained a Ph.D. from the Computer Science Department at the University of Illinois Urbana-Champaign and a B.E. from the Mechanical Engineering Department at Vanderbilt University. His research centers on developing representations for robots that facilitate tight integration of physics modeling and machine learning for predictive world modeling from visual-tactile perception, especially in the low-data regime. His research has been featured in top-tier robotics venues such as R:SS, ICRA, IROS, RA-L, and IJSR. He also co-led the UIUC team that placed 4th in the $10M ANA Avatar XPRIZE competition.\nZoom link: https://wpi.zoom.us/j/93413349160\n
END:VEVENT
END:VCALENDAR