BEGIN:VCALENDAR
VERSION:2.0
CALSCALE:GREGORIAN
METHOD:PUBLISH
PRODID:-//Drupal iCal API//EN
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
DTSTART:20070311T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20071104T020000
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
SEQUENCE:1
X-APPLE-TRAVEL-ADVISORY-BEHAVIOR:AUTOMATIC
UID:204156
DTSTAMP:20250211T145002Z
DTSTART;TZID=America/New_York:20250218T150000
DTEND;TZID=America/New_York:20250218T160000
URL;TYPE=URI:/news/calendar/events/robotics-engineering-colloquium-speaker-dr-wei-xiao
SUMMARY:Robotics Engineering Colloquium Speaker - Dr. Wei Xiao
DESCRIPTION:Certifiable Neural Control for Safe Autonomy and Robotics\n\nAbstract: Safety is central to autonomous systems and robots, since a single failure could lead to catastrophic results. In unstructured, complex environments where system states and environment information are not available, the safety-critical control problem is much more challenging. In this talk, I will first discuss safety from a control-theoretic perspective with Control Barrier Functions (CBFs). CBFs capture the evolution of the safety requirements during the execution of a control system and, due to their forward invariance, can be used to guarantee safety for all times. Next, this talk will introduce an approach for extending the use of CBFs to machine-learning-based control, using differentiable CBFs that are end-to-end trainable and adaptively guarantee safety using environmental dependencies. These novel safety layers give rise to new neural network (NN) architectures, such as what we have termed BarrierNet. In machine learning and robot learning, the interpretability of a NN is crucial. The talk will further introduce a novel method called invariance propagation through the NN. This approach enables causal reasoning about the NN's parameters or inputs with respect to robot behaviors, as well as introducing guarantees. Finally, I will show how we can certify more powerful generative AI, such as diffusion models, for generalizable and safe autonomy and robotics. These techniques have been successfully applied to various robotic systems, such as autonomous ground vehicles, surface vessels, flight vehicles, legged robots, robot swarms, soft robots, and manipulators.\n\nBio: Wei Xiao is currently a postdoctoral associate at the Computer Science and Artificial Intelligence Lab (CSAIL), Massachusetts Institute of Technology. He received his Ph.D. degree from Boston University, Brookline, MA, USA, in 2021. His research interests include safety-critical control theory and trustworthy machine learning, with a particular emphasis on robotics. He received an Outstanding Dissertation Award at Boston University, an Outstanding Student Paper Award at the 2020 IEEE Conference on Decision and Control, and a Best Paper Nomination at ACM/IEEE ICCPS 2021.\nZoom link: https://wpi.zoom.us/j/93413349160\n
END:VEVENT
END:VCALENDAR