BEGIN:VCALENDAR
CALSCALE:GREGORIAN
VERSION:2.0
METHOD:PUBLISH
PRODID:-//Drupal iCal API//EN
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
DTSTART:20070311T020000
TZNAME:EDT
TZOFFSETTO:-0400
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
DTSTART:20071104T020000
TZNAME:EST
TZOFFSETTO:-0500
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
SEQUENCE:1
X-APPLE-TRAVEL-ADVISORY-BEHAVIOR:AUTOMATIC
UID:224081
DTSTAMP:20250916T125900Z
DTSTART;TZID=America/New_York:20250929T120000
DTEND;TZID=America/New_York:20250929T125000
URL;TYPE=URI:/news/calendar/events/bme-seminar-series-bashima-islam-phd-wpi-ece-mindfulness-sensing-multimodal-reasoning-toward-sensor
SUMMARY:BME Seminar Series: Bashima Islam\, PhD: WPI ECE: "From Mindfulness Sensing to Multimodal Reasoning: Toward Sensor-Language Intelligence Beyond Vision"
DESCRIPTION:Seminar Series\n"From Mindfulness Sensing to Multimodal Reasoning: Toward Sensor-Language Intelligence Beyond Vision"\n\nBashima Islam\, PhD\nAssistant Professor of Electrical and Computer Engineering\nWorcester Polytechnic Institute\n\nAbstract: The rapid growth of multimodal AI opens new opportunities for sensing\, reasoning\, and interaction\, yet most systems still focus narrowly on vision and overlook signals critical for human-centered applications. In this talk\, I will present three recent projects from my group that collectively broaden the scope of multimodal intelligence. First\, I will introduce our work on mindfulness and respiration sensing\, where we design smartphone-based algorithms that track respiration rate and estimate mindfulness skill progression using only accelerometer data. This study demonstrates how sensor feedback can enhance usability and engagement in digital mindfulness training\, providing a compelling case for health-oriented multimodal AI. Next\, I will present RAVEN\, a unified architecture for multimodal question answering. At its core is QuART\, a query-conditioned token gating module that learns to assign relevance scores across modalities\, enabling the system to amplify informative cues while suppressing distractors. Through a staged training pipeline\, RAVEN achieves robust reasoning across video\, audio\, and sensor streams. Finally\, I will discuss LLaSA\, the Large Language and Sensor Assistant\, which extends multimodal research to wearable sensing. LLaSA introduces new datasets and evaluation frameworks for aligning sensor signals with language\, offering the first general-purpose assistant that reasons jointly over sensor data and natural language queries. Together\, these projects chart a path toward sustainable multimodal systems that hear\, sense\, and reason with the world\, advancing both human-centered applications and core AI methods.\n\nBio: Dr. Bashima Islam is an Assistant Professor of Electrical and Computer Engineering at Worcester Polytechnic Institute\, with affiliations in Computer Science and Data Science. Her research transforms AI through the fusion of sensors\, speech\, and language to build sustainable\, intelligent systems for the edge. She focuses on bridging AI with real-world impact\, advancing acoustic understanding\, behavioral health monitoring\, and real-world multimodal intelligence while accounting for the resource constraints of low-power devices. She has been recognized with several prestigious honors\, including the NSF CRII Award\, multiple NIH research grants\, and selection to Forbes 30 Under 30 in Science.\n\nFor a Zoom link\, please contact Kate Harrison at kharrison@wpi.edu.
END:VEVENT
END:VCALENDAR