BEGIN:VCALENDAR
CALSCALE:GREGORIAN
VERSION:2.0
METHOD:PUBLISH
PRODID:-//Drupal iCal API//EN
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
DTSTART:20070311T020000
TZNAME:EDT
TZOFFSETTO:-0400
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
DTSTART:20071104T020000
TZNAME:EST
TZOFFSETTO:-0500
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
SEQUENCE:1
X-APPLE-TRAVEL-ADVISORY-BEHAVIOR:AUTOMATIC
UID:224601
DTSTAMP:20250929T091755Z
DTSTART;TZID=America/New_York:20251015T093000
DTEND;TZID=America/New_York:20251015T103000
URL;TYPE=URI:/news/calendar/events/department-mathematical-sciences-taouri-wang-dissertation-proposal
SUMMARY:Department of Mathematical Sciences: Taouri Wang, Dissertation Proposal
DESCRIPTION:Zoom link:\nhttps://wpi.zoom.us/j/4833920729?omn=98284733385\nMeeting ID: 483 392 0729\n\nTitle: Physics-Informed Neural Networks for High Dimensional Partial Differential Equations Arising from Stochastic Dynamics\n\nAbstract: Partial differential equations connected with stochastic dynamics are widely used in modeling, control, and modern optimization. In this work, we focus on two such equations: Fokker-Planck (FP) equations for densities of stochastic differential equations (SDEs) and Hamilton-Jacobi-Bellman (HJB) equations for Langevin diffusions with controls. Classical grid-based numerical solvers face difficulties from unbounded domains and density normalization for FP, and fully nonlinear operators for HJB. These issues are exacerbated in high dimensions.
Existing neural solvers for high-dimensional problems often require heavy Monte Carlo data, struggle with normalization for FP, or rely on policy-iteration loops for HJB that are computationally costly.\nBased on physics-informed neural networks (PINNs), I address three tasks in high dimension (dimension greater than 3): (i) steady-state FP; (ii) exploratory HJB and its use to approximate the classical HJB; and (iii) Langevin-based non-convex optimization with state-dependent temperature. To efficiently solve FP equations, we apply (a) tensor neural networks with efficient auto-differentiation (AD); (b) SDE-guided numerical support; and (c) accurate normalization for the probability density. For HJB, we solve the exploratory PDE directly (no policy iteration) by embedding the log-integral operator in the residual with a numerically stable scheme for small exploration weight λ; the result approximates the classical HJB at small λ. We then deploy the resulting solver to design state-dependent temperature schedules for Langevin-based non-convex optimization.\nFor FP, our method in 6-10 dimensions achieves less than 10% relative error. For HJB, current experiments show that exploratory solutions with our methods accurately approximate their classical counterparts at small λ. Preliminary experiments with Langevin diffusion indicate promising performance for non-convex optimization with learned temperature schedules.\nI request the committee's feedback on two issues: (1) my work on Fokker-Planck equation solvers; and (2) my ongoing development of the numerical solver for exploratory HJB and its application to Langevin diffusion with state-dependent temperature for non-convex optimization.
For (2), I will add experiments on (a) high-dimensional exploratory HJB with convergence tests toward the classical HJB as the exploration weight λ goes to 0, and (b) a different algorithmic design based on the obtained state-dependent temperature and additional non-convex optimization benchmarks.\n\nDissertation Committee:\nProf. Zhongqiang Zhang (advisor), Worcester Polytechnic Institute\nProf. Gu Wang, Worcester Polytechnic Institute\nProf. Xun Li, external member, Hong Kong Polytechnic University\nProf. Marcus Sarkis-Martins, Worcester Polytechnic Institute\nProf. Jacob Whitehill, Worcester Polytechnic Institute\n
END:VEVENT
END:VCALENDAR