Toward Trustworthy AI for Safety-Critical Systems
Please join us for this special Associates event on the Los Angeles Westside hosted by members Marina Chen and Chi-Fu Huang.
Adam Wierman, professor of computing and mathematical sciences, will provide an overview of his work developing robust and reliable tools that yield artificial intelligence (AI) with formal guarantees on performance, stability, and safety. AI and other modern tools have the potential to transform data centers, electricity grids, transportation, and other critical modern systems. However, if these tools fail in these systems, the result could be loss of life or significant property or environmental damage. The upside of using AI is substantial, yet the margin for error in these critical systems is nearly zero. What kind of balance, if any, can be struck? Is it possible to provide guarantees that allow modern AI tools to be used in safety-critical applications?