Lunch will be served at 11:45 AM.
Ethics discussions abound, but translating “do no harm” into our work is frustrating at best and obfuscating at worst. We can agree that keeping humans safe and in control is important, but implementing ethics is intimidating work. Learn how to wield your preferred technology ethics code to make systems that are accountable, de-risked, respectful, secure, honest, and usable. The presenter will introduce the topic of ethics and then step through a framework to guide teams successfully through this process.
This talk is for teams working on (or anticipating working on) emerging technologies such as artificially intelligent (AI) systems. Attendees do not need any prior experience with or knowledge of ethics.
Carol Smith is the AI Division Trust Lab Lead and a Principal Research Scientist at the Carnegie Mellon University (CMU) Software Engineering Institute. She leads research focused on development practices that result in trustworthy, human-centered, and responsible AI systems. Ms. Smith has been conducting research to improve the human experience with complex systems across industries for over 20 years. Since 2015 she has led research to integrate ethics into, and improve human experiences with, AI systems, autonomous vehicles, and other complex and emerging technologies. Ms. Smith is recognized globally as a leading researcher and user experience advocate and has presented over 250 talks and workshops in over 40 cities around the world. Her writing can be found in publications from organizations including AAAI, ACM, and the UXPA, and she has taught courses and lectured at CMU and other leading institutions. Ms. Smith is currently an ACM Distinguished Speaker and a member of the IEEE P7008™ Standard Working Group. Ms. Smith holds a Master of Science degree in Human-Computer Interaction from DePaul University.