Katia Sycara holds the Edward Fredkin Research Professorship Chair in the Robotics Institute, School of Computer Science at Carnegie Mellon University. She is the Director of the Laboratory for Advanced Agents and Robotics Technologies. She held the Sixth Century Chair in Computing at the University of Aberdeen, UK (part-time) from 2005 to 2015. She earned a B.Sc. in Applied Mathematics from Brown University, an M.Sc. in Electrical Engineering from the University of Wisconsin, and a PhD in Computer Science from the Georgia Institute of Technology. She holds an Honorary Doctorate from the University of the Aegean. She is a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) and a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI). She received the ACM/SIGART Agents Research Award and the Group Decision and Negotiation Section Award of INFORMS. She has served on various industry scientific advisory boards, such as those of France Telecom and Siemens, as well as on multiple standards committees. She has received two influential 10-year paper awards and multiple best paper awards, as well as multimillion-dollar research grants, including four Multi-University Research Initiatives (MURIs) involving multiple universities and PIs. She is the Director of the Center of Excellence in Trustworthy Human Agent Teaming. She has served as General Chair and Program Chair of multiple conferences, given numerous invited talks, and authored more than 700 technical papers that model decision making and learning in human and agent teams, multi-robot systems, and complex systems.
The Science of the Deal: Teaming and Negotiating in Artificial and Human Societies
Operations Research has focused on modeling processes and interactions in human society so as to characterize these processes, analyze and understand their nature, and predict their possible effects. Increasingly, artificial agents are entering human societies as decision support systems, as computational processes on the Web, or as an embodied presence in the form of robotic service providers or assistants. This engenders the need to develop algorithms that allow these artificial agents to interact autonomously in various ways, such as by negotiating, forming coalitions, or making decisions as a team. In this talk we present some of our work in modeling these interactions, giving examples that include models of human decision making as well as models of artificial agents that can also be used when agents interact with humans in human-autonomy teaming scenarios. Applications include crisis response, search and rescue, and environment exploration. We present insights and discuss potential vulnerabilities of consensus protocols.