Abstract: In recent years, advances in machine learning and related fields have driven significant progress in a range of user-interface technologies, including audio processing, speech recognition, and natural language processing. These advances have in turn made speech-based digital assistants and speech-to-speech translation systems practical to deploy at large scale. In essence, machines are becoming capable of hearing what we say. But will they understand what we want them to do when we talk to them? What are the prospects for getting useful work done -- in essence, by synthesizing programs -- through the act of having a conversation with a computer? In this lecture, I will speculate on the central role that programming-language design and program synthesis may play in this possible -- and, I will argue, likely -- future of computing, one in which every user writes programs, every day, by conversing with a computing system.
Bio: Dr. Peter Lee is Corporate Vice President at Microsoft Research. He manages Microsoft's worldwide research operations, comprising twelve laboratories and over 1,000 researchers, engineers, and support personnel dedicated to advancing the state of the art in computing and creating new technologies for Microsoft's products and services. Prior to joining Microsoft, Lee held key positions in both government and academia, most recently at the Defense Advanced Research Projects Agency (DARPA), where he founded and directed a major technology office that supported research in computing and related areas in the social and physical sciences. Before DARPA, Lee served as head of Carnegie Mellon University's nationally top-ranked computer science department, and also as the university's vice provost for research.