On Sept 18th the topics will be natural language processing and visual place recognition for autonomous driving. There will be presentations by two professors from the Toyota Technological Institute at Chicago (TTIC).
Accurate estimation of a vehicle’s position and orientation is integral to autonomous operation across a broad range of applications, including intelligent transportation, exploration, and surveillance. Self-driving vehicles rely heavily on Global Positioning System (GPS) receivers to estimate their absolute, geo-referenced pose. However, most commercial GPS systems offer limited precision, are sensitive to multi-path effects (e.g., in the “urban canyons” formed by tall buildings) that can introduce significant, hard-to-detect biases, and may be entirely unavailable (e.g., due to signal jamming). Visual place recognition addresses these limitations by formulating pose estimation as a problem of registering camera images against a geo-referenced image database (map). Visual place recognition is challenging due to the appearance variations that result from changes in perspective, scene composition, illumination, weather, and seasons. This talk by Matthew Walter will describe several state-of-the-art approaches to mitigating these challenges as well as directions for future research.
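The registration formulation described above can be sketched as nearest-neighbor retrieval over a database of global image descriptors. This is a minimal, hypothetical illustration, not the speaker's method: the descriptors are random stand-ins (in practice they would come from a learned model such as NetVLAD), and the poses are invented 2-D coordinates.

```python
import numpy as np

def normalize(v):
    """L2-normalize descriptor(s) along the last axis."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

rng = np.random.default_rng(0)

# Geo-referenced map: one global descriptor and one (x, y) pose per image.
# Random vectors stand in for learned image descriptors (an assumption).
map_descriptors = normalize(rng.standard_normal((100, 128)))
map_poses = rng.uniform(-100, 100, size=(100, 2))

# Simulate a query taken near map entry 42: its descriptor is a noisy copy,
# mimicking appearance change between the query and the map image.
query = normalize(map_descriptors[42] + 0.05 * rng.standard_normal(128))

# For unit-norm descriptors, cosine similarity reduces to a dot product.
scores = map_descriptors @ query
best = int(np.argmax(scores))
estimated_pose = map_poses[best]
print(best, estimated_pose)
```

The appearance variations the abstract mentions (illumination, weather, seasons) are exactly what makes the noisy-copy assumption optimistic; much of the research discussed in the talk concerns descriptors that stay close under such changes.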
Natural language processing (NLP) has been transformed during the past two years by language representation models
learned from massive amounts of unannotated text data. Kevin Gimpel will first give a brief overview of the key developments in this area, including models such as ELMo, BERT, and GPT-2. Then he will discuss his group’s work on learning representations of sentences for particular goals, including recognizing paraphrases, disentangling syntax and semantics, and generating paraphrases that match desired syntactic or stylistic properties.
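The paraphrase-recognition goal mentioned above is often posed as comparing sentence representations: two sentences are judged paraphrases when their embeddings are close. A toy sketch under stated assumptions (random stand-in word vectors, with synonym pairs artificially made similar; real systems would use learned embeddings like those discussed in the talk):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy word vectors: random stand-ins for learned embeddings (an assumption).
vocab = ["the", "cat", "sat", "on", "mat", "a", "feline",
         "rested", "rug", "dogs", "bark", "loudly"]
vecs = {w: rng.standard_normal(64) for w in vocab}

# Make synonym pairs similar by construction, standing in for what a
# learned embedding space would provide.
for a, b in [("cat", "feline"), ("sat", "rested"), ("mat", "rug"), ("the", "a")]:
    vecs[b] = vecs[a] + 0.1 * rng.standard_normal(64)

def embed(sentence):
    """Average the word vectors, then L2-normalize: a simple sentence representation."""
    v = np.mean([vecs[w] for w in sentence.split()], axis=0)
    return v / np.linalg.norm(v)

# Cosine similarity (dot product of unit vectors) as a paraphrase score.
sim_para = embed("the cat sat on the mat") @ embed("a feline rested on a rug")
sim_unrel = embed("the cat sat on the mat") @ embed("dogs bark loudly")
print(sim_para > sim_unrel)
```

Averaging word vectors ignores word order, which is one reason the talk's themes of disentangling syntax from semantics and controlling syntactic properties require richer sentence representations than this sketch.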
Food and refreshments will be provided, and a lab tour will follow the meeting for those interested.