Safety in conversational AI: Meaning is massively influenced by context

All members of the department are welcome: undergraduates, postgraduates, postdocs, teaching staff, technical staff - anyone who would like to attend and learn a little bit about what our speakers do in their research career. Members from other disciplines within the School, and the wider University community, are also welcome to attend.

All PhD students in Chemistry are expected to attend as part of their PhD training.



While the NLP community has traditionally explored the ethical issues of text-based models (such as hate speech detection, the inherent biases of a system, etc.), real-world conversations and dialogues differ significantly from structured, written text documents, and this brings its own unique set of safety challenges. From an understanding perspective, I will present research on how robust such models are to input transcripts arising from dialogues, given that they are pre-trained on massive amounts of written text. I will also present work on contexts where models must be robust to variability, and on what steps can be taken to ensure such guarantees. Additionally, in real-world interactions, uniquely human ways of communicating may be co-opted by designers of these systems to drive up user engagement. From a generation perspective, I will present research on anthropomorphism, i.e. the implications of encouraging humans to relate to such systems in human-like ways.

Speaker
Tanvi Dinkar
Venue
Meston G05