Losing bits and finding meaning: Efficient compression shapes meaning in language
Speaker: Noga Zaslavsky
Our world is extremely complex, and yet we are able to exchange our thoughts and beliefs about it using a relatively small number of words. What computational principles can explain this extraordinary ability? In this talk, I argue that in order to communicate and reason about meaning while operating under limited resources, both humans and machines must efficiently compress their representations of the world. In support of this claim, I present a series of studies showing that: (1) languages evolve under pressure to efficiently compress meanings into words; (2) the same principle can help reverse-engineer the visual representations that underlie semantic systems; (3) efficient compression may also explain how meaning is constructed in real time, as interlocutors reason pragmatically about each other; and (4) these findings offer a new framework for studying how language may emerge in artificial agents without relying on human-generated training data. This body of research suggests that efficient compression underlies meaning in language and offers a cognitively motivated approach to emergent communication in multi-agent systems.