Ondřej Dušek: Large neural language models for data-to-text generation


The current state of the art in automatic data-to-text generation, a major task in natural language generation, is dominated by large language models based on the Transformer neural network architecture. These models are capable of producing lifelike, natural texts; however, they are hard to control and often do not adhere to the input data, omitting important content or producing "hallucinated" text that is not grounded in the input. In this talk, I will first explain the basic operating principles of large language models. I will then detail our experiments aimed at making generated text more accurate in two ways: (1) improving the accuracy of the generating language models themselves, and (2) automatically detecting errors in generated texts.
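For readers unfamiliar with the task, the following Python sketch illustrates what data-to-text generation looks like in practice: a structured meaning representation (here in the attribute[value] format of the E2E NLG Challenge mentioned in the bio below) is passed to a sequence-to-sequence Transformer, which verbalizes it as text. This is a minimal illustration, not the speaker's method; the model name "t5-small" is a stand-in, as a real system would use a checkpoint fine-tuned on a data-to-text corpus.

    # A minimal sketch of data-to-text generation with a Transformer
    # sequence-to-sequence model, using the Hugging Face `transformers` library.
    # NOTE: `t5-small` is a placeholder; a practical system would be
    # fine-tuned on a data-to-text dataset (e.g., E2E NLG or WebNLG).
    from transformers import pipeline

    generator = pipeline("text2text-generation", model="t5-small")

    # Structured input: attribute[value] pairs in the E2E NLG Challenge style.
    meaning_representation = (
        "name[The Eagle], eatType[coffee shop], food[French], area[riverside]"
    )

    # The model verbalizes the linearized data as natural-language text.
    # A faithful output mentions every attribute and adds nothing else;
    # "hallucination" means the text states facts absent from the input.
    result = generator(meaning_representation, max_new_tokens=60)
    print(result[0]["generated_text"])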


BIO

Ondřej Dušek is an assistant professor at the Institute of Formal and Applied Linguistics, Faculty of Mathematics and Physics, Charles University. His research is in the areas of natural language generation and dialogue systems; he specifically focuses on neural-network-based approaches to these problems and on their evaluation. Ondřej received his PhD in 2017 at Charles University. Between 2016 and 2018, he worked at the Interaction Lab at Heriot-Watt University in Edinburgh, one of the leading groups in natural-language interaction with computers. There he co-organized the E2E NLG Challenge in text generation, and since then he has been involved in multiple international efforts around the evaluation of generated text. He recently obtained an ERC Starting Grant to develop new, fluent and accurate methods for language generation; the project will start in the coming months.

More info on AICzechia Seminars
