
Welcome to the weekly AI virtual seminars. Join us for deep-dive tech talks on AI/ML/Data, hands-on code labs, workshops, and networking with speakers & fellow developers from all over the world.
Self-Improvement with Large Language Models
Speaker: Xinyun Chen @Google DeepMind
Abstract: Large language models (LLMs) have achieved impressive performance in many domains, including code generation and reasoning. However, for challenging tasks, generating the correct solution in one attempt remains difficult. In this talk, I will first discuss our work on self-debugging, which instructs LLMs to debug their own predicted programs. In particular, we demonstrate that self-debugging can teach LLMs to perform rubber duck debugging; i.e., without any human feedback on code correctness or error messages, the model is able to identify its mistakes by investigating the execution results and explaining the generated code in natural language. Self-debugging notably improves both model performance and sample efficiency, matching or outperforming baselines that generate more than 10× as many candidate programs. In the second part, I will further demonstrate that LLMs can also improve their own prompts to achieve better performance, acting as optimizers.
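The self-debugging loop the abstract describes (generate a program, run it, let the model explain the execution feedback, revise) can be sketched roughly as below. This is a minimal illustration, not the talk's actual method: `ask_model` is a hypothetical stand-in for an LLM call, replaced here by a scripted stub so the example runs without any model access, and `solve`/`run_tests` are names invented for this sketch.

```python
def run_tests(program_src, tests):
    """Execute a candidate program and return (passed, feedback string).

    The feedback plays the role of the execution results the model
    inspects; no human labels are involved.
    """
    env = {}
    try:
        exec(program_src, env)
        for args, expected in tests:
            got = env["solve"](*args)
            if got != expected:
                return False, f"solve{args} returned {got}, expected {expected}"
        return True, "all tests passed"
    except Exception as e:
        return False, f"execution error: {e}"


def self_debug(ask_model, tests, max_rounds=3):
    """Generate a program, then iteratively revise it using only
    execution feedback, in the spirit of rubber duck debugging."""
    program = ask_model("write", None)
    for _ in range(max_rounds):
        passed, feedback = run_tests(program, tests)
        if passed:
            return program
        # The model "explains" the code and feedback, then revises.
        program = ask_model("debug", feedback)
    return program


# Scripted stub standing in for an LLM: the first attempt has an
# off-by-one bug; the revision after feedback fixes it.
def scripted_model(mode, feedback):
    if mode == "write":
        return "def solve(n):\n    return sum(range(n))"       # buggy: omits n
    return "def solve(n):\n    return sum(range(n + 1))"        # revised

tests = [((3,), 6), ((5,), 15)]
final = self_debug(scripted_model, tests)
print(run_tests(final, tests)[0])  # True
```

In a real setting, `ask_model("debug", feedback)` would prompt the LLM with its own program plus the execution trace and ask for an explanation and a fix; the key property from the abstract is that the loop needs no human correctness signal.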
Community on Slack
- Event chat: chat and connect with speakers and attendees
- Sharing blogs, events, job openings, and project collaborations
Join Slack (search and join the #virtualevents channel)