AI Meetup (London): AI, Generative AI, LLMs


Feb 22, 06:00 PM GMT
  • London AICamp
  • 238 RSVP
Description

Welcome to our monthly in-person AI meetup in London, in collaboration with Arbitration City. Join us for deep-dive tech talks on AI, GenAI, LLMs, and machine learning, along with food and drinks and networking with speakers and fellow developers.

Agenda:
* 6:00pm~6:30pm: Check-in, food/snacks/drinks, and networking
* 6:30pm~6:45pm: Welcome/community update
* 6:45pm~8:30pm: Tech talks
* 8:30pm: Open discussion & Mixer

Tech Talk: Deploy self-hosted open-source AI solutions
Speaker: Dmitri Evseev (Arbitration City)
Abstract: I will share practical insights from my journey from law firm partner to AI startup founder, focusing on deploying self-hosted, open-source AI solutions in the legal sector and beyond. I will discuss the benefits of self-hosting over third-party APIs, the challenges of implementing these systems for production use, and methods to optimise GPU usage with open-source tools. The talk will also cover approaches to integrate containerised architectures and encryption for secure, scalable AI deployment, aiming to assist the LLMOps community and others exploring self-hosted AI and retrieval-augmented generation (RAG).
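The retrieval-augmented generation (RAG) mentioned in the abstract can be illustrated with a minimal sketch of its retrieval step: documents and a query are embedded as vectors, and the most similar document is retrieved as context for the LLM. This is a generic illustration, not code from the talk; the toy documents and hand-written embedding vectors below are placeholders for what an embedding model would produce.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, doc_vecs, docs):
    # Return the document whose embedding is most similar to the query.
    scores = [cosine_similarity(query_vec, d) for d in doc_vecs]
    best = max(range(len(docs)), key=lambda i: scores[i])
    return docs[best]

# Toy corpus with hand-written 2-D "embeddings" (a real pipeline
# would compute these with an embedding model).
docs = ["arbitration clause templates", "GPU scheduling notes"]
doc_vecs = [[1.0, 0.1], [0.1, 1.0]]
query_vec = [0.9, 0.2]  # a query about legal documents

print(retrieve(query_vec, doc_vecs, docs))
```

In a production self-hosted setup, the retrieved text would be prepended to the user's prompt before it is sent to the locally deployed model.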

Tech Talk: Falcon OS - An Open-Source LLM Operating System
Speaker: Heiko Hotz (Google)
Abstract: In this talk I will introduce the Falcon OS project, a collaboration with the Technology Innovation Institute and Weights & Biases. Falcon OS is a new operating system project centered around the open-source Falcon 40B LLM. It aims to simplify complex tasks through natural language, bridging the gap between users and computers. This talk will explore its potential to transform AI applications and what it takes for an LLM to be able to reason and act, a key capability for such a system.
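The "reason and act" capability the abstract refers to is commonly implemented as a loop in which the model either calls a tool or emits a final answer. The sketch below is a generic illustration of that loop, not the Falcon OS implementation; `stub_llm` and `calculate` are hypothetical stand-ins for a real model and tool.

```python
def stub_llm(prompt):
    # Stand-in for an LLM call: first decide to use a tool,
    # then answer once an observation is available.
    # A real system would query Falcon (or another model) here.
    if "Observation:" not in prompt:
        return "Action: calculate(6 * 7)"
    return "Answer: 42"

def calculate(expr):
    # Toy tool: multiply two integers written as "a * b".
    a, _, b = expr.split()
    return int(a) * int(b)

def react_loop(question, max_steps=3):
    # Alternate model calls and tool calls until an answer appears.
    prompt = f"Question: {question}"
    for _ in range(max_steps):
        reply = stub_llm(prompt)
        if reply.startswith("Answer:"):
            return reply.removeprefix("Answer:").strip()
        # Parse the tool call, run it, and feed the observation back.
        expr = reply[len("Action: calculate("):-1]
        prompt += f"\n{reply}\nObservation: {calculate(expr)}"
    return None

print(react_loop("What is 6 * 7?"))
```

The point of the pattern is that the model never computes the result itself; it plans the tool call, and the runtime executes it and returns the observation.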

Tech Talk: Navigating LLM Deployment: Tips, Tricks, and Techniques
Speaker: Meryem Arik (TitanML)
Abstract: Self-hosted language models are going to power the next generation of applications in critical industries like financial services, healthcare, and defence. Self-hosting LLMs, as opposed to using API-based models, comes with its own host of challenges: as well as solving business problems, engineers need to wrestle with the intricacies of model inference, deployment, and infrastructure. In this talk we will discuss best practices in model optimisation, serving, and monitoring, with practical tips and real case studies.
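As a small illustration of the monitoring side mentioned in the abstract (not code from the talk), serving dashboards typically track latency percentiles such as p50 and p95 over recent requests. Below is a minimal nearest-rank percentile sketch over a hypothetical list of per-request latencies:

```python
def percentile(samples, p):
    # Nearest-rank percentile: p in [0, 100] over a list of numbers.
    ordered = sorted(samples)
    k = max(0, min(len(ordered) - 1, round(p / 100 * (len(ordered) - 1))))
    return ordered[k]

# Hypothetical per-request latencies in milliseconds.
latencies_ms = [120, 95, 450, 130, 110, 105, 900, 125, 115, 100]
p50 = percentile(latencies_ms, 50)
p95 = percentile(latencies_ms, 95)
print(f"p50={p50}ms p95={p95}ms")
```

Tail percentiles (p95/p99) matter more than the mean for LLM serving, since a few slow generations dominate the user-perceived experience.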

Speakers/Topics:
Stay tuned as we update the speaker lineup and schedule.
If you have a keen interest in speaking to our community, we invite you to submit topics for consideration: Submit Topics

Venue:
International Dispute Resolution Centre
1 Paternoster Ln, London EC4M 7BQ

Sponsors:
We are actively seeking sponsors to support the AI developer community, whether by offering venue space, providing food, or contributing cash sponsorship. Sponsors not only speak at the meetups and receive prominent recognition, but also gain exposure to our extensive membership base of 10,000+ AI developers in London and 300K+ worldwide.

Community on Slack/Discord
- Event chat: chat and connect with speakers and attendees
- Sharing blogs, events, job openings, and project collaborations
Join Slack (search and join the #london channel) | Join Discord


The event has ended.
Watch Recording
*Recordings are hosted on YouTube; clicking the link will open the YouTube page.
Contact Organizer