Join us for a power-packed night of learning, sharing, and networking at AI Dev Day - San Francisco. We are excited to bring the AI developer community together to learn and discuss the latest trends, practical experiences, and best practices in the field of AI, LLMs, generative AI, and machine learning.
In addition to the tech talks, there will be plenty of opportunities to network with AI developers, along with live demos by AI startups, a panel discussion, and career opportunities.
- 5:00pm~5:50pm: Check-in, food/drinks, and networking
- 5:50pm~6:00pm: Welcome and community update
- 6:00pm~8:00pm: Tech talks
- 8:00pm~8:30pm: Q&A and open discussion
Tech Talk 1: Getting unstuck - How not to get locked into one LLM
Speaker: Fabian Baier @Pulze.ai
Abstract: In this presentation we will explore how leaders can avoid LLM lock-in and leverage many models at once by combining a state-of-the-art knowledge graph with real-time dynamic routing. By the end of the session, you will have actionable information and tools that you can deploy immediately.
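To give a flavor of the dynamic-routing idea, here is a minimal sketch in plain Python. The model names and the scoring heuristic are assumptions for illustration only, not Pulze.ai's actual routing logic:

```python
# Toy dynamic LLM router: choose a model per request based on
# simple request features. Names and rules are illustrative.

def route(prompt: str, needs_code: bool = False) -> str:
    """Return the model best suited to this prompt (toy heuristic)."""
    if needs_code:
        return "code-model"          # hand code tasks to a code model
    if len(prompt) > 2000:
        return "long-context-model"  # long inputs need a big context window
    return "fast-cheap-model"        # default: cheapest adequate model

print(route("Write a haiku"))                    # fast-cheap-model
print(route("fix this bug", needs_code=True))    # code-model
```

A real router would score requests against live model metadata rather than hard-coded rules, but the control flow is the same: classify the request, then dispatch to the best available model.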
Tech Talk 2: Introduction to LLMs and Langchain
Speaker: Apoorva Jaiswal @JPMorgan Chase
Abstract: LLMs and LangChain have become some of the most talked-about terms in tech. To keep up with the fast-moving world of generative AI, this session will take you through the journey of LLMs. We will also see how easy it is to build an LLM-powered application using LangChain. Join me for a beginner-to-intermediate-level journey!
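The core pattern LangChain popularized, piping a filled-in prompt template into an LLM, can be illustrated without the library itself. The `FakeLLM` below is a stand-in invented for this sketch so it runs without API keys; it is not LangChain's API:

```python
# Illustrates the prompt-template -> LLM "chain" pattern,
# using a stand-in model instead of a real LLM client.

class FakeLLM:
    """Hypothetical stand-in for a real LLM client."""
    def invoke(self, prompt: str) -> str:
        return f"[model answer to: {prompt}]"

def chain(template: str, llm: FakeLLM, **vars) -> str:
    prompt = template.format(**vars)  # 1. fill the prompt template
    return llm.invoke(prompt)         # 2. send the prompt to the model

out = chain("Summarize {topic} in one line.", FakeLLM(), topic="LLMs")
print(out)
```

Swapping `FakeLLM` for a real client is the whole point of the abstraction: the chain logic stays the same while the model behind it changes.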
Tech Talk 3: Are Vector Databases Enough for Multimodal Data?
Speaker: Vishakha Gupta @ApertureData
Abstract: Are vector databases enough for use cases that involve multiple modalities of data, such as images, videos, or documents? I will present some of the use cases we encounter when talking to data science teams that work with complex, mixed data types, explain what is needed from the supporting data infrastructure, and describe how we are tackling this problem to enable comprehensive vector search plus classification.
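The gap the talk points at, pure vector similarity versus similarity combined with metadata about each item's modality, can be sketched with cosine similarity and a metadata filter. The items and embeddings below are toy data made up for the sketch:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Each item carries an embedding plus metadata about its modality.
items = [
    {"id": "img1", "vec": [1.0, 0.0], "modality": "image"},
    {"id": "vid1", "vec": [0.9, 0.1], "modality": "video"},
    {"id": "doc1", "vec": [0.0, 1.0], "modality": "document"},
]

def search(query_vec, modality=None, k=1):
    """Vector search, optionally restricted by modality metadata."""
    pool = [i for i in items if modality is None or i["modality"] == modality]
    return sorted(pool, key=lambda i: cosine(query_vec, i["vec"]),
                  reverse=True)[:k]

print(search([1.0, 0.05], modality="video"))  # vid1 ranks first among videos
```

A plain vector database gives you the `cosine`-and-sort part; supporting mixed data types well is largely about making the metadata side (the `modality` filter here, and much richer attributes in practice) a first-class citizen of the query.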
Tech Talk 4: Video Understanding with Foundation Models
Speaker: James Le @Twelve Labs
Abstract: This talk is for developers who are interested in building video applications using state-of-the-art multimodal foundation models. We will discuss: a) The evolution of models in language understanding and video understanding; b) video embeddings; c) video-language modeling.
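At its simplest, video-language modeling maps video clips and text queries into a shared embedding space and ranks clips by similarity to the query. A toy version with made-up two-dimensional embeddings (not Twelve Labs' models):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Pretend embeddings of three video clips in a tiny shared space.
clips = {"goal": [0.9, 0.1], "interview": [0.1, 0.9], "crowd": [0.5, 0.5]}
query = [1.0, 0.0]  # pretend embedding of the text "soccer goal"

# Rank clips by similarity to the text query.
best = max(clips, key=lambda name: cosine(query, clips[name]))
print(best)
```

Real video foundation models produce high-dimensional embeddings per clip segment, but text-to-video search reduces to exactly this nearest-neighbor lookup in the shared space.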
GitHub, 88 Colin P Kelly Junior Street, San Francisco, CA
How to find us: the entrance is located at 275 Brannan St.
We are actively seeking sponsors to support our community, whether by offering venue space, providing food and drinks, or contributing cash sponsorship. Sponsors will have the chance to speak at the meetups, receive prominent recognition, and gain exposure to our extensive membership base of 30k+ developers in San Francisco and 300k+ globally.
Community on Slack
- Event chat: chat and connect with speakers and attendees
- Sharing blogs, events, job openings, and project collaborations
Join Slack (browse channels and join the #san-francisco channel)