Welcome to the LLMs Night in Silicon Valley, in collaboration with Google, JPMC, and Gorilla. Join us for deep-dive tech talks on AI, GenAI, LLMs, and machine learning, along with food, drinks, and networking with speakers and fellow developers.
Agenda:
- 5:30pm~6:00pm: Check-in, food, and networking
- 6:00pm~6:10pm: Welcome, Community update
- 6:10pm~8:00pm: Tech talks and Q&A
- 8:00pm~8:30pm: Open discussion, mixer, and closing
Tech Talk: Automatic Workflow Generation with LLMs
Speaker: Saba Rahimi (JPMC)
Abstract: In this talk, Dr. Saba Rahimi will discuss FlowMind, a novel approach that leverages Large Language Models (LLMs), such as Generative Pretrained Transformer (GPT), to create an automatic workflow generation system. FlowMind uses a generic prompt recipe that grounds LLM reasoning with reliable Application Programming Interfaces (APIs). This approach not only mitigates hallucinations in LLMs, but also eliminates direct interaction between LLMs and proprietary data or code, ensuring the integrity and confidentiality of information within confidential domains. FlowMind further simplifies user interaction by presenting high-level descriptions of auto-generated workflows, enabling users to inspect them and provide feedback effectively. As part of this work, a new finance-centric question-answering dataset, NCEN-QA, was created from N-CEN reports on funds and used to evaluate FlowMind's performance in workflow generation.
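To make the prompt-recipe idea concrete, here is a minimal, hypothetical sketch of grounding an LLM with curated API descriptions rather than raw data; the API names, prompt wording, and the `build_flowmind_prompt` helper are illustrative assumptions, not the recipe from the FlowMind paper.

```python
# Hypothetical sketch of a FlowMind-style prompt recipe: the LLM sees only
# curated API descriptions (never proprietary data or code) and is asked to
# compose them into a workflow. All names and wording here are illustrative.

API_DESCRIPTIONS = """
get_fund_filings(fund_name: str) -> list[dict]
    Return structured N-CEN filing records for the given fund.
filter_filings(filings: list[dict], field: str, value: str) -> list[dict]
    Filter filing records by a field/value pair.
summarize(records: list[dict]) -> str
    Produce a short natural-language summary of the records.
"""

def build_flowmind_prompt(user_query: str) -> str:
    """Assemble a grounded prompt: role, allowed APIs, user task, and output format."""
    return (
        "You are an assistant that writes Python workflows.\n"
        "You may ONLY call the APIs described below; do not access raw data directly.\n\n"
        f"Available APIs:\n{API_DESCRIPTIONS}\n"
        f"User request: {user_query}\n\n"
        "Write a Python function `workflow()` that answers the request by composing "
        "the APIs above, then give a one-sentence high-level description of the "
        "workflow so the user can review it and provide feedback."
    )

if __name__ == "__main__":
    print(build_flowmind_prompt(
        "Which funds reported securities lending activity in their latest filing?"
    ))
```

Keeping the model on the API layer is what provides both the grounding (reliable function signatures to reason over) and the confidentiality guarantee (the LLM never sees the underlying records).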
Tech Talk: Seeing Beyond Words: Multimodal Retrieval Augmented Generation
Speaker: Jeff Nelson (Google)
Abstract: The saying "a picture is worth a thousand words" encapsulates the immense potential of visual data, yet most retrieval-augmented generation (RAG) applications rely only on text. This presentation applies RAG to multimodal use cases. We'll begin with an overview of the components that make up RAG (embeddings, vector search, a generative LLM), showcase sample architectures, and then dive into a practical demo. Attendees will learn to create powerful LLM-based workflows and embed them in existing applications.
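For a sense of how those components fit together, below is a minimal sketch of the retrieve-then-generate loop; `embed_multimodal` and `generate` are placeholders for whichever embedding model and LLM you use (no particular stack is assumed), and the brute-force cosine search stands in for a real vector database.

```python
# Minimal sketch of multimodal RAG: embed the query, retrieve the closest
# indexed items (image captions, text chunks, etc.), and ground the LLM's
# answer in them. Placeholder functions mark where real services plug in.

import numpy as np

def embed_multimodal(text: str = "", image_path: str = "") -> np.ndarray:
    """Placeholder: return a joint embedding for text and/or an image."""
    raise NotImplementedError("Call your multimodal embedding model here.")

def generate(prompt: str) -> str:
    """Placeholder: call your generative LLM with the grounded prompt."""
    raise NotImplementedError("Call your LLM here.")

def retrieve(query_vec: np.ndarray, index: list, k: int = 3) -> list:
    """Brute-force cosine-similarity search over (embedding, document) pairs."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    ranked = sorted(index, key=lambda item: cosine(query_vec, item[0]), reverse=True)
    return [doc for _, doc in ranked[:k]]

def answer(question: str, index: list) -> str:
    """Retrieve the top-k documents for the question and generate a grounded answer."""
    query_vec = embed_multimodal(text=question)
    context = "\n".join(retrieve(query_vec, index))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return generate(prompt)
```

In practice the placeholders would be replaced by calls to a hosted multimodal embedding model and a generative LLM, and the linear scan by an approximate-nearest-neighbor index, but the control flow stays the same.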
Tech Talk: Gorilla LLM: Teaching LLMs to Use Tools at Scale
Speaker: Shishir Patil (UC Berkeley)
Abstract: In this talk, we will explore our innovative approach to integrating Large Language Models (LLMs) with various tools via APIs. Bridging LLMs with APIs presents a significant challenge, primarily because of the models' struggles to generate precise input arguments and their propensity to hallucinate API calls. Gorilla LLM, trained with our novel Retriever-Aware Training (RAT), surpasses all open-source LLMs at writing API calls. Gorilla also introduces a novel PL-inspired metric to measure hallucination, a failure mode commonly encountered in LLMs. Gorilla is an open-source project that has served hundreds of thousands of user requests, with enterprise adoption and an energetic community supporting it. We'll also spotlight the Berkeley Function Calling Leaderboard, which evaluates an LLM's ability to call functions (tools) accurately. We'll conclude with lessons learned from our deployments and present open research questions on enabling wider integration of LLMs in applications.
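As a rough illustration of the retriever-aware idea, the sketch below retrieves the most relevant API documentation and appends it to the user query before asking a model to emit a single call; the toy keyword retriever, the `APIDoc` structure, and the prompt wording are assumptions for illustration, not Gorilla's actual training setup or prompts.

```python
# Illustrative sketch of retriever-aware prompting for tool use: fetch the
# most relevant API documentation, append it to the query, and ask the model
# for one concrete call. Retriever and prompt wording are stand-ins.

from dataclasses import dataclass

@dataclass
class APIDoc:
    name: str
    signature: str
    description: str

# Tiny in-memory "API database"; a real system retrieves from a much larger index.
API_DOCS = [
    APIDoc("translate", "translate(text: str, target_lang: str) -> str",
           "Translate text into the target language."),
    APIDoc("summarize", "summarize(text: str, max_words: int) -> str",
           "Summarize text in at most max_words words."),
]

def retrieve_docs(query: str, k: int = 1) -> list:
    """Naive keyword-overlap retriever standing in for an embedding-based one."""
    def score(doc: APIDoc) -> int:
        return len(set(query.lower().split()) & set(doc.description.lower().split()))
    return sorted(API_DOCS, key=score, reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Append retrieved documentation so the model grounds its call on real signatures."""
    docs = retrieve_docs(query)
    doc_text = "\n".join(f"{d.signature}  # {d.description}" for d in docs)
    return (
        f"{query}\n"
        f"Relevant API documentation:\n{doc_text}\n"
        "Respond with a single API call."
    )

if __name__ == "__main__":
    print(build_prompt("Translate 'hello world' into French"))
```

Conditioning the model on retrieved documentation at both training and inference time is what reduces hallucinated calls and lets the system track API changes without retraining.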
Speakers/Topics:
Stay tuned; we are still finalizing speakers and the schedule. If you are interested in speaking to our community, we invite you to submit topics for consideration: Submit Topics
Venue:
JPMC Tech Center, 3223 Hanover St, Palo Alto, CA 94304
Check Slack for driving directions, parking info, carpooling, etc.
Sponsors:
We are actively seeking sponsors to support the AI developer community, whether by offering venue space, providing food, or contributing cash sponsorship. Sponsors not only speak at the meetups and receive prominent recognition, but also gain exposure to our extensive membership base of 30,000+ AI developers in the San Francisco Bay Area and 350K+ worldwide.
Community on Slack/Discord
- Event chat: chat and connect with speakers and attendees
- Sharing blogs, events, job openings, and project collaborations
Join Slack (search and join the #sanfrancisco channel) | Join Discord