Welcome to our in-person monthly ML meetup in San Francisco. Join us for deep-dive tech talks on AI/ML, food and drinks, networking with speakers and fellow developers, and a chance to win lucky draw prizes.
* 5:00pm~5:30pm: Check-in, food/drink and networking
* 5:30pm~5:45pm: Welcome/community update/Sponsor intro
* 5:45pm~7:30pm: Tech talks
* 7:30pm: Open discussion, Lucky draw & Mixer
Tech Talk 1: Ray as the Common Infrastructure for LLM and Generative AI
Speaker: Zhe Zhang, Head of Ray OSS @Anyscale
Abstract: Generative AI exposes many new and exciting challenges to the underlying compute infrastructure. In this talk, we will introduce how Ray, a leading solution for scaling ML workloads, tackles these challenges (from training and fine tuning, to inference and deployment).
Because of its flexibility and architectural advantages, Ray is used by leading AI organizations to train large language models (LLMs) at scale (e.g., by OpenAI to train ChatGPT, by Cohere to train their models, by EleutherAI to train GPT-J, and by Alpa for multi-node training and serving). Meanwhile, there is also fast-growing demand from users who want to orchestrate their own "open source" generative AI workloads without training models from scratch. We will dive into how Ray can best be used in both scenarios. We will finish with a roadmap of improvements we're undertaking to make things even easier.
Tech Talk 2: LLMs for the rest of us
Speaker: Chenggang Wu, Co-founder and CTO @Aqueduct
Abstract: Large language models (LLMs) and other foundation models represent an unprecedented step forward in AI. Unfortunately, the infrastructure required to run these models is so overwhelmingly complex that only a select few companies have the requisite capabilities. Infrastructure challenges range from managing large amounts of data and deploying complex pipelines to managing compute services. In this talk, we will discuss:
- An overview of the infrastructure challenges of running LLMs in the cloud
- A demo/walkthrough of deploying an LLM on existing cloud infrastructure
- How Aqueduct seamlessly takes an LLM-powered workflow from prototype to production.
Tech Talk 3: Searching vector embeddings at scale with Weaviate
Speaker: Dan Dascalescu @Weaviate
55 Hawthorne Street, 9th Floor, San Francisco, CA 94105
We will raffle off prizes during the event. To enter the lucky draw, share the event on social media:
#aicampsf Join the monthly ML meetup in San Francisco by @aicampai to learn AI, ML, Data and Cloud technology with tech leads and industry experts. Free join in person: https://www.aicamp.ai/event/eventdetails/W2023040417
Community on Slack
- Event chat: chat and connect with speakers and attendees
- Sharing: blogs, events, job openings, project collaborations
Join Slack (search and join the #sanfrancisco channel)