AI Dev Day - Silicon Valley


Jul 25, 04:30 PM PDT
  • Plug & Play Tech Center, 440 N Wolfe Rd, Sunnyvale, CA 94085
  • Hosted by AICamp
  • 626 RSVPs
Description

Join us for a power-packed night of learning, sharing, and networking at AI Dev Day - Silicon Valley. We are excited to bring the AI developer community together to learn and discuss the latest trends, practical experiences, and best practices in AI, LLMs, generative AI, and machine learning.

In addition to the tech talks, there will be plenty of opportunities to network with AI developers, along with live demos and expo stations by AI startups, a panel discussion, and career opportunities.

Agenda (PDT):
- 4:30pm~5:30pm: Check-in, food/drinks, and networking
- 5:30pm~7:30pm: Tech talks and panel
- 7:30pm: Open discussion and mixer

Tech Talk 1: Evaluating LLM-based applications
Speaker: Josh Tobin, founder @Gantry
Abstract: Evaluating LLM-based applications can feel like more of an art than a science. In this talk, we will give a hands-on introduction to evaluating language models. You will come away with knowledge and tools you can use to evaluate your own applications, and answers to questions like: Where do I get evaluation data from, anyway? Is it possible to evaluate generative models in an automated way? What metrics can I use? What is the role of human evaluation?

Tech Talk 2: Real-Time Training and Scoring in AI/ML
Speaker: Wes Wagner, Solutions Engineer @Redpanda
Abstract: This session will discuss the important aspects of time series data and time-aware features in the context of real-time analytics. Additionally, we will cover how to merge multiple data streams for more complex feature creation and scoring.
We will apply real-time streams to anticipate future air traffic, illustrating a simple application of these concepts that you could extend to your own use cases. We will also explore the implications for MLOps in a streaming environment: the adjustments required for real-time data handling, strategies for handling missing data in a real-time setup, and how to make decisions when parts of the data streams fail.

Tech Talk 3: Working with LLMs at Scale
Speaker: Yujian Tang, Developer @Zilliz
Abstract: We'll introduce LLMs and the two main problems they face in production: high cost and lack of domain knowledge. We then introduce vector databases as a solution to these problems and cover how a vector database can facilitate data injection and caching through the use of vector embeddings.

Lightning Talk 1: From Generic To Genius: Personalize Generative AI
Speaker: Ryan Michael, VP of Engineering @Kaskada/DataStax
Abstract: Generative AI has already demonstrated immense value, but systems like ChatGPT don't know anything about who we are as individuals. At Kaskada, we have developed a compute engine to help LLMs understand who they're talking to and what they're talking about. Kaskada does this by augmenting prompts with real-time contextual information and making it easy to recreate the context of past prompts, significantly accelerating the prompt engineering process. In this talk, we introduce the abstraction that makes this possible: the concept of timelines. Timelines can be interpreted as a history of changes or as snapshots at specific points in time.

Lightning Talk 2: Practical Data Considerations for building Production-Ready LLM Applications
Speaker: Simon Suo, Cofounder / CTO @LlamaIndex
Abstract: Building an LLM application is easy, but putting it in production is hard. As an AI engineer, you are starting to ask: how do I better manage and structure my data to improve my Q&A system? In this talk, we will discuss practical data considerations for building production-ready LLM applications. You will walk away with concepts and tools to help you diagnose problems and improve your application.

Lightning Talk 3: Notebooks: A Tool for LLMs
Speaker: Kyle Kelley, Chief Architect @Noteable
Abstract: We delve into the transformative potential of integrating computational notebooks, specifically Jupyter, with Large Language Models (LLMs). This talk will explore how the Noteable Plugin has been designed to enable LLMs to write literate computational notebooks, allowing models to write prose, code, and plots.

Venue:
Plug & Play Tech Center, 440 N Wolfe Rd, Sunnyvale, CA 94085


Community on Slack
- Event chat: chat and connect with speakers and attendees
- Sharing blogs, events, job openings, and project collaborations
Join Slack (search for and join the #sanfrancisco channel)

Community Partners:
- Google Developers Group - Silicon Valley
- ACM SF Bay

Lucky draw
We will draw prize winners during the event. To enter the lucky draw, comment on the LinkedIn post: LinkedIn Post

