Scale AI with confidence. Learn directly from Ray’s creators and builders at this free technical event featuring real-world talks and hands-on workshops designed to move your AI workloads from experiment to production.
What to expect:
- Community-Led Talks: Learn real-world lessons, proven architectures, and scaling patterns from teams actively building with Ray.
- Direct Expert Access: Engage directly with Ray’s creators and builders to get answers, insights, and best practices for productionizing AI.
- Hands-on Workshops: Accelerate your path to production with an instructor-led Ray workshop covering Ray’s AI libraries.
- Peer Networking: Connect with experienced engineers and AI teams and learn from shared experiences.
Speakers:
Omar Shorbaji
Anyscale
Alicia Chua
Anyscale
Agenda: VLA Track — Fine-tune VLA models for physical AI
12:00 - 12:30 PM
Registration + Networking
12:30 - 12:45 PM
Training kickoff and environment setup
12:45 - 1:30 PM
Module 1: Ray, the Foundation for Distributed Physical AI
Overview of Ray's core concepts: how Ray provides simple, unified APIs for cluster computing, a tour of the Ray libraries, and setup with basic execution examples.
1:30 - 2:30 PM
Module 2: Large scale VLA fine-tuning
Overview of the data preprocessing and distributed training libraries, Ray Data and Ray Train, with an emphasis on vision-language-action (VLA) models.
2:45 - 3:45 PM
Module 3: Robotics simulation
Using Ray Core to parallelize computationally intensive simulations (e.g., MuJoCo, Isaac Sim).
3:45 - 4:00 PM
Q&A and closing remarks
4:00 - 5:30 PM
Happy Hour + Networking
Agenda: Ray Track — Distributed training with Ray and PyTorch
12:45 - 1:30 PM
Module 1: Scaling Python for AI Workloads with Ray
Learn Ray’s core concepts, including tasks, actors, and clusters. Understand how Ray scales Python and ML workloads, manages resources, and runs distributed programs reliably across nodes.
1:30 - 2:30 PM
Module 2: Building scalable multimodal data pipelines
Ingest, transform, and preprocess large multimodal datasets, then build streaming pipelines to chunk data and generate embeddings at scale.
2:45 - 3:45 PM
Module 3: Distributed Training at scale
Scale model training with data and model parallelism (e.g., FSDP). Run distributed jobs, manage checkpoints, and integrate frameworks like PyTorch with reliability and performance.
3:45 - 4:00 PM
Q&A and closing remarks
Venue:
Convene, One Boston Place
201 Washington St, Boston, MA 02108