This virtual AI seminar is hosted by SF Big Analytics.
Tech Talk: Training GNNs at Internet Scale using cuGraph and WholeGraph
Speaker: Joe Eaton (Nvidia)
Abstract: We present our approach to managing 70 TB graph datasets and training GraphSAGE across 1024 GPUs. One key feature of our approach is the separation of the graph sampling and GNN training phases, giving the user the flexibility to scale each independently of the other. WholeGraph provides a distributed feature store that leverages GPU memory and caching for high-performance data loading. According to our profiling, data loading and sampling are the two largest bottlenecks in GNN training.
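To make the decoupled design concrete, here is a minimal, self-contained Python sketch of the pattern the abstract describes: a sampling phase that persists mini-batches to storage, and a separate training phase that reads them back while gathering features from a GPU-cached feature store. Every name in this sketch (CachedFeatureStore, sample_batches, train_on_batches) is a hypothetical stand-in for illustration, not the actual cuGraph or WholeGraph API.

```python
# Hypothetical illustration of decoupled sampling/training with a cached
# feature store. Not the cuGraph/WholeGraph API.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"


class CachedFeatureStore:
    """Toy stand-in for a distributed feature store with a GPU-resident cache."""

    def __init__(self, features: torch.Tensor, cache_size: int):
        self.features = features          # full feature matrix in host memory
        self.cache = {}                   # node id -> cached GPU row
        self.cache_size = cache_size

    def gather(self, node_ids: torch.Tensor) -> torch.Tensor:
        rows = []
        for nid in node_ids.tolist():
            if nid not in self.cache:
                if len(self.cache) >= self.cache_size:   # simple FIFO eviction
                    self.cache.pop(next(iter(self.cache)))
                self.cache[nid] = self.features[nid].to(device)
            rows.append(self.cache[nid])
        return torch.stack(rows)


def sample_batches(num_nodes: int, num_batches: int, batch_size: int, path: str):
    """Phase 1: sample mini-batches (here: uniform node sampling) and persist them."""
    batches = [torch.randint(0, num_nodes, (batch_size,)) for _ in range(num_batches)]
    torch.save(batches, path)


def train_on_batches(path: str, store: CachedFeatureStore, model, optimizer):
    """Phase 2: train on the pre-sampled batches, gathering features on demand."""
    for node_ids in torch.load(path):
        x = store.gather(node_ids)
        loss = model(x).pow(2).mean()     # placeholder objective
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()


if __name__ == "__main__":
    feats = torch.randn(10_000, 64)
    store = CachedFeatureStore(feats, cache_size=2_048)
    model = torch.nn.Linear(64, 8).to(device)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    sample_batches(num_nodes=10_000, num_batches=5, batch_size=256, path="batches.pt")
    train_on_batches("batches.pt", store, model, opt)
```

Because the two phases communicate only through persisted batches and the feature store, the number of GPUs devoted to sampling and to training can be scaled independently, which is the flexibility the talk highlights.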
Tech Talk: Practical Evaluation of LLMs and LLM Systems
Speaker: Andrei Lopatenko
Abstract: In-Depth Analysis of LLM Evaluation Methods: Gain insight into the various methods used to evaluate LLMs and understand their strengths and weaknesses.
End-to-End Evaluation Techniques: Explore how LLM-augmented systems are assessed from a holistic perspective, ensuring comprehensive evaluation.
Pragmatic Approach to System Deployment: Learn practical strategies for applying evaluation techniques to real-world systems, ensuring seamless deployment and functionality.
Focused Overview of Critical LLM Aspects: Get an overview of essential evaluation techniques for assessing crucial elements of modern LLM systems, enhancing understanding and applicability.
Simplifying the Evaluation Process: Understand how to streamline the evaluation process, making the work of LLM scientists more efficient and productive.
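As a small illustration of the kind of end-to-end evaluation loop the talk covers, here is a minimal Python sketch that scores a model's outputs against a task set with an exact-match metric. The example cases and the generate callable are hypothetical placeholders introduced for this sketch, not material from the talk; a real harness would substitute an actual model client and task-appropriate metrics (LLM-as-judge, BLEU, pass@k, and so on).

```python
# Hypothetical minimal evaluation harness: exact-match accuracy over a task set.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class EvalCase:
    prompt: str
    expected: str


def exact_match(prediction: str, expected: str) -> bool:
    """Normalize whitespace and case before comparing."""
    return prediction.strip().lower() == expected.strip().lower()


def run_eval(cases: List[EvalCase], generate: Callable[[str], str]) -> float:
    """Return the accuracy of `generate` over the evaluation cases."""
    correct = sum(exact_match(generate(c.prompt), c.expected) for c in cases)
    return correct / len(cases)


if __name__ == "__main__":
    cases = [
        EvalCase("What is the capital of France?", "Paris"),
        EvalCase("2 + 2 = ?", "4"),
    ]
    # Trivial stand-in "model" for demonstration only.
    canned = {c.prompt: c.expected for c in cases}
    print(f"accuracy = {run_eval(cases, lambda p: canned.get(p, '')):.2f}")
```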