
Join us for an exclusive series of webinars and workshops designed to enhance your AI skills and accelerate productivity, with a focus on AI coding, DeepSeek, Hugging Face, GraphRAG, Intel OPEA, and more. Whether you're building local LLMs, optimizing AI workflows, or deploying AI agents, these live sessions, led by industry experts, will provide invaluable insights and hands-on experience.
Session 6: Building an LLM-Powered Chatbot with Streamlit* and Hugging Face*
Construct a chatbot with a Streamlit* frontend that leverages the power of LLMs. Simply connect your OpenAI-compatible API and model endpoint to an application hosted on Hugging Face*. In this demonstration, the model inference endpoints are hosted on Intel® Gaudi® accelerators and deployed on Denvr Dataworks’ cloud servers. The session covers these topics:
- Building an LLM-powered chatbot on Hugging Face.
- Coding a front-end Streamlit application.
- Using API secrets on Hugging Face.
- Using a modern OpenAI-compatible API.
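To give a flavor of the approach, here is a minimal sketch of how an OpenAI-compatible chat-completions request can be assembled with only the Python standard library. The endpoint URL, model name, and API key below are placeholders, not the actual Denvr Dataworks values covered in the session; in the Streamlit app, the key would be read from Hugging Face secrets (e.g. `st.secrets`) rather than hard-coded.

```python
import json
import urllib.request

def build_chat_request(base_url: str, api_key: str, model: str,
                       messages: list) -> urllib.request.Request:
    """Build a POST request against an OpenAI-compatible
    /chat/completions endpoint (works with any backend that
    speaks this API, such as a model served from Intel Gaudi
    accelerators)."""
    body = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            # In a Streamlit app, pull this from st.secrets instead.
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

# Hypothetical values for illustration only.
req = build_chat_request(
    base_url="https://api.example.com/v1",    # placeholder endpoint
    api_key="YOUR_API_KEY",                   # placeholder secret
    model="example/chat-model",               # placeholder model id
    messages=[{"role": "user", "content": "Hello!"}],
)
```

Sending the request (e.g. with `urllib.request.urlopen(req)`) returns a JSON response whose assistant reply the Streamlit frontend can render in the chat window.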
Speakers:
- Benjamin Consolvo, Staff AI Software Engineer at Intel
- Murilo Gustineli, Senior Software Solutions Engineer at Intel
Venue:
Virtual; join from anywhere.
Upcoming Sessions: