Large Language Models Meetup (Austin)


May 17, 05:00 PM CDT
  • US-Austin (Capital Factory, 701 Brazos St, Austin, TX 78701) AICamp
  • 109 RSVPs

Welcome to our in-person monthly ML meetup in Austin. Join us for deep-dive tech talks on AI/ML, food and drinks, and networking with speakers and fellow developers.

Agenda (CDT):
* 5:00pm~5:30pm: Check-in, food/drinks, and networking
* 5:30pm~5:40pm: Welcome, community update, and sponsor intro
* 5:40pm~7:30pm: Tech talks
* 7:30pm: Open discussion, lucky draw & mixer

Tech Talk 1: Explore LLMs with LangChain
Speaker: Sophia Yang, Senior Data Scientist @Anaconda
Abstract: Is LangChain the easiest way to interact with large language models and build applications? It's an open-source tool, and it provides many capabilities I find useful:
- Integrate with various LLM providers, including OpenAI, Cohere, Hugging Face, and more.
- Create a question-answering or text-summarization bot over your own documents.
- Handle chat history with LangChain Memory.
- Chain multiple LLMs together and use LLMs with a suite of tools such as Google Search, a Python REPL, and more.

In this talk, we are going to explore LLMs with LangChain together.
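The capabilities above center on a pattern LangChain popularizes: a prompt template feeding a model, with chains composing the steps. As a rough conceptual sketch in plain Python (with a stubbed model standing in for a real LLM provider — this is not the actual LangChain API):

```python
# Conceptual sketch of the prompt-template + chain pattern that LangChain
# popularizes. The "LLM" here is a stub; a real setup would call a provider
# such as OpenAI or Cohere through LangChain's integrations.

def stub_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; returns a canned answer."""
    return f"[answer to: {prompt}]"

def make_chain(template: str, llm=stub_llm):
    """Return a callable that fills the template and sends it to the model."""
    def chain(**kwargs):
        return llm(template.format(**kwargs))
    return chain

# Two single-step chains...
summarize = make_chain("Summarize this document:\n{doc}")
answer = make_chain("Given this summary:\n{summary}\nAnswer: {question}")

# ...composed sequentially, in the spirit of a LangChain SequentialChain:
def qa_over_doc(doc: str, question: str) -> str:
    summary = summarize(doc=doc)
    return answer(summary=summary, question=question)

print(qa_over_doc("LangChain meetup notes...", "Where is the meetup?"))
```

The talk covers the real API, which adds provider integrations, memory for chat history, and tool use on top of this basic composition idea.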

Tech Talk 2: How to Expand the Capabilities of Large Language Models
Speaker: Jonathan Mugan, Principal Scientist @DeUmbra
Abstract: Large Language Models (LLMs) are a wonderful and surprising advance in artificial intelligence, but we still have work to do: while they show a remarkable ability to converse, they still struggle with truth, exactness, and novel thought. In this talk, we will take a detailed look at how LLMs work under the hood, and we will cover how we can expand their ability to understand the world by training them with a cognitive foundation. We will also discuss how we can combine LLMs with GOFAI (Good Old-Fashioned Artificial Intelligence) to get the best of both approaches. GOFAI is brittle, but it is precise and can reason forward. If we can have LLMs write GOFAI representations such as logic formulas, we can build an intelligence that exhibits both the suppleness of LLMs and the exactness and forward reasoning of GOFAI.

Venue:
Capital Factory, 701 Brazos St, Austin, TX 78701
How to find us: Antone room on the 16th floor.

Lucky draw
We will draw winners for prizes during the event. To enter the lucky draw, share the event on social media:

  • Prefer Twitter? Tweet the event with hashtag #aicampaustin and tag @aicampai. For example:
  • #aicampaustin Join the monthly ML meetup in Austin by @aicampai to learn AI, ML, Data and Cloud technology with tech leads and industry experts. Free join in person: https://www.aicamp.ai/event/eventdetails/W2023051715
  • Prefer LinkedIn? Comment on the event's LinkedIn post.
  • Community on Slack
    - Event chat: chat and connect with speakers and attendees
    - Share blogs, events, job openings, and project collaborations
    Join Slack (search and join #austin channel)


    Sophia Yang
    Sophia Yang is a Senior Data Scientist and Developer Advocate at Anaconda. She is passionate about the data science community and the Python open-source community. She is the author of multiple Python open-source libraries, such as condastats, cranlogs, PyPowerUp, intake-stripe, and intake-salesforce. She serves on the Steering Committee and the Code of Conduct Committee of the Python open-source visualization system HoloViz. She also volunteers at NumFOCUS, PyData, and SciPy conferences.
    Jonathan Mugan
    Dr. Mugan is a Principal Scientist at DeUmbra where he specializes in machine learning and artificial intelligence. His research focuses on giving robots the ability to acquire a grounded understanding of their environment. You can read his recent article (https://thegradient.pub/grounding-large-language-models-in-a-cognitive-foundation/) in The Gradient for a description of his current thinking.
    The event ended.
    Watch Recording
    *Recordings are hosted on YouTube; clicking the link opens the YouTube page.