
LLMs in Sequential Settings: Can They Predict? Step-by-Step (Part One)

Can Foundation Models Like BERT Predict Sequential Event-Based Data?

In this video, we explore whether foundation models like BERT can make predictions from sequences of event-based data. Using user interaction logs from a quiz system as a test case, we demonstrate how BERT can be fine-tuned to predict quiz scores from sequences of user actions.

We cover the entire process in detail:

  1. Data Collection and Preprocessing: Gathering user interaction logs, which record actions (changing an answer, requesting a hint, and so on) along with the time intervals between them, and converting this raw data into a sequence of tokens that encode both action and timing.
  2. Feature Engineering: Turning these token sequences into inputs suitable for a Transformer model. For instance, changing an answer 5-10 seconds after the previous event is tokenized as ‘UA2’ (see the tokenization sketch after this list).
  3. Model Training: Fine-tuning the Transformer on the prepared dataset and adjusting hyperparameters to optimize performance (a minimal training sketch also follows below).
  4. Model Evaluation: Validating the model on a held-out test set to measure how accurately it predicts quiz scores.
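To make the tokenization in steps 1-2 concrete, here is a minimal sketch in Python. Only the ‘UA2’ mapping (an answer change 5-10 seconds after the previous event) comes from the video; the bucket boundaries and the ‘RH’ (request hint) action code are illustrative assumptions.

```python
# Minimal tokenization sketch. Only the 'UA2' mapping (answer change,
# 5-10 s) is from the video; the bucket boundaries and the 'RH'
# (request hint) code are illustrative assumptions.
TIME_BUCKETS = [(0, 5), (5, 10), (10, 30), (30, float("inf"))]  # seconds

def time_bucket(seconds: float) -> int:
    """Map elapsed seconds to a 1-based bucket index."""
    for i, (lo, hi) in enumerate(TIME_BUCKETS, start=1):
        if lo <= seconds < hi:
            return i
    return len(TIME_BUCKETS)

def tokenize_events(events):
    """Turn (action_code, elapsed_seconds) pairs into tokens like 'UA2'."""
    return [f"{action}{time_bucket(dt)}" for action, dt in events]

# An answer change 7 s after the last event, then a hint request 12 s later.
print(tokenize_events([("UA", 7.0), ("RH", 12.0)]))  # ['UA2', 'RH3']
```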
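And a hedged sketch of steps 3-4, assuming the Hugging Face Transformers and Datasets libraries. The checkpoint name, the regression framing (num_labels=1), the toy data, and the hyperparameters below are assumptions for illustration, not necessarily the exact setup shown in the video.

```python
# Sketch of steps 3-4: fine-tune a BERT checkpoint to predict a quiz
# score from a token sequence, framed here as regression. Checkpoint,
# toy data, and hyperparameters are illustrative assumptions.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=1, problem_type="regression")

def encode(example):
    # Join event tokens into one string; BERT's WordPiece tokenizer then
    # splits unseen tokens like 'UA2' into subwords.
    enc = tokenizer(" ".join(example["tokens"]),
                    truncation=True, padding="max_length", max_length=128)
    enc["labels"] = float(example["score"])
    return enc

# Toy dataset: two event sequences with their (normalized) quiz scores.
splits = (Dataset.from_dict({"tokens": [["UA2", "RH3"], ["UA1"]],
                             "score": [0.8, 0.4]})
          .map(encode)
          .train_test_split(test_size=0.5))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="quiz-bert", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=splits["train"],
    eval_dataset=splits["test"],
)
trainer.train()            # step 3: fine-tune on the training split
print(trainer.evaluate())  # step 4: loss on the held-out split
```

Framing score prediction as regression keeps the head simple; a bucketed-score classification setup would work the same way with num_labels set to the number of score buckets.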

In addition to the core project, we’ll demonstrate how to connect to a remote machine over SSH from VS Code for remote training runs, explaining the thought process behind our approach along the way.
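For reference, VS Code’s Remote - SSH extension reads standard OpenSSH entries from ~/.ssh/config; a minimal entry looks like the one below (the alias, address, and username are placeholders, not our actual server):

```
# Hypothetical host entry; alias, address, and user are placeholders.
Host quiz-gpu-box
    HostName 203.0.113.10
    User researcher
    IdentityFile ~/.ssh/id_ed25519
```

With the entry saved, run “Remote-SSH: Connect to Host…” from the Command Palette and pick the alias; VS Code opens a new window attached to the remote machine, so training can run on the server’s hardware.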

This is part one of our series. In part two, we’ll dive into more advanced models to enhance performance. Stay tuned for more!

Ready to see if BERT can predict sequential event-based data? Watch the full video on our channel and explore the code on GitHub. Don’t forget to subscribe and hit the bell icon to stay updated with our latest videos!
