Best Practices in Hyperparameter Optimization for Developing High-Performing Models

ABOUT THIS WEBINAR
This session will explore how to balance model training and hyperparameter optimization (HPO) to develop high-performing models. Along the way, attendees will build intuition for the practical aspects of model training, metric selection, and HPO while constructing a deep learning model.

First, we will discuss the best ways to track runs as you work through the modeling process. Second, we will cover useful visualizations for analyzing model behavior, comparing architectures, and evaluating metrics. Finally, we will delve into methods for automated hyperparameter optimization, focusing on how tuning hyperparameters boosts model performance, yields model insights, and strengthens modeling workflows and team collaboration.
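For attendees who want a head start on the run-tracking theme, below is a minimal, library-agnostic sketch of one way to record training runs for later comparison. The runs/ directory layout and the log_run helper are illustrative assumptions, not part of the webinar materials or the SigOpt API.

    import json
    import time
    from pathlib import Path

    RUNS_DIR = Path("runs")  # illustrative location for run records

    def log_run(hyperparameters, metrics, notes=""):
        """Hypothetical helper: persist one training run as a JSON record
        so runs can be compared and visualized later."""
        RUNS_DIR.mkdir(exist_ok=True)
        record = {
            "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
            "hyperparameters": hyperparameters,
            "metrics": metrics,
            "notes": notes,
        }
        path = RUNS_DIR / f"run_{int(time.time() * 1000)}.json"
        path.write_text(json.dumps(record, indent=2))
        return path

    # Example: record one run of a classifier (values shown are placeholders).
    log_run(
        hyperparameters={"learning_rate": 1e-3, "hidden_units": 128},
        metrics={"val_auprc": 0.71, "val_loss": 0.09},
        notes="baseline architecture",
    )

A dedicated tracking tool such as SigOpt adds search, visualization, and team sharing on top of this basic record-every-run discipline.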

We will present an anomaly detection problem based on a Kaggle financial dataset, using a deep learning classification model to show, rather than tell, how HPO can be useful. After presenting the use case and the dataset, we'll walk through the optimization journey, highlighting the model performance and workflow improvements a modeler can observe when following a structured optimization approach with SigOpt. The session will feature optionally interactive sections built around Jupyter notebooks and the SigOpt web UI. Every attendee will get temporary, free access to SigOpt to run their own tuning jobs during the webinar, and all attendees will also be eligible for an exclusive offer of SigOpt access for the MLconf community.
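As a preview of the kind of tuning job attendees will run, here is a minimal sketch of an optimization loop using SigOpt's Core API. The experiment name, parameter bounds, the AUPRC metric, and the train_and_evaluate stub are illustrative assumptions, not the webinar's actual notebook code.

    from sigopt import Connection

    conn = Connection(client_token="YOUR_SIGOPT_API_TOKEN")

    # Define the search space and the metric to optimize.
    experiment = conn.experiments().create(
        name="fraud-classifier-hpo",  # hypothetical experiment name
        parameters=[
            dict(name="learning_rate", type="double", bounds=dict(min=1e-5, max=1e-1)),
            dict(name="hidden_units", type="int", bounds=dict(min=32, max=512)),
            dict(name="dropout", type="double", bounds=dict(min=0.0, max=0.5)),
        ],
        metrics=[dict(name="auprc", objective="maximize")],
        observation_budget=30,  # total number of tuning runs
    )

    def train_and_evaluate(assignments):
        """Hypothetical stand-in: train the classifier with the suggested
        hyperparameters (e.g. assignments["learning_rate"]) and return the
        validation metric to optimize, such as AUPRC."""
        raise NotImplementedError("replace with your training code")

    # The optimization loop: SigOpt suggests, we train, we report back.
    for _ in range(experiment.observation_budget):
        suggestion = conn.experiments(experiment.id).suggestions().create()
        value = train_and_evaluate(suggestion.assignments)
        conn.experiments(experiment.id).observations().create(
            suggestion=suggestion.id,
            value=value,
        )

Each reported observation informs the next suggestion, so the search concentrates on promising regions of the hyperparameter space rather than sampling it blindly.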
ADDITIONAL INFO
  • Duration: 1 hour
  • Price: Free
  • Language: English
  • Who can attend? Everyone
  • Dial-in available? (listen only): Not available.