Testing ML systems

Episode: 74 of 336
Duration: 47min
Language: English
Format:
Category: Non-fiction

Production ML systems include more than just the model. In these complicated systems, how do you ensure quality over time, especially when you are constantly updating your infrastructure, data, and models? Tania Allard joins us to discuss the ins and outs of testing ML systems. Among other things, she presents a simple formula that helps you score your progress towards a robust system and identify problem areas.
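The scoring idea referenced here (see the "What's your ML score" talk in the show notes) can be sketched in code. This is a minimal, hedged sketch — not the exact formula from the episode — assuming the common rubric of awarding points per test in each area (e.g. 0.5 for a manually run, documented test; 1.0 for an automated one) and taking the minimum across areas, so the weakest-tested area caps the overall score:

```python
def section_score(tests):
    """tests: list of 'manual' or 'automated' entries for one section."""
    points = {"manual": 0.5, "automated": 1.0}
    return sum(points[t] for t in tests)

def ml_test_score(sections):
    """sections: dict mapping section name -> list of test entries.

    The overall score is the minimum section score: the system is only
    as robust as its least-tested area (data, model, infra, monitoring).
    """
    return min(section_score(tests) for tests in sections.values())

score = ml_test_score({
    "data": ["automated", "automated", "manual"],       # 2.5
    "model": ["automated", "manual"],                   # 1.5
    "infrastructure": ["automated"],                    # 1.0
    "monitoring": ["manual", "manual"],                 # 1.0
})
print(score)  # 1.0 — capped by infrastructure and monitoring
```

Taking the minimum rather than the sum is the key design choice: it surfaces the neglected area (here, infrastructure and monitoring) instead of letting heavy testing in one area mask gaps elsewhere.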

Featuring:

• Tania Allard – Website, GitHub, X
• Chris Benson – Website, GitHub, LinkedIn, X
• Daniel Whitenack – Website, GitHub, X

Show Notes:

• “What’s your ML score” talk
• “Jupyter Notebooks: Friends or Foes?” talk
• Joel Grus’s episode: “AI code that facilitates good science”
• Papermill
• nbdev
• nbval

Books

“DevOps For Dummies” by Emily Freeman

Something missing or broken? PRs welcome!
