I made a model, now what?

This is a bit late, but last October I gave a talk at the PyData Atlanta Meetup. That meetup is really great, and I've enjoyed going to it over the past couple of years; if you're in Atlanta, I highly recommend it. I'd given lightning talks there a couple of times before, but this was the first time I'd given a longer-format talk. As best I can tell, people seemed to like it. Here are the slides:

The theme of the talk was basically what you, as a data scientist, can do to make sure your models:

  1. Actually get into production
  2. Continue to work once in production
  3. Fail in an observable way when they inevitably degrade

A lot of this comes down to understanding your organization. Inevitably there is some handoff between the scientist who made a model and someone in ops or engineering who is going to make sure it works in production and manage it there. So understanding exactly what "production" means, how to make that handoff an easy process, and how to make sure the custodian can tell when something is going wrong is a critical and undersold responsibility of the data scientist.

One idea I presented here, and that I've used with success, is to package as much of the data processing as possible into a scikit-learn Pipeline object. This creates a nice pickle-able artifact that can be passed around (with security caveats) and owned by data science. Data science produces this artifact and specifies some expectations for input and output, and the ops/engineering team now has a thing they can treat as more or less a black box.
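
For concreteness, here's a minimal sketch of that pattern. The toy data, the estimator choice, and the file name are all placeholders for illustration, not anything from the talk:

```python
import pickle

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Toy data standing in for whatever data science actually trains on.
X_train, y_train = make_classification(n_samples=100, n_features=4, random_state=0)

# All of the data processing lives inside the pipeline, so the fitted
# object carries its own preprocessing around with it.
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("model", LogisticRegression()),
])
pipeline.fit(X_train, y_train)

# The whole thing pickles as a single artifact that can be handed off.
with open("model_artifact.pkl", "wb") as f:
    pickle.dump(pipeline, f)

# On the ops side, the artifact is loaded and called as a black box:
# rows in, predictions out, per the documented input/output contract.
with open("model_artifact.pkl", "rb") as f:
    artifact = pickle.load(f)

print(artifact.predict(X_train[:5]))
```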

Operationally, this is hugely useful, but it can be an observability anti-pattern in production. If the people monitoring production see a critical component as a black box, how do they know when something is going wrong? There must be a mechanism for logging metadata about the model's behavior over time and presenting it to the data science team, so that they aren't throwing models over the wall and hoping for the best. They need to be able to keep an eye on things and stay engaged.
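
One way to get there, sketched below with an entirely hypothetical `MonitoredModel` wrapper (the class, its fields, and the logging destination are my illustration, not a prescribed design), is to log summary statistics on every prediction call so data science can watch for drift without ops needing to understand the model's internals:

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("model_monitoring")


class MonitoredModel:
    """Wraps a fitted pipeline and emits metadata on every call."""

    def __init__(self, pipeline, model_version):
        self.pipeline = pipeline
        self.model_version = model_version

    def predict(self, X):
        start = time.time()
        predictions = self.pipeline.predict(X)
        # Log summary statistics rather than raw data, so the data
        # science team can track behavior over time from wherever the
        # logs are already aggregated.
        logger.info(json.dumps({
            "model_version": self.model_version,
            "n_rows": len(X),
            "latency_ms": round((time.time() - start) * 1000, 2),
            "prediction_mean": float(predictions.mean()),
        }))
        return predictions
```

Ops can deploy the wrapper like any other artifact, and the logged metadata becomes the shared surface both teams watch.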

So that's what I talked about. What do you think? How do you organize ownership of artifacts within an engineering/ops/data science organization?

Will

Will has a background in Mechanical Engineering from Auburn, but mostly just writes software now. He was the first employee at Predikto, where he is currently building out the premier platform for predictive maintenance in heavy industry as Chief Scientist. When not working on that, he is generally working on something related to Python, data science, or cycling.