What’s New in v0.6.2: ♻️ Continuous Deployment and a fresh CLI 👩‍💻

Last updated: November 3, 2022.

ZenML 0.6.2 brings you the ability to serve models using MLflow deployments, as well as an updated CLI interface! For a real continuous deployment cycle, we know that ZenML pipelines should be able to handle everything — from pre-processing to training to serving to monitoring and then potentially re-training and re-serving. The interfaces we created in this release are the foundation on which all of this will be built.

We also improved how you interact with ZenML through the CLI. Everything looks much smarter and more readable now that the popular rich library is part of our dependencies.

Smaller changes that you’ll notice include updates to our cloud integrations and bug fixes for Windows users. For a detailed look at what’s changed, give our full release notes a glance.

♻️ Continuous Deployment with MLflow

A Continuous Deployment workflow. Achievement unlocked!

The biggest new feature in the 0.6.2 release is our integration with the parts of MLflow that let you serve your models. We previously added MLflow Tracking; now we also hook into MLflow Models, the standard format for packaging machine learning models, so that you can deploy them for real-time serving with a range of deployment tools. With the new integration you can deploy your models to a local deployment server.
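To make the mechanics concrete, here is a minimal sketch of the plain MLflow workflow that the integration builds on (this is vanilla MLflow, not ZenML's wrapper): log a model in MLflow's standard packaging format, then serve it from a local REST endpoint.

```python
# Vanilla MLflow sketch (not ZenML's wrapper): package a model in the
# standard MLflow Models format, then serve it locally over REST.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=200).fit(X, y)

with mlflow.start_run() as run:
    # Saves the model plus environment metadata in the MLflow Models format.
    mlflow.sklearn.log_model(model, artifact_path="model")

# The packaged model can now be served as a local prediction endpoint:
#   mlflow models serve -m "runs:/<run_id>/model" --port 8000
print(f"Model logged under runs:/{run.info.run_id}/model")
```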

This is the foundation for the obvious next step: non-local deployments using tools like KServe and BentoML. (Community votes directed us towards MLflow first, but we know there are several other commonly used options.)

As part of this new feature, we added a new concept: the ‘service’. A service extends the ZenML pipeline paradigm to cover long-running processes and workflows; you are no longer limited to run-to-completion pipelines or mini-jobs. With services you can serve an artifact created by a pipeline and have it reflected in a running component that you can interact with after the fact. For machine learning, this is what enables continuous model deployment.
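Because the deployment service keeps running after the pipeline finishes, you can query it like any other web service. As a hedged illustration, the local MLflow server exposes an /invocations scoring endpoint; the port and JSON payload layout below are assumptions that depend on your MLflow version.

```python
# Hedged sketch: query a running local MLflow deployment server.
# The port and payload layout are assumptions; the exact JSON format
# depends on your MLflow version.
import requests

response = requests.post(
    "http://127.0.0.1:8000/invocations",
    headers={"Content-Type": "application/json; format=pandas-split"},
    json={"columns": ["f0", "f1", "f2", "f3"], "data": [[5.1, 3.5, 1.4, 0.2]]},
)
print(response.json())  # the model's predictions
```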

The MLflow deployment integration means you can implement a workflow where, for example, you train a model, make a decision based on the results (such as checking whether the new model beats the current best), and immediately see the updated model served in production as a prediction service.
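In ZenML terms, such a workflow might be wired up roughly like the sketch below. The step bodies and the `deployer` step are illustrative placeholders rather than the exact integration API; the mlflow_deployment example mentioned below shows the real thing.

```python
# A hedged sketch of a continuous deployment pipeline in the 0.6.x style.
# The step bodies and the `deployer` step are illustrative placeholders.
from zenml.pipelines import pipeline
from zenml.steps import step


@step
def trainer() -> float:
    """Train a model; reduced to returning an accuracy for brevity."""
    return 0.92


@step
def deployment_trigger(accuracy: float) -> bool:
    """Decide whether the newly trained model should go to production."""
    return accuracy > 0.9


@step
def deployer(deploy: bool) -> None:
    """Stand-in for the MLflow deployer step shipped by the integration."""
    if deploy:
        print("Updating the local MLflow deployment server...")


@pipeline
def continuous_deployment_pipeline(trainer, deployment_trigger, deployer):
    accuracy = trainer()
    deploy = deployment_trigger(accuracy)
    deployer(deploy)


continuous_deployment_pipeline(
    trainer=trainer(),
    deployment_trigger=deployment_trigger(),
    deployer=deployer(),
).run()
```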

We’re really excited about the production use cases that this feature enables. To learn more, check out the new documentation page that guides you through continuous training and continuous deployment. The mlflow_deployment example is also a great way to understand how to use this new feature. (Use the CLI to explore and interact with the examples, as shown below.)
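If you want to try it right away, the example can be fetched straight from the CLI. (The subcommand names below reflect the 0.6.x-era CLI and may differ in later versions.)

```bash
zenml example list                    # list the bundled examples
zenml example pull mlflow_deployment  # fetch the example code locally
```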

Improving our CLI with rich

Our CLI tables look much nicer with 'rich'

If you’ve been using the ZenML CLI for a while, you’ll know that it was functional but not always delightful. We’ve taken some time to make it more pleasant to use, applying rich to give a visual uplift to most user-facing parts of the zenml terminal interface.

Tables are easier to read, spinners conceal log messages you didn’t really need to see, and tracebacks from errors raised while using ZenML are now much more informative and easier to parse. Now that rich is part of our dependencies, it will be easier to keep improving the CLI going forward.
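To give a flavor of what rich brings to the terminal (this is plain rich, not ZenML's internal code), here is how a table like the ones the CLI now prints can be rendered:

```python
# Plain rich example (not ZenML's internal code): render a formatted
# table in the terminal, similar to the output of `zenml stack list`.
from rich.console import Console
from rich.table import Table

table = Table(title="Stacks")
table.add_column("NAME")
table.add_column("ORCHESTRATOR")
table.add_column("ARTIFACT STORE")
table.add_row("local_stack", "local", "local")

Console().print(table)
```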

We’ll be writing more about how we integrated with rich on the blog in the coming days, so stay tuned for that!

🗒 Documentation Updates

As the codebase and functionality of ZenML grow, we want to make sure our documentation stays clear, up-to-date and easy to use. We made a number of changes in this release to improve your experience in this regard; see the full release notes for the details.

➕ Other Updates, Additions and Fixes

🙌 Community Contributions

We received a contribution from Rasmus Halvgaard, in which he fixed a number of documentation errors and redundancies in our codebase. Thank you, Rasmus!

Contribute to ZenML!

Join our Slack to let us know what you think we should build next!

Keep your eyes open for future releases, and vote for your favorite features on our roadmap to make sure they get implemented as soon as possible.

[Photo by Hybrid on Unsplash]

