Last updated: November 22, 2022.
It’s been a while since we last released a new feature. That’s because we’ve been busy over the past month with the Month of MLOps competition and paying down technical debt after our major 0.20.0 release.
This time around, we are back to shipping new features to you!
Let’s dive right into the changes.
The BentoML integration has been on our radar for some time now, and we finally took the time to flesh it out with help from our community contributor Timothy Cvetko and from the BentoML team: Aaron Pham, Bozhao Yu, Sean Sheng, and Bo Jiang.
The new BentoML integration includes a BentoML model deployer component that lets you deploy models built with major machine learning frameworks (e.g., PyTorch, TensorFlow, Hugging Face) on your local machine and in the cloud.
We will showcase the BentoML integration in our next community hour (23rd Nov 2022, 5:30PM CET). Want to see it in action? Join us here.
In the meantime, check out the BentoML example on our repo.
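If you want to try the new component right away, a minimal setup looks roughly like the following. This is a sketch based on ZenML's usual CLI conventions; the component name `bentoml_deployer` is illustrative, so check the BentoML integration docs for the authoritative commands:

```shell
# Install the BentoML integration and its dependencies
zenml integration install bentoml -y

# Register a BentoML model deployer (illustrative name) and
# add it to your active stack as the model deployer component
zenml model-deployer register bentoml_deployer --flavor=bentoml
zenml stack update -d bentoml_deployer
```

With the deployer in your stack, a pipeline's deployment step can hand a trained model off to BentoML for serving.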
The previous Airflow orchestrator was limited to local runs and came with several other unpleasant constraints, which made it a pain to work with. So we’ve completely rewritten it. Now, it works both locally and with remote Airflow deployments!
Watch the demo video below to see the revamped Airflow orchestrator in action.
Also check out the example of the brand-new Airflow orchestrator here.
The revamped Airflow orchestrator comes with a breaking change: it now requires a newer version of Airflow, as well as Docker, to be installed.
You can simply run `zenml integration install airflow` to update your installation to the correct versions.
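Once the integration is installed, registering an Airflow-backed stack follows ZenML's standard stack conventions. The stack and component names below are illustrative assumptions, not prescribed names:

```shell
# Register an Airflow orchestrator component (illustrative name)
zenml orchestrator register airflow_orchestrator --flavor=airflow

# Create a stack that uses it alongside the default artifact store,
# then make that stack active
zenml stack register airflow_stack -o airflow_orchestrator -a default
zenml stack set airflow_stack
```

Any pipeline you run with this stack active will then be scheduled through Airflow instead of the local orchestrator.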
You can now use the ZenML Label Studio integration with non-local (i.e. deployed) instances. For more information, see the Label Studio documentation. The Label Studio example walks through how you can set it up on cloud infrastructures like Azure, GCP, and AWS.
We fixed the Spark example and it now works again end-to-end.
We also included a fix that speeds up the data sync from the MLMD database to the ZenML server.
As usual, we also made various minor improvements which you can view here.
We are grateful to have the following new contributors in this release:
We would like to also acknowledge the BentoML team for supporting us by reviewing the PR and BentoML API usage, especially:
And, of course, thanks to our community contributor @Timothy102 for initiating the integration effort.
Thank you for helping us make ZenML better and sharing it with the community.
If you find any bugs or something doesn’t work the way you expect, please let us know in Slack, or open a GitHub issue if you prefer. We welcome your feedback and thank you for your support!