Last updated: November 3, 2022.
This release blog describes the changes in three releases: v0.13.0 (major release), plus v0.13.1 and v0.13.2 (minor releases). v0.13.0 brings the first iteration of our Apache Spark integration, which opens up the possibility of running large-scale workloads on single-node machines or clusters. This release also allows you to run custom code along with your models using KServe or Seldon. Lastly, we introduce Stack Recipes as a convenient way to spin up perfectly configured infrastructure with ease. v0.13.1 and v0.13.2 include several bug fixes and quality-of-life improvements for ZenML users.
Version 0.13.0 is chock-full of exciting features.
View the full release notes here.
Version 0.13.1 comes with several quality of life improvements:
- Specify the execution order of steps explicitly with `step_b.after(step_a)`, even when no data dependency connects them.
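This ordering hint is easiest to understand in terms of the pipeline's execution DAG. The sketch below is purely illustrative (it uses Python's standard-library `graphlib`, not ZenML's internals) and shows what an explicit `after` edge does to the schedule:

```python
# Illustrative sketch only: what a constraint like `step_b.after(step_a)`
# means to an orchestrator. It adds an edge to the execution DAG, which is
# then topologically sorted to decide the run order.
from graphlib import TopologicalSorter

# Data dependencies alone: step_a and step_b are independent,
# and a downstream trainer consumes the outputs of both.
dag = {"step_a": set(), "step_b": set(), "trainer": {"step_a", "step_b"}}

# `step_b.after(step_a)` adds an explicit ordering edge.
dag["step_b"].add("step_a")

order = list(TopologicalSorter(dag).static_order())
print(order)  # step_a is now guaranteed to appear before step_b
```

Without the extra edge, an orchestrator would be free to run `step_a` and `step_b` concurrently or in either order.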
View the full release notes here.
Version 0.13.2 comes with a new local Docker orchestrator and many other improvements and fixes.
View the full release notes here.
As always, we’ve also included various bug fixes and lots of improvements to the documentation and our examples.
To date, Spark has been the most requested feature on our Roadmap.
We heard you! And in this release, we present to you the long-awaited Spark integration!
With Spark, this release brings distributed processing into the ZenML toolkit. You can now run heavy data-processing workloads across machines and clusters as part of your MLOps pipeline, and leverage all the distributed processing goodies that come with Spark.
We showcased how to use it in our community meetup on 17th August 2022 👇
Run the Spark integration example here.
We continue our streak in supporting model deployment in ZenML by introducing a feature that allows you to deploy custom code alongside your models on KServe or Seldon.
With this, you can now ship the model with the pre-processing and post-processing code to run within the deployment environment.
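As an illustration of the pattern (a hypothetical sketch, not the actual KServe or Seldon API), a custom deployment bundles pre-processing and post-processing code around the model's prediction call, so the whole unit ships to the serving environment together:

```python
# Hypothetical sketch of the "custom code alongside the model" pattern.
# Class and method names are illustrative, not the KServe/Seldon API.
class CustomDeployment:
    def __init__(self, model):
        self.model = model

    def preprocess(self, raw):
        # Hypothetical input transformation: scale raw values into [0, 1].
        return [x / 10.0 for x in raw]

    def postprocess(self, outputs):
        # Hypothetical output mapping: turn raw scores into labels.
        return ["positive" if o > 0.5 else "negative" for o in outputs]

    def predict(self, raw):
        features = self.preprocess(raw)
        outputs = self.model(features)
        return self.postprocess(outputs)

# Stand-in model: doubles each feature.
model = lambda xs: [x * 2 for x in xs]
server = CustomDeployment(model)
print(server.predict([1, 9]))  # ['negative', 'positive']
```

The benefit is that clients send raw inputs and receive human-readable outputs; no caller needs to replicate the transformation logic.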
We showcased how to deploy custom code with a model during our community meetup on 24th August 2022
Run the example here.
Spinning up and configuring infrastructure is a difficult part of the MLOps journey and can easily become a barrier to entry.
Worry not! Now you don’t need to get lost in the infrastructure configuration details.
Using our mlops-stacks repository, it is now possible to spin up perfectly configured infrastructure, along with the corresponding ZenML stack, straight from the ZenML CLI.
View the demo recorded during our community meetup on 31st August 2022 👇
Check out all the Stack Recipes here.
This release introduces a breaking change to the CLI: access to stack component-specific resources for secrets managers and model deployers is now more explicitly tied to the component itself.
Here is how:
```shell
# `zenml secret register ...` becomes
zenml secrets-manager secret register ...

# `zenml served_models list` becomes
zenml model-deployer models list
```
The fixes and improvements across these releases fall into the following areas:

- Model Deployment
- Spark Integration
- Tekton Orchestrator
- Materializer
- CLI Improvements (including the `secret`, `feature`, and `served-models` commands) by @strickvl in #833
- Secrets
- README page improvements
- Link checker and broken links
- Misc (`config.yaml`) by @strickvl in #827

Join our Slack to let us know if you have an idea for a feature or something you’d like to contribute to the framework.
We have a new home for our roadmap where you can vote on your favorite upcoming feature or propose new ideas for what the core team should work on. You can vote without needing to log in, so please do let us know what you want us to build!