Tag: mlops
-
Introducing mlstacks: a refreshed way to deploy MLOps infrastructure (01 Sep 2023)
We released an updated way to deploy MLOps infrastructure, building on the success of the `mlops-stack` repo and its stack recipes. All the new goodies are available via the `mlstacks` Python package. -
Launching MLOps Platform Sandbox: A Production-Ready MLOps Platform in an Ephemeral Environment (31 May 2023)
An easy way to deploy an ephemeral MLOps stack that includes ZenML, Kubeflow, MLflow, and a MinIO bucket. This one-stop sandbox gives users an interactive playground to explore pre-built pipelines and experiment with various MLOps tools, without the burden of infrastructure setup and management. -
ZenML's Month of MLOps Recap (22 Nov 2022)
The ZenML MLOps Competition ran from October 10 to November 11, 2022, and was a wonderful expression of open-source MLOps problem-solving. -
Transforming Vanilla PyTorch Code into Production Ready ML Pipeline - Without Selling Your Soul (27 Oct 2022)
Transform quickstart PyTorch code into a ZenML pipeline and add experiment tracking and a secrets manager component. -
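As a rough illustration of the first step, here is a minimal sketch that wraps a toy PyTorch training loop in a ZenML step; the model and step names are made up for the example, and the experiment tracker and secrets manager from the post are stack components configured outside the code, so they are not shown.

```python
# Sketch only: a plain PyTorch training loop wrapped in a ZenML step.
# Assumes a recent ZenML release that exposes `step` and `pipeline` at the top level.
import torch
from torch import nn
from zenml import pipeline, step


@step
def train_model(epochs: int = 3) -> nn.Module:
    """Train a toy regression model; stands in for the quickstart PyTorch code."""
    model = nn.Linear(10, 1)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.MSELoss()
    x, y = torch.randn(64, 10), torch.randn(64, 1)
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
    return model


@pipeline
def quickstart_pipeline():
    train_model()


if __name__ == "__main__":
    quickstart_pipeline()
```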
ZenML's Month of MLOps: Competition Announcement (26 Sep 2022)
Join us for a celebration of open-source MLOps, where you get to both express your creativity and solve a problem that is interesting to you! Our MLOps Competition runs from October 10 to November 11, 2022. -
Keep the lint out of your ML pipelines! Use Deepchecks to build and maintain better models with ZenML! (06 Sep 2022)
Test automation is tedious enough in traditional software engineering, and the complexities of machine learning can make it even less appealing. Using Deepchecks with ZenML pipelines can get you started in the time it takes to read this article. -
Deploy your ML models with KServe and ZenML (04 Aug 2022)
How to use ZenML and KServe to deploy serverless ML models in just a few steps. -
ZenML sets up Great Expectations for continuous data validation in your ML pipelines (07 Jul 2022)
ZenML combines forces with Great Expectations to add data validation to the list of continuous processes automated with MLOps. Discover why data validation is an important part of MLOps and try the new integration with a hands-on tutorial. -
How to run production ML workflows natively on Kubernetes (29 Jun 2022)
Getting started with distributed ML in the cloud: How to orchestrate ML workflows natively on Amazon Elastic Kubernetes Service (EKS). -
Serverless MLOps with Vertex AI (27 Jun 2022)
How ZenML gives you the best of both worlds: serverless managed infrastructure without the vendor lock-in. -
Move over Kubeflow, there's a new sheriff in town: Github Actions 🤠 (20 Jun 2022)
This tutorial presents an easy and quick way to use GitHub Actions to run ML pipelines in the cloud. We showcase this functionality using Microsoft's Azure Cloud but you can use any cloud provider you like. -
Need an open-source data annotation tool? We've got you covered! (10 Jun 2022)
We put together a list of 48 open-source annotation and labeling tools to support different kinds of machine-learning projects. -
Podcast: ML Engineering with Ben Wilson (08 Jun 2022)
This week I spoke with Ben Wilson, author of 'Machine Learning Engineering in Action', a jam-packed guide to all the lessons that Ben has learned over his years working to help companies get models out into the world and run them in production. -
How to get the most out of data annotation (02 Jun 2022)
I explain why data labeling and annotation should be seen as a key part of any machine learning workflow, and how you probably don't want to label data only at the beginning of your process. -
Will they stay or will they go? Building a Customer Loyalty Predictor (27 May 2022)
We built an end-to-end production-grade pipeline using ZenML for a customer churn model that can predict whether a customer will remain engaged with the company or not. -
The Framework Way is the Best Way: the pitfalls of MLOps and how to avoid them (24 May 2022)
As our AI/ML projects evolve and mature, our processes and tooling also need to keep up with the growing demand for automation, quality and performance. But how can we possibly reconcile our need for flexibility with the overwhelming complexity of a continuously evolving ecosystem of tools and technologies? MLOps frameworks promise to deliver the ideal balance between flexibility, usability and maintainability, but not all MLOps frameworks are created equal. In this post, I take a critical look at what makes an MLOps framework worth using and what you should expect from one. -
All Continuous, All The Time: Pipeline Deployment Patterns with ZenML (11 May 2022)
Connecting model training pipelines to deploying models in production is seen as a difficult milestone on the way to achieving MLOps maturity for an organization. ZenML rises to the challenge and introduces a novel approach to continuous model deployment that enables a smooth transition from experimentation to production. -
Predicting how a customer will feel about a product before they even ordered it (20 Apr 2022)
We built an end-to-end continuous deployment pipeline using ZenML for a customer satisfaction model that uses a customer's historical data to predict the review score for their next order or purchase. -
'It's the data, silly!' How data-centric AI is driving MLOps (07 Apr 2022)
ML practitioners today are embracing data-centric machine learning, because of its substantive effect on MLOps practices. In this article, we take a brief excursion into how data-centric machine learning is fuelling MLOps best practices, and why you should care about this change. -
Podcast: Open-Source MLOps with Matt Squire (31 Mar 2022)
This week I spoke with Matt Squire, the CTO and co-founder of Fuzzy Labs, where they help partner organizations think through how best to productionise their machine learning workflows. -
Podcast: Practical Production ML with Emmanuel Ameisen (18 Mar 2022)
This week I spoke with Emmanuel Ameisen, a data scientist and ML engineer currently based at Stripe. Emmanuel also wrote an excellent O'Reilly book called 'Building Machine Learning Powered Applications', a book I find myself often returning to for inspiration and that I was pleased to get the chance to reread in preparation for our discussion. -
Everything you ever wanted to know about MLOps maturity models (07 Mar 2022)
An exploration of some frameworks created by Google and Microsoft that can help you think through improvements to how machine learning models get developed and deployed in production. -
Podcast: From Academia to Industry with Johnny Greco (03 Mar 2022)
This week I spoke with Johnny Greco, a data scientist working at Radiology Partners. Johnny transitioned into his current work from a career as an academic — working in astronomy — where he also worked in the open-source space to build a really interesting synthetic image data project. -
How to painlessly deploy your ML models with ZenML (02 Mar 2022)
Connecting model training pipelines to deploying models in production is regarded as a difficult milestone on the way to achieving MLOps maturity for an organization. ZenML rises to the challenge and introduces a novel approach to continuous model deployment that enables a smooth transition from experimentation to production. -
Podcast: The Modern Data Stack with Tristan Zajonc (10 Feb 2022)
Tristan and Alex discuss where machine learning and AI are headed in terms of the tooling landscape. Tristan outlined a vision of a higher abstraction level, something he's working on making a reality as CEO at Continual. -
Podcast: Neurosymbolic AI with Mohan Mahadevan (27 Jan 2022)
Mohan and Alex discuss neurosymbolic AI and the implications of a shift towards that as a core paradigm for production AI systems. In particular, we discuss the practical consequences of such a shift, both in terms of team composition as well as infrastructure requirements. -
10 Reasons ZenML ❤️ Evidently AI's Monitoring Tool (21 Jan 2022)
ZenML recently added an integration with Evidently, an open-source tool that allows you to monitor your data for drift (among other things). This post showcases the integration alongside some of the other parts of Evidently that we like. -
Podcast: Monitoring Your Way to ML Production Nirvana with Danny Leybzon (16 Dec 2021)
We discuss how to monitor models in production, and how it helps you in the long run. -
Why you should be using caching in your machine learning pipelines (07 Dec 2021)
Use caches to save time in your training cycles, and potentially to save some money as well! -
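For a flavour of what this looks like in practice, here is a minimal sketch using ZenML-style step caching; it assumes a recent ZenML API, and the step names are invented for the example.

```python
from zenml import pipeline, step


@step  # cached by default: a re-run skips this step if its code and inputs are unchanged
def load_data() -> list:
    return [1.0, 2.0, 3.0]


@step(enable_cache=False)  # always re-executes, e.g. for non-deterministic training
def train(data: list) -> float:
    return sum(data) / len(data)


@pipeline
def training_pipeline():
    train(load_data())


if __name__ == "__main__":
    training_pipeline()  # a second run reuses the cached output of load_data
```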
Podcast: Practical MLOps with Noah Gift (02 Dec 2021)
We discuss the role of MLOps in an organization, some deployment war stories from his career as well as what he considers to be 'best practices' in production machine learning. -
Lazy Loading Integrations in ZenML (26 Nov 2021)
How integrations work under the hood to connect you to the tools you know and love. -
Pipeline Conversations: Our New Podcast (19 Nov 2021)
We launched a podcast to have conversations with people working to productionize their machine learning models and to learn from their experience. -
Why ML should be written as pipelines from the get-go (31 Mar 2021)
Eliminate technical debt with iterative, reproducible pipelines. -
MLOps: Learning from history (09 Nov 2020)
MLOps isn't just about new technologies and coding practices. Getting better at productionizing your models also likely requires some institutional and/or organisational shifts. -
Why ML in production is (still) broken - [#MLOps2020] (26 Jun 2020)
The MLOps movement and associated new tooling is starting to help tackle the very real technical debt problems associated with machine learning in production. -
Can you do the splits? (11 Jun 2020)
Splitting up datasets is part of the daily work of a data scientist, but there's more complexity and art to it than first meets the eye. -
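For a taste of that complexity, here is a small sketch (using scikit-learn, which the post may or may not use) contrasting a naive random split with a stratified one that preserves class balance on an imbalanced dataset.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy imbalanced labels: 90% class 0, 10% class 1.
X = np.arange(200).reshape(-1, 1)
y = np.array([0] * 180 + [1] * 20)

# Naive random split: the rare class can end up badly under-represented in the test set.
_, _, _, y_test_naive = train_test_split(X, y, test_size=0.2, random_state=0)

# Stratified split: class proportions are preserved in both train and test sets.
_, _, _, y_test_strat = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

print("rare-class share, naive:     ", y_test_naive.mean())
print("rare-class share, stratified:", y_test_strat.mean())
```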
A case for declarative configurations for ML training (17 May 2020)
Using config files to specify infrastructure for training isn't widely practiced in the machine learning community, but it helps a lot with reproducibility. -
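A minimal sketch of the idea, assuming a hypothetical config with made-up field names: the training run is described declaratively, and the code only reads the description, which makes the run easy to reproduce elsewhere.

```python
# Sketch: drive a training run from a declarative config.
# The keys below are illustrative, not taken from the post.
import yaml

CONFIG = """
model:
  type: logistic_regression
  learning_rate: 0.01
training:
  epochs: 10
  batch_size: 32
infrastructure:
  accelerator: cpu
  workers: 1
"""


def train_from_config(cfg: dict) -> None:
    # A real implementation would build the model and launch training here;
    # the config alone is enough to reproduce the run on another machine.
    print(
        f"Training {cfg['model']['type']} for {cfg['training']['epochs']} epochs "
        f"on {cfg['infrastructure']['workers']} x {cfg['infrastructure']['accelerator']}"
    )


if __name__ == "__main__":
    train_from_config(yaml.safe_load(CONFIG))
```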
Why deep learning development in production is (still) broken (01 Mar 2020)
Software engineering best practices have not been brought into the machine learning space, with the side-effect that there is a great deal of technical debt in these code bases. -
Distributed PCA using TFX (27 Feb 2020)
We use PCA to reduce the dimension of input vectors while retaining maximal variance.
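To make the goal concrete, here is a plain-NumPy sketch of the underlying idea (not the distributed TFX implementation the post describes): projecting onto the top principal components keeps the directions of maximal variance.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))  # 1000 input vectors of dimension 50

# Centre the data and eigendecompose the covariance matrix.
X_centred = X - X.mean(axis=0)
cov = np.cov(X_centred, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)

# Keep the top-k eigenvectors (largest eigenvalues = most variance retained).
k = 10
order = np.argsort(eigvals)[::-1][:k]
components = eigvecs[:, order]
X_reduced = X_centred @ components  # shape (1000, 10)

explained = eigvals[order].sum() / eigvals.sum()
print(f"Reduced to {k} dims, retaining {explained:.1%} of the variance")
```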