Getting Your AI/ML Workloads Into the Kubeflow


Full Session Description

As someone once said, the story of enterprise machine learning is three weeks to develop the model, and over a year to deploy. Putting ML into production is not a straightforward process and, what’s more, the actual ML capability is just a tiny cog in the entire AI/ML engine. Along with the technical challenges, it’s vital that enterprises avoid creating silos between data scientists and operations engineers (e.g. SREs) if they’re to break the cycle of enterprise ML. What’s required are new platforms that promote collaboration: environments that deliver a set of core applications to efficiently develop, build, train, and deploy models. One of those is Kubeflow, an AI/ML lifecycle management platform for Kubernetes. Its capabilities make it easy to train and tune models and deploy ML workloads anywhere. This session will cover key customer and user pain points, before looking at how the core features of Kubeflow 1.0 address those challenges. To finish, we’ll take a peek into the future to consider possible enhancements, as well as where the opportunities for increased community participation lie.
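To give a flavor of the kind of workload the session discusses, here is a minimal sketch of a Kubeflow TFJob manifest, one of the training custom resources shipped with Kubeflow 1.0. The job name, replica count, and container image below are illustrative placeholders, not taken from the session:

```yaml
# Hypothetical distributed TensorFlow training job, run by
# Kubeflow's training operator on a Kubernetes cluster.
apiVersion: kubeflow.org/v1
kind: TFJob
metadata:
  name: mnist-train        # placeholder job name
spec:
  tfReplicaSpecs:
    Worker:
      replicas: 2          # two training workers for illustration
      template:
        spec:
          containers:
            - name: tensorflow
              image: example.registry/mnist-train:latest  # placeholder image
```

Applying such a manifest with `kubectl apply -f` asks Kubeflow, rather than the data scientist, to handle pod scheduling and lifecycle, which is the kind of collaboration between data science and operations the session argues for.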

Elvira Dzhuraeva

Technical Product Engineer AI/ML @ Cisco

About the author

Elvira Dzhuraeva is an AI/ML Technical Product Engineer at Cisco, where she leads cloud and on-premises machine learning and artificial intelligence strategy. She is a Technical Product Manager at Kubeflow and a member of the MLPerf community.
