AGENDA

FEATURED SESSIONS

Netflix Presents: A Human-Friendly Approach to MLOps

What does it take to excel at operating reliable ML models in production while researching several new approaches at once? How can we scale our impact in solving tough business problems through collaboration? Can we spend less time on engineering heavy lifting and more time on science? At Netflix, these questions have motivated us to build human-centric infrastructure for ML called Metaflow that enables scientists and engineers to focus on a wide variety of business problems instead of building infrastructure. In this talk, we will discuss how Metaflow accomplishes this by sharing an in-depth example of how one team uses it to help Netflix estimate audience size for titles on our service.
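
To make "human-centric" concrete, here is a minimal sketch of a flow written against Metaflow's public FlowSpec/@step API; the audience-sizing logic is a stand-in for illustration, not the team's actual pipeline.

```python
# A minimal sketch of a Metaflow flow. The fan-out/join shape is real
# Metaflow usage; the "estimation" itself is a placeholder.
from metaflow import FlowSpec, step

class AudienceFlow(FlowSpec):

    @step
    def start(self):
        # Artifacts assigned to self are versioned and snapshotted by Metaflow.
        self.titles = ["title_a", "title_b"]
        self.next(self.estimate, foreach="titles")

    @step
    def estimate(self):
        # Each foreach branch runs as its own task, locally or on remote compute.
        self.estimate_for_title = (self.input, 1_000_000)  # placeholder estimate
        self.next(self.join)

    @step
    def join(self, inputs):
        # Collect the per-title results from all branches.
        self.estimates = [inp.estimate_for_title for inp in inputs]
        self.next(self.end)

    @step
    def end(self):
        print(self.estimates)

if __name__ == "__main__":
    AudienceFlow()
```

Saved as audience_flow.py, `python audience_flow.py run` executes the steps; Metaflow handles state snapshotting and scheduling so the code stays focused on the science.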


Julie Pitt, Director, Data Science Platform, and Ashish Rastogi, Content Machine Learning Lead, Netflix


The Growth and Future of Kubeflow for ML

In December 2017, a small group of people from a handful of companies introduced Kubeflow, an open, cloud-native platform for machine learning. The project has since gained significant momentum, with hundreds of committers and thousands of commits and stars, and many companies are asking whether Kubeflow can help them bring their ML practices to the next level, and where Kubeflow is going next. This talk will discuss the growth of the Kubeflow ecosystem and its place in the lifecycle of ML development. We will highlight the features of our latest release (0.6), which include multi-user support, a UI and SDK for tracking metadata, and the graduation of some Kubeflow applications to 1.0, as well as the advantages of running Kubeflow on Anthos, Google’s hybrid and multi-cloud PaaS. We will provide concrete examples of how Kubeflow is developing new applications, such as Katib for hyperparameter tuning and Kubeflow Pipelines, to address gaps in the landscape. Finally, we will show how we are using Kubernetes and cloud-native technologies to glue these applications (as well as existing ones) into a cohesive platform aimed at meeting the needs of enterprises that want to leverage ML.
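
For readers unfamiliar with Kubeflow Pipelines, the sketch below defines a toy two-step pipeline with the kfp Python SDK of that era; the container images and output paths are placeholders, not a real workload.

```python
# A toy Kubeflow Pipelines definition (kfp SDK, v1-style API). Images and
# paths are placeholders; the point is how steps and data handoff are declared.
import kfp.dsl as dsl
import kfp.compiler as compiler

@dsl.pipeline(name="train-and-validate",
              description="Toy pipeline: preprocess, then train on the output.")
def train_pipeline(data_path: str = "gs://my-bucket/data.csv"):
    preprocess = dsl.ContainerOp(
        name="preprocess",
        image="example.com/preprocess:latest",        # placeholder image
        arguments=["--input", data_path],
        file_outputs={"features": "/out/features.txt"},
    )
    dsl.ContainerOp(
        name="train",
        image="example.com/train:latest",             # placeholder image
        arguments=["--features", preprocess.outputs["features"]],
    )

if __name__ == "__main__":
    # Compiles to a workflow spec the Kubeflow Pipelines UI can run.
    compiler.Compiler().compile(train_pipeline, "train_pipeline.yaml")
```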


Maulin Patel, GKE Product Manager, and Jeremy Lewi, Software Engineer, Google


Production-Grade, Maintainable MLOps

Have you ever struggled with maintaining separate environments to build, train, and serve ML models, and with orchestrating between them? While DevOps and GitOps have gained huge traction in recent years, many customers struggle to apply these practices to ML workloads.

This talk will focus on ways to effectively infuse AI into production-grade applications by establishing practices around model reproducibility, validation, versioning/tracking, and safe/compliant deployment. We will demonstrate how to run an E2E machine learning system using nothing more than Git, integrating DevOps, data, and ML pipelines, and show how to use multiple workload orchestrators in concert.

While the examples will be run using Azure Pipelines, Azure ML and Kubeflow, we will also show how to extend these platforms to any orchestration tool.
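
As one hedged illustration of the Git-driven approach, the script below is the kind of validation gate a Git-triggered pipeline (Azure Pipelines or any other orchestrator) could run before promoting a model; the metric, file paths, and policy are assumptions, not the speaker's actual setup.

```python
# Hypothetical CI gate: compare a freshly trained model's metrics against the
# baseline committed alongside the production model, and fail the pipeline
# (non-zero exit) on regression. Paths and metric names are assumptions.
import json
import sys

BASELINE = "metrics/baseline.json"    # checked in with the current prod model
CANDIDATE = "metrics/candidate.json"  # written by the training stage

def accuracy(path: str) -> float:
    with open(path) as f:
        return json.load(f)["accuracy"]

if __name__ == "__main__":
    base, cand = accuracy(BASELINE), accuracy(CANDIDATE)
    print(f"baseline={base:.4f} candidate={cand:.4f}")
    sys.exit(0 if cand >= base else 1)
```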



David Aronchick, Head of Open Source Machine Learning Strategy, Microsoft


The Architecture That Powers Twitter’s Feature Store

Twitter’s Feature Store serves as a catalog of ML features used at Twitter and an API for using features in an ML model. The Feature Store allows our ML teams to share, organize, discover, and leverage features for ML models. It consists of a collection of libraries and tools that provide ML teams with a uniform way of defining and accessing features. Some of the problems it has sought to address include feature sharing, discrepancies between offline and online training data, and model features being defined in multiple places. This talk will go over the motivation for building the Feature Store at Twitter and the architecture that powers it.
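
Twitter's Feature Store is internal, so the sketch below only illustrates the pattern the abstract describes (features defined once in a catalog and read through a single client, so offline and online access cannot diverge); every name in it is hypothetical.

```python
# Hypothetical illustration of the feature-store pattern: a single definition
# per feature, and one access path shared by training and serving.
from dataclasses import dataclass

@dataclass(frozen=True)
class FeatureDef:
    name: str            # catalog identifier, e.g. "user.follower_count"
    dtype: type
    description: str

# Defined once and registered in the shared catalog.
FOLLOWER_COUNT = FeatureDef("user.follower_count", int,
                            "Number of accounts following the user")

class FeatureClient:
    """One read path, so offline training data and online serving data
    come from the same definition and cannot silently diverge."""

    def __init__(self, store):
        self._store = store  # e.g. a key-value service or a batch snapshot

    def get(self, feature: FeatureDef, entity_id: str):
        return self._store[(feature.name, entity_id)]

# The same call works in a training job and in the online serving path.
client = FeatureClient({("user.follower_count", "u123"): 42})
print(client.get(FOLLOWER_COUNT, "u123"))
```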


Brittany Wills, Software Engineer, Twitter


Dan Sun, Senior Software Developer, Bloomberg

Introducing KFServing: Serverless Model Serving Across ML Frameworks 

Production-grade serving of ML models is a challenging task for data scientists. In this talk, we'll discuss how KFServing powers some real-world examples of inference in production at Bloomberg, which supports the business domains of NLP, computer vision, and time-series analysis. KFServing (https://github.com/kubeflow/kfserving) provides a Kubernetes Custom Resource Definition for serving ML models on arbitrary frameworks. It aims to solve 80% of model-serving use cases by providing performant, high-abstraction interfaces for common ML frameworks. It provides a consistent and richly featured abstraction that supports bleeding-edge serving features like CPU/GPU auto-scaling (including scale to and from zero), health checks, and canary rollouts. It also enables a simple, pluggable, and complete story for mission-critical ML, including inference graphs, model explainability, outlier detection, and payload logging.
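
For a concrete taste, here is a sketch of an InferenceService resource written as a Python dict and applied with the official Kubernetes client; the names and storage URIs are placeholders, and the field layout follows the v1alpha2 API that KFServing used around the time of this talk.

```python
# Sketch of a KFServing InferenceService with a 10% canary (v1alpha2 field
# layout; model URIs and names are placeholders).
from kubernetes import client, config

inference_service = {
    "apiVersion": "serving.kubeflow.org/v1alpha2",
    "kind": "InferenceService",
    "metadata": {"name": "sklearn-demo"},
    "spec": {
        "default": {
            "predictor": {"sklearn": {"storageUri": "gs://my-bucket/model"}},
        },
        # Route a slice of traffic to a new model version before full rollout.
        "canaryTrafficPercent": 10,
        "canary": {
            "predictor": {"sklearn": {"storageUri": "gs://my-bucket/model-v2"}},
        },
    },
}

# Assumes kubectl-style cluster access via the local kubeconfig.
config.load_kube_config()
client.CustomObjectsApi().create_namespaced_custom_object(
    group="serving.kubeflow.org", version="v1alpha2",
    namespace="default", plural="inferenceservices",
    body=inference_service,
)
```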



Joshua Patterson, General Manager, Data Science, NVIDIA

The RAPIDS Ecosystem – Scaling Accelerated Data Science

See how RAPIDS and the open source ecosystem are advancing data science. In this session, we will explore RAPIDS, the open source data science platform incubated by NVIDIA, and learn how to get started leveraging these open-source libraries for faster performance and easier development on GPUs. This includes the core libraries of RAPIDS (cuDF for data frames, cuML for machine learning, and cuGraph for graph analytics), BlazingSQL (a SQL engine built on top of cuDF), Nuclio (a Kubernetes serverless library with GPU support), Numba (a high-performance Python just-in-time compiler), and Dask (a Python distributed scheduler). See the latest engineering work and new release features (including benchmarks, roadmaps, and demos), and how all these libraries come together to make data science faster and easier than ever. Finally, hear how customers are leveraging RAPIDS in production, benefiting from early adoption, and outperforming CPU equivalents.
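
To make the GPU-accelerated, pandas-like claim concrete, here is a minimal cuDF sketch; it assumes a machine with an NVIDIA GPU and RAPIDS installed, and the data is invented for illustration.

```python
# Minimal cuDF example: the familiar pandas groupby/aggregate idiom,
# executed on the GPU. Requires an NVIDIA GPU with RAPIDS installed.
import cudf

df = cudf.DataFrame({
    "device": ["tv", "tv", "mobile", "mobile"],
    "watch_hours": [2.0, 3.5, 1.0, 0.5],
})

print(df.groupby("device").watch_hours.mean())
```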


Anna Povzner, Software Engineer, Confluent


Why Cloud Elasticity Is Harder Than You Think

Data science is about data, which puts ever more demand on serverless stateful services that deliver your streaming data, data lakes, and databases. As a cloud user, you want a truly elastic experience: resources are available as soon as you need them, and you pay only for the resources you use. No more capacity planning and no more difficult trade-offs. As a cloud vendor, you want to give your users a truly elastic experience, but you also need to make a profit. Is this even possible for stateful cloud services?

In this talk, Anna Povzner, tech lead for the Cloud Native Kafka team, will discuss the different models cloud vendors use to provide an elastic experience, the effort needed to achieve true elasticity, and the corners that sometimes get cut along the way. Most importantly, we’ll learn how to read the small print of any cloud service to avoid nasty surprises down the road.
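
To see why the user's and the vendor's incentives pull apart, consider a toy back-of-envelope comparison of the two billing models; every price and workload number below is invented purely for illustration.

```python
# Toy comparison: pre-provisioned capacity (billed on the peak you reserve) vs.
# a pay-per-use elastic service (billed on actual throughput, at a premium).
# All rates and workload numbers are made up for illustration.
HOURS = 24
peak_mb_s = 100           # provisioned capacity must cover the peak
avg_mb_s = 20             # elastic billing follows the actual average

provisioned_rate = 0.10   # $ per reserved MB/s-hour (assumed)
elastic_rate = 0.25       # $ per consumed MB/s-hour (assumed premium)

provisioned_cost = peak_mb_s * provisioned_rate * HOURS   # $240.00
elastic_cost = avg_mb_s * elastic_rate * HOURS            # $120.00

# For bursty workloads (peak >> average), elasticity wins despite the higher
# unit price; the vendor, not the user, absorbs the capacity-planning risk.
print(f"provisioned: ${provisioned_cost:.2f}  elastic: ${elastic_cost:.2f}")
```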

