As part of my journey as a VMware vExpert 2025, I've started exploring labs that combine Artificial Intelligence (AI), Machine Learning (ML), and the Tanzu Platform.
This first lab gave me a hands-on view of how to host, serve, and manage AI/ML models on private infrastructure, while also introducing the essential concepts behind MLOps, the DevOps-flavored discipline tailored to machine learning.
Let me take you through the highlights.
Part 1: AI, ML, and MLOps - A Global View
Before diving into VMware Tanzu, the lab walks through the foundations of AI and ML:
- Artificial Intelligence (AI): Systems that simulate human intelligence.
- Machine Learning (ML): Algorithms that learn from data without explicit programming.
- Deep Learning (DL): Neural networks loosely inspired by the human brain.
- Generative AI: Models that produce text, images, and more from learned patterns.
What about MLOps?
- MLOps: Applying DevOps principles to ML: automating the deployment, monitoring, and retraining of models.
- DLOps: Focused specifically on deep learning models.
- LLMOps: Targeted at Large Language Models and Generative AI.
The Key Personas
In any ML project, multiple roles collaborate:
- ML Platform Engineer: Builds and manages the environment.
- MLOps Engineer: Operationalizes models into production pipelines.
- Data Scientist: Designs, trains, and experiments with models.
And of course, depending on company size, roles may overlap or diversify (Data Engineer, ML Engineer, AI Engineer…).
Takeaway: MLOps is iterative. Models must adapt to data drift and concept drift, making monitoring and retraining essential.
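To make the drift idea concrete, here is a minimal, framework-free sketch of a drift check using the Population Stability Index (PSI), comparing a reference distribution against live data. In a real pipeline a tool such as Evidently handles this; the bucket count and the 0.25 threshold below are illustrative rule-of-thumb assumptions, not values from the lab.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a reference and a live sample.

    Illustrative rule of thumb: PSI < 0.1 -> stable,
    0.1-0.25 -> moderate drift, > 0.25 -> significant drift.
    """
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0
    eps = 1e-6  # floor for empty buckets, avoids log(0)

    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        return [max(c / len(sample), eps) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Reference distribution vs. a shifted "production" sample
reference = [i / 100 for i in range(100)]           # uniform on [0, 1)
production = [0.5 + i / 200 for i in range(100)]    # shifted upward

print(psi(reference, reference))          # identical data -> 0.0
print(psi(reference, production) > 0.25)  # drifted data -> True
```

When the index crosses the chosen threshold, that is the signal to investigate and, if confirmed, retrain, which is exactly the iterative loop the takeaway describes.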
Part 2: VMware Tanzu in Action
After the theory, the lab shifts into practice with VMware Tanzu.
The use case: build an image classification platform (the CIFAR-10 dataset: 60,000 labeled images across 10 classes).
The Required Stack
To achieve this, the lab sets up a full MLOps pipeline on Tanzu, including:
- Experimentation environment (Jupyter notebooks and alternatives).
- Pipelines and orchestration with Argo Workflows.
- Model registry and versioning with MLflow.
- Data catalog with DataHub.
- ML observability with Evidently.
- CI/CD integration (GitOps-ready) for automated workflows.
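To give a feel for what the model registry piece of this stack does, here is a deliberately simplified, in-memory sketch of version tracking and stage promotion. The class and method names are my own invention for the example (they are not MLflow's API), and the model name and artifact URIs are made up; MLflow provides the same ideas (versions, stages, promotion) as a persistent, multi-user service.

```python
from dataclasses import dataclass, field

@dataclass
class ToyModelRegistry:
    """Tiny in-memory stand-in for a model registry (illustrative only)."""
    versions: dict = field(default_factory=dict)  # model name -> list of entries

    def register(self, name, artifact_uri, metrics):
        entries = self.versions.setdefault(name, [])
        entry = {
            "version": len(entries) + 1,
            "artifact_uri": artifact_uri,
            "metrics": metrics,
            "stage": "None",
        }
        entries.append(entry)
        return entry["version"]

    def promote(self, name, version, stage):
        # Archive whatever currently holds the target stage, then promote.
        for entry in self.versions[name]:
            if entry["stage"] == stage:
                entry["stage"] = "Archived"
        self.versions[name][version - 1]["stage"] = stage

    def latest(self, name, stage):
        for entry in reversed(self.versions[name]):
            if entry["stage"] == stage:
                return entry
        return None

registry = ToyModelRegistry()
v1 = registry.register("cifar10-classifier", "s3://models/run-1", {"accuracy": 0.78})
v2 = registry.register("cifar10-classifier", "s3://models/run-2", {"accuracy": 0.84})
registry.promote("cifar10-classifier", v1, "Production")
registry.promote("cifar10-classifier", v2, "Production")  # v1 gets archived

prod = registry.latest("cifar10-classifier", "Production")
print(prod["version"], prod["metrics"]["accuracy"])  # 2 0.84
```

The point of the registry in the pipeline is exactly this bookkeeping: every training run produces a new immutable version, and "what is in production" is a label you move, not a file you overwrite.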
All of this is built on Tanzu's cloud-agnostic foundation, meaning you can run ML workloads across any cloud or on-prem infrastructure, without becoming a Kubernetes guru.
Why Tanzu?
Because Tanzu provides:
- A unified way to deploy ML workloads.
- Flexibility to mix open-source tools.
- Scalability and governance with enterprise readiness.
Takeaway: Tanzu makes it possible to manage the end-to-end ML lifecycle, from experimentation to production, observability, and retraining.
Conclusion
This first lab was a perfect way to:
- Get familiar with AI/ML fundamentals
- Understand the roles and lifecycle of MLOps
- See how VMware Tanzu enables real ML projects
As a vExpert 2025, I'll keep sharing my journey with Tanzu and AI/ML here on LinkedIn and on my dedicated blog. Stay tuned for the next modules, where we'll go even deeper into hands-on MLOps with Tanzu.