ML monitoring “hello world”
Need help? Ask on Discord.
1. Set up your environment
This quickstart shows both local open-source and cloud workflows.
You will run a simple evaluation in Python and explore results in Evidently Cloud.
1.1. Set up Evidently Cloud
- Sign up for a free Evidently Cloud account.
- Create an Organization when you log in for the first time, and copy your organization ID. (Link).
- Get an API token. Click the “Key” icon in the left menu, then generate and save the token. (Link).
1.2. Installation and imports
Install the Evidently Python library:
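A typical install from PyPI (run in a terminal, or prefix with `!` in a notebook cell):

```bash
pip install evidently
```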
Components to run the evals:
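The imports below are a sketch matching the rest of this quickstart; module paths follow a recent Evidently release and may differ in older versions:

```python
import pandas as pd
from sklearn import datasets

from evidently import Dataset, DataDefinition, Report
from evidently.presets import DataSummaryPreset
```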
Components to connect with Evidently Cloud:
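For the Cloud connection (in older releases the class may live under `evidently.ui.workspace.cloud`):

```python
from evidently.ui.workspace import CloudWorkspace
```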
1.3. Create a Project
Connect to Evidently Cloud using your API token:
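For example, with a placeholder token (substitute the token you saved in step 1.1; the URL shown is the default Cloud endpoint):

```python
ws = CloudWorkspace(
    token="YOUR_API_TOKEN",            # the token generated in step 1.1
    url="https://app.evidently.cloud",
)
```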
Create a Project within your Organization, or connect to an existing Project:
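A sketch using the workspace’s `create_project` / `get_project` methods; the project name, description, and IDs are placeholders:

```python
# create a new Project inside your Organization
project = ws.create_project("My quickstart project", org_id="YOUR_ORG_ID")
project.description = "Data quality demo on a toy dataset"
project.save()

# ...or connect to an existing Project by its ID instead
# project = ws.get_project("YOUR_PROJECT_ID")
```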
2. Prepare a toy dataset
Let’s import a toy dataset with tabular data:
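For example, the adult census dataset fetched from OpenML via scikit-learn:

```python
# load the "adult" census dataset from OpenML as a pandas DataFrame
adult_data = datasets.fetch_openml(name="adult", version=2, as_frame="auto")
adult = adult_data.frame
```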
Let’s split the data in two and introduce some artificial drift for demo purposes. The production data will include people with education levels unseen in the reference dataset:
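One way to split on the education column (plain pandas; the chosen education levels are arbitrary for the demo):

```python
# reference data: rows with education levels NOT in the selected list
adult_ref = adult[~adult.education.isin(["Some-college", "HS-grad", "Bachelors"])]

# "production" data: only the education levels unseen in the reference data
adult_prod = adult[adult.education.isin(["Some-college", "HS-grad", "Bachelors"])]
```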
Map the column types:
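A sketch of a `DataDefinition` for the adult dataset; the column lists match its schema, and the argument names follow a recent Evidently release:

```python
schema = DataDefinition(
    numerical_columns=[
        "education-num", "age", "capital-gain",
        "hours-per-week", "capital-loss", "fnlwgt",
    ],
    categorical_columns=[
        "education", "occupation", "native-country",
        "workclass", "marital-status",
    ],
)
```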
Create Evidently Datasets to work with:
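Wrapping both pandas DataFrames with the same data definition:

```python
# "current" data for the evaluation
eval_data_1 = Dataset.from_pandas(pd.DataFrame(adult_prod), data_definition=schema)

# reference data to compare against
eval_data_2 = Dataset.from_pandas(pd.DataFrame(adult_ref), data_definition=schema)
```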
3. Get a Report
Let’s get a summary of all columns in the dataset, and run auto-generated Tests to check data quality and compare core statistics between the two datasets:
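A minimal sketch using the `DataSummaryPreset`; in this API the first argument to `run` is treated as the current dataset and the second as the reference:

```python
report = Report(
    [DataSummaryPreset()],   # per-column summaries and dataset-level stats
    include_tests=True,      # auto-generate Tests against the reference data
)

my_eval = report.run(eval_data_1, eval_data_2)
```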
4. Explore the results
Upload the Report with summary results:
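Roughly, assuming the workspace `add_run` method; `include_data=False` keeps the raw rows out of the upload:

```python
# send the evaluation result to your Cloud Project
ws.add_run(project.id, my_eval, include_data=False)
```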
View the Report. Go to Evidently Cloud, open your Project, navigate to “Reports” in the left menu, and open the Report. You will see the summary with scores and Test results.
Get a Dashboard. As you run repeated evals, you may want to track the results over time. Go to the “Dashboard” tab in the left menu and enter “Edit” mode. Add a new tab, and select the “Columns” template.
You’ll see a set of panels that show column stats. Each has a single data point. As you log ongoing evaluation results, you can track trends and set up alerts.