For a general introduction, check Core Concepts.

Pre-requisites:

For a quick end-to-end example of generating Reports, check the Quickstart for ML or LLM.

Imports

Import the Metrics and Presets you plan to use.

from evidently.future.report import Report
from evidently.future.metrics import *
from evidently.future.presets import *

You can use Metric Presets, which are pre-built Reports that work out of the box, or create a custom Report by selecting Metrics one by one.

Presets

Available Presets. Check available evals in the Reference table.

To generate a template Report, simply pass the selected Preset to the Report.

Single dataset. To generate the Data Summary Report for a single dataset:

report = Report([
    DataSummaryPreset()
])

my_eval = report.run(eval_data_1, None)
my_eval
#my_eval.json

Two datasets. To generate the Data Drift Report, which requires two datasets, pass the second one as the reference when you run it:

report = Report([
    DataDriftPreset()
])

my_eval = report.run(eval_data_1, eval_data_2)
my_eval
#my_eval.json

Note that in this case the order matters: the first dataset (eval_data_1) is the current data you evaluate; the second (eval_data_2) is the reference dataset that serves as the baseline for drift detection.

If nothing else is specified, the Report will run with the default parameters for all columns in the dataset. You can also pass custom parameters to some Presets.

Combine Presets. You can also include multiple Presets in the same Report. List them one by one.

report = Report([
    DataDriftPreset(), 
    DataSummaryPreset()
])

my_eval = report.run(eval_data_1, eval_data_2)
my_eval
#my_eval.json

Limit columns. You can limit the columns to which the Preset is applied.

report = Report([
    DataDriftPreset(columns=["target", "prediction"])
])

my_eval = report.run(eval_data_1, eval_data_2)
my_eval
#my_eval.json

You can view the Report in Python, export the outputs (HTML, JSON, Python dictionary), or upload it to the Evidently platform. Check the Output formats section for details.
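For example, here is a minimal sketch of common export calls; the exact method names (json, dict, save_html) are assumptions and may differ by Evidently version, so check the Output formats section:

my_eval  # renders the Report inline in a notebook

my_eval.json()                    # results as a JSON string
my_eval.dict()                    # results as a Python dictionary
my_eval.save_html("report.html")  # save a standalone HTML file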

Custom Report

Available Metrics and parameters. Check available evals in the Reference table.

Custom Report. To create a custom Report, simply list the Metrics one by one. You can combine both dataset-level and column-level Metrics, and mix Presets and individual Metrics in the same Report. When you use a column-level Metric, you must specify the column it refers to.

report = Report([
    ColumnCount(), 
    ValueStats(column="target")
])

my_eval = report.run(eval_data_1, None)
my_eval
#my_eval.json

Generating multiple column-level Metrics: You can use a helper function to easily generate multiple column-level Metrics for a list of columns. See the page on Metric Generator.
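Before reaching for the helper, you can get the same effect with a plain Python list comprehension. The column names below are placeholders for illustration:

columns_to_check = ["age", "salary", "city"]  # placeholder column names

report = Report([
    ValueStats(column=col) for col in columns_to_check
])

my_eval = report.run(eval_data_1, None)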

Metric Parameters. Metrics can have optional or required parameters.

For example, the data drift detection algorithm automatically selects a method, but you can override this by specifying your preferred method (Optional).

report = Report([
   ValueDrift(column="target", method="psi")
])

To calculate the Precision at K for a ranking task, you must always pass the k parameter (Required).

report = Report([
   PrecisionTopK(k=10)
])

What’s next?

You can also add conditions to Metrics: check the Tests guide.
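For a quick preview, a condition attached to a Metric could look like the sketch below. The tests argument and the gte helper import are assumptions here; check the Tests guide for the exact syntax:

from evidently.future.tests import gte

report = Report([
    RowCount(tests=[gte(100)])  # expect at least 100 rows in the evaluated dataset
])

my_eval = report.run(eval_data_1, None)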