You can score your text using ML models from Hugging Face. This lets you apply any criterion the source model supports, e.g., classifying texts by emotion. There are two options:

  • Ready-to-use descriptors that wrap a specific model,

  • A general interface to call other suitable models you select.

Prerequisites:

  • You know how to use descriptors to evaluate text data.

Imports

from evidently.future.datasets import Descriptor
from evidently.future.descriptors import HuggingFace, HuggingFaceToxicity

Built-in ML evals

Available descriptors. Check all the available built-in evals in the reference table.

There are built-in evaluators for some models. You can call them like any other descriptor:

eval_df.add_descriptors(descriptors=[
    HuggingFaceToxicity("question", toxic_label="hate", alias="Toxicity") 
])

Custom ML evals

You can also add any custom check directly as a Python function.

Alternatively, use the general HuggingFace() descriptor to call a specific named model. The model you use must return a numerical score or a category for each text in a column.

For example, to evaluate the “curiosity” expressed in a text:

eval_df.add_descriptors(descriptors=[
   HuggingFace("question",
       model="SamLowe/roberta-base-go_emotions", 
       params={"label": "curiosity"},
       alias="Curiosity"
   )
])

Call the result as usual:

eval_df.as_dataframe()

Example output:

Sample models

Here are some models you can call using the HuggingFace() descriptor.

Model: Emotion classification

  • Scores texts by 28 emotions.
  • Returns the predicted probability for the chosen emotion label.
  • Scale: 0 to 1.
  • HuggingFace Model

Example use:

HuggingFace("response", model="SamLowe/roberta-base-go_emotions", params={"label": "disappointment"}, alias="disappointment")

Required:
  • params={"label": "label"}, with any of the available labels: admiration, amusement, anger, annoyance, approval, caring, confusion, curiosity, desire, disappointment, disapproval, disgust, embarrassment, excitement, fear, gratitude, grief, joy, love, nervousness, optimism, pride, realization, relief, remorse, sadness, surprise, neutral.

Optional:
  • alias="name"

Model: Zero-shot classification

  • A natural language inference model.
  • Use it for zero-shot classification by user-provided topics.
  • List candidate topics as labels. You can provide one or several topics.
  • You can set a classification threshold: if the predicted probability falls below it, the “unknown” label is assigned.
  • Returns a label.
  • HuggingFace Model

Example use:

HuggingFace("response", model="MoritzLaurer/DeBERTa-v3-large-mnli-fever-anli-ling-wanli", params={"labels": ["science", "physics"], "threshold": 0.5}, alias="Topic")

Required:
  • params={"labels": ["label"]}

Optional:
  • params={"score_threshold": 0.7} (default: 0.5)
  • alias="name"

Model: GPT-2 text detection

  • Predicts whether a text is Real or Fake (generated by a GPT-2 model).
  • You can set a classification threshold: if the predicted probability falls below it, the “unknown” label is assigned.
  • Note that it is not usable as a detector for more advanced models like ChatGPT.
  • Returns a label.
  • HuggingFace Model

Example use:

HuggingFace("response", model="openai-community/roberta-base-openai-detector", params={"score_threshold": 0.7}, alias="fake")

Optional:
  • params={"score_threshold": 0.7} (default: 0.5)
  • alias="name"
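
To make the parameters above more concrete, here is a hedged, pure-Python sketch (not Evidently's actual implementation): a multi-label classifier returns a score for every label, a setting like params={"label": ...} selects the score for one chosen label, and a score threshold turns a score into either the label or “unknown”.

```python
# Hypothetical sketch (not Evidently's internal code): picking one label's
# probability from a multi-label classifier output, and mapping a score to
# a label or "unknown" via a threshold.

def pick_label_score(model_output, label):
    """Return the predicted probability for one chosen label."""
    for item in model_output:
        if item["label"] == label:
            return item["score"]
    return 0.0

def score_to_label(label, score, score_threshold=0.5):
    """Assign the label only if the score clears the threshold."""
    return label if score >= score_threshold else "unknown"

# Illustrative model output for a single text (scores are made up).
output = [
    {"label": "curiosity", "score": 0.81},
    {"label": "confusion", "score": 0.12},
    {"label": "neutral", "score": 0.05},
]

print(pick_label_score(output, "curiosity"))                   # 0.81
print(score_to_label("curiosity", 0.81, score_threshold=0.7))  # curiosity
print(score_to_label("curiosity", 0.41, score_threshold=0.7))  # unknown
```

This is only meant to clarify what the label and threshold parameters control; the actual descriptors handle model loading and batching for you.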

This list is not exhaustive; the HuggingFace() descriptor may support other models published on Hugging Face. The implemented interface generally works for models that:

  • Output a single number (e.g., predicted score for a label) or a label, not an array of values.

  • Can process raw text input directly.

  • Name labels using label or labels fields.

  • Use methods named predict or predict_proba for scoring.
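
Before wiring a new model into the HuggingFace() descriptor, you can sanity-check its output shape with a small helper like the one below. This is a hypothetical helper, not part of Evidently: it only verifies that a scoring function returns a single number or label per text, as the criteria above require.

```python
# Hypothetical compatibility check (not part of Evidently): verify that a
# scoring function returns exactly one scalar score or one label per text.

def returns_single_value_per_text(score_fn, sample_texts):
    for text in sample_texts:
        result = score_fn(text)
        # A compatible model yields a single number or a single label,
        # not an array of values.
        if not isinstance(result, (int, float, str)):
            return False
    return True

# Stand-in for a real model: a trivial keyword-based "toxicity" scorer.
def toy_scorer(text):
    return 1.0 if "hate" in text.lower() else 0.0

print(returns_single_value_per_text(toy_scorer, ["I hate this", "Nice day"]))  # True
print(returns_single_value_per_text(lambda t: [0.1, 0.9], ["some text"]))      # False
```

Running a check like this on a few sample texts (with the real model substituted for the toy scorer) can save a round of trial and error.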

However, since each model is implemented differently, we cannot provide a complete list of models with a compatible interface. We suggest testing candidate models yourself through trial and error. If you discover useful models, feel free to share them with the community on Discord. You can also open an issue on GitHub to request support for a specific model.