MetaCLIP is a zero-shot classification and embedding model developed by Meta AI.
First, install Autodistill and Autodistill MetaCLIP:
pip install autodistill autodistill-metaclip
Then, run:
from autodistill_metaclip import MetaCLIP
from autodistill.detection import CaptionOntology

# define an ontology to map class names to our MetaCLIP prompt
# the ontology dictionary has the format {caption: class}
# where caption is the prompt sent to the base model, and class is the label that will
# be saved for that caption in the generated annotations
# then, load the model
base_model = MetaCLIP(
    ontology=CaptionOntology(
        {
            "person": "person",
            "a forklift": "forklift"
        }
    )
)
results = base_model.predict("./image.png")
print(results)
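Under the hood, a CLIP-style zero-shot classifier scores the image against each caption in the ontology and returns the best match, mapped back to its class label. A minimal sketch of that selection step, using made-up similarity scores (the numbers and variable names here are illustrative, not MetaCLIP's actual output):

```python
import math

# Hypothetical image-caption similarity scores, as a CLIP-style
# model might produce for one image (these values are made up).
scores = {"person": 21.7, "a forklift": 24.3}

# Ontology mapping from caption (prompt) to class label, as above.
ontology = {"person": "person", "a forklift": "forklift"}

# Softmax over the scores to get per-caption confidences.
exps = {caption: math.exp(s) for caption, s in scores.items()}
total = sum(exps.values())
confidences = {caption: e / total for caption, e in exps.items()}

# The predicted class is the label of the highest-scoring caption.
best_caption = max(confidences, key=confidences.get)
print(ontology[best_caption], round(confidences[best_caption], 3))
# → forklift 0.931
```

Separating caption (what the model sees) from class (what gets saved) lets you use descriptive prompts like "a forklift" while keeping clean labels like "forklift" in the generated annotations.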