Connect CLIP Comparison to other blocks to build a custom workflow
Use the OpenAI CLIP zero-shot classification model to classify images.
This block accepts an image and a list of text prompts. It then returns a
similarity score for each text prompt against the provided image.
This block is useful for classifying images without having to train a fine-tuned
classification model. For example, you could use CLIP to classify the type of vehicle
in an image, or to flag whether an image contains NSFW material.
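Under the hood, zero-shot classification with CLIP works by embedding the image and each text prompt into a shared vector space, then scoring prompts by cosine similarity. The sketch below illustrates that scoring step only; the function name is illustrative, and the random vectors stand in for embeddings that CLIP's image and text encoders would actually produce.

```python
import numpy as np

def clip_style_scores(image_emb: np.ndarray, text_embs: np.ndarray) -> np.ndarray:
    """Score text prompts against an image, CLIP-style:
    L2-normalize the embeddings, take the cosine similarity of each
    prompt with the image, and softmax the similarities into a
    probability-like distribution over prompts."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = txt @ img                   # cosine similarity per prompt
    exp = np.exp(sims - sims.max())    # numerically stable softmax
    return exp / exp.sum()

# Placeholder embeddings stand in for CLIP's encoders.
rng = np.random.default_rng(0)
image_emb = rng.normal(size=512)
text_embs = rng.normal(size=(3, 512))
# Nudge prompt 1 toward the image so it "matches".
text_embs[1] = image_emb + 0.1 * rng.normal(size=512)

scores = clip_style_scores(image_emb, text_embs)
best = int(scores.argmax())  # index of the best-matching prompt
```

In the actual block, the prompt with the highest score is the predicted label; thresholding the top score lets you treat low-confidence results as "none of the above".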
Connect pre-trained models, open source models, LLM APIs, advanced logic, and external applications. Deploy as an API endpoint, on-prem, or at the edge.