VLPart vs. CoDet

Both VLPart and CoDet are commonly used in computer vision projects. Below, we compare and contrast VLPart and CoDet.

Models

VLPart

VLPart, developed by Meta Research, is an object detection and segmentation model that works with an open vocabulary.

CoDet

CoDet is an open vocabulary zero-shot object detection model.
                  | VLPart                | CoDet
Date of Release   | --                    | Oct 24, 2023
Model Type        | Object Detection      | Object Detection
Architecture      | --                    | --
Frameworks        | --                    | --
Annotation Format | Instance Segmentation | Instance Segmentation
GitHub Stars      | --                    | 79
License           | MIT License           | Apache 2.0 License
Training Notebook | --                    | --

Compare VLPart and CoDet with Autodistill

Using Autodistill, you can compare VLPart and CoDet on your own images in a few lines of code.

The steps below walk through an example comparison.

To start a comparison, first install the required dependencies:


pip install autodistill autodistill-vlpart autodistill-codet

Next, create a new Python file and add the following code:


from autodistill_vlpart import VLPart
from autodistill_codet import CoDet

from autodistill.detection import CaptionOntology
from autodistill.utils import compare

ontology = CaptionOntology(
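    # map each prompt sent to the models (key) to the class label saved in results (value)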
    {
        "solar panel": "solar panel",
    }
)

models = [
    VLPart(ontology=ontology),
    CoDet(ontology=ontology)
]

images = [
    "/home/user/autodistill/solarpanel1.jpg",
    "/home/user/autodistill/solarpanel2.jpg"
]

compare(
    models=models,
    images=images
)

Above, replace the paths in the `images` list with the images you want to use.

The image paths must be absolute.
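
If your images live in a single folder, you can build the list of absolute paths programmatically instead of typing each one. Here is a minimal sketch using Python's standard pathlib module; the folder path is a placeholder you should replace with your own:


from pathlib import Path

# placeholder directory; replace with the folder that holds your images
image_folder = Path("/home/user/autodistill")

# collect absolute paths to every .jpg image in the folder
images = [str(path.resolve()) for path in sorted(image_folder.glob("*.jpg"))]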

Then, run the script.

You should see a side-by-side comparison of the predictions from each model on your images.

When you have chosen a model that works best for your use case, you can auto label a folder of images using the following code:


# assign the model you chose to `base_model` (VLPart shown here; swap in CoDet(ontology=ontology) if you prefer it)
base_model = VLPart(ontology=ontology)

base_model.label(
  input_folder="./images",
  output_folder="./dataset",
  extension=".jpg"
)
