Ye Lab · University of Michigan · Life Sciences Institute

EZannot

  • EZannot helps you build high-quality annotated image datasets for training LabGym detectors.
  • Powered by Meta’s Segment Anything 2 (SAM2): one click proposes an accurate mask you can accept or adjust.
  • Automatically creates up to 135 augmented variants per labeled image to enrich your training set.
  • After a handful of examples, let EZannot’s auto-annotator label the rest of your frames in bulk.
  • Exports masks and COCO-style JSON files that drop straight into LabGym’s detector-training scripts.
  • Runs fully offline on your computer; your data never leaves your lab.
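EZannot's exact augmentation recipe is not documented in this summary, but the idea behind multiplying one labeled image into many variants can be sketched generically: geometric transforms such as flips and rotations are applied to the image and its mask in lockstep, so every variant stays correctly labeled. A minimal pure-Python illustration (the toy 2×2 grids and this particular transform set are assumptions for the sketch, not EZannot's actual pipeline):

```python
# Generic augmentation sketch: any transform applied to an image must
# also be applied to its mask so the labels remain aligned.
# NOTE: the 2x2 grids and transform set below are illustrative only.

def hflip(grid):
    """Mirror left-right."""
    return [row[::-1] for row in grid]

def vflip(grid):
    """Mirror top-bottom."""
    return grid[::-1]

def rot90(grid):
    """Rotate 90 degrees clockwise."""
    return [list(col) for col in zip(*grid[::-1])]

image = [[1, 2],
         [3, 4]]
mask  = [[0, 1],
         [1, 0]]

# Apply every transform to image and mask together.
transforms = [hflip, vflip, rot90]
variants = [(t(image), t(mask)) for t in transforms]

print(len(variants))   # 3 augmented (image, mask) pairs from 1 original
print(variants[0][0])  # hflip(image) -> [[2, 1], [4, 3]]
```

Composing such transforms (and adding photometric ones like brightness shifts) is how a single annotated frame can fan out into a hundred-plus training variants.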

AI-assisted one-click annotation

(Demo animations: AI-assisted one-click annotation; include-area selection.)

Iterative annotation & dataset expansion

Export your annotations as masks and metadata ready for LabGym’s detector-training pipeline or other computer-vision workflows.
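COCO-format JSON has a standard layout of three top-level lists (images, annotations, categories), so exports can be inspected with nothing but the standard library. A minimal sketch, where the payload below is a hypothetical toy export standing in for a real file, not EZannot's actual output:

```python
import json

# Toy COCO-style payload standing in for an exported .json file;
# real COCO exports carry the same three top-level lists.
coco = {
    "images": [{"id": 1, "file_name": "frame_0001.png",
                "width": 640, "height": 480}],
    "annotations": [
        {"id": 1, "image_id": 1, "category_id": 1,
         "segmentation": [[10, 10, 60, 10, 60, 40, 10, 40]],  # polygon mask
         "bbox": [10, 10, 50, 30], "area": 1500, "iscrowd": 0},
    ],
    "categories": [{"id": 1, "name": "mouse"}],  # hypothetical label
}

text = json.dumps(coco)   # stands in for open("export.json").read()
data = json.loads(text)

# Count annotations per category name.
names = {c["id"]: c["name"] for c in data["categories"]}
counts = {}
for ann in data["annotations"]:
    label = names[ann["category_id"]]
    counts[label] = counts.get(label, 0) + 1

print(counts)  # {'mouse': 1}
```

A quick count like this is a useful sanity check before handing the export to a detector-training pipeline.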

Get started in minutes

  • Install Python ≥ 3.10 (use a separate virtual environment if you already run LabGym or FluoSA).
  • (Optional, for GPU acceleration) Install CUDA 11.8 and PyTorch ≥ 2.5.1 built against the cu118 wheels.
  • Open your system’s terminal / command prompt, then copy-paste each command below and press Enter.
  • Upgrade pip, wheel, setuptools first:
    python -m pip install --upgrade pip wheel setuptools
  • Then install EZannot: python -m pip install EZannot
  • Download Meta SAM2 model files and point EZannot to their folder the first time you launch.
  • From the same terminal, start the GUI with: EZannot (or python -m EZannot).
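The steps above collected into one copy-pasteable sequence. The virtual-environment name is arbitrary, the GPU line is optional, and the cu118 index URL is the standard PyTorch wheel index; adjust for your platform:

```shell
# Create and activate a dedicated environment (recommended if LabGym
# or FluoSA is already installed on this machine).
python -m venv ezannot-env
source ezannot-env/bin/activate      # Windows: ezannot-env\Scripts\activate

# Upgrade the packaging tools first.
python -m pip install --upgrade pip wheel setuptools

# Optional GPU build of PyTorch (CUDA 11.8 wheels):
# python -m pip install "torch>=2.5.1" --index-url https://download.pytorch.org/whl/cu118

# Install EZannot itself.
python -m pip install EZannot

# Launch the GUI (point it to your SAM2 model folder on first run):
EZannot                              # or: python -m EZannot
```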