EZannot
- EZannot helps you build high-quality annotated image datasets for training LabGym detectors.
- Powered by Meta’s Segment Anything 2 (SAM2): one click proposes an accurate mask you can accept or adjust.
- Automatically creates up to 135 augmented variants per labeled image to enrich your training set.
- After a handful of examples, let EZannot’s auto-annotator label the rest of your frames in bulk.
- Exports masks and COCO-style JSON files that drop straight into LabGym’s detector-training scripts (see the JSON sketch after this list).
- Runs fully offline on your computer; your data never leaves your lab.
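For reference, a COCO-style JSON file organizes a dataset into `images`, `annotations`, and `categories` arrays. Below is a minimal sketch of that layout; the file name, category name (`mouse`), and coordinate values are illustrative, and the exact fields EZannot writes may differ:

```
{
  "images": [
    {"id": 1, "file_name": "frame_0001.png", "width": 1280, "height": 720}
  ],
  "annotations": [
    {
      "id": 1,
      "image_id": 1,
      "category_id": 1,
      "segmentation": [[510.0, 423.0, 511.0, 420.0, 514.0, 419.0]],
      "bbox": [498.0, 401.0, 45.0, 52.0],
      "area": 1870.0,
      "iscrowd": 0
    }
  ],
  "categories": [
    {"id": 1, "name": "mouse", "supercategory": "animal"}
  ]
}
```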
AI-assisted one-click annotation
- Detect an object’s outline with a single left-click, then press Enter to apply the label.
Iterative annotation & dataset expansion
- Iterate between quick corrections and bulk auto-labeling to rapidly grow your detector training dataset.
- Export your annotations as masks and metadata ready for LabGym’s detector-training pipeline or other computer-vision workflows; a quick sanity check on the exported JSON is sketched below.
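Before training, you can confirm the export parses and count its records from the same terminal. A minimal sketch, assuming the export is named `annotations.json` (substitute whatever file name your export actually produced):

```
python -c "import json; d = json.load(open('annotations.json')); print(len(d['images']), 'images,', len(d['annotations']), 'annotations,', len(d['categories']), 'categories')"
```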
Get started in minutes
- Install Python ≥ 3.10 (use a separate environment if you already run LabGym or FluoSA).
- (Optional, for GPU) Install CUDA 11.8 and PyTorch ≥ 2.5.1 with cu118 wheels; see the example after this list.
- Open your system’s terminal / command prompt, then copy-paste each command below and press Enter.
- Upgrade `pip`, `wheel`, and `setuptools` first: `python -m pip install --upgrade pip wheel setuptools`
- Then install EZannot: `pip install EZannot`
- Download the Meta SAM2 model files and point EZannot to their folder the first time you launch.
- From the same terminal, start the GUI with `EZannot` (or `python -m EZannot`).
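If you took the GPU route above, one common way to get CUDA 11.8 builds of PyTorch is the official cu118 wheel index, followed by a quick check that the GPU is visible. The version pin below is illustrative; check pytorch.org for the current command matching your system:

```
pip install "torch>=2.5.1" torchvision --index-url https://download.pytorch.org/whl/cu118
python -c "import torch; print(torch.cuda.is_available())"
```

If the second command prints `True`, EZannot can use the GPU; if it prints `False`, the CPU-only wheels were installed or the CUDA driver is not set up.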
