# Deep SORT

## Introduction

A slightly modified version of Deep SORT (https://github.com/nwojke/deep_sort), updated to work with the latest OpenCV and scikit-learn packages.
## Dependencies
The code is compatible with Python 3. The following dependencies are needed to run the tracker:

- NumPy
- scikit-learn
- OpenCV

Additionally, feature generation requires TensorFlow (>= 1.0); the version used for this project is 1.5.
## Installation

Get the model (`mars-small128.pb`, trained on person images, but it also works for hens in our case) from here.
## Generating detections

The original model was trained on pedestrian data, but it also seems to work on our data, i.e. hens.

The repository contains a script to generate features for person re-identification, suitable for comparing the visual appearance of bounding boxes using cosine similarity, which works for us as well.
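The cosine comparison that this matching relies on can be sketched as follows. This is a minimal illustration with made-up descriptors, not the project's actual matching code:

```python
import numpy as np

def cosine_distance(a, b):
    """Cosine distance between two appearance descriptors (1 - cosine similarity)."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Two toy 128-dimensional descriptors; identical descriptors give distance ~0,
# opposite descriptors give distance ~2.
rng = np.random.default_rng(0)
desc = rng.standard_normal(128)
print(cosine_distance(desc, desc))   # ≈ 0.0
print(cosine_distance(desc, -desc))  # ≈ 2.0
```

A small distance means two boxes look visually similar, which is what lets the tracker re-identify the same animal across frames.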
The following example generates these features from our detections. Again, we assume resources have been extracted to the repository root directory and the required data is present:

```
python tools/generate_detections.py \
    --model=resources/networks/mars-small128.pb \
    --mot_dir=./data/train \
    --output_dir=./resources/detections/
```
The model has been generated with TensorFlow 1.5. If you run into incompatibility, re-export the frozen inference graph to obtain a new `mars-small128.pb` that is compatible with your version:

```
python tools/freeze_model.py
```
The `generate_detections.py` script stores a separate binary file in NumPy native format for each sequence of the dataset. Each file contains an array of shape `Nx138`, where N is the number of detections in the corresponding MOT sequence. The first 10 columns of this array contain the raw MOT detection copied over from the input file. The remaining 128 columns store the appearance descriptor. The files generated by this command can be used as input for `deep_sort_app.py`.
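Assuming the layout above, a generated detection file can be split into its raw-detection and descriptor parts like this (a synthetic array stands in for a real file so the snippet is self-contained):

```python
import numpy as np

# Synthetic stand-in for a generated file: 5 detections, each with 10 raw
# MOT columns plus a 128-dimensional appearance descriptor.
detections = np.zeros((5, 138))
# In practice you would load a real file instead, e.g.:
# detections = np.load("resources/detections/Hens.npy")

raw_mot = detections[:, :10]      # raw MOT rows copied from the input file
descriptors = detections[:, 10:]  # 128-dimensional appearance descriptors

print(raw_mot.shape)      # (5, 10)
print(descriptors.shape)  # (5, 128)
```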
NOTE: If `python tools/generate_detections.py` raises a TensorFlow error, try passing an absolute path to the `--model` argument. This might help in some cases.
## Running the tracker

The following example starts the tracker on the Hens sequence. We assume resources (namely the detection file) have been extracted to the repository root directory and the image data (`sequence_dir`) is in `./data/train/Hens`:
```
python deep_sort_app.py \
    --sequence_dir=./data/train/Hens \
    --detection_file=./resources/detections/Hens.npy \
    --min_confidence=0.3 \
    --nn_budget=100 \
    --display=True \
    --output_file=hens_output
```
Check `python deep_sort_app.py -h` for an overview of available options.
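The `--min_confidence` flag drops detections whose confidence is below the threshold before tracking. Assuming the standard MOT detection layout (confidence in column 6 of the raw detection row), the effect can be sketched as:

```python
import numpy as np

# Toy detection array in the N x 138 layout described above; only the
# confidence column (index 6 in the standard MOT layout) is filled in.
detections = np.zeros((4, 138))
detections[:, 6] = [0.9, 0.2, 0.5, 0.1]

min_confidence = 0.3  # mirrors the --min_confidence flag above
kept = detections[detections[:, 6] >= min_confidence]

print(len(kept))  # 2
```

A low threshold keeps more (possibly spurious) detections; raising it trades recall for fewer false tracks.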
There are also scripts in the repository to visualize results, generate videos, and evaluate against the MOT challenge benchmark.

For more details on how the tracker works internally, please visit the original repository.