From 886ace4c3126965e8f112e73a7517d30461d8918 Mon Sep 17 00:00:00 2001
From: guis98 <guis98@mi.fu-berlin.de>
Date: Mon, 12 Aug 2024 13:11:55 +0200
Subject: [PATCH] files added

---
 README.md | 87 ++++++++++++++++++++++++++++++-------------------------
 1 file changed, 48 insertions(+), 39 deletions(-)

diff --git a/README.md b/README.md
index 9ed182c..ec0f76f 100644
--- a/README.md
+++ b/README.md
@@ -1,54 +1,63 @@
-0. Directory structure:
- Hen Tracking
- |_data
- | |_frames
- | | |_frame_0.jpg, frame_1.jpg, ..., frame_204.jpg
- | |_video
- |   |_video.mkv
- |
- |_train
- | |_images
- | | |_ first 190 images
- | |_labels
- |
- |_test
- | |_images
- | | |_ remaining 20 images
- | |_labels
- |
- |_annotate.py
- |
- |_get_stuff.py
- |
- |_track.py
- |
- |_data.yaml
-
-
-1. Annotated the frames using `ultralytics auto annotator`
- --> Run `python annotate.py` (See output annotate_output.txt)
+# 0. Directory structure
+
+```
+Hens Tracking
+
+├── data
+│   ├── frames
+│   │   └── frame_0.jpg, frame_1.jpg, ..., frame_204.jpg
+│   └── video
+│       └── video.mkv
+│
+├── train
+│   ├── images
+│   │   └── first 185 images
+│   └── labels
+│
+├── val
+│   ├── images
+│   │   └── remaining 20 images
+│   └── labels
+│
+├── annotate.py
+│
+├── get_stuff.py
+│
+├── track.py
+│
+└── data.yaml
+```
+
+# 1. Annotate the frames using the `ultralytics` auto-annotator
+
+> Run `python annotate.py`
+
+(See output `annotate_output.txt`)
 
 This creates a folder under `data` named 'frames_auto_annotate_labels' and gives a .txt file for each frame containing the segmentations of the hens it detected.
 
-2. Put the labels into a proper structure for training i.e. specify custom class (hen)
+# 2. Put the labels into the proper structure for training, i.e. specify the custom class (hen)
 
- --> Run `python get_stuff.py`
+> Run `python get_stuff.py`
 
-This will put all the necesaary files under `train/labels`. Move the last 20 files to `val/labels` for validation in the next stripped
+This will put all the necessary files under `train/labels`. Move the last 20 files to `val/labels` for validation in the next step.
 
-3. The model is now trained using a pretrained 'yolov8x-seg'. Ensure that the directory path is good
+# 3. Train the model using a pretrained `yolov8x-seg`
+
+Ensure that the directory paths in `data.yaml` are correct.
 
- --> Run `yolo task=segment mode=train model=yolov8x-seg data=data.yaml epochs=20 imgsz=640` in a terminal opened in the directory
-(See output train_output.txt)
+> Run `yolo task=segment mode=train model=yolov8x-seg data=data.yaml epochs=20 imgsz=640` in a terminal opened in the directory
+
+(See output `train_output.txt`)
 
 This will create a `runs` folder and all the metrics and model weights will be stored there.
 
-4. Use the best weights as input model for tracking
+# 4. Use the best weights as the input model for tracking
+
+> Run `python track.py`
 
- --> Run `python track.py` (See output output_track.py, uncomment line no. 34 if you want to visualize the output alongside running)
+(See output `track_output.txt`; uncomment line no. 34 in `track.py` if you want to visualize the output while it runs.)
 
-The final tracking video would be saved under the directory
\ No newline at end of file
+The final tracking video will be saved under this directory.
\ No newline at end of file
--
GitLab
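The script `annotate.py` is not included in this patch. A minimal sketch of what step 1 might look like, assuming it wraps the Ultralytics `auto_annotate` helper (which writes one `.txt` label file per image into a `<folder>_auto_annotate_labels` directory, matching the `frames_auto_annotate_labels` folder the README mentions); the detector and SAM weights named below are assumptions:

```python
# annotate.py (sketch) -- auto-label the extracted frames with Ultralytics.
from ultralytics.data.annotator import auto_annotate

# Detect objects with a YOLOv8 model and segment them with SAM.
# Labels are written to data/frames_auto_annotate_labels/ by default.
auto_annotate(
    data="data/frames",      # folder with frame_0.jpg ... frame_204.jpg
    det_model="yolov8x.pt",  # assumed detection weights
    sam_model="sam_b.pt",    # assumed SAM weights for the segmentation masks
)
```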
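Likewise, `get_stuff.py` is not shown. A rough sketch of the kind of restructuring step 2 describes, assuming it copies the auto-generated label files into `train/labels` and rewrites every class index to `0` so the only class is `hen`; the real script may do this differently:

```python
# get_stuff.py (sketch) -- move auto-annotated labels into the training layout
# and force every detection onto the single custom class 0 (hen).
from pathlib import Path

SRC = Path("data/frames_auto_annotate_labels")
DST = Path("train/labels")
DST.mkdir(parents=True, exist_ok=True)

for label_file in sorted(SRC.glob("*.txt")):
    fixed_lines = []
    for line in label_file.read_text().splitlines():
        parts = line.split()
        if not parts:
            continue
        parts[0] = "0"  # remap whatever class was predicted to 'hen'
        fixed_lines.append(" ".join(parts))
    (DST / label_file.name).write_text("\n".join(fixed_lines) + "\n")
```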
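Step 3 passes `data.yaml` to the trainer, but the file's contents are not part of this patch. A minimal sketch in the Ultralytics dataset-config format, assuming the `train/` and `val/` folders sit next to it and `hen` is the only class:

```yaml
# data.yaml (sketch) -- actual paths depend on where the dataset lives.
path: .             # dataset root (the project directory)
train: train/images
val: val/images

# single custom class
names:
  0: hen
```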
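Finally, a sketch of what `track.py` could contain for step 4, assuming it loads the best checkpoint from the training run and calls the Ultralytics built-in tracker on the source video; the checkpoint path and the commented-out `show=True` line (the "uncomment to visualize" step) are assumptions:

```python
# track.py (sketch) -- run multi-object tracking on the video with the trained weights.
from ultralytics import YOLO

# Best weights from step 3; the run folder name may differ on your machine.
model = YOLO("runs/segment/train/weights/best.pt")

results = model.track(
    source="data/video/video.mkv",  # input video
    save=True,                      # write the annotated tracking video under runs/
    # show=True,                    # uncomment to visualize while it runs
)
```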