MOT Module

Modules related to MOT (multiple object tracking).

Video abstract

class eyewitness.mot.video.FilesAsVideoData(image_files, frame_shape=None, frame_rate=3)

Bases: eyewitness.mot.video.VideoData

frame_rate
frame_shape
n_frames
to_video(video_output_path, ffmpeg_quiet=True)
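
A minimal usage sketch, assuming image_files is an ordered list of frame image paths (the paths below are hypothetical; exporting to video requires ffmpeg on the host):

    from eyewitness.mot.video import FilesAsVideoData

    # hypothetical frame images, ordered by frame index
    image_files = ['frames/000001.jpg', 'frames/000002.jpg', 'frames/000003.jpg']
    video = FilesAsVideoData(image_files, frame_rate=3)
    print(video.n_frames, video.frame_shape, video.frame_rate)

    # stitch the frames into a video file (needs ffmpeg installed)
    video.to_video('output.mp4', ffmpeg_quiet=True)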
class eyewitness.mot.video.FolderAsVideoData(images_dir, file_template='*[0-9].jpg')

Bases: eyewitness.mot.video.FilesAsVideoData
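
A minimal sketch, assuming a directory of numbered .jpg frames that match the default file_template (the directory name is hypothetical):

    from eyewitness.mot.video import FolderAsVideoData

    # hypothetical folder containing 000001.jpg, 000002.jpg, ...
    video = FolderAsVideoData('frames/')
    # to_video is inherited from FilesAsVideoData; requires ffmpeg on the host
    video.to_video('output.mp4')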

class eyewitness.mot.video.Mp4AsVideoData(video_file, ffmpeg_quiet=True, in_memory=True)

Bases: eyewitness.mot.video.VideoData

frame_rate
frame_shape
n_frames
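
A minimal sketch, assuming an existing mp4 file on disk (the file name is hypothetical; decoding relies on ffmpeg being installed on the host):

    from eyewitness.mot.video import Mp4AsVideoData

    video = Mp4AsVideoData('input.mp4', ffmpeg_quiet=True, in_memory=True)
    print(video.n_frames, video.frame_shape, video.frame_rate)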
class eyewitness.mot.video.VideoData

Bases: object

This class is used to represent a video (a list of frames).

frame_rate
frame_shape
n_frames
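
A hypothetical subclass sketch; it assumes a concrete implementation only has to provide the three attributes listed above (the frame-access interface is omitted here):

    from eyewitness.mot.video import VideoData

    class DummyVideoData(VideoData):
        """Hypothetical video with fixed metadata and no frames."""

        @property
        def frame_rate(self):
            return 30

        @property
        def frame_shape(self):
            return (1080, 1920, 3)

        @property
        def n_frames(self):
            return 0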
eyewitness.mot.video.is_program_exists(program)

Since python-ffmpeg requires ffmpeg to be installed on the host, we need a helper to check whether an executable exists.

program: str
the name of the executable file to find
is_file_exists: bool
whether the executable file exists
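
A short usage sketch, for example to verify that ffmpeg is available before exporting a video:

    from eyewitness.mot.video import is_program_exists

    if not is_program_exists('ffmpeg'):
        raise RuntimeError('ffmpeg not found; install it first, e.g. apt install ffmpeg')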

Tracker abstract

class eyewitness.mot.tracker.ObjectTracker

Bases: object

track(video_data)
Parameters:video_data (VideoData) -- the video data to be tracked
Returns:video_tracked_result -- the tracked video result
Return type:VideoTrackedObjects
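
A hypothetical subclass sketch showing the expected interface; the per-frame detection and association logic is a placeholder, and building the result via VideoTrackedObjects.from_dict is an assumption based on the classes documented below:

    from eyewitness.mot.tracker import ObjectTracker
    from eyewitness.mot.evaluation import VideoTrackedObjects

    class NaiveTracker(ObjectTracker):
        """Hypothetical tracker that returns an empty result for every frame."""

        def track(self, video_data):
            # a real tracker would detect and associate objects per frame,
            # producing a mapping of frame_idx -> List[BoundedBoxObject]
            return VideoTrackedObjects.from_dict({})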

Evaluation

class eyewitness.mot.evaluation.VideoTrackedObjects

Bases: collections.defaultdict

A VideoTrackedObjects object is a subclass of defaultdict with list as the default factory; the expected content is Dict[int, List[BoundedBoxObject]].

classmethod from_dict(tracked_obj_dict)
classmethod from_tracked_file(trajectory_file, ignore_gt_flag=False)

Parse the trajectory file, reusing the BoundedBoxObject class.

Parameters:trajectory_file (str) -- the file path of the object tracking ground truth, format is <frame>, <id>, <bb_left>, <bb_top>, <bb_width>, <bb_height>, <conf>...
Returns:parsed_tracked_objects -- the key is the frame_idx, the value is the objects detected in that frame, and the label field of each BoundedBoxObject is set to the object_id
Return type:Dict[int, List[BoundedBoxObject]]
to_file(dest_file)
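
A short sketch, assuming a ground-truth file in the MOT trajectory format described above (the file paths are hypothetical):

    from eyewitness.mot.evaluation import VideoTrackedObjects

    # parse a MOT-format trajectory file: <frame>, <id>, <bb_left>, <bb_top>, ...
    tracked_objects = VideoTrackedObjects.from_tracked_file('gt/gt.txt')
    for frame_idx, objects in sorted(tracked_objects.items()):
        print(frame_idx, len(objects))

    # write the parsed result back out
    tracked_objects.to_file('tracked_result.txt')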
eyewitness.mot.evaluation.mot_evaluation(video_gt_objects, video_tracked_objects, threshold=0.5)

With the help of motmetrics we can evaluate our MOT tracker (a usage sketch follows the parameter list below).

video_gt_objects: Dict[int, List[BoundedBoxObject]]
the ground-truth objects of the video; the key is the frame_idx, the value is the objects detected in that frame, and the label field of each BoundedBoxObject is set to the object_id
video_tracked_objects: Dict[int, List[BoundedBoxObject]]
the predicted MOT result of the video; the key is the frame_idx, the value is the objects detected in that frame, and the label field of each BoundedBoxObject is set to the object_id
summary: DataFrame
the DataFrame of evaluation results with the fields used in the MOT 2019 challenge: https://motchallenge.net/results/CVPR_2019_Tracking_Challenge/
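
A minimal evaluation sketch, assuming both files use the MOT trajectory format parsable by VideoTrackedObjects.from_tracked_file (the file paths are hypothetical):

    from eyewitness.mot.evaluation import VideoTrackedObjects, mot_evaluation

    video_gt_objects = VideoTrackedObjects.from_tracked_file('gt/gt.txt')
    video_tracked_objects = VideoTrackedObjects.from_tracked_file('results/tracker_output.txt')

    summary = mot_evaluation(video_gt_objects, video_tracked_objects, threshold=0.5)
    print(summary)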

Visualize

eyewitness.mot.visualize_mot.draw_tracking_result(parsed_tracked_objects, color_list, video_obj, output_images_dir=None, output_video_path=None, n_trajectory=50, ffmpeg_quiet=True)

This method draws the tracked result back onto the original video. Note that if you want to export to output_video_path, ffmpeg must be installed on the host, e.g. apt install ffmpeg. A usage sketch follows the parameter list below.

Parameters:
  • parsed_tracked_objects (Dict[int, List[BoundedBoxObject]]) -- the key is the frame_idx, the value is the objects detected in that frame, and the label field of each BoundedBoxObject is set to the object_id
  • color_list (List[tuple[int]]) -- the color_list used to draw each object_id
  • video_obj (VideoData) -- the original video object
  • output_images_dir (Optional[str]) -- the directory used to store the drawn images; the stored file template is Path(output_images_dir, "%s.jpg" % str(t).zfill(6)), where t is the current frame number
  • output_video_path (Optional[str]) -- the output path of video
  • n_trajectory (int) -- the number of previous point to be drawn
  • ffmpeg_quiet (bool) -- whether to suppress ffmpeg's logging output
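
A minimal sketch tying the pieces together (file paths and colors are hypothetical; exporting output_video_path requires ffmpeg on the host):

    from eyewitness.mot.evaluation import VideoTrackedObjects
    from eyewitness.mot.video import Mp4AsVideoData
    from eyewitness.mot.visualize_mot import draw_tracking_result

    video_obj = Mp4AsVideoData('input.mp4')
    parsed_tracked_objects = VideoTrackedObjects.from_tracked_file('results/tracker_output.txt')

    # one color per object_id; a tiny hypothetical palette of RGB tuples
    color_list = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]

    draw_tracking_result(
        parsed_tracked_objects, color_list, video_obj,
        output_video_path='tracked.mp4', n_trajectory=50)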