In LaSOT, we conduct one-pass evaluation (OPE) to assess the performance of each tracker. Specifically, we use three metrics, Precision, Normalized Precision, and Success, to measure different tracking algorithms. The definitions of the three metrics can be found in the conference or journal paper.
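For illustration, the snippet below is a minimal sketch (not the official toolkit code) of how these three metrics can be computed from per-frame bounding boxes, assuming boxes are given as axis-aligned (x, y, w, h) arrays; function names and threshold grids are illustrative.

```python
import numpy as np

def overlap(gt, pred):
    """Per-frame IoU between ground-truth and predicted boxes (x, y, w, h)."""
    x1 = np.maximum(gt[:, 0], pred[:, 0])
    y1 = np.maximum(gt[:, 1], pred[:, 1])
    x2 = np.minimum(gt[:, 0] + gt[:, 2], pred[:, 0] + pred[:, 2])
    y2 = np.minimum(gt[:, 1] + gt[:, 3], pred[:, 1] + pred[:, 3])
    inter = np.maximum(0, x2 - x1) * np.maximum(0, y2 - y1)
    union = gt[:, 2] * gt[:, 3] + pred[:, 2] * pred[:, 3] - inter
    return inter / np.maximum(union, 1e-12)

def center_error(gt, pred, normalize=False):
    """Distance between box centers; optionally normalized by the ground-truth box size."""
    gt_c = gt[:, :2] + gt[:, 2:] / 2
    pr_c = pred[:, :2] + pred[:, 2:] / 2
    diff = gt_c - pr_c
    if normalize:
        diff = diff / np.maximum(gt[:, 2:], 1e-12)  # scale by gt width/height
    return np.sqrt((diff ** 2).sum(axis=1))

def ope_scores(gt, pred):
    """Success (AUC over IoU thresholds), Precision (20-pixel threshold),
    and Normalized Precision (area under the curve up to threshold 0.5)."""
    iou = overlap(gt, pred)
    success = np.mean([np.mean(iou >= t) for t in np.linspace(0, 1, 21)])
    precision = np.mean(center_error(gt, pred) <= 20)
    norm_err = center_error(gt, pred, normalize=True)
    norm_precision = np.mean([np.mean(norm_err <= t) for t in np.linspace(0, 0.5, 51)])
    return success, precision, norm_precision
```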
The evaluation toolkit (note: conference version) can be found here or on GitHub.
A new version of the evaluation toolkit (supporting both the conference and journal versions) with complete tracking results can be downloaded here (local), here (Google Drive), or here (Baidu Pan, pwd: 2020).
We define three protocols for evaluating trackers on LaSOT as follows:
We assess 48 popular tracking algorithms on LaSOT under Protocols I, II, and III (see their definitions above). These trackers include deep learning based trackers, correlation filter based trackers with hand-crafted or deep features, sparse representation based trackers, and other representative methods. Table 1 lists these trackers.
| Tracker | Paper | Where | When | Speed | Code |
|---|---|---|---|---|---|
Note: Each tracker is used as is, from the authors' implementation, without any modification.
The following plots show the evaluation results of the tracking algorithms under the three protocols using the three metrics. Click on each image to zoom in for a better view.