The ActEV SDL (Activities in Extended Video Sequestered Data Leaderboard) is an ongoing ranking of software systems that process lengthy videos and detect activities of interest. The ActEV SDL guest task under the CVPR'20 ActivityNet workshop is based on sequestered data from the MEVA Test 3 dataset, which contains ~140 hours of video, including indoor and outdoor scenes, night and day, crowds and individuals, and footage from both EO and IR sensors. The top two teams on the ActEV SDL leaderboard by the submission deadline (May 10, 2020) will be invited to give an oral presentation at the CVPR'20 ActivityNet workshop.
ActEV SDL website: https://actev.nist.gov/sdl
If you have any questions about the SDL evaluation, please email: actev-nist[at]nist.gov
| Date | Event |
|------|-------|
| March 01, 2020 | ActEV SDL opens with MEVA Test 3 |
| May 10, 2020, 11 AM EST | Deadline for CLI submissions to be included in the ActEV guest task under the CVPR'20 ActivityNet workshop |
| June 01, 2020 | We will invite the top two teams on the SDL leaderboard, as of the CLI submission deadline, to give ActEV guest task oral presentations at the CVPR'20 ActivityNet workshop |
| June 14, 2020 | CVPR'20 ActivityNet workshop ActEV SDL guest task presentations |
The data is from the Multiview Extended Video with Activities (MEVA) dataset [mevadata.org], and the videos come from both EO (Electro-Optical) and IR (Infrared) sensors. As of March 01, 2020, the ActEV SDL evaluation uses the MEVA Test 3 dataset (~140 hours of video), replacing the MEVA Test 2 dataset (72 hours of video). The public MEVA dataset includes hundreds of hours of data from the same cameras at the same facility, which can be used for training. If you register for ActEV, you can download the MEVA dataset for free; information on how to download the data is on the data tab. We also provide 20 hours of MEVA annotations; instructions on how to make and share activity annotations are at mevadata.org.
Detect if and when an activity occurs. Given a target activity type and a set of videos, submitted systems must automatically detect all instances of the activity in the videos. While different activity instances have different durations, a submitted system is considered to have detected an activity if it correctly identifies at least 1 second of it. Submitted systems are scored for the probability of missed detection (Pmiss) and the time-based false alarm rate (TFA) at multiple thresholds, producing a detection error tradeoff (DET) curve. The leaderboard ranking of a system is based on a summary of its DET curve: the average value of 1-Pmiss over the TFA range from 0% to 20%. More details are available in the ActEV SDL evaluation plan and the ActEV scoring software.
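The leaderboard summary described above can be illustrated with a minimal sketch: averaging 1-Pmiss over the DET operating points whose TFA falls in [0%, 20%]. Note that this is a simplified illustration, not the official ActEV scoring software, which defines its own interpolation over the DET curve; the operating points below are hypothetical.

```python
# Simplified sketch of the leaderboard summary metric: the mean of
# 1 - Pmiss over DET operating points with TFA <= 20%.
# (The official ActEV scorer's exact interpolation may differ.)

def mean_one_minus_pmiss(det_points, tfa_max=0.2):
    """det_points: list of (tfa, pmiss) pairs, both as fractions in [0, 1].
    Returns the average of 1 - Pmiss over points with tfa <= tfa_max."""
    in_range = [1.0 - pmiss for tfa, pmiss in det_points if tfa <= tfa_max]
    if not in_range:
        return 0.0
    return sum(in_range) / len(in_range)

# Hypothetical DET operating points (TFA, Pmiss) for one activity:
det = [(0.01, 0.80), (0.05, 0.60), (0.10, 0.50), (0.20, 0.40), (0.50, 0.30)]
score = mean_one_minus_pmiss(det)  # only the first four points are in range
```

Higher scores are better: a system that misses fewer activity instances at low false-alarm operating points will have a larger average 1-Pmiss in the 0-20% TFA band.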
System delivery to the leaderboard must be in a form compatible with the ActEV Command Line Interface (ActEV CLI) and submitted to NIST for testing. The command line interface implementation that you provide formalizes the entire process of evaluating a system by giving the evaluation team a means to: (1) download and install your software via a single URL, (2) verify that the delivery works and produces output that is "consistent" with the output you produce, and (3) process a large collection of videos in a fault-tolerant, parallelizable manner. See more information about the CLI here.