Task D – Activity Detection in Extended Videos (ActEV-PC)

The Activities in Extended Videos Prize Challenge (ActEV-PC) seeks to encourage the development of robust automatic activity detection algorithms for extended videos. Challenge participants will develop algorithms that address the "Activity Detection Task", which requires systems to detect and temporally localize all instances of 18 target activities in extended videos. Extended videos contain significant spans without any activities as well as intervals with potentially multiple concurrent activities.

The ActEV-PC task has two phases:

ActEV-PC Open Leaderboard Evaluation (Phase 1): challenge participants will run their activity detection software on their own compute hardware and submit system output, as defined by the ActEV-PC evaluation plan, to the NIST ActEV Scoring Server. This phase serves as a qualifying stage: the top six participants will proceed to Phase 2.

ActEV-PC Independent Evaluation (Phase 2): invited challenge participants will submit their runnable activity detection software to NIST using the forthcoming Evaluation Commandline Interface Submission Instructions. NIST will then evaluate system performance on sequestered data using NIST hardware.

For more detailed information about this task, visit the ActEV-PC website. For any questions, please contact actev-nist@nist.gov.

Schedule

Nov 8, 2018: Account registration opens; evaluation plan released; encrypted evaluation data available for download.
Nov 12, 2018: Evaluation data unlocked (decryption key published).
Dec 12, 2018: Leaderboard open for submissions.
March 21, 2019, 4:00 PM EST: Top six teams on the Leaderboard selected for participation in Phase 2.
March 25, 2019: Challenge participants selected for Phase 2 deliver software.
March 25, 2019: NIST evaluates challenge participants' code on sequestered data.
May 22, 2019: NIST reports results to IARPA.
May 30, 2019: Challenge winners announced.
June 3, 2019: Deadline for winners' reports.

Dataset

The VIRAT V1, VIRAT V2, and M1 datasets are used for the ActEV-PC challenge. These datasets contain multi-camera, continuous, long-duration video in which multiple activities can occur at any time, anywhere in the frame, and across cameras. Please refer to the DATA tab on the ActEV-PC website for more information about how to download the data.

Evaluation

The main scoring metrics are based on detection, temporal localization, and spatio-temporal localization, using evaluation measures that include the probability of missed detection (Pmiss) and the rate of false alarm (RFA); a sketch of how these two measures are typically computed appears below. You can find more details in the ActEV evaluation plan here and in the ActEV scoring software here.
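As a rough illustration only, the sketch below computes these two measures at a fixed confidence threshold. The function name, the input representation, and the per-minute normalization of the false-alarm rate are assumptions for illustration; the authoritative definitions are in the evaluation plan and the scoring software.

```python
# Hedged sketch: computing Pmiss and rate of false alarm (RFA) at a
# fixed confidence threshold. Names and conventions are illustrative,
# not the official scoring-software API.

def pmiss_and_rfa(detections, num_true_instances, total_minutes, threshold):
    """detections: list of (is_correct, confidence) pairs, where
    is_correct is True if the detection was aligned to a reference
    activity instance and False if it is a false alarm."""
    kept = [d for d in detections if d[1] >= threshold]
    true_positives = sum(1 for is_correct, _ in kept if is_correct)
    false_alarms = sum(1 for is_correct, _ in kept if not is_correct)

    # Probability of missed detection: fraction of reference instances
    # with no aligned system detection at this threshold.
    p_miss = 1.0 - true_positives / num_true_instances

    # Rate of false alarm: false alarms per unit of video time
    # (assumed here to be per minute of video).
    rfa = false_alarms / total_minutes
    return p_miss, rfa


if __name__ == "__main__":
    dets = [(True, 0.9), (False, 0.8), (True, 0.6), (False, 0.3)]
    # With 3 reference instances and 10 minutes of video at threshold 0.5:
    # Pmiss = 1 - 2/3 ~= 0.33, RFA = 1/10 = 0.1
    print(pmiss_and_rfa(dets, num_true_instances=3, total_minutes=10.0,
                        threshold=0.5))
```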

Submission Format

The system output must be a single JSON file that lists the videos processed by the system along with a collection of activity instance records containing spatio-temporal localization information (please see the ActEV-PC evaluation plan for more details); a sketch of this structure follows.
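As a rough illustration, the following sketch builds and serializes an output file with this shape. The field names ("filesProcessed", "activities", "presenceConf", "localization") follow the general structure described in the evaluation plan, but the exact schema, the activity name, and the frame-signal convention shown here are assumptions to be checked against the plan itself.

```python
# Hedged sketch of the system-output JSON: a list of processed videos
# plus activity instance records with localization information.
# All concrete values (file names, frames, confidences) are invented.
import json

output = {
    "filesProcessed": ["VIRAT_S_000000.mp4"],
    "activities": [
        {
            "activity": "Opening",        # one of the 18 target activities
            "presenceConf": 0.87,         # system confidence score
            "localization": {
                "VIRAT_S_000000.mp4": {
                    "4123": 1,            # activity signal turns on at this frame
                    "4456": 0             # ... and off at this frame
                }
            }
        }
    ]
}

with open("system_output.json", "w") as f:
    json.dump(output, f, indent=2)
```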

Baselines

The top six challenge participants will deliver algorithms compatible with the CLI protocol to NIST. The purpose is to test for compatibility and to verify that the results the challenge participants obtained on the "validation dataset" when running on their own servers match what NIST obtains when running the software on the Independent Evaluation Infrastructure; a minimal sketch of such a check follows. The CLI/BASELINE tab on the ActEV-PC website provides more information about the CLI and the baseline code.
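As a minimal illustration of that consistency check, the sketch below compares two system-output files after parsing. The file names and the exact-match criterion are assumptions; the evaluation plan governs what "matching" results actually means.

```python
# Hedged sketch: check that the participant's validation-set output and
# the output NIST produced on the Independent Evaluation Infrastructure
# agree, ignoring formatting differences in the JSON files.
import json

def outputs_match(path_a, path_b):
    """Return True if two system-output JSON files are identical
    after parsing."""
    with open(path_a) as fa, open(path_b) as fb:
        return json.load(fa) == json.load(fb)

# Hypothetical file names for illustration.
print(outputs_match("participant_output.json", "nist_output.json"))
```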