We are glad to announce the 6th annual installment of the ActivityNet Challenge, which was first hosted during CVPR 2016. It focuses on recognizing daily-life, high-level, goal-oriented activities from user-generated videos, such as those found on internet video portals. We are proud to announce that this year the challenge will host a total of seven tasks that aim to push the limits of semantic visual understanding of videos and to bridge visual content with human captions. Three of the seven tasks in the challenge are based on the ActivityNet dataset, which was introduced at CVPR 2015 and is organized hierarchically in a semantic taxonomy. These three tasks focus on temporally localizing activity and object information (class labels, captions, and spatial localization of objects).
In this installment of the challenge, we will host seven guest tasks (tentative) focusing on different aspects of the activity recognition problem, in particular expanding from online consumer video to surveillance and first-person video. Each task is based on a recently compiled large-scale activity/action dataset: AVA-Kinetics (Google AI, DeepMind), ActEV (NIST), HACS (MIT), TinyVIRAT (UCF), MMAct (Hitachi), Home Action Genome (Panasonic, Stanford), SoccerNet (KAUST), and ActivityNet Entities (University of Michigan, Facebook).
Sign up for our Google Group to receive updates.