The submission portal is now open! The submission deadline has been extended from Oct 20 to 11:59 AM PDT, Oct 22, 2022.
Based on the CO3Dv2 dataset, the challenge will encourage the community to develop methods for reconstructing a wide variety of objects from many or few views of a scene. CO3D includes common objects from the COCO taxonomy, with a focus on rigid classes such as fire hydrants, potted plants, balls, and cups. The dataset consists of 40k turntable-like videos of such objects, crowd-sourced from non-experts using cellphone cameras.
The challenge comprises two tasks:
Task 1 - Many-View Reconstruction:
The goal is to generate the unknown frames in the training videos. Specifically, one has access to a large number of views of an object and the goal is to generate the missing ones — a setup popularised by methods such as NeRF that “overfit” a model to individual videos. This task is representative of applications where one deliberately captures data to reconstruct an object, as in photogrammetry for asset creation for Computer Graphics / Augmented or Virtual Reality.
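The announcement does not specify the evaluation metric here, but novel-view synthesis results of this kind are commonly scored by comparing generated frames against held-out ground-truth frames with PSNR. A minimal sketch (the `psnr` helper and its signature are illustrative, not part of the challenge code):

```python
import numpy as np

def psnr(pred, gt, max_val=1.0):
    """Peak signal-to-noise ratio between a predicted frame and the
    held-out ground-truth frame, both arrays of values in [0, max_val].
    Illustrative only: the official challenge evaluation may differ."""
    mse = np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Higher PSNR means the generated "unknown" frame is closer to the real captured frame.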
Task 2 - Few-View Reconstruction:
The goal is to generate the unknown frames in the testing videos. The task is similar to Task 1, except that only a very small number of views (1-9) from the testing videos with known category labels are available. Reconstruction and novel-view synthesis are likely only possible by learning suitable object priors from the training video collection to fill the “missing gaps”: the object details that cannot be inferred by solely analysing the few images available at test time. This task is representative of applications where one wishes to reconstruct an object captured from casually-recorded data, such as in most egocentric videos.
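To make the few-view protocol concrete, the split below sketches how a test video's frames divide into a handful of "known" source views (1-9 of them) and the held-out target frames to be generated. The function name and random selection are purely illustrative; in the actual challenge, the known frames are fixed by the organizers, not sampled by participants:

```python
import random

def split_few_view(frame_ids, n_known, seed=0):
    """Split a video's frame ids into n_known 'known' source views and
    held-out target frames. Illustrative only: the challenge fixes which
    frames are known, rather than sampling them."""
    rng = random.Random(seed)  # deterministic for reproducibility
    known = set(rng.sample(frame_ids, n_known))
    targets = [f for f in frame_ids if f not in known]
    return sorted(known), targets
```

A method is then asked to synthesize every target frame from only the known views (plus whatever category-level priors it learned from the training videos).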
We will be releasing a new version of the dataset, CO3Dv2, which consists of 40k videos of objects from 50 categories in the MS-COCO taxonomy, with nearly double the amount of data and improved annotations compared to the first version. The challenge test set contains 20k videos where, for fairness, only the "known" frames will be publicly available. We have manually checked every test video and its 3D annotations to ensure reliable ground truth for evaluation.
| Date | Event |
| --- | --- |
| 11:59 AM PDT, Jul 20, 2022 | Submission portal opens |
| 11:59 AM PDT, Oct 22, 2022 (extended from Oct 20) | Submission deadline |
| Oct 24, 2022 | Day of workshop @ ECCV'22 |
If you have any questions, feel free to contact the organizers: David Novotny (email@example.com), Shangzhe Wu (firstname.lastname@example.org), Roman Shapovalov (email@example.com), Samarth Sinha (firstname.lastname@example.org).