1. 2017 Looking at People CVPR/IJCNN Coopetition

Coopetition = Cooperation + Competition


ChaLearn Job Candidate Screening Coopetition @CVPR17 and @IJCNN17


Sergio Escalera (University of Barcelona, Spain), Hugo Jair Escalante (INAOE, Mexico), Xavier Baró (Universitat Oberta de Catalunya & Computer Vision Center, Barcelona, Spain), Isabelle Guyon (University Paris-Saclay, France and ChaLearn USA), Meysam Madadi (Universitat de Barcelona and Computer Vision Center, Spain), Stephane Ayache, Julio Jacques (Universitat de Barcelona and Computer Vision Center, Spain), Umut Guclu (Radboud University, Netherlands), Yagmur Gucluturk (Radboud University, Netherlands), Marcel van Gerven (Radboud University, Netherlands), and Rob van Lier (Radboud University, Netherlands).

Aims and Scope:

Research progress in computer vision and pattern recognition has led to a variety of modeling techniques with (almost) human-like performance on a variety of tasks. A clear example of this type of model is the neural network, whose deep variants dominate the arena of computer vision, among other fields. Although these models have obtained astounding results on a variety of tasks, they are limited in their explainability and interpretability. We are organizing a workshop and a competition on explainable computer vision systems. We aim to compile the latest efforts and research advances from the scientific community in enhancing traditional computer vision and pattern recognition algorithms with explainability capabilities at both the learning and decision stages.

Details: Candidate screening coopetition:

This proposed challenge is part of a larger project on speed interviews. The overall goal of the project is to help both recruiters and job candidates by means of automatic recommendations based on multimedia CVs. As a first step, in 2016 we organized two rounds of a challenge on detecting personality traits from short videos: one for the ECCV 2016 conference (May 15, 2016 to July 1, 2016) and one for the ICPR 2016 conference (June 30, 2016 to August 16, 2016). The second round used the same data to evaluate a coopetition setting (a mixture of collaboration and competition) in which participants shared code. Both rounds revealed the feasibility of the task (AUC 0.85) and the dominance of deep learning methods. These challenges have been very successful, attracting a total of 100 participants.

We propose, for the competition programs of IJCNN 2017 and CVPR 2017, a new edition of the challenge with the more ambitious goals to:

  • Stage 1: Predict whether a candidate is promising enough that the recruiter wants to invite him/her to an interview (quantitative competition).
  • Stage 2: Justify/explain with a TEXT DESCRIPTION the recommendation made such that a human can understand it (qualitative coopetition).

We will be using the same dataset, but with new annotations never used before about inviting the candidates for a job interview. For the quantitative task, the problem will be cast as a regression task (predict a continuous invite-for-interview score variable). For the qualitative task, a jury will decide whether the method developed proposes clear and useful explanations of recommendations.
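For concreteness, the quantitative task could be scored as follows. This is a minimal sketch only, assuming the invite-for-interview score lies in [0, 1] and that accuracy is computed as one minus the mean absolute error, in line with the earlier first-impressions rounds; the official evaluation script may differ.

```python
def interview_accuracy(predicted, ground_truth):
    """Score continuous predictions in [0, 1] as 1 minus mean absolute error.

    Hypothetical scoring sketch; the official evaluation may differ.
    """
    assert len(predicted) == len(ground_truth) and predicted
    errors = [abs(p, ) if False else abs(p - t) for p, t in zip(predicted, ground_truth)]
    return 1.0 - sum(errors) / len(errors)

# Perfect predictions score 1.0; larger errors lower the score toward 0.
print(interview_accuracy([0.2, 0.8], [0.2, 0.8]))  # 1.0
print(interview_accuracy([0.0, 1.0], [0.5, 0.5]))  # 0.5
```

Under this convention a constant predictor is penalized in proportion to its average distance from the true scores, which suits a regression formulation better than a thresholded accuracy.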

In this new stage of the first impressions challenge, we are going several steps further:

  1. This will be the first time we will address the task of predicting “invite-for-interview”.
  2. We will also provide the previous annotation data on personality traits (in the training data only). This will encourage participants to work on algorithms that benefit from learning both personality traits and hiring recommendations. In addition, predictions of personality traits could also be exploited to explain the decisions made.
  3. The competition will assess the explanatory capabilities of models, a topic that has not been previously considered in academic competitions and that is currently attracting intense interest.
  4. We further explore the “coopetition” protocols (encouraging a mixture of collaboration and competition between the participants) using a new setting, for which we expect more participation.


Please see


2. The AIML Contest:
Full Automation of Machine Learning


Artificial Intelligence Machine Learning Contest:
Unique for Task-Independent and Modality-Independent Brain-Inspired Engines


Juyang (John) Weng and Juan Castro-Garcia (Michigan State University).


The terms artificial intelligence, machine learning, robotics, signal processing, control, dynamic systems, data mining, big data, and brain projects often have different emphases, but the related disciplines are converging. The Artificial Intelligence Machine Learning (AIML) Contest serves as a converging platform for these highly related disciplines and beyond. It is open to, but not limited to, researchers, practitioners, students, and investors. The main goal of the contest is to promote understanding of both natural and artificial intelligence, beyond the currently popular pattern classification. The AIML Contest aims to address major learning mechanisms in natural and artificial intelligence, including perception, cognition, behavior, and motivation, as they occur in cluttered real-world environments. Attention, segmentation, emergence of spatiotemporal representations, and incremental scaffolding are parts of each life-long learning stream.

The major characteristics of this contest include:

  1. Use inspirations from learning by natural brains, such as grounding, emerging, natural inputs, incremental learning, real-time and online, attention, motivation, and abstraction from raw sensorimotor data.
  2. General-purpose learning engines that are task-independent. Task-independent means that the learning engine is capable of being trained to generate a machine “brain” that learns and performs any collection of body-capable and open-ended tasks. Base engines will be available to participants and open for enhancements. The providers of base engines are free to provide assistance to participants, such as courses, tutorials, and workshops.
  3. Modality-independent engines. Modalities that are well-recognized bottlenecks of AI, including vision, audition, language understanding, and autonomous thinking, will be tested on the same machine learning engine from each contest entry.
  4. Training-and-testing sensorimotor streams will be provided to the participants. Each frame of the stream contains a sensory vector and a motoric vector. Training and testing are mixed in the streams, so that learning systems can perform scaffolding: early learned simpler skills are automatically selected and used for learning later more complex skills.
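The stream layout described in item 4 above (one sensory vector and one motoric vector per frame, with training and testing interleaved) can be sketched as a simple data structure. The names and the `supervised` flag below are assumptions for illustration, not the contest's actual file format.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Frame:
    """One time step of a sensorimotor stream (illustrative layout only)."""
    sensory: List[float]   # raw sensory input vector at this time step
    motoric: List[float]   # motor/action vector; imposed during training
    supervised: bool       # True: training frame; False: engine must emit motoric itself

def split_counts(stream: List[Frame]) -> Tuple[int, int]:
    """Count supervised (training) vs. free (testing) frames in a mixed stream."""
    trained = sum(1 for f in stream if f.supervised)
    return trained, len(stream) - trained

# A toy stream mixing training and testing frames, as in scaffolded learning
stream = [
    Frame([0.1, 0.2], [1.0], True),
    Frame([0.3, 0.1], [0.0], False),
    Frame([0.5, 0.4], [1.0], True),
]
print(split_counts(stream))  # (2, 1)
```

Because the frames arrive as one ordered stream, an engine can reuse skills acquired on early supervised frames when it must act on its own in later unsupervised frames, which is the scaffolding behavior the contest describes.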


Tuesday May 16, 2017, La Perouse room.

  • 9:20am–10:40am: AIML Contest Panel (1): Awards and Contest Presentations
  • 11:00am–12:20pm: AIML Contest Panel (2): AIML Contest 2017 Engine Download and Introductions


Please see