Understanding counter-intuitive results is essential, yet under-valued, in an increasingly competitive and fast-paced research environment. This workshop aims to rescue papers with fresh and powerful insights that would otherwise fall into the graveyard of forgotten work. Such papers might be high-hanging fruit in the tree of science, and they are often the ones that need help the most. Benefits from this workshop include:

  1. encouraging transparency and rigor in the scientific process;
  2. saving researchers' time by documenting failed attempts;
  3. opening new research directions by identifying unexpected behaviors.

Enhancing research

Every researcher has faced a situation where a priori expectations based on solid arguments/intuition/theory are promising and yet not supported by empirical results. What exactly is going on? Why is there such a gap? Understanding counter-intuitive results is hard, time-consuming, and often not considered worthwhile if the final outcome is not a clear winner-beats-all approach. Yet, it is our belief that elucidating the reasons behind such gaps is extremely valuable in itself. Rather than the final result, this workshop focuses on the quality and thoroughness of the scientific procedure, promoting higher-quality and principled science (also via proceedings of top works). We hope to redirect some of the ML research community's attention towards high-hanging, but potentially juicy, fruit. Such a perspective might also open new research directions.

Figure: Theory vs. practice [1]

Fostering collaboration

This workshop aims to actively foster a friendly environment that allows research to move forward. By being open about unexpected results or open challenges, authors may receive advice from other experts that eventually makes the presented works click. By focusing our workshop on the probabilistic modeling community, we expect interactions between researchers to be more targeted and fruitful, thanks to a shared language and expertise. Beyond receiving good advice, some authors may spark ongoing collaborations with experts who can substantially help move the research past its current challenges.

Encouraging transparency

Papers about negative results or highlighted difficulties are valuable in themselves. Even if a research project does not move past this point, a paper reporting the challenges faced may be useful and save other researchers time (e.g., by showing when things do not work and why). Some ideas and experiences are clearly valuable yet never get documented in papers; instead, they are conveyed privately between researchers, e.g., "I do not recommend this variant of my method, as you will have a lot of trouble tuning parameter alpha." The "selling" part of research is gaining prominence, so limitations are often under-highlighted in papers: there is a blurring of "fact and fancy", where salesmanship is sometimes as important as content.

Reshaping the scientific process

We expect this workshop to help the ML community rethink its practices and to promote modular science composed of smaller but rigorous pieces of knowledge. Increasingly, ML papers are becoming such huge pieces of work that it is implausible for the authors to consider all possible scenarios or to carry out enough experimental testing, sensitivity analyses, toy simulations, etc. On top of that, pressure to publish fast often comes at the expense of quality; misaligned incentives between scholarship and short-term measures of success also affect the mental health of researchers. Now more than ever, our community needs principled toy examples, extensive simulations, and clear statements of method assumptions and limitations. This workshop will evaluate works on the basis of scientific interest, not on whether they achieve the best final results.

[1] Source of figure: research in progress.