I (Still) Can't Believe It's Not Better! Workshop
ICBINB@NeurIPS 2021 - A Workshop for "beautiful" ideas that *should* have worked
Call for Papers
Beautiful ideas have shaped scientific progress throughout history [1]. As Paul Dirac said, “If one is working from the point of view of getting beauty in one’s equations, (…), one is on a sure line of progress” [2]. However, beautiful ideas are often overlooked in a research environment that heavily emphasizes state-of-the-art (SOTA) results, where the worth of scientific work is defined by its immediate utility and quantitative superiority instead of its creativity, diversity, and elegance. One example in machine learning is neural networks: a beautiful idea (stacked perceptron units) that did not initially outperform baselines but whose potential was still to be discovered [3], eventually giving rise to the field of deep learning.
This workshop will explore gaps between the form and function (or, the intrinsic and extrinsic value) of ideas in ML and AI research. Some questions we will ask: Why do we find certain ideas “beautiful” (form) even when they don’t yet seem to “work” (function)? What is the value of such beautiful ideas? Does reliance on standardized evaluation or current incentives in publishing suppress beautiful ideas?
“Beauty” is of course subjective, but all researchers share the experience of valuing an idea intrinsically despite its lack of demonstrated extrinsic value [4]. We will explore that disconnect by asking researchers to submit their “beautiful” ideas that don’t (yet) “work”. We will ask them to explain why their idea has intrinsic value, and to hypothesize why it hasn’t (yet) shown its extrinsic value. In doing so, we will create a space for researchers to help each other get their “beautiful” ideas “working”.
Tensions between intrinsic and extrinsic value arise in multiple sub-fields, e.g.:
- Probabilistic modeling: a model’s parsimony versus its predictive performance
- Deep learning: simple modular ideas that are easy to incorporate into different architectures versus more complicated schemes that yield higher performance
- Differential privacy and algorithmic fairness: an algorithm’s mathematical elegance versus its downstream impact on sociotechnical systems
- Econometrics and causal inference: simple, theoretically appealing estimators versus complex estimators with greater statistical efficiency
We invite submissions from these sub-fields among others in the broader ML community. Works in progress are encouraged. Submissions may touch one or more of the following aspects:
- Unexpected negative results or anomalies [1]: ideas that do not provide the expected results, yet whose authors are able to explain why, bringing an interesting, self-contained piece of knowledge to the community
- Papers that are “stuck” yet contain beautiful/elegant ideas. Authors should argue why the idea is of interest, rigorously describe the analysis, and include a self-critique
- Criticism of and alternatives to current evaluation metrics and default practices
- Meta-research on the role of “beauty” or negative results in ML research
Key Dates
- Submission Deadline: September 21st AOE
- Bidding Period: September 22nd-25th
- Reviewing Process: September 26th - October 10th
- Author Notification: October 17th
- Mandatory SlidesLive upload for speaker videos: November 1st, 2021 AOE
- Workshop Day: Monday December 13th, 2021, virtually
Speakers
- Tina Eliassi-Rad (Northeastern)
- Tom Griffiths (Princeton)
- Chris Maddison (Toronto)
- Nyalleng Moorosi (Google AI)
- Mihaela Rosca (DeepMind)
- Daniel McDonald (British Columbia)
- Robert Williamson (Tübingen)
Panelists
- Suresh Venkatasubramanian (Brown)
- Atoosa Kasirzadeh (DeepMind)
- Javier Gonzalez (MSR)
- Anna Rogers (Copenhagen)
- Emtiyaz Khan (RIKEN)
The panel will be moderated by Robert C. Williamson and will take place from 8:45 to 9:45 EST.
How do I submit?
We accept submissions through our OpenReview workshop site.
Main deadline: September 21st, 23:59 Anywhere on Earth. Accept/reject notifications will be sent out by October 17th.
Check out our submission guidelines for more details.
Organizers
- Aaron Schein, Columbia University
- Jessica Zosa Forde, Brown University
- Stephanie Hyland, Microsoft Research
- Francisco Ruiz, DeepMind
- Melanie F. Pradier, Microsoft Research
Advisory Board
- Tamara Broderick, Massachusetts Institute of Technology
- Robert Williamson, University of Tübingen
- Max Welling, University of Amsterdam
Senior Advisory Board
- Finale Doshi-Velez, Harvard University
- Isabel Valera, MPI for Intelligent Systems
- David Blei, Columbia University
- Hanna Wallach, Microsoft Research
Contact
For any questions or suggestions, please contact us at: cant.believe.it.is.not.better@gmail.com
About
This workshop builds upon the NeurIPS 2020 “I Can’t Believe It’s Not Better!” (ICBINB) workshop, which was motivated by the same principles but focused on unexpected negative results in probabilistic ML. In this call, we further unpack the concept of beauty that emerged in the discussions from the 2020 workshop. We also expand the scope to a broader ML community.
References
1. Kuhn, Thomas S. “The Structure of Scientific Revolutions.” (1962)
2. Dirac, P. A. M. “The Evolution of the Physicist’s Picture of Nature.” Scientific American 208 (1963): 45.
3. LeCun and Hinton mention how their papers (and their colleagues’) were routinely rejected based on their subject: https://youtu.be/vShMxxqtDDs?t=6m59s
4. Engler, Gideon. “Aesthetics in Science and in Art.” The British Journal of Aesthetics 30 (1990): 24-34.