
Accepted

ICBINB@NeurIPS 2020

  • (Poster #1): Vincent Fortuin, Adrià Garriga-Alonso, Florian Wenzel, Gunnar Rätsch, Richard E Turner, Mark van der Wilk, Laurence Aitchison. Bayesian Neural Network Priors Revisited

  • (Poster #2): Charline Le Lan, Laurent Dinh. Perfect density models cannot guarantee anomaly detection

  • (Poster #4): Elliott Gordon-Rodriguez, Gabriel Loaiza-Ganem, Geoff Pleiss, John Patrick Cunningham. Uses and Abuses of the Cross-Entropy Loss: Case Studies in Modern Deep Learning

  • (Poster #5): Ziyu Wang, Bin Dai, David Wipf, Jun Zhu. Further Analysis of Outlier Detection with Deep Generative Models

  • (Poster #6): Tin D. Nguyen, Jonathan H. Huggins, Lorenzo Masoero, Lester Mackey, Tamara Broderick. Independent versus truncated finite approximations for Bayesian nonparametric inference

  • (Poster #7): Thibault Lesieur, Jérémie Messud, Issa Hammoud, Hanyuan Peng, Céline Lacombe, Paulien Jeunesse. Adversarial training for predictive tasks: theoretical analysis and limitations in the deterministic case

  • (Poster #8): Erik Jones, Shiori Sagawa, Pang Wei Koh, Ananya Kumar, Percy Liang. Selective Classification Can Magnify Disparities Across Groups

  • (Poster #9): Kai-Chun Hu, Ping-Chun Hsieh, Ting Han Wei, I-Chen Wu. Rethinking Deep Policy Gradients via State-Wise Policy Improvement

  • (Poster #10): Akshatha Kamath, Dwaraknath Gnaneshwar, Matias Valdenegro-Toro. Know Where To Drop Your Weights: Towards Faster Uncertainty Estimation

  • (Poster #11): Mihaela Rosca, Theophane Weber, Arthur Gretton, Shakir Mohamed. A case for new neural network smoothness constraints

  • (Poster #12): Hoang Thanh-Tung, Truyen Tran. Toward a Generalization Metric for Deep Generative Models

  • (Poster #13): Fan Bao, Kun Xu, Chongxuan Li, Lanqing Hong, Jun Zhu, Bo Zhang. Variational (Gradient) Estimate of the Score Function in Energy-based Latent Variable Models

  • (Poster #15): Margot Selosse, Claire Gormley, Julien Jacques, Christophe Biernacki. A bumpy journey: exploring deep Gaussian mixture models

  • (Poster #16): Siwen Yan, Devendra Singh Dhami, Sriraam Natarajan. The Curious Case of Stacking Boosted Relational Dependency Networks

  • (Poster #17): Emilio Jorge, Hannes Eriksson, Christos Dimitrakakis, Debabrota Basu, Divya Grover. Inferential Induction: A Novel Framework for Bayesian Reinforcement Learning

  • (Poster #18): Maurice Frank, Maximilian Ilse. Problems using deep generative models for probabilistic audio source separation

  • (Poster #20): Ricky T. Q. Chen, Dami Choi, Lukas Balles, David Duvenaud, Philipp Hennig. Self-Tuning Stochastic Optimization with Curvature-Aware Gradient Filtering

  • (Poster #21): Saiteja Utpala, Piyush Rai. Temperature Scaling for Quantile Calibration

  • (Poster #22): Jeremy Nixon, Balaji Lakshminarayanan, Dustin Tran. Why Are Bootstrapped Deep Ensembles Not Better?

  • (Poster #23): Matthias Rosynski, Frank Kirchner, Matias Valdenegro-Toro. Are Gradient-based Saliency Maps Useful in Deep Reinforcement Learning?

  • (Poster #24): Jovana Mitrovic, Brian McWilliams, Melanie Rey. Less can be more in contrastive learning

  • (Poster #25): Ângelo Gregório Lovatto, Thiago Pereira Bueno, Denis Mauá, Leliane Nunes de Barros. Decision-Aware Model Learning for Actor-Critic Methods: When Theory Does Not Meet Practice

  • (Poster #27): W Ronny Huang, Zeyad Ali Sami Emam, Micah Goldblum, Liam H Fowl, Justin K Terry, Furong Huang, Tom Goldstein. Understanding Generalization through Visualizations

  • (Poster #28): Diana Cai, Trevor Campbell, Tamara Broderick. Power posteriors do not reliably learn the number of components in a finite mixture

  • (Poster #29): Haydn Thomas Jones, Juston Moore. Is the Discrete VAE’s Power Stuck in its Prior?

  • (Poster #30): Udari Madhushani, Naomi Leonard. It Doesn’t Get Better and Here’s Why: A Fundamental Drawback in Natural Extensions of UCB to Multi-agent Bandits

  • (Poster #31): Seungjae Jung, Kyung-Min Kim, Hanock Kwak, Young-Jin Park. A Worrying Analysis of Probabilistic Time-series Models for Sales Forecasting

  • (Poster #32): Bo Pang, Erik Nijkamp, Jiali Cui, Tian Han, Ying Nian Wu. Semi-supervised Learning by Latent Space Energy-Based Model of Symbol-Vector Coupling

  • (Poster #33): Joseph Turian, Max Henry. I’m Sorry for Your Loss: Spectrally-Based Audio Distances Are Bad at Pitch

  • (Poster #34): Stella Biderman, Walter Scheirer. Pitfalls in Machine Learning Research: Reexamining the Development Cycle

  • (Poster #35): Sachin Kumar, Yulia Tsvetkov. End-to-End Differentiable GANs for Text Generation

  • (Poster #36): Ilya Kavalerov, Wojciech Czaja, Rama Chellappa. A study of quality and diversity in K+1 GANs

  • (Poster #37): Yannick Rudolph, Ulf Brefeld, Uwe Dick. Graph Conditional Variational Models: Too Complex for Multiagent Trajectories?

  • (Poster #38): Ramiro Camino, Chris Hammerschmidt, Radu State. Oversampling Tabular Data with Deep Generative Models: Is it worth the effort?
