Third Workshop on Seeking Low-Dimensionality in Deep Neural Networks
January 2023, MBZUAI

Poster Presentations

Authors of accepted papers will present their work at one of three poster sessions at the workshop venue. A list of the accepted papers and the poster session assignments is given below, along with logistics information about the poster sessions.

Poster Session Logistics

There are three evening poster sessions, on Wednesday, Thursday, and Friday (see the schedule). The poster sessions will be held in the entryway of the W Hotel; follow the signage for the SLowDNN workshop. On the day of your poster session, you may hang your poster for the entire day.

In addition, please note:

  • The space available per poster is 40 inches wide by 50 inches high, so please ensure your poster will fit. (We recommend a 36 × 24 inch poster.)
  • Your poster may be in any visual format you wish.
  • Please print your poster and bring it with you when you travel. Unfortunately, MBZUAI does not have an official in-house poster-printing facility.

Accepted Papers and Poster Session Assignments

Wednesday, January 4th

TT-NF: Tensor Train Neural Fields
Anton Obukhov, Mikhail Usvyatsov, Christos Sakaridis, Konrad Schindler, Luc Van Gool
Robust Calibration with Multi-domain Temperature Scaling
Yaodong Yu, Stephen Bates, Yi Ma, Michael Jordan
On the infinite-depth limit of finite-width neural networks
Soufiane Hayou
Combining Deep Learning and Adaptive Sparse Modeling for Low-dose CT Reconstruction
Ling Chen, Zhishen Huang, Yong Long, Saiprasad Ravishankar
Lottery Pools: Winning More by Interpolating Tickets without Increasing Training or Inference Cost
Lu Yin, Shiwei Liu, Meng Fang, Tianjin Huang, Vlado Menkovski, Mykola Pechenizkiy
Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together!
Shiwei Liu, Tianlong Chen, Zhenyu Zhang, Xuxi Chen, Tianjin Huang, Ajay Kumar Jaiswal, Zhangyang Wang
Deep Unfolded Tensor Robust PCA with Self-supervised Learning
Harry Dong, Megna Shah, Sean Donegan, Yuejie Chi
Closed-form Solutions of Learning Dynamics for Two-layer Nets for Collapsed Orthogonal Data
Yutong Wang, Qing Qu, Wei Hu
Closed-Loop Transcription via Convolutional Sparse Coding
Xili Dai, Ke Chen, Shengbang Tong, Jingyuan Zhang, Xingjian Gao, Mingyang Li, Druv Pai, Yuexiang Zhai, Xiaojun Yuan, Heung-Yeung Shum, Lionel Ni, Yi Ma
On the Ability of Graph Neural Networks to Model Interactions Between Vertices
Noam Razin, Tom Verbin, Nadav Cohen
On the Geometry of Reinforcement Learning in Continuous State and Action Spaces
Saket Tiwari, Omer Gottesman, George Konidaris
State-driven Implicit Modeling for Sparsity and Robustness in Neural Networks
Alicia Y. Tsai, Juliette Decugis, Laurent El Ghaoui, Alper Atamturk
Robust Self-Guided Deep Image Prior
Shijun Liang, Evan Bell, Saiprasad Ravishankar, Qing Qu
Sparse-view Cone Beam CT Reconstruction using Data-consistent Supervised and Adversarial Learning from Scarce Training Data
Gabriel Maliakal, Anish Lahiri, Marc Louis Klasky, Jeffrey A Fessler, Saiprasad Ravishankar
Robustness of sparse local Lipschitz predictors
Ramchandran Muthukumar, Jeremias Sulam
Reverse Engineering $\ell_p$ attacks: A block-sparse optimization approach with recovery guarantees
Darshan Thaker, Paris Giampouras, Rene Vidal

Thursday, January 5th

From Optimization Dynamics to Generalization Bounds via Łojasiewicz Gradient Inequality
Fusheng Liu, Haizhao Yang, Soufiane Hayou, Qianxiao Li
Unsupervised Manifold Linearizing and Clustering
Tianjiao Ding, Shengbang Tong, Kwan Ho Ryan Chan, Xili Dai, Yi Ma, Benjamin David Haeffele
Pursuit of a Discriminative Representation for Multiple Subspaces via Sequential Games
Druv Pai, Michael Psenka, Chih-Yuan Chiu, Manxi Wu, Edgar Dobriban, Yi Ma
VQ-Flows: Vector Quantized Local Normalizing Flows
Chris Barton Dock, Sahil Sidheekh, Maneesh Kumar Singh, Radu Balan
Latent-space disentanglement with untrained generator networks allows to isolate different motion types in video data
Abdullah, Martin Holler, Malena Sabate Landman, Karl Kunisch
Robust Training under Label Noise by Over-parameterization
Sheng Liu, Zhihui Zhu, Qing Qu, Chong You
Linear Convergence Analysis of Neural Collapse with Unconstrained Features
Peng Wang, Huikang Liu, Can Yaras, Laura Balzano, Qing Qu
Deep Learning meets Nonparametric Regression: Are Weight-Decayed DNNs Locally Adaptive?
Kaiqi Zhang, Yu-Xiang Wang
Sparse MoE with Random Routing as the New Dropout: Training Bigger and Self-Scalable Models
Tianlong Chen, Zhenyu Zhang, Ajay Kumar Jaiswal, Shiwei Liu, Zhangyang Wang
Finding Better Descent Directions for Adversarial Training
Fabian Latorre, Igor Krawczuk, Leello Tadesse Dadi, Thomas Pethick, Volkan Cevher
APP: Anytime Progressive Pruning
Diganta Misra, Bharat Runwal, Tianlong Chen, Zhangyang Wang, Irina Rish
Effects of Data Geometry in Early Deep Learning
Saket Tiwari, George Konidaris
Dimension Mixer Model: Group Mixing of Input Dimensions for Efficient Function Approximation
Suman Sapkota, Binod Bhattarai
PSPS: Preconditioned Stochastic Polyak Step-size method for badly scaled data
Farshed Abdukhakimov, Xiang Chulu, Dmitry Kamzolov, Robert M. Gower, Martin Takac
Lifted Bregman Training of Neural Networks
Xiaoyu Wang, Martin Benning
DynamicViT: Making Vision Transformer faster through layer skipping
Amanuel Negash Mersha, Sammy Assefa
Robustness via deep low rank representations
Amartya Sanyal, Puneet K. Dokania, Varun Kanade, Philip Torr

Friday, January 6th

Neural Collapse with Normalized Features: A Geometric Analysis over the Riemannian Manifold
Can Yaras, Peng Wang, Zhihui Zhu, Laura Balzano, Qing Qu
Intrinsic dimensionality and generalization properties of the $\mathcal{R}$-norm inductive bias
Clayton Sanford, Navid Ardeshir, Daniel Hsu
SinkGAT: Doubly-Stochastic Graph Attention
Tianlin Liu, Cheng Shi, Anastasis Kratsios, Ivan Dokmanic
SMUG: Towards Robust MRI Reconstruction by Smoothed Unrolling
Hui Li, Jinghan Jia, Shijun Liang, Yuguang Yao, Saiprasad Ravishankar, Sijia Liu
Semi-private learning via low dimensional structures
Yaxi Hu, Francesco Pinto, Amartya Sanyal, Fanny Yang
Certified Defenses Against Near-Subspace Unrestricted Adversarial Attacks
Ambar Pal, Rene Vidal
Representation Learning Through Manifold Flattening and Reconstruction
Michael Psenka, Druv Pai, Vishal G Raman, Shankar Sastry, Yi Ma
Flat minima generalize for low-rank matrix recovery
Lijun Ding, Dmitriy Drusvyatskiy, Maryam Fazel
Are All Losses Created Equal: A Neural Collapse Perspective
Jinxin Zhou, Chong You, Xiao Li, Kangning Liu, Sheng Liu, Qing Qu, Zhihui Zhu
Fast Evaluation of Multilinear Operations in Convolutional Tensorial Neural Networks
Tahseen Rabbani, Jiahao Su, Xiaoyu Liu, David Chan, Geoffrey Sangston, Furong Huang
Dimensionality compression and expansion in Deep Neural Networks
Stefano Recanatesi, Matthew Farrell, Madhu Advani, Timothy Moore, Guillaume Lajoie, Eric Todd Shea-Brown
A picture of the space of typical learnable tasks
Rahul Ramesh, Jialin Mao, Itay Griniasty, Rubing Yang, Han Kheng Teoh, Mark Transtrum, James Sethna, Pratik Chaudhari
Deep Reinforcement Learning based Unrolling Network for MRI Reconstruction
Chong Wang, Rongkai Zhang, Gabriel Maliakal, Saiprasad Ravishankar, Bihan Wen
Bilevel learning of $\ell_{1}$ regularizers with closed-form gradients (BLORC)
Avrajit Ghosh, Saiprasad Ravishankar
Resource-Efficient Invariant Networks: Exponential Gains by Unrolled Optimization
Sam Buchanan, Jingkai Yan, Ellie Haber, John Wright