Dense Associative Memory Through the Lens of Random Features. Benjamin Hoover, Duen Horng Chau, et al. NeurIPS 2024.
Limits of Transformer Language Models on Learning to Compose Algorithms. Jonathan Thomm, Aleksandar Terzic, et al. NeurIPS 2024.
Geometry of naturalistic object representations in recurrent neural network models of working memory. Xiaoxuan Lei, Taku Ito, et al. NeurIPS 2024.
Reversing the Forget-Retain Objectives: An Efficient LLM Unlearning Framework from Logit Difference. Jiabao Ji, Yujian Liu, et al. NeurIPS 2024.
Towards Exact Gradient-based Training on Analog In-memory Computing. Zhaoxian Wu, Tayfun Gokmen, et al. NeurIPS 2024.
A Mamba-Based Foundation Model for Chemistry. Emilio Ashton Vital Brazil, Eduardo Almeida Soares, et al. NeurIPS 2024.
Modern Hopfield Networks meet Encoded Neural Representations - Addressing Practical Considerations. Satyananda Kashyap, Niharika DSouza, et al. NeurIPS 2024.
On the role of noise in factorizers for disentangling distributed representations. Kumudu Geethan Karunaratne, Michael Hersche, et al. NeurIPS 2024.
Towards Unbiased Evaluation of Time-series Anomaly Detector. Debarpan Bhattacharyya, Sumanta Mukherjee, et al. NeurIPS 2024.