Igor Melnyk, Youssef Mroueh, et al.
NeurIPS 2024
Recent advances in language modeling have had a tremendous impact on how we handle sequential data in science. Language architectures have emerged as a hotbed of innovation in natural language processing over the last decade, and have since gained prominence in modeling proteins and chemical processes, elucidating structural relationships from textual and sequential data. Surprisingly, some of these relationships refer to three-dimensional structural features, raising important questions about the dimensionality of the information encoded within sequential data. Here, we demonstrate that applying a language model architecture, without supervision, to a language representation of bio-catalyzed chemical reactions can capture the signal underlying substrate-binding-site atomic interactions. This allows us to identify the three-dimensional binding site position in unknown protein sequences. The language representation comprises a reaction simplified molecular-input line-entry system (SMILES) string for the substrates and products, together with amino acid sequence information for the enzyme. This approach recovers, with no supervision, 52.13% of the binding site when co-crystallized substrate-enzyme structures are taken as ground truth, vastly outperforming other attention-based models.
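As a rough illustration of the attention-based extraction idea described above, the sketch below scores each amino-acid token by the attention it receives from substrate (reaction-SMILES) tokens and proposes the top-scoring residues as the binding site. The character-level tokenization, the random attention matrix, and the top-k cutoff are illustrative stand-ins, not the authors' pipeline; in the real method the attention weights would come from a trained language model.

import numpy as np

rng = np.random.default_rng(0)

# Toy input: a substrate SMILES plus the enzyme amino-acid sequence,
# concatenated into one token stream (hypothetical tokenization).
substrate_tokens = list("CC(=O)O")     # substrate SMILES, character-level
enzyme_residues = list("MKTAYIAKQR")   # enzyme sequence, one token per residue
tokens = substrate_tokens + enzyme_residues
n_sub, n_res = len(substrate_tokens), len(enzyme_residues)
n = len(tokens)

# Stand-in self-attention matrix; in practice this would be read from
# the attention heads of a language model trained on reaction data.
logits = rng.normal(size=(n, n))
attn = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)  # row softmax

# Score each residue by the total attention it receives from substrate tokens
# (substrate rows as queries, residue columns as keys).
residue_scores = attn[:n_sub, n_sub:].sum(axis=0)

# Propose binding-site residues as the top-k scored positions.
top_k = 3
predicted_site = np.argsort(residue_scores)[::-1][:top_k]
print("predicted binding-site residue indices:", sorted(predicted_site.tolist()))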
Shashanka Ubaru, Lior Horesh, et al.
Journal of Biomedical Informatics
Pin-Yu Chen, Cho-Jui Hsieh, et al.
KDD 2022
Bo Zhao, Nima Dehmamy, et al.
ICML 2025