Towards Assurance of LLM Adversarial Robustness using Ontology-Driven Argumentation. Tomas Bueno Momcilovic, Beat Buesser, et al. xAI 2024.
Identifying Homogeneous and Interpretable Groups for Conformal Prediction. Natalia Martinez Gil, Dhaval Patel, et al. UAI 2024.
AUTOLYCUS: Exploiting Explainable Artificial Intelligence (XAI) for Model Extraction Attacks against Interpretable Models. Abdullah Caglar Oksuz, Anisa Halimi, et al. PETS 2024.
Exploring Vulnerabilities in LLMs: A Red Teaming Approach to Evaluate Social Bias. Yuya Jeremy Ong, Jay Pankaj Gala, et al. IEEE CISOSE 2024.
Quantifying Representation Reliability in Self-Supervised Learning Models. Young Jin Park, Hao Wang, et al. UAI 2024.
Privacy-Preserving Verification of Preprocessing in Machine Learning Models. Wenbiao Li, Anisa Halimi, et al. PETS 2024.
Effect of dataset partitioning strategies for evaluating out-of-distribution generalisation for predictive models in biochemistry. Raúl Fernández Díaz, Lam Thanh Hoang, et al. ISMB 2024.
Effective In-Silico Gene Perturbation by Machine Learning Model Interpretation for Immunotherapies. Tanwi Biswas, Akira Koseki, et al. ISMB 2024.
AutoPeptideML: A study on how to build more trustworthy peptide bioactivity predictors. Raúl Fernández Díaz, Rodrigo Cossio-Pérez, et al. ISMB 2024.