Manifold-Aligned Counterfactual Explanations for Neural Networks
Georgia Perakis, Wei Sun, et al.
AISTATS 2024
We investigate the usefulness of machine translators built by fine-tuning LLMs with the very small amounts of training data typical of extremely low-resource languages, such as Indigenous languages. We started by developing translators for the Guarani Mbya and Nheengatu languages by fine-tuning a WMT-19 German-English translator. We then performed a human evaluation of the usefulness of the translations of the test sets and compared the results to their SacreBLEU scores. We found a level of alignment of around 60-70%, although about 40% of the translations were very wrong. The results suggest the need for a filter for bad translations to make the translators useful, possibly only in scenarios of human-AI collaboration such as writing-support assistants.
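The abstract pairs automatic BLEU-style scores with a proposed filter for bad translations. As a rough illustration of that idea, the sketch below implements a simplified, smoothed sentence-level BLEU (not the actual SacreBLEU implementation, which handles tokenization and corpus-level statistics more carefully) and a hypothetical score-threshold filter; it assumes reference translations are available, as on a held-out test set, and all function names and the threshold value are illustrative, not from the paper.

```python
import math
from collections import Counter

def ngram_counts(tokens, n):
    """Multiset of n-grams of length n in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def sentence_bleu(hypothesis, reference, max_n=4):
    """Simplified sentence-level BLEU with add-one smoothing, scaled 0-100.

    A rough stand-in for SacreBLEU: geometric mean of clipped n-gram
    precisions (n = 1..max_n) times a brevity penalty.
    """
    hyp, ref = hypothesis.split(), reference.split()
    if not hyp:
        return 0.0
    precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams = ngram_counts(hyp, n)
        ref_ngrams = ngram_counts(ref, n)
        # Clipped overlap: each hypothesis n-gram counts at most as often
        # as it appears in the reference.
        overlap = sum(min(c, ref_ngrams[g]) for g, c in hyp_ngrams.items())
        total = max(sum(hyp_ngrams.values()), 1)
        precisions.append((overlap + 1) / (total + 1))  # add-one smoothing
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty discourages overly short hypotheses.
    bp = 1.0 if len(hyp) > len(ref) else math.exp(1 - len(ref) / max(len(hyp), 1))
    return 100.0 * bp * geo_mean

def filter_translations(pairs, threshold=20.0):
    """Hypothetical filter: keep (hypothesis, reference) pairs scoring
    at or above the threshold, dropping likely-bad translations."""
    return [(h, r) for h, r in pairs if sentence_bleu(h, r) >= threshold]
```

In the writing-support scenario the abstract envisions, a deployed filter would need a reference-free quality estimate rather than BLEU against a reference; the threshold idea, however, carries over directly.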
Ioana Baldini Soares, Chhavi Yadav, et al.
ACL 2023
Akifumi Wachi, Yanan Sui
ICML 2020
Shashank Srikant, Sijia Liu, et al.
ICLR 2021