Amit Dhurandhar, Karthikeyan Natesan Ramamurthy, et al.
NeurIPS 2023
We present a study on how and where personas (defined by distinct sets of human characteristics, values, and beliefs) are encoded in the representation space of large language models (LLMs). Using a range of dimension reduction and pattern recognition methods, we first identify the model layers that show the greatest divergence in encoding these representations. We then analyze the activations within a selected layer to examine how specific personas are encoded relative to one another, including their shared and distinct regions of the embedding space. We find that, across multiple pre-trained decoder-only LLMs, a few personas separate early in representation space, while most personas diverge substantially only within the final third of the decoder layers. When we examine one of these later layers, we observe overlapping activations for certain ethical perspectives, such as moral nihilism and utilitarianism, suggesting a degree of polysemy. In contrast, political ideologies such as conservatism and liberalism occupy more distinct regions. These findings inform future efforts to improve the interpretability of representations and to enhance control over human-centered concepts in LLM outputs.
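A minimal sketch of the layer-divergence analysis the abstract describes, assuming a HuggingFace decoder-only model (gpt2 as a stand-in for the paper's LLMs), PCA as the dimension-reduction step, and silhouette score as the separation metric. The persona prompts below are hypothetical illustrations; the paper's actual models, persona sets, and pattern-recognition methods may differ.

```python
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.decomposition import PCA
from sklearn.metrics import silhouette_score

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

# Hypothetical persona-conditioned prompts; the paper's persona definitions
# are richer and more numerous.
personas = {
    "utilitarianism": [
        "You believe the right action maximizes overall well-being.",
        "You judge every choice by the total happiness it produces.",
    ],
    "moral_nihilism": [
        "You believe no moral claims are objectively true.",
        "You hold that right and wrong are empty categories.",
    ],
    "conservatism": [
        "You value tradition and favor gradual, cautious change.",
        "You believe established institutions deserve deference.",
    ],
    "liberalism": [
        "You prioritize individual rights and progressive reform.",
        "You believe society should actively expand personal freedoms.",
    ],
}

feats, labels = [], []
with torch.no_grad():
    for name, prompts in personas.items():
        for text in prompts:
            inputs = tokenizer(text, return_tensors="pt")
            # hidden_states: (n_layers + 1) tensors of shape [1, seq_len, dim]
            hidden = model(**inputs).hidden_states
            # Mean-pool over tokens to get one vector per layer.
            feats.append(torch.stack([h.mean(dim=1).squeeze(0) for h in hidden]))
            labels.append(name)

feats = torch.stack(feats).numpy()  # [n_examples, n_layers + 1, dim]

# Reduce each layer's activations and score how cleanly personas separate;
# a higher silhouette means more distinct persona regions at that layer.
scores = []
for layer in range(feats.shape[1]):
    reduced = PCA(n_components=2).fit_transform(feats[:, layer, :])
    scores.append(silhouette_score(reduced, labels))

for layer, score in enumerate(scores):
    print(f"layer {layer:2d}: silhouette = {score:+.3f}")
print(f"greatest persona separation at layer {int(np.argmax(scores))}")
```

With only two prompts per persona this is purely illustrative; in practice the per-layer scores become meaningful with many prompts per persona, and one would compare several divergence metrics rather than silhouette score alone.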