ACPBench: Reasoning About Action, Change, and Planning
Harsha Kokel, Michael Katz, et al.
AAAI 2025
We present EvalAssist, a framework that simplifies the LLM-as-a-judge workflow. The system provides an online criteria development environment in which users can interactively build, test, and share custom evaluation criteria in a structured, portable format. It also offers a library of LLM-based evaluators that incorporates algorithmic innovations such as token-probability-based judgment, positional bias checking, and certainty estimation, which help engender trust in the evaluation process. We have run extensive benchmarks and deployed the system internally in our organization, where it serves several hundred users.
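The abstract mentions token-probability-based judgment with certainty estimation. As a minimal illustrative sketch (not EvalAssist's actual implementation), one common approach is to compare the log-probabilities a judge model assigns to candidate verdict tokens and normalize them into a certainty score; the function name and the example log-probability values below are hypothetical:

```python
import math

def judge_from_token_logprobs(token_logprobs):
    """Pick a verdict by comparing the probabilities the judge model
    assigned to candidate verdict tokens (e.g. "Yes" vs. "No").

    token_logprobs: dict mapping each candidate verdict token to its
    log-probability under the judge model.
    Returns (verdict, certainty), where certainty is the renormalized
    probability of the winning token among the candidates.
    """
    # Numerically stable softmax restricted to the candidate tokens.
    mx = max(token_logprobs.values())
    exps = {tok: math.exp(lp - mx) for tok, lp in token_logprobs.items()}
    total = sum(exps.values())
    probs = {tok: v / total for tok, v in exps.items()}

    verdict = max(probs, key=probs.get)
    return verdict, probs[verdict]

# Hypothetical log-probabilities for the two verdict tokens:
verdict, certainty = judge_from_token_logprobs({"Yes": -0.2, "No": -1.9})
```

A low certainty (close to 1/num_candidates) can flag judgments for review; positional bias checking would additionally re-run the judge with the compared outputs swapped and confirm the verdict is stable.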
Robert Farrell, Rajarshi Das, et al.
AAAI-SS 2010
Sharmishtha Dutta, Alex Gittens, et al.
AAAI 2025
Fernando Martinez, Juntao Chen, et al.
AAAI 2025