Tutorial

Neural Network Reprogrammability: A Unified Framework for Parameter-Efficient Foundation Model Adaptation

Abstract

The goal of this tutorial is to provide machine learning researchers and practitioners with a clear guide to adapting Foundation Models through parameter-efficient fine-tuning (PEFT). This tutorial moves beyond a simple catalog of PEFT techniques to introduce Neural Network Reprogrammability as a unifying framework that explains how and why modern PEFT methods work. The audience will learn to view techniques such as prompt tuning, in-context learning, and model reprogramming not as isolated methodologies, but as principled instances of a shared underlying idea: repurposing a fixed pre-trained model by strategically manipulating information at its interfaces. Attendees will walk away with a structured understanding of the adaptation lifecycle, from input manipulation to output alignment.

The tutorial will synthesize existing methodologies and practical applications under a cohesive principle, enabling attendees to better analyze, select, and design adaptation strategies for their own projects without incurring the substantial costs of fully fine-tuning Foundation Models.
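To make the core idea concrete, the following is a minimal sketch of input-side reprogramming in PyTorch: a frozen pre-trained classifier is repurposed for a new task by learning only an additive input perturbation and applying a fixed mapping from the source labels to the target labels. The backbone choice, perturbation shape, and round-robin label mapping are illustrative assumptions, not a prescribed recipe.

```python
import torch
import torch.nn as nn
from torchvision import models

class ReprogrammedModel(nn.Module):
    """Sketch: adapt a frozen classifier by manipulating its interfaces."""

    def __init__(self, num_target_classes: int = 10):
        super().__init__()
        # Frozen pre-trained backbone (ImageNet-pretrained ResNet-18 as an example).
        self.backbone = models.resnet18(weights="IMAGENET1K_V1")
        for p in self.backbone.parameters():
            p.requires_grad = False
        # Input manipulation: a trainable additive perturbation (the "program").
        self.delta = nn.Parameter(torch.zeros(1, 3, 224, 224))
        # Output alignment: a fixed many-to-one map from the 1000 source labels
        # to the target labels (hypothetical round-robin assignment here;
        # label-mapping strategies vary in practice).
        mapping = torch.arange(1000) % num_target_classes
        self.register_buffer(
            "label_map",
            nn.functional.one_hot(mapping, num_target_classes).float(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logits_src = self.backbone(x + self.delta)     # manipulate the input
        probs_src = logits_src.softmax(dim=-1)
        return probs_src @ self.label_map              # align the output

# Only the input perturbation is trained; the backbone never changes.
model = ReprogrammedModel()
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-2)
```

Other techniques covered in the tutorial (e.g., prompt tuning and in-context learning) can be read as variations on the same pattern, differing in which interface is manipulated and whether any parameters are trained at all.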

Specific learning outcomes include:

(1) Understand the core principles of Neural Network Reprogrammability as a unifying paradigm for efficient model adaptation.

(2) Differentiate between popular adaptation methods using a systematic taxonomy.

(3) Select an appropriate adaptation strategy for a given task based on requirements, model access, and resource constraints.

(4) Analyze the trade-offs between different strategies in terms of performance, generalization, and computational cost.

(5) Identify and reason about the critical security and ethical challenges, such as bias amplification and prompt injection, inherent in reprogrammable models.

The tutorial website is https://github.com/zyecs/awesome-reprogrammability/tutorial-aaai26. All accompanying information, including the schedule and materials, will be posted there.