Conditional Attribute Estimation with Autoregressive Sequence Models
Generative models are often trained with a next-token prediction objective, yet many downstream applications require the ability to estimate or control sequence-level properties. Next-token prediction can overfit local patterns and underfit global structure during training, and at inference time it requires significant downstream modification or expensive sampling to guide or predict the global attributes of generated samples.
Here, we introduce Conditional Attribute Transformers, a novel method for jointly estimating the next-token probability and the value of an attribute conditioned on each potential next-token selection. This framework enables three critical capabilities within a single forward pass, without modification of the input sequence: (1) per-token credit assignment across an entire sequence, by identifying how each token in a sequence is associated with an attribute’s value; (2) counterfactual analysis, by quantifying attribute differences conditioned on alternative next-token choices; and (3) steerable generation, by decoding sequences based on a combination of next-token and attribute likelihoods.
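One way to read this design, as a minimal sketch: a shared trunk produces a hidden state, and two heads map it to (a) next-token log-probabilities and (b) an attribute estimate conditioned on each candidate next token, all in one forward pass. The function names, shapes, and the choice of a scalar binary attribute below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def log_softmax(x):
    # Numerically stable log-softmax over the vocabulary axis.
    m = x.max()
    return x - m - np.log(np.exp(x - m).sum())

def joint_heads(h, W_lm, W_attr):
    """Hypothetical dual-head readout (names and shapes assumed).

    h      : (d,)   hidden state from the shared transformer trunk
    W_lm   : (d, V) language-modeling head
    W_attr : (d, V) attribute head: one logit per candidate next token

    Returns next-token log-probs (V,) and, for a binary attribute,
    p(attribute = 1 | context, candidate token) as a (V,) vector --
    both from the same hidden state, with no input modification.
    """
    token_logprobs = log_softmax(h @ W_lm)
    attr_probs = 1.0 / (1.0 + np.exp(-(h @ W_attr)))  # sigmoid
    return token_logprobs, attr_probs

# Toy usage with random weights.
rng = np.random.default_rng(0)
d, V = 8, 5
h = rng.normal(size=d)
token_lp, attr_p = joint_heads(h, rng.normal(size=(d, V)), rng.normal(size=(d, V)))
```

Because the attribute head emits one estimate per vocabulary entry, a single pass also yields the counterfactual attribute values for every alternative next-token choice, which is what makes the per-token credit assignment and counterfactual analysis above cheap.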
Our approach achieves state-of-the-art performance on sparse reward tasks, improves next-token prediction at sufficient model sizes, estimates attribute probabilities orders of magnitude faster than sampling, and can guide decoding of autoregressive sequence models on a range of language tasks.
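The steerable decoding described above can be sketched as combining the two likelihoods into a single per-candidate score at each step. The greedy selection rule and the weighting parameter `lam` below are illustrative assumptions, not the paper's decoding procedure.

```python
import numpy as np

def guided_pick(token_logprobs, attr_probs, lam=1.0):
    """One illustrative attribute-guided greedy decoding step.

    Scores each candidate next token by
        log p(token | context) + lam * log p(attr = 1 | context, token)
    and returns the argmax; lam = 0 recovers plain greedy decoding.
    """
    scores = token_logprobs + lam * np.log(attr_probs)
    return int(np.argmax(scores))

# Toy step: token 0 is likelier under the LM, but token 1 better
# satisfies the target attribute.
token_lp = np.log(np.array([0.6, 0.4]))
attr_p = np.array([0.1, 0.9])
unguided = guided_pick(token_lp, attr_p, lam=0.0)  # plain greedy
guided = guided_pick(token_lp, attr_p, lam=1.0)    # attribute-guided
```

With `lam=0` the step picks token 0 (the LM favorite); with `lam=1` the attribute term outweighs the LM gap and the step picks token 1 instead.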