AIDA-ReID: Adaptive Intermediate Domain Adaptation for Generalizable and Source-Free Person Re-Identification
Abstract: Person re-identification (Re-ID) aims to match images of the same individual across non-overlapping camera views. It remains challenging because of domain shifts caused by variations in illumination, background, camera characteristics, and population distributions.
Although supervised models perform well when training and testing conditions match, their performance degrades significantly when deployed in unseen environments.
Existing intermediate-domain approaches such as IDM and IDM++ narrow this gap by constructing bridge feature distributions between domains; however, they rely on fixed mixing strategies and joint source-target access, which limits their applicability in multi-source and source-free settings.
To address these limitations, this paper proposes Adaptive Intermediate Domain Adaptation (AIDA), also referred to as Source-Free Multi-Source Intermediate Domain Adaptation (SF-MIDA).
The proposed framework treats intermediate-domain learning as a dynamically regulated process in which feature mixing and regularization strength are adaptively controlled by feedback signals derived from model uncertainty and training stability.
A multi-source intermediate-domain generator synthesizes diverse intermediate representations, while a pseudo-mirror regularization strategy preserves identity consistency under domain perturbations.
Extensive experiments in both domain-generalization and source-free settings demonstrate the effectiveness of the proposed framework.
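As a rough illustration of the two mechanisms the abstract describes, uncertainty-driven feature mixing and identity consistency under perturbation, here is a minimal NumPy sketch. All function names, the entropy-to-mixing-coefficient schedule, and the symmetric-KL consistency term are assumptions made for illustration; the paper's actual formulation is not reproduced here.

```python
import numpy as np

def softmax(z):
    # Row-wise softmax with the usual max-shift for numerical stability.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def normalized_entropy(probs, eps=1e-12):
    # Mean prediction entropy, scaled to [0, 1] by log(num_classes);
    # used here as a simple proxy for model uncertainty.
    h = -np.sum(probs * np.log(probs + eps), axis=1)
    return float(np.mean(h) / np.log(probs.shape[1]))

def adaptive_mix(feat_src, feat_tgt, probs_tgt):
    # Hypothetical schedule: higher target uncertainty keeps the
    # intermediate-domain features closer to the labeled source.
    u = normalized_entropy(probs_tgt)   # uncertainty in [0, 1]
    lam = 1.0 - 0.5 * u                 # mixing weight in [0.5, 1.0]
    return lam * feat_src + (1.0 - lam) * feat_tgt, lam

def mirror_consistency_loss(logits_clean, logits_perturbed, eps=1e-12):
    # Symmetric KL between identity predictions for a feature and its
    # domain-perturbed "mirror"; an illustrative stand-in for the
    # paper's pseudo-mirror regularizer.
    p, q = softmax(logits_clean), softmax(logits_perturbed)
    kl_pq = np.sum(p * np.log((p + eps) / (q + eps)), axis=1)
    kl_qp = np.sum(q * np.log((q + eps) / (p + eps)), axis=1)
    return float(np.mean(0.5 * (kl_pq + kl_qp)))
```

Under this schedule, confident target predictions (low entropy) push the mixing weight toward the target domain, while uncertain predictions fall back toward the source; the consistency loss is zero when the clean and perturbed predictions agree and grows with their divergence.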
Paper Details:
- Authors: Sundas Iqbal, Qing Tian, Danish Ali, Jianping Gou, Weihua Oue
- arXiv ID: 2605.00111
- Subjects: Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)