All Posts (106)

[CVPR 2023] GEN: Pushing the Limits of Softmax-Based Out-of-Distribution Detection

- Introduction
  - OOD detection scenarios
    - Covariate shift: change in the input distribution
    - Semantic shift: change in the label distribution
  - Existing OOD detection works
    - Predictive distribution
    - Incorporate feature statistics for ID data → requires a portion of the training data
    - Internal feature activation
  - GOAL: Explore and push the limits of OOD detection when the output of a softmax layer is the only av..
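The excerpt above describes scoring OOD inputs from the softmax output alone. A minimal sketch of such a score, in the spirit of GEN's generalized-entropy idea — the exact formula, the `gamma` value, and the `top_m` truncation here are assumptions for illustration, not taken from the excerpt:

```python
import numpy as np

def softmax(logits):
    # numerically stable softmax over the last axis
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def gen_score(logits, gamma=0.1, top_m=10):
    # generalized entropy of the top-M softmax probabilities;
    # a near-uniform (uncertain) prediction yields a HIGHER score,
    # which we interpret as "more likely OOD"
    p = softmax(np.asarray(logits, dtype=float))
    p_top = np.sort(p, axis=-1)[..., -top_m:]
    return (p_top ** gamma * (1.0 - p_top) ** gamma).sum(axis=-1)
```

A confident (peaked) softmax gives a low generalized entropy, while a flat softmax gives a high one, so thresholding this score separates ID from OOD without touching training data or internal features.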

[CVPR 2021] ORDisCo: Effective and Efficient Usage of Incremental Unlabeled Data for Semi-supervised Continual Learning

- Introduction
  - In real-world applications, incremental data are often only partially labeled. ex) face recognition, fingerprint identification, video recognition
  - → Semi-Supervised Continual Learning (SSCL): insufficient supervision and a large amount of unlabeled data.
  - In SSCL, the regularization-based and replay-based methods used in conventional CL do not work well (it is unclear why architecture-based methods were excluded). However, joint trainin..

Singular Value Decomposition (SVD)

Singular Value Decomposition (SVD): a method for factorizing a matrix (regardless of whether it is square or symmetric)
- $\boldsymbol{A}=\boldsymbol{U}\boldsymbol{\Sigma}\boldsymbol{V}^T$
- $\boldsymbol{A}\in \mathbb{R}^{m\times n}$
- $\boldsymbol{U}\in \mathbb{R}^{m\times m}$: orthogonal matrix
- $\boldsymbol{\Sigma}\in \mathbb{R}^{m\times n}$: diagonal matrix
- $\boldsymbol{V}\in \mathbb{R}^{n\times n}$: orthogonal matrix
- In $\boldsymbol{V}$, orthogonal..
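The decomposition above can be checked numerically; a small sketch with NumPy (the matrix shape $4\times 3$ is an arbitrary choice to show the rectangular case):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))            # rectangular, neither square nor symmetric

# full_matrices=True returns U (m x m) and V^T (n x n), matching the definitions above
U, s, Vt = np.linalg.svd(A, full_matrices=True)

# s holds only the singular values; embed them into the m x n diagonal Sigma
Sigma = np.zeros((4, 3))
Sigma[:3, :3] = np.diag(s)

assert np.allclose(U @ Sigma @ Vt, A)       # A = U Sigma V^T
assert np.allclose(U.T @ U, np.eye(4))      # U is orthogonal
assert np.allclose(Vt @ Vt.T, np.eye(3))    # V is orthogonal
```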

[Pytorch, timm] Optimizer & Parameter Group Learning Rate

In PyTorch, an optimizer is typically created as follows: optimizer = optim.Adam(model.parameters(), lr=0.0001) If you want to give different options (learning rate, eps, etc.) to different parameters of the model, specify parameter groups like this: optim.Adam([{'params': model.base.parameters()}, {'params': model.classifier.parameters(), 'lr': 1e-3}], lr=1e-2) Parameter groups can be inspected as follows: # to view the n-th parameter group print(optimizer.param_groups[n]) >>> Adam (Paramet..
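Putting the excerpt's snippet into a runnable form — the `Net` class with `base` and `classifier` submodules is a hypothetical stand-in for the `model` in the post:

```python
import torch.nn as nn
import torch.optim as optim

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.base = nn.Linear(4, 8)        # gets the default lr (1e-2)
        self.classifier = nn.Linear(8, 2)  # gets its own lr (1e-3)

model = Net()
optimizer = optim.Adam(
    [{'params': model.base.parameters()},
     {'params': model.classifier.parameters(), 'lr': 1e-3}],
    lr=1e-2)

# each dict passed in becomes one entry of optimizer.param_groups;
# options omitted from a group fall back to the top-level defaults
for i, group in enumerate(optimizer.param_groups):
    print(i, group['lr'])
```

Because `param_groups` is a plain list of dicts, the per-group options can also be mutated after construction, which is how learning-rate schedulers adjust the lr in place.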