Few-shot image classification learns from a limited number of labeled samples of known categories in order to identify samples of novel categories. However, the class prototypes generated from the support samples are under-representative, which leads to large classification errors. We therefore propose a multi-loss dual prototype network. First, class prototypes are constructed separately from the support samples and from the query samples. Second, a self-calibration (SC) local loss is obtained by measuring the difference between the support-class prototype features and the support sample features, and a mutual-calibration (MC) local loss is obtained by measuring the difference between the query-class prototype features and the support sample features, as well as the difference between the support-class prototype features and the query sample features. A global loss is then computed from the classification of the support and query samples, and the weighted sum of the local and global losses is used as the total loss to train the network. Because the total loss includes both SC and MC terms, the network can make full use of the feature information in the samples. To adaptively enhance the features of the target region, we further propose a cross-channel spatial attention model that generates class prototype features more consistent with the feature distribution of each class. Finally, we improve the transductive inference algorithm by assigning weights to the unlabeled query samples via cosine similarity and iteratively expanding the weighted query samples into the support set, further enriching the class prototype features. Experimental results show that the proposed method is effective.
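The dual-prototype and multi-loss construction above can be sketched in a few lines of numpy. This is a minimal illustration under stated assumptions, not the paper's implementation: prototypes are taken as per-class feature means, the SC/MC "differences" are modeled as mean squared prototype-to-sample distances, the global loss as distance-based softmax cross-entropy on the queries, and the weights `alpha` and `beta` are hypothetical placeholders for the paper's unspecified loss weighting.

```python
import numpy as np

def prototypes(features, labels, n_classes):
    """Class prototype = mean feature vector over the samples of each class."""
    return np.stack([features[labels == c].mean(axis=0) for c in range(n_classes)])

def proto_sample_loss(protos, features, labels):
    """Mean squared distance between each sample and its class prototype."""
    return np.mean(np.sum((features - protos[labels]) ** 2, axis=1))

# Toy episode: 2 classes, 3 support and 2 query samples per class, 4-dim features.
rng = np.random.default_rng(0)
s_feat, s_lab = rng.normal(size=(6, 4)), np.array([0, 0, 0, 1, 1, 1])
q_feat, q_lab = rng.normal(size=(4, 4)), np.array([0, 0, 1, 1])

p_s = prototypes(s_feat, s_lab, 2)   # prototypes from the support samples
p_q = prototypes(q_feat, q_lab, 2)   # prototypes from the query samples

# Self-calibration (SC): support prototypes vs. support features.
loss_sc = proto_sample_loss(p_s, s_feat, s_lab)
# Mutual-calibration (MC): query prototypes vs. support features, and
# support prototypes vs. query features.
loss_mc = proto_sample_loss(p_q, s_feat, s_lab) + proto_sample_loss(p_s, q_feat, q_lab)

# Global loss: softmax cross-entropy over negative distances to support prototypes.
dists = ((q_feat[:, None, :] - p_s[None, :, :]) ** 2).sum(axis=2)
log_p = -dists - np.log(np.exp(-dists).sum(axis=1, keepdims=True))
loss_global = -log_p[np.arange(len(q_lab)), q_lab].mean()

# alpha and beta are hypothetical weights; the paper's actual values are not given here.
alpha, beta = 0.5, 0.5
total = loss_global + alpha * loss_sc + beta * loss_mc
```

In practice the query labels are unknown at test time; the abstract's transductive step addresses this by cosine-weighting the unlabeled queries before folding them into the support set, so the query-side prototypes here stand in for that weighted estimate.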
Keywords: Prototyping, Education and training, Spatial learning, Calibration, Data modeling, Image classification, Feature extraction