GCCN: Global Context Convolutional Network

Publication Year: 2021. Publication Type: JournalArticle

Abstract:


In this paper, we propose the Global Context Convolutional Network (GCCN) for visual recognition. GCCN computes global features representing contextual information across image patches. These global contextual features are defined as local maxima pixels with high visual sharpness in each patch. These features are then concatenated and utilised to augment the convolutional features. The learnt feature vector is normalised with the global context features using the Frobenius norm. This straightforward approach achieves high accuracy in comparison to the state-of-the-art methods, with 94.6% and 95.41% on the CIFAR-10 and STL-10 datasets, respectively. To explore the potential impact of GCCN on other visual representation tasks, we implemented GCCN as a base model for few-shot image classification. We learn metric distances between the augmented feature vectors and their prototype representations, similar to Prototypical and Matching Networks. GCCN outperforms state-of-the-art few-shot learning methods, achieving 99.9%, 84.8% and 80.74% on Omniglot, MiniImageNet and CUB-200, respectively. GCCN has significantly improved on the accuracy of state-of-the-art Prototypical and Matching Networks by up to 30% in different few-shot learning scenarios.
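The pipeline the abstract describes — per-patch selection of a high-sharpness local-maximum pixel, concatenation with the convolutional features, and Frobenius-norm normalisation — can be sketched roughly as follows. This is a minimal NumPy illustration, not the paper's implementation: the gradient-magnitude sharpness measure, the patch grid, and the exact normalisation are assumptions made for clarity.

```python
import numpy as np

def sharpness(patch):
    # Approximate "visual sharpness" as gradient magnitude per pixel.
    # (An assumption; the paper may use a different sharpness measure.)
    gy, gx = np.gradient(patch.astype(float))
    return np.hypot(gx, gy)

def global_context_features(image, grid=4):
    """From each patch of a `grid x grid` tiling, keep the pixel value
    at the location of highest sharpness (a hypothetical reading of
    'local maxima pixels with high visual sharpness')."""
    h, w = image.shape
    ph, pw = h // grid, w // grid
    feats = []
    for i in range(grid):
        for j in range(grid):
            patch = image[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw]
            s = sharpness(patch)
            idx = np.unravel_index(np.argmax(s), s.shape)
            feats.append(patch[idx])
    return np.array(feats)

def augment(conv_features, context):
    # Concatenate the convolutional features with the global context
    # features, then normalise by the Frobenius norm of the context
    # vector (one plausible reading of the abstract's normalisation).
    v = np.concatenate([np.ravel(conv_features), context])
    return v / (np.linalg.norm(context) + 1e-8)
```

For example, a 2x2 grid over an 8x8 grayscale image yields a 4-dimensional context vector, which is appended to the flattened convolutional feature map before normalisation.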


BibTex:

@article{hamdi2021gccn,
    title={GCCN: Global Context Convolutional Network},
    author={Hamdi, Ali and Salim, Flora and Kim, Du Yong},
    journal={arXiv preprint arXiv:2110.11664},
    year={2021}
}

Related Publications

RUP: Large Room Utilisation Prediction with carbon dioxide sensor
Type : JournalArticle
A Scalable Room Occupancy Prediction with Transferable Time Series Decomposition of CO2 Sensor Data
Type : JournalArticle
Topical Event Detection on Twitter
Type : ConferenceProceeding

© 2021 Flora Salim - CRUISE Research Group.