Deep Learning 7

[Paper Review] Masked-attention Mask Transformer for Universal Image Segmentation (CVPR 2022)

CVPR 2022 | https://arxiv.org/abs/2112.01527
Image segmentation is about grouping pixels with different semantics, e.g., category or instance membership, where each choice of semantics defines a task. While only the semantics of each task differ, current research..
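The masked attention in the title restricts each query's cross-attention to the foreground of that query's predicted mask, rather than the full feature map. A minimal NumPy sketch of the idea, with illustrative shapes and without the paper's fallback to full attention when a predicted mask is empty:

```python
import numpy as np

def masked_attention(queries, keys, values, mask):
    """Cross-attention in which each query attends only to pixels its
    predicted mask marks as foreground (the masked-attention idea in
    Mask2Former). Shapes here are illustrative: queries (Q, d),
    keys/values (N, d), mask (Q, N) boolean with True = foreground.
    """
    d = queries.shape[-1]
    logits = queries @ keys.T / np.sqrt(d)   # (Q, N) attention logits
    logits = np.where(mask, logits, -1e9)    # background pixels masked out
    # Numerically stable softmax over the pixel axis.
    logits = logits - logits.max(axis=-1, keepdims=True)
    attn = np.exp(logits)
    attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ values                     # (Q, d_v) attended features
```

Because background logits are pushed to a large negative value, a query whose mask keeps only one pixel returns that pixel's value exactly.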

[Paper Review] MaskFormer: Per-Pixel Classification is Not All You Need for Semantic Segmentation (NeurIPS 2021)

NeurIPS 2021 | https://arxiv.org/abs/2107.06278
Modern approaches typically formulate semantic segmentation as a per-pixel classification task, while instance-level segmentation is handled with an alternative mask classification. Our key insight: mask cla..
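Mask classification predicts N (mask, class) pairs instead of one label per pixel; for semantic segmentation, these are combined by marginalizing over the N queries. A sketch of that inference step, assuming softmax class scores with a trailing no-object class and sigmoid mask scores, as the paper describes:

```python
import numpy as np

def semantic_inference(class_logits, mask_logits):
    """Combine N per-query class predictions and N mask predictions into
    a per-pixel semantic map (mask-classification inference, MaskFormer).

    class_logits: (N, K+1), last column is the "no object" class (dropped).
    mask_logits:  (N, H, W).
    """
    # Softmax over classes, then drop the no-object column -> (N, K).
    c = np.exp(class_logits - class_logits.max(-1, keepdims=True))
    class_probs = (c / c.sum(-1, keepdims=True))[:, :-1]
    # Sigmoid per-pixel mask probabilities -> (N, H, W).
    mask_probs = 1.0 / (1.0 + np.exp(-mask_logits))
    # Marginalize over queries: per-pixel class scores (K, H, W).
    semseg = np.einsum("nk,nhw->khw", class_probs, mask_probs)
    return semseg.argmax(0)                  # (H, W) label map
```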

[Paper Review] iBOT: Image BERT Pre-Training with Online Tokenizer (ICLR 2022)

ICLR 2022 | https://arxiv.org/abs/2111.07832
The success of language Transformers is primarily attributed to the pretext task of masked language modeling (MLM), where texts are first tokenized into semantically meaningful pieces. In this work, we study masked image modeling (MIM) and indicate..
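Masked image modeling corrupts a subset of patch tokens before encoding, and the model is trained to recover them (in iBOT, against an online tokenizer's outputs). A simplified sketch of the corruption step, using uniform random masking rather than the paper's blockwise scheme:

```python
import numpy as np

def apply_mim_mask(patch_embeds, mask_token, mask_ratio=0.4, rng=None):
    """Replace a random subset of patch embeddings with a shared [MASK]
    embedding, as in masked image modeling. The ratio is illustrative, and
    iBOT's blockwise masking is simplified here to uniform random masking.

    patch_embeds: (N, d) patch tokens; mask_token: (d,) learned embedding.
    Returns the corrupted tokens and a boolean mask (True = masked).
    """
    rng = rng or np.random.default_rng()
    n = patch_embeds.shape[0]
    mask = np.zeros(n, dtype=bool)
    mask[rng.choice(n, size=int(n * mask_ratio), replace=False)] = True
    out = patch_embeds.copy()
    out[mask] = mask_token                   # corrupt the selected patches
    return out, mask
```

The returned boolean mask is what restricts the reconstruction loss to the corrupted positions.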

[Paper Review] Improved Regularization of Convolutional Neural Networks with Cutout (arXiv 2017)

arXiv 2017 | https://arxiv.org/abs/1708.04552
Convolutional neural networks are capable of learning powerful representational spaces, which are necessary for tackling complex learning tasks. However, due to the model capacity required to capture such represen..
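Cutout itself is a simple augmentation: zero out a square region of each training image. A minimal sketch, with the patch length as a tunable assumption (the paper reports per-dataset values, e.g. 16 px on CIFAR-10):

```python
import numpy as np

def cutout(image, size=16, rng=None):
    """Zero out one square patch of an (H, W, C) image, Cutout-style.

    The patch center is sampled anywhere in the image, so the patch may be
    partially clipped at the borders, as in the reference implementation.
    """
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    cy, cx = rng.integers(0, h), rng.integers(0, w)
    y1, y2 = max(0, cy - size // 2), min(h, cy + size // 2)
    x1, x2 = max(0, cx - size // 2), min(w, cx + size // 2)
    out = image.copy()                       # leave the input untouched
    out[y1:y2, x1:x2] = 0                    # erase the sampled square
    return out
```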

[Paper Review] Class-Balanced Loss Based on Effective Number of Samples (CVPR 2019)

CVPR 2019 | https://arxiv.org/abs/1901.05555
With the rapid increase of large-scale, real-world datasets, it becomes critical to address the problem of long-tailed data distribution (i.e., a few classes account for most of the data, while most classes are under-represented). Existin..
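The effective number of samples for a class with n samples is E_n = (1 - beta^n) / (1 - beta), and the class-balanced loss reweights each class by 1/E_n. A sketch of the weight computation; the normalization convention and beta = 0.999 are assumptions here (the paper sweeps several beta values):

```python
import numpy as np

def class_balanced_weights(samples_per_class, beta=0.999):
    """Per-class loss weights from the effective number of samples.

    E_n = (1 - beta**n) / (1 - beta); weights are proportional to 1/E_n
    and rescaled so they sum to the number of classes (a common convention).
    """
    n = np.asarray(samples_per_class, dtype=np.float64)
    effective_num = (1.0 - np.power(beta, n)) / (1.0 - beta)
    weights = 1.0 / effective_num            # rarer classes get larger weights
    return weights * len(n) / weights.sum()  # normalize to sum to #classes
```

As beta approaches 0 the weights become uniform (plain loss), and as beta approaches 1 they approach inverse-frequency weighting.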