YOLOv11 Improvements | Attention Mechanisms | Adding the SEAM Attention Module from YOLO-Face to Improve Occluded-Object Detection (with a Secondary-Innovation C2PSA Variant)

I. Introduction

This post covers SEAM, an attention mechanism proposed in YOLO-Face that improves detection of occluded objects. The SEAM (Separated and Enhancement Attention Module) attention network module aims to compensate for the response loss of occluded faces by strengthening the response of unoccluded ones: by learning the relationship between occluded and unoccluded faces, it reduces the loss caused by occlusion and thereby improves occluded-object detection. After explaining how the mechanism works, the post provides its code, a step-by-step integration guide, and the yaml files and training script needed to run it, so even beginners can follow along (a secondary innovation, the C2PSA_SEAM module, is also included).

Welcome to subscribe to my column and study YOLO together!


Table of Contents

I. Introduction

II. How It Works

2.1 Occlusion Improvements

2.2 The SEAM Module

2.3 Repulsion Loss

III. Core Code

IV. Integration Tutorial

4.1 Modification One

4.2 Modification Two

4.3 Modification Three

4.4 Modification Four

V. SEAM yaml Files and Training Records

5.1 C2PSA_SEAM yaml File

5.2 SEAM Training yaml File

5.3 Training Code

5.4 Training Screenshots

VI. Summary


II. How It Works

2.1 Occlusion Improvements

This post focuses on the occlusion improvements, which come in two parts: the attention network module (SEAM) and Repulsion Loss.

1. SEAM module: the SEAM (Separated and Enhancement Attention Module) attention network module aims to compensate for the response loss of occluded faces by strengthening the response of unoccluded ones. SEAM is built from a combination of depthwise separable convolutions and residual connections. Depthwise convolution operates per channel; it can learn the importance of individual channels and reduces the parameter count, but it ignores the information relationships between channels. To recover that loss, the outputs of the different depthwise convolutions are combined by pointwise (1x1) convolutions, and a two-layer fully connected network then fuses the information of every channel to strengthen the connections among all channels. The model is thus expected to learn the relationship between occluded and unoccluded faces and make up for the loss caused by occlusion.
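To make the parameter savings of the depthwise + pointwise pairing concrete, here is a minimal, self-contained PyTorch comparison. This is purely illustrative (it is not part of the SEAM source code below); the layer sizes are arbitrary:

```python
import torch.nn as nn

channels, k = 64, 3

# Standard convolution: every output channel mixes all input channels spatially.
standard = nn.Conv2d(channels, channels, k, padding=1)

# Depthwise + pointwise: per-channel spatial filtering, then a 1x1 convolution
# to restore cross-channel mixing (the same pattern SEAM's DCovN uses).
depthwise_separable = nn.Sequential(
    nn.Conv2d(channels, channels, k, padding=1, groups=channels),
    nn.Conv2d(channels, channels, 1),
)

count = lambda m: sum(p.numel() for p in m.parameters())
print(count(standard))             # 64*64*9 + 64 = 36928
print(count(depthwise_separable))  # (64*9 + 64) + (64*64 + 64) = 4800
```

The separable version needs fewer than a seventh of the parameters at this width, which is why SEAM can afford to stack several such blocks.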

2. Repulsion Loss: a loss function designed to handle facial occlusion. It is split into two parts, RepGT and RepBox. RepGT pushes the current bounding box as far as possible from the surrounding ground-truth boxes, while RepBox pushes a predicted box away from the surrounding predicted boxes, reducing the IoU between them so that a prediction is not suppressed by NMS and left straddling two faces.


2.2 The SEAM Module

The figure below shows the architecture of SEAM (Separated and Enhancement Attention Module) and the structure of CSMM (Channel and Spatial Mixing Module).

On the left is the overall SEAM architecture, which contains three CSMM modules with different patch sizes (patch-6, patch-7, patch-8). Their outputs are average-pooled, passed through a channel expansion (Channel exp) operation, and finally multiplied together to produce an enhanced feature representation. On the right is the detailed structure of the CSMM module, which exploits multi-scale features through patches of different sizes and uses depthwise separable convolution to learn the correlation between spatial dimensions and channels. The module consists of the following elements:

(a) Patch Embedding: embeds the input patches.
(b) GELU: the Gaussian Error Linear Unit activation function.
(c) BatchNorm: batch normalization, which speeds up training and improves performance.
(d) Depthwise Convolution: convolves each input channel separately.
(e) Pointwise Convolution: 1x1 convolution that fuses the features produced by the depthwise convolution.

This design is meant to strengthen the network's attention to, and ability to capture, occluded facial features by treating the spatial and channel dimensions carefully. By combining multi-scale features with depthwise separable convolutions, CSMM improves the precision of feature extraction while remaining computationally efficient. This matters for face detection in particular, because the size, shape, and degree of occlusion of facial features can vary drastically between scenes. With SEAM and CSMM, YOLO-FaceV2 becomes better at recognizing all kinds of facial features in complex scenes.
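The CSMM pipeline just described (patch embedding → GELU/BatchNorm → depthwise convolution → pointwise convolution) can be sketched as a minimal PyTorch module. This is a simplified illustration of the data flow, not the official YOLO-FaceV2 implementation; the class name, patch size, and channel width here are my own choices:

```python
import torch
import torch.nn as nn

class CSMMSketch(nn.Module):
    """Minimal sketch of one CSMM branch: patch embedding, then a depthwise
    (per-channel spatial) and pointwise (1x1, cross-channel) convolution pair."""
    def __init__(self, channels, patch=3):
        super().__init__()
        # (a) Patch Embedding: non-overlapping patches via a strided conv
        self.embed = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=patch, stride=patch),
            nn.GELU(),                 # (b) activation
            nn.BatchNorm2d(channels),  # (c) normalization
        )
        # (d) depthwise conv learns spatial patterns within each channel
        self.depthwise = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
        # (e) pointwise conv re-mixes information across channels
        self.pointwise = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        x = self.embed(x)
        x = self.depthwise(x)
        return self.pointwise(x)

m = CSMMSketch(32, patch=3)
out = m(torch.randn(1, 32, 24, 24))
print(out.shape)  # torch.Size([1, 32, 8, 8]) — patch embedding downsamples 24 -> 8
```

Running three such branches with different patch sizes, as in the figure, yields features at three scales that SEAM then pools and fuses.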


2.3 排斥损失

排斥损失(Repulsion Loss) 是一种用于处理面部检测中遮挡问题的损失函数。在面部检测中,类内遮挡可能会导致一个面部包含另一个面部的特征,从而增加错误检测率。排斥损失能够有效地通过排斥效应来缓解这一问题。排斥损失被分为两个部分: RepGT RepBox

(a)RepGT损失: 其功能是使当前边界框尽可能远离周围的真实边界框。这里的“周围真实边界框”指的是与除了要预测的边界框外的面部标签具有最大IoU的那个边界框。RepGT损失的计算方法如下:

L_{\text{RepGT}} = \sum_{P \in P^+} \text{SmoothLn}(\text{IoG}(P, G_{\text{Rep}}))

Here P denotes a predicted face box and G_{\text{Rep}} is the surrounding ground-truth box with the largest IoU. IoG (Intersection over Ground truth) is defined as \frac{\text{area}(P \cap G)}{\text{area}(G)} and lies in [0, 1]. SmoothLn is a continuously differentiable logarithmic function, and \sigma \in [0, 1) is a smoothing parameter that adjusts how sensitive the repulsion loss is to outliers.
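As a quick sanity check of these definitions, here is a small pure-Python sketch of IoG and the piecewise SmoothLn as defined in the Repulsion Loss paper. Boxes are (x1, y1, x2, y2) tuples; the function names and the σ default of 0.5 are my own, not from the YOLO-FaceV2 code:

```python
import math

def iog(p, g):
    """Intersection over Ground truth: area(P ∩ G) / area(G), in [0, 1]."""
    ix1, iy1 = max(p[0], g[0]), max(p[1], g[1])
    ix2, iy2 = min(p[2], g[2]), min(p[3], g[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_g = (g[2] - g[0]) * (g[3] - g[1])
    return inter / area_g

def smooth_ln(x, sigma=0.5):
    """Continuously differentiable penalty; sigma in [0, 1) tunes
    sensitivity to outliers (large overlaps grow only linearly)."""
    if x <= sigma:
        return -math.log(1 - x)
    return (x - sigma) / (1 - sigma) - math.log(1 - sigma)

p, g_rep = (0, 0, 2, 2), (1, 1, 3, 3)   # overlap area 1, area(G) = 4
print(iog(p, g_rep))                     # 0.25
print(round(smooth_ln(0.25), 4))         # -ln(0.75) ≈ 0.2877
```

Note the asymmetry that motivates IoG over IoU: shrinking the prediction P reduces IoG, but the denominator area(G) is fixed, so the loss cannot be gamed by inflating the predicted box.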

(b) RepBox loss: pushes a predicted box as far as possible from the surrounding predicted boxes, reducing the IoU between them so that a predicted box is not suppressed by NMS (non-maximum suppression) and ends up assigned to two faces. The predicted boxes are partitioned into groups, with boxes in different groups corresponding to different face labels. For predicted boxes p_i and p_j from different groups, we want their overlap area to be as small as possible. RepBox also uses SmoothLn as the optimization function:

L_{\text{RepBox}} = \sum_{i \neq j} \text{SmoothLn}(\text{IoU}(B_{p_i}, B_{p_j}))

By keeping bounding boxes apart and reducing the overlap between predicted boxes, repulsion loss improves the accuracy of face detection under occlusion.
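To make the RepBox term concrete, here is a self-contained pure-Python sketch. The function names are mine; boxes are (x1, y1, x2, y2) tuples and `groups` records which face each prediction is assigned to. The normalization by the number of overlapping pairs follows my reading of the paper and is an assumption:

```python
import math

def iou(a, b):
    """Standard Intersection over Union of two boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def smooth_ln(x, sigma=0.5):
    if x <= sigma:
        return -math.log(1 - x)
    return (x - sigma) / (1 - sigma) - math.log(1 - sigma)

def repbox_loss(boxes, groups, sigma=0.5):
    """Sum SmoothLn(IoU) over prediction pairs assigned to DIFFERENT faces,
    so overlapping predictions for different faces repel each other."""
    loss, pairs = 0.0, 0
    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            if groups[i] != groups[j] and iou(boxes[i], boxes[j]) > 0:
                loss += smooth_ln(iou(boxes[i], boxes[j]), sigma)
                pairs += 1
    return loss / max(pairs, 1)

boxes = [(0, 0, 2, 2), (1, 1, 3, 3), (10, 10, 12, 12)]
groups = [0, 1, 1]  # first box predicts face 0, the other two predict face 1
print(round(repbox_loss(boxes, groups), 4))  # ln(7/6) ≈ 0.1542
```

Only the first pair contributes: the boxes overlap (IoU = 1/7) and belong to different faces, so the gradient of this term pushes them apart; the same-group pair and the non-overlapping pair are ignored.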


III. Core Code

See Section 4 for how to use this code!

```python
import torch
import torch.nn as nn

__all__ = ['SEAM', 'C2PSA_SEAM']


class Residual(nn.Module):
    def __init__(self, fn):
        super(Residual, self).__init__()
        self.fn = fn

    def forward(self, x):
        return self.fn(x) + x


class SEAM(nn.Module):
    def __init__(self, c1, n=1, reduction=16):
        super(SEAM, self).__init__()
        c2 = c1
        self.DCovN = nn.Sequential(
            # nn.Conv2d(c1, c2, kernel_size=3, stride=1, padding=1, groups=c1),
            # nn.GELU(),
            # nn.BatchNorm2d(c2),
            *[nn.Sequential(
                Residual(nn.Sequential(
                    nn.Conv2d(in_channels=c2, out_channels=c2, kernel_size=3, stride=1, padding=1, groups=c2),
                    nn.GELU(),
                    nn.BatchNorm2d(c2)
                )),
                nn.Conv2d(in_channels=c2, out_channels=c2, kernel_size=1, stride=1, padding=0, groups=1),
                nn.GELU(),
                nn.BatchNorm2d(c2)
            ) for i in range(n)]
        )
        self.avg_pool = torch.nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(c2, c2 // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(c2 // reduction, c2, bias=False),
            nn.Sigmoid()
        )
        self._initialize_weights()
        # self.initialize_layer(self.avg_pool)
        self.initialize_layer(self.fc)

    def forward(self, x):
        b, c, _, _ = x.size()
        y = self.DCovN(x)
        y = self.avg_pool(y).view(b, c)
        y = self.fc(y).view(b, c, 1, 1)
        y = torch.exp(y)
        return x * y.expand_as(x)

    def _initialize_weights(self):
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.xavier_uniform_(m.weight, gain=1)
            elif isinstance(m, nn.BatchNorm2d):
                nn.init.constant_(m.weight, 1)
                nn.init.constant_(m.bias, 0)

    def initialize_layer(self, layer):
        if isinstance(layer, (nn.Conv2d, nn.Linear)):
            torch.nn.init.normal_(layer.weight, mean=0., std=0.001)
            if layer.bias is not None:
                torch.nn.init.constant_(layer.bias, 0)


def DcovN(c1, c2, depth, kernel_size=3, patch_size=3):
    dcovn = nn.Sequential(
        nn.Conv2d(c1, c2, kernel_size=patch_size, stride=patch_size),
        nn.SiLU(),
        nn.BatchNorm2d(c2),
        *[nn.Sequential(
            Residual(nn.Sequential(
                nn.Conv2d(in_channels=c2, out_channels=c2, kernel_size=kernel_size, stride=1, padding=1, groups=c2),
                nn.SiLU(),
                nn.BatchNorm2d(c2)
            )),
            nn.Conv2d(in_channels=c2, out_channels=c2, kernel_size=1, stride=1, padding=0, groups=1),
            nn.SiLU(),
            nn.BatchNorm2d(c2)
        ) for i in range(depth)]
    )
    return dcovn


def autopad(k, p=None, d=1):  # kernel, padding, dilation
    """Pad to 'same' shape outputs."""
    if d > 1:
        k = d * (k - 1) + 1 if isinstance(k, int) else [d * (x - 1) + 1 for x in k]  # actual kernel-size
    if p is None:
        p = k // 2 if isinstance(k, int) else [x // 2 for x in k]  # auto-pad
    return p


class Conv(nn.Module):
    """Standard convolution with args(ch_in, ch_out, kernel, stride, padding, groups, dilation, activation)."""

    default_act = nn.SiLU()  # default activation

    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, d=1, act=True):
        """Initialize Conv layer with given arguments including activation."""
        super().__init__()
        self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p, d), groups=g, dilation=d, bias=False)
        self.bn = nn.BatchNorm2d(c2)
        self.act = self.default_act if act is True else act if isinstance(act, nn.Module) else nn.Identity()

    def forward(self, x):
        """Apply convolution, batch normalization and activation to input tensor."""
        return self.act(self.bn(self.conv(x)))

    def forward_fuse(self, x):
        """Apply convolution and activation without batch normalization (used after BN fusion)."""
        return self.act(self.conv(x))


class PSABlock(nn.Module):
    """
    PSABlock class implementing a Position-Sensitive Attention block for neural networks.

    This class encapsulates the functionality for applying multi-head attention and feed-forward neural network layers
    with optional shortcut connections.

    Attributes:
        attn (SEAM): Attention module (SEAM replaces the original multi-head attention here).
        ffn (nn.Sequential): Feed-forward neural network module.
        add (bool): Flag indicating whether to add shortcut connections.

    Methods:
        forward: Performs a forward pass through the PSABlock, applying attention and feed-forward layers.

    Examples:
        Create a PSABlock and perform a forward pass
        >>> psablock = PSABlock(c=128, attn_ratio=0.5, num_heads=4, shortcut=True)
        >>> input_tensor = torch.randn(1, 128, 32, 32)
        >>> output_tensor = psablock(input_tensor)
    """

    def __init__(self, c, attn_ratio=0.5, num_heads=4, shortcut=True) -> None:
        """Initializes the PSABlock with attention and feed-forward layers for enhanced feature extraction."""
        super().__init__()
        self.attn = SEAM(c)
        self.ffn = nn.Sequential(Conv(c, c * 2, 1), Conv(c * 2, c, 1, act=False))
        self.add = shortcut

    def forward(self, x):
        """Executes a forward pass through PSABlock, applying attention and feed-forward layers to the input tensor."""
        x = x + self.attn(x) if self.add else self.attn(x)
        x = x + self.ffn(x) if self.add else self.ffn(x)
        return x


class C2PSA_SEAM(nn.Module):
    """
    C2PSA module with attention mechanism for enhanced feature extraction and processing.

    This module implements a convolutional block with attention mechanisms to enhance feature extraction and processing
    capabilities. It includes a series of PSABlock modules for self-attention and feed-forward operations.

    Attributes:
        c (int): Number of hidden channels.
        cv1 (Conv): 1x1 convolution layer to reduce the number of input channels to 2*c.
        cv2 (Conv): 1x1 convolution layer to reduce the number of output channels to c.
        m (nn.Sequential): Sequential container of PSABlock modules for attention and feed-forward operations.

    Methods:
        forward: Performs a forward pass through the C2PSA module, applying attention and feed-forward operations.

    Notes:
        This module is essentially the same as the PSA module, but refactored to allow stacking more PSABlock modules.

    Examples:
        >>> c2psa = C2PSA_SEAM(c1=256, c2=256, n=3, e=0.5)
        >>> input_tensor = torch.randn(1, 256, 64, 64)
        >>> output_tensor = c2psa(input_tensor)
    """

    def __init__(self, c1, c2, n=1, e=0.5):
        """Initializes the C2PSA module with specified input/output channels, number of layers, and expansion ratio."""
        super().__init__()
        assert c1 == c2
        self.c = int(c1 * e)
        self.cv1 = Conv(c1, 2 * self.c, 1, 1)
        self.cv2 = Conv(2 * self.c, c1, 1)
        self.m = nn.Sequential(*(PSABlock(self.c, attn_ratio=0.5, num_heads=self.c // 64) for _ in range(n)))

    def forward(self, x):
        """Processes the input tensor 'x' through a series of PSA blocks and returns the transformed tensor."""
        a, b = self.cv1(x).split((self.c, self.c), dim=1)
        b = self.m(b)
        return self.cv2(torch.cat((a, b), 1))


if __name__ == "__main__":
    # Generate a sample image and run a quick shape check
    image_size = (1, 64, 240, 240)
    image = torch.rand(*image_size)

    # Model
    model = C2PSA_SEAM(64, 64)
    out = model(image)
    print(out.size())
```


IV. Integration Tutorial

I recommend applying this occlusion-oriented attention mechanism at the outputs of the Neck.


4.1 Modification One

First, create the files. Find the ultralytics/nn/modules folder and create a directory there named 'Addmodules', then create a new .py file inside it and paste in the core code from Section 3.


4.2 修改二

第二步我们在该目录下创建一个新的py文件名字为'__init__.py'( ,然后在其内部导入我们的检测头如下图所示。
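For reference, if the core code from Section 3 was saved as, say, `SEAM.py` (the filename is your choice), the `__init__.py` would be an illustrative one-liner like this:

```python
# Addmodules/__init__.py
# Assumes the core code from Section 3 was saved as SEAM.py in this folder;
# adjust the module name if you named the file differently.
from .SEAM import SEAM, C2PSA_SEAM
```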


4.3 修改三

第三步我门中到如下文件'ultralytics/nn/tasks.py'进行导入和注册我们的模块( !


4.4 修改四

按照我的添加在parse_model里添加即可。
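The exact code differs between ultralytics versions, but the additions to `ultralytics/nn/tasks.py` typically amount to an import plus two branches in `parse_model`, roughly like this illustrative excerpt (not runnable on its own; adjust the import path to wherever you created Addmodules):

```python
# Near the top of ultralytics/nn/tasks.py:
from ultralytics.nn.modules.Addmodules import SEAM, C2PSA_SEAM

# Inside parse_model(), alongside the existing module branches:
# C2PSA_SEAM is channel-preserving like C2PSA, so it takes (c1, c2, ...);
# SEAM only needs the incoming channel count.
elif m is C2PSA_SEAM:
    c1, c2 = ch[f], args[0]
    args = [c1, c2, *args[1:]]
elif m is SEAM:
    args = [ch[f]]
```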


That completes the modifications; you can now copy the yaml files below and run them.


V. SEAM yaml Files and Training Records

5.1 C2PSA_SEAM yaml File

Training info for this version: YOLO11-C2PSA-SEAM summary: 324 layers, 2,563,739 parameters, 2,563,723 gradients, 6.4 GFLOPs

```yaml
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLO11 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolo11n.yaml' will call yolo11.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.50, 0.25, 1024] # summary: 319 layers, 2624080 parameters, 2624064 gradients, 6.6 GFLOPs
  s: [0.50, 0.50, 1024] # summary: 319 layers, 9458752 parameters, 9458736 gradients, 21.7 GFLOPs
  m: [0.50, 1.00, 512] # summary: 409 layers, 20114688 parameters, 20114672 gradients, 68.5 GFLOPs
  l: [1.00, 1.00, 512] # summary: 631 layers, 25372160 parameters, 25372144 gradients, 87.6 GFLOPs
  x: [1.00, 1.50, 512] # summary: 631 layers, 56966176 parameters, 56966160 gradients, 196.0 GFLOPs

# YOLO11n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 2, C3k2, [256, False, 0.25]]
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 2, C3k2, [512, False, 0.25]]
  - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
  - [-1, 2, C3k2, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
  - [-1, 2, C3k2, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 9
  - [-1, 2, C2PSA_SEAM, [1024]] # 10

# YOLO11n head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 2, C3k2, [512, False]] # 13
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 2, C3k2, [256, False]] # 16 (P3/8-small)
  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 13], 1, Concat, [1]] # cat head P4
  - [-1, 2, C3k2, [512, False]] # 19 (P4/16-medium)
  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 10], 1, Concat, [1]] # cat head P5
  - [-1, 2, C3k2, [1024, True]] # 22 (P5/32-large)
  - [[16, 19, 22], 1, Detect, [nc]] # Detect(P3, P4, P5)
```


5.2 SEAM Training yaml File

Training info for this version: YOLO11-SEAM summary: 370 layers, 2,698,203 parameters, 2,698,187 gradients, 6.7 GFLOPs

```yaml
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLO11 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolo11n.yaml' will call yolo11.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.50, 0.25, 1024] # summary: 319 layers, 2624080 parameters, 2624064 gradients, 6.6 GFLOPs
  s: [0.50, 0.50, 1024] # summary: 319 layers, 9458752 parameters, 9458736 gradients, 21.7 GFLOPs
  m: [0.50, 1.00, 512] # summary: 409 layers, 20114688 parameters, 20114672 gradients, 68.5 GFLOPs
  l: [1.00, 1.00, 512] # summary: 631 layers, 25372160 parameters, 25372144 gradients, 87.6 GFLOPs
  x: [1.00, 1.50, 512] # summary: 631 layers, 56966176 parameters, 56966160 gradients, 196.0 GFLOPs

# YOLO11n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 2, C3k2, [256, False, 0.25]]
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 2, C3k2, [512, False, 0.25]]
  - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
  - [-1, 2, C3k2, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
  - [-1, 2, C3k2, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 9
  - [-1, 2, C2PSA, [1024]] # 10

# YOLO11n head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 2, C3k2, [512, False]] # 13
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 2, C3k2, [256, False]] # 16 (P3/8-small)
  - [-1, 1, SEAM, []] # 17 (P3/8-small) attention added at the small-object detection output
  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 13], 1, Concat, [1]] # cat head P4
  - [-1, 2, C3k2, [512, False]] # 20 (P4/16-medium)
  - [-1, 1, SEAM, []] # 21 (P4/16-medium) attention added at the medium-object detection output
  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 10], 1, Concat, [1]] # cat head P5
  - [-1, 2, C3k2, [1024, True]] # 24 (P5/32-large)
  - [-1, 1, SEAM, []] # 25 (P5/32-large) attention added at the large-object detection output
  # Which layers get the attention module can be chosen to suit your own dataset and scene.
  # If you place the attention yourself, make sure the from indices [17, 21, 25] still point at the corresponding detection layers!
  - [[17, 21, 25], 1, Detect, [nc]] # Detect(P3, P4, P5)
```


5.3 Training Code

Create a .py file, paste in the code below, set your own file paths, and run it.

```python
import warnings
warnings.filterwarnings('ignore')
from ultralytics import YOLO

if __name__ == '__main__':
    # Point this at the yaml you saved from Section 5.1 or 5.2
    model = YOLO('path/to/yolo11-C2PSA-SEAM.yaml')
    # model.load('yolo11n.pt')  # loading pretrained weights
    model.train(data=r'replace with your dataset yaml path',
                # For other tasks, find 'ultralytics/cfg/default.yaml' and change task to detect, segment, classify or pose
                cache=False,
                imgsz=640,
                epochs=150,
                single_cls=False,  # whether this is single-class detection
                batch=4,
                close_mosaic=10,
                workers=0,
                device='0',
                optimizer='SGD',  # using SGD
                # resume='',  # to resume training, set this to your last.pt path
                amp=False,  # turn AMP off if the training loss becomes NaN
                project='runs/train',
                name='exp',
                )
```


5.4 Training Screenshots


VI. Summary

That concludes this post. If it helped you, consider subscribing to my YOLOv11 effective-improvements column, which is newly opened with an average quality score of 98. I will keep reproducing papers from the latest top conferences and backfilling some of the older improvement mechanisms, so follow along for more updates!