YOLOv11 Improvements | Conv / Convolution | C3k2_PConv: A New Structure Built on the Lightweight PartialConv (Original Improvement)

1. Introduction

The improvement presented in this article is a structure of my own design: based on PartialConv, I propose a new block, CSPPC (wrapped into the network as C3k2_PConv), used to replace the C3k2 blocks in the network. After the replacement, the parameter count drops by about 300,000 and the compute falls to 5.9 GFLOPs. PartialConv is a convolution designed for fast inference, so it is also very effective at speeding up the network. Used as a lightweight module, this structure is one of the strongest lightweighting options, and on my dataset it also brought a gain of about one point. The structure is an original design of mine, which makes it well suited for use in papers, and the article includes a hand-drawn structure diagram!

Welcome to subscribe to my column and study YOLO with me!

Improvement 1 training info: YOLO11-C3k2-PConv-1 summary: 315 layers, 2,511,435 parameters, 2,511,419 gradients, 5.9 GFLOPs
Improvement 2 training info: YOLO11-C3k2-PConv-2 summary: 314 layers, 2,282,395 parameters, 2,282,379 gradients, 6.1 GFLOPs
Unmodified baseline: YOLO11 summary: 319 layers, 2,594,715 parameters, 2,594,699 gradients, 6.5 GFLOPs

Contents

1. Introduction

2. PConv Principles

2.1 The Basic Principle of PConv

2.2 Feature-Map Redundancy

2.3 Hand-Drawn Structure Diagram

3. Core Code of C3k2_PConv

4. How to Add C3k2_PConv

4.1 Modification One

4.2 Modification Two

4.3 Modification Three

4.4 Modification Four

5. C3k2_PConv yaml Files and Training Logs

5.1 C3k2_PConv yaml File 1

5.2 C3k2_PConv yaml File 2

5.3 Training Screenshot

6. Summary


2. PConv Principles

Paper: official paper link (PConv was proposed in the FasterNet paper, CVPR 2023)

Code: official code link


2.1 The Basic Principle of PConv

The basic idea of PConv (partial convolution) is to exploit feature-map redundancy to cut both computation and memory access. Concretely, PConv applies a regular convolution for spatial feature extraction on only a fraction of the input channels and leaves the remaining channels untouched. This design has three advantages:

1. Lower computational complexity: by convolving fewer channels, PConv reduces the number of floating-point operations (FLOPs). For example, with a partial ratio of 1/4, PConv costs only 1/16 of a regular convolution, since both the input and output channel counts of the convolution shrink by a factor of 4.

2. Lower memory access: compared with regular convolution, PConv reduces memory traffic, which is especially beneficial on I/O-bound devices.

3. Preserved information flow: although only part of the channels are convolved, the untouched channels remain useful to the subsequent pointwise convolution (PWConv) layers, so feature information still flows across all channels.
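The 1/16 figure follows from the fact that a convolution's cost is proportional to the product of its input and output channel counts, and PConv shrinks both to c/4. A quick sanity check in plain Python (a simplified multiply-accumulate count, ignoring bias and stride):

```python
def conv_flops(h, w, k, c_in, c_out):
    # Multiply-accumulate count of a dense k x k convolution
    # over an h x w feature map (stride 1, 'same' padding).
    return h * w * k * k * c_in * c_out

h, w, k, c = 56, 56, 3, 64
regular = conv_flops(h, w, k, c, c)            # conv over all c channels
partial = conv_flops(h, w, k, c // 4, c // 4)  # conv over c/4 channels only
print(partial / regular)  # 0.0625, i.e. exactly 1/16
```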

The figure below illustrates the concept of partial convolution (PConv): it applies filters to only a few input channels while leaving the others untouched, which makes the operation fast and efficient.

Compared with regular convolution, PConv has lower FLOPs, while achieving higher FLOPS (throughput) than depthwise/group convolution. It improves efficiency because it both reduces the amount of computation that must be performed and reduces memory access.

In the figure, (a) shows regular convolution, (b) shows depthwise/group convolution, and (c) shows our partial convolution: a subset of channels passes straight through via an identity mapping without being convolved.


2.2 Feature-Map Redundancy

Feature-map redundancy refers to the fact that, in a convolutional neural network's feature maps (also called activation maps), many channels carry similar or duplicated information. In such cases, some channels are highly similar to others, so processing them again during the forward pass adds computation and memory-access overhead without contributing new useful information.

In practice this redundancy wastes compute, because the network runs convolutions over all channels, including the redundant ones that barely affect performance. Several techniques reduce it:

1. Channel pruning: analyze channel importance and remove the channels with little effect on final accuracy.
2. Group convolution: split the input feature map into groups and convolve each group independently, reducing parameters and computation.
3. Partial convolution (PConv): as proposed in the paper, apply convolution to only part of the input channels, cutting redundant computation and memory access while still extracting spatial features effectively.
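The channel-split idea behind PConv can be illustrated in a few lines of NumPy. This is a toy stand-in: the "convolution" below is just a scaling of the first quarter of channels, not a real 3x3 filter, but the split / process-part / concatenate pattern matches the `split_cat` forward in the core code:

```python
import numpy as np

def partial_op(x, n_div=4):
    # Split along the channel axis: process only the first C / n_div
    # channels and pass the remaining ones through unchanged.
    c = x.shape[0]
    c_conv = c // n_div
    x1, x2 = x[:c_conv], x[c_conv:]
    x1 = x1 * 2.0                            # placeholder for the real 3x3 conv
    return np.concatenate([x1, x2], axis=0)  # re-assemble all channels

x = np.ones((16, 8, 8))  # (channels, H, W)
y = partial_op(x)
print(y.shape)                      # (16, 8, 8) -- overall shape is preserved
print(y[:4].mean(), y[4:].mean())   # 2.0 1.0 -- only the first 4 channels were touched
```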


2.3 Hand-Drawn Structure Diagram


3. Core Code of C3k2_PConv

```python
import torch
import torch.nn as nn

__all__ = ['C3k2_PConv1', 'C3k2_PConv2']


def autopad(k, p=None, d=1):  # kernel, padding, dilation
    """Pad to 'same' shape outputs."""
    if d > 1:
        k = d * (k - 1) + 1 if isinstance(k, int) else [d * (x - 1) + 1 for x in k]  # actual kernel-size
    if p is None:
        p = k // 2 if isinstance(k, int) else [x // 2 for x in k]  # auto-pad
    return p


class Conv(nn.Module):
    """Standard convolution with args(ch_in, ch_out, kernel, stride, padding, groups, dilation, activation)."""

    default_act = nn.SiLU()  # default activation

    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, d=1, act=True):
        """Initialize Conv layer with given arguments including activation."""
        super().__init__()
        self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p, d), groups=g, dilation=d, bias=False)
        self.bn = nn.BatchNorm2d(c2)
        self.act = self.default_act if act is True else act if isinstance(act, nn.Module) else nn.Identity()

    def forward(self, x):
        """Apply convolution, batch normalization and activation to input tensor."""
        return self.act(self.bn(self.conv(x)))

    def forward_fuse(self, x):
        """Apply convolution and activation without batch normalization (used after BN fusion)."""
        return self.act(self.conv(x))


class Partial_conv3(nn.Module):
    """PConv: convolve only the first dim // n_div channels and pass the rest through untouched."""

    def __init__(self, dim, n_div, forward):
        super().__init__()
        self.dim_conv3 = dim // n_div  # channels that go through the 3x3 conv
        self.dim_untouched = dim - self.dim_conv3  # channels passed through unchanged
        self.partial_conv3 = nn.Conv2d(self.dim_conv3, self.dim_conv3, 3, 1, 1, bias=False)

        if forward == 'slicing':
            self.forward = self.forward_slicing
        elif forward == 'split_cat':
            self.forward = self.forward_split_cat
        else:
            raise NotImplementedError

    def forward_slicing(self, x):
        # only for inference
        x = x.clone()  # !!! Keep the original input intact for the residual connection later
        x[:, :self.dim_conv3, :, :] = self.partial_conv3(x[:, :self.dim_conv3, :, :])
        return x

    def forward_split_cat(self, x):
        # for training/inference
        x1, x2 = torch.split(x, [self.dim_conv3, self.dim_untouched], dim=1)
        x1 = self.partial_conv3(x1)
        x = torch.cat((x1, x2), 1)
        return x


class CSPPC_Bottleneck(nn.Module):
    """Bottleneck built from two stacked partial convolutions (PConv over part of the channels)."""

    def __init__(self, dim):
        super().__init__()
        self.DualPConv = nn.Sequential(
            Partial_conv3(dim, n_div=4, forward='split_cat'),
            Partial_conv3(dim, n_div=4, forward='split_cat'),
        )

    def forward(self, x):
        return self.DualPConv(x)


class Bottleneck(nn.Module):
    """Standard bottleneck."""

    def __init__(self, c1, c2, shortcut=True, g=1, k=(3, 3), e=0.5):
        """Initializes a standard bottleneck module with optional shortcut connection and configurable parameters."""
        super().__init__()
        c_ = int(c2 * e)  # hidden channels
        self.cv1 = Conv(c1, c_, k[0], 1)
        self.cv2 = Conv(c_, c2, k[1], 1, g=g)
        self.add = shortcut and c1 == c2

    def forward(self, x):
        """Apply the bottleneck with an optional residual connection."""
        return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))


class C2f(nn.Module):
    """Faster Implementation of CSP Bottleneck with 2 convolutions."""

    def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5):
        """Initializes a CSP bottleneck with 2 convolutions and n Bottleneck blocks for faster processing."""
        super().__init__()
        self.c = int(c2 * e)  # hidden channels
        self.cv1 = Conv(c1, 2 * self.c, 1, 1)
        self.cv2 = Conv((2 + n) * self.c, c2, 1)  # optional act=FReLU(c2)
        self.m = nn.ModuleList(Bottleneck(self.c, self.c, shortcut, g, k=((3, 3), (3, 3)), e=1.0) for _ in range(n))

    def forward(self, x):
        """Forward pass through C2f layer."""
        y = list(self.cv1(x).chunk(2, 1))
        y.extend(m(y[-1]) for m in self.m)
        return self.cv2(torch.cat(y, 1))

    def forward_split(self, x):
        """Forward pass using split() instead of chunk()."""
        y = list(self.cv1(x).split((self.c, self.c), 1))
        y.extend(m(y[-1]) for m in self.m)
        return self.cv2(torch.cat(y, 1))


class C3(nn.Module):
    """CSP Bottleneck with 3 convolutions."""

    def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):
        """Initialize the CSP Bottleneck with given channels, number, shortcut, groups, and expansion values."""
        super().__init__()
        c_ = int(c2 * e)  # hidden channels
        self.cv1 = Conv(c1, c_, 1, 1)
        self.cv2 = Conv(c1, c_, 1, 1)
        self.cv3 = Conv(2 * c_, c2, 1)  # optional act=FReLU(c2)
        self.m = nn.Sequential(*(Bottleneck(c_, c_, shortcut, g, k=((1, 1), (3, 3)), e=1.0) for _ in range(n)))

    def forward(self, x):
        """Forward pass through the CSP bottleneck with 3 convolutions."""
        return self.cv3(torch.cat((self.m(self.cv1(x)), self.cv2(x)), 1))


class C3k(C3):
    """C3k is a CSP bottleneck module with customizable kernel sizes for feature extraction in neural networks."""

    def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5, k=3):
        """Initializes the C3k module with specified channels, number of layers, and configurations."""
        super().__init__(c1, c2, n, shortcut, g, e)
        c_ = int(c2 * e)  # hidden channels
        self.m = nn.Sequential(*(Bottleneck(c_, c_, shortcut, g, k=(k, k), e=1.0) for _ in range(n)))


class C3kPConv(C3):
    """C3k variant whose Bottleneck blocks are replaced by CSPPC_Bottleneck."""

    def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5, k=3):
        """Initializes the C3kPConv module with specified channels, number of layers, and configurations."""
        super().__init__(c1, c2, n, shortcut, g, e)
        c_ = int(c2 * e)  # hidden channels
        self.m = nn.Sequential(*(CSPPC_Bottleneck(c_) for _ in range(n)))


class C3k2_PConv1(C2f):
    """C3k2 variant 1: CSPPC_Bottleneck replaces the plain Bottleneck blocks."""

    def __init__(self, c1, c2, n=1, c3k=False, e=0.5, g=1, shortcut=True):
        """Initializes the module, a faster CSP Bottleneck with 2 convolutions and optional C3k blocks."""
        super().__init__(c1, c2, n, shortcut, g, e)
        self.m = nn.ModuleList(
            C3k(self.c, self.c, 2, shortcut, g) if c3k else CSPPC_Bottleneck(self.c) for _ in range(n)
        )


class C3k2_PConv2(C2f):
    """C3k2 variant 2: CSPPC_Bottleneck replaces the Bottleneck inside the nested C3k blocks."""

    def __init__(self, c1, c2, n=1, c3k=False, e=0.5, g=1, shortcut=True):
        """Initializes the module, a faster CSP Bottleneck with 2 convolutions and optional C3kPConv blocks."""
        super().__init__(c1, c2, n, shortcut, g, e)
        self.m = nn.ModuleList(
            C3kPConv(self.c, self.c, 2, shortcut, g) if c3k else Bottleneck(self.c, self.c, shortcut, g)
            for _ in range(n)
        )


if __name__ == "__main__":
    # Generate a sample image
    image_size = (1, 64, 224, 224)
    image = torch.rand(*image_size)

    # Model
    model = C3k2_PConv1(64, 128)
    out = model(image)
    print(out.size())
```

4. How to Add C3k2_PConv


4.1 Modification One

Step one, as usual, is to create the file: locate the ultralytics/nn directory and create a folder named 'Addmodules' inside it! Then create a new .py file in that folder and paste the core code above into it.


4.2 Modification Two

Step two: inside that directory, create a new file named '__init__.py', then import our module in it as shown in the figure below.
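A minimal sketch of that __init__.py, assuming the core code was saved as PConv.py inside Addmodules (the filename is your choice and must match your import):

```python
# ultralytics/nn/Addmodules/__init__.py
# Re-export the two blocks so `from ultralytics.nn.Addmodules import *` works.
from .PConv import C3k2_PConv1, C3k2_PConv2

__all__ = ['C3k2_PConv1', 'C3k2_PConv2']
```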


4.3 Modification Three

Step three: open the file 'ultralytics/nn/tasks.py' and import and register our module there!


4.4 Modification Four

Step four: following my additions, register the module inside parse_model.
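Once registered, parse_model should treat the new classes like C2f/C3k2: the output channels requested in the yaml are rescaled by the model's width multiplier before the block is built. The arithmetic that branch applies can be sketched in plain Python (make_divisible is re-implemented here for illustration; the exact code in tasks.py varies between Ultralytics versions):

```python
import math

def make_divisible(x, divisor=8):
    # Simplified re-implementation of the Ultralytics helper: round the
    # channel count up to the nearest multiple of `divisor`.
    return math.ceil(x / divisor) * divisor

# yaml entry [-1, 2, C3k2_PConv1, [256, False, 0.25]] under scale 'n'
# (width = 0.25, max_channels = 1024):
width, max_channels = 0.25, 1024
c2 = make_divisible(min(256, max_channels) * width, 8)
print(c2)  # 64 -> at scale 'n' the block is actually built with 64 output channels
```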


That completes the modifications; you can now copy the yaml files below and run them.


5. C3k2_PConv yaml Files and Training Logs

5.1 C3k2_PConv yaml File 1

The version below is the one behind my experimental results. Note that lightweight structures tend to converge more slowly: because the model is simpler, its capacity to learn features is weaker, so you generally need to increase the number of training epochs.

Training info: YOLO11-C3k2-PConv-1 summary: 315 layers, 2,511,435 parameters, 2,511,419 gradients, 5.9 GFLOPs

```yaml
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLO11 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolo11n.yaml' will call yolo11.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.50, 0.25, 1024] # summary: 319 layers, 2624080 parameters, 2624064 gradients, 6.6 GFLOPs
  s: [0.50, 0.50, 1024] # summary: 319 layers, 9458752 parameters, 9458736 gradients, 21.7 GFLOPs
  m: [0.50, 1.00, 512] # summary: 409 layers, 20114688 parameters, 20114672 gradients, 68.5 GFLOPs
  l: [1.00, 1.00, 512] # summary: 631 layers, 25372160 parameters, 25372144 gradients, 87.6 GFLOPs
  x: [1.00, 1.50, 512] # summary: 631 layers, 56966176 parameters, 56966160 gradients, 196.0 GFLOPs

# YOLO11n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 2, C3k2_PConv1, [256, False, 0.25]]
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 2, C3k2_PConv1, [512, False, 0.25]]
  - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
  - [-1, 2, C3k2_PConv1, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
  - [-1, 2, C3k2_PConv1, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 9
  - [-1, 2, C2PSA, [1024]] # 10

# YOLO11n head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 2, C3k2_PConv1, [512, False]] # 13
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 2, C3k2_PConv1, [256, False]] # 16 (P3/8-small)
  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 13], 1, Concat, [1]] # cat head P4
  - [-1, 2, C3k2_PConv1, [512, False]] # 19 (P4/16-medium)
  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 10], 1, Concat, [1]] # cat head P5
  - [-1, 2, C3k2_PConv1, [1024, True]] # 22 (P5/32-large)
  - [[16, 19, 22], 1, Detect, [nc]] # Detect(P3, P4, P5)
```


5.2 C3k2_PConv yaml File 2

Training info: YOLO11-C3k2-PConv-2 summary: 314 layers, 2,282,395 parameters, 2,282,379 gradients, 6.1 GFLOPs

```yaml
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLO11 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolo11n.yaml' will call yolo11.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.50, 0.25, 1024] # summary: 319 layers, 2624080 parameters, 2624064 gradients, 6.6 GFLOPs
  s: [0.50, 0.50, 1024] # summary: 319 layers, 9458752 parameters, 9458736 gradients, 21.7 GFLOPs
  m: [0.50, 1.00, 512] # summary: 409 layers, 20114688 parameters, 20114672 gradients, 68.5 GFLOPs
  l: [1.00, 1.00, 512] # summary: 631 layers, 25372160 parameters, 25372144 gradients, 87.6 GFLOPs
  x: [1.00, 1.50, 512] # summary: 631 layers, 56966176 parameters, 56966160 gradients, 196.0 GFLOPs

# YOLO11n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 2, C3k2_PConv2, [256, False, 0.25]]
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 2, C3k2_PConv2, [512, False, 0.25]]
  - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
  - [-1, 2, C3k2_PConv2, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
  - [-1, 2, C3k2_PConv2, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 9
  - [-1, 2, C2PSA, [1024]] # 10

# YOLO11n head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 2, C3k2_PConv2, [512, False]] # 13
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 2, C3k2_PConv2, [256, False]] # 16 (P3/8-small)
  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 13], 1, Concat, [1]] # cat head P4
  - [-1, 2, C3k2_PConv2, [512, False]] # 19 (P4/16-medium)
  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 10], 1, Concat, [1]] # cat head P5
  - [-1, 2, C3k2_PConv2, [1024, True]] # 22 (P5/32-large)
  - [[16, 19, 22], 1, Detect, [nc]] # Detect(P3, P4, P5)
```

5.3 C3k2_PConv Training Screenshot


6. Summary

That is all for this article. I would like to recommend my column of effective YOLOv11 improvements, newly opened with an average quality score of 98. I will keep reproducing papers from the latest top conferences and will also supplement some of the older improvement mechanisms. If this article helped you, please subscribe to the column and follow along for future updates~