Model conversion question (模型转换问题) · Issue #1 · FeiGeChuanShu/ncnn-android-yolov8

Model conversion question #1

Open · superbayes opened this issue Jan 10, 2023 · 37 comments

@superbayes

FeiGe, how did you convert the YOLOv8 .pt model to ONNX, and then to an NCNN model?

@xellDart

Same question. I already tried onnx2ncnn and pnnx, but neither works 😭

@FeiGeChuanShu
Owner

FeiGeChuanShu commented Jan 11, 2023


Same question. I already tried onnx2ncnn and pnnx, but neither works 😭

My model was converted from ONNX. You should change the split to a slice in the C2f block.
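
For reference, a minimal sketch of what that change looks like in ultralytics' C2f.forward; this mirrors the modified method posted in full later in this thread:

def forward(self, x):
    x = self.cv1(x)
    # take a slice of the second half of the channels instead of calling
    # split(), so the exported graph uses Slice (Crop in ncnn) rather than Split
    x = [x, x[:, self.c:, ...]]
    x.extend(m(x[-1]) for m in self.m)
    # x[0] is the full cv1 output and already contains the first half,
    # so drop the explicit slice at index 1 before concatenating
    x.pop(1)
    return self.cv2(torch.cat(x, 1))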

@liuguicen

FeiGe, that was fast! Instant star from me.

@FeiGeChuanShu
Owner

FeiGe, that was fast! Instant star from me.

yolov8-seg is available now too.

@liuguicen

Amazing!

@Digital2Slave

Same question. I already tried onnx2ncnn and pnnx, but neither works 😭

My model was converted from ONNX. You should change the split to a slice in the C2f block.

How do I change the split to a slice in the C2f block? @FeiGeChuanShu

@FeiGeChuanShu
Owner

Same question. I already tried onnx2ncnn and pnnx, but neither works 😭

My model was converted from ONNX. You should change the split to a slice in the C2f block.

How do I change the split to a slice in the C2f block? @FeiGeChuanShu

https://github.com/triple-Mu/YOLOv8-TensorRT/blob/main/models/common.py#L87-L91

@Digital2Slave

Same question. I already tried onnx2ncnn and pnnx, but neither works 😭

My model was converted from ONNX. You should change the split to a slice in the C2f block.

How do I change the split to a slice in the C2f block? @FeiGeChuanShu

https://github.com/triple-Mu/YOLOv8-TensorRT/blob/main/models/common.py#L87-L91

Got it! Thanks a lot.

@apanand14

Hello @FeiGeChuanShu @Qengineering and others,
First of all, congratulations and thank you for your speedy implementation of yolov8-seg for ncnn inference.
I'm trying to convert my .pth model to .onnx and then to .param and .bin files.
I didn't get any errors while converting to onnx and then running onnx-sim.
I also didn't get any errors while converting to .param and .bin, but I'm a little confused after comparing my .param and .bin to yours: mine looks different. I share my converted .param file below (a small script for diffing two .param files mechanically is sketched after it). Please look into it and let me know your thoughts. Thank you in advance.

7767517
233 270
Input images 0 1 images
MemoryData /model.22/Constant_10_output_0 0 1 /model.22/Constant_10_output_0 0=8400
MemoryData /model.22/Constant_6_output_0 0 1 /model.22/Constant_6_output_0 0=2
MemoryData /model.22/Constant_7_output_0 0 1 /model.22/Constant_7_output_0 0=8400 1=2
Split splitncnn_0 1 2 /model.22/Constant_7_output_0 /model.22/Constant_7_output_0_splitncnn_0 /model.22/Constant_7_output_0_splitncnn_1
MemoryData onnx::Split_496 0 1 onnx::Split_496 0=2
Convolution /model.0/conv/Conv 1 1 images /model.0/conv/Conv_output_0 0=16 1=3 11=3 2=1 12=1 3=2 13=2 4=1 14=1 15=1 16=1 5=1 6=432
Swish /model.0/act/Mul 1 1 /model.0/conv/Conv_output_0 /model.0/act/Mul_output_0
Convolution /model.1/conv/Conv 1 1 /model.0/act/Mul_output_0 /model.1/conv/Conv_output_0 0=32 1=3 11=3 2=1 12=1 3=2 13=2 4=1 14=1 15=1 16=1 5=1 6=4608
Swish /model.1/act/Mul 1 1 /model.1/conv/Conv_output_0 /model.1/act/Mul_output_0
Convolution /model.2/cv1/conv/Conv 1 1 /model.1/act/Mul_output_0 /model.2/cv1/conv/Conv_output_0 0=32 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=1024
Swish /model.2/cv1/act/Mul 1 1 /model.2/cv1/conv/Conv_output_0 /model.2/cv1/act/Mul_output_0
Split splitncnn_1 1 2 /model.2/cv1/act/Mul_output_0 /model.2/cv1/act/Mul_output_0_splitncnn_0 /model.2/cv1/act/Mul_output_0_splitncnn_1
Crop /model.2/Slice 1 1 /model.2/cv1/act/Mul_output_0_splitncnn_1 /model.2/Slice_output_0 -23309=1,16 -23310=1,2147483647 -23311=1,0
Split splitncnn_2 1 2 /model.2/Slice_output_0 /model.2/Slice_output_0_splitncnn_0 /model.2/Slice_output_0_splitncnn_1
Convolution /model.2/m.0/cv1/conv/Conv 1 1 /model.2/Slice_output_0_splitncnn_1 /model.2/m.0/cv1/conv/Conv_output_0 0=16 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=2304
Swish /model.2/m.0/cv1/act/Mul 1 1 /model.2/m.0/cv1/conv/Conv_output_0 /model.2/m.0/cv1/act/Mul_output_0
Convolution /model.2/m.0/cv2/conv/Conv 1 1 /model.2/m.0/cv1/act/Mul_output_0 /model.2/m.0/cv2/conv/Conv_output_0 0=16 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=2304
Swish /model.2/m.0/cv2/act/Mul 1 1 /model.2/m.0/cv2/conv/Conv_output_0 /model.2/m.0/cv2/act/Mul_output_0
BinaryOp /model.2/m.0/Add 2 1 /model.2/Slice_output_0_splitncnn_0 /model.2/m.0/cv2/act/Mul_output_0 /model.2/m.0/Add_output_0 0=0
Concat /model.2/Concat 2 1 /model.2/cv1/act/Mul_output_0_splitncnn_0 /model.2/m.0/Add_output_0 /model.2/Concat_output_0 0=0
Convolution /model.2/cv2/conv/Conv 1 1 /model.2/Concat_output_0 /model.2/cv2/conv/Conv_output_0 0=32 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=1536
Swish /model.2/cv2/act/Mul 1 1 /model.2/cv2/conv/Conv_output_0 /model.2/cv2/act/Mul_output_0
Convolution /model.3/conv/Conv 1 1 /model.2/cv2/act/Mul_output_0 /model.3/conv/Conv_output_0 0=64 1=3 11=3 2=1 12=1 3=2 13=2 4=1 14=1 15=1 16=1 5=1 6=18432
Swish /model.3/act/Mul 1 1 /model.3/conv/Conv_output_0 /model.3/act/Mul_output_0
Convolution /model.4/cv1/conv/Conv 1 1 /model.3/act/Mul_output_0 /model.4/cv1/conv/Conv_output_0 0=64 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=4096
Swish /model.4/cv1/act/Mul 1 1 /model.4/cv1/conv/Conv_output_0 /model.4/cv1/act/Mul_output_0
Split splitncnn_3 1 2 /model.4/cv1/act/Mul_output_0 /model.4/cv1/act/Mul_output_0_splitncnn_0 /model.4/cv1/act/Mul_output_0_splitncnn_1
Crop /model.4/Slice 1 1 /model.4/cv1/act/Mul_output_0_splitncnn_1 /model.4/Slice_output_0 -23309=1,32 -23310=1,2147483647 -23311=1,0
Split splitncnn_4 1 2 /model.4/Slice_output_0 /model.4/Slice_output_0_splitncnn_0 /model.4/Slice_output_0_splitncnn_1
Convolution /model.4/m.0/cv1/conv/Conv 1 1 /model.4/Slice_output_0_splitncnn_1 /model.4/m.0/cv1/conv/Conv_output_0 0=32 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=9216
Swish /model.4/m.0/cv1/act/Mul 1 1 /model.4/m.0/cv1/conv/Conv_output_0 /model.4/m.0/cv1/act/Mul_output_0
Convolution /model.4/m.0/cv2/conv/Conv 1 1 /model.4/m.0/cv1/act/Mul_output_0 /model.4/m.0/cv2/conv/Conv_output_0 0=32 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=9216
Swish /model.4/m.0/cv2/act/Mul 1 1 /model.4/m.0/cv2/conv/Conv_output_0 /model.4/m.0/cv2/act/Mul_output_0
BinaryOp /model.4/m.0/Add 2 1 /model.4/Slice_output_0_splitncnn_0 /model.4/m.0/cv2/act/Mul_output_0 /model.4/m.0/Add_output_0 0=0
Split splitncnn_5 1 3 /model.4/m.0/Add_output_0 /model.4/m.0/Add_output_0_splitncnn_0 /model.4/m.0/Add_output_0_splitncnn_1 /model.4/m.0/Add_output_0_splitncnn_2
Convolution /model.4/m.1/cv1/conv/Conv 1 1 /model.4/m.0/Add_output_0_splitncnn_2 /model.4/m.1/cv1/conv/Conv_output_0 0=32 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=9216
Swish /model.4/m.1/cv1/act/Mul 1 1 /model.4/m.1/cv1/conv/Conv_output_0 /model.4/m.1/cv1/act/Mul_output_0
Convolution /model.4/m.1/cv2/conv/Conv 1 1 /model.4/m.1/cv1/act/Mul_output_0 /model.4/m.1/cv2/conv/Conv_output_0 0=32 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=9216
Swish /model.4/m.1/cv2/act/Mul 1 1 /model.4/m.1/cv2/conv/Conv_output_0 /model.4/m.1/cv2/act/Mul_output_0
BinaryOp /model.4/m.1/Add 2 1 /model.4/m.0/Add_output_0_splitncnn_1 /model.4/m.1/cv2/act/Mul_output_0 /model.4/m.1/Add_output_0 0=0
Concat /model.4/Concat 3 1 /model.4/cv1/act/Mul_output_0_splitncnn_0 /model.4/m.0/Add_output_0_splitncnn_0 /model.4/m.1/Add_output_0 /model.4/Concat_output_0 0=0
Convolution /model.4/cv2/conv/Conv 1 1 /model.4/Concat_output_0 /model.4/cv2/conv/Conv_output_0 0=64 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=8192
Swish /model.4/cv2/act/Mul 1 1 /model.4/cv2/conv/Conv_output_0 /model.4/cv2/act/Mul_output_0
Split splitncnn_6 1 2 /model.4/cv2/act/Mul_output_0 /model.4/cv2/act/Mul_output_0_splitncnn_0 /model.4/cv2/act/Mul_output_0_splitncnn_1
Convolution /model.5/conv/Conv 1 1 /model.4/cv2/act/Mul_output_0_splitncnn_1 /model.5/conv/Conv_output_0 0=128 1=3 11=3 2=1 12=1 3=2 13=2 4=1 14=1 15=1 16=1 5=1 6=73728
Swish /model.5/act/Mul 1 1 /model.5/conv/Conv_output_0 /model.5/act/Mul_output_0
Convolution /model.6/cv1/conv/Conv 1 1 /model.5/act/Mul_output_0 /model.6/cv1/conv/Conv_output_0 0=128 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=16384
Swish /model.6/cv1/act/Mul 1 1 /model.6/cv1/conv/Conv_output_0 /model.6/cv1/act/Mul_output_0
Split splitncnn_7 1 2 /model.6/cv1/act/Mul_output_0 /model.6/cv1/act/Mul_output_0_splitncnn_0 /model.6/cv1/act/Mul_output_0_splitncnn_1
Crop /model.6/Slice 1 1 /model.6/cv1/act/Mul_output_0_splitncnn_1 /model.6/Slice_output_0 -23309=1,64 -23310=1,2147483647 -23311=1,0
Split splitncnn_8 1 2 /model.6/Slice_output_0 /model.6/Slice_output_0_splitncnn_0 /model.6/Slice_output_0_splitncnn_1
Convolution /model.6/m.0/cv1/conv/Conv 1 1 /model.6/Slice_output_0_splitncnn_1 /model.6/m.0/cv1/conv/Conv_output_0 0=64 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=36864
Swish /model.6/m.0/cv1/act/Mul 1 1 /model.6/m.0/cv1/conv/Conv_output_0 /model.6/m.0/cv1/act/Mul_output_0
Convolution /model.6/m.0/cv2/conv/Conv 1 1 /model.6/m.0/cv1/act/Mul_output_0 /model.6/m.0/cv2/conv/Conv_output_0 0=64 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=36864
Swish /model.6/m.0/cv2/act/Mul 1 1 /model.6/m.0/cv2/conv/Conv_output_0 /model.6/m.0/cv2/act/Mul_output_0
BinaryOp /model.6/m.0/Add 2 1 /model.6/Slice_output_0_splitncnn_0 /model.6/m.0/cv2/act/Mul_output_0 /model.6/m.0/Add_output_0 0=0
Split splitncnn_9 1 3 /model.6/m.0/Add_output_0 /model.6/m.0/Add_output_0_splitncnn_0 /model.6/m.0/Add_output_0_splitncnn_1 /model.6/m.0/Add_output_0_splitncnn_2
Convolution /model.6/m.1/cv1/conv/Conv 1 1 /model.6/m.0/Add_output_0_splitncnn_2 /model.6/m.1/cv1/conv/Conv_output_0 0=64 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=36864
Swish /model.6/m.1/cv1/act/Mul 1 1 /model.6/m.1/cv1/conv/Conv_output_0 /model.6/m.1/cv1/act/Mul_output_0
Convolution /model.6/m.1/cv2/conv/Conv 1 1 /model.6/m.1/cv1/act/Mul_output_0 /model.6/m.1/cv2/conv/Conv_output_0 0=64 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=36864
Swish /model.6/m.1/cv2/act/Mul 1 1 /model.6/m.1/cv2/conv/Conv_output_0 /model.6/m.1/cv2/act/Mul_output_0
BinaryOp /model.6/m.1/Add 2 1 /model.6/m.0/Add_output_0_splitncnn_1 /model.6/m.1/cv2/act/Mul_output_0 /model.6/m.1/Add_output_0 0=0
Concat /model.6/Concat 3 1 /model.6/cv1/act/Mul_output_0_splitncnn_0 /model.6/m.0/Add_output_0_splitncnn_0 /model.6/m.1/Add_output_0 /model.6/Concat_output_0 0=0
Convolution /model.6/cv2/conv/Conv 1 1 /model.6/Concat_output_0 /model.6/cv2/conv/Conv_output_0 0=128 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=32768
Swish /model.6/cv2/act/Mul 1 1 /model.6/cv2/conv/Conv_output_0 /model.6/cv2/act/Mul_output_0
Split splitncnn_10 1 2 /model.6/cv2/act/Mul_output_0 /model.6/cv2/act/Mul_output_0_splitncnn_0 /model.6/cv2/act/Mul_output_0_splitncnn_1
Convolution /model.7/conv/Conv 1 1 /model.6/cv2/act/Mul_output_0_splitncnn_1 /model.7/conv/Conv_output_0 0=256 1=3 11=3 2=1 12=1 3=2 13=2 4=1 14=1 15=1 16=1 5=1 6=294912
Swish /model.7/act/Mul 1 1 /model.7/conv/Conv_output_0 /model.7/act/Mul_output_0
Convolution /model.8/cv1/conv/Conv 1 1 /model.7/act/Mul_output_0 /model.8/cv1/conv/Conv_output_0 0=256 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=65536
Swish /model.8/cv1/act/Mul 1 1 /model.8/cv1/conv/Conv_output_0 /model.8/cv1/act/Mul_output_0
Split splitncnn_11 1 2 /model.8/cv1/act/Mul_output_0 /model.8/cv1/act/Mul_output_0_splitncnn_0 /model.8/cv1/act/Mul_output_0_splitncnn_1
Crop /model.8/Slice 1 1 /model.8/cv1/act/Mul_output_0_splitncnn_1 /model.8/Slice_output_0 -23309=1,128 -23310=1,2147483647 -23311=1,0
Split splitncnn_12 1 2 /model.8/Slice_output_0 /model.8/Slice_output_0_splitncnn_0 /model.8/Slice_output_0_splitncnn_1
Convolution /model.8/m.0/cv1/conv/Conv 1 1 /model.8/Slice_output_0_splitncnn_1 /model.8/m.0/cv1/conv/Conv_output_0 0=128 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=147456
Swish /model.8/m.0/cv1/act/Mul 1 1 /model.8/m.0/cv1/conv/Conv_output_0 /model.8/m.0/cv1/act/Mul_output_0
Convolution /model.8/m.0/cv2/conv/Conv 1 1 /model.8/m.0/cv1/act/Mul_output_0 /model.8/m.0/cv2/conv/Conv_output_0 0=128 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=147456
Swish /model.8/m.0/cv2/act/Mul 1 1 /model.8/m.0/cv2/conv/Conv_output_0 /model.8/m.0/cv2/act/Mul_output_0
BinaryOp /model.8/m.0/Add 2 1 /model.8/Slice_output_0_splitncnn_0 /model.8/m.0/cv2/act/Mul_output_0 /model.8/m.0/Add_output_0 0=0
Concat /model.8/Concat 2 1 /model.8/cv1/act/Mul_output_0_splitncnn_0 /model.8/m.0/Add_output_0 /model.8/Concat_output_0 0=0
Convolution /model.8/cv2/conv/Conv 1 1 /model.8/Concat_output_0 /model.8/cv2/conv/Conv_output_0 0=256 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=98304
Swish /model.8/cv2/act/Mul 1 1 /model.8/cv2/conv/Conv_output_0 /model.8/cv2/act/Mul_output_0
Convolution /model.9/cv1/conv/Conv 1 1 /model.8/cv2/act/Mul_output_0 /model.9/cv1/conv/Conv_output_0 0=128 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=32768
Swish /model.9/cv1/act/Mul 1 1 /model.9/cv1/conv/Conv_output_0 /model.9/cv1/act/Mul_output_0
Split splitncnn_13 1 2 /model.9/cv1/act/Mul_output_0 /model.9/cv1/act/Mul_output_0_splitncnn_0 /model.9/cv1/act/Mul_output_0_splitncnn_1
Pooling /model.9/m/MaxPool 1 1 /model.9/cv1/act/Mul_output_0_splitncnn_1 /model.9/m/MaxPool_output_0 0=0 1=5 11=5 2=1 12=1 3=2 13=2 14=2 15=2 5=1
Split splitncnn_14 1 2 /model.9/m/MaxPool_output_0 /model.9/m/MaxPool_output_0_splitncnn_0 /model.9/m/MaxPool_output_0_splitncnn_1
Pooling /model.9/m_1/MaxPool 1 1 /model.9/m/MaxPool_output_0_splitncnn_1 /model.9/m_1/MaxPool_output_0 0=0 1=5 11=5 2=1 12=1 3=2 13=2 14=2 15=2 5=1
Split splitncnn_15 1 2 /model.9/m_1/MaxPool_output_0 /model.9/m_1/MaxPool_output_0_splitncnn_0 /model.9/m_1/MaxPool_output_0_splitncnn_1
Pooling /model.9/m_2/MaxPool 1 1 /model.9/m_1/MaxPool_output_0_splitncnn_1 /model.9/m_2/MaxPool_output_0 0=0 1=5 11=5 2=1 12=1 3=2 13=2 14=2 15=2 5=1
Concat /model.9/Concat 4 1 /model.9/cv1/act/Mul_output_0_splitncnn_0 /model.9/m/MaxPool_output_0_splitncnn_0 /model.9/m_1/MaxPool_output_0_splitncnn_0 /model.9/m_2/MaxPool_output_0 /model.9/Concat_output_0 0=0
Convolution /model.9/cv2/conv/Conv 1 1 /model.9/Concat_output_0 /model.9/cv2/conv/Conv_output_0 0=256 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=131072
Swish /model.9/cv2/act/Mul 1 1 /model.9/cv2/conv/Conv_output_0 /model.9/cv2/act/Mul_output_0
Split splitncnn_16 1 2 /model.9/cv2/act/Mul_output_0 /model.9/cv2/act/Mul_output_0_splitncnn_0 /model.9/cv2/act/Mul_output_0_splitncnn_1
Interp /model.10/Resize 1 1 /model.9/cv2/act/Mul_output_0_splitncnn_1 /model.10/Resize_output_0 0=1 1=2.000000e+00 2=2.000000e+00 3=0 4=0 6=0
Concat /model.11/Concat 2 1 /model.10/Resize_output_0 /model.6/cv2/act/Mul_output_0_splitncnn_0 /model.11/Concat_output_0 0=0
Convolution /model.12/cv1/conv/Conv 1 1 /model.11/Concat_output_0 /model.12/cv1/conv/Conv_output_0 0=128 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=49152
Swish /model.12/cv1/act/Mul 1 1 /model.12/cv1/conv/Conv_output_0 /model.12/cv1/act/Mul_output_0
Split splitncnn_17 1 2 /model.12/cv1/act/Mul_output_0 /model.12/cv1/act/Mul_output_0_splitncnn_0 /model.12/cv1/act/Mul_output_0_splitncnn_1
Crop /model.12/Slice 1 1 /model.12/cv1/act/Mul_output_0_splitncnn_1 /model.12/Slice_output_0 -23309=1,64 -23310=1,2147483647 -23311=1,0
Convolution /model.12/m.0/cv1/conv/Conv 1 1 /model.12/Slice_output_0 /model.12/m.0/cv1/conv/Conv_output_0 0=64 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=36864
Swish /model.12/m.0/cv1/act/Mul 1 1 /model.12/m.0/cv1/conv/Conv_output_0 /model.12/m.0/cv1/act/Mul_output_0
Convolution /model.12/m.0/cv2/conv/Conv 1 1 /model.12/m.0/cv1/act/Mul_output_0 /model.12/m.0/cv2/conv/Conv_output_0 0=64 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=36864
Swish /model.12/m.0/cv2/act/Mul 1 1 /model.12/m.0/cv2/conv/Conv_output_0 /model.12/m.0/cv2/act/Mul_output_0
Concat /model.12/Concat 2 1 /model.12/cv1/act/Mul_output_0_splitncnn_0 /model.12/m.0/cv2/act/Mul_output_0 /model.12/Concat_output_0 0=0
Convolution /model.12/cv2/conv/Conv 1 1 /model.12/Concat_output_0 /model.12/cv2/conv/Conv_output_0 0=128 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=24576
Swish /model.12/cv2/act/Mul 1 1 /model.12/cv2/conv/Conv_output_0 /model.12/cv2/act/Mul_output_0
Split splitncnn_18 1 2 /model.12/cv2/act/Mul_output_0 /model.12/cv2/act/Mul_output_0_splitncnn_0 /model.12/cv2/act/Mul_output_0_splitncnn_1
Interp /model.13/Resize 1 1 /model.12/cv2/act/Mul_output_0_splitncnn_1 /model.13/Resize_output_0 0=1 1=2.000000e+00 2=2.000000e+00 3=0 4=0 6=0
Concat /model.14/Concat 2 1 /model.13/Resize_output_0 /model.4/cv2/act/Mul_output_0_splitncnn_0 /model.14/Concat_output_0 0=0
Convolution /model.15/cv1/conv/Conv 1 1 /model.14/Concat_output_0 /model.15/cv1/conv/Conv_output_0 0=64 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=12288
Swish /model.15/cv1/act/Mul 1 1 /model.15/cv1/conv/Conv_output_0 /model.15/cv1/act/Mul_output_0
Split splitncnn_19 1 2 /model.15/cv1/act/Mul_output_0 /model.15/cv1/act/Mul_output_0_splitncnn_0 /model.15/cv1/act/Mul_output_0_splitncnn_1
Crop /model.15/Slice 1 1 /model.15/cv1/act/Mul_output_0_splitncnn_1 /model.15/Slice_output_0 -23309=1,32 -23310=1,2147483647 -23311=1,0
Convolution /model.15/m.0/cv1/conv/Conv 1 1 /model.15/Slice_output_0 /model.15/m.0/cv1/conv/Conv_output_0 0=32 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=9216
Swish /model.15/m.0/cv1/act/Mul 1 1 /model.15/m.0/cv1/conv/Conv_output_0 /model.15/m.0/cv1/act/Mul_output_0
Convolution /model.15/m.0/cv2/conv/Conv 1 1 /model.15/m.0/cv1/act/Mul_output_0 /model.15/m.0/cv2/conv/Conv_output_0 0=32 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=9216
Swish /model.15/m.0/cv2/act/Mul 1 1 /model.15/m.0/cv2/conv/Conv_output_0 /model.15/m.0/cv2/act/Mul_output_0
Concat /model.15/Concat 2 1 /model.15/cv1/act/Mul_output_0_splitncnn_0 /model.15/m.0/cv2/act/Mul_output_0 /model.15/Concat_output_0 0=0
Convolution /model.15/cv2/conv/Conv 1 1 /model.15/Concat_output_0 /model.15/cv2/conv/Conv_output_0 0=64 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=6144
Swish /model.15/cv2/act/Mul 1 1 /model.15/cv2/conv/Conv_output_0 /model.15/cv2/act/Mul_output_0
Split splitncnn_20 1 5 /model.15/cv2/act/Mul_output_0 /model.15/cv2/act/Mul_output_0_splitncnn_0 /model.15/cv2/act/Mul_output_0_splitncnn_1 /model.15/cv2/act/Mul_output_0_splitncnn_2 /model.15/cv2/act/Mul_output_0_splitncnn_3 /model.15/cv2/act/Mul_output_0_splitncnn_4
Convolution /model.16/conv/Conv 1 1 /model.15/cv2/act/Mul_output_0_splitncnn_4 /model.16/conv/Conv_output_0 0=64 1=3 11=3 2=1 12=1 3=2 13=2 4=1 14=1 15=1 16=1 5=1 6=36864
Swish /model.16/act/Mul 1 1 /model.16/conv/Conv_output_0 /model.16/act/Mul_output_0
Concat /model.17/Concat 2 1 /model.16/act/Mul_output_0 /model.12/cv2/act/Mul_output_0_splitncnn_0 /model.17/Concat_output_0 0=0
Convolution /model.18/cv1/conv/Conv 1 1 /model.17/Concat_output_0 /model.18/cv1/conv/Conv_output_0 0=128 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=24576
Swish /model.18/cv1/act/Mul 1 1 /model.18/cv1/conv/Conv_output_0 /model.18/cv1/act/Mul_output_0
Split splitncnn_21 1 2 /model.18/cv1/act/Mul_output_0 /model.18/cv1/act/Mul_output_0_splitncnn_0 /model.18/cv1/act/Mul_output_0_splitncnn_1
Crop /model.18/Slice 1 1 /model.18/cv1/act/Mul_output_0_splitncnn_1 /model.18/Slice_output_0 -23309=1,64 -23310=1,2147483647 -23311=1,0
Convolution /model.18/m.0/cv1/conv/Conv 1 1 /model.18/Slice_output_0 /model.18/m.0/cv1/conv/Conv_output_0 0=64 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=36864
Swish /model.18/m.0/cv1/act/Mul 1 1 /model.18/m.0/cv1/conv/Conv_output_0 /model.18/m.0/cv1/act/Mul_output_0
Convolution /model.18/m.0/cv2/conv/Conv 1 1 /model.18/m.0/cv1/act/Mul_output_0 /model.18/m.0/cv2/conv/Conv_output_0 0=64 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=36864
Swish /model.18/m.0/cv2/act/Mul 1 1 /model.18/m.0/cv2/conv/Conv_output_0 /model.18/m.0/cv2/act/Mul_output_0
Concat /model.18/Concat 2 1 /model.18/cv1/act/Mul_output_0_splitncnn_0 /model.18/m.0/cv2/act/Mul_output_0 /model.18/Concat_output_0 0=0
Convolution /model.18/cv2/conv/Conv 1 1 /model.18/Concat_output_0 /model.18/cv2/conv/Conv_output_0 0=128 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=24576
Swish /model.18/cv2/act/Mul 1 1 /model.18/cv2/conv/Conv_output_0 /model.18/cv2/act/Mul_output_0
Split splitncnn_22 1 4 /model.18/cv2/act/Mul_output_0 /model.18/cv2/act/Mul_output_0_splitncnn_0 /model.18/cv2/act/Mul_output_0_splitncnn_1 /model.18/cv2/act/Mul_output_0_splitncnn_2 /model.18/cv2/act/Mul_output_0_splitncnn_3
Convolution /model.19/conv/Conv 1 1 /model.18/cv2/act/Mul_output_0_splitncnn_3 /model.19/conv/Conv_output_0 0=128 1=3 11=3 2=1 12=1 3=2 13=2 4=1 14=1 15=1 16=1 5=1 6=147456
Swish /model.19/act/Mul 1 1 /model.19/conv/Conv_output_0 /model.19/act/Mul_output_0
Concat /model.20/Concat 2 1 /model.19/act/Mul_output_0 /model.9/cv2/act/Mul_output_0_splitncnn_0 /model.20/Concat_output_0 0=0
Convolution /model.21/cv1/conv/Conv 1 1 /model.20/Concat_output_0 /model.21/cv1/conv/Conv_output_0 0=256 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=98304
Swish /model.21/cv1/act/Mul 1 1 /model.21/cv1/conv/Conv_output_0 /model.21/cv1/act/Mul_output_0
Split splitncnn_23 1 2 /model.21/cv1/act/Mul_output_0 /model.21/cv1/act/Mul_output_0_splitncnn_0 /model.21/cv1/act/Mul_output_0_splitncnn_1
Crop /model.21/Slice 1 1 /model.21/cv1/act/Mul_output_0_splitncnn_1 /model.21/Slice_output_0 -23309=1,128 -23310=1,2147483647 -23311=1,0
Convolution /model.21/m.0/cv1/conv/Conv 1 1 /model.21/Slice_output_0 /model.21/m.0/cv1/conv/Conv_output_0 0=128 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=147456
Swish /model.21/m.0/cv1/act/Mul 1 1 /model.21/m.0/cv1/conv/Conv_output_0 /model.21/m.0/cv1/act/Mul_output_0
Convolution /model.21/m.0/cv2/conv/Conv 1 1 /model.21/m.0/cv1/act/Mul_output_0 /model.21/m.0/cv2/conv/Conv_output_0 0=128 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=147456
Swish /model.21/m.0/cv2/act/Mul 1 1 /model.21/m.0/cv2/conv/Conv_output_0 /model.21/m.0/cv2/act/Mul_output_0
Concat /model.21/Concat 2 1 /model.21/cv1/act/Mul_output_0_splitncnn_0 /model.21/m.0/cv2/act/Mul_output_0 /model.21/Concat_output_0 0=0
Convolution /model.21/cv2/conv/Conv 1 1 /model.21/Concat_output_0 /model.21/cv2/conv/Conv_output_0 0=256 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=98304
Swish /model.21/cv2/act/Mul 1 1 /model.21/cv2/conv/Conv_output_0 /model.21/cv2/act/Mul_output_0
Split splitncnn_24 1 3 /model.21/cv2/act/Mul_output_0 /model.21/cv2/act/Mul_output_0_splitncnn_0 /model.21/cv2/act/Mul_output_0_splitncnn_1 /model.21/cv2/act/Mul_output_0_splitncnn_2
Convolution /model.22/proto/cv1/conv/Conv 1 1 /model.15/cv2/act/Mul_output_0_splitncnn_3 /model.22/proto/cv1/conv/Conv_output_0 0=64 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=36864
Swish /model.22/proto/cv1/act/Mul 1 1 /model.22/proto/cv1/conv/Conv_output_0 /model.22/proto/cv1/act/Mul_output_0
Deconvolution /model.22/proto/upsample/ConvTranspose 1 1 /model.22/proto/cv1/act/Mul_output_0 /model.22/proto/upsample/ConvTranspose_output_0 0=64 1=2 11=2 2=1 12=1 3=2 13=2 4=0 14=0 15=0 16=0 5=1 6=16384
Convolution /model.22/proto/cv2/conv/Conv 1 1 /model.22/proto/upsample/ConvTranspose_output_0 /model.22/proto/cv2/conv/Conv_output_0 0=64 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=36864
Swish /model.22/proto/cv2/act/Mul 1 1 /model.22/proto/cv2/conv/Conv_output_0 /model.22/proto/cv2/act/Mul_output_0
Convolution /model.22/proto/cv3/conv/Conv 1 1 /model.22/proto/cv2/act/Mul_output_0 /model.22/proto/cv3/conv/Conv_output_0 0=32 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=2048
Swish /model.22/proto/cv3/act/Mul 1 1 /model.22/proto/cv3/conv/Conv_output_0 output1
Convolution /model.22/cv4.0/cv4.0.0/conv/Conv 1 1 /model.15/cv2/act/Mul_output_0_splitncnn_2 /model.22/cv4.0/cv4.0.0/conv/Conv_output_0 0=32 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=18432
Swish /model.22/cv4.0/cv4.0.0/act/Mul 1 1 /model.22/cv4.0/cv4.0.0/conv/Conv_output_0 /model.22/cv4.0/cv4.0.0/act/Mul_output_0
Convolution /model.22/cv4.0/cv4.0.1/conv/Conv 1 1 /model.22/cv4.0/cv4.0.0/act/Mul_output_0 /model.22/cv4.0/cv4.0.1/conv/Conv_output_0 0=32 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=9216
Swish /model.22/cv4.0/cv4.0.1/act/Mul 1 1 /model.22/cv4.0/cv4.0.1/conv/Conv_output_0 /model.22/cv4.0/cv4.0.1/act/Mul_output_0
Convolution /model.22/cv4.0/cv4.0.2/Conv 1 1 /model.22/cv4.0/cv4.0.1/act/Mul_output_0 /model.22/cv4.0/cv4.0.2/Conv_output_0 0=32 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=1024
Reshape /model.22/Reshape 1 1 /model.22/cv4.0/cv4.0.2/Conv_output_0 /model.22/Reshape_output_0 0=-1 1=32
Convolution /model.22/cv4.1/cv4.1.0/conv/Conv 1 1 /model.18/cv2/act/Mul_output_0_splitncnn_2 /model.22/cv4.1/cv4.1.0/conv/Conv_output_0 0=32 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=36864
Swish /model.22/cv4.1/cv4.1.0/act/Mul 1 1 /model.22/cv4.1/cv4.1.0/conv/Conv_output_0 /model.22/cv4.1/cv4.1.0/act/Mul_output_0
Convolution /model.22/cv4.1/cv4.1.1/conv/Conv 1 1 /model.22/cv4.1/cv4.1.0/act/Mul_output_0 /model.22/cv4.1/cv4.1.1/conv/Conv_output_0 0=32 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=9216
Swish /model.22/cv4.1/cv4.1.1/act/Mul 1 1 /model.22/cv4.1/cv4.1.1/conv/Conv_output_0 /model.22/cv4.1/cv4.1.1/act/Mul_output_0
Convolution /model.22/cv4.1/cv4.1.2/Conv 1 1 /model.22/cv4.1/cv4.1.1/act/Mul_output_0 /model.22/cv4.1/cv4.1.2/Conv_output_0 0=32 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=1024
Reshape /model.22/Reshape_1 1 1 /model.22/cv4.1/cv4.1.2/Conv_output_0 /model.22/Reshape_1_output_0 0=-1 1=32
Convolution /model.22/cv4.2/cv4.2.0/conv/Conv 1 1 /model.21/cv2/act/Mul_output_0_splitncnn_2 /model.22/cv4.2/cv4.2.0/conv/Conv_output_0 0=32 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=73728
Swish /model.22/cv4.2/cv4.2.0/act/Mul 1 1 /model.22/cv4.2/cv4.2.0/conv/Conv_output_0 /model.22/cv4.2/cv4.2.0/act/Mul_output_0
Convolution /model.22/cv4.2/cv4.2.1/conv/Conv 1 1 /model.22/cv4.2/cv4.2.0/act/Mul_output_0 /model.22/cv4.2/cv4.2.1/conv/Conv_output_0 0=32 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=9216
Swish /model.22/cv4.2/cv4.2.1/act/Mul 1 1 /model.22/cv4.2/cv4.2.1/conv/Conv_output_0 /model.22/cv4.2/cv4.2.1/act/Mul_output_0
Convolution /model.22/cv4.2/cv4.2.2/Conv 1 1 /model.22/cv4.2/cv4.2.1/act/Mul_output_0 /model.22/cv4.2/cv4.2.2/Conv_output_0 0=32 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=1024
Reshape /model.22/Reshape_2 1 1 /model.22/cv4.2/cv4.2.2/Conv_output_0 /model.22/Reshape_2_output_0 0=-1 1=32
Concat /model.22/Concat 3 1 /model.22/Reshape_output_0 /model.22/Reshape_1_output_0 /model.22/Reshape_2_output_0 /model.22/Concat_output_0 0=1
Convolution /model.22/cv2.0/cv2.0.0/conv/Conv 1 1 /model.15/cv2/act/Mul_output_0_splitncnn_1 /model.22/cv2.0/cv2.0.0/conv/Conv_output_0 0=64 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=36864
Swish /model.22/cv2.0/cv2.0.0/act/Mul 1 1 /model.22/cv2.0/cv2.0.0/conv/Conv_output_0 /model.22/cv2.0/cv2.0.0/act/Mul_output_0
Convolution /model.22/cv2.0/cv2.0.1/conv/Conv 1 1 /model.22/cv2.0/cv2.0.0/act/Mul_output_0 /model.22/cv2.0/cv2.0.1/conv/Conv_output_0 0=64 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=36864
Swish /model.22/cv2.0/cv2.0.1/act/Mul 1 1 /model.22/cv2.0/cv2.0.1/conv/Conv_output_0 /model.22/cv2.0/cv2.0.1/act/Mul_output_0
Convolution /model.22/cv2.0/cv2.0.2/Conv 1 1 /model.22/cv2.0/cv2.0.1/act/Mul_output_0 /model.22/cv2.0/cv2.0.2/Conv_output_0 0=64 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=4096
Convolution /model.22/cv3.0/cv3.0.0/conv/Conv 1 1 /model.15/cv2/act/Mul_output_0_splitncnn_0 /model.22/cv3.0/cv3.0.0/conv/Conv_output_0 0=64 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=36864
Swish /model.22/cv3.0/cv3.0.0/act/Mul 1 1 /model.22/cv3.0/cv3.0.0/conv/Conv_output_0 /model.22/cv3.0/cv3.0.0/act/Mul_output_0
Convolution /model.22/cv3.0/cv3.0.1/conv/Conv 1 1 /model.22/cv3.0/cv3.0.0/act/Mul_output_0 /model.22/cv3.0/cv3.0.1/conv/Conv_output_0 0=64 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=36864
Swish /model.22/cv3.0/cv3.0.1/act/Mul 1 1 /model.22/cv3.0/cv3.0.1/conv/Conv_output_0 /model.22/cv3.0/cv3.0.1/act/Mul_output_0
Convolution /model.22/cv3.0/cv3.0.2/Conv 1 1 /model.22/cv3.0/cv3.0.1/act/Mul_output_0 /model.22/cv3.0/cv3.0.2/Conv_output_0 0=2 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=128
Concat /model.22/Concat_1 2 1 /model.22/cv2.0/cv2.0.2/Conv_output_0 /model.22/cv3.0/cv3.0.2/Conv_output_0 /model.22/Concat_1_output_0 0=0
Convolution /model.22/cv2.1/cv2.1.0/conv/Conv 1 1 /model.18/cv2/act/Mul_output_0_splitncnn_1 /model.22/cv2.1/cv2.1.0/conv/Conv_output_0 0=64 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=73728
Swish /model.22/cv2.1/cv2.1.0/act/Mul 1 1 /model.22/cv2.1/cv2.1.0/conv/Conv_output_0 /model.22/cv2.1/cv2.1.0/act/Mul_output_0
Convolution /model.22/cv2.1/cv2.1.1/conv/Conv 1 1 /model.22/cv2.1/cv2.1.0/act/Mul_output_0 /model.22/cv2.1/cv2.1.1/conv/Conv_output_0 0=64 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=36864
Swish /model.22/cv2.1/cv2.1.1/act/Mul 1 1 /model.22/cv2.1/cv2.1.1/conv/Conv_output_0 /model.22/cv2.1/cv2.1.1/act/Mul_output_0
Convolution /model.22/cv2.1/cv2.1.2/Conv 1 1 /model.22/cv2.1/cv2.1.1/act/Mul_output_0 /model.22/cv2.1/cv2.1.2/Conv_output_0 0=64 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=4096
Convolution /model.22/cv3.1/cv3.1.0/conv/Conv 1 1 /model.18/cv2/act/Mul_output_0_splitncnn_0 /model.22/cv3.1/cv3.1.0/conv/Conv_output_0 0=64 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=73728
Swish /model.22/cv3.1/cv3.1.0/act/Mul 1 1 /model.22/cv3.1/cv3.1.0/conv/Conv_output_0 /model.22/cv3.1/cv3.1.0/act/Mul_output_0
Convolution /model.22/cv3.1/cv3.1.1/conv/Conv 1 1 /model.22/cv3.1/cv3.1.0/act/Mul_output_0 /model.22/cv3.1/cv3.1.1/conv/Conv_output_0 0=64 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=36864
Swish /model.22/cv3.1/cv3.1.1/act/Mul 1 1 /model.22/cv3.1/cv3.1.1/conv/Conv_output_0 /model.22/cv3.1/cv3.1.1/act/Mul_output_0
Convolution /model.22/cv3.1/cv3.1.2/Conv 1 1 /model.22/cv3.1/cv3.1.1/act/Mul_output_0 /model.22/cv3.1/cv3.1.2/Conv_output_0 0=2 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=128
Concat /model.22/Concat_2 2 1 /model.22/cv2.1/cv2.1.2/Conv_output_0 /model.22/cv3.1/cv3.1.2/Conv_output_0 /model.22/Concat_2_output_0 0=0
Convolution /model.22/cv2.2/cv2.2.0/conv/Conv 1 1 /model.21/cv2/act/Mul_output_0_splitncnn_1 /model.22/cv2.2/cv2.2.0/conv/Conv_output_0 0=64 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=147456
Swish /model.22/cv2.2/cv2.2.0/act/Mul 1 1 /model.22/cv2.2/cv2.2.0/conv/Conv_output_0 /model.22/cv2.2/cv2.2.0/act/Mul_output_0
Convolution /model.22/cv2.2/cv2.2.1/conv/Conv 1 1 /model.22/cv2.2/cv2.2.0/act/Mul_output_0 /model.22/cv2.2/cv2.2.1/conv/Conv_output_0 0=64 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=36864
Swish /model.22/cv2.2/cv2.2.1/act/Mul 1 1 /model.22/cv2.2/cv2.2.1/conv/Conv_output_0 /model.22/cv2.2/cv2.2.1/act/Mul_output_0
Convolution /model.22/cv2.2/cv2.2.2/Conv 1 1 /model.22/cv2.2/cv2.2.1/act/Mul_output_0 /model.22/cv2.2/cv2.2.2/Conv_output_0 0=64 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=4096
Convolution /model.22/cv3.2/cv3.2.0/conv/Conv 1 1 /model.21/cv2/act/Mul_output_0_splitncnn_0 /model.22/cv3.2/cv3.2.0/conv/Conv_output_0 0=64 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=147456
Swish /model.22/cv3.2/cv3.2.0/act/Mul 1 1 /model.22/cv3.2/cv3.2.0/conv/Conv_output_0 /model.22/cv3.2/cv3.2.0/act/Mul_output_0
Convolution /model.22/cv3.2/cv3.2.1/conv/Conv 1 1 /model.22/cv3.2/cv3.2.0/act/Mul_output_0 /model.22/cv3.2/cv3.2.1/conv/Conv_output_0 0=64 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=36864
Swish /model.22/cv3.2/cv3.2.1/act/Mul 1 1 /model.22/cv3.2/cv3.2.1/conv/Conv_output_0 /model.22/cv3.2/cv3.2.1/act/Mul_output_0
Convolution /model.22/cv3.2/cv3.2.2/Conv 1 1 /model.22/cv3.2/cv3.2.1/act/Mul_output_0 /model.22/cv3.2/cv3.2.2/Conv_output_0 0=2 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=128
Concat /model.22/Concat_3 2 1 /model.22/cv2.2/cv2.2.2/Conv_output_0 /model.22/cv3.2/cv3.2.2/Conv_output_0 /model.22/Concat_3_output_0 0=0
Reshape /model.22/Reshape_3 1 1 /model.22/Concat_1_output_0 /model.22/Reshape_3_output_0 0=-1 1=66
Reshape /model.22/Reshape_4 1 1 /model.22/Concat_2_output_0 /model.22/Reshape_4_output_0 0=-1 1=66
Reshape /model.22/Reshape_5 1 1 /model.22/Concat_3_output_0 /model.22/Reshape_5_output_0 0=-1 1=66
Concat /model.22/Concat_4 3 1 /model.22/Reshape_3_output_0 /model.22/Reshape_4_output_0 /model.22/Reshape_5_output_0 /model.22/Concat_4_output_0 0=1
Slice /model.22/Split 2 2 /model.22/Concat_4_output_0 onnx::Split_496 /model.22/Split_output_0 /model.22/Split_output_1 -23300=2,-233,-233 1=0
Reshape /model.22/dfl/Reshape 1 1 /model.22/Split_output_0 /model.22/dfl/Reshape_output_0 0=8400 1=16 2=4
Permute /model.22/dfl/Transpose 1 1 /model.22/dfl/Reshape_output_0 /model.22/dfl/Transpose_output_0 0=2
Softmax /model.22/dfl/Softmax 1 1 /model.22/dfl/Transpose_output_0 /model.22/dfl/Softmax_output_0 0=0 1=1
Convolution /model.22/dfl/conv/Conv 1 1 /model.22/dfl/Softmax_output_0 /model.22/dfl/conv/Conv_output_0 0=1 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=0 6=16
Reshape /model.22/dfl/Reshape_1 1 1 /model.22/dfl/conv/Conv_output_0 /model.22/dfl/Reshape_1_output_0 0=8400 1=4
Slice /model.22/Split_1 2 2 /model.22/dfl/Reshape_1_output_0 /model.22/Constant_6_output_0 /model.22/Split_1_output_0 /model.22/Split_1_output_1 -23300=2,-233,-233 1=0
BinaryOp /model.22/Sub 2 1 /model.22/Constant_7_output_0_splitncnn_1 /model.22/Split_1_output_0 /model.22/Sub_output_0 0=1
Split splitncnn_25 1 2 /model.22/Sub_output_0 /model.22/Sub_output_0_splitncnn_0 /model.22/Sub_output_0_splitncnn_1
BinaryOp /model.22/Add 2 1 /model.22/Constant_7_output_0_splitncnn_0 /model.22/Split_1_output_1 /model.22/Add_output_0 0=0
Split splitncnn_26 1 2 /model.22/Add_output_0 /model.22/Add_output_0_splitncnn_0 /model.22/Add_output_0_splitncnn_1
BinaryOp /model.22/Add_1 2 1 /model.22/Sub_output_0_splitncnn_1 /model.22/Add_output_0_splitncnn_1 /model.22/Add_1_output_0 0=0
BinaryOp /model.22/Div 1 1 /model.22/Add_1_output_0 /model.22/Div_output_0 0=3 1=1 2=2.000000e+00
BinaryOp /model.22/Sub_1 2 1 /model.22/Add_output_0_splitncnn_0 /model.22/Sub_output_0_splitncnn_0 /model.22/Sub_1_output_0 0=1
Concat /model.22/Concat_5 2 1 /model.22/Div_output_0 /model.22/Sub_1_output_0 /model.22/Concat_5_output_0 0=0
BinaryOp /model.22/Mul 2 1 /model.22/Concat_5_output_0 /model.22/Constant_10_output_0 /model.22/Mul_output_0 0=2
Sigmoid /model.22/Sigmoid 1 1 /model.22/Split_output_1 /model.22/Sigmoid_output_0
Concat /model.22/Concat_6 3 1 /model.22/Mul_output_0 /model.22/Sigmoid_output_0 /model.22/Concat_output_0 output0 0=0
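
(For anyone wanting to compare two .param files mechanically rather than by eye, a small helper like the following, not part of this repo, diffs the layer-type counts; a .param file has the magic number 7767517 on its first line, the layer/blob counts on its second, and one layer per line after that, starting with the layer type.)

from collections import Counter
import sys

def layer_counts(path):
    """Count ncnn layer types in a .param file (magic and count lines skipped)."""
    counts = Counter()
    with open(path) as f:
        lines = f.read().splitlines()
    for line in lines[2:]:
        if line.strip():
            counts[line.split()[0]] += 1
    return counts

if __name__ == "__main__":
    a, b = layer_counts(sys.argv[1]), layer_counts(sys.argv[2])
    for layer in sorted(set(a) | set(b)):
        if a[layer] != b[layer]:
            print(f"{layer}: {a[layer]} vs {b[layer]}")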

@apanand14

Same question. I already tried onnx2ncnn and pnnx, but neither works 😭

My model was converted from ONNX. You should change the split to a slice in the C2f block.

How do I change the split to a slice in the C2f block? @FeiGeChuanShu

https://github.com/triple-Mu/YOLOv8-TensorRT/blob/main/models/common.py#L87-L91

Got it! Thanks a lot.

Can you tell me what changes need to be made before converting .pt to onnx and then to .param and .bin? Thank you in advance.

@Digital2Slave

@apanand14 I haven't focused on yolov8 yet.

@apanand14

Same question. I already tried onnx2ncnn and pnnx, but neither works 😭

My model was converted from ONNX. You should change the split to a slice in the C2f block.

Are there any other modifications besides this one? And this modification should be done before exporting to ONNX, right?
Thank you in advance for your answer and valuable time.

@Digital2Slave

@apanand14
I think you should use export_seg.py to convert yolov8s-seg.pt to yolov8s-seg.onnx.

But there is an issue, "ArgMax not supported yet!", when using onnx2ncnn.

There may be two ways to handle this issue:

  1. Modify the related code in https://github.com/triple-Mu/YOLOv8-TensorRT/blob/main/export_seg.py.
  2. Modify https://github.com/Tencent/ncnn/blob/master/tools/onnx/onnx2ncnn.cpp to support ArgMax.

(A quick way to list the ops in an exported model before running onnx2ncnn is sketched after the notes below.)

Note that:

  1. ncnn already has an ArgMax layer implementation: https://github.com/Tencent/ncnn/blob/master/src/CMakeLists.txt#L66
  2. https://github.com/Tencent/ncnn/blob/master/src/layer/argmax.cpp
  3. ArgMax not supported yet: Tencent/ncnn#2582
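
As a quick pre-check before running onnx2ncnn, one can list the op types in the exported model so unsupported ones like ArgMax or Cast show up immediately; a minimal sketch, assuming the onnx Python package is installed and using yolov8s-seg.onnx as an example path:

from collections import Counter
import onnx

model = onnx.load("yolov8s-seg.onnx")  # example path
ops = Counter(node.op_type for node in model.graph.node)
for op, n in sorted(ops.items()):
    print(f"{op}: {n}")
# if ArgMax or Cast appear here, onnx2ncnn will reject the model as described above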

@apanand14

Thank you for your valuable inputs! @Digital2Slave Yes, I converted from onnx to ncnn using export_seg.py and generated best_updated.txt. I am also getting the issues you mentioned:

ArgMax not supported yet!
  axis=-1
  keepdims=1
Cast not supported yet!
  to=1

Sorry, but I didn't follow the modifications you talked about. I tried turning the ArgMax layer ON and tried again, but the results are the same. I understand that some modifications need to be made for it to run properly; if you could help me out there, I would be grateful. Thanks in advance for your time.

@FeiGeChuanShu
Owner

I have updated the README to show how to change the v8 code.

@Digital2Slave

Digital2Slave commented Feb 6, 2023

I have updated the README to show how to change the v8 code.

Referring to the convert-to-onnx-for-ncnn code in the README, I modified the forward methods of the C2f and Detect classes in ultralytics/ultralytics/nn/modules.py.

  • class C2f(nn.Module)
    def forward(self, x):
        # y = list(self.cv1(x).split((self.c, self.c), 1))
        # y.extend(m(y[-1]) for m in self.m)
        # return self.cv2(torch.cat(y, 1))
        # !< https://github.com/FeiGeChuanShu/ncnn-android-yolov8
        x = self.cv1(x)
        x = [x, x[:, self.c:, ...]]
        x.extend(m(x[-1]) for m in self.m)
        x.pop(1)
        return self.cv2(torch.cat(x, 1))
  • class Detect(nn.Module)
    def forward(self, x):
        shape = x[0].shape  # BCHW
        for i in range(self.nl):
            x[i] = torch.cat((self.cv2[i](x[i]), self.cv3[i](x[i])), 1)
        if self.training:
            return x
        elif self.dynamic or self.shape != shape:
            self.anchors, self.strides = (x.transpose(0, 1) for x in make_anchors(x, self.stride, 0.5))
            self.shape = shape
        # box, cls = torch.cat([xi.view(shape[0], self.no, -1) for xi in x], 2).split((self.reg_max * 4, self.nc), 1)
        # dbox = dist2bbox(self.dfl(box), self.anchors.unsqueeze(0), xywh=True, dim=1) * self.strides
        # y = torch.cat((dbox, cls.sigmoid()), 1)
        # return y if self.export else (y, x)
        # !< https://github.com/FeiGeChuanShu/ncnn-android-yolov8
        pred = torch.cat([xi.view(shape[0], self.no, -1) for xi in x], 2).permute(0, 2, 1)
        return pred

However, I have an issue when converting yolov8s-seg.pt to yolov8s-seg.onnx with export.py.

  • export.py
from ultralytics import YOLO

# load model
model = YOLO("/home/tianzx/AI/pre_weights/test/yolov8/normal/yolov8s-seg.pt")  

# Export model
success = model.export(format="onnx", opset=12, simplify=True) 
  • issue
$ python export.py
Ultralytics YOLOv8.0.29 🚀 Python-3.7.16 torch-1.8.0+cpu CPU
YOLOv8s-seg summary (fused): 195 layers, 11810560 parameters, 0 gradients
Traceback (most recent call last):
  File "export.py", line 7, in <module>
    success = model.export(format="onnx", opset=12, simplify=True)
  File "/home/tianzx/.virtualenvs/d2l/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/tianzx/Github/ultralytics/ultralytics/yolo/engine/model.py", line 188, in export
    exporter(model=self.model)
  File "/home/tianzx/.virtualenvs/d2l/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/tianzx/Github/ultralytics/ultralytics/yolo/engine/exporter.py", line 184, in __call__
    y = model(im)  # dry runs
  File "/home/tianzx/.virtualenvs/d2l/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/tianzx/Github/ultralytics/ultralytics/nn/tasks.py", line 198, in forward
    return self._forward_once(x, profile, visualize)  # single-scale inference, train
  File "/home/tianzx/Github/ultralytics/ultralytics/nn/tasks.py", line 57, in _forward_once
    x = m(x)  # run
  File "/home/tianzx/.virtualenvs/d2l/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/tianzx/Github/ultralytics/ultralytics/nn/modules.py", line 451, in forward
    return (torch.cat([x, mc], 1), p) if self.export else (torch.cat([x[0], mc], 1), (x[1], mc, p))
RuntimeError: Sizes of tensors must match except in dimension 1. Got 144 and 8400 in dimension 2 (The offending index is 1)

(The mismatch comes from the permute added to Detect.forward: x is now (bs, 8400, no) while mc is still (bs, nm, 8400), so the concat in Segment.forward fails; the seg-specific changes later in this thread resolve this.) The ultralytics repo is at commit 09265b1 (HEAD -> main, origin/main, origin/HEAD), "Setup template for community examples (#718)".

The same issue occurs with each of the following export calls:

  1. success = model.export(task="detect", format="onnx", opset=12, simplify=True)
  2. success = model.export(task="segment", format="onnx", opset=12, simplify=True)
  3. success = model.export(format="onnx", opset=12, simplify=True)

Environment:

# OS
Distributor ID:	Ubuntu
Description:	Ubuntu 18.04.6 LTS
Release:	18.04
Codename:	bionic
# packages
torch                     1.8.0+cpu
torchaudio                0.8.0
torchvision               0.9.0+cpu
onnx                      1.12.0
onnx-simplifier           0.4.8
onnxruntime               1.12.0
onnxsim                   0.4.13

@FeiGeChuanShu
Owner

@Digital2Slave I have fixed the README for the seg model. You can try it.

@apanand14

@Digital2Slave I have fixed the README for the seg model. You can try it.

Thank you so much, but I'm not able to see the change after self.export in the return of the forward method for seg in your .jpg. It would be helpful if you could share it again. Thank you in advance.

@FeiGeChuanShu
Owner

@Digital2Slave I have fixed the README for the seg model. You can try it.

Thank you so much, but I'm not able to see the change after self.export in the return of the forward method for seg in your .jpg. It would be helpful if you could share it again. Thank you in advance.

It's the same as the original code after self.export. You should only change the code before self.export.

@apanand14

Okay!! Thank you so much for your input. Just one more thing: should I train again with these changes, or can I export my already-trained model with these changes and convert it to ncnn?

@FeiGeChuanShu
Owner

Yes. You may also need to change the num_class here if your class count isn't 80: https://github.com/FeiGeChuanShu/ncnn-android-yolov8/blob/main/ncnn-yolov8s-seg/yolov8-seg.cpp#L261

@Digital2Slave

Digital2Slave commented Feb 6, 2023

@Digital2Slave I have fixed the README for the seg model. You can try it.

@FeiGeChuanShu Thanks a lot! Great job!

As for the yolov8 segment model, I needed to modify three forward methods in ultralytics/ultralytics/nn/modules.py:

  1. class C2f(nn.Module)
    def forward(self, x):
        # y = list(self.cv1(x).split((self.c, self.c), 1))
        # y.extend(m(y[-1]) for m in self.m)
        # return self.cv2(torch.cat(y, 1))
        # !< https://github.com/FeiGeChuanShu/ncnn-android-yolov8
        x = self.cv1(x)
        x = [x, x[:, self.c:, ...]]
        x.extend(m(x[-1]) for m in self.m)
        x.pop(1)
        return self.cv2(torch.cat(x, 1))
  2. class Detect(nn.Module)
    def forward(self, x):
        shape = x[0].shape  # BCHW
        for i in range(self.nl):
            x[i] = torch.cat((self.cv2[i](x[i]), self.cv3[i](x[i])), 1)
        if self.training:
            return x
        elif self.dynamic or self.shape != shape:
            self.anchors, self.strides = (x.transpose(0, 1) for x in make_anchors(x, self.stride, 0.5))
            self.shape = shape
        # box, cls = torch.cat([xi.view(shape[0], self.no, -1) for xi in x], 2).split((self.reg_max * 4, self.nc), 1)
        # dbox = dist2bbox(self.dfl(box), self.anchors.unsqueeze(0), xywh=True, dim=1) * self.strides
        # y = torch.cat((dbox, cls.sigmoid()), 1)
        # return y if self.export else (y, x)
        # !< https://github.com/FeiGeChuanShu/ncnn-android-yolov8
        pred = torch.cat([xi.view(shape[0], self.no, -1) for xi in x], 2)
        return pred
  3. class Segment(Detect)
    def forward(self, x):
        p = self.proto(x[0])  # mask protos
        bs = p.shape[0]  # batch size

        mc = torch.cat([self.cv4[i](x[i]).view(bs, self.nm, -1) for i in range(self.nl)], 2)  # mask coefficients
        x = self.detect(self, x)
        if self.training:
            return x, mc, p
        # return (torch.cat([x, mc], 1), p) if self.export else (torch.cat([x[0], mc], 1), (x[1], mc, p))
        # !< https://github.com/FeiGeChuanShu/ncnn-android-yolov8
        return (torch.cat([x, mc], 1).permute(0, 2, 1), p.view(bs, self.nm, -1)) if self.export else (torch.cat([x[0], mc], 1), (x[1], mc, p))

Run export.py to convert yolov8s-seg.pt to yolov8s-seg.onnx.

export.py
from ultralytics import YOLO

# load model
model = YOLO("/home/tianzx/AI/pre_weights/test/yolov8/normal/yolov8s-seg.pt")  

# Export model
success = model.export(format="onnx", opset=12, simplify=True) 
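
Before handing the file to onnx2ncnn, it can be worth a quick sanity check that the exported graph loads and produces the expected two outputs (detections and mask protos); a minimal sketch, assuming onnxruntime is installed and a 640x640 input:

import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("yolov8s-seg.onnx", providers=["CPUExecutionProvider"])
dummy = np.zeros((1, 3, 640, 640), dtype=np.float32)  # assumed 640x640 input
outputs = sess.run(None, {sess.get_inputs()[0].name: dummy})
for meta, out in zip(sess.get_outputs(), outputs):
    print(meta.name, out.shape)  # expect two outputs: detections and mask protos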

Run onnx2ncnn to convert yolov8s-seg.onnx to ncnn model files

$ ./onnx2ncnn /home/tianzx/AI/pre_weights/test/yolov8/normal/yolov8s-seg.onnx /home/tianzx/AI/pre_weights/test/yolov8/normal/yolov8s-seg.param /home/tianzx/AI/pre_weights/test/yolov8/normal/yolov8s-seg.bin

Note that, comparing with your yolov8s-seg.param, mine is still a little different.

Build ncnn-yolov8s-seg to test the ncnn model.

$ git clone https://github.com/FeiGeChuanShu/ncnn-android-yolov8
$ cd ncnn-android-yolov8/ncnn-yolov8s-seg
  • modify yolov8-seg.cpp

(1) change output name in detect_yolov8 function

    ncnn::Mat out;
    ex.extract("output0", out);

    ncnn::Mat mask_proto;
    ex.extract("output1", mask_proto);

(2) save result.jpg in the draw_objects function

    cv::imshow("image", image);
    cv::imwrite("result.jpg", image);
    cv::waitKey(0);
  • CMakeLists.txt
cmake_minimum_required(VERSION 3.5)
project(ncnn-yolov8s-seg)
set(CMAKE_BUILD_TYPE Release)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11 -pie -fPIE -fPIC -Wall -O3")

find_package(OpenCV REQUIRED)
if (OpenCV_FOUND)
    message(STATUS "OpenCV_LIBS: ${OpenCV_LIBS}")
    message(STATUS "OpenCV_INCLUDE_DIRS: ${OpenCV_INCLUDE_DIRS}")
else ()
    message(FATAL_ERROR "opencv Not Found!")
endif (OpenCV_FOUND)

find_package(OpenMP REQUIRED)
if (OPENMP_FOUND)
    message("OPENMP FOUND")
    set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${OpenMP_C_FLAGS}")
    set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${OpenMP_CXX_FLAGS}")
    set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} ${OpenMP_EXE_LINKER_FLAGS}")
else ()
    message(FATAL_ERROR "OpenMP Not Found!")
endif ()

include_directories(/usr/local/include)
include_directories(/usr/local/include/ncnn)
link_directories(/usr/local/lib)

# Source files
file(GLOB SRC "*.h" "*.cpp")

add_executable(ncnn-yolov8s-seg ${SRC})
target_link_libraries(ncnn-yolov8s-seg ncnn ${OpenCV_LIBS})
  • build ncnn-yolov8s-seg
$ cd ncnn-android-yolov8/ncnn-yolov8s-seg
$ mkdir build && cd build
$ cmake ..
$ make -j$(nproc)
$ cp ncnn-yolov8s-seg ../
$ ./ncnn-yolov8s-seg /home/tianzx/Pictures/coco_sample.png 
15 = 0.92688 at 12.03 52.23 305.47 x 420.98
15 = 0.89253 at 344.51 25.41 294.49 x 346.10
65 = 0.84357 at 40.06 73.78 135.51 x 44.37
65 = 0.69806 at 334.26 77.02 35.89 x 111.01
57 = 0.68551 at 1.36 0.81 637.40 x 478.19
  • coco_sample.png (input image)

  • result.jpg (detection result)

@apanand14

Great! Thank you!

Yes. maybe you should change the num_class here if your class number isn't 80. https://github.com/FeiGeChuanShu/ncnn-android-yolov8/blob/main/ncnn-yolov8s-seg/yolov8-seg.cpp#L261

Thank you. I just converted my ONNX to NCNN. My param looks quite similar to yours, but I have only one Crop where you have two. I've attached my best.param file in .txt format here; if you could take a quick look, that would be great.
best.txt

@FeiGeChuanShu
Owner

FeiGeChuanShu commented Feb 6, 2023

Great! Thank you!

Yes. You may also need to change the num_class here if your class count isn't 80: https://github.com/FeiGeChuanShu/ncnn-android-yolov8/blob/main/ncnn-yolov8s-seg/yolov8-seg.cpp#L261

Thank you. I just converted my ONNX to NCNN. My param looks quite similar to yours, but I have only one Crop where you have two. I've attached my best.param file in .txt format here; if you could take a quick look, that would be great. best.txt

My model is old. The new one-Crop model is more efficient than my old one.

@apanand14

Great! Then I assume I'm good to go for inference with NCNN.

@apanand14

Btw, I have trained yolov8n-seg on my custom dataset. Your .cpp will work with this model as well, right?

@FeiGeChuanShu
Owner

Of course

@apanand14

Thank you again!!

@apanand14

Thank you so much @FeiGeChuanShu and @Digital2Slave for all your help!! Everything works well and I was able to test my custom model successfully. One more thing: do you have an app for the seg model like the one for the detection model? I would like to try it in an app as well.
Thank you for the help once again!

@Digital2Slave

Thank you so much @FeiGeChuanShu and @Digital2Slave for all your help!! Everything works well and I was able to test my custom model successfully. One more thing: do you have an app for the seg model like the one for the detection model? I would like to try it in an app as well. Thank you for the help once again!

Refer to https://github.com/FeiGeChuanShu/yolov5-seg-ncnn

@Digital2Slave

Digital2Slave commented Feb 7, 2023

@apanand14

1. Prepare yolov8 segment ncnn models

I used yolov8n-seg.pt and yolov8s-seg.pt to produce the ncnn model files.

  • yolov8n-seg.param
  • yolov8n-seg.bin
  • yolov8s-seg.param
  • yolov8s-seg.bin

Put the ncnn models in the app/src/main/assets folder.

2. Modify yolo.h and yolo.cpp

Refer to https://github.com/FeiGeChuanShu/yolov5-seg-ncnn and https://github.com/FeiGeChuanShu/ncnn-android-yolov8/blob/main/ncnn-yolov8s-seg/yolov8-seg.cpp to modify the yolo.h and yolo.cpp files under the app/src/main/jni folder.

yolo.zip contains the yolo.h and yolo.cpp files.

3. Note

The Android API level needs to be 24+ for opencv-mobile-4.6.0.

@apanand14

Thank you so much @Digital2Slave for your support in getting the ncnn android app running for yolov8. It runs fine. And thank you @FeiGeChuanShu for providing an inference script for yolov8-seg and the yolov5-seg ncnn android app. Great work, guys!

@visonpon

visonpon commented Mar 9, 2024

FeiGe @FeiGeChuanShu I'm still hitting crashes on my end.
I first modified the C2f and Detect code as described in the README, then used the export method to get the ONNX model, then ran onnx2ncnn to get the .param and .bin files, and put them in assets for testing, but the app still crashes.
Could you please take a look? Thanks!

@nqthai309

@FeiGeChuanShu @Digital2Slave Thank you all for your contributions to the model conversion work.
Currently I am having trouble converting the yolov8s-obb.pt model to ONNX format; specifically, I am getting an error in the forward function of the class OBB(Detect).

  1. class OBB(Detect):
    def forward(self, x):
        """Concatenates and returns predicted bounding boxes and class probabilities."""
        bs = x[0].shape[0]  # batch size
        angle = torch.cat([self.cv4[i](x[i]).view(bs, self.ne, -1) for i in range(self.nl)], 2)  # OBB theta logits
        # NOTE: set `angle` as an attribute so that `decode_bboxes` could use it.
        angle = (angle.sigmoid() - 0.25) * math.pi  # [-pi/4, 3pi/4]
        # angle = angle.sigmoid() * math.pi / 2  # [0, pi/2]
        if not self.training:
            self.angle = angle
        x = self.detect(self, x)
        if self.training:
            return x, angle

        return torch.cat([x, angle], 1) if self.export else (torch.cat([x[0], angle], 1), (x[1], angle))

My conversion script is as follows (model from https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s-obb.pt):

from ultralytics import YOLO

# load yolov8 obb model
model = YOLO("/media/thainq97/DATA/GHTK/CLONE/ultralytics-8.1.0/yolov8s-obb.pt")

# Use the model
success = model.export(format="onnx", opset=12, simplify=True)

Error

Ultralytics YOLOv8.1.0 🚀 Python-3.8.18 torch-1.13.1+cu117 CPU (AMD Ryzen 5 3600 6-Core Processor)
YOLOv8s-obb summary (fused): 187 layers, 11417376 parameters, 0 gradients
Traceback (most recent call last):
  File "export_v8.py", line 7, in <module>
    success = model.export(format="onnx", opset=12, simplify=True)
  File "/media/thainq97/DATA/GHTK/CLONE/ultralytics-8.1.0/ultralytics/engine/model.py", line 347, in export
    return Exporter(overrides=args, _callbacks=self.callbacks)(model=self.model)
  File "/home/thainq97/miniconda3/envs/yolov8/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/media/thainq97/DATA/GHTK/CLONE/ultralytics-8.1.0/ultralytics/engine/exporter.py", line 229, in __call__
    y = model(im)  # dry runs
  File "/home/thainq97/miniconda3/envs/yolov8/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/media/thainq97/DATA/GHTK/CLONE/ultralytics-8.1.0/ultralytics/nn/tasks.py", line 80, in forward
    return self.predict(x, *args, **kwargs)
  File "/media/thainq97/DATA/GHTK/CLONE/ultralytics-8.1.0/ultralytics/nn/tasks.py", line 98, in predict
    return self._predict_once(x, profile, visualize, embed)
  File "/media/thainq97/DATA/GHTK/CLONE/ultralytics-8.1.0/ultralytics/nn/tasks.py", line 119, in _predict_once
    x = m(x)  # run
  File "/home/thainq97/miniconda3/envs/yolov8/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/media/thainq97/DATA/GHTK/CLONE/ultralytics-8.1.0/ultralytics/nn/modules/head.py", line 125, in forward
    return torch.cat([x, angle], 1) if self.export else (torch.cat([x[0], angle], 1), (x[1], angle))
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 79 but got size 21504 for tensor number 1 in the list.

I really look forward to everyone's help with this problem; I think many people are hitting the same issue. Thanks in advance.
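
By analogy with the Segment fix earlier in this thread (keep the modified Detect.forward returning the unpermuted (bs, no, N) tensor, then concatenate and permute once at the very end), an untested sketch of an OBB.forward export path might look like this; it is an assumption, not a confirmed fix:

    def forward(self, x):
        # untested sketch by analogy with the Segment change above; assumes the
        # modified Detect.forward returns the unpermuted (bs, no, N) tensor
        bs = x[0].shape[0]  # batch size
        angle = torch.cat([self.cv4[i](x[i]).view(bs, self.ne, -1) for i in range(self.nl)], 2)
        angle = (angle.sigmoid() - 0.25) * math.pi  # [-pi/4, 3pi/4]
        if not self.training:
            self.angle = angle
        x = self.detect(self, x)
        if self.training:
            return x, angle
        # x: (bs, no, N), angle: (bs, ne, N) -> concat on the channel dim,
        # then permute once so the exported output is (bs, N, no + ne)
        return torch.cat([x, angle], 1).permute(0, 2, 1) if self.export else (torch.cat([x[0], angle], 1), (x[1], angle))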

@xiaojingang1777

A working param file ends with a Permute layer:

Permute Transpose_526 1 1 custom_output_8 output 0=1

but my param doesn't end that way; instead it looks like the following:
Permute transpose_199 1 1 321 322 0=2
Softmax softmax_189 1 1 322 323 0=0 1=1
Convolution conv_95 1 1 323 324 0=1 1=1 11=1 12=1 13=1 14=0 2=1 3=1 4=0 5=0 6=16
Reshape view_198 1 1 324 325 0=8400 1=4
MemoryData pnnx_fold_anchor_points.1 0 1 326 0=8400 1=2
MemoryData pnnx_fold_anchor_points.1_1 0 1 327 0=8400 1=2
Slice chunk_0 1 2 325 328 329 -23300=2,-233,-233 1=0
BinaryOp sub_12 2 1 326 328 330 0=1
Split splitncnn_30 1 2 330 331 332
BinaryOp add_13 2 1 327 329 333 0=0
Split splitncnn_31 1 2 333 334 335
BinaryOp add_14 2 1 331 334 336 0=0
BinaryOp div_15 1 1 336 337 0=3 1=1 2=2.000000e+00
BinaryOp sub_16 2 1 335 332 338 0=1
Concat cat_18 2 1 337 338 339 0=0
Reshape reshape_190 1 1 255 340 0=8400 1=1
BinaryOp mul_17 2 1 339 340 341 0=2
Sigmoid sigmoid_188 1 1 320 342
Concat cat_19 2 1 341 342 343 0=0
Concat cat_20 2 1 343 281 out0 0=0

@moy812782484

Regarding the yolov8s-obb.pt conversion error reported by @nqthai309 above (RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 79 but got size 21504 for tensor number 1 in the list).
Has this been solved yet?
