ArgMax not supported yet #2582

Open
imliupu opened this issue Jan 11, 2021 · 2 comments
imliupu commented Jan 11, 2021

When using onnx2ncnn, the conversion fails with a message that the ArgMax operator is not supported. My ONNX opset version is 11, and the official documentation says this op is supported. The onnx file is attached; could someone try converting it, or help solve this problem? Thanks!
(ArgMax not supported yet!
axis=3
keepdims=0)

The onnx file (processed with onnx-simplifier):
mobile_sim.zip

@novioleo

@imliupu First of all, ArgMax can be enabled in ncnn/src/CMakeLists.txt by switching its option from OFF to ON.
However, the ArgMax defined in ncnn is not the ArgMax defined in ONNX: ncnn's ArgMax returns the location of the global maximum value, and you cannot specify an axis.
I use two for loops to get the final result, like this:

#include <cassert>

#include "mat.h" // ncnn::Mat

// Read the element at (row, col) of the given channel.
template<class T>
T get_value(const ncnn::Mat &_to_process_mat,
            const int row_index,
            const int col_index,
            const int channel_index) {
    return *((const T *) _to_process_mat.data + channel_index * _to_process_mat.cstep +
             row_index * _to_process_mat.w + col_index);
}

// Write a value to (row, col) of the given channel.
template<class T>
void set_value(ncnn::Mat &_to_process_mat,
               const int row_index,
               const int col_index,
               const int channel_index,
               T value) {
    *((T *) _to_process_mat.data + channel_index * _to_process_mat.cstep +
      row_index * _to_process_mat.w + col_index) = value;
}

// ArgMax over the channel (first) axis: for each (h, w) position, write the
// index of the channel holding the largest value into the pre-allocated,
// single-channel _result_mat.
template<class T>
void argmax_first_axis(const ncnn::Mat &_to_process_mat, ncnn::Mat &_result_mat) {
    assert(sizeof(T) == _to_process_mat.elemsize);
    const int classes = _to_process_mat.c;
    for (int h = 0; h < _to_process_mat.h; ++h) {
        for (int w = 0; w < _to_process_mat.w; ++w) {
            int max_index = 0;
            T max_value = get_value<T>(_to_process_mat, h, w, 0);
            for (int i = 1; i < classes; ++i) {
                T current_value = get_value<T>(_to_process_mat, h, w, i);
                if (current_value > max_value) {
                    max_index = i;
                    max_value = current_value;
                }
            }
            set_value<T>(_result_mat, h, w, 0, (T) max_index);
        }
    }
}
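
A minimal usage sketch: the blob name "output" below is a placeholder, and the result Mat must be pre-allocated with one channel of the same spatial size.

#include "net.h" // ncnn::Extractor

void label_map_from_scores(ncnn::Extractor &ex) {
    ncnn::Mat scores;
    ex.extract("output", scores); // "output" is a hypothetical blob name

    // one channel, same spatial size, float storage for the class indices
    ncnn::Mat label_map(scores.w, scores.h, 1, sizeof(float));
    argmax_first_axis<float>(scores, label_map);
    // label_map now holds, per pixel, the index of the max-score channel (as float)
}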

This code is NOT optimized with OpenMP; you can optimize it yourself.

nihui (Member) commented Aug 5, 2024

For the various problems with onnx model conversion, it is recommended to use the latest pnnx tool to convert your model to ncnn:

pip install pnnx
pnnx model.onnx inputshape=[1,3,224,224]

Detailed reference documentation:
https://github.com/pnnx/pnnx
https://github.com/Tencent/ncnn/wiki/use-ncnn-with-pytorch-or-onnx#how-to-use-pnnx
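
Once pnnx has converted the model, the resulting files can be loaded with the usual ncnn API. A minimal sketch, assuming pnnx's default ncnn output names (model.ncnn.param / model.ncnn.bin) and its default blob naming (in0 / out0); adjust to your actual output:

#include "net.h" // ncnn::Net, ncnn::Extractor

int main() {
    ncnn::Net net;
    // file and blob names follow pnnx's defaults (assumption)
    if (net.load_param("model.ncnn.param") || net.load_model("model.ncnn.bin"))
        return -1;

    ncnn::Mat in(224, 224, 3); // matches inputshape=[1,3,224,224]
    ncnn::Extractor ex = net.create_extractor();
    ex.input("in0", in);
    ncnn::Mat out;
    ex.extract("out0", out);
    return 0;
}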
