Unverified commit 56cf67d0 authored by teng, committed by GitHub

Fix typo and remove redundant code (#1015)

fix typo

* Update quant_tool_int8.cpp
Parent e1d54110
@@ -276,4 +276,4 @@ Finally, you need to write the registration function and the de-registration fun
From the above description, we can see that the core work of adding a custom device is filling in the `ir_device_t` structure; once this description is complete, all of the device registration work is done. The modular `device` design makes **Tengine Lite** very easy to extend and flexible enough.
## Surprise
In the `init_tengine(void)` function, after the `operator prototype` registration completes, `serializer` and `devices` are registered next, but these calls cannot be jumped to while the code is viewed statically. The user can install an integrated development environment such as `Microsoft Visual Studio` or `JetBrains CLion`; after opening the folder and running the `CMake` generation step, the jump works.
\ No newline at end of file
@@ -7,7 +7,7 @@ Update the Flag once every quarter.
- [x] refactor the framework code
- [x] refactor the NPU plugin code
- [x] support VS2017 compile
- [ ] support macOS compile
- [x] support the model type of PaddlePaddle
- [ ] add more examples with NPU platform
- [ ] fix the Float32 bugs of Vulkan
@@ -277,4 +277,4 @@ static struct optimizer tpu_optimizer = {
From the above description, we can see that the core work of adding a custom device is filling in the `ir_device_t` structure; once this description is complete, all of the device registration work is done. The modular `device` design makes **Tengine** very easy to extend and flexible enough.
## Easter Egg
In the `init_tengine(void)` function, after the `operator prototype` registration completes, `serializer` and `devices` are registered next, but these calls cannot be jumped to while the code is viewed statically. The user can install an integrated development environment such as `Microsoft Visual Studio` or `JetBrains CLion`; after opening the folder and running the `CMake` generation step, the jump works.
@@ -514,4 +514,4 @@ ENDFUNCTION()
# generate all serializer
GENERATE_REGISTER_HEADER_FILE("register_" "unregister_" "" "${_SRL_SRC_ROOT}/register.h.in" "${_SRL_BIN_ROOT}/register.h" "${_SRL_TM2_SRL_SOURCE}")
```
This completes the configuration; the generated header files then take effect in the subsequent build. When a user reviews the code in an `IDE` that only does static analysis, the relevant headers have not been generated yet, so jump-to-definition may fail. An editor configured for building, such as `Microsoft Visual Studio Code`, runs the `CMake` configure-and-generate step after the folder is opened; the headers are then generated and navigation works. Other `IDE`s such as `Microsoft Visual Studio` and `JetBrains CLion` can also complete the configure-and-generate process and are recommended.
\ No newline at end of file
@@ -7,7 +7,7 @@
- [x] refactor the framework code
- [x] refactor the NPU plugin code
- [x] support VS2017 compile
- [ ] support macOS compile
- [x] support the model type of PaddlePaddle
- [ ] add more examples with NPU platform
- [ ] fix the Float32 bugs of Vulkan
@@ -228,7 +228,7 @@ int ncnn_serializer::load_model_file(const char* fname, std::vector<NcnnNode>& n
{
    bool array_selection = id <= -23300;
    if (node.op == "Input" && array_selection)
    {
        node.optimized = 1;
    }
@@ -236,7 +236,7 @@ int ncnn_serializer::load_model_file(const char* fname, std::vector<NcnnNode>& n
    {
        id = -id - 23300;
    }
    if (node.optimized == 1 && array_selection)
    {
        int len = 0;
        int nscan = fscanf(fp, "%d", &len);
@@ -290,7 +290,7 @@ int ncnn_serializer::load_model_file(const char* fname, std::vector<NcnnNode>& n
    }
    else
    {
        if (array_selection)
        {
            int len = 0;
            int nscan = fscanf(fp, "%d", &len);
@@ -492,7 +492,7 @@ int ncnn_serializer::load_binary_file(const char* fname, std::vector<NcnnParam>&
    }
    else if (nodelist[i].op == "InnerProduct")
    {
        NcnnParam weight;
        nscan = read(&magic, sizeof(float));
        weight.name = nodelist[i].name + "_w";
        std::map<int, std::string>::iterator iter;
......
@@ -237,7 +237,7 @@ int QuantTool::activation_quant_tool()
    float threshold = compute_aciq_gaussian_clip(absmax, emlement_num, 8);
    act_scale = threshold / 127.f;
    /* the scale of softmax is always scale = 1 / 127.f */
    for (int j = 0; j < ir_graph->node_num; j++)
    {
        struct node* noden = ir_graph->node_list[j];
@@ -277,7 +277,7 @@ int QuantTool::activation_quant_tool()
    act_scale = std::max(std::abs(max_activation[i]), std::abs(min_activation[i])) / 127.f;
    /* the scale of softmax is always scale = 1 / 127.f */
    for (int j = 0; j < ir_graph->node_num; j++)
    {
        struct node* noden = ir_graph->node_list[j];
......