/ohos5.0/docs/zh-cn/application-dev/ai/mindspore/ |
mindspore-lite-converter-guidelines.md | 89 | --fp16 | No | Whether to store float32-format weights in float16 format when the model is serialized.<br/>Default value: off… 139 input_dtypes=float32;float32 143 output_dtypes=float32
|
mindspore-guidelines-based-js.md | 119 // Convert readBuffer to the float32 format, and standardize the image.
|
mindspore-guidelines-based-native.md | 103 // Convert readBuffer to the float32 format, and standardize the image.
|
/ohos5.0/docs/zh-cn/device-dev/reference/hdi-apis/nnrt/ |
_model_config_v10.md | 20 | boolean [enableFloat16](#enablefloat16) | Whether a float32 floating-point model runs in float16 | 35 Whether a float32 floating-point model runs in float16
|
_model_config_v20.md | 20 | boolean [enableFloat16](#enablefloat16) | Whether a float32 floating-point model runs in float16 | 36 Whether a float32 floating-point model runs in float16
|
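Both ModelConfig hits point at the same HDI field: enableFloat16 asks an NNRt device driver to execute a float32 model with float16 arithmetic. Below is a minimal sketch of setting it; the generated header path, namespace, and the call the config is eventually passed to are assumptions drawn from the surrounding reference pages, not a verified copy of the IDL.

```cpp
#include <v1_0/nnrt_types.h>  // assumed header generated from the V1_0 NNRt IDL

// Sketch only: everything except the enableFloat16 field named in the hits
// above is an assumption about the generated HDI types.
OHOS::HDI::Nnrt::V1_0::ModelConfig MakeFp16ModelConfig()
{
    OHOS::HDI::Nnrt::V1_0::ModelConfig config{};
    config.enableFloat16 = true;  // run the float32 model with float16 arithmetic
    // The caller then hands this config to the driver, e.g. when preparing the
    // model through the INnrtDevice interface.
    return config;
}
```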
/ohos5.0/docs/en/application-dev/ai/mindspore/ |
mindspore-lite-converter-guidelines.md | 90 | --fp16 | No | Whether to store the weights of float32 data as float16… 140 input_dtypes=float32;float32 144 output_dtypes=float32
|
mindspore-guidelines-based-js.md | 119 // Convert readBuffer to the float32 format, and standardize the image.
|
mindspore-guidelines-based-native.md | 103 // Convert readBuffer to the float32 format, and standardize the image.
|
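Both mindspore-guidelines files reference the same preprocessing step: the raw bytes read into readBuffer are converted to float32 and standardized before inference. A self-contained sketch of that step follows; the mean and standard-deviation constants are placeholders, not the values used in the guides.

```cpp
#include <cstdint>
#include <vector>

// Convert raw 8-bit image data to float32 and standardize it.
// kMean and kStd are placeholder constants, not the ones from the guides.
std::vector<float> PreprocessToFloat32(const std::vector<uint8_t>& readBuffer)
{
    constexpr float kMean = 127.5F;
    constexpr float kStd = 127.5F;

    std::vector<float> inputData(readBuffer.size());
    for (size_t i = 0; i < readBuffer.size(); ++i) {
        inputData[i] = (static_cast<float>(readBuffer[i]) - kMean) / kStd;
    }
    return inputData;  // float32 data ready to copy into the model's input tensor
}
```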
/ohos5.0/foundation/graphic/graphic_3d/lume/LumeRender/api/render/device/ |
pipeline_state_desc.h | 655 constexpr ClearColorValue(float r, float g, float b, float a) : float32 { r, g, b, a } {}; in ClearColorValue() 659 float float32[4]; member
|
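The pipeline_state_desc.h hit shows ClearColorValue keeping the clear color in a float32[4] member filled by a constexpr constructor. A stripped-down re-sketch of that pattern follows; the integer members are assumptions about what such a union typically carries, and LumeRender's real definition may differ.

```cpp
#include <cstdint>

// Minimal re-sketch of a clear-color union in the style of pipeline_state_desc.h.
// Only the float32 constructor and member mirror the hit; the integer members
// are assumptions.
union ClearColorValue {
    constexpr ClearColorValue(float r, float g, float b, float a) : float32{ r, g, b, a } {}

    float float32[4];
    int32_t int32[4];
    uint32_t uint32[4];
};

// The float32 array can be handed directly to an API expecting four floats,
// e.g. glClearBufferfv(GL_COLOR, idx, value.float32) as in render_backend_gles.cpp.
constexpr ClearColorValue kOpaqueBlack{ 0.0F, 0.0F, 0.0F, 1.0F };
```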
/ohos5.0/foundation/ai/neural_network_runtime/ |
neural-network-runtime-guidelines.md | 291 // Add the first input tensor of the float32 type for the Add operator. The tensor shape is [1, 2, 2, 3] 311 // Add the second input tensor of the float32 type for the Add operator. The tensor shape is [1, 2, 2, 3] 355 // Set the output tensor of the float32 type for the Add operator. The tensor shape is [1, 2, 2, 3]
|
/ohos5.0/docs/zh-cn/application-dev/ai/nnrt/ |
neural-network-runtime-guidelines.md | 282 // Add the first input tensor of the float32 type for the Add operator. The tensor shape is [1, 2, 2, 3] 302 // Add the second input tensor of the float32 type for the Add operator. The tensor shape is [1, 2, 2, 3] 346 // Set the output tensor of the float32 type for the Add operator. The tensor shape is [1, 2, 2, 3]
|
/ohos5.0/docs/en/application-dev/ai/nnrt/ |
neural-network-runtime-guidelines.md | 282 …// Add the first input tensor of the float32 type for the Add operator. The tensor shape is [1, 2,… 302 …// Add the second input tensor of the float32 type for the Add operator. The tensor shape is [1, 2… 346 …// Add the output tensor of the float32 type for the Add operator. The tensor shape is [1, 2, 2, 3…
|
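All three copies of neural-network-runtime-guidelines.md build the same example: an Add operator whose two inputs and the output are float32 tensors of shape [1, 2, 2, 3]. The sketch below describes one such tensor with the API 9 style OH_NN_Tensor struct; the header path and field set follow my reading of the reference and should be checked against the installed NDK header.

```cpp
#include "neural_network_runtime/neural_network_runtime.h"  // assumed NDK header path

// Describe one float32 input tensor of shape [1, 2, 2, 3] for the Add operator,
// in the style of the guideline. Verify the struct fields against the header.
void AddFloat32InputTensor(OH_NNModel* model)
{
    static constexpr int32_t kDims[4] = {1, 2, 2, 3};

    OH_NN_Tensor tensor;
    tensor.dataType = OH_NN_FLOAT32;       // element type: float32
    tensor.dimensionCount = 4;
    tensor.dimensions = kDims;             // tensor shape [1, 2, 2, 3]
    tensor.quantParam = nullptr;           // no quantization for a float32 tensor
    tensor.type = OH_NN_TENSOR;            // plain data tensor, not an operator attribute

    OH_NNModel_AddTensor(model, &tensor);  // check the returned OH_NN_ReturnCode in real code
}
```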
/ohos5.0/foundation/graphic/graphic_3d/lume/LumeRender/src/nodecontext/ |
render_node_parser_util.cpp | 241 FromJson(*pos, context.data.clearValue.color.float32); in FromJson()
|
/ohos5.0/docs/zh-cn/application-dev/reference/apis-neural-network-runtime-kit/ |
_neural_network_runtime.md | 499 | OH_NN_FLOAT32 | The tensor data type is float32 | 1147 Floating-point models use float32 precision by default. If this API is called on hardware that supports float16, a model with float32 precision will be computed with float16 precision, which can reduce memory usage and execution time. 1160 | enableFloat16 | Flag for low-precision float16 computation. When set to true, float16 inference is performed; when set to false, float32 inference is performed. |
|
_neural_nework_runtime.md | 499 | OH_NN_FLOAT32 | The tensor data type is float32 | 1147 Floating-point models use float32 precision by default. If this API is called on hardware that supports float16, a model with float32 precision will be computed with float16 precision, which can reduce memory usage and execution time. 1160 | enableFloat16 | Flag for low-precision float16 computation. When set to true, float16 inference is performed; when set to false, float32 inference is performed. |
|
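Both _neural_network_runtime.md hits describe the same application-level switch: when enableFloat16 is set, a float32 model is computed with float16 precision on hardware that supports it, trading a little accuracy for lower memory use and faster execution. A minimal sketch of turning it on for a compilation instance (error handling trimmed, header path assumed):

```cpp
#include "neural_network_runtime/neural_network_runtime.h"  // assumed NDK header path

// Ask the runtime to execute a float32 model with float16 precision.
// Passing false keeps float32 inference.
OH_NN_ReturnCode EnableLowPrecisionInference(OH_NNCompilation* compilation)
{
    return OH_NNCompilation_EnableFloat16(compilation, true);
}
```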
/ohos5.0/docs/zh-cn/application-dev/reference/apis-mindspore-lite-kit/ |
js-apis-mindSporeLite.md | 682 | O2 | 2 | Converts the network to float16 and keeps the batch normalization layers and the loss function as float32. |
|
_mind_spore.md | 599 | OH_AI_KO2 | Converts the network to float16 and keeps the batch normalization layers and the loss function as float32. |
|
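Both MindSpore Lite references document the O2 optimization level: the network is converted to float16 while the batch normalization layers and the loss function stay in float32. The sketch below selects it through the C training configuration; OH_AI_TrainCfgSetOptimizationLevel, its signature, and the header path are assumptions to verify against _mind_spore.md, while OH_AI_KO2 itself comes from the hit above.

```cpp
#include "mindspore/model.h"  // assumed header exposing the OH_AI_* training APIs

// Select the O2 optimization level: run in float16 but keep the batch
// normalization layers and the loss function in float32.
// OH_AI_TrainCfgSetOptimizationLevel and its signature are assumptions here.
OH_AI_TrainCfgHandle MakeO2TrainCfg()
{
    OH_AI_TrainCfgHandle cfg = OH_AI_TrainCfgCreate();
    if (cfg != nullptr) {
        OH_AI_TrainCfgSetOptimizationLevel(cfg, OH_AI_KO2);
    }
    return cfg;
}
```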
/ohos5.0/docs/en/application-dev/reference/apis-mindspore-lite-kit/ |
js-apis-mindSporeLite.md | 682 …loat16 and keeps the precision type of the batch normalization layer and loss function as float32.|
|
_mind_spore.md | 599 …loat16 and keeps the precision type of the batch normalization layer and loss function as float32.|
|
/ohos5.0/foundation/graphic/graphic_3d/lume/LumeRender/src/gles/ |
render_backend_gles.cpp | 1276 glClearBufferfv(GL_COLOR, static_cast<GLint>(idx), ref.clearValue.color.float32); in HandleColorAttachments()
|
/ohos5.0/docs/en/application-dev/reference/apis-neural-network-runtime-kit/ |
_neural_network_runtime.md | 499 | OH_NN_FLOAT32 | float32 type.| 1147 …int model uses float32 for computing. If this API is called on a device that supports float16, flo… 1160 …, float16 inference is performed. If this parameter is set to **false**, float32 inference is perf…
|
/ohos5.0/docs/zh-cn/application-dev/reference/common/ |
_j_s_v_m.md | 739 | JSVM_FLOAT32_ARRAY | float32 type. |
|
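The _j_s_v_m.md hit lists JSVM_FLOAT32_ARRAY as the typed-array kind for float32 elements. The sketch below creates such an array through JSVM-API on the assumption that OH_JSVM_CreateArraybuffer and OH_JSVM_CreateTypedarray follow the familiar napi_* signatures; verify the exact calls and header path against _j_s_v_m.md.

```cpp
#include "ark_runtime/jsvm.h"  // assumed JSVM-API header path

// Create a JS Float32Array of `length` elements backed by a fresh ArrayBuffer.
// Call names and signatures mirror napi_create_arraybuffer /
// napi_create_typedarray and are assumptions about the JSVM-API surface.
static JSVM_Value CreateFloat32Array(JSVM_Env env, size_t length)
{
    void* data = nullptr;
    JSVM_Value buffer = nullptr;
    JSVM_Value typedArray = nullptr;

    OH_JSVM_CreateArraybuffer(env, length * sizeof(float), &data, &buffer);
    OH_JSVM_CreateTypedarray(env, JSVM_FLOAT32_ARRAY, length, buffer, 0, &typedArray);
    return typedArray;  // elements can be written through `data` as float values
}
```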