# MindSpore


## Overview

Provides APIs related to MindSpore Lite model inference. The APIs in this module are non-thread-safe.

**Since**: 9

## Summary


### Files

| Name| Description|
| -------- | -------- |
| [context.h](context_8h.md) | Provides **Context** APIs for configuring runtime information.<br>File to include: &lt;mindspore/context.h&gt;<br>Library: libmindspore_lite_ndk.so|
| [data_type.h](data__type_8h.md) | Declares tensor data types.<br>File to include: &lt;mindspore/data_type.h&gt;<br>Library: libmindspore_lite_ndk.so|
| [format.h](format_8h.md) | Declares tensor data formats.<br>File to include: &lt;mindspore/format.h&gt;<br>Library: libmindspore_lite_ndk.so|
| [model.h](model_8h.md) | Provides model-related APIs for model creation and inference.<br>File to include: &lt;mindspore/model.h&gt;<br>Library: libmindspore_lite_ndk.so|
| [status.h](status_8h.md) | Provides the status codes of MindSpore Lite.<br>File to include: &lt;mindspore/status.h&gt;<br>Library: libmindspore_lite_ndk.so|
| [tensor.h](tensor_8h.md) | Provides APIs for creating and modifying tensor information.<br>File to include: &lt;mindspore/tensor.h&gt;<br>Library: libmindspore_lite_ndk.so|
| [types.h](types_8h.md) | Provides the model file types and device types supported by MindSpore Lite.<br>File to include: &lt;mindspore/types.h&gt;<br>Library: libmindspore_lite_ndk.so|


### Structs

| Name| Description|
| -------- | -------- |
| [OH_AI_TensorHandleArray](_o_h___a_i___tensor_handle_array.md) | Defines the tensor array structure, which is used to store the tensor array pointer and tensor array length.|
| [OH_AI_ShapeInfo](_o_h___a_i___shape_info.md) | Defines dimension information. The maximum dimension is set by **OH_AI_MAX_SHAPE_NUM**.|
| [OH_AI_CallBackParam](_o_h___a_i___call_back_param.md) | Defines the operator information passed in a callback.|

### Macro Definition

| Name| Description|
| -------- | -------- |
| [OH_AI_MAX_SHAPE_NUM](#oh_ai_max_shape_num) 32 | Defines dimension information. The maximum dimension is set by **OH_AI_MAX_SHAPE_NUM**.|


### Types

| Name| Description|
| -------- | -------- |
| [OH_AI_ContextHandle](#oh_ai_contexthandle) | Defines the pointer to the MindSpore context.|
| [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) | Defines the pointer to the MindSpore device information.|
| [OH_AI_DataType](#oh_ai_datatype) | Declares data types supported by MSTensor.|
| [OH_AI_Format](#oh_ai_format) | Declares data formats supported by MSTensor.|
| [OH_AI_ModelHandle](#oh_ai_modelhandle) | Defines the pointer to a model object.|
| [OH_AI_TrainCfgHandle](#oh_ai_traincfghandle) | Defines the pointer to a training configuration object.|
| [OH_AI_TensorHandleArray](#oh_ai_tensorhandlearray) | Defines the tensor array structure, which is used to store the tensor array pointer and tensor array length.|
| [OH_AI_ShapeInfo](_o_h___a_i___shape_info.md) | Defines dimension information. The maximum dimension is set by **OH_AI_MAX_SHAPE_NUM**.|
| [OH_AI_CallBackParam](#oh_ai_callbackparam) | Defines the operator information passed in a callback.|
| [OH_AI_KernelCallBack](#oh_ai_kernelcallback) (const [OH_AI_TensorHandleArray](_o_h___a_i___tensor_handle_array.md) inputs, const [OH_AI_TensorHandleArray](_o_h___a_i___tensor_handle_array.md) outputs, const [OH_AI_CallBackParam](_o_h___a_i___call_back_param.md) kernel_Info) | Defines the pointer to a callback.|
| [OH_AI_Status](#oh_ai_status) | MindSpore status codes.|
| [OH_AI_TensorHandle](#oh_ai_tensorhandle) | Defines the handle of a tensor object.|
| [OH_AI_ModelType](#oh_ai_modeltype) | Defines model file types.|
| [OH_AI_DeviceType](#oh_ai_devicetype) | Defines the supported device types.|
| [OH_AI_NNRTDeviceType](#oh_ai_nnrtdevicetype) | Defines NNRt device types.|
| [OH_AI_PerformanceMode](#oh_ai_performancemode) | Defines performance modes of the NNRt device.|
| [OH_AI_Priority](#oh_ai_priority) | Defines NNRt inference task priorities.|
| [OH_AI_OptimizationLevel](#oh_ai_optimizationlevel) | Defines training optimization levels.|
| [OH_AI_QuantizationType](#oh_ai_quantizationtype) | Defines quantization types.|
| [NNRTDeviceDesc](#nnrtdevicedesc) | Defines NNRt device information, including the device ID and device name.|
| [OH_AI_AllocatorHandle](#oh_ai_allocatorhandle) | Handle of the memory allocator.|


### Enums

| Name| Description|
| -------- | -------- |
| [OH_AI_DataType](#oh_ai_datatype) {<br>OH_AI_DATATYPE_UNKNOWN = 0, OH_AI_DATATYPE_OBJECTTYPE_STRING = 12, OH_AI_DATATYPE_OBJECTTYPE_LIST = 13, OH_AI_DATATYPE_OBJECTTYPE_TUPLE = 14,<br>OH_AI_DATATYPE_OBJECTTYPE_TENSOR = 17, OH_AI_DATATYPE_NUMBERTYPE_BEGIN = 29, OH_AI_DATATYPE_NUMBERTYPE_BOOL = 30, OH_AI_DATATYPE_NUMBERTYPE_INT8 = 32,<br>OH_AI_DATATYPE_NUMBERTYPE_INT16 = 33, OH_AI_DATATYPE_NUMBERTYPE_INT32 = 34, OH_AI_DATATYPE_NUMBERTYPE_INT64 = 35, OH_AI_DATATYPE_NUMBERTYPE_UINT8 = 37,<br>OH_AI_DATATYPE_NUMBERTYPE_UINT16 = 38, OH_AI_DATATYPE_NUMBERTYPE_UINT32 = 39, OH_AI_DATATYPE_NUMBERTYPE_UINT64 = 40, OH_AI_DATATYPE_NUMBERTYPE_FLOAT16 = 42,<br>OH_AI_DATATYPE_NUMBERTYPE_FLOAT32 = 43, OH_AI_DATATYPE_NUMBERTYPE_FLOAT64 = 44, OH_AI_DATATYPE_NUMBERTYPE_END = 46, OH_AI_DataTypeInvalid = INT32_MAX<br>} | Declares data types supported by MSTensor.|
| [OH_AI_Format](#oh_ai_format) {<br>OH_AI_FORMAT_NCHW = 0, OH_AI_FORMAT_NHWC = 1, OH_AI_FORMAT_NHWC4 = 2, OH_AI_FORMAT_HWKC = 3,<br>OH_AI_FORMAT_HWCK = 4, OH_AI_FORMAT_KCHW = 5, OH_AI_FORMAT_CKHW = 6, OH_AI_FORMAT_KHWC = 7,<br>OH_AI_FORMAT_CHWK = 8, OH_AI_FORMAT_HW = 9, OH_AI_FORMAT_HW4 = 10, OH_AI_FORMAT_NC = 11,<br>OH_AI_FORMAT_NC4 = 12, OH_AI_FORMAT_NC4HW4 = 13, OH_AI_FORMAT_NCDHW = 15, OH_AI_FORMAT_NWC = 16,<br>OH_AI_FORMAT_NCW = 17<br>} | Declares data formats supported by MSTensor.|
| [OH_AI_CompCode](#oh_ai_compcode) {<br>OH_AI_COMPCODE_CORE = 0x00000000u,<br>OH_AI_COMPCODE_MD = 0x10000000u,<br>OH_AI_COMPCODE_ME = 0x20000000u,<br>OH_AI_COMPCODE_MC = 0x30000000u,<br>OH_AI_COMPCODE_LITE = 0xF0000000u<br>} | Defines MindSpore component codes.|
| [OH_AI_Status](#oh_ai_status) {<br>OH_AI_STATUS_SUCCESS = 0, OH_AI_STATUS_CORE_FAILED = OH_AI_COMPCODE_CORE \| 0x1, OH_AI_STATUS_LITE_ERROR = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -1), OH_AI_STATUS_LITE_NULLPTR = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -2),<br>OH_AI_STATUS_LITE_PARAM_INVALID = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -3), OH_AI_STATUS_LITE_NO_CHANGE = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -4), OH_AI_STATUS_LITE_SUCCESS_EXIT = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -5), OH_AI_STATUS_LITE_MEMORY_FAILED = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -6),<br>OH_AI_STATUS_LITE_NOT_SUPPORT = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -7), OH_AI_STATUS_LITE_THREADPOOL_ERROR = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -8), OH_AI_STATUS_LITE_UNINITIALIZED_OBJ = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -9), OH_AI_STATUS_LITE_OUT_OF_TENSOR_RANGE = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -100),<br>OH_AI_STATUS_LITE_INPUT_TENSOR_ERROR, OH_AI_STATUS_LITE_REENTRANT_ERROR = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -102), OH_AI_STATUS_LITE_GRAPH_FILE_ERROR = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -200), OH_AI_STATUS_LITE_NOT_FIND_OP = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -300),<br>OH_AI_STATUS_LITE_INVALID_OP_NAME = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -301), OH_AI_STATUS_LITE_INVALID_OP_ATTR = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -302), OH_AI_STATUS_LITE_OP_EXECUTE_FAILURE, OH_AI_STATUS_LITE_FORMAT_ERROR = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -400),<br>OH_AI_STATUS_LITE_INFER_ERROR = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF & -500), OH_AI_STATUS_LITE_INFER_INVALID, OH_AI_STATUS_LITE_INPUT_PARAM_INVALID<br>} | MindSpore status codes.|
| [OH_AI_ModelType](#oh_ai_modeltype) { OH_AI_MODELTYPE_MINDIR = 0, OH_AI_MODELTYPE_INVALID = 0xFFFFFFFF } | Defines model file types.|
| [OH_AI_DeviceType](#oh_ai_devicetype) {<br>OH_AI_DEVICETYPE_CPU = 0, OH_AI_DEVICETYPE_GPU, OH_AI_DEVICETYPE_KIRIN_NPU, OH_AI_DEVICETYPE_NNRT = 60,<br>OH_AI_DEVICETYPE_INVALID = 100<br>} | Defines the supported device types.|
| [OH_AI_NNRTDeviceType](#oh_ai_nnrtdevicetype) { OH_AI_NNRTDEVICE_OTHERS = 0, OH_AI_NNRTDEVICE_CPU = 1, OH_AI_NNRTDEVICE_GPU = 2, OH_AI_NNRTDEVICE_ACCELERATOR = 3 } | Defines NNRt device types.|
| [OH_AI_PerformanceMode](#oh_ai_performancemode) {<br>OH_AI_PERFORMANCE_NONE = 0, OH_AI_PERFORMANCE_LOW = 1, OH_AI_PERFORMANCE_MEDIUM = 2, OH_AI_PERFORMANCE_HIGH = 3,<br>OH_AI_PERFORMANCE_EXTREME = 4<br>} | Defines performance modes of the NNRt device.|
| [OH_AI_Priority](#oh_ai_priority) { OH_AI_PRIORITY_NONE = 0, OH_AI_PRIORITY_LOW = 1, OH_AI_PRIORITY_MEDIUM = 2, OH_AI_PRIORITY_HIGH = 3 } | Defines NNRt inference task priorities.|
| [OH_AI_OptimizationLevel](#oh_ai_optimizationlevel) {<br>OH_AI_KO0 = 0, OH_AI_KO2 = 2, OH_AI_KO3 = 3, OH_AI_KAUTO = 4,<br>OH_AI_KOPTIMIZATIONTYPE = 0xFFFFFFFF<br>} | Defines training optimization levels.|
| [OH_AI_QuantizationType](#oh_ai_quantizationtype) { OH_AI_NO_QUANT = 0, OH_AI_WEIGHT_QUANT = 1, OH_AI_FULL_QUANT = 2, OH_AI_UNKNOWN_QUANT_TYPE = 0xFFFFFFFF } | Defines quantization types.|


### Functions

| Name| Description|
| -------- | -------- |
| [OH_AI_ContextCreate](#oh_ai_contextcreate) () | Creates a context object. This API must be used together with [OH_AI_ContextDestroy](#oh_ai_contextdestroy).|
| [OH_AI_ContextDestroy](#oh_ai_contextdestroy) ([OH_AI_ContextHandle](#oh_ai_contexthandle) \*context) | Destroys a context object.|
| [OH_AI_ContextSetThreadNum](#oh_ai_contextsetthreadnum) ([OH_AI_ContextHandle](#oh_ai_contexthandle) context, int32_t thread_num) | Sets the number of runtime threads.|
| [OH_AI_ContextGetThreadNum](#oh_ai_contextgetthreadnum) (const [OH_AI_ContextHandle](#oh_ai_contexthandle) context) | Obtains the number of threads.|
| [OH_AI_ContextSetThreadAffinityMode](#oh_ai_contextsetthreadaffinitymode) ([OH_AI_ContextHandle](#oh_ai_contexthandle) context, int mode) | Sets the affinity mode for binding runtime threads to CPU cores, which are classified into large, medium, and small cores based on the CPU frequency. You can bind only large or medium cores, not small cores.|
| [OH_AI_ContextGetThreadAffinityMode](#oh_ai_contextgetthreadaffinitymode) (const [OH_AI_ContextHandle](#oh_ai_contexthandle) context) | Obtains the affinity mode for binding runtime threads to CPU cores.|
| [OH_AI_ContextSetThreadAffinityCoreList](#oh_ai_contextsetthreadaffinitycorelist) ([OH_AI_ContextHandle](#oh_ai_contexthandle) context, const int32_t \*core_list, size_t core_num) | Sets the list of CPU cores bound to a runtime thread.|
| [OH_AI_ContextGetThreadAffinityCoreList](#oh_ai_contextgetthreadaffinitycorelist) (const [OH_AI_ContextHandle](#oh_ai_contexthandle) context, size_t \*core_num) | Obtains the list of bound CPU cores.|
| [OH_AI_ContextSetEnableParallel](#oh_ai_contextsetenableparallel) ([OH_AI_ContextHandle](#oh_ai_contexthandle) context, bool is_parallel) | Sets whether to enable parallelism between operators. The setting currently has no effect because this feature is not yet available.|
| [OH_AI_ContextGetEnableParallel](#oh_ai_contextgetenableparallel) (const [OH_AI_ContextHandle](#oh_ai_contexthandle) context) | Checks whether parallelism between operators is supported.|
| [OH_AI_ContextAddDeviceInfo](#oh_ai_contextadddeviceinfo) ([OH_AI_ContextHandle](#oh_ai_contexthandle) context, [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Attaches the custom device information to the inference context.|
| [OH_AI_DeviceInfoCreate](#oh_ai_deviceinfocreate) ([OH_AI_DeviceType](#oh_ai_devicetype) device_type) | Creates a device information object.|
| [OH_AI_DeviceInfoDestroy](#oh_ai_deviceinfodestroy) ([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) \*device_info) | Destroys a device information object. Note: After the device information instance is added to the context, the caller does not need to destroy it manually.|
| [OH_AI_DeviceInfoSetProvider](#oh_ai_deviceinfosetprovider) ([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info, const char \*provider) | Sets the name of the provider.|
| [OH_AI_DeviceInfoGetProvider](#oh_ai_deviceinfogetprovider) (const [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Obtains the provider name.|
| [OH_AI_DeviceInfoSetProviderDevice](#oh_ai_deviceinfosetproviderdevice) ([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info, const char \*device) | Sets the name of a provider device.|
| [OH_AI_DeviceInfoGetProviderDevice](#oh_ai_deviceinfogetproviderdevice) (const [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Obtains the name of a provider device.|
| [OH_AI_DeviceInfoGetDeviceType](#oh_ai_deviceinfogetdevicetype) (const [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Obtains the device type.|
| [OH_AI_DeviceInfoSetEnableFP16](#oh_ai_deviceinfosetenablefp16) ([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info, bool is_fp16) | Sets whether to enable float16 inference. This function is available only for CPU and GPU devices.|
| [OH_AI_DeviceInfoGetEnableFP16](#oh_ai_deviceinfogetenablefp16) (const [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Checks whether float16 inference is enabled. This function is available only for CPU and GPU devices.|
| [OH_AI_DeviceInfoSetFrequency](#oh_ai_deviceinfosetfrequency) ([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info, int frequency) | Sets the NPU frequency type. This function is available only for NPU devices.|
| [OH_AI_DeviceInfoGetFrequency](#oh_ai_deviceinfogetfrequency) (const [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Obtains the NPU frequency type. This function is available only for NPU devices.|
| [OH_AI_GetAllNNRTDeviceDescs](#oh_ai_getallnnrtdevicedescs) (size_t \*num) | Obtains the descriptions of all NNRt devices in the system.|
| [OH_AI_GetElementOfNNRTDeviceDescs](#oh_ai_getelementofnnrtdevicedescs) ([NNRTDeviceDesc](#nnrtdevicedesc) \*descs, size_t index) | Obtains the pointer to an element in the NNRt device description array.|
| [OH_AI_DestroyAllNNRTDeviceDescs](#oh_ai_destroyallnnrtdevicedescs) ([NNRTDeviceDesc](#nnrtdevicedesc) \*\*desc) | Destroys the NNRt device description array obtained by [OH_AI_GetAllNNRTDeviceDescs](#oh_ai_getallnnrtdevicedescs).|
| [OH_AI_GetDeviceIdFromNNRTDeviceDesc](#oh_ai_getdeviceidfromnnrtdevicedesc) (const [NNRTDeviceDesc](#nnrtdevicedesc) \*desc) | Obtains the NNRt device ID from the specified NNRt device description. Note that this ID is valid only for NNRt devices.|
| [OH_AI_GetNameFromNNRTDeviceDesc](#oh_ai_getnamefromnnrtdevicedesc) (const [NNRTDeviceDesc](#nnrtdevicedesc) \*desc) | Obtains the NNRt device name from the specified NNRt device description.|
| [OH_AI_GetTypeFromNNRtDeviceDesc](#oh_ai_gettypefromnnrtdevicedesc) (const [NNRTDeviceDesc](#nnrtdevicedesc) \*desc) | Obtains the NNRt device type from the specified NNRt device description.|
| [OH_AI_CreateNNRTDeviceInfoByName](#oh_ai_creatennrtdeviceinfobyname) (const char \*name) | Searches for the NNRt device with the specified name and creates the NNRt device information based on the information about the first found NNRt device.|
| [OH_AI_CreateNNRTDeviceInfoByType](#oh_ai_creatennrtdeviceinfobytype) ([OH_AI_NNRTDeviceType](#oh_ai_nnrtdevicetype) type) | Searches for the NNRt device with the specified type and creates the NNRt device information based on the information about the first found NNRt device.|
| [OH_AI_DeviceInfoSetDeviceId](#oh_ai_deviceinfosetdeviceid) ([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info, size_t device_id) | Sets the NNRt device ID. This function is available only for NNRt devices.|
| [OH_AI_DeviceInfoGetDeviceId](#oh_ai_deviceinfogetdeviceid) (const [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Obtains the NNRt device ID. This function is available only for NNRt devices.|
| [OH_AI_DeviceInfoSetPerformanceMode](#oh_ai_deviceinfosetperformancemode) ([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info, [OH_AI_PerformanceMode](#oh_ai_performancemode) mode) | Sets the NNRt performance mode. This function is available only for NNRt devices.|
| [OH_AI_DeviceInfoGetPerformanceMode](#oh_ai_deviceinfogetperformancemode) (const [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Obtains the NNRt performance mode. This function is available only for NNRt devices.|
| [OH_AI_DeviceInfoSetPriority](#oh_ai_deviceinfosetpriority) ([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info, [OH_AI_Priority](#oh_ai_priority) priority) | Sets the priority of an NNRt task. This function is available only for NNRt devices.|
| [OH_AI_DeviceInfoGetPriority](#oh_ai_deviceinfogetpriority) (const [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Obtains the priority of an NNRt task. This function is available only for NNRt devices.|
| [OH_AI_DeviceInfoAddExtension](#oh_ai_deviceinfoaddextension) ([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info, const char \*name, const char \*value, size_t value_size) | Adds extended configuration in the form of key/value pairs to the device information. This function is available only for NNRt devices.|
| [OH_AI_ModelCreate](#oh_ai_modelcreate) () | Creates a model object.|
| [OH_AI_ModelDestroy](#oh_ai_modeldestroy) ([OH_AI_ModelHandle](#oh_ai_modelhandle) \*model) | Destroys a model object.|
| [OH_AI_ModelBuild](#oh_ai_modelbuild) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model, const void \*model_data, size_t data_size, [OH_AI_ModelType](#oh_ai_modeltype) model_type, const [OH_AI_ContextHandle](#oh_ai_contexthandle) model_context) | Loads and builds a MindSpore model from the memory buffer.|
| [OH_AI_ModelBuildFromFile](#oh_ai_modelbuildfromfile) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model, const char \*model_path, [OH_AI_ModelType](#oh_ai_modeltype) model_type, const [OH_AI_ContextHandle](#oh_ai_contexthandle) model_context) | Loads and builds a MindSpore model from a model file.|
| [OH_AI_ModelResize](#oh_ai_modelresize) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model, const [OH_AI_TensorHandleArray](_o_h___a_i___tensor_handle_array.md) inputs, [OH_AI_ShapeInfo](_o_h___a_i___shape_info.md) \*shape_infos, size_t shape_info_num) | Adjusts the input tensor shapes of a built model.|
| [OH_AI_ModelPredict](#oh_ai_modelpredict) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model, const [OH_AI_TensorHandleArray](_o_h___a_i___tensor_handle_array.md) inputs, [OH_AI_TensorHandleArray](_o_h___a_i___tensor_handle_array.md) \*outputs, const [OH_AI_KernelCallBack](#oh_ai_kernelcallback) before, const [OH_AI_KernelCallBack](#oh_ai_kernelcallback) after) | Performs model inference.|
| [OH_AI_ModelGetInputs](#oh_ai_modelgetinputs) (const [OH_AI_ModelHandle](#oh_ai_modelhandle) model) | Obtains the input tensor array structure of a model.|
| [OH_AI_ModelGetOutputs](#oh_ai_modelgetoutputs) (const [OH_AI_ModelHandle](#oh_ai_modelhandle) model) | Obtains the output tensor array structure of a model.|
| [OH_AI_ModelGetInputByTensorName](#oh_ai_modelgetinputbytensorname) (const [OH_AI_ModelHandle](#oh_ai_modelhandle) model, const char \*tensor_name) | Obtains the input tensor of a model by tensor name.|
| [OH_AI_ModelGetOutputByTensorName](#oh_ai_modelgetoutputbytensorname) (const [OH_AI_ModelHandle](#oh_ai_modelhandle) model, const char \*tensor_name) | Obtains the output tensor of a model by tensor name.|
| [OH_AI_TrainCfgCreate](#oh_ai_traincfgcreate) () | Creates the pointer to the training configuration object. This API is used only for on-device training.|
| [OH_AI_TrainCfgDestroy](#oh_ai_traincfgdestroy) ([OH_AI_TrainCfgHandle](#oh_ai_traincfghandle) \*train_cfg) | Destroys the pointer to the training configuration object. This API is used only for on-device training.|
| [OH_AI_TrainCfgGetLossName](#oh_ai_traincfggetlossname) ([OH_AI_TrainCfgHandle](#oh_ai_traincfghandle) train_cfg, size_t \*num) | Obtains the list of loss functions, which are used only for on-device training.|
| [OH_AI_TrainCfgSetLossName](#oh_ai_traincfgsetlossname) ([OH_AI_TrainCfgHandle](#oh_ai_traincfghandle) train_cfg, const char \*\*loss_name, size_t num) | Sets the list of loss functions, which are used only for on-device training.|
| [OH_AI_TrainCfgGetOptimizationLevel](#oh_ai_traincfggetoptimizationlevel) ([OH_AI_TrainCfgHandle](#oh_ai_traincfghandle) train_cfg) | Obtains the optimization level of the training configuration object. This API is used only for on-device training.|
| [OH_AI_TrainCfgSetOptimizationLevel](#oh_ai_traincfgsetoptimizationlevel) ([OH_AI_TrainCfgHandle](#oh_ai_traincfghandle) train_cfg, [OH_AI_OptimizationLevel](#oh_ai_optimizationlevel) level) | Sets the optimization level of the training configuration object. This API is used only for on-device training.|
| [OH_AI_TrainModelBuild](#oh_ai_trainmodelbuild) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model, const void \*model_data, size_t data_size, [OH_AI_ModelType](#oh_ai_modeltype) model_type, const [OH_AI_ContextHandle](#oh_ai_contexthandle) model_context, const [OH_AI_TrainCfgHandle](#oh_ai_traincfghandle) train_cfg) | Loads a training model from the memory buffer and compiles the model to a state ready for running on the device. This API is used only for on-device training.|
| [OH_AI_TrainModelBuildFromFile](#oh_ai_trainmodelbuildfromfile) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model, const char \*model_path, [OH_AI_ModelType](#oh_ai_modeltype) model_type, const [OH_AI_ContextHandle](#oh_ai_contexthandle) model_context, const [OH_AI_TrainCfgHandle](#oh_ai_traincfghandle) train_cfg) | Loads the training model from the specified path and compiles the model to a state ready for running on the device. This API is used only for on-device training.|
| [OH_AI_RunStep](#oh_ai_runstep) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model, const [OH_AI_KernelCallBack](#oh_ai_kernelcallback) before, const [OH_AI_KernelCallBack](#oh_ai_kernelcallback) after) | Performs a single training step of the model. This API is used only for on-device training.|
| [OH_AI_ModelSetLearningRate](#oh_ai_modelsetlearningrate) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model, float learning_rate) | Sets the learning rate for model training. This API is used only for on-device training.|
| [OH_AI_ModelGetLearningRate](#oh_ai_modelgetlearningrate) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model) | Obtains the learning rate for model training. This API is used only for on-device training.|
| [OH_AI_ModelGetWeights](#oh_ai_modelgetweights) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model) | Obtains all weight tensors of a model. This API is used only for on-device training.|
| [OH_AI_ModelUpdateWeights](#oh_ai_modelupdateweights) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model, const [OH_AI_TensorHandleArray](_o_h___a_i___tensor_handle_array.md) new_weights) | Updates the weight tensors of a model. This API is used only for on-device training.|
| [OH_AI_ModelGetTrainMode](#oh_ai_modelgettrainmode) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model) | Obtains the training mode.|
| [OH_AI_ModelSetTrainMode](#oh_ai_modelsettrainmode) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model, bool train) | Sets the training mode. This API is used only for on-device training.|
| [OH_AI_ModelSetupVirtualBatch](#oh_ai_modelsetupvirtualbatch) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model, int virtual_batch_multiplier, float lr, float momentum) | Sets the virtual batch for training. This API is used only for on-device training.|
| [OH_AI_ExportModel](#oh_ai_exportmodel) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model, [OH_AI_ModelType](#oh_ai_modeltype) model_type, const char \*model_file, [OH_AI_QuantizationType](#oh_ai_quantizationtype) quantization_type, bool export_inference_only, char \*\*output_tensor_name, size_t num) | Exports a training model. This API is used only for on-device training.|
| [OH_AI_ExportModelBuffer](#oh_ai_exportmodelbuffer) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model, [OH_AI_ModelType](#oh_ai_modeltype) model_type, char \*\*model_data, size_t \*data_size, [OH_AI_QuantizationType](#oh_ai_quantizationtype) quantization_type, bool export_inference_only, char \*\*output_tensor_name, size_t num) | Exports the memory cache of the training model. This API is used only for on-device training.|
| [OH_AI_ExportWeightsCollaborateWithMicro](#oh_ai_exportweightscollaboratewithmicro) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model, [OH_AI_ModelType](#oh_ai_modeltype) model_type, const char \*weight_file, bool is_inference, bool enable_fp16, char \*\*changeable_weights_name, size_t num) | Exports the weight file of the training model for micro inference. This API is used only for on-device training.|
| [OH_AI_TensorCreate](#oh_ai_tensorcreate) (const char \*name, [OH_AI_DataType](#oh_ai_datatype) type, const int64_t \*shape, size_t shape_num, const void \*data, size_t data_len) | Creates a tensor object.|
| [OH_AI_TensorDestroy](#oh_ai_tensordestroy) ([OH_AI_TensorHandle](#oh_ai_tensorhandle) \*tensor) | Destroys a tensor object.|
| [OH_AI_TensorClone](#oh_ai_tensorclone) ([OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor) | Clones a tensor.|
| [OH_AI_TensorSetName](#oh_ai_tensorsetname) ([OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor, const char \*name) | Sets the tensor name.|
| [OH_AI_TensorGetName](#oh_ai_tensorgetname) (const [OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor) | Obtains the tensor name.|
| [OH_AI_TensorSetDataType](#oh_ai_tensorsetdatatype) ([OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor, [OH_AI_DataType](#oh_ai_datatype) type) | Sets the data type of a tensor.|
| [OH_AI_TensorGetDataType](#oh_ai_tensorgetdatatype) (const [OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor) | Obtains the tensor type.|
| [OH_AI_TensorSetShape](#oh_ai_tensorsetshape) ([OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor, const int64_t \*shape, size_t shape_num) | Sets the tensor shape.|
| [OH_AI_TensorGetShape](#oh_ai_tensorgetshape) (const [OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor, size_t \*shape_num) | Obtains the tensor shape.|
| [OH_AI_TensorSetFormat](#oh_ai_tensorsetformat) ([OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor, [OH_AI_Format](#oh_ai_format) format) | Sets the tensor data format.|
| [OH_AI_TensorGetFormat](#oh_ai_tensorgetformat) (const [OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor) | Obtains the tensor data format.|
| [OH_AI_TensorSetData](#oh_ai_tensorsetdata) ([OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor, void \*data) | Sets the tensor data.|
| [OH_AI_TensorGetData](#oh_ai_tensorgetdata) (const [OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor) | Obtains the pointer to tensor data.|
| [OH_AI_TensorGetMutableData](#oh_ai_tensorgetmutabledata) (const [OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor) | Obtains the pointer to variable tensor data. If the data is empty, memory will be allocated.|
| [OH_AI_TensorGetElementNum](#oh_ai_tensorgetelementnum) (const [OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor) | Obtains the number of tensor elements.|
| [OH_AI_TensorGetDataSize](#oh_ai_tensorgetdatasize) (const [OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor) | Obtains the number of bytes of the tensor data.|
| [OH_AI_TensorSetUserData](#oh_ai_tensorsetuserdata) ([OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor, void \*data, size_t data_size) | Sets user data as the tensor data. This function allows you to reuse user data as the model input, which avoids one extra data copy.<br>**NOTE**: The user data is external to the tensor and is not automatically released when the tensor is destroyed. The caller must release it separately and ensure that it remains valid for as long as the tensor is in use.|
| [OH_AI_TensorGetAllocator](#oh_ai_tensorgetallocator) ([OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor) | Obtains a memory allocator. The allocator is responsible for allocating memory for tensors.|
| [OH_AI_TensorSetAllocator](#oh_ai_tensorsetallocator) ([OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor, [OH_AI_AllocatorHandle](#oh_ai_allocatorhandle) allocator) | Sets the memory allocator. The allocator is responsible for allocating memory for tensors.|

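A typical inference setup uses the context and model functions above in a fixed order: create a context, attach device information (after which the context owns it), create and build a model, and finally destroy the model and context. The sketch below illustrates only that call order; the stub implementations are hypothetical stand-ins so it compiles without libmindspore_lite_ndk.so. Real code would instead include `<mindspore/context.h>` and `<mindspore/model.h>`, link the library, and call **OH_AI_ModelPredict** between build and teardown.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-ins for the NDK types and functions, so this
 * call-order sketch is self-contained; real declarations come from
 * <mindspore/context.h> and <mindspore/model.h>. */
typedef void *OH_AI_ContextHandle;
typedef void *OH_AI_DeviceInfoHandle;
typedef void *OH_AI_ModelHandle;
typedef enum { OH_AI_DEVICETYPE_CPU = 0 } OH_AI_DeviceType;
typedef enum { OH_AI_MODELTYPE_MINDIR = 0 } OH_AI_ModelType;
typedef enum { OH_AI_STATUS_SUCCESS = 0, OH_AI_STATUS_LITE_ERROR = -1 } OH_AI_Status;

static OH_AI_ContextHandle OH_AI_ContextCreate(void) { return malloc(1); }
static void OH_AI_ContextDestroy(OH_AI_ContextHandle *c) { free(*c); *c = NULL; }
static OH_AI_DeviceInfoHandle OH_AI_DeviceInfoCreate(OH_AI_DeviceType t) { (void)t; return malloc(1); }
static void OH_AI_ContextAddDeviceInfo(OH_AI_ContextHandle c, OH_AI_DeviceInfoHandle d) {
    (void)c; free(d); /* the context now owns the device info (see the OH_AI_DeviceInfoDestroy note) */
}
static OH_AI_ModelHandle OH_AI_ModelCreate(void) { return malloc(1); }
static void OH_AI_ModelDestroy(OH_AI_ModelHandle *m) { free(*m); *m = NULL; }
static OH_AI_Status OH_AI_ModelBuildFromFile(OH_AI_ModelHandle m, const char *path,
                                             OH_AI_ModelType type, OH_AI_ContextHandle c) {
    return (m && path && c && type == OH_AI_MODELTYPE_MINDIR) ? OH_AI_STATUS_SUCCESS
                                                              : OH_AI_STATUS_LITE_ERROR;
}

/* Typical call order: context -> device info -> model -> build -> teardown. */
int build_model_sketch(const char *model_path) {
    OH_AI_ContextHandle context = OH_AI_ContextCreate();
    if (context == NULL) return -1;

    OH_AI_DeviceInfoHandle cpu_info = OH_AI_DeviceInfoCreate(OH_AI_DEVICETYPE_CPU);
    OH_AI_ContextAddDeviceInfo(context, cpu_info); /* do not destroy cpu_info afterwards */

    OH_AI_ModelHandle model = OH_AI_ModelCreate();
    OH_AI_Status status = OH_AI_ModelBuildFromFile(model, model_path,
                                                   OH_AI_MODELTYPE_MINDIR, context);

    /* In real code, run OH_AI_ModelPredict here before teardown. */
    OH_AI_ModelDestroy(&model);
    OH_AI_ContextDestroy(&context);
    return status == OH_AI_STATUS_SUCCESS ? 0 : -1;
}
```

The ownership rule is worth noting: once **OH_AI_ContextAddDeviceInfo** has been called, the device information belongs to the context and must not be destroyed by the caller.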

## Macro Description


### OH_AI_MAX_SHAPE_NUM

```
#define OH_AI_MAX_SHAPE_NUM   32
```

**Description**

Defines the maximum number of tensor dimensions.

**Since**: 9

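**OH_AI_ShapeInfo**, the argument type of **OH_AI_ModelResize**, carries its dimensions in a fixed-size array bounded by this macro. A minimal sketch of filling one; the local struct definition is a stand-in reproduced from the NDK's `<mindspore/model.h>` as an assumption for illustration.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define OH_AI_MAX_SHAPE_NUM 32

/* Local stand-in for the struct declared in <mindspore/model.h>;
 * field names follow the NDK headers but are an assumption here. */
typedef struct OH_AI_ShapeInfo {
  size_t shape_num;
  int64_t shape[OH_AI_MAX_SHAPE_NUM];
} OH_AI_ShapeInfo;

/* Fill a shape info, e.g. {1, 224, 224, 3} for an NHWC image batch. */
OH_AI_ShapeInfo make_shape(const int64_t *dims, size_t ndims) {
  OH_AI_ShapeInfo info = {0};
  if (ndims > OH_AI_MAX_SHAPE_NUM) ndims = OH_AI_MAX_SHAPE_NUM; /* clamp to the macro */
  info.shape_num = ndims;
  for (size_t i = 0; i < ndims; ++i) info.shape[i] = dims[i];
  return info;
}
```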
## Type Description


### NNRTDeviceDesc

```
typedef struct NNRTDeviceDesc NNRTDeviceDesc
```

**Description**

Defines NNRt device information, including the device ID and device name.

**Since**: 10

### OH_AI_AllocatorHandle

```
typedef void *OH_AI_AllocatorHandle
```

**Description**

Handle of the memory allocator.

**Since**: 12

### OH_AI_CallBackParam

```
typedef struct OH_AI_CallBackParam OH_AI_CallBackParam
```

**Description**

Defines the operator information passed in a callback.

**Since**: 9


### OH_AI_ContextHandle

```
typedef void* OH_AI_ContextHandle
```

**Description**

Defines the pointer to the MindSpore context.

**Since**: 9


### OH_AI_DataType

```
typedef enum OH_AI_DataType OH_AI_DataType
```

**Description**

Declares data types supported by MSTensor.

**Since**: 9


### OH_AI_DeviceInfoHandle

```
typedef void* OH_AI_DeviceInfoHandle
```

**Description**

Defines the pointer to the MindSpore device information.

**Since**: 9


### OH_AI_DeviceType

```
typedef enum OH_AI_DeviceType OH_AI_DeviceType
```

**Description**

Defines the supported device types.

**Since**: 9


### OH_AI_Format

```
typedef enum OH_AI_Format OH_AI_Format
```

**Description**

Declares data formats supported by MSTensor.

**Since**: 9

### OH_AI_KernelCallBack

```
typedef bool(* OH_AI_KernelCallBack) (const OH_AI_TensorHandleArray inputs, const OH_AI_TensorHandleArray outputs, const OH_AI_CallBackParam kernel_Info)
```

**Description**

Defines the pointer to a callback.

This pointer is used to set the two callback functions in [OH_AI_ModelPredict](#oh_ai_modelpredict). Each callback function must contain three parameters, where **inputs** and **outputs** indicate the input and output tensors of the operator, and **kernel_Info** indicates information about the current operator. You can use the callback functions to monitor the operator execution status, for example, the execution time and correctness of each operator.

**Since**: 9

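To illustrate the shape of such a callback, the sketch below reproduces the typedef locally together with minimal stand-ins for the parameter structs (the `node_name`/`node_type` fields and the array layout are assumptions based on the NDK headers) and defines a logging callback of the kind you would pass as the **before** argument of **OH_AI_ModelPredict**.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Local stand-ins for types from <mindspore/tensor.h> and
 * <mindspore/model.h>; field names are assumptions for illustration. */
typedef void *OH_AI_TensorHandle;
typedef struct OH_AI_TensorHandleArray {
  size_t handle_num;
  OH_AI_TensorHandle *handle_list;
} OH_AI_TensorHandleArray;
typedef struct OH_AI_CallBackParam {
  char *node_name; /* operator name */
  char *node_type; /* operator type */
} OH_AI_CallBackParam;

typedef bool (*OH_AI_KernelCallBack)(const OH_AI_TensorHandleArray inputs,
                                     const OH_AI_TensorHandleArray outputs,
                                     const OH_AI_CallBackParam kernel_Info);

/* A "before" callback that logs each operator; returning true lets
 * inference continue. */
static bool log_kernel(const OH_AI_TensorHandleArray inputs,
                       const OH_AI_TensorHandleArray outputs,
                       const OH_AI_CallBackParam kernel_Info) {
  printf("op %s (%s): %zu inputs, %zu outputs\n",
         kernel_Info.node_name, kernel_Info.node_type,
         inputs.handle_num, outputs.handle_num);
  return true;
}

/* Invoke the callback directly with dummy values, the way the runtime
 * would call it around each operator. */
bool invoke_demo(void) {
  OH_AI_TensorHandleArray ins = {0, NULL}, outs = {0, NULL};
  OH_AI_CallBackParam info = {"Conv2D_1", "Conv2D"};
  OH_AI_KernelCallBack before = log_kernel;
  return before(ins, outs, info);
}
```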

### OH_AI_ModelHandle

```
typedef void* OH_AI_ModelHandle
```

**Description**

Defines the pointer to a model object.

**Since**: 9


### OH_AI_ModelType

```
typedef enum OH_AI_ModelType OH_AI_ModelType
```

**Description**

Defines model file types.

**Since**: 9


### OH_AI_NNRTDeviceType

```
typedef enum OH_AI_NNRTDeviceType OH_AI_NNRTDeviceType
```

**Description**

Defines NNRt device types.

**Since**: 10


### OH_AI_PerformanceMode

```
typedef enum OH_AI_PerformanceMode OH_AI_PerformanceMode
```

**Description**

Defines performance modes of the NNRt device.

**Since**: 10


### OH_AI_Priority

```
typedef enum OH_AI_Priority OH_AI_Priority
```

**Description**

Defines NNRt inference task priorities.

**Since**: 10

378### OH_AI_Status
379
380```
381typedef enum OH_AI_Status OH_AI_Status
382```
383
384**Description**
385
386Defines MindSpore status codes.
387
388**Since**: 9
389
390
391### OH_AI_TensorHandle
392
393```
394typedef void* OH_AI_TensorHandle
395```
396
397**Description**
398
399Defines the handle of a tensor object.
400
401**Since**: 9
402
403
404### OH_AI_TensorHandleArray
405
406```
407typedef struct OH_AI_TensorHandleArray OH_AI_TensorHandleArray
408```
409
410**Description**
411
412Defines the tensor array structure, which is used to store the tensor array pointer and tensor array length.
413
414**Since**: 9
415
416
417### OH_AI_TrainCfgHandle
418
419```
420typedef void* OH_AI_TrainCfgHandle
421```
422
423**Description**
424
425Defines the pointer to a training configuration object.
426
427**Since**: 11
428
429
## Enum Description


### OH_AI_CompCode

```
enum OH_AI_CompCode
```

**Description**

Defines MindSpore component codes.

**Since**: 9

| Value| Description|
| -------- | -------- |
| OH_AI_COMPCODE_CORE | MindSpore Core code.|
| OH_AI_COMPCODE_MD | MindSpore MindData code.|
| OH_AI_COMPCODE_ME | MindSpore MindExpression code.|
| OH_AI_COMPCODE_MC | MindSpore code.|
| OH_AI_COMPCODE_LITE | MindSpore Lite code.|


### OH_AI_DataType

```
enum OH_AI_DataType
```

**Description**

Declares data types supported by MSTensor.

**Since**: 9

| Value| Description|
| -------- | -------- |
| OH_AI_DATATYPE_UNKNOWN | Unknown data type.|
| OH_AI_DATATYPE_OBJECTTYPE_STRING | String data.|
| OH_AI_DATATYPE_OBJECTTYPE_LIST | List data.|
| OH_AI_DATATYPE_OBJECTTYPE_TUPLE | Tuple data.|
| OH_AI_DATATYPE_OBJECTTYPE_TENSOR | TensorList data.|
| OH_AI_DATATYPE_NUMBERTYPE_BEGIN | Beginning of the number type.|
| OH_AI_DATATYPE_NUMBERTYPE_BOOL | Bool data.|
| OH_AI_DATATYPE_NUMBERTYPE_INT8 | Int8 data.|
| OH_AI_DATATYPE_NUMBERTYPE_INT16 | Int16 data.|
| OH_AI_DATATYPE_NUMBERTYPE_INT32 | Int32 data.|
| OH_AI_DATATYPE_NUMBERTYPE_INT64 | Int64 data.|
| OH_AI_DATATYPE_NUMBERTYPE_UINT8 | UInt8 data.|
| OH_AI_DATATYPE_NUMBERTYPE_UINT16 | UInt16 data.|
| OH_AI_DATATYPE_NUMBERTYPE_UINT32 | UInt32 data.|
| OH_AI_DATATYPE_NUMBERTYPE_UINT64 | UInt64 data.|
| OH_AI_DATATYPE_NUMBERTYPE_FLOAT16 | Float16 data.|
| OH_AI_DATATYPE_NUMBERTYPE_FLOAT32 | Float32 data.|
| OH_AI_DATATYPE_NUMBERTYPE_FLOAT64 | Float64 data.|
| OH_AI_DATATYPE_NUMBERTYPE_END | End of the number type.|
| OH_AI_DataTypeInvalid | Invalid data type.|


### OH_AI_DeviceType

```
enum OH_AI_DeviceType
```

**Description**

Defines the supported device types.

**Since**: 9

| Value| Description|
| -------- | -------- |
| OH_AI_DEVICETYPE_CPU | CPU.|
| OH_AI_DEVICETYPE_GPU | GPU.<br>This configuration is open for upstream open source projects and is not supported by OpenHarmony.|
| OH_AI_DEVICETYPE_KIRIN_NPU | Kirin NPU.<br>This configuration is open for upstream open source projects and is not supported by OpenHarmony.<br>To use KIRIN_NPU, set **OH_AI_DEVICETYPE_NNRT**.|
| OH_AI_DEVICETYPE_NNRT | NNRt, a cross-chip inference and computing runtime oriented to the AI field.<br>OHOS device range: [60, 80)|
| OH_AI_DEVICETYPE_INVALID | Invalid device type.|


### OH_AI_Format

```
enum OH_AI_Format
```

**Description**

Declares data formats supported by MSTensor.

**Since**: 9

| Value| Description|
| -------- | -------- |
| OH_AI_FORMAT_NCHW | Tensor data is stored in the sequence of batch number N, channel C, height H, and width W.|
| OH_AI_FORMAT_NHWC | Tensor data is stored in the sequence of batch number N, height H, width W, and channel C.|
| OH_AI_FORMAT_NHWC4 | Tensor data is stored in the sequence of batch number N, height H, width W, and channel C. The C axis is 4-byte aligned.|
| OH_AI_FORMAT_HWKC | Tensor data is stored in the sequence of height H, width W, core count K, and channel C.|
| OH_AI_FORMAT_HWCK | Tensor data is stored in the sequence of height H, width W, channel C, and core count K.|
| OH_AI_FORMAT_KCHW | Tensor data is stored in the sequence of core count K, channel C, height H, and width W.|
| OH_AI_FORMAT_CKHW | Tensor data is stored in the sequence of channel C, core count K, height H, and width W.|
| OH_AI_FORMAT_KHWC | Tensor data is stored in the sequence of core count K, height H, width W, and channel C.|
| OH_AI_FORMAT_CHWK | Tensor data is stored in the sequence of channel C, height H, width W, and core count K.|
| OH_AI_FORMAT_HW | Tensor data is stored in the sequence of height H and width W.|
| OH_AI_FORMAT_HW4 | Tensor data is stored in the sequence of height H and width W. The W axis is 4-byte aligned.|
| OH_AI_FORMAT_NC | Tensor data is stored in the sequence of batch number N and channel C.|
| OH_AI_FORMAT_NC4 | Tensor data is stored in the sequence of batch number N and channel C. The C axis is 4-byte aligned.|
| OH_AI_FORMAT_NC4HW4 | Tensor data is stored in the sequence of batch number N, channel C, height H, and width W. The C axis and W axis are 4-byte aligned.|
| OH_AI_FORMAT_NCDHW | Tensor data is stored in the sequence of batch number N, channel C, depth D, height H, and width W.|
| OH_AI_FORMAT_NWC | Tensor data is stored in the sequence of batch number N, width W, and channel C.|
| OH_AI_FORMAT_NCW | Tensor data is stored in the sequence of batch number N, channel C, and width W.|


### OH_AI_ModelType

```
enum OH_AI_ModelType
```

**Description**

Defines model file types.

**Since**: 9

| Value| Description|
| -------- | -------- |
| OH_AI_MODELTYPE_MINDIR | Model type of MindIR. The extension of the model file name is **.ms**.|
| OH_AI_MODELTYPE_INVALID | Invalid model type.|


### OH_AI_NNRTDeviceType

```
enum OH_AI_NNRTDeviceType
```

**Description**

Defines the NNRt device types.

**Since**: 10

| Value| Description|
| -------- | -------- |
| OH_AI_NNRTDEVICE_OTHERS | Others (any device type except the following three types).|
| OH_AI_NNRTDEVICE_CPU | CPU.|
| OH_AI_NNRTDEVICE_GPU | GPU.|
| OH_AI_NNRTDEVICE_ACCELERATOR | Specific acceleration device.|


### OH_AI_OptimizationLevel

```
enum OH_AI_OptimizationLevel
```

**Description**

Defines training optimization levels.

**Since**: 11

| Value| Description|
| -------- | -------- |
| OH_AI_KO0 | No optimization level.|
| OH_AI_KO2 | Converts the precision type of the network to float16 and keeps the precision type of the batch normalization layer and loss function as float32.|
| OH_AI_KO3 | Converts the precision type of the network (including the batch normalization layer) to float16.|
| OH_AI_KAUTO | Selects an optimization level based on the device.|
| OH_AI_KOPTIMIZATIONTYPE | Invalid optimization level.|


### OH_AI_PerformanceMode

```
enum OH_AI_PerformanceMode
```

**Description**

Defines performance modes of the NNRt device.

**Since**: 10

| Value| Description|
| -------- | -------- |
| OH_AI_PERFORMANCE_NONE | No special settings.|
| OH_AI_PERFORMANCE_LOW | Low power consumption.|
| OH_AI_PERFORMANCE_MEDIUM | Power consumption and performance balancing.|
| OH_AI_PERFORMANCE_HIGH | High performance.|
| OH_AI_PERFORMANCE_EXTREME | Ultimate performance.|


### OH_AI_Priority

```
enum OH_AI_Priority
```

**Description**

Defines NNRt inference task priorities.

**Since**: 10

| Value| Description|
| -------- | -------- |
| OH_AI_PRIORITY_NONE | No priority preference.|
| OH_AI_PRIORITY_LOW | Low priority.|
| OH_AI_PRIORITY_MEDIUM | Medium priority.|
| OH_AI_PRIORITY_HIGH | High priority.|


### OH_AI_QuantizationType

```
enum OH_AI_QuantizationType
```

**Description**

Defines quantization types.

**Since**: 11

| Value| Description|
| -------- | -------- |
| OH_AI_NO_QUANT | No quantization.|
| OH_AI_WEIGHT_QUANT | Weight quantization.|
| OH_AI_FULL_QUANT | Full quantization.|
| OH_AI_UNKNOWN_QUANT_TYPE | Invalid quantization type.|


### OH_AI_Status

```
enum OH_AI_Status
```

**Description**

Defines MindSpore status codes.

**Since**: 9

| Value| Description|
| -------- | -------- |
| OH_AI_STATUS_SUCCESS | Success.|
| OH_AI_STATUS_CORE_FAILED | MindSpore Core failure.|
| OH_AI_STATUS_LITE_ERROR | MindSpore Lite error.|
| OH_AI_STATUS_LITE_NULLPTR | MindSpore Lite null pointer.|
| OH_AI_STATUS_LITE_PARAM_INVALID | MindSpore Lite invalid parameters.|
| OH_AI_STATUS_LITE_NO_CHANGE | MindSpore Lite no change.|
| OH_AI_STATUS_LITE_SUCCESS_EXIT | MindSpore Lite exit without errors.|
| OH_AI_STATUS_LITE_MEMORY_FAILED | MindSpore Lite memory allocation failure.|
| OH_AI_STATUS_LITE_NOT_SUPPORT | MindSpore Lite function not supported.|
| OH_AI_STATUS_LITE_THREADPOOL_ERROR | MindSpore Lite thread pool error.|
| OH_AI_STATUS_LITE_UNINITIALIZED_OBJ | MindSpore Lite uninitialized.|
| OH_AI_STATUS_LITE_OUT_OF_TENSOR_RANGE | MindSpore Lite tensor overflow.|
| OH_AI_STATUS_LITE_INPUT_TENSOR_ERROR | MindSpore Lite input tensor error.|
| OH_AI_STATUS_LITE_REENTRANT_ERROR | MindSpore Lite reentry error.|
| OH_AI_STATUS_LITE_GRAPH_FILE_ERROR | MindSpore Lite file error.|
| OH_AI_STATUS_LITE_NOT_FIND_OP | MindSpore Lite operator not found.|
| OH_AI_STATUS_LITE_INVALID_OP_NAME | MindSpore Lite invalid operator name.|
| OH_AI_STATUS_LITE_INVALID_OP_ATTR | MindSpore Lite invalid operator hyperparameters.|
| OH_AI_STATUS_LITE_OP_EXECUTE_FAILURE | MindSpore Lite operator execution failure.|
| OH_AI_STATUS_LITE_FORMAT_ERROR | MindSpore Lite tensor format error.|
| OH_AI_STATUS_LITE_INFER_ERROR | MindSpore Lite shape inference error.|
| OH_AI_STATUS_LITE_INFER_INVALID | MindSpore Lite invalid shape inference.|
| OH_AI_STATUS_LITE_INPUT_PARAM_INVALID | MindSpore Lite invalid input parameters.|


## Function Description


### OH_AI_ContextAddDeviceInfo()

```
OH_AI_API void OH_AI_ContextAddDeviceInfo (OH_AI_ContextHandle context, OH_AI_DeviceInfoHandle device_info )
```

**Description**

Attaches the custom device information to the inference context.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| context | [OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance.|
| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|


### OH_AI_ContextCreate()

```
OH_AI_API OH_AI_ContextHandle OH_AI_ContextCreate ()
```

**Description**

Creates a context object. This API must be used together with [OH_AI_ContextDestroy](#oh_ai_contextdestroy).

**Since**: 9

**Returns**

[OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance.


### OH_AI_ContextDestroy()

```
OH_AI_API void OH_AI_ContextDestroy (OH_AI_ContextHandle * context)
```

**Description**

Destroys a context object.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| context | Level-2 pointer to [OH_AI_ContextHandle](#oh_ai_contexthandle). After the context is destroyed, the pointer is set to null.|

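The create/configure/destroy lifecycle can be sketched as follows. This is a minimal, hedged example (the thread settings are illustrative) and requires the MindSpore Lite NDK to build:

```c
#include <stddef.h>
#include <mindspore/context.h>

// Sketch: create a context, configure it, and destroy it when done.
void ConfigureContext(void) {
    OH_AI_ContextHandle context = OH_AI_ContextCreate();
    if (context == NULL) {
        return;  // creation failed
    }
    OH_AI_ContextSetThreadNum(context, 2);           // illustrative value
    OH_AI_ContextSetThreadAffinityMode(context, 1);  // prefer big cores

    // ... create device information, add it with OH_AI_ContextAddDeviceInfo,
    // and build the model here ...

    OH_AI_ContextDestroy(&context);  // context is set to NULL afterwards
}
```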

### OH_AI_ContextGetEnableParallel()

```
OH_AI_API bool OH_AI_ContextGetEnableParallel (const OH_AI_ContextHandle context)
```

**Description**

Checks whether parallelism between operators is supported.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| context | [OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance.|

**Returns**

Whether parallelism between operators is supported. The value **true** means that parallelism between operators is supported, and the value **false** means the opposite.


### OH_AI_ContextGetThreadAffinityCoreList()

```
OH_AI_API const int32_t* OH_AI_ContextGetThreadAffinityCoreList (const OH_AI_ContextHandle context, size_t * core_num )
```

**Description**

Obtains the list of bound CPU cores.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| context | [OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance.|
| core_num | Number of CPU cores.|

**Returns**

CPU core binding list. This list is managed by [OH_AI_ContextHandle](#oh_ai_contexthandle). The caller does not need to destroy it manually.


### OH_AI_ContextGetThreadAffinityMode()

```
OH_AI_API int OH_AI_ContextGetThreadAffinityMode (const OH_AI_ContextHandle context)
```

**Description**

Obtains the affinity mode for binding runtime threads to CPU cores.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| context | [OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance.|

**Returns**

Affinity mode. **0**: no affinities; **1**: big cores first; **2**: medium cores first.

### OH_AI_ContextGetThreadNum()

```
OH_AI_API int32_t OH_AI_ContextGetThreadNum (const OH_AI_ContextHandle context)
```

**Description**

Obtains the number of threads.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| context | [OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance.|

**Returns**

Number of threads.


### OH_AI_ContextSetEnableParallel()

```
OH_AI_API void OH_AI_ContextSetEnableParallel (OH_AI_ContextHandle context, bool is_parallel )
```

**Description**

Sets whether to enable parallelism between operators. This setting is currently ineffective because the feature of this API is not yet available.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| context | [OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance.|
| is_parallel | Whether parallelism between operators is supported. The value **true** means that parallelism between operators is supported, and the value **false** means the opposite.|


### OH_AI_ContextSetThreadAffinityCoreList()

```
OH_AI_API void OH_AI_ContextSetThreadAffinityCoreList (OH_AI_ContextHandle context, const int32_t * core_list, size_t core_num )
```

**Description**

Sets the list of CPU cores bound to a runtime thread.

For example, if **core_list** is set to **[2,6,8]**, threads run on the 2nd, 6th, and 8th CPU cores. If [OH_AI_ContextSetThreadAffinityMode](#oh_ai_contextsetthreadaffinitymode) and [OH_AI_ContextSetThreadAffinityCoreList](#oh_ai_contextsetthreadaffinitycorelist) are called for the same context object, the **core_list** parameter of [OH_AI_ContextSetThreadAffinityCoreList](#oh_ai_contextsetthreadaffinitycorelist) takes effect, but the **mode** parameter of [OH_AI_ContextSetThreadAffinityMode](#oh_ai_contextsetthreadaffinitymode) does not.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| context | [OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance.|
| core_list | List of bound CPU cores.|
| core_num | Number of cores, which indicates the length of **core_list**.|

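A hedged sketch of binding runtime threads to specific cores (the core IDs are illustrative; this requires the MindSpore Lite NDK to build):

```c
#include <stdint.h>
#include <mindspore/context.h>

// Sketch: bind runtime threads to CPU cores 2, 6, and 8. If both the core
// list and the affinity mode are set on the same context, the core list
// takes precedence.
void BindCores(OH_AI_ContextHandle context) {
    const int32_t core_list[] = {2, 6, 8};
    OH_AI_ContextSetThreadAffinityCoreList(
        context, core_list, sizeof(core_list) / sizeof(core_list[0]));
}
```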

### OH_AI_ContextSetThreadAffinityMode()

```
OH_AI_API void OH_AI_ContextSetThreadAffinityMode (OH_AI_ContextHandle context, int mode )
```

**Description**

Sets the affinity mode for binding runtime threads to CPU cores. CPU cores are classified into big, medium, and little cores based on frequency; only big or medium cores can be bound, not little cores.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| context | [OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance.|
| mode | Affinity mode. **0**: no affinities; **1**: big cores first; **2**: medium cores first.|


### OH_AI_ContextSetThreadNum()

```
OH_AI_API void OH_AI_ContextSetThreadNum (OH_AI_ContextHandle context, int32_t thread_num )
```

**Description**

Sets the number of runtime threads.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| context | [OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance.|
| thread_num | Number of runtime threads.|

### OH_AI_CreateNNRTDeviceInfoByName()

```
OH_AI_API OH_AI_DeviceInfoHandle OH_AI_CreateNNRTDeviceInfoByName (const char * name)
```

**Description**

Searches for the NNRt device with the specified name and creates the NNRt device information based on the information about the first found NNRt device.

**Since**: 10

**Parameters**

| Name| Description|
| -------- | -------- |
| name | Name of the target NNRt device.|

**Returns**

[OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.


### OH_AI_CreateNNRTDeviceInfoByType()

```
OH_AI_API OH_AI_DeviceInfoHandle OH_AI_CreateNNRTDeviceInfoByType (OH_AI_NNRTDeviceType type)
```

**Description**

Searches for the NNRt device with the specified type and creates the NNRt device information based on the information about the first found NNRt device.

**Since**: 10

**Parameters**

| Name| Description|
| -------- | -------- |
| type | NNRt device type, which is specified by [OH_AI_NNRTDeviceType](#oh_ai_nnrtdevicetype).|

**Returns**

[OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.


### OH_AI_DestroyAllNNRTDeviceDescs()

```
OH_AI_API void OH_AI_DestroyAllNNRTDeviceDescs (NNRTDeviceDesc ** desc)
```

**Description**

Destroys the NNRt device description array obtained by [OH_AI_GetAllNNRTDeviceDescs](#oh_ai_getallnnrtdevicedescs).

**Since**: 10

**Parameters**

| Name| Description|
| -------- | -------- |
| desc | Double pointer to the array of the NNRt device descriptions. After the operation is complete, the content pointed to by **desc** is set to **NULL**.|


### OH_AI_DeviceInfoAddExtension()

```
OH_AI_API OH_AI_Status OH_AI_DeviceInfoAddExtension (OH_AI_DeviceInfoHandle device_info, const char * name, const char * value, size_t value_size )
```

**Description**

Adds extended configuration in the form of key/value pairs to the device information. This function is available only for NNRt devices.

Note: Currently, only the following 11 keys are supported: {"CachePath": "YourCachePath"}, {"CacheVersion": "YourCacheVersion"}, {"QuantBuffer": "YourQuantBuffer"}, {"ModelName": "YourModelName"}, {"isProfiling": "YourisProfiling"}, {"opLayout": "YouropLayout"}, {"InputDims": "YourInputDims"}, {"DynamicDims": "YourDynamicDims"}, {"QuantConfigData": "YourQuantConfigData"}, {"BandMode": "YourBandMode"}, {"NPU_FM_SHARED": "YourNPU_FM_SHARED"}. Replace the values as required.

**Since**: 10

**Parameters**

| Name| Description|
| -------- | -------- |
| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|
| name | Key in an extended key/value pair. The value is a C string.|
| value | Start address of the value in an extended key/value pair.|
| value_size | Length of the value in an extended key/value pair.|

**Returns**

Status code enumerated by [OH_AI_Status](#oh_ai_status). The value **OH_AI_STATUS_SUCCESS** indicates that the operation is successful. If the operation fails, an error code is returned.

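A hedged sketch of adding one of the supported keys, **CachePath**, to NNRt device information (the cache path value is a hypothetical placeholder; this requires the MindSpore Lite NDK to build):

```c
#include <string.h>
#include <mindspore/context.h>
#include <mindspore/status.h>

// Sketch: set the "CachePath" extension on NNRt device information.
// The path below is illustrative only.
OH_AI_Status SetCachePath(OH_AI_DeviceInfoHandle nnrt_info) {
    const char *value = "/data/storage/el2/base/cache";  // hypothetical path
    return OH_AI_DeviceInfoAddExtension(nnrt_info, "CachePath",
                                        value, strlen(value));
}
```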

### OH_AI_DeviceInfoCreate()

```
OH_AI_API OH_AI_DeviceInfoHandle OH_AI_DeviceInfoCreate (OH_AI_DeviceType device_type)
```

**Description**

Creates a device information object.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| device_type | Device type, which is specified by [OH_AI_DeviceType](#oh_ai_devicetype).|

**Returns**

[OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.

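A hedged sketch of the typical device-information flow: create CPU device information, enable float16 inference, and attach it to a context. This requires the MindSpore Lite NDK to build:

```c
#include <stdbool.h>
#include <stddef.h>
#include <mindspore/context.h>

// Sketch: once added to the context, the device information is managed by
// the context, so no explicit OH_AI_DeviceInfoDestroy call is needed here.
void AddCpuDevice(OH_AI_ContextHandle context) {
    OH_AI_DeviceInfoHandle cpu_info = OH_AI_DeviceInfoCreate(OH_AI_DEVICETYPE_CPU);
    if (cpu_info == NULL) {
        return;  // creation failed
    }
    OH_AI_DeviceInfoSetEnableFP16(cpu_info, true);
    OH_AI_ContextAddDeviceInfo(context, cpu_info);
}
```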

### OH_AI_DeviceInfoDestroy()

```
OH_AI_API void OH_AI_DeviceInfoDestroy (OH_AI_DeviceInfoHandle * device_info)
```

**Description**

Destroys a device information object. Note: After the device information instance is added to the context, the caller does not need to destroy it manually.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|


### OH_AI_DeviceInfoGetDeviceId()

```
OH_AI_API size_t OH_AI_DeviceInfoGetDeviceId (const OH_AI_DeviceInfoHandle device_info)
```

**Description**

Obtains the NNRt device ID. This function is available only for NNRt devices.

**Since**: 10

**Parameters**

| Name| Description|
| -------- | -------- |
| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|

**Returns**

NNRt device ID.


### OH_AI_DeviceInfoGetDeviceType()

```
OH_AI_API OH_AI_DeviceType OH_AI_DeviceInfoGetDeviceType (const OH_AI_DeviceInfoHandle device_info)
```

**Description**

Obtains the device type.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|

**Returns**

Type of the provider device.


### OH_AI_DeviceInfoGetEnableFP16()

```
OH_AI_API bool OH_AI_DeviceInfoGetEnableFP16 (const OH_AI_DeviceInfoHandle device_info)
```

**Description**

Checks whether float16 inference is enabled. This function is available only for CPU and GPU devices.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|

**Returns**

Whether float16 inference is enabled.

### OH_AI_DeviceInfoGetFrequency()

```
OH_AI_API int OH_AI_DeviceInfoGetFrequency (const OH_AI_DeviceInfoHandle device_info)
```

**Description**

Obtains the NPU frequency type. This function is available only for NPU devices.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|

**Returns**

NPU frequency type. The value ranges from **0** to **4**. **1**: low power consumption; **2**: balanced; **3**: high performance; **4**: ultra-high performance.


### OH_AI_DeviceInfoGetPerformanceMode()

```
OH_AI_API OH_AI_PerformanceMode OH_AI_DeviceInfoGetPerformanceMode (const OH_AI_DeviceInfoHandle device_info)
```

**Description**

Obtains the NNRt performance mode. This function is available only for NNRt devices.

**Since**: 10

**Parameters**

| Name| Description|
| -------- | -------- |
| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|

**Returns**

NNRt performance mode, which is specified by [OH_AI_PerformanceMode](#oh_ai_performancemode).


### OH_AI_DeviceInfoGetPriority()

```
OH_AI_API OH_AI_Priority OH_AI_DeviceInfoGetPriority (const OH_AI_DeviceInfoHandle device_info)
```

**Description**

Obtains the priority of an NNRt task. This function is available only for NNRt devices.

**Since**: 10

**Parameters**

| Name| Description|
| -------- | -------- |
| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|

**Returns**

NNRt task priority, which is specified by [OH_AI_Priority](#oh_ai_priority).


### OH_AI_DeviceInfoGetProvider()

```
OH_AI_API const char* OH_AI_DeviceInfoGetProvider (const OH_AI_DeviceInfoHandle device_info)
```

**Description**

Obtains the provider name.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|

**Returns**

Provider name.


### OH_AI_DeviceInfoGetProviderDevice()

```
OH_AI_API const char* OH_AI_DeviceInfoGetProviderDevice (const OH_AI_DeviceInfoHandle device_info)
```

**Description**

Obtains the name of a provider device.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|

**Returns**

Name of the provider device.


### OH_AI_DeviceInfoSetDeviceId()

```
OH_AI_API void OH_AI_DeviceInfoSetDeviceId (OH_AI_DeviceInfoHandle device_info, size_t device_id )
```

**Description**

Sets the NNRt device ID. This function is available only for NNRt devices.

**Since**: 10

**Parameters**

| Name| Description|
| -------- | -------- |
| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|
| device_id | NNRt device ID.|


### OH_AI_DeviceInfoSetEnableFP16()

```
OH_AI_API void OH_AI_DeviceInfoSetEnableFP16 (OH_AI_DeviceInfoHandle device_info, bool is_fp16 )
```

**Description**

Sets whether to enable float16 inference. This function is available only for CPU and GPU devices.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|
| is_fp16 | Whether to enable the float16 inference mode.|


### OH_AI_DeviceInfoSetFrequency()

```
OH_AI_API void OH_AI_DeviceInfoSetFrequency (OH_AI_DeviceInfoHandle device_info, int frequency )
```

**Description**

Sets the NPU frequency type. This function is available only for NPU devices.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|
| frequency | NPU frequency type. The value ranges from **0** to **4**. The default value is **3**. **1**: low power consumption; **2**: balanced; **3**: high performance; **4**: ultra-high performance.|


### OH_AI_DeviceInfoSetPerformanceMode()

```
OH_AI_API void OH_AI_DeviceInfoSetPerformanceMode (OH_AI_DeviceInfoHandle device_info, OH_AI_PerformanceMode mode )
```

**Description**

Sets the NNRt performance mode. This function is available only for NNRt devices.

**Since**: 10

**Parameters**

| Name| Description|
| -------- | -------- |
| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|
| mode | NNRt performance mode, which is specified by [OH_AI_PerformanceMode](#oh_ai_performancemode).|


### OH_AI_DeviceInfoSetPriority()

```
OH_AI_API void OH_AI_DeviceInfoSetPriority (OH_AI_DeviceInfoHandle device_info, OH_AI_Priority priority )
```

**Description**

Sets the priority of an NNRt task. This function is available only for NNRt devices.

**Since**: 10

**Parameters**

| Name| Description|
| -------- | -------- |
| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|
| priority | NNRt task priority, which is specified by [OH_AI_Priority](#oh_ai_priority).|


### OH_AI_DeviceInfoSetProvider()

```
OH_AI_API void OH_AI_DeviceInfoSetProvider (OH_AI_DeviceInfoHandle device_info, const char * provider )
```

**Description**

Sets the name of the provider.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|
| provider | Provider name.|


### OH_AI_DeviceInfoSetProviderDevice()

```
OH_AI_API void OH_AI_DeviceInfoSetProviderDevice (OH_AI_DeviceInfoHandle device_info, const char * device )
```

**Description**

Sets the name of a provider device.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|
| device | Name of the provider device, for example, CPU.|


### OH_AI_ExportModel()

```
OH_AI_API OH_AI_Status OH_AI_ExportModel (OH_AI_ModelHandle model, OH_AI_ModelType model_type, const char * model_file, OH_AI_QuantizationType quantization_type, bool export_inference_only, char ** output_tensor_name, size_t num )
```

**Description**

Exports a training model. This API is used only for on-device training.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|
| model_type | Model file type, which is specified by [OH_AI_ModelType](#oh_ai_modeltype).|
| model_file | Path of the exported model file.|
| quantization_type | Quantization type.|
| export_inference_only | Whether to export an inference model.|
| output_tensor_name | Output tensor of the exported model. This parameter is left blank by default, which indicates full export.|
| num | Number of output tensors.|

**Returns**

Status code enumerated by [OH_AI_Status](#oh_ai_status). The value **OH_AI_Status::OH_AI_STATUS_SUCCESS** indicates that the operation is successful.

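A hedged sketch of exporting a trained model for inference only, without quantization and with all output tensors kept (the output path is a hypothetical placeholder; this requires the MindSpore Lite NDK to build):

```c
#include <stdbool.h>
#include <stddef.h>
#include <mindspore/model.h>
#include <mindspore/types.h>

// Sketch: export an inference-only MindIR model with no quantization.
// Passing NULL and 0 for the output tensor list requests a full export.
OH_AI_Status ExportForInference(OH_AI_ModelHandle model) {
    return OH_AI_ExportModel(model, OH_AI_MODELTYPE_MINDIR,
                             "/data/local/tmp/trained.ms",  // hypothetical path
                             OH_AI_NO_QUANT,
                             true,   // export an inference-only model
                             NULL, 0);
}
```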
### OH_AI_ExportModelBuffer()

```
OH_AI_API OH_AI_Status OH_AI_ExportModelBuffer (OH_AI_ModelHandle model, OH_AI_ModelType model_type, char ** model_data, size_t * data_size, OH_AI_QuantizationType quantization_type, bool export_inference_only, char ** output_tensor_name, size_t num )
```

**Description**

Exports the memory cache of the training model. This API is used only for on-device training.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|
| model_type | Model file type, which is specified by [OH_AI_ModelType](#oh_ai_modeltype).|
| model_data | Pointer to the buffer that stores the exported model file.|
| data_size | Buffer size.|
| quantization_type | Quantization type.|
| export_inference_only | Whether to export an inference model.|
| output_tensor_name | Output tensor of the exported model. This parameter is left blank by default, which indicates full export.|
| num | Number of output tensors.|

**Returns**

Status code enumerated by [OH_AI_Status](#oh_ai_status). The value **OH_AI_Status::OH_AI_STATUS_SUCCESS** indicates that the operation is successful.


### OH_AI_ExportWeightsCollaborateWithMicro()

```
OH_AI_API OH_AI_Status OH_AI_ExportWeightsCollaborateWithMicro (OH_AI_ModelHandle model, OH_AI_ModelType model_type, const char * weight_file, bool is_inference, bool enable_fp16, char ** changeable_weights_name, size_t num )
```

**Description**

Exports the weight file of the training model for micro inference. This API is used only for on-device training.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|
| model_type | Model file type, which is specified by [OH_AI_ModelType](#oh_ai_modeltype).|
| weight_file | Path of the exported weight file.|
| is_inference | Whether to export inference models. Currently, this parameter can only be set to **true**.|
| enable_fp16 | Whether to save floating-point weights in float16 format.|
| changeable_weights_name | Name of the weight tensor with a variable shape.|
| num | Number of weight tensors with a variable shape.|

**Returns**

Status code enumerated by [OH_AI_Status](#oh_ai_status). The value **OH_AI_Status::OH_AI_STATUS_SUCCESS** indicates that the operation is successful.


### OH_AI_GetAllNNRTDeviceDescs()

```
OH_AI_API NNRTDeviceDesc* OH_AI_GetAllNNRTDeviceDescs (size_t * num)
```

**Description**

Obtains the descriptions of all NNRt devices in the system.

**Since**: 10

**Parameters**

| Name| Description|
| -------- | -------- |
| num | Number of NNRt devices.|

**Returns**

Pointer to the array of the NNRt device descriptions. If the operation fails, **NULL** is returned.


### OH_AI_GetDeviceIdFromNNRTDeviceDesc()

```
OH_AI_API size_t OH_AI_GetDeviceIdFromNNRTDeviceDesc (const NNRTDeviceDesc * desc)
```

**Description**

Obtains the NNRt device ID from the specified NNRt device description. Note that this ID is valid only for NNRt devices.

**Since**: 10

**Parameters**

| Name| Description|
| -------- | -------- |
| desc | Pointer to the NNRt device description.|

**Returns**

NNRt device ID.


### OH_AI_GetElementOfNNRTDeviceDescs()

```
OH_AI_API NNRTDeviceDesc* OH_AI_GetElementOfNNRTDeviceDescs (NNRTDeviceDesc * descs, size_t index )
```

**Description**

Obtains the pointer to an element in the NNRt device description array.

**Since**: 10

**Parameters**

| Name| Description|
| -------- | -------- |
| descs | NNRt device description array.|
| index | Index of an array element.|

**Returns**

Pointer to an element in the NNRt device description array.


### OH_AI_GetNameFromNNRTDeviceDesc()

```
OH_AI_API const char* OH_AI_GetNameFromNNRTDeviceDesc (const NNRTDeviceDesc * desc)
```

**Description**

Obtains the NNRt device name from the specified NNRt device description.

**Since**: 10

**Parameters**

| Name| Description|
| -------- | -------- |
| desc | Pointer to the NNRt device description.|

**Returns**

NNRt device name. The value is a pointer that points to a constant string, which is held by **desc**. The caller does not need to destroy it separately.


### OH_AI_GetTypeFromNNRTDeviceDesc()

```
OH_AI_API OH_AI_NNRTDeviceType OH_AI_GetTypeFromNNRTDeviceDesc (const NNRTDeviceDesc * desc)
```

**Description**

Obtains the NNRt device type from the specified NNRt device description.

**Since**: 10

**Parameters**

| Name| Description|
| -------- | -------- |
| desc | Pointer to the NNRt device description.|

**Returns**

NNRt device type, which is specified by [OH_AI_NNRTDeviceType](#oh_ai_nnrtdevicetype).

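The NNRt description APIs above are typically used together to enumerate devices before configuring a context. A minimal sketch, assuming the header `mindspore/context.h` and the companion **OH_AI_DestroyAllNNRTDeviceDescs** API for releasing the array:

```
#include <mindspore/context.h>
#include <stdio.h>

void list_nnrt_devices(void) {
    size_t num = 0;
    NNRTDeviceDesc *descs = OH_AI_GetAllNNRTDeviceDescs(&num);
    if (descs == NULL) {
        printf("no NNRt devices found\n");
        return;
    }
    for (size_t i = 0; i < num; ++i) {
        NNRTDeviceDesc *desc = OH_AI_GetElementOfNNRTDeviceDescs(descs, i);
        // The name pointer is owned by desc; do not free it separately.
        printf("device %zu: id=%zu, name=%s, type=%d\n", i,
               OH_AI_GetDeviceIdFromNNRTDeviceDesc(desc),
               OH_AI_GetNameFromNNRTDeviceDesc(desc),
               (int)OH_AI_GetTypeFromNNRTDeviceDesc(desc));
    }
    OH_AI_DestroyAllNNRTDeviceDescs(&descs);
}
```
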
### OH_AI_ModelBuild()

```
OH_AI_API OH_AI_Status OH_AI_ModelBuild (OH_AI_ModelHandle model, const void * model_data, size_t data_size, OH_AI_ModelType model_type, const OH_AI_ContextHandle model_context )
```

**Description**

Loads and builds a MindSpore model from the memory buffer.

Note that the same [OH_AI_ContextHandle](#oh_ai_contexthandle) object can be passed to **OH_AI_ModelBuild** or **OH_AI_ModelBuildFromFile** only once. If you call this function multiple times, make sure that you create a separate [OH_AI_ContextHandle](#oh_ai_contexthandle) object for each call.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|
| model_data | Address of the loaded model data in the memory.|
| data_size | Length of the model data.|
| model_type | Model file type, which is specified by [OH_AI_ModelType](#oh_ai_modeltype).|
| model_context | Model runtime context, which is specified by [OH_AI_ContextHandle](#oh_ai_contexthandle).|

**Returns**

Status code enumerated by [OH_AI_Status](#oh_ai_status). The value **OH_AI_Status::OH_AI_STATUS_SUCCESS** indicates that the operation is successful.


### OH_AI_ModelBuildFromFile()

```
OH_AI_API OH_AI_Status OH_AI_ModelBuildFromFile (OH_AI_ModelHandle model, const char * model_path, OH_AI_ModelType model_type, const OH_AI_ContextHandle model_context )
```

**Description**

Loads and builds a MindSpore model from a model file.

Note that the same [OH_AI_ContextHandle](#oh_ai_contexthandle) object can be passed to **OH_AI_ModelBuild** or **OH_AI_ModelBuildFromFile** only once. If you call this function multiple times, make sure that you create a separate [OH_AI_ContextHandle](#oh_ai_contexthandle) object for each call.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|
| model_path | Path of the model file.|
| model_type | Model file type, which is specified by [OH_AI_ModelType](#oh_ai_modeltype).|
| model_context | Model runtime context, which is specified by [OH_AI_ContextHandle](#oh_ai_contexthandle).|

**Returns**

Status code enumerated by [OH_AI_Status](#oh_ai_status). The value **OH_AI_Status::OH_AI_STATUS_SUCCESS** indicates that the operation is successful.

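Taken together with the context APIs from **context.h**, a typical build flow looks like the following. This is a minimal sketch, assuming a CPU device and a MindIR model at a placeholder path:

```
#include <mindspore/context.h>
#include <mindspore/model.h>
#include <stdio.h>

int build_model_example(void) {
    // Create a context and attach a CPU device description to it.
    OH_AI_ContextHandle context = OH_AI_ContextCreate();
    OH_AI_DeviceInfoHandle cpu_info = OH_AI_DeviceInfoCreate(OH_AI_DEVICETYPE_CPU);
    OH_AI_ContextAddDeviceInfo(context, cpu_info);

    // Build the model; a given context may be passed to a build call only once.
    OH_AI_ModelHandle model = OH_AI_ModelCreate();
    OH_AI_Status ret = OH_AI_ModelBuildFromFile(model, "/data/model/mobilenet.ms",
                                                OH_AI_MODELTYPE_MINDIR, context);
    if (ret != OH_AI_STATUS_SUCCESS) {
        printf("build failed: %d\n", ret);
        OH_AI_ModelDestroy(&model);
        return -1;
    }
    // ... run inference with OH_AI_ModelPredict, then release the model.
    OH_AI_ModelDestroy(&model);
    return 0;
}
```
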

### OH_AI_ModelCreate()

```
OH_AI_API OH_AI_ModelHandle OH_AI_ModelCreate ()
```

**Description**

Creates a model object.

**Since**: 9

**Returns**

Pointer to the model object.


### OH_AI_ModelDestroy()

```
OH_AI_API void OH_AI_ModelDestroy (OH_AI_ModelHandle * model)
```

**Description**

Destroys a model object.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|

### OH_AI_ModelGetInputByTensorName()

```
OH_AI_API OH_AI_TensorHandle OH_AI_ModelGetInputByTensorName (const OH_AI_ModelHandle model, const char * tensor_name )
```

**Description**

Obtains the input tensor of a model by tensor name.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|
| tensor_name | Tensor name.|

**Returns**

Pointer to the input tensor indicated by **tensor_name**. If the tensor does not exist in the input, **NULL** will be returned.


### OH_AI_ModelGetInputs()

```
OH_AI_API OH_AI_TensorHandleArray OH_AI_ModelGetInputs (const OH_AI_ModelHandle model)
```

**Description**

Obtains the input tensor array structure of a model.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|

**Returns**

Tensor array structure corresponding to the model input.


### OH_AI_ModelGetLearningRate()

```
OH_AI_API float OH_AI_ModelGetLearningRate (OH_AI_ModelHandle model)
```

**Description**

Obtains the learning rate for model training. This API is used only for on-device training.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|

**Returns**

Learning rate. If no optimizer is set, the value is **0.0**.

### OH_AI_ModelGetOutputByTensorName()

```
OH_AI_API OH_AI_TensorHandle OH_AI_ModelGetOutputByTensorName (const OH_AI_ModelHandle model, const char * tensor_name )
```

**Description**

Obtains the output tensor of a model by tensor name.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|
| tensor_name | Tensor name.|

**Returns**

Pointer to the output tensor indicated by **tensor_name**. If the tensor does not exist in the output, **NULL** will be returned.


### OH_AI_ModelGetOutputs()

```
OH_AI_API OH_AI_TensorHandleArray OH_AI_ModelGetOutputs (const OH_AI_ModelHandle model)
```

**Description**

Obtains the output tensor array structure of a model.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|

**Returns**

Tensor array structure corresponding to the model output.


### OH_AI_ModelGetTrainMode()

```
OH_AI_API bool OH_AI_ModelGetTrainMode (OH_AI_ModelHandle model)
```

**Description**

Obtains the training mode.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|

**Returns**

Whether the training mode is used.


### OH_AI_ModelGetWeights()

```
OH_AI_API OH_AI_TensorHandleArray OH_AI_ModelGetWeights (OH_AI_ModelHandle model)
```

**Description**

Obtains all weight tensors of a model. This API is used only for on-device training.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|

**Returns**

All weight tensors of the model.


### OH_AI_ModelPredict()

```
OH_AI_API OH_AI_Status OH_AI_ModelPredict (OH_AI_ModelHandle model, const OH_AI_TensorHandleArray inputs, OH_AI_TensorHandleArray * outputs, const OH_AI_KernelCallBack before, const OH_AI_KernelCallBack after )
```

**Description**

Performs model inference.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|
| inputs | Tensor array structure corresponding to the model input.|
| outputs | Pointer to the tensor array structure corresponding to the model output.|
| before | Callback function executed before model inference.|
| after | Callback function executed after model inference.|

**Returns**

Status code enumerated by [OH_AI_Status](#oh_ai_status). The value **OH_AI_Status::OH_AI_STATUS_SUCCESS** indicates that the operation is successful.

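A hedged inference sketch: fill each input tensor obtained from **OH_AI_ModelGetInputs**, then call **OH_AI_ModelPredict** without callbacks. The zero fill is a placeholder for real input data:

```
#include <mindspore/model.h>
#include <mindspore/tensor.h>
#include <string.h>

OH_AI_Status predict_example(OH_AI_ModelHandle model) {
    OH_AI_TensorHandleArray inputs = OH_AI_ModelGetInputs(model);
    for (size_t i = 0; i < inputs.handle_num; ++i) {
        // OH_AI_TensorGetMutableData allocates the buffer if it is empty.
        void *data = OH_AI_TensorGetMutableData(inputs.handle_list[i]);
        size_t size = OH_AI_TensorGetDataSize(inputs.handle_list[i]);
        memset(data, 0, size);  // replace with real input data
    }
    OH_AI_TensorHandleArray outputs;
    // Pass NULL for both callbacks when no per-operator tracing is needed.
    return OH_AI_ModelPredict(model, inputs, &outputs, NULL, NULL);
}
```
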

### OH_AI_ModelResize()

```
OH_AI_API OH_AI_Status OH_AI_ModelResize (OH_AI_ModelHandle model, const OH_AI_TensorHandleArray inputs, OH_AI_ShapeInfo * shape_infos, size_t shape_info_num )
```

**Description**

Adjusts the input tensor shapes of a built model.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|
| inputs | Tensor array structure corresponding to the model input.|
| shape_infos | Input shape information array, which consists of tensor shapes arranged in the model input sequence. The model adjusts the tensor shapes in sequence.|
| shape_info_num | Length of the shape information array.|

**Returns**

Status code enumerated by [OH_AI_Status](#oh_ai_status). The value **OH_AI_Status::OH_AI_STATUS_SUCCESS** indicates that the operation is successful.

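For a model with a single NHWC input, one **OH_AI_ShapeInfo** entry is filled in per input, in input order. A sketch, with illustrative dimensions:

```
#include <mindspore/model.h>

OH_AI_Status resize_example(OH_AI_ModelHandle model) {
    OH_AI_TensorHandleArray inputs = OH_AI_ModelGetInputs(model);
    // One OH_AI_ShapeInfo per model input, arranged in input order.
    OH_AI_ShapeInfo shape_info = {4, {2, 224, 224, 3}};  // NHWC, batch size 2
    return OH_AI_ModelResize(model, inputs, &shape_info, 1);
}
```
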

### OH_AI_ModelSetLearningRate()

```
OH_AI_API OH_AI_Status OH_AI_ModelSetLearningRate (OH_AI_ModelHandle model, float learning_rate )
```

**Description**

Sets the learning rate for model training. This API is used only for on-device training.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|
| learning_rate | Learning rate.|

**Returns**

Status code enumerated by [OH_AI_Status](#oh_ai_status). The value **OH_AI_Status::OH_AI_STATUS_SUCCESS** indicates that the operation is successful.


### OH_AI_ModelSetTrainMode()

```
OH_AI_API OH_AI_Status OH_AI_ModelSetTrainMode (OH_AI_ModelHandle model, bool train )
```

**Description**

Sets the training mode. This API is used only for on-device training.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|
| train | Whether the training mode is used.|

**Returns**

Status code enumerated by [OH_AI_Status](#oh_ai_status). The value **OH_AI_Status::OH_AI_STATUS_SUCCESS** indicates that the operation is successful.


### OH_AI_ModelSetupVirtualBatch()

```
OH_AI_API OH_AI_Status OH_AI_ModelSetupVirtualBatch (OH_AI_ModelHandle model, int virtual_batch_multiplier, float lr, float momentum )
```

**Description**

Sets the virtual batch for training. This API is used only for on-device training.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|
| virtual_batch_multiplier | Virtual batch multiplier. If the value is less than **1**, the virtual batch is disabled.|
| lr | Learning rate. The default value is **-1.0f**.|
| momentum | Momentum. The default value is **-1.0f**.|

**Returns**

Status code enumerated by [OH_AI_Status](#oh_ai_status). The value **OH_AI_Status::OH_AI_STATUS_SUCCESS** indicates that the operation is successful.

### OH_AI_ModelUpdateWeights()

```
OH_AI_API OH_AI_Status OH_AI_ModelUpdateWeights (OH_AI_ModelHandle model, const OH_AI_TensorHandleArray new_weights )
```

**Description**

Updates the weight tensors of a model. This API is used only for on-device training.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|
| new_weights | Weight tensors to be updated.|

**Returns**

Status code enumerated by [OH_AI_Status](#oh_ai_status). The value **OH_AI_Status::OH_AI_STATUS_SUCCESS** indicates that the operation is successful.


### OH_AI_RunStep()

```
OH_AI_API OH_AI_Status OH_AI_RunStep (OH_AI_ModelHandle model, const OH_AI_KernelCallBack before, const OH_AI_KernelCallBack after )
```

**Description**

Runs a single training step. This API is used only for on-device training.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|
| before | Callback function executed before model inference.|
| after | Callback function executed after model inference.|

**Returns**

Status code enumerated by [OH_AI_Status](#oh_ai_status). The value **OH_AI_Status::OH_AI_STATUS_SUCCESS** indicates that the operation is successful.

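A minimal training-loop sketch: enable the training mode, run a fixed number of steps, then switch back for inference. Loading of per-step input data is elided:

```
#include <mindspore/model.h>
#include <stdbool.h>

OH_AI_Status train_loop_example(OH_AI_ModelHandle model, int steps) {
    OH_AI_ModelSetTrainMode(model, true);
    for (int i = 0; i < steps; ++i) {
        // ... load the next batch into the model inputs here ...
        OH_AI_Status ret = OH_AI_RunStep(model, NULL, NULL);
        if (ret != OH_AI_STATUS_SUCCESS) {
            return ret;
        }
    }
    OH_AI_ModelSetTrainMode(model, false);
    return OH_AI_STATUS_SUCCESS;
}
```
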

### OH_AI_TensorClone()

```
OH_AI_API OH_AI_TensorHandle OH_AI_TensorClone (OH_AI_TensorHandle tensor)
```

**Description**

Clones a tensor.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Pointer to the tensor to clone.|

**Returns**

Handle of a tensor object.


### OH_AI_TensorCreate()

```
OH_AI_API OH_AI_TensorHandle OH_AI_TensorCreate (const char * name, OH_AI_DataType type, const int64_t * shape, size_t shape_num, const void * data, size_t data_len )
```

**Description**

Creates a tensor object.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| name | Tensor name.|
| type | Tensor data type.|
| shape | Tensor dimension array.|
| shape_num | Length of the tensor dimension array.|
| data | Data pointer.|
| data_len | Data length.|

**Returns**

Handle of a tensor object.

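A hedged example creating a float32 NHWC input tensor from a caller-provided buffer; the name and dimensions are illustrative:

```
#include <mindspore/tensor.h>
#include <mindspore/data_type.h>

OH_AI_TensorHandle create_tensor_example(void) {
    const int64_t shape[] = {1, 2, 2, 3};       // NHWC, illustrative
    static float data[1 * 2 * 2 * 3] = {0.0f};  // 12 float32 elements
    OH_AI_TensorHandle tensor = OH_AI_TensorCreate("input_0",
                                                   OH_AI_DATATYPE_NUMBERTYPE_FLOAT32,
                                                   shape, 4, data, sizeof(data));
    // Release with OH_AI_TensorDestroy(&tensor) when no longer needed.
    return tensor;
}
```
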

### OH_AI_TensorGetAllocator()

```
OH_AI_API OH_AI_AllocatorHandle OH_AI_TensorGetAllocator(OH_AI_TensorHandle tensor)
```

**Description**

Obtains a memory allocator. The allocator is responsible for allocating memory for tensors.

**Since**: 12

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Handle of the tensor object.|

**Returns**

Handle of the memory allocator.

### OH_AI_TensorDestroy()

```
OH_AI_API void OH_AI_TensorDestroy (OH_AI_TensorHandle * tensor)
```

**Description**

Destroys a tensor object.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Double pointer to the tensor handle.|


### OH_AI_TensorGetData()

```
OH_AI_API const void* OH_AI_TensorGetData (const OH_AI_TensorHandle tensor)
```

**Description**

Obtains the pointer to tensor data.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Handle of the tensor object.|

**Returns**

Pointer to tensor data.


### OH_AI_TensorGetDataSize()

```
OH_AI_API size_t OH_AI_TensorGetDataSize (const OH_AI_TensorHandle tensor)
```

**Description**

Obtains the number of bytes of the tensor data.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Handle of the tensor object.|

**Returns**

Number of bytes of the tensor data.


### OH_AI_TensorGetDataType()

```
OH_AI_API OH_AI_DataType OH_AI_TensorGetDataType (const OH_AI_TensorHandle tensor)
```

**Description**

Obtains the tensor type.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Handle of the tensor object.|

**Returns**

Tensor data type.


### OH_AI_TensorGetElementNum()

```
OH_AI_API int64_t OH_AI_TensorGetElementNum (const OH_AI_TensorHandle tensor)
```

**Description**

Obtains the number of tensor elements.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Handle of the tensor object.|

**Returns**

Number of tensor elements.


### OH_AI_TensorGetFormat()

```
OH_AI_API OH_AI_Format OH_AI_TensorGetFormat (const OH_AI_TensorHandle tensor)
```

**Description**

Obtains the tensor data format.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Handle of the tensor object.|

**Returns**

Tensor data format.


### OH_AI_TensorGetMutableData()

```
OH_AI_API void* OH_AI_TensorGetMutableData (const OH_AI_TensorHandle tensor)
```

**Description**

Obtains the pointer to variable tensor data. If the data is empty, memory will be allocated.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Handle of the tensor object.|

**Returns**

Pointer to tensor data.


### OH_AI_TensorGetName()

```
OH_AI_API const char* OH_AI_TensorGetName (const OH_AI_TensorHandle tensor)
```

**Description**

Obtains the tensor name.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Handle of the tensor object.|

**Returns**

Tensor name.


### OH_AI_TensorGetShape()

```
OH_AI_API const int64_t* OH_AI_TensorGetShape (const OH_AI_TensorHandle tensor, size_t * shape_num )
```

**Description**

Obtains the tensor shape.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Handle of the tensor object.|
| shape_num | Length of the tensor shape array.|

**Returns**

Shape array.


### OH_AI_TensorSetAllocator()

```
OH_AI_API OH_AI_Status OH_AI_TensorSetAllocator(OH_AI_TensorHandle tensor, OH_AI_AllocatorHandle allocator)
```

**Description**

Sets the memory allocator. The allocator is responsible for allocating memory for tensors.

**Since**: 12

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Handle of the tensor object.|
| allocator | Handle of the memory allocator.|

**Returns**

Execution status code. The value **OH_AI_STATUS_SUCCESS** indicates that the operation is successful. If the operation fails, an error code is returned.


### OH_AI_TensorSetData()

```
OH_AI_API void OH_AI_TensorSetData (OH_AI_TensorHandle tensor, void * data )
```

**Description**

Sets the tensor data.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Handle of the tensor object.|
| data | Data pointer.|


### OH_AI_TensorSetDataType()

```
OH_AI_API void OH_AI_TensorSetDataType (OH_AI_TensorHandle tensor, OH_AI_DataType type )
```

**Description**

Sets the data type of a tensor.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Handle of the tensor object.|
| type | Data type, which is specified by [OH_AI_DataType](#oh_ai_datatype).|


### OH_AI_TensorSetFormat()

```
OH_AI_API void OH_AI_TensorSetFormat (OH_AI_TensorHandle tensor, OH_AI_Format format )
```

**Description**

Sets the tensor data format.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Handle of the tensor object.|
| format | Tensor data format.|


### OH_AI_TensorSetName()

```
OH_AI_API void OH_AI_TensorSetName (OH_AI_TensorHandle tensor, const char * name )
```

**Description**

Sets the tensor name.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Handle of the tensor object.|
| name | Tensor name.|


### OH_AI_TensorSetShape()

```
OH_AI_API void OH_AI_TensorSetShape (OH_AI_TensorHandle tensor, const int64_t * shape, size_t shape_num )
```

**Description**

Sets the tensor shape.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Handle of the tensor object.|
| shape | Shape array.|
| shape_num | Length of the tensor shape array.|

### OH_AI_TensorSetUserData()

```
OH_AI_API OH_AI_Status OH_AI_TensorSetUserData (OH_AI_TensorHandle tensor, void * data, size_t data_size )
```

**Description**

Sets user data as the tensor data. This function allows you to reuse user data as the model input, which helps to reduce one data copy.

> **NOTE**
>
> The user data is external data for the tensor and is not automatically released when the tensor is destroyed. The caller must release the data separately and must ensure that the user data remains valid for as long as the tensor uses it.

**Since**: 10

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Handle of the tensor object.|
| data | Start address of the user data.|
| data_size | Length of the user data.|

**Returns**

Execution status code. The value **OH_AI_STATUS_SUCCESS** indicates that the operation is successful. If the operation fails, an error code is returned.

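A sketch of wiring a caller-owned buffer into a tensor. The buffer must outlive every use of the tensor and must be freed by the caller; the error status used on allocation failure is an assumption from **status.h**:

```
#include <mindspore/tensor.h>
#include <mindspore/status.h>
#include <stdlib.h>

OH_AI_Status bind_user_data_example(OH_AI_TensorHandle tensor) {
    size_t size = OH_AI_TensorGetDataSize(tensor);
    void *buffer = malloc(size);  // caller-owned; free() it after the tensor is done
    if (buffer == NULL) {
        return OH_AI_STATUS_LITE_NULLPTR;
    }
    // The tensor now reads directly from 'buffer', avoiding one copy.
    return OH_AI_TensorSetUserData(tensor, buffer, size);
}
```
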

### OH_AI_TrainCfgCreate()

```
OH_AI_API OH_AI_TrainCfgHandle OH_AI_TrainCfgCreate ()
```

**Description**

Creates a training configuration object. This API is used only for on-device training.

**Since**: 11

**Returns**

Pointer to the training configuration object.


### OH_AI_TrainCfgDestroy()

```
OH_AI_API void OH_AI_TrainCfgDestroy (OH_AI_TrainCfgHandle * train_cfg)
```

**Description**

Destroys a training configuration object. This API is used only for on-device training.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| train_cfg | Pointer to the training configuration object.|


### OH_AI_TrainCfgGetLossName()

```
OH_AI_API char** OH_AI_TrainCfgGetLossName (OH_AI_TrainCfgHandle train_cfg, size_t * num )
```

**Description**

Obtains the list of loss functions. This API is used only for on-device training.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| train_cfg | Pointer to the training configuration object.|
| num | Number of loss functions.|

**Returns**

List of loss functions.


### OH_AI_TrainCfgGetOptimizationLevel()

```
OH_AI_API OH_AI_OptimizationLevel OH_AI_TrainCfgGetOptimizationLevel (OH_AI_TrainCfgHandle train_cfg)
```

**Description**

Obtains the optimization level of the training configuration object. This API is used only for on-device training.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| train_cfg | Pointer to the training configuration object.|

**Returns**

Optimization level.


### OH_AI_TrainCfgSetLossName()

```
OH_AI_API void OH_AI_TrainCfgSetLossName (OH_AI_TrainCfgHandle train_cfg, const char ** loss_name, size_t num )
```

**Description**

Sets the list of loss functions. This API is used only for on-device training.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| train_cfg | Pointer to the training configuration object.|
| loss_name | List of loss functions.|
| num | Number of loss functions.|


### OH_AI_TrainCfgSetOptimizationLevel()

```
OH_AI_API void OH_AI_TrainCfgSetOptimizationLevel (OH_AI_TrainCfgHandle train_cfg, OH_AI_OptimizationLevel level )
```

**Description**

Sets the optimization level of the training configuration object. This API is used only for on-device training.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| train_cfg | Pointer to the training configuration object.|
| level | Optimization level.|


### OH_AI_TrainModelBuild()

```
OH_AI_API OH_AI_Status OH_AI_TrainModelBuild (OH_AI_ModelHandle model, const void * model_data, size_t data_size, OH_AI_ModelType model_type, const OH_AI_ContextHandle model_context, const OH_AI_TrainCfgHandle train_cfg )
```

**Description**

Loads a training model from the memory buffer and compiles the model to a state ready for running on the device. This API is used only for on-device training.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|
| model_data | Pointer to the buffer for storing the model file to be read.|
| data_size | Buffer size.|
| model_type | Model file type, which is specified by [OH_AI_ModelType](#oh_ai_modeltype).|
| model_context | Model runtime context, which is specified by [OH_AI_ContextHandle](#oh_ai_contexthandle).|
| train_cfg | Pointer to the training configuration object.|

**Returns**

Status code enumerated by [OH_AI_Status](#oh_ai_status). The value **OH_AI_Status::OH_AI_STATUS_SUCCESS** indicates that the operation is successful.


### OH_AI_TrainModelBuildFromFile()

```
OH_AI_API OH_AI_Status OH_AI_TrainModelBuildFromFile (OH_AI_ModelHandle model, const char * model_path, OH_AI_ModelType model_type, const OH_AI_ContextHandle model_context, const OH_AI_TrainCfgHandle train_cfg )
```

**Description**

Loads a training model from the specified path and compiles the model to a state ready for running on the device. This API is used only for on-device training.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|
| model_path | Path of the model file.|
| model_type | Model file type, which is specified by [OH_AI_ModelType](#oh_ai_modeltype).|
| model_context | Model runtime context, which is specified by [OH_AI_ContextHandle](#oh_ai_contexthandle).|
| train_cfg | Pointer to the training configuration object.|

**Returns**

Status code enumerated by [OH_AI_Status](#oh_ai_status). The value **OH_AI_Status::OH_AI_STATUS_SUCCESS** indicates that the operation is successful.

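
Putting the training APIs together, a hedged end-to-end sketch: build a training model with a configuration object, train, and export an inference-only model. Paths are placeholders:

```
#include <mindspore/context.h>
#include <mindspore/model.h>
#include <stdbool.h>

int train_and_export_example(void) {
    OH_AI_ContextHandle context = OH_AI_ContextCreate();
    OH_AI_DeviceInfoHandle cpu_info = OH_AI_DeviceInfoCreate(OH_AI_DEVICETYPE_CPU);
    OH_AI_ContextAddDeviceInfo(context, cpu_info);

    OH_AI_TrainCfgHandle train_cfg = OH_AI_TrainCfgCreate();
    OH_AI_ModelHandle model = OH_AI_ModelCreate();
    if (OH_AI_TrainModelBuildFromFile(model, "/data/model/train_net.ms",
                                      OH_AI_MODELTYPE_MINDIR, context,
                                      train_cfg) != OH_AI_STATUS_SUCCESS) {
        OH_AI_ModelDestroy(&model);
        return -1;
    }

    OH_AI_ModelSetTrainMode(model, true);
    // ... fill inputs and call OH_AI_RunStep in a loop ...

    // Export an inference-only model; a NULL output_tensor_name means full export.
    OH_AI_ExportModel(model, OH_AI_MODELTYPE_MINDIR, "/data/model/infer_net.ms",
                      OH_AI_NO_QUANT, true, NULL, 0);
    OH_AI_ModelDestroy(&model);
    return 0;
}
```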