# Using the MindSpore Lite Engine for On-Device Training (C/C++)

## When to Use

MindSpore Lite is an AI engine that implements AI model inference for different hardware devices. It has been used in a wide range of fields, such as image classification, target recognition, facial recognition, and character recognition. In addition, MindSpore Lite supports on-device model training, making it possible to adapt models to user behavior in real service scenarios.

This topic describes the general development process for using MindSpore Lite to train a model on a device.


## Available APIs
The following table lists some of the APIs for using MindSpore Lite for model training.

| API       | Description       |
| ------------------ | ----------------- |
|OH_AI_ContextHandle OH_AI_ContextCreate()|Creates a context object. This API must be used together with **OH_AI_ContextDestroy**.|
|OH_AI_DeviceInfoHandle OH_AI_DeviceInfoCreate(OH_AI_DeviceType device_type)|Creates a runtime device information object.|
|void OH_AI_ContextDestroy(OH_AI_ContextHandle *context)|Destroys a context object.|
|void OH_AI_ContextAddDeviceInfo(OH_AI_ContextHandle context, OH_AI_DeviceInfoHandle device_info)|Adds a runtime device information object.|
|OH_AI_TrainCfgHandle OH_AI_TrainCfgCreate()|Creates the pointer to a training configuration object.|
|void OH_AI_TrainCfgDestroy(OH_AI_TrainCfgHandle *train_cfg)|Destroys the pointer to a training configuration object.|
|OH_AI_ModelHandle OH_AI_ModelCreate()|Creates a model object.|
|OH_AI_Status OH_AI_TrainModelBuildFromFile(OH_AI_ModelHandle model, const char *model_path, OH_AI_ModelType model_type, const OH_AI_ContextHandle model_context, const OH_AI_TrainCfgHandle train_cfg)|Loads and builds a MindSpore training model from a model file.|
|OH_AI_Status OH_AI_RunStep(OH_AI_ModelHandle model, const OH_AI_KernelCallBack before, const OH_AI_KernelCallBack after)|Runs a single training step.|
|OH_AI_Status OH_AI_ModelSetTrainMode(OH_AI_ModelHandle model, bool train)|Sets the training mode.|
|OH_AI_Status OH_AI_ExportModel(OH_AI_ModelHandle model, OH_AI_ModelType model_type, const char *model_file, OH_AI_QuantizationType quantization_type, bool export_inference_only, char **output_tensor_name, size_t num)|Exports a trained MS model.|
|void OH_AI_ModelDestroy(OH_AI_ModelHandle *model)|Destroys a model object.|


## How to Develop
The following figure shows the development process for MindSpore Lite model training.

**Figure 1** Development process for MindSpore Lite model training
![how-to-use-train](figures/train_sequence_unify_api.png)

Before starting development, include the required header files and define a function that generates random input data. The sample code is as follows:

```c
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include "mindspore/model.h"

// Fill every input tensor with random values in [0, 0.9].
int GenerateInputDataWithRandom(OH_AI_TensorHandleArray inputs) {
  for (size_t i = 0; i < inputs.handle_num; ++i) {
    float *input_data = (float *)OH_AI_TensorGetMutableData(inputs.handle_list[i]);
    if (input_data == NULL) {
      printf("OH_AI_TensorGetMutableData failed.\n");
      return OH_AI_STATUS_LITE_ERROR;
    }
    int64_t num = OH_AI_TensorGetElementNum(inputs.handle_list[i]);
    const int divisor = 10;
    for (int64_t j = 0; j < num; j++) {
      input_data[j] = (float)(rand() % divisor) / divisor;  // values in [0, 0.9]
    }
  }
  return OH_AI_STATUS_SUCCESS;
}
```

The development process consists of the following main steps:

1. Prepare the required model.

    The prepared model is in `.ms` format. This topic uses [lenet_train.ms](https://gitee.com/openharmony-sig/compatibility/blob/master/test_suite/resource/master/standard%20system/acts/resource/ai/mindspore/lenet_train/lenet_train.ms) as an example. To use a custom model, perform the following steps:

    - Use Python to create a network model based on the MindSpore architecture and export the model as a `.mindir` file. For details, see [Quick Start](https://www.mindspore.cn/tutorials/en/r2.1/beginner/quick_start.html).
    - Convert the `.mindir` model file into an `.ms` file. For details about the conversion procedure, see [Converting MindSpore Lite Models](https://www.mindspore.cn/lite/docs/en/r2.1/use/converter_train.html). The `.ms` file can be imported to the device to implement training based on the MindSpore device framework.
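
    For reference, the `.mindir`-to-`.ms` conversion in the last bullet is done with the **converter_lite** tool from the MindSpore Lite release package. The command below is a sketch based on the converter documentation linked above; the file names are placeholders for your own model.

    ```shell
    # Convert a MindSpore .mindir training model to an on-device .ms training model.
    # --trainModel=true tells the converter to keep the training graph.
    ./converter_lite --fmk=MINDIR --trainModel=true \
                     --modelFile=lenet_train.mindir --outputFile=lenet_train
    # Produces lenet_train.ms in the current directory.
    ```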

2. Create a context and set parameters such as the device type and training configuration.

    ```c
    // Create and init context, add CPU device info
    OH_AI_ContextHandle context = OH_AI_ContextCreate();
    if (context == NULL) {
        printf("OH_AI_ContextCreate failed.\n");
        return OH_AI_STATUS_LITE_ERROR;
    }

    OH_AI_DeviceInfoHandle cpu_device_info = OH_AI_DeviceInfoCreate(OH_AI_DEVICETYPE_CPU);
    if (cpu_device_info == NULL) {
        printf("OH_AI_DeviceInfoCreate failed.\n");
        OH_AI_ContextDestroy(&context);
        return OH_AI_STATUS_LITE_ERROR;
    }
    OH_AI_ContextAddDeviceInfo(context, cpu_device_info);

    // Create trainCfg
    OH_AI_TrainCfgHandle trainCfg = OH_AI_TrainCfgCreate();
    if (trainCfg == NULL) {
        printf("OH_AI_TrainCfgCreate failed.\n");
        OH_AI_ContextDestroy(&context);
        return OH_AI_STATUS_LITE_ERROR;
    }
    ```

3. Create, load, and build the model.

    Call **OH_AI_TrainModelBuildFromFile** to load and build the model.

    ```c
    // Create model
    OH_AI_ModelHandle model = OH_AI_ModelCreate();
    if (model == NULL) {
        printf("OH_AI_ModelCreate failed.\n");
        OH_AI_TrainCfgDestroy(&trainCfg);
        OH_AI_ContextDestroy(&context);
        return OH_AI_STATUS_LITE_ERROR;
    }

    // Build model
    int ret = OH_AI_TrainModelBuildFromFile(model, model_file, OH_AI_MODELTYPE_MINDIR, context, trainCfg);
    if (ret != OH_AI_STATUS_SUCCESS) {
        printf("OH_AI_TrainModelBuildFromFile failed, ret: %d.\n", ret);
        OH_AI_ModelDestroy(&model);
        OH_AI_ContextDestroy(&context);
        return ret;
    }
    ```

4. Input data.

    Before executing model training, you need to populate the input tensors with data. In this example, random data is used.

    ```c
    // Get Inputs
    OH_AI_TensorHandleArray inputs = OH_AI_ModelGetInputs(model);
    if (inputs.handle_list == NULL) {
        printf("OH_AI_ModelGetInputs failed.\n");
        OH_AI_ModelDestroy(&model);
        OH_AI_ContextDestroy(&context);
        return OH_AI_STATUS_LITE_ERROR;
    }

    // Generate random data as input data.
    ret = GenerateInputDataWithRandom(inputs);
    if (ret != OH_AI_STATUS_SUCCESS) {
        printf("GenerateInputDataWithRandom failed, ret: %d.\n", ret);
        OH_AI_ModelDestroy(&model);
        OH_AI_ContextDestroy(&context);
        return ret;
    }
    ```

5. Execute model training.

    Use **OH_AI_ModelSetTrainMode** to set the training mode and **OH_AI_RunStep** to run one training step.

    ```c
    // Set Train Mode
    ret = OH_AI_ModelSetTrainMode(model, true);
    if (ret != OH_AI_STATUS_SUCCESS) {
        printf("OH_AI_ModelSetTrainMode failed, ret: %d.\n", ret);
        OH_AI_ModelDestroy(&model);
        OH_AI_ContextDestroy(&context);
        return ret;
    }

    // Model Train Step
    ret = OH_AI_RunStep(model, NULL, NULL);
    if (ret != OH_AI_STATUS_SUCCESS) {
        printf("OH_AI_RunStep failed, ret: %d.\n", ret);
        OH_AI_ModelDestroy(&model);
        OH_AI_ContextDestroy(&context);
        return ret;
    }
    printf("Train Step Success.\n");
    ```
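
    In a real application, **OH_AI_RunStep** is typically called once per batch in a loop over multiple epochs rather than once. The following fragment sketches that structure; the epoch count and the idea of replacing the random generator with a real batch loader are illustrative assumptions, not part of the MindSpore Lite API:

    ```c
    // Illustrative training loop; kEpochs is an arbitrary example value.
    const int kEpochs = 5;
    for (int epoch = 0; epoch < kEpochs; ++epoch) {
        // In practice, iterate over your dataset here and copy each batch
        // into the input tensors instead of generating random data.
        ret = GenerateInputDataWithRandom(inputs);
        if (ret != OH_AI_STATUS_SUCCESS) {
            break;
        }
        ret = OH_AI_RunStep(model, NULL, NULL);
        if (ret != OH_AI_STATUS_SUCCESS) {
            printf("OH_AI_RunStep failed at epoch %d, ret: %d.\n", epoch, ret);
            break;
        }
    }
    ```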

6. Export the trained model.

    Use **OH_AI_ExportModel** to export the trained model. When the **export_inference_only** parameter is set to **true**, only the inference graph is exported. The exported file is saved with the `.ms` suffix appended to the specified path.

    ```c
    // Export Train Model
    ret = OH_AI_ExportModel(model, OH_AI_MODELTYPE_MINDIR, export_train_model, OH_AI_NO_QUANT, false, NULL, 0);
    if (ret != OH_AI_STATUS_SUCCESS) {
        printf("OH_AI_ExportModel train failed, ret: %d.\n", ret);
        OH_AI_ModelDestroy(&model);
        OH_AI_ContextDestroy(&context);
        return ret;
    }
    printf("Export Train Model Success.\n");

    // Export Inference Model
    ret = OH_AI_ExportModel(model, OH_AI_MODELTYPE_MINDIR, export_infer_model, OH_AI_NO_QUANT, true, NULL, 0);
    if (ret != OH_AI_STATUS_SUCCESS) {
        printf("OH_AI_ExportModel inference failed, ret: %d.\n", ret);
        OH_AI_ModelDestroy(&model);
        OH_AI_ContextDestroy(&context);
        return ret;
    }
    printf("Export Inference Model Success.\n");
    ```

7. Destroy the model.

    If MindSpore Lite is no longer needed, destroy the created model and context to release resources.

    ```c
    // Delete model and context.
    OH_AI_ModelDestroy(&model);
    OH_AI_ContextDestroy(&context);
    ```


## Verification

1. Write **CMakeLists.txt**.
    ```cmake
    cmake_minimum_required(VERSION 3.14)
    project(TrainDemo)

    add_executable(train_demo main.c)

    target_link_libraries(
            train_demo
            mindspore_lite_ndk
    )
    ```

   - To use ohos-sdk for cross compilation, you need to set the native toolchain path for the CMake tool as follows: `-DCMAKE_TOOLCHAIN_FILE="/xxx/native/build/cmake/ohos.toolchain.cmake"`.

   - Start cross compilation. When running the compilation command, set **OHOS_NDK** to the native toolchain path.
      ```shell
        mkdir -p build

        cd ./build || exit
        OHOS_NDK=""
        cmake -G "Unix Makefiles" \
              -S ../ \
              -DCMAKE_TOOLCHAIN_FILE="$OHOS_NDK/build/cmake/ohos.toolchain.cmake" \
              -DOHOS_ARCH=arm64-v8a \
              -DCMAKE_BUILD_TYPE=Release

        make
      ```

2. Run the compiled executable.

    - Use hdc to connect to the device and push **train_demo** and **lenet_train.ms** to the same directory on the device.
    - Use hdc shell to access the device, go to the directory where **train_demo** is located, and run the following command:

    ```shell
    ./train_demo ./lenet_train.ms export_train_model export_infer_model
    ```

    The operation is successful if the output is similar to the following:

    ```shell
    Train Step Success.
    Export Train Model Success.
    Export Inference Model Success.
    Tensor name: Default/network-WithLossCell/_backbone-LeNet5/fc3-Dense/BiasAdd-op121, tensor size is 80, elements num: 20.
    output data is:
    0.000265 0.000231 0.000254 0.000269 0.000238 0.000228
    ```

    In the directory where **train_demo** is located, you can view the exported model files **export_train_model.ms** and **export_infer_model.ms**.


## Sample

```c
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include "mindspore/model.h"

// Fill every input tensor with random values in [0, 0.9].
int GenerateInputDataWithRandom(OH_AI_TensorHandleArray inputs) {
  for (size_t i = 0; i < inputs.handle_num; ++i) {
    float *input_data = (float *)OH_AI_TensorGetMutableData(inputs.handle_list[i]);
    if (input_data == NULL) {
      printf("OH_AI_TensorGetMutableData failed.\n");
      return OH_AI_STATUS_LITE_ERROR;
    }
    int64_t num = OH_AI_TensorGetElementNum(inputs.handle_list[i]);
    const int divisor = 10;
    for (int64_t j = 0; j < num; j++) {
      input_data[j] = (float)(rand() % divisor) / divisor;  // values in [0, 0.9]
    }
  }
  return OH_AI_STATUS_SUCCESS;
}
284
int ModelPredict(const char *model_file) {
  // Create and init context, add CPU device info
  OH_AI_ContextHandle context = OH_AI_ContextCreate();
  if (context == NULL) {
    printf("OH_AI_ContextCreate failed.\n");
    return OH_AI_STATUS_LITE_ERROR;
  }

  OH_AI_DeviceInfoHandle cpu_device_info = OH_AI_DeviceInfoCreate(OH_AI_DEVICETYPE_CPU);
  if (cpu_device_info == NULL) {
    printf("OH_AI_DeviceInfoCreate failed.\n");
    OH_AI_ContextDestroy(&context);
    return OH_AI_STATUS_LITE_ERROR;
  }
  OH_AI_ContextAddDeviceInfo(context, cpu_device_info);

  // Create model
  OH_AI_ModelHandle model = OH_AI_ModelCreate();
  if (model == NULL) {
    printf("OH_AI_ModelCreate failed.\n");
    OH_AI_ContextDestroy(&context);
    return OH_AI_STATUS_LITE_ERROR;
  }

  // Build model
  int ret = OH_AI_ModelBuildFromFile(model, model_file, OH_AI_MODELTYPE_MINDIR, context);
  if (ret != OH_AI_STATUS_SUCCESS) {
    printf("OH_AI_ModelBuildFromFile failed, ret: %d.\n", ret);
    OH_AI_ModelDestroy(&model);
    OH_AI_ContextDestroy(&context);
    return ret;
  }

  // Get Inputs
  OH_AI_TensorHandleArray inputs = OH_AI_ModelGetInputs(model);
  if (inputs.handle_list == NULL) {
    printf("OH_AI_ModelGetInputs failed.\n");
    OH_AI_ModelDestroy(&model);
    OH_AI_ContextDestroy(&context);
    return OH_AI_STATUS_LITE_ERROR;
  }

  // Generate random data as input data.
  ret = GenerateInputDataWithRandom(inputs);
  if (ret != OH_AI_STATUS_SUCCESS) {
    printf("GenerateInputDataWithRandom failed, ret: %d.\n", ret);
    OH_AI_ModelDestroy(&model);
    OH_AI_ContextDestroy(&context);
    return ret;
  }

  // Model Predict
  OH_AI_TensorHandleArray outputs;
  ret = OH_AI_ModelPredict(model, inputs, &outputs, NULL, NULL);
  if (ret != OH_AI_STATUS_SUCCESS) {
    printf("OH_AI_ModelPredict failed, ret: %d.\n", ret);
    OH_AI_ModelDestroy(&model);
    OH_AI_ContextDestroy(&context);
    return ret;
  }

  // Print Output Tensor Data.
  for (size_t i = 0; i < outputs.handle_num; ++i) {
    OH_AI_TensorHandle tensor = outputs.handle_list[i];
    int64_t element_num = OH_AI_TensorGetElementNum(tensor);
    printf("Tensor name: %s, tensor size is %zu, elements num: %lld.\n", OH_AI_TensorGetName(tensor),
           OH_AI_TensorGetDataSize(tensor), (long long)element_num);
    const float *data = (const float *)OH_AI_TensorGetData(tensor);
    printf("output data is:\n");
    const int64_t max_print_num = 50;
    for (int64_t j = 0; j < element_num && j < max_print_num; ++j) {
      printf("%f ", data[j]);
    }
    printf("\n");
  }

  OH_AI_ModelDestroy(&model);
  OH_AI_ContextDestroy(&context);
  return OH_AI_STATUS_SUCCESS;
}

int TrainDemo(int argc, const char **argv) {
  if (argc < 4) {
    printf("Model file must be provided.\n");
    printf("Export Train Model path must be provided.\n");
    printf("Export Inference Model path must be provided.\n");
    return OH_AI_STATUS_LITE_ERROR;
  }
  const char *model_file = argv[1];
  const char *export_train_model = argv[2];
  const char *export_infer_model = argv[3];

  // Create and init context, add CPU device info
  OH_AI_ContextHandle context = OH_AI_ContextCreate();
  if (context == NULL) {
    printf("OH_AI_ContextCreate failed.\n");
    return OH_AI_STATUS_LITE_ERROR;
  }

  OH_AI_DeviceInfoHandle cpu_device_info = OH_AI_DeviceInfoCreate(OH_AI_DEVICETYPE_CPU);
  if (cpu_device_info == NULL) {
    printf("OH_AI_DeviceInfoCreate failed.\n");
    OH_AI_ContextDestroy(&context);
    return OH_AI_STATUS_LITE_ERROR;
  }
  OH_AI_ContextAddDeviceInfo(context, cpu_device_info);

  // Create trainCfg
  OH_AI_TrainCfgHandle trainCfg = OH_AI_TrainCfgCreate();
  if (trainCfg == NULL) {
    printf("OH_AI_TrainCfgCreate failed.\n");
    OH_AI_ContextDestroy(&context);
    return OH_AI_STATUS_LITE_ERROR;
  }

  // Create model
  OH_AI_ModelHandle model = OH_AI_ModelCreate();
  if (model == NULL) {
    printf("OH_AI_ModelCreate failed.\n");
    OH_AI_TrainCfgDestroy(&trainCfg);
    OH_AI_ContextDestroy(&context);
    return OH_AI_STATUS_LITE_ERROR;
  }

  // Build model
  int ret = OH_AI_TrainModelBuildFromFile(model, model_file, OH_AI_MODELTYPE_MINDIR, context, trainCfg);
  if (ret != OH_AI_STATUS_SUCCESS) {
    printf("OH_AI_TrainModelBuildFromFile failed, ret: %d.\n", ret);
    OH_AI_ModelDestroy(&model);
    OH_AI_ContextDestroy(&context);
    return ret;
  }

  // Get Inputs
  OH_AI_TensorHandleArray inputs = OH_AI_ModelGetInputs(model);
  if (inputs.handle_list == NULL) {
    printf("OH_AI_ModelGetInputs failed.\n");
    OH_AI_ModelDestroy(&model);
    OH_AI_ContextDestroy(&context);
    return OH_AI_STATUS_LITE_ERROR;
  }

  // Generate random data as input data.
  ret = GenerateInputDataWithRandom(inputs);
  if (ret != OH_AI_STATUS_SUCCESS) {
    printf("GenerateInputDataWithRandom failed, ret: %d.\n", ret);
    OH_AI_ModelDestroy(&model);
    OH_AI_ContextDestroy(&context);
    return ret;
  }

  // Set Train Mode
  ret = OH_AI_ModelSetTrainMode(model, true);
  if (ret != OH_AI_STATUS_SUCCESS) {
    printf("OH_AI_ModelSetTrainMode failed, ret: %d.\n", ret);
    OH_AI_ModelDestroy(&model);
    OH_AI_ContextDestroy(&context);
    return ret;
  }

  // Model Train Step
  ret = OH_AI_RunStep(model, NULL, NULL);
  if (ret != OH_AI_STATUS_SUCCESS) {
    printf("OH_AI_RunStep failed, ret: %d.\n", ret);
    OH_AI_ModelDestroy(&model);
    OH_AI_ContextDestroy(&context);
    return ret;
  }
  printf("Train Step Success.\n");

  // Export Train Model
  ret = OH_AI_ExportModel(model, OH_AI_MODELTYPE_MINDIR, export_train_model, OH_AI_NO_QUANT, false, NULL, 0);
  if (ret != OH_AI_STATUS_SUCCESS) {
    printf("OH_AI_ExportModel train failed, ret: %d.\n", ret);
    OH_AI_ModelDestroy(&model);
    OH_AI_ContextDestroy(&context);
    return ret;
  }
  printf("Export Train Model Success.\n");

  // Export Inference Model
  ret = OH_AI_ExportModel(model, OH_AI_MODELTYPE_MINDIR, export_infer_model, OH_AI_NO_QUANT, true, NULL, 0);
  if (ret != OH_AI_STATUS_SUCCESS) {
    printf("OH_AI_ExportModel inference failed, ret: %d.\n", ret);
    OH_AI_ModelDestroy(&model);
    OH_AI_ContextDestroy(&context);
    return ret;
  }
  printf("Export Inference Model Success.\n");

  // Delete model and context.
  OH_AI_ModelDestroy(&model);
  OH_AI_ContextDestroy(&context);

  // Use the exported model to predict. The export step appends the .ms suffix
  // to the output path, so build the full file name in a local buffer instead
  // of calling strcat() on the read-only argv string.
  char infer_model_path[512];
  snprintf(infer_model_path, sizeof(infer_model_path), "%s.ms", export_infer_model);
  ret = ModelPredict(infer_model_path);
  if (ret != OH_AI_STATUS_SUCCESS) {
    printf("Exported Model to predict failed, ret: %d.\n", ret);
    return ret;
  }
  return OH_AI_STATUS_SUCCESS;
}

int main(int argc, const char **argv) { return TrainDemo(argc, argv); }

```
491