# Using MindSpore Lite for Image Classification (ArkTS)

## When to Use

You can use [@ohos.ai.mindSporeLite](../../reference/apis-mindspore-lite-kit/js-apis-mindSporeLite.md) to quickly deploy AI algorithms into your application to perform AI model inference for image classification.

Image classification recognizes objects in images and is widely used in medical image analysis, autonomous driving, e-commerce, and facial recognition.

## Basic Concepts

Before getting started, you need to understand the following basic concepts:

**Tensor**: a special data structure that is similar to an array or matrix. It is the basic data structure used in MindSpore Lite network operations.

**Float16 inference mode**: an inference mode in half-precision format, where a number is represented with 16 bits. It reduces memory footprint and typically speeds up inference, at some cost in numeric precision.
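
For example, whether Float16 is used at inference time is controlled through the **precisionMode** option of the CPU context. A minimal sketch (the sample later in this guide keeps the default full-precision mode instead):

```ts
import { mindSporeLite } from '@kit.MindSporeLiteKit';

// Minimal sketch: ask the runtime to prefer half-precision (Float16) on CPU.
// 'preferred_fp16' enables Float16 where supported; 'enforce_fp32' keeps full precision.
let context: mindSporeLite.Context = {};
context.target = ['cpu'];
context.cpu = {};
context.cpu.precisionMode = 'preferred_fp16';
```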

## Available APIs

APIs involved in MindSpore Lite model inference are categorized into context APIs, model APIs, and tensor APIs. For details about APIs, see [@ohos.ai.mindSporeLite](../../reference/apis-mindspore-lite-kit/js-apis-mindSporeLite.md).

| API                                                          | Description              |
| ------------------------------------------------------------ | ---------------- |
| loadModelFromFile(model: string, context?: Context): Promise&lt;Model&gt; | Loads a model from a file.|
| getInputs(): MSTensor[]                                      | Obtains the model input.|
| predict(inputs: MSTensor[]): Promise&lt;MSTensor[]&gt;      | Performs model inference.|
| getData(): ArrayBuffer                                       | Obtains tensor data.|
| setData(inputArray: ArrayBuffer): void                       | Sets tensor data.|
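
Taken together, these APIs form the basic inference flow. The following is a minimal sketch of that flow, assuming a model file path and a preprocessed input buffer are already available (**runInference**, **modelPath**, and **inputBuffer** are illustrative names):

```ts
import { mindSporeLite } from '@kit.MindSporeLiteKit';

// Minimal sketch of the API flow: load a model, fill its first input tensor,
// run inference, and return the raw data of the first output tensor.
async function runInference(modelPath: string, inputBuffer: ArrayBuffer): Promise<ArrayBuffer> {
  let model: mindSporeLite.Model = await mindSporeLite.loadModelFromFile(modelPath);
  let inputs: mindSporeLite.MSTensor[] = model.getInputs(); // obtain the model input tensors
  inputs[0].setData(inputBuffer);                           // set tensor data
  let outputs: mindSporeLite.MSTensor[] = await model.predict(inputs);
  return outputs[0].getData();                              // obtain tensor data
}
```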

## Development Process

1. Select an image classification model.
2. Use MindSpore Lite to perform inference on the device and classify the selected image.

## Environment Setup

Install DevEco Studio 4.1 or later, and update the SDK to API version 11 or later.

## How to Develop

The following uses inference on an image in the album as an example to describe how to use MindSpore Lite to implement image classification.

### Selecting a Model

This sample application uses [mobilenetv2.ms](https://download.mindspore.cn/model_zoo/official/lite/mobilenetv2_openimage_lite/1.5/mobilenetv2.ms) as the image classification model. The model file is available in the **entry/src/main/resources/rawfile** directory of the project.

If you have another pre-trained image classification model, convert it into the .ms format by referring to [Using MindSpore Lite for Model Conversion](mindspore-lite-converter-guidelines.md).

### Writing Code

#### Image Input and Preprocessing

1. Call [@ohos.file.picker](../../reference/apis-core-file-kit/js-apis-file-picker.md) to pick the desired image from the album.

   ```ts
   import { photoAccessHelper } from '@kit.MediaLibraryKit';
   import { BusinessError } from '@kit.BasicServicesKit';

   let uris: Array<string> = [];

   // Create an image picker instance.
   let photoSelectOptions = new photoAccessHelper.PhotoSelectOptions();

   // Set the media file type to IMAGE and set the maximum number of media files that can be selected.
   photoSelectOptions.MIMEType = photoAccessHelper.PhotoViewMIMETypes.IMAGE_TYPE;
   photoSelectOptions.maxSelectNumber = 1;

   // Create an album picker instance and call select() to open the album page for file selection. After file selection is done, the result set is returned through photoSelectResult.
   let photoPicker = new photoAccessHelper.PhotoViewPicker();
   photoPicker.select(photoSelectOptions, async (
     err: BusinessError, photoSelectResult: photoAccessHelper.PhotoSelectResult) => {
     if (err) {
       console.error('MS_LITE_ERR: PhotoViewPicker.select failed with err: ' + JSON.stringify(err));
       return;
     }
     console.info('MS_LITE_LOG: PhotoViewPicker.select successfully, ' +
       'photoSelectResult uri: ' + JSON.stringify(photoSelectResult));
     uris = photoSelectResult.photoUris;
     console.info('MS_LITE_LOG: uri: ' + uris);
   })
   ```

2. Based on the input image size, call [@ohos.multimedia.image](../../reference/apis-image-kit/js-apis-image.md) and [@ohos.file.fs](../../reference/apis-core-file-kit/js-apis-file-fs.md) to perform operations such as cropping the image, obtaining the image buffer, and standardizing the image.

   ```ts
   import { image } from '@kit.ImageKit';
   import { fileIo } from '@kit.CoreFileKit';

   let modelInputHeight: number = 224;
   let modelInputWidth: number = 224;

   // Based on the specified URI, call fileIo.openSync to open the file to obtain the FD.
   let file = fileIo.openSync(uris[0], fileIo.OpenMode.READ_ONLY);
   console.info('MS_LITE_LOG: file fd: ' + file.fd);

   // Based on the FD, call fileIo.readSync to read the data in the file.
   let inputBuffer = new ArrayBuffer(4096000);
   let readLen = fileIo.readSync(file.fd, inputBuffer);
   console.info('MS_LITE_LOG: readSync data to file succeed and inputBuffer size is:' + readLen);

   // Perform image preprocessing through PixelMap.
   let imageSource = image.createImageSource(file.fd);
   imageSource.createPixelMap().then((pixelMap) => {
     pixelMap.getImageInfo().then((info) => {
       console.info('MS_LITE_LOG: info.width = ' + info.size.width);
       console.info('MS_LITE_LOG: info.height = ' + info.size.height);
       // Crop the image based on the input image size and obtain the image buffer readBuffer.
       pixelMap.scale(256.0 / info.size.width, 256.0 / info.size.height).then(() => {
         pixelMap.crop(
           { x: 16, y: 16, size: { height: modelInputHeight, width: modelInputWidth } }
         ).then(async () => {
           let info = await pixelMap.getImageInfo();
           console.info('MS_LITE_LOG: crop info.width = ' + info.size.width);
           console.info('MS_LITE_LOG: crop info.height = ' + info.size.height);
           // Set the size of readBuffer.
           let readBuffer = new ArrayBuffer(modelInputHeight * modelInputWidth * 4);
           await pixelMap.readPixelsToBuffer(readBuffer);
           console.info('MS_LITE_LOG: Succeeded in reading image pixel data, buffer: ' +
           readBuffer.byteLength);
           // Convert readBuffer to the float32 format, and standardize the image.
           const imageArr = new Uint8Array(
             readBuffer.slice(0, modelInputHeight * modelInputWidth * 4));
           console.info('MS_LITE_LOG: imageArr length: ' + imageArr.length);
           let means = [0.485, 0.456, 0.406];
           let stds = [0.229, 0.224, 0.225];
           let float32View = new Float32Array(modelInputHeight * modelInputWidth * 3);
           let index = 0;
           for (let i = 0; i < imageArr.length; i++) {
             if ((i + 1) % 4 == 0) {
               float32View[index] = (imageArr[i - 3] / 255.0 - means[0]) / stds[0]; // B
               float32View[index+1] = (imageArr[i - 2] / 255.0 - means[1]) / stds[1]; // G
               float32View[index+2] = (imageArr[i - 1] / 255.0 - means[2]) / stds[2]; // R
               index += 3;
             }
           }
           console.info('MS_LITE_LOG: float32View length: ' + float32View.length);
           let printStr = 'float32View data:';
           for (let i = 0; i < 20; i++) {
             printStr += ' ' + float32View[i];
           }
           console.info('MS_LITE_LOG: ' + printStr);
         })
       })
     });
   });
   ```
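
   Before handing the buffer to the model, it can be worth verifying that preprocessing produced exactly the size the model expects. Below is a minimal sketch reusing the variables from the block above (the 224 x 224 x 3 shape is the mobilenetv2 input; the check itself is illustrative):

   ```ts
   // Illustrative sanity check, placed after the standardization loop:
   // mobilenetv2.ms expects a 1 x 224 x 224 x 3 float32 input, so float32View
   // must hold modelInputHeight * modelInputWidth * 3 elements.
   if (float32View.length !== modelInputHeight * modelInputWidth * 3) {
     console.error('MS_LITE_ERR: unexpected input size: ' + float32View.length);
   }
   ```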

#### Writing Inference Code

1. If the capability set defined by the project does not contain MindSpore Lite, create the **syscap.json** file in the **entry/src/main** directory of the DevEco Studio project. The file content is as follows:

   ```json
   {
     "devices": {
       "general": [
         // The value must be the same as the value of deviceTypes in the module.json5 file.
         "default"
       ]
     },
     "development": {
       "addedSysCaps": [
         "SystemCapability.AI.MindSporeLite"
       ]
     }
   }
   ```

2. Call [@ohos.ai.mindSporeLite](../../reference/apis-mindspore-lite-kit/js-apis-mindSporeLite.md) to implement inference on the device. The operation process is as follows:

   1. Create a context, and set parameters such as the number of runtime threads and device type.
   2. Load the model. In this example, the model is loaded from the memory.
   3. Load data. Before executing the model, obtain the model inputs and then fill the input tensors with data.
   4. Perform model inference through the **predict** API.

   ```ts
   // model.ets
   import { mindSporeLite } from '@kit.MindSporeLiteKit';

   export default async function modelPredict(
     modelBuffer: ArrayBuffer, inputsBuffer: ArrayBuffer[]): Promise<mindSporeLite.MSTensor[]> {

     // 1. Create a context, and set parameters such as the number of runtime threads and device type.
     let context: mindSporeLite.Context = {};
     context.target = ['cpu'];
     context.cpu = {};
     context.cpu.threadNum = 2;
     context.cpu.threadAffinityMode = 1;
     context.cpu.precisionMode = 'enforce_fp32';

     // 2. Load the model from the memory.
     let msLiteModel: mindSporeLite.Model = await mindSporeLite.loadModelFromBuffer(modelBuffer, context);

     // 3. Set the input data.
     let modelInputs: mindSporeLite.MSTensor[] = msLiteModel.getInputs();
     for (let i = 0; i < inputsBuffer.length; i++) {
       let inputBuffer = inputsBuffer[i];
       if (inputBuffer != null) {
         modelInputs[i].setData(inputBuffer as ArrayBuffer);
       }
     }

     // 4. Perform inference.
     console.info('=========MS_LITE_LOG: MS_LITE predict start=====');
     let modelOutputs: mindSporeLite.MSTensor[] = await msLiteModel.predict(modelInputs);
     return modelOutputs;
   }
   ```

#### Executing Inference

Load the model file, run the inference function on the selected image, and then process the inference result.

```ts
import modelPredict from './model';
import { resourceManager } from '@kit.LocalizationKit';

let modelName: string = 'mobilenetv2.ms';
let max: number = 0;
let maxIndex: number = 0;
let maxArray: Array<number> = [];
let maxIndexArray: Array<number> = [];

// The buffer data of the input image is stored in float32View after preprocessing. For details, see Image Input and Preprocessing.
let inputs: ArrayBuffer[] = [float32View.buffer];
let resMgr: resourceManager.ResourceManager = getContext().getApplicationContext().resourceManager;
resMgr.getRawFileContent(modelName).then(modelBuffer => {
  // predict
  modelPredict(modelBuffer.buffer.slice(0), inputs).then(outputs => {
    console.info('=========MS_LITE_LOG: MS_LITE predict success=====');
    // Print the result.
    for (let i = 0; i < outputs.length; i++) {
      let out = new Float32Array(outputs[i].getData());
      let printStr = outputs[i].name + ':';
      for (let j = 0; j < out.length; j++) {
        printStr += out[j].toString() + ',';
      }
      console.info('MS_LITE_LOG: ' + printStr);
      // Find the top categories by score.
      max = 0;
      maxIndex = 0;
      maxArray = [];
      maxIndexArray = [];
      let newArray = out.filter(value => value !== max);
      for (let n = 0; n < 5; n++) {
        max = newArray[0];
        maxIndex = 0;
        for (let m = 0; m < newArray.length; m++) {
          if (newArray[m] > max) {
            max = newArray[m];
            maxIndex = m;
          }
        }
        maxArray.push(Math.round(max * 10000));
        maxIndexArray.push(maxIndex);
        // Remove the found maximum value through the array filter function.
        newArray = newArray.filter(value => value !== max);
      }
      console.info('MS_LITE_LOG: max:' + maxArray);
      console.info('MS_LITE_LOG: maxIndex:' + maxIndexArray);
    }
    console.info('=========MS_LITE_LOG END=========');
  });
});
```
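
Note that the filter-based loop above searches a **newArray** from which elements have been removed, so **maxIndex** refers to a position in the filtered array and may not match the category index in the original output tensor. If you need indices that always map back to the original categories, a sort-based helper such as the following can be used instead (a sketch; the name **topK** is illustrative):

```ts
// Illustrative alternative: return the top-k scores together with their original indices.
function topK(scores: Float32Array, k: number): Array<[number, number]> {
  return Array.from(scores)
    .map((score, index): [number, number] => [score, index])
    .sort((a, b) => b[0] - a[0]) // sort score-index pairs by descending score
    .slice(0, k);
}

// Usage: topK(out, 5) yields [score, categoryIndex] pairs for the top 5 categories.
```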

### Debugging and Verification

1. In DevEco Studio, connect the device, and click **Run entry** to build and run your own HAP. The console output is similar to the following:

   ```shell
   Launching com.samples.mindsporelitearktsdemo
   $ hdc shell aa force-stop com.samples.mindsporelitearktsdemo
   $ hdc shell mkdir data/local/tmp/xxx
   $ hdc file send C:\Users\xxx\MindSporeLiteArkTSDemo\entry\build\default\outputs\default\entry-default-signed.hap "data/local/tmp/xxx"
   $ hdc shell bm install -p data/local/tmp/xxx
   $ hdc shell rm -rf data/local/tmp/xxx
   $ hdc shell aa start -a EntryAbility -b com.samples.mindsporelitearktsdemo
   ```

2. Touch the **photo** button on the device screen, select an image, and touch **OK**. The classification result of the selected image is displayed on the device screen. In the log output, filter by the keyword **MS_LITE**. The following information is displayed:

   ```verilog
   08-06 03:24:33.743   22547-22547  A03d00/JSAPP                   com.sampl...liteark+  I     MS_LITE_LOG: PhotoViewPicker.select successfully, photoSelectResult uri: {"photoUris":["file://media/Photo/13/IMG_1501955351_012/plant.jpg"]}
   08-06 03:24:33.795   22547-22547  A03d00/JSAPP                   com.sampl...liteark+  I     MS_LITE_LOG: readSync data to file succeed and inputBuffer size is:32824
   08-06 03:24:34.147   22547-22547  A03d00/JSAPP                   com.sampl...liteark+  I     MS_LITE_LOG: crop info.width = 224
   08-06 03:24:34.147   22547-22547  A03d00/JSAPP                   com.sampl...liteark+  I     MS_LITE_LOG: crop info.height = 224
   08-06 03:24:34.160   22547-22547  A03d00/JSAPP                   com.sampl...liteark+  I     MS_LITE_LOG: Succeeded in reading image pixel data, buffer: 200704
   08-06 03:24:34.970   22547-22547  A03d00/JSAPP                   com.sampl...liteark+  I     =========MS_LITE_LOG: MS_LITE predict start=====
   08-06 03:24:35.432   22547-22547  A03d00/JSAPP                   com.sampl...liteark+  I     =========MS_LITE_LOG: MS_LITE predict success=====
   08-06 03:24:35.447   22547-22547  A03d00/JSAPP                   com.sampl...liteark+  I     MS_LITE_LOG: Default/head-MobileNetV2Head/Sigmoid-op466:0.0000034338463592575863,0.000014028532859811094,9.119685273617506e-7,0.000049100715841632336,9.502661555416125e-7,3.945370394831116e-7,0.04346757382154465,0.00003971960904891603...
   08-06 03:24:35.499   22547-22547  A03d00/JSAPP                   com.sampl...liteark+  I     MS_LITE_LOG: max:9497,7756,1970,435,46
   08-06 03:24:35.499   22547-22547  A03d00/JSAPP                   com.sampl...liteark+  I     MS_LITE_LOG: maxIndex:323,46,13,6,349
   08-06 03:24:35.499   22547-22547  A03d00/JSAPP                   com.sampl...liteark+  I     =========MS_LITE_LOG END=========
   ```

### Effects

Touch the **photo** button on the device screen, select an image, and touch **OK**. The top 4 categories of the image are displayed below the image.

<img src="figures/step1.png" width="20%"/>     <img src="figures/step2.png" width="20%"/>     <img src="figures/step3.png" width="20%"/>     <img src="figures/step4.png" width="20%"/>