1# @ohos.ai.mindSporeLite (On-device AI Framework)
2
3MindSpore Lite is a lightweight and high-performance on-device AI engine that provides standard model inference and training APIs and built-in universal high-performance operator libraries. It natively supports native Neural Network Runtime Kit for a higher inference efficiency, empowering intelligent applications in all scenarios.
4
5This topic describes the model inference and training capabilities supported by the MindSpore Lite AI engine.
6
7> **NOTE**
8>
9> - The initial APIs of this module are supported since API version 10. Newly added APIs will be marked with a superscript to indicate their earliest API version. Unless otherwise stated, the MindSpore model is used in the sample code.
10>
11> - The APIs of this module can be used only in the stage model.
12
13## Modules to Import
14
15```ts
16import { mindSporeLite } from '@kit.MindSporeLiteKit';
17```
18
19## mindSporeLite.loadModelFromFile
20
21loadModelFromFile(model: string, callback: Callback<Model>): void
22
23Loads the input model from the full path for model inference. This API uses an asynchronous callback to return the result.
24
25**System capability**: SystemCapability.AI.MindSporeLite
26
27**Parameters**
28
29| Name  | Type                     | Mandatory| Description                    |
30| -------- | ------------------------- | ---- | ------------------------ |
31| model    | string                    | Yes  | Complete path of the input model.    |
32| callback | Callback<[Model](#model)> | Yes  | Callback used to return the result, which is a **Model** object.|
33
34**Example**
35
36```ts
37let modelFile : string = '/path/to/xxx.ms';
38mindSporeLite.loadModelFromFile(modelFile, (mindSporeLiteModel : mindSporeLite.Model) => {
39  let modelInputs : mindSporeLite.MSTensor[] = mindSporeLiteModel.getInputs();
40  console.info(modelInputs[0].name);
41})
42```
43## mindSporeLite.loadModelFromFile
44
45loadModelFromFile(model: string, context: Context, callback: Callback&lt;Model&gt;): void
46
47Loads the input model from the full path for model inference. This API uses an asynchronous callback to return the result.
48
49**System capability**: SystemCapability.AI.MindSporeLite
50
51**Parameters**
52
53| Name  | Type                               | Mandatory| Description                  |
54| -------- | ----------------------------------- | ---- | ---------------------- |
55| model    | string                              | Yes  | Complete path of the input model.  |
56| context | [Context](#context) | Yes| Configuration information of the running environment.|
57| callback | Callback<[Model](#model)> | Yes  | Callback used to return the result, which is a **Model** object.|
58
59**Example**
60
61```ts
62let context: mindSporeLite.Context = {};
63context.target = ['cpu'];
64let modelFile : string = '/path/to/xxx.ms';
65mindSporeLite.loadModelFromFile(modelFile, context, (mindSporeLiteModel : mindSporeLite.Model) => {
66  let modelInputs : mindSporeLite.MSTensor[] = mindSporeLiteModel.getInputs();
67  console.info(modelInputs[0].name);
68})
69```
70## mindSporeLite.loadModelFromFile
71
72loadModelFromFile(model: string, context?: Context): Promise&lt;Model&gt;
73
74Loads the input model from the full path for model inference. This API uses a promise to return the result.
75
76**System capability**: SystemCapability.AI.MindSporeLite
77
78**Parameters**
79
80| Name | Type               | Mandatory| Description                                         |
81| ------- | ------------------- | ---- | --------------------------------------------- |
82| model   | string              | Yes  | Complete path of the input model.                         |
83| context | [Context](#context) | No  | Configuration information of the running environment. By default, **CpuDevice** is used for initialization.|
84
85**Return value**
86
87| Type                     | Description                        |
88| ------------------------- | ---------------------------- |
89| Promise<[Model](#model)> | Promise used to return the result, which is a **Model** object.|
90
91**Example**
92
93```ts
94let modelFile = '/path/to/xxx.ms';
95mindSporeLite.loadModelFromFile(modelFile).then((mindSporeLiteModel : mindSporeLite.Model) => {
96  let modelInputs : mindSporeLite.MSTensor[] = mindSporeLiteModel.getInputs();
97  console.info(modelInputs[0].name);
98})
99```
100## mindSporeLite.loadModelFromBuffer
101
102loadModelFromBuffer(model: ArrayBuffer, callback: Callback&lt;Model&gt;): void
103
104Loads the input model from the memory for inference. This API uses an asynchronous callback to return the result.
105
106**System capability**: SystemCapability.AI.MindSporeLite
107
108**Parameters**
109
110| Name  | Type                     | Mandatory| Description                    |
111| -------- | ------------------------- | ---- | ------------------------ |
112| model    | ArrayBuffer               | Yes  | Memory that contains the input model.        |
113| callback | Callback<[Model](#model)> | Yes  | Callback used to return the result, which is a **Model** object.|
114
115**Example**
116
117```ts
118import { mindSporeLite } from '@kit.MindSporeLiteKit';
119import { common } from '@kit.AbilityKit';
120
121let modelFile = '/path/to/xxx.ms';
122getContext(this).resourceManager.getRawFileContent(modelFile).then((buffer : Uint8Array) => {
123  let modelBuffer = buffer.buffer;
124  mindSporeLite.loadModelFromBuffer(modelBuffer, (mindSporeLiteModel : mindSporeLite.Model) => {
125    let modelInputs : mindSporeLite.MSTensor[] = mindSporeLiteModel.getInputs();
126    console.info(modelInputs[0].name);
127  })
128})
129```
130## mindSporeLite.loadModelFromBuffer
131
132loadModelFromBuffer(model: ArrayBuffer, context: Context, callback: Callback&lt;Model&gt;): void
133
134Loads the input model from the memory for inference. This API uses an asynchronous callback to return the result.
135
136**System capability**: SystemCapability.AI.MindSporeLite
137
138**Parameters**
139
140| Name  | Type                               | Mandatory| Description                  |
141| -------- | ----------------------------------- | ---- | ---------------------- |
142| model    | ArrayBuffer                   | Yes  | Memory that contains the input model.|
143| context | [Context](#context) | Yes | Configuration information of the running environment.|
144| callback | Callback<[Model](#model)> | Yes  | Callback used to return the result, which is a **Model** object.|
145
146**Example**
147
148```ts
149import { resourceManager } from '@kit.LocalizationKit';
150import { GlobalContext } from '../GlobalContext';
151import { mindSporeLite } from '@kit.MindSporeLiteKit';
152import { common } from '@kit.AbilityKit';
153let modelFile = '/path/to/xxx.ms';
154export class Test {
155  value:number = 0;
156  foo(): void {
157    GlobalContext.getContext().setObject("value", this.value);
158  }
159}
160let globalContext= GlobalContext.getContext().getObject("value") as common.UIAbilityContext;
161
162globalContext.resourceManager.getRawFileContent(modelFile).then((buffer : Uint8Array) => {
163  let modelBuffer = buffer.buffer;
164  let context: mindSporeLite.Context = {};
165  context.target = ['cpu'];
166  mindSporeLite.loadModelFromBuffer(modelBuffer, context, (mindSporeLiteModel : mindSporeLite.Model) => {
167    let modelInputs : mindSporeLite.MSTensor[] = mindSporeLiteModel.getInputs();
168    console.info(modelInputs[0].name);
169  })
170})
171```
172## mindSporeLite.loadModelFromBuffer
173
174loadModelFromBuffer(model: ArrayBuffer, context?: Context): Promise&lt;Model&gt;
175
176Loads the input model from the memory for inference. This API uses a promise to return the result.
177
178**System capability**: SystemCapability.AI.MindSporeLite
179
180**Parameters**
181
182| Name | Type               | Mandatory| Description                                         |
183| ------- | ------------------- | ---- | --------------------------------------------- |
184| model   | ArrayBuffer         | Yes  | Memory that contains the input model.                             |
185| context | [Context](#context) | No  | Configuration information of the running environment. By default, **CpuDevice** is used for initialization.|
186
187**Return value**
188
189| Type                           | Description                        |
190| ------------------------------- | ---------------------------- |
191| Promise<[Model](#model)> | Promise used to return the result, which is a **Model** object.|
192
193**Example**
194
195```ts
196import { resourceManager } from '@kit.LocalizationKit';
197import { GlobalContext } from '../GlobalContext';
198import { mindSporeLite } from '@kit.MindSporeLiteKit';
199import { common } from '@kit.AbilityKit';
200let modelFile = '/path/to/xxx.ms';
201export class Test {
202  value:number = 0;
203  foo(): void {
204    GlobalContext.getContext().setObject("value", this.value);
205  }
206}
207let globalContext = GlobalContext.getContext().getObject("value") as common.UIAbilityContext;
208
209globalContext.resourceManager.getRawFileContent(modelFile).then((buffer : Uint8Array) => {
210  let modelBuffer = buffer.buffer;
211  mindSporeLite.loadModelFromBuffer(modelBuffer).then((mindSporeLiteModel : mindSporeLite.Model) => {
212    let modelInputs : mindSporeLite.MSTensor[] = mindSporeLiteModel.getInputs();
213    console.info(modelInputs[0].name);
214  })
215})
216```
217## mindSporeLite.loadModelFromFd
218
219loadModelFromFd(model: number, callback: Callback&lt;Model&gt;): void
220
221Loads the input model based on the specified file descriptor for inference. This API uses an asynchronous callback to return the result.
222
223**System capability**: SystemCapability.AI.MindSporeLite
224
225**Parameters**
226
227| Name  | Type                               | Mandatory| Description                  |
228| -------- | ----------------------------------- | ---- | ---------------------- |
229| model    | number                         | Yes  | File descriptor of the input model.|
230| callback | Callback<[Model](#model)> | Yes  | Callback used to return the result, which is a **Model** object.|
231
232**Example**
233
234```ts
235import { fileIo } from '@kit.CoreFileKit';
236let modelFile = '/path/to/xxx.ms';
237let file = fileIo.openSync(modelFile, fileIo.OpenMode.READ_ONLY);
238mindSporeLite.loadModelFromFd(file.fd, (mindSporeLiteModel : mindSporeLite.Model) => {
239  let modelInputs : mindSporeLite.MSTensor[] = mindSporeLiteModel.getInputs();
240  console.info(modelInputs[0].name);
241})
242```
243## mindSporeLite.loadModelFromFd
244
245loadModelFromFd(model: number, context: Context, callback: Callback&lt;Model&gt;): void
246
247Loads the input model based on the specified file descriptor for inference. This API uses an asynchronous callback to return the result.
248
249**System capability**: SystemCapability.AI.MindSporeLite
250
251**Parameters**
252
253| Name  | Type                               | Mandatory| Description                  |
254| -------- | ----------------------------------- | ---- | ---------------------- |
255| model    | number                   | Yes  | File descriptor of the input model.|
256| context | [Context](#context) | Yes | Configuration information of the running environment.|
257| callback | Callback<[Model](#model)> | Yes  | Callback used to return the result, which is a **Model** object.|
258
259**Example**
260
261```ts
262import { fileIo } from '@kit.CoreFileKit';
263let modelFile = '/path/to/xxx.ms';
264let context : mindSporeLite.Context = {};
265context.target = ['cpu'];
266let file = fileIo.openSync(modelFile, fileIo.OpenMode.READ_ONLY);
267mindSporeLite.loadModelFromFd(file.fd, context, (mindSporeLiteModel : mindSporeLite.Model) => {
268  let modelInputs : mindSporeLite.MSTensor[] = mindSporeLiteModel.getInputs();
269  console.info(modelInputs[0].name);
270})
271```
272## mindSporeLite.loadModelFromFd
273
274loadModelFromFd(model: number, context?: Context): Promise&lt;Model&gt;
275
276Loads the input model based on the specified file descriptor for inference. This API uses a promise to return the result.
277
278**System capability**: SystemCapability.AI.MindSporeLite
279
280**Parameters**
281
282| Name | Type               | Mandatory| Description                                         |
283| ------- | ------------------- | ---- | --------------------------------------------- |
284| model   | number              | Yes  | File descriptor of the input model.                           |
285| context | [Context](#context) | No  | Configuration information of the running environment. By default, **CpuDevice** is used for initialization.|
286
287**Return value**
288
289| Type                     | Description                        |
290| ------------------------- | ---------------------------- |
291| Promise<[Model](#model)> | Promise used to return the result, which is a **Model** object.|
292
293**Example**
294
295```ts
296import { fileIo } from '@kit.CoreFileKit';
297let modelFile = '/path/to/xxx.ms';
298let file = fileIo.openSync(modelFile, fileIo.OpenMode.READ_ONLY);
299mindSporeLite.loadModelFromFd(file.fd).then((mindSporeLiteModel: mindSporeLite.Model) => {
300  let modelInputs: mindSporeLite.MSTensor[] = mindSporeLiteModel.getInputs();
301  console.info(modelInputs[0].name);
302})
303```
304
305## mindSporeLite.loadTrainModelFromFile<sup>12+</sup>
306
307loadTrainModelFromFile(model: string, trainCfg?: TrainCfg, context?: Context): Promise&lt;Model&gt;
308
309Loads the training model file based on the specified path. This API uses a promise to return the result.
310
311**System capability**: SystemCapability.AI.MindSporeLite
312
313**Parameters**
314
315| Name  | Type                   | Mandatory| Description                                          |
316| -------- | ----------------------- | ---- | ---------------------------------------------- |
317| model    | string                  | Yes  | Complete path of the input model.                          |
318| trainCfg | [TrainCfg](#traincfg12) | No  | Model training configuration. The default value is an array of the default values of attributes in **TrainCfg**.  |
319| context  | [Context](#context)     | No  | Configuration information of the running environment. By default, **CpuDevice** is used for initialization.|
320
321**Return value**
322
323| Type                      | Description                  |
324| ------------------------ | -------------------- |
325| Promise<[Model](#model)> | Promise used to return the result, which is a **Model** object.|
326
327**Example**
328
329```ts
330let modelFile = '/path/to/xxx.ms';
331mindSporeLite.loadTrainModelFromFile(modelFile).then((mindSporeLiteModel : mindSporeLite.Model) => {
332  let modelInputs : mindSporeLite.MSTensor[] = mindSporeLiteModel.getInputs();
333  console.info(modelInputs[0].name);
334})
335```
336
337## mindSporeLite.loadTrainModelFromBuffer<sup>12+</sup>
338
339loadTrainModelFromBuffer(model: ArrayBuffer, trainCfg?: TrainCfg, context?: Context): Promise&lt;Model&gt;
340
341Loads a training model from the memory buffer. This API uses a promise to return the result.
342
343**System capability**: SystemCapability.AI.MindSporeLite
344
345**Parameters**
346
347| Name  | Type                   | Mandatory| Description                                         |
348| -------- | ----------------------- | ---- | --------------------------------------------- |
349| model    | ArrayBuffer             | Yes  | Memory accommodating the training model.                         |
350| trainCfg | [TrainCfg](#traincfg12) | No  | Model training configuration. The default value is an array of the default values of attributes in **TrainCfg**. |
351| context  | [Context](#context)     | No  | Configuration information of the running environment. By default, **CpuDevice** is used for initialization.|
352
353**Return value**
354
355| Type                      | Description                  |
356| ------------------------ | -------------------- |
357| Promise<[Model](#model)> | Promise used to return the result, which is a **Model** object.|
358
359**Example**
360
361```ts
362import { resourceManager } from '@kit.LocalizationKit'
363let modelFile = 'xxx.ms';
364let resMgr: resourceManager.ResourceManager = getContext().getApplicationContext().resourceManager;
365resMgr.getRawFileContent(modelFile).then(modelBuffer => {
366  mindSporeLite.loadTrainModelFromBuffer(modelBuffer.buffer).then((mindSporeLiteModel: mindSporeLite.Model) => {
367    console.info("MSLITE trainMode: ", mindSporeLiteModel.trainMode);
368  })
369})
370```
371
372## mindSporeLite.loadTrainModelFromFd<sup>12+</sup>
373
374loadTrainModelFromFd(model: number, trainCfg?: TrainCfg, context?: Context): Promise&lt;Model&gt;
375
376Loads the training model file from the file descriptor. This API uses a promise to return the result.
377
378**System capability**: SystemCapability.AI.MindSporeLite
379
380**Parameters**
381
382| Name  | Type                   | Mandatory| Description                                         |
383| -------- | ----------------------- | ---- | --------------------------------------------- |
384| model    | number                  | Yes  | File descriptor of the training model.                       |
385| trainCfg | [TrainCfg](#traincfg12) | No  | Model training configuration. The default value is an array of the default values of attributes in **TrainCfg**. |
386| context  | [Context](#context)     | No  | Configuration information of the running environment. By default, **CpuDevice** is used for initialization.|
387
388**Return value**
389
390| Type                    | Description                        |
391| ------------------------ | ---------------------------- |
392| Promise<[Model](#model)> | Promise used to return the result, which is a **Model** object.|
393
394**Example**
395
396```ts
397import { fileIo } from '@kit.CoreFileKit';
398let modelFile = '/path/to/xxx.ms';
399let file = fileIo.openSync(modelFile, fileIo.OpenMode.READ_ONLY);
400mindSporeLite.loadTrainModelFromFd(file.fd).then((mindSporeLiteModel: mindSporeLite.Model) => {
401  console.info("MSLITE trainMode: ", mindSporeLiteModel.trainMode);
402});
403```
404
405## mindSporeLite.getAllNNRTDeviceDescriptions<sup>12+</sup>
406
407getAllNNRTDeviceDescriptions() : NNRTDeviceDescription[]
408
409Obtains all device descriptions in NNRt.
410
411**System capability**: SystemCapability.AI.MindSporeLite
412
413**Return value**
414
415| Type                                               | Description                  |
416| --------------------------------------------------- | ---------------------- |
417| [NNRTDeviceDescription](#nnrtdevicedescription12)[] | NNRt device description array.|
418
419**Example**
420
421```ts
422let allDevices = mindSporeLite.getAllNNRTDeviceDescriptions();
423if (allDevices == null) {
424  console.error('MS_LITE_LOG: getAllNNRTDeviceDescriptions is NULL.');
425}
426```
427
428## Context
429
430Defines the configuration information of the running environment.
431
432### Attributes
433
434**System capability**: SystemCapability.AI.MindSporeLite
435
436
437| Name  | Type                     | Read Only| Optional| Description                                                        |
438| ------ | ------------------------- | ---- | ---- | ------------------------------------------------------------ |
439| target | string[]                  | No  | Yes  | Target backend. The value can be **cpu** or **nnrt**. The default value is **cpu**.                |
440| cpu    | [CpuDevice](#cpudevice)   | No  | Yes  | CPU backend device option. Set this parameter set only when **target** is set to **cpu**. The default value is an array of the default values of attributes in **CpuDevice**.|
441| nnrt   | [NNRTDevice](#nnrtdevice) | No  | Yes  | NNRt backend device option. Set this parameter set only when **target** is set to **nnrt**. The default value is an array of the default values of attributes in **NNRTDevice**.|
442
443**Example**
444
445```ts
446let context: mindSporeLite.Context = {};
447context.target = ['cpu','nnrt'];
448```
449
450## CpuDevice
451
452Defines the CPU backend device option.
453
454### Attributes
455
456**System capability**: SystemCapability.AI.MindSporeLite
457
458| Name                  | Type                                     | Read Only| Optional| Description                                                        |
459| ---------------------- | ----------------------------------------- | ---- | ---- | ------------------------------------------------------------ |
460| threadNum              | number                                    | No  | Yes  | Number of runtime threads. The default value is **2**.                             |
461| threadAffinityMode     | [ThreadAffinityMode](#threadaffinitymode) | No  | Yes  | Affinity mode for binding runtime threads to CPU cores. The default value is **mindSporeLite.ThreadAffinityMode.NO_AFFINITIES**.|
462| threadAffinityCoreList | number[]                                  | No  | Yes  | List of CPU cores bound to runtime threads. Set this parameter only when **threadAffinityMode** is set. If **threadAffinityMode** is set to **mindSporeLite.ThreadAffinityMode.NO_AFFINITIES**, this parameter is empty. The number in the list indicates the SN of the CPU core. The default value is **[]**.|
463| precisionMode          | string                                    | No  | Yes  | Whether to enable the Float16 inference mode. The value **preferred_fp16** means to enable half-precision inference and the default value **enforce_fp32** means to disable half-precision inference. Other settings are not supported.|
464
465**Float16 inference mode**: a mode that uses half-precision inference. Float16 uses 16 bits to represent a number and therefore it is also called half-precision.
466
467**Example**
468
469```ts
470let context: mindSporeLite.Context = {};
471context.cpu = {};
472context.target = ['cpu'];
473context.cpu.threadNum = 2;
474context.cpu.threadAffinityMode = 0;
475context.cpu.precisionMode = 'preferred_fp16';
476context.cpu.threadAffinityCoreList = [0, 1, 2];
477```
478
479## ThreadAffinityMode
480
481Specifies the affinity mode for binding runtime threads to CPU cores.
482
483**System capability**: SystemCapability.AI.MindSporeLite
484
485| Name              | Value  | Description        |
486| ------------------ | ---- | ------------ |
487| NO_AFFINITIES      | 0    | No affinities.    |
488| BIG_CORES_FIRST    | 1    | Big cores first.|
489| LITTLE_CORES_FIRST | 2    | Medium cores first.|
490
491## NNRTDevice
492
493Represents an NNRt device. Neural Network Runtime (NNRt) is a bridge that connects the upper-layer AI inference framework to the bottom-layer acceleration chip to implement cross-chip inference and computing of AI models. An NNRt backend can be configured for MindSpore Lite.
494
495### Attributes
496
497**System capability**: SystemCapability.AI.MindSporeLite
498
499| Name                         | Type                               | Read Only| Optional| Description                    |
500| ----------------------------- | ----------------------------------- | ---- | ------------------------ | ------------------------ |
501| deviceID<sup>12+</sup>        | bigint                              | No| Yes | NNRt device ID. The default value is **0**.    |
502| performanceMode<sup>12+</sup> | [PerformanceMode](#performancemode12) | No | Yes | NNRt device performance mode. The default value is **PERFORMANCE_NONE**.|
503| priority<sup>12+</sup>        | [Priority](#priority12)               | No | Yes | NNRt inference task priority. The default value is **PRIORITY_MEDIUM**.|
504| extensions<sup>12+</sup>      | [Extension](#extension12)[]         | No | Yes | Extended NNRt device configuration. This parameter is left empty by default.|
505
506## PerformanceMode<sup>12+</sup>
507
508Enumerates NNRt device performance modes.
509
510**System capability**: SystemCapability.AI.MindSporeLite
511
512| Name               | Value  | Description               |
513| ------------------- | ---- | ------------------- |
514| PERFORMANCE_NONE    | 0    | No special settings.       |
515| PERFORMANCE_LOW     | 1    | Low power consumption.       |
516| PERFORMANCE_MEDIUM  | 2    | Power consumption and performance balancing.|
517| PERFORMANCE_HIGH    | 3    | High performance.       |
518| PERFORMANCE_EXTREME | 4    | Ultimate performance.     |
519
520## Priority<sup>12+</sup>
521
522Enumerates NNRt inference task priorities.
523
524**System capability**: SystemCapability.AI.MindSporeLite
525
526| Name           | Value  | Description          |
527| --------------- | ---- | -------------- |
528| PRIORITY_NONE   | 0    | No priority preference.|
529| PRIORITY_LOW    | 1    | Low priority.|
530| PRIORITY_MEDIUM | 2    | Medium priority.|
531| PRIORITY_HIGH   | 3    | High priority.|
532
533## Extension<sup>12+</sup>
534
535Defines the extended NNRt device configuration.
536
537### Attributes
538
539**System capability**: SystemCapability.AI.MindSporeLite
540
541| Name               | Type       | Read Only| Optional| Description            |
542| ------------------- | ----------- | ---- | ---- | ---------------- |
543| name<sup>12+</sup>  | string      | No  | No  | Configuration name.      |
544| value<sup>12+</sup> | ArrayBuffer | No  | No  | Memory accommodating the extended configuration.|
545
546## NNRTDeviceDescription<sup>12+</sup>
547
548Defines NNRt device information, including the device ID and device name.
549
550**System capability**: SystemCapability.AI.MindSporeLite
551
552### deviceID
553
554deviceID() : bigint
555
556Obtains the NNRt device ID.
557
558**System capability**: SystemCapability.AI.MindSporeLite
559
560**Return value**
561
562| Type  | Description        |
563| ------ | ------------ |
564| bigint | NNRt device ID.|
565
566**Example**
567
568```ts
569let allDevices = mindSporeLite.getAllNNRTDeviceDescriptions();
570if (allDevices == null) {
571  console.error('getAllNNRTDeviceDescriptions is NULL.');
572}
573let context: mindSporeLite.Context = {};
574context.target = ["nnrt"];
575context.nnrt = {};
576for (let i: number = 0; i < allDevices.length; i++) {
577  console.info(allDevices[i].deviceID().toString());
578}
579```
580
581### deviceType
582
583deviceType() : NNRTDeviceType
584
585Obtains the device model.
586
587**System capability**: SystemCapability.AI.MindSporeLite
588
589**Return value**
590
591| Type                               | Description          |
592| ----------------------------------- | -------------- |
593| [NNRTDeviceType](#nnrtdevicetype12) | NNRt device type.|
594
595**Example**
596
597```ts
598let allDevices = mindSporeLite.getAllNNRTDeviceDescriptions();
599if (allDevices == null) {
600  console.error('getAllNNRTDeviceDescriptions is NULL.');
601}
602let context: mindSporeLite.Context = {};
603context.target = ["nnrt"];
604context.nnrt = {};
605for (let i: number = 0; i < allDevices.length; i++) {
606  console.info(allDevices[i].deviceType().toString());
607}
608```
609
610### deviceName
611
612deviceName() : string
613
614Obtains the NNRt device name.
615
616**System capability**: SystemCapability.AI.MindSporeLite
617
618**Return value**
619
620| Type  | Description          |
621| ------ | -------------- |
622| string | NNRt device name.|
623
624**Example**
625
626```ts
627let allDevices = mindSporeLite.getAllNNRTDeviceDescriptions();
628if (allDevices == null) {
629  console.error('getAllNNRTDeviceDescriptions is NULL.');
630}
631let context: mindSporeLite.Context = {};
632context.target = ["nnrt"];
633context.nnrt = {};
634for (let i: number = 0; i < allDevices.length; i++) {
635  console.info(allDevices[i].deviceName().toString());
636}
637```
638
639## NNRTDeviceType<sup>12+</sup>
640
641Enumerates NNRt device types.
642
643**System capability**: SystemCapability.AI.MindSporeLite
644
645| Name                  | Value  | Description                               |
646| ---------------------- | ---- | ----------------------------------- |
647| NNRTDEVICE_OTHERS      | 0    | Others (any device type except the following three types).|
648| NNRTDEVICE_CPU         | 1    | CPU.                          |
649| NNRTDEVICE_GPU         | 2    | GPU.                          |
650| NNRTDEVICE_ACCELERATOR | 3    | Specific acceleration device.                   |
651
652## TrainCfg<sup>12+</sup>
653
654Defines the configuration for on-device training.
655
656### Attributes
657
658**System capability**: SystemCapability.AI.MindSporeLite
659
660| Name                           | Type                                     | Read Only| Optional| Description                                                        |
661| ------------------------------- | ----------------------------------------- | ---- | ---- | ------------------------------------------------------------ |
662| lossName<sup>12+</sup>          | string[]                                  | No  | Yes  | List of loss functions. The default value is ["loss\_fct", "\_loss\_fn", "SigmoidCrossEntropy"].|
663| optimizationLevel<sup>12+</sup> | [OptimizationLevel](#optimizationlevel12) | No  | Yes  | Network optimization level for on-device training. The default value is **O0**.                        |
664
665**Example**
666
667```ts
668let cfg: mindSporeLite.TrainCfg = {};
669cfg.lossName = ["loss_fct", "_loss_fn", "SigmoidCrossEntropy"];
670cfg.optimizationLevel = mindSporeLite.OptimizationLevel.O0;
671```
672
673## OptimizationLevel<sup>12+</sup>
674
675Enumerates network optimization levels for on-device training.
676
677**System capability**: SystemCapability.AI.MindSporeLite
678
679| Name| Value  | Description                                                      |
680| ---- | ---- | ---------------------------------------------------------- |
681| O0   | 0    | No optimization level.                                              |
682| O2   | 2    | Converts the precision type of the network to float16 and keeps the precision type of the batch normalization layer and loss function as float32.|
683| O3   | 3    | Converts the precision type of the network (including the batch normalization layer) to float16.                   |
684| AUTO | 4    | Selects an optimization level based on the device.                                    |
685
686
687## QuantizationType<sup>12+</sup>
688
689Enumerates quantization types.
690
691**System capability**: SystemCapability.AI.MindSporeLite
692
693| Name        | Value  | Description      |
694| ------------ | ---- | ---------- |
695| NO_QUANT     | 0    | No quantification.|
696| WEIGHT_QUANT | 1    | Weight quantization.|
697| FULL_QUANT   | 2    | Full quantization.  |
698
699## Model
700
701Represents a **Model** instance, with properties and APIs defined.
702
703In the following sample code, you first need to use [loadModelFromFile()](#mindsporeliteloadmodelfromfile), [loadModelFromBuffer()](#mindsporeliteloadmodelfrombuffer), or [loadModelFromFd()](#mindsporeliteloadmodelfromfd) to obtain a **Model** instance before calling related APIs.
704
705### Attributes
706
707**System capability**: SystemCapability.AI.MindSporeLite
708
709| Name                      | Type   | Read Only| Optional| Description                                                        |
710| -------------------------- | ------- | ---- | ---- | ------------------------------------------------------------ |
711| learningRate<sup>12+</sup> | number  | No  | Yes  | Learning rate of a training model. The default value is read from the loaded model.                |
712| trainMode<sup>12+</sup>    | boolean | No  | Yes  | Training mode. The value **true** indicates the training mode, and the value **false** indicates the non-training mode. The default value is **true** for a training model and **false** for an inference model.|
713
714### getInputs
715
716getInputs(): MSTensor[]
717
718Obtains the model input for inference.
719
720**System capability**: SystemCapability.AI.MindSporeLite
721
722**Return value**
723
724| Type                   | Description              |
725| ----------------------- | ------------------ |
726| [MSTensor](#mstensor)[] | **MSTensor** object.|
727
728**Example**
729
730```ts
731let modelFile = '/path/to/xxx.ms';
732mindSporeLite.loadModelFromFile(modelFile).then((mindSporeLiteModel : mindSporeLite.Model) => {
733  let modelInputs : mindSporeLite.MSTensor[] = mindSporeLiteModel.getInputs();
734  console.info(modelInputs[0].name);
735})
736```
737### predict
738
739predict(inputs: MSTensor[], callback: Callback&lt;MSTensor[]&gt;): void
740
741Executes the inference model. This API uses an asynchronous callback to return the result. Ensure that the model object is not reclaimed when being invoked.
742
743**System capability**: SystemCapability.AI.MindSporeLite
744
745**Parameters**
746
747| Name| Type                   | Mandatory| Description                      |
748| ------ | ----------------------- | ---- | -------------------------- |
749| inputs | [MSTensor](#mstensor)[] | Yes  | List of input models.  |
750| callback | Callback<[MSTensor](#mstensor)[]> | Yes  | Callback used to return the result, which is a list of **MSTensor** objects.|
751
752**Example**
753
754```ts
755import { resourceManager } from '@kit.LocalizationKit';
756import { GlobalContext } from '../GlobalContext';
757import { mindSporeLite } from '@kit.MindSporeLiteKit';
758import { common } from '@kit.AbilityKit';
759export class Test {
760  value:number = 0;
761  foo(): void {
762    GlobalContext.getContext().setObject("value", this.value);
763  }
764}
765let globalContext = GlobalContext.getContext().getObject("value") as common.UIAbilityContext;
766
767let inputName = 'input_data.bin';
768globalContext.resourceManager.getRawFileContent(inputName).then(async (buffer : Uint8Array) => {
769  let modelBuffer = buffer.buffer;
770  let modelFile : string = '/path/to/xxx.ms';
771  let mindSporeLiteModel : mindSporeLite.Model = await mindSporeLite.loadModelFromFile(modelFile);
772  let modelInputs : mindSporeLite.MSTensor[] = mindSporeLiteModel.getInputs();
773
774  modelInputs[0].setData(modelBuffer);
775  mindSporeLiteModel.predict(modelInputs, (mindSporeLiteTensor : mindSporeLite.MSTensor[]) => {
776    let output = new Float32Array(mindSporeLiteTensor[0].getData());
777    for (let i = 0; i < output.length; i++) {
778      console.info(output[i].toString());
779    }
780  })
781})
782```
783### predict
784
785predict(inputs: MSTensor[]): Promise&lt;MSTensor[]&gt;
786
787Executes model inference. This API uses a promise to return the result. Ensure that the model object is not reclaimed when being invoked.
788
789**System capability**: SystemCapability.AI.MindSporeLite
790
791**Parameters**
792
793| Name| Type                   | Mandatory| Description                          |
794| ------ | ----------------------- | ---- | ------------------------------ |
795| inputs | [MSTensor](#mstensor)[] | Yes  | List of input models.  |
796
797**Return value**
798
799| Type                   | Description                  |
800| ----------------------- | ---------------------- |
801| Promise<[MSTensor](#mstensor)[]> | Promise used to return the result, List of **MSTensor** objects.|
802
803**Example**
804
805```ts
806import { resourceManager } from '@kit.LocalizationKit';
807import { GlobalContext } from '../GlobalContext';
808import { mindSporeLite } from '@kit.MindSporeLiteKit';
809import { common } from '@kit.AbilityKit';
810export class Test {
811    value:number = 0;
812    foo(): void {
813    GlobalContext.getContext().setObject("value", this.value);
814}
815}
816let globalContext = GlobalContext.getContext().getObject("value") as common.UIAbilityContext;;
817let inputName = 'input_data.bin';
818globalContext.resourceManager.getRawFileContent(inputName).then(async (buffer : Uint8Array) => {
819  let modelBuffer = buffer.buffer;
820  let modelFile = '/path/to/xxx.ms';
821  let mindSporeLiteModel : mindSporeLite.Model = await mindSporeLite.loadModelFromFile(modelFile);
822  let modelInputs : mindSporeLite.MSTensor[] = mindSporeLiteModel.getInputs();
823  modelInputs[0].setData(modelBuffer);
824  mindSporeLiteModel.predict(modelInputs).then((mindSporeLiteTensor : mindSporeLite.MSTensor[]) => {
825    let output = new Float32Array(mindSporeLiteTensor[0].getData());
826    for (let i = 0; i < output.length; i++) {
827      console.info(output[i].toString());
828    }
829  })
830})
831```
832
833### resize
834
835resize(inputs: MSTensor[], dims: Array&lt;Array&lt;number&gt;&gt;): boolean
836
837Resets the tensor size.
838
839**System capability**: SystemCapability.AI.MindSporeLite
840
841**Parameters**
842
843| Name| Type                 | Mandatory| Description                         |
844| ------ | --------------------- | ---- | ----------------------------- |
845| inputs | [MSTensor](#mstensor)[]            | Yes  | List of input models. |
846| dims   | Array&lt;Array&lt;number&gt;&gt; | Yes  | Target tensor size.|
847
848**Return value**
849
850| Type   | Description                                                        |
851| ------- | ------------------------------------------------------------ |
852| boolean | Result indicating whether the setting is successful. The value **true** indicates that the tensor size is successfully reset, and the value **false** indicates the opposite.|
853
854**Example**
855
856```ts
857let modelFile = '/path/to/xxx.ms';
858mindSporeLite.loadModelFromFile(modelFile).then((mindSporeLiteModel : mindSporeLite.Model) => {
859  let modelInputs : mindSporeLite.MSTensor[] = mindSporeLiteModel.getInputs();
860  let new_dim = new Array([1,32,32,1]);
861  mindSporeLiteModel.resize(modelInputs, new_dim);
862})
863```
864
865### runStep<sup>12+</sup>
866
867runStep(inputs: MSTensor[]): boolean
868
869Defines a single-step training model. This API is used only for on-device training.
870
871**System capability**: SystemCapability.AI.MindSporeLite
872
873**Parameters**
874
875| Name   | Type                     | Mandatory | Description      |
876| ------ | ----------------------- | --- | -------- |
877| inputs | [MSTensor](#mstensor)[] | Yes  | List of input models.|
878
879**Return value**
880
881| Type   | Description                                                        |
882| ------- | ------------------------------------------------------------ |
883| boolean | Result indicating whether the operation is successful. The value **true** indicates that the operation is successful, and the value **false** indicates the opposite.|
884
885**Example**
886
887```ts
888let modelFile = '/path/to/xxx.ms';
889mindSporeLite.loadTrainModelFromFile(modelFile).then((mindSporeLiteModel: mindSporeLite.Model) => {
890  mindSporeLiteModel.trainMode = true;
891  const modelInputs = mindSporeLiteModel.getInputs();
892  let ret = mindSporeLiteModel.runStep(modelInputs);
893  if (ret == false) {
894    console.error('MS_LITE_LOG: runStep failed.')
895  }
896})
897```
898
899### getWeights<sup>12+</sup>
900
901getWeights(): MSTensor[]
902
903Obtains all weight tensors of a model. This API is used only for on-device training.
904
905**System capability**: SystemCapability.AI.MindSporeLite
906
907**Return value**
908
909| Type                     | Description        |
910| ----------------------- | ---------- |
911| [MSTensor](#mstensor)[] | Weight tensor of the training model.|
912
913**Example**
914
915```ts
916import { resourceManager } from '@kit.LocalizationKit';
917
918let resMgr: resourceManager.ResourceManager = getContext().getApplicationContext().resourceManager;
919let modelFile = 'xxx.ms';
920resMgr.getRawFileContent(modelFile).then((modelBuffer) => {
921  mindSporeLite.loadTrainModelFromBuffer(modelBuffer.buffer.slice(0)).then((mindSporeLiteModel: mindSporeLite.Model) => {
922    mindSporeLiteModel.trainMode = true;
923    const weights = mindSporeLiteModel.getWeights();
924    for (let i = 0; i < weights.length; i++) {
925      let printStr = weights[i].name + ", ";
926      printStr += weights[i].shape + ", ";
927      printStr += weights[i].dtype + ", ";
928      printStr += weights[i].dataSize + ", ";
929      printStr += weights[i].getData();
930      console.info("MS_LITE weights: ", printStr);
931    }
932  })
933})
934```
935
936### updateWeights<sup>12+</sup>
937
938updateWeights(weights: MSTensor[]): boolean
939
940Weight of the updated model, which is used only for on-device training.
941
942**System capability**: SystemCapability.AI.MindSporeLite
943
944**Parameters**
945
946| Name | Type                   | Mandatory| Description          |
947| ------- | ----------------------- | ---- | -------------- |
948| weights | [MSTensor](#mstensor)[] | Yes  | List of weight tensors.|
949
950**Return value**
951
952| Type   | Description                                                        |
953| ------- | ------------------------------------------------------------ |
954| boolean | Result indicating whether the operation is successful. The value **true** indicates that the operation is successful, and the value **false** indicates the opposite.|
955
956**Example**
957
958```ts
959import { resourceManager } from '@kit.LocalizationKit';
960
961let resMgr: resourceManager.ResourceManager = getContext().getApplicationContext().resourceManager;
962let modelFile = 'xxx.ms';
963resMgr.getRawFileContent(modelFile).then((modelBuffer) => {
964  mindSporeLite.loadTrainModelFromBuffer(modelBuffer.buffer.slice(0)).then((mindSporeLiteModel: mindSporeLite.Model) => {
965    mindSporeLiteModel.trainMode = true;
966    const weights = mindSporeLiteModel.getWeights();
967    let ret = mindSporeLiteModel.updateWeights(weights);
968    if (ret == false) {
969      console.error('MS_LITE_LOG: updateWeights failed.')
970    }
971  })
972})
973```
974
975### setupVirtualBatch<sup>12+</sup>
976
977setupVirtualBatch(virtualBatchMultiplier: number, lr: number, momentum: number): boolean
978
979Sets the virtual batch for training. This API is used only for on-device training.
980
981**System capability**: SystemCapability.AI.MindSporeLite
982
983**Parameters**
984
985| Name                | Type  | Mandatory| Description                                                |
986| ---------------------- | ------ | ---- | ---------------------------------------------------- |
987| virtualBatchMultiplier | number | Yes  | Virtual batch multiplier. If the value is less than **1**, the virtual batch is disabled.|
988| lr                     | number | Yes  | Learning rate.                                            |
989| momentum               | number | Yes  | Momentum.                                              |
990
991**Return value**
992
993| Type   | Description                                                        |
994| ------- | ------------------------------------------------------------ |
995| boolean | Result indicating whether the operation is successful. The value **true** indicates that the operation is successful, and the value **false** indicates the opposite.|
996
997**Example**
998
999```ts
1000import { resourceManager } from '@kit.LocalizationKit';
1001
1002let resMgr: resourceManager.ResourceManager = getContext().getApplicationContext().resourceManager;
1003let modelFile = 'xxx.ms';
1004resMgr.getRawFileContent(modelFile).then((modelBuffer) => {
1005  mindSporeLite.loadTrainModelFromBuffer(modelBuffer.buffer.slice(0)).then((mindSporeLiteModel: mindSporeLite.Model) => {
1006    mindSporeLiteModel.trainMode = true;
1007    let ret = mindSporeLiteModel.setupVirtualBatch(2,-1,-1);
1008    if (ret == false) {
1009      console.error('MS_LITE setupVirtualBatch failed.')
1010    }
1011  })
1012})
1013```
1014
1015### exportModel<sup>12+</sup>
1016
1017exportModel(modelFile: string, quantizationType?: QuantizationType, exportInferenceOnly?: boolean, outputTensorName?: string[]): boolean
1018
1019Exports a training model. This API is used only for on-device training.
1020
1021**System capability**: SystemCapability.AI.MindSporeLite
1022
1023**Parameters**
1024
1025| Name             | Type                                   | Mandatory| Description                                                        |
1026| ------------------- | --------------------------------------- | ---- | ------------------------------------------------------------ |
1027| modelFile           | string                                  | Yes  | File path of the training models.                                        |
1028| quantizationType    | [QuantizationType](#quantizationtype12) | No  | Quantization type. The default value is **NO_QUANT**.                                  |
1029| exportInferenceOnly | boolean                                 | No  | Whether to export inference models only. The value **true** means to export only inference models, and the value **false** means to export both training and inference models. The default value is **true**.|
1030| outputTensorName    | string[]                                | No  | Name of the output tensor of the exported training model. The default value is an empty string array, which indicates full export.|
1031
1032**Return value**
1033
1034| Type   | Description                                                        |
1035| ------- | ------------------------------------------------------------ |
1036| boolean | Result indicating whether the operation is successful. The value **true** indicates that the operation is successful, and the value **false** indicates the opposite.|
1037
1038**Example**
1039
1040```ts
1041let modelFile = '/path/to/xxx.ms';
1042let newPath = '/newpath/to';
1043mindSporeLite.loadTrainModelFromFile(modelFile).then((mindSporeLiteModel: mindSporeLite.Model) => {
1044  mindSporeLiteModel.trainMode = true;
1045  let ret = mindSporeLiteModel.exportModel(newPath + "/new_model.ms", mindSporeLite.QuantizationType.NO_QUANT, true);
1046  if (ret == false) {
1047    console.error('MS_LITE exportModel failed.')
1048  }
1049})
1050```
1051
1052
1053### exportWeightsCollaborateWithMicro<sup>12+</sup>
1054
1055exportWeightsCollaborateWithMicro(weightFile: string, isInference?: boolean, enableFp16?: boolean, changeableWeightsName?: string[]): boolean;
1056
1057Exports model weights for micro inference. This API is available only for on-device training.
1058
1059Micro inference is a ultra-lightweight micro AI deployment solution provided by MindSpore Lite to deploy hardware backends for Micro Controller Units (MCUs). This solution directly converts models into lightweight code in offline mode, eliminating the need for online model parsing and graph compilation.
1060
1061**System capability**: SystemCapability.AI.MindSporeLite
1062
1063**Parameters**
1064
1065| Name               | Type    | Mandatory| Description                                                        |
1066| --------------------- | -------- | ---- | ------------------------------------------------------------ |
1067| weightFile            | string   | Yes  | Path of the weight file.                                              |
1068| isInference           | boolean  | No  | Whether to export weights from the inference model. The value **true** means to export weights from the inference model. The default value is **true**. Currently, only **true** is supported.|
1069| enableFp16            | boolean  | No  | Whether to store floating-point weights in float16 format. The value **true** means to store floating-point weights in float16 format, and the value **false** means the opposite. The default value is **false**.|
1070| changeableWeightsName | string[] | No  | Name of the variable weight. The default value is an empty string array.                    |
1071
1072**Return value**
1073
1074| Type   | Description                                                        |
1075| ------- | ------------------------------------------------------------ |
1076| boolean | Result indicating whether the operation is successful. The value **true** indicates that the operation is successful, and the value **false** indicates the opposite.|
1077
1078**Example**
1079
1080```ts
1081let modelFile = '/path/to/xxx.ms';
1082let microWeight = '/path/to/xxx.bin';
1083mindSporeLite.loadTrainModelFromFile(modelFile).then((mindSporeLiteModel: mindSporeLite.Model) => {
1084  let ret = mindSporeLiteModel.exportWeightsCollaborateWithMicro(microWeight);
1085  if (ret == false) {
1086    console.error('MSLITE exportWeightsCollaborateWithMicro failed.')
1087  }
1088})
1089```
1090
1091## MSTensor
1092
1093Represents an **MSTensor** instance, with properties and APIs defined. It is a special data structure similar to arrays and matrices. It is the basic data structure used in MindSpore Lite network operations.
1094
1095In the following sample code, you first need to use [getInputs()](#getinputs) to obtain an **MSTensor** instance before calling related APIs.
1096
1097### Attributes
1098
1099**System capability**: SystemCapability.AI.MindSporeLite
1100
1101| Name      | Type                 | Read Only| Optional| Description                  |
1102| ---------- | --------------------- | ---- | ---- | ---------------------- |
1103| name       | string                | No  | No  | Tensor name.          |
1104| shape      | number[]              | No  | No  | Tensor dimension array.      |
1105| elementNum | number                | No  | No  | Length of the tensor dimension array.|
1106| dataSize   | number                | No  | No  | Length of tensor data.    |
1107| dtype      | [DataType](#datatype) | No  | No  | Tensor data type.      |
1108| format     | [Format](#format)     | No  | No  | Tensor data format.  |
1109
1110**Example**
1111
1112```ts
1113let modelFile = '/path/to/xxx.ms';
1114mindSporeLite.loadModelFromFile(modelFile).then((mindSporeLiteModel : mindSporeLite.Model) => {
1115  let modelInputs : mindSporeLite.MSTensor[] = mindSporeLiteModel.getInputs();
1116  console.info(modelInputs[0].name);
1117  console.info(modelInputs[0].shape.toString());
1118  console.info(modelInputs[0].elementNum.toString());
1119  console.info(modelInputs[0].dtype.toString());
1120  console.info(modelInputs[0].format.toString());
1121  console.info(modelInputs[0].dataSize.toString());
1122})
1123```
1124
1125### getData
1126
1127getData(): ArrayBuffer
1128
1129Obtains tensor data.
1130
1131**System capability**: SystemCapability.AI.MindSporeLite
1132
1133**Return value**
1134
1135| Type       | Description                |
1136| ----------- | -------------------- |
1137| ArrayBuffer | Pointer to the tensor data.|
1138
1139**Example**
1140
1141```ts
1142import { resourceManager } from '@kit.LocalizationKit';
1143import { GlobalContext } from '../GlobalContext';
1144import { mindSporeLite } from '@kit.MindSporeLiteKit';
1145import { common } from '@kit.AbilityKit';
1146export class Test {
1147  value:number = 0;
1148  foo(): void {
1149    GlobalContext.getContext().setObject("value", this.value);
1150  }
1151}
1152let globalContext = GlobalContext.getContext().getObject("value") as common.UIAbilityContext;
1153let inputName = 'input_data.bin';
1154globalContext.resourceManager.getRawFileContent(inputName).then(async (buffer : Uint8Array) => {
1155  let inputBuffer = buffer.buffer;
1156  let modelFile = '/path/to/xxx.ms';
1157  let mindSporeLiteModel : mindSporeLite.Model = await mindSporeLite.loadModelFromFile(modelFile);
1158  let modelInputs : mindSporeLite.MSTensor[] = mindSporeLiteModel.getInputs();
1159  modelInputs[0].setData(inputBuffer);
1160  mindSporeLiteModel.predict(modelInputs).then((mindSporeLiteTensor : mindSporeLite.MSTensor[]) => {
1161    let output = new Float32Array(mindSporeLiteTensor[0].getData());
1162    for (let i = 0; i < output.length; i++) {
1163      console.info(output[i].toString());
1164    }
1165  })
1166})
1167```
1168
1169### setData
1170
1171setData(inputArray: ArrayBuffer): void
1172
1173Sets the tensor data.
1174
1175**System capability**: SystemCapability.AI.MindSporeLite
1176
1177**Parameters**
1178
1179| Name    | Type       | Mandatory| Description                  |
1180| ---------- | ----------- | ---- | ---------------------- |
1181| inputArray | ArrayBuffer | Yes  | Input data buffer of the tensor.|
1182
1183**Example**
1184
1185```ts
1186import { resourceManager } from '@kit.LocalizationKit';
1187import { GlobalContext } from '../GlobalContext';
1188import { mindSporeLite } from '@kit.MindSporeLiteKit';
1189import { common } from '@kit.AbilityKit';
1190export class Test {
1191  value:number = 0;
1192  foo(): void {
1193    GlobalContext.getContext().setObject("value", this.value);
1194  }
1195}
1196let globalContext = GlobalContext.getContext().getObject("value") as common.UIAbilityContext;
1197let inputName = 'input_data.bin';
1198globalContext.resourceManager.getRawFileContent(inputName).then(async (buffer : Uint8Array) => {
1199  let inputBuffer = buffer.buffer;
1200  let modelFile = '/path/to/xxx.ms';
1201  let mindSporeLiteModel : mindSporeLite.Model = await mindSporeLite.loadModelFromFile(modelFile);
1202  let modelInputs : mindSporeLite.MSTensor[] = mindSporeLiteModel.getInputs();
1203  modelInputs[0].setData(inputBuffer);
1204})
1205```
1206
1207## DataType
1208
1209Tensor data type.
1210
1211**System capability**: SystemCapability.AI.MindSporeLite
1212
1213| Name               | Value  | Description               |
1214| ------------------- | ---- | ------------------- |
1215| TYPE_UNKNOWN        | 0    | Unknown type.         |
1216| NUMBER_TYPE_INT8    | 32   | Int8 type.   |
1217| NUMBER_TYPE_INT16   | 33   | Int16 type.  |
1218| NUMBER_TYPE_INT32   | 34   | Int32 type.  |
1219| NUMBER_TYPE_INT64   | 35   | Int64 type.  |
1220| NUMBER_TYPE_UINT8   | 37   | UInt8 type.  |
1221| NUMBER_TYPE_UINT16  | 38   | UInt16 type. |
1222| NUMBER_TYPE_UINT32  | 39   | UInt32 type. |
1223| NUMBER_TYPE_UINT64  | 40   | UInt64 type. |
1224| NUMBER_TYPE_FLOAT16 | 42   | Float16 type.|
1225| NUMBER_TYPE_FLOAT32 | 43   | Float32 type.|
1226| NUMBER_TYPE_FLOAT64 | 44   | Float64 type.|
1227
1228## Format
1229
1230Enumerates tensor data formats.
1231
1232**System capability**: SystemCapability.AI.MindSporeLite
1233
1234| Name          | Value  | Description                 |
1235| -------------- | ---- | --------------------- |
1236| DEFAULT_FORMAT | -1   | Unknown data format.   |
1237| NCHW           | 0    | NCHW format. |
1238| NHWC           | 1    | NHWC format. |
1239| NHWC4          | 2    | NHWC4 format.|
1240| HWKC           | 3    | HWKC format. |
1241| HWCK           | 4    | HWCK format. |
1242| KCHW           | 5    | KCHW format. |
1243