# Introduction to MindSpore Lite Kit

## Use Cases

MindSpore Lite is a lightweight AI engine built into OpenHarmony. Its open AI framework comes with a multi-processor architecture to empower intelligent applications in all scenarios. It provides data scientists, algorithm engineers, and developers with a friendly development experience, efficient execution, and flexible deployment, helping to build a prosperous open-source ecosystem of AI hardware and software applications.

So far, MindSpore Lite has been widely used in applications such as image classification, target recognition, facial recognition, and character recognition. Typical use cases are as follows:

- Image classification: determines the category to which an image (such as an image of a cat, a dog, an airplane, or a car) belongs. This is the most basic computer vision application and belongs to the supervised learning category.
- Target recognition: uses a preset object detection model to identify objects in the input frames of a camera, label them, and mark them with bounding boxes.
- Image segmentation: detects the positions of objects in an image, or determines the object to which each pixel in the image belongs.

## Advantages

MindSpore Lite provides AI model inference capabilities for hardware devices and end-to-end solutions for developers to empower intelligent applications in all scenarios. Its advantages include:

- High performance: Efficient kernel algorithms and assembly-level optimization support high-performance inference on CPUs and on dedicated chips connected through NNRt, maximizing computing power while minimizing inference latency and power consumption.
- Lightweight: Provides an ultra-lightweight solution and supports model quantization and compression, so that smaller models run faster and AI models can be deployed in highly constrained environments.
- All-scenario support: Supports different types of operating systems and embedded systems to adapt to AI applications on various intelligent devices.
- Efficient deployment: Supports MindSpore, TensorFlow Lite, Caffe, and ONNX models, provides capabilities such as model compression and data processing, and supports unified training and inference IR.

## Development Process

**Figure 1** Development process for MindSpore Lite model inference
![mindspore workflow](figures/mindspore_workflow.png)

The MindSpore Lite development process consists of two phases:

- Model conversion

  MindSpore Lite uses models in `.ms` format for inference. You can use the model conversion tool provided by MindSpore Lite to convert third-party framework models, such as TensorFlow, TensorFlow Lite, Caffe, and ONNX, into `.ms` models. For details, see [Using MindSpore Lite for Model Conversion](./mindspore-lite-converter-guidelines.md).

- Model deployment

  You can call the MindSpore Lite runtime APIs to implement model inference or training. The procedure is as follows (see the sketch after this list):

    1. Create the inference or training context, including the target hardware and the number of threads.
    2. Load the `.ms` model file.
    3. Set the model input data.
    4. Perform inference or training and read the output.

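  The following is a minimal ArkTS sketch of this procedure. It assumes the `@ohos.ai.mindSporeLite` module; `modelPath` and `inputBuffer` are application-specific placeholders, and error handling is omitted. For the complete, authoritative procedure, see the development guidelines referenced in the next section.

  ```ts
  import mindSporeLite from '@ohos.ai.mindSporeLite';

  async function runInference(modelPath: string, inputBuffer: ArrayBuffer): Promise<ArrayBuffer> {
    // 1. Create the inference context: target hardware and number of threads.
    const context: mindSporeLite.Context = {
      target: ['cpu'],
      cpu: { threadNum: 2 }
    };

    // 2. Load the .ms model file.
    const model: mindSporeLite.Model = await mindSporeLite.loadModelFromFile(modelPath, context);

    // 3. Set the model input data (it must match the shape and data type of the input tensor).
    const inputs: mindSporeLite.MSTensor[] = model.getInputs();
    inputs[0].setData(inputBuffer);

    // 4. Perform inference and read the output.
    const outputs: mindSporeLite.MSTensor[] = await model.predict(inputs);
    return outputs[0].getData();
  }
  ```
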
## Development Mode

MindSpore Lite is built into the OpenHarmony standard system as a system component. You can develop AI applications based on MindSpore Lite in the following ways:

- Method 1: [Using MindSpore Lite for Image Classification (ArkTS)](./mindspore-guidelines-based-js.md). You can directly call the MindSpore Lite ArkTS APIs in the UI code to load the AI model and perform model inference. The advantage of this method is that you can quickly verify the inference effect.
- Method 2: [Using MindSpore Lite native APIs to develop AI applications](./mindspore-guidelines-based-native.md). You can encapsulate the algorithm models and the code for calling MindSpore Lite native APIs into a dynamic library, and then use N-API to encapsulate the dynamic library into ArkTS APIs that the UI can call.

## Relationship with Other Kits

Neural Network Runtime (NNRt) functions as a bridge that connects the upper-layer AI inference framework to the underlying acceleration chips, implementing cross-chip inference computing of AI models.

MindSpore Lite natively allows you to configure NNRt so that AI-dedicated chips (such as NPUs) can accelerate inference. Therefore, you can configure MindSpore Lite to use the NNRt hardware, as in the sketch below. This topic focuses on how to develop AI applications using MindSpore Lite. For details about how to use NNRt, see [Connecting the Neural Network Runtime to an AI Inference Framework](../nnrt/neural-network-runtime-guidelines.md).

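The following is a minimal ArkTS sketch of selecting NNRt when loading a model. It assumes that the `target` field of the `@ohos.ai.mindSporeLite` context accepts `'nnrt'`; device-specific NNRt options are omitted, and `modelPath` is an application-specific placeholder.

```ts
import mindSporeLite from '@ohos.ai.mindSporeLite';

// Sketch: prefer the NNRt backend so that inference can be dispatched to an
// AI-dedicated chip (such as an NPU) through Neural Network Runtime.
// The 'nnrt' target value is an assumption; the default target is the CPU.
async function loadWithNnrt(modelPath: string): Promise<mindSporeLite.Model> {
  const context: mindSporeLite.Context = {
    target: ['nnrt']
  };
  return mindSporeLite.loadModelFromFile(modelPath, context);
}
```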