
…nnect the upper-layer AI inference framework and underlying acceleration chips, implementing cross…
…Is are intended for developers of the AI inference framework or application developers who want to…
The AI inference framework can call the NNRt image composition API to convert its model image into an …
The AI inference framework or application can also directly use the offline model dedicated to the spe…
1. **Online image composition**: Have the AI inference framework call the NNRt image composition AP…
… objects on the underlying AI hardware driver through the NNRt build API. This way, model inferenc…
…ct, set the input and output tensors for inference, and execute model inference on the AI hardware.
AI hardware driver and assign the shared memory to tensors. The input and output tensors must cont…
… management**: Display information about the AI hardware connected to NNRt and allow for selection…
AI hardware for inference. Specifically, use the model converter provided by the AI hardware vendo…
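The online image composition and inference steps above could be sketched roughly as follows, assuming the OpenHarmony NNRt C API (`neural_network_runtime.h`). This is an illustrative outline, not a complete program: the tensor and operator setup for a concrete model graph is elided, and the code requires the OpenHarmony SDK to build.

```c
#include "neural_network_runtime/neural_network_runtime.h"

/* Sketch of the online composition flow: compose a model image,
 * build it into inference objects on the AI hardware driver, then
 * create an executor and run inference. Assumes the NNRt C API. */
int run_inference(void)
{
    /* 1. Compose the model through the NNRt image composition API. */
    OH_NNModel *model = OH_NNModel_Construct();
    /* ... add tensors and operations for the concrete graph here ... */
    if (OH_NNModel_Finish(model) != OH_NN_SUCCESS) {
        return -1;
    }

    /* 2. Query the AI hardware connected to NNRt and pick a device. */
    const size_t *devices = NULL;
    uint32_t deviceCount = 0;
    OH_NNDevice_GetAllDevicesID(&devices, &deviceCount);

    /* 3. Build inference objects on the device driver via the build API. */
    OH_NNCompilation *compilation = OH_NNCompilation_Construct(model);
    OH_NNCompilation_SetDevice(compilation, devices[0]);
    if (OH_NNCompilation_Build(compilation) != OH_NN_SUCCESS) {
        return -1;
    }

    /* 4. Create an executor, set input/output tensors, run inference. */
    OH_NNExecutor *executor = OH_NNExecutor_Construct(compilation);
    /* ... set input and output tensors for the concrete model ... */
    OH_NN_ReturnCode ret = OH_NNExecutor_Run(executor);

    OH_NNExecutor_Destroy(&executor);
    OH_NNCompilation_Destroy(&compilation);
    OH_NNModel_Destroy(&model);
    return ret == OH_NN_SUCCESS ? 0 : -1;
}
```

The build step is where NNRt hands the composed model to the underlying AI hardware driver; on hardware that supports model caching, a cached build can shorten subsequent loads.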
- Opens unified AI acceleration hardware inference APIs for the AI inference framework to implement…
- Provides image composition APIs for the AI inference framework to pass internal model images to N…
…ens the model build time. However, inference can only be executed on the corresponding AI hardware.
- Implements zero copy of data by applying for shared memory on the AI hardware driver, improving t…
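The zero-copy path mentioned above could look roughly like the following sketch, again assuming the OpenHarmony NNRt C API. The `executor`, `inputTensor`, `inputData`, and the buffer lengths are placeholders for values that come from the concrete model; the fragment requires the OpenHarmony SDK and is illustrative only.

```c
/* Request shared memory from the AI hardware driver and bind it to
 * the input/output tensors, so the framework writes directly into
 * the driver-shared buffer instead of copying data across. */
OH_NN_Memory *inputMem = OH_NNExecutor_AllocateInputMemory(executor, 0, inputLen);
OH_NN_Memory *outputMem = OH_NNExecutor_AllocateOutputMemory(executor, 0, outputLen);

/* Fill the input buffer in place: no extra copy between the
 * framework's memory and the device's memory. */
memcpy(inputMem->data, inputData, inputLen);

OH_NNExecutor_SetInputWithMemory(executor, 0, &inputTensor, inputMem);
OH_NNExecutor_SetOutputWithMemory(executor, 0, outputMem);
OH_NNExecutor_Run(executor);
/* Results are read from outputMem->data without a copy back. */

OH_NNExecutor_DestroyInputMemory(executor, 0, &inputMem);
OH_NNExecutor_DestroyOutputMemory(executor, 0, &outputMem);
```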
- NNRt provides only the AI inference capability of the underlying AI acceleration hardware, but no…
AI inference capabilities and hardware attribute configurations commonly shared by AI hardware, su…
…rnal model image to connect to the underlying AI hardware. The specific operators are implemented …
…using MindSpore Lite to load models on NNRt is faster than using any other AI inference framework.
…rence between general-purpose computing hardware (CPUs/GPUs) and dedicated AI acceleration hardwar…