Lines matching `refs:hardware`
…work or application developers who want to directly use AI acceleration hardware to accelerate mod…
…nference on the AI acceleration hardware connected to NNRt. This implementation realizes unperceiv…
…hardware to perform model inference on NNRt, without the need to call the NNRt model composition A…
…hardware-specific model objects on the underlying AI hardware driver through the NNRt build API. T…
…ct, set the input and output tensors for inference, and execute model inference on the AI hardware.
…hardware driver and assign the shared memory to tensors. The input and output tensors must contain…
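Assembled end to end, that flow looks roughly like the sketch below, written against the NNRt native C API from OpenHarmony's `neural_network_runtime.h`. The composed `model` handle, the 1x3x224x224 input shape, and the output buffer size are placeholders, not values taken from this document:

```cpp
// Sketch: build a hardware-specific model object, then run inference.
// Assumes `model` is an already composed OH_NNModel* and deviceID comes
// from device management (see the device sketch below).
#include "neural_network_runtime/neural_network_runtime.h"

OH_NN_ReturnCode RunInference(OH_NNModel *model, size_t deviceID)
{
    // Build a hardware-specific model object on the device driver.
    OH_NNCompilation *compilation = OH_NNCompilation_Construct(model);
    OH_NNCompilation_SetDevice(compilation, deviceID);
    if (OH_NNCompilation_Build(compilation) != OH_NN_SUCCESS) {
        OH_NNCompilation_Destroy(&compilation);
        return OH_NN_FAILED;
    }

    // Create an executor from the built model object.
    OH_NNExecutor *executor = OH_NNExecutor_Construct(compilation);

    // Describe and set one float32 input tensor (placeholder shape/sizes).
    const int32_t dims[] = {1, 3, 224, 224};
    OH_NN_Tensor input = {OH_NN_FLOAT32, 4, dims, nullptr, OH_NN_TENSOR};
    static float inputData[1 * 3 * 224 * 224] = {};
    static float outputData[1000] = {};  // placeholder output size
    OH_NNExecutor_SetInput(executor, 0, &input, inputData, sizeof(inputData));
    OH_NNExecutor_SetOutput(executor, 0, outputData, sizeof(outputData));

    // Execute model inference on the AI hardware.
    OH_NN_ReturnCode ret = OH_NNExecutor_Run(executor);

    OH_NNExecutor_Destroy(&executor);
    OH_NNCompilation_Destroy(&compilation);
    return ret;
}
```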
…ement**: Display information about the AI hardware connected to NNRt and allow selection of th…
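For the hardware-management piece, querying the connected AI hardware goes through the `OH_NNDevice` functions from the same header; a minimal sketch (error handling trimmed, and picking the first device is an arbitrary policy, not a recommendation):

```cpp
// Sketch: enumerate the AI hardware connected to NNRt and pick a device.
#include <cstdio>
#include "neural_network_runtime/neural_network_runtime.h"

size_t PickFirstDevice()
{
    const size_t *allDevicesID = nullptr;
    uint32_t deviceCount = 0;
    if (OH_NNDevice_GetAllDevicesID(&allDevicesID, &deviceCount) != OH_NN_SUCCESS ||
        deviceCount == 0) {
        return 0;  // sentinel used here for "no device"; choose your own policy
    }
    for (uint32_t i = 0; i < deviceCount; ++i) {
        const char *name = nullptr;
        OH_NNDevice_GetName(allDevicesID[i], &name);
        printf("device %u: %s\n", i, name);
    }
    return allDevicesID[0];
}
```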
…hardware for inference. Specifically, use the model converter provided by the AI hardware vendor t…
…ns unified AI acceleration hardware inference APIs for the AI inference framework to implement unp…
…hardware-specific offline model loading function, which shortens the model build time. However, in…
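Where the vendor toolchain has already produced an offline model, newer NNRt versions expose compilation constructors that load it directly, skipping online composition. A sketch, assuming the API 11-era `OH_NNCompilation_ConstructWithOfflineModelFile` entry point (header placement varies by version) and a hypothetical model path:

```cpp
// Sketch: build from a vendor offline model instead of composing a graph.
// The file path is a placeholder; header/constructor names follow the
// API 11-era NNRt interface and should be checked against your SDK.
#include "neural_network_runtime/neural_network_core.h"

OH_NNCompilation *LoadOfflineModel(size_t deviceID)
{
    OH_NNCompilation *compilation =
        OH_NNCompilation_ConstructWithOfflineModelFile("/data/local/tmp/model.om");
    OH_NNCompilation_SetDevice(compilation, deviceID);
    // Building from an offline model skips graph lowering, so it is
    // typically much faster than an online build.
    if (OH_NNCompilation_Build(compilation) != OH_NN_SUCCESS) {
        OH_NNCompilation_Destroy(&compilation);
        return nullptr;
    }
    return compilation;
}
```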
…mon hardware attributes such as inference priority, performance mode, and FP16 mode, and suppo…
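Those attributes correspond to per-compilation setters applied before the build; a sketch using enum values from the public header (drivers that do not support an attribute are expected to reject the call with an error code):

```cpp
// Sketch: set common hardware attributes on a compilation before building.
#include "neural_network_runtime/neural_network_runtime.h"

void ConfigureCompilation(OH_NNCompilation *compilation)
{
    OH_NNCompilation_SetPriority(compilation, OH_NN_PRIORITY_HIGH);           // inference priority
    OH_NNCompilation_SetPerformanceMode(compilation, OH_NN_PERFORMANCE_HIGH); // performance mode
    OH_NNCompilation_EnableFloat16(compilation, true);                        // FP16 mode
}
```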
- Implements zero-copy data transfer by requesting shared memory from the AI hardware driver, improving t…
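The zero-copy bullet above maps to driver-allocated `OH_NN_Memory` buffers bound to tensors in place of ordinary user buffers. A minimal sketch, assuming an executor whose model has one float32 input and one output, with descriptor and sizes supplied by the caller:

```cpp
// Sketch: zero-copy I/O via shared memory allocated on the hardware driver.
#include <cstring>
#include "neural_network_runtime/neural_network_runtime.h"

OH_NN_ReturnCode RunZeroCopy(OH_NNExecutor *executor, const OH_NN_Tensor *inputDesc,
                             const float *src, size_t inputBytes, size_t outputBytes)
{
    // Request shared memory from the AI hardware driver for input and output.
    OH_NN_Memory *inMem = OH_NNExecutor_AllocateInputMemory(executor, 0, inputBytes);
    OH_NN_Memory *outMem = OH_NNExecutor_AllocateOutputMemory(executor, 0, outputBytes);
    if (inMem == nullptr || outMem == nullptr) {
        return OH_NN_MEMORY_ERROR;
    }

    // Fill the input once; the driver reads it in place, so no extra copy
    // happens between the runtime and the device.
    memcpy(inMem->data, src, inputBytes);
    OH_NNExecutor_SetInputWithMemory(executor, 0, inputDesc, inMem);
    OH_NNExecutor_SetOutputWithMemory(executor, 0, outMem);

    OH_NN_ReturnCode ret = OH_NNExecutor_Run(executor);
    // After the run, results are available directly in outMem->data.

    OH_NNExecutor_DestroyInputMemory(executor, 0, &inMem);
    OH_NNExecutor_DestroyOutputMemory(executor, 0, &outMem);
    return ret;
}
```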
…e AI inference capability of the underlying AI acceleration hardware, but not common hardware such…
…hardware attribute configurations commonly shared by AI hardware, such as build, execution, memory…
…odel image to connect to the underlying AI hardware. The specific operators are implemented in the…
…sition. Whether NNRt supports concurrent build and execution depends on the underlying hardware driver.
…el inference between general-purpose computing hardware (CPUs/GPUs) and dedicated AI acceleration …