
Searched refs:tensors (Results 1 – 22 of 22) sorted by relevance

/ohos5.0/foundation/ai/neural_network_runtime/test/system_test/
end_to_end_test.cpp 47 OH_NN_ReturnCode End2EndTest::BuildModel(const std::vector<CppTensor>& tensors) in BuildModel() argument
55 OH_NN_ReturnCode status = AddTensors(tensors); in BuildModel()
136 std::vector<CppTensor> tensors{addend1, addend2, activation, immediateTensor, output}; variable
138 ASSERT_EQ(OH_NN_SUCCESS, BuildModel(tensors));
193 ASSERT_EQ(OH_NN_SUCCESS, BuildModel(tensors));
262 ASSERT_EQ(OH_NN_SUCCESS, AddTensors(tensors));
317 ASSERT_EQ(OH_NN_SUCCESS, BuildModel(tensors));
371 ASSERT_EQ(OH_NN_SUCCESS, BuildModel(tensors));
448 ASSERT_EQ(OH_NN_SUCCESS, BuildModel(tensors));
514 ASSERT_EQ(OH_NN_SUCCESS, BuildModel(tensors));
[all …]
end_to_end_test.h 33 OH_NN_ReturnCode BuildModel(const std::vector<CppTensor>& tensors);
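The BuildModel()/AddTensors() helpers above come from the NNRt system test. As a rough illustration of what adding a batch of tensors to a model looks like with the NNRt C API, here is a minimal sketch; it operates on OH_NN_Tensor descriptors directly rather than the test's CppTensor type, and the helper name and include path are assumptions of this example, not the test's actual code.

```cpp
// Minimal sketch, not the test's implementation: add a list of tensor
// descriptors to an OH_NNModel and stop at the first failure, mirroring the
// AddTensors()/BuildModel() pattern shown in end_to_end_test.cpp.
#include <vector>
#include <neural_network_runtime/neural_network_runtime.h>

OH_NN_ReturnCode AddTensorsSketch(OH_NNModel* model,
                                  const std::vector<OH_NN_Tensor>& tensors)
{
    for (const OH_NN_Tensor& tensor : tensors) {
        OH_NN_ReturnCode status = OH_NNModel_AddTensor(model, &tensor);
        if (status != OH_NN_SUCCESS) {
            return status;  // propagate the error code; the test asserts on it
        }
    }
    return OH_NN_SUCCESS;
}
```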
/ohos5.0/foundation/ai/neural_network_runtime/example/deep_learning_framework/tflite/delegates/nnrt_delegate/
nnrt_delegate_kernel.cpp 198 … if ((i != kTfLiteOptionalTensor) && (context->tensors[i].allocation_type != kTfLiteMmapRo) && in BuildGraph()
293 TfLiteTensor* tensor = &(context->tensors[indexPair.first]); in ConvertTensorTypeToNn()
332 TfLiteIntArray* tensors = node->inputs; in SetInputTensors() local
333 TF_LITE_ENSURE_EQ(context, tensors != nullptr, true); in SetInputTensors()
335 for (auto absoluteIndex : TfLiteIntArrayView(tensors)) { in SetInputTensors()
343 TfLiteTensor* tensor = &context->tensors[absoluteIndex]; in SetInputTensors()
368 TfLiteIntArray* tensors = node->outputs; in SetOutputTensors() local
369 TF_LITE_ENSURE_EQ(context, tensors != nullptr, true); in SetOutputTensors()
370 for (auto absoluteIndex : TfLiteIntArrayView(tensors)) { in SetOutputTensors()
375 TfLiteTensor* tensor = &context->tensors[absoluteIndex]; in SetOutputTensors()
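The SetInputTensors()/SetOutputTensors() hits above all follow the same pattern: walk the node's index array and resolve each index against context->tensors, skipping optional slots. A hedged sketch of that pattern using only standard TensorFlow Lite types; the helper itself is illustrative, not the delegate's code.

```cpp
// Illustrative helper (assumed name): collect the non-optional input tensors of
// a delegated node, the same walk done by SetInputTensors() above.
#include <vector>
#include "tensorflow/lite/c/common.h"
#include "tensorflow/lite/context_util.h"

TfLiteStatus CollectInputTensors(TfLiteContext* context, TfLiteNode* node,
                                 std::vector<TfLiteTensor*>* inputs)
{
    TfLiteIntArray* tensors = node->inputs;  // absolute indices into context->tensors
    if (tensors == nullptr) {
        return kTfLiteError;
    }
    for (int absoluteIndex : tflite::TfLiteIntArrayView(tensors)) {
        if (absoluteIndex == kTfLiteOptionalTensor) {
            continue;  // optional input that the model did not supply
        }
        inputs->push_back(&context->tensors[absoluteIndex]);
    }
    return kTfLiteOk;
}
```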
nnrt_op_builder.cpp 48 …if (i != kTfLiteOptionalTensor && opBuilderArgs.context->tensors[i].allocation_type != kTfLiteMmap… in NnrtOpBuilder()
59 TfLiteTensor* biasTensor = &mappingArgs.context->tensors[biasIndex]; in AddZerosBias()
60 const auto inputType = mappingArgs.context->tensors[inputId].type; in AddZerosBias()
82 const TfLiteTensor& inputTensor = mappingArgs.context->tensors[inputId]; in AddZerosBias()
83 const TfLiteTensor& filterTensor = mappingArgs.context->tensors[filterId]; in AddZerosBias()
162 …const int32_t numUnits = mappingArgs.context->tensors[filterTensorId].dims->data[0]; // bias chan… in AddFullConnectedParams()
336 TfLiteTensor* tensor = &(m_context->tensors[tensorIndex]); in AddTensor()
352 TfLiteTensor* tensor = &(m_context->tensors[tensorIndex]); in AddTensor()
408 TfLiteTensor* tensor = &(m_context->tensors[tensorIndex]); in TransposeDepthwiseTensor()
436 TfLiteTensor* tensor = &(m_context->tensors[tensorIndex]); in ConstructNNTensor()
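The AddZerosBias() hits above synthesize a bias tensor when a fused operator has none; its length comes from the filter tensor's output-channel dimension (dims->data[0]). A simplified sketch of just that shape logic; the std::vector representation and helper name are illustrative, not the builder's code.

```cpp
// Illustrative only: derive the zero-bias length from the filter tensor, as the
// numUnits line in AddFullConnectedParams()/AddZerosBias() does above.
#include <cstdint>
#include <vector>
#include "tensorflow/lite/c/common.h"

std::vector<float> MakeZeroBias(const TfLiteTensor& filterTensor)
{
    const int32_t numUnits = filterTensor.dims->data[0];  // bias channel count
    return std::vector<float>(static_cast<size_t>(numUnits), 0.0f);
}
```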
tensor_mapping.h 95 TfLiteTensor* tensor = &(context->tensors[tensorIndex]); in ConvertQuantParams()
128 TfLiteTensor* tensor = &(context->tensors[tensorIndex]); in ConvertType()
nnrt_utils.h 116 TfLiteTensor* tensor = &(context->tensors[tensorIndex]); in TransposeTensor()
/ohos5.0/drivers/interface/nnrt/v2_1/
ModelTypes.idl 37 …contained in the subgraph, the input and output tensors of the operator, and the input and output
41 …* - 4. When the tensors input by the user are passed to the model, the NNRt module performs model …
171 …* Array of all tensors in the model. The array contains input tensors, output tensors, and constan…
IPreparedModel.idl 71 …* The first dimension of the array indicates the number of tensors, and the second dimension indic…
72 * the number of dimensions of the tensors.
74 …* The first dimension of the array indicates the number of tensors, and the second dimension indic…
75 * the number of dimensions of the tensors.
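The IPreparedModel comment above describes a two-level dims array: the outer index selects a tensor, the inner index a dimension of that tensor. A small illustration with hypothetical shapes not taken from the IDL:

```cpp
// dims[i][j] = size of dimension j of tensor i (shapes here are made up).
#include <cstdint>
#include <vector>

std::vector<std::vector<int32_t>> dims = {
    { 1, 3, 224, 224 },  // tensor 0 has four dimensions
    { 1, 1001 },         // tensor 1 has two dimensions
};
```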
NodeAttrTypes.idl 402 …* @brief Connects tensors in the specified axis or connects input tensors along with the given axi…
412 * * Result of the tensors connected.
749 * on the input tensors <b>x1</b> and <b>x2</b>.
978 * on the input tensors <b>x1</b> and <b>x2</b>.
1000 * on the input tensors <b>x1</b> and <b>x2</b>.
1106 * on the input tensors <b>x1</b> and <b>x2</b>.
1373 …* The input must be two tensors or one tensor and one scalar. When the input is two tensors, the d…
1440 …* The input must be two tensors or one tensor and one scalar. When the input is two tensors, the d…
1466 * The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors,
1537 * on the input tensors <b>x1</b> and <b>x2</b>.
[all …]
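As a worked example of the Concat semantics referenced above (concatenating the input tensors along a given axis), the output shape keeps every axis unchanged except the concatenation axis, whose sizes are summed. A small illustrative helper, not part of the IDL:

```cpp
// Illustrative shape rule for Concat: sum the concatenation axis, keep the rest.
// All inputs are assumed to already agree on the non-concatenated axes.
#include <cstddef>
#include <cstdint>
#include <vector>

std::vector<int64_t> ConcatOutputShape(const std::vector<std::vector<int64_t>>& shapes,
                                       size_t axis)
{
    std::vector<int64_t> out = shapes.front();
    out[axis] = 0;
    for (const auto& shape : shapes) {
        out[axis] += shape[axis];
    }
    return out;
}
```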
NnrtTypes.idl 252 * @brief Defines the input and output tensors of an AI model.
844 /** Product of the elements of two tensors */
846 /** Difference between the elements of two tensors */
848 /** Maximum value of the elements of two tensors */
/ohos5.0/docs/en/application-dev/reference/apis-neural-network-runtime-kit/
_neural_network_runtime.md 582 …tensors in a specified dimension.<br>Input:<br>- **input**: *n* input tensors.<br>Parameters:<br>-…
1655 Obtains the number of input tensors.
1716 Obtains the number of output tensors.
1822 | inputTensor | Array of input tensors.|
1823 | inputCount | Number of input tensors.|
1824 | outputTensor | Array of output tensors.|
1825 | outputCount | Number of output tensors.|
1855 | inputTensor | Array of input tensors.|
1856 | inputCount | Number of input tensors.|
1857 | outputTensor | Array of output tensors.|
[all …]
neural__network__core_8h.md 66 …runtime.md#oh_nnexecutor) \*executor, size_t \*inputCount) | Obtains the number of input tensors.|
67 …ntime.md#oh_nnexecutor) \*executor, size_t \*outputCount) | Obtains the number of output tensors.|
70 …ize_t \*\*maxInputDims, size_t \*shapeLength) | Obtains the dimension range of all input tensors.|
neural__network__runtime_8h.md 39 …t32_array.md) \*outputIndices) | Sets an index value for the input and output tensors of a model.|
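The neural_network_core.h entries above document OH_NNExecutor_GetInputCount() and OH_NNExecutor_GetOutputCount() with the signatures shown. A minimal sketch of querying both counts; the wrapper function and the include path are assumptions of this example.

```cpp
// Query how many input and output tensors an executor exposes, using the
// signatures listed in the reference above.
#include <neural_network_runtime/neural_network_core.h>

OH_NN_ReturnCode QueryTensorCounts(const OH_NNExecutor* executor,
                                   size_t* inputCount, size_t* outputCount)
{
    OH_NN_ReturnCode ret = OH_NNExecutor_GetInputCount(executor, inputCount);
    if (ret != OH_NN_SUCCESS) {
        return ret;
    }
    return OH_NNExecutor_GetOutputCount(executor, outputCount);
}
```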
/ohos5.0/docs/en/application-dev/ai/nnrt/
neural-network-runtime-guidelines.md 58 …nsor | Tensor handle, which is used to set the inference input and output tensors of the executor.|
68 …_NN_UInt32Array *outputIndices) | Sets an index value for the input and output tensors of a model.|
115 … memory and tensor description. This way, the device shared memory of other tensors can be reused.|
129 …putCount(const OH_NNExecutor *executor, size_t *inputCount) | Obtains the number of input tensors.|
130 …tCount(const OH_NNExecutor *executor, size_t *outputCount) | Obtains the number of output tensors.|
133 …nputDims, size_t *shapeLength) |Obtains the dimension range of all input tensors. If the input ten…
474 // Obtain information about the input and output tensors from the executor.
475 // Obtain the number of input tensors.
487 // Obtain the number of output tensors.
499 // Create input and output tensors.
[all …]
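The guideline lines above walk through obtaining the input/output tensor counts and the dimension range of each input. A hedged sketch of that query loop, assuming the OH_NNExecutor_GetInputDimRange() parameters shown in the API table (executor, input index, min/max dimension arrays, shape length); the index parameter name, include path, and printing are illustrative.

```cpp
// Print the [min, max] size of every dimension of every input tensor, as the
// "obtain the number of input tensors" / "dimension range" steps above describe.
#include <cstdio>
#include <neural_network_runtime/neural_network_core.h>

void PrintInputDimRanges(const OH_NNExecutor* executor)
{
    size_t inputCount = 0;
    if (OH_NNExecutor_GetInputCount(executor, &inputCount) != OH_NN_SUCCESS) {
        return;
    }
    for (size_t i = 0; i < inputCount; ++i) {
        size_t* minDims = nullptr;
        size_t* maxDims = nullptr;
        size_t shapeLength = 0;
        if (OH_NNExecutor_GetInputDimRange(executor, i, &minDims, &maxDims,
                                           &shapeLength) != OH_NN_SUCCESS) {
            continue;
        }
        for (size_t j = 0; j < shapeLength; ++j) {
            printf("input %zu dim %zu: [%zu, %zu]\n", i, j, minDims[j], maxDims[j]);
        }
    }
}
```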
Neural-Network-Runtime-Kit-Introduction.md 17 …te an executor based on the built model object, set the input and output tensors for inference, an…
18 …on the AI hardware driver and assign the shared memory to tensors. The input and output tensors mu…
/ohos5.0/docs/en/application-dev/reference/apis-mindspore-lite-kit/
_mind_spore.md 148 …ts) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model) | Obtains all weight tensors of a model. This …
173 …sor) | Obtains a memory allocator. The allocator is responsible for allocating memory for tensors.|
1423 | num | Number of output tensors.|
1450 | num | Number of output tensors. |
1479 | num | Number of weight tensors with a variable shape.|
1843 Obtains all weight tensors of a model. This API is used only for on-device training.
1855 All weight tensors of the model.
1992 Updates the weight tensors of a model. This API is used only for on-device training.
2000 | new_weights | Weight tensors to be updated.|
2090 Obtains a memory allocator. The allocator is responsible for allocating memory for tensors.
[all …]
tensor_8h.md 6 Provides tensor-related APIs, which can be used to create tensors and modify tensor information. Th…
model_8h.md 71 …delHandle](_mind_spore.md#oh_ai_modelhandle) model) | Obtains all weight tensors of a model. This …
72 …](_o_h___a_i___tensor_handle_array.md) new_weights) | Updates the weight tensors of a model. This …
js-apis-mindSporeLite.md 903 Obtains all weight tensors of a model. This API is used only for on-device training.
948 | weights | [MSTensor](#mstensor)[] | Yes | List of weight tensors.|
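The MindSpore Lite entries above describe OH_AI_ModelGetWeights() and OH_AI_ModelUpdateWeights() for on-device training. A short sketch of listing a model's weight tensors; the OH_AI_TensorHandleArray field names (handle_num, handle_list), the OH_AI_TensorGetName() call, and the include paths follow my reading of the MindSpore Lite C API headers and should be treated as assumptions.

```cpp
// Enumerate the weight tensors of a training model and show where an update
// would go; listing only, no actual training step.
#include <cstdio>
#include <mindspore/model.h>
#include <mindspore/tensor.h>

void ListWeightTensors(OH_AI_ModelHandle model)
{
    OH_AI_TensorHandleArray weights = OH_AI_ModelGetWeights(model);
    for (size_t i = 0; i < weights.handle_num; ++i) {
        OH_AI_TensorHandle weight = weights.handle_list[i];
        printf("weight %zu: %s\n", i, OH_AI_TensorGetName(weight));
    }
    // After modifying tensor data in place, the updated array could be applied with:
    // OH_AI_ModelUpdateWeights(model, weights);
}
```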
/ohos5.0/foundation/ai/neural_network_runtime/example/deep_learning_framework/
README_zh.md 44 The main development steps include command-line argument parsing, creating the NNRt Delegate, replacing TFLite nodes, allocating memory for tensors, running inference, and viewing the results, as follows:
54 6. The user calls AllocateTensors to complete tensor memory allocation and graph compilation. In this step, nodes that can run on the NNRtDelegate call the prepare interface of NnrtDelegateKernel to com…
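The later steps described above correspond to the standard TFLite flow of handing the graph to a delegate and then calling AllocateTensors(). A hedged sketch of that driver code; constructing the NNRt delegate itself is project-specific and not shown, and the TensorFlow Lite calls used here are the stock interpreter APIs.

```cpp
// Illustrative driver: build the interpreter, replace supported nodes with the
// delegate, then allocate tensors (which also compiles the delegated partitions)
// and run inference.
#include <memory>
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

bool RunWithDelegate(const char* modelPath, TfLiteDelegate* nnrtDelegate)
{
    auto model = tflite::FlatBufferModel::BuildFromFile(modelPath);
    if (model == nullptr) {
        return false;
    }
    tflite::ops::builtin::BuiltinOpResolver resolver;
    std::unique_ptr<tflite::Interpreter> interpreter;
    tflite::InterpreterBuilder(*model, resolver)(&interpreter);
    if (interpreter == nullptr) {
        return false;
    }
    // Supported nodes are handed to the delegate here.
    if (interpreter->ModifyGraphWithDelegate(nnrtDelegate) != kTfLiteOk) {
        return false;
    }
    // Allocate tensor memory; delegated partitions are prepared during this call.
    if (interpreter->AllocateTensors() != kTfLiteOk) {
        return false;
    }
    return interpreter->Invoke() == kTfLiteOk;
}
```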
/ohos5.0/docs/en/application-dev/ai/mindspore/
mindspore-guidelines-based-js.md 171 …fore executing a model, you need to obtain the model input and then fill data in the input tensors.
mindspore-lite-converter-guidelines.md 133 …it the name, data type, shape, and memory format of the input and output tensors of the model resp…