Hello everyone,
When I am initializing my Arm NN network, I pre-allocate the output tensors like this:
// Pre-allocate memory for the output buffers
for (int it = 0; it < outputLayerNamesList.size(); ++it) {
    const armnn::DataType dataType = outputBindingInfo[it].second.GetDataType();
    const armnn::TensorShape& tensorShape = outputBindingInfo[it].second.GetShape();
    std::vector<float> oneLayerOutResult;
    oneLayerOutResult.resize(tensorShape.GetNumElements(), 0);
    outputBuffer.emplace_back(oneLayerOutResult);
}

// Make ArmNN output tensors that point at the pre-allocated buffers
outputTensors.reserve(outputBuffer.size());
for (std::size_t it = 0; it < outputBuffer.size(); ++it) {
    outputTensors.emplace_back(std::make_pair(
        outputBindingInfo[it].first,
        armnn::Tensor(outputBindingInfo[it].second, outputBuffer.at(it).data())));
}
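For context, these tensors are then handed to inference roughly as sketched below. This is only a fragment of my code; runtime, networkId and inputTensors are the IRuntime pointer, the NetworkId from LoadNetwork and the input tensor vector set up elsewhere, not shown above.

// Run inference. outputBuffer must stay alive until EnqueueWorkload returns,
// because armnn::Tensor only wraps a raw pointer into that memory.
armnn::Status ret = runtime->EnqueueWorkload(networkId, inputTensors, outputTensors);
if (ret != armnn::Status::Success) {
    // hypothetical error handling for the sketch
    std::cerr << "EnqueueWorkload failed" << std::endl;
}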
The question is: What do I need to do to cleanly deallocate these output tensors when I am done with the network? Any suggestions, please?
At first glance it looks sensible, but I'll get an Arm NN person to run their eye over it.
The Arm NN expert says that managing input/output memory is the application's responsibility rather than anything Arm NN-specific, and what you've got looks sensible given the assumptions about the types used, etc.
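In practice that means there is nothing Arm NN-specific to deallocate for the output tensors themselves: armnn::Tensor only wraps a TensorInfo and a raw pointer into your outputBuffer, it does not own that memory. A minimal teardown sketch, assuming runtime and networkId are the IRuntime pointer and NetworkId from loading the network in your code:

// The tensors are non-owning views over outputBuffer, so dropping them first is safe
outputTensors.clear();
// The std::vector<std::vector<float>> actually owns the output memory;
// clearing it (or letting it go out of scope) releases that memory
outputBuffer.clear();
// Release Arm NN's own resources for the network once you are done with it
runtime->UnloadNetwork(networkId);

If outputBuffer and outputTensors are ordinary local or member variables, letting them be destroyed normally is enough; the only explicit Arm NN call needed at teardown is UnloadNetwork (and eventually destroying the runtime).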