Hello everyone,
When initializing my Arm NN network, I pre-allocate the output tensors like this:
    // Pre-allocate memory for the outputs
    for (std::size_t it = 0; it < outputLayerNamesList.size(); ++it)
    {
        const armnn::DataType dataType = outputBindingInfo[it].second.GetDataType();
        const armnn::TensorShape& tensorShape = outputBindingInfo[it].second.GetShape();
        std::vector<float> oneLayerOutResult;
        oneLayerOutResult.resize(tensorShape.GetNumElements(), 0);
        outputBuffer.emplace_back(oneLayerOutResult);
    }

    // Make Arm NN output tensors
    outputTensors.reserve(outputBuffer.size());
    for (std::size_t it = 0; it < outputBuffer.size(); ++it)
    {
        outputTensors.emplace_back(std::make_pair(
            outputBindingInfo[it].first,
            armnn::Tensor(outputBindingInfo[it].second, outputBuffer.at(it).data())));
    }
The question is: what do I need to do to cleanly deallocate these output tensors when I am done with the network? Any suggestions would be appreciated.
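For what it's worth, my current understanding (an assumption, not confirmed from the Arm NN sources) is that `armnn::Tensor` is a non-owning view over caller-owned memory, so the `std::vector<float>` entries in `outputBuffer` are what actually own the storage. If that is right, cleanup is just a matter of destroying the views before the buffers. The sketch below models this with a hypothetical `ViewTensor` stand-in rather than the real `armnn::Tensor`, so it compiles without Arm NN:

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Hypothetical stand-in for armnn::Tensor: a non-owning view over
// memory that belongs to someone else (here, the float vectors).
struct ViewTensor
{
    float* data = nullptr;   // points into a buffer owned elsewhere
};

using OutputBuffers = std::vector<std::vector<float>>;
using OutputTensors = std::vector<std::pair<int, ViewTensor>>;

// Cleanup order: drop the non-owning views first, then the owning
// buffers. No explicit delete/free is needed anywhere.
void releaseOutputs(OutputTensors& tensors, OutputBuffers& buffers)
{
    tensors.clear();   // the views own nothing, so this frees no memory
    buffers.clear();   // this is what actually releases the float storage
}
```

If `outputBuffer` and `outputTensors` are members of the same wrapper class, declaring the buffer member before the tensor member would give the same destruction order automatically (members are destroyed in reverse declaration order), with no explicit cleanup call at all.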