
PyTorch (QAT) Training Help

Developer Program Member shadeomen asked:

"Hey there, I was working on the QAT project and it's been fun learning so far. One problem I am running into is a low mAP (a few epochs show ~0.005 mAP) when training. For context, I'm training on the COCO-2017 dataset using an existing object-detection model. I was wondering if anyone could help me understand whether I am training correctly or if this is normal?"

The answer to this was given by Arm Expert Ben Clark:

"It's all very dependent on the dataset, the quantisation parameters and the process... but with COCO-2017 the dataset shouldn't be the problem, so there are still a lot of places to look! It's useful to get a baseline from full float before quantisation, so you know what you're hoping to emulate. Then check whether you're quantising further than the model can cope with (to how many bits?), and how you're handling gradients, parameters, and per-tensor/channel/layer settings. The question is probably a bit broad for a simple support-forum answer, but there should be some good PyTorch quantization guides to look for. Unfortunately I don't have one at hand; hopefully someone else can suggest some..."
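To make the "how many bits?" point above concrete, here is a minimal, self-contained sketch of the fake-quantization step that QAT relies on: quantize a tensor to an n-bit integer grid, then dequantize back to float, so the network trains against the quantization error it will see at inference. This is illustrative pseudocode of the idea only, not the PyTorch QAT API (in real code you would use `torch.ao.quantization` observers and fake-quant modules); the function name and per-tensor affine scheme are assumptions for illustration.

```python
def fake_quantize(values, num_bits=8):
    """Simulate per-tensor affine quantization of a list of floats:
    map to num_bits integer levels, then dequantize back to float.
    Lower num_bits -> coarser grid -> larger quantization error."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return list(values)  # constant tensor: nothing to quantize
    levels = (1 << num_bits) - 1        # e.g. 255 levels for 8-bit
    scale = (hi - lo) / levels          # float step per integer level
    # round to the nearest integer level, then map back to float
    return [round((v - lo) / scale) * scale + lo for v in values]


# Comparing bit-widths shows why quantising "further than it can cope"
# hurts accuracy: 2-bit error is far larger than 8-bit error.
weights = [0.0, 0.1, 0.25, 0.5, 0.9, 1.0]
err8 = max(abs(a - b) for a, b in zip(weights, fake_quantize(weights, 8)))
err2 = max(abs(a - b) for a, b in zip(weights, fake_quantize(weights, 2)))
```

If an 8-bit fake-quantized model already loses significant mAP against the full-float baseline, dropping to fewer bits (or quantizing gradients too aggressively) will usually make it worse, which is why establishing that float baseline first matters.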

If you have a similar question or query, please feel free to ask in this forum