
Load an image in QVGA format into an ARM Compute Library ICTensor.

Hi.
I am trying to load an image in QVGA format into an ARM Compute Library ICTensor. This is my current code:

CLTensor in0;
in0.allocator()->init(TensorInfo(TensorShape(320U, 240U), 1, DataType::U8));

But in Compute Library an "Image" is defined with a Format and its dimensions are expressed as [width, height, batch], while a Tensor is defined by a DataType plus a number of channels (always expected to be 1 for now) and its dimensions are expressed as [width, height, feature_maps, batch].

So, what values are suggested for the "batch" and "feature_maps" parameters?

Thanks.

  • Hi,

    and thanks for using Arm Compute Library. In order to use the RGB or RGBA format, you should initialize your tensor in the following manner:

    CLTensor in0; 
    in0.allocator()->init(TensorInfo(320U, 240U, Format::RGB888));

    This way you create a 2D tensor with 3 interleaved channels (RGB). However, only a few operations support this data layout, so I suggest converting your QVGA input to a planar data layout (each plane stores the values of one channel). That way you can use most of the functions available in Compute Library.

    You might have something like:

    CLTensor in0; 
    in0.allocator()->init(TensorInfo(320U, 240U, Format::RGB888));

     // Convert the data layout to planar using CLChannelExtract and CLChannelCombine

    The output of CLChannelCombine should be:

    CLTensor out; 
    out.allocator()->init(TensorInfo(TensorShape(320U, 240U, 3U), 1, DataType::U8));
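
    Putting the pieces together, here is a minimal sketch of the channel-extraction step, assuming OpenCL has been set up through CLScheduler and that in0 is filled with the QVGA pixel data (for example via map()/unmap()); the plane tensor names are just illustrative:

    #include "arm_compute/core/TensorInfo.h"
    #include "arm_compute/core/Types.h"
    #include "arm_compute/runtime/CL/CLScheduler.h"
    #include "arm_compute/runtime/CL/CLTensor.h"
    #include "arm_compute/runtime/CL/functions/CLChannelExtract.h"

    using namespace arm_compute;

    int main()
    {
        // Set up the OpenCL context and queue used by all CL functions
        CLScheduler::get().default_init();

        // QVGA input with 3 interleaved channels (RGB)
        CLTensor in0;
        in0.allocator()->init(TensorInfo(320U, 240U, Format::RGB888));

        // One U8 plane per channel (names are illustrative)
        CLTensor r_plane, g_plane, b_plane;
        r_plane.allocator()->init(TensorInfo(320U, 240U, Format::U8));
        g_plane.allocator()->init(TensorInfo(320U, 240U, Format::U8));
        b_plane.allocator()->init(TensorInfo(320U, 240U, Format::U8));

        // Configure one CLChannelExtract per channel
        CLChannelExtract extract_r, extract_g, extract_b;
        extract_r.configure(&in0, Channel::R, &r_plane);
        extract_g.configure(&in0, Channel::G, &g_plane);
        extract_b.configure(&in0, Channel::B, &b_plane);

        // Allocate the backing OpenCL buffers
        in0.allocator()->allocate();
        r_plane.allocator()->allocate();
        g_plane.allocator()->allocate();
        b_plane.allocator()->allocate();

        // ... fill in0 with the QVGA pixel data, e.g. via in0.map()/in0.unmap() ...

        // Run the extractions and wait for the queue to finish
        extract_r.run();
        extract_g.run();
        extract_b.run();
        CLScheduler::get().sync();

        return 0;
    }

    The three extracted planes are then the input for the CLChannelCombine step described above, which produces the planar (320, 240, 3) U8 tensor.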

    Hope this helps.

    Do not hesitate to open an issue on GitHub (github.com/.../issues) if you need any further information.

    Gian Marco