
Can you suggest a better way?

Hello,

This is not a Keil-related question, but either way I was hoping to hear some fresh ideas...
I have a simple data structure, like this:

typedef struct
{
        int32s  id ;
        int32s  offset ;
        int32s  ptr1[ELEMENT] ;
        int32s  ptr2[ELEMENT] ;
        int32s  ptr3[ELEMENT] ;
} my_struct ;

The module that uses this structure offers several interfaces that expect a pointer to "my_struct".
I'm working in C, so what is the best way to keep the same structure layout while allowing different sizes for the embedded arrays?

typedef struct
{
        int32s  id ;
        int32s  offset ;
        int32s  ptr1[3*ELEMENT] ;
        int32s  ptr2[3*ELEMENT] ;
        int32s  ptr3[3*ELEMENT] ;
} my_struct2 ;

Obviously I cannot use the existing interfaces with a pointer to "my_struct2" as the offsets of the arrays after "ptr1" have changed.
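For example (assuming a 4-byte int32s and no padding), the second arrays no longer line up:

#include <stddef.h>
#include <stdio.h>

/* Rough illustration: ptr2 sits at 8 + 4*ELEMENT bytes into my_struct,
   but at 8 + 12*ELEMENT bytes into my_struct2. */
void show_offsets(void)
{
    printf("%u %u\n",
           (unsigned)offsetof(my_struct, ptr2),
           (unsigned)offsetof(my_struct2, ptr2));
}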
I don't want to use dynamic memory allocation.
Can you suggest a better way?

  • With static linkage, I am not sure that you can do this without dynamic memory allocation.

  • In your case, ptr1, ptr2 and ptr3 are not really pointers.

    Do you have a lot of different-sized structures?
    Do you have a lot of instances of the different objects?
    Do you have lots of code that makes use of it?

    Maybe you should store the three sizes as fields in the struct, and then just have a single data[A_SIZE+B_SIZE+C_SIZE] at the end, i.e. make use of a generic data type with an open-ended data[] member.

    This means that you will not get any compile-time type checking when passing a struct pointer to a function. So the function must start by checking that the size value(s) are as expected. It would then have to compute the start of the three sub-areas of the data[] field on the fly, or begin by creating three pointers: ptr1 = data, ptr2 = data + size_a, ptr3 = ptr2 + size_b.
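
    A rough sketch of that idea (generic_struct, variant_abc and the size field names are illustrative, not from your code):

    typedef struct {
        int32s id;
        int32s offset;
        int32s size_a, size_b, size_c;   /* element counts of the sub-areas */
        int32s data[];                   /* open-ended (C99 flexible array member) */
    } generic_struct;

    /* A concrete, statically allocatable variant with the same prefix. */
    typedef struct {
        int32s id;
        int32s offset;
        int32s size_a, size_b, size_c;
        int32s data[A_SIZE + B_SIZE + C_SIZE];
    } variant_abc;

    void process(generic_struct *s)
    {
        int32s *p1 = s->data;            /* first sub-area */
        int32s *p2 = p1 + s->size_a;     /* second sub-area */
        int32s *p3 = p2 + s->size_b;     /* third sub-area */
        /* ... use p1[0..size_a-1], p2[0..size_b-1], p3[0..size_c-1] ... */
    }

    A statically allocated variant_abc is then passed as process((generic_struct*)&x), after its size fields have been filled in.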

    A problem is how to handle the allocations. But that also depends on whether you need dynamic or static allocations.

    You may create a large pool with:

    char pool[N*sizeof(base_struct) + 10*A_SIZE*sizeof(int32s) + ...];
    base_struct* my_struct_1 = (base_struct*)pool;
    base_struct* my_struct_2 = (base_struct*)(pool + ...);
    

    Or, if you can accept a bit of wasted memory:

    char pool[20*sizeof(largest_struct)];
    base_struct* my_struct_1 = (base_struct*)pool;
    base_struct* my_struct_2 = (base_struct*)(pool + sizeof(largest_struct));
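
    Note that a plain char array is not guaranteed to be suitably aligned for struct access. A sketch of one way around that (with largest_struct standing in for your biggest variant) is to type the pool itself with the largest variant, so every slot is correctly aligned and the arithmetic is done in whole slots:

    largest_struct pool[20];
    base_struct* my_struct_1 = (base_struct*)&pool[0];
    base_struct* my_struct_2 = (base_struct*)&pool[1];  /* advances one whole slot */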
    


    The above has an obvious disadvantage in the form of pointer arithmetic. If you are not 100% comfortable with pointers, then you should continue with individual data types and live with the original problem.

    Another thing: do ptr1[], ptr2[] and ptr3[] always have the same element count? Then maybe you should define a triplet data type and instead have:

    struct {
        ...
        int32s data_size;               /* optional */
        triplet_t data[ELEMENT_COUNT];
    };
    


    Now, a large struct will have its first data values stored identically to a smaller struct, so you may let all functions take a pointer to a struct of one size. Then let the function check the optional "data_size" field to verify that a struct of the expected size was sent.
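
    A sketch of that pattern (triplet_t's fields and the two variant names are illustrative):

    typedef struct { int32s a, b, c; } triplet_t;  /* one entry from each old array */

    typedef struct {
        int32s id;
        int32s offset;
        int32s data_size;               /* actual element count */
        triplet_t data[ELEMENT];        /* the "one size" all functions see */
    } small_struct;

    typedef struct {
        int32s id;
        int32s offset;
        int32s data_size;
        triplet_t data[3*ELEMENT];      /* same prefix, larger tail */
    } big_struct;

    int fill(small_struct *s, int32s expected)
    {
        if (s->data_size != expected)
            return -1;                  /* wrong variant passed in */
        /* safe: s->data[0] .. s->data[s->data_size - 1] */
        return 0;
    }

    A big_struct is then passed as fill((small_struct*)&big, 3*ELEMENT); the compiler will not catch a wrong variant, which is why the run-time check matters.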

    Or you may then follow through by adding a pointer in each struct like:

    typedef struct {
        ...
        int32s data_size;
        triplet_t *data;
    } my_struct;
    
    triplet_t triplet_pool[SIZE_COUNT_1*SIZE_1 + SIZE_COUNT_2*SIZE_2];
    my_struct struct_pool[SIZE_COUNT_1 + SIZE_COUNT_2];
    
    void init(void) {
        unsigned i;
        my_struct *p = struct_pool;
        triplet_t *t = triplet_pool;
        for (i = 0; i < SIZE_COUNT_1; ++i) {
            p->data_size = SIZE_1;
            p->data = t;
            t += SIZE_1;
            ++p;                        /* advance to the next pool slot */
        }
        for (i = 0; i < SIZE_COUNT_2; ++i) {
            p->data_size = SIZE_2;
            p->data = t;
            t += SIZE_2;
            ++p;
        }
    }
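
    After init(), any code can walk the pool without caring which size group a slot belongs to (the field name a assumes a triplet_t like the one sketched above):

    void clear_all(void)
    {
        unsigned i, j;
        for (i = 0; i < SIZE_COUNT_1 + SIZE_COUNT_2; ++i) {
            my_struct *p = &struct_pool[i];
            for (j = 0; j < (unsigned)p->data_size; ++j)
                p->data[j].a = 0;       /* each object knows its own size */
        }
    }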
    


    The common struct pool could be modified into multiple pools with

    triplet_t triplet_pool[SIZE_COUNT_1*SIZE_1 + SIZE_COUNT_2*SIZE_2];
    my_struct struct_pool_1[SIZE_COUNT_1];
    my_struct struct_pool_2[SIZE_COUNT_2];
    


    and similar initialization, or anything in between.

    But whether any of these changes gives a real gain depends on what you do. Do you need an abstract data type where a function can take a struct with varying amounts of data, or does a single function always take a struct of a specific size?

    If each function has a hard-coded size requirement for ptr1[], ptr2[] and ptr3[], then I would continue with individual data types and keep the advantage of compile-time type checking. It is only if dynamic sizes allow you to use generic functions that you should spend time considering solutions that involve pointer arithmetic during access or initialization. You need to look at the balance between gain and loss in maintenance costs and the impact on future developers.

  • I think I am going to try this recommendation:
    Now, a large struct will have its first data values stored identically to a smaller struct, so you may let all functions take a pointer to a struct of one size. Then let the function check the optional "data_size" field to verify that a struct of the expected size was sent.

    thanks a lot for a most outstanding reply!