_init_boxh problem

I'm trying to use the _init_boxh() function, but it always fails.

When I looked into the assembler code I noticed that the _init_boxh() routine does not support the 64K segment length of huge pointers, but instead uses the 16-bit arithmetic of the far memory model when it tests the memory space:

// example, calling function:
// use the 0x1000-byte block at 0x11F000
_init_boxh(0x11F000, 0x1000, 0x100);

MOV R11,#0x0100
MOV R10,#0x1000
MOV R8,#0xF000
MOV R9,#0x0011
CALLS _init_boxh(0x2B7E8)

(0x2B7E8): _init_boxh() routine:
MOV R4,#0x00
ADD R11,#1
AND R11,#0xFFFE
JMPR CC_Z,0x02B828
ADD R10,R8 => NOT VALID in 64K space => carry lost!!
MOV R6,R8
MOV R5,R8
ADD R6,#6
SUB R10,R11
JMPR CC_C,0x02B828
=> exits here because R10 < R11

  • Hi Peter,

    Here is an extract from the C166 Manual
    http://www.keil.com/support/man/docs/c166/c166_le_huge.htm

    For variables, huge memory is limited to 16M, objects are limited to 64K, and objects may not cross a 64K boundary.

    So taking this into account, I think the function doesn't do anything illegal.
    You would have to use xhuge pointers to be able to cross 64K boundaries. The fact that there doesn't seem to be an xhuge version of _init_boxh() is another matter.

    Regards,
    - mike

  • Hi Mike, thanks for the quick response.

    But I don't quite understand why I'm hitting the huge boundaries:

    I've allocated a huge memory space
    (from 0x11F000 to 0x11FFFF = 0x1000 bytes)
    => not crossing a 64K segment

    I'm asking the _init_boxh() function to use this 0x1000-byte block.
    => not more than 16K bytes

    In the assembly code I see that:
    R11 = 0x100 (size for one block)
    R10 = 0x1000 (size of the pool)
    R9 = 0x11 (segment of mem. space)
    R8 = 0xF000 (offset of mem. space)

    Then:
    R8 (offset of mem. space) is added to R10 (pool size):
    R10 = 0x1000 + 0xF000 = 0x0000

    Next, R11 (size for one block) is subtracted from R10:
    R10 = 0x0000 - 0x100
    => setting the carry and causing the error exit of the function

    My idea is that this routine checks that the size for one block is not larger than the size of the pool, but it forgets the carry when adding the offset to the pool size.

    You are saying you don't understand where you are hitting a 64K boundary, but the rest of your post explains exactly where you do. "Crossing a 64K boundary" means having a carry in 16-bit arithmetic, and that's exactly what's happening in your case (0x11F000 + 0x1000 = 0x120000: there is your carry).

    - mike

  • The asm code printed is the code from the _init_boxh() routine, NOT mine!

    The internal code of the function hits the 64K boundary by adding the pool size to the segment offset.
    I, as a user of the _init_boxh() function, don't hit any 64K boundary.
    I allocated a static huge 0x1000-byte block and passed the pointer to the _init_boxh() function:

    #define BLOCK_POOL_SIZE 0x1000
    char huge *block_pool[BLOCK_POOL_SIZE];

    void main (void)
    {
    ...
    _init_boxh(block_pool,BLOCK_POOL_SIZE,0x100);
    ...
    }

    And the block_pool pointer showed the value 0x11F000 when I was debugging.

    How should I call this function then?

    I've made a workaround by using _init_boxf() instead (the far version) and casting my pointers to (void far *).

    void main (void)
    {
    ...
    _init_boxf((void far *)block_pool,BLOCK_POOL_SIZE,0x100);
    ...
    }

  • One thing is confusing me. You wrote:

    char huge *block_pool[BLOCK_POOL_SIZE];
    whereas I believe it should be:
    char huge block_pool[BLOCK_POOL_SIZE];
    Must be a typo in the post.

    Anyway, I see the point you are making. I admit, I didn't see it at first. Basically, the problem is that you are allocating the buffer as a huge array and passing a pointer to it to the function, which should be happy with it, since we don't expect the compiler/linker to allocate huge objects that cross a 64K boundary.

    It seems that in this particular case the upper boundary of the buffer "touches" a 64K boundary, which upsets the 16-bit address arithmetic in _init_boxh(). It's not clear to me whether _init_boxh() should be prepared to deal with such situations. It probably should.

    As a workaround, I would suggest allocating the buffer so that its upper boundary doesn't touch a 64K boundary. It could be as easy as this:

    char huge block_pool[BLOCK_POOL_SIZE+1];

    although it's not guaranteed to work. The bulletproof approach would be to combine the buffer in a union with an array of ints, making sure that the buffer size in bytes is odd.

    - mike

  • A correction to my last post:

    although it's not guaranteed to work

    After thinking about it a minute, it is guaranteed to work. But I could still be missing something, of course.

    Regards,
    - mike

  • It appears that there is indeed a problem with this library routine when the size + base_offset = 0x??0000. We are working on a solution that will be included in the next release.

    In the meantime, you should avoid placing the block so that it is located at the very end of a 64K page.

    Jon

    Mike, thanks for the solution: allocating the pool with (size + 1) as a workaround.

    Jon, the chance of having
    POOL_SIZE + OFFSET = 0x??0000 is quite high, since static memory allocation order is top to bottom.
    Thanks for the quick response.