
Use C51 v3.4 compiler with Windows XP.

Hello.
I'm trying to obtain a *.hex file using WXP and C51 Compiler V7.0 or upper like the file that was obtained using W98 and C51 Compiler V3.4. I have the following file *.bat:
… SET c51bin=c:\c51_v340\bin
SET C51LIB=C:\C51_v340\LIB
SET C51INC=C:\C51_v340\INC

c:\c51_v340\bin\fmake

But when I execute this *.bat from DOS I get a "stack overflow" error; the "c:\c51_v340\bin\fmake" command does not run correctly.
I also tried to compile using uVision, changing the tool folders, but the program does not respond.
I can do it using VMware with a W98 guest, but I want to know if I can do it without this application.
Did anybody try something like this? Can anybody help me?
Thanks!
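If fmake itself is what fails under XP, one possible workaround is to drive the command-line tools from the batch file directly instead of going through fmake. The following is only a sketch: MAIN.C and UTIL.C are placeholder module names, and the v3.40 install path is taken from the .bat above.

REM build.bat - hypothetical sketch: call the tools directly instead of fmake
REM (MAIN.C and UTIL.C are placeholder module names)
SET PATH=C:\C51_v340\BIN;%PATH%
SET C51LIB=C:\C51_v340\LIB
SET C51INC=C:\C51_v340\INC

REM compile each C module to an object file
C51 MAIN.C
C51 UTIL.C

REM link the objects into an absolute object file, then convert it to Intel HEX
BL51 MAIN.OBJ, UTIL.OBJ TO PROG
OH51 PROG

OH51 writes PROG.HEX next to the absolute object file; whether this sidesteps the "stack overflow" depends on whether the problem lies in fmake or in the DOS environment itself.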

Parents
  • Actually in the distant past I did just that. [...] It turned out that a coding error in the original code (revealed by using new tools) was the cause.
    The fact is that if the code cannot 'handle' an assemble/compile/link with current tools, it has a bug.
    The argument "if we keep the original code the same, we do not need to test everything" is TOTAL BALONEY.

    I'm not sure if it's TOTAL baloney. Personally I think keeping as much of the original, tested code unchanged is, in general, lower risk. But I agree that running everything through the latest toolset never hurts. Both a change made with the old toolset and the introduction of a new toolset can introduce bugs...

    It probably also depends on the amount of change you're going to make. If most of the app changes anyway, it might be a good moment to switch toolsets. If it's "just" a very minor change (if there is such a thing as a minor change in a very complex legacy system), keeping the old toolset might be a wise decision, with an additional check using the latest tools.

    I just added a discussion item for our team meeting :-)

    Regards,
    Joost Leeuwesteijn

Children
  • I'm not sure if it's TOTAL baloney. Personally I think keeping as much of the original, tested code unchanged is, in general, lower risk

    "lower risk", that may be, but 'risk' is not the issue.
    Reliance on "tested code still being "tested" without retesting after ANY addition IS TOTAL BALONEY.

    I can give a very simple example (I could probably come up with hundreds). In a system with internal and external code memory, the two running at somewhat different speeds, a small addition to some routine pushes a timing loop in an unrelated routine (code that supposedly does not "need to be tested") into the other memory area, changing its timing.

    Erik

  • I don't think people are suggesting that you can make a change with no testing at all. What they are suggesting is that if most of the object code doesn't change (in a binary sense), then you can be very confident you've introduced no new bugs, regardless of the test results.

    You're assuming that just because a product passes the test suite, there are no bugs. In reality, test suites are often highly fallible and fail to detect many bugs.

    So, new code that is only slightly different and passes regression 100% is still less likely to have new bugs than new code that is completely different from the old and also passes the same regression suite 100%. The old code has been passing the real-world test suite of actual operation, while the new code has not.

    Testing is necessary but not sufficient to ensure quality.

    The second step after this one is to realize that if the old code doesn't change, then for a given amount of finite testing effort, you can concentrate that effort -- not exclusively, but preferentially -- on the part of the code that did change, thus making it more likely that you find the new bugs. You may not want to do so -- maybe you'd rather look for old bugs, too -- but at least you have a choice. If every byte in the hex file changes, then you have to spread your retesting evenly.

  • So, just when you thought you'd got your Jurassic Keil happily running on the latest PC - Microsoft go and move the goalposts with another new OS:

    Yes, XP is now old-hat; Vista is here!

    www.microsoft.com/.../

    So now we can have another few years of exciting "How can I make C51 v0.000001 work with Vista" threads...

    "Oh it all makes work for the working men to do..."