
Dual data pointer registers problem

Hi there,

I ran the code below in uVision3 V3.30 to see the effect of the AUXR1 register in switching between DPTR0 and DPTR1. I chose the AT89S51 as the device, since it provides the dual DPTR function. But when I step through the code in uVision, the Address Window shows that the pointer values 4000h and 8000h have both been written to the same location, 82h-83h (DPL/DPH) - the DPS bit in AUXR1 did not switch the DPTR, even though the instruction 'XRL DPTRSW, #DPS' clearly toggles DPS while single-stepping. Why does this happen?

DPTRSW  DATA    0A2H            ; AUXR1 SFR - holds the DPS (data pointer select) bit
DPS     EQU     00000001B       ; mask for DPS, bit 0 of AUXR1
        ORG     00H

        MOV     R7, #4          ; copy 4 bytes
        MOV     DPTR, #4000h    ; DPS = 0: load source address into DPTR0
        XRL     DPTRSW, #DPS    ; toggle DPS -> select DPTR1
        MOV     DPTR, #8000h    ; DPS = 1: load destination address into DPTR1

LOOP:
        XRL     DPTRSW, #DPS    ; toggle DPS -> back to DPTR0 (source)
        CLR     A
        MOVC    A, @A+DPTR      ; read code byte at DPTR0
        INC     DPTR            ; advance the source pointer
        XRL     DPTRSW, #DPS    ; toggle DPS -> DPTR1 (destination)
        MOVX    @DPTR, A        ; write the byte to XDATA at DPTR1
        INC     DPTR            ; advance the destination pointer
        DJNZ    R7, LOOP        ; repeat for all 4 bytes

        END

  • Interesting. I duplicated the behaviour you describe in the simulator with the AT89S51 selected. The simulator seems to simply ignore the existence of the dual DPTR. Even the AT89S8252 simulation is flawed: you can increment the second DPTR, but 'MOV DPTR' always loads DPTR0.

    I routinely use the dual DPTRs on the Philips P89C668, and the simulator behaves exactly like the silicon for that part.

  • So I guess that might be a bug in the uVision simulator. On real hardware the dual DPTR should be switched by setting DPS. Can I say that? (Too lazy to try it on the hardware :))

  • So I guess that might be a bug in the uVision simulator

    I would not classify this as a 'bug', I would classify it as "expecting more than what is reasonable"

    Expecting Keil - or any other simulator maker - to adapt to every intricacy of all of the 4711 derivatives out there amounts to expecting tools nobody can afford.

    I know that in the minds of most the issue is "why does Keil not fully simulate MY chip", but none of us is the only customer Keil has, and the chip we use is not the only '51 derivative.

    Reasonable expectations keep tools affordable; unreasonable expectations make them available only to the largest/richest companies.

    Erik

  • the above could, of course, be reported as a request but still is not a 'bug'.

    I have no doubt Keil would consider such a request; whether they would allocate the resources to fully support the particular derivative you use is another story. There is, of course, also the factor that, if Keil acts as a business, a request for improvement from a licensed user is more likely to be considered than one from a freeloading eval user.

    Since I consider Keil an honorable company, I would assume that if the report concerned a REAL bug (not support for the peculiarities of a particular derivative), it would not matter where the information came from.

    Erik

  • I would not classify this as a 'bug', I would classify it as "expecting more than what is reasonable"

    I do agree that lack of simulation support for a particular chip feature cannot be classified as a simulator bug, but rather as a limitation of the simulation capabilities for that particular target.

    But that is not what happens in the case at hand. Having simulation support for a chip feature requires the feature to be fully modelled. The dual DPTR support is not an obscure aspect of the core behavior; it is a much-used, trivial feature. Although it is not 'standard' for the original 8051 core, it is provided by many derivatives. Besides, the AT89S8252 is a popular chip from a mainstream vendor.

    In this particular case, it very much can be classified as a bug in the implementation of the simulator, on the basis that a simulated feature does not duplicate the real hardware behavior.

  • But that is not what happens in the case at hand. Having simulation support for a chip feature requires the feature to be fully modelled. The dual DPTR support is not an obscure aspect of the core behavior; it is a much-used, trivial feature. Although it is not 'standard' for the original 8051 core, it is provided by many derivatives. Besides, the AT89S8252 is a popular chip from a mainstream vendor.
    The 8252, based on what I have seen in posts here and elsewhere, is "a popular chip" among amateurs.

    it is provided by many derivatives. In just about as many variants as there are derivatives that have it: some implement DPS as an SFR bit that can be toggled by 'INC', some as an SFR bit that can be bit-addressed, some as an SFR bit that cannot be bit-addressed, not to mention that the actual bit position usually differs from part to part (see the sketch below).
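    For illustration, here is a rough sketch of two of those toggle idioms; the part names, SFR address and bit position are from memory and only meant as an example - check the datasheet of whatever derivative you actually use:

    ; Variant 1: DPS lives in a non-bit-addressable SFR (e.g. AUXR1 at 0A2h on
    ; the Atmel parts discussed above) - toggle it with a read-modify-write mask:
    AUXR1   DATA    0A2H
    DPS     EQU     00000001B
            XRL     AUXR1, #DPS     ; flip DPS, leave the other bits alone

    ; Variant 2: some derivatives (the Philips P89C66x datasheet describes this
    ; trick, if I remember right) force the bit above DPS to read as 0, so the
    ; carry out of bit 0 is thrown away and a plain INC toggles DPS:
            INC     AUXR1           ; DPS: 0 -> 1 -> 0 -> 1 ...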

    So, if you use a licensed version of the software, send a request to Keil; if not, be grateful for what you get for free: "Don't look a gift horse in the mouth."

    Erik

  • "the 8252, based on what I have seen in posts here and elsewhere is "a popular chip" among amateurs."

    Your point being, what? Do you really think that defects in a toolset should only be classified as bugs if they affect the high-end users?

    "A popular chip" means that it is a target chip for which the toolchain is used by many.

    "...just about as many variants as there are derivatives that have it."
    That's irrelevant. If you have to rely only on the standard core functionality, then the simulator is worthless. If a given functionality is offered in a given derivative, it ought to be correctly implemented, or else it is an implementation bug.

    "So, if you use a licenced version of the software, send a request to Keil, if not, be grateful for what you get for free "Do not look a gifted horse in the mouth""

    Now, that's really where we differ. It often takes a skilled and experienced engineer to detect subtle simulator flaws. It is far more common for the user to look for bugs in the simulated code than to write test suites to validate the development tools. Whether you are using the limited evaluation version or the full suite should not affect the quality of the implemented features.

  • If you have to rely only on the standard core functionality, then the simulator is worthless. If a given functionality is offered in a given derivative, it ought to be correctly implemented, or else it is an implementation bug.
    As I posted earlier, "to implement every 'feature' of every derivative would be so expensive that Keil would have to price themselves out of the market".
    There are about 4711 "given derivatives" with about 47111 'unique features' and to support them all is not feasible with the size of the '51 market. We can argue till the cows come home about how far "as much as economically reasonable" can/should go; that decision must reside with Mr. Keil.

    "So, if you use a licenced version of the software, send a request to Keil, if not, be grateful for what you get for free "Do not look a gifted horse in the mouth""
    Now, that's really where we differ.

    Do we? What I point out is "if you pay you can request"; if you do not, you can only ask.

    It often takes a skilled and experienced engineer to detect subtle simulator flaws.
    I would not know about that. I know no "skilled and experienced engineer" who uses a simulator. All the "skilled and experienced engineers" I know use an emulator.

    Whether you are using the limited evaluation version or the full suite should not affect the quality of the implemented features.
    Agreed re quality; not agreed re scope. That a 'special feature' is not supported is not a matter of quality, it is a matter of scope. What you are, in effect, saying is that if MS Word does not have a spell-checker for Swahili, it is not a quality product.

    Erik

    The worst part of this is that there is an undertone here that says "so what if those that pay have to pay more; I want my free version to be the way I want it".

  • So you say that all "skilled and experienced engineers" sit on their hands until they have any hardware to use with their emulator?

    If I start developing software for a product at the same time as the hardware is being designed, I just have to settle for the simulator - or spend time trying to create a mock-up.

    If the external hardware is highly complicated, relies on high frequencies, makes use of BGA chips (for which I don't have any soldering equipment), or requires sample components with significant lead times, it may not be feasible to hand-build something to test on, so it will not be possible to run the software on real hardware until the first prototype has been designed, built and delivered.

    The availability of a simulator is often a deciding factor when choosing which compiler suite to buy for a project, since it can allow the majority of the time after receiving a hw prototype to be spent validating the hardware, instead of first having to decide which errors are hardware and which are software.

    A simulator can also allow testing of some worst-case scenarios that may be _very_ hard to generate on a real hardware platform.

  • So you say that all "skilled and experienced engineers" sit on their hands until they have any hardware to use with their emulator?
    1) all "skilled and experienced engineers" I know
    2) I have hardware designed 3 designs ahead of what I am writing software for. That way the hardware is there when software time comes.

    If I start developing software for a product at the same time as the hardware is being designed, I just have to settle for the simulator - or spend time trying to create a mock-up.
    You need nothing to write software, only to test it.

    If the external hardware is highly complicated, relies on high frequencies, makes use of BGA chips (for which I don't have any soldering equipment), or requires sample components with significant lead times, it may not be feasible to hand-build something to test on, so it will not be possible to run the software on real hardware until the first prototype has been designed, built and delivered.
    I would not do that: for hardware that is "highly complicated, relies on high frequencies", a "hand-build something to test on" will not be worth much. It will be too different re radiation etc.

    instead of first having to decide which errors are hardware and which are software.
    I know no better tool for 'deciding' that than an emulator.

    A simulator can also allow testing of some worst-case scenarios that may be _very_ hard to generate on a real hardware platform.
    You can test "worst case values" with a simulator, that, however rarely is the issue. Timing conflicts (which no simulator can simulate) is much more of a concern.

    Erik

  • I have hardware designed 3 designs ahead of what I am writing software for. That way the hardware is there when software time comes.

    That implies that you are either doing quite small projects, or that your customers don't have too strict requirements about lead times.

    If the average software is 5k - 20k lines of code, then the development must be started as soon as the major design issues have been resolved.

    If the units are expected to be produced in quantities of 1k or 10k/year or more, then every single project has a strong tendency to use completely different chips from the earlier projects. Either because a new, more cost-effective chip has been released, or because the previously used chips have a few pins too many or too few, don't have a fast enough boot time or good enough MIPS/mA, are missing a critical peripheral, ...

    Since the majority of the development will use chips I haven't worked with before, it isn't a good idea to rely on a "write now and test later" strategy. When the hardware arrives, the majority of the software really has to work, since there will not be much time available to validate the hw and decide what hw changes should be made before shipping it to notified bodies for certification.

    A software developer must not run around with a "hammer" thinking everything is a "nail". An emulator is good to have, but it is only one of a number of required/needed/appreciated tools. Some problems are best solved with a simulator. Some with a high-end digital scope or logic analyzer. Some with an emulator. Some by running partial builds on a PC, or on similar hardware. Some by just looking at the code and thinking for a while.

    One of the real strengths of a good simulator is the ability to perform nightly regression tests, to quickly notice if any part of the software has suddenly started to behave differently than before. The regression testing requires a known test pattern, or it will be very hard to make sense of the test results.

  • If the units are expected to be produced in quantities of 1k or 10k/year or more, then every single project has a strong tendency to use completely different chips from the earlier projects. Either because a new, more cost-effective chip has been released, or because the previously used chips have a few pins too many or too few, don't have a fast enough boot time or good enough MIPS/mA, are missing a critical peripheral,
    Two things:
    a) This thread started with "the simulator does not simulate" a "completely different chip", so that argument is lost on me.
    b) "don't have a fast enough boot time" - boot time, in embedded????

    Erik

  • a) This thread started with "the simulator does not simulate" a "completely different chip", so that argument is lost on me.

    My arguments about the advantages of a simulator directly follow your comment:
    - I know no "skilled and experienced engineer" who uses a simulator.

    b) "doesn't have fast enough boot time" boot time in embedded????
    Don't restrict your thinking of boot time as the time a PC needs until you get a login prompt.

    For embedded systems, it can quite often come down to the time it takes for a crystal or PLL to stabilize.

    This is an important factor if you run your embedded equipment on _small_ batteries and save power by turning off the main oscillator, keeping only an RTC running asynchronously at a few uA. Or you might power the equipment from a phone line, allowing an average consumption of 15uA until it's time to pick up the line.

    If the processor needs 10k or 100k clock cycles until the main oscillator is stable and the processor can do any real work, a significant part of the power consumption can be spent just waiting for the oscillator to start up. Or the chip may lose a lot of data.
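    A rough, purely illustrative calculation (all numbers are assumptions, not taken from any particular datasheet): 100k start-up cycles at 12 MHz is roughly 8 ms; if the core draws, say, 10 mA while waiting, that is about 80 uC of charge per wake-up. Waking once per second, the oscillator start-up alone averages about 80 uA - several times the 15 uA budget mentioned above - before a single useful instruction has run.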

  • Wow. It's just amazing how a thread can develop during your drive home from work.

    This thread has been hopelessly hijacked, and as usual became a much more interesting thread :)

    Erik Malund said: "There are about 4711 "given derivatives" with about 47111 'unique features' and to support them all is not feasible with the size of the '51 market. "
    I can appreciate your point, but I think you are missing one thing here that I want to make clearer: Nobody is demanding full simulation for every of the many 8051 derivatives on the market. That has never been said in this thread. My whole point is that, If you decide to provide simulation support for a chip feature, then you must do it faithfully. There are many chips that are not fully simulated, with several peripheral subsystems that are simply not simulated. That's not so big a problem, and is normally noted in the model implementation notes. HOWEVER, if a feature is implemented, then it must be implemented right, or else you just can't rely on the tool you are using. That is a bug. And this is ok also, because every software or firmware has bugs. Hopefully, the vendor will come up with a future software release that fixes the bug.

    What I don't agree with in Erik's statements is that one should be 'grateful' for the toolset functionality and simply ignore the bugs. The companies I have worked for have spent tens of thousands of dollars on Keil tools every few years to purchase development tools that were believed to be solid and dependable. Certainly we have not counted on Keil's benevolence or goodwill, but on their craftsmanship and keenness. We depend on their products so that our own products ship with fewer bugs.

    But there is an inversion of values here: we do not expect to have rights to a perfect product because we have paid for it. On the contrary, we decide to buy it because we have evaluated it and decided that, despite the few bugs, it is worth it. Product excellence comes before the purchase decision. And that is what it is all about: should it be discovered later that it is an empty promise, and a better alternative is available, customers would not buy it for the next system development. From that perspective, I would definitely question the decision to purchase a toolset whose evaluation and educational versions show very poor quality.

    As a matter of fact, that is not what happens with the Keil tools. They have their idiosyncrasies, a few nasty bugs, sometimes questionable optimizer code, but that is just par for the course with every dev toolset that will ever be. The fact is that we have been using Keil tools for quite a long time, and have been able to ship quite large firmware written with them.

  • Erik Malund said: I know no "skilled and experienced engineer" who uses a simulator. All the "skilled and experienced engineers" I know use an emulator.

    An emulator is a nice tool. We have a few emulators stuck in obscure corners of the engineering dept.'s steel storage cabinets, one for each family of CPU used at some time in the past. Actually, an emulator was a required tool back then, because you really could not test firmware, place breakpoints and analyze trace buffers without one. The software debuggers sucked big time, especially for cross-development. The good and extremely expensive emulators were usually coupled with a sizable logic analyser (a real one) that helped you deglitch your board. The good stuff was based on Unix workstations, of course, not those feeble PCs.

    Then, some five Moore's-law cycles later, massive computing power became available at every desk. On my development PC today I have VHDL simulators that run, in a few minutes, RTL files that would have taken 5 hours on an HP workstation a few years ago. Linear Tech's SPICE engine runs a dozen times faster than the first UCB SPICE I used on a Sun workstation, and the core code is very similar. That allows me to fully characterize complex ADC and SMPS circuits in one week, before the hardware is even laid out. The result is that I usually make just one PCB iteration for these designs now, and the prototype actually works pretty much exactly the same as the SPICE models.

    The same applies to firmware simulation. I have on my machine simulators that run full-blown, instrumented firmware simulations faster than real time. A well-planned firmware deployment MUST account for a well-designed simulation environment. There are many situations in which an emulator or on-chip debugger cannot substitute for a good simulator. When hunting for hard bugs, a simulator is more productive than an emulator, because you have a more controlled environment - especially in hard real-time systems, where the on-chip debug resources may interfere with the system's computing load.

    In other words, simulation is ESSENTIAL to a professional design flow. So much so that virtually every EDA tool vendor throws real money at the development of good simulation tools. For example, for some high-reliability contracts you must provide proof of verification for your entire software, by means of testbench simulation vectors and results for every function in your software - something really expensive to do if you use a hardware emulator.

    I am not dismissing the importance of on-chip debugging and integrated trace hardware in current CPUs. The embedded trace macrocell in the ARM cores really does facilitate system-level verification. But even then, the simulator is essential in the workflow.

    I see it as a much neglected part of the design flow, almost as neglected as comprehensive firmware testing methods and verification.