I would like to be able to write code that implements short delays (a few microseconds) but where the generated code automatically adapts to the speed of the processor - i.e., as the processor clock is changed, the delay remains constant. I don't suppose there is a predefined macro constant available (that Keil hasn't told us about) that makes the cycle time available to the source code? I guess this is quite a common problem. uVision seems to know all about time, so would it be difficult for a predefined constant to be provided?
"most of the information required must be available to uVision." Digressing somewhat, an awful lot of information is available to uVision which would be very useful to the application; eg, uVision already informs the C51 source code of the Memory Model selected by means of the __MODEL__ Predefined Macro Constant. This could easily be extended to include things like: __XTAL__ - The Crystal frequency entered in the 'Target' options (obviously, it would be up to the user to apply a suitable interpretation to this value); __XRAM_START0__, __XRAM_SIZE0__, __XRAM_START1__, __XRAM_SIZE1__ etc - the XRAM details entered in the 'Target' options. These needn't even require any change to the C51 compiler - they could just be (optionally?) placed explicitly on the command line by uVision It would also be useful if the Linker could provide a symbolic name which would allow the application to determine the last used (or first free) location in XRAM - eg, for buffering.