Running a simulation without the debugger and IDE

Hi,

Does anybody know if it is possible to run a simulation, with all its functionality, but without the IDE being launched? (No GUI)

The aim here is to debug the Windows application that communicates with this simulated device, and not the device code itself.

If possible, can it be done using the demo version, or do I need to purchase a full software license for it?

Many thanks in advance!
Shahar.

  • I think not.

    "The aim here is to debug the windows application that communicates with this simulated device, and not the device code itself."

    But why does that preclude use of the GUI?

    The uVision Manual tells you how you can start a debugging session from the command line - so you don't have to do it all manually.

    How does the device communicate with the Windows app?
    If it's a UART link, then just direct the simulated UART to a COM port and use a null-modem cable to link that to the Windows app's COM port...
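
    For example, a minimal debugger initialization file for that setup might contain something like this (a sketch only - the COM port, baud settings, and the VTREG names S0IN/S0OUT are assumptions; the exact register names depend on the simulated device, so check the uVision debugger documentation for yours):

        MODE COM1 9600, 0, 8, 1      /* COM1: 9600 baud, no parity, 8 data bits, 1 stop bit */
        ASSIGN COM1 <S0IN >S0OUT     /* route the simulated UART's I/O to the physical COM1 port */

    With that in place, whatever the firmware writes to its UART appears on COM1, and the null-modem cable carries it to the port your Windows application has open.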

  • You are absolutely right.

    I am already launching the simulation from the command line - and it is indeed easy to configure it to automatically start executing.

    As you suspected, the Windows application communicates with the simulated device via UART, and I'm using a null-modem cable.

    The only issue is that I want to reduce the CPU and memory overhead introduced by the IDE, which I don't need.

    Thanks!

  • If that's an issue, then you must have a seriously under-specified PC!

    I've never had problems running other apps due to the resources used by uVision!

    You can, of course, run uVision on a different PC...

  • Regarding your comment about my PC - I must agree, the situation is indeed serious......

    But the thing is that I want to run several instances of the uVision simulation on a single machine. (My Windows application talks to several devices.)

    Just out of curiosity: in my case the core simulation simulates a C8051F133 CPU (~100 MHz clock) and is claimed to be cycle-accurate. There are no idle CPU cycles.
    How can it NOT induce a serious CPU penalty??

    Anyhow, I don't know what CPU overheads are caused directly by the IDE/debugger itself, which is what I wanted to find out in the first place.

    Regarding the other issue I mentioned before: do you have any idea about Keil's licensing fees when it comes to "just running a simulation"? (i.e. compiling/debugging is not needed)

    Many thanks!

    The simulator will process cycle-for-cycle. But that isn't the same as running in real time. If your PC is only fast enough to run the simulation at half real time, then that is what will happen. The simulator will then count 1 second of simulated execution for every 2 seconds of wall time.

    When the simulation is only tested against debugger test scripts, they will have their actions scaled accordingly. When mapping a PC serial port into your simulated processor, you may run into trouble because of the processing speed of the virtual system in relation to the expectations of the Windows application that communicates with the virtual machine.

  • IF the PC can't hack it, why not just use the real target hardware?

    AFAIK, the simulator and debugger are integral parts of uVision - they are not available separately.

    The reason for not using the real target hardware is that I have also written a small DLL for uVision (using their "AGSI" API) that controls the (simulated) behavior of my on-board peripherals (e.g. external memory, SPI-based devices...).

    This approach should (hopefully) allow me to run all kinds of scenarios automatically - which is great, mainly for my QA purposes...

    I hope I made that point clear...

    Got your last answer. Thanks for your effort and insights! I appreciate it.

    Shahar.

  • I am aware of the time accuracy issue you have mentioned.

    Regarding the serial port mapping, I didn't experience actual communication faults, but only slightly delayed replies from the virtual system, compared to the hardware target.

    "When the simulation is only tested against debugger test scripts, then they will have their actions scaled accordingly"

    Can you please elaborate on that?
    (If it matters, please note that I don't use those C-scripts that are run from the uVision command line, or from anywhere else.)

    Thanks.

    The scripts that are run by the debugger can insert delays based on either a fixed number of clock cycles or a fixed amount of time (see the sketch at the end of this reply). But since the scripts are run by the simulator, delays for a specific amount of time will not count wall time in your room, but time based on the simulation.

    So if the processor has a 48 MHz clock, a delay of 1 second in the script will not take one second of your time, but will wait for the simulator to step the simulated instruction clock counter 48 million times. And if you simulate an external SPI device, a delay of 480 clock cycles in the script will correspond to a 10 µs delay if you had used real hardware.

    So all scripts scale their delay times with how fast your PC is able to simulate the processor. But programs using a mapped serial port will not know how fast the simulation is, so they will not be able to scale their timing and adjust their timeouts correspondingly. If you run a serial port at 9600 baud and fill the FIFO, you expect the UART to be able to send out the characters at about 1 ms/character, based on your time. But a simulator running at 20% of real time will have its UART running correspondingly slower, so it will take 5 ms of PC time for each character sent into - or received out from - the virtual machine.

    How is your AGSI DLL taking care of the scaling of time from a simulation that isn't running 1:1 with the wall clock?
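
    For illustration, a debugger signal function of the kind mentioned above could look roughly like this (a sketch only - the cycle count, the function name, and the VTREG name S0IN are assumptions and depend on your device and scenario):

        signal void FeedChar (void) {
          while (1) {                 /* runs for the lifetime of the simulation         */
            twatch (480000);          /* wait 480000 simulated CPU cycles, not wall time */
            S0IN = 'A';               /* then push one character into the simulated UART */
          }
        }

    Started from the debugger command line with FeedChar(), the delay is counted in simulated cycles, so it automatically stretches or shrinks with how fast your PC happens to run the simulation.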

    Since I'm not interested in a time-accurate simulation of the peripherals (in the end I'm only interested in QAing my Windows application...), I think this should not be an issue.

    Generally speaking, since all peripheral mechanisms are interrupt-driven in the MCU implementation, I don't see where there should be a problem. (The DLL implements the relevant callbacks, as the AGSI API suggests.)

    SPI communication for instance, will be carried out exactly as fast as the simulation goes.

    The only issue is the mapped UART - as I understood from you.

    Can you please explain to me how (and why) the inclusion/exclusion of a mapped UART impacts the simulation timing?

    Thanks.

  • In that case, might it not be simpler to just compile your embedded code into some kind of PC format for the purpose of testing the Win App?

  • It doesn't impact the simulation timing.

    But it represents a scaling error in relation to the Windows application that is counting timeouts and transfer times based on a different time scale.

    If your protocol specifies that there should be five character pauses between message and response, then the Windows side will compute 5x1ms = 5ms wall time. A simulator that runs at 20% of real time will also compute 5ms delay. But this delay will be scaled and look like 25ms for the Windows application. And if the Windows application makes a 5ms delay, it will be scaled and look like 1ms on the simulated machine.

    Seen another way: a 9600 baud UART can transfer roughly 1000 characters/second. If the simulated machine runs at 20% of real time, the UART may either run with clock-cycle-based transfer times, in which case it will only manage about 200 characters/second, scaling down the virtual baud rate (see the sketch at the end of this reply for one way the Windows side could compensate); or the simulated UART doesn't model transfer time at all and lets a byte be received instantly, i.e. as soon as the Windows application sends out a byte, the receive flag gets set in the simulated machine.

    When using the UART for sending out debug information, it is good to have the UART take zero time, basically getting an infinite baudrate.
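
    If the Windows application knows (or can estimate) the simulation speed, one way to live with this scaling is simply to stretch its serial timeouts by that factor. Below is a minimal Win32 sketch, assuming a hypothetical SIM_SPEED_FACTOR of 0.2 (i.e. the simulator runs at 20% of real time); the nominal timeout values are placeholders:

        #include <windows.h>
        #include <stdio.h>

        #define SIM_SPEED_FACTOR 0.2    /* assumed: simulator runs at ~20% of real time */

        int main(void)
        {
            /* Open the COM port that the null-modem cable connects to. */
            HANDLE hCom = CreateFileA("\\\\.\\COM1", GENERIC_READ | GENERIC_WRITE,
                                      0, NULL, OPEN_EXISTING, 0, NULL);
            if (hCom == INVALID_HANDLE_VALUE) {
                printf("Cannot open COM1\n");
                return 1;
            }

            /* Nominal timeouts for real hardware, divided by the simulation speed,
               so e.g. a 500 ms total read timeout becomes 2500 ms against the simulator. */
            COMMTIMEOUTS to = { 0 };
            to.ReadIntervalTimeout        = (DWORD)(50  / SIM_SPEED_FACTOR);
            to.ReadTotalTimeoutMultiplier = (DWORD)(10  / SIM_SPEED_FACTOR);
            to.ReadTotalTimeoutConstant   = (DWORD)(500 / SIM_SPEED_FACTOR);
            SetCommTimeouts(hCom, &to);

            /* ... the normal protocol traffic goes here ... */

            CloseHandle(hCom);
            return 0;
        }

    This only papers over the timing difference, of course; the characters themselves still arrive at whatever rate the simulated UART manages.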

  • I have actually considered what you proposed.

    The thing is, I have ~150K of embedded C code that naturally contains many target-specific elements.

    Migrating this code base so it can run as some kind of "Intel/Windows process" sounds to me like a lot of work.

    If you are aware of any tools/techniques that can help me in this direction, please let me know.

    Thanks!

  • Actually, building an AGSI DLL and all the simulation scripts sounds like a lot of work to me...!