
max value for seed for srand() function

In Keil's C library, the seed argument for srand() is 'unsigned int' according to the function prototype, which on my processor is 4 bytes (sizeof(unsigned int) == 4).  However, it looks like the upper 2 bytes are ignored: a seed of 0x00001234 produces the same random numbers as 0x00011234 and all other values of the top 2 bytes.  So is the largest effective seed really just 2 bytes (65535), rather than being based on the size of 'unsigned int' on your processor?  That is what I read in other forums as the norm.  Thanks.

Sutton

  • You proved it yourself.

    Why not use a REAL random source instead of a simulated one?  Keep a timer free running and read it when you need a random number.

    An anecdote illustrating the danger of simulated randomness:

    J1708 specifies adding a random delay when collisions happen.  Two units running the same simulated random sequence collided forever.
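    A minimal sketch of the free-running-timer idea, under the assumption that the target has a timer count register to read; here the standard library's clock() stands in for that register so the example runs on a desktop. The low bits of a counter sampled at unpredictable moments serve as the "real" random value, so no two units stay in lockstep.

    ```c
    #include <stdio.h>
    #include <time.h>

    /* Stand-in for reading a free-running hardware timer register.
       On a real target, replace the clock() call with the timer read. */
    static unsigned int read_free_running_timer(void)
    {
        return (unsigned int)clock();
    }

    int main(void)
    {
        /* e.g. a random back-off delay of 0..255 ticks */
        unsigned int delay = read_free_running_timer() & 0xFFu;
        printf("random delay: %u ticks\n", delay);
        return 0;
    }
    ```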

  • I'm using a combination of minutes and seconds from the RTC as a 2 byte seed.  That's really random enough for my application.

    However, I was previously using a 4-byte combination of day, hours, minutes, and seconds, and noticed the higher 2 bytes weren't doing anything as a seed in Keil's srand() function.  That observation is what prompted me to ask whether Keil's function simply ignores the upper two bytes; perhaps the 'unsigned int' prototype is there for future expansion, to stay consistent with other srand() implementations.  Other forums have indicated that the maximum seed is based on the size of the processor's 'unsigned int'.  So even though Keil's srand() argument is declared unsigned int, it looks like the effective seed is 2 bytes regardless of the processor.  It was really just a question for someone at Keil to confirm.
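    The minutes-plus-seconds scheme described above can be sketched as packing the two RTC fields into one 16-bit seed. The `rtc_minutes()`/`rtc_seconds()` functions are hypothetical stand-ins for the actual RTC reads, implemented here with the C library time functions so the example runs on a desktop; on the target they would read the RTC registers.

    ```c
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Hypothetical RTC reads, backed by the C library clock here. */
    static unsigned int rtc_minutes(void)
    {
        time_t t = time(NULL);
        return (unsigned int)localtime(&t)->tm_min;   /* 0..59 */
    }

    static unsigned int rtc_seconds(void)
    {
        time_t t = time(NULL);
        return (unsigned int)localtime(&t)->tm_sec;   /* 0..59 (60 with leap second) */
    }

    int main(void)
    {
        /* Minutes in the high byte, seconds in the low byte: at most
           0x3B3B, so the whole seed fits in the 16 bits that Keil's
           srand() appears to use. */
        unsigned int seed = (rtc_minutes() << 8) | rtc_seconds();
        srand(seed);
        printf("seed = 0x%04X, first draw = %d\n", seed, rand());
        return 0;
    }
    ```

    Note this gives only 60*60 = 3600 distinct seeds, which is fine when the seed just needs to differ between runs, but worth keeping in mind.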