Hi, I am not sure if my question is relevant to this forum, but it's an interesting programming problem I am facing, and I hope someone here can give useful advice. My microcontroller is receiving a stream of 25 bits: 0000011100000001000111000. These bits are encoded strangely. A zero appears as follows:

     ___
    |   |
    |   |________

The ratio of high-time to low-time is 1:3, while a one appears as follows:

     ________
    |        |
    |        |___

Here the ratio of high-time to low-time is 3:1. My questions are: what is this encoding scheme, and how can I read it in a microcontroller? I am receiving the stream on a normal pin, not an interrupt pin, and I am looking for a neat algorithm to read the stream faithfully. Any help would be great; thanks.
I'm not sure of the terms, but this is a self-clocking code similar to the old Tarbell cassette scheme from the late '70s. You have to sample the pin periodically with a period shorter than the narrowest pulse; this guarantees at least one sample in each narrow state. Record the number of high samples and the number of low samples. At the first high reading after a low run, decode the last bit and reset both counters: if the high count is larger than the low count you have a '1', otherwise a '0'. You can also save time and space with a single counter that counts up on high samples and down on low samples; the bit is a '1' if the count is positive when the cell ends, and a '0' otherwise. It usually pays to sample at a much higher rate to account for signal degradation.
                    ____                ________________
    signal              |_______________|               |________
    sample points     |   |   |   |   |   |   |   |   |   |   |
    values            1   1   2   3   4   1   2   3   4   1   2
    output                                <-0                    <-1