Hello, here are my USART settings, which work fine:
USAR.USART_BaudRate = 9600;
USAR.USART_StopBits = USART_StopBits_1;
USAR.USART_WordLength = USART_WordLength_8b;
USAR.USART_Parity = USART_Parity_No;
USAR.USART_HardwareFlowControl = USART_HardwareFlowControl_None;
USAR.USART_Mode = USART_Mode_Rx | USART_Mode_Tx;
But to reduce my error rate I want to use a parity bit, so I changed the settings as shown below:
USAR.USART_BaudRate = 9600;
USAR.USART_StopBits = USART_StopBits_1;
USAR.USART_WordLength = USART_WordLength_8b;
USAR.USART_Parity = USART_Parity_Even;
USAR.USART_HardwareFlowControl = USART_HardwareFlowControl_None;
USAR.USART_Mode = USART_Mode_Rx | USART_Mode_Tx;
But when I apply the same settings on my PC side to receive the data (even parity), it doesn't receive anything.
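If this is the STM32 Standard Peripheral Library, one likely cause of the symptom: on STM32 USARTs the word-length setting counts the parity bit, so `USART_WordLength_8b` with parity enabled sends only 7 data bits plus parity, which will not match a PC configured for 8 data bits + even parity. A sketch of the init for 8 data bits plus even parity (assuming the same `USAR` init struct as above):

```
/* Assumption: STM32 SPL, where USART_WordLength includes the parity bit.
   Request 9 bits so that 8 data bits remain after the parity bit. */
USAR.USART_BaudRate = 9600;
USAR.USART_StopBits = USART_StopBits_1;
USAR.USART_WordLength = USART_WordLength_9b;  /* 8 data bits + parity */
USAR.USART_Parity = USART_Parity_Even;
USAR.USART_HardwareFlowControl = USART_HardwareFlowControl_None;
USAR.USART_Mode = USART_Mode_Rx | USART_Mode_Tx;
```

The PC side would then stay at 8 data bits, even parity, 1 stop bit.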
Just note that communication errors tend to happen in bursts, so you might get a double-bit or triple-bit error, or even longer error runs.
A parity bit can only detect an odd number of bit errors in the character.
So a lot of communication ignores parity bits. Either more complex transfer encodings are used, where more than one extra bit is added per symbol so that errors can also be corrected (normally done with dedicated hardware), or the messages instead carry "checksums" or error-correction data as separate bytes in each packet.
A single 16-bit CRC can detect all combinations with an odd number of faulty bits, and it can handle one error burst of up to 15 sequential bits. So two bytes for a CRC-16 are normally a better investment than sending every single byte with an additional parity bit. Besides, not all UARTs can do 8 data bits + parity.
It can be argued that having a parity bit in every character would scale better with packet size.
But it really doesn't scale well since each individual byte is so badly protected.
It's better to use a larger CRC, or maybe even a two-dimensional ECC.
Keil: please reconsider your spam filter. Why should I need to use a zero in "tw0-dimensional"? And why isn't the '-' enough as a separator?