Hey guys, I am working on implementing Elliptic Curve Cryptography ElGamal on an 8051 MCU using Keil uVision 4. I want to make it 128 bit, so I need to use the GMP library. However, I usually install GMP through mingw-get-setup first, so I don't think it will work if I just copy gmp.h into my project in Keil.
My questions are:
1. How can I use GMP on the 8051 with Keil?
2. Or should I compile it with gcc first and then just download the hex file to the 8051? How would I program the registers in gcc?
Thanks for your attention :D
Best regards
I am using the Oregano 8051 IP core, which has been synthesized on an FPGA, so it can be programmed as an 8051.
I plan to split the 128-bit number into an int[32] array, where each element holds two hexadecimal digits. Thus, the operations are done per array element.
Does anyone know how to do the multiplication in hexadecimal? I cannot write binary literals (0b11111 does not work), so I am using hexadecimal.
Thanks all !
Hexadecimal? Binary?
That would only be applicable if you treated the numbers as ASCII text strings, with the number stored as the characters '0' and '1', or '0'..'9', 'A'..'F', etc.
Next thing - the processor is 8-bit. It's better to have an array of 8-bit values than an array of 32-bit values. Remember that for 32-bit values, the compiler needs to either insert many assembler instructions or call helper functions. And next thing - a 32-bit * 32-bit multiply results in a 64-bit answer, which isn't fun if you don't have a C data type available that can store a 64-bit value. Storing the 128-bit number as an array of 8-bit values would mean that your code does base-256 arithmetic.
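To make that concrete, here is a minimal sketch (my own illustration, not from anyone's project; the function name and little-endian byte order are assumptions) of base-256 schoolbook multiplication with the operands held as 16-byte arrays. A 16-bit intermediate is enough, since 255*255 + 255 + 255 = 65535 still fits in an unsigned int.

/* Sketch: base-256 schoolbook multiply of two 128-bit numbers.
   Operands are little-endian arrays of 16 bytes; the product needs 32. */
#define NBYTES 16

void mul128(const unsigned char a[NBYTES],
            const unsigned char b[NBYTES],
            unsigned char result[2 * NBYTES])
{
    unsigned char i, j;
    unsigned int partial, carry;   /* 255*255 + 255 + 255 = 65535 fits in 16 bits */

    for (i = 0; i < 2 * NBYTES; i++)
        result[i] = 0;

    for (i = 0; i < NBYTES; i++) {
        carry = 0;
        for (j = 0; j < NBYTES; j++) {
            partial = (unsigned int)a[i] * b[j] + result[i + j] + carry;
            result[i + j] = (unsigned char)(partial & 0xFF);
            carry = partial >> 8;
        }
        result[i + NBYTES] = (unsigned char)carry;   /* carry out of this row */
    }
}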
Are you sure you are up for this?
I thought 128-bit would fit in a 16 byte array with no problem?
Big-number math and encryption need some pretty serious programming skills; I've seen mediocre programmers fail at such tasks. A lot of the difference comes from relying on libraries written by others who actually understood the problem from first principles.
Parsing and printing numbers in ASCII at any base is a pretty simple task. I'd expect a candidate to be able to white-board such code at an interview.
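For what it's worth, a rough sketch of that whiteboard exercise for a plain machine-word value (function names are just illustrative, and this is not big-number code):

#include <string.h>

static const char digits[] = "0123456789ABCDEF";

/* Print 'value' into 'buf' in any base from 2 to 16; returns buf. */
char *to_ascii(unsigned long value, unsigned char base, char *buf)
{
    char tmp[33];                    /* up to 32 binary digits plus terminator */
    unsigned char i = 0, j = 0;

    do {
        tmp[i++] = digits[value % base];
        value /= base;
    } while (value != 0);

    while (i > 0)                    /* digits came out least significant first */
        buf[j++] = tmp[--i];
    buf[j] = '\0';
    return buf;
}

/* Parse an unsigned number written in the given base; stops at the
   first character that is not a valid digit for that base. */
unsigned long from_ascii(const char *s, unsigned char base)
{
    unsigned long value = 0;
    const char *p;
    char c;

    while ((c = *s++) != '\0') {
        if (c >= 'a' && c <= 'f')
            c -= 'a' - 'A';          /* fold lower case to upper case */
        p = strchr(digits, c);
        if (p == NULL || (p - digits) >= base)
            break;
        value = value * base + (unsigned long)(p - digits);
    }
    return value;
}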
"I thought 128-bit would fit in a 16 byte array with no problem?"
Yes, but the OP has the view that if you have a 128-bit MD5 value like: 0c18b46b8145e786d033d3bad80303ff
Then that is 32 characters. And he seems to want to convert each character into two hex digits. So the first character '0' in that MD5 (which has value 0x30) is then stored as one byte holding 3 and one byte holding 0 (two hex digits) in one of his ints.
I think we can quickly deduce that a design that manages to take one 128-bit number and rewrite it so that it takes a 32*2 = 64-byte array (i.e. 128 bits -> 512 bits) will have lots of problems ahead when it comes to implementing the actual math operations...
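For contrast, the usual representation packs two hex characters into each byte, so the 32-character string becomes exactly 16 bytes. A rough sketch (the helper names are mine, invalid characters are just mapped to 0 here):

/* Convert one hex character to its value 0..15. */
static unsigned char hexval(char c)
{
    if (c >= '0' && c <= '9') return (unsigned char)(c - '0');
    if (c >= 'a' && c <= 'f') return (unsigned char)(c - 'a' + 10);
    if (c >= 'A' && c <= 'F') return (unsigned char)(c - 'A' + 10);
    return 0;                        /* invalid characters silently map to 0 */
}

/* Pack a 32-character hex string such as the MD5 above into 16 bytes. */
void pack_hex128(const char *hex32, unsigned char out[16])
{
    unsigned char i;
    for (i = 0; i < 16; i++)
        out[i] = (unsigned char)((hexval(hex32[2 * i]) << 4) | hexval(hex32[2 * i + 1]));
}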