Hey guys, I am working on implementing Elliptic Curve Cryptography ElGamal on an 8051 MCU using Keil uVision 4. I want to make it 128-bit, so I need to use the GMP library. However, to use GMP I usually install it via mingw-get-setup first, so I don't think it will run if I just copy gmp.h into my Keil project.
My questions are:
1. How can I use GMP on the 8051 with Keil?
2. Or should I compile it first with gcc, then just download the hex file to the 8051? How do I program the registers in gcc?
Thanks for your attention :D
Best regards
What 8051 implementation would this be?
Have you coded something similar on a PC? How large was the code? Would something of equivalent size fit inside the 8051 part you have chosen?
You would need to get ALL the source code, you'd need to get ALL of it into your Keil project, and you'd need to compile it there. You are unlikely to be able to take a mishmash of code/libraries compiled for different processors, with different tools, and have them build into a usable form.
Take a step back, and a deep breath, and THINK about what you are doing.
Personally, I'd prefer a 32-bit ARM chip for this task. The general register bank and the better-suited instruction set make it so much simpler for a compiler to build this kind of code. And it would be noticeable in the processing speed too.
Thanks, guys, for your replies. Actually, I have to use this micro for my project. And since the 8051 is 8-bit, I need to write many functions, such as 128-bit addition. Do you have any references for such functions (addition, multiplication, division in 128 bits) on the 8051? Thanks a lot
For higher-precision math you'd simply use addition/subtraction with carry propagation, and extend the bit-wise multiply and divide implementations to accommodate the size of your numbers.
users.utcluj.ro/.../SSCE-Shift-Mult.pdf
The algorithms for doing multiply/divide should be available for 8051 and other 8-bit micros of the era. And explained in most texts about said processors.
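For illustration, here is a minimal sketch of that carry propagation in C, storing a 128-bit number as 16 bytes with the least significant byte first (my own example; the names and layout are just one reasonable choice, not from the PDF):

/* r = a + b over 16-byte little-endian numbers; returns the final carry. */
#define LIMBS 16   /* 16 bytes = 128 bits */

unsigned char bignum_add(unsigned char r[LIMBS],
                         const unsigned char a[LIMBS],
                         const unsigned char b[LIMBS])
{
    unsigned char i;
    unsigned int sum;                          /* unsigned int is 16-bit on Keil C51 */
    unsigned char carry = 0;
    for (i = 0; i < LIMBS; i++) {
        sum = (unsigned int)a[i] + b[i] + carry;
        r[i] = (unsigned char)sum;             /* keep the low 8 bits */
        carry = (unsigned char)(sum >> 8);     /* propagate the carry to the next byte */
    }
    return carry;                              /* 1 means overflow past 128 bits */
}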
Again, WHAT 8051 part are you using?
The concept is trivial - even easier than using decimal numbers.
But the amount of code grows. And the amount of processor instructions needed to produce one output result grows.
So an 8051 will not be a speed demon. And this is a situation where it helps to have more than one index register for the memory accesses. C = A+B is short when everything fits in a register. But with big numbers, each number is an array of bytes.
But the compiler is up to the task, if the developer is up to the task and the code and data spaces are large enough.
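For example, C = A + B over byte arrays becomes a loop like the bignum_add sketch above, and subtraction is the mirror image with a borrow (again just an illustration, same made-up layout):

/* r = a - b over 16-byte little-endian numbers; returns the final borrow. */
unsigned char bignum_sub(unsigned char r[16],
                         const unsigned char a[16],
                         const unsigned char b[16])
{
    unsigned char i;
    unsigned int diff;                         /* 16-bit on Keil C51 */
    unsigned char borrow = 0;
    for (i = 0; i < 16; i++) {
        diff = (unsigned int)a[i] - b[i] - borrow;
        r[i] = (unsigned char)diff;            /* low 8 bits of the difference */
        borrow = (unsigned char)((diff >> 8) & 1); /* 1 if this byte underflowed */
    }
    return borrow;                             /* 1 means a < b */
}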
Hmm... if you can't even manage the basic arithmetic, isn't this project a bit ambitious...?
I am using the Oregano 8051 IP core, which has been synthesized on an FPGA, so it can be programmed as an 8051.
I plan to split the 128 bits into an int[32] array, where each element contains two hexadecimal digits. Thus, the operations are done per array element.
Does anyone know how to do the multiplication in hexadecimal? I cannot write binary numbers (0b11111 is not working), so I use hexadecimal.
Thanks all !
Hexadecimal? Binary?
That would only be applicable if you treated the numbers as ASCII text strings, with the number stored as the characters '0' and '1', or '0'..'9', 'A'..'F', etc.
Next thing - the processor is 8-bit. It's better to have an array of 8-bit values than an array of 32-bit values. Remember that for 32-bit values, the compiler needs to either insert many assembler instructions or call helper functions. And next thing - a 32-bit * 32-bit multiply results in a 64-bit answer. That isn't fun if you don't have a C data type available that can store a 64-bit value. Storing the 128-bit number as an array of 8-bit values would mean that your code does base-256 arithmetic.
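To make the base-256 point concrete, here is a sketch of schoolbook long multiplication on 8-bit limbs, where every 8-bit * 8-bit partial product fits comfortably in a 16-bit unsigned int (illustrative code, same made-up byte-array layout as above):

/* r = a * b: 16-byte * 16-byte -> 32-byte product, base-256 long multiplication. */
void bignum_mul(unsigned char r[32],
                const unsigned char a[16],
                const unsigned char b[16])
{
    unsigned char i, j;
    unsigned int acc, carry;
    for (i = 0; i < 32; i++)
        r[i] = 0;
    for (i = 0; i < 16; i++) {
        carry = 0;
        for (j = 0; j < 16; j++) {
            /* 8x8->16 partial product, plus what is already there, plus carry */
            acc = (unsigned int)a[i] * b[j] + r[i + j] + carry;
            r[i + j] = (unsigned char)acc;
            carry = acc >> 8;
        }
        r[i + 16] = (unsigned char)carry;      /* top byte of this row */
    }
}

Note the worst case: acc = 255*255 + 255 + 255 = 65535, which still fits in 16 bits. That is exactly why 8-bit limbs are comfortable on this compiler.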
Are you sure you are up for this?
I have found that my micro is unable to do multiplication and unable to represent numbers in binary. I am wondering what the most effective algorithm is for multiplying 8 bits at a time in hexadecimal?
Yeah. I had the same trouble when I tried to do multi-precision arithmetic with my linear motors.
Unable to represent numbers in binary???
Either you are a troll, or you are so very out of your league that you should immediately talk with the person who gave you this task and say that they should give you some other task instead.
If you don't even understand how a number is (or can be) stored in memory, then you'll have a very hard time implementing any big-number functionality. At least the basic steps like neg, add, sub, mul, div (and maybe sqrt and 1/x) should be almost trivial to implement, since it's possible to do them by just applying standard elementary-school math.
/pwm
Yes, sure. As I said, it is only able to accept integer or hexadecimal literals, but I am unable to write, for example, unsigned int a = 0b110101
Of course every elementary student knows that values are stored in binary form in memory, but I can't write numbers in binary form for this 8051 micro.
Anyone who plans to implement any big number library would see it as a trivial exercise to also implement an assign function that can work on an ASCII string of arbitrary number base.
So:
void bignum_set(bignum_t *num, uint8_t base, const char *value);
void bignum_set_raw(bignum_t *num, const uint8_t *value, size_t bytes);

uint8_t bignum_data[] = { 0xf3, 0x12, 0x73, 0x00, 0x03, 0x4a };

bignum_set(num1, 4, "033303210021111111130203302");
bignum_set(num2, 16, "fa3032faccca");
bignum_set(num3, 2, "011110100100001110111010110010011010101001010101101000101010");
bignum_set_raw(num4, bignum_data, sizeof(bignum_data));
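And a sketch of how bignum_set itself could work, assuming bignum_t is simply a little-endian array of bytes (that layout and the digit_val helper are assumptions for illustration, not something defined above). The classic loop is num = num*base + digit per character:

#include <stdint.h>                      /* older C51 may need your own typedefs */

#define BIGNUM_BYTES 16                  /* 128 bits */
typedef struct { uint8_t b[BIGNUM_BYTES]; } bignum_t;

static int digit_val(char c)             /* ASCII digit -> value, or -1 */
{
    if (c >= '0' && c <= '9') return c - '0';
    if (c >= 'a' && c <= 'z') return c - 'a' + 10;
    if (c >= 'A' && c <= 'Z') return c - 'A' + 10;
    return -1;
}

void bignum_set(bignum_t *num, uint8_t base, const char *value)
{
    uint8_t i;
    unsigned int acc;                    /* 16-bit on C51, big enough for 255*255 */
    int d;
    for (i = 0; i < BIGNUM_BYTES; i++)
        num->b[i] = 0;
    for (; *value; value++) {
        d = digit_val(*value);
        if (d < 0 || d >= base)
            break;                       /* stop at the first invalid digit */
        acc = (unsigned int)d;
        for (i = 0; i < BIGNUM_BYTES; i++) {
            acc += (unsigned int)num->b[i] * base; /* num = num*base + d */
            num->b[i] = (uint8_t)acc;
            acc >>= 8;
        }
    }
}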
I thought 128-bit would fit in a 16 byte array with no problem?
Big-number math and encryption need some pretty serious programming skills; I've seen mediocre programmers fail at such tasks. A lot of the difference is relying on libraries written by others who actually understood the problem from first principles.
Parsing and printing numbers in ASCII at any base is a pretty simple task. I'd expect a candidate to be able to white-board such code at an interview.
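For instance, printing at any base is just repeated short division by the base, collecting the remainders. A sketch under the same assumed little-endian 16-byte layout (illustrative only; it also assumes putchar is retargeted to your UART):

#include <stdio.h>

#define LIMBS 16

void bignum_print(const unsigned char num[LIMBS], unsigned char base) /* base 2..16 */
{
    unsigned char work[LIMBS];
    char digits[LIMBS * 8];              /* worst case: base 2 needs 128 digits */
    unsigned int rem;
    unsigned char i, nonzero, n = 0;
    for (i = 0; i < LIMBS; i++)
        work[i] = num[i];
    do {
        rem = 0;
        nonzero = 0;
        for (i = LIMBS; i-- > 0; ) {     /* short division, most significant byte first */
            rem = (rem << 8) | work[i];
            work[i] = (unsigned char)(rem / base);
            if (work[i])
                nonzero = 1;
            rem %= base;
        }
        digits[n++] = "0123456789abcdef"[rem];
    } while (nonzero);
    while (n--)                          /* remainders come out least significant first */
        putchar(digits[n]);
}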
"I thought 128-bit would fit in a 16 byte array with no problem?"
Yes, but the OP has the view that if you have a 128-bit MD5 value like: 0c18b46b8145e786d033d3bad80303ff
Then that is 32 characters. And he seems to want to convert each character into two hex digits. So the first character '0' in that MD5 (which has the value 0x30) is then stored as one byte with 3 and one byte with 0 (two hex digits) in one of his ints.
I think we can quickly deduce that a design that manages to take one 128-bit number and rewrite it so it takes a 32*2 = 64 byte array (i.e. 128 bits -> 512 bits) will have lots of problems ahead when trying to implement the actual math operations...