
elliptic curve cryptography on 8051

Hey guys, I am working on implementing Elliptic Curve Cryptography (ElGamal) on an 8051 MCU using Keil uVision 4. I want to use 128-bit numbers, so I thought I would need the GMP library. However, I normally install GMP through mingw-get-setup first, so I don't think it will work if I just copy gmp.h into my Keil project.

My questions are :

1. How can I use GMP on the 8051 with Keil?

2. Or should I compile it with gcc first and then just download the hex file to the 8051? If so, how do I program the registers from gcc?

Thanks for your attention :D

Best regards

  • I thought 128 bits would fit in a 16-byte array with no problem?

    Big-number math and encryption take some pretty serious programming skills; I've seen mediocre programmers fail at such tasks. A lot of the difference comes from relying on libraries written by others who actually understood the problem from first principles.

    Parsing and printing numbers in ASCII in any base is a pretty simple task. I'd expect a candidate to be able to whiteboard such code in an interview.

  • "I thought 128-bit would fit in a 16 byte array with no problem?"

    Yes, but the OP has the view that if you have a 128-bit MD5 value like:
    0c18b46b8145e786d033d3bad80303ff

    Then that is 32 characters, and he seems to want to convert each character into its two hex digits.
    So the first character '0' in that MD5 (which has the ASCII value 0x30) is stored as one byte holding 3 and one byte holding 0 (the two hex digits) in one of his ints.

    I think we can quickly deduce that a design that takes one 128-bit number and expands it into a 32*2 = 64-byte array (i.e. 128 bits -> 512 bits) will have lots of problems ahead when trying to implement the actual math operations...