So I'm trying to write an implementation of RSA in C++. I know - yikes!
I'll try to keep it concise. I've modified some of the code below so that I don't have to explain custom classes and other machinery I've got working here; it's irrelevant to the question and would take too long to describe.
My first step in the process is to convert an input string into a series of bits (using an std::bitset). I've accomplished this in the following manner:
    template <size_t S>
    std::bitset<S>* convertToBitStr(const std::string& msg)
    {
        std::bitset<S>* result = new std::bitset<S>();
        size_t bsPos = 0;
        for (size_t pos = 0; pos < msg.size(); ++pos)
        {
            assert(bsPos + CHAR_BIT <= S);
            for (int t = 0; t < CHAR_BIT; ++t)
            {
                // Test bit t of the current character with a shift-and-mask.
                // (The original used pow(), but pow() returns a double, which
                // can't be used with bitwise AND; a shift is also exact.)
                (*result)[bsPos] = ((msg[pos] >> t) & 1) != 0;
                ++bsPos;
            }
        }
        return result;
    }

(Note: You need includes for <cassert>, <climits> (for CHAR_BIT), <string>, and of course <bitset> for this to compile. Also note the (*result)[bsPos] dereference - result[bsPos] would index the pointer itself, not the bits.)
This yields a simple conversion - take the string "Hi", for instance. The integer value of 'H' (on my compiler, MS VC++) is 72, which in binary is 1001000. The integer value of 'i' is 105, which in binary is 1101001. Each character in the input string gets 8 bits of space in the resulting bitset, with the first character of the string stored in the low-order bits, so it appears last when the bitset is printed. Thus the binary representation of "Hi" is 0110100101001000. I've also got a function defined to convert these binary representations back to strings, of course - everything is working fine so far.
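For illustration, a minimal driver like the following reproduces that output (the bitset size of 16 is hard-coded here purely for the two-character example):

    #include <bitset>
    #include <iostream>
    #include <string>

    // Prints 0110100101001000 for "Hi": 'i' (01101001) followed by
    // 'H' (01001000), since std::bitset streams its highest-indexed bit first.
    int main()
    {
        std::bitset<16>* bits = convertToBitStr<16>("Hi");
        std::cout << *bits << std::endl;
        delete bits;
        return 0;
    }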
Now, the interesting part. Assume I've done all the padding I need to do and I've got all my RSA key components: n, the public modulus; d, the private exponent; and e, the public exponent.
In that case, the RSA ciphertext c of a message M, whose integer representation is m, is computed by c = m^e (mod n).
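(To make that concrete with toy numbers: with p = 3 and q = 11 we get n = 33, and e = 3, d = 7 form a valid key pair since 3*7 = 21 = 1 (mod 20). Encrypting m = 4 gives c = 4^3 mod 33 = 64 mod 33 = 31, and decrypting gives 31^7 mod 33 = 4 again.)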
However, computing the right-hand side of that equation is where I'm having trouble. I cannot convert the binary representation to any of the built-in integer types and then do the arithmetic with built-in operators, because the integer value of even a measly 5-character string is likely to exceed the maximum value an unsigned long can hold.
Furthermore, e is probably going to have a value on the order of 65537 (the usual public exponent), or potentially even larger. So we are raising an incredibly large number to the power of another very large number. Computed naively, that's going to require a *TON* of bits to hold the resultant value. Seems grossly inefficient.
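From what I've read so far, the standard trick seems to be modular exponentiation by squaring: reduce mod n after every multiplication, so no intermediate value ever exceeds n^2 and m^e is never materialized in full. Here's a sketch of the idea, using unsigned 64-bit integers just to keep the example self-contained - real RSA-sized operands would need an arbitrary-precision type (GMP's mpz_powm, for instance, does exactly this):

    #include <cstdint>

    // Computes (base^exp) mod n by square-and-multiply, reducing mod n at
    // each step so intermediates stay below n^2. Assumes n < 2^32 so the
    // products below fit in 64 bits; a bignum type lifts that restriction.
    std::uint64_t modexp(std::uint64_t base, std::uint64_t exp, std::uint64_t n)
    {
        std::uint64_t result = 1 % n;
        base %= n;
        while (exp > 0)
        {
            if (exp & 1)                  // current low bit of exponent set?
                result = (result * base) % n;
            base = (base * base) % n;     // square for the next bit
            exp >>= 1;
        }
        return result;
    }

With this structure, c = modexp(m, e, n) walks the bits of e one at a time, which also fits nicely with keeping e as a bitset.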
So I have a feeling that I am going about this the wrong way. Am I? If so, what is the right way? And if not, how can I exponentiate two very large numbers when one of them is available only as a binary representation (e could simply be an unsigned long or something, or it could also be a bitset - whichever is more convenient)?
Thanks!!