The first 64 bits of pi are:
11.00100100001111110110101010001000100001011010001100001000110100
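For reference, this bit string can be reproduced in a few lines of Python (a sketch using the mpmath library, assuming it is installed; any tool giving more than 64 bits of pi would do):

    from mpmath import mp

    mp.prec = 80                      # more working precision than the 64 bits we keep
    P = int(mp.pi * 2**62)            # scale pi so its first 64 bits form an integer
    bits = bin(P)[2:]                 # '110010010000...'
    print(bits[:2] + '.' + bits[2:])  # 11.001001000011111101101010100010...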
Multiplication by a constant like this can be sped up by looking for patterns in its bits and rewriting the product as a short chain of additions/subtractions of bit-shifted intermediate results (the bitshifts themselves are treated as zero-cost). To keep full 64-bit accuracy, the best I could do was 13 steps:
    x01 = 1
          # ##   ### # ##   ###
    x02 = -x01*2^+06 +x01*2^+00
    x03 = +x01*2^+20 +x01*2^+00
    x04 = +x01*2^+04 +x01*2^+00
    x05 = +x01*2^+05 +x01*2^+00
    x06 = +x01*2^+38 +x01*2^+00
    x07 = +x02*2^+35 +x03*2^+00
    x08 = +x07*2^+02 +x07*2^+00
    x09 = +x04*2^+10 +x04*2^+00
    x10 = +x05*2^+45 +x06*2^+00
    x11 = +x04*2^+24 +x09*2^+00
    x12 = +x07*2^+19 +x08*2^+00
    x13 = +x11*2^+03 +x12*2^+00
    x14 = +x10*2^-48 +x13*2^-60
          # ##   ### # ##   ###
    # x14 is the first 64 bits of pi
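The steps do check out: here is a sketch that replays them with exact integer arithmetic in Python (the two negative shifts in the last step are folded into a common factor of 2^62 so everything stays integral, and the hex constant is simply the 64-bit string above):

    # Replay the 13 steps with exact integers.
    x01 = 1
    x02 = -(x01 << 6) + x01   # = -63
    x03 = (x01 << 20) + x01
    x04 = (x01 << 4) + x01
    x05 = (x01 << 5) + x01
    x06 = (x01 << 38) + x01
    x07 = (x02 << 35) + x03
    x08 = (x07 << 2) + x07
    x09 = (x04 << 10) + x04
    x10 = (x05 << 45) + x06
    x11 = (x04 << 24) + x09
    x12 = (x07 << 19) + x08
    x13 = (x11 << 3) + x12
    # x14 = x10*2^-48 + x13*2^-60; multiply through by 2^62 to compare as integers:
    assert (x10 << 14) + (x13 << 2) == 0xC90FDAA22168C234  # first 64 bits of pi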
I assume this optimization problem is not studied much in mathematics, since there was no response to my original post (please see that original version for a few more details, though its solution used more steps). So: is there a solution with fewer than 13 steps? Better yet, is there a systematic method/theory for finding a minimal 128-bit solution? Note that I could have asked this question for any number, like 99 instead of pi, so I was hoping there might be some number theory devoted to it.
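To illustrate with 99: its binary expansion 1100011 has four 1-bits, so naively summing shifted copies of the input costs three additions, but exploiting the factorization 99 = 3*33 gets there in two steps (same Python shorthand as the sketch above):

    x01 = 1
    x02 = (x01 << 1) + x01   # 3
    x03 = (x02 << 5) + x02   # 96 + 3 = 99, two steps instead of three
    assert x03 == 99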
To be clear, in this problem each step (e.g., defining x08) freely chooses two previously defined variables (e.g., anything from x01 to x07), their two bitshift amounts (any integers, from -infinity to +infinity), and their two inclusion signs (positive or negative). The # marks above and below the code indicate these six columns of choices.
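Absent a known theory, the search can at least be mechanized for small targets. Below is a toy breadth-first search in Python over exactly those six choices per step; the shift cap, the restriction to nonnegative shifts, the pruning bound on intermediates, and the step limit are my own assumptions to keep the search finite, not part of the problem:

    from itertools import product

    def min_steps(target, max_shift=8, max_steps=3):
        """Fewest steps reaching `target`, by brute-force breadth-first search.

        Each step picks two previously defined values, two shifts, and two
        signs. Assumed caps (not in the original problem): shifts are
        nonnegative and at most max_shift, intermediates are pruned to
        |v| <= 2*target, and at most max_steps steps are tried.
        """
        frontier = [(1,)]  # a partial program = the tuple of values it defines
        for depth in range(1, max_steps + 1):
            next_frontier = []
            for values in frontier:
                seen = set(values)
                for a, b in product(values, repeat=2):
                    for sa, sb in product(range(max_shift + 1), repeat=2):
                        for ga, gb in product((1, -1), repeat=2):
                            v = ga * (a << sa) + gb * (b << sb)
                            if v == target:
                                return depth
                            if v != 0 and abs(v) <= 2 * target and v not in seen:
                                seen.add(v)
                                next_frontier.append(values + (v,))
            frontier = next_frontier
        return None

    print(min_steps(99))  # -> 2 (e.g. 3 = (1<<1)+1, then 99 = (3<<5)+3)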