Mathematica internal number formats and precision


Tangentially related to this question: what is happening here with the number formatting?

In[1]:= InputForm @ 3.12987*10^-270
Out[1]= 3.12987`*^-270

In[2]:= InputForm @ 3.12987*10^-271
Out[2]= 3.1298700000000003`*^-271

If you use a *10.^ multiplier, the transition is where you'd naively expect it to be:

In[3]:= InputForm @ 3.12987*10.^-16
Out[3]= 3.12987`*^-16

In[4]:= InputForm @ 3.12987*10.^-17
Out[4]= 3.1298700000000004`*^-17

whereas *^ takes the transition a bit further still, albeit machine precision starts flaking out:

In[5]:= InputForm @ 3.12987*^-308
Out[5]= 3.12987*^-308

In[6]:= InputForm @ 3.12987*10.^-309
Out[6]= 3.12987`15.954589770191008*^-309

The base itself starts breaking down later:

In[7]:= InputForm @ 3.12987*^-595
Out[7]= 3.12987`15.954589770191005*^-595

In[8]:= InputForm @ 3.12987*^-596
Out[8]= 3.1298699999999999999999999999999999999999`15.954589770191005*^-596

I'm assuming these transitions relate to the format in which Mathematica internally keeps its numbers. Does anyone know, or care to hazard an educated guess at, how?

If I understand correctly, you are wondering when InputForm will show more than 6 digits. If so, it happens haphazardly, whenever more digits are required to "best" represent the number obtained after evaluation. Since the evaluation involves an explicit multiplication by 10^(some power), and since the decimal input need not be (and in this case is not) exactly representable in binary, small differences are to be expected.

In[26]:= Table[3.12987*10^-j, {j, 10, 25}] // InputForm

Out[26]//InputForm=
{3.12987*^-10, 3.12987*^-11, 3.12987*^-12, 3.12987*^-13,
 3.12987*^-14, 3.12987*^-15, 3.12987*^-16, 3.1298700000000004*^-17,
 3.1298700000000002*^-18, 3.12987*^-19, 3.12987*^-20,
 3.1298699999999995*^-21, 3.1298700000000003*^-22,
 3.1298700000000004*^-23, 3.1298700000000002*^-24,
 3.1298699999999995*^-25}
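The non-representability itself can be made visible with SetPrecision, which exposes the binary digits actually stored for a machine number before padding with zeros (a sketch, not part of the original exchange; the trailing digits depend on how the platform's double rounds):

In[27]:= SetPrecision[3.12987, 25] // InputForm
(* shows more digits of the exact binary value underlying the
   machine number 3.12987; it is close to, but not exactly, 3.12987 *)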

As for the *^ input syntax, that's a parsing (actually lexical) construct. No explicit exact power of 10 gets computed. A floating-point value is constructed that is as faithful to the input as possible, to the extent allowed by binary-to-decimal conversion. InputForm will show as many digits as were used in inputting the number, because that is indeed the closest decimal corresponding to the binary value that got created.
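For instance, parsing with *^ and computing via 10^-17 land on visibly different machine numbers (a hypothetical session whose outputs are consistent with the Table above; exact trailing digits and precision marks may vary by platform and version):

In[28]:= InputForm[3.12987*^-17]     (* lexical: no arithmetic is performed *)
Out[28]= 3.12987*^-17

In[29]:= InputForm[3.12987*10^-17]   (* computed: the multiplication rounds *)
Out[29]= 3.1298700000000004*^-17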

When you surpass the limitations of machine floating-point numbers, you get an arbitrary-precision analog. Its precision is no longer MachinePrecision but $MachinePrecision (that's the bignum analog of machine floats in Mathematica).
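Precision makes the switch visible; the sketch below assumes a recent Mathematica version on standard 64-bit hardware, where $MachinePrecision is Log[10, 2^53], about 15.9546:

In[30]:= Precision[3.12987*^-308]
Out[30]= MachinePrecision

In[31]:= Precision[3.12987*^-309]   (* past the machine underflow threshold: a bignum *)
Out[31]= 15.9546

In[32]:= N[Log[10, 2^53]]           (* the value of $MachinePrecision *)
Out[32]= 15.9546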

What you see in the InputForm of 3.12987*^-596 (a decimal ending in a slew of 9's) is, I believe, caused by Mathematica's internal representation making use of guard bits. There are more than the 53 mantissa bits of the analogous machine double, so the closest decimal representation is longer than the expected 6 digits.
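One way to peek at those guard bits (a sketch under the same assumptions as above; the exact digits exposed depend on version and platform) is again SetPrecision, which pads a bignum out using its internally stored binary digits before appending zeros:

In[33]:= SetPrecision[3.12987*^-596, 45] // InputForm
(* exposes the stored digits past the nominal precision of ~15.95,
   including the run of 9s visible in Out[8] above *)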

Daniel Lichtblau, Wolfram Research

