AppleScript Arithmetic with Big Reals – I hoped it would never come to this

I’ve got an applet with an accounting component which, as you might imagine, involves some basic adding and subtracting of numbers with up to two decimal places. I thought such basic maths would be a cinch for computers, and by extension AppleScript, but somehow I’ve now discovered that to be naive. :blush:

I started noticing that some of the balances/values in my applet seemed off. I first tried redoing the maths and double-checking my app logic, but that didn’t seem to fix or explain the issue.

I tried reading some academic articles on floating-point numbers and some AppleScript documentation, and read just enough of the former to start feeling sleepy (and to get a hint of what the issue might be) before deciding just to play around in Script Debugger instead.

I don’t know whether AppleScript’s reals are floating-point numbers, or whether scientific notation is the same thing as floating point, a type of it, or something else. But basically, if you run the following code, you’ll see AppleScript has difficulty with add-ups and take-aways when the numbers are large enough and involve decimal places (at least, those seem to be the triggers).

set x to 100000.66
set y to 100000
log x - y
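For what it’s worth, the same numbers misbehave in Python, whose floats use the same double-precision format, so the surprise isn’t AppleScript-specific:

```python
# Python floats are IEEE 754 doubles too, so the same subtraction drifts.
x = 100000.66
y = 100000
print(x - y)           # a little more than 0.66, not exactly 0.66
print(x - y == 0.66)   # False
```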

It seems I need to write my own basic maths functions. Thankfully I only need adding and subtracting, and it’s with currency so will only ever have up to two decimal places. So I’ve written the below.

## AppleScript Handler for Addition

on sum of x to y
	-- Round the cent-scaled total back to two decimal places.
	return (round ((x + y) * 100)) / 100
end sum

## AppleScript Handler for Subtraction

on difference of x from y
	return (round ((y - x) * 100)) / 100
end difference
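As a cross-check, here’s the same cent-rounding idea in Python (hypothetical names `add_currency`/`subtract_currency`; Python’s `round` rounds halves to even, which may differ from AppleScript’s `round` in tie cases):

```python
def add_currency(x, y):
    """Round the cent-scaled sum back to two decimal places."""
    return round((x + y) * 100) / 100

def subtract_currency(x, y):
    """Subtract x from y at cent precision."""
    return round((y - x) * 100) / 100

print(subtract_currency(100000, 100000.66))  # 0.66
print(add_currency(0.1, 0.2))                # 0.3
```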

I was a little surprised that there wasn’t more documentation and sample subroutines on this – at least I didn’t find much after one or two lazy Googles.

Are those handlers a sensible way to deal with this? If so I’ll now go through my app and replace all the basic maths operations with calls to those handlers. Will I run into any other weird issues using those handlers? Are there better ones?

Thanks for any wisdom or resources you can offer.

AppleScript reals are doubles — look up double-precision floating-point format on the Web. As a base-2 format, it can’t store all base-10 values exactly.
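For anyone curious what “can’t store exactly” looks like, Python’s `decimal` module can print the exact base-2 value a double actually holds (Python floats are the same format):

```python
from decimal import Decimal

# Decimal(float) converts losslessly, exposing the double's true value.
print(Decimal(0.66))                      # slightly above 0.66
print(Decimal(0.66) == Decimal("0.66"))   # False
```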

If you’re doing this a lot, round is a bit slow. You could use div instead, like this:

(x * 100 div 1 + y * 100 div 1) / 100

Awesome. Thanks as always, Shane! I am doing that a lot so will use div 1 instead.

This seems to be slightly more reliable:

((x * 100 + y * 100) as integer) / 100

On the rare occasions when it gives different results, it’s giving the correct one.

set x to (random number 100000000) / 100
set y to (random number 100000000) / 100

{x, y, linefeed, ¬
	x + y, ((x * 100 + y * 100) as integer) / 100, (x * 100 div 1 + y * 100 div 1) / 100, linefeed, ¬
	x - y, ((x * 100 - y * 100) as integer) / 100, (x * 100 div 1 - y * 100 div 1) / 100}
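The difference shows up whenever `x * 100` lands just below the intended integer: truncation (`div 1`) loses a whole cent, while rounding (`as integer`) recovers it. A Python sketch, with `math.trunc` standing in for `div 1` (Python’s `//` floors rather than truncates, so `trunc` is the right stand-in):

```python
import math

def cents_div(x, y):
    """Shane's approach: truncate each cent-scaled term, like div 1."""
    return (math.trunc(x * 100) + math.trunc(y * 100)) / 100

def cents_round(x, y):
    """Nigel's approach: round the cent-scaled sum, like as integer."""
    return round(x * 100 + y * 100) / 100

# 0.29 * 100 evaluates to 28.999999999999996 in doubles.
print(cents_div(0.29, 0.0))    # 0.28 - a whole cent short
print(cents_round(0.29, 0.0))  # 0.29 - correct
```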

Cool, thanks, Nigel. Admittedly my maths isn’t good enough to immediately be able to detect by eye which is more accurate when using random numbers up to one hundred million in scientific notation, but I’ll take your word for it. :blush: Obviously as integer does its own rounding of any decimal places, whereas div 1 effectively simply drops any decimal places. So for that reason I guess as integer is more likely to produce more accurate results in a wider range of cases.

In my app specifically, though, I’m not sure there would ever be a difference, as I pretty much only ever add and subtract currency values with at most two decimal places. Once those values are multiplied by 100, there should be no decimal places, and thus div 1 and as integer should produce the same results. But of course how I think things ‘should’ be doesn’t factor in the whole doubles floating-point imprecision that’s the reason we’re here in the first place. :blush: So I’m probably better off with your as integer method, and hopefully I’ll never discover an example of any imprecision.

Any idea of the comparative differences in speed between the two methods? I added a log statement to the last line in your example code and SD tells me that it takes all of 0.01 seconds – not sure if that’s the time to log it or the time to do all the maths, but either way we’re obviously not talking about huge amounts of time. Still, I’m curious.

For practical purposes, their speeds are identical, although in fact my offering is minutely faster by virtue of having one fewer operator. Typical timings on my machine for 10,000 iterations (!) of each calculation are:

My offering: 0.007 seconds
Shane’s: 0.009 seconds
round: 1.305 seconds in Script Editor, 10.321 seconds in Script Debugger.

For 10,000 iterations of calls to labelled-parameter handlers containing the calculations:

My offering: 0.013 seconds
Shane’s: 0.015 seconds
round: 1.317 seconds in Script Editor, 10.393 seconds in Script Debugger.

So the time difference between the two non-round calculations is less than the overhead of calling handlers containing them! I’ve no idea why round is so much slower in SD than in SE.


I think round is slower in SD because it’s a Scripting Addition call, and there’s more overhead required with Apple events in a debugger. But when saved, they’ll be identical. As integer is a simple coercion, but doesn’t give you any of the options the round command does. The only time I’ve had to use those options was when I had to match the rounding method used by our data provider.

What’s your system for comparing timings? (I want to test my sort algorithm against others).

Hi Ed. Sorry for the slow reply.

as integer also errors if the result would be outside the AS integer range: -(2 ^ 31) to (2 ^ 31) - 1. But round only gives the correct answers within this range anyway, so using as integer instead isn’t a limitation in this respect. Fast implementations of other rounding styles are fairly simple to do too:

-- To nearest (ISO).
n as integer

-- To nearest.
n div 0.5 - n div 1

-- Up. (div truncates towards zero, so negative fractions need no bump.)
set nDiv1 to n div 1
if (nDiv1 = n) or (n < 0) then return nDiv1
return nDiv1 + 1

-- Down. (Positive fractions are already floored by the truncation.)
set nDiv1 to n div 1
if (nDiv1 = n) or (n > 0) then return nDiv1
return nDiv1 - 1

-- Towards zero.
n div 1

-- Away from zero. (Test n's sign, not nDiv1's: -0.3 div 1 is 0.)
set nDiv1 to n div 1
if (nDiv1 = n) then return nDiv1
if (n < 0) then return nDiv1 - 1
return nDiv1 + 1
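Those styles transcribe readily to Python for sanity-checking, again with `math.trunc` playing the part of the truncating `div 1`, and with the sign handled explicitly for the up/down/away cases (test versions only, not a claim about how `as integer` handles every tie):

```python
import math

def nearest_iso(n):
    """To nearest, ties to even."""
    return round(n)

def nearest(n):
    """To nearest, ties away from zero: n div 0.5 - n div 1."""
    return math.trunc(n / 0.5) - math.trunc(n)

def up(n):
    """Ceiling: truncation already suffices for negatives."""
    t = math.trunc(n)
    return t if (t == n or n < 0) else t + 1

def down(n):
    """Floor: truncation already suffices for positives."""
    t = math.trunc(n)
    return t if (t == n or n > 0) else t - 1

def toward_zero(n):
    """n div 1."""
    return math.trunc(n)

def away_from_zero(n):
    """Bump the truncation outwards, based on n's own sign."""
    t = math.trunc(n)
    if t == n:
        return t
    return t - 1 if n < 0 else t + 1
```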

For sorts, I’d generate a longish list, make as many exact copies of it as there are sorts, and see how long each sort takes with its copy. It’s instructive to try different permutations — e.g. lists with items which take relatively little time to compare (such as numbers), lists with items which take considerably longer to compare (long strings, records, lists), lists which start out chaotically ordered, lists which start out fairly ordered already, long lists, very long lists, extremely long lists, etc. All sorts are fast with short lists, but short lists can be handy for checking that the sorts are working correctly!
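A minimal sketch of that recipe in Python (`sorted` stands in for whatever sort is under test; timings are illustrative only):

```python
import random
import time

def time_sort(sort, data):
    """Time one sort on its own copy of data; return (seconds, result)."""
    copy = list(data)               # every contender gets an identical list
    start = time.perf_counter()
    result = sort(copy)
    return time.perf_counter() - start, result

# Try the different shapes: chaotic, nearly sorted, costly comparisons...
data = [random.random() for _ in range(20000)]
elapsed, result = time_sort(sorted, data)
print(f"sorted(): {elapsed:.4f}s")

# ...and use a short list to check the sort actually works.
assert time_sort(sorted, [3, 1, 2])[1] == [1, 2, 3]
```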