set realValue to 4.3 - 3.2 --> 1.1
realValue = 1.1 --> false

realValue < 1.1 --> true

1.1 = 1.1 --> true

class of realValue --> real
class of 1.1 --> real
class of realValue is equal to class of 1.1 --> true

I don’t understand these results. It seems that the underlying subtraction yields a result that is stored differently in memory than the hard-coded literal, so the two don’t compare as equal. Coercing both values to real has no effect.

Though this will work:

set realValue to ((4.3 - 3.2) as string) as real
realValue = 1.1 --> true

It’s really just a limitation of how computers represent floating-point numbers – the representation is often not exact. For example, the value 0.1 can’t be represented exactly in binary form.
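Assuming AppleScript reals are IEEE 754 double-precision values (which they are on modern systems), the classic demonstration of this is:

```applescript
-- Neither 0.1 nor 0.2 has an exact binary representation,
-- so their sum comes out very slightly above 0.3.
0.1 + 0.2 = 0.3 --> false
0.1 + 0.2 > 0.3 --> true
```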

See the Wikipedia article on floating-point arithmetic, in particular its “Accuracy problems” section, for further details.

It has been known since floating-point numbers (reals) were invented that you cannot reliably test them for equality, because their internal representation is inexact.

Floating-point comparisons should never use = or ≠.
Use <, ≤, >, or ≥ instead.

Use integers for equality comparisons.
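One way to do that in AppleScript is to scale each value to a fixed number of decimal places and compare the results as integers. This sketch uses the StandardAdditions round command; the scale factor 100 (two decimal places) is an arbitrary choice:

```applescript
set x to 4.3 - 3.2
set y to 1.1
-- Scale both values to hundredths and round to whole numbers.
set xInt to round (x * 100)
set yInt to round (y * 100)
xInt = yInt --> true
```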

Or, compare with a tolerance, for example (note that vanilla AppleScript has no built-in abs, so the absolute difference is computed by hand, and isEq is initialized so it is defined on either branch):

set x to 1.1
set y to 4.3 - 3.2
set d to x - y
if d < 0 then set d to -d -- absolute difference
set isEq to (d ≤ 1.0E-5)
isEq --> true
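That tolerance test can also be wrapped in a reusable handler – a sketch, where the name realsEqual and the choice of tolerance are my own:

```applescript
on realsEqual(a, b, tol)
	set d to a - b
	if d < 0 then set d to -d -- absolute difference
	return d ≤ tol
end realsEqual

realsEqual(4.3 - 3.2, 1.1, 1.0E-5) --> true
```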