• nour@lemmygrad.ml · 2 years ago

      That’s just floating-point arithmetic for you
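      For example, the classic case (a sketch in JavaScript; I’m assuming the original post showed the usual 0.1 + 0.2 surprise):

```javascript
// 0.1 and 0.2 have no exact binary representation, so their
// sum picks up a tiny rounding error.
console.log(0.1 + 0.2);          // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3);  // false
```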

      (The parseInt thing however… I have absolutely no idea what’s up with that.)

        • nour@lemmygrad.ml · 2 years ago

          IIRC it’s called absorption. To understand, here’s a simplified explanation of the way floating-point numbers are stored (according to IEEE 754):

          • Suppose you’re storing a single-precision floating-point number, which means you’re using 32 bits to store it.
          • It uses the following format: the first bit is the sign, the next 8 bits are the exponent, and the remaining 23 bits are the mantissa. (Wikipedia has a nice diagram of this layout.)
          • The number that such a format represents is 2 ^ (exponent - 127) * 1.mantissa. The 127 is called a bias, and it’s there to make it possible to represent negative exponents. 1.mantissa means that, for example, if your mantissa bits are 1000000..., which is binary for 0.5, you’re multiplying by 1.5. Most numbers (except for very small ones) are stored in a normalized way, meaning 1 <= 1.mantissa < 2; that’s why we calculate like this and don’t store the leading 1. in the mantissa bits.
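          To make the layout concrete, here’s a small JavaScript sketch that reinterprets a number’s single-precision bits through typed arrays (the helper name float32Bits is my own, just for illustration):

```javascript
// Reinterpret a number's IEEE 754 single-precision bits as a
// 32-bit unsigned integer, then slice out the three fields.
function float32Bits(x) {
  const f = new Float32Array(1);
  const u = new Uint32Array(f.buffer); // same bytes, viewed as an integer
  f[0] = x;
  const bits = u[0];
  return {
    sign:     (bits >>> 31) & 0x1,
    exponent: (bits >>> 23) & 0xff, // biased by 127
    mantissa: bits & 0x7fffff,      // 23 fraction bits, implicit leading 1.
  };
}

const b = float32Bits(1.5); // 1.5 = 2^(127 - 127) * 1.5
console.log(b.sign);                                   // 0
console.log(b.exponent);                               // 127
console.log(b.mantissa.toString(2).padStart(23, "0")); // 10000000000000000000000
```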

          Now, back to 2**53 (as a decimal, that’s 9007199254740992.0). In single precision it’s stored as:

          0 10110100 00000000000000000000000

          Suppose you try to store the next-bigger representable number by incrementing the mantissa:

          0 10110100 00000000000000000000001

          But instead of 9007199254740993.0, this represents 9007200328482816.0. There is not enough precision in the mantissa to represent an increment by one.
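          You can watch this absorption happen from JavaScript with Math.fround, which rounds a double to the nearest single-precision value (a sketch):

```javascript
// At magnitude 2**53, single-precision values are spaced
// 2**(53 - 23) = 2**30 apart, because there are 23 mantissa bits.
const x = Math.fround(2 ** 53);
const next = Math.fround(2 ** 53 + 2 ** 30); // the next representable single
console.log(next - x);                       // 1073741824, i.e. 2**30
// An increment far below half that spacing just rounds back down:
console.log(Math.fround(2 ** 53 + 2 ** 28) === x); // true
```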

          I don’t remember exactly how this works, but any number you store as a float gets rounded to the nearest representable value. 2**53 + 1 is represented by the same bit string as 2**53, since it’s impossible to represent 2**53 + 1 exactly at the given precision (it’s not possible in double precision either; there, the next representable number is 2**53 + 2). So when the comparison happens, the two numbers are no longer distinguishable.
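          And since JavaScript numbers are doubles, that comparison can be checked directly (a quick sketch):

```javascript
// JavaScript numbers are IEEE 754 doubles: 52 mantissa bits,
// so integers are exact only up to 2**53.
console.log(2 ** 53 === 2 ** 53 + 1); // true: the +1 is absorbed
console.log(2 ** 53 + 2);             // 9007199254740994, the next double
console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991, i.e. 2**53 - 1
```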

          Here’s a good website to play around with this sort of thing without doing the calculations by hand: https://float.exposed. Wikipedia has some general information about how IEEE 754 floats work, too, if you want to read more.

          Disclaimer: this is based on my knowledge of C programming. I’ve never programmed anything meaningful in JavaScript, so I don’t know whether JavaScript does it exactly the same way or differs.