• freagle@lemmygrad.ml · 9 days ago

    Lambda CDM is another model that has already been proposed and has much broader support than MOND.

    “All of these factors accelerate the rate of research”

    Yes, but not the rate of generating new models, because a model has to match ALL observations. The more observations we have, the longer it takes to reconcile all the implications of a new model or of changes to an existing one.

    • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP · 9 days ago

      Having more processing power, along with tools that can identify patterns in the data, absolutely does help with producing new models. Tools like theorem provers can even be used to generate candidate models and test them against the data. Much of the process of developing models could be automated going forward, and some of that is already starting to happen today.
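
      A minimal sketch of that kind of automated model search, with entirely made-up data and candidate formulas (an illustration of the idea, not any real discovery pipeline):

      ```python
      # Toy model search: enumerate candidate formulas, fit each to the
      # data, and rank them by residual error. The "observations" and the
      # model forms below are hypothetical.
      import numpy as np

      rng = np.random.default_rng(0)
      r = np.linspace(1.0, 20.0, 40)                # radius, arbitrary units
      v_obs = 200 * np.sqrt(r / (r + 3.0)) + rng.normal(0, 5, r.size)

      candidates = {
          "keplerian":  lambda r: 1 / np.sqrt(r),
          "flat":       lambda r: np.ones_like(r),
          "saturating": lambda r: np.sqrt(r / (r + 3.0)),
      }

      for name, shape in candidates.items():
          m = shape(r)
          a = np.sum(v_obs * m) / np.sum(m**2)      # least-squares amplitude
          resid = np.mean((v_obs - a * m) ** 2)
          print(f"{name:>10}: amplitude={a:6.1f}, mean sq. residual={resid:8.1f}")
      ```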

      • freagle@lemmygrad.ml · 9 days ago

        It’ll certainly be interesting to see whether that can make headway against the exponential growth of observations or merely keep pace.

        • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP · 9 days ago

          I don’t expect the number of observations that require unique explanations to grow exponentially. The whole idea behind building models is that a single general formula explains many different phenomena, all emergent properties of a relatively small set of underlying rules. What the wealth of observations does is give us more confidence that the model works across many different contexts.
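
          A classic worked example of that compression, one law covering two very different-looking observations (the constants are standard values):

          ```python
          import math

          # Newtonian gravity: a single formula accounts for both a falling
          # object at the surface and the Moon's orbit.
          G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
          M = 5.972e24    # mass of the Earth, kg
          R = 6.371e6     # radius of the Earth, m

          # Observation 1: acceleration of a dropped object at the surface.
          print(f"surface gravity: {G * M / R**2:.2f} m/s^2")   # ~9.82

          # Observation 2: the Moon's orbital period (semi-major axis ~3.844e8 m).
          a = 3.844e8
          T = 2 * math.pi * math.sqrt(a**3 / (G * M))
          print(f"lunar orbital period: {T / 86400:.1f} days")  # ~27.4
          ```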

          • freagle@lemmygrad.ml · 9 days ago

            Certainly we can see that the JWST has already provided us with a large number of unique observations, as has the LHC, as has LIGO, as has each new probe sent to a new extraterrestrial object, as has GLAST…

            The more we build new technology, the more unique observations we’re going to have.

            Unless of course you’re of the opinion that, 100 years after realizing the Milky Way wasn’t the whole universe, we’ve essentially discovered 99% of what there is to discover.

            • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP · 9 days ago

              What I’m saying is that there’s a good chance all of these many different observations are emergent properties stemming from a handful of fundamental laws. We don’t have to explain each and every one of them in a unique way; instead, we’re trying to build models that account for all of these phenomena. When we get a new observation, we plug it into the model, and either the model needs adjusting or the observation fits with the way the model already works. The more accurate the model, the smaller the chance that a new observation will require restructuring it.
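
              A minimal sketch of that update loop, with a hypothetical prediction, measurement, and threshold (the numbers mean nothing in particular):

              ```python
              # Toy version of the loop: compare a new observation to the
              # model's prediction and flag whether the model may need work.
              def consistent(predicted: float, observed: float, sigma: float,
                             threshold: float = 5.0) -> bool:
                  """True if the observation lies within `threshold` sigma of the model."""
                  return abs(observed - predicted) / sigma < threshold

              # Hypothetical numbers: the model predicts 70.0; a new
              # measurement comes in at 73.2 with uncertainty 1.0.
              if consistent(predicted=70.0, observed=73.2, sigma=1.0):
                  print("observation fits; confidence in the model grows")
              else:
                  print("tension detected; the model may need adjusting")
              ```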

              • freagle@lemmygrad.ml · 9 days ago

                Yes, but our models are not getting simpler or relying on fewer fundamental laws; they are getting more complex. The Timescape model is a good example of that. Even MOND still requires additional complexity to close the gaps in observed energy/gravity. The more complex our models become, the more surface area there is for novel observations to contradict them. And the more progress we make, the more novel observations we become capable of (assuming there’s more to discover).

                In essence, the only way to even hint at whether we’re getting more accurate is the rate of discovery of observations that contradict our models, and even that is a lossy heuristic resting on some serious assumptions about unknown unknowns.
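
                A sketch of that heuristic's bookkeeping, with invented counts purely for illustration:

                ```python
                # Track the fraction of new observations each year that land in
                # tension with the current model. A falling rate hints the model
                # is converging; a flat or rising rate suggests otherwise.
                # All counts below are invented.
                history = {
                    # year: (new observations, observations contradicting the model)
                    2021: (120, 9),
                    2022: (180, 11),
                    2023: (260, 12),
                }

                for year, (total, anomalies) in sorted(history.items()):
                    print(f"{year}: anomaly rate = {anomalies / total:.1%}")
                ```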

                • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP · 9 days ago

                  The fact that the models are getting more complex may itself be a sign that we’re framing things poorly. Take the geocentric model of the solar system: it wasn’t inherently wrong. The problem was that using the Earth as the focal point made it very difficult to express the orbits of the other planets, forcing people to bolt on constructs like epicycles to account for retrograde motion. Then we switched to the heliocentric model and all those problems went away.
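
                  That frame-change argument can be made concrete: put Earth and Mars on plain circular heliocentric orbits, and retrograde motion falls out of the geocentric view for free (orbital radii and periods below are rounded):

                  ```python
                  import numpy as np

                  # Earth and Mars as circular heliocentric orbits, positions as
                  # complex numbers in the orbital plane (radii in AU, periods in years).
                  t = np.linspace(0.0, 2.5, 500)
                  earth = 1.00 * np.exp(2j * np.pi * t / 1.00)
                  mars = 1.52 * np.exp(2j * np.pi * t / 1.88)

                  # Switch to the geocentric frame: Mars's apparent direction from Earth.
                  longitude = np.unwrap(np.angle(mars - earth))

                  # Retrograde motion is just the intervals where that angle runs backwards.
                  retro = np.diff(longitude) < 0
                  print(f"Mars appears retrograde {retro.mean():.0%} of the time")
                  ```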

                  Likewise, it’s entirely possible that we’ll come up with a new model that makes it far easier to express all the different phenomena we’re observing. Obviously there’s no guarantee of that, but it’s one possibility to consider.

                  You’re right that the rate of discovery of things that don’t fit the model is a pretty good indicator of how well it works overall. I do think it’s reasonable to assume that the same laws we observe at small scales hold everywhere, though, and if that’s the case, anything we find at large scales has to be an expression of those same laws.