Certainly we can see that the JWST has already provided us with a large number of unique observations, as has the LHC, as has LIGO, as has each new probe sent to a new extraterrestrial object, as has GLAST…
The more we build new technology, the more unique observations we’re going to have.
Unless, of course, you’re of the opinion that, 100 years after realizing the Milky Way wasn’t the whole universe, we’ve essentially discovered 99% of what there is to discover.
What I’m saying is that there is a good chance that all of these many different observations are emergent properties stemming from a handful of fundamental laws. We don’t have to explain each and every one of them in a unique way; instead, we’re trying to build models that account for all of these phenomena. When we have a new observation, we plug it into the model, and either the model needs adjusting or the observation fits the way the model already works. The more accurate the model is, the less chance there is that a new observation will require restructuring it.
Yes, but our models are not getting simpler and relying on fewer fundamental laws; they are getting more complex. The Timescape model is a good example of that. Even MOND still requires additional complexity to make up for the gaps in observed energy/gravity. The more complex our models become, the more surface area there is for novel observations to contradict them. And the more progress we make, the more novel observations we become capable of (assuming there’s more to discover).
In essence, the only way to even hint at whether we’re getting more accurate is the rate at which we discover observations that contradict our models, and even that is a lossy heuristic that relies on some serious assumptions about unknown unknowns.
The fact that the models are getting more complex may itself be a sign that we’re framing things poorly. Take the geocentric model of the solar system, for example. It wasn’t inherently wrong; the problem was that using the Earth as the focal point made it very difficult to express the orbits of the other planets, so people had to bolt on constructs like epicycles to account for retrograde motion. Once we switched to the heliocentric model, all of these problems went away.
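To make that reframing point concrete, here’s a minimal sketch in Python, assuming simple circular, coplanar orbits and rounded values for the orbital radii and periods of Earth and Mars (the numbers and names are purely illustrative). Viewed heliocentrically, Mars’s apparent retrograde motion falls straight out of subtracting Earth’s position from Mars’s; nothing like an epicycle has to be added:

```python
import math

# Rounded orbital parameters: radius in AU, period in years.
AU_EARTH, YR_EARTH = 1.00, 1.00
AU_MARS, YR_MARS = 1.52, 1.88

def heliocentric_xy(radius, period, t):
    """Position of a planet on a circular heliocentric orbit at time t (years)."""
    angle = 2 * math.pi * t / period
    return radius * math.cos(angle), radius * math.sin(angle)

in_retrograde = False
prev_lon = None
for step in range(220):                  # ~2.2 years, sampled every ~3.7 days
    t = step * 0.01
    ex, ey = heliocentric_xy(AU_EARTH, YR_EARTH, t)
    mx, my = heliocentric_xy(AU_MARS, YR_MARS, t)
    # Geocentric longitude: the direction of Mars as seen from Earth.
    lon = math.degrees(math.atan2(my - ey, mx - ex)) % 360
    if prev_lon is not None:
        # Wrapped change in longitude; negative means Mars appears to move backwards.
        delta = (lon - prev_lon + 180) % 360 - 180
        if (delta < 0) != in_retrograde:
            in_retrograde = delta < 0
            state = "enters" if in_retrograde else "leaves"
            print(f"t = {t:.2f} yr: Mars {state} apparent retrograde motion")
    prev_lon = lon
```

The observations are the same in either frame; the heliocentric one just makes them vastly easier to express, which is the kind of simplification a better framing could buy us again.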
Likewise, it’s entirely possible that we’ll come up with a new model that makes it far easier to express all of the different phenomena we’re observing. Obviously there’s no guarantee of that, but it’s one possibility to consider.
You’re right that the rate at which we discover things that don’t fit the model is a pretty good indicator of how well the model works overall. I do think it’s a reasonable assumption that the universe works the same way at large scales as it does at small ones, though, and if that’s the case, anything we find at large scales has to be an expression of the same laws that apply at small scales.