MOND is not widely accepted for a couple of reasons, but right here on Wikipedia we have this:
The most serious problem facing [MOND] is that galaxy clusters show a residual mass discrepancy even when analyzed using MOND.[6] This detracts from the adequacy of MOND as a solution to the missing mass problem, although the amount of extra mass required is a fifth that of a Newtonian analysis…
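For context, the textbook form of MOND modifies the Newtonian acceleration $a_N$ below a characteristic acceleration scale $a_0$,

$$
\mu\!\left(\frac{a}{a_0}\right) a = a_N, \qquad \mu(x) \to 1 \ \text{for}\ x \gg 1, \qquad \mu(x) \to x \ \text{for}\ x \ll 1,
$$

so in the low-acceleration limit $a \approx \sqrt{a_N a_0}$; for a point mass $M$ this gives $v^4 \approx G M a_0$, i.e. asymptotically flat rotation curves, with $a_0 \approx 1.2 \times 10^{-10}\,\mathrm{m/s^2}$. The point of the quote is that even with this boost, clusters still need extra unseen mass.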
Sure, but the key part is that the whole notion of dark energy only exists because the model we came up with is at odds with what we see happening.
That’s accurate but not precise. The model we came up with matched the observations we had at the time, and then new observations came along that challenged it. Dark matter and dark energy are the gaps that would have to be filled with exotic matter and energy in order for the model to remain consistent with observations. Many people are working on changing the model to eliminate these gaps, but developing such a model is the work of multiple generations.
It’s also worth noting that cosmology ultimately relates to how physics works at the smallest scales. It’s all a continuum, and everything builds on itself. Explaining what we see at the largest scales has to directly trace back to the smallest ones.
Which is why developing new models takes generations.
I’m aware MOND also has problems, and it will take time to figure out how to reconcile them. It’s even possible that an entirely different model will be proposed. It’s hard to predict how quickly these things will develop, however, because the rate of progress is not linear. We accumulate data at ever higher rates and fidelity, the tools we use to analyze that data are constantly improving, and communication is becoming easier. All of these factors accelerate the rate of research. It’s also worth noting that things like machine learning can play a big role here as well. These systems are very good at finding patterns in data that would be impossible for humans to see. So, it may take generations to develop a new model, or it may take decades. Personally, this isn’t something I’d bet money on.
Lambda CDM is another model that has already been proposed and has much broader support than MOND.
All of these factors accelerate the rate of research
Yes, but not the rate of generating new models, because models have to match ALL observations. The more observations we have, the longer it takes to reconcile all the implications of new models or of changes to existing models.
Having more processing power and tools that can identify patterns in the data absolutely does help with producing new models. In fact, tools like automated theorem provers can be used to generate candidate models and test them against the data. Much of the process of developing models could be automated going forward, and some of that is already starting to happen today.
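As a toy sketch of what an automated "test the candidates against the data" loop could look like (synthetic data, deliberately oversimplified one-parameter models, numpy/scipy assumed; not a real rotation-curve analysis):

```python
import numpy as np
from scipy.optimize import curve_fit

G = 4.30e-6   # Newton's constant in kpc * (km/s)^2 / Msun
A0 = 3.7e3    # MOND scale a0 ~ 1.2e-10 m/s^2, converted to (km/s)^2 / kpc

def v_newtonian(r, m):
    """Circular speed (km/s) around a point mass m (Msun) at radius r (kpc)."""
    return np.sqrt(G * m / r)

def v_deep_mond(r, m):
    """Deep-MOND limit: v^4 = G*m*a0, i.e. an asymptotically flat curve."""
    return (G * m * A0) ** 0.25 * np.ones_like(r)

# Synthetic "observations": a flat 150 km/s rotation curve with noise.
rng = np.random.default_rng(0)
r_obs = np.linspace(2.0, 30.0, 25)                 # kpc
v_obs = 150.0 + rng.normal(0.0, 5.0, r_obs.size)   # km/s

# Fit each candidate model and rank the candidates by residual sum of squares.
for name, model in [("Newtonian point mass", v_newtonian),
                    ("deep-MOND point mass", v_deep_mond)]:
    (m_fit,), _ = curve_fit(model, r_obs, v_obs, p0=[1e11],
                            bounds=(0, np.inf))    # keep the mass positive
    rss = float(np.sum((v_obs - model(r_obs, m_fit)) ** 2))
    print(f"{name:22s}  best-fit mass {m_fit:.2e} Msun  residual {rss:.1f}")
```

In a real pipeline the candidate models themselves would also be generated automatically (e.g. via symbolic regression or a theorem-proving backend); this sketch only shows the scoring loop.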
It’ll certainly be interesting to see whether that can make headway against the exponential growth of observations or whether it’s merely keeping pace.
I don’t expect the number of observations that require unique explanations to grow exponentially. The whole idea behind building models is that a general formula explains a lot of different phenomena as emergent properties of a relatively small set of underlying rules. What the wealth of observations does is give us more confidence that the model is working in a lot of different contexts.
Certainly we can see that the JWST has already provided us with a large number of unique observations, as has the LHC, as has LIGO, as has each new probe sent to a new extraterrestrial object, as has GLAST…
The more new technology we build, the more unique observations we’re going to have.
Unless, of course, you’re of the opinion that, 100 years after realizing the Milky Way wasn’t the whole universe, we’ve essentially discovered 99% of what there is to discover.
What I’m saying is that there is a good chance that all of these many different observations are emergent properties stemming from a handful of fundamental laws. We don’t have to explain each and every one of them in a unique way; instead, we’re trying to build models that account for all of these phenomena. When we have a new observation, we plug it into the model, and either the model needs adjusting or the observation fits with the way the model already works. The more accurate the model is, the less chance there is that a new observation will require restructuring it.
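To make that "plug it into the model" step concrete, here is a minimal sketch with made-up numbers (the model, measurement, and 3-sigma threshold are all hypothetical):

```python
# Toy check: does a new measurement agree with what the current model predicts,
# within the measurement uncertainty? If not, the model needs adjusting.
def observation_fits(model, x, measured, sigma, n_sigma=3.0):
    """Return True if the measurement is within n_sigma of the model's prediction."""
    return abs(model(x) - measured) <= n_sigma * sigma

def flat_curve(r_kpc):
    """Hypothetical model: a flat 150 km/s rotation curve at any radius."""
    return 150.0

print(observation_fits(flat_curve, 25.0, measured=147.0, sigma=5.0))  # True  -> consistent
print(observation_fits(flat_curve, 25.0, measured=95.0, sigma=5.0))   # False -> model needs work
```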
Yes, but our models are not getting simpler or relying on fewer fundamental laws; they are getting more complex. The Timescape model is a good example of that. Even MOND still requires additional complexity to make up the remaining gaps between observed and predicted energy/gravity. The more complex our models become, the more surface area there is for novel observations to contradict them. And the more progress we make, the more novel observations we become capable of (assuming there’s more to discover).
In essence, the only way to even hint at whether we’re getting more accurate is the rate at which we discover observations that contradict our models, and even that is a lossy heuristic that relies on some serious assumptions about unknown unknowns.