Not FOSS, but commercial software is. The apps just get more bloated and want to suck up even more data with each update. Then there are the sites that have hundreds of trackers and third-party cookies from everywhere and need 1 GB to display 🙄. OK, maybe not 1 GB, but you get the gist.
Electron apps created with React can definitely push the boundaries of what ‘acceptable’ memory usage is.
I have a pet theory that webview-based apps are popular only because there is currently no usable multiplatform desktop GUI framework. So developers resort to the one thing that works everywhere: loading a webpage in a web browser.
Even React Native feels like a kludge in the way it converts React components to native UI components.
Yep, totally unheard of for FOSS software to get worse. GNOME 3 and KDE 4, for example, were universally acclaimed.
I think the post author just neglects that software has become mind-blowingly complex compared to the days of yore. If you put together all the features of Netscape + Win 3.11 + WordPerfect + whatever else people were using in the 90s at any given point, you don't get 10% of the complexity of a contemporary productivity app (say, Outlook), let alone a full operating system.
It's clear that the more complex something is, the more things can break. It's like complaining that F-16s are worse than €40 consumer drones because the former require maintenance every few hours of flight while the latter don't.
But if all you need is a drone and all anyone makes is an F-16, that's a shitty mismatch. I don't need an Outlook that does all that shit; I just need to check my email, or at least set up a filter to send everything to the trash.
I don't need Teams to do document management; I just need to chat with my team. I'll resend a document if it's needed for any reason. Companies are adding useless bloat to all of these things and then breaking the core functionality because they've made things hard. This is not progress.
Edit: fixed some iOS auto-incorrect. Apologies for any incoherence before.
It's kinda like having an F-16 that's built to go through a McDrive. It's a feature, yes, but an utterly terrible one. Your job is to do one thing and do it well. I'll take the car to the McDrive.
He's not exactly comparing software to Netscape or Win 3.11, though; he's comparing version N of some software to version N−1 or N−2 and noticing that it's getting worse from release to release. Given the rate of new releases, the complexity shouldn't be increasing that rapidly between releases, so I'm not convinced that's the cause per se. I have to agree with the article's conclusion: testing was more rigorous in the past than it is now, both because there was less surface area to test back then and because time-to-market pressure was lower due to the longer windows between releases.
I assume you never worked in testing. Back in the day, we used to cram testing into a weekend because the developers were late with their coding. There was no test automation, so that weekend we spent all our time on the most basic functionality, barely getting through testing that the app even started and that a few of its most basic functions worked. There was almost never any time for regression testing, and old functions broke all the time. It wasn't uncommon for us to fix a bug in one version, only to reintroduce the same bug in the next release.
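For anyone who hasn't seen it, the test automation the commenter says didn't exist is exactly what prevents that "fixed in one release, reintroduced in the next" cycle today. A minimal sketch (the bug, function, and tests are hypothetical examples, not from any real codebase):

```python
def parse_port(value: str) -> int:
    """Parse a TCP port number from user input (hypothetical example)."""
    port = int(value.strip())  # strip(): a past release crashed on whitespace
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port


# Regression tests: once a bug is fixed, a test pins the fix forever.
# If a later release removes the strip() call, this fails in CI instead
# of shipping to users -- the failure mode described above.
def test_whitespace_regression():
    assert parse_port(" 8080 \n") == 8080


def test_rejects_out_of_range():
    try:
        parse_port("70000")
    except ValueError:
        pass  # expected
    else:
        raise AssertionError("expected ValueError for out-of-range port")


if __name__ == "__main__":
    test_whitespace_regression()
    test_rejects_out_of_range()
    print("all regression tests passed")
```

The point isn't the specific checks; it's that the suite runs on every release automatically, which is precisely what a single crammed testing weekend could never do.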
No, but I do work as a developer, and we work pretty closely with the testers. Not back in the day, though; I'm not that old. But I've been around enough to know that even in the current era of software development, the quality and duration of testing varies quite a bit from company to company. Some companies really don't care and the testing is token at best; others, like where I currently work, are quite obsessed with quality and dedicate a lot of time and people to testing a release before it goes out. Of course there are still bugs from time to time, but a lot are found and fixed during testing.
Previous companies I've worked at with not-so-great testing were more consumer-facing, whereas the current one is B2B with a lot of enterprise customers, so maybe companies just put as much or as little effort into testing as they think their target audience is willing to put up with.
It's less about what the audience will put up with and more about fulfilling requirements. I know several software houses that tailor and vary their testing directly as a result of classification: the higher the rigor, the higher the cost. And there are customers, like governments, that spare no expense on certifications.
Free software gets a total rewrite by newbies every few years, losing half the features and productivity gains of the previous version. See GNOME, KDE, GIMP, etc.
OpenBSD & FreeBSD are maybe getting better over time, but even that’s arguable.
And of course, remember…
Early access / beta programs
Subscriptions for every fucking thing