- cross-posted to:
- technology@lemmy.world
- c_lang@programming.dev
Biden administration calls for developers to embrace memory-safe programming languages and move away from those that cause buffer overflows and other memory access vulnerabilities.
I think that’s the point. You can’t trust the average developer to do things safely. And remember, half of all programmers are even worse than average.
Maybe even more!
Wouldn’t that be the median programmer instead of average?
The word “average” can mean many things, for example, mean, median, mode, or even things like “within 1 standard deviation from the mean”.
I was using it strictly as the mean, which divides the population exactly in half.
The median is the one that splits a data set in half by picking the middle value.
You’re right of course, that was a stupid mistake on my part.
Which half am I in?
If you have to ask
You know
Yes. And 75% of car drivers believe they are above average as well…
99% of devs believe they are in the top 1%
Half of all programmers constitute the so-called “average” group.
Yea! I’m one of them!
Bell curves don’t work to make this point. A bell curve is symmetrical, so half of developers will always be below its average. But yes, it is true that for other types of distributions, more or less than half of the developers could be below average. What the person above you was looking for, in the general case, would be the median.
What? How would you define “average”? His statement is technically correct.
Average is the mean (i.e. the sum of all “skill” divided by the number of programmers).
What they were thinking of is the median (50th percentile = 0.5 quantile), which splits the group into two equally sized halves.
For a bell curve, they are the same value. But think of the example of average incomes: 9 people have an income of $10, and one has an income of $910. The average income is $100 ((10*9+910)/10). The median, however, is $10.
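To make that concrete, here is a quick Python sketch of the same income example (the figures are just the made-up ones from this comment):

```python
from statistics import mean, median

# The hypothetical incomes from above: nine people earn $10, one earns $910.
incomes = [10] * 9 + [910]

print(mean(incomes))    # 100  -> the arithmetic mean ("average")
print(median(incomes))  # 10.0 -> the middle value, which splits the group in half
```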
The distribution of skill in humans, for various tasks and abilities, can often be approximated by a normal distribution. In that case, as you know, the mean is equal to the median.
Yeah, fair enough
Actually, in order to test that assumption, you’d need to quantitatively measure skill, which is already problematic in itself, but you’d also need to run a statistical test to confirm that the distribution really is normal/Gaussian. People always forget the latter and often produce incorrect statistical inferences.
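As a rough illustration of that second step, here is a minimal Python sketch using SciPy’s Shapiro-Wilk normality test; the “skill scores” are purely synthetic placeholder data, since actually quantifying skill is the unsolved part:

```python
import numpy as np
from scipy import stats

# Placeholder "skill scores" -- purely synthetic; measuring real skill is the hard part.
rng = np.random.default_rng(0)
scores = rng.normal(loc=50, scale=10, size=200)

# Shapiro-Wilk tests the null hypothesis that the sample was drawn from a normal distribution.
statistic, p_value = stats.shapiro(scores)

if p_value < 0.05:
    print("Normality rejected: the mean and median may differ noticeably.")
else:
    print("No evidence against normality: the mean and median should be close.")
```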
The mean is in the center of the bell curve, so I’m not sure what your point is.
Literally.
Or rather a Dunning-Kruger issue: seniors who have spent significant time architecting and debugging complex applications tend to be big proponents of things like Rust.