

Thanks!
Writing code is itself a process of scientific exploration; you think about what will happen, and then you test it, from different angles, to confirm or falsify your assumptions.
What you confuse here is doing something that can benefit from applying logical thinking with doing science. For example, arithmetic is part of math, and math is a science. But summing numbers is not necessarily doing science. And if you roll, say, octal dice to see whether the result happens to match an addition task, that is certainly not doing science; and no, the dice still can't think logically, and they certainly don't do math, even if the result sometimes happens to be correct.
For the dynamic vs static typing debate, see the article by Dan Luu:
https://danluu.com/empirical-pl/
But this is not the central point of the above blog post. The central point is that, because LLMs by their very nature produce statistically plausible output, self-experimenting with them subjects one to very strong psychological biases via the Barnum effect. Therefore it is, first, not even possible to assess their usefulness for programming by self-experimentation(!), and second, it is even harmful, because these effects lead to self-reinforcing and harmful beliefs.
And the quibbling about what “thinking” means just shows that the pro-AI arguments have degraded into a debate about belief. The argument has become “but it seems to be thinking to me”, even though it is neither technically possible nor observed in practice that LLMs apply logical rules. They cannot derive logical facts, cannot explain their output by reasoning, are not aware of what they ‘know’ and don't ‘know’, and cannot optimize decisions against multiple complex and sometimes contradictory objectives (which is absolutely critical to any sane software architecture).
What would be needed here are objective, controlled experiments testing whether developers equipped with LLMs can produce working and maintainable code any faster than ones not using them.
And the very likely result is that the code which they produce using LLMs is never better than the code they write themselves.
Are you saying that it is not possible to use scientific methods to systematically and objectively compare programming tools and methods?
Of course it is possible, in the same way that one can investigate which methods are most effective in teaching reading, or whether brushing teeth prevents caries.
And the latter has been done, for example, to compare statically vs. dynamically typed languages. Only, the result so far is that there is no conclusive advantage either way.
What caught my attention is that assessments of AI are becoming polarized and somewhat a matter of belief.
Some people firmly believe LLMs are helpful. But tasks like programming are logical tasks, and LLMs absolutely can't think; they only generate statistically plausible patterns.
The author of the article explains that this creates the same psychological hazards as astrology or tarot cards: psychological traps that have been exploited by psychics for centuries, and that even very intelligent people can fall prey to.
Finally, what should cause alarm is that, on top of LLMs not being able to think while people behave as if they do, there is no objective, scientifically sound examination of whether AI models let anyone create working software faster. Given that multi-billion-dollar investments are at stake, and there has been more than enough time to carry out controlled experiments, this should raise loud alarm bells.
Well, as one can see from the Swiss mountain village of Blatten, the Danube flood a year ago, and the fire in Paradise, California, it is always only the poor devils in the Third World who lose house and home, or perhaps even life and limb, to climate change. /s
I would suggest you ask the cyclists, public transit users, and car drivers around you how much time they spend on commuting and getting around. For the non-drivers it will not be significantly more.
And it is no different for the residents of Amsterdam, Paris, and Copenhagen. It is an established result of transportation research that people do not have a distance budget for their trips, but a time budget. And that is why “faster” modes of transport only ever lead to longer trips.
And if the argument then is “But I live in the countryside”: the interview with Knoflacher addresses that as well. It is in large part thanks to the car that rural areas in Germany have become such traffic wastelands.
So what do you do with a file object?
You are right about this. But still, in Rust, a vector of u8 is different from a sequence of Unicode characters. This would not work in Python 3 either, while it would work in Python 2.
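To illustrate, here is a minimal Rust sketch (the byte values are just examples I picked):
use std::str;

fn main() {
    // A Vec<u8> is just bytes; turning it into text requires an explicit,
    // fallible UTF-8 check - roughly what Python 3 makes you do with
    // bytes.decode(), and what Python 2 glossed over.
    let bytes: Vec<u8> = vec![0xE2, 0x82, 0xAC]; // UTF-8 encoding of '€'
    let text: &str = str::from_utf8(&bytes).unwrap(); // explicit validation
    assert_eq!(text, "€");

    let invalid: Vec<u8> = vec![0xFF, 0xFE]; // 0xFF can never occur in UTF-8
    assert!(str::from_utf8(&invalid).is_err()); // conversion fails
}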
Thanks, I fixed it!
I would find it really cool if, analogous to the Reichsgaragenverordnung, which guarantees parking spaces for car drivers, there were regulations ensuring that commuters and tenants have secure bicycle parking, including overnight, and that there are sufficient, sensible parking spots in front of shops and public institutions.
The concept of temporary play streets comes from England.
I am fairly certain that Bogotá in Colombia, for example, was far ahead of that with its Sunday Ciclovía, which has been held since 1974.
https://en.m.wikipedia.org/wiki/Ciclovía
https://es.m.wikipedia.org/wiki/Ciclovía#Ciclovías_recreativas
https://duckduckgo.com/?q=ciclovía+bogotá+carrera+septima&atb=v405-5__&ia=images&iax=images
My experience (from using Linux since 1998) is that the best way to use Linux is to get compatible hardware (that is, unless you want to develop device drivers). And this goes doubly and triply for laptops and graphics cards. Refurbished business ThinkPads are a very good option.
What I find interesting is that move semantics silently add something to C++ that did not exist before: invalid objects.
Before, if you created an object, you could design it so that it kept all its invariants until it was destroyed. I'd even argue that the true core of OOP is that you get data structures with guaranteed invariants - a vector, hash map, or binary heap never ceases to uphold them.
But now, you can construct complex objects and then move their data away with std::move().
What happens with the invariants of these objects?
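Here is a minimal C++ sketch of what I mean (the state of the moved-from string is only “valid but unspecified” per the standard, so the printed size is not guaranteed):
#include <iostream>
#include <string>
#include <utility>
#include <vector>

int main() {
    std::vector<std::string> names;
    std::string s = "some invariant-holding value";
    names.push_back(std::move(s)); // s's contents are moved away
    // s still exists and can be destroyed or assigned to, but any class
    // invariant beyond that is gone: it is in a "valid but unspecified"
    // state. In practice it is usually empty, but nothing guarantees it.
    std::cout << "moved-from size: " << s.size() << "\n";
    return 0;
}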
let mut bytes = vec![0u8; len as usize];
buf.read_exact(&mut bytes)?;
// Sanitize control characters
let sanitized_bytes: Vec<u8> = bytes.into_iter()
.filter(|&b| b >= 32 || b == 9 || b == 10 || b == 13) // Allow space, tab, newline, carriage return
.collect();
This implicitly, and wrongly, changes the interpretation of the input from UTF-8 text to pure ASCII.
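A UTF-8-preserving alternative would decode first and then filter characters instead of bytes. A minimal sketch (the function name and the error handling are mine, not from the quoted code):
use std::io::{Error, ErrorKind, Read};

fn read_sanitized(buf: &mut impl Read, len: u64) -> std::io::Result<String> {
    let mut bytes = vec![0u8; len as usize];
    buf.read_exact(&mut bytes)?;
    // Reject invalid UTF-8 instead of silently reinterpreting it.
    let text = String::from_utf8(bytes)
        .map_err(|e| Error::new(ErrorKind::InvalidData, e))?;
    // Drop control characters at the character level, keeping tab, newline,
    // and carriage return, so multi-byte sequences are never torn apart.
    Ok(text
        .chars()
        .filter(|&c| !c.is_control() || c == '\t' || c == '\n' || c == '\r')
        .collect())
}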
Have you ever noticed that when intelligent engineers talk about designs (or, quite generally, when intelligent people talk about consequential decisions they took), they talk about their goals, about the alternatives they had, about what they knew about the properties of these alternatives and how those measured up against their goals, about which alternative they chose in the end, and about how they addressed the inevitable difficulties they encountered?
For me, this is a very telling sign of intelligence in individuals. And truly good engineering organizations collect and treasure that knowledge: it is path-dependent, and you cannot quickly and fully reproduce it once it is lost. More importantly, some fundamental reasons for your decisions and designs might change, and you might have to revise them. Good decisions also have a quality of stability: the route taken does not change dramatically when an external factor changes a little.
Now compare that to letting a routing app automatically plan a route through a dense, complex suburban train network. The route you get will likely be the fastest one, with the implicit assumption that this is of course what you want; but any small hiccup or delay in the transport network can well make it the slowest option.
“Cognitive Debt is where you forgo the thinking in order just to get the answers, but have no real idea of why the answers are what they are.”
Here are many more pictures in high resolution:
https://www.derbund.ch/zerstoerung-in-blatten-bilder-zeigen-geroelllawine-490800412498
One can see that the rock debris is damming the river, backing the water up to a depth of as much as 30 meters. Eventually, the water will break through.
The article is lacking some hard numbers.
It cites Bitkom on constantly high screen time. But Bitkom is an industry lobby group; it is, for example, anti-privacy and anti-data-protection. I wouldn't expect it to publish anything that is not in the interest of Big Tech.
You might have a look at LEO:
https://en.wikipedia.org/wiki/Leo_(text_editor)
I used it extensively for some time to write large documentation. It is good.
But I’d guess that for most tasks, Emacs org-mode is the most powerful option.