• @ree
    2 years ago

    I don’t care if arrays start at 1.

• ☆ Yσɠƚԋσʂ ☆
    2 years ago

    My most controversial opinion on programming is that static typing is largely overrated. I’ve been doing software development professionally for around 20 years now, and I’ve seen no evidence to suggest that static typing plays a significant role in software quality. The dominant factors tend to be the experience of the developer, coding practices, code reviews, testing, and documentation. And while many ideas, such as OO, sound good on paper, they often don’t work out as intended in practice. In the early 2000s OO was all the rage, and it was supposed to solve code reuse, encapsulation, and all kinds of problems. The consensus was that OO was the one true way to do things, and every language had to use some flavor of OO. After decades of production use, it’s quite clear that OO did not live up to the hype. What we’re seeing now with static typing is a similar situation in my opinion.

    The reality is that both type disciplines have been around for many decades, and there are literally millions of projects of all kinds out there written in both static and dynamic languages. Despite that, nobody has been able to show any statistically significant trend suggesting that projects written in static languages have fewer defects. This article has a good summary of the research to date. Some people will argue that this is just something that’s too hard to measure, but that’s an absurd argument. For example, the effects of sleep deprivation on code quality can be clearly demonstrated. So, if static typing played a significant role, its effects would show up the same way.

    What often happens in discussions around static typing is that people assert that static typing has benefits and then try to fit the evidence to that claim. A scientific way to approach this would be to start by studying real-world open source projects written in different languages. If we saw empirical evidence that projects written in certain types of languages consistently perform better in a particular area, such as a reduction in defects, we could then make a hypothesis as to why that is. For example, if there were statistical evidence that using Haskell reduces defects, a hypothesis could be made that the Haskell type system plays a role here. That hypothesis could then be tested further, and that would tell us whether it’s correct or not. In fact, there was a large-scale GitHub study along these lines that has since been replicated. The conclusion was that there is no statistically significant effect associated with static typing.

    I think that the fundamental problem with static typing is that it’s inherently more limiting in terms of expressiveness, because you’re restricted to the set of statements that the type checker can effectively verify, which is a subset of all the valid statements you’re allowed to make in a dynamic language. So a static language will often force you to write code for the benefit of the type checker as opposed to the human reader, because you have to write code in a way the type checker can understand. This can lead to code that’s harder to reason about, and you end up with logic errors that are much harder to debug than simple type mismatches. People can also get a false sense of confidence from their type system, as seen here where the author assumed the code was correct because it compiled, when in fact it wasn’t doing anything useful.

    At the same time, there is no automated way to check the type definitions themselves. As you encode more constraints via types, you end up with a meta-program describing your program, and there is nothing to help you know whether that meta-program itself is correct. Consider a fully typed insertion sort in Idris: it’s nearly 300 lines long! I could understand a 10-line Python version in its entirety, and convince myself that it works as intended, far more easily than I could the Idris one.
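
    For reference, the untyped version I have in mind is roughly the following sketch (not any particular published implementation):

        def insertion_sort(xs):
            """Sort a list in place using insertion sort."""
            for i in range(1, len(xs)):
                current = xs[i]
                j = i - 1
                # Shift elements larger than current one slot to the right.
                while j >= 0 and xs[j] > current:
                    xs[j + 1] = xs[j]
                    j -= 1
                xs[j + 1] = current
            return xs

    There’s no machine-checked proof attached to it, but the whole thing fits in your head at once.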

    I also think dynamic typing is problematic in imperative/OO languages because the data is mutable, and you pass things around by reference. Even if you knew the shape of the data originally, there’s no way to tell whether it’s been changed elsewhere via side effects.
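
    As a contrived Python sketch of what I mean (hypothetical names, just to illustrate the aliasing problem):

        def enrich(order):
            # Mutates the dict it was handed by reference, so the caller's data changes too.
            del order["items"]
            order["item_count"] = 0

        order = {"id": 1, "items": ["apple"]}
        enrich(order)
        # Any other code still expecting order["items"] now breaks at runtime,
        # even though the original shape of the data looked fine.
        print(order)  # {'id': 1, 'item_count': 0}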

    On the other hand, functional languages such as Clojure embrace immutability and any changes to the data happen explicitly in a known context. This makes it much easier to know exactly what the shape of the data is at any one place in your application. Meanwhile, all the data is structured using a set of common data structures. Any iterator function such as map, filter, or reduce can iterate any data structure, and it’s completely agnostic regarding the concrete types. The code that cares about the types is passed in as a parameter. Pretty much any data transformations are accomplished by chaining functions from the standard library together, with domain specific code bubbling up to a shallow layer at the top.
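
    As a rough Python analogue of that style (hypothetical data, and Python’s built-in map/filter/reduce rather than Clojure’s), the generic functions do the iteration while the domain-specific logic is passed in as small functions:

        from functools import reduce

        orders = [
            {"customer": "a", "total": 120},
            {"customer": "b", "total": 40},
            {"customer": "a", "total": 75},
        ]

        # map/filter/reduce are agnostic about the concrete types;
        # the code that cares about the shape of the data is passed in.
        large = filter(lambda o: o["total"] > 50, orders)
        totals = map(lambda o: o["total"], large)
        grand_total = reduce(lambda acc, t: acc + t, totals, 0)

        print(grand_total)  # 195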

    Finally, it’s worth noting that types aren’t the only approach available. Runtime contracts as seen with Spec in Clojure provide a way to create a semantic specification for what the code is meant to be doing, and to do generative testing against it. This approach is much more flexible than a type system, and it can encode semantically meaningful constraints that are difficult to encode using types.
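
    I can’t reproduce Spec itself in a comment, but as a rough plain-Python analogue of the idea (a hand-rolled contract plus a tiny generative test, all names hypothetical):

        import random

        def valid_user(value):
            """Runtime contract: a user is a dict with a non-empty name and a non-negative age."""
            return (
                isinstance(value, dict)
                and isinstance(value.get("name"), str) and value["name"] != ""
                and isinstance(value.get("age"), int) and value["age"] >= 0
            )

        def birthday(user):
            """Function under test: returns a new user who is one year older."""
            return {**user, "age": user["age"] + 1}

        # Generative test: throw many conforming inputs at the function and
        # check that the contract still holds on the output.
        for _ in range(1000):
            user = {"name": random.choice(["ada", "grace"]), "age": random.randint(0, 120)}
            assert valid_user(user)
            assert valid_user(birthday(user))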

• @pingveno
    2 years ago

    Going a little overboard with tickets and process improvements can be really beneficial. As I’ve utilized the features in Jira more and more, I’ve found I’m more organized, less likely to lose tasks in our backlog, and more driven to get things done. I think that’s partially because it helps sort out my ADHD brain a bit.

• Ephera
    2 years ago

    Kotlin is a more complex programming language than Scala.

    Scala still seems to have this bad rep from the times when functional programming features weren’t commonplace yet in most programming languages, because it stood for this oh-so-complex functional programming stuff.
    Now, Kotlin is climbing up the TIOBE index, lauded by Java programmers as finally giving them the features they wanted, and as someone who’s now coded extensively in both, I don’t get it.

    Kotlin is obviously heavily inspired by Scala, and the two seem to have roughly the same features. But Kotlin imposes tons of rules that limit how you can use those features.

    However, I’m not talking about the good kind of rules, the ones that might help streamline the code style. I’m saying it feels like they implemented half of each feature and then, rather than finish the implementation, simply disallowed using it in any other way.
    Arbitrary rules, which as a programmer you just have to memorize to please the language gods.

    From a technical perspective, the only aspects where I can see Kotlin being better are somewhat smoother Java interop and some nice features for building DSLs. But those DSL features make it worse / less streamlined when you don’t want to build a DSL, and it just seems to be worse in every other aspect.

    Obviously, popularity rarely correlates with technical merit; I can accept that. But when people tell me I shouldn’t use Scala because it’s so complex, and that I should use Kotlin instead, that shit fucking triggers me.

• @ttmrichter
    2 years ago
    1. Any programmer who can’t recite the fallacies of distributed programming from memory should not be permitted near any kind of networking code.

    2. Any programmer who is permitted to program networking applications (cf. #1) should be required to use only a network environment (for work and personal use) that is high-load and low-reliability while doing so.

    3. If you don’t have used-in-anger knowledge of everything from an HDL to a formal theorem-proving language, plus the entire spectrum in between, you should not call yourself a “full stack” developer. The stack is much deeper than you can imagine.

    4. If you don’t know everything from network PHY to high-level networking abstractions you should not call yourself a “full network stack” developer. Network stacks are also much deeper than you think.

    • @guojing
      edited 2 years ago

      deleted by creator

    • @ttmrichter
      2 years ago

      Starting with one, however, almost certainly will.