Overall I agree with what the author says, though I have a few further thoughts:
One might argue that writing types is time consuming, and the bugs are not an issue when the programmer can cover those cases with automated tests.
These two arguments contradict each other and together are an argument for static typing, not against it - which just shows how weak these arguments are.
but a more powerful type of inference that can actually infer the whole type of a function by analyzing the body of the function.
This bit I am not convinced by. Inferring the API of a function from its body makes it harder to see breaking changes when you refactor the body. It can be useful for internal private helpers, but IMO public APIs should be explicit about their signature.
Functional Programming
I would go one step further here. It should support OOP and procedural paradigms as well. No single programming paradigm is best suited to all problems. Sometimes a mixed approach is best. Any language that heavily leans one way at the expense of the others limits the range of problems it can solve well. I do admit that far too many languages lean too much towards OOP at the expense of functional style.
So it is easy to push for a functional style over OOP in new languages. But I would be wary of a purely functional language as well.
These two arguments contradict each other and together are an argument for static typing, not against it - which just shows how weak these arguments are.
The way I read it, he wasn’t suggesting that was a good argument at all. He was just explaining what he believes dynamic type enthusiasts say.
This bit I am not convinced by. Inferring the API of a function from its body makes it harder to see breaking changes when you refactor the body. It can be useful for internal private helpers, but IMO public APIs should be explicit about their signature.
Well, in F# at least, this inference is the default. However, anybody can still fully type out the function signature. I think I get what you are saying, but in the case of a public API or interfaces the programmer can simply just add the type specifications.
I would go one step further here. It should support OOP and procedural paradigms as well.
Yeah, I somewhat agree with this. Though I mostly abhor OOP, taken in small doses it can be good. And procedural programming is always invaluable, of course.
The way I read it, he wasn’t suggesting that was a good argument at all. He was just explaining what he believes dynamic type enthusiasts say.
Oh I read it the same way. I was just pointing out how much those two arguments contradict each other and that alone is almost enough of an argument without the extra ones the author gave. Though at the same time they are a bit of a straw man.
Well, in F# at least, this inference is the default. However, anybody can still fully type out the function signature. I think I get what you are saying, but in the case of a public API or interfaces the programmer can simply just add the type specifications.
But the problem with this being the default is that most people won’t go through the extra steps involved when they don’t need to - which means you cannot really benefit from it most of the time. And the author implies that inferred types from bodies are always (or almost always) the better option - which I disagree with.
Though I mostly abhor OOP, taken in small doses it can be good
Do you though? Or do you abhor inheritance? There are a lot of good ideas in OOP-style code if you ignore inheritance (which was not originally part of the definition of OOP; in fact the original definition was very similar to some traits of functional styles). It was only in later years and more modern times that OOP and inheritance became intertwined, and IMO as a style it still has a lot to offer if you drop that one anti-feature.
I don’t understand how types could be time consuming.
It can be nice not to have to worry about types when you are doing exploratory programming. For example, I once started by writing a function that did a computation and then returned another function constructed from the result of that computation, and then realized that I’d actually like to attach some metadata to that function. In Python, that is super-easy: you just add a new attribute to the object and you’re done. At some point I wanted to tag it with an attribute that was itself a function, and that was easy as well. Eventually I got to the point where I was tagging it with a zillion functions and realized that I was being silly and replaced it with a proper class with methods. If I’d known in advance that this is where I was going to end up then I would have started with the class, but it was only after messing around that I got a solid notion of what the shape of the thing I was constructing should be, and it helped that I was able to mess around with things in arbitrary ways until I figured out what I really wanted without the language getting in my way at intermediate points.
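A minimal Python sketch of the kind of thing being described (the names and computation here are made up, not the actual code):

```python
def make_scaler(data):
    # Do some computation, then build and return a new function from its result.
    scale = sum(data) / len(data)

    def scaler(x):
        return x / scale

    # Exploratory tagging: attach metadata straight onto the function object.
    scaler.scale = scale                             # a plain value attribute
    scaler.describe = lambda: f"divides by {scale}"  # an attribute that is itself a function
    return scaler

f = make_scaler([2, 4, 6])
print(f(8))          # 2.0
print(f.scale)       # 4.0
print(f.describe())  # divides by 4.0
```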
Just to be clear, I am not saying that this is the only or best way to program, just that there are situations where having this level of flexibility available in the language can be incredibly freeing.
And don’t get me wrong, I also love types for two reasons. First, because they let you create a machine-checked specification of what your code is doing, and the more powerful the type system, the better you can do at capturing important invariants in the types. Second, because powerful type systems enable their own kind of exploratory programming where instead of experimenting with code until it does what you want you instead experiment with the types until they express how you want your program to behave, after which writing the implementation is often very straightforward because it is so heavily constrained by the types (and the compiler will tell you when you screwed up).
Some things are very easy to do in loosely typed languages - such as letting a function take a string or an int, and then parsing that string into an int if a string was passed in. In a loosely typed language you can just call something like if typeof(input), but in a strongly typed language you often need a lot more boilerplate and extra wrapper types to say the function can only take an int or a string - for instance in Rust you might need to wrap them in an enum first, or somehow create a trait and implement it for each type.
This is quite a bit of extra upfront cost some of the time, and it really rubs people who are used to loosely typed languages the wrong way. So they think it is slow to work with types. But what they never seem to count is the countless hours you save knowing that when you read a value from something and pass it to a function that wants only an int, that value is an int, and you don’t end up getting 2 + "2" = "22" and other surprising bugs in your program - which results in less time spent debugging and writing tests for weird cases the compiler does not allow.
But all that extra time is often not counted, because it is dissociated from the original problem at hand. And they are already used to this cost - people often notice a new cost, but don’t notice a possibly bigger saving elsewhere.
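For the string-or-int case above, the dynamic version really is just a runtime check - a rough Python sketch (illustrative names only):

```python
def to_count(value):
    # Accept either an int or a string, parsing the string at runtime;
    # a strongly typed language would typically push this towards an
    # enum/union wrapper type or separate overloads instead.
    if isinstance(value, str):
        return int(value)
    return value

print(to_count(42))    # 42
print(to_count("42"))  # 42
```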
The main argument against strictly typed languages IMO isn’t that types are time-consuming to write; it’s that they forbid some otherwise valid programs. When writing down your types you are forced to write down some of the assumptions you make about your data (which is usually a good thing), but not all assumptions are possible or ergonomic to express in your programming language’s type system.
Overall I have a strong preference for statically typed languages as they (usually) make code more readable and help prevent bugs, but it’s important not to strawman fans of dynamic types either!
Can you give any examples of such “otherwise valid programs”? Because a lot of times static typing also has ways to do everything dynamic typing can but it is just more difficult or (obviously) won’t have the benefits of static typing.
I think the way some dynamically typed languages use maps is interesting. In most languages if you have a hashmap, all values have to have the same type. In a dynamic language you can have some members be methods, some members be values of potentially different types and so on. Of course, depending on what you want to achieve, you might be able to use a struct for example. A map is more flexible though. You can union two different maps, or you can have a function that takes a map that has either a or b. It’s not necessarily impossible to express this in static types either, but there are many things here that quickly become tedious to do with types that Just Work in dynamically typed languages.
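A rough Python illustration of that kind of map (names made up), with values mixing data and functions, and two maps being unioned:

```python
def make_circle(r):
    return {
        "kind": "circle",
        "radius": r,                      # a plain value
        "area": lambda: 3.14159 * r * r,  # a member that is a function
    }

defaults = {"colour": "red", "filled": True}

# Union two maps: default settings plus the circle itself.
shape = {**defaults, **make_circle(2.0)}

print(shape["area"]())  # 12.56636
print(shape["colour"])  # red
```

Typing that statically usually means either a dedicated struct/record per shape or a map over some sum type, which is where the tedium mentioned above tends to come in.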