
> we want code that's expressive enough to do what we want, while being constrained enough to not do what we don't

I don't think that's an ideal mental model. Code in any (useful) language can do what you want while not doing what you don't want. The question is how far that code is from code that breaks those properties -- using a distance measure that accounts for the likelihood of a given defect being written by a coder, passing code review, being missed in testing, etc. (Which is a key point -- the distance metric changes with your quality processes! The ideal language for a person writing on their own with maybe some unit testing is not the same as for a team with rigorous quality processes.) Static typing is not about making correct code better; it's about making incorrect code more likely to be detected earlier in the process (by you, not your customers).



I was being glib, so let me expand on what I said a little.

By 'constraint' I mean something the language disallows or at least discourages. Constraints in software development are generally intended to eliminate certain classes of errors. Static typing, immutability, variable scoping, automatic memory management and encapsulation are all examples of constraints, and represent control that the language takes away from the developer (or at least hides behind 'unsafe' APIs).
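
To make 'constraint' a bit more concrete, here's a minimal sketch in Python (the Point class is just an invented example): immutability is control the language takes away from you, and in exchange a whole class of mutation bugs simply can't be written.

    from dataclasses import dataclass, FrozenInstanceError

    # Immutability as a constraint: the runtime refuses to let you
    # mutate the value after construction (type checkers flag it too).
    @dataclass(frozen=True)
    class Point:
        x: float
        y: float

    p = Point(1.0, 2.0)
    try:
        p.x = 5.0  # disallowed by the frozen constraint
    except FrozenInstanceError:
        print("mutation rejected")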

By 'expressiveness' I mean a rough measurement of how concisely a language can implement functionality. I'm not talking code golf here; I mean more the size of the AST than the actual number of bytes in the source files.

Adding constraints to a language does not necessarily reduce its overall expressiveness, but static typing is one of those constraints that typically does have a negative effect on language expressiveness. Some will argue that static typing is worth it regardless, or that this isn't an inherent problem with static typing, but one that stems from inadequate compilers.


That is a pretty fair assessment, and I'll avoid the nominal vs. structural typing subject, but in my experience the difference between static and dynamic typing comes down to metaprogramming. For instance, much of Python's success stems from its dynamic metaprogramming capabilities. By contrast, Java's limitations with respect to metaprogramming prevent it from competing in areas such as ML and data science / analytics.
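
For a flavor of what that dynamic metaprogramming looks like, here's a toy Python sketch (the schema and names are invented): building classes at runtime from data is roughly what ORMs and dataframe libraries lean on, and it is much more awkward to express in Java.

    # Toy sketch: build a record class at runtime from a schema dict,
    # the way ORMs and similar libraries generate classes from metadata.
    schema = {"name": str, "price": float}

    def make_record_class(class_name, fields):
        def __init__(self, **kwargs):
            for field in fields:
                setattr(self, field, kwargs[field])
        return type(class_name, (object,), {"__init__": __init__})

    Product = make_record_class("Product", schema)
    item = Product(name="widget", price=9.99)
    print(item.name, item.price)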

One of the most untapped and misunderstood areas in language design is static metaprogramming. Perhaps this is what you meant by "inadequate compilers", but there is no reason why Java can't provide compile-time metaprogramming. With a comprehensive implementation it can compete directly with dynamic metaprogramming, with the benefits of static analysis etc., which is a game changer.


Reading your previous comments made me read 'glib' as 'g-lib', the GTK one.


Everything has a cost. If you had to pick between "write 99% correct code in 1 week" vs "write 100% correct code in 1 year", you probably would pick the former, and just solve the 1% as you go. It's an absurd hypothetical, but illustrates that it's not just about correctness. Cost matters.

What often annoys me about proponents of static typing is that they sound like it doesn't have a cost. But it does.

1. It makes syntax more verbose, making it harder to see the "story" among the "metadata".

2. It makes code less composable, meaning that everything requires complex interfaces to support everything else.

3. It encourages reusing a few general types across the codebase rather than narrowly scoped, situational ones.

4. It optimizes for an "everything must be protected from everything" mentality, when in reality you only have something like 2-5 possible data entry points into your system.

5. It makes tests more complex to write.

6. Compiled languages are less likely to give you a powerful/practical REPL in a live environment.

For some, this loses more than it gains.

Also, although I haven't seen this studied, the human factor probably plays a bigger role here than we realize. Too many road signs ironically make roads less safe due to distraction. When my code looks simple and small, my brain gets to focus better on "what can go wrong specifically here". When the language demands that I spend my attention constructing types and adding more and more noise, it leaves me less energy and perspective for taking a step back and thinking "what's actually happening here".
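
As a rough illustration of point 1 (a toy example, names invented), here is the same function with and without the type "metadata"; in Python the gap is small, but it is the kind of noise I mean.

    from typing import Iterable, TypedDict

    # Untyped: just the "story".
    def total(orders, tax_rate):
        return sum(o["price"] * o["qty"] for o in orders) * (1 + tax_rate)

    # Typed: the same story, plus the metadata a checker needs.
    class Order(TypedDict):
        price: float
        qty: int

    def total_typed(orders: Iterable[Order], tax_rate: float) -> float:
        return sum(o["price"] * o["qty"] for o in orders) * (1 + tax_rate)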


Cost matters, but in my experience there's more to this story. It's more like this:

"write 99% correct code in 1 week and then try to fix it as you go, but your fixes often break existing things for which you didn't have proper tests for. It then takes you total of 2 years to finally reach 100% correct code."

Which one do you choose? It's actually not as simple as 1 year vs 2 years. For a lot of stuff, 100% correctness is not critical. 99% correct code can still be a useful product to many, and it helps you quickly validate your idea with users.

However, the difference between static and dynamic typing is not that drastic if you compare dynamic typing to an expressive statically typed language with good type inference. Comparing, for example, Python to C++ is not really fair, as there are too many other things that make C++ more verbose and harder to work with. But if we compare Python to, for example, F# or even modern C#, the difference is not that big. And dynamic typing has costs too, just different ones.
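
To illustrate the inference point with a rough Python-plus-mypy sketch (function and names invented): in practice you annotate the boundary and let the checker infer the locals, which is roughly the low-ceremony feel that F#/C# inference gives you.

    # Only the signature is annotated; locals need no annotations.
    def summarize(prices: list[float]) -> str:
        total = sum(prices)
        average = total / len(prices)
        return f"avg={average:.2f} total={total:.2f}"

    print(summarize([9.99, 4.50, 2.25]))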

1. "Story" can be harder to understand without "metadata" due to ambiguity that missing information often creates. It's a delicate balance between too much "metadata" and too little.

2. Too much composability can lead to bugs where you compose the wrong things or compose them in the wrong way. Generic constraints on interfaces and other metaprogramming features allow flexible yet safer composability, but require a bit more thought to create (rough sketch after this list).

3. Reuse is similar: having no constraints on reuse doesn't protect you from reusing something in a corner case where it doesn't work.

4. (depends on how you design your types)

5. Dynamic languages require you to write more tests.

6. F# and C#, for example, both have a REPL.
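
To sketch what point 2 means in Python terms (Protocol and all names invented for illustration): a generic constraint says "compose anything that has this shape", and the checker rejects everything else.

    from typing import Iterable, Protocol, TypeVar

    class Costed(Protocol):
        def cost(self) -> float: ...

    T = TypeVar("T", bound=Costed)

    # Composes with anything that has a .cost(); the constraint keeps
    # you from wiring in something that doesn't.
    def cheapest(items: Iterable[T]) -> T:
        return min(items, key=lambda item: item.cost())

    class Shipment:
        def __init__(self, fee: float) -> None:
            self.fee = fee
        def cost(self) -> float:
            return self.fee

    print(cheapest([Shipment(3.0), Shipment(1.5)]).fee)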

A quality statically typed language is much harder to create and requires more features to be expressive, so there are fewer of them, or they have some warts and are harder to learn.

It's a game of tradeoffs, where a lot of choices depend on a specific use case.


Dynamic languages can execute code without type annotations, so you _can_ just dismiss types as redundant metadata. But I don’t think that’s wise. I find types really useful as a human reader of the code.

Whether you document them or not, types still exist, and you have to think about them.

Dynamic languages make it really hard to answer “what is this thing, and what can I do with it?”. You have to resort to tracing through the callers to work out the union of all possible types that make it to that point. You can’t just check the tests, because there’s no guarantee they accurately reflect all callers. A simple type annotation just gives you the answer directly, no need to play mental interpreter.
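
A toy sketch of the difference (the names and the User shape are invented):

    from typing import Optional, TypedDict

    # Without an annotation, "what can `user` be?" means tracing callers:
    # a dict? an ORM object? sometimes None?
    def display_name(user):
        return user.get("name", "anonymous")

    # With one, the answer is in the signature.
    class User(TypedDict, total=False):
        name: str

    def display_name_typed(user: Optional[User]) -> str:
        if user is None:
            return "anonymous"
        return user.get("name", "anonymous")

    print(display_name_typed({"name": "Ada"}))
    print(display_name_typed(None))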


I don't disagree: dynamic languages require better writing skills, so, for example, in the case of bilingual teams, metadata helps bridge the language barrier. However, if your team is good at expressing how/what/why[1] in your dynamic language, you will not have much trouble answering what things are. Again, there are costs with either choice.

[1]: https://max.engineer/maintainable-code


> Everything has a cost. If you had to pick between "write 99% correct code in 1 week" vs "write 100% correct code in 1 year", you probably would pick the former, and just solve the 1% as you go. It's an absurd hypothetical, but illustrates that it's not just about correctness. Cost matters.

I work on airplanes and cars. The cost of dead people is a lot higher than the cost of developer time. It’s interesting to ask how we can bring development costs down without compromising quality; in my world, it’s not at all interesting to talk about strategically reducing quality. We have the web for that.


> It’s interesting to ask how we can bring development costs down without compromising quality; in my world, it’s not at all interesting to talk about strategically reducing quality.

You have some level of quality y'all are used to, one that was already achieved by compromise, and you'd like to stay there. How was that original standard established?

On an exponential graph of safety vs. effort (where effort goes up a lot for small safety gains), you are willing to put in many more points of effort than the general industry to achieve a few more points of safety.


> You have some level of quality y'all are used to, one that was already achieved by compromise, and you'd like to stay there. How was that original standard established?

Safety-critical code for aviation co-evolved with the use of digital systems; the first few generations were directly inspired by the analog computers they replaced, and many early systems used analog computers as fallbacks on failures of the digital systems. These systems were low enough complexity that team sizes were small and quality was maintained mostly through discipline.

As complexity went up, and team sizes went up, and criticality went up (losing those analog fallbacks), people died; so regulations and guidelines were written to try to capture best practices learned both within the domain and from across the developing fields of software and systems engineering. Every once in a while a bunch more people would die, and we'd learn a bit more, and add more processes to control a new class of defect.

The big philosophical question is how much of a washout filter you apply to process accumulation. If you only ever add, you end up with mitigations for almost every class of defect we've discovered so far, but you also end up fossilized; if you allow processes to age out, you open yourself to making the same mistakes again.

To make it a less trivial decision, the rest of software engineering has evolved (slowly, and with crazy priorities) at the same time, so some of the classes of defect that certain processes were put in to eliminate are now prevented in practice by more modern tooling and approaches. We now have lockstep processors, and MPUs, and verified compilers, and static analysis tools, and formal verification (within limited areas)... all of which add more process and time, but give the potential for removing previous processes that used humans instead of tooling to give equivalent assurances.


Thanks for writing this (just a generally interesting window into a rare industry). As you point out, you can't only ever add. If there was a study suggesting that static types don't add enough safety to justify tradeoffs, you might consider phasing them out. In your industry, they are currently acceptable; there's consensus on their value. You probably have to prioritize procedure over individual developers' clarity of perception (because people differ too much and the stakes are too high). That's fair, but also a rare requirement. Stakes are usually lower.


> If there was a study suggesting that static types don't add enough safety to justify tradeoffs, you might consider phasing them out.

Perhaps. Speaking personally now (instead of trying to generalize for the industry), I feel like almost all of the success stories about increasing code quality per unit time have been stories about moving defect detection and/or elimination left in the development process -- that is, towards more and deeper static analysis of both requirements and code. (The standout exception to this in my mind is the adoption of automatic HIL testing, which one can twist as moving testing activity left a bit, but really stands alone as adding an activity that massively reduced manual testing effort.) The only thing that I can see replacing static types is formal proofs over value sets (which, of course, can be construed as types), giving more confidence up front at the cost of more developer effort to provide the proofs (and to write the code in a way amenable to proving) than simple type proofs require.


The most important ingredient by far is competent people. Those people will then probably introduce some static analysis to find problems earlier and more easily. But static analysis can never fix the wrong architecture or the wrong vision.

In the industries I've worked in, it's not a huge problem if you have a bug. It's a problem if you can't iterate quickly, try out different approaches quickly, and bring results quickly. A few bugs are acceptable as long as they can be fixed. I've even worked at a medical device startup for a bit, and it wasn't different, other than that at some point some ISO compliance work needs to happen. But the important thing is to get something off the ground in the first place.


> The most important ingredient by far is competent people.

Having competent people is a huge time (cost) saving. But if you don't have a process that avoids shipping bad software even when people make mistakes (or are just bad engineers), you don't have a process that maintains quality. A bad enough team with good processes will cause a project to fail through infinite delays, but that's a minor failure mode compared to shipping bad software. People are human, mostly, and if your quality process depends on competence (or worse, on perfection), you'll eventually slip.


You'll slip and regain your footing a few hours later without much loss in most industries.


Right, but I hope you also understand that nobody's arguing for removing static types in your situation. In a highly fluid, multiple deployment per day, low stakes environment, I'd rather push a fix than subject the entire development process to the extra overhead of static types. That's gotta be at least 80% of all software.


> The cost of dead people is a lot higher than the cost of developer time.

So you're using proof on every line of code you produce?


> So you're using proof on every line of code you produce?

No, except for trivially (the code is statically and strongly typed, which is a proof mechanism). The set of activities chosen to give confidence in defect rate is varied, but only a few of them would fit either a traditional or formal verification definition of a proof. See DO-178C for more.


I agree 100% with this. I've had the same concerns for a long time but rarely see them expressed with such eloquence.



