There is no such thing as software development
It's hard to find an aspect of modern life that is not influenced in some way by software. Some of it is very visible, for example the Web browser I start on my computer. Other software is completely invisible, such as the software controlling my car's diesel engine. Some software is safety-critical, for example flight control software in airplanes. Other software serves much more playful ends, such as games. I could go on listing dimensions along which software packages differ, but I will leave it at that - I don't really expect anyone to disagree about the ubiquity and diversity of software in our increasingly digital world.
Given this diversity, it is surprising how many seem to consider "software development", and related terms such as "software engineering", as general concepts requiring no further qualification. In particular, plenty of people are happy to discuss in an abstract way how software should best be developed, without any reference to a concrete application domain, project size, expected longevity, etc. Imagine we did the same for the world of atoms, lumping together activities as distinct as chemical synthesis, carpentry, and dental surgery under the label "matter manipulation", and then started a discussion about best practices for matter manipulation. I doubt anyone would take such a debate seriously.
A good example of such an overly abstract discussion is the one about the benefits of static typing. There is a large camp of static typing enthusiasts who claim that static typing is Right with a capital R. They argue that it's always better to have correctness guarantees than not to have them. The implicit assumption is that static typing comes at no cost, which is manifestly false. The main contributions to this cost are 1) additional cognitive load, 2) the need to work around the limitations of a type checker, and 3) additional barriers to the combination of independently developed libraries. As soon as one admits the necessity of a cost-benefit analysis for static typing, it quickly becomes obvious that this can only be done for 1) some specific category of software and 2) a specific type system. The question then becomes: is type system A useful for improving the quality of software in application domain X? A nice example of this point of view is given by Rich Hickey in his keynote on "Effective Programs", where he explains why none of the well-known type systems are useful for the kind of software he writes, leading to his decision to design Clojure as a dynamically typed language.
Focusing software development questions on specific software categories has many potential benefits. Perhaps most importantly, it permits formulating questions in a precise enough way to make them amenable to empirical verification (aka "the scientific method"), acting at the same time as a safeguard against overly generalizing the conclusions from empirical studies. Moreover, the study of specific use cases is likely to lead to improvements in the methodology. In my example of static typing, it can be expected that once type system designers adopt the habit of thinking about specific software categories, they will design and evaluate type systems for various important application domains, taking into account both the kind of data being processed and the kinds of mistakes one would like to protect oneself against. Even better, once type system designers recognize that there is no single type system to rule them all, they might start to think about how to combine pieces of software written using different type systems. In the end, the three cost factors I mentioned might all end up heavily reduced.
Since there is a chance that some type system designers are reading this, I'll take advantage of having their attention and suggest developing a type system for numerical computations, which by some strange coincidence is what I do in my own work. In this application domain, most data represents physical quantities and its low-level representation is "float" or "array of floats". Properties that one could usefully monitor in the course of type checking are dimensions and units, but also positivity or non-zeroness. For array operations, the compatibility of array dimensions is worth a check as well. A static proof of the complete absence of such mistakes is probably not doable, but detecting as many mistakes as possible while inserting run-time checks for the rest is probably a very useful compromise. It is also worth considering some important sub-categories of numerical software, in particular the different layers of the scientific software stack that I have described before. The required guarantees are much higher for infrastructure software (layer 2) than for scripts and workflows (layer 4), and infrastructure developers can be expected to invest more effort to ensure correctness. However, this does raise the question of type-checking at the interface between layers, a possible solution being gradual typing.
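To make the idea concrete, here is a minimal run-time sketch of the checks such a type system could perform statically. The `Quantity` class and its dimension encoding are invented for illustration; they are not an existing library's API.

```python
# A minimal sketch of run-time dimension and shape checking for
# numerical data. The Quantity class is illustrative only.
import numpy as np

class Quantity:
    """An array of floats tagged with physical dimensions,
    encoded as exponents of (length, mass, time)."""
    def __init__(self, values, dimensions):
        self.values = np.asarray(values, dtype=float)
        self.dimensions = tuple(dimensions)

    def __add__(self, other):
        # Addition requires identical dimensions *and* array shapes.
        if self.dimensions != other.dimensions:
            raise TypeError(f"dimension mismatch: "
                            f"{self.dimensions} vs {other.dimensions}")
        if self.values.shape != other.values.shape:
            raise ValueError(f"shape mismatch: "
                             f"{self.values.shape} vs {other.values.shape}")
        return Quantity(self.values + other.values, self.dimensions)

    def __mul__(self, other):
        # Multiplication adds the dimension exponents element-wise.
        dims = tuple(a + b for a, b in
                     zip(self.dimensions, other.dimensions))
        return Quantity(self.values * other.values, dims)

length = Quantity([1.0, 2.0], (1, 0, 0))   # metres
time = Quantity([0.5, 0.5], (0, 0, 1))     # seconds
area = length * length                      # exponents become (2, 0, 0)
# length + time  # would raise TypeError: dimension mismatch
```

A static checker would perform exactly these checks at compile time where possible, falling back to run-time checks like these where array shapes or values are only known during execution.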
Static typing is merely one example of the importance of looking at specific software application domains; there are many others. The utility of paradigms such as object-oriented or functional programming is also mostly discussed in the abstract, as are the relative merits of development strategies like test-driven or agile development. Finally, some less discussed but practically important questions could get more exposure if formulated more concretely in the context of specific applications. I am thinking for example of the choice between using external libraries and writing one's own code, which involves a trade-off between development effort and the long-term risk of uncontrollable dependencies.
Comments retrieved from Disqus
- Thomas Arildsen:
I think you raise some very important points here. It is similar in spirit to what I usually spend a substantial amount of time trying to convince students of in my courses: the choice between fast, compiled, "low-level" languages (such as C) and slower, interpreted, "high-level" languages (such as Python) is not a case of one language to rule them all. It depends highly on how much time/cost you are willing to spend on developing the program vs how much it is actually going to be used after completion. In the case of custom scientific computing software, I find Python or similar languages are what makes sense.
Also, I find it very relevant that you point out how data types in numerical computing applications are not simply a case of int vs float. In fact, this is what two PhD students in a recent research project I was involved in tried to address: http://vbn.aau.dk/en/public... & http://magni.readthedocs.io.... The idea is to do detailed run-time numerical type-checking of function arguments using decorators in Python.
- Konrad Hinsen:
Thanks for your comment!
You point out another tradeoff, language choice, that very much depends on what your software is actually supposed to do. I didn't mention this example because I rarely see language choice discussed abstractly, although it certainly happens.
It's good to see we agree on the importance of unit checking :-) If it's so rarely done in practice, that's because it is not well supported. For Python, your approach of run-time checking is very appropriate, but people who turn to Fortran or C for speed would expect compile-time checks with no run-time overhead. There is actually a tool (not so well known for now) that does static unit checking for Fortran (https://camfort.github.io/), and for C++ it can be done via template metaprogramming (http://www.boost.org/doc/li.... Microsoft's F# language has dimensional analysis as a built-in feature, as does Frink (https://frinklang.org/). But I am not aware of any language with a general-purpose type system that would allow the implementation of dimensional analysis. If anyone knows of one, I'd appreciate a pointer.
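For readers unfamiliar with the decorator approach mentioned above, here is a minimal sketch of run-time unit checking of function arguments. The decorator name and the (value, unit) argument convention are invented for illustration and do not reflect any specific library's API.

```python
# A sketch of run-time unit checking via a Python decorator.
# Names and conventions are hypothetical, for illustration only.
import functools

def expects_units(*units):
    """Check that each (value, unit) argument carries the declared unit."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args):
            for (value, unit), expected in zip(args, units):
                if unit != expected:
                    raise TypeError(f"expected {expected}, got {unit}")
            # Strip the unit tags before calling the wrapped function.
            return func(*(value for value, _ in args))
        return wrapper
    return decorator

@expects_units("m", "s")
def speed(distance, duration):
    return distance / duration

v = speed((10.0, "m"), (2.0, "s"))   # OK: returns 5.0
# speed((10.0, "m"), (2.0, "kg"))   # raises TypeError
```

The checks here happen at every call, which is the run-time overhead that compile-time approaches such as CamFort or F#'s units of measure avoid.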
- Franklin Chen:
General-purpose languages like Haskell have type systems that enable building your own dimensional analysis system if you want. One example mature library contributed to the community is https://hackage.haskell.org...
- Konrad Hinsen:
Thanks for the pointer! That library looks interesting, though I don't see exactly how it works, given that I have never heard of data kinds and type families before. But I can see from the source code that it does standard dimensional analysis and that it does the checking at compile time, which covers the basic requirements. What I don't see is how it handles the well-known tricky cases, such as making both Hz and Bq compatible with 1/s but not with each other.
- Franklin Chen:
Unfortunately, in `dimensional`, currently Hz and Bq are not kept different at all, actually. I see that although the types look different
```
hertz :: Num a => Unit Metric DFrequency a
becquerel :: Num a => Unit Metric DActivity a
```
in fact DActivity is just an alias for DFrequency rather than a different type. I've submitted an issue at https://github.com/bjornbm/...
- Konrad Hinsen:
And I have added a comment to prevent the authors from believing that there is a simple fix. Doing this correctly is probably a research project. But I hope somebody will go for it!