Philosophies in Software Engineering
Do One Thing and Do It Well
This is the Unix philosophy: Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface.
... the power of a system comes more from the relationships among programs than from the programs themselves. Many UNIX programs do quite trivial things in isolation, but, combined with other programs, become general and useful tools.
The UNIX Philosophy:
- Small is beautiful.
- Make each program do one thing well.
- Build a prototype as soon as possible.
- Choose portability over efficiency.
- Store data in flat text files.
- Use software leverage to your advantage.
- Use shell scripts to increase leverage and portability.
- Avoid captive user interfaces.
- Make every program a filter.
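"Make every program a filter" and "handle text streams" can be sketched in a few lines. This is a minimal, hypothetical filter (the upper-casing is just a placeholder for the "one thing" a real tool would do):

```python
import sys

def transform(line: str) -> str:
    # The "one thing" this filter does: upper-case its input line.
    return line.upper()

def main() -> None:
    # Read text from stdin, write text to stdout. Because it speaks the
    # universal interface (text streams), it composes with any other
    # filter via pipes, e.g.:
    #   cat access.log | python upper.py | sort | uniq -c
    for line in sys.stdin:
        sys.stdout.write(transform(line))

if __name__ == "__main__":
    main()
```

The program itself stays trivial; the power comes from combining it with `sort`, `uniq`, and friends in a pipeline.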
```
>>> import this
The Zen of Python, by Tim Peters

Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!
```
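One pair of lines from the Zen, "Errors should never pass silently. Unless explicitly silenced.", has a direct idiom in the standard library: `contextlib.suppress` silences exactly the exception you name, unlike a bare `except: pass`, which hides every failure.

```python
import os
from contextlib import suppress

# Explicitly silence one expected error and nothing else: if the scratch
# file is already gone, FileNotFoundError is suppressed; any other error
# (e.g. PermissionError) still propagates loudly.
with suppress(FileNotFoundError):
    os.remove("scratch.tmp")
```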
Convention over configuration: decrease the number of decisions a developer using the framework has to make, without necessarily losing flexibility.
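The idea can be sketched in a few lines. This is a hypothetical mini-framework (the names `CONVENTIONS` and `load_model` are illustrative, not from any real library): callers get sensible defaults by convention, and only deviations need to be spelled out.

```python
# Defaults the framework assumes unless told otherwise.
CONVENTIONS = {
    "model_dir": "models/",
    "format": "json",
}

def load_model(name: str, **overrides) -> str:
    # Start from the conventions, then layer on only what the caller
    # explicitly overrides. Zero decisions in the common case.
    cfg = {**CONVENTIONS, **overrides}
    return f"{cfg['model_dir']}{name}.{cfg['format']}"

# By convention:            models/churn.json
# Configured exception:     legacy/churn.xml
path_default = load_model("churn")
path_custom = load_model("churn", model_dir="legacy/", format="xml")
```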
I once loved Maven. Not only because its philosophy, "convention over configuration", was so striking to me at the time, but also because it made me realize that there is (or should be) a philosophy behind everything. Yet Maven only solves a tiny part of the problem: directory layout. It still takes dozens of lines of XML to perform even the most basic tasks. Then came sbt. No more XML; the whole build process is described by Scala code. At the beginning it felt bizarre: things came out of nowhere but worked like magic. Until one day I pressed cmd+B and realized that `libraryDependencies` is just a variable (actually a val) predefined by sbt, and the weird operators like `%%` are just methods.
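Python works the same way: an operator is just a specially named method. The toy class below is an analogy for the sbt revelation, not sbt's real API; `Org` and the use of `%` are purely illustrative.

```python
class Org:
    """Toy stand-in for a dependency DSL: `org % artifact`."""

    def __init__(self, name: str):
        self.name = name

    def __mod__(self, artifact: str) -> str:
        # `org % artifact` is nothing magic: Python rewrites it to
        # Org.__mod__(org, artifact), just as sbt's %% is a method call.
        return f"{self.name}:{artifact}"

dep = Org("org.scalatest") % "scalatest"
```

Press "go to definition" on `%` in either language and the magic evaporates into an ordinary method.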
I actually vaguely felt this trend from another task I’ve been doing. Now I’m convinced.
That task was to build a data mining pipeline for our data scientists. We encapsulated some of the common functionality and provided a single configuration file as the user interface. Originally it was JSON (I was never a fan of XML), but the quoting and trailing commas caused headaches for users. Then we tried Java properties, which were not flexible enough to define nested configs, and escaping special characters became a chore. YAML was another option, though we didn't feel confident enough to invest time in it. Eventually we created a home-grown format, and then educating users turned out to be the biggest overhead.
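The trailing-comma headache is easy to reproduce: strict JSON (and therefore Python's `json` module) rejects a comma after the last element, which is exactly the kind of thing users editing configs by hand get wrong. The config keys below are made up for illustration.

```python
import json

good = '{"threshold": 0.5, "features": ["age", "income"]}'
bad = '{"threshold": 0.5, "features": ["age", "income"],}'  # trailing comma

config = json.loads(good)  # parses fine

try:
    json.loads(bad)  # strict JSON: this raises json.JSONDecodeError
except json.JSONDecodeError as e:
    parse_error = str(e)
```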
Things got worse as more users came on board. Because data science is by nature exploratory and experimental, it was hard to expose every tweak and trick our scientists used. Even if we could encapsulate all of those requirements, the configuration manual would run to hundreds of pages.
The big question now: is "configuration" the right solution, or the right level of abstraction, for data science? Maybe we should take a step back and let users interact with code directly: like R, like SAS, like scikit-learn for Python, and yes, like Spark.
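A sketch of what "pipeline as code" buys over pipeline-as-config: users compose steps in the host language, scikit-learn style, so no config schema has to anticipate their tweaks. The `Pipeline` class and step functions here are hypothetical, for illustration only.

```python
class Pipeline:
    """Minimal pipeline: apply each step to the data in order."""

    def __init__(self, *steps):
        self.steps = steps

    def run(self, data):
        for step in self.steps:
            data = step(data)
        return data

# Users write ordinary functions for their experiments; a new tweak is
# a new function, not a new config option.
def drop_nulls(rows):
    return [r for r in rows if None not in r.values()]

def scale_age(rows):
    return [{**r, "age": r["age"] / 100} for r in rows]

result = Pipeline(drop_nulls, scale_age).run([{"age": 40}, {"age": None}])
```

The interface is the language itself, so the full power of the language (loops, conditionals, abstraction) is available where a config file would need ever more special cases.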
Worse is better: simplicity of both the interface and the implementation is more important than any other attribute of the system, including correctness, consistency, and completeness.
- Simple and wrong
- Complicated and wrong
- Complicated and right
- Simple and right
Alan Turing, 1936: no algorithm exists that takes as input a computer program (and its input data) and outputs 0 if the program halts and 1 if the program does not halt.
"Halting Problem" is therefore undecidable by algorithm.