Strive for changes that are obviously correct.
At Git Merge 2016, Greg Kroah-Hartman, a Fellow at the Linux Foundation, gave a talk about the development of the Linux kernel. Kernel development moves fast! At the time of Greg’s talk, 10,800 lines were added, 5,300 lines removed, and 1,875 lines modified every day, and the pace keeps increasing.
A lot of changes go into a Linux kernel release, so how do the maintainers keep up? Greg attributes this to “Time-based releases” and “Incremental changes”. Time-based releases mean that a new Linux kernel is released every 6-7 weeks. One effect of this schedule is that if a change does not get in, the wait for the next opportunity is short, at least for a project at this scale. The fixed release schedule also helps users, since they can plan ahead. Once a release candidate is cut, only bug fixes and reverts of regressions go in, no new features.
The rules for what can go into a stable kernel are really interesting. A change can be at most about a hundred lines of code, and it has to be “obviously correct”. Obvious correctness is, as Greg points out, in the eye of the beholder. If you want to do something more complicated, you have to show your work: break it down into individual steps, where every change in the series is obviously correct on its own. The burden of work is on the developer, not on the reviewer. As Greg puts it about obviously correct changes, “I would be stupid not to accept them”, since they are small, simple, correct pieces, and you showed all your work.
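To make “break it down into individual steps” concrete, here is a minimal sketch of splitting a larger change into a series of small commits using standard git commands. The file name and commit messages are hypothetical; only the git commands themselves matter.

```sh
# Stage and commit one logical step at a time.
# `git add -p` lets you pick individual hunks, so unrelated
# edits in the same file can land in separate commits.
git add -p src/parser.c        # hypothetical file
git commit -m "parser: extract validate_header helper"

git add -p src/parser.c
git commit -m "parser: reject headers longer than 100 bytes"

# Review the series before sending it: each commit should
# stand on its own and be obviously correct in isolation.
git log --oneline origin/main..HEAD
```

Each commit in the series is small enough for a reviewer to judge at a glance, which is the whole point.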
A similar pattern is “micro commits”, described by Ottinger in his post “What’s this about micro commits”. However, Ottinger’s main focus is on commit size. The experience is similar: smaller commits are easier to verify and review. He points out that reviewing does not scale linearly; it is much easier to review 20 small commits than one changeset of the same total size. That non-linearity resonates with my own experience. The practice of small commits is also described by Mark Seemann in his book “Code That Fits in Your Head”, where he describes how small commits increase the manoeuvrability of your code: it is easy to try ideas and change your mind if you keep moving in small increments. I think Mark sums it up nicely when he writes that “Your commit history should be a series of snapshots of working software”.
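That manoeuvrability is not abstract: small commits are cheap to reshape after the fact. A sketch with standard git commands, where `abc1234` stands in for the hash of an earlier commit in your series:

```sh
# Amend an earlier commit in the series without redoing the work:
# mark the fix as belonging to commit abc1234 (hypothetical hash) ...
git commit --fixup abc1234

# ... then let git fold it into place in the series.
git rebase -i --autosquash origin/main

# Dropping or reordering an experiment is a one-line edit in the
# same interactive rebase; a single large commit offers no such option.
```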
Personally, I see small commits as a side effect of striving for obvious correctness. If your work is to be judged as obviously correct, it has to be made in small pieces, and to bring the reader along, you have to show your work. Focusing on obvious correctness will thus inevitably lead to small commits.
One important principle in lean development for reducing waste is to move from working in large batches to a one-piece flow. Striving for obviously correct changes enables exactly that: a one-piece flow of changes all the way to production. Large changes spanning multiple files demand more effort and focus from the reviewer, stalling the flow of changes whenever a large batch has to be reviewed.
I think the principle of obvious correctness is fractal. From Greg, we know it works on a project as large as the Linux kernel, and I do not doubt that it works just as well on a small one. The principle is also related to our previously defined principle of fast feedback: running your quality checks on small commits gives you faster feedback from your tests, linting, and reviews.
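As a sketch of what those fast feedback loops can look like in practice, git can replay a branch and run your checks after every commit, so a broken snapshot is pinpointed immediately. Here `make test` is just a placeholder for whatever your project’s check command is:

```sh
# Replay every commit on the branch and run the checks after each one.
# The rebase stops at the first commit whose snapshot fails, telling
# you exactly which small change broke the build.
git rebase --exec "make test" origin/main
```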
In summary, I will follow Greg’s advice and make obvious correctness a first principle, as I believe some very good quality practices fall out of applying it. Fast feedback loops, all the way to production, are good. The fastest feedback loops happen when changes are small, well described, and obvious. Make your reviewer feel as if they would be stupid not to accept your change, because you showed your work and it is obviously correct.