Apr 14, 2022 · Liked by Samuel Arbesman

Perhaps a slightly more optimistic view from a systems perspective.

Computer science is the discipline of breaking down problems into smaller and smaller parts. Each level of abstraction has its own specialty but someone needs to know how to combine these parts. And someone, in turn, combines *those* parts. Turtles all the way up.

Within each level, there are small details and large inter-dependencies. So maybe each level of abstraction needs different kinds of minds (and time frames) to become robust?


This is right on, as is D. Schmudde's comment.

Code suffers from a sort of entropic decay, but not all systems and software decay at the same rate. In the churn of pursuing product/market fit, that decay rate is devastatingly high unless the system is designed for iteration (and even then it is merely "high").

Before I did much coding myself, I was dismayed at how often developers were "refactoring" – not just old code but code they had written 2-3 months ago! But due to a weird career arc that led to me coding, I now find myself refactoring constantly. Always Be Refactoring! But there is a "right way" to refactor when the rapid iterations just keep coming. We can distill and generalize down to the right set of primitives with the right affordances and interfaces – legos, in a sense – with which we can build and modify quickly. Moving legos creates far less entropy than moving around the lower-level constructs.
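A toy sketch of what "distilling down to a lego" might look like in practice (all names here are hypothetical, invented for illustration): repeated inline logic gets named once as a small primitive with a clear interface, so later iterations move the lego instead of re-touching the low-level code.

```python
# Before: a hypothetical call site re-implements "try keys, then fall back" inline.
# Every similar call site would repeat (and drift from) this logic.
def fetch_price_v1(lookup, symbol):
    value = lookup.get(symbol)
    if value is None:
        value = lookup.get(symbol.upper())
    if value is None:
        value = 0.0
    return value

# After: the pattern is distilled into one reusable primitive -- a "lego".
def first_found(lookup, keys, default=None):
    """Return the value for the first key present in `lookup`, else `default`."""
    for key in keys:
        if key in lookup:
            return lookup[key]
    return default

# Call sites become short, and changing the fallback policy means
# editing one primitive rather than many scattered code paths.
def fetch_price_v2(lookup, symbol):
    return first_found(lookup, [symbol, symbol.upper()], default=0.0)

prices = {"AAPL": 191.2}
assert fetch_price_v1(prices, "aapl") == fetch_price_v2(prices, "aapl") == 191.2
assert fetch_price_v2(prices, "msft") == 0.0
```

The point isn't this particular helper; it's that rapid iteration stays cheap when change is absorbed at the primitive's interface rather than at every call site.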


Great piece!

I felt myself cynically thinking: do you really believe the average startup founder is engaged in long-term thinking?! It's often the engineers who say, "Wait! We need to do this right or we'll regret it later." But I do think you're onto something.

The other relevant factors are the inherent ("number of states") complexity of software vs. physical systems (à la Fred Brooks), and that software is an attempt to encode objective rules around often fuzzy and tacit procedures and domains.
