
Tech Book Face Off: Clean Code Vs. Agile Principles, Patterns, and Practices in C#

It's been two months since I've done a Tech Book Face Off, but not because I haven't been studying and learning. Things have been crazy busy lately, so I had to slow down on the reading a bit. I've still managed to sneak these two great books into my limited free time, and now it's time to give them a rundown.

I've read a lot of things that refer to Agile development or talk about bits and pieces of it, but I never really read anything that presented a complete picture. I've also repeatedly been told that many of the things I talk about on this blog are covered thoroughly by Robert C. Martin, better known as Uncle Bob. It was about time that I studied Agile in more depth, and it seemed like a good time to check out some of Uncle Bob's stuff since he's an outspoken advocate of Agile. I decided on two of his books: Clean Code: A Handbook of Agile Software Craftsmanship and Agile Principles, Patterns, and Practices in C#. Let's see what I've been missing.

Clean Code front cover VS. Agile Principles, Patterns, and Practices in C# front cover

Clean Code


First things first, I love the covers. Images of deep space objects are always inspiring to me. I said quite a while ago that I wanted to read Clean Code, and only now have I finally gotten around to it. I often think when reading a good book that I should have read it sooner, but when would I have fit it in, really? I read a lot.

I do feel that Uncle Bob was speaking directly to me through this book. It was excellent. Nearly all of the programming-related stuff I write about here is already in Clean Code—well organized and clearly presented. I'm amazed at the amount of overlap in our thinking, although I don't agree with him on everything.

The book is focused on what clean code looks like and how to write code well. It starts out simple with recommendations on naming conventions, functions, comments, and formatting. Then it builds on that foundation with objects and data structures, error handling, component boundaries, unit tests, classes, and systems. It then moves into high-level issues with emergent design, concurrency, and successive refinement, and finishes up with a nice example of how to put all of this advice to work on refactoring a SerialDate class. All of the code smells and heuristics are then listed at the end for convenience.

I thoroughly enjoyed reading this book, and I'd like to share some of my favorite quotes to give a sense of the tone and style of the writing. I'll do my best to limit myself. Here's one from the first chapter, and it's actually a quote from Michael Feathers, author of Working Effectively with Legacy Code, which I have not yet read:
I could list all of the qualities that I notice in clean code, but there is one overarching quality that leads to all of them. Clean code always looks like it was written by someone who cares. There is nothing obvious that you can do to make it better. All of those things were thought about by the code’s author, and if you try to imagine improvements, you’re led back to where you are, sitting in appreciation of the code someone left for you… 

Even though this was written by a different author, it summarizes quite succinctly what clean code is. Of course, it says nothing about how to make your code clean, but it certainly gives you a good test for deciding when you've reached that goal. It's humbling to think about how much more work I have yet to do before I can claim to write code to that standard. I am constantly looking at my code and thinking that it could be expressed more clearly, concisely, or flexibly. There is a balance between those characteristics that I haven't quite mastered, yet. That is the focus of Clean Code, and much of the advice takes the form of how to write code well:

Writing software is like any other kind of writing. When you write a paper or an article, you get your thoughts down first, then you massage it until it reads well. The first draft might be clumsy and disorganized, so you wordsmith it and restructure it and refine it until it reads the way you want it to read.
I like the idea of programming being like writing. I used to be afraid to change code to make it read better, probably like many novice programmers. Getting a program to do what it was supposed to was hard enough. Why risk breaking it? But as I became more comfortable and confident with code, due in no small part to better coding practices like version control and unit testing, I began to realize the value of well-written code. Now I strive for it in all of my programming, and I'm not afraid to temporarily break code on the way to making it better. Progress is best made by focusing on writing better code instead of explaining bad code through comments:
The proper use of comments is to compensate for our failure to express ourself in code. Note that I used the word failure. I meant it. Comments are always failures. We must have them because we cannot always figure out how to express ourselves without them, but their use is not a cause for celebration.
Sometimes comments are a necessity, but they shouldn't be used as a crutch. Many times comments are a code smell. Maybe the code needs better variable names, or maybe it needs a good function wrapped around it, but if you're having to explain yourself in a comment, the code could probably be expressed more clearly than it is.
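
To make that concrete, here's a small hypothetical C# sketch (the Employee class and its names are invented for illustration, not taken from the book) of a comment being replaced by code that explains itself:

public class Employee
{
    public int Age { get; set; }
    public bool IsHourly { get; set; }

    // Callers used to need a comment at every call site:
    //   // check whether the employee is eligible for full benefits
    //   if (employee.IsHourly && employee.Age > 65) { ... }
    // Wrapping the condition in a well-named method makes that comment unnecessary.
    public bool IsEligibleForFullBenefits()
    {
        return IsHourly && Age > 65;
    }
}

The explanation now lives in the method name, where it can't drift out of sync with the code the way a comment can.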

In the chapter on writing classes, Uncle Bob goes through an example where he refactors a program that generates prime numbers, and explains why clean code is not necessarily smaller code:
It went from a little over one page to nearly three pages in length. There are several reasons for this growth. First, the refactored program uses longer, more descriptive variable names. Second, the refactored program uses function and class declarations as a way to add commentary to the code. Third, we used whitespace and formatting techniques to keep the program readable. 
Dense code can be very unclear, and improving it generally means making it longer. In this particular example, I'm not in total agreement that the end result is all that readable, either. Some of the method names are too long and don't accurately describe the essence of the algorithm, some classes would have been better off as collections of functions if the language used hadn't been Java, and I really prefer snake_case to CamelCase. Snake case is closer in appearance to normal writing, with underscores in place of spaces, while camel case smashes all of the words together and uses unnatural capitalization to try to mitigate the damage. The result is far less readable.
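
To illustrate the kind of tradeoff being discussed, here's my own quick C# sketch (not the book's Java code): longer, descriptive names and an extracted helper method stretch the program out, but each piece reads more easily on its own.

using System.Collections.Generic;

public static class PrimeGenerator
{
    public static List<int> GeneratePrimesUpTo(int maxValue)
    {
        var primes = new List<int>();
        for (int candidate = 2; candidate <= maxValue; candidate++)
        {
            if (!IsDivisibleByAKnownPrime(candidate, primes))
                primes.Add(candidate);
        }
        return primes;
    }

    private static bool IsDivisibleByAKnownPrime(int candidate, List<int> knownPrimes)
    {
        foreach (int prime in knownPrimes)
        {
            if (prime * prime > candidate)
                break;                 // a composite number always has a prime factor no larger than its square root
            if (candidate % prime == 0)
                return true;
        }
        return false;
    }
}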

Even though the example in the book made the code longer, I've seen plenty of overly verbose code that's also confusing and is greatly improved by making it shorter. The point is that code is more understandable if the pacing is right, just like writing, and the author should take pacing into consideration when writing code. Regardless of my disagreement with the result of this example, I fully agree with the points Uncle Bob was making, and it helped clarify my own thoughts on what clean code should be.
We want to structure our systems so that we muck with as little as possible when we update them with new or changed features. In an ideal system, we incorporate new features by extending the system, not by making modifications to existing code.
This gave me a chuckle, because I know how it can be taken to the extreme—never remove anything from the system, only add more. I know Uncle Bob is not advocating this, but I thought it was funny. I know design teams that are afraid to remove or change anything in a working system. We should always remember: 
Although software has its own physics, it is economically feasible to make radical change, if the structure of the software separates its concerns effectively.
Here software is being compared to buildings, and software has the distinction that it can be changed in place. Making changes to a building by ripping parts of it out and building in different structures is normally too expensive and not ideal. As long as the building is functional, the next iteration is done by constructing another building with any new ideas (home renovation aside). Software can be iterated in place to try out ideas, and it's often more expensive to rewrite everything from scratch than to change the existing software to incorporate those ideas.
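
Coming back to the idea of incorporating new features by extension, here's a minimal sketch (the report-formatting names are invented for illustration) of what that looks like in code: new behavior arrives as a new class, and the existing classes stay untouched.

public interface IReportFormatter
{
    string Format(string title, string body);
}

public class PlainTextFormatter : IReportFormatter
{
    public string Format(string title, string body) => title + "\n\n" + body;
}

// Adding HTML output later means adding this class, not editing the ones above.
public class HtmlFormatter : IReportFormatter
{
    public string Format(string title, string body) => $"<h1>{title}</h1><p>{body}</p>";
}

public class ReportPrinter
{
    private readonly IReportFormatter _formatter;

    public ReportPrinter(IReportFormatter formatter)
    {
        _formatter = formatter;
    }

    public string Print(string title, string body) => _formatter.Format(title, body);
}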

The whole book is filled with great advice on writing clean code. When I reviewed The Pragmatic Programmer and Code Complete, I wondered if the content of Code Complete could be sufficiently covered by the combination of The Pragmatic Programmer, Clean Code, and Refactoring in fewer pages. Without having read Refactoring, I now think that the other two books pretty much cover what Code Complete does in a more enjoyable, more engaging way. Clean Code should definitely be in every programmer's library.
 

Agile Principles, Patterns, and Practices in C#


Preface: This book was co-authored by Robert Martin's son Micah, but I'll generally refer to things as written by Uncle Bob since I'm not sure who wrote what.

This book (APPP for short) is made up of four sections that cover the Agile development process, Agile design with the SOLID principles, and two case studies: a non-trivial payroll software system and a package analysis of that payroll software. The book also covers what you need to know about UML diagrams and most, if not all, of the design patterns from Design Patterns. All in all, it packs a ton of useful information into a very readable package. It's the only book you really need to read on UML, and maybe all you need on design patterns as well. However, Design Patterns goes into more depth and rigor, so you may want to dig into it anyway.

Like Clean Code, APPP has a lot of great advice in it, and I really enjoyed the relaxed, personable writing style. I didn't agree with everything, though. For example, when describing pair programming Uncle Bob says:

The team works together in an open room. … Two chairs are in front of each workstation. The walls are covered with status charts, task breakdowns, Unified Modeling Language (UML) diagrams, and so on. The sound in this room is a buzz of conversation. Each pair is within earshot of every other pair. Each has the opportunity to hear when another is in trouble. Each knows the state of the other. The programmers are in a position to communicate intensely. One might think that this would be a distracting environment. … In fact, this doesn’t turn out to be the case. Moreover, instead of interfering with productivity, a University of Michigan study suggested, working in a “war room” environment may increase productivity by a factor of 2.

Other studies show programmers are more productive in offices with walls and doors. In fact, Peopleware, another well-respected book on software development, strongly advocates for private offices. So who's right? Currently, I'm working in an entirely open office, and I'm really enjoying it. We get some of the good elements, like more communication and help with problem solving, but other than that, it doesn't look much like Uncle Bob's depiction. If there were a "buzz of conversation" more than a few times a day, I know I wouldn't be able to get much done. It's not just distracting, but downright disruptive when you're in flow. In an environment that noisy, pair programming might be required just to brute-force forward progress. How is that going to work when you need to think deeply about a problem without interruption? I find myself deep in thought most of the time, and I appreciate that the office is generally very quiet.

While I generally disagreed with their ideas on the optimal office environment, I found myself agreeing much more with their take on UML diagrams:


So, yes, diagrams can be inappropriate at times. When are they inappropriate? When you create them without code to validate them and then intend to follow them.
Indeed, code can reveal constraints on a problem that diagrams easily gloss over. The authors get even more serious later on before describing the different UML diagrams in detail:
Before exploring the details of UML, we should talk about when and why we use it. Much harm has been done to software projects through the misuse and overuse of UML.
And then they bring down the hammer:
It is not at all clear that drawing UML diagrams is much cheaper than writing code. Indeed, many project teams have spent more on their diagrams than they have on the code itself. It is also not clear that throwing away a diagram is much cheaper than throwing away code. Therefore, it is not at all clear that creating a comprehensive UML design before writing code is a cost-effective option.
Yes! Basically, use UML when it is useful for clarifying ideas, exploring a system, or experimenting with options in a rough and basic way. If you're moving towards UML diagrams as documentation or, worse, executable UML (shudder), you've taken a wrong turn into architecture astronaut land. Uncle Bob also tries to give a wake-up call to anyone resisting design changes:
We might complain that the program was well designed for the original spec and that the subsequent changes to the spec caused the design to degrade. However, this ignores one of the most prominent facts in software development: Requirements always change!
Accept it already. The design is going to change, even the one you're working on now, especially the one you're working on now. Don't blame things you can't control. Find something you can control, blame that, and then fix it! It's your job. You were hired to be a problem solver, so solve some problems.

They also take some digs at software documents:
The value of a software document is inversely proportional to its size.
And too much up-front design:
If we tried to design the component-dependency structure before we had designed any classes, we would likely fail rather badly. We would not know much about common closure, we would be unaware of any reusable elements, and we would almost certainly create components that produced dependency cycles. Thus, the component-dependency structure grows and evolves with the logical design of the system.
This gets at the fact that it's very hard to know what a software system will look like until it's finished. Accept what you don't know, and don't try to plan beyond your horizon.

I'll wrap up with one more quotation that I don't entirely agree with:

That is the problem with conventions: they have to be continually resold to each developer. If the developer has not learned the convention or does not agree with it, the convention will be violated. And one violation can compromise the whole structure.

This idea is interesting when compared to the push for convention over configuration, especially in Ruby on Rails, a fairly successful framework that epitomizes convention. I can see where Uncle Bob is coming from, but there is so much grey area and context dependence here that general statements can't hold much water. I would say that conventions definitely have their uses in well-defined situations.

Overall, APPP was an excellent book. The opinions were sharp, the analysis was clear, and the explanations were thorough. The book filled a lot of holes in my understanding of Agile development, and I enjoyed all of the pragmatic coverage of Object-Oriented Design principles, UML diagrams, and design patterns. Working through this book will greatly help you in becoming a better programmer.

Two Sides of a Coin


These two books are nicely complementary. Clean Code deals with the format of the code at a fundamental level and how to make it readable and understandable. APPP tackles how to design a software system effectively and covers many of the Agile tools available to improve the process. They are both quite valuable for improving your skills as a programmer. Working through the extensive code examples and doing the work of understanding all of the practices presented will help you reach the next level in your programming endeavors.

The Cost of Abstraction

Programmers love abstractions, and they spend a significant amount of time thinking up and building new ones. Look at the everything-is-a-file abstraction in Unix, the IO stream abstraction in most languages' standard libraries, or the many types of abstractions that make up the various software patterns. Abstractions are everywhere in programming, and when they are useful they can really improve the utility of software. But abstractions don't come for free.

Abstraction, Yes, But at What Cost?


Abstractions have a number of costs during the course of their design and use. To make this discussion more concrete, let's assume we're talking about abstracting a communication interface with a higher-level set of messages on top of different lower-level protocols, like USB, SPI, or I2C. To use an abstraction, there is the initial cost of designing and building it to generalize multiple different protocols to use the same interface. There is the incremental cost of making each additional protocol fit the abstract interface and possibly extending the abstraction when a new protocol can't easily fit the mold. There is the maintenance cost of needing to understand the abstraction whenever new messages need to be added or things stop working the way they should.
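
Sketched in C#, the abstraction and its costs might look something like this (IMessageTransport and the rest of the names here are invented for this example):

public interface IMessageTransport
{
    void Send(byte[] message);
    byte[] Receive();
}

// Initial cost: designing the interface and making the first protocol fit it.
public class UsbTransport : IMessageTransport
{
    public void Send(byte[] message) { /* wrap the message in USB framing and transmit it */ }
    public byte[] Receive() { /* read a USB frame and unwrap it */ return new byte[0]; }
}

// Incremental cost: each new protocol has to be squeezed into the same interface,
// and sometimes the interface has to stretch to accommodate it.
public class SpiTransport : IMessageTransport
{
    public void Send(byte[] message) { /* clock the message out over SPI */ }
    public byte[] Receive() { /* clock a response in over SPI */ return new byte[0]; }
}

The maintenance cost is the ongoing one: every new message type and every debugging session now has to go through this extra layer.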

Then there's the hidden cost. What if the abstraction is never used more than once? What if you only ever use the interface over USB? The design would have most likely been much simpler without abstracting it to support multiple protocols, so every time you use it, you have to deal with the overhead of the abstraction without any of the benefits. If you design abstractions into your code by default, without thinking about the costs, you'll quickly build up an edifice of unnecessary overhead that will constantly slow you down. Uncle Bob talks about this drag on productivity when discussing the Factory pattern in Agile Principles, Patterns, and Practices in C#:
Factories are a complexity that can often be avoided, especially in the early phases of an evolving design. When they are used by default, factories dramatically increase the difficulty of extending the design. In order to create a new class, one may have to create as many as four new classes: the two interface classes that represent the new class and its factory and the two concrete classes that implement those interfaces.
Of course, the Factory pattern is an invaluable abstraction in the right context, just like a communication interface abstraction is invaluable if you need to send messages over multiple protocols. Useful abstractions make us much more efficient programmers. So how do we decide when the added flexibility and complexity of an abstraction outweighs the greater simplicity and rigidity of a direct approach?
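
To put the "four new classes" arithmetic from that quote in concrete terms, here's a hypothetical sketch (the shape names are mine, not from the book):

using System;

public interface IShape { double Area(); }                            // 1. interface for the new class

public class Circle : IShape                                          // 2. the concrete class itself
{
    private readonly double _radius;
    public Circle(double radius) { _radius = radius; }
    public double Area() { return Math.PI * _radius * _radius; }
}

public interface IShapeFactory { IShape MakeCircle(double radius); }  // 3. interface for its factory

public class ShapeFactory : IShapeFactory                             // 4. the concrete factory
{
    public IShape MakeCircle(double radius) { return new Circle(radius); }
}

Four types to introduce one new kind of shape is exactly the drag Uncle Bob is warning about when factories are used by default.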

Abstract Vs. Concrete


First, we should understand what parts of a software system should be abstract and which parts should be concrete. Uncle Bob has some good insight on this distinction as well:
Some software in the system should not change very often. This software represents the high-level architecture and design decisions. We don't want these architectural decisions to be volatile. Thus, the software that encapsulates the high-level design of the system should be put into stable components… . The instable components…should contain only the software that is likely to change.
Not only do we want the high-level architecture of the system to not change very often, but we want it to be resistant to change. We want it to be flexible under duress, and abstractions provide that flexibility. The details of the system are necessarily more rigid. They will break and need to be rebuilt when their requirements change.

The natural tendency when faced with the prospect of change is to try to design the system upfront to handle as much change as possible. Software patterns and polished examples do a disservice here because they encourage the line of thinking that more abstractions will save the day. They are presented in a finished form and explained so clearly that you begin to believe that if you can abstract away every potential point of change in the system, making changes and adding new features will be easy. The problem is, understanding and working with an overly abstract system is not easy. It requires a huge amount of cognitive overhead to get anything done.

Books and web tutorials abound with examples of all kinds of abstractions, but they have a common drawback. It's very hard to show in an example the design path that led to using any particular abstraction. In a real system, a well-designed abstraction is there because the system needed it to be there, not because the programmer thought it would be cool to stick it in or was guarding against imagined future change. Abstractions only work well in the right context, and the right context develops as the system develops.

Example code is normally presented after the system has fully developed. The system is normally small by necessity, making most abstractions look rather silly with not much code to support them, but for the purpose of the example, the system is already finished. In real-life programming, the system is not finished—because you're adding to it—and it is very unlikely that you can predict where the system will end up. Adding abstractions haphazardly ends up creating a lot of useless work.

Wait For It…Wait For It…NOW!


The right time to add an abstraction to a design is at the point when you start feeling the pain of not having it. Don't do it sooner because it's quite possible the extra work will be wasted and the extra complexity will be a burden. Don't wait too long because the whole time you're feeling the pain of not having the abstraction, more and more work is piling up that will have to be done to switch over to the abstraction.

Right when you start feeling pain is the perfect time to move to an abstraction. Both the risk of wasted effort and the amount of work to change the code will be minimized at this point. The risk of wasted effort will be small because now you are sure that you will actually use the abstraction. The amount of work will be small because so far you haven't duplicated any code and the code you do have should be easy to separate into the abstract and concrete parts. That is, of course, if you have been following good development practices.

In the case of our communication interface example, the right time to move to an abstract interface is when a second low-level protocol needs to be supported. If you only need to support USB, you don't need to have an abstract interface, but as soon as you also need to support SPI, an abstract interface will greatly reduce code duplication and make development easier. It will also be clear exactly what needs to be pulled into the abstract interface so it can be shared between the two protocols and what needs to be implemented separately in each protocol. That is the time when all of the relevant information is available and the need is most apparent.
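
In code, that concrete-first starting point might look like this (again, the names are invented); the important part is that the USB call sits behind one small method, which becomes the seam for extracting an interface like the IMessageTransport sketched earlier once SPI support is actually required:

public class DeviceMessenger
{
    public void SendStatusRequest()
    {
        byte[] frame = BuildStatusRequestFrame();
        WriteOverUsb(frame);               // direct USB call; no abstraction yet
    }

    private byte[] BuildStatusRequestFrame()
    {
        return new byte[] { 0x01 };        // placeholder framing for the example
    }

    private void WriteOverUsb(byte[] frame)
    {
        /* USB-specific transfer */
    }
}

When the second protocol shows up, WriteOverUsb moves behind the interface, the shared message framing stays where it is, and an SPI implementation slots in next to the USB one.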

Some people may balk at what appears to be extra work, changing code to design in an abstraction that arguably should have been designed in from the ground up. That extra work could have been avoided if the abstraction was there from the start, they say. Well, no, not really. The code was much easier to write without the abstraction, so that was less work initially. And it wasn't clear until later that the abstraction was actually needed. Furthermore, the abstraction would have taken about as much work to design in the first place, but it would have been done with less knowledge about the system, so it would likely have been done wrong. The abstraction would have had to be fixed when the second protocol was added anyway. Which way is really more work? I bet the abstraction-up-front approach would be.

Adding abstractions only when and where they're necessary allows a software system to evolve naturally, becoming the solution it needs to be without adding a lot of extra cruft. If the team accepts this process and allows it to happen instead of fighting against it, development will have a pleasant flow to it. Progress will be faster and require less effort when the system isn't overloaded with unnecessary, costly abstractions.

Sometimes Low-Tech is Better

I love high-tech gadgets as much as the next guy. I've got my Kindle DX. I've got my iPod Touch. I've got my Nissan Leaf. And I love 'em all, but sometimes high-tech gadgets end up being a solution in search of a problem. They don't always do what they need to do, and instead pile on superfluous features just to increase the tech factor.

I've come across two examples recently that have put this issue in stark relief for me. The first one is a simple outdoor thermometer. If you were to ask me to recommend a good outdoor thermometer a week ago, I would have probably come up with a digital thermometer with a wireless remote sensor that you put outside. You know, something like this:

Digital outdoor thermometer

This is the current best selling weather thermometer on Amazon, and it gets great reviews to boot. The problem with it is it does way too much. Why do I need to have a clock with the thermometer? I already have umpteen clocks in my house. There is always one within view no matter where I am. I do not need another one. I also don't need to know the indoor temperature of my house. I already know it based on the time of year. In the winter it's 65°F (yeah, we like it cold; put on a sweater, you wuss), and right now it's summer so it's exactly 76°F. I set the thermostat and it sets the temperature. I don't need another thermometer to tell me the same thing. This thermometer also takes four AA batteries. I hope they last a while.

You can, of course, take a step up from the basic digital thermometer with this:


It's your own personal weather station! I really don't know what else to say here, other than this thermometer doesn't solve any additional problems that my iPod Touch doesn't already take care of. I already have an iPod Touch, which does so many other things to boot, so why do I need this?

If you asked me today what my ideal outdoor thermometer is, I'd have to go with this:

Analog outdoor thermometer

Isn't it brilliant? Just slap it on your window, and you can easily read the outside temperature from across the room. No setup, no wireless, and no batteries. It's an elegant solution to the problem of knowing what the temperature is outside. When I get up in the morning and need to know if it's a shorts or a pants day, this thermometer is exactly what I want—no more, no less.

The second high-tech-is-worse example is a feature on a device that tries to be too high-tech for its own good. I got a sleek new Lenovo X1 Carbon ultrabook at work, and for the most part it's awesome. It's fast, light, and really cool. The only problem is the row of F keys at the top. They aren't there. In their place is a long, narrow touch panel. At the far left part of it, you can touch to cycle through three different sets of functions for multimedia, application shortcuts, and the normal F keys. Let me count the problems with this "feature."

Lenovo X1 Carbon keyboard and touch panel

First, I have to look down to touch the F key that I want to use. I use the F2, F3, and F5 keys all the time, so much, in fact, that I can use them without looking more easily than most of the number keys. Not so with the touch panel. I have to look to make sure that I touch the right part of the panel, and that slows me down.

Second, there is no tactile feedback. With a touch screen you get plenty of visual feedback from a well-designed interface, but with this touch panel, I get no feedback until I look up at the screen and see that the correct action took place. Granted, I'm just flicking my eyes down and back up, but it's irritating because I never had to do that with normal F keys and it takes me away from what I was trying to do.

Third, the ability to cycle through multiple sets of functions is not a good thing. I can't be sure of which set of functions is currently active without looking at them, and I have to pay attention as I cycle through them to stop on the one that I want. Some functions are also in multiple sets, so it's taking me a while to get comfortable with where all of the functions are. The old way of doing the multimedia functions, where you hold down an Fn key that activates sub-functions on the F keys and navigation keys, was much better because it was consistent and always visible.

Okay, I don't want to think about the touch panel anymore. It's a poorly thought-out feature on an otherwise excellent machine. The only reason Lenovo designed it in is because touch sensors are all the rage, and they thought it would be a slick high-tech addition to a new laptop. But while a touch screen may make sense on a laptop (especially to my three-year-old son, who keeps trying to touch the game icons on the taskbar of my laptop at home), this touch panel is useless.

I quickly plugged my trusty Logitech K740 keyboard into the X1 for use at my desk. This is my current favorite keyboard. The feel of the keys is awesome, it's sleek and attractive, and all of the keys are in the right place.

 Logitech K740 keyboard

This is a tried and true layout. There's nothing fancy going on here. It's not even wireless, just a USB cord, and it works flawlessly. The F keys are even grouped into sets of four so that I can easily feel that my fingers are in the right place to press the right keys without looking.

Neither of these products, the digital thermometer nor the X1's touch panel, is solving a problem that really needs solving. It makes me wonder about other high-tech products on the horizon, like the iWatch. I am hesitant to go against any new Apple product because they have shown again and again that there are huge markets for what they come up with, even if the market didn't exist before the product, but I'm still skeptical of the iWatch's usefulness. The tablet and smart phone market has shown that bigger screens are almost universally more desirable. The tiny screen size of a smart watch will severely limit what can be done with it. I'm going to have to see some seriously compelling use cases before trading in my much-loved Timex.

The lesson here (the iWatch's unproven success notwithstanding) is that when designing a product, think about the essence of the product that will make it useful. What was wrong with the way it was done before that can be improved? What problem is this high-tech gadget solving? If redesigning something with the latest tech doesn't significantly improve it, why do it at all?

A digital thermometer that reports temperature to a tenth of a degree both inside and out doesn't give me any more information about whether or not to put on a sweater when going outside. A touch panel with modes instead of dedicated keys doesn't help me type and interact with the computer any faster. It actually slows me down, degrading the primary usefulness of a keyboard. If you need a high-tech feature to make a product significantly better, then by all means, design it in. If you're adding in high-tech features just because they seem shiny and new, think long and hard about what you're doing because you could be wrecking the essence of your product. Sometimes low-tech is better.

What Limits Technological Progress?

Technology is advancing more rapidly than ever, right? Wrong. I'm not buying it. I've written before about how technological progress isn't nearly as overwhelming as it's made out to be. I'm not even sure if progress is still accelerating. I'm beginning to think it's experiencing a frictional force that's causing it to coast, more or less. Don't misunderstand, technology is still advancing. It's just not advancing at an ever increasing rate.

Measures of Technological Progress


One way to get a handle on technology's rate of change, albeit an imperfect one, is to look at GDP. Because GDP is a measure of how much we produce as a nation, and more advanced technology allows us to increase productivity (arguably, it's one of the primary purposes of new technology), increasing GDP is correlated with advancing technology. A great place to explore economic data is the St. Louis Fed site, FRED, and that's where I got the following graphs. Here's what US GDP looks like since 1947:

Graph of US GDP from 1947 to 2014

Ignoring the grey periods of recession, this looks like a classic exponential curve, which would correspond to GDP having a constant positive rate of growth, but looks can be deceiving. To see things a bit better, we can look at the logarithm of GDP:

Graph of Log GDP from 1947 to 2014

On a log graph, a constant growth rate shows up as a straight line, and in fact that is the case with GDP up to about 1980. But then the slope decreases between 1980 and 2000, right when computers and the internet were quickly coming into the market and making huge strides in performance. GDP's rate of change takes another step down after 2000, and another after 2010. The last step down is most likely due to the Great Recession that began in 2007, but that reason can only hold for so long. If the rate doesn't come back up, the real reason may be the same underlying trend of slower growth that's been creeping in for decades.
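
A quick derivation shows why a constant growth rate reads as a straight line on a log plot. If GDP grows at a constant rate r, then

\[
\mathrm{GDP}(t) = \mathrm{GDP}_0 \, e^{r t}
\qquad\Longrightarrow\qquad
\log \mathrm{GDP}(t) = \log \mathrm{GDP}_0 + r\,t ,
\]

so the slope of the log graph is the growth rate itself, and a shallower slope after 1980 means a lower growth rate, not just smaller absolute gains.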

As I said, GDP is an imperfect measure of technological progress. There are a lot of inputs to GDP that have nothing to do with higher productivity resulting from technology, such as population growth, women entering the workforce in the 1970s, and the inclusion of services over time that used to be done privately and are now done commercially (like lawn care, child care, and elder care). Total factor productivity is a measurement that tries to eliminate as many of these other sources of GDP growth as possible, and the result is a measure of the economy's technological change. Here is a graph of US TFP:

Graph of Log of Total Factor Productivity from 1950 to 2014

Clearly, the 1970s did not show much technological progress. I think I know what the problem was—disco. Well actually, it was most likely due to the rapid increase in the labor force because so many women were leaving the home and getting jobs. Technology didn't need to advance to keep GDP going strong during this time. Once the transition was mostly complete, technology had to pick up the slack again, but it didn't bring productivity back to the rate of the 1950s and early 1960s. TFP started to accelerate in the late 1990s with the Dot-Com bubble, but once that bubble burst, it slowed dramatically and has never really recovered.

Reasons for the Slowdown


Depending on how you look at it, we've been experiencing a technology slowdown (okay, actually a reduced technology speedup, but that doesn't roll off the tongue quite as nicely) for the past 15 to 50 years. Maybe the 1990s felt like such a rapid advancement because the 1970s were so darn slow. Lots of people have studied this change in growth throughout modern history and have put forth a variety of reasons for it.

Maybe economic policy is to blame, but then why has the slowdown been persistent over so many administrations with widely varying policy? Most economists come to the conclusion that administrations have a fairly indirect effect on GDP, and it's actually very difficult for a president to move the growth rate in either direction.

Maybe we need better education. Although I entirely agree with that, we're not the only country experiencing this phenomenon. Pretty much every advanced economy is in the same boat here, including those with better education systems than ours.

Maybe we're reaching a technological peak with diminishing returns in the future. Irreducible complexity could be playing a role here, too, meaning that more advanced technologies are too complex to be worth pursuing. I have a hard time believing that completely, though. We're a long way from Star Trek, and there are plenty of advanced technologies that we've been imagining for a long time that are within the realm of possibility. We just haven't made them a reality, yet.

Maybe we need a new, cheaper source of energy than coal and oil. Even without the problems of suffocating the planet and causing all kinds of health problems from pollution, coal and oil are simply getting too expensive to continue to depend on. Technological progress absolutely depends on a cheap, plentiful energy source. We desperately need a new one to keep progress going, but our resistance to adopting new energy sources is more of a symptom rather than a root cause of slowing technological progress.

I think there's a more fundamental reason for the slowdown, and it has to do with our ability to adopt new, advanced technologies. Up until the 1950s, our technology was limited primarily by how quickly we could communicate ideas that would lead to new inventions and then by how quickly we could mass produce those inventions and distribute them. With the technologies that were available prior to 1950, the whole process was rather slow. After 1950 we pretty much solved the communication, manufacturing, and distribution problems, so technology advanced as quickly as we could invent new things. After about 1980 we started hitting a new limit. The current generation of consumers could not adopt and make use of new technologies quickly enough to maintain the same rate of growth. We've reached a point where, to paraphrase Max Planck, technology advances one funeral at a time.

It's like air friction: the faster you try to move, the harder it pushes back. When a new technology is introduced, it takes a certain amount of time for a critical mass of people to be using it. Then it takes more time for those people to figure out how to use it effectively. Finally, people who grew up using the new technology and don't have all of the mental baggage of older generations will figure out new and interesting ways to use that technology. It's those younger generations that will more readily adopt the next technology that builds on the ones they grew up with, and most likely they'll also be the ones to create that new technology.

Progress at Generational Timescales


This whole process takes time on the order of a generation, or about two decades. That's about how long it took the internet to go from a blip on the radar for early adopters to a ubiquitous tool that almost everyone uses. Mobile computing is taking about the same amount of time. These technologies didn't take that long to spread because we lacked the capacity to get them out faster. If we had the political will to do it, we could have easily built up the infrastructure faster. What held things back was social friction from the people who were starting to use these new technologies.

Frictions to adoption appear in a number of forms. First, people generally resist paying for software, especially web apps and smart phone apps, because they don't feel like they're getting something physical. Someone might pay $5 every day for a big cup of Starbucks coffee, but that same person probably doesn't want to pay $5 for an app that he will use multiple times a day. Even if he bought a $5 app every day and only ended up using a fraction of them, the ones he used frequently would probably deliver more value than the overpriced coffee. Yet he doesn't want to pay for the apps because they're virtual. People didn't have such an aversion to paying for software when it came in a box. Future generations will probably be more comfortable paying for apps because they'll have grown up with that model, and they'll know that if they want good apps, they'll have to somehow support the companies that make them.

Another type of friction comes from people resisting a change to something they see as only marginally better. If you've spent decades running to stores to buy things, it may take you a while to realize how much easier it is to order things online. The same thing goes for looking up things like addresses, phone numbers, definitions, or the news. The older you are, the more likely you'll continue using the resources you've always used instead of going to the internet. I see a little of that tendency in myself. I have a fairly well curated RSS feed that I'm hanging on to. I haven't switched to Twitter or Facebook, although I hear that they work great as news feeds if you set them up right. I don't want to put in the work to make that happen, so my Twitter feed is a whole bunch of noise for me and I don't even have a Facebook account. I haven't adopted the newer technology because it doesn't seem worth it to me.

This generational friction also shows up in research. Studies have found that people have better reading comprehension when reading on paper instead of a screen and that they learn better by writing instead of typing. I'll bet the results of those studies will change when future generations read and learn more on tablets than paper. It's more a matter of what you've grown up using than some inherent advantage with paper.

Finally, the biggest friction that new technology is running up against is fear. Fear has always been a force against new technology, but now it is becoming the dominant force that is holding technology back. We saw this fear with the internet and people being afraid to use it for various reasons. Many people are still fearful of it, but they're getting older and more people are being born for whom the internet has always existed. It's not so scary for them.

Fear is the predominant force that EVs and solar energy will have to overcome. When I hear people opposing these technologies, I hear fear. How will I get around in an EV with limited range? How long will the battery last? How can we depend on solar electricity if it's unreliable? What if it's cloudy for a week? What about winter!? These are not insurmountable problems, and the faster the technologies are adopted, the faster the problems will be solved. But people have to get comfortable with the idea that these technologies are going to work and be much better than what they're currently using, and that takes time.

Autonomous vehicles (AVs) are an even better example of a technology that will have a lot of fear to overcome. The first deadly crash involving an AV, especially one where it's determined to be the AV's fault, will be devastating for the technology as a whole. Even though AVs will be much safer than human drivers, every malfunction and every accident will be tallied, scrutinized, and overblown. That's why we're not likely to see AVs suddenly come to the market fully formed. They'll gradually appear in a long series of steps instead. Automatic parking and cruise control with range detection will become more common. Cars will start developing the ability to stay in their lanes at highway speeds. More sensors and cameras will be added to assist drivers and cars will slowly take on the tedious tasks of driving. Eventually we'll find that we no longer need the steering wheel and pedals, but it's going to take much longer to get there than our technological progress alone would indicate.

Then there are the technologies that even I am afraid of adopting. Computer implants are something that I don't ever want to have in my body, if I can help it. Some implants already exist, depending on what you define as a computer implant. Pacemakers have advanced to the point where they could qualify, and I've heard of electrical implants with microprocessor control being used for chronic pain management. So far these implants are only done if necessary for survival, but it's only a matter of time before that shifts to augmentation. I already know I'll be one of the older generations that resists adopting this technology when it happens.

Looking back, it seems like the time period between the 1950s and 1980s was unique in our history. Because of advances in communication, manufacturing, and distribution, technological advances had almost zero resistance, so progress accelerated rapidly. Beyond 1980 progress started encountering a new resistance that began slowing the acceleration down. That resistance comes from older generations not adopting new technologies fast enough. It then falls on younger generations that are more comfortable with new technology—because it doesn't feel new to them—to take it, find new and better uses for it, and pursue the next advancements. Maybe younger generations will become more comfortable with change itself, reducing the friction on technology and allowing it to accelerate again. If that doesn't happen, we're likely experiencing the first effects of what will be a permanent friction, and technology will continue to advance at a more constant speed in the future instead of the constant acceleration that we were coming to expect.