Technical Debt Questions and Answers with Philippe Kruchten

The following questions and answers are from the Technical Debt: Don’t Go Bankrupt breakfast session with Philippe Kruchten, put on by Optimus Information Inc.

There is a 52-minute video of the presentation. You can also download the presentation slides, including the substantial reference list, from Prof. Kruchten’s site.

The November 2012 issue of IEEE Software is a special issue on technical debt, featuring an introduction by Prof. Kruchten.

Questions and Answers

Q: Are there any specific examples or case studies you can recommend?

Why don’t you grab my slides? Some of the papers and some of the URLs that I list are actually about people who have done case studies.

Q: Has anyone looked at the human factors of technical debt? What happens when you do take on a fair amount due to issues with expertise…I see a real issue with scalability and retention of resources as you incur more technical debt.

No. I don’t recall seeing anything like that. It would be nice to look at the people issue and how it affects morale.

Q: My issue is largely that I am working within a set context: set budget, set timeline, set expectations and set cognitive biases. You are largely talking about the software engineering side of it, but do you have some thoughts about the more business side of this?

[I]nvolve the business in realizing that there is technical debt and how technical debt comes into play: [the] decision that you make tomorrow will have some impact three months from now or three years from now.

We techies tend to hide or protect the business from the visibility of those issues, and then management is likely to say, “You did this to yourself. I didn’t know; yes, I was pushing you for more stuff, but I didn’t know the consequences.”

I think making people more aware with some concrete examples that are more relevant to your context might help. You can diminish the impact of cognitive biases by having more accurate, pertinent information that people can relate to.

Q: Right, but being the cheapest up front is a pretty strong thing to fight against.

Don’t lose memory of it. Say that we are making these decisions because of these constraints, and they have consequences. Let’s write these consequences [down] and make them visible rather than hide them, pretend that they didn’t happen, and forget.
After you do the release in six months, the situation will be the same. It will be, “Oh, we are pressed for time and we need this [feature] early.”

“Yes, but we have all of these things that we have to do first before rushing into that [feature]. You agreed to [this].”

Having visibility and agreement up front and writing it down might be useful: keeping things visible as we go along rather than hiding them. The main problem is hiding [technical debt], and sometimes even forgetting it.

Q: I think there’s an education component to it as well, because what you just described is making people aware of deliberate technical debt, but there is also inadvertent technical debt, which you mentioned earlier in your slides. And I think you have to educate people who don’t know about it: that this is possibly a problem down the line and we should account for it.

The McConnell Type 1 [technical debt], small, scattered, low-quality code, is easier. There are a lot of tools now that can do static analysis.

It’s a matter of [rolling] up your sleeves, [putting] the right tool in place and [knocking] down some poor-quality code: this algorithm is too complex, let’s break it up; this needs to be organized a little bit.

These tasks are relatively easy to do, relatively easy to spread over time, and they help educate people to do a better job over time.
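As an illustration of that kind of small, scattered cleanup, here is a minimal before-and-after sketch; the function and its pricing rules are invented for illustration, not taken from the talk. An over-complex conditional is broken into small, named pieces that a static-analysis tool would no longer flag for complexity.

```python
# Hypothetical "before": one function handling every case in nested conditionals.
def shipping_cost(order):
    if order["country"] == "CA":
        if order["weight_kg"] > 20:
            return 25.00 if order["express"] else 15.00
        return 12.00 if order["express"] else 8.00
    if order["weight_kg"] > 20:
        return 40.00 if order["express"] else 30.00
    return 22.00 if order["express"] else 18.00


# "After": the same behaviour, split into small pieces with descriptive names.
def _base_rate(country: str, weight_kg: float) -> float:
    domestic = country == "CA"
    heavy = weight_kg > 20
    if domestic:
        return 15.00 if heavy else 8.00
    return 30.00 if heavy else 18.00


def _express_surcharge(weight_kg: float) -> float:
    return 10.00 if weight_kg > 20 else 4.00


def shipping_cost_refactored(order: dict) -> float:
    rate = _base_rate(order["country"], order["weight_kg"])
    if order["express"]:
        rate += _express_surcharge(order["weight_kg"])
    return rate
```

Each refactoring of this size is independently safe to schedule, which is what makes this class of debt relatively easy to spread over time.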

It’s more the massive chunk of technical debt that people tend to live with. They just say “What can we do? It’s just too big to refactor. We have to live with it now.”

Having objective facts that are not taken from somebody’s blog or presentation, but are from your own context: gathering data about code quality; how difficult the code is to evolve; how your velocity has evolved over time; having some metrics that are particular to your environment and making those visible. And trying to ask why, and why, and why.

Trying to get at the root cause might be a first approach. It’s…a matter of information and education.
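As one hedged sketch of what “metrics that are particular to your environment” could look like, the script below is my own illustration, not something from the talk. It assumes a Python codebase and Python 3.8+ (for `end_lineno`), and uses an arbitrary 50-line threshold; it reports one crude local metric, function length, that can be re-run at each release so the trend becomes visible.

```python
"""Minimal sketch: report average function length and the count of long
functions for a source tree, as one locally gathered code-quality metric."""
import ast
import pathlib
import sys

LONG_FUNCTION_LINES = 50  # arbitrary placeholder threshold; tune to your context


def function_lengths(source: str):
    """Yield (name, length-in-lines) for every function in a module."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            yield node.name, node.end_lineno - node.lineno + 1


def report(root: str) -> None:
    lengths = []
    for path in pathlib.Path(root).rglob("*.py"):
        try:
            lengths.extend(function_lengths(path.read_text(encoding="utf-8")))
        except SyntaxError:
            continue  # skip files that do not parse
    if not lengths:
        print("no functions found")
        return
    long_ones = [(name, n) for name, n in lengths if n > LONG_FUNCTION_LINES]
    average = sum(n for _, n in lengths) / len(lengths)
    print(f"functions: {len(lengths)}, average length: {average:.1f} lines")
    print(f"functions over {LONG_FUNCTION_LINES} lines: {len(long_ones)}")


if __name__ == "__main__":
    report(sys.argv[1] if len(sys.argv) > 1 else ".")
```

Run against the source tree of each release, even a rough number like this turns “the code is getting harder to evolve” from an anecdote into data people can relate to.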

Upcoming Events

We have a few more similar events planned for the near future on the following topics:

  • HTML5 vs. Native Apps for Mobile (Early Jan.)
  • Performance Testing
  • Testing Infrastructure in the Cloud
  • Enterprise Software Implementation – Avoid Surprises
  • Software Outsourcing – Do’s and Don’ts

If there is a topic you would like to see covered, be sure to contact us with your ideas, or connect with us on Twitter, Facebook or LinkedIn.