Monday, October 01, 2007

How To Tell If You're Doing Agile Right

Agile software development is about asking a question, taking a best guess at the answer, acting on that assumption, and then asking a new question about what happened as a result. It's about gathering feedback and acting on it in a continuous cycle. That's it. Iterations and retrospectives are the core of any shift towards agility. They are axiomatic; everything else can be derived.

All the popular agile practices like test driven development, continuous integration, refactoring, short iterations, and daily stand-up meetings are nothing more than some of the answers a lot of people arrived at after asking some interesting questions. (Edit: That might sound like I'm discounting these practices. I'm not. I've found them all to be very useful tools.)

If the only question you're asking is "are we doing Agile right?", then you're probably not. There's no magic formula. Some practices are helpful across a wide variety of contexts, but there's no guarantee a given practice will be right for your situation.

If you're asking questions like these:
  • What is our biggest problem? What is the most important question we can ask ourselves right now?
  • What are the expectations of each stakeholder group? What are our stakeholder groups?
  • What are this project's risks? Can we reduce any of them? How can we force ourselves to tackle them early? What mitigation strategies can we put in place? What can we do to help ourselves notice new risks as the project proceeds?
  • Is this project worth undertaking? What is the expected value? Do we know the TCO of the project? Do we know the opportunity cost?
  • Why did we miss that deadline? Did anyone realize that would happen and not say anything? Did everyone realize that would happen and not say anything? What was the cost of missing the deadline, and what was the cost of finding out we would miss it so late? Would we be in a better position if we faced reality sooner? How can we help ourselves do that next time?
  • How much is employee turnover really costing us? Can we do anything to reduce this cost? Can we do anything to reduce employee turnover? Is it cost-efficient to do so?
  • How did that bug get into production? What changes can we make to prevent similar bugs from reaching production in the future? What will the side effects of those changes be?
  • How can we improve the way we work? What do we mean by "improve"?
  • Are users actually using this feature? How are they using it? What do they think about it?
  • Why are we adding this feature? What is the feature's TCO? What value will it deliver? What risks are involved with adding this feature?
  • What have other teams with this problem done? Do any of their solutions make sense in our context? What makes us similar/different?
  • How do you unit test a user interface? Is it worth it (for us)?
  • Why are we producing document x? What is its TCO? What is its value? Who's the audience? Do they have any ideas about increasing the value of the document?
  • Is this particular "best practice" or "agile" practice helping us work more effectively? Are we missing something important, or is it just not useful in our context?
  • If we don't know the answer now, how can we find out quickly?
... and if you're asking them collectively, as a team, in a blame-free manner, then it's likely you're on the right track.

Of course I'm not prescribing that you ask exactly and only these questions... they're just the kind of questions that I find lead to positive changes and other interesting questions.

I'm sure you are all thinking of other questions that should be on this list. Please share them in the comments!

Monday, July 16, 2007

An Agile Bookshelf: 10 Must-Read Books

I did my best to select books I believe have something to offer for both those already doing agile software development and those who are curious about agile but remain skeptical. I'd like to think that even if you think "Agile" is a useless buzzword, you'll find something useful in each of these books. Quite simply, they are books that have changed the way I think about software development and how I practice it.

Without further ado (and in no particular order):

Agile Software Development: Principles, Patterns, and Practices
by Robert C. Martin

This book is more about good object-oriented design than it is about agile development practices. In fact, Martin is done with his overview of agile practices by page 84 (out of a total of 524 pages.) The rest of the book focuses on fundamental principles of OO design (using the definition of design from Jack Reeves' classic essay "What Is Software Design?".) This book confirmed my suspicions that there was a strong relationship between good OO design and agile practices. Good OO design gives your software the flexibility it needs to enable rapid response to change. Agile practices (particularly Test Driven Development) give you the feedback you need to keep your design clean... if you have the discipline to do it.

This book is best suited to developers who already have a few years of practical experience and a good understanding of the basics of object orientation. It will solidify your understanding of good OO practice, provide a few new insights, and give you a vocabulary to talk about both. I consider it a must-read for any serious professional working in an OO language.

Working Effectively With Legacy Code
by Michael Feathers

"Is your code easy to change? Can you get nearly instantaneous feedback when you do change it? Do you understand it? If the answer to any of these questions is no, you have legacy code, and it is draining time and money away from your development efforts."

So begins the back cover of this book. Feathers throws out the traditional (and vague) definition of legacy code and replaces it with this one: code without tests. Without tests, you don't know whether your software is getting better or worse when you make changes. Without rapid feedback after changes, quality suffers and time is wasted in tracking down defects further down the line (when it's much more expensive).

I meet lots of people who are curious about unit testing, but don't know where to get started with their current (untested) project. Feathers explains a number of techniques to get your existing code base under test as safely and painlessly as possible. If you're interested in developer testing but feeling overwhelmed, or just wondering how to tame a big, messy code base, this is the book for you.

Test-Driven Development: By Example
by Kent Beck

When you get into the habit of writing a test before writing the code that will make it pass, some very interesting things start to happen. Beyond the benefits you get from unit testing after coding (a growing suite of regression tests, confidence your code works, etc.) you get several new benefits: better design (because you're forced to pay attention to it earlier in the process), tests that are more readable and less fragile, and a lot less time spent manually testing and debugging.

You also start spending a lot less time writing unnecessary code, since your tests are derived from actual requirements, and you don't write code unless you need it to make a test pass. The tests act as unambiguous, executable requirements. You've connected the dots between vague requirements and working code.
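The red-green rhythm Beck teaches can be sketched in miniature. (The Money example below is my own hypothetical sketch, loosely in the spirit of the book's running example — it is not code from the book.)

```java
// Step 1 (red): write the test first. Before Money exists, this won't
// even compile -- and that failure is the first piece of feedback.
class MoneyTest {
    static void testMultiplication() {
        Money five = new Money(5);
        if (five.times(2).amount() != 10) throw new AssertionError();
        if (five.times(3).amount() != 15) throw new AssertionError();
    }
}

// Step 2 (green): write just enough production code to make it pass.
class Money {
    private final int amount;
    Money(int amount) { this.amount = amount; }
    Money times(int multiplier) { return new Money(this.amount * multiplier); }
    int amount() { return amount; }
}

// Step 3 (refactor): with the test green, clean up the design safely,
// re-running the test after every change.
```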

This book is a hands-on tutorial on how to develop software test-first. If you're curious about test-first development, this is a great book to get you started.

Refactoring: Improving the Design of Existing Code
by Martin Fowler

No one gets design perfect on their first guess. You will always have a firmer understanding of the needs of your design after getting feedback from reality than you did at the beginning of a project. Besides that, the needs placed on the design will almost always change over time, so even if you guessed right, your design is going to change. If you don't make an explicit effort to change it, it will degrade over time as hacks are put in place to work around the current design in order to get new features in place.

The best way to change design is to consciously evolve it over time. Keep it clean; improve it a little bit every time you add a feature, and you won't hit that dead end where it's impossible to add new features and a total rewrite is the only option. This book shows you how to do that. It explains the link between good tests and refactoring, lists the 'code smells' you should keep an eye out for that may be a sign that your design isn't optimal, and gives some advice about which refactorings are most appropriate in each situation. This book is a must-read for anyone who is interested in software maintainability.

Agile Estimating and Planning
by Mike Cohn

This book takes the guesswork out of software estimation. In it, Cohn outlines an empirical estimation process; that is, one based on evidence rather than predictions. Rather than just take a best guess at the beginning of a project (when we have the least information of any point in the project's lifetime), he uses early measurements of the work capacity (or 'velocity', in agile terms) of the team to determine the real pace at which the team can deliver working software. This gives management critical information early (allowing them to adjust scope to meet a given deadline, or adjust the deadline to meet a given scope). This puts an end to the trap customers, management, and developers so often find themselves in: not realizing until very near the deadline that you're way off schedule.
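The core arithmetic behind Cohn's approach is simple enough to sketch. (The numbers and names below are my own hypothetical illustration, not taken from the book.)

```java
// Empirical estimation: derive velocity from iterations already completed,
// then project how many iterations the remaining scope implies.
class VelocityForecast {
    // Average story points delivered per iteration so far.
    static double averageVelocity(int[] pointsPerIteration) {
        int total = 0;
        for (int points : pointsPerIteration) total += points;
        return (double) total / pointsPerIteration.length;
    }

    // How many more iterations the remaining backlog suggests.
    static int iterationsRemaining(int remainingPoints, double velocity) {
        return (int) Math.ceil(remainingPoints / velocity);
    }
}
```

A team that finished 18, 22, and 20 story points in its first three iterations has a measured velocity of 20, so 120 remaining points suggests about six more iterations — information the customer gets early enough to act on, by cutting scope or moving the deadline.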

If you've ever been surprised by a schedule slip, or want more predictability in your software development process, read this book.

The Goal
by Eliyahu Goldratt

This is the business novel about The Theory of Constraints, a set of principles for continuously improving operational efficiency. Although the specific example in the book is industrial, the principles also apply to software development.

The book is a fun and easy read, but contains a number of dramatic insights into what productivity really is, how it can be related to everyday tasks, and how it can be improved. I don't want to elaborate and ruin the story for you. I'll just say this: I can practically guarantee you will discover unnoticed inefficiencies in your software development process after reading this book.

Fearless Change: Patterns for Introducing New Ideas
by Mary Lynn Manns and Linda Rising

Very few software professionals work alone. So chances are, if you have a brilliant idea about how to work more effectively (agile or otherwise), you're going to need to convince others to do it too. Usually, the higher-leverage the idea is, the more people you need to convince. Unfortunately, people often instinctively resist change, and most of us don't have a natural gift for making people more comfortable with it. That's where this book comes in.

This book is about introducing new ideas to established organizations. The first half of the book explains the nature of the task, your role as a change agent, and the challenges you'll face. The second half is a catalog of patterns you can use to face those challenges.

Introducing new ideas to any large organization is never easy. If you've ever thought of a more effective way to work, but had trouble convincing others to adopt it, this book can help.

Waltzing with Bears: Managing Risk on Software Projects
by Tom DeMarco and Timothy Lister

Risk management is possibly the only element of management that cannot be delegated; at the very least you have to manage the risks of delegating risk management. Neglecting sound risk management principles and practices can be an extremely costly mistake. Unfortunately, risk management is done extremely poorly, or not done at all, in many development organizations.

In this book, DeMarco and Lister explain why risk management is so important, why it is often done so poorly, and how to do it effectively. It's short and conversational, but packed with valuable ideas. I recommend this to anyone who doesn't want another late, over-budget, or otherwise failed project on their resume.

Crucial Conversations
by Kerry Patterson, Joseph Grenny, Ron McMillan, Al Switzler

A 'crucial conversation' is any conversation in which opinions vary, stakes are high, and emotions run strong. These are the most important conversations we have: they can transform relationships, for good or for bad, and their effects are long-lasting. Unfortunately, most of us handle crucial conversations poorly.

In this book, the authors provide tools for getting better results from crucial conversations. They explain how to recognize when a conversation turns crucial, how to focus on your real goals (rather than your transient emotional goals, like your sudden urge to prove the other person wrong), how to curb your own defensive/aggressive reactions, how to share information without making others defensive, and how to create conditions of safety that allow the other party to share information freely.

Unless you work by yourself, you need these skills to work effectively. (And they're handy skills to have for your life outside work, too.)

Peopleware: Productive Projects and Teams
by Tom DeMarco and Timothy Lister

This book questions the application of standard management theory to knowledge workers. The authors show how this misapplication causes many of the problems perennially faced by managers of software professionals, and outline some new management principles specific to knowledge workers, backed by years of research from their work with dozens of software organizations. Even better, they point out where management is missing out on opportunities not just to avoid problems, but to actively develop highly effective, productive teams.

Peopleware is a short read full of interesting ideas. Chances are you'll strongly disagree with some of the ideas in this book, consider some of the others common sense, and pick up a half dozen new insights along the way. I'd recommend this not just to managers at all levels, but to anyone who works on a team as well.

Sunday, June 24, 2007

How to Deal With Frustration

I just wrapped up a contract with a large corporate client. As is the case with almost all large organizations (even those attempting to transition to agile practices), there was no shortage of wasteful corporate policies and procedures.

Nothing frustrates lean-thinking people quite the way inefficient or wasteful processes do. We all want our expertise and abilities to be leveraged to create and deliver as much value as possible. When wasteful processes prevent this, we get mad. Not because we're inherently negative people, but because striving for greatness starts with refusing to be mediocre. We try to fix the broken process. If we don't immediately succeed, we bitch.

When you get three or four passionate lean-thinkers together, their frustrations can feed off one another and become a real problem. Until recently, I didn't realize that. I might occasionally point out that we were wasting time and should get back to work, but that was about it.

Then I read My Start-Up Life by Ben Casnocha. In a sidebar about time management, Ben says:
"I believe time management is important to make each day rewarding and productive. But instead of focusing on how I can save a minute here, a minute there, I think about a different metric: energy."

He goes on to quote The Power of Full Engagement, by Jim Loehr and Tony Schwartz:

"Energy is the fundamental currency of high performance. Capacity is a function of one's ability to expend and recover energy. Every thought, feeling, and action has an energy consequence. Energy is the most important individual and organizational resource."

That distinction between time and energy rings true with me. When I'm energetic, I'm able to work smarter and think faster, and I can get a lot more done. Time is not the limiting factor of my productivity; energy is.

When we focus on our frustration with problems that are not immediately or easily solvable, there is a bigger issue at stake than wasted time: wasted energy. Energy is the leverage that allows us to make effective use of time. When we allow ourselves to become frustrated at our inability to prevent waste, we deplete our energy. Our frustration with waste is often more wasteful than the waste we're frustrated with in the first place.

I'm not saying you should just put up with all the wasteful policies you're asked to comply with. But avoid the bitch sessions. Be aware of how focusing on your frustration affects your energy level, and act accordingly.

Monday, June 11, 2007

Are Your Unit Tests Too DRY?

While tackling some long-overdue refactoring of a particularly tangled part of our current project, Dmitri and I were faced with the task of updating some long and unreadable unit tests that we had just caused to fail.

We had made a classic mistake earlier in the project and neglected technical issues like dependency management in a rush to be productive and show our customer some quick progress. We ended up with poorly structured and highly-interdependent classes. This made writing simple tests impossible. Trying to test a single piece of functionality often required 20 to 30 lines of setup code. As I mentioned last time, this resulted in a downward spiral: poorly structured code made it hard to write good tests, and a lack of good tests made it hard to refactor the code.

So here we were. We had struggled for several hours to decipher some tangled code and find a good starting point for the refactoring (what Michael Feathers calls a seam), we had made a small change, and we'd run the unit tests. Now we were staring at a list of 4 failing tests, and not only did we not know why they were failing, we had no idea what they were trying to test and how they were testing it.

One of the problems with the tests was interesting. Normally, I am a huge proponent of DRY programming. If there is no good reason for duplicating data or logic, don't duplicate it -- and there is hardly ever a good reason. However, we had just stumbled upon one of the good reasons.

Unit tests should function as documentation. You should be able to read a unit test in 10-15 seconds and know what requirement it is testing, and how client code should interact with the code that implements that requirement. If our tests had had this property, we would have saved ourselves the hours of trying to figure out what the code was doing, and the hour or so it took us to figure out what the tests were doing, why they were failing, and how to fix them.

If you feel the urge to DRY up your tests, stop for a minute and question whether you are working at the best point of leverage available to you.

If DRYing up the tests seems like the only way to set up tests in under 30 lines of code, your real problem is that the code you are testing is poorly isolated. This is probably your problem if you find yourself writing methods named setupTypicalUserAccount(), setupOtherThingyForTest(), etc.

If DRYing up the tests seems like the only way to make the tests readable, the code under test probably doesn't have a useful or meaningful interface. For example, you may be tempted to create a factory method in your test that calls the three or four methods necessary to put your target object into a valid state. Put the factory method in the target class instead. In general, if you can remove duplication in tests by making the interface of the code under test more humane, do it. If code is difficult for your tests to use, it will be difficult for other client code to use too.
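For example (a hypothetical Account sketch of my own, not from any particular codebase): instead of hiding the awkward multi-step construction in a test helper, promote it to the class under test.

```java
class Account {
    private String owner;
    private boolean active;

    void setOwner(String owner) { this.owner = owner; }
    void activate() { this.active = true; }
    String owner() { return owner; }
    boolean isActive() { return active; }

    // This logic used to live in a setupTypicalUserAccount() helper
    // buried in the test class. As a named factory on the target class,
    // every client -- not just the tests -- gets the humane interface.
    static Account activeAccountFor(String owner) {
        Account account = new Account();
        account.setOwner(owner);
        account.activate();
        return account;
    }
}
```

The tests now read as one line of setup, and the awkwardness that prompted the helper has been fixed at its source.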

Of course, sometimes DRYing up tests is a good idea. If you want to replace repetitive lists of assertions with your own assertion methods, go for it. There's nothing wrong with an assertUserAccountIsValid(Account account) method that contains half a dozen assertions. There are lots of times when DRYing up your tests really is a good idea. Just make sure you've got a good reason and are not running on instinct.
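A sketch of what such an assertion method might look like (the Account fields here are hypothetical, for illustration only):

```java
class Account {
    private final String owner;
    private final int balance;
    Account(String owner, int balance) { this.owner = owner; this.balance = balance; }
    String owner() { return owner; }
    int balance() { return balance; }
}

class AccountAssertions {
    // One intention-revealing name in place of a repetitive list of
    // assertions scattered across many tests.
    static void assertUserAccountIsValid(Account account) {
        if (account == null) throw new AssertionError("account must exist");
        if (account.owner() == null) throw new AssertionError("account must have an owner");
        if (account.balance() < 0) throw new AssertionError("balance must not be negative");
    }
}
```

Each test that calls it stays readable at a glance, and when the definition of "valid" changes, there's exactly one place to update.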

Think of your tests the same way you'd think of a user manual or other documentation. You wouldn't want to repeat yourself on every second page, but you also wouldn't want to eliminate duplication entirely from a user manual. You'd drive your users nuts. In the case of a user manual, readability trumps maintainability, and the same goes for your tests. (Of course, you want both readability and maintainability if you can get it.)

If you found it impossible to write a clear, concise user manual, you might start worrying about the user interface of the program itself. If you find it impossible to write clear, concise tests, start worrying about the programming interface of the program itself. That's probably your best point of leverage.

Wednesday, April 04, 2007

Why You Won't Fix It Later

We've all been there. The deadline is looming, everything is behind schedule, and you're in a rush to finish the FooBar module. You're puzzling over one last glitch. You know how to fix it, but it looks like it will take a minor redesign of the module... probably 4-5 hours of work. You just don't have that kind of time.

Suddenly a clever idea strikes you. Hmmm... it just might work. You realize deep down it's not the right way to do it. Maybe it means adding some temporal/implicit dependencies. ("as long as no one starts calling foo() before initBar(), everything should keep working.")

Maybe it means throwing in a magic string that will only work until January 3 next year. ("No problem, I'll just come back to this code after the deadline. We shouldn't be too busy then.")

Maybe it means breaking the design and making the code untestable. ("Well, it would be nice to have automated tests around this, but it seems to be working. Hopefully no one makes any changes to this code before the deadline.")

Maybe it means living with intermittent bugs. ("Hmmm. The system only times out 8% of the time. We need to figure out why before we go into production, but that should be good enough for testing.")

Maybe it means removing one bug and introducing another one. ("Well, at least we can submit the page now. Hopefully none of the users double-clicks the submit button until I've had a chance to revisit the code after the deadline. I'll fix it later.")
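The first of those workarounds — the unwritten rule that initBar() must run before foo() — is a temporal dependency, and it can usually be eliminated rather than documented. A hypothetical sketch of the fix:

```java
// Before: every caller must remember the implicit contract
// "call initBar() before foo()". After: foo() enforces its own
// precondition, so the contract can no longer be broken.
class Widget {
    private String bar;                 // null until initialized

    private void initBar() {
        if (bar == null) bar = "ready"; // idempotent: safe to call repeatedly
    }

    String foo() {
        initBar();                      // dependency made explicit and automatic
        return bar.toUpperCase();
    }
}
```

The five extra minutes this takes is almost always cheaper than the NullPointerException someone else debugs six months from now.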

That's the magic word. Later. It makes a great warning signal that you may be heading down a dangerous path. When you catch yourself thinking "I'll fix it later", stop for a minute. You're feeling that little twang of guilt for a reason (even if it's masked by the little ego boost you get from coming up with such a clever workaround). Think about the real consequences of this decision. Will you really get back to it later? What will happen if you don't? What are the risks you're introducing? Ask another developer for an opinion. Ask the customer for an opinion (if you can phrase it in customer language). Think a little longer about other solutions.

There are several popular variants of "I'll fix it later":
  • I'll fix that bug later.
  • I'll verify with the customer that I've built what they actually need later.
  • I'll write unit tests later.
  • I'll remove the fragility from the unit tests later.
  • I'll make the unit tests readable later.
  • I'll make the unit tests fast later.
  • I'll integration test later.
  • I'll usability test later.
  • I'll remove that copy/paste duplication later.
  • I'll bounce my idea/design/code off another developer later.
  • I'll remove that workaround/hot fix/complete hack later.
  • I'll make the code readable/maintainable later.
  • I'll worry about performance/reliability later.
The problem is, we usually don't get around to doing any of those things we plan to do "later". After dealing with the consequences of "I'll fix it later" a few too many times, my friend Dave LeBlanc coined LeBlanc's Law:

"Later equals Never."

Why is this? There are a few reasons that I've noticed:
  1. When you cut corners in order to deliver on time, you're giving management and your customer a false sense of how fast you can reliably deliver. Agile teams use the term 'velocity' to describe the estimated amount of customer value they can deliver per iteration. If there is still work left to be done, you are effectively lying to your customer about how fast you can deliver value. Since your customer thinks you can deliver more than you really can, you will be overloaded with work again next time. You will start accumulating technical debt. There is no easy cure for technical debt (the most common cure being a complete re-write), so prevention is the best medicine. The best way to prevent technical debt from accumulating is to establish realistic expectations about how fast you can effectively work.

  2. When you skimp on automated tests, and even when you write tests but don't ensure they are readable, atomic, and easily-maintained unit tests, you limit your ability to effectively refactor. When you can't easily refactor, it begins to get hard to write readable, atomic, and easily-maintained unit tests. Not only that -- because it's harder to evolve your design, you will face a stronger temptation to fix bugs with workarounds and hacks that will come back to bite you later. You will spend more time debugging and bug fixing, leaving you less time to write tests and refactor. It's a downward spiral that results in reduced velocity.
Agile developers often work with what they call a "definition of done". You are not finished with a feature until it meets the definition of done. It acts as a checklist or set of heuristics that help you realize (and admit) when you have more work to do. A definition of done might include things like these:
  • unit tested
  • verified by customer & customer tests
  • usability tested
  • integrated
  • integration tested
  • documented
  • performance tested
  • peer reviewed (via pair programming or some other mechanism)
  • refactored, readable, duplication-free
  • bug-free
Of course, when you first introduce this idea, your definition won't be this comprehensive. Start small (coded, unit tested, peer reviewed, and refactored makes a good start). Every few iterations, if you are successfully meeting your current definition, add something to it. Eventually you will have a pretty comprehensive definition of done, and each time you finish a feature, you'll have a lot less stuff left over to finish "later".

Do you have any other "I'll fix it later" variants to add to my list? Stories about how planning on fixing something later came back to haunt you, or how adhering to a definition of done saved a lot of potential pain? When is it ok to "fix it later"? Where's the fine line between LeBlanc's Law and YAGNI? Please share your thoughts in the comments section!

Monday, March 05, 2007

Traffic Jams and Software Development

I was sitting in construction-induced gridlock a few days ago when I had an epiphany. In retrospect, I can't believe I made the same mistake that everyone else makes. I can't believe I considered traffic jams a bad thing. Traffic jams are awesome!

On the surface, this isn't obvious. But once you begin applying best practices of software project management, it becomes clear that a traffic jam is a sign of a healthy freeway. Let me explain.

First of all, I noticed that the freeway was fully utilized, a situation I have rarely seen in my driving history. As anyone who knows anything about cost accounting can tell you, when your resources are not fully utilized, the cost per unit processed goes up. It really seems irresponsible of the government to build as many freeways as they do, considering that most of the time they are underutilized. What a waste of taxpayer money! Why build a 4-lane highway that generally only ends up half full of cars? They could have built a 2-lane highway instead and saved half the money! (The traffic jam I was in when I realized this occurred when 2 lanes were closed for construction; luckily I didn't miss this opportunity for insight.)

Not only that, I also realized all the drivers were working really hard. I could tell we were working hard because we had been there so long. We all know how important it is to have a hard-working team, and obviously time spent at work is the best measurement of how hard we're working. (Otherwise, why would managers be insisting on 80-hour work weeks at crunch time?) Knowing this, it was really uplifting to realize I was surrounded by such a committed commute-force.

If only I was surrounded with such committed individuals at work, and we managed to keep all the resources fully utilized, I'm sure we'd be much more successful. It's a shame.

There's just one thing bothering me. I still can't figure out why I used to find traffic jams so much more frustrating than software development. Strange.

Monday, February 19, 2007

A Source of Conflict During Agile Adoption

Many organizations try to adopt agile development by plugging in a set of engineering practices without changing the organization's management style. Fortunately, depending on your starting point, some big gains can be made by adopting more effective engineering practices like test-driven development and continuous integration. Unfortunately, to make any big leaps in productivity and predictability, you also need to change the way you manage projects.

By only implementing low-level changes, and avoiding changes that have broader impact, organizations generally don't see the improvements they are hoping for (although they often see just enough improvement to become complacent with their new half-agile method of development).

This is a specific example that illustrates a broader point: there are two ways of solving any given problem:
  1. Work more effectively within your current context.
  2. Change your context.
As in the above example, you can see some gains by working more effectively within your context, but to have major breakthroughs you usually have to change your work context.

Here's the killer: changing your work context in a large organization can be extremely, extremely challenging. When it works, it often takes a lot of time and effort to get the ball rolling. Because of the difficulty in bringing about context change, many people give up on trying to do it at all. They think that it isn't worth the effort, or almost forget it's even an option. (If you've heard the words "that won't work here" or "that's corporate America, learn to live with it", you probably know what I'm talking about.) Truth be told, often they are right. However, that doesn't stop those of us who believe strongly there is a better way of doing things. Sometimes, it is worth it.

I recently realized that this conflict of approaches to problem-solving is the underlying cause of a lot of conflict between agile evangelists and others on a team struggling towards agile development. Here's how it usually happens:

The team is faced with some particular obstacle, and gets together to figure out how to proceed. Let's say the team has to jump through some bureaucratic hoops to deploy a new version of the software to production. As a result, releases incur a lot of overhead, making frequent releases impractical. A few people make suggestions that minimize the amount of time or effort it takes to jump through the hoops. The team begins discussing the benefits and drawbacks of each. An agile evangelist realizes the problem isn't an essential one, but one caused by the current work context. Frustrated with the short-sightedness of his teammates, he expresses his dissatisfaction with all the suggestions so far, asks "Why do we have to do this paperwork every time we release anyway?", and tries steering the conversation towards strategies to change this aspect of the work context. Less idealistic members of the team realize this won't solve the problem immediately (if it's possible at all), and reiterate their own suggestions. A heated argument ensues.

The problem is neither side feels listened to. They are attacking different aspects of the problem, so there is no actual conflict between them. Both solutions could be applied. Knowing that it will probably take time to change the context, there is probably some benefit to working more effectively within the context in the meantime. Similarly, even if you can reduce the inconvenience of working within the current context, if it is possible to eliminate the inconvenience entirely by changing the context, it's probably worth trying to do that too.

But, because each side is focused so completely on its own view of the problem, neither acknowledges the other's suggestion as useful. Fortunately, I think this is one of those problems that is half-way solved once you're aware of it. Remember that you may be focusing on a different aspect of the problem than someone else, and that both approaches may have merit. Try to remain aware of how you share your perspective (and for that matter, focus on sharing your perspective rather than making your point). Practice saying "Yes, and..." instead of "Yeah, but...".

Often agile evangelists and other change agents shoot themselves in the foot by letting their enthusiasm lead the way. This will leave others feeling unheard, which sets up conflict and resistance. Remember, organizational change is an emotional domain, not necessarily a rational one. Before you can understand and deal with others' emotional reactions, you have to understand how your emotions drive your behavior, and how this contributes to the reactions of others.

Tuesday, February 06, 2007

Work With the Customer, Not For the Customer

If you think your job is to do what the customer* tells you to do and build what the customer tells you to build, you are at risk of building something nobody actually needs.

We as software professionals have a responsibility to our customers that goes beyond giving them total control. We are responsible not just for writing code; we are responsible for helping to create a useful product.

A few days ago, I was listening to an IT Conversations interview with Joel Spolsky. He talks a bit about how XP teams work with the customer role, and makes a few good points:
  • "Customers will not invent the great features. They will not come up with the 'big leap' ideas." (Your average music aficionado would never have invented the iPod, for example.)
  • "A good programmer... will come up with features customers never would have imagined were possible." (due to knowledge of program internals and what's technically possible, combined with an understanding of the domain and the needs of the user.)
  • You will discover more interesting insights when you say "Tell me about your job and how you use [our product]" rather than ask "What features should we do?"
  • When you ask that second question, "... you get customers asking you for features that seem like obvious features to ask for but which they're never going to use or care about or need."
I think what Joel is getting at is that often the customer on an XP team is someone with operational experience, not product development experience. Product development expertise is needed to create a coherent, useful, and elegant product, but teams new to agile development sometimes neglect to use their own expertise to help the customer in this way. Sometimes we throw out the heavy up-front planning and design, but neglect to replace it with deep collaboration with the customer throughout the development effort. The result is often a product without a coherent vision: users may not find it useful, yet may not be able to articulate what's wrong with it.

We are all experts at living in homes, but if you designed and supervised the construction of your own home without advice from architects, engineers, and construction experts, you probably wouldn't be very happy with the result. Likewise, if you do not help shape the vision of the product, you'll likely build something your customer and users aren't very happy with (despite it being exactly what they asked for).

Unless your customer has significant product development experience, the expertise of both parties is needed when shaping the vision of the product. (I'm assuming here there is a business analyst, markitect, usability expert, or someone else - hopefully several people - with significant product development experience on the development team, and that the customer is a domain expert with a keen understanding of the problem the product is meant to solve and the people who will be using it.) As my colleague Dmitri Dolguikh recently remarked:

"[Software development] is not about the customer asking you for something and you building it for them. It's about you and the customer working together to figure out what it actually is that they need."

Responsible software development means working with your customer to create the best possible product, not working for your customer and abdicating all responsibility for the product being built.

* Please read this as "Product Owner" if you prefer Scrum to XP.

Friday, January 05, 2007

The Essence of Agile Software Development

I finally got around to reading an excellent article by Alistair Cockburn entitled Are iterations hazardous to your project?. He makes the case that iterations often devolve into mere planning windows without producing valuable feedback from customers on running, tested features.

Feedback is the key to agile software development. Agile techniques (from pair programming to TDD to tracking velocity and generating release and iteration burndown charts) work because they provide rapid feedback. The more feedback, and the sooner we get it, the better. Mike Bria put it best in the XP mailing list:
"feedback... is to agile as water is to ice"
Feedback is not just an element of agile software development; it is the very essence of it. Keep this in mind and you'll be able to steer your way out of almost any pitfall in your adoption of agile techniques.

However, as my good friend Daniel De Aguiar is fond of pointing out, feedback by itself is not sufficient. We need to analyze the information we're getting, and act on it, to make feedback useful. In response to a sneak preview of this post, Danny replied:
"Opportunities for feedback analysis throughout the iteration are critical to success. Otherwise, the feedback is just noise."
I agree. Many teams are getting feedback through some of the practices mentioned above but don't squeeze enough benefit out of it. For example, often teams will diligently create burndown charts but ignore the red flags that such charts are meant to expose. If you're going to gather feedback, make use of it!
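To make that concrete, here's a minimal sketch (with made-up numbers; the data and the 10% tolerance are my own assumptions, not a standard) of the kind of red flag a burndown chart is meant to expose: remaining work sitting persistently above the ideal straight-line burndown.

```python
# Minimal burndown red-flag check (hypothetical data).
# remaining[i] = story points left at the end of day i of the iteration.

def burndown_red_flags(remaining, total_days, tolerance=0.1):
    """Return the days on which remaining work sits above the
    ideal straight-line burndown by more than `tolerance`."""
    total = remaining[0]
    flags = []
    for day, left in enumerate(remaining):
        ideal = total * (1 - day / total_days)  # straight line to zero
        if left > ideal * (1 + tolerance):
            flags.append(day)
    return flags

# A ten-day iteration where work stalls mid-iteration:
print(burndown_red_flags([40, 36, 33, 33, 33, 32], total_days=10))
# → [3, 4, 5]
```

The point isn't the arithmetic, which any charting tool already does for you; it's that someone has to look at days 3 through 5 and ask why the line went flat, while there is still time in the iteration to do something about it.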

You can read Danny's thoughts on the matter here.