The Inadequacy of Feedback

Or more accurately, the inadequacy of the after-course evaluation process to gather meaningful and actionable feedback.

I facilitated three sessions at Agile2009 in August. It was a lot of fun, but I found something deeply unsatisfying and frustrating about the Agile2009 feedback process. This is not specific to Agile2009; it is a fault of the feedback system we have become accustomed to and, it seems, have never stopped to question. I guess I have high expectations of the Agile conference leading the way in new thought, rather than following tired, old patterns that are clearly broken. Too bad that has not yet occurred, but here's hoping.

In a nutshell, the feedback process works like this: the session ends. Participants are given a form just as they are ready to leave the room to drink coffee, eat lunch or play the networking game. On the form they are expected to check some boxes on a 1-5 scale against some vague and ambiguous criteria. They are also expected to add comments. Most do the former, few do the latter (and fewer still do the latter in a way that is both legible and meaningful). The forms are anonymous. There is no space for a follow-up conversation.

The value of these forms to me as a presenter (I cannot speak for others) is almost zero.  Participants often give diametrically opposed feedback, so it is extremely difficult to use it in a constructive way.  Different sessions suit different personalities, and there is no way to make everyone happy. I also dislike that the feedback is anonymous as it flies in the face of Agile values such as trust, courage and transparency.

I am sure there are many who can explain from a psychological or systems perspective why this process is so broken and so lacking in value, and I am hoping someone will do so by way of a comment.  I’m looking at it here in a purely personal  way, i.e. how it affects me. I have always felt that such a feedback mechanism not only adds no value, but is actually destructive.

When I read these anonymous feedback forms (written under duress, and mostly as an act of compliance) I have one of two reactions:

  1. I am hurt by the criticism. I feel deflated and wounded. I feel misunderstood.
  2. My ego is boosted, and I feel flushed with pride. I want to brag.

It is not constructive to dwell in either of these places, and yet this is where I automatically go.  Clearly, I can work on this, and I do, but I reckon that a more meaningful feedback process would help dissolve the two extremes and create a better space for both giving and receiving feedback.

When someone takes the time, in a thoughtful and reflective way, to offer me feedback, either in person or in writing, and in a transparent (i.e. not anonymous) way, I find I have feedback that I can hear, that I can consider, and that I can act on.

These days, when I teach CSM courses or other trainings, I don't hand out feedback forms. Instead I use a process of continuous reflection, done through rich dialog. Occasionally I have asked for feedback in the form of a drawing, or a haiku. Such mediums tap a different part of the brain (or perhaps not the brain at all) and help people get away from stating the obvious. I heard of one Scrum trainer who hands out blank sheets of paper at the end of the training. This is a great improvement, but the participants are still under pressure to write something meaningful in a very short time-frame.

I continue to consider and to search for new ways of gathering meaningful feedback. I am fairly sure the situation can be improved by getting rid of the anonymity aspect, making the feedback form optional, and allowing participants to take the form away and complete it in their own time.

23 responses to “The Inadequacy of Feedback”

  1. Laurent Bossavit

    Your timing is good with this observation: there’s plenty of time to do something about it. I would also suggest you make sure Jim Newkirk sees this, if you haven’t already done so.

    The basic question, it seems to me, is “Why don’t we simply trust session participants to give speakers feedback, when they want and in the form they want ?” Which is a variant of “Why don’t we simply treat people as responsible adults ?”

    In this case (and I suspect in many other cases), there are non-trivial reasons why the system is arranged that way; it’s not a rhetorical question.

    The structure of the system is such that participants see *many* sessions in a *short* amount of time. They are likely, coming back from the conference, to be busier than usual and thus not to have time to provide feedback to so many speakers in a timely fashion. They are likely to have forgotten the sessions which had least value to them, which are precisely the ones that would benefit most from feedback.

    Feedback only matters anyway for what you are going to iterate on. If you speak at conferences regularly, but tend to do a different session each time, feedback on your delivery matters but feedback on the session not so much. So, feedback should be tailored to the profile of the speaker (which is an issue with a blank sheet of paper just as much as with a generic form). You really need time to say “this is the feedback I’d like to have”.

    In short, a conference like Agile200x is the wrong kind of setting in which to ask for feedback that might improve a conference session. If you want to get feedback *at* or *after* the conference, you’re going to go against the grain, inevitably.

    Review time is a better time to get feedback on a session. One of the biggest areas for improvement in Agile200x, in my opinion, is the impact of session reviews on the actual sessions.

    Another opportunity for the speaker is to present their session first to a small but representative audience, outside of the big conference and ahead of time. Then you can invite people specifically for the purpose of getting feedback.

    Yet another opportunity is to speak at conferences where the format ensures appropriate feedback. PLoP is a conference *entirely* focused on giving people useful feedback. Of course, that’s not very useful advice for improving Agile200x, but it’s something to keep in mind. Jim has changes in mind for the conference format, so he can do something there within his parameters.

    • Hi Laurent,

      personally, I’d very much like to give speakers at big conferences feedback on how the session could have been improved for me. It’s just that there often isn’t a convenient way to do so.

  2. I feel your pain. Having recently organised a scrum conference (South Africa’s first) it was frankly heartbreaking and exhausting to read through the feedback. One of the things we identified as an organising team was that the quality of the feedback to speakers needed to be improved.
    I also realised (and it was echoed by a comment from Jeff Sutherland in his presentation) that there is no pleasing some people…

  3. Ah – the value (or lack of value) of the Smile Sheet. I think you raise some great points, about trust and anonymity of the feedback. Especially in the area of Agile.

    I would assume one of the problems is that the smile sheet is too generic for the purpose of your presentation – perhaps it is designed to measure at the conference level, not the presentation level, and is trying to do double duty.

    However, one may ask if the conference is really doing anything of value – are they truly following Kirkpatrick’s Evaluation model? Are they doing steps 2-4? Or is the smile sheet just a placebo – a placeholder for “we really do care what you think”?

    Kirkpatrick’s evaluation model essentially measures:

    * Reaction of student – what they thought and felt about the training
    * Learning – the resulting increase in knowledge or capability
    * Behavior – extent of behavior and capability improvement and implementation/application
    * Results – the effects on the business or environment resulting from the trainee’s performance

    May I suggest you create your own smile sheet next time… customize & personalize it. Here is a good example:

  4. I think this is precisely why I like Open Space events so much. Instead of a one-to-many talk, it is a many-to-many facilitated dialog. And the feedback is pretty immediate, and the ending retrospective even more informative.

    Of course, in the one-to-many scenario you are describing, perhaps invite folks that want to talk more to a smaller session at lunch and start by asking them for feedback on your talk. It makes it personal, closer to one-on-one, and you get the folks that care. They may have a slightly positive bias (which is why they showed up), but you could investigate the areas to improve with targeted questions.


  5. Catherine Dupont

    Perhaps it may be helpful at the outset of the session to confirm their learning expectations and compare them with your agenda. (They may be different from what you expect.)

    Including time for real-time feedback from the audience at checkpoints during the talk (to confirm agreement on objectives or address disconnects) may aid them in the thinking part of a good review.

  6. I’m a big fan of Rypple.

    Whenever I present, I collect a list of attendees and send a Rypple micro-survey immediately afterwards. It feels very agile-retrospective-like to me.

    I blogged quickly about it a long time ago here:

    And I’m one of the case studies found here:

  8. Tobias, I have the same feeling. In my session about 15 attendees wrote words on their feedback sheets – many were nice. A few told me mine had been their best session so far. But the only actionable one was the one that invited me to co-author a book 🙂 So real conversations would help.

    I did appreciate your feedback sheet; getting it before the session made it easier to visualize success.

  9. For me the best possible feedback is when you talk face to face with the person who delivers it. This means someone felt it was important enough to go and talk with you.

    And to be honest, it’s almost the only feedback that should count. It doesn’t really matter if you tell me to write a haiku or draw a picture – if I’m set to deliver an “OK, keep up the good work” message, I will, and I won’t go any further.

    On the other hand, if I feel an urge to deliver feedback, the form isn’t important. I can write down notes and leave my email address in case you’d be willing to discuss further. I can find you during a break and tell you what’s on my mind. I can draw a cartoon or whatever. Either way, if I care and I know you care, I will reach you with the message.

    The problem is you almost never hear that feedback is valued by the presenter. Then you don’t know whether it is worthwhile even to put a tick in the “4 out of 5” checkbox.

  10. One trick I’m using to encourage feedback on my Scrum courses is to do a short retrospective meeting at the end of each day, with the goal of improving the course itself.
    That way we get to practice how to do a retrospective, and I get valuable, concrete feedback which I can sometimes act upon during the course itself.

  11. At Agile Boston I send out ‘rate the meeting’ emails where 10 means [great] and 1 means [terrible]. I strongly encourage comments. About 70 people show up per meeting and I receive maybe 7 responses after. Only a few of these have comments. That is my experience in encouraging low-effort, post-meeting reviews.

    People have short attention spans and weigh cost/benefit before acting. When the show is over, what’s in it for them to do a review? If I offer a material incentive, I figure it skews the feedback towards [meaningless]. Cursory analysis of results supports this idea.

    I do 3 things that seem to work:

    1. Focus attention on feedback earlier, before the event is over. Direct them to the review before class ends. Now they are leveraging time in class, not adding post-class effort. They also have more time to reflect on recent events and present-moment events in the class.

    2. Explicitly welcome [negative] feedback in the form of suggestions. Ask. I notice that this welcoming-action (actually an explicit ground rule) encourages candid [“negative”] yet highly actionable feedback in the reviews.

    3. Create boundaries and structure in the review. My class-review sheet is a series of statements that cross-check for dissonance on the item answers so I can tell if they are paying attention when answering.

    The review rates 5 to 1 where [5] means great and [1] means terrible, in checkboxes. I use statements (not questions) in SVO-p syntax.


    “My experience of this class is that it is one of my best classroom experiences ever!”

    …(more questions)…

    “This class does not meet my expectations”

    The SVO-p syntax keeps everything direct and in-the-now.

    I hope this helps.


    See also:

    • “Explicitly welcome [negative] feedback in the form of suggestions. Ask. I notice that this welcoming-action (actually an explicit ground rule) encourages candid [“negative”] yet highly actionable feedback in the reviews.”

      Does this not result in participants going away with a negative feeling about the experience?

  12. Tobias,
    I agree that the rating of sessions with a numerical scale is not helpful to the speakers. I have found that we get better data by giving the attendees a feedback form at the beginning of the conference that lists all sessions. Then for each session we ask if they would recommend that session to a friend, and ask them to write in one thing that they learned.
    Since the form is received at the beginning of the conference – people can keep it and fill it in as they think of something to add. We also add a few conference wide questions about the place, format, etc.

    This works reasonably well. It lets the organizers and presenters know if the sessions provided learning value to the attendees.

    On another note: I always struggle with what to do with the hurtful negative comments – there are always a few – even added to the “what did you learn” area… As you note, since these are mostly anonymous (only a few people add their names), it’s not helpful to receive a negative personal attack. How can any of us grow from such an experience?

  13. In the last workshop I gave, halfway through it I gave every attendee an index card. I then asked them to write down two things: one thing that I should continue doing or do more of, and on the opposite side of the card, one thing I should do differently or additionally for the rest of the workshop. At first they struggled a bit, but then I got some very meaningful and actionable responses!

  14. Ilja-

    So you had a retrospective at the end of your workshop? What a great idea! You can gain more valuable feedback and reinforce the importance of the Scrum retrospective at the same time.

  15. Like Ilja, I do the same thing in my 2-day agile training. In my case I do it 4 times – after each half day. Then I do my best to make at least small changes on the fly. My clients today were astounded that this was possible – when really all I did was reorder a few slides and ensure all their questions got answered. It’s a blast.

  16. As for great feedback, the Agile Tour Toronto people had a great example earlier in the week. They asked for one number – “how do you rate this talk?” – and two open-ended questions:
    “What did you like about this talk?” and “If there was one thing you could change to make this talk perfect, what would it be?”. I got very good feedback, and a much higher percentage of responses and comments than at Agile 2009.

  17. My favorite replacement for those nifty feedback forms is a Temperature Reading at the end of each day.

    I let the group stand up once more and form a circle. The TR itself consists of five sections; participation in each is voluntary:

    1. Offer Appreciations (sometimes just thanks)
    2. New Information and Insights
    3. Questions and Answers (to get rid of my question backlog)
    4. Complaints with Recommendations (to improve the next day/training)
    5. Hopes and Wishes

    The format works very well for me. I love to close it on the last training day with the ‘secret handshake’ – passed along the circle.


  18. I believe the best feedback is what you get when people come to talk to you of their own free will, after a lecture or during a recess in training. They have to have the courage to come and say what they feel, but if they do not, then maybe they are not interested enough to share.

  19. You might want to take a look at Chief Learning Officer – Nov. 2009 issue. It is dedicated to Don Kirkpatrick, father of the 4 Levels of Evaluation. The smile sheets you discuss are level 1; the evolution of his work is that one _should_ start with level 4, the Results, and work backward to level 1. But most evaluations start at level 1, may progress to level 2, but stop there.
    First article: The Father of the Four Levels
