Leadership: Rumbles from Below

We had a Roundtable call with Army Colonel Bruce J. Reider on the subject of the Army's new Multi-Source Assessment and Feedback (MSAF) system.  The concept here is, for the first time, to mandate that officers and NCOs -- as well as Department of the Army civilians, interestingly enough -- get formal feedback on how they do as leaders from their subordinates and peers as well as from their command.  While it won't be part of their performance evaluation, every single person in a leadership position will have to receive these comments.  COL Reider feels this has the opportunity to fundamentally improve Army culture by letting people know exactly where they are weakest in terms of the Army's leadership values.

Furthermore, everyone participating in the system will be submitting their comments on you anonymously.

One of the things that bloggers have a lot of experience with is the effect of granting anonymity to commenters.  The natural question, then, is:  Will you be editing out the profanity?

The Colonel says, "Actually, yes."  The concern for anonymity is such, however, that they will be doing so with an automated filter -- so that no human except the person being evaluated sees the comments.
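
That "no human in the loop" promise rests on a fairly simple piece of automation.  As a rough illustration only -- the word list, function names, and masking rule below are entirely my own invention, not anything the Army has described -- the filter needn't be much more than something like this sketch in Python:

```python
# Illustrative only: a toy automated profanity filter that masks blocked
# words before a comment reaches the rated leader.  The word list and the
# masking rule are placeholders I made up, not the Army's.
import re

BLOCKED_WORDS = {"darn", "heck"}  # stand-in list, not the real one


def scrub(comment: str) -> str:
    """Return the comment with any blocked word replaced by asterisks."""
    def mask(match: re.Match) -> str:
        word = match.group(0)
        return "*" * len(word) if word.lower() in BLOCKED_WORDS else word

    return re.sub(r"[A-Za-z']+", mask, comment)


print(scrub("That was a heck of a call, sir."))
# -> "That was a **** of a call, sir."
```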

Quite a bit more on this after the jump.

David Axe expressed some concern that the anonymous nature of the comment system could be damaging to the Army's goal of encouraging leaders to behave creatively.  Again, bloggers know something about this:  anonymous commenters who don't have to actually see (and deal with) your reaction to comments tend to lose their inhibitions against tearing you a new one.  We in the blogosphere have a term for this, as you all know -- "flaming" -- and Mr. Axe worried that this might actually suppress the creativity that is important in adapting to changing battlefields.

I think that's a good point.  COL Reider said that their test subjects had really had only praise for the system, which is probably true:  participants in the test were a small pool by comparison to "the whole Army," and as such had less confidence that their anonymity was real.  I will be absolutely shocked if the Army doesn't find, when the system goes out into the wild, that they are getting flame bombs thrown -- particularly at superiors, but also at those who just don't fit in as well.

We all know that soldiers gripe constantly about each other, among other things.  One of the things that happens in a unit is that the soldiers come, through these complaints, to an understanding about what each person's weaknesses are -- and then that person is subtly moved to a position where those weaknesses are least dangerous for the unit as a whole.  We've all seen that happen.

What this system does differently is that it lets people in on what's being said behind their backs.  The Army believes that this will fundamentally improve leadership qualities -- because you can see where the team feels you're doing badly (and also where you're doing well), you can concentrate on firming up your weak areas.

That may, in fact, be what happens.  If so, it will be a fundamental improvement.

However, David Axe is not wrong to suggest that it runs the risk of suppressing creative thinkers -- because what defines creative thinkers is that they push against the standard understanding.  That is the kind of thing that may come to be viewed as a flaw in those social behind-the-back discussions; the value of their thinking may take a while to become obvious.  To the degree that officers and NCOs of this type are suppressed, the Army will not benefit.

I asked after the assertion that this was a "scientific" system, and I want to take a moment to thank COL Reider and his team for responding to my question at length, and then to follow-up questions as well.  You can read the information paper they sent me here.  (A colorful but useful explanation of why I was asking can be read here, thanks to a clever fellow who blogs under the handle "Geek with a .45".)

During my training in experimental methodology and statistical analysis for the social sciences, our professor spent the first year entirely training us in discerning "science from voodoo."  As far as human behavior goes, science, he asserted, looked like well-constructed double-blind studies of single, relevant factors.

What has been done here is not double-blind studies of a single relevant factor.  Rather, it is consensus building about multiple factors.  Some of that is obvious -- "focus groups" for example -- but some of it is less so.  The only hard reference to a methodology is "Q-sort," which you can read about here.

Q, on the other hand, looks for correlations between subjects across a sample of variables. Q factor analysis reduces the many individual viewpoints of the subjects down to a few "factors," which represent shared ways of thinking.

Q sorts, in other words, are looking for consensus views -- shared ways of thinking, as the article puts it.
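
For readers who want a concrete picture of what that reversal looks like, here is a minimal sketch in Python.  The data are invented -- a handful of hypothetical respondents rank-ordering a dozen hypothetical statements -- and the analysis is only the generic Q-technique step of correlating people rather than items.  It is not the Army's instrument, nor its actual item pool.

```python
# A toy Q-technique analysis: correlate respondents (not items) across the
# statements they rank-ordered, then extract the leading factors as the
# shared "ways of thinking."  All data here are invented.
import numpy as np

rng = np.random.default_rng(0)

# 8 hypothetical respondents, each rank-ordering 12 hypothetical statements.
# Two "camps" are simulated by jittering two prototype orderings.
prototype_a = rng.permutation(12).astype(float)
prototype_b = rng.permutation(12).astype(float)
sorts = np.array(
    [prototype_a + rng.normal(0, 1.5, 12) for _ in range(4)]
    + [prototype_b + rng.normal(0, 1.5, 12) for _ in range(4)]
)

# Ordinary ("R") factor analysis would correlate the statements across
# people; Q reverses this and correlates the people across the statements.
person_corr = np.corrcoef(sorts)            # 8 x 8 person-by-person matrix

# Principal-components step: the leading eigenvectors are the consensus
# viewpoints, and the loadings show which respondents share which one.
eigvals, eigvecs = np.linalg.eigh(person_corr)
order = np.argsort(eigvals)[::-1]
loadings = eigvecs[:, order[:2]] * np.sqrt(eigvals[order[:2]])

print("Respondent loadings on the two consensus factors:")
print(np.round(loadings, 2))
```

With only eight made-up respondents the two camps fall out cleanly, which is exactly the point of the illustration: what the method recovers is agreement.  That is why I think the question about consensus below is worth asking.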

What that means is that this is not "hard" science -- that is, science where you can actually disprove a claim, and even the fundamental model, with evidence that can be repeated as often as necessary.  It is not "soft" science, of the type the Geek was talking about, where you are testing the influence of a single relevant factor, with controls to ensure that no other factors are involved. 

It is, however, empirical.  There is nothing wrong with that, as there are some questions that science isn't structured to investigate.  Very many areas of human life simply aren't scientific questions.  I don't mean to suggest this is something the Army hasn't been careful about.

I also don't mean to suggest that the Colonel was anything less than honest in his appraisal of it as science -- I merely wish to disagree that this is, in fact, science.  I assume his honesty and honor as an officer of the United States.

With that understood, there's another point about the use of consensus methodologies, which is that it underlines David Axe's original concern.  So I wrote back to ask about that:

Would you ask the good doctor if the use of Q-sort methodology to develop the questions increases the probability of the problem David Axe asked after -- social crushing of innovation -- given that Q-sort methods function around consensus?  It seems to me that's likely to lead to a question pool that reinforces consensus understandings of 'the right way to do it,' rather than one that points to innovation (as the COL suggested was his goal).  If you build questions along those lines, you'll more likely get answers of the type that Axe was worrying about.

Of course, since I don't know what the question pool really is, that may not be right -- it's possible there were efforts to mitigate that problem that are not clear in the answer given.

The Colonel wrote back, rather patiently:

MSAF is an innovative leader development practice in that it provides a method for obtaining feedback from multiple sources, not just the individual's rater and/or supervisor.    

The items or questions are not so much about the right way to do something as the right thing to do.  That is what the leader competencies in doctrine establish - what are the right leadership behaviors.  Having experts judge what good leadership behaviors are was not only an important step but a necessary one.  The Q-sort was done with individuals who are expert in leadership theories and research.  The leader competency model was similarly built through a deliberate analytic effort.  The method of development also focused on identifying the most important behaviors and behaviors that can differentiate among individuals.  Do we have the instruments as good as they can be?  Probably not, but we have done more to develop the instruments for MSAF at its outset than I would guess the typical professional firm does when they consult with businesses to build a customized 360-degree assessment program.  The Army MSAF program will continually review the results of the instruments and refine them as necessary; a special staff position has been established to do this.

Once again, I want to thank the Colonel and his staff for the time they've given me, to help me (and, hopefully, you) understand all this.  I'd like to reiterate that my disagreements are meant to be respectful ones.

They should also not be viewed as an attack on the MSAF system.  I agree with the Colonel that what is needed at this stage is more experience -- continual review and improvement, as he puts it.  That is the mark of a successful empirical approach.  What we hope to do here at BlackFive is only to add some concepts -- and something of our own experience with 'anonymous review.'  So long as the potential dangers are recognized and accounted for, the system could very well evolve into something tremendously useful.

I likewise agree that giving subordinates more of a voice is something the Army could really use.  Most of the officers I've dealt with have been good officers, whose concern for their enlisted was carefully balanced with their duty to carry out the missions assigned to them.  There are some who have treated their enlisted without the slightest consideration, however; and now, at least, they'll have to face up to some comments about that.

Even with the profanity edited out, that has the potential to be a very good thing.  There are also some perils.  I trust the Army will indeed watch carefully how the system plays out, and will be interested to hear what their further experience with it brings.
