- By Thomas E. Ricks, who covered the U.S. military from 1991 to 2008 for the Wall Street Journal and then the Washington Post. He can be reached at firstname.lastname@example.org.
By Major Gareth Lintt
Best Defense personnel contest entry
If I could change one thing, I would revise the U.S. Army’s Officer Evaluation Reporting (OER) system.
I am going to concentrate on the U.S. Army’s system because that is the one I know, and it has issues that I would argue can be fixed. And yes, I know: we just revised the OER forms. The problem with the current revision is that the new boss is the same as the old boss. We took our old OER, DA Form 67-9, and replaced it with four new ones: one each for company grade officers, field grade officers, strategic grade colonels, and strategic grade brigadier generals. In order to keep this relatively short, I will ignore use of the term “strategic grade,” as commenting on that might make my head explode.
The new OER forms are the same as the old forms in that there are boxes for words — boxes filled out by the rated officer’s rater and senior rater. Generally speaking, rater and senior rater equate to boss and boss’s boss. To be fair, the new forms made some changes that make sense, eliminated some superfluous box-checking, and generally aim to provide a better picture of the rated officer. The new forms still suffer from the same problem as the old ones, though: too many words. As an example, on my last evaluation I wrote out the significant duties and responsibilities of my job during the rated period. I am willing to bet that nobody has read those words, and nobody ever will, because they simply do not matter at the central selection boards that look at evaluations and decide on promotions.
In the Army’s own words (notes on slide 18): “In post board surveys, board members list the senior rater section (specifically the narrative) as the single most important element in helping them determine how to vote a file.” The only words that matter are the senior rater’s narrative, and the senior rater gets five lines, max, to describe the rated officer. And selection board members, from what I have been told by people who have sat on these boards, take about 15-20 seconds to digest everything in the rated officer’s promotion packet — the officer’s records, photo, and several OERs. 20 seconds. I will go out on a limb here and state that 20 seconds is really not enough time to read, digest, and develop a full picture of an officer, particularly based on about 100 words of narrative. Then there is the block check: you’re top block or acceptable block, or you’re not getting promoted. If you’re on a selection board, you have about enough time to scan for key words and to check whether the rated officer has been scaled against others by the senior rater: number one of ten, or similar.
What would I replace this system with? I would return to one form for everyone, and I would eliminate all of the verbiage. Instead, I would have both the rater and the senior rater provide a numerical rating, between 1 and 25, of the rated officer across 20 categories. Rating officers would submit their rating based on their opinion of the rated officer. So if tactical competence is a category, my rater might give me a 12, and my senior rater a 13 based on their individual opinions of my tactical competence.
The key to this system, though, is collated data. The form — though I would keep the Army’s new online-only submission system — is used to collect the numbers, and the Army’s Human Resources Command then averages those numbers for the rater and the senior rater. If I rate five officers, it is very unlikely, if I am being honest, that everyone will receive the same number across 20 categories, particularly on a scale of 1 to 25. When everything is averaged out, I have effectively stacked my rated officers, one of five through five of five. That is the information boards would see. And this stacking would continue through a rating officer’s career: from captain to general, every time I rate a lieutenant it contributes to the stack. By the time you’re a senior officer, you have a solid population of rated officers by rank. And if you’re being honest, a 12 is a 12 is a 12. If you are a low grader, that shows in the raw numbers, but since everyone you rate gets low numbers it does not matter; what matters is the rated officer’s placement on your overall scale. Ranking two of three when the rater is a captain will be seen as exactly what it is: a new lieutenant who is doing well enough. Ranking one of 52 when the rater is a colonel will just as distinctly show who is doing well in that colonel’s opinion.
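The averaging-and-stacking step described above can be sketched in a few lines. Everything here is an illustrative assumption of mine — the category names, scores, officer names, and function names are hypothetical, not anything from the Army’s actual system:

```python
from statistics import mean

# Hypothetical sketch of the proposed profile math: each rater's 1-25
# scores are averaged per rated officer, then the officers are ranked
# against the rater's own career-long history.

def officer_average(scores_by_category):
    """Average one officer's 1-25 scores across all rated categories."""
    return mean(scores_by_category.values())

def stack_rated_officers(rater_profile):
    """Rank every officer a rater has rated, best average first.

    rater_profile maps officer -> {category: score}. Position k in the
    result means "k+1 of N" on the rater's scale, so a habitual low
    grader still produces a meaningful ordering.
    """
    averages = {name: officer_average(scores)
                for name, scores in rater_profile.items()}
    return sorted(averages.items(), key=lambda item: item[1], reverse=True)

# Example: even a rater who grades low overall stacks officers clearly.
profile = {
    "Lt. A": {"tactical": 12, "leadership": 10},
    "Lt. B": {"tactical": 8, "leadership": 9},
    "Lt. C": {"tactical": 11, "leadership": 12},
}
stack = stack_rated_officers(profile)
# stack[0] is this rater's "one of three": Lt. C, averaging 11.5
```

The point the sketch makes is the author’s: the absolute numbers matter less than each officer’s position in the rater’s own distribution.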
For senior raters I would also add a question: how many times during this rating period have you met face-to-face with the rated officer? Since all of this is online, both the rated officer and the senior rater would have to answer independently. I would add a block for the rated officer asking whether the senior rater had told him or her what to answer. Senior raters who get too many affirmative responses to that question would face some administrative inquiry, or at least be flagged to their leadership.
This is the essence of my concept: an evaluation system that would be very difficult to game and that produces an evaluation that is actually useful. This system would be difficult to implement, particularly given that we just revamped the OERs, and it would take two to five years to build sufficient profile data for raters and senior raters. The benefits, though, would be well worth the time and effort. Boards would have a reasonably objective view of how officers stack up against other officers, and that is the whole purpose of boards. The rated officer would build a record of what he or she is good at and where improvements can be made. Toxic leaders could not single out specific officers, because their evaluation would stand out as a minority report. Couple this with having the rated officer complete a form on both the rater and the senior rater and you get a 360-degree evaluation system. Mask the rated officer’s responses until 10-15 have been compiled, then give the compilation to the rater or senior rater, and recompile every time another 10 or 15 are added. This secures anonymity for the rated officer, particularly since it is all numbers and not verbiage that can be decoded into identities; provides raters and senior raters with feedback on their leadership; and gives the Army a running estimate of how its officers are doing.
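The masking rule above — release subordinate feedback only once a batch of responses has accumulated — can be sketched as follows. The batch threshold, category names, and function name are my own illustrative assumptions:

```python
# Hypothetical sketch of the anonymity rule: feedback on a rater is
# withheld until at least BATCH_SIZE responses exist, and is released
# only as per-category averages over whole batches, so no single
# response can be traced back to one rated officer.
BATCH_SIZE = 10  # assumed threshold; the proposal says 10-15

def compiled_feedback(responses):
    """Return per-category averages, but only over complete batches.

    responses is a list of {category: score} dicts. With fewer than
    BATCH_SIZE responses on file, nothing is shown at all.
    """
    releasable = (len(responses) // BATCH_SIZE) * BATCH_SIZE
    if releasable == 0:
        return None  # not enough responses yet; preserve anonymity
    batch = responses[:releasable]
    categories = batch[0].keys()
    return {c: sum(r[c] for r in batch) / releasable for c in categories}

# Nine responses: still masked. A tenth unlocks the first compilation.
few = [{"leadership": 15}] * 9
many = few + [{"leadership": 25}]
assert compiled_feedback(few) is None
assert compiled_feedback(many) == {"leadership": 16.0}
```

Because only batch averages ever leave the system, a dissenting score is blended in with nine or more others before the rater sees anything.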
No system is going to be perfect, but I think this recommendation adds a bit more objectivity to the rating system and eliminates some of the persistent issues our current system suffers from.
Major Gareth Lintt has received numerous evaluations throughout his career, none of which stated “below average, do not retain.” Yet. This article is opinion and does not in any way reflect Army, DoD or US Government policy, intent, or conceptualization. Obviously.
Hey Joe, whattaya know about how to improve the U.S. military personnel system? Please consider sending it to the blog e-mail address, with PERSONNEL in the subject line.