In 2005, I moved to the Technology Demonstration Program Operational Research Team at Defence Research and Development Canada (DRDC) Headquarters.
Every year, teams of defence scientists in DRDC make proposals for Technology Demonstration Projects. A committee of military officers drawn from all of the military environments reviews the proposals, which are also briefed to the committee by the proposal teams.
Then each member of the committee ranks the proposals. The rankings are then examined by the Technology Demonstration Program Operational Research Team using a piece of consensus analysis software called MARCUS, developed by the Centre for Operational Research and Analysis. The top-ranked proposals are funded by Defence Research and Development Canada to the tune of approximately $4M each, so a great deal of money is at stake and it is extremely important to select the most promising proposals.
There are usually ten members on the committee, who rank the projects according to five independent, weighted criteria. I was concerned about the MARCUS methodology because it suffered from the problem of irrelevant alternatives: the ranking of the top proposals could change if a low-ranked proposal was added to or removed from consideration. In other words, adding or removing a proposal that the committee would never fund because of its poor quality could cause a fairly high-quality proposal not to be selected.
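The details of MARCUS are not described here, so as a stand-in here is a minimal sketch in Python using a Borda-style positional scoring rule, which exhibits exactly this failure. The committee size and ballots are made up for illustration; the point is only that removing a weak proposal C flips the order of the two serious contenders A and B.

# A Borda-style positional rule: a proposal gets (k-1) points for a first-place
# vote, (k-2) for second, ..., 0 for last, where k is the number of proposals
# still under consideration.
def borda_ranking(ballots, candidates):
    scores = {c: 0 for c in candidates}
    for ballot in ballots:
        ranked = [c for c in ballot if c in candidates]
        for position, c in enumerate(ranked):
            scores[c] += (len(ranked) - 1) - position
    return sorted(candidates, key=lambda c: -scores[c]), scores

# Five hypothetical committee members ranking two serious contenders, A and B,
# plus one weak proposal, C.
ballots = [("A", "B", "C")] * 3 + [("B", "C", "A")] * 2

print(borda_ranking(ballots, ["A", "B", "C"]))  # B on top (A=6, B=7, C=2)
print(borda_ranking(ballots, ["A", "B"]))       # drop C and A moves on top (A=3, B=2)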
I developed an alternative to the MARCUS method, based on the Condorcet method of ranking. I called it Condorcet Elimination.
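The sketch below is only one plausible reading of the method, not necessarily the exact procedure I used: build the pairwise (head-to-head) preference counts, repeatedly select the Condorcet winner among the remaining proposals as the next entry in the ranking, and fall back to a Copeland-style count of pairwise wins when no Condorcet winner exists.

def pairwise_wins(ballots, candidates):
    """Count, for every ordered pair (a, b), how many ballots rank a above b."""
    prefer = {(a, b): 0 for a in candidates for b in candidates if a != b}
    for ballot in ballots:
        pos = {c: ballot.index(c) for c in candidates}
        for a in candidates:
            for b in candidates:
                if a != b and pos[a] < pos[b]:
                    prefer[(a, b)] += 1
    return prefer

def condorcet_elimination(ballots, candidates):
    """Rank proposals from best to worst: repeatedly pick the Condorcet winner
    among the remaining proposals; if none exists, fall back to the proposal
    with the most pairwise wins (a Copeland-style tie-break, assumed here)."""
    remaining = list(candidates)
    ranking = []
    while remaining:
        prefer = pairwise_wins(ballots, remaining)
        wins = {a: sum(prefer[(a, b)] > prefer[(b, a)] for b in remaining if b != a)
                for a in remaining}
        condorcet = [a for a in remaining if wins[a] == len(remaining) - 1]
        best = condorcet[0] if condorcet else max(remaining, key=wins.get)
        ranking.append(best)
        remaining.remove(best)
    return ranking

# On the Borda example above, the pairwise comparisons pick A (it beats both
# B and C head-to-head), and the result does not change if C is removed.
ballots = [("A", "B", "C")] * 3 + [("B", "C", "A")] * 2
print(condorcet_elimination(ballots, ["A", "B", "C"]))   # ['A', 'B', 'C']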
Then I built a simulation to test the Condorcet Elimination Method, using Monte Carlo simulation to model the voting of the committee members. First, I generated a series of simulated proposals with known quality levels using random numbers. Then I modelled the error in each criterion's ability to identify the proposal with the best quality. Finally, I introduced random errors in the ability of each committee member to evaluate the quality of the proposals against the criteria.
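For concreteness, here is a minimal sketch of that generative model. Only the structure follows the description above; the numbers of proposals, members, and criteria, the criterion weights, and the noise levels are all made-up illustrative values.

import numpy as np

rng = np.random.default_rng(2005)

n_proposals, n_members, n_criteria = 12, 10, 5
weights = np.array([0.30, 0.25, 0.20, 0.15, 0.10])   # assumed criterion weights
sigma_criterion, sigma_member = 0.5, 0.5              # assumed noise levels

# True quality of each simulated proposal (known only to the simulation).
true_quality = rng.normal(0.0, 1.0, size=n_proposals)

# Error in each criterion's ability to reflect the true quality.
criterion_scores = (true_quality[:, None]
                    + rng.normal(0.0, sigma_criterion, size=(n_proposals, n_criteria)))

# Each member perceives every criterion score with additional random error,
# then combines the criteria using the weights.
perceived = (criterion_scores[None, :, :]
             + rng.normal(0.0, sigma_member, size=(n_members, n_proposals, n_criteria)))
member_scores = perceived @ weights                   # shape: (n_members, n_proposals)

# Each member's ballot: proposals ranked from best to worst perceived score.
ballots = [tuple(int(i) for i in np.argsort(-scores)) for scores in member_scores]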
All of the errors were assumed to be normally distributed, but I varied the standard deviations of the distributions to determine the robustness of the committee evaluation process. I used the Condorcet Elimination Method to determine the final rankings of the committee and the level of consensus.
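Reusing the condorcet_elimination and ballot-generation sketches above, the experiment loop looks roughly like this. The sweep of noise levels, the number of replications, and the consensus proxy (the fraction of members whose individual top choice matches the committee's top choice) are illustrative assumptions, not the exact measures from the original study.

import numpy as np

def simulate_committee(sigma_member, rng, n_proposals=12, n_members=10,
                       n_criteria=5, sigma_criterion=0.5):
    """One committee evaluation with the generative model sketched above."""
    weights = np.full(n_criteria, 1.0 / n_criteria)   # equal weights, assumed
    true_quality = rng.normal(0.0, 1.0, n_proposals)
    criterion_scores = (true_quality[:, None]
                        + rng.normal(0.0, sigma_criterion, (n_proposals, n_criteria)))
    perceived = (criterion_scores[None, :, :]
                 + rng.normal(0.0, sigma_member, (n_members, n_proposals, n_criteria)))
    member_scores = perceived @ weights
    return true_quality, [tuple(int(i) for i in np.argsort(-s)) for s in member_scores]

rng = np.random.default_rng(42)
for sigma_member in (0.25, 0.5, 1.0, 2.0):
    top_correct, top_agreement = [], []
    for _ in range(100):                              # Monte Carlo replications
        true_quality, ballots = simulate_committee(sigma_member, rng)
        group = condorcet_elimination(ballots, list(range(len(true_quality))))
        top_correct.append(group[0] == int(np.argmax(true_quality)))
        top_agreement.append(sum(b[0] == group[0] for b in ballots) / len(ballots))
    print(f"sigma={sigma_member:.2f}  "
          f"P(group top = true best) = {np.mean(top_correct):.2f}  "
          f"mean agreement on top = {np.mean(top_agreement):.2f}")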
In general, the results showed that the committee was able to identify the top few and bottom few proposals in terms of their true quality. However, there was generally a lack of consensus in the rankings of the proposals in the mid-range of quality. This is problematic because it is at this mid-level that the funding is cut off, so the lack of consensus on the quality of the proposals might result in some lower-quality proposals being funded and some higher-quality proposals not making the cut.
I found that the more proposals there were, the less likely it was that the committee would reach a complete consensus. The more committee members there were, the more likely the truly best proposals would be found by the consensus of the committee. However, if there was enough variance in the errors, the committee could get locked into a form of groupthink in which it formed a consensus that a proposal that was not actually of the highest quality was the best one.
My suggestion was to change how the ranking process was used. I felt that if there was a consensus on the top proposals, they should be funded, and if there was a consensus on the bottom proposals, they should not be funded. However, if the funding cut-off line fell in the area in which there was not complete consensus, partial funding should be provided by DRDC and the remainder should be provided by the sponsor of the project. This would determine the sponsor’s “willingness to pay” using demand-revealing methods.