Don't confuse system testing with user training
There are usually two distinct parts to a transition within the R2O cycle, especially for major programs. The first is the implementation of a new or updated technical system that enables the transition. The second is the implementation of the research product within that technical system for the user to assess. Each part carries the potential for significant challenges, so if the two are combined rather than treated as discrete tasks, the R2O cycle is likely to be prolonged as shortcomings in one task dog the other.
For the sake of expediency, managers often push to combine system integration with research evaluation (after all, there have probably been plenty of delays already!), and others are willing to support the effort because it is a genuinely exciting time in the R2O cycle: the user will shortly have access to the research and development for the first time, and the objective of the transition seems within reach. But practitioners have little role in testing a technical system or capability, other than assessing how it fits within the operational workflow, and if the technical system is not up to par, the research evaluation will fall short, potentially frustrating the research team. The failure is not necessarily the research team's fault; they may never have worked with the technical system.
User training is complicated enough without requiring users to learn a new system or system feature/capability as part of the research evaluation. In some cases this may be unavoidable (if existing tools cannot exploit the new information, for example), but two central questions should be asked:
- Do we need the expertise of practitioners to test the technical system?
- Are the new research products mature enough within the technical system for field experts to integrate them into their operations?
If a software developer or systems engineer can test the technical system for bugs and errors, that task should not be offloaded onto the field. If the research team examines the implementation of the research products in the technical system and finds that the results are not as expected, then the product is not ready for field evaluation. If a premature product is pushed to the field, users can quickly sour on its potential application, especially if the prospective added value is unclear to them. In other words, if what is presented for evaluation is close to a finished product, users are more likely to envision its utility during their initial assessment, which bodes well for subsequent tests within the R2O cycle.
Finally, integrating system testing and user training leads to a paradox. If we ask practitioners to evaluate a product they are unfamiliar with, in a technical system that is new to them, how can we collect feedback that truly moves the R2O cycle forward? They may not know where to attribute shortcomings, and the research team (which hopefully includes an analyst) and the technical developers will have to spend additional time tracing each issue to its source. This is even more likely when the product evaluation includes simulated or proxy data that may not be representative of future capabilities, adding a third layer of potential confusion.