Preliminary, non-operational, but used operationally? Huh?

The ‘O’ in R2O is sometimes confounding. Whereas research casts an expansive net for innovation, the concept of operations spans a broad and often ill-defined spectrum. What constitutes operations? When is a research byproduct considered operational? Does an operational status confer consistency and reliability requirements? Must training be available for operations before a research byproduct is operational? Who decides whether a transitioned research byproduct should be used operationally?

These questions are not necessarily easy to answer, and the answers may vary with the nature of the project and byproduct, but some guidelines can make it easier to assess where in the R2O cycle a transitioned item sits and how ready it is for application to an organization’s deliverables and services.

First, it is important to define the portion of an organization or enterprise that constitutes “operations” at the outset of the research—before anything is ready for demonstration. There are often two different visions of operations downstream. Systems engineers see “operations” as the system or set of systems responsible for producing a research byproduct consistently and reliably, and the personnel that support that infrastructure. Their reality of operations, as unsettling as it may be, is less about the practitioner and more about the upstream computational resources that must deliver a transitioned item according to pre-established metrics of timeliness and availability.

Because these stringent metrics usually require sustained funding, and formal implementation of transitioned items requires software development, this concept of operations can amplify a “speed bump” in the R2O cycle into an impenetrable barrier, especially absent strategic planning well before an item reaches this stage. For this reason, using systems engineering metrics as a proxy for an item’s operational status is typically less than ideal, at least for the purpose of making it available for demonstration to practitioners. After all, the genesis of quality R2O is collaboration, so impediments to the interaction between research and operations, and to the evaluation of transitioned items, should be minimized to the extent practical.

With few exceptions, practitioners are the targets or receivers of transitioned items in the R2O cycle. In this sense, “operations” is the customer-facing presence in an organization, and potentially the second tier, or “back line,” of support. This allows, perhaps paradoxically, for pre-operational research byproducts to be used operationally, even in a demonstration capacity. In other words, transitioned items that still run on a development platform may be used to support decisions for customers. But is this always appropriate?

The question in this scenario is whether a transitioned item is stable enough, and easy enough to interpret or use, without formal or recurring training. If a practitioner can determine when a research byproduct is behaving outside of its prescribed quality specifications, such that it can be ignored, then its operational use can only yield a benefit. If making that delineation requires training, then training must be a necessary component of the operationalization effort.

A lack of clarity about what is operational and what can be used operationally has led to real confusion. When imagery from the first satellite in the Geostationary Operational Environmental Satellite R-Series (GOES-R) became available via direct broadcast services, there were competing concepts of operations. Should practitioners use imagery from a new satellite for real weather analysis and forecast challenges? The satellite was still undergoing a necessary checkout period to ensure quality calibration and navigation. Since imagery was routinely available, some meteorologists took to social media to decry the mandated “preliminary, non-operational” tag as the punch line to a bad joke. And who could blame them? If imagery from GOES-R could help meet their operational needs and improve the deliverables and services reaching their customers, why should the operational status of the satellite dictate whether it could? Fortunately, the National Weather Service assured its staff that this pre-operational data could be used operationally so long as it was available.

Let the value proposition guide your operational assessment. What availability metrics say about operational status matters less than how much value an item adds to the practitioner decisions that ultimately reach users or customers. While poor consistency and reliability can be detrimental to the R2O cycle, overly stringent definitions can withhold benefits. Achieving a balance is important, and establishing that expectation early in the R2O cycle is paramount. Letting both “ends” of the R2O process decide the best approach to the operational transition timeline is ideal. After all, “research” generates the value and “operations” captures, realizes, and exploits it for the betterment of the organization or enterprise.


Peer review and R2O

One area of increasing confusion is the difference between the scientific peer review and research to operations processes, and their relative value for a new research byproduct. To some extent, that value depends on the organization or enterprise. But peer review and R2O are two separate processes, and the comparison between them is a matter of “apples and oranges”. They both have important roles but serve different purposes. Scientific peer review exists to make sure that a given research byproduct is valid in the realm of science from which it was derived. While science evolves, the peer review process has a defined starting and ending point, where the end is usually a publication that can be further scrutinized by the broader scientific community. The R2O process, typically internal to an organization, evolves with the progress of a research byproduct and sometimes does not reach a conclusion as it iterates over the lifecycle of a byproduct in operational status.

Allow me to illustrate the difference between peer review and R2O using a common household appliance—the toaster. The research component is the development of the toaster. The transfer to operations is the manufacture and sale of that toaster to consumers. The analogous peer review process is the engineering analysis of the appliance. For example, there are inevitably certain checks necessary to make sure that the toaster is safe to use and does not get too hot or cause a fire, among other things. In comparison, the R2O process for the toaster would assure that the toaster meets the needs of likely consumers, through focus groups or other evaluations. It could be a perfectly safe toaster, but if the lever is stiff and difficult to operate, or the slots are not large enough to fit a bagel, then that is a failure of the R2O process. A failure in R2O suggests that it is not functional in an operational sense, even if it is engineered to be safe and energy efficient. Of course, if it causes a fire, a failure of the “peer review”, then it is neither safe nor useful.

To that end, scientific validity can be established through peer review, but even if a research byproduct makes it to operations before peer review is complete, practitioners will most likely notice glaring scientific shortcomings if they exist. If the toaster catches fire, there will be few if any additional consumers. The same applies to R2O: if a byproduct is not scientifically valid, practitioners will realize it and stop using it, because it does not add value. Conversely, peer review may not surface a sufficient amount of initial criticism; that may be caught over time through continual evaluation within the R2O process, if the process incorporates thorough evaluation from users.

The relationship between peer review and R2O is therefore a tenuous one, because not every component of a research byproduct is subject to peer review, or even can reasonably be peer reviewed. Sometimes peer review constraints are greater than the demands from the R2O process or practitioners. Or certain components that are quite important to operations are not evaluated during peer review at all. Science is only a part of the research byproduct transitioned to operations; how it is displayed and the manner in which it is applied to operational problems are unique challenges that peer review will generally not resolve, but R2O processes can carefully examine.

While R2O processes can be developed to be onerous, with numerous checks for compliance with information technology standards, budgets, and so forth, that is ultimately unnecessary. It can slow R2O, particularly when compounded by bureaucracy. Adherence to elements of a specific R2O process unrelated to the development of the byproduct and the practitioner experience is not essential to prove operational worthiness or scientific validity. There is no perfect recipe for how many R2O demonstrations, over what period, are necessary to establish a certain degree of rigor, just as peer review is not a failsafe way to assure accurate and reproducible scientific results.

Both peer review and R2O have their place and importance in whatever form they come. One does not need to occur before the other, and sometimes one is more beneficial than the other. When both processes are invoked, they can produce the most desirable and defensible research and resultant byproducts that are useful for practitioners, but organizations and enterprises must consider which elements of these processes are necessary to achieving their overall mission and objectives.


Knowledge, skills, and mental representations

It can be confounding that research byproducts that achieve a successful R2O transition and become routinely useful in an operational environment do not necessarily share specific attributes. While certain elements common to the R2O process, and to how a product is presented, increase the likelihood of a successful transition, there is no common strategy that guarantees a winner. Transitioned research byproducts embody scientific knowledge such that they present new information in a way that serves as a basis for making, or at least altering, operational decisions. But knowledge and information alone are a small piece of the “puzzle” for the practitioner. Retention of knowledge and information is difficult, even in the short term, and it becomes even more challenging when the practitioner has to reason about how to apply it – that is, finding out how it “fits”.

This is abstract because it touches something primitive: how humans learn and think. Applying information requires the practitioner to possess a certain degree of skill in making the relevant decision. In demonstrating that skill, the practitioner draws on a mental representation of the components and factors relevant to the operational task he or she is performing. This mental representation is the “puzzle”. Not all practitioners will have the same mental representations; they will vary, at least by experience, education, or time of entry into the field, because scientific fields continually evolve and the demands on practitioners to perform certain tasks change over time. One major recent driver of change in mental representations is the substantial increase in data.

For this reason, there should be no expectation that the mental representations of the more experienced, or the more recently educated, are better in some way, or that R2O projects should focus on practitioners with less experience in order to “improve” them. Experience alone (that is, the cumulative amount of time doing something) is not necessarily proportional to the quality of the tasks a practitioner performs if that individual has not attempted to improve. The modern workplace tends to show some deference toward experience, but even without a metric to assess whether an experienced colleague is “better” than an ambitious young professional a few years out of school, most people have probably encountered the stale colleague who resists change and does a fairly mediocre job.

While perhaps the best transition projects in R2O are attractive to a wide range of practitioners with varying mental representations, certain projects may be more appealing to practitioners whose mental representations have gaps, or for whom a research byproduct in some way simplifies the mental representation by removing a degree of uncertainty or adding clarity. The very best transition projects add value to expert mental representations, because those are the representations that push the functioning boundaries of the field or enterprise. A practitioner whose mental representation already contains sufficient information to make a confident decision needs little further development or additional input.

Mental representations of tasks, particularly in scientific fields, are built around information from data and research byproducts. And how practitioners use that information in arriving at a particular operational decision (through the practitioner’s mental representation) is not necessarily straightforward, especially when the data space is crowded with many competing pieces of information and no clear way to sort or weigh one piece relative to another. R2O transition processes must take care to assess, and carefully attempt to quantify, how the operational decision changes as a result of a new input. It is not enough to expect additional data or a new research byproduct to lead to an improvement, even if it is more precise or novel than existing sources. More information may simply prove onerous to incorporate under certain operational constraints (cost or time among them).

Just as there is great emphasis on creating meaningful research byproducts for practitioners based on scientific work, R2O initiatives should focus on improving the mental representations and skills of practitioners for certain operational tasks that are under scrutiny. This is not a matter of a starting point that is “good” or “bad”, but instead a place where there is an opportunity to be better based on the state of the science. For those expert practitioners, this means becoming the best, and expanding the horizons of the entire enterprise in the process.

Further reading for those interested: the book “Peak: Secrets from the New Science of Expertise”, by Anders Ericsson and Robert Pool, discusses how deliberate practice builds expertise and how expertise results from superior mental representations, almost regardless of the discipline.


Keeping new datasets in step with modern organizations using R2O

Underlying transitions from research to operations are typically some critical sources of raw data or observations that can further an operational objective or priority when provided to practitioners and applied to the mission. In early stages, there is often at least some degree of research involvement in either setting the specifications of the instruments that will take new observations, or establishing concepts and performance specifications for potential derived research byproducts from those observations. Most successful iterations of the R2O cycle consume a few years at the very most. What happens, then, when there are observing system or new dataset procurements that take a decade or longer?

Consider this case study. The Geostationary Operational Environmental Satellite R-Series (GOES-R) launched in November 2016 after years of planning and preparation. As a weather satellite, GOES-R will be able to provide operational meteorologists with imagery of atmospheric patterns and cloud features at temporal, spatial, and spectral resolutions substantially greater than those of the “legacy” GOES and its instruments. The ground segment will also create products, formulated from past research, that are composited or derived from the imagery. Those will also be provided to the meteorologists in the field to complement the raw imagery.

However, the National Weather Service (NWS) established the requirements for GOES-R in 1999. In the years that have passed, the derived products have been formulated but, most notably, the NWS has evolved as an organization. The forecaster (practitioner) workflow has changed with increases in other observational data and numerical weather prediction model output. There have also been substantial improvements in information technology such that the methods of storing and visualizing weather observations and communicating weather forecasts to the public have evolved too.

So, when it is time to validate the new imagery and products from GOES-R against the NWS requirements, should they pretend it is 1999? With a strong R2O cycle in place, there is the opportunity to refresh the requirements in the context of today’s research and operational environments. To put it another way, the users of imagery and derived products from GOES-R should not have to settle for a product with accuracy and precision specifications based on what was attainable in 1999. Change is inevitable. Certain products may hold more promise to meet better specifications than others, and find more users, in today’s research and operational contexts. There may be other observations that have since filled related needs or model analyses that may be more reliable for producing a representation of the atmosphere that a satellite simply cannot provide. Instead, focusing on what is realistic and attainable with the new instruments and state of the research in 2016 and the years ahead should be the absolute priority.

The evolution of requirements and priorities can be summarized in three stages over the lifespan of an observation:

  1. Establish requirements to set the instrument specifications based on the best concept of future users’ needs.
  2. Set updated “lifetime” requirements when the data is first available, with consideration of the organization’s present and future mission.
  3. Update priorities to refocus attention on where the observing system can add value today, shifting away from validating all previous requirements evenly.
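The three stages above can be sketched as a small bookkeeping structure. This is purely illustrative: the `Requirement` class, the error specifications, and the function names are hypothetical, not part of any actual GOES-R or NWS system.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class Requirement:
    """One observing-system requirement tracked across the three stages."""
    name: str
    procurement_spec: float                # stage 1: set years before launch (e.g., max error)
    lifetime_spec: Optional[float] = None  # stage 2: refreshed once real data exists
    priority: float = 1.0                  # stage 3: share of validation attention today

def refresh_lifetime_spec(req: Requirement, demonstrated: float) -> None:
    """Stage 2: adopt the better (smaller error) of the original spec and
    what the instrument actually demonstrates on orbit."""
    req.lifetime_spec = min(req.procurement_spec, demonstrated)

def reprioritize(reqs: List[Requirement], value_added: Dict[str, float]) -> None:
    """Stage 3: shift attention toward requirements that add the most value
    today, rather than validating all previous requirements evenly."""
    total = sum(value_added.values())
    for r in reqs:
        r.priority = value_added.get(r.name, 0.0) / total
```

Under this sketch, a requirement written to a 1.0 km error specification years before launch could be refreshed to 0.5 km if the instrument demonstrates that on orbit, and validation effort could then be weighted toward wherever value is actually being added.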

It is easy to get caught pairing the instrument specifications with the requirements for the entire lifespan of a new observing system, because the instruments and other components were initially built to meet those requirements. And with current procurement timelines spanning at least several years, if not a decade for major acquisitions, there is little that can be done to change that. But following the availability of the new observations, the paradigm should change, and hopefully there is room for growth, either through the instrumentation exceeding its specification and providing higher quality data, or through the scientific community providing new methods for using the collected data. To enable this, there must be some flexibility to update software components of the system on timelines consistent with R2O transitions (weeks or months, not years). In general, and with the right architecture, it should not cost more, or at least not substantially more, to update the software.

Fixed hardware and a static observing system are rare in an era where organizations, enterprises, missions, and requirements are continually evolving, especially at today’s rapid pace. But where that is a reality (such as with weather satellites), past priorities should be dismissed as old requirements are updated through demonstrations in proving grounds and testbeds. These forums, which involve practitioners directly in the R2O process, can be used to establish a new benchmark for operational functions of the raw data and derived research byproducts from an observing system or dataset.

Establishing metrics that support the progress toward updated requirements from the new benchmark can show where there is the potential for added value over what the current capabilities, research, and configuration provide. After all, R2O is about capturing the added value. A combination of metrics and new requirements should turn the cogs of the proverbial R2O wheel for the lifespan of the observing system or dataset and potentially help to recognize limitations that can set the hardware or other requirements on the next enhancement to the dataset.
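A minimal sketch of such a metric, with hypothetical function names and scores: the idea is that a new product earns priority only if it measurably improves practitioner decisions over the current capability, not merely because it differs from it.

```python
from typing import Sequence

def added_value(new_scores: Sequence[float], baseline_scores: Sequence[float]) -> float:
    """Mean improvement of a new product over the current capability,
    scored on the same set of practitioner decision cases."""
    if len(new_scores) != len(baseline_scores):
        raise ValueError("scores must cover the same decision cases")
    diffs = [n - b for n, b in zip(new_scores, baseline_scores)]
    return sum(diffs) / len(diffs)

def meets_updated_requirement(new_scores: Sequence[float],
                              baseline_scores: Sequence[float],
                              threshold: float) -> bool:
    """The new benchmark: progress counts only when the added value
    clears the threshold agreed on in the proving ground or testbed."""
    return added_value(new_scores, baseline_scores) >= threshold
```

The design choice worth noting is that the comparison is always against the current capability, so a product whose scores merely match the baseline contributes zero added value regardless of its novelty.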

In all cases, though, ideas (from research, operations, or elsewhere) should not be taken at face value. Whether they are research ideas, software ideas, or evolutionary hardware ideas, an idea should serve only as the impetus for subsequent analysis. There should be methods for testing the validity of a proposed concept and establishing metrics and requirements accordingly. Ideas are opinions resulting from applying what “we” know now to resolve a current or future challenge. It can be costly not to verify the merit of an idea in an R2O cycle that involves a major or novel set of observations or other data.

Managing a data collection or observing system means owning the requirements and keeping the R2O cycle as modern as the organization or enterprise it serves. Keeping R2O processes nimble and efficient means knowing when the requirements are out of date. Otherwise, there can be a waste of time, or worse, a loss of faith in the processes that can bear tomorrow’s fruit.


Listen to the community and refine R2O

Research to operations requires an astute and routine identification and evaluation of the stakeholders. The most efficient way to do this is to hold in-person conferences that pull unique participants from the breadth of researchers, practitioners, and leaders within, and beyond, the community. This means engaging colleagues beyond the geographic and disciplinary reaches of current R2O activities. There are usually many international partners and adjacent sectors that can provide a fresh perspective on how to optimize R2O activities based on their own trials. Taken to heart, these perspectives can challenge the status quo by reframing R2O discussions. After all, R2O is about collaboration. It is about the people involved more than the process that institutes and prescribes it.

It is not as easy as calling a meeting and buying some coffee and muffins though. Planning a conference and establishing an agenda that focuses on R2O necessitates placing a notable emphasis on who is in the audience, not who should be presenting. Conferences that seek to serve as a springboard for new R2O ideas or concepts for improving R2O itself must hear from those “in the trenches” instead of “on the hill”. Successful R2O is in its details.

There is a clear way to identify whether an agenda has strayed too far from the ideals of connecting people and exploiting opportunities. If the program on paper reads like an itemized organizational chart, with senior leadership presenting first, then mid-level management, and finally project managers, then the conference will be more about affirming the present than changing for the future. The audience is likely to consist of a mixture of researchers and practitioners who do not necessarily identify with the process-centric perspective that managers usually proffer anyway. Hearing internal community perspectives, as well as those from abroad and beyond, can stimulate a thorough discussion about future directions and energize everyone involved.

Conferences that connect both players in the R2O cycle must translate the discussion to a consensus so that the promising ideas for future R2O transition or process improvement are identified. Once they are identified, further conference interactions should aim to produce actionable partnerships and proposals. This requires less talking to the audience and more hearing from the audience, or giving the members of the audience the opportunity to talk amongst each other. In other words, the program should consist of fewer presentations and more town hall meetings, panel discussions, and breakout groups.

Beyond that, inviting international collaborators to partake in events at R2O conferences can assure a different perspective is shared. This could be particularly advantageous for R2O projects with substantial government involvement. After all, government processes can be resistant to change because there are no external market forces to demand it, even if budgets are tight. Yet, international government partners likely have similar challenges and may resolve them differently.

But it has to be easy for all of the R2O players to air their opinions and provide potential solutions. This is difficult to force, but encouraging willing conference attendees to address how they see the present climate and future evolution of the R2O environment through the lens of their own R2O projects, instead of simply describing what they are doing, is a major step. The “how” provides the details. There are endless examples of R2O, but examples, even successful ones, are not on their face telling of best practices.

Furthermore, from a science perspective, conferences, including those with international collaborations, can be beneficial to the entire community in producing the best research approaches that can then translate to improved research byproducts. Parallel algorithm development teams may be in competition, whether formally or informally, but there is joint interest in determining the merits of each solution, scientific or otherwise. The best criticism and ideas can sometimes come from the outside.

Bringing people together to build collaborations and expand the community requires putting that capacity to use to demonstrate its benefit. To that end, the largest community conference can still be unsuccessful if it is nothing more than presentations directed at an audience. Encouraging attendees to contribute to the forum and build their personal networks beyond their discipline, organization, or nation is essential for the growth and refinement of R2O. Listen to them.