What is in your suitcase? Why is that a challenge for R2O?

Almost everyone brings a suitcase when they travel. The size of the suitcase and its contents depend on the length of the trip, the destination, and the personal preferences of the traveler. Some people travel light while others travel with many different forms and options of attire, as well as varying amounts of personal items. In almost all suitcases, there will be shirts, pants, socks, shoes, and toiletries, but one person’s suitcase may have dozens of different shirts while another may have just a few.

If you examine how practitioners perform their jobs and specific tasks, you will find that there are many different ways to create the same deliverable, or information product, such as the weather forecast a meteorologist produces. Some practitioners use a few reliable tools or sources for crafting their deliverables while others elect to have a broader selection at their fingertips, even if they do not use each item regularly. This is no different from the suitcases, each with some of the same types of items mixed with a few unique items.

It is not a matter of right or wrong, but instead an example of how different people can approach the same task (or trip) in different ways. There may be rare instances in which someone uses too few or too many tools or sources, and in those extreme scenarios quality can suffer, but that is generally the exception. Few people pack for a weeklong trip with only a single shirt and pair of pants, or haul more than one or two suitcases when departing for a few days.

In the context of R2O demonstrations, these individual preferences can make determining the value of new research byproducts more challenging. This is because the benefit of a new innovation (such as a tool, source, or other research byproduct) relative to one practitioner’s set of tools and sources may differ from its benefit relative to another’s. To extend the suitcase metaphor, suppose two travelers were headed to a tropical island: one traveler packed light button-down shirts while the other packed polyester t-shirts. If we introduced a polo shirt with breathable, moisture-wicking fabric to both travelers, it may be more beneficial to the traveler who packed the polyester t-shirts. But suppose neither traveler packed any hats, in which case a baseball cap would be a welcome addition to each suitcase.

Some innovations are incremental improvements while others are new and unique, and that distinction will affect the perceived value during a demonstration, depending on the scenarios and the practitioners. The role or function of certain demonstrated research byproducts may also differ relative to each other and to the existing set of tools or sources that a practitioner actively uses. Or one practitioner may find a function for a research byproduct that another does not.

These are important reasons why R2O demonstrations must involve multiple different practitioners, particularly those with diverse backgrounds, experiences, and skills. Some practitioners may find new innovations more suited for their particular workflow, or they may be more open to changing their workflow, particularly if they recognize challenges with creating a quality deliverable. Some may like to have many different options available to them, and therefore readily add a new innovation to their toolkit or workflow, while others may prefer to focus on traditionally successful (i.e., “tested and true”) approaches.

The reason there are different successful workflows is that a workflow can be optimized in a certain way without necessarily compromising the quality of the deliverable. These optimizations are based on the preferences and knowledge of each practitioner. Some workflows may be longer than others because thoroughness may be valued over speed, and vice versa. There are always external constraints on workflows, just as there are constraints on R2O.

For example, the budget is an omnipresent constraint, but there are also the issues of relevance and time. There has to be some reasonable judgment on how to constrain a workflow or a given R2O demonstration so that practitioners are not so overwhelmed with options that the demonstration is no longer a practical way to assess the value of disparate innovations. At some point, the suitcase is full. And just as shirts, pants, and socks are usually not packed in random order, trying to demonstrate too much at once makes it difficult to prove value.

At that point of capacity, the individual value of each item inside and outside the suitcase should be reconsidered. There is no need for too many similar items. As part of some demonstrations, it is prudent to consider the unique merits of each tool or source relative to the different practitioner workflows. In adding one shirt to a suitcase, another one may need to be removed. That is why R2O is not always about adding new products, but also about replacing or removing them, which can be difficult if some practitioners have incorporated them into their workflows while others have not, or have willingly moved on.

In the end, people planning for the same trip will have different types of suitcases and different items in their suitcases, but there will be common or similar items between them, in value and/or function. You know everything in your suitcase and how to use or wear it, but the same might not be true of someone else’s. The preferences for tools and sources that practitioners use to do their jobs are no different. Quality deliverables might arise from different methods with some similar elements. Assessing the value of a new innovation requires keeping that in constant consideration.


Preliminary, non-operational, but used operationally? Huh?

The ‘O’ in R2O is sometimes confounding. Whereas the realm of research is an expansive net for innovation, the concept of operations is more of a broadening spectrum. What constitutes operations? When is a research byproduct considered operational? Does an operational status confer consistency and reliability requirements? Must training be available for operations before a research byproduct is operational? Who decides whether a transitioned research byproduct should be used operationally?

These are not easy questions to answer, and the answers may vary depending on the nature of the project and byproduct, but there are some guidelines that can make it easier to assess where in the R2O cycle a transitioned item sits in its readiness for application to an organization’s deliverables and services.

First, it is important to define the portion of an organization or enterprise that constitutes “operations” at the onset of the research—before it is ready for any demonstration. There are sometimes two different visions of operations downstream. Systems engineers will see “operations” as the system or set of systems responsible for creating a research byproduct in a consistent and reliable manner, along with the personnel that support that infrastructure. Their reality of operations, as unsettling as it may be, is less about the practitioner and more about the upstream computational resources that must deliver a transitioned item based on pre-established metrics of timeliness and availability.
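
To make the systems engineering view concrete, here is a minimal sketch of the kind of timeliness and availability metrics described above. The function names and the example targets are illustrative assumptions, not drawn from any real operational specification:

```python
from datetime import timedelta

def availability(files_delivered: int, files_expected: int) -> float:
    """Fraction of expected product files that actually arrived."""
    return files_delivered / files_expected

def timeliness(latencies: list[timedelta], threshold: timedelta) -> float:
    """Fraction of deliveries that arrived within the latency threshold."""
    on_time = sum(1 for lat in latencies if lat <= threshold)
    return on_time / len(latencies)

# Hypothetical targets: 99.5% of files delivered, 95% within five minutes.
lats = [timedelta(minutes=m) for m in (2, 3, 4, 7, 3)]
print(availability(996, 1000) >= 0.995)                # True
print(timeliness(lats, timedelta(minutes=5)) >= 0.95)  # False: one slow delivery
```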

Because these stringent metrics usually require a sustained level of funding, and because software development is necessary to formally implement transitioned items, this concept of operations can amplify a “speed bump” in the R2O cycle into an impenetrable barrier, especially absent strategic planning well ahead of an item reaching this stage. For this reason, using systems engineering metrics as a proxy for an item’s operational status is typically less than ideal, at least for the purpose of making it available for demonstration to practitioners. After all, the genesis of quality R2O is collaboration, so impediments to the interaction between research and operations, and to the evaluation of transitioned items, should be minimized to the greatest practical extent.

With few exceptions, practitioners are the targets or receivers of transitioned items in the R2O cycle. In this sense, “operations” is the customer-facing presence in an organization, and potentially the second tier, or “back line,” of support. This allows, perhaps paradoxically, for pre-operational research byproducts to be used operationally, even in a demonstration capacity. In other words, transitioned items that still run on a development platform may be used to support decisions for customers. But is this always appropriate?

The question in this scenario is whether a transitioned item is stable enough, and easy enough to interpret or use, without formal or recurring training. If a practitioner can determine when a research byproduct is behaving outside of its prescribed quality specifications such that it can be ignored, then operational use can only yield a benefit. If allowing practitioners to make that delineation requires training, then training must be a necessary component of the operationalization effort.
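
The guideline above can be reduced to a simple decision rule. The sketch below is a hypothetical illustration (the type and field names are ours, not from any operational policy):

```python
from dataclasses import dataclass

@dataclass
class ResearchByproduct:
    """A transitioned item being considered for operational use."""
    name: str
    stable: bool              # behaves consistently on its (development) platform
    specs_self_evident: bool  # practitioners can tell unaided when output is out of spec
    training_provided: bool   # training teaches practitioners to make that delineation

def may_use_operationally(item: ResearchByproduct) -> bool:
    """Pre-operational use can only yield a benefit if practitioners can
    recognize, and ignore, output that falls outside quality specifications,
    either unaided or through training bundled with the operationalization."""
    return item.stable and (item.specs_self_evident or item.training_provided)

# A stable demonstration product with obvious quality flags needs no training.
print(may_use_operationally(ResearchByproduct("demo imagery", True, True, False)))  # True
```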

A lack of clarity about what is operational and what can be used operationally has led to real confusion. When imagery from the first satellite in the Geostationary Operational Environmental Satellite R-Series (GOES-R) became available via direct broadcast services, there were competing concepts of operations. Should practitioners use imagery from a new satellite for real weather analysis and forecast challenges? The satellite was still undergoing a necessary checkout period to ensure quality calibration and navigation. Since imagery was routinely available, some meteorologists took to social media to decry the mandated “preliminary, non-operational” tag as a punch line to the same bad joke. And who could blame them? If imagery from GOES-R could meet their operational needs and improve the deliverables or services provided to their customers, why should the operational status of the satellite dictate whether it could be used? Fortunately, the National Weather Service assured its staff that this pre-operational data could be used operationally so long as it was available.

Let the value proposition guide your operational assessment. What availability metrics say about operational status should matter less than how much value an item adds to the practitioner decisions that ultimately reach the users or customers. While poor consistency and reliability may be detrimental to the R2O cycle, overly stringent definitions can withhold benefits. Achieving a balance is important, and establishing that expectation early in the R2O cycle is paramount. Letting both “ends” of the R2O process decide the best approach to the operational transition timeline is ideal. After all, “research” generates the value and “operations” captures, realizes, and exploits it for the betterment of the organization or enterprise.


Peer review and R2O

One area of increasing confusion is the difference between the scientific peer review and research to operations processes, and their relative value for a new research byproduct. To some extent, that value depends on the organization or enterprise. But peer review and R2O are two separate processes, and the comparison between them is a matter of “apples and oranges”. They both have important roles but serve different purposes. Scientific peer review exists to make sure that a given research byproduct is valid in the realm of science from which it was derived. While science evolves, the peer review process has a defined starting and ending point, where the end is usually a publication that can be further scrutinized by the broader scientific community. The R2O process, typically internal to an organization, evolves with the progress of a research byproduct and sometimes does not reach a conclusion as it iterates over the lifecycle of a byproduct in operational status.

Allow me to illustrate the difference between peer review and R2O using a common household appliance—the toaster. The research component is the development of the toaster. The transfer to operations is the manufacture and sale of that toaster to a consumer. The analogous peer review process is the engineering analysis of the appliance. For example, there are inevitably certain checks necessary to make sure that the toaster is safe to use, does not get too hot, and does not cause a fire, among other things. In comparison, the R2O process for the toaster would ensure that the toaster meets the needs of the likely consumers, through focus groups or other evaluations. It could be a perfectly safe toaster, but if the lever is stiff and difficult to operate, or the slots are not large enough to fit a bagel, then that is a failure of the R2O process. A failure in R2O suggests that the toaster is not functional in an operational sense, even if it is engineered to be safe and energy efficient. Of course, if it causes a fire, a failure of the “peer review,” then it is neither safe nor useful.

To that end, scientific validity can be proven through peer review, but even if a research byproduct makes it to operations before peer review is complete, practitioners will most likely notice glaring scientific shortcomings if they exist. If the toaster catches fire, there will be few if any additional consumers. The same applies to R2O: if a byproduct is not scientifically valid, practitioners will realize that and not use it any further, because it does not add any value. Similarly, it is possible that peer review may not surface a sufficient amount of criticism initially. Shortcomings missed there may be caught over time through continual evaluation within the R2O process, if that process incorporates heavy evaluation from users.

For this reason, the relationship between peer review and R2O is a tenuous one. Not every component of a research byproduct is necessarily subject to peer review, or even can reasonably be peer reviewed. Sometimes peer review constraints are greater than the demands from the R2O process or practitioners. Or certain components that are quite important to operations are not evaluated during peer review at all. Science is only a part of the research byproduct transitioned to operations; how it is displayed and the manner in which it is applied to operational problems are unique challenges that peer review will generally not resolve, but that R2O processes can carefully examine.

While R2O processes can be developed to be onerous, with numerous checks for compliance with information technology standards, budgets, and so forth, that is ultimately unnecessary. Such checks can slow R2O, particularly when compounded by bureaucracy. Adherence to elements of a specific R2O process unrelated to the development of the byproduct and the practitioner experience is not essential to prove operational worthiness or scientific validity. There is no perfect recipe for how many R2O demonstrations over what period of time are necessary to establish a certain degree of rigor, just as peer review is not a failsafe way to assure accurate and reproducible scientific results.

Both peer review and R2O have their place and importance in whatever form they take. One does not need to occur before the other, and sometimes one is more beneficial than the other. When both processes are invoked, they can produce the most desirable and defensible research and resultant byproducts that are useful for practitioners, but organizations and enterprises must consider which elements of these processes are necessary to achieve their overall mission and objectives.


Knowledge, skills, and mental representations

It can be confounding that the research byproducts that achieve a successful R2O transition and become routinely useful in an operational environment do not necessarily share specific attributes. While there are certain elements common to the R2O process and to how a product is presented that increase the likelihood of a successful transition, there is no common strategy to pursue that guarantees a winner. Transitioned research byproducts are composed of scientific knowledge such that they present new information in a way that serves as a basis for making, or at least altering, operational decisions. Knowledge and information alone are a small piece of the “puzzle” for the practitioner. Retention of knowledge and information is difficult, even in the short term, but it becomes even more challenging if the practitioner has to reason about how to apply it – that is, finding out how it “fits”.

This is abstract because it is quite primitive: it relates to how humans learn and think. The application of information requires the practitioner to possess a certain degree of skill with regard to making the relevant decision. In demonstrating that skill, the practitioner will have a certain mental representation of the components and factors relevant to the operational task that he or she is performing. This mental representation is the “puzzle”. Not all practitioners will have the same mental representations. These mental representations will likely vary, at least by experience, education, or the time of entry into the field. This is because scientific fields are continually evolving and the demands for practitioners to perform certain tasks as part of their duties change over time. One major forcing of change in mental representations recently has been the substantial increase in data.

For this reason, there should be no expectation that the mental representations of the more experienced or the more recently educated are better in some way, or that R2O projects should focus on practitioners with less experience in order to “improve” them. Experience alone (that is, the cumulative amount of time doing something) is not necessarily proportional to the quality of the tasks that a practitioner performs if that individual has not attempted to improve. While the modern workplace tends to show some deference toward experience, most people have probably encountered the stale colleague who is resistant to change and does a fairly mediocre job, even without a metric to assess whether an experienced colleague is “better” than an ambitious young professional a few years out of school.

While perhaps the best transition projects in R2O are attractive to a wide range of practitioners with varying mental representations, certain projects may be more appealing to practitioners whose mental representations have gaps, or may offer a research byproduct that simplifies practitioners’ mental representations by removing a degree of uncertainty or adding clarity. The very best transition projects add value to expert mental representations, because those are the representations that push the functioning boundaries of the field or enterprise. A practitioner whose mental representation already has a sufficient amount of information to make a confident decision needs little further development or additional input.

Mental representations of tasks, particularly in scientific fields, are built around information from data and research byproducts. How practitioners use that information in arriving at a particular operational decision (through the practitioner’s mental representation) is not necessarily straightforward, especially when the data space is crowded with many competing pieces of information and no clear way to sort or weigh one thing relative to another. R2O transition processes must carefully assess, and attempt to quantify, how the operational decision changes as a result of a new input. It is not enough to expect additional data or a new research byproduct to lead to an improvement, even if it is more precise or novel in some way compared to existing sources. More information may simply prove onerous to incorporate given certain operational constraints (cost or time among them).

Just as there is great emphasis on creating meaningful research byproducts for practitioners based on scientific work, R2O initiatives should focus on improving the mental representations and skills of practitioners for certain operational tasks that are under scrutiny. This is not a matter of a starting point that is “good” or “bad”, but instead a place where there is an opportunity to be better based on the state of the science. For those expert practitioners, this means becoming the best, and expanding the horizons of the entire enterprise in the process.

Further reading for those interested: the book “Peak: Secrets from the New Science of Expertise”, by Anders Ericsson and Robert Pool, discusses how deliberate practice builds expertise and how expertise is the result of superior mental representations, almost regardless of the discipline.


Keeping new datasets in step with modern organizations using R2O

Underlying transitions from research to operations are typically some critical sources of raw data or observations that can further an operational objective or priority when provided to practitioners and applied to the mission. In the early stages, there is often at least some degree of research involvement, either in setting the specifications of the instruments that will take new observations or in establishing concepts and performance specifications for potential derived research byproducts from those observations. Successful iterations of the R2O cycle consume a few years at most. What happens, then, when an observing system or new dataset procurement takes a decade or longer?

Consider this case study. The Geostationary Operational Environmental Satellite R-Series (GOES-R) launched in November 2016 after years of planning and preparation. As a weather satellite, GOES-R will be able to provide operational meteorologists with imagery of atmospheric patterns and cloud features at a temporal, spatial, and spectral resolution substantially greater than that of the “legacy” GOES and its instruments. The ground segment will also create products, formulated from past research, that are composited or derived from the imagery. Those will also be provided to meteorologists in the field to complement the raw imagery.

However, the National Weather Service (NWS) established the requirements for GOES-R in 1999. In the years that have passed, the derived products have been formulated but, most notably, the NWS has evolved as an organization. The forecaster (practitioner) workflow has changed with increases in other observational data and numerical weather prediction model output. There have also been substantial improvements in information technology such that the methods of storing and visualizing weather observations and communicating weather forecasts to the public have evolved too.

So, when it is time to validate the new imagery and products from GOES-R against the NWS requirements, should everyone pretend it is 1999? With a strong R2O cycle in place, there is the opportunity to refresh the requirements in the context of today’s research and operational environments. To put it another way, the users of imagery and derived products from GOES-R should not have to settle for a product with accuracy and precision specifications based on what was attainable in 1999. Change is inevitable. Certain products may hold more promise than others to meet better specifications, and to find more users, in today’s research and operational contexts. Other observations may have since filled related needs, or model analyses may now more reliably produce a representation of the atmosphere that a satellite simply cannot provide. Instead, focusing on what is realistic and attainable with the new instruments and the state of the research in 2016 and the years ahead should be the absolute priority.

The evolution of requirements and priorities can be summarized in three stages over the lifespan of an observation:

  1. Establish requirements to set the instrument specifications based on the best concept of future users’ needs.
  2. Set updated “lifetime” requirements when the data is first available, with consideration of the organization’s present and future mission.
  3. Update priorities to refocus attention on where the observing system can add value today, shifting away from validating all previous requirements evenly.

It is easy to get caught pairing the instrument specifications with the requirements for the entire lifespan of a new observing system, because the instruments and other components were initially built to meet the requirements. And with current procurement timelines spanning at least several years, if not a decade for major acquisitions, there is little that can be done to change that. But once the new observations are available, the paradigm should change, and hopefully there is room for growth through either the instrumentation exceeding its specifications and providing higher quality data, or the scientific community providing new methods for using the collected data. To enable this, there has to be some flexibility to update software components of the system on timelines more consistent with R2O transitions (weeks or months, not years). In general, and with the right architecture, it should not cost more, or at least not substantially more, to update the software.
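
One way to get that flexibility, sketched here under our own assumptions rather than as a description of any real ground segment, is to keep derived-product algorithms behind a small registry so a new retrieval can replace an old one without rebuilding the surrounding system:

```python
from typing import Callable, Dict

# Hypothetical registry: product name -> algorithm. Replacing an algorithm
# is a one-line registration, not a redesign of the processing system.
ALGORITHMS: Dict[str, Callable[[dict], dict]] = {}

def register(product: str) -> Callable:
    """Decorator that installs (or replaces) the algorithm for a product."""
    def wrap(fn: Callable[[dict], dict]) -> Callable[[dict], dict]:
        ALGORITHMS[product] = fn
        return fn
    return wrap

@register("cloud_top_height")
def cth_v1(scene: dict) -> dict:
    # Placeholder science for illustration only.
    return {"cth_km": 12.0 - 0.010 * (scene["brightness_temp_k"] - 200.0)}

# Months later, research delivers an improved retrieval; registering it under
# the same product name upgrades operations on an R2O timeline.
@register("cloud_top_height")
def cth_v2(scene: dict) -> dict:
    return {"cth_km": 12.5 - 0.012 * (scene["brightness_temp_k"] - 200.0)}

print(ALGORITHMS["cloud_top_height"]({"brightness_temp_k": 220.0}))  # runs cth_v2
```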

Fixed hardware and a static observing system are rare in an era when organizations, enterprises, missions, and requirements are continually evolving, especially at today’s rapid pace. But where that is the reality (as with weather satellites), past priorities should be dismissed as old requirements are updated through demonstrations in proving grounds and testbeds. These forums, which involve practitioners directly in the R2O process, can be used to establish a new benchmark for operational functions of the raw data and derived research byproducts from an observing system or dataset.

Establishing metrics that track progress toward the updated requirements from the new benchmark can show where there is potential for added value over what the current capabilities, research, and configuration provide. After all, R2O is about capturing the added value. A combination of metrics and new requirements should turn the cogs of the proverbial R2O wheel for the lifespan of the observing system or dataset, and potentially help to recognize limitations that can inform the hardware or other requirements for the next enhancement to the dataset.
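
As a toy illustration of such a metric (the numbers and the choice of score are our own), the same operational decision can be scored with and without the new input using any standard categorical verification statistic, such as the critical success index:

```python
def critical_success_index(hits: int, misses: int, false_alarms: int) -> float:
    """CSI = hits / (hits + misses + false alarms), a standard score
    for yes/no forecast decisions such as warning issuance."""
    return hits / (hits + misses + false_alarms)

# Hypothetical verification counts over the same set of events.
baseline = critical_success_index(hits=42, misses=18, false_alarms=25)  # without new product
upgraded = critical_success_index(hits=51, misses=9, false_alarms=22)   # with new product

print(f"baseline CSI {baseline:.2f} -> upgraded CSI {upgraded:.2f} "
      f"(added value {upgraded - baseline:+.2f})")
```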

In all cases, though, ideas should not be taken at face value. Whether they are research ideas, software ideas, or evolutionary hardware ideas, an idea should only serve as the impetus for subsequent analysis. There should be methods for testing the validity of a proposed concept and establishing metrics and requirements accordingly. Ideas are opinions that result from applying what “we” know now to resolve a current or future challenge. It can be costly not to verify the soundness of an idea in an R2O cycle that involves a major or novel set of observations or other data.

Managing a data collection or observing system means owning the requirements and keeping the R2O cycle as modern as the organization or enterprise it serves. Keeping R2O processes nimble and efficient means knowing when the requirements are out of date. Otherwise, there can be a waste of time, or worse, a loss of faith in the processes that can bear tomorrow’s fruit.
