Oversight Tradeoffs

This article discusses some of the tradeoffs involved in determining the type and level of oversight for a Distributed MBD project.

This article is a follow-on to my previous article on the types of oversight for simulation projects. For these articles, I have defined technical oversight as "input from outside the simulation team to control simulation correctness".

As with any other engineering effort, there are tradeoffs when choosing the types of oversight for a particular Distributed MBD project. The most commonly encountered tradeoffs are:

  • Budget & Schedule vs. Correctness
  • Stakeholder Insight vs. Simplicity
  • Process vs. Flexibility

Budget & Schedule vs. Correctness

This is the most obvious tradeoff in oversight. Stakeholders want results quickly, and they also want evidence of correctness. Developing evidence of correctness is time-consuming:

  • Design decisions can be made at any level in a project, and the high-level decisions usually have a larger impact. Deciding what level of decisions should be documented is a tradeoff between schedule and correctness. Requiring documentation of low-level decisions is time-consuming, but not documenting them increases the risk of an error occurring at a lower level.
  • Simulation teams should be testing for correctness, which can be quite time-consuming on its own. However, capturing the test results into a form that can be distributed to stakeholders can add significantly to the effort (see the sketch after this list). Deciding what evidence is required, and the form in which it is presented, is a tradeoff.
  • Gathering the pedigree of simulations and their calibration data, and confirming their correctness, often involves organizations outside of the simulation team. Because of this, gathering pedigree evidence can become a time-consuming political issue.
  • In my experience, Independent Verification and Validation (IV&V) efforts on simulation projects can become political. The involved engineers tend to protect their decisions, and often won’t accept an approximation that introduces only negligible errors. While IV&V is a great way to confirm correctness, the politics of it can cost time, and it requires budgeting for an independent team.
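
To make the evidence-capture effort concrete, here is a minimal sketch of a correctness test that also records its result in a form a stakeholder could read. Everything in it (run_simulation, the reference solution, the tolerance, the output file) is a hypothetical placeholder, not an artifact of any real project:

    import json
    import numpy as np

    def run_simulation():
        # Hypothetical stand-in for the real model; returns a time history.
        t = np.linspace(0.0, 10.0, 101)
        return t, np.exp(-0.5 * t)

    def test_against_reference():
        # Compare the simulated response to reference (e.g., calibration) data.
        t, y = run_simulation()
        y_ref = np.exp(-0.5 * t)       # placeholder reference data
        max_err = float(np.max(np.abs(y - y_ref)))
        passed = max_err < 1e-6        # tolerance chosen for illustration only

        # The added effort: capture the result in a form stakeholders can absorb.
        with open("evidence_summary.json", "w") as f:
            json.dump({"test": "decay_response_vs_reference",
                       "max_abs_error": max_err,
                       "tolerance": 1e-6,
                       "passed": passed}, f, indent=2)
        assert passed

    if __name__ == "__main__":
        test_against_reference()

Even in a toy harness like this, the comparison itself is only a few lines; deciding what to record, in what form, and for which stakeholders is the open-ended part.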

Stakeholder Insight vs. Simplicity

Stakeholders need to confirm that their needs are being met, and simulation teams generally want to meet those needs. However, demonstrating this to every stakeholder can cost the simulation team time:

  • Producing evidence and presenting it in a form that stakeholders can absorb can be complicated. Typically, most of the documentation is relatively straightforward. However, determining which edge cases to cover, and then describing and documenting them, can be quite tricky because the consumer might not be familiar with those cases. Less documentation is easier, but stakeholders might not get the insight they need to ensure their needs are being met.
  • Too often, stakeholders request more evidence than is necessary to meet their needs. Sometimes they worry about scenarios that can’t occur, overestimate the importance of some details, or worry about topics that do not affect their requirements. When this occurs, the choices are to push back or to produce the additional material. Both choices consume time and resources*.

Process vs. Flexibility

The amount and type of process imposed on a Distributed MBD project from outside can affect correctness in multiple ways:

  • Process is a mechanism for the project leadership to control the performance and execution of an M&S project. With too much process, team members will not have the flexibility to adjust as new needs arise. With too little process, leadership will spend too much time on hands-on management of the team and the project.
  • By reducing flexibility, too much process can harm correctness. If it is easier for engineers to shoehorn a hack into a bad design than it is to revisit the design, you just might get hacked-up code. If an error requires too much effort to correct, engineers might be reluctant to point out their (or their colleagues’) errors.
  • Process imposed from outside is especially prone to mismatch with the team. For example, if leadership unilaterally declares that the project must be "agile" without the proper setup and support, it will likely end up as Cargo Cult Agile, which is worse than no process at all. A team that is skilled in DevOps might not do well with agile, and an agile team might not do well with DevOps.

You are the Expert

In these last three articles, I’ve discussed some of the considerations when applying oversight to an M&S project. The point I want you to take from this is that there is no general-purpose ‘right’ way to do it. Instead, it depends on your particular circumstances.

I’ve read other articles that discuss one approach or another and claim that it is the right approach for all projects. I strongly disagree with any claim that one approach is universally better than another. Instead, I firmly believe that what is best for your project depends on many factors, some of which I’ve discussed. You know your project and your team better than anybody else does, so you need to make sure that the oversight on your project is appropriate for your team.


* There is a trick to reducing unnecessary feedback from reviewers: just provide something that nobody needs. Most reviewers will make sure their needs are met and also tell you to remove the extraneous bit; reviewers with nothing else to say will simply tell you to remove it. Either way, they feel like they are doing their job, so they feel less need to search for minutiae. For example: select two unrelated time-varying outputs from your simulation, cross-plot them, give the plot a title that makes it seem plausible but not important, and include the plot in your documentation.
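
A minimal sketch of such a decoy cross-plot, assuming the simulation already logs its outputs as time histories (every variable name below is an arbitrary placeholder):

    import numpy as np
    import matplotlib.pyplot as plt

    # Placeholder time histories standing in for two unrelated simulation outputs.
    t = np.linspace(0.0, 100.0, 1001)
    coolant_temp = 290.0 + 5.0 * np.sin(0.10 * t)   # output #1 (K)
    bus_voltage = 28.0 + 0.2 * np.cos(0.37 * t)     # output #2 (V)

    # Cross-plot one output against the other and give it a plausible title.
    plt.plot(coolant_temp, bus_voltage)
    plt.xlabel("Coolant Temperature (K)")
    plt.ylabel("Bus Voltage (V)")
    plt.title("Bus Voltage Sensitivity to Coolant Temperature")
    plt.savefig("crossplot_for_review.png")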
