Researchers can use simulations and simulated data for the following applications, among others:

  1. Given a theory about some phenomenon, we can create a simulation of that phenomenon using the rules laid out in the theory. Running the simulation produces a description of what would happen over time if the theory is correct.
  2. Extending #1, we can compare simulated outcomes with real-world data on the phenomenon. A close match strengthens the claim that the theory is a good explanation of the phenomenon; a poor match weakens it. This helps us evaluate how much to trust a particular theory: if simulations from a theory routinely fail to match observations of the phenomenon it claims to explain, we should begin to doubt that the theory is a good explanation.
  3. Many statistical methods require assumptions about the true data-generating process, or “model,” underlying the observed data being analyzed. In a simulation we need not assume what the true model is, because we program it ourselves. This lets us examine how accurate statistical methods are when their assumptions are violated.
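Application #3 can be made concrete with a small Monte Carlo. The sketch below (all parameters are illustrative choices of mine, not taken from any of the papers cited here) generates data with cluster-correlated errors and a true slope of zero, then counts how often a conventional OLS t-test, which assumes independent errors, falsely rejects at the 5% level. This is the issue studied by Bertrand et al. (2004), Petersen (2009), and Abadie et al. (2017): because we programmed the true model, we know the correct rejection rate should be about 0.05, so anything far above it is the method failing, not the data.

```python
import numpy as np

rng = np.random.default_rng(0)
G, m = 50, 20          # 50 clusters of 20 observations each (hypothetical sizes)
reps = 2000            # number of simulated datasets
rejections = 0

for _ in range(reps):
    x_g = rng.normal(size=G)            # regressor varies only across clusters
    u_g = rng.normal(size=G)            # shared within-cluster shock
    x = np.repeat(x_g, m)
    e = np.repeat(u_g, m) + rng.normal(size=G * m)
    y = e                               # true slope is zero: the null holds

    # OLS slope with intercept, and its conventional (iid-error) standard error
    xd = x - x.mean()
    b = xd @ y / (xd @ xd)
    resid = (y - y.mean()) - b * xd
    se = np.sqrt(resid @ resid / (len(y) - 2) / (xd @ xd))
    if abs(b / se) > 1.96:              # nominal 5% two-sided test
        rejections += 1

print(rejections / reps)  # empirical Type I error rate, well above the nominal 0.05
```

Because the errors within a cluster share the `u_g` component while the naive standard error treats every observation as independent, the t-statistics are badly overconfident and the test rejects a true null far more than 5% of the time; clustering the standard errors is the usual remedy discussed in the papers listed under #3.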

Examples of #1, simulation demonstrating a theory:

  • Cohen, M. D., March, J. G., & Olsen, J. P. (1972). A Garbage Can Model of Organizational Choice. Administrative Science Quarterly, 17(1), 1–25.
  • Burton, R. M., & Obel, B. (1980). A computer simulation test of the M-form hypothesis. Administrative Science Quarterly, 25(3), 457–466.
  • Cohen, M. D. (1984). Conflict and Complexity: Goal Diversity and Organizational Search Effectiveness. American Political Science Review, 78(2), 435–451.
  • Lant, T. K., & Mezias, S. J. (1990). Managing Discontinuous Change: A Simulation Study of Organizational Learning and Entrepreneurship. Strategic Management Journal, 11(4), 147–179.
  • March, J. G. (1991). Exploration and Exploitation in Organizational Learning. Organization Science, 2(1), 71–87.
  • Levinthal, D. A. (1997). Adaptation on Rugged Landscapes. Management Science, 43(7), 934–950.
  • Rivkin, J. W. (2000). Imitation of Complex Strategies. Management Science, 46(6), 824–844.
  • Rivkin, J. W., & Siggelkow, N. (2003). Balancing Search and Stability: Interdependencies Among Elements of Organizational Design. Management Science, 49(3), 290–311.
  • Ethiraj, S. K., & Levinthal, D. A. (2004). Bounded Rationality and the Search for Organizational Architecture: An Evolutionary Perspective on the Design of Organizations and Their Evolvability. Administrative Science Quarterly, 49(3), 404–437.
  • Lenox, M. J., Rockart, S. F., & Lewin, A. Y. (2006). Interdependency, Competition, and the Distribution of Firm and Industry Profits. Management Science, 52(5), 757–772.
  • Ethiraj, S. K., & Levinthal, D. A. (2009). Hoping for A to Z While Rewarding Only A: Complex Organizations and Multiple Goals. Organization Science, 20(1), 4–21.
  • Hernandez, E., & Menon, A. (2017). Acquisitions, Node Collapse, and Network Revolution. Management Science, (Articles in Advance).
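Several of the models above (Levinthal 1997; Rivkin 2000; Rivkin & Siggelkow 2003; Ethiraj & Levinthal 2004) study search over NK fitness landscapes. A minimal sketch of one-bit-flip local search on such a landscape, with hypothetical parameters of my own choosing (N = 8 policy choices, each interacting with K = 3 others), looks like this:

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 8, 3  # hypothetical: 8 binary choices, each payoff depends on K=3 neighbors

# Each choice i contributes a random payoff that depends on its own state
# and the states of its K neighbors (2**(K+1) possible configurations).
tables = rng.random((N, 2 ** (K + 1)))
neighbors = [[(i + j) % N for j in range(K + 1)] for i in range(N)]

def fitness(s):
    """Average payoff of configuration s across the N interdependent choices."""
    total = 0.0
    for i in range(N):
        idx = 0
        for bit in (s[j] for j in neighbors[i]):
            idx = (idx << 1) | bit
        total += tables[i, idx]
    return total / N

# Local search: from a random start, flip one bit at a time while it helps.
s = list(rng.integers(0, 2, N))
improved = True
while improved:
    improved = False
    for i in range(N):
        t = s.copy()
        t[i] ^= 1
        if fitness(t) > fitness(s):
            s, improved = t, True

print(round(fitness(s), 3))  # a local peak; different starts can land on different peaks
```

Because the K interdependencies make the landscape rugged, local search gets stuck on local peaks, and rerunning from different random starting configurations typically yields different peaks. That is the core result these organizational-design papers build on.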

Examples of #2, simulation compared to data analysis:

Examples of #3, simulation to demonstrate methodological issues:

  • Bertrand, M., Duflo, E., & Mullainathan, S. (2004). How Much Should We Trust Differences-in-Differences Estimates? Quarterly Journal of Economics, 119(1), 249–275.
  • Keele, L., & Kelly, N. J. (2006). Dynamic models for dynamic theories: The ins and outs of lagged dependent variables. Political Analysis, 14(2), 186–205.
  • Petersen, M. A. (2009). Estimating Standard Errors in Finance Panel Data Sets: Comparing Approaches. Review of Financial Studies, 22(1), 435–480.
  • Flannery, M. J., & Hankins, K. W. (2013). Estimating dynamic panel models in corporate finance. Journal of Corporate Finance, 19(1), 1–19.
  • Semadeni, M., Withers, M. C., & Certo, S. T. (2014). Research Notes and Commentaries: The Perils of Endogeneity and Instrumental Variables in Strategy Research: Understanding through Simulations. Strategic Management Journal, 35, 1070–1079.
  • Murphy, K. R., & Aguinis, H. (2017). HARKing: How Badly Can Cherry-Picking and Question Trolling Produce Bias in Published Results? Journal of Business and Psychology, Online, 1–17.
  • Morris, T. P., White, I. R., & Crowther, M. J. (2017). Using simulation studies to evaluate statistical methods. Available at
  • Abadie, A., Athey, S., Imbens, G., & Wooldridge, J. (2017). When Should You Adjust Standard Errors for Clustering? (NBER Working Paper Series No. 24003). Retrieved from
  • Kalnins, A. (2018). Multicollinearity: How common factors cause Type 1 errors in multivariate regression. Strategic Management Journal, 39(8), 2362–2385.