This misconception directly contradicts the definition of evidence-based management – that decisions should be made through the conscientious, explicit and judicious use of evidence from four sources, one of which is practitioners themselves. Evidence-based management does not hold that any one source of evidence is inherently more valid than another. The professional experience and judgment of practitioners can be an important source, provided it is appraised as trustworthy and relevant. In fact, evidence from practitioners is essential to appropriately interpreting and using evidence from the other sources.
Evidence-based management involves seeking out and using the best available evidence from multiple sources. It is not exclusively about numbers and quantitative data, although many practice decisions involve figures of some sort. You do not need to be a statistician to undertake evidence-based management, but understanding basic statistical concepts helps you to critically evaluate some types of evidence. The principles behind concepts such as sample size, statistical versus practical significance, confidence intervals and effect sizes can be understood without advanced mathematics. Evidence-based management is not about statistics, but statistical thinking is an important element.
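To make that distinction concrete, here is a minimal sketch in Python. The numbers are simulated and purely illustrative, not drawn from any real study; it shows how a large sample can make a trivially small difference "statistically significant" while the effect size reveals it is practically negligible.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical example: an outcome score (0-100) for two large groups,
# e.g. with and without a new management practice. All names and
# numbers here are illustrative assumptions.
control = rng.normal(loc=70.0, scale=10.0, size=50_000)
treated = rng.normal(loc=70.5, scale=10.0, size=50_000)  # tiny true difference

# Statistical significance: with samples this large, even a half-point
# difference yields a minuscule p-value.
t_stat, p_value = stats.ttest_ind(treated, control)

# Practical significance: Cohen's d expresses the difference in units of
# standard deviations; here it is around 0.05, conventionally "negligible".
pooled_sd = np.sqrt((control.var(ddof=1) + treated.var(ddof=1)) / 2)
cohens_d = (treated.mean() - control.mean()) / pooled_sd

# A 95% confidence interval for the mean difference shows the plausible
# range of the effect, which is more informative than the p-value alone.
diff = treated.mean() - control.mean()
se = np.sqrt(treated.var(ddof=1) / treated.size + control.var(ddof=1) / control.size)
ci_low, ci_high = diff - 1.96 * se, diff + 1.96 * se

print(f"p-value: {p_value:.2e}  (statistically significant)")
print(f"Cohen's d: {cohens_d:.3f}  (practically negligible)")
print(f"95% CI for difference: [{ci_low:.2f}, {ci_high:.2f}] points")
```

Rerun this with smaller samples and the p-value grows while the effect size barely moves; keeping those two quantities separate is precisely the kind of statistical thinking described above.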
Yes, some management decisions do need to be taken quickly, but even split-second decisions require trustworthy evidence. Making a good, fast decision about when to evacuate a leaking nuclear power plant or how to make an emergency landing requires up-to-date knowledge of emergency procedures and reliable instruments providing trustworthy evidence about radiation levels or altitude. When important decisions need to be made quickly, an evidence-based practitioner anticipates the kinds of evidence that quality decisions require. The need to make an immediate decision is, however, the exception rather than the rule. As every experienced manager knows, the vast majority of management decisions are made over much longer time periods – sometimes weeks or even months – and often require consideration of legal, financial, strategic, logistical or other organizational issues, which all take time. The inherent nature of organizational decisions, especially important ones, provides plenty of opportunity to collect and critically evaluate evidence about the nature of the problem and, if a problem exists, about which decision is most likely to produce the desired outcome. For evidence-based management, time is not normally a deal breaker.
One objection practitioners raise to using evidence from the scientific literature is the belief that their organization is unique, so research findings simply will not apply. Although it is true that organizations differ, they also tend to face very similar issues, sometimes repeatedly, and often respond to them in similar ways. Peter Drucker, a seminal management thinker, was perhaps the first to assert that most management issues are ‘repetitions of familiar problems cloaked in the guise of uniqueness’. In fact, it is commonplace for organizations to have myths and stories about their own uniqueness. In reality they tend to be neither exactly alike nor completely unique, but somewhere in between. Evidence-based practitioners need to be flexible enough to take such similar-yet-different qualities into account. A thoughtful practitioner, for instance, might use individual financial incentives for independent salespeople but reward knowledge workers with opportunities for development or personally interesting projects, knowing that financial incentives tend to lower the performance of knowledge workers while increasing the performance of less-skilled workers.
Sometimes little or no quality evidence is available. This may be the case with a new management practice or the implementation of new technologies. In some areas the organizational context changes rapidly, which can limit the relevance and applicability of evidence derived from past situations. In those cases, the evidence-based practitioner has no option but to work with the limited evidence at hand and supplement it through learning by doing. This means pilot testing and treating any course of action as a prototype: systematically assessing the outcomes of decisions through a process of constant experimentation, punctuated by critical reflection about what works and what does not.
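As a purely illustrative sketch, assuming a made-up outcome metric, pilot name and thresholds, the prototype mindset can be as simple as deciding in advance how a pilot's outcome will be judged and what each result implies for the next iteration:

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical sketch of "learning by doing": treat each course of action
# as a prototype, measure its outcome, and reflect before scaling up.
# The metric, thresholds and pilot name are illustrative assumptions.

@dataclass
class PilotResult:
    name: str
    baseline: list[float]   # outcome metric before the change
    pilot: list[float]      # same metric during the pilot

def assess(result: PilotResult, min_gain: float = 2.0) -> str:
    """Compare pilot outcomes to baseline and recommend a next step."""
    gain = mean(result.pilot) - mean(result.baseline)
    if gain >= min_gain:
        return f"{result.name}: gain {gain:+.1f} -- extend to a second site"
    if gain > 0:
        return f"{result.name}: gain {gain:+.1f} -- adapt the design and re-pilot"
    return f"{result.name}: gain {gain:+.1f} -- stop and rethink the problem"

# Example: a new scheduling practice piloted in one team.
print(assess(PilotResult(
    name="self-rostering pilot",
    baseline=[68, 71, 69, 70],
    pilot=[73, 74, 72, 75],
)))
```

The point is not the arithmetic but the discipline: the success criterion and the possible next steps are written down before the pilot runs, which is what turns trial and error into systematic learning by doing.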
Evidence is not an answer, and it does not speak for itself. To make sense of evidence, we need an understanding of the context and a critical mindset. You might take a test and find out you scored 10 points, but if you don’t know the average or the total possible score it’s hard to determine whether you did well. You may also want to know what doing well on the test actually means: does it indicate or predict anything important to you in your context, and why? Without this additional information, your test score is meaningless. At the same time, evidence is never conclusive. It does not prove things, which means that no piece of evidence can be viewed as a universal or timeless truth. In most cases evidence comes with a large measure of uncertainty. Evidence-based practitioners therefore typically make decisions not on the basis of conclusive, solid, up-to-date information, but on probabilities, indications and tentative conclusions. Evidence does not tell you what to decide, but it does help you to make a better-informed decision.