Published research and evidence are critical to the growth and development of any new and evolving science, and as such should be considered an integral part of practice for anyone involved in patient care. Problems develop, however, when one allows research to dictate a course of treatment rather than guide it, and in particular when one then feels compelled to judge others for their individual approaches. In my 25-odd years of practice, very few days have gone by in which I have not read or referred to some journal article or piece of literature to advance my knowledge base and help shape my thoughts and intervention strategies. I value this information highly for its ability to provide context for things I am already doing, as well as for considerations I should make moving forward.
When someone is exposed to new information, when variable and potentially conflicting thoughts related to new concepts are presented, there are a few fairly predictable responses. This holds true in many different scenarios, but particularly in science. Recognize that these are my own observations, not something I can reference, and that the percentages should not be taken as absolutes. I call it The 25% Rule of New Information:
- Seekers: 25% of people will be very intrigued and want to know more about it and will seek out further information.
- Skeptics: 25% will discredit it as hogwash, cite some kind of “lack of evidence,” question the evidence that is presented, or simply feel it cannot be valid or it would already have been taught in school, further established, etc.
- Self-Contained: 25% will have no idea what it means, will not be interested, and/or will forget about the concepts and focus on what they remain responsible for.
- Supple: 25% will be intrigued and willing to learn, but are too busy with their current responsibilities at this time in their lives. They may or may not seek further information in the future, depending on their situation or whether the topic continues to come up.
It is up to the reader to decide how best to decipher the information and how it should be applied to their individual thought processes and practice. Research is an atlas that shows you the main, secondary, and tertiary roads to a particular outcome. But just because a particular route is the most direct does not necessarily make it the best one, or the one you always want to take. I remember hearing Dr. Todd Stull, founder of Inside Performance Mindroom, lecture at a conference using the example of driving to work: when the commute is new, you have a heightened awareness of your surroundings and your route so that you can reach the desired destination. After taking what is most likely the main road and most direct route over and over again, however, it becomes so routine that at times you may have forgotten much of the drive because your thoughts were otherwise occupied.
Correlating this to our current Western therapy model, our intervention strategies are very direct and reactive, a tendency reinforced by the way our reimbursement and financial models have been established. If something is tight, stiff, weak, or imbalanced, we tend to approach it directly and stretch, mobilize, strengthen, or retrain it. Once a problem has developed, we react and apply what we feel are the appropriate steps to address it. This is not necessarily wrong, but is it always optimal? What happens when our patients do not respond to our intervention strategies? Can we then take an alternate route on the atlas, or are we stuck on the most direct one? Worse, do we blame the patient because we cannot come up with a different solution? Meanwhile, our reimbursement and financial models make little allowance for interventions or strategies that might prevent these problems in the first place.
Class I evidence, supported by multicenter double-blind randomized controlled trials, is considered the gold standard in quality research, but realistically, how much of what we do in day-to-day practice can possibly be held to this standard? It is also important to note that research published in peer-reviewed journals still only reflects the plausibility of research claims (1) based on the hypothesis the authors sought to establish: plausibility, based on a particular subset of people who fit specific criteria.
It is through weighing whether an approach or technique reflects evidence-based practice or practice-based evidence that one can build a working model, or intervention strategy, that best serves both the practitioner and the patient, and allows us to provide the best possible service to the person sitting in front of us (2). It also allows us to make smart clinical decisions with enough adaptability and variability to have options, a proverbial “back-up plan,” if someone is not responding to our initial approach.
Another important consideration is that there are a significant number of incredibly brilliant minds in the industry, and while many are in academia, a significant number are not. They are clinicians who treat patients during the day; they may even lecture at conferences and do consulting, but they are not doing research. To produce quality articles, and have them accepted in peer-reviewed journals, takes a tremendous amount of time, resources, and in many cases financial backing and/or the support of an educational institution. It is akin to how some of the best athletes in the world are not necessarily competing at the professional level in their given sport, for a variety of reasons.
It is also important to note that many things which are part of standard practice had to start somewhere. Smart, observant clinicians came up with ideas which they then employed, and a number of those ideas, though not all, were later subjected to the rigors of efficacy research. Some have been supported, some refuted, and some have shown conflicting results.
Please continue to read research, allow it to inform your approach and intervention strategies, and use it to foster quality dialogue among your peers, but make sure to keep it in perspective.
1. Peer Review and the Acceptance of New Scientific Ideas: Discussion paper from a Working Party on equipping the public with an understanding of peer review. Compiled and presented by Tracey Brown, Director of Sense About Science, Nov 2002–May 2004. http://www.senseaboutscience.org/data/files/resources/17/peerreview.pdf.
2. Swisher AK. Practice-Based Evidence. Cardiopulm Phys Ther J. 2010 Jun;21(2):4.