Research Lessons: Designing a Study for a Clinical Setting

by Frances Wickham Lee, DBA, RHIA, Karen A. Wager, DBA, RHIA, Andrea W. White, PhD, David M. Ward, PhD


Conducting a study in a clinical setting is a study in itself. Here, researchers share four important lessons learned.

When research moves from the laboratory to the field, it faces unique challenges. A research team that included the authors discovered the challenges of a busy clinic firsthand when it set out to examine physician resistance to aspects of an electronic health record (EHR) system. The researchers sought to test the clinicians' assumption that direct entry of encounter notes would harm the physician-patient relationship.

Three key members of the research team were health information administrators with limited experience in conducting experimental research. We learned a great deal, both about experimental research and about conducting research in the practice setting. The purpose of this article is to share several of the important lessons we learned conducting this study.

Background of the Study

HIM professionals tout the benefits of EHR systems, but on the clinic floor and elsewhere, enthusiasm can be more restrained. Many healthcare organizations have discovered that among the challenges they face in implementing EHR systems, organizational barriers are often the most difficult to overcome—even years after the system is in place.

The Adult Primary Care Clinic (APCC) at the Medical University of South Carolina implemented its EHR system in 1997. In the years that followed, however, most attending and resident physicians continued to dictate their visit notes, and APCC failed to realize transcription cost savings. In July 2000, APCC administration instituted a policy requiring all residents and fellows to enter their visit notes directly into the EHR.

APCC resident physicians and fellows (along with a few of the attending physicians) expressed concern about the policy. They believed patients would be opposed to their entering notes into a computer during an examination. They felt direct entry would diminish eye contact and personal interaction with patients. They also expressed concern that entering notes into the computer would increase the length of the patient visit, limiting the number of patients who could be seen each day. The residents and fellows at APCC are not alone in these concerns; similar reservations have been expressed by physicians at other institutions.1–4

We believed that the direct entry policy implemented at APCC offered a unique opportunity to design a research study to address the residents’ and attending physicians’ concerns. Our hypothesis was that direct entry of notes into the EHR in the presence of the patient does not affect the patient’s perception of the quality of the physician-patient relationship. Not only would we be able to test our hypothesis in a natural setting, but we would also be able to collaborate with our physician colleagues to answer an important question about the use of the EHR.

The Research Setting

APCC is an interdisciplinary primary care training facility, staffed by 14 attending physicians, nine internal medicine residents, and three general internal medicine fellows. The patient population is predominantly uninsured or underinsured. Uninsured patients are seen at discounted rates based on a sliding scale; however, all patients must pay a deposit fee at the time of their visit.

Within the facility are 14 examination rooms, each equipped with a computer workstation connected to the EHR system. The system provides disease-specific progress note templates to facilitate direct entry of visit notes. Residents and fellows may also use computers in two workrooms to enter their visit notes into the EHR. Prior to the study, only one of the 12 residents and fellows entered patient visit notes at the point of care; the others entered their notes into the EHR at a location outside the exam room.

Lesson 1: Match the Method to the Setting

Our first lesson came early—the research design must fit the practice setting.

We had planned to create and study three distinct groups. In the first group, physicians would use a structured interview during the patient visit. In the second group, physicians would use the same structured interview and, additionally, enter their notes directly into the EHR. By studying these two experimental groups we hoped to determine whether the more structured interview approach affected the patient relationship or if direct entry played a role. The third group of the study was to have been the control group, conducting “business as usual.”

We planned to test the effects of direct entry into the EHR on the physician-patient relationship by using a patient questionnaire. The questionnaire would be administered to each patient twice—once before the physician began direct entry into the EHR in the examination room and once afterward, when the patient returned for the next visit.

We ran into difficulties with our plan right away. When presented with the design, the attending physicians felt that the residents would not adhere to the structured interview and that there would be no way to monitor this. We also became doubtful that we would have enough participants to adequately fill three groups.

We thus redesigned the study for two groups. In the final design, an experimental group of residents directly entered patient information into the EHR system in front of the patient; a control group of residents entered patient information into the system only after the encounter was completed. This design prevented us from separating the effects of the interview structure from the use of the computer, but we realized we had to work within what the natural practice setting could reasonably provide.

Lesson 2: Test the Instruments

Early in the study we identified an existing questionnaire to measure a patient’s level of satisfaction with his or her physician-patient relationship.5 We expected to make minor adjustments to the instrument, primarily to ensure that all questions were written at the appropriate comprehension level for our patient population. After pilot testing the instrument, however, we found that we would have to make significant changes to the language of the questionnaire.

This was our second lesson in research design and methods—always pilot test any instrument, even one that has been used successfully in another study. Each study is unique. In our case, the original instrument had been developed in French and was translated into English, which we believe accounted for much of the difficulty with the language. We also altered the instrument’s response scale to be more easily understood by our patient population. Modifications to the questionnaire were so significant that the instrument required additional reliability and validity tests during our statistical analysis.
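A common first reliability check after modifying an instrument is internal consistency, often reported as Cronbach's alpha. The sketch below is illustrative only: the response data are invented, and the article does not report which reliability tests or values were used. It computes alpha for a small matrix of five-point Likert responses.

```python
from statistics import variance

def cronbach_alpha(responses):
    """Cronbach's alpha for a respondents-by-items matrix of Likert scores.

    responses: list of per-respondent lists, one score per questionnaire item.
    """
    k = len(responses[0])                        # number of items
    items = list(zip(*responses))                # transpose: per-item score columns
    item_vars = sum(variance(col) for col in items)
    total_var = variance([sum(row) for row in responses])
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical five-point Likert responses: six respondents, four items.
data = [
    [4, 5, 4, 4],
    [3, 4, 3, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 2],
    [4, 4, 4, 5],
    [3, 3, 3, 3],
]
print(round(cronbach_alpha(data), 2))  # prints 0.95
```

A conventional rule of thumb treats alpha of roughly 0.7 or higher as acceptable internal consistency, though the appropriate threshold depends on how the instrument's scores will be used.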

Lesson 3: Expect Recruitment Challenges

The 11 APCC residents and fellows who did not directly enter visit notes into the EHR during the patient encounter were invited to participate in the study. Ten accepted. We randomly assigned them to one of the two groups, the direct-entry group or the control group.
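With a pool this small, random assignment can be done with a simple shuffle-and-split. The sketch below is a minimal illustration; the resident identifiers and seed are hypothetical, and the article does not describe the randomization mechanism actually used.

```python
import random

# Hypothetical resident identifiers; the actual roster is not in the article.
residents = [f"resident_{i}" for i in range(1, 11)]

random.seed(2000)            # fixed seed so the split is reproducible
random.shuffle(residents)

direct_entry_group = residents[:5]   # enter notes in front of the patient
control_group = residents[5:]        # enter notes after the encounter

assert len(direct_entry_group) == len(control_group) == 5
```

One trade-off worth noting: with only 10 participants, simple randomization cannot guarantee the groups are balanced on other characteristics, but a shuffle-and-split does at least guarantee equal group sizes.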

At APCC, each patient is assigned a resident who provides “continuity” care to the patient throughout his or her residency. In other words, the resident sees the same patient each time the patient presents. We will call these patients “continuity patients.” This feature of the practice setting limited the patient population eligible for the study to the continuity patients of the 10 participating residents.

This introduced our third major lesson in research design and implementation—recruitment of patients can be difficult in a practice setting. We had estimated that 360 continuity patients would participate in the initial interview, pre-intervention phase of the study. We fell short of this estimate for several reasons.

First, we overestimated the effect of the financial incentives we offered to encourage participation. We paid each patient participant $5 for completing the pre-intervention questionnaire and a total of $20 when both questionnaires were completed. (The full payment of $20 equaled the deposit fee required of patients at the time of their visit.)

At the same time, we underestimated the role of the clinic nurses in recruiting patients. Nurses were instructed to encourage patients to participate in the study, but not all nurses were equally enthusiastic in recruiting participants.

Our initial projection relied heavily on physician estimates of the number of continuity patients seen daily. We also did not foresee the unpredictability of the participating residents' schedules, including time off and rotations that took them out of the clinic. (One resident participant, for example, was on maternity leave during several weeks of the study period.) As a result of these challenges, we enrolled 172 patients in the pre-intervention portion of our study.

Lesson 4: Expect Data Collection Challenges

Collecting data in a busy adult primary care clinic proved to be a greater challenge than we originally anticipated. Not only did we fall short of our estimated patient participation, but we also spent several more weeks collecting post-intervention data than we had expected. Our fourth major lesson learned was to plan for the unexpected during the data collection phase.

At the time of each initial patient encounter, one of the clinic nurses was responsible for explaining the purpose of the study and inviting patients to participate. Likewise, it was the clinic nurse who was to recognize returning patients and direct them to their second interviews. For both study encounters, after obtaining informed consent, a trained graduate assistant interviewed the patient, reading each question from the questionnaire and asking the patient to respond using a five-point Likert scale.

Unexpected data collection problems arose with each of these groups: patients, nurses, and graduate assistants. Clinic patients had a 20 to 25 percent "no-show" rate, leaving graduate assistants who had planned a follow-up interview with no one to interview. At times, patients declined to participate at follow-up because they were simply too tired after waiting to see the doctor or too ill to be interviewed. Nurses in the clinic had a strong influence on who was approached or reminded about study participation; as discussed, some nurses were more enthusiastic than others and actively recruited more participants.
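Attrition sources like these compound multiplicatively, which is why follow-up completion can fall well below enrollment. In the rough arithmetic sketch below, only the 20 to 25 percent no-show range and the 172 enrollments come from the study; the other rates are assumptions chosen purely for illustration.

```python
# Illustrative attrition arithmetic. Only the no-show range (20-25 percent)
# and the 172 enrollments are from the study; the other rates are assumed.
enrolled = 172            # patients completing the pre-intervention interview
no_show_rate = 0.225      # midpoint of the reported 20 to 25 percent range
declined_rate = 0.10      # assumed: too tired or too ill at follow-up
no_followup_rate = 0.20   # assumed: no return visit within the study window

retained = enrolled
for rate in (no_show_rate, declined_rate, no_followup_rate):
    retained *= (1 - rate)   # each attrition source compounds the last

print(round(retained))  # prints 96
```

Even modest per-step losses, compounded, cut the completed sample roughly in half, which is the kind of shortfall a data collection plan should budget for in advance.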

Changes in graduate assistant support also affected data collection. We began the study with three trained graduate assistants. One left during the pre-intervention phase to take another job. A second had competing commitments and could not always participate on the days when continuity patients were scheduled. We feel certain that at least a few follow-up interviews were lost because of these conflicts.

We dealt with these data collection issues by extending our post-intervention study phase by several months to gain access to continuity patients who had completed the first questionnaire. We also took a more active role in identifying patients who were returning for their second visit. Our graduate assistant made phone calls a day or two prior to patients’ follow-up visits to remind them of the second component of the study. For patients we were able to reach by phone, we found the reminder calls helpful in getting patients to complete the post-test interview. However, many patients did not have follow-up appointments scheduled at APCC within the time frame of the study. At the completion of the study, a total of 92 continuity patients completed both the pre- and post-intervention questionnaires.

Conclusion

Our preliminary results support our hypothesis: direct entry of notes into the EHR in the presence of the patient does not affect the patient's perception of the quality of the physician-patient relationship.

Participating in this research study was interesting, informative, and stimulating for us as health information administrators. Working with practicing clinicians in designing and conducting the study gave us a greater appreciation of the challenges in providing care to uninsured and underinsured patients.

We clearly were forced to acknowledge that applied research in active clinical sites cannot be viewed as controlled research in a laboratory. Glitches and challenges will arise while doing research in the clinical arena, but working together to discover and overcome some of the research challenges proved to be both stimulating and rewarding.

Acknowledgements

This project was funded by AHIMA’s Foundation of Research and Education (FORE).

Notes

  1. Legler, J.D., and R. Oates. “Patients’ Reactions to Physician Use of a Computerized Medical Record System during Clinical Encounters.” Journal of Family Practice 37, no. 3 (1993): 241–44.
  2. Ornstein, S., and A. Bearden. “Patient Perspectives on Computer-based Medical Records.” Journal of Family Practice 38, no. 6 (1994): 606–10.
  3. Rethans, J., P. Hoppener, G. Wolfs, and J. Diederiks. “Do Personal Computers Make Doctors Less Personal?” British Medical Journal 296 (1988): 1446–48.
  4. Solomon, G. L., and M. Dechter. “Are Patients Pleased with Computer Use in the Examination Room?” Journal of Family Practice 41, no. 3 (1995): 241–44.
  5. Haddad, S., et al. “Patient Perception of Quality Following a Visit to a Doctor in a Primary Care Unit.” Family Practice: An International Journal 17, no. 1 (2000): 21–29.

Frances Wickham Lee, Karen A. Wager (wagerka@musc.edu), and Andrea W. White are associate professors and David M. Ward is associate professor and chairman in the Department of Health Administration and Policy at the Medical University of South Carolina, Charleston.


Article citation:
Lee, Frances Wickham, et al. "Research Lessons: Designing a Study for a Clinical Setting." Journal of AHIMA 75, no. 4 (April 2004): 36–39.