Implementation research for digital technologies and tuberculosis

A toolkit for evaluating the implementation and scale-up of digital innovations across the tuberculosis continuum of care.

Module 3

Research methods

By this point, you have defined your research objectives and developed your proposal introduction. The next step is to identify the data you need to meet your research objectives and to develop a plan for collecting them. This module discusses the basic principles and components of developing the research methodology for an IR study.

Implementation outcomes

In research, the outcome of interest refers to the main variable(s) that will be monitored to assess attainment of your research objective. It is the translation of your research objective into a concretely defined and measured indicator, with changes or differences in the outcome of interest typically representing the impact of an intervention. In implementation research, outcomes of interest typically reflect the implementation process. Proctor and colleagues have described implementation outcomes as "the effects of deliberate and purposive action to implement new treatments, practices and services."1

For example, if you were interested in piloting a new digital intervention to improve treatment adherence among TB patients, the first implementation challenge you may wish to address is understanding if it will work in your context. In this scenario, your outcome of interest might be the effectiveness of the technology for improving treatment adherence, which could be measured by monitoring the proportion of TB patients using the technology who complete at least 90% of treatment doses during the study period.
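As a minimal sketch of how such an outcome might be calculated from routine dose records, the short Python example below computes the proportion of patients who completed at least 90% of their scheduled doses. The patient identifiers and dose counts are purely hypothetical and serve only to illustrate the calculation.

    # Hypothetical dose records: patient ID -> (doses taken, doses scheduled)
    dose_records = {
        "TB-001": (168, 180),
        "TB-002": (180, 180),
        "TB-003": (150, 180),
    }

    ADHERENCE_THRESHOLD = 0.90  # outcome definition: at least 90% of scheduled doses taken

    adherent = sum(
        1 for taken, scheduled in dose_records.values()
        if scheduled > 0 and taken / scheduled >= ADHERENCE_THRESHOLD
    )
    proportion_adherent = adherent / len(dose_records)

    print(f"{proportion_adherent:.0%} of patients completed at least 90% of doses")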

As we have learned, the RE-AIM framework provides a good overview of common implementation outcomes that can be explored in relation to your digital intervention. You may find it useful to revisit the RE-AIM domain definitions described in Module 1. The domains you choose to focus on will need to be defined in relation to your specific study. Some are straightforward to define: for example, Reach could be defined as the number of eligible TB patients in location X who agreed to use VOT, and Adoption could be defined as the proportion of TB clinics in location X that have an electronic record for at least 85% of their patient caseload by timeframe X. Others may be less clear. For example, if you are interested in exploring the acceptability of a new or potential eLearning platform for TB staff, what do we mean by acceptable in this specific context? What questions do we need to ask to measure it?
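To make the point concrete, the sketch below operationalizes the two example definitions above (Reach and Adoption) as simple indicators. All counts, clinic names and thresholds are hypothetical and would be replaced by your own study data and definitions.

    # Reach: number of eligible TB patients in location X who agreed to use VOT
    eligible_patients = 240            # hypothetical
    agreed_to_use_vot = 132            # hypothetical
    reach = agreed_to_use_vot
    reach_proportion = agreed_to_use_vot / eligible_patients

    # Adoption: proportion of TB clinics with an electronic record for
    # at least 85% of their patient caseload by timeframe X
    clinic_coverage = {"Clinic A": 0.92, "Clinic B": 0.81, "Clinic C": 0.88}  # hypothetical
    adopting = sum(1 for coverage in clinic_coverage.values() if coverage >= 0.85)
    adoption = adopting / len(clinic_coverage)

    print(f"Reach: {reach} patients ({reach_proportion:.0%} of those eligible)")
    print(f"Adoption: {adoption:.0%} of clinics meet the 85% record-coverage threshold")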

It may be useful to review the literature to see how others have defined and measured similar IR outcomes. Some relevant studies may have already been identified during the literature search conducted in Module 1. For outcomes that are more qualitative in nature – such as acceptability or appropriateness – there may be pre-existing instruments that have been designed to measure these concepts that can be adapted to suit the specific requirements of your study. Pre-existing instruments have a higher chance of being valid (i.e. the instrument measures what it has been designed to measure) and reliable (i.e. the instrument produces the same results when used to measure the same thing at different times and by different people).

The tables below provide some illustrations from the literature of how the RE-AIM domains have been defined and measured in relation to digital technologies (Table 5), and examples of existing methods and instruments that you may wish to use or adapt (Table 6).

Research instruments and study populations

Selecting an appropriate research instrument may seem like a daunting task. Where possible, select an existing tool, as it is more likely to be reliable and valid. In some cases, there may be multiple relevant instruments to choose from. Consider the following questions to help identify the most appropriate tool:

  • Which instrument is most relevant to my specific research question?
  • How widely used is the instrument?
  • Has the instrument been used in settings that are similar to my own?
  • Does the instrument exist in my language or would it need to be translated? If a translation is required, how will we ensure that the original meaning of the instrument is retained?

Your team may also wish to consult with an appropriate research body (if you do not already have such capacity within your team) that can assist with identifying and selecting appropriate research instruments.

Study population

The study population refers to the people, or participants, from whom you plan to collect data in order to measure your implementation outcome. Your study population should be a natural extension of your research objective and outcome. In your proposal, you will need to describe these participants, how they will be selected, the settings in which they will be selected, and the criteria for participation (Table 7). The participants should reflect, as closely as possible, the characteristics of the individuals – for example, TB patients, TB clinic staff or TB programme M&E staff – in the setting to which the study inferences will be extrapolated. In this section of your proposal, you will also need to include a description of the digital intervention that you plan to trial, evaluate or modify, to provide context for your study and justify the choice of study population.
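As an illustration of how participation criteria might be applied systematically during recruitment, the sketch below screens a hypothetical list of candidates against made-up inclusion and exclusion criteria; your actual criteria would come from your protocol.

    def is_eligible(patient):
        """Hypothetical eligibility check for a digital adherence study."""
        inclusion = (
            patient["age"] >= 18
            and patient["on_tb_treatment"]
            and patient["has_phone_access"]
        )
        exclusion = patient["enrolled_in_other_study"]
        return inclusion and not exclusion

    candidates = [
        {"id": "TB-010", "age": 34, "on_tb_treatment": True,
         "has_phone_access": True, "enrolled_in_other_study": False},
        {"id": "TB-011", "age": 16, "on_tb_treatment": True,
         "has_phone_access": True, "enrolled_in_other_study": False},
    ]

    study_population = [p["id"] for p in candidates if is_eligible(p)]
    print("Eligible participants:", study_population)  # -> ['TB-010']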

Study design

After identifying what you will measure, what data you may need and who you will collect it from, the next step is to determine the study design, which refers to the type of study that will be used to collect data from study participants. The choice of study design will be informed by the IR objectives. In general, studies are categorised as either descriptive or analytical.

Descriptive studies aim to describe a certain condition or outcome of interest. They are useful for generating information about who, what, where and when (or how frequently), and they include studies that use secondary data to answer research questions.

Analytic studies, by comparison, can answer questions about why and how, can test hypotheses about relationships between variables and outcomes of interest, and produce measures of effectiveness. Analytic studies can be further categorized as experimental/quasi-experimental or observational studies.

Experimental studies control exposure to variables to explore the effect on the outcome of interest, typically by randomly allocating participants to intervention (exposed) and control (unexposed) groups. Quasi-experimental studies – those that manipulate exposure to a variable without random allocation of participants – are also considered under this category.
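For illustration only, the sketch below performs a simple 1:1 random allocation of hypothetical participant IDs to intervention and control arms. In a real trial, allocation would normally use a concealed, pre-generated randomization list prepared by a statistician; this example shows only the basic idea of random assignment.

    import random

    participant_ids = [f"TB-{i:03d}" for i in range(1, 21)]  # hypothetical IDs

    rng = random.Random(2024)   # fixed seed so the example allocation is reproducible
    shuffled = participant_ids[:]
    rng.shuffle(shuffled)

    midpoint = len(shuffled) // 2
    allocation = {
        "intervention (exposed)": shuffled[:midpoint],
        "control (unexposed)": shuffled[midpoint:],
    }

    for arm, ids in allocation.items():
        print(arm, ids)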

Observational studies explore relationships between variables and outcomes of interest by observation only, without any attempt to intervene or control exposure to a variable or risk factor associated with the outcome of interest.

Table 8 provides an overview of the most commonly employed study designs in IR. While many of these designs can be used alone or in combination to assess any of the RE-AIM domains, the table provides an illustration of how certain designs may be used to measure specific domains applied to digital technologies.

There are various reporting guidelines for certain study types which set out minimum criteria for reporting study results. Although aimed at reporting, it is also useful to consider these guidelines during the design of your study, to ensure that it includes all the components required to meet these reporting standards. Relevant guidelines include CONSORT for pragmatic trials, STROBE for observational studies and StaRI specifically for implementation studies.

These guidelines, along with other resources are also available from the EQUATOR Network website, which provides a comprehensive searchable online library of reporting guidelines and links to other resources relevant to research reporting.

Methodology

While the study design can be regarded as the overall strategy for the research, the methodology refers to the tactics for obtaining the data. In general, research methods can be classified as qualitative, quantitative or mixed methods. Note that the choice of study methods will also influence the data collection tools used.

Quantitative methods: better for answering the question: What is happening?

Quantitative methods involve the collection and analysis of objective data, often in numerical form. They are used when it is necessary to establish cause-and-effect relationships, with the researcher either deciding who receives an intervention (experimental research) or observing what happens without intervening (observational research). Research that utilizes pre-existing or routinely collected data, such as M&E or surveillance data for secondary analysis, is also a frequently used quantitative approach.

Data collection tool: The most common instrument for collecting quantitative data is a survey or questionnaire. A survey comprises a series of questions designed to measure a given item or set of items. Questions are typically closed-ended and are answered by marking an answer sheet or choosing from a closed list of responses (for example, yes / no / maybe, or strongly disagree / disagree / neutral / agree / strongly agree). Surveys can be administered in a variety of ways, such as in-person interviews using paper-based or electronic forms, telephone surveys, mailed surveys, or online/web-based surveys.
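As a small illustration of how closed-ended responses can be summarized, the sketch below tallies hypothetical Likert-scale answers to a single acceptability item; the item wording, responses and scoring are assumptions for the example only.

    from collections import Counter

    LIKERT_SCORES = {
        "strongly disagree": 1, "disagree": 2, "neutral": 3,
        "agree": 4, "strongly agree": 5,
    }

    # Hypothetical responses to the item "The eLearning platform is easy to use"
    responses = ["agree", "strongly agree", "neutral", "agree", "disagree", "agree"]

    counts = Counter(responses)
    mean_score = sum(LIKERT_SCORES[r] for r in responses) / len(responses)

    print("Response distribution:", dict(counts))
    print(f"Mean item score: {mean_score:.2f} out of 5")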

Qualitative methods: better for answering the question: Why is it happening?

If the focus of the research is to explore factors such as values, attitudes, opinions, feelings and behaviour of individuals, and to understand how these affect the individuals in question, then qualitative methods are most appropriate. The two main qualitative methods are focus group discussions and key informant interviews (Table 9).

Data collection tool: Interview guides rely on semi-structured or open-ended questions to stimulate in-depth conversation and discussion around certain points of interest to the researcher. Open-ended or semi-structured questions are those that cannot be answered with a yes or no. Interview guides should start with broad, general questions, designed to get participants to open up, then become more specific through the use of probes.

Mixed methods

As the name suggests, mixed methods combine both quantitative and qualitative methods. Mixed-methods approaches are appropriate when you require a better understanding of the problem than either a quantitative or qualitative approach could achieve alone. They are often used when qualitative information is needed to gain a better understanding of an issue in order to inform or launch a subsequent quantitative study.

Economic evaluations

Cost is often an important outcome when it comes to implementation. Outside of the research environment, health departments and NTPs often operate under restricted budgets, and intervention impact must therefore be carefully weighed against costs when making programmatic or policy decisions. Consideration of the costs and benefits of a digital intervention is a necessary part of decision-making during resource allocation and is useful for guiding prioritization of interventions. Demonstrating a sound investment case is particularly important for new digital interventions that require significant front-loading of costs, or those that lack the evidence base to support widespread integration into mainstream activities.

Your team may wish to consider adding a costing exercise to your IR study to describe the cost of implementing a digital intervention for future planning purposes, by looking at factors such as the cost per intervention, the initial investment required for implementation, and ongoing running costs. A more detailed measure of costs requires specific study approaches that fall under the umbrella of economic evaluation (Table 10). Economic evaluations are typically integrated into a broader IR design, particularly when using a quasi-experimental approach that enables comparison of the costs and outcomes of an intervention against a standard of care, for example. However, collecting cost data in other study designs is still valuable, as it makes explicit the cost considerations of an intervention, which can be weighed by a decision-maker against its benefits.
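The sketch below illustrates, with entirely hypothetical figures, two of the simpler cost measures mentioned above: an annualized cost per patient and an incremental cost-effectiveness ratio (ICER) comparing the intervention with standard of care. A formal economic evaluation would require context-specific cost data and methods beyond this example.

    # Hypothetical costs for a digital adherence intervention
    initial_investment = 25_000.0   # one-off set-up (devices, software, training)
    annual_running_cost = 8_000.0   # recurrent costs (airtime/data, support, maintenance)
    patients_per_year = 400
    useful_life_years = 3           # assumed lifespan over which set-up costs are annualized

    cost_per_patient = (initial_investment / useful_life_years
                        + annual_running_cost) / patients_per_year

    # Hypothetical comparison against standard of care
    cost_intervention, cost_standard = 41.0, 28.0      # cost per patient (currency units)
    effect_intervention, effect_standard = 0.88, 0.79  # proportion completing treatment

    icer = (cost_intervention - cost_standard) / (effect_intervention - effect_standard)

    print(f"Annualized cost per patient: {cost_per_patient:.2f}")
    print(f"ICER: {icer:.0f} per additional patient completing treatment")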

While the specifics of how to cost digital interventions are beyond the scope of this toolkit, there are useful references that can be consulted for further guidance:

  • Costing tool for DAT projects (developed for TB Reach by McGill International TB Centre)15

Implementation research ethics

If your proposed research includes collecting data from participants, you will need to seek ethical clearance from an appropriate institutional review board (IRB; also known as an ethical review board or independent ethics committee) in your country. If your research is funded by an international body, you may also need to seek ethical review in the country where that body is located, such as from a university ethics board. Seeking ethics approval prior to conducting your research is a safeguarding measure to ensure the protection of your research participants and your adherence to ethical standards of research.

Once you have finalised your IR study protocol, you should submit it to the relevant IRB for clearance. The IRB will ensure your research complies with ethical principles and practices by undertaking a thorough review of your study design and conduct; the recruitment, care and protection of participants; the protection of participant confidentiality; the informed consent process; and community considerations. Your proposal should include an ethics section that describes the steps you will take to ensure the protection, dignity, rights and safety of potential research participants before, during and after the research takes place.

In addition, your IR proposal must demonstrate how you will obtain informed consent from all participants to collect and use their information, and how you will make them aware of any potential risks associated with their participation in your study. As a result of the review, you may receive questions that you will need to answer, as well as requests to modify the protocol or the study design. Depending on the ethics board(s), a delay of a few months or more typically needs to be factored in for this process. Fees may also apply.

Informed consent

Informed consent is an important means of upholding a person's autonomy and adhering to the principle of respect for persons, and ensures that individuals can freely make decisions to participate according to personal interest, values and priorities.

Informed consent of participants must be gained prior to their involvement in research, either verbally or in writing. Ethics committees will pay particular attention to how consent will be obtained from prospective study participants, and carefully scrutinize all informed consent documents. Informed consent is more than a contractual obligation and should be understood as a process that begins with the initial contact with the research participant (during the recruitment process) and carries through to the end of participants' involvement in the project. The establishment of the process requires four basic elements:

  1. Provision of accurate and appropriate information
  2. Participant's ability to understand the purpose of and procedures in the research process
  3. Participant's capacity to consent
  4. Voluntary participation and the ability to withdraw, even after having initially agreed to take part and begun responding

To obtain effective informed consent, full information should be explained in the language of the participants, using simplified wording in the local vernacular and avoiding scientific and professional jargon. Special care should be taken whenever fear and desperation, poor health literacy or distrust of public institutions may affect patients' choice to give or withhold consent. Your team should develop a consent form that includes information about the research, the procedures, the expected outcomes and the potential benefits.

Templates for consent forms can be found at the WHO research policy page.18 These templates should be adapted to your local situation.

Ethical challenges related to digital technology

There may be ethical challenges that are unique or specific to working with digital technologies that will need to be considered and addressed by research teams (see Box 16). For example, digital technologies are hugely efficient at storing and communicating large volumes of data, very often over the Internet. However, this also poses risks to privacy, confidentiality and security of personal data. Any research using digital technology to collect, store or use personal information must demonstrate steps taken to ensure the safety and confidentiality of digital data.
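As one small, hedged illustration of such a safeguard, the sketch below replaces direct patient identifiers with salted one-way hashes before records are stored or shared. This is only one layer of protection; a real study would also need encryption of data at rest and in transit, access controls and a data-protection plan approved by the IRB.

    import hashlib
    import secrets

    SALT = secrets.token_hex(16)  # keep this value secret and stored separately from the data

    def pseudonymize(patient_id: str) -> str:
        """Return an irreversible pseudonym for a direct identifier."""
        return hashlib.sha256((SALT + patient_id).encode("utf-8")).hexdigest()[:12]

    record = {"patient_id": "TB-REG-00123", "treatment_outcome": "completed"}
    safe_record = {
        "pseudonym": pseudonymize(record["patient_id"]),
        "treatment_outcome": record["treatment_outcome"],
    }
    print(safe_record)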

Additionally, because TB is often associated with poverty, homelessness and sub-optimal health system resources, the use of these technologies in TB surveillance, care and research contexts can indirectly accentuate bias and stigma if inadvertent disclosure of confidential information occurs. The following text box provides some principles developed by Stanford University on how to ensure ethical conduct in relation to digital health.

Proposal checklist: Research methodology section

Exercise: With your team, begin developing your research methodology section, including your study design, research methods and a description of your study population. You will finish the research methods section after completing the next module (Data management and analysis). Work through the checklist to make sure this section includes all the necessary information and is correctly formatted.

Note: Data management and analysis will be covered in Module 4 but will be included under the research methodology section of your proposal.
Component: Study design
  • Describes the selected study design and key features of the study.
  • Identifies and defines the key research outcomes, which clearly relate to the study objective and research questions.

Component: Study population and setting
  • Describes the setting or context within which the study or intervention will be conducted.
  • Describes the subjects (sample) or participants who will be involved in the research, including inclusion and exclusion criteria and estimations of sample size and rationale (such as sample size calculations, practical considerations and data saturation, as appropriate).

Component: Intervention / implementation strategy
  • Describes the digital intervention being implemented, evaluated or modified, or the implementation strategy being used to enhance adoption, implementation or sustainability of an evidence-based programme or practice.
  • Mentions the overall goal of the digital intervention or the implementation strategy.

Component: Ethics
  • Meets the ethical standards for research.
  • Mentions the process for applying for ethical clearance (such as which IRB you will apply to).
  • Includes an informed consent form as an annex if your study includes collecting data from participants.
