Nicholas H. G. Holford, MB, ChB, FRACP
An 85-year-old, 60-kg woman with a serum creatinine of 1.8 mg/dL has atrial fibrillation. A decision has been made to use digoxin to control the rapid heart rate. The target concentration of digoxin for the treatment of atrial fibrillation is 2 ng/mL. Tablets of digoxin are available that contain 62.5 micrograms (mcg) and 250 mcg. What maintenance dose would you recommend?
The goal of therapeutics is to achieve a desired beneficial effect with minimal adverse effects. When a medicine has been selected for a patient, the clinician must determine the dose that most closely achieves this goal. A rational approach to this objective combines the principles of pharmacokinetics with pharmacodynamics to clarify the dose-effect relationship (Figure 3–1). Pharmacodynamics governs the concentration-effect part of the interaction, whereas pharmacokinetics deals with the dose-concentration part (Holford & Sheiner, 1981). The pharmacokinetic processes of absorption, distribution, and elimination determine how rapidly and for how long the drug will appear at the target organ. The pharmacodynamic concepts of maximum response and sensitivity determine the magnitude of the effect at a particular concentration (see Emax and C50, Chapter 2; C50 is also known as EC50).
FIGURE 3–1 The relationship between dose and effect can be separated into pharmacokinetic (dose-concentration) and pharmacodynamic (concentration-effect) components. Concentration provides the link between pharmacokinetics and pharmacodynamics and is the focus of the target concentration approach to rational dosing. The three primary processes of pharmacokinetics are input, distribution, and elimination.
Figure 3–1 illustrates a fundamental hypothesis of pharmacology, namely, that a relationship exists between a beneficial or toxic effect of a drug and the concentration of the drug. This hypothesis has been documented for many drugs, as indicated by the Target Concentrations and Toxic Concentrations columns in Table 3–1. The apparent lack of such a relationship for some drugs does not weaken the basic hypothesis but points to the need to consider the time course of concentration at the actual site of pharmacologic effect (see below).
TABLE 3–1 Pharmacokinetic and pharmacodynamic parameters for selected drugs in adults. (See Holford et al, 2013, for parameters in neonates and children.)
Knowing the relationship between dose, drug concentration, and effects allows the clinician to take into account the various pathologic and physiologic features of a particular patient that make him or her different from the average individual in responding to a drug. The importance of pharmacokinetics and pharmacodynamics in patient care thus rests upon the improvement in therapeutic benefit and reduction in toxicity that can be achieved by application of these principles.
The “standard” dose of a drug is based on trials in healthy volunteers and patients with average ability to absorb, distribute, and eliminate the drug (see Clinical Trials: The IND and NDA in Chapter 1). This dose will not be suitable for every patient. Several physiologic processes (eg, body size, maturation of organ function in infants) and pathologic processes (eg, heart failure, renal failure) dictate dosage adjustment in individual patients. These processes modify specific pharmacokinetic parameters. The two basic parameters are clearance, the measure of the ability of the body to eliminate the drug; and volume of distribution, the measure of the apparent space in the body available to contain the drug. These parameters are illustrated schematically in Figure 3–2 where the volume of the beakers into which the drugs diffuse represents the volume of distribution, and the size of the outflow “drain” in Figures 3–2B and 3–2D represents the clearance.
FIGURE 3–2 Models of drug distribution and elimination. The effect of adding drug to the blood by rapid intravenous injection is represented by expelling a known amount of the agent into a beaker. The time course of the amount of drug in the beaker is shown in the graphs at the right. In the first example (A), there is no movement of drug out of the beaker, so the graph shows only a steep rise to a maximum followed by a plateau. In the second example (B), a route of elimination is present, and the graph shows a slow decay after a sharp rise to a maximum. Because the level of material in the beaker falls, the “pressure” driving the elimination process also falls, and the slope of the curve decreases. This is an exponential decay curve. In the third model (C), drug placed in the first compartment (“blood”) equilibrates rapidly with the second compartment (“extravascular volume”) and the amount of drug in “blood” declines exponentially to a new steady state. The fourth model (D) illustrates a more realistic combination of elimination mechanism and extravascular equilibration. The resulting graph shows an early distribution phase followed by the slower elimination phase.
Volume of Distribution
Volume of distribution (V) relates the amount of drug in the body to the concentration of drug (C) in blood or plasma:

V = Amount of drug in body/C (1)
The volume of distribution may be defined with respect to blood, plasma, or water (unbound drug), depending on the concentration used in equation (1) (C = Cb, Cp, or Cu).
That the V calculated from equation (1) is an apparent volume may be appreciated by comparing the volumes of distribution of drugs such as digoxin or chloroquine (Table 3–1) with some of the physical volumes of the body (Table 3–2). Volume of distribution can vastly exceed any physical volume in the body because it is the volume apparently necessary to contain the amount of drug homogeneously at the concentration found in the blood, plasma, or water. Drugs with very high volumes of distribution have much higher concentrations in extravascular tissue than in the vascular compartment, ie, they are not homogeneously distributed. Drugs that are completely retained within the vascular compartment, on the other hand, would have a minimum possible volume of distribution equal to the blood component in which they are distributed, eg, 0.04 L/kg body weight or 2.8 L/70 kg (Table 3–2) for a drug that is restricted to the plasma compartment.
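Equation (1) can be sketched numerically. A minimal Python illustration (the drug amount and concentration below are hypothetical, chosen only to show how an apparent volume can dwarf any physical compartment):

```python
def apparent_volume_of_distribution(amount_mg, conc_mg_per_l):
    """Equation (1): V = amount of drug in body / plasma concentration."""
    return amount_mg / conc_mg_per_l

# Hypothetical example: 0.5 mg of drug in the body with a plasma
# concentration of 0.001 mg/L implies an apparent volume of 500 L --
# far larger than any physical compartment in Table 3-2.
v_apparent = apparent_volume_of_distribution(0.5, 0.001)  # 500 L

# By contrast, a drug confined to plasma in a 70-kg person has the
# minimum possible volume of distribution (0.04 L/kg from Table 3-2):
v_plasma_limited = 0.04 * 70  # 2.8 L
```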
TABLE 3–2 Physical volumes (in L/kg body weight) of some body compartments into which drugs may be distributed.
Clearance

Drug clearance principles are similar to the clearance concepts of renal physiology. Clearance of a drug is the factor that predicts the rate of elimination in relation to the drug concentration (C):

CL = Rate of elimination/C (2)
Clearance, like volume of distribution, may be defined with respect to blood (CLb), plasma (CLp), or unbound in water (CLu), depending on where and how the concentration is measured.
It is important to note the additive character of clearance. Elimination of drug from the body may involve processes occurring in the kidney, the lung, the liver, and other organs. Dividing the rate of elimination at each organ by the concentration of drug presented to it yields the respective clearance at that organ. Added together, these separate clearances equal total systemic clearance:

CLsystemic = CLrenal + CLliver + CLother (3)
“Other” tissues of elimination could include the lungs and additional sites of metabolism, eg, blood or muscle.
The two major sites of drug elimination are the kidneys and the liver. Clearance of unchanged drug in the urine represents renal clearance. Within the liver, drug elimination occurs via biotransformation of parent drug to one or more metabolites, or excretion of unchanged drug into the bile, or both. The pathways of biotransformation are discussed in Chapter 4. For most drugs, clearance is constant over the concentration range encountered in clinical settings, ie, elimination is not saturable, and the rate of drug elimination is directly proportional to concentration (rearranging equation [2]):

Rate of elimination = CL × C (4)
This is usually referred to as first-order elimination. When clearance is first-order, it can be estimated by calculating the area under the curve (AUC) of the time-concentration profile after a dose. Clearance is calculated from the dose divided by the AUC. Note that this is a convenient form of calculation—not the definition of clearance.
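The dose/AUC calculation can be sketched as follows (Python, with made-up sample times and concentrations; a real analysis would also extrapolate the AUC beyond the last sample to infinity):

```python
def auc_trapezoid(times_h, concs_mg_per_l):
    """Area under the concentration-time curve by the trapezoidal rule
    (truncated at the last sample; not extrapolated to infinity)."""
    auc = 0.0
    for i in range(1, len(times_h)):
        dt = times_h[i] - times_h[i - 1]
        auc += 0.5 * (concs_mg_per_l[i - 1] + concs_mg_per_l[i]) * dt
    return auc

def clearance_from_auc(dose_mg, auc_mg_h_per_l):
    """CL = Dose / AUC -- valid only when elimination is first-order."""
    return dose_mg / auc_mg_h_per_l

# Hypothetical data: concentrations halving each hour after a 100-mg IV dose.
auc = auc_trapezoid([0, 1, 2], [10.0, 5.0, 2.5])  # 11.25 mg*h/L
cl = clearance_from_auc(100.0, auc)               # L/h
```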
A. Capacity-Limited Elimination
For drugs that exhibit capacity-limited elimination (eg, phenytoin, ethanol), clearance will vary depending on the concentration of drug that is achieved (Table 3–1). Capacity-limited elimination is also known as mixed-order, saturable, dose- or concentration-dependent, nonlinear, and Michaelis-Menten elimination.
Most drug elimination pathways will become saturated if the dose and therefore the concentration are high enough. When blood flow to an organ does not limit elimination (see below), the relation between elimination rate and concentration (C) is expressed mathematically in equation (5):

Rate of elimination = Vmax × C/(Km + C) (5)
The maximum elimination capacity is Vmax, and Km is the drug concentration at which the rate of elimination is 50% of Vmax. At concentrations that are high relative to the Km, the elimination rate is almost independent of concentration—a state of “pseudo-zero order” elimination. If dosing rate exceeds elimination capacity, steady state cannot be achieved: The concentration will keep on rising as long as dosing continues. This pattern of capacity-limited elimination is important for three drugs in common use: ethanol, phenytoin, and aspirin. Clearance has no real meaning for drugs with capacity-limited elimination, and AUC should not be used to calculate clearance of such drugs.
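Equation (5) can be sketched directly; the Vmax and Km values below are arbitrary illustrations, not values for any particular drug:

```python
def elimination_rate(conc, vmax, km):
    """Equation (5): rate = Vmax * C / (Km + C)."""
    return vmax * conc / (km + conc)

# With arbitrary values Vmax = 100 mg/h and Km = 4 mg/L:
half_max = elimination_rate(4.0, 100.0, 4.0)    # 50 mg/h: at C = Km the
                                                # rate is 50% of Vmax
near_max = elimination_rate(400.0, 100.0, 4.0)  # ~99 mg/h: at C >> Km the
                                                # rate is nearly independent
                                                # of C ("pseudo-zero order")
```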
B. Flow-Dependent Elimination
In contrast to capacity-limited drug elimination, some drugs are cleared very readily by the organ of elimination, so that at any clinically realistic concentration of the drug, most of the drug in the blood perfusing the organ is eliminated on the first pass of the drug through it. The elimination of these drugs will thus depend primarily on the rate of drug delivery to the organ of elimination. Such drugs (see Table 4–7) can be called “high-extraction” drugs since they are almost completely extracted from the blood by the organ. Blood flow to the organ is the main determinant of drug delivery, but plasma protein binding and blood cell partitioning may also be important for extensively bound drugs that are highly extracted.
Half-Life

Half-life (t1/2) is the time required to change the amount of drug in the body by one-half during elimination (or during a constant infusion). In the simplest case—and the most useful in designing drug dosage regimens—the body may be considered as a single compartment (as illustrated in Figure 3–2B) of a size equal to the volume of distribution (V). The time course of drug in the body will depend on both the volume of distribution and the clearance:

t1/2 = 0.7 × V/CL (6)
Because drug elimination can be described by an exponential process, the time taken for a twofold decrease can be shown to be proportional to the natural logarithm of 2. The constant 0.7 in equation (6) is an approximation to the natural logarithm of 2.
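A minimal sketch of equation (6) (Python; ln 2 ≈ 0.693 is used rather than the rounded 0.7, and the theophylline parameters are those quoted in the maintenance dose example later in this chapter):

```python
import math

def half_life_h(v_l, cl_l_per_h):
    """Equation (6): t1/2 = ln(2) * V / CL (the text rounds ln 2 to 0.7)."""
    return math.log(2) * v_l / cl_l_per_h

# Theophylline in a 70-kg person: V = 35 L, CL = 2.8 L/h (Table 3-1).
t_half = half_life_h(35.0, 2.8)  # about 8.7 h
```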
Half-life is useful because it indicates the time required to attain 50% of steady state—or to decay 50% from steady-state conditions—after a change in the rate of drug administration. Figure 3–3 shows the time course of drug accumulation during a constant-rate drug infusion and the time course of drug elimination after stopping an infusion that has reached steady state.
FIGURE 3–3 The time course of drug accumulation and elimination. Solid line: Plasma concentrations reflecting drug accumulation during a constant-rate infusion of a drug. Fifty percent of the steady-state concentration is reached after one half-life, 75% after two half-lives, and over 90% after four half-lives. Dashed line: Plasma concentrations reflecting drug elimination after a constant-rate infusion of a drug had reached steady state. Fifty percent of the drug is lost after one half-life, 75% after two half-lives, etc. The “rule of thumb” that four half-lives must elapse after starting a drug-dosing regimen before full effects will be seen is based on the approach of the accumulation curve to over 90% of the final steady-state concentration.
Disease states can affect both of the physiologically related primary pharmacokinetic parameters: volume of distribution and clearance. A change in half-life will not necessarily reflect a change in drug elimination. For example, patients with chronic renal failure have both decreased renal clearance of digoxin and a decreased volume of distribution; the increase in digoxin half-life is not as great as might be expected based on the change in renal function. The decrease in volume of distribution is due to the decreased renal and skeletal muscle mass and consequent decreased tissue binding of digoxin to Na+/K+-ATPase.
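The digoxin argument can be made quantitative with a small sketch (Python; the fractional changes in clearance and volume below are hypothetical illustrations, not measured values):

```python
def half_life_fold_change(cl_fraction_of_normal, v_fraction_of_normal):
    """Because t1/2 is proportional to V/CL, the fold-change in half-life
    is the ratio of the fractional change in V to that in CL."""
    return v_fraction_of_normal / cl_fraction_of_normal

# Hypothetical renal-failure scenario: clearance falls to 50% of normal,
# but volume of distribution also falls, to 70% of normal.
fold = half_life_fold_change(0.5, 0.7)  # 1.4-fold longer half-life, not the
                                        # 2-fold suggested by the clearance
                                        # change alone
```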
Many drugs will exhibit multicompartment pharmacokinetics (as illustrated in Figures 3–2C and 3–2D). Under these conditions, the “half-life” reflecting drug accumulation, as given in Table 3–1, will be greater than that calculated from equation (6).
Drug Accumulation

Whenever drug doses are repeated, the drug will accumulate in the body until dosing stops. This is because it takes an infinite time (in theory) to eliminate all of a given dose. In practical terms, this means that if the dosing interval is shorter than four half-lives, accumulation will be detectable.
Accumulation is inversely proportional to the fraction of the dose lost in each dosing interval. The fraction lost is 1 minus the fraction remaining just before the next dose. The fraction remaining can be predicted from the dosing interval and the half-life. A convenient index of accumulation is the accumulation factor:

Accumulation factor = 1/Fraction lost in one dosing interval = 1/(1 − Fraction remaining) (7)
For a drug given once every half-life, the accumulation factor is 1/0.5, or 2. The accumulation factor predicts the ratio of the steady-state concentration to that seen at the same time following the first dose. Thus, the peak concentrations after intermittent doses at steady state will be equal to the peak concentration after the first dose multiplied by the accumulation factor.
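Equation (7) combined with exponential decay gives a compact sketch (Python; the 8-h half-life is an arbitrary illustration):

```python
def accumulation_factor(dosing_interval_h, half_life_h):
    """Equation (7): AF = 1 / fraction of dose lost in one dosing interval."""
    fraction_remaining = 0.5 ** (dosing_interval_h / half_life_h)
    return 1.0 / (1.0 - fraction_remaining)

# A dose given once every half-life: half is lost per interval, so AF = 2.
af_every_half_life = accumulation_factor(8.0, 8.0)  # 2.0
# Dosing more often than once per half-life accumulates more:
af_frequent = accumulation_factor(4.0, 8.0)         # ~3.4
```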
Bioavailability

Bioavailability is defined as the fraction of unchanged drug reaching the systemic circulation following administration by any route (Table 3–3). The area under the blood concentration-time curve (AUC) is proportional to the dose and the extent of bioavailability for a drug if its elimination is first-order (Figure 3–4). For an intravenous dose, bioavailability is assumed to be equal to unity. For a drug administered orally, bioavailability may be less than 100% for two main reasons—incomplete extent of absorption across the gut wall and first-pass elimination by the liver (see below).
TABLE 3–3 Routes of administration, bioavailability, and general characteristics.
FIGURE 3–4 Blood concentration-time curves, illustrating how changes in the rate of absorption and extent of bioavailability can influence both the duration of action and the effectiveness of the same total dose of a drug administered in three different formulations. The dashed line indicates the target concentration (TC) of the drug in the blood.
A. Extent of Absorption
After oral administration, a drug may be incompletely absorbed, eg, only 70% of a dose of digoxin reaches the systemic circulation. This is mainly due to lack of absorption from the gut. Other drugs are either too hydrophilic (eg, atenolol) or too lipophilic (eg, acyclovir) to be absorbed easily, and their low bioavailability is also due to incomplete absorption. If too hydrophilic, the drug cannot cross the lipid cell membrane; if too lipophilic, the drug is not soluble enough to cross the water layer adjacent to the cell. Drugs may not be absorbed because of a reverse transporter associated with P-glycoprotein. This process actively pumps drug out of gut wall cells back into the gut lumen. Inhibition of P-glycoprotein and gut wall metabolism, eg, by grapefruit juice, may be associated with substantially increased drug absorption.
B. First-Pass Elimination
Following absorption across the gut wall, the portal blood delivers the drug to the liver prior to entry into the systemic circulation. A drug can be metabolized in the gut wall (eg, by the CYP3A4 enzyme system) or even in the portal blood, but most commonly it is the liver that is responsible for metabolism before the drug reaches the systemic circulation. In addition, the liver can excrete the drug into the bile. Any of these sites can contribute to this reduction in bioavailability, and the overall process is known as first-pass elimination. The effect of first-pass hepatic elimination on bioavailability is expressed as the extraction ratio (ER):

ER = CLliver/Q (8a)
where Q is hepatic blood flow, normally about 90 L/h in a person weighing 70 kg.
The systemic bioavailability of the drug (F) can be predicted from the extent of absorption (f) and the extraction ratio (ER):

F = f × (1 − ER) (8b)
A drug such as morphine is almost completely absorbed (f = 1), so that loss in the gut is negligible. However, the hepatic extraction ratio for morphine is morphine clearance (60 L/h/70 kg) divided by hepatic blood flow (90 L/h/70 kg) or 0.67. Its oral bioavailability (1 – ER) is therefore expected to be about 33%, which is close to the observed value (Table 3–1).
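The morphine arithmetic in this paragraph can be reproduced directly (Python, using the clearance and hepatic blood flow quoted in the text):

```python
def extraction_ratio(cl_liver_l_per_h, hepatic_flow_l_per_h=90.0):
    """ER = CL_liver / Q; Q is about 90 L/h in a 70-kg person."""
    return cl_liver_l_per_h / hepatic_flow_l_per_h

def oral_bioavailability(f_absorbed, er):
    """F = f * (1 - ER): fraction absorbed times fraction escaping
    first-pass hepatic extraction."""
    return f_absorbed * (1.0 - er)

# Morphine (values from the text): CL 60 L/h/70 kg, fully absorbed (f = 1).
er = extraction_ratio(60.0)        # ~0.67
f = oral_bioavailability(1.0, er)  # ~0.33, close to the observed value
```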
Rate of Absorption
The distinction between rate and extent of absorption is shown in Figure 3–4. The rate of absorption is determined by the site of administration and the drug formulation. Both the rate of absorption and the extent of input can influence the clinical effectiveness of a drug. For the three different dosage forms depicted in Figure 3–4, differences in the intensity of clinical effect are expected. Dosage form B would require twice the dose to attain blood concentrations equivalent to those of dosage form A. Differences in rate of absorption may become important for drugs given as a single dose, such as a hypnotic used to induce sleep. In this case, drug from dosage form A would reach its target concentration earlier than drug from dosage form C; concentrations from A would also reach a higher level and remain above the target concentration for a longer period. In a multiple dosing regimen, dosage forms A and C would yield the same average blood level concentrations, although dosage form A would show somewhat greater maximum and lower minimum concentrations.
The mechanism of drug absorption is said to be zero-order when the rate is independent of the amount of drug remaining in the gut, eg, when it is determined by the rate of gastric emptying or by a controlled-release drug formulation. In contrast, when the dose is dissolved in gastrointestinal fluids, the rate of absorption is usually proportional to the gastrointestinal fluid concentration and is said to be first-order.
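The two absorption patterns can be contrasted with a small sketch (Python; the dose and rate constants are hypothetical):

```python
import math

def gut_amount_first_order(a0_mg, ka_per_h, t_h):
    """First-order absorption: rate proportional to the amount remaining
    in the gut, giving exponential decline."""
    return a0_mg * math.exp(-ka_per_h * t_h)

def gut_amount_zero_order(a0_mg, k0_mg_per_h, t_h):
    """Zero-order absorption: constant rate (eg, a controlled-release
    formulation), until the dose is exhausted."""
    return max(0.0, a0_mg - k0_mg_per_h * t_h)

# Hypothetical 100-mg dose; ka chosen so half the dose is absorbed per hour.
first = gut_amount_first_order(100.0, math.log(2), 1.0)  # 50 mg left at 1 h
zero = gut_amount_zero_order(100.0, 10.0, 5.0)           # 50 mg left at 5 h
```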
Extraction Ratio & the First-Pass Effect
Systemic clearance is not affected by bioavailability. However, clearance can markedly affect the extent of availability because it determines the extraction ratio (equation [8a]). Of course, therapeutic blood concentrations may still be reached by the oral route of administration if larger doses are given. However, in this case, the concentrations of the drug metabolites will be increased compared with those that would occur following intravenous administration. Lidocaine and verapamil are both used to treat cardiac arrhythmias and have bioavailability less than 40%, but lidocaine is never given orally because its metabolites are believed to contribute to central nervous system toxicity. Other drugs that are highly extracted by the liver include morphine (see above), isoniazid, propranolol, and several tricyclic antidepressants (Table 3–1).
Drugs with high extraction ratios will show marked variations in bioavailability between subjects because of differences in hepatic function and blood flow. These differences can explain some of the variation in drug concentrations that occurs among individuals given similar doses. For drugs that are highly extracted by the liver, bypassing hepatic sites of elimination (eg, in hepatic cirrhosis with portosystemic shunting) will result in substantial increases in drug availability, whereas for drugs that are poorly extracted by the liver (for which the difference between entering and exiting drug concentration is small), shunting of blood past the liver will cause little change in availability. Drugs in Table 3–1 that are poorly extracted by the liver include warfarin, diazepam, phenytoin, theophylline, tolbutamide, and chlorpropamide.
Alternative Routes of Administration & the First-Pass Effect
There are several reasons for different routes of administration used in clinical medicine (Table 3–3)—for convenience (eg, oral), to maximize concentration at the site of action and minimize it elsewhere (eg, topical), to prolong the duration of drug absorption (eg, transdermal), or to avoid the first-pass effect (sublingual or rectal).
The hepatic first-pass effect can be avoided to a great extent by use of sublingual tablets and transdermal preparations and to a lesser extent by use of rectal suppositories. Sublingual absorption provides direct access to systemic—not portal—veins. The transdermal route offers the same advantage. Drugs absorbed from suppositories in the lower rectum enter vessels that drain into the inferior vena cava, thus bypassing the liver. However, suppositories tend to move upward in the rectum into a region where veins that lead to the liver predominate. Thus, only about 50% of a rectal dose can be assumed to bypass the liver.
Although drugs administered by inhalation bypass the hepatic first-pass effect, the lung may also serve as a site of first-pass loss by excretion and possibly metabolism for drugs administered by nongastrointestinal (“parenteral”) routes.
THE TIME COURSE OF DRUG EFFECT
The principles of pharmacokinetics (discussed in this chapter) and those of pharmacodynamics (discussed in Chapter 2 and Holford & Sheiner, 1981) provide a framework for understanding the time course of drug effect.
In the simplest case, drug effects are directly related to plasma concentrations, but this does not necessarily mean that effects simply parallel the time course of concentrations. Because the relationship between drug concentration and effect is not linear (recall the Emax model described in Chapter 2), the effect will not usually be linearly proportional to the concentration.
Consider the effect of an angiotensin-converting enzyme (ACE) inhibitor, such as enalapril, on ACE. The half-life of enalapril is about 3 hours. After an oral dose of 10 mg, the peak plasma concentration at 3 hours is about 64 ng/mL. Enalapril is usually given once a day, so seven half-lives will elapse from the time of peak concentration to the end of the dosing interval. The concentration of enalapril after each half-life and the corresponding extent of ACE inhibition are shown in Figure 3–5. The extent of inhibition of ACE is calculated using the Emax model, where Emax, the maximum extent of inhibition, is 100% and the C50, the concentration of the drug that produces 50% of maximum effect, is about 1 ng/mL.
FIGURE 3–5 Time course (hours) of angiotensin-converting enzyme (ACE) inhibitor concentrations and effects. The blue line shows the plasma enalapril concentrations in nanograms per milliliter after a single oral dose. The red line indicates the percentage inhibition of its target, ACE. Note the different shapes of the concentration-time course (exponentially decreasing) and the effect-time course (linearly decreasing in its central portion).
Note that plasma concentrations of enalapril change by a factor of 16 over the first 12 hours (four half-lives) after the peak, but ACE inhibition has only decreased by 20%. Because the concentrations over this time are so high in relation to the C50, the effect on ACE is almost constant. After 24 hours, ACE is still 33% inhibited. This explains why a drug with a short half-life can be given once a day and still maintain its effect throughout the day. The key factor is a high initial concentration in relation to the C50. Even though the plasma concentration at 24 hours is less than 1% of its peak, this low concentration is still half the C50. Once-a-day dosing is common for drugs with minimal adverse effects related to peak concentrations that act on enzymes (eg, ACE inhibitors) or compete at receptors (eg, propranolol).
When concentrations lie between four times and one fourth of the C50, the time course of effect is essentially a linear function of time: it takes four half-lives for the concentration to fall from the level producing 80% of Emax (4 × C50) to the level producing 20% (C50/4), so about 15% of the effect is lost in each half-life over this range. At concentrations below one fourth of the C50, the effect becomes almost directly proportional to concentration, and the time course of drug effect follows the exponential decline of concentration. It is only when concentration is low in relation to the C50 that the concept of a “half-life of drug effect” has any meaning.
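The enalapril example can be reproduced with the Emax model (Python, using the peak concentration, half-life, and C50 quoted in the text):

```python
def percent_inhibition(conc_ng_per_ml, c50=1.0, emax=100.0):
    """Emax model: effect = Emax * C / (C50 + C)."""
    return emax * conc_ng_per_ml / (c50 + conc_ng_per_ml)

# Enalapril: peak ~64 ng/mL, t1/2 ~3 h, C50 ~1 ng/mL.
peak = 64.0
at_peak = percent_inhibition(peak)                   # ~98% inhibition
after_4_half_lives = percent_inhibition(peak / 16)   # 80%: only ~20% of the
                                                     # effect lost in 12 h
after_7_half_lives = percent_inhibition(peak / 128)  # ~33% at the 24-h
                                                     # trough (C = C50/2)
```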
Changes in drug effects are often delayed in relation to changes in plasma concentration. This delay may reflect the time required for the drug to distribute from plasma to the site of action. This will be the case for almost all drugs. The delay due to distribution is a pharmacokinetic phenomenon that can account for delays of a few minutes. This distributional process can account for the short delay of effects after rapid intravenous injection of central nervous system (CNS)–active agents such as thiopental.
Some drugs bind tightly to receptors, and it is the half-life of dissociation that determines the delay in effect, eg, for digoxin. Note that it is the dissociation process that controls the time to receptor equilibrium. This is exactly the same principle as the elimination process controlling the time to accumulate to steady state with a constant rate infusion (see Figure 3–3).
A common reason for more delayed drug effects—especially those that take many hours or even days to occur—is the slow turnover of a physiologic substance that is involved in the expression of the drug effect. For example, warfarin works as an anticoagulant by inhibiting vitamin K epoxidase in the liver. This action of warfarin occurs rapidly, and inhibition of the enzyme is closely related to plasma concentrations of warfarin. The clinical effect of warfarin, eg, on the International Normalized Ratio (INR), reflects a decrease in the concentration of the prothrombin complex of clotting factors. Inhibition of vitamin K epoxidase decreases the synthesis of these clotting factors, but the complex has a long half-life (about 14 hours), and it is this half-life that determines how long it takes for the concentration of clotting factors to reach a new steady state and for a drug effect to reflect the average warfarin plasma concentration.
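This turnover-limited delay can be sketched as an exponential approach to a new steady state (Python; the 14-h half-life comes from the text, while the new steady-state factor level of 30% of baseline is a hypothetical illustration):

```python
import math

def clotting_factor_level(t_h, baseline=1.0, new_ss=0.3, t_half_h=14.0):
    """Exponential approach of the prothrombin complex to its new steady
    state after warfarin inhibits synthesis; the 14-h factor half-life,
    not warfarin's own kinetics, sets the pace of the clinical effect."""
    k = math.log(2) / t_half_h
    return new_ss + (baseline - new_ss) * math.exp(-k * t_h)

# One factor half-life after starting warfarin, the level has covered only
# half the distance from baseline to its new steady state:
level_at_14h = clotting_factor_level(14.0)  # 0.65 of baseline
```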
Some drug effects are more obviously related to a cumulative action than to a rapidly reversible one. The renal toxicity of aminoglycoside antibiotics (eg, gentamicin) is greater when administered as a constant infusion than with intermittent dosing. It is the accumulation of aminoglycoside in the renal cortex that is thought to cause renal damage. Even though both dosing schemes produce the same average steady-state concentration, the intermittent dosing scheme produces much higher peak concentrations, which saturate an uptake mechanism into the cortex; thus, total aminoglycoside accumulation is less. The difference in toxicity is a predictable consequence of the different patterns of concentration and the saturable uptake mechanism.
The effect of many drugs used to treat cancer also reflects a cumulative action—eg, the extent of binding of a drug to DNA is proportional to drug concentration and is usually irreversible. The effect on tumor growth is therefore a consequence of cumulative exposure to the drug. Measures of cumulative exposure, such as AUC, provide a means to individualize treatment.
THE TARGET CONCENTRATION APPROACH TO DESIGNING A RATIONAL DOSAGE REGIMEN
A rational dosage regimen is based on the assumption that there is a target concentration that will produce the desired therapeutic effect. By considering the pharmacokinetic factors that determine the dose-concentration relationship, it is possible to individualize the dose regimen to achieve the target concentration. The effective concentration ranges shown in Table 3–1 are a guide to the concentrations measured when patients are being effectively treated. The initial target concentration should usually be chosen from the lower end of this range. In some cases, the target concentration will also depend on the specific therapeutic objective—eg, the control of atrial fibrillation by digoxin often requires a target concentration of 2 ng/mL, while heart failure is usually adequately managed with a target concentration of 1 ng/mL.
Maintenance Dose

In most clinical situations, drugs are administered in such a way as to maintain a steady state of drug in the body, ie, just enough drug is given in each dose to replace the drug eliminated since the preceding dose. Thus, calculation of the appropriate maintenance dose is a primary goal. Clearance is the most important pharmacokinetic term to be considered in defining a rational steady-state drug dosage regimen. At steady state, the dosing rate (“rate in”) must equal the rate of elimination (“rate out”). Substitution of the target concentration (TC) for concentration (C) in equation (4) predicts the maintenance dosing rate:

Dosing rate = Rate of elimination = CL × TC (9)
Thus, if the desired target concentration is known, the clearance in that patient will determine the dosing rate. If the drug is given by a route that has a bioavailability less than 100%, then the dosing rate predicted by equation (9) must be modified. For oral dosing:

Dosing rate(oral) = Dosing rate/Foral (10)
If intermittent doses are given, the maintenance dose is calculated from:
Maintenance dose = Dosing rate × Dosing interval (11)
(See Box: Example: Maintenance Dose Calculations.)
Note that the steady-state concentration achieved by continuous infusion or the average concentration following intermittent dosing depends only on clearance. The volume of distribution and the half-life need not be known in order to determine the average plasma concentration expected from a given dosing rate or to predict the dosing rate for a desired target concentration. Figure 3–6 shows that at different dosing intervals, the concentration-time curves will have different maximum and minimum values even though the average concentration will always be 10 mg/L.
FIGURE 3–6 Relationship between frequency of dosing and maximum and minimum plasma concentrations when a steady-state theophylline plasma level of 10 mg/L is desired. The smoothly rising black line shows the plasma concentration achieved with an intravenous infusion of 28 mg/h. The doses for 8-hourly administration (orange line) are 224 mg; for 24-hourly administration (blue line), 672 mg. In each of the three cases, the mean steady-state plasma concentration is 10 mg/L.
Estimates of dosing rate and average steady-state concentrations, which may be calculated using clearance, are independent of any specific pharmacokinetic model. In contrast, the determination of maximum and minimum steady-state concentrations requires further assumptions about the pharmacokinetic model. The accumulation factor (equation [7]) assumes that the drug follows a one-compartment model (Figure 3–2B), and the peak concentration prediction assumes that the absorption rate is much faster than the elimination rate. For the calculation of estimated maximum and minimum concentrations in a clinical situation, these assumptions are usually reasonable.
Loading Dose

When the time to reach steady state is appreciable, as it is for drugs with long half-lives, it may be desirable to administer a loading dose that promptly raises the concentration of drug in plasma to the target concentration. In theory, only the amount of the loading dose need be computed—not the rate of its administration—and, to a first approximation, this is so. The volume of distribution is the proportionality factor that relates the total amount of drug in the body to the concentration; if a loading dose is to achieve the target concentration, then from equation (1):

Loading dose = V × TC (12)
Example: Maintenance Dose Calculations
A target plasma theophylline concentration of 10 mg/L is desired to relieve acute bronchial asthma in a patient. If the patient is a nonsmoker and otherwise normal except for asthma, we may use the mean clearance given in Table 3–1, ie, 2.8 L/h/70 kg. Since the drug will be given as an intravenous infusion, F = 1.
Therefore, in this patient, the infusion rate would be 28 mg/h/70 kg.
If the asthma attack is relieved, the clinician might want to maintain this plasma level using oral theophylline, which might be given every 12 hours using an extended-release formulation to approximate a continuous intravenous infusion. According to Table 3–1, Foral is 0.96. When the dosing interval is 12 hours, the size of each maintenance dose would be:
A tablet or capsule size close to the ideal dose of 350 mg would then be prescribed at 12-hourly intervals. If an 8-hour dosing interval were used, the ideal dose would be 233 mg; and if the drug were given once a day, the dose would be 700 mg. In practice, F could be omitted from the calculation since it is so close to 1.
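The arithmetic of this worked example (dosing rate = CL × TC / F; dose per interval = dosing rate × interval / F) can be sketched as follows, using only the numbers given above:

```python
CL = 2.8       # theophylline clearance, L/h per 70 kg (Table 3-1)
TC = 10.0      # target plasma concentration, mg/L
F_IV = 1.0     # intravenous bioavailability
F_ORAL = 0.96  # oral bioavailability of theophylline (Table 3-1)

infusion_rate = CL * TC / F_IV  # maintenance dosing rate, mg/h
print(f"IV infusion: {infusion_rate:.0f} mg/h")  # 28 mg/h

for tau in (8, 12, 24):  # dosing interval, h
    dose = infusion_rate * tau / F_ORAL  # ideal oral dose per interval, mg
    print(f"every {tau:2d} h: {dose:.0f} mg")  # 233, 350, and 700 mg, as in the text
```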
For the theophylline example given in the Box, Example: Maintenance Dose Calculations, the loading dose would be 350 mg (35 L × 10 mg/L) for a 70-kg person. For most drugs, the loading dose can be given as a single dose by the chosen route of administration.
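The loading-dose calculation in this paragraph (loading dose = V × TC, divided by F for oral dosing) is simple enough to sketch directly:

```python
def loading_dose_mg(v_L, target_mg_per_L, F=1.0):
    """Loading dose that promptly raises the plasma level to the target: V x TC / F."""
    return v_L * target_mg_per_L / F

# Theophylline example: V = 35 L per 70 kg, target 10 mg/L, intravenous (F = 1)
print(loading_dose_mg(35, 10))  # prints 350.0
```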
Up to this point, we have ignored the fact that some drugs follow more complex multicompartment pharmacokinetics, eg, the distribution process illustrated by the two-compartment model in Figure 3–2. This is justified in the great majority of cases. However, in some cases the distribution phase may not be ignored, particularly in connection with the calculation of loading doses. If the rate of absorption is rapid relative to distribution (this is always true for rapid intravenous administration), the concentration of drug in plasma that results from an appropriate loading dose—calculated using the apparent volume of distribution—can initially be considerably higher than desired. Severe toxicity may occur, albeit transiently. This may be particularly important, eg, in the administration of antiarrhythmic drugs such as lidocaine, where an almost immediate toxic response may occur. Thus, while the estimation of the amount of a loading dose may be quite correct, the rate of administration can sometimes be crucial in preventing excessive drug concentrations, and slow administration of an intravenous drug (over minutes rather than seconds) is almost always prudent practice.
When intermittent doses are given, the loading dose calculated from equation (12) will only reach the average steady-state concentration and will not match the peak steady-state concentration (Figure 3–6). To match the peak steady-state concentration, the loading dose can be calculated from equation (13):
TARGET CONCENTRATION INTERVENTION: APPLICATION OF PHARMACOKINETICS & PHARMACODYNAMICS TO DOSE INDIVIDUALIZATION
The basic principles outlined above can be applied to the interpretation of clinical drug concentration measurements on the basis of three major pharmacokinetic variables: absorption, clearance, and volume of distribution (and the derived variable, half-life). In addition, it may be necessary to consider two pharmacodynamic variables: maximum effect attainable in the target tissue and the sensitivity of the tissue to the drug. Diseases may modify all of these parameters, and the ability to predict the effect of disease states on pharmacokinetic parameters is important in properly adjusting dosage in such cases. (See Box: The Target Concentration Strategy.)
Pharmacokinetic Variables
A. Absorption
The amount of drug that enters the body depends on the patient’s adherence to the prescribed regimen and on the rate and extent of transfer from the site of administration to the blood.
The Target Concentration Strategy
Recognition of the essential role of concentration in linking pharmacokinetics and pharmacodynamics leads naturally to the target concentration strategy. Pharmacodynamic principles can be used to predict the concentration required to achieve a particular degree of therapeutic effect. This target concentration can then be achieved by using pharmacokinetic principles to arrive at a suitable dosing regimen (Holford, 1999). The target concentration strategy is a process for optimizing the dose in an individual on the basis of a measured surrogate response such as drug concentration:
1. Choose the target concentration, TC.
2. Predict volume of distribution (V) and clearance (CL) based on standard population values (eg, Table 3–1) with adjustments for factors such as weight and renal function.
3. Give a loading dose or maintenance dose calculated from TC, V, and CL.
4. Measure the patient’s response and drug concentration.
5. Revise V and/or CL based on the measured concentration.
6. Repeat steps 3–5, adjusting the predicted dose to achieve TC.
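One cycle of steps 3–6 might be sketched as below. The numbers (a digoxin-like clearance of 7 L/h, F of 0.7, and a measured level of 3 ng/mL) are hypothetical, chosen only to show the proportional revision; this is a sketch of the strategy, not a dosing protocol.

```python
TC = 2.0  # target digoxin concentration, ng/mL (case study value)
F = 0.7   # oral bioavailability of digoxin (Table 3-1)
cl = 7.0  # L/h, illustrative initial population clearance estimate

def dose_rate_for_target(cl, tc, f):
    """Step 3: maintenance dose rate (mcg/h) expected to achieve TC."""
    return tc * cl / f

def revise_clearance(cl_predicted, c_predicted, c_measured):
    """Step 5: at steady state Css is inversely proportional to CL,
    so scale the clearance estimate by predicted/measured concentration."""
    return cl_predicted * c_predicted / c_measured

rate = dose_rate_for_target(cl, TC, F)      # step 3
c_measured = 3.0                            # step 4: hypothetical measured level
cl = revise_clearance(cl, TC, c_measured)   # step 5: revised CL
new_rate = dose_rate_for_target(cl, TC, F)  # step 6: repeat step 3 with revised CL
print(round(rate, 1), round(cl, 2), round(new_rate, 1))  # 20.0 4.67 13.3
```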
Overdosage and underdosage relative to the prescribed dosage—both aspects of failure of adherence—can frequently be detected by concentration measurements when gross deviations from expected values are obtained. If adherence is found to be adequate, absorption abnormalities in the small bowel may be the cause of abnormally low concentrations. Variations in the extent of bioavailability are rarely caused by irregularities in the manufacture of the particular drug formulation. More commonly, variations in bioavailability are due to metabolism during absorption.
B. Clearance
Abnormal clearance may be anticipated when there is major impairment of the function of the kidney, liver, or heart. Creatinine clearance is a useful quantitative indicator of renal function. Conversely, drug clearance may be a useful indicator of the functional consequences of heart, kidney, or liver failure, often with greater precision than clinical findings or other laboratory tests. For example, when renal function is changing rapidly, estimation of the clearance of aminoglycoside antibiotics may be a more accurate indicator of glomerular filtration than serum creatinine.
Hepatic disease has been shown to reduce the clearance and prolong the half-life of many drugs. However, for many other drugs known to be eliminated by hepatic processes, no changes in clearance or half-life have been noted with similar hepatic disease. This reflects the fact that hepatic disease does not always affect the hepatic intrinsic clearance. At present, there is no reliable marker of hepatic drug-metabolizing function that can be used to predict changes in liver clearance in a manner analogous to the use of creatinine clearance as a marker of renal drug clearance.
C. Volume of Distribution
The apparent volume of distribution reflects a balance between binding to tissues, which decreases plasma concentration and makes the apparent volume larger, and binding to plasma proteins, which increases plasma concentration and makes the apparent volume smaller. Changes in either tissue or plasma binding can change the apparent volume of distribution determined from plasma concentration measurements. Older people have a relative decrease in skeletal muscle mass and tend to have a smaller apparent volume of distribution of digoxin (which binds to muscle proteins). The volume of distribution may be overestimated in obese patients if based on body weight and the drug does not enter fatty tissues well, as is the case with digoxin. In contrast, theophylline has a volume of distribution similar to that of total body water. Adipose tissue has almost as much water in it as other tissues, so that the apparent total volume of distribution of theophylline is proportional to body weight even in obese patients.
Abnormal accumulation of fluid—edema, ascites, pleural effusion—can markedly increase the volume of distribution of drugs such as gentamicin that are hydrophilic and have small volumes of distribution.
D. Half-Life
The differences between clearance and half-life are important in defining the underlying mechanisms for the effect of a disease state on drug disposition. For example, the half-life of diazepam increases with patient age. When clearance is related to age, it is found that clearance of this drug does not change with age. The increasing half-life for diazepam actually results from changes in the volume of distribution with age; the metabolic processes responsible for eliminating the drug are fairly constant.
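The diazepam observation follows directly from the relationship t1/2 = 0.693 × V/CL: at constant clearance, half-life scales with the volume of distribution. A minimal sketch with illustrative (not diazepam-specific) numbers:

```python
import math

def half_life_h(v_L, cl_L_per_h):
    """t1/2 = ln(2) x V / CL: at constant clearance, half-life scales with volume."""
    return math.log(2) * v_L / cl_L_per_h

# Illustrative numbers: doubling V at constant CL doubles the half-life
print(round(half_life_h(77, 1.6), 1), round(half_life_h(154, 1.6), 1))  # 33.4 66.7
```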
Pharmacodynamic Variables
A. Maximum Effect
All pharmacologic responses must have a maximum effect (Emax). No matter how high the drug concentration goes, a point will be reached beyond which no further increment in response is achieved.
If increasing the dose in a particular patient does not lead to a further clinical response, it is possible that the maximum effect has been reached. Recognition of maximum effect is helpful in avoiding ineffectual increases of dose with the attendant risk of toxicity.
B. Sensitivity
The sensitivity of the target organ to drug concentration is reflected by the concentration required to produce 50% of maximum effect, the C50. Diminished sensitivity to the drug can be detected by measuring drug concentrations that are usually associated with therapeutic response in a patient who has not responded. This may be a result of abnormal physiology—eg, hyperkalemia diminishes responsiveness to digoxin—or drug antagonism—eg, calcium channel blockers impair the inotropic response to digoxin.
Increased sensitivity to a drug is usually signaled by exaggerated responses to small or moderate doses. The pharmacodynamic nature of this sensitivity can be confirmed by measuring drug concentrations that are low in relation to the observed effect.
INTERPRETATION OF DRUG CONCENTRATION MEASUREMENTS
Clearance is the single most important factor determining drug concentrations. The interpretation of measurements of drug concentrations depends on a clear understanding of three factors that may influence clearance: the dose, the organ blood flow, and the intrinsic function of the liver or kidneys. Each of these factors should be considered when interpreting clearance estimated from a drug concentration measurement.
It must also be recognized that changes in protein binding may lead the unwary to believe there is a change in clearance when in fact drug elimination is not altered (see Box: Plasma Protein Binding: Is It Important?). Factors affecting protein binding include the following:
1. Albumin concentration: Drugs such as phenytoin, salicylates, and disopyramide are extensively bound to plasma albumin. Albumin levels are low in many disease states, resulting in lower total drug concentrations.
2. α1-Acid glycoprotein concentration: α1-Acid glycoprotein is an important binding protein with binding sites for drugs such as quinidine, lidocaine, and propranolol. It is increased in acute inflammatory disorders and causes major changes in total plasma concentration of these drugs even though drug elimination is unchanged.
3. Capacity-limited protein binding: The binding of drugs to plasma proteins is capacity-limited. Therapeutic concentrations of salicylates and prednisolone show concentration-dependent protein binding. Because unbound drug concentration is determined by dosing rate and clearance—which is not altered, in the case of these low-extraction-ratio drugs, by protein binding—increases in dosing rate will cause corresponding changes in the pharmacodynamically important unbound concentration. In contrast, total drug concentration will increase less rapidly than the dosing rate would suggest as protein binding approaches saturation at higher concentrations.
4. Binding to red blood cells: Drugs such as cyclosporine and tacrolimus bind extensively inside red blood cells. Typically, whole blood concentrations are measured, and they are about 50 times higher than plasma concentration. A decrease in red blood cell concentration (reflected in the hematocrit) will cause whole blood concentration to fall without a change in pharmacologically active concentrations. Standardization of concentrations to a standard hematocrit helps to interpret the concentration-effect relationship.
An accurate dosing history is essential if one is to obtain maximum value from a drug concentration measurement. In fact, if the dosing history is unknown or incomplete, a drug concentration measurement loses all predictive value.
Timing of Samples for Concentration Measurement
Information about the rate and extent of drug absorption in a particular patient is rarely of great clinical importance. Absorption usually occurs during the first 2 hours after a drug dose and varies according to food intake, posture, and activity. Therefore, it is important to avoid drawing blood until absorption is complete (about 2 hours after an oral dose). Attempts to measure peak concentrations early after oral dosing are usually unsuccessful and compromise the validity of the measurement, because one cannot be certain that absorption is complete.
Some drugs such as digoxin and lithium take several hours to distribute to tissues. Digoxin samples should be taken at least 6 hours after the last dose and lithium just before the next dose (usually 24 hours after the last dose). Aminoglycosides distribute quite rapidly, but it is still prudent to wait 1 hour after giving the dose before taking a sample.
Plasma Protein Binding: Is It Important?
Plasma protein binding is often mentioned as a factor playing a role in pharmacokinetics, pharmacodynamics, and drug interactions. However, there are no clinically relevant examples of changes in drug disposition or effects that can be clearly ascribed to changes in plasma protein binding (Benet & Hoener, 2002). The idea that if a drug is displaced from plasma proteins it would increase the unbound drug concentration and increase the drug effect and, perhaps, produce toxicity seems a simple and obvious mechanism. Unfortunately, this simple theory, which is appropriate for a test tube, does not work in the body, which is an open system capable of eliminating unbound drug.
First, a seemingly dramatic change in the unbound fraction from 1% to 10% releases less than 5% of the total amount of drug in the body into the unbound pool because less than one third of the drug in the body is bound to plasma proteins even in the most extreme cases, eg, warfarin. Drug displaced from plasma protein will of course distribute throughout the volume of distribution, so that a 5% increase in the amount of unbound drug in the body produces at most a 5% increase in pharmacologically active unbound drug at the site of action.
Second, when the amount of unbound drug in plasma increases, the rate of elimination will increase (if unbound clearance is unchanged), and after four half-lives the unbound concentration will return to its previous steady-state value. When drug interactions associated with protein binding displacement and clinically important effects have been studied, it has been found that the displacing drug is also an inhibitor of clearance, and it is the change in clearance of the unbound drug that is the relevant mechanism explaining the interaction.
The clinical importance of plasma protein binding is only to help interpretation of measured drug concentrations. When plasma proteins are lower than normal, total drug concentrations will be lower but unbound concentrations will not be affected.
Clearance is readily estimated from the dosing rate and mean steady-state concentration. Blood samples should be appropriately timed to estimate steady-state concentration. Provided steady state has been approached (at least three half-lives of constant dosing), a sample obtained near the midpoint of the dosing interval will usually be close to the mean steady-state concentration.
Initial Predictions of Volume of Distribution & Clearance
A. Volume of Distribution
Volume of distribution is commonly calculated for a particular patient using body weight (70-kg body weight is assumed for the values in Table 3–1). If a patient is obese, drugs that do not readily penetrate fat (eg, gentamicin, digoxin, tacrolimus, gemcitabine) should have their volumes calculated from fat-free mass (FFM) as shown below. Total body weight (WT) is in kilograms and height (HTM) is in meters:
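The fat-free mass equation referred to here is not reproduced in this excerpt. One widely used form, from Janmahasatian and colleagues (2005), is sketched below as a plausible stand-in; treat the coefficients as an assumption rather than the chapter's own equation.

```python
def fat_free_mass_kg(wt_kg, ht_m, female):
    """Fat-free mass estimate (Janmahasatian et al, 2005) -- an assumed form,
    since the excerpt omits the equation itself. WT in kg, HTM in meters."""
    if female:
        return 37.99 * ht_m**2 * wt_kg / (35.98 * ht_m**2 + wt_kg)
    return 42.92 * ht_m**2 * wt_kg / (30.93 * ht_m**2 + wt_kg)

# A 100-kg, 1.7-m man: roughly 65 kg fat-free mass
print(round(fat_free_mass_kg(100, 1.7, female=False), 1))
```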
Patients with edema, ascites, or pleural effusions offer a larger volume of distribution to the aminoglycoside antibiotics (eg, gentamicin) than is predicted by body weight. In such patients, the weight should be corrected as follows: Subtract an estimate of the weight of the excess fluid accumulation from the measured weight. Use the resultant “normal” body weight to calculate the normal volume of distribution. Finally, this normal volume should be increased by 1 L for each estimated kilogram of excess fluid. This correction is important because of the relatively small volumes of distribution of these water-soluble drugs.
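The three-step correction described above can be sketched as follows; the normal volume per kilogram is an illustrative value (roughly 0.25 L/kg for gentamicin), and the tabulated value should be used in practice.

```python
def aminoglycoside_volume_L(measured_wt_kg, excess_fluid_kg, v_per_kg=0.25):
    """Volume of distribution for an aminoglycoside when excess fluid is present.
    v_per_kg (~0.25 L/kg for gentamicin) is an illustrative normal value."""
    normal_wt = measured_wt_kg - excess_fluid_kg  # 1: subtract excess fluid weight
    normal_v = v_per_kg * normal_wt               # 2: normal volume from that weight
    return normal_v + 1.0 * excess_fluid_kg       # 3: add 1 L per kg of excess fluid

print(aminoglycoside_volume_L(80, 10))  # 27.5 L for an 80-kg patient with 10 kg of fluid
```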
B. Clearance
Drugs cleared by the renal route often require adjustment of clearance in proportion to renal function. This can be conveniently estimated from the creatinine clearance, calculated from a single serum creatinine measurement and the predicted creatinine production rate.
The predicted creatinine production rate in women is 85% of the calculated value because they have a smaller muscle mass per kilogram, and it is muscle mass that determines creatinine production. Muscle mass as a fraction of body weight decreases with age, which is why age appears in the Cockcroft-Gault equation.*
The decrease of renal function with age is independent of the decrease in creatinine production. Because of the difficulty of obtaining complete urine collections, creatinine clearance calculated in this way is at least as reliable as estimates based on urine collections. The fat-free mass (equation ) should be considered rather than total body weight for obese patients, and correction should be made for muscle wasting in severely ill patients.
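The Cockcroft-Gault equation itself appears in Chapter 60 rather than in this excerpt; its standard form, assuming serum creatinine in mg/dL, is sketched below and reproduces the 22 mL/min used in the case study answer.

```python
def creatinine_clearance(age_y, wt_kg, scr_mg_dl, female):
    """Cockcroft-Gault estimate of creatinine clearance, mL/min.
    Standard form with serum creatinine in mg/dL; fat-free mass should
    replace total weight in obese patients (see text)."""
    crcl = (140 - age_y) * wt_kg / (72 * scr_mg_dl)
    return 0.85 * crcl if female else crcl  # 85% reflects smaller muscle mass in women

# Case-study patient: 85 years, 60 kg, serum creatinine 1.8 mg/dL, female
print(round(creatinine_clearance(85, 60, 1.8, female=True)))  # 22 mL/min
```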
Revising Individual Estimates of Volume of Distribution & Clearance
The commonsense approach to the interpretation of drug concentrations compares predictions of pharmacokinetic parameters and expected concentrations to measured values. If measured concentrations differ by more than 20% from predicted values, revised estimates of V or CL for that patient should be calculated using equation (1) or equation (2). If the change calculated is more than a 100% increase or 50% decrease in either V or CL, the assumptions made about the timing of the sample and the dosing history should be critically examined.
For example, if a patient is taking 0.25 mg of digoxin a day, a clinician may expect the digoxin concentration to be about 1 ng/mL. This is based on typical values for bioavailability of 70% and total clearance of about 7 L/h (CLrenal 4 L/h, CLnonrenal 3 L/h). If the patient has heart failure, the nonrenal (hepatic) clearance might be halved because of hepatic congestion and hypoxia, so the expected clearance would become 5.5 L/h. The concentration is then expected to be about 1.3 ng/mL. Suppose that the concentration actually measured is 2 ng/mL. Common sense would suggest halving the daily dose to achieve a target concentration of 1 ng/mL. This approach implies a revised clearance of 3.5 L/h. The smaller clearance compared with the expected value of 5.5 L/h may reflect additional renal functional impairment due to heart failure.
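The digoxin arithmetic in this example can be checked with a short sketch; every value comes from the paragraph above.

```python
F = 0.7                 # digoxin oral bioavailability (from the example)
dose_rate = 250 / 24.0  # mcg/h for 0.25 mg once daily

def css(cl_L_per_h):
    """Average steady-state concentration, mcg/L (numerically ng/mL)."""
    return F * dose_rate / cl_L_per_h

print(round(css(7.0), 2))  # 1.04: ~1 ng/mL with normal clearance
print(round(css(5.5), 2))  # 1.33: ~1.3 ng/mL with halved nonrenal clearance
revised_cl = F * dose_rate / 2.0  # clearance implied by a measured level of 2 ng/mL
print(round(revised_cl, 2))       # 3.65, close to the 3.5 L/h quoted above
```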
This technique will often be misleading if steady state has not been reached. At least a week of regular dosing (four half-lives) must elapse before the implicit method will be reliable.
Benet LZ, Hoener B: Changes in plasma protein binding have little clinical relevance. Clin Pharmacol Ther 2002;71:115.
Holford NHG: Pharmacokinetic and pharmacodynamic principles, 2013. http://holford.fmhs.auckland.ac.nz/teaching/pharmacometrics/advanced.php.
Holford NHG: Target concentration intervention: Beyond Y2K. Br J Clin Pharmacol 1999;48:9.
Holford NHG, Sheiner LB: Understanding the dose-effect relationship. Clin Pharmacokinet 1981;6:429.
Holford N, Heo YA, Anderson B: A pharmacokinetic standard for babies and adults. J Pharm Sci 2013;102:2941.
CASE STUDY ANSWER
Sixty-seven percent of total standard digoxin clearance is renal, so the standard renal clearance is 0.67 × 9 L/h = 6 L/h/70 kg at a creatinine clearance of 100 mL/min, and nonrenal clearance is (1 − 0.67) × 9 L/h = 3 L/h/70 kg (see Table 3–1 for standard pharmacokinetic parameters). Her predicted creatinine clearance is 22 mL/min (Cockcroft & Gault), so for digoxin her renal clearance is 6 × 22/100 × 60/70 = 1.1 L/h, nonrenal clearance is 3 × 60/70 = 2.6 L/h, and total clearance is 3.7 L/h. The parenteral maintenance dose rate is 2 mcg/L × 3.7 L/h = 7.4 mcg/h. Once-a-day oral dosing with a bioavailability of 0.7 would require a daily maintenance dose of 7.4/0.7 × 24 = 254 mcg/day. A practical dose would be one 250 mcg tablet per day.
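The case study arithmetic can be reproduced step by step. Small differences from the rounded figures quoted above (2.6 L/h, 254 mcg/day) come from carrying full precision through the calculation.

```python
STD_CL = 9.0           # standard total digoxin clearance, L/h per 70 kg (Table 3-1)
RENAL_FRACTION = 0.67  # fraction of digoxin clearance that is renal
F_ORAL = 0.7           # oral bioavailability of digoxin
TC = 2.0               # target concentration, ng/mL (numerically mcg/L)

wt = 60.0    # patient weight, kg
crcl = 22.0  # predicted creatinine clearance, mL/min (Cockcroft-Gault)

renal_cl = RENAL_FRACTION * STD_CL * (crcl / 100) * (wt / 70)  # ~1.1 L/h
nonrenal_cl = (1 - RENAL_FRACTION) * STD_CL * (wt / 70)        # ~2.5 L/h
total_cl = renal_cl + nonrenal_cl                              # ~3.7 L/h

iv_rate = TC * total_cl             # parenteral maintenance rate, mcg/h (~7.4)
oral_daily = iv_rate * 24 / F_ORAL  # once-daily oral dose, mcg/day (~253)
print(round(total_cl, 1), round(iv_rate, 1), round(oral_daily))
```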
*The Cockcroft-Gault equation is given in Chapter 60.