## Review of Phoenix Connect Software Tool

In this post, I am reviewing the Phoenix® Connect package from Pharsight (a Certara company). Over the past several years, Pharsight has made a significant effort to modernize its pharmacokinetic analysis software tools to aid in the drug development process. While many longtime users of the PCNonlin and WinNonlin software solutions are disappointed by the need to learn a new interface, I liken this change to the move from text-entry computing (e.g. DOS) to the windows-based platforms we now use (e.g. Windows, Mac OS X). You can still find people who love command-line computing; however, the vast majority of users are comfortable, and even more efficient, with the graphical user interfaces we now see on computers and other electronic devices.

The Phoenix platform consists of modular packages that fit together on the Phoenix whiteboard to provide an integrated solution for pharmacokinetic and pharmacodynamic analysis.

Phoenix Platform

These software packages, each with a different purpose, fit together interchangeably, much like Lego bricks. This review focuses on the Phoenix Connect product, which is displayed as a Data Management tool; however, it actually provides connectivity to a myriad of external tools. I prefer to think of Phoenix Connect as the tool that connects items within Phoenix to the outside world. Phoenix Connect has three primary functions, which I will review in order:

1. Connecting Phoenix with data sources
2. Integration of external analysis tools into Phoenix workflows
3. Exporting results for reporting

### Connecting Phoenix with data sources

Pharmacokinetic analysis is one part of the data analysis in a study. As such, data flows from one source to another, with its final destination a report or regulatory document. The Phoenix Connect tool provides seamless integration to connect incoming demographic and concentration data with pharmacokinetic analysis procedures. Phoenix Connect provides methods to connect to a variety of external data sources. One of those sources is the Pharsight Knowledgebase Server (PKS), a proprietary database for PK and PD analyses. Phoenix Connect also provides flexibility by allowing import of SAS transport files in SDTM formats, or connections to other external databases (e.g. via ODBC).

While connectivity to external data sources is important, Phoenix amplifies this valuable feature by allowing the creation of templates and data links to automate data import. This allows a user to set up a pharmacokinetic analysis procedure (e.g. handling of BLQ values, non-compartmental analysis, concentration-time plots, data summarization, etc.) that can be repeated simply by connecting to new data sources.

### Integration of external analysis tools

Many users think of Phoenix as an updated version of Pharsight’s WinNonlin product, but Phoenix is really a whiteboard platform that can be used with a large number of analysis tools. Phoenix Connect permits analysis with NONMEM, R, S-Plus, SAS, and many other analytical tools. Pharsight sells competing tools (e.g. Phoenix WinNonlin and Phoenix NLME); however, the Phoenix platform can also be used to conduct native analyses with other tools. With Phoenix Connect, the user can add a NONMEM object that includes the dataset, control stream, and all output files. The resulting output files can then be used in other Phoenix objects (tables, plots, etc.), including integrated R scripts. Phoenix Connect allows the user to organize and standardize pharmacometric analysis: instead of a myriad of files and directories, all pharmacometric files are consolidated in a single Phoenix project file.

NONMEM setup

NONMEM Output

NONMEM Plot

This feature of Phoenix Connect allows for consolidation and harmonization of all pharmacokinetic and pharmacometric analyses using the Phoenix platform, while retaining the value of multiple analysis tools (e.g. WinNonlin, NONMEM, R, etc.). Analyses can be performed using the native environment, yet the advantages of Phoenix can be leveraged with any of these tools. Using Phoenix Connect with external analysis tools also provides an increased level of quality control that can improve compliance with regulations and company standard operating procedures.

### Exporting results for reporting

Phoenix Connect also provides a reporter tool that can be used to export results for incorporation into study reports. This feature completes the connection of Phoenix from data to results. The reporter tool allows you to select tables, figures, or text (called “listings” in the Reporter object) from your workflow and output them into a Microsoft Word document using context-sensitive table/figure/listing titles. The Reporter object functions like all other Phoenix objects in that it takes inputs (tables, figures, and text) and operates on them to produce output (an MS Word file).

This new Reporter object can be used to compile tables and figures for study reports easily and quickly. For example, let’s take a bioequivalence clinical study (2-period 2-way crossover study). Standard figures include individual concentration-time profiles, mean concentration-time profiles for each formulation, and dot plots comparing Cmax and AUC across formulations. Standard tables include a listing of concentration-time data by subject, listing of individual PK parameter estimates, mean concentration-time data by formulation, mean PK parameter estimates by formulation, and a statistical comparison of Cmax and AUC across formulations. All of these tables and figures can be produced using Phoenix objects (table objects and plot objects, respectively). Instead of exporting each one independently, these objects can be fed into a Reporter object with context-sensitive information (e.g. analyte name, formulation, period, subject ID, etc.) that can be included in the title of the table or figure. These separate objects can then be exported together into a Microsoft Word document that can be saved and directly imported into a clinical study report.

Reporter Setup

Reporter Output

If the data is updated, then the Reporter object, like any other Phoenix object, will turn pink to notify you that it is not current with the preceding objects. You can quickly refresh the workflow and update the output document to reflect the changes. This new Reporter object can be used within templates to standardize output for different studies and reports.

The new Reporter tool is an exciting advancement for the Phoenix platform. While Phoenix has been an excellent analysis platform, it always lacked a quality method for exporting information in an easy and useful manner. Individual file exports were time consuming, and could not be tracked within the workflow. In addition, the output was simply a data dump rather than a series of organized outputs that are report-ready. By adding the ability to customize titles and footers, the Reporter tool allows Phoenix to fully execute analyses from input data to final output (tables, figures, listings) for pharmacokinetic and pharmacodynamic analyses.

### Overall impression and recommendations for the future

The updates to the Phoenix Connect product have made it an indispensable part of the Phoenix experience. The data connections provide access to a wide range of data sources. As companies standardize on SDTM data formats, the data connection tool will become increasingly important, removing the need for a SAS programmer to create a separate PK analysis dataset.

The ability to execute third-party software within Phoenix is the hidden gem of the Phoenix Connect product. It permits incorporation of these tools into a regulatory-compliant workflow that can be controlled in a validated environment, simplifies the collection of output, and clearly identifies whether results are current by color coding (white for current, pink for not current). Using Phoenix to manage your pharmacometrics work may soon become the standard for NONMEM control file management. Additional tools such as custom R or S-Plus scripts, or even custom SAS code, allow the pharmacokinetic scientist to execute programs independently in a way that permanently links the analysis to its inputs and outputs.

Finally, the new reporter tool is a first step toward quality, report-ready output for inclusion in study reports. The reporter tool’s context-sensitive titles and footnotes are simple, useful, and easy to create. Like other objects, the reporter confirms that the output is current using the same color coding (white and pink) as all other Phoenix objects. Future additions, such as reporter output to PowerPoint files, font selection for titles, and control of footnote locations, will make the reporter tool even more useful.

I highly recommend using the Phoenix Connect tool in your standard pharmacokinetic analysis workflow. Also, addition of this tool turns Phoenix into a pharmacometrics platform, even if you use NONMEM exclusively for your nonlinear mixed-effects modeling work. You can learn more about Phoenix Connect at the Pharsight website (http://www.certara.com/products/pkpd/phx-connect).

## Simulating BLQ values in NONMEM

When simulating concentration-time data using NONMEM, there are times when you wish to include the censoring effect of BLQ (below the limit of quantitation) measurements from your bioanalytical laboratory. It is very easy to implement this in your NONMEM control stream. Here is how I do it.

In the $ERROR block, you will be adding your residual error to the prediction. The prediction is denoted F in NONMEM code. The individual prediction that is output as DV is denoted Y in NONMEM code. Let’s assume that the limit of quantitation is 0.1 ng/mL. The code to add at the end of your error code is the following:

```
Y = F + ERR(1)
IF (Y.LT.0.1) THEN
  Y = 0
ENDIF
```

The first line can vary depending on your specific residual error structure. The heavy lifting is done by the second line, where the value Y is compared to the target value of 0.1 using the Fortran term for “less than” (LT). If the predicted value is less than the limit of quantitation, it is replaced by the number zero; if it is greater than or equal to the limit, the prediction does not change. You cannot assign text (e.g. “BLQ”) to the value of Y; only numerical values are accepted. In my example I set it to zero; however, I could have assigned a value of -99999 instead. Either way, I would extract these “BLQ” values from the simulated dataset before plotting the data.

## NONMEM Software Review – Part 2

In the first part of my review of NONMEM, I focused on the installation of the software. This portion of the review will focus on the use of the software. NONMEM is a collection of Fortran programs that need to be run from a command line or through some sort of batch procedure. While this method of program execution was common in the 1980s and 1990s, it is very uncommon in 2011. Thus many companies and individuals have developed scripts, batch files, and graphical user interfaces to simplify the execution of NONMEM and the post-processing activities after NONMEM completes. Two of the most popular GUIs are PDx-POP and PLT Tools. These will be reviewed separately, as they add functionality not currently available in the NONMEM package.
### Running NONMEM

To run NONMEM, you can issue a simple statement from the command line:

```
nmfe7 [model file] [output file]
```

(Note: the command on each operating system differs slightly.) The nmfe7 batch file provided during the installation takes the model file, initiates a NONMEM run, and then produces a text output file for the user. NONMEM will also output any table files requested by the user in the model file. These table files generally include individual parameter estimates, model concentration estimates, and other data necessary for model diagnostic evaluations.

### Estimation methods

NONMEM 7 builds upon the robust model-fitting engine that was originally developed by Stuart Beal and Lewis Sheiner in the 1970s. The original model-fitting methods of First Order approximation (FO), First Order Conditional Estimation (FOCE), First Order Conditional Estimation with Interaction (FOCEI), and Laplace (second-order approximation) are all present and work well. These methods benefit from some improvements in gradient processing, which reduce run times and avoid premature termination of certain complicated models. In addition to these methods, ICON has added several Bayesian methods, which provide distributions of parameter estimates as a result rather than a single set of best-fit parameters. These new methods include:

- Importance sampling Expectation Maximization
- Iterative two-stage
- Stochastic Approximation Expectation Maximization (SAEM)
- Full Markov Chain Monte Carlo (MCMC) Bayesian analysis

All of these new methods require MU referencing, which tells NONMEM how the THETA parameters are associated arithmetically with the etas and individual parameters. An example of this conversion is shown here:

```
; NONMEM VI code
CL = THETA(3) + ETA(3)

; NONMEM 7 MU referencing
MU_2 = THETA(3)
CL = MU_2 + ETA(3)
```

This MU referencing is used to speed up the execution of the new Bayesian methods.
Reported improvements are seen with complex models that may not minimize using FOCE, where MU-referenced Bayesian methods provide adequate model fits for interpretation.

A key new feature of NONMEM 7 is the ability to run multiple estimation steps in one control stream. For example, you can start the model using a Bayesian methodology to quickly approach a set of reasonable parameter estimates, and then invoke a FOCE method using the last iteration of the previous method as a starting point. The ability to execute multiple estimation steps in a single control stream can prove very useful when working with difficult problems or complex sets of data and equations.

### NONMEM Output

In addition to the new methods, ICON has updated the output to make it more user-friendly and versatile. In the text output file, the following tags have been added to facilitate extraction of key information:

- #METH: Text that describes the method, for example First Order Conditional Estimation Method with Interaction.
- #TERM: Indicates the beginning of the termination status of the estimation analysis.
- #TERE: Indicates the end of the lines describing the termination status of the analysis.
- #OBJT: Indicates the beginning of the text describing the objective function, such as Minimal Value Of Objective Function.
- #OBJV: Indicates the objective function value.
- #OBJS: Indicates the objective function standard deviation (MCMC Bayesian analysis only).

Another addition is the output of conditional weighted residuals as a default output of NONMEM 7. This eliminates the need to calculate them separately or with customized code. Finally, all variance, covariance, and parameter estimates are now output in a standard table format as a separate file, eliminating the need to extract this information from the text output file with a specialized tool.
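These tags make the output file easy to process with a short script. Here is a rough Python sketch of pulling the objective function value via the #OBJV tag; the exact spacing and decoration of the line vary between NONMEM versions, so the sample fragment below is an assumption for illustration:

```python
import re

def extract_objective(lines):
    """Scan NONMEM 7 output lines for the #OBJV tag and return the
    objective function value as a float, or None if not found."""
    for line in lines:
        if line.strip().startswith("#OBJV:"):
            # The numeric value follows the tag on the same line.
            match = re.search(r"[-+]?\d+\.\d+", line)
            if match:
                return float(match.group())
    return None

# Fabricated output fragment for illustration:
sample = [
    "#METH: First Order Conditional Estimation Method with Interaction",
    "#OBJV:********************************     1234.567     ********",
]
print(extract_objective(sample))
```

A similar scan works for the other tags, such as #TERM/#TERE to capture the termination messages.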
### The Good

NONMEM continues to be the gold standard in software for nonlinear mixed-effects models in the pharmaceutical industry. The updates and additions in NONMEM 7 have added to the repertoire of tools available to the pharmacometrician to evaluate pharmacokinetic and pharmacodynamic data. The addition of the Bayesian methods is of particular interest to the many who develop complex models that do not appear to converge using the traditional conditional estimation methods. The upgrade to Fortran 95 was significant, as support for Fortran 77 was waning, making it increasingly difficult to find appropriate compilers. In addition, the complete rebuilding of the NONMEM output is a welcome improvement: it provides the end user with the ability to quickly access the desired information without a text extraction program or a painful review of the output file.

### Room for Improvement

Although NONMEM 7 is a step in the right direction, there is still a huge void in the space for a high-quality nonlinear mixed-effects modeling program with a viable graphical user interface. NONMEM 7 still requires command-line interaction, at a minimum for the installation process and then again for execution, unless a separate GUI is purchased. Furthermore, NONMEM 7 only performs model regression; it does not contain any post-processing capabilities. This leaves diagnostic analysis split amongst a variety of tools such as Excel, R, S-Plus, SAS, and many others. Each user tends to create a “system” of software to perform their analysis. In the end, we have no common end-to-end software package for pharmacometric analysis.
My recommendations for future NONMEM development are the following:

- An integrated Fortran compiler that is invisible to the user
- An integrated GUI and post-processing tool for standard analyses
- Continued improvements to existing estimation methods and the addition of new methods

### Conclusion

Overall, NONMEM continues to be a leader in pharmacometric analysis tools. After many years of minimal development, the ICON team has added significant value to the product. However, there is still room to improve and simplify the software installation and interface to ensure continued leadership in the market. I will continue to use NONMEM for my population pharmacokinetic and pharmacodynamic analyses, but I will always be looking for the next software package that can bridge the gap to the modern era of GUI computing. You can find out more about NONMEM at ICON’s website. The following contact information was found on the ICON website: license enquiries can be made by email (IDSSoftware@iconplc.com), telephone (+1 410-696-3100), or fax (+1 215-789-9549).

## NONMEM Software Review – Part 1

In early March, many of you voted on which PK software I should review. NONMEM received 41% of the votes, so I will review it first. I decided to break my review into two parts: installation of NONMEM and using NONMEM. This split is particularly important for NONMEM because the installation of the software proved to be challenging.

NONMEM 7.1.0 CD

NONMEM is an acronym for the Nonlinear Mixed Effects Modeling software originally designed by Lewis Sheiner and Stuart Beal, formerly of the University of California, San Francisco. The software arrived on a single CD from ICON Development Solutions, the current owner and developer of NONMEM. I received version 7.1.0 on the CD and was instructed to download the 7.1.2 update from a website. The CD contains the NONMEM source code, help files, an installation batch file, and installation instructions.
It does not come packaged with a Fortran compiler, which is required for installation and execution. NONMEM supports multiple operating systems, including Linux, UNIX, Windows, and Mac OS X. I attempted the installation of NONMEM in 3 distinct environments: Windows Vista Home Premium, Mac OS X (Snow Leopard), and a virtual machine (VirtualBox) running Windows XP on a Mac OS X computer.

### Installation on Windows Vista Home Premium

I attempted to install NONMEM on my Windows Vista Home Premium computer by first installing the G95 Fortran compiler (www.g95.org). I followed the instructions on the G95 website and installed it successfully; I was able to test the installation by compiling a small Fortran program provided in the NONMEM installation instructions. I then disabled the User Account Control feature of Windows and proceeded to install NONMEM. NONMEM is installed from a command window by calling a batch file and appending several arguments, including the installation drive, destination folder, Fortran command, Fortran optimizations, archive command, and a few other optional items. After calling the batch file, commands are issued that copy the necessary files to the desired location and compile the NONMEM programs (NONMEM, PREDPP, and NMTRAN) using Fortran. After NONMEM is compiled and installed, the help files are installed and then a test run is executed. My installation worked normally until the test run, at which point the command window closed and NONMEM was not executed. I spent a few hours investigating, but was unable to resolve the problem.

### Installation on Mac OS X (Snow Leopard)

After my failure to install NONMEM on my Windows Vista computer, I attempted to install it on my iMac. I tried using G95 for the NONMEM installation (as described above), but was also unsuccessful. I then used gfortran (hpc.sourceforge.net), another Fortran compiler.
When using gfortran, NONMEM installed without any problems. The test run executed and worked properly. I also successfully completed the installation using Intel Fortran (version 11).

### Installation on virtual Windows XP (on Mac OS X using VirtualBox)

I also tested the installation of NONMEM using a virtual machine on Mac OS X. Using Sun Microsystems’ VirtualBox (www.virtualbox.org), I installed a Windows XP client operating system. I attempted the same installation procedures using both G95 and gfortran. Unfortunately, the same problem occurred as was seen with Windows Vista.

### Overall impressions of the installation procedure

The installation of NONMEM was very difficult, to say the least. Of the 3 system setups, I was only able to get NONMEM installed on one … and only after trying different Fortran compilers. I have been using NONMEM for almost 10 years, and have performed installations of previous NONMEM versions (5 and 6) on various Windows platforms (2000, XP, 7), Linux (Red Hat), and OS X. Frankly, I was quite surprised by the many challenges that I experienced with NONMEM 7.1.0. I spent approximately 6 hours working on the various installations. Although I was able to get NONMEM working on my primary computer, I believe the installation could be much smoother. The difficulty I experienced is not uncommon with NONMEM, and it is particularly vexing to new users who are trying the software for the first time. ICON may want to explore distributing NONMEM with a Fortran compiler; this might allow an easier installation and fewer challenges. In the end, NONMEM is a tool for pharmaceutical modeling and simulation, not a week-long IT project.

### Where to get NONMEM?

You can contact ICON Development Solutions to purchase a license to NONMEM.

### Part 2 – Using NONMEM

Later this week I will post about my experience using NONMEM. Watch for Part 2 of this software review.

## What is shrinkage?
In 2007, Mats Karlsson and Radojka Savic published a perspective in Clinical Pharmacology & Therapeutics titled “Diagnosing Model Diagnostics” (link to CP&T website). In this article they examined the use of diagnostic plots to evaluate the adequacy of model fits for nonlinear mixed-effects analysis. Although there is a wealth of information in this article, the population PK analysis community latched on to the term “shrinkage,” which was used to describe the phenomenon that occurs when a model is over-parameterized for the amount of information contained in the data. They described ε-shrinkage and η-shrinkage, which I will try to summarize here.

### ε-shrinkage

ε is the term that refers to the residual error in the model. ε-shrinkage is calculated as 1 − SD(IWRES), where IWRES is an individual weighted residual. When the data are very informative, ε-shrinkage is zero, and it moves toward 1 when the data are less informative; thus ε-shrinkage can range from 0% to 100%. But what does it mean? When ε-shrinkage is large, the individual predictions are of little value for assessing model adequacy, because the individual predictions “shrink” back toward the observations, meaning that IPRED ≈ DV (the observation).

### η-shrinkage

η is the term that refers to the between-individual variation in the model, in other words, how patients differ from one another. η-shrinkage is calculated as 1 − SD(η)/ω, where the η are the between-individual variation terms and ω is the population-model estimate of the standard deviation of η. When the data are very informative, η-shrinkage is zero, and it moves toward 1 when the data are less informative, meaning that η-shrinkage can also range from 0% to 100%. When η-shrinkage is high, the individual parameter estimates “shrink” back toward the population parameter estimate, meaning that CLi ≈ CLpopulation. When η-shrinkage is large, diagnostic plots of individual parameter estimates versus covariates could be misleading.

### What to do about shrinkage?

So, how much shrinkage is too much?
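Before answering, note that both shrinkage definitions above are easy to reproduce from standard NONMEM table output. Here is a minimal Python sketch, with invented numbers standing in for the etas and IWRES values you would read from a $TABLE file:

```python
import statistics

def eta_shrinkage(etas, omega):
    """eta-shrinkage = 1 - SD(eta)/omega, where omega is the
    population-model estimate of the SD of eta.
    (Population SD is used here; some tools use the sample SD.)"""
    return 1 - statistics.pstdev(etas) / omega

def eps_shrinkage(iwres):
    """eps-shrinkage = 1 - SD(IWRES); IWRES has SD near 1 when the
    data are informative, giving shrinkage near zero."""
    return 1 - statistics.pstdev(iwres)

# Invented example: individual etas clustered tightly around zero
# relative to omega = 0.5, i.e. substantial shrinkage.
etas = [0.05, -0.04, 0.02, -0.03, 0.01, -0.02]
print(round(eta_shrinkage(etas, omega=0.5), 2))  # high shrinkage
```

In a real analysis the etas and IWRES columns would come from the model output rather than being typed in by hand.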
Karlsson and Savic suggest that bias can result with as little as 20–30% shrinkage. What should you do if you see shrinkage? In general, shrinkage indicates that the model is over-parameterized for the data that are available. The first recommendation is to simplify the model if possible. If that doesn’t resolve the issue, the second recommendation is to remember that the diagnostic plots may be misleading.

## Is a Monte Carlo simulation an exotic drink?

The term “Monte Carlo simulation” is often used in the modeling and simulation literature for PK/PD analysis. When I was first exposed to this term, I was thoroughly confused and thought that it was some exotic statistical method that required 3 PhDs and a few days to comprehend. Well, I was very wrong. A Monte Carlo simulation is a simulation that utilizes the “Monte Carlo method,” named after the famous Monte Carlo Casino in Monaco.

Monte Carlo Casino Monaco

At the Monte Carlo Casino, people take their money and gamble on games of chance. Games of chance are based on the probabilities of random events occurring. For example, roulette is a game where a ball bounces around a spinning wheel and eventually comes to rest in one of the numbered pockets. Players can make various bets on the chance that the ball will stop on a specific spot or spots. You may ask, “what in the world does that have to do with simulations?!” Well, let me tell you. Prior to the Monte Carlo method, simulations were performed with specific parameter values to generate a single simulation. For example, let’s assume we have the following PK model:

$C(t)=\frac{Dose}{V}*e^{(-\frac{CL}{V}*t)}$

We can predict a concentration-time curve by providing a value for CL and V. We can then do that for various combinations of CL and V. It would look something like this:

Discrete Simulation

This gives us 2 concentration-time curves. While this is useful, we don’t always know the exact values of CL and V for a given individual before they take the drug.
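As a concrete illustration of the discrete approach just described, here is a short Python sketch that evaluates the model for two hand-picked (CL, V) combinations; the dose, times, and parameter values are all invented:

```python
import math

def concentration(dose, cl, v, t):
    """One-compartment IV bolus model: C(t) = Dose/V * exp(-CL/V * t)."""
    return dose / v * math.exp(-cl / v * t)

dose = 100.0              # mg (invented)
times = [0, 1, 2, 4, 8, 12]  # hours

# Two hand-picked (CL, V) combinations, as in a discrete simulation.
for cl, v in [(5.0, 50.0), (10.0, 40.0)]:
    curve = [round(concentration(dose, cl, v, t), 3) for t in times]
    print(f"CL={cl} L/h, V={v} L: {curve}")
```

Each pass through the loop produces one of the two concentration-time curves.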
What we usually know is that CL and V have some average value along with a variance. In other words, we have a distribution of values for CL and V, with some being more likely than others. Thus, instead of choosing just a few sets of values for CL and V, what if we chose many values? And what if we used the known distribution to select more likely values more often and less likely values less often? We would then have a simulation that looks like this:

Monte Carlo Simulation

As output, we would get a large distribution of plasma concentration-time curves representing the range of possibilities, with the more likely possibilities occurring more frequently. This is extremely useful in PK/PD simulations because we can quantify both the mean response and the range of responses. To do a Monte Carlo simulation, you simply need a program (like NONMEM or WinNonlin) that randomly selects a parameter value from a known distribution, runs the PK model, and saves the output. That process is repeated many times (usually between 1,000 and 10,000 times) to generate the expected outcomes. Hopefully you understand Monte Carlo simulations better now … and if not, you should go get an exotic drink and try reading this post again tomorrow!

## How to use the scale parameter in NONMEM

When using the nonlinear mixed-effects modeling program NONMEM, there is a scaling parameter (S, S1, S2, S3, etc.) that should be included in most modeling code. Unfortunately, the rationale for that parameter, and the directions on how to use it, are not explained clearly in the NONMEM manuals. NONMEM models the amount of drug in each “compartment” of the model. The amount in a compartment is denoted by the parameter A in the NONMEM control stream. With a 1-compartment model with extravascular administration, you will have 2 compartments: compartment 1 is the dosing compartment and compartment 2 is the central compartment (e.g. the circulatory system).
NONMEM keeps track of the amount of drug in compartment 1 and the amount of drug in compartment 2 using the parameters A1 and A2, respectively. Amounts are usually in units of mass (e.g. g, mg, ng), which is fine for the administered dose. But the data we use for model fitting are concentration data in the central compartment, in units of mass per volume (e.g. ng/mL). This means that NONMEM must convert each compartment amount into a concentration during the model-fitting process. To accomplish this, NONMEM divides each amount by a scaling factor. The concentrations for the 2 compartments calculated by NONMEM would be:

$C_1=\frac{A1}{S1}$ (Equation 1)

$C_2=\frac{A2}{S2}$ (Equation 2)

If you don’t set S1 or S2 to anything, NONMEM will simply use a value of 1, but within the NONMEM code you can add any expression you choose for each scaling factor. In this example, we don’t have any concentration data for compartment 1, and we only need the dose amount, so we can leave that scaling factor at 1. In compartment 2, we have concentration data and do NOT have amount data, so we need to convert the amount in compartment 2 into a concentration using the following equation:

$C_2=\frac{A2}{V}$ (Equation 3)

Combining Equations 2 and 3 and solving for S2, we get:

$\frac{A2}{S2}=\frac{A2}{V}\;\;therefore\;\;S2 = V$ (Equation 4)

Thus the proper scaling factor for S2 is the volume of distribution of the central compartment. This same exercise can be performed for all compartments in the model. Each relationship that is derived (e.g. Equation 4) should be included in the NONMEM control stream to ensure that NONMEM converts correctly between amount and concentration for each compartment. In addition to converting between amounts and volumes, the scaling parameter can be used to adjust for differences in mass units between the dose and the concentration.
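As a quick numerical sanity check of Equation 4 (all numbers invented for illustration):

```python
# Toy check of Equation 4: with S2 = V, dividing the compartment-2
# amount by S2 gives the familiar amount-over-volume concentration.
a2 = 50.0        # mg, a hypothetical model-predicted amount in compartment 2
v = 10.0         # L, central volume of distribution
s2 = v           # Equation 4: S2 = V
c2 = a2 / s2     # concentration in mg/L
print(c2)
```

If S2 were set to anything other than V, the predicted concentration would be off by exactly that scaling factor.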
Often we give doses in mg units, but we measure concentrations in μg or ng units. Using Equation 3 and including units, we would get the following:

$C_2=\frac{A2\;mg}{V\;L}$ (Equation 5)

But the concentration measurements are not in mg/L units! So we must adjust for the difference. I have found this conversion to get confusing if I work too hard … but here are the key steps to simplify it:

1. Assume that the volume parameter in the model always has units of L
2. Convert the concentrations to units of mass/L
3. Divide the mass unit from the converted concentration units in Step 2 by the dose units to give a fraction
4. Apply the fraction from Step 3 to your scaling parameter via multiplication

Here is an example of each step.

Step 1: Assume volume is in liters (L). Dose is in milligrams (mg).

Step 2: $\frac{ng}{mL}=\frac{\mu g}{L}$

Step 3: $\frac{1\;\mu g}{1\;mg}=\frac{1\;\mu g}{1000\;\mu g}= \frac{1}{1000}$

Step 4: $S2=V*\frac{1}{1000}=\frac{V}{1000}$

The equation in Step 4 can be used in NONMEM control streams to allow you to model ng/mL concentration data with mg dose data. Hopefully you now have a better understanding of the scale parameter in NONMEM and how it is used. Good luck in your modeling!

## What is NONMEM?

Have you ever been in a conversation with someone in pharmacokinetics and heard the term “NONMEM”? Generally, people throw it into a conversation like it’s a good friend that everyone knows. Unfortunately, if you are like most people, you are not really sure what those crazy pharmacokineticists are talking about. Is NONMEM® a complex methodology, or a special PK parameter, or is it a monk who lives on a mountaintop? NONMEM is a software package, just like Microsoft Office. It is specialized software for the analysis of pharmacokinetic and pharmacodynamic data. The name of the software actually provides a significant amount of information, because NONMEM is an abbreviation of the real name of the software.
The full name is "NONlinear Mixed Effects Modeling." The software was developed at the University of California, San Francisco by two professors, Lewis Sheiner and Stuart Beal.

NONMEM is a regression program that specializes in non-linear systems. A non-linear system is one in which the response variable changes non-linearly with changes in the predictor variable. An example of a non-linear system is the basic pharmacokinetic equation:

$C(t) = \frac{Dose}{V}*e^{-\frac{CL}{V}*t}$

The response variable (C) changes with the predictor variable (t), but not as a linear combination, because t appears in the exponential term. Unlike linear equations, non-linear systems often do not have exact solutions, so numerical methods are required to perform the regression.

The second part of NONMEM is the mixed-effects model. Some models contain what are called "fixed effects". Other models have what are called "random effects". A mixed-effects model is one that includes both fixed effects and random effects. Instead of providing a complex statistical explanation, I'll try to use a specific example based on the PK equation shown above. In that equation, there are 2 PK parameters: V and CL. The population (average) estimates of V and CL are fixed effects. Each individual then has his/her own V and CL that differ from the population values, commonly assumed to follow a log-normal distribution around them. Finally, the observed concentration differs from the individual predicted concentration by some random error. The individual deviations and the residual error are random effects. Because the model combines fixed effects (the population V and CL) with random effects (between-subject variability and residual error), it is a mixed-effects model.

In the end, NONMEM is a software package that is used to fit data to statistical models. You can think of it like a specialized version of Excel that performs a specialized form of non-linear regression.
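To make the non-linearity concrete, here is a small sketch (with arbitrary, illustrative parameter values) that evaluates the equation above:

```python
import math

def conc(t, dose=100.0, V=10.0, CL=2.0):
    """One-compartment IV bolus concentration at time t.
    dose in mg, V in L, CL in L/h, so the result is in mg/L.
    The parameter values are illustrative only."""
    return dose / V * math.exp(-CL / V * t)

print(conc(0.0))   # 10.0 mg/L at time zero (Dose/V)
print(conc(5.0))
print(conc(10.0))

# Doubling t squares the decay factor rather than doubling the response,
# which is exactly why this system is non-linear in t.
```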
In a future blog I will talk more about how NONMEM works, how to install and use it, and how to perform population analysis.

## Using NONMEM to fit IV and oral data simultaneously

After learning how to use the nonlinear mixed effects modeling software NONMEM, one of the first things I tried to do was estimate the absolute bioavailability of a drug that I was working with. I had PK data following IV administration and following subcutaneous injection in monkeys. Using non-compartmental methods, I calculated the bioavailability as the ratio of the mean AUC values for the two routes of administration. While this method was perfectly acceptable, I was attracted to the possibility of using all of the data together to derive not only the pharmacokinetic parameters (CL, V, F, and ka) but also the variability in each parameter using NONMEM. Unfortunately, the only ways to figure out how to do this in NONMEM were to ask a NONMEM expert for help, or to spend time working through the NONMEM manuals. So when a recent email from PharmPK arrived with the following question:

> Can anybody provide me nonmem script for simultaneous fit of IV and Oral data (population pharmacokinetic modeling) to derive parameters ka and F ?

I decided that I would post not only an answer (the NONMEM control stream and a sample data file), but also an explanation of how I arrived at that answer. I hope that the combination of the explanation and the files will help others understand what to do ... and why.

Before we start, let's make some assumptions:

1. The IV data follows a 1-compartment model with constant clearance.
2. The oral data follows a first-order absorption and first-order elimination model.
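The non-compartmental bioavailability calculation mentioned above can be sketched as follows (the AUC and dose values are made up for illustration; in practice the mean AUCs would come from trapezoidal-rule calculations on the observed concentration-time data):

```python
def bioavailability(auc_ev, dose_ev, auc_iv, dose_iv):
    """Absolute bioavailability from dose-normalized mean AUCs.
    auc_* in ng*h/mL, dose_* in mg. All values below are hypothetical."""
    return (auc_ev / dose_ev) / (auc_iv / dose_iv)

# Hypothetical mean AUCs for the subcutaneous and IV arms at equal doses:
F = bioavailability(auc_ev=450.0, dose_ev=1.0, auc_iv=600.0, dose_iv=1.0)
print(F)  # 0.75
```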
Given these assumptions, let's draw the compartmental models that we are considering for the 2 routes of administration:

[Figures: IV model and oral model diagrams]

Both models exhibit elimination from a central compartment, with PK parameters for the dose, the volume of distribution (V), and the clearance (CL) from the central compartment. The oral model has additional PK parameters for the absorption rate (ka) and the bioavailability (F).

The IV model corresponds to ADVAN1 in NONMEM. The ADVAN1 help reads:

| Compt. No. | Function | Initial Status | On/Off Allowed | Dose Allowed | Default for Dose | Default for Obs. |
|---|---|---|---|---|---|---|
| 1 | Central | On | No | Yes | Yes | Yes |
| 2 | Output | Off | Yes | No | No | No |

The headers mean the following:

• Compt. No. = Compartment number
• Function = Type of compartment (e.g. central, peripheral, or output [urine])
• Initial Status = Whether the compartment is initially on or off
• On/Off Allowed = Ability to turn the compartment on or off. On means NONMEM will calculate values in that compartment; Off means NONMEM will ignore that compartment.
• Dose Allowed = Ability to add a dose to the compartment
• Default for Dose = If no other information is provided, the compartment marked Yes is assumed to be the dosing compartment.
• Default for Obs. = If no other information is provided, the compartment marked Yes is assumed to contain the drug measurements.

For ADVAN1, the dose is given to compartment 1, and drug measurements are made in compartment 1. The oral model corresponds to ADVAN2 in NONMEM, and that help reads:

| Compt. No. | Function | Initial Status | On/Off Allowed | Dose Allowed | Default for Dose | Default for Obs. |
|---|---|---|---|---|---|---|
| 1 | Depot | Off | Yes | Yes | Yes | No |
| 2 | Central | On | No | Yes | No | Yes |
| 3 | Output | Off | Yes | No | No | No |

For ADVAN2, the dose is given in compartment 1, and drug measurements are made in compartment 2. In addition, it is possible to dose in compartment 2, but that is not the default dosing compartment.
So when initially looking at these tables, it appears that we need to use 2 different ADVANs to run our model in NONMEM ... and, unfortunately, that is impossible in NONMEM. So what is the solution? Well, the solution is the little detail in ADVAN2 about the ability to dose into compartment 2. Pretend for a minute that compartment 1 of ADVAN2 is missing: the rows for compartments 2 and 3 look almost identical to those in ADVAN1. What we need to do is trick NONMEM by using an oral model (ADVAN2) but giving the IV dose directly to the central compartment. This is accomplished by building your dataset appropriately, using the compartment (CMT) variable to tell NONMEM where you want to dose the drug. For IV doses, you will want to dose into compartment 2 (CMT = 2); for oral doses, you will want to dose into compartment 1 (CMT = 1). All concentration measurements should be assigned to compartment 2 (CMT = 2). All other aspects of your data file should follow NONMEM standards. For ease of analysis, you may want to include a variable that identifies which individuals (or treatments, if a crossover study) received IV and which received oral dosing (e.g. RTE = 1 for IV, RTE = 2 for oral). A sample dataset is here.

Now that your dataset is ready, you need to construct a control stream that will estimate the desired parameters. A sample file is here. The $PK block should look like this:

$PK
KA=THETA(1)*EXP(ETA(1)) ; absorption rate constant
CL=THETA(2)*EXP(ETA(2)) ; clearance
V=THETA(3)*EXP(ETA(3))  ; volume of distribution
F1=THETA(4)*EXP(ETA(4)) ; bioavailability for doses into compartment 1 (oral)
S2=V/1000               ; scaling: V in L, doses in mg, concentrations in ng/mL

For subjects receiving an oral dose, all 4 parameters will be estimated. For subjects receiving an IV dose, only CL and V will be estimated. One advantage of simultaneously fitting IV and oral data in this way is the ability to leverage both datasets to estimate the clearance and volume of distribution. In turn, this leads to more accurate estimates of the absorption rate constant and the bioavailability.
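To make the dataset layout concrete, here is a small sketch (with entirely invented times, doses, and concentrations) of the record structure described above: IV doses go to CMT = 2, oral doses go to CMT = 1, and all observations are assigned to CMT = 2:

```python
# Illustrative NONMEM-style records: (ID, TIME, AMT, DV, CMT, RTE).
# All values are invented for illustration; RTE = 1 is IV, RTE = 2 is oral.
rows = [
    # ID  TIME  AMT    DV    CMT  RTE
    (1,   0.0,  100.0, None, 2,   1),   # IV bolus dosed into the central compartment
    (1,   1.0,  None,  8.5,  2,   1),   # IV observation in the central compartment
    (2,   0.0,  100.0, None, 1,   2),   # oral dose into the depot compartment
    (2,   1.0,  None,  3.2,  2,   2),   # oral observation in the central compartment
]

# Dosing records use CMT = 2 for IV and CMT = 1 for oral;
# observation records always use CMT = 2.
for _id, time, amt, dv, cmt, rte in rows:
    if amt is not None:                    # dosing record
        assert cmt == (2 if rte == 1 else 1)
    else:                                  # observation record
        assert cmt == 2
```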

Please note that I did not fit the sample dataset/control stream in NONMEM. The dataset was small (only 2 subjects), and thus some characteristics of the model (e.g. the etas) are not reasonable. The sample files are intended to provide the reader with a working file that has the proper structure.

Good luck, and I hope this helps!