## Review of Phoenix Connect Software Tool

In this post, I am reviewing the Phoenix® Connect package from Pharsight (a Certara company). Over the past several years, Pharsight has made a significant effort to modernize its pharmacokinetic analysis software tools to aid in the drug development process. While many longtime users of the PCNonlin and WinNonlin software solutions are disappointed with the need to learn a new computer interface, I liken this change to the move from text-entry computing (e.g. DOS) to the windows-based platforms we now use (e.g. Windows, Mac OSX). You can still find people who love command-line computing; however, the vast majority of users are comfortable and even more efficient with the graphical user interfaces we now see on computers and other electronic devices.

The Phoenix platform consists of modular packages that fit together on the Phoenix whiteboard to provide an integrated solution for pharmacokinetic and pharmacodynamic analysis.

Phoenix Platform

These software packages, each with a different purpose, fit together interchangeably, much like Lego bricks. This review focuses on the Phoenix Connect product, which is presented as a Data Management tool; however, it actually provides connectivity to a myriad of external tools. I prefer to think of Phoenix Connect as the tool that connects items within Phoenix to the outside world. Phoenix Connect has 3 primary functions, which I will review in order:

1. Connecting Phoenix with data sources
2. Integration of external analysis tools into Phoenix workflows
3. Exporting results for reporting

### Connecting Phoenix with data sources

Pharmacokinetic analysis is one part of data analysis in a study. As such, data flows from one source to another, with the final destination being a report or regulatory document. The Phoenix Connect tool provides seamless integration to connect incoming demographics and concentration data with pharmacokinetic analysis procedures. Phoenix Connect provides methods to connect to a variety of external data sources. One of those sources is the Pharsight Knowledgebase Server (PKS), a proprietary database for PK and PD analyses. Phoenix Connect also provides flexibility by allowing import of SAS transport files in SDTM formats, or connections to other external databases (e.g. Oracle, via ODBC).

While connectivity to external data sources is important, Phoenix amplifies this valuable feature of Phoenix Connect by allowing for creation of templates and data links to automate data import. This allows a user to set up a pharmacokinetic analysis procedure (e.g. handling of BLQ values, non-compartmental analysis, concentration-time plots, data summarization, etc.) that can be repeated by simply connecting to new data sources.

### Integration of external analysis tools

Many users think of Phoenix as an updated version of Pharsight’s WinNonlin product. But Phoenix is really a whiteboard platform that can be used with a large number of analysis tools. Phoenix Connect permits analysis with NONMEM, R, S-Plus, SAS, and many other analytical tools. There are competing tools that Pharsight sells (e.g. Phoenix WinNonlin, and Phoenix NLME); however, the Phoenix platform can be used to conduct native analyses with other tools. With Phoenix Connect, the user can add a NONMEM object that includes the dataset, control stream, and all output files. The resulting output files can then be used in other Phoenix objects (tables, plots, etc.) including integrated R-scripts. Phoenix Connect allows the user to organize and standardize pharmacometric analysis. Instead of a myriad of files and directories, all pharmacometric files are consolidated in a single Phoenix project file.

NONMEM setup

NONMEM Output

NONMEM Plot

This feature of Phoenix Connect allows for consolidation and harmonization of all pharmacokinetic and pharmacometric analyses using the Phoenix platform, while retaining the value of multiple analysis tools (e.g. WinNonlin, NONMEM, R, etc.). Analyses can be performed using the native environment, yet the advantages of Phoenix can be leveraged with any of these tools. Using Phoenix Connect with external analysis tools also provides an increased level of quality control that can improve compliance with regulations and company standard operating procedures.

### Exporting results for reporting

Phoenix Connect also provides a reporter tool that can be used to export results for incorporation into study reports. This feature completes the connection of Phoenix from data to results. The reporter tool allows you to select tables, figures, or text (called “listings” in the Reporter object) from your workflow and output them into a Microsoft Word document using context-sensitive table/figure/listing titles. The reporter object functions like all other Phoenix objects in that it takes inputs (tables, figures, and text) and operates on them to produce output (an MS Word file).

This new Reporter object can be used to compile tables and figures for study reports easily and quickly. For example, let’s take a bioequivalence clinical study (2-period 2-way crossover study). Standard figures include individual concentration-time profiles, mean concentration-time profiles for each formulation, and dot plots comparing Cmax and AUC across formulations. Standard tables include a listing of concentration-time data by subject, listing of individual PK parameter estimates, mean concentration-time data by formulation, mean PK parameter estimates by formulation, and a statistical comparison of Cmax and AUC across formulations. All of these tables and figures can be produced using Phoenix objects (table objects and plot objects, respectively). Instead of exporting each one independently, these objects can be fed into a Reporter object with context-sensitive information (e.g. analyte name, formulation, period, subject ID, etc.) that can be included in the title of the table or figure. These separate objects can then be exported together into a Microsoft Word document that can be saved and directly imported into a clinical study report.

Reporter Setup

Reporter Output

If the data is updated, then the Reporter object, like any other Phoenix object, will turn pink to notify you that it is not current with the preceding objects. You can quickly refresh the workflow and update the output document to reflect the changes. This new Reporter object can be used within templates to standardize output for different studies and reports.

The new Reporter tool is an exciting advancement for the Phoenix platform. While Phoenix has been an excellent analysis platform, it has always lacked a quality method for exporting information in an easy and useful manner. Individual file exports were time-consuming and could not be tracked within the workflow. In addition, the output was simply a data dump rather than a series of organized, report-ready outputs. By adding the ability to customize titles and footers, the Reporter tool allows Phoenix to fully execute analyses from input data to final output (tables, figures, listings) for pharmacokinetic and pharmacodynamic analyses.

### Overall impression and recommendations for the future

The updates to the Phoenix Connect product have made it an indispensable part of the Phoenix experience. The data connections provide access to a wide range of data sources. As companies begin to standardize on SDTM data formats, the data connection tool will become increasingly important, removing the need for a separate PK analysis dataset to be created by a SAS programmer. The ability to execute third-party software within Phoenix is the hidden gem of the Phoenix Connect product. This permits incorporation of tools into a regulatory-compliant workflow that can be controlled in a validated environment. It simplifies the collection of output, and clearly identifies whether the results are current by color coding (white for current, pink for not current). Using Phoenix to manage your pharmacometrics work may soon become the standard for NONMEM control file management. Additional tools such as custom R or S-Plus scripts or even custom SAS code allow the pharmacokinetic scientist to execute programs independently in a way that permanently links the analysis to the inputs and outputs. Finally, the new reporter tool is a first step toward providing quality, report-ready output for inclusion in study reports. The reporter tool’s context-sensitive titles and footnotes are simple, useful, and easy to create. Like other objects, the reporter confirms that the output is current with the same color coding (white and pink) used in all other Phoenix objects. Future additions of reporter outputs to PowerPoint files, font selections for titles, and footnote locations will further refine the reporter tool and make it even more useful.

I highly recommend using the Phoenix Connect tool in your standard pharmacokinetic analysis workflow. Also, addition of this tool turns Phoenix into a pharmacometrics platform, even if you use NONMEM exclusively for your nonlinear mixed-effects modeling work. You can learn more about Phoenix Connect at the Pharsight website (http://www.certara.com/products/pkpd/phx-connect).

## Changing column names and units in Phoenix WinNonlin

One of the most common tasks when working with data in Phoenix WinNonlin is changing the column titles or units. In many software packages, this consists of clicking on the data spreadsheet and re-typing the new information; however, with Phoenix, you have to take a few additional steps. Here are some quick tips on how to change column names and units:

Select the original dataset and send it to a Data Wizard object. You will make all of your changes within this Data Wizard object.

In the Data Wizard object, select “Properties” under the Action menu.

You will then see a screen with your existing column titles and units. In this example, there are 2 columns: “TimeofSample” and “Result”. I would like to change these to “Time” with units of hours, and “Conc” with units of ng/mL. This can be done in a single quick Data Wizard step.

Select the “Old Column” that you would like to change in the box. Then the properties boxes on the right will change from grey to black. Just enter a new column name and the new units in the boxes provided. You can use the Unit Builder if necessary to help you. After you complete your changes, press the “Enter” or “Return” key. You will then get a response box like this:

In this case, we are setting the units for the first time, so we can click “No”. If we were converting the units from ng/mL to mg/mL, then we would click “Yes” to convert units. After you change all of the columns and units, click the “Execute Step” button on the left side of the dialog. This will execute the Data Wizard step and create the new dataset with the updated column titles and units:

Original
Modified

That’s how you change column names and units in Phoenix WinNonlin. A little more complex than in some other spreadsheet software, but not too difficult.
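For readers who also script their data preparation, the same two changes (renaming and converting units) can be sketched in a few lines of plain Python. The dictionary-based “dataset” and the example values are purely illustrative; Phoenix stores units as separate metadata rather than in the column title:

```python
# Hypothetical dataset keyed by the original column titles from the example
data = {"TimeofSample": [0, 1, 2, 4], "Result": [1000.0, 12500.0, 9800.0, 4100.0]}

# Rename the columns; plain Python has no unit metadata, so the units are
# folded into the new titles here (Phoenix stores them separately)
renames = {"TimeofSample": "Time (h)", "Result": "Conc (ng/mL)"}
data = {renames.get(col, col): values for col, values in data.items()}

# The "Yes" branch of the dialog converts the values too, e.g. ng/mL -> ug/mL
data["Conc (ug/mL)"] = [c / 1000 for c in data.pop("Conc (ng/mL)")]

print(sorted(data))  # ['Conc (ug/mL)', 'Time (h)']
```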

## How to filter data with Phoenix WinNonlin

Phoenix WinNonlin is Pharsight’s new implementation of the popular pharmacokinetic software that has been the mainstay of noncompartmental analysis for over 15 years. But this newest version is the biggest change in the software since the original PCNonlin was converted to the Windows-based “WinNonlin” (i.e. Windows Nonlin). In Phoenix WinNonlin, there is a powerful set of data manipulation tools that allows a user to standardize analyses and create re-usable objects.

In this post, I will demonstrate how to filter data using the powerful Data Wizard object. For this example, let’s assume we have completed a noncompartmental analysis, and now we want to create a subset of results that includes just a few of the PK parameters (Cmax, Tmax, and AUCall) for summarization. This is commonly done when completing an analysis to support a bioequivalence study. The Cmax and AUC are needed for statistical analysis, but all other PK parameters are only presented in a summary table. By creating a standard filter (like I demonstrate here), you can quickly add this filter object to any workflow where you need to subset your data.

Start by sending the non-pivoted Final results from the NCA Analysis to a Data Wizard object. It is important to use the Final results rather than the pivoted Final results. The difference is that the Final results contain 1 record per line, while the pivoted Final results contain all records for an individual subject on a single line. The Final results are much easier to use for programming purposes (like we are doing here).

The Data Wizard object will appear as shown below. There are three main areas: the summary in the upper left, the data in the upper right, and the operation details in the lower section. Since no operations have been selected yet, there will be no information in the upper right and lower sections.

Now select “Filter” using the Action pulldown, then click the Add button. (See below)

After the filter action is selected, options appear in the upper right and lower sections. The upper right section contains information about the dataset. It lists the columns of data available. You can select the magnifying glass on paper icon in the upper section (circled in red) to preview the dataset. Then in the lower section, select the “Custom” option button and click the “Add” button (both circled in red in the lower section).

The “Custom” option allows you to write your own selection criteria for subsetting the data. It is particularly useful when you have more than one selection to make. In our case, we want to include 3 different values of the Parameter column: Tmax, Cmax, and AUCall. If you only need to select a single value (e.g. only Cmax), you can use the “Built In” option, or if you only need to exclude a single value (e.g. exclude parameter estimates of “0”) you can use the “Selection Exclude” option. I personally prefer the Custom option as it is more flexible. After you click the Add button, a box will pop open and allow you to enter your custom code. Enter the information shown below to include only the Tmax, Cmax, and AUCall parameter estimates.

This code allows you to use boolean logic (e.g. and, or) to make selections. In this case, we want to use “or” to include all cases when the Parameter column equals either Tmax OR Cmax OR AUCall. When you have completed the code shown above, click the OK button. Then click the “Execute Step” button in the lower section as shown below (circled in red). This will execute the filter procedure and produce a results dataset.
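For comparison, the same include-by-parameter logic can be sketched outside Phoenix in plain Python. The rows below are made up for illustration; the second list mirrors the worksheet of excluded records:

```python
# Hypothetical non-pivoted NCA results: one (subject, parameter, estimate) per row
nca = [
    {"Subject": 1, "Parameter": "Cmax", "Estimate": 12.5},
    {"Subject": 1, "Parameter": "Tmax", "Estimate": 1.0},
    {"Subject": 1, "Parameter": "HL_Lambda_z", "Estimate": 3.2},
    {"Subject": 2, "Parameter": "Cmax", "Estimate": 10.1},
    {"Subject": 2, "Parameter": "AUCall", "Estimate": 48.7},
]

keep = {"Tmax", "Cmax", "AUCall"}
# Equivalent of the custom filter: Parameter = Tmax OR Cmax OR AUCall
results = [row for row in nca if row["Parameter"] in keep]
excluded = [row for row in nca if row["Parameter"] not in keep]

print(len(results), len(excluded))  # 4 1
```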

By clicking on the results tab in the upper section, you can then view the results of your filter. The “Results” worksheet will include the results of the filter procedure. The “Step 1 Filter Filtered Results” worksheet will include all records excluded by the filter procedure. As shown here, a dataset that included more than 15 parameters for each subject now only has the three parameters of interest. You can look at both the results and filtered results to make sure that the filter operated as expected.

Now this results file can be used to calculate descriptive statistics, produce tables or figures, or conduct other statistical analyses. In addition, this Data Wizard object can be copied and pasted into other workflows and connected to an input dataset to give the same results. That makes repeat analyses very simple. That is all there is to filtering data with Phoenix WinNonlin.

## PK Curve released for Android

The only mobile app that can generate pharmacokinetic curves and calculate non-compartmental pharmacokinetic parameters has been released for the Android platform. The same app that was released for the iPhone is now available for your Android device.

## Phoenix NLME Software Review – Part 3

In this third and final post of my review of the Phoenix WinNonlin software, I review the newest feature of the software and provide overall thoughts. You can read about the Phoenix platform in Part 1 of my review, and the noncompartmental and single subject analysis in Part 2 of my review. With the exception of WinNonmix, which has been discontinued, the WinNonlin software did not include a population pharmacokinetic analysis feature. That is … until now. With the new Phoenix platform, Pharsight has built a completely new non-linear mixed effects modeling system that seamlessly integrates into the Phoenix platform. This new tool is called NLME (short for Non-Linear Mixed Effects), and it performs analyses comparable to NONMEM, with the additional features of an integrated graphical user interface and post-processing.

### NLME Workflow Object

As with the other Phoenix tools, everything depends on the NLME workflow object (shown below, click to enlarge).

NLME Workflow Object

The NLME workflow object appears very similar to the noncompartmental analysis and PK model objects. There are 4 main setup items: Main (data), Dosing, Parameters, and Parameter mapping. Each of these functions similarly to its counterpart in the noncompartmental and individual PK analyses described in Part 2 of my review.

### Model Editor

The most important part of any population pharmacokinetic modeling program is the ability to build the appropriate model. In the past, nearly all software packages required learning a unique coding language (usually a version of Fortran) to enter the model using text expressions. Phoenix NLME takes a completely different path and provides 3 ways to edit models. The first involves built-in models that have closed-form analytical solutions. These built-in models can be selected using dropdown lists from the user interface. The second method is a standard text editor (shown on the left below). This editor requires learning a new language but is rather intuitive. The third method, which is my favorite, is the graphical editor (shown on the right below). The graphical editor allows you to draw the desired model using standard compartments and flow arrows.

Textual Editor

Graphical Editor

The beauty of the graphical editor is that it allows a user to draw a model and then constructs the needed equations on the fly in the background. These equations are updated in the graphical and textual editors as the model is constructed. This allows the user to define a model using graphical tools without having to worry about the underlying equations. But the user can always switch between the text and graphical models to adjust the equations as needed. This graphical editor is available with both NLME and WinNonlin individual PK modeling.

### Output

After executing the population model, Phoenix NLME automatically produces tabular, graphical and text output for the user to evaluate the quality of the model fit. The tabular output includes parameter estimates, covariance matrices, residuals, and other model diagnostics. These tabular data can be sent to other Phoenix workflow objects like tables. A variety of plots like the one shown below are automatically produced and can be customized by the user.

The automated output makes model evaluation simple and easy. Following execution of the model the user can directly view the parameter estimates, diagnostic plots, and text output to effectively evaluate the model.

### Modeling tools

Phoenix has also incorporated some excellent modeling tools to help in the model building effort. First among those tools is the workflow object. Once a model is built and run, the workflow object can be duplicated using copy/paste. Then the new workflow object can be modified. This is excellent for testing multiple models within a single project. The second tool is the automated covariate search feature. As shown above, users can add covariates and select the method of centering, the method of covariate addition (Direction), and the specific parameter to which each covariate should be added. After these selections are made, the automated search will test all combinations of covariates and select the best model using the log-likelihood ratio test. Finally, a workflow object called the “Model Comparer” allows the user to compare model fits. The user can select several models (top window frame), the items to compare, and the diagnostic plots to compare (lower window frame). Executing this workflow object creates a set of tables with a comparison of the selected parameters, and side-by-side graphical output.
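For readers unfamiliar with the log-likelihood ratio test that drives the covariate search, a minimal Python sketch follows. The −2LL values are invented for illustration, and the critical value assumes one added parameter at a 0.05 significance level (Phoenix may use different defaults):

```python
# Invented -2*log-likelihood values for a base model and a model with one
# added covariate parameter (not actual Phoenix output)
m2ll_base = 1523.4
m2ll_with_covariate = 1516.1

delta = m2ll_base - m2ll_with_covariate  # drop in -2LL from the added parameter
CHI2_CRIT_1DF_P05 = 3.84  # chi-square critical value: 1 df, alpha = 0.05

keep_covariate = delta > CHI2_CRIT_1DF_P05
print(round(delta, 1), keep_covariate)  # 7.3 True
```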

### Overall

With the new NLME feature in Phoenix, I believe that there are now two comparable options for population pharmacokinetic analysis. Although NONMEM has been the industry standard since the late 1980s, I believe that the Phoenix interface and powerful modeling tools have put Phoenix NLME in position to gain market share. I enjoy the ability to integrate multiple analysis methods in a single project using the workflow layout. I can move from noncompartmental analysis to population analysis simply by adding a workflow object. The graphical model editor is the new industry standard. Not only is it flexible to allow for differential equations, but the models can be built with the intuitive graphical model builder rather than relying on text entry. Finally, the modeling tools simplify many of the complex modeling tasks such as stepwise covariate addition and model comparisons.

I am impressed with Phoenix WinNonlin/NLME as a complete software package. It is a welcome departure from the historical WinNonlin interface to a modern workspace that allows the analyst to focus on the modeling process rather than the details of the model execution. There is seamless integration of all tools into a single package that is easy to use and powerful. I will likely begin to use Phoenix more often for all of my pharmacokinetic/pharmacodynamic analyses. My only suggestion is to simplify the licensing structure for the product. Although each piece is seamlessly integrated from a software perspective, purchasing the product can be confusing because each feature (e.g. WinNonlin, Connect, NLME, etc.) comes at a different price, and it isn’t always clear which product includes each feature. Future integration of all three would be beneficial to the user.

An evaluation copy of Phoenix was provided by Pharsight with the WinNonlin, Connect & NLME modules. You can learn more about Phoenix WinNonlin by visiting the vendor’s website, by calling your local Pharsight representative, or by requesting information from Pharsight.

## Phoenix WinNonlin Software Review – Part 2

Part 2 of the WinNonlin review will cover the noncompartmental and PK modeling functions of Phoenix WinNonlin. To many people, these 2 features have defined WinNonlin for many years. And the updated software does not disappoint with significant improvements to the functionality, ease of use, automated graphics, and other features.

As I discussed in Part 1 of my review, the new Phoenix platform allows integration of data and analysis methods. The “WinNonlin” feature includes the noncompartmental analysis and individual PK modeling features. These features were the basis for standalone WinNonlin versions 3, 4, and 5. Thus the term “Phoenix WinNonlin” signifies the use of the new Phoenix platform to execute analyses with WinNonlin. This confusing nomenclature for the products is not helpful to the user, but it could be fixed very easily.

### Noncompartmental Analysis

The noncompartmental analysis workflow object is shown here.

NCA Workflow Setup

Within this workflow object you can select the options for the noncompartmental analysis, including the type of model, if sparse data is included, the AUC calculation method, and the terminal slope method. One change in the noncompartmental analysis engine from previous versions of WinNonlin is that Cmax is no longer included in the terminal slope calculation.

In addition to the analysis options, the concentration-time data to be used is identified in the “Main” item, dosing information can be imported from a dataset or by manual entry, slopes can be selected by the WinNonlin algorithm (maximize adjusted r2) or manually, partial areas can be defined, units can be specified, and parameter names can be selected or modified. A new feature of the noncompartmental engine is the ability to define a therapeutic response window. Lower and upper bounds can be specified by treatment or by subject. These bounds then appear on plots of concentration-time data. This feature is great for identifying either efficacy levels or toxicity margins.
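To make the slope-selection algorithm concrete, here is a rough Python sketch of the best-fit idea: regress log concentration against time over the last 3, 4, … points (never including Cmax) and keep the point set with the highest adjusted r². The data are invented, and real WinNonlin applies additional rules (for example, as I understand it, preferring more points when adjusted r² values are nearly tied):

```python
import math

def adjusted_r2(x, y):
    # Adjusted r-squared for a one-predictor linear regression
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    r2 = sxy ** 2 / (sxx * syy)
    return 1 - (1 - r2) * (n - 1) / (n - 2)

# Invented profile; Cmax is the first observation (t = 1) and is never used
times = [1, 2, 4, 8, 12, 24]
concs = [12.0, 9.5, 6.1, 2.9, 1.4, 0.18]

best = None  # (adjusted r2, number of terminal points)
for k in range(3, len(times)):  # try k = 3, 4, 5 terminal points
    t = times[-k:]
    logc = [math.log(c) for c in concs[-k:]]
    arsq = adjusted_r2(t, logc)
    if best is None or arsq > best[0]:
        best = (arsq, k)

print(best)
```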

After all of the input options have been selected, the model can be executed to produce the desired output. The results are presented in a set of tables under the results option. These include parameter estimates, exclusions, dosing information, and various settings. An example of the parameters table, in a CDISC-like format, is shown below:

NCA Parameter Output

Each of the output worksheets can be sent to other Phoenix objects such as tables or plots. If standard table and plot templates have been created prior to analysis, the delivery of report-ready tables and graphics can be instantaneous. Although it appears that little has changed with the NCA engine, there have been a few modifications that simplify data analysis. First, the number of models has been reduced to 3 basic model types (plasma, urine, drug effect) with separate selections for the dosing input profile and steady-state settings. The second improvement is the improved plotting engine, which provides report-ready graphics without having to leave the application. And finally, the dosing input is simplified and can be automatically populated using study design features.

### PK Models

The pharmacokinetic model workflow object is shown here.

Pharmacokinetic Model Workflow Object

The model can be selected by double-clicking the “PK Model” workflow object icon shown at the top of the image to the right. This pulls up the different models that can be selected. It also permits selection of weighting options, the ability to select initial estimates, and minimization options.

The model requires 4 inputs: the study data (time and concentration), dosing information, initial parameter estimates, and units. The Main input includes the concentration-time data and any unique identifiers (e.g. subject ID, sex, weight, etc.). The Dosing input can be entered by the user or added from a data file. The initial estimates are entered by the user, and the units for both input and output parameters can be adjusted as needed.

After the workflow is set, the user can execute the model by running the workflow. The results include standard modeling output such as parameter estimates, residuals, model diagnostics, variance estimates and predicted values. All of these results are presented in worksheets and can be converted to report-ready tables using the Table workflow object. The user also receives the settings and model fits in the text output. Finally, diagnostic plots are automatically produced using the new plotting engine. These plots are fully customizable and can include data from multiple datasets. One example of these plots is shown here:

PK Model Fit Plot

Any of the results files can be sent to a plot, table, or other workflow object. This powerful feature provides easy communication of modeling results for study reports or even presentations. This workflow makes individual model fits simple and easy. And since these PK models can be integrated with other objects, you could take mean concentration-time output from a large dataset and send it to a PK Model object to generate initial estimates of the PK model before embarking on a population analysis.
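As a back-of-the-envelope illustration of an individual model fit, a one-compartment IV bolus model C(t) = C0·exp(−k·t) can be estimated by linear regression on log concentrations. This is only a sketch with invented, noise-free data; Phoenix WinNonlin uses weighted nonlinear least squares rather than this log-linear shortcut:

```python
import math

# Invented, noise-free IV bolus data: roughly C(t) = 10 * exp(-0.2 * t)
times = [1, 2, 4, 8, 12]
concs = [8.19, 6.70, 4.49, 2.02, 0.90]

# Linear regression of ln(C) on t: slope = -k, intercept = ln(C0)
n = len(times)
logc = [math.log(c) for c in concs]
mt, mlc = sum(times) / n, sum(logc) / n
slope = sum((t - mt) * (lc - mlc) for t, lc in zip(times, logc)) / \
        sum((t - mt) ** 2 for t in times)
k = -slope                       # first-order elimination rate constant (1/h)
c0 = math.exp(mlc - slope * mt)  # back-extrapolated initial concentration
half_life = math.log(2) / k      # elimination half-life (h)

print(round(k, 2), round(c0, 1), round(half_life, 1))  # 0.2 10.0 3.5
```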

### Overall

For many users of WinNonlin versions 3 through 5, the new Phoenix WinNonlin interface presented an unexpected learning curve; however, I believe the improvements are well worth the time required to relearn how to interact with the software. Minor modifications have been made in the noncompartmental and PK modeling features of WinNonlin. The modifications (mentioned above) are nice, but for me, there are 2 key features of the new software that make my life easier. First, the graphics rival those produced within R or SigmaPlot with little or no effort to learn a different software package. These plots can be linked to output so that they are automatically updated if the output changes, and the whole package (analysis and plots) can be set up as a template for repetitive analyses. The second feature is the ability to “send” a result object (e.g. parameter worksheet) to another workflow object. Before Phoenix WinNonlin, I would export output from WinNonlin into another application to create tables and figures. Now, I can simply send the results from either my NCA or PK Model output to a table or plot workflow object within the Phoenix platform.

Overall, the noncompartmental and PK modeling features of WinNonlin are of high quality and include the desired features. Operation is simple and straightforward. And added features such as the improved plotting engine and the workflow interactivity have created a single platform for pharmacokinetic data analysis and reporting.

An evaluation copy of Phoenix was provided by Pharsight with the WinNonlin, Connect & NLME modules. You can learn more about Phoenix WinNonlin by visiting the vendor’s website, by calling your local Pharsight representative, or by requesting information from Pharsight.

## WinNonlin Software Review – Part 1

WinNonlin by Pharsight has been a fixture in pharmacokinetic analysis software for over 20 years. While it has been known as a tool for noncompartmental analysis and model-based analysis of single subject data, the new Phoenix WinNonlin creates an entirely new platform for pharmacokinetic and pharmacodynamic analysis. Similar to my review of NONMEM, I will be evaluating features and usability of the Phoenix WinNonlin software from a user’s perspective.

Part 1 will review the Phoenix platform and integration with other tools. Part 2 will review the noncompartmental and individual pharmacokinetic model fitting tools. Finally Part 3 will review the new nonlinear mixed effects module (NLME).

The installation of Phoenix was simple and easy. A standard Windows installation program was used with the default options on computers with Windows Vista, Windows 7, and a Mac running Windows Vista through a Virtual Machine. WinNonlin is not natively supported on operating systems other than Windows (e.g. Linux, Mac OS X, and UNIX).

The new Phoenix platform is best described with a picture (Click image to enlarge).

Phoenix Workflow

The newly designed interface has a centerpiece called the “workflow”. The left side of the image shows the object browser. This is where you have a list of all the objects in your file, and it is organized much like a set of nested folders. Users who are familiar with the Windows File Explorer or the SPlus statistical package will be immediately comfortable with the object browser. The right side of the image shows the workflow space. Within this white space you can place objects and then cause them to interact with one another. The orange box titled “External Sources” is a collection of data sets from external sources. Those data sets act as the input for 5 different noncompartmental analysis (NCA) objects that each have their own properties and output. The NCA in the lower left of the image is then the source of a summary statistics worksheet titled “Descriptive Stats”.

The types of objects available to use in Phoenix include: worksheets, plots, NCA, nonlinear modeling, nonlinear mixed effects modeling, in vitro-in vivo correlation tools, tables, NONMEM, SAS shell, SigmaPlot shell, SPlus script, R scripts, and other workflow objects. Each object in the workflow (or box on the white space) has its own inputs, results, and outputs. Each of these outputs can then be directed to become the input of another object (e.g. a set of final PK parameters from an NCA object can be sent to a table object). These workflow connections are illustrated by arrows and are saved in the single Phoenix project file. This allows a single workflow to be used as a template. For example, you could set up a template workflow for a drug-drug interaction study that includes the following:

• NCA analysis for Drug 1
• NCA analysis for Drug 2
• Summary statistics worksheet for Drug 1
• Summary statistics worksheet for Drug 2
• Statistical comparison of drug-drug interaction
• Tables for summary statistics of Drug 1, Drug 2, and drug-drug interaction
• Plots with individual and mean concentration-time data

This workflow could be saved as a Phoenix template file. When a new study is conducted, the concentration-time data can be added to the workflow and linked to the NCA analyses, and a single button click will perform all analyses, calculate summary statistics, and produce the desired tables and figures. This ability to automate can revolutionize traditional pharmacokinetic analysis by simplifying the work, standardizing output, and allowing for faster data analysis.

A new feature with Phoenix is the ability to incorporate different analysis types on a single workflow. A single workflow can contain NCA, individual nonlinear models, and nonlinear mixed effects (population) models. There is no need to switch back and forth between multiple model files for different analyses of a single set of data! You can conduct your NCA for initial estimates, along with 1- and 2-compartment model fits, on the same workflow.

In addition to the workflow feature, Phoenix integrates well with other software packages such as NONMEM, SAS, R, SPlus, and ODBC-compliant databases like Watson LIMS. This integration is achieved through the Phoenix Connect module that allows seamless transfer of Phoenix output to selected software programs, and then the ability to receive output from those same programs. An example of this is the export of AUC values to SAS for statistical analysis followed by the import of the bioequivalence summary statistics into Phoenix for inclusion in a table object. This allows the Phoenix workflow to control data analysis procedures from beginning to end, while allowing a user to interact with their preferred software solution.

Overall, the new workflow layout and design is a significant advance in pharmacokinetic software. And although the new Phoenix user interface is a departure from the previous one, the flexibility and power of the new workflow will create a great opportunity for users to streamline their work processes and simplify data analysis.

More to come in Part 2 (NCA and individual model fitting) and Part 3 (NLME) of my review of Phoenix WinNonlin.

An evaluation copy of Phoenix was provided by Pharsight with the WinNonlin, Connect & NLME modules. You can learn more about Phoenix WinNonlin by visiting the vendor’s website, by calling your local Pharsight representative, or by requesting information from Pharsight.

## NONMEM Software Review – Part 2

In the first part of my review of NONMEM, I focused on the installation of the software. This portion of the review will focus on using the software. NONMEM is a collection of Fortran programs that must be run from a command line or through some sort of batch procedure. While this method of program execution was common in the 1980s and 1990s, it is very uncommon in 2011. Thus many companies and individuals have developed scripts, batch files, and graphical user interfaces to simplify the execution of NONMEM and the post-processing activities after NONMEM completes. Two of the most popular GUIs are PDx-POP and PLT Tools. These will be reviewed separately, as they add functionality not currently available in the NONMEM package.

### Running NONMEM

To run NONMEM you can issue a simple statement from the command line:

nmfe7 [model file] [output file]
(Note: The command on each operating system differs slightly.)

The nmfe7 batch file provided during the installation takes the model file and initiates a NONMEM run and then produces a text output file for the user. NONMEM will also output any table files requested by the user in the model file. These table files generally include individual parameter estimates, model concentration estimates, and other data necessary for model diagnostic evaluations.
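Because nmfe7 is invoked once per model file, batch execution of several runs is easy to script yourself. Here is a minimal sketch in Python; the control stream file names are hypothetical, and the command form simply follows the nmfe7 call shown above:

```python
# Minimal batch driver for NONMEM runs, assuming the nmfe7 command
# takes a model file and an output file as shown above.
# The control stream names (run001.ctl, ...) are hypothetical.
import subprocess
from pathlib import Path

def nmfe7_command(model_file, output_file):
    """Build the command line for a single NONMEM run."""
    return ["nmfe7", model_file, output_file]

def run_models(model_files, dry_run=True):
    """Queue one NONMEM run per control stream.

    With dry_run=True the commands are built but not executed,
    which is useful for checking a batch before starting it.
    """
    commands = []
    for model in model_files:
        output = Path(model).with_suffix(".out").name
        cmd = nmfe7_command(model, output)
        commands.append(cmd)
        if not dry_run:
            subprocess.run(cmd, check=True)  # blocks until the run completes
    return commands

print(run_models(["run001.ctl", "run002.ctl"]))
```

With dry_run set to False, each run executes sequentially; a real batch script would typically also collect the table files each run produces.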

### Estimation methods

NONMEM 7 builds upon the robust model fitting engine originally developed by Stuart Beal and Lewis Sheiner in the 1970s. The original model fitting methods of First Order approximation (FO), First Order Conditional Estimation (FOCE), First Order Conditional Estimation with Interaction (FOCEI), and Laplace (second order approximation) are all present and work well. These methods benefit from improvements in gradient processing, which reduce run times and avoid premature termination with certain complicated models. In addition to these methods, ICON has added several Bayesian methods. These new methods provide distributions of parameter estimates as a result, rather than a single set of best-fit parameters. The new methods include:

• Importance sampling Expectation Maximization
• Iterative two-stage
• Stochastic Approximation Expectation Maximization (SAEM)
• Full Markov Chain Monte Carlo (MCMC) Bayesian Analysis

All of these new methods require MU referencing. This tells NONMEM how the THETA parameters are associated arithmetically with the ETAs and individual parameters; each MU_i variable must be paired with the matching ETA(i). An example of this conversion is shown here:

NONMEM VI Code

K = THETA(3)+ETA(3)

NONMEM 7 MU Referencing

MU_3 = THETA(3)
K = MU_3+ETA(3)

This MU referencing is used to speed up execution of the new Bayesian methods. The reported improvements are greatest for complex models that fail to minimize using FOCE but for which the MU-referenced Bayesian methods still provide model fits adequate for interpretation.

A key new feature for NONMEM 7 is the ability to run multiple estimation steps in one control stream. For example, you can start the model using a Bayesian methodology to quickly approach a set of parameter estimates that are reasonable. Then a FOCE method can be invoked using the last iteration of the previous method as a starting point. This new ability to execute multiple estimation steps in a single control stream can prove to be very useful when working with difficult problems or complex sets of data and equations.

### NONMEM Output

In addition to the new methods, ICON has updated the output to make it more user-friendly and versatile. In the text output file, the following tags have been added to facilitate extraction of key information. These new tags are:

• #METH: Text that describes the method, for example First Order Conditional Estimation Method with Interaction.
• #TERM: This tag indicates the beginning of the termination status of the estimation analysis.
• #TERE: This tag indicates the end of the lines describing the termination status of the analysis.
• #OBJT: This tag indicates the beginning of the text describing the objective function, such as Minimal Value Of Objective Function.
• #OBJV: This tag indicates the objective function value.
• #OBJS: This tag indicates the objective function standard deviation (MCMC Bayesian analysis only).

Another addition is that conditional weighted residuals are now output by default in NONMEM 7, which eliminates the need to calculate them separately or with customized code. Finally, all variance, covariance, and parameter estimates are now output in a standard table format as a separate file. This eliminates the need to extract this information from the text output file with a specialized tool.
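These tags make it easy to pull key results out of the text output programmatically. A minimal sketch in Python: the example output lines are invented for illustration, and the exact layout of the #OBJV line is an assumption — the code simply grabs the first number on the tagged line.

```python
# Sketch: extract the objective function value via the #OBJV tag.
# The layout of the #OBJV line is an assumption; the regex grabs
# the first number that appears on the tagged line.
import re

def parse_objv(lines):
    """Return the first objective function value found on a #OBJV line."""
    for line in lines:
        if line.startswith("#OBJV"):
            match = re.search(r"[-+]?\d+\.?\d*", line)
            if match:
                return float(match.group())
    return None

# Invented example output lines for illustration
example = [
    "#METH: First Order Conditional Estimation Method with Interaction",
    "#OBJV:********************************************  2341.682  ********",
]
print(parse_objv(example))
```

The same pattern extends to the other tags, for example scanning the lines between #TERM and #TERE to check the termination status of a batch of runs.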

### The Good

NONMEM continues to be the gold standard software for nonlinear mixed effects modeling in the pharmaceutical industry. The updates and additions in NONMEM 7 have added to the repertoire of tools available to the pharmacometrician to evaluate pharmacokinetic and pharmacodynamic data. The addition of the Bayesian methods is of particular interest to those who develop complex models that do not appear to converge using the traditional conditional estimation methods.

The upgrade to Fortran 95 was significant, as support for Fortran 77 was waning, making it increasingly difficult to find appropriate compilers. In addition, the complete rebuilding of the NONMEM output is a welcome improvement. It provides the end user with the ability to quickly access the desired information without the need for a text extraction program or a painful review of the output file.

### Room for Improvement

Although NONMEM 7 is a step in the right direction, there is still a huge void in the space for a high quality nonlinear mixed effects modeling program with a viable graphical user interface. NONMEM 7 still requires command line interaction at a minimum for the installation process, and then again for execution, unless a separate GUI is purchased. Furthermore, NONMEM 7 only performs model regression. It does not contain any post-processing capabilities. This leaves diagnostic analysis split amongst a variety of tools such as Excel, R, S-Plus, SAS, and many others. Each user tends to create a “system” of software to perform their analysis. In the end, we have no common end-to-end software package for pharmacometric analysis.

My recommendations for future NONMEM development are the following:

• Integrated Fortran compiler that is invisible to the user
• Integrated GUI and post-processing tool for standard analysis
• Continued improvements to existing estimation methods and addition of new methods

### Conclusion

Overall, NONMEM continues to be a leader in pharmacometric analysis tools. After many years of minimal development, the ICON team has added significant value to the product. However, there is still room to improve and simplify the software installation and interface to ensure continued leadership in the market. I will continue to use NONMEM for my population pharmacokinetic and pharmacodynamic analyses, but I will always be looking for that next software that can bridge the gap to the modern era of GUI computing.

You can find out more about NONMEM at ICON’s website. The following contact information was found on the ICON website: License Enquiries can be made by email (IDSSoftware@iconplc.com), telephone (+1 410-696-3100) or fax (+1 215-789-9549).

## NONMEM Software Review – Part 1

In early March, many of you voted on which PK software I should review. NONMEM received 41% of the votes, so I will review it first. I decided to break up my review into two parts: Installation of NONMEM and Using NONMEM. This is particularly important for NONMEM because the installation of the software proved to be challenging.

NONMEM 7.1.0 CD

NONMEM is an acronym for the Nonlinear Mixed Effects Modeling software originally designed by Lewis Sheiner and Stuart Beal, formerly of the University of California, San Francisco. The software arrived on a single CD from ICON Development Solutions, the current owner and developer of NONMEM. I received version 7.1.0 on the CD and was instructed to download the 7.1.2 update from a website.

The CD contains the NONMEM source code, help files, an installation batch file, and installation instructions. It does not come packaged with a Fortran compiler, which is required for installation and execution. NONMEM supports multiple operating systems, including Linux, UNIX, Windows, and Mac OS X. I attempted the installation of NONMEM in 3 distinct environments: Windows Vista Home Premium, Mac OS X (Snow Leopard), and a virtual machine (Virtual Box) running Windows XP on a Mac OS X computer.

### Installation on Windows Vista Home Premium

I attempted to install NONMEM on my Windows Vista Home Premium computer by first installing the G95 Fortran compiler (www.g95.org). I followed the instructions on the G95 website and successfully installed it. I was able to test the Fortran installation by compiling a small Fortran program provided in the NONMEM installation instructions. I then disabled the user access control feature of Windows and proceeded to install NONMEM.

NONMEM is installed from a command window by calling a batch file and appending several arguments. These include the installation drive, destination folder, Fortran command, Fortran optimizations, archive command, and a few other optional items. After the batch file is called, commands are issued that copy the necessary files to the desired location and compile the NONMEM programs (NONMEM, PREDPP, and NMTRAN) using Fortran. After NONMEM is compiled and installed, the help files are installed and a test run is executed.

My installation proceeded normally until the test run. At that point the command window closed and NONMEM was not executed. I spent a few hours investigating the problem but was unable to resolve it.

### Installation on Mac OS X (Snow Leopard)

After my failure to install NONMEM on my Windows Vista computer, I attempted to install it on my iMac. I first tried using G95 for the NONMEM installation (as described above), but was also unsuccessful. I then used gfortran (hpc.sourceforge.net), another Fortran compiler, and NONMEM installed without any problems. The test run executed and worked properly. I also successfully completed the installation using Intel Fortran (version 11).

### Installation on virtual Windows XP (on Mac OSX using VirtualBox)

I also tested the installation of NONMEM using a virtual machine on Mac OS X. Using Sun Microsystems’ VirtualBox (www.virtualbox.org), I installed a Windows XP client operating system. I attempted the same installation procedures using both G95 and gfortran. Unfortunately, the same problem occurred as was seen with Windows Vista.

### Overall impressions of installation procedure

The installation of NONMEM was very difficult, to say the least. Of the 3 system setups, I was only able to get NONMEM installed on one … and only after trying different Fortran compilers. I have been using NONMEM for almost 10 years and have performed installations of previous NONMEM versions (5 and 6) on various Windows platforms (2000, XP, 7), Linux (RedHat), and OS X. Frankly, I was quite surprised by the many challenges that I experienced with NONMEM 7.1.0. I spent approximately 6 hours working on the various installations.

Although I was able to get NONMEM working on my primary computer, I believe the installation could be much smoother. The difficulty I experienced is not uncommon with NONMEM. It is particularly vexing to new users who are trying to use the software for the first time. ICON may want to explore the distribution of NONMEM with a Fortran compiler. This might allow an easier installation and fewer challenges. In the end, NONMEM is a tool for pharmaceutical modeling and simulation, not a week-long IT project.

### Where to get NONMEM?

You can contact ICON Development Solutions to purchase a license to NONMEM.

### Part 2 – Using NONMEM

Later this week I will post about my experience using NONMEM. Watch for Part 2 of this software review.

## Is a Monte Carlo simulation an exotic drink?

The term “Monte Carlo simulation” is often used in the modeling and simulation literature with PK/PD analysis. When I was first exposed to this term, I was thoroughly confused and thought that it was some exotic statistical method that required 3 PhDs and a few days to comprehend. Well, I was very wrong.

A Monte Carlo simulation is a simulation that utilizes the “Monte Carlo Method”. It was named after the famous Monte Carlo Casino in Monaco.

Monte Carlo Casino Monaco

At the Monte Carlo Casino, people take their money and gamble on games of chance. Games of chance are based on the probabilities of random events occurring. For example, roulette is a game where a ball bounces around a spinning wheel and eventually comes to rest in one of its numbered pockets. Players can make various bets on the chance that the ball will stop on a specific spot or spots.

You may ask, “what in the world does that have to do with simulations?!” Well, let me tell you. Prior to the Monte Carlo method, simulations were performed with specific parameter values to generate a single simulation. For example, let’s assume we have the following PK model:

$C(t)=\frac{Dose}{V}\cdot e^{-\frac{CL}{V}\cdot t}$

We can predict a concentration-time curve by providing a value for CL and V. We can then do that for various combinations of CL and V. It would look something like this:

Discrete Simulation
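The discrete simulation above takes only a few lines of code: evaluate the one-compartment equation for each fixed (CL, V) pair. A sketch in Python, with illustrative parameter values not taken from the post:

```python
# Discrete simulation: evaluate C(t) = (Dose/V) * exp(-(CL/V) * t)
# for two fixed (CL, V) pairs. All parameter values are illustrative.
import math

def concentration(t, dose, cl, v):
    """One-compartment IV bolus concentration at time t."""
    return (dose / v) * math.exp(-(cl / v) * t)

dose = 100.0                # mg
times = [0, 1, 2, 4, 8]     # h
for cl, v in [(5.0, 50.0), (10.0, 50.0)]:   # L/h, L
    curve = [round(concentration(t, dose, cl, v), 2) for t in times]
    print(f"CL={cl} L/h, V={v} L:", curve)
```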

This gives us 2 concentration-time curves. While this is useful, we don’t always know the exact values of CL and V for a given individual before they take the drug. What we usually know is that CL and V have some average value along with a variance. In other words, we have a distribution of values for CL and V, with some being more likely than others. So instead of choosing just a few sets of values for CL and V, what if we chose many values? And what if we used the known distribution to select more likely values more often and less likely values less often? Well, we would then have a simulation that looks like this:

Monte Carlo Simulation

As output, we would get a large distribution of plasma concentration-time curves that would represent the range of possibilities, and the more likely possibilities would occur more frequently. This is extremely useful in PK/PD simulations because we can quantify both the mean response and the range of responses.

To do a Monte Carlo simulation, you simply need a program (like NONMEM or WinNonlin) that randomly selects a parameter value from a known distribution, runs the PK model, and saves the output. That process is repeated many times (usually between 1,000 and 10,000 times) to generate the range of expected outcomes.
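That procedure can be sketched in a few lines of Python, assuming log-normal distributions for CL and V (a common choice for PK parameters); all the numbers here are illustrative:

```python
# Monte Carlo simulation sketch: draw CL and V from log-normal
# distributions, simulate a curve for each draw, then summarize the
# concentrations at each time point. Parameter values are illustrative.
import math
import random

random.seed(1)  # reproducible draws

def concentration(t, dose, cl, v):
    """One-compartment IV bolus concentration at time t."""
    return (dose / v) * math.exp(-(cl / v) * t)

def monte_carlo(n, dose, cl_mean, v_mean, cv, times):
    """Simulate n concentration-time curves with random CL and V."""
    sd = math.sqrt(math.log(1 + cv ** 2))  # log-scale SD for a given CV
    curves = []
    for _ in range(n):
        cl = random.lognormvariate(math.log(cl_mean), sd)
        v = random.lognormvariate(math.log(v_mean), sd)
        curves.append([concentration(t, dose, cl, v) for t in times])
    return curves

times = [0, 1, 2, 4, 8]
curves = monte_carlo(1000, dose=100.0, cl_mean=5.0, v_mean=50.0,
                     cv=0.3, times=times)
# Summarize the range of possibilities at each time point
for i, t in enumerate(times):
    values = sorted(c[i] for c in curves)
    lo, med, hi = values[25], values[500], values[975]
    print(f"t={t} h: median={med:.2f}, 95% interval=({lo:.2f}, {hi:.2f})")
```

The median and 95% interval at each time point quantify both the typical response and the range of responses, which is exactly what makes these simulations so useful in PK/PD work.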

Hopefully you understand Monte Carlo simulations better now … and if not, you should go get an exotic drink and try reading this post again tomorrow!