Journey of an API: Developing robust analytical methods
13th Mar 2024
As an API makes its way through development and to market, product quality, safety and efficacy ultimately rely on sound and robust analytical methods. The significance of analytical method development is not confined to a single stage in the API’s journey; instead, it begins in a project’s earliest stages and extends through to validation and commercialisation. The stakes are high: problems in the analytical development phase may not become apparent until later, at which point they can introduce significant delays and regulatory hurdles.
In this blog, we’ll take a closer look at analytical method development and its implications for product quality and compliance. Read on to explore key indicators of analytical success, how they are measured, and how organisations can address common obstacles along the way.
Measuring success: Key objectives for analytical method development
The ultimate goal of analytical method development is to create robust and reliable testing procedures tailored to the characteristics of the product, ensuring that it is safe and effective for patients. To reach this end state, however, organisations must meet several milestones along the way. Let’s break down each of these objectives and explore how scientists can effectively measure success as an API progresses through development.
In order to streamline analysis and ensure confidence in their results over time, organisations must develop analytical methods that are robust and reliable. A reliable method depends on a sound control strategy, underpinned by instrument calibration and system suitability assessments. The analytical procedure should be fit for its intended purpose: to measure the attribute(s) of the analysed material with the required specificity, selectivity, accuracy and precision, so that results can be correctly determined over the required reportable range and reliably reproduced across different instruments or laboratories.
How it’s measured: A number of studies may be leveraged to measure robustness and reliability. Robustness testing deliberately stresses an analytical method: scientists systematically vary a single parameter, such as pH or temperature, to confirm that results remain reliable despite small changes in operating conditions. Accuracy and precision studies measure how close test results are to the true value and how reproducible they are, generally comparing against a reference standard using high-performance liquid chromatography (HPLC), gas chromatography (GC) or similar techniques. Other studies, such as establishing a limit of detection (LOD) and limit of quantification (LOQ) to characterise the method’s sensitivity, are also important in affirming robustness and reliability.
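To make the LOD/LOQ idea concrete, ICH Q2 describes a calibration-curve approach in which LOD = 3.3σ/S and LOQ = 10σ/S, where S is the calibration slope and σ the residual standard deviation of the regression. The sketch below illustrates this calculation; the HPLC calibration data are purely hypothetical, not taken from any real method.

```python
from math import sqrt

def lod_loq(conc, response):
    """Estimate LOD and LOQ from a linear calibration curve using the
    ICH Q2 formulae LOD = 3.3*sigma/S and LOQ = 10*sigma/S, where S is
    the calibration slope and sigma the residual standard deviation."""
    n = len(conc)
    mean_x = sum(conc) / n
    mean_y = sum(response) / n
    # Least-squares slope and intercept of response vs concentration
    sxx = sum((x - mean_x) ** 2 for x in conc)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(conc, response))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    # Residual standard deviation (n - 2 degrees of freedom)
    ss_res = sum((y - (slope * x + intercept)) ** 2
                 for x, y in zip(conc, response))
    sigma = sqrt(ss_res / (n - 2))
    return 3.3 * sigma / slope, 10 * sigma / slope

# Hypothetical calibration data: concentration (ug/mL) vs peak area
conc = [0.5, 1.0, 2.0, 5.0, 10.0]
area = [12.1, 24.3, 48.9, 121.5, 243.8]

lod, loq = lod_loq(conc, area)
print(f"LOD ~ {lod:.3f} ug/mL, LOQ ~ {loq:.3f} ug/mL")
```

In practice the residual standard deviation may instead be estimated from blank measurements, and the calculated values are then confirmed experimentally near the estimated limits.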
In support of achieving a high-quality, safe end product, impurity identification is another important component of analysis. Impurities can exist in starting material, be introduced through solvents or reagents, or result from degradation under certain storage conditions. Certain impurities, including genotoxic impurities, can be extremely harmful to human health, making them critical to identify and eliminate.
How it’s measured: Impurities must remain under a certain permitted daily exposure (PDE) as defined by the International Council for Harmonisation (ICH), and scientists employ a variety of analytical techniques throughout development to identify and quantify a range of different impurities, including organic impurities, residual solvents, structurally related degradation products and inorganic impurities. Chromatographic methods such as HPLC and GC are particularly useful for their ability to detect impurities even at low concentrations. Mass spectrometry, often used in conjunction with chromatographic methods, is also useful in accurately identifying and measuring impurities.
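As an illustration of how a PDE translates into a practical acceptance limit, ICH Q3C’s Option 1 relationship converts a PDE (in mg/day) and a maximum daily dose (in g/day) into a concentration limit in ppm. The numbers below are illustrative only, not limits for any specific solvent or product.

```python
def concentration_limit_ppm(pde_mg_per_day, max_daily_dose_g):
    """Convert a permitted daily exposure (PDE, mg/day) into a
    concentration limit (ppm) using the ICH Q3C Option 1 formula:
        limit (ppm) = 1000 * PDE / dose,
    where dose is the maximum daily dose in g/day."""
    return 1000 * pde_mg_per_day / max_daily_dose_g

# Illustrative values: an impurity with a PDE of 50 mg/day in a
# product administered at up to 10 g/day
limit = concentration_limit_ppm(50, 10)
print(f"Concentration limit: {limit:.0f} ppm")  # 5000 ppm
```

Note that the more conservative the dosing assumption (i.e. the higher the assumed daily dose), the tighter the resulting ppm limit becomes.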
Scientists must also ensure that analytical methods will stand up to scrutiny in regulatory submissions as the product reaches later stages of development. Regulatory agencies such as the US Food and Drug Administration (FDA) and the European Medicines Agency (EMA) have stringent quality control requirements, and all methods must be validated in accordance with Good Manufacturing Practices (GMP).
How it’s measured: Compliance and validation go hand in hand when it comes to analytical method development. As a product progresses into the later stages of development, organisations conduct a variety of validation studies tailored to the specific phase of clinical development. Methods are routinely assessed and reviewed based on process requirements and historical data. In addition, maintaining thorough documentation throughout all stages of development can enable organisations to flag and rectify any issues earlier on, and be more prepared when it comes time for formal validation.
Overcoming obstacles: Avoiding common analytical pitfalls
A number of challenges can arise during analytical method development, and many of them have critical implications for compliance, project timelines, and other priorities. Let’s explore how organisations can best address some of these common obstacles.
Ensuring that analytical methods are reliable can be further complicated when a project transitions from one organisation to another. Formal method transfer activities are crucial in demonstrating comparability between laboratories and verifying the results of another organisation. In addition, methods may be outdated, or incompatibilities with pre-established methods may arise when a product moves from lab scale to plant scale. Addressing challenges with knowledge transfer and scale-up requires extensive analytical expertise, enabling scientists to identify appropriate modifications to the analytical method, such as adjusting sample preparation, or altering parameters like injection volumes, column dimensions and others to accommodate the larger scale.
Although many studies are not explicitly required in the early stages of development, organisations may choose to perform more testing than is required to ensure greater reliability and to be prepared when the product moves into later phases. However, rigorous testing also comes with time and cost implications. This leaves pharma and biotech companies with a challenging dilemma: whether to perform more testing than is necessary and ensure a more reliable method, or to cover only the required studies to contain costs and save time early on.
The answer can vary based on the organisation, their product and their order of priorities. For most organisations, it makes sense to aim for a higher level of rigour than is strictly required at each stage, while making sure to avoid tests that are unnecessary or inapplicable to their product. An experienced analytical team can provide guidance and aid in determining what studies should be conducted earlier on and which should be reserved for later stages of development to maximise reliability and efficiency.
One of the most frustrating challenges that can arise during analysis is results that do not meet specification, particularly in later phases of development. Numerous factors can cause out of specification (OOS) results, from true changes in the impurity profile to false results caused by instrument malfunctions, sample contamination, environmental conditions and more. In earlier phases, analytical chemists should report results to R&D chemists and work together to investigate what may have affected them, taking immediate corrective measures whether the cause is process or analytical in nature. For example, a reduction in product purity indicates a change in the impurity profile; working alongside the chemist may reveal that this was caused by a process change, such as a shift in temperature, a different charging approach, or other factors.
If OOS results surface during production, organisations should have standard operating procedures in place to address them, which include thorough investigation of the results to determine the true root cause and conducting additional testing when appropriate, often involving multiple analysts and/or different equipment.
Back to the big picture: Getting analytical methods right the first time
While organisations may be inclined to perform no more testing than is required at a given stage, it’s imperative to avoid rushing or shortcutting this critical aspect of the API’s journey to market. Analytical methods that are not sufficiently reliable or robust can create setbacks down the line that are even more costly, requiring scientists to go back and adjust methods in later stages and potentially halting production. By getting analytical development right the first time, organisations can save time and costs in the long run while ensuring optimal product quality. Analytical method development, and its role in controlling API production, is an essential part of ensuring patient safety, and the analytical methods themselves are registered with the regulatory bodies that allow an API to reach market.
Analytical chemistry at Sterling
With an experienced analytical team and a collaborative approach, we at Sterling support our customers in developing robust analytical methods at every stage of development. We take the time to ensure that our methods produce high-integrity, reliable data from the start, and our wide range of analytical equipment enables us to support a range of testing requirements. We work closely with our customers to establish an approach that works best for their product, considering regulatory requirements, project timelines, current stage of the lifecycle, and other important factors.
Interested in learning more about our approach to analytical method development and discovering how we can support your programme? Contact us to speak to an expert, or visit our Knowledge Hub for more helpful insights.