Understanding Elapsed Time Measurement in Python

The concept of elapsed time plays a critical role in software development, especially when performance, efficiency, and responsiveness are primary concerns. In Python, measuring the time a block of code takes to execute is more than just a technical exercise—it’s a practical necessity. Whether optimizing scripts, benchmarking functions, or ensuring that critical operations meet timing constraints, elapsed time measurement offers a window into how your code behaves in real-world scenarios.

Python provides several built-in modules and approaches to help developers assess the duration of code execution. These tools offer various levels of precision and complexity, making it possible to tailor the solution to the problem at hand. As programs grow in size and complexity, precise time tracking becomes indispensable for identifying performance bottlenecks and refining computational efficiency.

What Does Elapsed Time Represent in Programming?

Elapsed time refers to the total time taken between two points in a program’s execution. It’s the span that starts when a particular block of code begins running and ends when it finishes. This can include all the time used by computations, I/O operations, and delays. Unlike CPU time, which focuses only on the time the processor spends working on tasks, elapsed time encompasses every moment from start to end—whether active or idle.

For many use cases, wall-clock time is what matters most. This is especially true when working with operations that involve external systems, such as API requests or file downloads. Measuring elapsed time gives a complete picture of user-facing performance and is ideal for scenarios where perceived responsiveness is essential.

Reasons to Measure Execution Duration in Python

Measuring execution time isn’t just for academic curiosity; it serves several practical purposes across many domains of programming. Here are some reasons why it’s necessary to keep track of how long code takes to run.

Performance Optimization

Applications that are sluggish or inefficient can frustrate users and consume unnecessary resources. By measuring how long certain operations take, developers can focus their efforts on improving performance where it truly matters. This is particularly useful for functions that are executed repeatedly or consume a large portion of total execution time.

Debugging and Analysis

Sometimes code doesn’t perform as expected—not due to logic errors, but due to delays. Identifying slow segments using elapsed time can point to inefficient algorithms, redundant processes, or external dependencies that introduce latency. This makes elapsed time an invaluable tool during troubleshooting.

Benchmarking and Comparisons

When comparing two or more implementations of the same task, elapsed time serves as an objective measure of which version is more efficient. This is commonly seen when evaluating new libraries, selecting algorithms, or migrating legacy systems to more modern alternatives.

Automation and Scheduling

In systems that involve scheduled jobs or batch processing, knowing how long a job will take to run helps in planning and avoiding resource contention. Accurate timing ensures that long-running processes don’t overlap and interfere with each other, preserving the integrity and performance of the system.

Exploring Python Modules for Measuring Elapsed Time

Python’s strength lies in its extensive standard library, which includes several tools specifically for measuring execution time. Each has its own characteristics and ideal use cases. Selecting the right method depends on whether you need high precision, simplicity, human-readable formats, or system independence.

Time Module

One of the simplest and most widely used modules for tracking time is the time module. It includes functions that return the current time in seconds, which can be used to compute the difference between two timestamps. This method is often used for general-purpose timing and is accessible to beginners due to its straightforward implementation.
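
As a minimal sketch of that pattern, time.time() can be captured before and after a block and subtracted:

```python
import time

start = time.time()                      # seconds since the epoch
total = sum(i * i for i in range(1_000_000))
elapsed = time.time() - start

print(f"Elapsed: {elapsed:.4f} seconds")
```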

Perf Counter

For higher precision and more stable timing unaffected by system clock changes, Python offers perf_counter. This function provides a monotonically increasing timer, which means it will never go backward, even if the system clock is adjusted. It’s ideal for short-duration measurements and performance-critical benchmarking.
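
A short sketch using time.perf_counter() to measure a sub-second operation:

```python
import time

start = time.perf_counter()              # monotonic, high-resolution counter
result = sorted(range(100_000), reverse=True)
elapsed = time.perf_counter() - start

print(f"Sort took {elapsed * 1000:.3f} ms")
```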

Process Time

When the goal is to measure only the time the CPU spends processing tasks, excluding sleep and waiting periods, process_time is the preferred function. It tracks the processor time rather than wall-clock time, making it particularly useful for computational benchmarks.
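
A small illustration of the contrast: the sleep below shows up in wall-clock time but not in processor time:

```python
import time

cpu_start = time.process_time()
wall_start = time.perf_counter()

total = sum(i * i for i in range(2_000_000))   # CPU-bound work
time.sleep(1)                                  # idle time: invisible to process_time

print(f"CPU time:  {time.process_time() - cpu_start:.4f} s")   # excludes the sleep
print(f"Wall time: {time.perf_counter() - wall_start:.4f} s")  # includes the sleep
```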

Datetime Module

For those who require human-readable timestamps and durations, the datetime module offers convenient features. While it may not offer the precision needed for micro-benchmarking, it’s extremely useful for logging and tracking execution over broader periods.
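
A brief sketch, in which the sleep stands in for a real task; subtracting two datetime values yields a readable timedelta:

```python
import time
from datetime import datetime

started = datetime.now()
time.sleep(1.5)                             # stands in for the real task
finished = datetime.now()

duration = finished - started               # a timedelta object
print(f"Started:  {started:%Y-%m-%d %H:%M:%S}")
print(f"Duration: {duration}")              # e.g. 0:00:01.501234
```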

Timeit Module

The timeit module is designed specifically for benchmarking small snippets of code. It runs the code multiple times and returns the average execution time, reducing the impact of background processes and one-off delays. This makes it an excellent choice for comparing the efficiency of different approaches.
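
A minimal example: timeit builds the test data once in its setup string and times only the statement itself:

```python
import timeit

setup = "import random; data = [random.random() for _ in range(1000)]"
total = timeit.timeit("sorted(data)", setup=setup, number=10_000)

print(f"Mean per run: {total / 10_000 * 1e6:.2f} µs")
```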

Key Considerations When Measuring Time

While Python provides the tools needed to capture execution duration, obtaining accurate and meaningful results requires thoughtful implementation. Several factors can introduce noise or bias into the timing process, and it’s important to be aware of these to avoid drawing misleading conclusions.

Impact of System Load

One of the most common sources of variability is the system’s background activity. Other processes running on the machine can consume resources and introduce delays. As a result, a single measurement may not accurately reflect the performance of a code block. Running tests multiple times and averaging the results can help mitigate this effect.

Clock Resolution and Accuracy

Not all timing functions offer the same level of resolution. Using a low-resolution clock for very short operations may yield imprecise results, or even zero duration if the operation completes faster than the clock can measure. For this reason, selecting the right timing function is essential, especially when testing high-speed computations.

Interpreter Warm-up and Caching

Python’s interpreter may behave differently on the first few executions of a function, especially when importing modules or compiling bytecode. This warm-up phase can skew timing results if not accounted for. A good practice is to run the function a few times before recording the measurement.
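
A hedged sketch of that practice, discarding a few warm-up calls before the measured run:

```python
import time

def work():
    return sum(i * i for i in range(100_000))

for _ in range(3):                 # warm-up: let imports, caches, and bytecode settle
    work()

start = time.perf_counter()
work()
print(f"{time.perf_counter() - start:.6f} s")   # measured only after warm-up
```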

I/O Operations and Delays

When measuring tasks that involve disk access, network communication, or user input, elapsed time includes all waiting periods. This is useful for understanding user experience but may not reflect the computational efficiency of the algorithm itself. Differentiating between active and idle time can provide a clearer picture of system performance.

Matching Methods to Use Cases

Each timing function in Python has strengths and limitations. The best method to use depends on the specific goals of your timing exercise. The following scenarios illustrate how different methods can be matched to appropriate use cases.

For high-precision measurements of quick functions, such as evaluating sorting algorithms or simple arithmetic loops, perf_counter offers the most accurate results.

When determining the time a script takes to complete from start to finish, including all subprocesses and delays, the time module is sufficient and easy to implement.

If the aim is to profile how much actual processing time the CPU spends on computations, without being skewed by file access or sleep functions, process_time is a logical choice.

For scenarios involving timestamp logging or elapsed duration display in logs, the datetime module provides intuitive and readable outputs.

For systematic benchmarking of alternative code approaches, timeit removes noise and isolates performance differences effectively by running repeated tests.

Importance of Repeating Measurements

No matter which function is used, relying on a single measurement is rarely advisable. The environment in which code executes is dynamic, with many uncontrollable variables. Repeating the measurement multiple times and calculating averages helps smooth out anomalies and produce consistent, reliable data.

The timeit module automates this process, but even when using other methods, creating custom loops that run the function multiple times and record durations can yield a better representation of actual performance.
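
One way to do this by hand is a small helper like the illustrative time_repeated below, which reports a mean and standard deviation across runs:

```python
import statistics
import time

def time_repeated(func, repeats=20):
    """Illustrative helper: time func several times and summarize the runs."""
    durations = []
    for _ in range(repeats):
        start = time.perf_counter()
        func()
        durations.append(time.perf_counter() - start)
    return statistics.mean(durations), statistics.stdev(durations)

mean, stdev = time_repeated(lambda: sum(range(100_000)))
print(f"{mean * 1000:.3f} ms ± {stdev * 1000:.3f} ms")
```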

This approach is especially useful when evaluating optimizations. By comparing the average time of the original and optimized code across many runs, developers can quantify improvements with greater confidence.

The Role of Time Measurement in Modern Development

Modern software development emphasizes agility, scalability, and responsiveness. In such an environment, understanding how code performs under different conditions is not just a bonus—it’s a requirement. Measuring execution time equips developers with the data needed to make performance-focused decisions.

Whether it’s choosing between different libraries, designing APIs that respond quickly, or developing data pipelines that must complete within time constraints, accurate time tracking helps ensure systems meet both technical and user expectations.

Moreover, time measurement supports collaboration and communication. When working on large teams, being able to present performance data in a clear and quantitative manner fosters trust and facilitates discussions around trade-offs and design choices.

Timing as a Foundation for Optimization

Performance optimization often begins with measurement. Without knowing where time is being spent, efforts to improve efficiency can be misguided or counterproductive. Measuring elapsed time reveals the hot spots—those sections of code that consume the most resources or introduce the greatest delays.

With this knowledge, developers can target their optimizations more precisely. Whether it involves refactoring code, switching algorithms, or reordering operations, improvements can be tested and validated using the same timing mechanisms that revealed the issues in the first place.

This feedback loop—measure, optimize, re-measure—is at the heart of high-performance programming. It ensures that changes produce real benefits and that those benefits are meaningful in the context of the application’s goals.

Execution Time Measurement

Elapsed time is more than a technical detail—it’s a metric that reflects how users experience your application and how well your system manages its workload. Measuring it effectively requires not only the right tools but also an understanding of context, goals, and potential pitfalls.

Python’s rich set of timing functions provides the flexibility to address a wide range of performance monitoring needs. By choosing the appropriate method, avoiding common errors, and interpreting results with care, developers can gain deep insights into their code and create software that runs efficiently and predictably.

In the broader picture, timing is a foundation for many disciplines within software development, including profiling, optimization, benchmarking, and reliability engineering. Mastering it empowers developers to build better software—software that is not only correct but also fast, responsive, and resource-conscious.

In-Depth Exploration of Timing Techniques in Python

After understanding the foundations of elapsed time measurement in Python, it’s time to delve deeper into the specific techniques and tools provided by the language. Each method is suited to different kinds of tasks, and choosing the right one depends on the nature of the code being measured. In this part, we explore how these tools behave under various conditions, compare their characteristics, and highlight real-world applications.

Assessing the Time Module for General Timing

Among Python’s core modules, the time module stands out for its simplicity. Its time() function returns the current moment as a floating-point number of seconds since the system epoch. These timestamps can be stored before and after the execution of a code block; subtracting the initial value from the final one yields the duration taken by the block.

This method is especially suitable for broader tasks, such as measuring the time it takes to load data from a file, connect to a server, or complete a user-triggered operation. Because the function draws from the system clock, it reflects real-world duration, including any waiting or delays.

However, this simplicity comes with limitations. For very short durations, the resolution might not be fine enough to detect differences, especially when functions complete within microseconds. Also, system clock adjustments may introduce unexpected inconsistencies, leading to non-monotonic results.

High-Resolution Measurement Using Performance Counters

When greater precision is necessary, performance counters offer an advanced alternative. These provide access to timers with significantly higher resolution and are ideal for benchmarking short snippets of code. Unlike traditional timestamps, performance counters use a monotonic clock source, ensuring that the returned value only increases over time, unaffected by any external system changes.

The reliability of this method becomes especially evident when comparing near-identical algorithms. Even minor variations in efficiency, such as differences in memory access or internal iteration strategies, can be detected due to the counter’s granularity. This makes it a go-to method for developers conducting detailed performance evaluations.

Still, it’s worth noting that these counters produce floating-point values representing fractional seconds. While this is perfect for technical analysis, the results may require formatting or rounding for clarity in user-facing applications or reports.

Measuring Processor Time for Computational Analysis

Another timing approach focuses specifically on processor usage. This is particularly valuable when the goal is to isolate the time spent actively processing, without including delays due to sleep functions, input/output waits, or external calls.

Processor time is a critical metric in CPU-intensive applications, such as simulations, data transformations, or scientific computations. In these contexts, developers are less interested in overall duration and more concerned with how much processor capacity the operation consumes. Tracking processor time can help identify inefficient algorithms or functions that hog computational resources.

Despite its precision in measuring active execution, this method is less useful in scenarios that include external dependencies or waiting states. It won’t reveal the time spent fetching data from a server or reading from disk. As such, it’s most effective when paired with other timing techniques that account for wall-clock duration.

Human-Readable Timing with the Datetime Module

While most high-precision methods focus on numerical results, there are situations where a readable time format is more appropriate. This is where the datetime module becomes useful. Rather than capturing processor ticks or epoch-based timestamps, this module provides formatted date and time values.

This makes it ideal for logging, reporting, and systems that involve auditing or compliance. When a task begins and ends, storing these times with full date and time stamps allows for easy review later. While the accuracy of this approach may not be as sharp as performance counters, the benefits of clarity and context often outweigh the lack of precision.

It’s important to remember, however, that this method is sensitive to changes in the system clock. If the system time is modified manually or through network synchronization, results can be distorted. This vulnerability makes it less suitable for performance testing but well-suited for historical records.

Systematic Benchmarking Through Repetition

For truly reliable performance assessment, executing code just once is rarely enough. Background processes, temporary system load, and unpredictable fluctuations can all influence a single result. To overcome this, repetition is key.

Benchmarking through repeated execution helps iron out anomalies and provides an average execution time. This is where the timeit module becomes valuable. It allows developers to define a block of code and run it many times, capturing the total time taken. Dividing this total by the number of runs yields a highly accurate average.

This approach not only smooths out noise but also highlights consistent performance differences between two or more implementations. Even if both versions seem to run in comparable time initially, multiple executions reveal subtle efficiency gains or losses that could affect performance at scale.

Comparative Overview of Timing Functions

When deciding which timing method to use, it helps to consider their comparative features. Each method provides distinct benefits, and their effectiveness varies depending on the task.

The time-based function offers a convenient way to capture wall-clock duration. It is appropriate for general measurement where delays, pauses, and user waiting periods are significant. This makes it suitable for monitoring real-world interactions or end-to-end script execution.

The performance counter delivers unmatched precision. It should be employed when benchmarking fast functions or when minute changes in performance need to be detected. Its stability against system clock adjustments makes it highly dependable in controlled testing environments.

Processor time measurement is highly specialized, focused solely on the CPU’s active engagement with the code. It is best used for isolating processing overhead, especially in computational projects. However, it ignores delays and should not be relied on for user experience evaluation.

The datetime module shines when timestamps need to be logged or reported. While less precise for internal testing, it adds tremendous value to documentation, especially when code is part of a larger process requiring traceability.

For thorough and fair benchmarking, the repeated execution strategy proves invaluable. It captures a more representative average and helps minimize distortion from transient factors, making it ideal for comparison studies.

Real-World Applications of Timing Tools

Timing techniques are more than theoretical tools—they’re crucial for solving real-world programming problems. Here are several practical scenarios where accurate time measurement plays an essential role.

In data processing pipelines, developers often encounter delays at various stages, from data ingestion to transformation and export. By measuring elapsed time at each step, it’s possible to identify the slowest segment and take corrective action, such as replacing inefficient functions or parallelizing operations.

In web development, timing is key to ensuring that endpoints respond quickly. Measuring the time taken for server-side logic, database queries, and third-party service calls allows developers to maintain a responsive application. Poorly performing routes can be optimized once their execution durations are known.

In machine learning projects, model training and evaluation may require hours of computation. Tracking the duration of each training cycle helps in estimating resource usage and determining whether early stopping criteria are being met. It also provides insight into how parameter changes affect training time.

Even in interactive applications like games or simulations, developers use timing to control rendering rates, animation speeds, and input responsiveness. Accurate timekeeping ensures smooth user experience and consistent behavior across devices.

Avoiding Common Timing Pitfalls

While timing tools are powerful, incorrect usage can lead to misleading conclusions. One frequent mistake is failing to account for warm-up time. The first few executions of a function may behave differently due to caching, memory allocation, or dynamic compilation. Timing should begin only after the code has stabilized.

Another common error involves using low-resolution timers for short durations. When measuring high-speed operations, using functions with limited precision can yield zero or inconsistent results. It’s important to match the function’s resolution to the expected duration of the task.

External system influences such as antivirus scans, software updates, or resource-heavy background applications can skew timing results. Running tests on a clean system or using multiple repetitions can help neutralize these variables.

Sometimes developers include setup and teardown code inside the timing block. This dilutes the accuracy of the measurement by including irrelevant overhead. Isolating the core function from its context ensures that only the intended task is being timed.
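
The timeit module's setup parameter exists for exactly this reason; in this sketch, building the list is excluded from the measurement:

```python
import timeit

# The list is constructed once in `setup`; only max(data) is timed
elapsed = timeit.timeit(
    stmt="max(data)",
    setup="data = list(range(1_000_000))",
    number=100,
)
print(f"Per call: {elapsed / 100 * 1000:.3f} ms")
```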

Finally, assuming that one method fits all situations is a mistake. No single timing function is universally perfect. Choosing the correct one based on the specific use case yields better accuracy, insight, and efficiency.

Enhancing Accuracy with Best Practices

Accurate measurement depends on more than just the right tool—it requires disciplined methodology. Here are best practices to follow when capturing elapsed time in Python.

Always warm up the code. Run it a few times before recording results, allowing the system to allocate resources and cache dependencies.

Choose the highest-resolution timer available for the job. For short code segments, avoid standard wall-clock functions and favor precision counters.

Perform multiple iterations and calculate averages. Single measurements are vulnerable to noise, while averages produce more reliable insights.

Isolate the timing block. Ensure that only the logic of interest is within the measured area. Setup code, validation, and logging should occur outside this block.

Account for the context. If code depends on external services, measure multiple layers to see where delays originate. This leads to more effective optimization strategies.

Interpret results in context. Not all improvements are worth pursuing. Sometimes a faster approach might be more complex or harder to maintain. Use timing data as one factor in broader design decisions.

The Path Toward Performance-Aware Development

Incorporating elapsed time measurement into regular development habits leads to smarter, more responsive software. It changes how developers approach problem-solving, turning vague performance impressions into measurable data. This fosters a culture of continuous improvement and optimization.

By embracing timing tools early in the development cycle, teams can prevent bottlenecks, reduce user-facing latency, and scale more effectively. Performance monitoring should be as integral as unit testing and version control—an essential aspect of creating professional-grade applications.

Moreover, performance data improves communication. Whether presenting findings to stakeholders or collaborating with peers, objective metrics about time usage help validate decisions and align development goals with user expectations.

As modern software evolves toward more interconnected and data-heavy architectures, time becomes an even more precious resource. Knowing how to measure it, understand it, and act upon it is no longer optional—it’s a necessity.

Elevating Performance Analysis through Advanced Time Measurement

After gaining an understanding of the tools and methods available in Python for measuring execution time, the natural progression leads to more advanced applications. These include integrating timing into profiling workflows, building custom benchmarking suites, evaluating algorithmic choices at scale, and adopting a performance-first mindset in design and development.

At this stage, the objective is no longer just to measure how long something takes, but to extract insight, identify inefficiencies, and optimize code behavior using time as a diagnostic tool. This part explores strategic applications of elapsed time measurement that go beyond isolated tasks and serve broader architectural and development goals.

Combining Timing with Profiling Techniques

Profiling is the process of examining code execution to determine which parts consume the most time or resources. While basic timing functions measure specific blocks, profilers offer a holistic view across the entire program. However, integrating elapsed time metrics into profiling sessions adds clarity and sharpens the interpretation of profiler results.

Using both tools in parallel provides a double-layered analysis. The profiler shows which functions or lines are bottlenecks, while direct timing confirms the duration of specific paths or sequences. This synergy is especially useful in large applications where multiple components interact and influence each other’s execution.

Profiling also highlights unexpected time sinks—areas of the code that intuitively seem lightweight but consume more time than anticipated. These are perfect candidates for targeted timing with high-resolution functions to dissect their behavior more closely. By alternating between broad profiling and focused timing, developers gain a multidimensional understanding of their application’s performance.
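
As a rough sketch of this workflow, the standard cProfile and pstats modules can surface the expensive functions that focused timing then examines more closely (the workload here is purely illustrative):

```python
import cProfile
import pstats
import time

def slow_stage():
    time.sleep(0.1)                 # stands in for an unexpectedly expensive step

def fast_stage():
    sum(range(100_000))

def workload():
    for _ in range(3):
        slow_stage()
        fast_stage()

profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)   # top 5 entries
```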

Designing Custom Benchmarking Frameworks

While Python provides modules for basic benchmarking, some projects require more control and customization. For example, when measuring performance across multiple versions of a function, handling different input sizes, or aggregating results for visualization, off-the-shelf solutions may fall short.

Building a custom benchmarking framework allows developers to:

  • Create structured test scenarios for different conditions
  • Store timing results in a standardized format
  • Generate comparative summaries
  • Visualize trends in execution time

A well-structured benchmark framework involves defining test functions, wrapping them with timing logic, repeating executions, and logging output in a consistent way. This infrastructure becomes invaluable when performance tracking is a recurring task, such as in performance regression testing during continuous integration.
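
A compact sketch of such a framework, built around a hypothetical benchmark decorator and RESULTS registry (the names and structure are illustrative, not a standard API):

```python
import statistics
import time

RESULTS = {}   # hypothetical registry: benchmark name -> list of durations

def benchmark(name):
    """Illustrative decorator: time every call and record it under name."""
    def wrap(func):
        def inner(*args, **kwargs):
            start = time.perf_counter()
            result = func(*args, **kwargs)
            RESULTS.setdefault(name, []).append(time.perf_counter() - start)
            return result
        return inner
    return wrap

@benchmark("join-strings")
def build_string(n):
    return "".join(str(i) for i in range(n))

for _ in range(10):
    build_string(10_000)

for name, runs in RESULTS.items():
    print(f"{name}: mean {statistics.mean(runs) * 1000:.3f} ms over {len(runs)} runs")
```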

Custom frameworks can also incorporate conditional execution based on data type, load, or hardware. This flexibility ensures that benchmarks are contextually relevant and simulate real-world usage more accurately than isolated test cases.

Scaling Time Measurement with Input Growth

As software grows more complex, so do the inputs it must handle. Timing small inputs might offer little insight into how a function scales. Evaluating how execution time evolves as input size increases is essential for understanding time complexity and algorithmic behavior.

This form of analysis reveals whether a process scales linearly, exponentially, or otherwise. Timing across a range of input sizes and plotting the results exposes growth patterns and uncovers inefficiencies that only emerge at scale.

For example, a sorting algorithm might perform similarly to another on small lists, but diverge significantly as list size increases. By timing each variant across gradually increasing inputs, developers can make informed decisions about which algorithm to use in different circumstances.
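
A simple sketch of a scaling test: time the same operation (here, Python's built-in sorted) across growing inputs and compare the results:

```python
import random
import time

def time_once(func, data):          # illustrative one-shot timing helper
    start = time.perf_counter()
    func(data)
    return time.perf_counter() - start

for n in (1_000, 10_000, 100_000):
    data = [random.random() for _ in range(n)]
    print(f"n={n:>7}: sorted() took {time_once(sorted, data) * 1000:8.3f} ms")
```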

Scaling tests are especially relevant in data-heavy domains like analytics, machine learning, and real-time processing, where the cost of inefficient scaling is magnified over millions of operations. Time measurement becomes a predictive tool, offering foresight into how systems will behave under heavier loads.

Embedding Timing in Test Suites

Time-based measurements are typically associated with optimization or benchmarking tasks, but they can also be embedded into test suites. When applications must meet strict performance requirements, integrating timing checks into unit or integration tests ensures that regressions are caught early.

This involves defining acceptable performance thresholds for key operations. If execution time exceeds the defined limit, the test fails, prompting investigation. This proactive approach prevents unnoticed slowdowns from creeping into production over time.

However, due care must be taken to avoid false positives. Environmental conditions like CPU load or memory usage can affect timing results. As such, timing thresholds should be based on averaged measurements and accompanied by buffer margins to account for fluctuation.
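
A hedged sketch of such a check in pytest style; parse_records, SAMPLE_DATA, and the threshold are all illustrative stand-ins:

```python
import statistics
import time

def parse_records(lines):            # stand-in for the real operation under test
    return [line.split(",") for line in lines]

SAMPLE_DATA = ["a,b,c"] * 10_000

def test_parse_is_fast_enough():
    threshold_s = 0.05               # illustrative budget, with headroom for noise
    runs = []
    for _ in range(10):
        start = time.perf_counter()
        parse_records(SAMPLE_DATA)
        runs.append(time.perf_counter() - start)
    mean = statistics.mean(runs)
    assert mean < threshold_s, f"mean runtime {mean:.4f}s exceeds budget"
```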

Embedded timing is particularly useful in systems where latency matters—such as finance, gaming, or real-time communication. It serves as a safeguard against performance degradation that functional tests alone might miss.

Using Time Data to Guide Refactoring

Code refactoring often focuses on clarity, maintainability, and modularity. However, time measurement adds another dimension to the process—one rooted in empirical data. By timing a function before and after it is refactored, developers can ensure that improvements in readability or structure do not come at the cost of performance.

In some cases, restructuring code may inadvertently introduce inefficiencies, such as redundant operations or deeper function calls. Without timing, such issues can remain undetected. With precise measurement, developers gain the confidence that refactoring is improving the code both structurally and operationally.

Conversely, when a performance bottleneck is identified, refactoring can be guided by timing feedback. For instance, separating concerns, simplifying nested loops, or caching results may all reduce execution time. By measuring the effect of each change, developers stay informed throughout the optimization process.

Comparing Third-Party Libraries and Tools

In many development scenarios, teams face decisions about whether to build a solution in-house or rely on external libraries. When functionality overlaps, performance can become a decisive factor. Measuring the time taken by different libraries to perform the same task provides concrete data for comparison.

This applies to tasks like parsing files, processing images, handling requests, or performing numerical computations. Libraries often advertise their speed, but actual performance can vary depending on context and usage.

By setting up test cases and timing multiple tools under identical conditions, developers can assess which option best fits their needs. This approach also helps quantify the trade-offs between features, reliability, and speed.

Such benchmarking informs architectural choices and helps avoid the costly process of integrating a dependency only to discover it falls short under load.

Interpreting Time Measurement in Distributed Systems

In modern software ecosystems, many systems are distributed across multiple machines, networks, or cloud regions. In such environments, elapsed time measurement becomes more complex, as latency, transmission delays, and synchronization must be considered.

Timing operations across distributed systems requires understanding the distinction between local execution time and end-to-end time. The former refers to the processing time on a single node, while the latter includes network latency, queuing, and orchestration delays.

To gain a complete picture, developers must measure time at several points in the pipeline. For example, tracking when a request is sent, when it reaches the server, when it is processed, and when the response is received. Aggregating this data provides a breakdown of where time is spent.
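
Within a single process, this idea can be sketched with a hypothetical mark helper that timestamps each stage; the sleeps stand in for network transit and server-side work, and real distributed timing must additionally contend with unsynchronized clocks across machines:

```python
import time

marks = {}

def mark(stage):
    marks[stage] = time.perf_counter()

mark("sent")
time.sleep(0.05)          # stands in for network transit
mark("received")
time.sleep(0.02)          # stands in for server-side processing
mark("processed")

stages = list(marks)
for prev, curr in zip(stages, stages[1:]):
    print(f"{prev} -> {curr}: {(marks[curr] - marks[prev]) * 1000:.1f} ms")
```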

In distributed systems, time measurement also informs load balancing, fault detection, and capacity planning. When services respond slower than expected, time data helps pinpoint the source of delay and supports better decision-making under operational pressure.

Timing in Asynchronous and Concurrent Environments

Modern Python applications frequently use asynchronous programming to improve efficiency. Measuring time in such environments poses unique challenges, as tasks may begin and end in overlapping patterns, and concurrent operations can skew straightforward timing results.

In asynchronous contexts, developers must account for the event loop’s behavior. Measuring only the start and end of a coroutine may not reflect true processing time, as other tasks could be interleaved during execution.

To capture accurate durations, it’s important to distinguish between wall-clock time and active time. Tools tailored for async environments can help monitor when specific coroutines are suspended or resumed, offering a more precise breakdown of time spent processing versus waiting.
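
A small sketch of that distinction with asyncio: ten concurrent coroutines that mostly wait finish in roughly the time of one, while CPU time stays near zero:

```python
import asyncio
import time

async def fetch():                    # stand-in coroutine that mostly waits
    await asyncio.sleep(0.5)

async def main():
    wall_start = time.perf_counter()
    cpu_start = time.process_time()
    await asyncio.gather(*(fetch() for _ in range(10)))
    print(f"Wall time: {time.perf_counter() - wall_start:.3f} s")  # ~0.5 s: tasks overlap
    print(f"CPU time:  {time.process_time() - cpu_start:.3f} s")   # near zero: mostly waiting

asyncio.run(main())
```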

Concurrency adds another layer of complexity. In multi-threaded or multi-process applications, execution is not linear. Developers must carefully track timing per thread or process and coordinate measurements without introducing race conditions or synchronization errors.

Understanding timing in these environments is crucial for systems that aim to scale efficiently. It ensures that asynchronous code delivers performance benefits rather than introducing unpredictability.

From Measurement to Monitoring: Real-Time Performance Tracking

In production systems, performance cannot be treated as a one-time concern. Conditions evolve, traffic grows, and dependencies change. Embedding real-time time measurement into applications allows developers to monitor how execution times behave under actual user loads.

Real-time tracking involves capturing execution durations during operation and exporting them to a centralized system for visualization and analysis. This can include performance dashboards, alert systems, or anomaly detection tools.
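
A minimal sketch of the capture side, using a hypothetical track context manager that appends durations to a stand-in metrics sink; a production version would export to a real monitoring backend:

```python
import time
from contextlib import contextmanager

@contextmanager
def track(operation, sink):
    """Illustrative: record a block's duration into sink, a stand-in metrics store."""
    start = time.perf_counter()
    try:
        yield
    finally:
        sink.append((operation, time.perf_counter() - start))

metrics = []
with track("load_config", metrics):
    time.sleep(0.01)

print(metrics)   # e.g. [('load_config', 0.0101...)]
```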

By continuously observing trends, teams can detect regressions early, correlate timing spikes with specific events, and maintain service level objectives. Monitoring complements testing and benchmarking by offering live, contextual feedback from real users.

It also helps in capacity planning, forecasting infrastructure needs, and conducting root-cause analysis after incidents. Ultimately, integrating timing into monitoring frameworks closes the loop between development and operations, promoting a culture of observability.

Final Reflections

Time is one of the most precious and revealing resources in software development. How code spends time reflects its structure, efficiency, and readiness for scale. Mastering the techniques of elapsed time measurement empowers developers to move beyond guesswork and into a realm of precision, where performance decisions are guided by evidence and insight.

Throughout this series, we’ve explored not only the technical mechanics of measuring execution time but also the strategic applications that make it transformative. From refining a single function to orchestrating a distributed system, time measurement serves as both compass and lens—guiding optimization and revealing hidden complexities.

In a field where milliseconds can separate excellence from mediocrity, the ability to measure, understand, and respond to time data is a vital skill. It helps build software that is not just functional, but efficient, resilient, and delightful to use.

As the demands on modern systems continue to grow, those who treat time as a first-class concern will be best equipped to lead, innovate, and deliver value at scale.