100% free download of Servicenow-CIS-ITSM cram and PDF Braindumps

We have valid and up-to-date Servicenow-CIS-ITSM braindumps that really work in the actual Servicenow-CIS-ITSM exam. This website provides the latest tips and tricks to pass the Servicenow-CIS-ITSM exam with our practice tests. With our Servicenow-CIS-ITSM question bank, you do not have to waste your time reading Certified Implementation Specialist IT Service Management reference books; just spend 24 hours mastering our Servicenow-CIS-ITSM questions and answers and take the exam.

Servicenow-CIS-ITSM Certified Implementation Specialist IT Service Management test | http://babelouedstory.com/

Servicenow-CIS-ITSM test - Certified Implementation Specialist IT Service Management Updated: 2023

Servicenow-CIS-ITSM Dumps and Practice software with Real Question
Exam Code: Servicenow-CIS-ITSM Certified Implementation Specialist IT Service Management test November 2023 by Killexams.com team

Servicenow-CIS-ITSM Certified Implementation Specialist IT Service Management

Test Detail:
The ServiceNow Certified Implementation Specialist - IT Service Management (CIS-ITSM) examination is designed to assess the knowledge and skills of professionals working with ServiceNow in the field of IT Service Management. Below is a detailed description of the test, including the number of questions and time allocation, course outline, exam objectives, and exam syllabus.

Number of Questions and Time:
The exact number of questions and time allocation for the ServiceNow CIS-ITSM exam may vary, as the exam is periodically updated by ServiceNow. Typically, the exam consists of multiple-choice questions, and candidates are given a specific time limit to complete the test. The duration of the exam is generally around 90 to 120 minutes.

Course Outline:
The ServiceNow CIS-ITSM certification covers a comprehensive set of topics related to IT Service Management and the implementation of ServiceNow ITSM solutions. The course outline typically includes the following areas:

1. IT Service Management Overview:
- Introduction to IT Service Management (ITSM) concepts and best practices.
- Understanding IT service lifecycle stages (Service Strategy, Service Design, Service Transition, Service Operation, and Continual Service Improvement).
- ServiceNow's role in supporting ITSM processes.

2. ServiceNow ITSM Implementation:
- ServiceNow platform overview and architecture.
- Key ITSM modules and their functionalities.
- ServiceNow ITSM implementation lifecycle and methodologies.
- ServiceNow configuration and customization options.

3. ITSM Process Configuration:
- Incident Management: Handling and resolving incidents.
- Problem Management: Identifying and resolving underlying problems.
- Change Management: Managing changes to the IT environment.
- Service Catalog Management: Defining and managing service offerings.
- Request Fulfillment: Handling service requests.
- Knowledge Management: Capturing and sharing knowledge within the organization.
- Service Level Management: Monitoring and managing service levels and agreements.
- Service Portfolio Management: Defining and maintaining the service portfolio.

4. Integration and Reporting:
- Integration of ServiceNow with other IT systems and tools.
- Configuration of dashboards and reports for ITSM metrics and performance monitoring.

Exam Objectives:
The objectives of the ServiceNow CIS-ITSM exam are to assess the candidate's knowledge and skills in the following areas:

1. Understanding ITSM concepts, frameworks, and best practices.
2. Knowledge of the ServiceNow platform and its ITSM modules.
3. Ability to configure and customize ServiceNow ITSM processes.
4. Proficiency in implementing ITSM processes, including Incident Management, Problem Management, Change Management, and Service Catalog Management.
5. Understanding of integration options and reporting capabilities in ServiceNow ITSM.

Exam Syllabus:
The ServiceNow CIS-ITSM exam syllabus outlines the specific topics and competencies covered in the exam. It typically includes the following areas:

- IT Service Management Overview
- ServiceNow Platform and Architecture
- Incident Management
- Problem Management
- Change Management
- Service Catalog Management
- Request Fulfillment
- Knowledge Management
- Service Level Management
- Service Portfolio Management
- Integration and Reporting
Certified Implementation Specialist IT Service Management
ServiceNow Implementation test

Other ServiceNow exams

ServiceNow-CSA ServiceNow Certified System Administrator 2023
Servicenow-CAD ServiceNow Certified Application Developer
Servicenow-CIS-CSM Certified Implementation Specialist - Customer Service Management
Servicenow-CIS-EM Certified Implementation Specialist - Event Management
Servicenow-CIS-HR Certified Implementation Specialist - Human Resources
Servicenow-CIS-RC Certified Implementation Specialist - Risk and Compliance
Servicenow-CIS-SAM Certified Implementation Specialist - Software Asset Management
Servicenow-CIS-VR Certified Implementation Specialist - Vulnerability Response
Servicenow-PR000370 Certified System Administrator
Servicenow-CIS-ITSM Certified Implementation Specialist IT Service Management
ServiceNow-CIS-HAM Certified Implementation Specialist - Hardware Asset Management

Are you looking for Servicenow-CIS-ITSM Dumps with real test questions for the Servicenow-CIS-ITSM exam prep? We provide recently updated and valid Servicenow-CIS-ITSM Dumps. Detail is at http://killexams.com/pass4sure/exam-detail/Servicenow-CIS-ITSM. We have compiled a database of Servicenow-CIS-ITSM Dumps from real exams, which can help you prepare for and pass the Servicenow-CIS-ITSM exam on the first attempt. Just work through our Q&A and relax. You will pass the exam.
Servicenow-CIS-ITSM Dumps
Servicenow-CIS-ITSM Braindumps
Servicenow-CIS-ITSM Real Questions
Servicenow-CIS-ITSM Practice Test
Servicenow-CIS-ITSM dumps free
ServiceNow
Servicenow-CIS-ITSM
Certified Implementation Specialist – IT Service Management
http://killexams.com/pass4sure/exam-detail/Servicenow-CIS-ITSM
Question: 422
An administrator notices that there are two account records in the system with the same name. A contact record with the same name is associated with each account.
Which set of steps should be taken to merge these accounts using the Salesforce merge feature?
A. Merge the duplicate contacts and then merge the duplicate accounts.
B. Merge the duplicate accounts and the duplicate contacts will be merged automatically.
C. Merge the duplicate accounts and check the box that optionally merges the duplicate contacts.
D. Merge the duplicate accounts and then merge the duplicate contacts.
Answer: D
Question: 423
Which two values roll up the hierarchy to the manager for Collaborative forecasting? (Choose two.)
A. Product quantity
B. Quota amount
C. Opportunity amount
D. Expected revenue
Answer: BC
Question: 424
An administrator has been asked to grant read, create and edit access to the product object for users who currently have the standard marketing user profile.
Which two approaches could be used to meet this request? (Choose two.)
A. Create a new profile for the marketing users and change the access levels to read, create and edit for the product object.
B. Change the access levels in the marketing user standard profile to read, create and edit for the product object.
C. Create a permission set with read and write access for the product object and assign it to the marketing users.
D. Create a permission set with read, create and edit access for the product object and assign it to the marketing users.
Answer: AD
Question: 425
The sales team has requested that a new field called Current Customer be added to the Accounts object. The default value will be "No" and will change to "Yes" if any related opportunity is successfully closed as won.
What can an administrator do to meet this requirement?
A. Configure Current Customer as a roll-up summary field that will recalculate whenever an opportunity is won.
B. Use an Apex trigger on the Account object that sets the Current Customer field when an opportunity is won.
C. Use a workflow rule on the Opportunity object that sets the Current Customer field when an opportunity is won.
D. Configure Current Customer as a text field and use an approval process to recalculate its value.
Answer: C
Question: 426
Sales management wants a small subset of users with different profiles and roles to be able to view all data for compliance purposes.
How can an administrator meet this requirement?
A. Create a new profile and role for the subset of users with the View All Data permission.
B. Create a permission set with the View All Data permission for the subset of users.
C. Enable the View All Data permission for the roles of the subset of users.
D. Assign delegated administration to the subset of users to View All Data.
Answer: B
Question: 427
How can an administrator ensure article managers use specified values for custom article fields?
A. Create a formula field on the article.
B. Require a field on the page layout.
C. Use field dependencies on article types.
D. Create different article types for different requirements.
Answer: C
Question: 428
A user has a profile with read-only permissions for the case object.
How can the user be granted edit permission for cases?
A. Create a permission set with edit permissions for the case object.
B. Create a sharing rule on the case object with read/write level of access.
C. Create a public group with edit permissions for the case object.
D. Add the user in a role hierarchy above users with edit permissions on the case object.
Answer: A
Question: 429
Which three actions can occur when an administrator clicks "Save" after making a number of modifications to Knowledge data categories in a category group and changing their positions in the hierarchy? (Choose three.)
A. Users are temporarily locked out of their ability to access articles.
B. Users may temporarily experience performance issues when searching for articles.
C. The contents of the category drop-down menu change.
D. The articles and questions visible to users change.
E. The history of article usage is reset to zero utilization.
Answer: ADE
Question: 430
What are three capabilities of Collaborative forecasting? (Choose three.)
A. Rename categories
B. Forecast using opportunity splits
C. Overlay quota
D. Add categories
E. Select a default forecast currency setting
Answer: ABE
Question: 431
Universal Containers wants customers who buy the Freight Container product to be billed in monthly installments.
How should an administrator meet this requirement?
A. Create a default quantity schedule on the product.
B. Create a default revenue schedule on the product.
C. Create a workflow rule on the product.
D. Create custom fields on the product.
Answer: B
Question: 432
Which two deployment tools can be used to deploy metadata from a Developer Edition organization to another organization? (Choose two.)
A. Data Loader
B. Salesforce Extensions for Visual Studio Code
C. Change sets
D. Ant Migration Tool
Answer: BC
Question: 433
An administrator wants to allow users who are creating leads to have access to the Find Duplicates button.
Which lead object-level permission will the administrator need to provide to these users?
A. Merge
B. Read and Edit
C. View All
D. Delete
Answer: C
Question: 434
An administrator has been asked to create a replica of the production organization. The requirement states that existing fields, page layouts, record types, objects, and data contained in the fields and objects need to be available in the replica organization.
How can the administrator meet this requirement?
A. Create a developer sandbox.
B. Create a configuration-only sandbox.
C. Create a metadata sandbox.
D. Create a full sandbox.
Answer: D
For More exams visit https://killexams.com/vendors-exam-list
Kill your exam at First Attempt....Guaranteed!

Siemens and ServiceNow Enable Cloud-based Management of OT Assets

The technology company Siemens and ServiceNow, a leading company specializing in digital workflows, will work more closely together in the future. Siemens’ cloud-based software service makes all OT devices on the shop floor completely transparent and connects them with the market-proven NowPlatform from ServiceNow.

The partnership between Siemens and ServiceNow enables transparency in industrial asset management.


Transparency in industrial asset management

This Software-as-a-Service solution from Siemens enables the recognition, identification, and management of all OT devices to simplify and automate their processes. It makes the status of all OT devices across the network completely transparent, regardless of manufacturer or device type, using just one tool. This functionality extends the NowPlatform, which already provides management of IT assets. With this expansion, Siemens and ServiceNow are addressing the need of their shared customers to increase transparency across the entire shop floor.

As a result, incidents that could disrupt the production process in industrial plants can be prevented. The tool also allows for planning service tasks, identifying potential security vulnerabilities, and dispatching service personnel without additional manual or time costs. OT assets can now be managed with the same flexibility and interoperability as IT assets.

“OT management is 10 years behind IT management. ServiceNow has already mastered IT asset management – and this partnership means opening our ecosystem and leaving behind the silos. By combining the IT expertise of ServiceNow with our OT knowledge, we’re truly putting IT and OT convergence into practice and enabling speed and scale for our shared customers," said Dirk Didascalou, CTO Digital Industries.


Collaboration of Siemens and ServiceNow strengthens the industrial ecosystem for better integration of IT and OT

“The digital transformation of manufacturing processes is happening at a rapid pace. We are witnessing the fusion of the physical and digital worlds, and IT and OT convergence is an underlying enabler. That digital transformation brings new opportunities for new business models and can help increase productivity significantly. Siemens and ServiceNow are committed to helping customers realize these benefits," said Karel van der Poel, senior vice president Products at ServiceNow.

With this partnership, Siemens and ServiceNow are strengthening an industrial ecosystem to accelerate the digital transformation of industrial customers. The scalable, open, and secure cloud service extends the Siemens Xcelerator digital business platform and Industrial Operations X interoperable portfolio. This continuously growing portfolio covers the areas of production engineering, execution, and optimization. With Industrial Operations X, Siemens is consistently integrating IT and software capabilities in the world of automation to make production processes more flexible, autonomous, and better tailored to people’s needs.


About Siemens

Siemens AG (Berlin and Munich) is a technology company focused on industry, infrastructure, transport, and healthcare. From more resource-efficient factories, resilient supply chains, and smarter buildings and grids, to cleaner and more comfortable transportation as well as advanced healthcare, the company creates technology with purpose adding real value for customers. By combining the real and the digital worlds, Siemens empowers its customers to transform their industries and markets, helping them to transform the everyday for billions of people. Siemens also owns a majority stake in the publicly listed company Siemens Healthineers, a globally leading medical technology provider shaping the future of healthcare. In addition, Siemens holds a minority stake in Siemens Energy, a global leader in the transmission and generation of electrical power.

Implementation Matters: Evaluating the Proportional Hazard Test's Performance

1 Introduction

Political scientists typically use Grambsch and Therneau's (1994; Therneau and Grambsch 2000) Schoenfeld residual-based test to assess the Cox duration model's proportional hazards (PH) assumption. This assumption states that a covariate x's effect is multiplicative on the baseline hazard, $h_0(t)$. One way proportionality can occur is if x's effect is unconditional on t, a subject's time at risk of experiencing some event. If x's effect is conditional on t, it is no longer proportional, as its effect is "time-varying." Failing to account for a covariate's time-varying effect (TVE) produces inefficient estimates, at best, and bias in all the covariates' point estimates, at worst (Box-Steffensmeier and Zorn 2001; Keele 2008, 6). Detecting PH violations, then, is a priority for political scientists, given our general interest in explanation and, therefore, accurate estimates of covariates' effects. R's survival::cox.zph, Stata's estat phtest, and Python's lifelines.check_assumptions all currently use Grambsch and Therneau's Schoenfeld-based test (hereafter, "PH test").
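For concreteness, the R workflow looks like the following minimal sketch; it uses the survival package's bundled lung data purely for illustration, not any dataset analyzed in this article.

```r
library(survival)

# Fit a Cox model, then run the Schoenfeld-residual-based PH test.
# transform sets g(t); passing the log function gives g(t) = ln(t).
fit <- coxph(Surv(time, status) ~ age + sex, data = lung)
cox.zph(fit, transform = log)
```

In survival >= 3.0-10, cox.zph returns the full score test described below; earlier versions, like the Stata and Python routines, return the approximation.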

Like any specification-related test, the PH test's ability to correctly diagnose PH violations depends on several factors. Examples include the TVE's magnitude, the presence of misspecified covariate functional forms, omitted covariates, covariate measurement error, the number of failures, and sample size (Therneau and Grambsch 2000, sec. 6.6); covariate measurement level (Austin 2018); unmodeled heterogeneity (Balan and Putter 2019); choice of g(t), the function of t on which the covariate's effect is presumed to be conditioned (Park and Hendry 2015); the nature of the PH violation; and the percentage of right-censored (RC) observations (Ng'andu 1997). Each of these affects either the PH test's statistical size or power, impacting the frequency with which we obtain false positives (size) or true positives (power), thereby affecting the test's performance.

New factors affecting the PH test's performance have recently come to light. Metzger (2023c) shows that how the PH test is calculated also impacts the test's performance. Traditionally, Stata, Python, and R (< survival 3.0-10) all compute the PH test using an approximation, which makes certain simplifying assumptions to expedite computation (Metzger 2023c, Appx. A). By contrast, R (≥ survival 3.0-10) now computes the PH test in full, using the actual calculation (AC), without any simplifying assumptions.Footnote 1 Metzger's (2023c) simulations suggest surprising performance differences between the approximated and actual calculations, with the latter outperforming the former. However, Metzger examines a limited number of scenarios to address her main issues of concern, pertaining to model misspecification via incorrect covariate functional forms among uncorrelated covariates, and leaves more extensive investigations of the calculations' performance differences to future work.

This article uses Monte Carlo simulations to more thoroughly investigate whether the PH test's approximated and actual calculations perform similarly, in general. My simulations show that they do not, but in unexpected ways. Congruent with Metzger (2023c), I find that the AC generally outperforms the approximated calculation when the covariates are uncorrelated, regardless of the amount of right censoring (RC), the way in which RC is induced, the sample size, the PH-violator's time-varying-to-main-effect ratio, or the non-PH-violating covariate's magnitude or dispersion. In these instances, the AC is well sized and well powered, whereas the approximation is also well sized but can be underpowered.

However, in a surprising turn of events, the approximation outperforms the AC considerably when the covariates are correlated, even moderately so (|Corr(x 1,x 2)| = 0.35). The AC continues to be well powered, but produces an increasingly large amount of false positives as the correlation's absolute value increases—sometimes as high as 100% of a simulation run's draws. By contrast, the approximation's behavior effectively remains the same as the no-correlation scenario: well sized or very near to it, but sometimes underpowered. These findings have weighty implications because they point to a complex set of trade-offs we were previously unaware of: using an appropriately sized test (the approximation, for the scenarios I check here), while knowing the approximation can also have many false positives in misspecified models (Metzger 2023c), among other potential complications. False positives would lead researchers to include PH violation corrections, likely in the form of a time interaction. Including unnecessary interaction terms results in inefficiency, which can threaten our ability to make accurate inferences (Supplementary Appendix E).

My findings are also weighty because political science applications frequently satisfy the conditions under which the AC is likely to return false positives. I identified all articles using a Cox duration model in eight political science journals across 3.5 years, and examined the correlations between identified PH violators and non-violators.Footnote 2 Nearly 87% of the articles have a moderate correlation for at least one violator–non-violator pairing, with an average of 5.15 such pairings per article. By contrast, only ~14% of these articles have easily identifiable features that might prove problematic for the approximation, in theory (fn. 1). To further underscore my findings' implications for political scientists, I also reanalyze a recently published study using the Cox model (Agerberg and Kreft 2020) to show that we reach different conclusions about the authors' main covariate of interest, depending on which PH calculation we use.

I begin by walking through the differences between the PH test's approximated and actual calculations, to provide some sense of why their applied behavior may differ. Next, I describe my simulations' setup. Third, I discuss my simulation results that show the approximation is appropriately sized in far more scenarios than the AC. Fourth, I move to the illustrative application and the different covariate effect estimates the two calculations imply. I conclude with a summary and discuss my findings' implications for practitioners.

2 The PH Test Calculation

2.1 Overview

Why might the two calculations perform differently? In short, the approximation makes several simplifying assumptions when calculating one of the formula’s pieces.Footnote 3

Grambsch and Therneau's PH test amounts to a score test (Therneau and Grambsch 2000, 132), also known as a Rao efficient score test or a Lagrange multiplier (LM) test. Score tests take the form:

(1) $$T = U{\mathcal{I}}^{-1}{U}^{\prime },$$

where U is the score vector, as a row, and $\mathcal{I}$ is the information matrix. In a Cox model context, a covariate's entry in the score vector is equal to the sum of its Schoenfeld residuals, making U particularly easy to compute (Therneau and Grambsch 2000, 40, 85). The score test for whether covariate j is a PH violator amounts to adding an extra term for xj *g(t) to the original list of covariates (Therneau 2021), where g(t) is the function of time upon which xj 's effect is potentially conditioned. Usual choices for g(t) include t and ln(t), but others are possible (and encouraged, in some cases: see Park and Hendry 2015).
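Because the score vector's entries are sums of Schoenfeld residuals, the expanded score's extra element is easy to compute by hand. A minimal sketch, again on the illustrative lung data, assuming (as survival's documentation indicates) that the residual matrix's rownames carry the event times:

```r
library(survival)

fit <- coxph(Surv(time, status) ~ age + sex, data = lung)
sch <- residuals(fit, type = "schoenfeld")  # unscaled Schoenfelds, one row per event
tk  <- as.numeric(rownames(sch))            # event times

colSums(sch)                 # original score elements: ~0 at the MLE
sum(sch[, "age"] * log(tk))  # (J+1)th element, testing age with g(t) = ln(t)
```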

To specifically assess whether xj is a PH violator using the full score test, the expanded U vector's dimensions, ${U}_j^{\mathrm{E}}$ , are $1\times \left(J+1\right)$ , where J is the number of covariates in the original model. The $\left(J+1\right)$ th element contains the score value for the additional xj *g(t) term, calculated by multiplying xj 's Schoenfeld residuals from the original Cox model by g(t), then summing together that product. With a similar logic, the expanded $\mathcal{I}$ matrix for testing whether xj is a PH violator ( ${\mathcal{I}}_j^{\mathrm{E}}$ ) has dimensions of $\left(J+1\right)\times \left(J+1\right)$ . It is a subset of the full expanded information matrix ( ${\mathcal{I}}^{\mathrm{E}}$ ), which is equal to (Therneau 2021, lines 23–33):

(2) $${\mathcal{I}}^{\mathrm{E}} = \left[\begin{array}{cc}{\mathcal{I}}_1 & {\mathcal{I}}_2\\ {}{\mathcal{I}}_2^{\prime } & {\mathcal{I}}_3\end{array}\right] = \left[\begin{array}{cc}\sum_{k = 1}^K\widehat{V}\left({t}_k\right) & \sum_{k = 1}^K\widehat{V}\left({t}_k\right)g\left({t}_k\right)\\ {}\sum_{k = 1}^K\widehat{V}\left({t}_k\right)g\left({t}_k\right) & \sum_{k = 1}^K\widehat{V}\left({t}_k\right){g}^2\left({t}_k\right)\end{array}\right],$$

where $k$ is the $k$th event time ( $0<{t}_{1}<\dots <{t}_{k}<{t}_{K}$ ) and $\widehat{V}\left({t}_{k}\right)$ is the $J \times J$ variance–covariance matrix at time tk from the original Cox model. We obtain ${\mathcal{I}}_j^{\mathrm{E}}$ by extracting the rows and columns with indices 1: J and j + J from ${\mathcal{I}}^{\mathrm{E}}$ . This amounts to all of ${\mathcal{I}}_{1}$ and the row/column corresponding to xj in the matrix's expanded portion.Footnote 4

2.2 Implementation Differences

In a basic Cox model with no strata,Footnote 5 the biggest difference between the two calculations originates from ${\mathcal{I}}^{\mathrm{E}}$ . The approximated calculation makes a key simplifying assumption about $\widehat{V}\left({t}_{k}\right)$ : it assumes that $\widehat{V}\left({t}_k\right)$ 's value is constant across t (Therneau and Grambsch 2000, 133–134). The approximation also uses the average of $\widehat{V}\left({t}_{k}\right)$ across all the observed failures (d), $\overline{V} = {d}^{-1}\sum \widehat{V}\left({t}_{k}\right) = {d}^{-1}{\mathcal{I}}_1$ , in lieu of $\sum \widehat{V}\left({t}_{k}\right)$ , because $\widehat{V}\left({t}_{k}\right)$ "may be unstable, particularly near the end of follow-up when the number of subjects in the risk set is not much larger than [ $\widehat{V}\left({t}_{k}\right)$ 's] number of rows" (Therneau and Grambsch 2000, 133–134).

As a consequence of these simplifying assumptions:

  1. ${\mathcal{I}}^{\mathrm{E}}$ 's upper-left block diagonal ( ${\mathcal{I}}_1$ ) is always equal to $\overline{V} = \sum \widehat{V}\left({t}_k\right)/d$ for the approximation, after the $\overline{V}$ substitution. By contrast, it equals $\sum \widehat{V}\left({t}_k\right)$ for the AC.

  2. ${\mathcal{I}}^{\mathrm{E}}$ 's block off-diagonals ( ${\mathcal{I}}_2$ ) are forced to equal 0 for the approximation. For the AC, they would be nonzero ( $ = \sum \widehat{V}\left({t}_k\right)g\left({t}_k\right)$ ).

  3. ${\mathcal{I}}^{\mathrm{E}}$ 's lower-right block diagonal ( ${\mathcal{I}}_3$ ) is equal to $\overline{V}\sum {g}^2\left({t}_k\right)\equiv \sum \widehat{V}\left({t}_k\right){d}^{-1}\sum {g}^2\left({t}_k\right)$ for the approximation (Therneau 2021, lines 38–41), after the $\overline{V}$ substitution. By contrast, ${\mathcal{I}}_3$ would equal $\sum \widehat{V}\left({t}_k\right){g}^2\left({t}_k\right)$ for the AC.

Supplementary Appendix A provides ${\mathcal{I}}^{\mathrm{E}}$ for both calculations in the two-covariate case, to illustrate.
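As a rough illustration of these blocks, the per-event-time variance increments $\widehat{V}\left({t}_k\right)$ can be pulled from survival::coxph.detail and summed directly. This is a sketch under the assumption that coxph.detail's imat component holds the $\widehat{V}\left({t}_k\right)$ increments at each unique event time:

```r
library(survival)

fit <- coxph(Surv(time, status) ~ age + sex, data = lung)
dt  <- coxph.detail(fit)   # per-event-time quantities
Vk  <- dt$imat             # J x J x K array of V-hat(t_k)
gk  <- log(dt$time)        # g(t) = ln(t)

I1 <- apply(Vk, c(1, 2), sum)                       # sum of V-hat(t_k)
I2 <- apply(sweep(Vk, 3, gk, `*`), c(1, 2), sum)    # sum of V-hat(t_k) g(t_k)
I3 <- apply(sweep(Vk, 3, gk^2, `*`), c(1, 2), sum)  # sum of V-hat(t_k) g(t_k)^2

IE <- rbind(cbind(I1, I2), cbind(I2, I3))  # the AC's expanded information matrix
```

The approximation would instead replace I1 with its per-failure average and force I2 to zero, per the list above.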

Consider the difference between the test statistic's two calculations for covariate xj in a model with two covariates (J = 2).Footnote 6 For the approximation, it is equal to (Therneau and Grambsch 2000, 134):

(3) $${T}_j^{approx} = \frac{{\left\{\sum_{k = 1}^K\left[g\left({t}_k\right)-\overline{g(t)}\right]{s}_{j,k}^{\ast}\right\}}^2}{d\,{\widehat{V}}_{{\widehat{\beta}}_j}\sum_{k = 1}^K{\left[g\left({t}_k\right)-\overline{g(t)}\right]}^2},$$

where ${s}_{j,k}^{\ast }$ are the scaled Schoenfeld residualsFootnote 7 for xj at time k and ${\widehat{V}}_{{\widehat{\beta}}_j}$ is ${\widehat{\beta}}_j$ 's estimated variance from the original Cox model.Footnote 8
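Equation (3) is simple enough to verify by hand. A sketch, mirroring the pre-3.0-10 cox.zph computation, testing the model's first covariate with g(t) = ln(t) (and assuming, as above, that the scaled Schoenfeld matrix's rownames are the event times):

```r
library(survival)

fit  <- coxph(Surv(time, status) ~ age + sex, data = lung)
sres <- residuals(fit, type = "scaledsch")  # scaled Schoenfelds, one row per event
gk   <- log(as.numeric(rownames(sres)))
gk   <- gk - mean(gk)                       # demeaned g(t) (fn. 8)
d    <- nrow(sres)                          # number of failures

j  <- 1                                     # test x_j = age
Tj <- sum(gk * sres[, j])^2 / (d * vcov(fit)[j, j] * sum(gk^2))
pchisq(Tj, df = 1, lower.tail = FALSE)      # approximation's p-value
```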

If we rewrite the approximation's formula using unscaled Schoenfelds, to make it analogous to the AC's formula:

$${T}_j^{approx} = \frac{d{\left\{\sum_{k = 1}^K\left[g\left({t}_k\right)-\overline{g(t)}\right]\left({\widehat{V}}_{{\widehat{\beta}}_j}{s}_{j,k}+{\widehat{\mathrm{Cov}}}_{{\widehat{\beta}}_j,{\widehat{\beta}}_{\neg j}}{s}_{\neg j,k}\right)\right\}}^2}{{\widehat{V}}_{{\widehat{\beta}}_j}\sum_{k = 1}^K{\left[g\left({t}_k\right)-\overline{g(t)}\right]}^2},$$

where ${s}_{j,k}$ is the unscaled Schoenfeld residual for covariate j at time k and $\neg j$ refers to the other covariate in our two-covariate specification.

By contrast, the AC for xj when J = 2 will equal:

where the various $\widehat{V}$ s and $\widehat{\mathrm{Cov}}$ refer to specific elements of $\widehat{V}\left({t}_{k}\right)$ , the time-specific variance–covariance matrix, and $\left|{\mathcal{I}}_j^{\mathrm{E}}\right|$ is ${\mathcal{I}}_j^{\mathrm{E}}$ ’s determinant.Footnote 9 $\left|{\mathcal{I}}_j^{\mathrm{E}}\right|$ has J + 1 terms; when J = 2, it equals (before demeaning $g\left({t}_k\right)$ [fn. 8]):

(5) $$\begin{align}\left|{\mathcal{I}}_j^{\mathrm{E}}\right| &= \left\{\left({\sum}_{k = 1}^K\widehat{V}\left({t}_k,{x}_j\right)\right)\left(\left[{\sum}_{k = 1}^K\widehat{V}\left({t}_k,{x}_{\neg j}\right){\sum}_{k = 1}^K\widehat{V}\left({t}_k,{x}_j\right){g}^2\left({t}_k\right)\right] \right.\right.\nonumber\\& \qquad \left.\left. - \left[{\left({\sum}_{k = 1}^K\widehat{\mathrm{Cov}}\left({t}_k,{x}_j,{x}_{\neg j}\right)g\left({t}_k\right)\right)}^2\right]\right)\right\} \nonumber\\&\quad +\left\{\left({\sum}_{k = 1}^K\widehat{\mathrm{Cov}}\left({t}_k,{x}_j,{x}_{\neg j}\right)\right)\left(\left[{\sum}_{k = 1}^K\widehat{\mathrm{Cov}}\left({t}_k,{x}_j,{x}_{\neg j}\right)g\left({t}_k\right){\sum}_{k = 1}^K\widehat{V}\left({t}_k,{x}_j\right)g\left({t}_k\right)\right]\right.\right.\nonumber\\& \qquad \left.\left. -\left[{\sum}_{k = 1}^K\widehat{\mathrm{Cov}}\left({t}_k,{x}_j,{x}_{\neg j}\right){\sum}_{k = 1}^K\widehat{V}\left({t}_k,{x}_j\right){g}^2\left({t}_k\right)\right]\right)\right\}\nonumber\\&\quad +\left\{\left({\sum}_{k = 1}^K\widehat{V}\left({t}_k,{x}_j\right)g\left({t}_k\right)\right)\left(\left[{\sum}_{k = 1}^K\widehat{\mathrm{Cov}}\left({t}_k,{x}_j,{x}_{\neg j}\right){\sum}_{k = 1}^K\widehat{\mathrm{Cov}}\left({t}_k,{x}_j,{x}_{\neg j}\right)g\left({t}_k\right)\right]\right.\right.\nonumber\\& \qquad \left.\left. -\left[{\sum}_{k = 1}^K\widehat{V}\left({t}_k,{x}_{\neg j}\right){\sum}_{k = 1}^K\widehat{V}\left({t}_k,{x}_j\right)g\left({t}_k\right)\right]\right)\right\}.\end{align}$$

2.3 Implications

Equations (3) and (4) diverge in two major places. Both manifest in the AC (Equation (4)):

  1. The additional, non-Schoenfeld term in the numerator (shaded light gray);

  2. A substantially more complex denominator. The AC's denominator is one consequence of ${\mathcal{I}}_{2}\ne 0$ , as Supplementary Appendix B explains. Additionally, g(t) only appears inside the k-summations involving $\widehat{V}\left({t}_k\right)$ for the AC's denominator, which stems from ${\mathcal{I}}_{3} \ne \sum \widehat{V}\left({t}_k\right){d}^{-1}\sum {g}^2\left({t}_k\right)$ .

${T}_j$ is distributed asymptotically ${\chi}^{2}$ when the PH assumption holds (Therneau and Grambsch 2000, 132), meaning ${T}_j$ 's numerator and denominator will be identically signed.

Understanding when each calculation is likely to be appropriately sized (few false positives) and appropriately powered (many true positives) amounts to understanding what makes Tj larger. A higher Tj translates to a lower p-value, and thus a higher chance of concluding a covariate violates PH, holding Tj ’s degrees of freedom constant. The key comparison is the numerator’s size relative to the denominator. Specifically, we need a sense of (1) when the numerator will become larger relative to the denominator and/or (2) when the denominator will become smaller, relative to the numerator.

However, the numerator’s and denominator’s values are not independent within either calculation. Moreover, the numerator and the denominator do not simply share one or two constituent quantities, but several quantities, often in multiple places (and sometimes transformed), making basic, but meaningful comparative statics practically impossible within a given calculation, let alone comparing across calculations. This interconnectivity is one reason I use Monte Carlo simulations to assess how each calculation performs.

The additional term in ${T}_j^{act}$ 's numerator hints at one factor that may make the calculations perform differently: the correlation among covariates. $\widehat{\mathrm{Cov}}\left({t}_k,{x}_j,{x}_{\neg j}\right)$ appears in the AC for J = 2, both in the numerator's non-Schoenfeld term (Equation (4), light gray shading) and all three terms in the denominator.Footnote 10 $\widehat{\mathrm{Cov}}\left({t}_k,{x}_j,{x}_{\neg j}\right)$ is equal to (Therneau and Grambsch 2000, 40):

$$\widehat{\mathrm{Cov}}\left({t}_k,{x}_j,{x}_{\neg j}\right) = \left[\frac{\sum_{r\in R\left({t}_k\right)}\exp (XB){x}_j{x}_{\neg j}}{\sum_{r\in R\left({t}_k\right)}\exp (XB)}\right]-\left[\frac{\sum_{r\in R\left({t}_k\right)}\exp (XB){x}_j}{\sum_{r\in R\left({t}_k\right)}\exp (XB)}\right]\left[\frac{\sum_{r\in R\left({t}_k\right)}\exp (XB){x}_{\neg j}}{\sum_{r\in R\left({t}_k\right)}\exp (XB)}\right],$$

where $r\in R\left({t}_k\right)$ represents "observations at risk at ${t}_k^{-}$" and XB is the at-risk observation's linear combination. Correlated covariates would impact ${x}_j{x}_{\neg j}$ 's value, which eventually appears in both bracketed terms. Generally speaking, as $\left| \operatorname{Corr}\left({x}_j,{x}_{\neg j}\right)\right|$ increases, $\left| {x}_j{x}_{\neg j}\right|$ increases, thereby increasing $\left|\widehat{\operatorname{Cov}}\left({t}_k,{x}_j,{x}_{\neg j}\right)\right|$ 's value.

More broadly, each formula provides guidance as to which features of the data-generating process (DGP) might be useful to vary across the different simulation scenarios. Consider the pieces that appear in either equation:

  • $\widehat{V}\left({t}_k\right)$ . In the AC, the individual elements of $\widehat{V}\left({t}_k\right)$ appear in both the numerator and the denominator (e.g., $\widehat{\operatorname{Cov}}\left({t}_k,{x}_j,{x}_{\neg j}\right)$ , as previously discussed for the correlation among covariates). In the approximation, $\widehat{V}\left({t}_k\right)$ appears only indirectly via $\widehat{V}\left(\widehat{\beta}\right)$ , the model's estimated variance–covariance matrix, as $\widehat{V}\left(\widehat{\beta}\right) = {\mathcal{I}}^{-1}$ and $\mathcal{I}=\sum \widehat{V}\left({t}_{k}\right)$ . Portions of $\widehat{V}\left(\widehat{\beta}\right)$ appear in the approximation's numerator, as part of the scaled Schoenfeld calculation ( ${\widehat{V}}_{{\widehat{\beta}}_{j}}$ , ${\widehat{\mathrm{Cov}}}_{{\widehat{\beta}}_j,{\widehat{\beta}}_{\neg j}}$ ), and in its denominator ( ${\widehat{V}}_{{\widehat{\beta}}_{j}}$ ).

  • ${\sum}_{r\in R\left({t}_k\right)}\exp (XB)\theta$ , where $\theta$ is a generic placeholder for a weight,Footnote 11 appears in multiple places in both calculations: namely, within the formula for $\widehat{V}\left({t}_k\right)$ 's individual elements and within the unscaled Schoenfeld formula. $\exp (XB)$ is an at-risk observation's risk score in tk , meaning its (potentially weighted) sum speaks to the total amount of weighted "risk-ness" in the dataset at tk .Footnote 12 The riskset's general size at each tk , then, is relevant.

  • $\exp (XB)$ also suggests that the covariates' values, along with their respective slope estimates, are of relevance. Additionally, the covariates are sometimes involved with the weights (see fn. 11), producing another way in which their values are relevant.

  • t, the duration. It ends up appearing demeaned in both calculations, $g\left({t}_k\right)-\overline{g(t)}$ (see fn. 8). The demeaning makes clear that t's dispersion is relevant.

  • Only observations experiencing a failure are involved in the final steps of the $\widehat{V}\left({t}_{k}\right)$ and Schoenfeld formulas, implying the number of failures (d) is relevant.

3 Simulation Setup

I use the simsurv package in R to generate my simulated continuous-time durations (Brilleman et al. 2021).Footnote 13 All the simulations use a Weibull hazard function with no strata, a baseline scale parameter of 0.15, and two covariates: (1) a continuous, non-PH-violating covariate (x 1 ~ $\mathcal{N}$ ) and (2) a binary, PH-violating covariate (x 2 ~ Bern(0.5)). x 2's TVE is conditional on ln(t). Making the PH violator a binary covariate gives us a best-case scenario, because others' simulations suggest that the Schoenfeld-based PH test's performance is worse for continuous covariates than for binary covariates (Park and Hendry 2015).
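A sketch of this kind of DGP in simsurv follows; the argument names are from the simsurv documentation, and the coefficient values mirror one scenario reported below (XB = 0.001x 1 + 1x 2 ln(t), flat baseline hazard):

```r
library(simsurv)

set.seed(101)
n    <- 100
covs <- data.frame(id = 1:n,
                   x1 = rnorm(n),           # continuous non-violator
                   x2 = rbinom(n, 1, 0.5))  # binary PH violator

# Weibull baseline: scale 0.15, shape p = 1; x2's effect varies with ln(t)
sim <- simsurv(dist        = "weibull",
               lambdas     = 0.15,
               gammas      = 1,                  # shape parameter p
               x           = covs,
               betas       = c(x1 = 0.001, x2 = 0),
               tde         = c(x2 = 1),          # x2's time-varying effect
               tdefunction = "log")              # conditioned on ln(t)
dat <- merge(covs, sim, by = "id")               # id, x1, x2, eventtime, status
```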

I design my simulations to address whether there are performance differences between the approximated and actual PH test calculations in a correctly specified base model, where x 1 and x 2 are the only covariates.Footnote 14 I vary a number of other characteristics that can impact the PH test's performance, per Section 1's discussion. Some of the characteristics' specific values are motivated by existing duration model-related simulations. In total, I run 3,600 different scenarios, derived from all permutations of the characteristics I list in Supplementary Appendix C.Footnote 15 The results section's discussion focuses primarily on five of these characteristics:

  • Three Weibull shape parameter (p) values {0.75, 1, 1.25}, producing scenarios with decreasing, flat, and increasing baseline hazards, respectively. p = 1 matches Keele (2010) and Metzger (2023c). Varying p impacts t's dispersion by affecting how quickly subjects fail. Higher shape values reduce t's dispersion, all else equal.

  • Two sample sizes {100, 1,000}. The first matches Keele (2010) and Metzger (2023c). I run n = 1,000 to check whether the n = 100 behavior persists when the PH test's asymptotic properties are likely in effect.

  • Five levels of correlation between the two covariates {−0.65, −0.35, 0, 0.35, 0.65}. I use the BinNor package to induce these correlations (Demirtas, Amatya, and Doganay 2014).Footnote 16 I run both positive and negative correlations to verify that the behavior we observe is independent of the correlation's sign, as the formulas suggest. The results are indeed roughly symmetric for the scenarios I run here. Therefore, I only report the positive correlation results in text, but the supplemental viewing app (see fn. 15) has the graphs for both.

  • Two RC patterns (see the sketch after this list). In one pattern, I randomly select rc% subjects and shorten their observed duration by (an arbitrarily selected) 2%. In the second, I censor the top rc% of subjects such that their recorded durations are at the (100 − rc%)th percentile. The first ("random RC") corresponds to a situation where subjects become at risk at different calendar times, whereas the second ("top rc%") corresponds to a situation where all subjects become at risk at the same calendar time, but data collection ends before all subjects fail. For two otherwise identical scenarios (including d's value), the top rc% pattern gives me another way to affect t's dispersion without impacting other quantities in either formula, because t's highest observed value is restricted to its (100 − rc%)th percentile.

  • Three RC percentages (rc%) {0%, 25%, 50%}. The 25% matches Keele (2010), Metzger (2023c), and Park and Hendry's (2015) moderate censoring scenario, and is near Ng'andu's (1997) 30% scenario. The 50% matches Park and Hendry's (2015) heavy censoring scenario and is near Ng'andu's (1997) 60% scenario. Manipulating rc% allows me to vary d across otherwise comparable scenarios.
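A base-R sketch of the two right-censoring patterns, applied to the simulated durations from the simsurv sketch above (rc% = 25%; the variable names are illustrative):

```r
set.seed(101)
rc  <- 0.25
dur <- dat$eventtime   # uncensored durations from the simsurv sketch

# Pattern 1: random RC -- shorten a random rc% of durations by 2%
idx         <- sample(length(dur), size = round(rc * length(dur)))
t_ran       <- dur
ev_ran      <- rep(1, length(dur))
t_ran[idx]  <- 0.98 * dur[idx]
ev_ran[idx] <- 0

# Pattern 2: top rc% RC -- record everything above the
# (100 - rc%)th percentile at that percentile
cut    <- quantile(dur, probs = 1 - rc)
ev_top <- as.numeric(dur <= cut)
t_top  <- pmin(dur, cut)
```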

As Supplementary Appendix C discusses, I also vary the pattern regarding x 2’s effect (specifically, the ratio of x 2’s TVE to its main effect), the recorded duration’s type, x 1’s mean, and x 1’s dispersion.

For each of these 3,600 scenarios, I estimate a correctly specified base model to determine whether PH violations exist, as discussed previously. I then apply the two PH test calculations and record each calculation's p-values for every covariate. I report the PH tests' p-values for g(t) = ln(t) from both calculations, to match the DGP's true g(t).Footnote 17, Footnote 18

In the ideal, I would run 10,000 simulation draws for each of the 3,600 scenarios because of my interest in p-values for size/power calculations (Cameron and Trivedi 2009, 139–140). However, the estimating burden would be prohibitive. Additionally, while I am interested in seeing how each calculation performs against our usual size/power benchmarks, my primary interest is comparing how the calculations perform relative to one another. Having fewer than 10,000 draws should affect both calculations equally, provided any imprecision is unaffected by any of the calculations' performance differences (i.e., the simulations might provide an imprecise estimate of statistical size, but both calculations would have the same amount of imprecision). Nonetheless, I compromise by running 2,000 simulations per scenario.

4 Simulation Results

The key quantity of interest is the rejection percentage ( ${\hat{r}}_p$ ), the percent of p-values < 0.05, from the PH test for each calculation–covariate pairing within a scenario.Footnote 19 For x 1, the non-PH violator, this value should be 5% or lower, corresponding to a false positive rate of α = 0.05. For PH-violating x 2, 80% or more of its PH test p-values should be less than 0.05, with 80% representing our general rule of thumb for a respectably powered test.Footnote 20 Our first priority typically is evaluating whether a statistical test's calculated size matches our selected nominal size, α. Our second priority becomes choosing the best-powered test, ideally among those with the appropriate statistical size (Morris, White, and Crowther 2019, 2088)—a caveat that will be relevant later.
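Computing ${\hat{r}}_p$ from a scenario's stored p-values is a one-line helper; a sketch:

```r
# Percent of a scenario's PH-test p-values falling below alpha
rej_pct <- function(pvals, alpha = 0.05) 100 * mean(pvals < alpha)
```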

I report ${\hat{r}}_p$ along the horizontal axis of individual scatterplots grouped into 3 × 3 sets, where each set contains 45 scenarios' worth of results. The set's rows represent different Corr(x 1,x 2) values, and its columns represent different shape parameter values. Each scatterplot within a set, then, represents a unique Corr(x 1,x 2)–shape combination among a set of scenarios that share the same true linear combination, sample size, recorded duration type, and values for x 1's mean and dispersion. I split each scatterplot into halves and report the results from random RC on the left and top rc% RC on the right, with the halves' dividing line representing 0% of a scenario's p-values < 0.05 $\left({\hat{r}}_p = 0\%\right)$ and the scatterplot's side edges representing ${\widehat{r}}_p = 100\%$. I use short, solid vertical lines within the plot area to indicate whether a particular covariate's ${\widehat{r}}_p$ should be low (non-PH violators ⇒ size; closer to halves' dividing line) or high (PH violators ⇒ power; closer to scatterplot's edges). Within each half, I report the three censoring percentages using different color symbols, with darker grays representing more censoring.Footnote 21

I report one of the scatterplot sets in text (Figure 1) to concretize the discussion regarding correlated covariates' effect, as it exemplifies the main patterns from the results.Footnote 15 I then discuss those patterns more broadly.

Figure 1 Illustrative simulation results, nonnegative correlations only (n = 100). Negative correlations omitted for brevity; Corr(x 1,x 2) < 0 follow similar patterns as Corr(x 1,x 2) > 0. Vertical lines represent target ${\widehat{r}}_p$ for a well-sized (x 1) or well-powered (x 2) test.

4.1 Specific Scenario Walkthrough

Figure 1 shows the simulation results for ${x}_1\sim \mathcal{N}\left(0,1\right)$ where $XB = 0.001{x}_1+1{x}_2\ln (t)$ , n = 100, and the estimated model uses the true continuous-time duration. In general, if the two tests perform identically, the circles (approximation) and triangles (AC) should be atop one another for every estimate–RC pattern–rc% triplet in all scatterplots. Already, Figure 1 makes clear that this is not the case.

I start by comparing my current results with those from previous work, to ground my findings' eventual, larger implications. Figure 1's top row, second column most closely corresponds to Metzger's (2023c) simulations. This scatterplot, Corr(x 1,x 2) = 0, p = 1, with top 25% RC (scatterplot's right half, medium gray points), is analogous to her Section 3.3's "correct base specification" results.Footnote 22 My top 25% RC results match Metzger (2023c): both calculations are appropriately sized or close to it (for x 1: 6.5% [approx.] vs. 5.5% [actual]) and both calculations are well powered (for x 2: 90.2% [approx.] vs. 90.6% [actual]). The calculations having similar size and power percentages also mirrors Metzger's (2023c) Section 3.3.

The story changes in important ways once Corr(x 1,x 2) ≠ 0 (moving down Figure 1’s columns). Figure 1 shows that the AC performs progressively worse as Corr(x 1,x 2) becomes larger, evident in how the triangles representing non-PH violator x 1’s false positive rate move away from each scatterplot’s ${\hat{r}}_p = 0\%$ dividing line. The AC returns an increasingly large number of false positives for x 1 that far surpass our usual 5% threshold, nearing or surpassing 50% in some instances. This means we become more likely to conclude, incorrectly, that a non-PH-violating covariate violates PH as it becomes increasingly correlated with a true PH violator. Despite the AC’s exceptionally poor performance for non-violating covariates, it continues to be powered just as well or better than the approximation for PH violators, regardless of |Corr(x 1,x 2)|’s value. These patterns suggest that the AC rejects the null too aggressively—behavior that works in its favor for PH violators, but becomes a serious liability for non-PH violators.

By contrast, correlated covariates only marginally affect the approximated calculation. The approximation has no size issues across |Corr(x 1,x 2)| values—it stays at or near our 5% false positive threshold, unlike the AC. However, it does tend to become underpowered as |Corr(x 1,x 2)| increases, meaning we are more likely to miss PH violators as the violator becomes increasingly correlated with a non-PH violator. While this behavior is not ideal, it suggests that practitioners should be more mindful of their covariates’ correlations, to potentially contextualize any null results from the approximation.

Finally, Figure 1 shows these general patterns for both calculations persist across panels. More specifically, the patterns are similar when the baseline hazard is not flat (within the scatterplot set’s rows), for different censoring percentages (within a scatterplot’s half), and for different RC types (across a scatterplot’s halves, for the same rc%).

4.2 Broader Correlation-Related Patterns: Descriptive

The AC’s behavior is the more surprising of the two findings, but similarly as surprising, Figure 1’s patterns are not unusual. They are representative of the AC’s behavior in nearly all the 1,800 scenarios where n = 100. There are 360 unique combinations of the Weibull’s shape parameter (p), x 2’s TVE-to-main-effect ratio, recorded duration type, RC pattern, RC percentage, x 1’s mean, and x 1’s dispersion for n = 100. Of these 360, the AC’s false positive rate for |Corr(x 1,x 2)| ≠ 0 is worse than the comparable Corr(x 1,x 2) = 0 scenario in 359 of them (99.7%; Table 1’s left half, second column). For the lone discrepant combination,Footnote 23 three of the four nonzero correlations perform worse than Corr(x 1,x 2) = 0. Or, put differently: for the AC, out of the 1,440 n = 100 scenarios in which Corr(x 1,x 2) ≠ 0, 1,439 of them (99.9%) have a higher false positive rate than the comparable Corr(x 1,x 2) = 0 scenario. When coupled with the number of characteristics I vary in my simulations, this 99.9% suggests that the AC’s high false positive rate cannot be a byproduct of p, the PH violator’s TVE-to-main-effect ratio, the way in which the duration is recorded, the RC pattern or percentage, or x 1’s magnitude or dispersion.

Table 1 False positive %: Corr(x 1,x 2) = 0 vs. ≠ 0, n = 100.

Other AC-related patterns from Figure 1 manifest across the other scenarios as well. In particular, like Figure 1, the AC’s false positive rate gets progressively worse in magnitude as |Corr(x 1,x 2)| increases across all 360 combinations (Table 1’s right half, second column). On average, the AC’s false positive rate for Corr(x 1,x 2) = 0 is ~9 percentage points lower compared to |Corr(x 1,x 2)| = 0.35 and ~33.6 percentage points lower compared to |Corr(x 1,x 2)| = 0.65.

The AC’s most troubling evidence comes from Figure 1’s equivalent for n = 1,000 (Figure 2). With such a large n, both calculations should perform well because the calculations’ asymptotic properties are likely active. For Corr(x 1,x 2) = 0, this is indeed the case. Both calculations have 0% false positives for x 1 (size) and 100% true positives for x 2 (power), regardless of p, the RC pattern, or the RC percentage (Figure 2's first row). However, like Figure 1’s results, the AC’s behavior changes for the worst when Corr(x 1,x 2) ≠ 0. It continues to have a 100% true positive rate (Figure 2’s last two rows, x 2 triangles), but also has up to a 100% false positive rate, and none of its Corr(x 1,x 2) ≠ 0 false positive rates drop below 50% (Figure 2’s last two rows, x 1 triangles). Also, like Figure 1, the approximation shows no such behavior for Corr(x 1,x 2) ≠ 0.

Figure 2 Illustrative simulation results, nonnegative correlations only (n = 1,000). Negative correlations omitted for brevity; Corr(x 1,x 2) < 0 follow similar patterns as Corr(x 1,x 2) > 0. Vertical lines represent target ${\hat{r}}_p$ for a well-sized (x 1) or well-powered (x 2) test.

These patterns for the AC appear across the other n = 1,000 Corr(x 1,x 2) ≠ 0 scenarios, of which there are 1,440. Corr(x 1,x 2) = 0 outperforms the comparable Corr(x 1,x 2) ≠ 0 scenario in all 1,440 scenarios. Figure 2's 100% false positive rate also bears out with some regularity for the AC (330 of 1,440 scenarios [22.9%]); in all 330, |Corr(x 1,x 2)| = 0.65. In the remaining 1,110 scenarios, the AC's lowest false positive rate is 22.6%. The AC's behavior is so troubling because properly sized tests are typically our first priority in traditional hypothesis testing, as Section 4's opening paragraph discusses. These results indicate that the AC is far from properly sized, whereas the approximation has no such issues. Taken overall, my simulation results for both sample sizes suggest that we should avoid using the AC for situations mimicking the scenarios I examined here, at minimum, if not also more broadly, provided we temporarily bracket other issues that may arise from using the approximation—a theme I return to in my closing remarks.

5 Illustrative Application

The simulations show that the AC is particularly susceptible to detecting violations: it produces many false positives when true PH violations do exist but the violator(s) are even moderately correlated with non-violators. Political scientists typically correct for PH violations using an interaction term between the offending covariate and g(t). The potential perils of including an unnecessary interaction term are lower than excluding a necessary one, in relative terms. For any model type, unnecessary interactions produce less efficient estimates.Footnote 24 This increased inefficiency can take a particular toll in the presence of many such unnecessary interaction terms, which would occur in a Cox model context when a PH test reveals many potential PH violations.
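In R, such a correction is commonly implemented with coxph's time-transform facility. A sketch with hypothetical names (hsvc, quota, and quota_data are placeholders, not A&K's actual variable names):

```r
library(survival)

# Interact the flagged covariate with g(t) = ln(t)
fit_tve <- coxph(Surv(time, quota) ~ hsvc + tt(hsvc),
                 data = quota_data,
                 tt   = function(x, t, ...) x * log(t))
```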

Using the AC to diagnose PH violations for Agerberg and Kreft (2020; hereafter "A&K") illustrates the potential perils of the AC's high false positive rate and its ramifications for inference. A&K's study assesses whether a country having experienced high levels of sexual violence (SV) during a civil conflict ("high SV conflicts" [HSVC]) hastens the country's adoption of a gender quota for its national legislature, relative to non-HSVC countries.Footnote 25 They find support for their hypotheses, including the one of interest here: HSVC countries adopt gender quotas more quickly compared to countries experiencing no civil conflict. In their supplemental materials, the authors check for any PH violations using the approximation, with g(t) = t. Two of their control variables violate at the 0.05 level (Table 2's "Approx." column), but correcting for the violations does not impact A&K's main findings.

Table 2 Agerberg and Kreft: PH test p-values.

However, a different story emerges if I use the AC (fn. 26) to diagnose PH violations.Footnote 27 The AC detects six violations in A&K's model—three times as many as the approximation. Importantly, A&K's key independent variable, HSVC, is now a PH violator according to the AC, implying that the effect of high sexual violence during civil conflict is not constant across time. Furthermore, examining HSVC's effect (Gandrud 2015) from a fully corrected modelFootnote 28 shows that HSVC's hazard ratio (HR) is statistically significant for only t ∈ [5,15] (Figure 3's solid line).

Figure 3 Effect of high sexual violence conflicts across time.

The t restriction matters because 93% of the countries in A&K's sample become at risk in the same calendar year, meaning that, for nearly the whole sample, HSVC now affects whether countries adopt a legislative gender quota only during a small subset of past years (1995–2004). This conclusion differs from A&K's original findings, which suggested (1) that a country having experienced HSVC always increased its chances of adopting a gender quota, relative to countries with no civil conflict, regardless of how long since the country could have first adopted a quota, and (2) that this relative increase was of a lesser magnitude, as evidenced by the vertical distance between HSVC's estimated HR from the PH-corrected model (Figure 3's solid line) and A&K's original estimated HR (Figure 3, long-dashed horizontal line).

We do not know whether HSVC is a true violator because the data's true DGP is unknown. However, three pieces of evidence suggest, albeit not conclusively, that HSVC may be a false positive. First, there is a moderate correlation between HSVC and one of the control variables, "Conflict Intensity: High" (Corr = 0.516), which both the approximation and the AC flag as a violator (Table 2). We know the AC is particularly prone to returning false positives in this situation. Second, HSVC's scaled Schoenfeld plot (fn. 29) shows none of the unambiguous trends we would expect to see for a PH violator. Finally, a series of martingale residual plots shows no clear non-linear trends (fn. 30), ruling out model misspecification from incorrect functional forms, which was Keele's (2010) and Metzger's (2023c) area of focus.
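Both graphical diagnostics are straightforward to produce in R; a sketch on the built-in lung data (placeholder variables, not A&K's):

    library(survival)
    fit <- coxph(Surv(time, status) ~ age + sex + ph.ecog, data = lung)

    # scaled Schoenfeld residuals with a smoothed trend: a flat smooth is
    # consistent with PH; a clear trend suggests a violation
    zp <- cox.zph(fit)
    plot(zp[2])      # second term (sex); plot(zp) cycles through every term

    # martingale residuals from a null model, plotted against a covariate,
    # to check that covariate's functional form
    res <- residuals(coxph(Surv(time, status) ~ 1, data = lung),
                     type = "martingale")
    plot(lung$age, res)
    lines(lowess(lung$age, res))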

6 Conclusion

For Grambsch and Therneau's (1994) test for PH violations, does the way it is calculated affect the test's performance? My Monte Carlo simulations show that the answer is a resounding yes. More importantly, I show that the performance differences are non-trivial. I find that the AC has a high false positive rate in situations where a PH violator is correlated with a non-PH violator, even for correlations as moderate as 0.35. The approximation does not suffer from the same issue, meaning that it has a crucial advantage over the AC, given the importance we place on correctly sized statistical tests in traditional hypothesis testing. From Supplementary Appendix G's meta-analysis, we know moderate correlations are the norm among political science applications, underscoring the potential danger of the AC's behavior.

The biggest takeaway from these findings is that practitioners are currently stuck between a rock and a hard place. Both calculations perform adequately when covariates are uncorrelated with one another, but that condition is rarely true in social science applications. Purely on the basis of my simulation results, then, we should favor the approximation.

However, other factors preclude such an easy conclusion. One is a common limitation of any Monte Carlo study: the behavior I find for the approximation is limited in scope to the scenarios I investigated. It may be that, for other scenarios that vary different sets of characteristics, the approximation runs into performance issues similar to the AC. While this may certainly be true, the AC running into such serious performance issues for relatively simple, straightforward DGPs—while the approximation does not—is concerning and is sufficiently notable in its own right. These results also point to a number of related questions worth investigating. As one example, we might ask how the two calculations perform in a model with more than two covariates, and how the correlation patterns among those covariates might matter. The answers would be particularly relevant for applied practitioners.

A second factor is Therneau's main motivation for shifting survival::cox.zph from the approximate to the genuine calculation. His concern was that the approximation's simplifying assumption might be violated, which is particularly likely in the presence of strata (see fns. 1 and 5). In light of my results, though, violating the approximation's assumption may be the lesser of two evils, if the choice is between that and the AC's exceptionally poor performance for non-PH violators. Future research would need to investigate whether the trade-off is worthwhile and, if so, under what conditions.
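In practical terms, which calculation cox.zph performs is tied to the installed package version, so practitioners can check which one they are getting; a sketch (the pre-3.0 version number is an assumption about what CRAN's archive holds):

    # survival >= 3.0 implements the genuine calculation (the AC);
    # earlier releases used the approximation
    packageVersion("survival")

    # one option for reproducing the approximation is installing a pre-3.0
    # release into a separate library, e.g. (version number assumed):
    # remotes::install_version("survival", version = "2.44-1.1")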

Finally, model misspecification is also a relevant factor. All the models I estimate here involve the correct base specification, with no omitted covariates or misspecified covariate functional forms. However, we know model misspecification can affect the PH test's performance, in theory (Keele 2010; Therneau and Grambsch 2000). Metzger (2023c) examines how both calculations perform in practice with uncorrelated covariates, in both the presence and absence of model misspecification. She finds that the approximation can have a high false positive rate for some misspecified base models, going as high as 78.3% in one of her sets of supplemental results (fn. 31). Knowing that the approximation can suffer from the same performance issues as the AC means we cannot fully leverage my simulation results regarding the approximation's low false positive rate: the approximation returning evidence of a PH violation does not necessarily mean a violation likely exists unless practitioners can guarantee no model misspecification exists, which is a potentially necessary, but likely insufficient, condition.

What might practitioners do in the meantime? The stopgap answers depend on the estimated Cox model's complexity, after addressing any model misspecification issues. If the Cox model has no strata and no strata-specific covariate effects, using the approximation is likely the safer bet. If the model has strata, but no strata-specific effects, practitioners can again use the approximation, but only after making the adjustments discussed in fn. 5. In the presence of both strata and strata-specific effects, there is no strong ex ante reason to suspect fn. 5's adjustments would not work, but this situation has traditionally received less study. Future research could probe more deeply to ensure the adjustments hold there, especially as competing risks models can fall into this last category.
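For orientation, the three cases map onto Cox specifications like those in the R sketch below (built-in lung data as a stand-in; fn. 5's adjustments themselves are not reproduced here):

    library(survival)
    # case 1: no strata, no strata-specific effects
    f1 <- coxph(Surv(time, status) ~ age + ph.ecog, data = lung)
    # case 2: strata, but covariate effects common across strata
    f2 <- coxph(Surv(time, status) ~ age + ph.ecog + strata(sex), data = lung)
    # case 3: strata plus a strata-specific effect (separate age effect per stratum)
    f3 <- coxph(Surv(time, status) ~ ph.ecog + age:strata(sex), data = lung)
    cox.zph(f2)   # PH test on the stratified model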

Social scientists’ interest in a covariate’s substantive effect makes it paramount to obtain accurate estimates of that effect. Any covariate violating the Cox model’s PH assumption threatens that goal, if the violation is not corrected. I have shown here that successfully detecting PH violations is more fraught than we previously realized when using Grambsch and Therneau’s full, genuine calculation to test for these violations, rather than an approximation of it. I have suggested some short-term, stopgap solutions, but more research needs to be done to develop more nuanced recommendations and longer-term solutions for practitioners.

NBB Partners with ServiceNow for Advanced Digital Workflow Solutions

The National Bank of Bahrain (NBB) has partnered with ServiceNow (NYSE: NOW) to develop systems that optimise the Bank's processes and operations. As part of the agreement, NBB will leverage the ...

Implementation of a Multiplex Molecular Test for Anaplasma, Babesia and Ehrlichia

Date:  April 9, 2021

Time: 9:00am PDT, 11:00am EDT

There are tick-borne diseases beyond the one that causes Lyme disease, and different geographies have differing prevalences of tick-borne infections. In the northeastern USA, some of the other infectious agents are Anaplasma phagocytophilum, Babesia species, and Ehrlichia species. This talk will describe our experiences with the processes used to implement a multiplex molecular laboratory-developed test (LDT) for the detection of A. phagocytophilum, Babesia spp., and Ehrlichia spp. in a clinical reference laboratory. We will discuss various testing options, including assay types and ordering algorithms. Our experiences and lessons learned after implementation will be discussed, including the testing challenges we faced related to COVID-19.

Learning Objectives:

  • Describe the various methods available for detection of Anaplasma, Babesia and Ehrlichia
  • Understand the process of LDT validation
  • Discuss advantages and limitations of in-house testing

Webinars will be available for unlimited on-demand viewing after the live event.

Developing an Organization Capable of Strategy Implementation and Reformulation: A Preliminary Test

Beer, M., R. A. Eisenstat, and R. Biggadike. "Developing an Organization Capable of Strategy Implementation and Reformulation: A Preliminary Test." Harvard Business ...

Implementation of the International Code of Practice on Dosimetry in Diagnostic Radiology (TRS 457): Review of Test Results

INTERNATIONAL ATOMIC ENERGY AGENCY, Implementation of the International Code of Practice on Dosimetry in Diagnostic Radiology (TRS 457): Review of Test Results, IAEA Human Health Reports No. 4, IAEA, Vienna (2011).


Implementation of the International Code of Practice on Dosimetry in Radiotherapy (TRS 398): Review of Testing Results

INTERNATIONAL ATOMIC ENERGY AGENCY, Implementation of the International Code of Practice on Dosimetry in Radiotherapy (TRS 398): Review of Testing Results, IAEA TECDOC (CD-ROM) No. 1455, IAEA, Vienna (2010).
