Predictive Modeling: Exam 2 Study Guide
Understand each model: its use case, what it can predict (continuous, categorical, or both), the input variable types it accepts (continuous, categorical, or both), its pros and cons, and the theory behind how and why it works.

Be able to calculate both Exact Naïve Bayes and Naïve Bayes predictions (multiplying fractions of probabilities); see the sketch after the predictor list below.
Target variable: Audit finds fraud, no fraud
Predictors:
Prior pending legal charges (yes/no)
Size of firm (small/large)
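
A minimal sketch of both calculations for this setup; all counts are hypothetical because the training data is not given in these notes:

```python
# Hypothetical training counts for the fraud example (1,000 audited firms: 100 fraud, 900 no fraud).
# Joint counts of (pending legal charges, firm size) within each class.
joint = {
    "fraud":    {("yes", "small"): 30, ("yes", "large"): 10, ("no", "small"): 40, ("no", "large"): 20},
    "no fraud": {("yes", "small"): 45, ("yes", "large"): 45, ("no", "small"): 405, ("no", "large"): 405},
}
new_firm = ("yes", "small")   # pending legal charges = yes, size of firm = small

# Exact Naive Bayes: use only the records that match the new firm exactly.
exact = {c: joint[c][new_firm] for c in joint}
print("Exact Bayes P(fraud | yes, small) =", round(exact["fraud"] / sum(exact.values()), 3))

# Naive Bayes: multiply fractions of probabilities, treating the predictors as
# independent within each class.
grand_total = sum(sum(v.values()) for v in joint.values())
scores = {}
for c, counts in joint.items():
    n_class = sum(counts.values())
    p_prior = n_class / grand_total                                                  # P(class)
    p_charges = sum(v for k, v in counts.items() if k[0] == new_firm[0]) / n_class   # P(charges = yes | class)
    p_size = sum(v for k, v in counts.items() if k[1] == new_firm[1]) / n_class      # P(size = small | class)
    scores[c] = p_prior * p_charges * p_size
print("Naive Bayes P(fraud | yes, small) =", round(scores["fraud"] / sum(scores.values()), 3))
```
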
Be able to interpret the output in JMP from screenshots.
Examples:
Linear Regression: interpret the results and write the regression equation from the parameter estimates. Toyota Corolla price = the terms in the left column of the output below, each multiplied by its parameter estimate on the right.

Be able to interpret parameter estimates. Below are the parameter estimates from the linear regression predicting the price of a Toyota Corolla.
Continuous variables: when interpreting the Estimate column, we know that for every 1-month increase in age the price of the car decreases by $134.14, and that for every mile the car is driven the price decreases by $0.02. The interpretation is straightforward for continuous variables.
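Putting the two coefficients above into equation form, here is a minimal sketch; the intercept and the categorical terms are placeholders because the full Parameter Estimates table is not reproduced in these notes:

```python
# Hypothetical intercept for illustration; the real value comes from the Intercept
# row of JMP's Parameter Estimates table, which is not shown in these notes.
INTERCEPT = 20_000.0

def predict_price(age_months, miles, categorical_terms=0.0):
    """Prediction equation built from the two coefficients quoted above.
    categorical_terms stands in for the Metallic and Fuel Type contributions."""
    return INTERCEPT - 134.14 * age_months - 0.02 * miles + categorical_terms

# Example: a 24-month-old car driven 30,000 miles, with no categorical adjustments.
print(predict_price(24, 30_000))
```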

Categorical variables are a little more difficult to interpret because the parameter estimates show only n-1 values. Example: Metallic is 0 or 1, but only Metallic[0] is shown below. Similarly, there are 3 fuel types, but only Fuel Type[CNG] and Fuel Type[Diesel] are listed below. Key idea: the coefficients for the levels of a categorical variable always sum to zero. Therefore, a car that is not metallic has a coefficient of 19 (from the parameter estimates below), which means a car that is metallic has a coefficient of -19, and the price difference between a metallic and a non-metallic car is $38 (not $19). For Fuel Type, CNG = -933 and Diesel = -804, which means Petrol must be +1737 because -933 - 804 + 1737 = 0.
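A quick arithmetic check of the sum-to-zero logic, using the estimates quoted above:

```python
# Sum-to-zero coding: the coefficients for the levels of a categorical variable add to 0.
metallic_not = 19                 # coefficient shown for Metallic[0] (not metallic)
metallic_yes = -metallic_not      # the omitted level, so the two levels sum to zero
print("Metallic vs. not-metallic difference:", metallic_not - metallic_yes)   # 38, not 19

fuel_cng, fuel_diesel = -933, -804
fuel_petrol = -(fuel_cng + fuel_diesel)                         # the omitted Petrol level
print("Fuel Type[Petrol] coefficient:", fuel_petrol)            # 1737
print("Check the sum:", fuel_cng + fuel_diesel + fuel_petrol)   # 0
```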

Be able to interpret residuals: for the Toyota Corolla price, on average the linear regression model predicts $110.91 too low (from the Mean below), with an even distribution of errors, and for the middle 50% of cars the model's predictions are off by roughly plus or minus $850.
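A minimal sketch of how those residual summaries are computed, using a hypothetical array of residuals (residual = actual price minus predicted price):

```python
import numpy as np

# Hypothetical residuals in dollars; in JMP these come from saving the residuals
# of the fitted model. A positive mean means the model predicts too low on average.
residuals = np.array([900, -120, 450, -800, 1300, -60, 200, -400, 750, -300])

print("Mean residual:", residuals.mean())
q1, q3 = np.percentile(residuals, [25, 75])
print("Middle 50% of residuals fall between", q1, "and", q3)
```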

Neural Networks: craft the formula that feeds into each node based on the parameter estimates, and find the number of layers (one in the example below) and the number of nodes.

Node 3 output = fat score 0.2 * randomly assigned weight 0.05 + salt score 0.9 * randomly assigned weight 0.01 + a bias of -0.3, which then goes through the transformation in the equation below and comes out as 0.43.

In JMP: Node 1 input formula = fat score * 15.08 + salt score * 4.31 + -4.42
Note: The numbers won’t show on the lines in JMP, they’re just listed under Estimate
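
A minimal sketch of both node calculations; treating the transformation as the logistic (sigmoid) function is an assumption, but it reproduces the 0.43 output quoted above:

```python
import math

def logistic(x):
    """Logistic (sigmoid) transformation applied to a node's weighted input."""
    return 1 / (1 + math.exp(-x))

fat, salt = 0.2, 0.9

# Node 3: randomly assigned weights 0.05 and 0.01 plus a bias of -0.3.
node3_input = fat * 0.05 + salt * 0.01 + (-0.3)
print(round(logistic(node3_input), 2))   # 0.43

# JMP Node 1 input formula, using the values listed under Estimate.
node1_input = fat * 15.08 + salt * 4.31 + (-4.42)
print(round(node1_input, 3))             # the weighted input that feeds into node 1
```
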
Variable selection methods
Exhaustive Search: All Possible Models (not as popular or common as forward and backward selection).
All possible subsets of predictors are assessed:
Each individual predictor, pairs of predictors, sets of 3 predictors, etc.
Look for lowest RMSE (number 7 below)
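
A minimal sketch of an exhaustive search over all predictor subsets, scored by RMSE; the data and variable names are synthetic, not from the notes:

```python
from itertools import combinations
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Synthetic data: price is driven by age and miles; "doors" is pure noise.
X = {"age": rng.uniform(1, 80, n),
     "miles": rng.uniform(0, 150_000, n),
     "doors": rng.integers(2, 6, n).astype(float)}
price = 20_000 - 134 * X["age"] - 0.02 * X["miles"] + rng.normal(0, 800, n)

def rmse(cols):
    """Fit ordinary least squares on the chosen columns and return the RMSE."""
    A = np.column_stack([np.ones(n)] + [X[c] for c in cols])
    coefs, *_ = np.linalg.lstsq(A, price, rcond=None)
    return np.sqrt(np.mean((price - A @ coefs) ** 2))

# Assess every subset: single predictors, pairs, sets of 3, etc.
results = [(rmse(cols), cols)
           for k in range(1, len(X) + 1)
           for cols in combinations(X, k)]

for err, cols in sorted(results):        # lowest RMSE first
    print(round(err, 1), cols)
```
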
Forward selection (very common method): Start with no predictor variables and add them one by one until you achieve optimal results. You want to maximize the RSquare on the validation set. The best model is shown at the bottom of the Step History with an RSquare of 0.8774.

Start with no predictors and add them one by one (at each step, add the one with the largest contribution). P-values change every time you include or exclude a variable.
When to stop:
P-value Threshold: Stop when no other potential predictor has a statistically significant contribution
Max Validation RSquare: Stop when the RSquare on the validation set stops improving when predictors are added (only available when there is a validation column)
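A minimal sketch of forward selection using Max Validation RSquare as the stopping rule; the data and the train/validation split are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
data = {"age": rng.uniform(1, 80, n),
        "miles": rng.uniform(0, 150_000, n),
        "doors": rng.integers(2, 6, n).astype(float)}   # noise predictor
price = 20_000 - 134 * data["age"] - 0.02 * data["miles"] + rng.normal(0, 800, n)
train = np.arange(n) < 200                              # first 200 rows train, rest validate

def validation_r2(cols):
    """Fit OLS on the training rows and return RSquare on the validation rows."""
    A = np.column_stack([np.ones(n)] + [data[c] for c in cols])
    coefs, *_ = np.linalg.lstsq(A[train], price[train], rcond=None)
    resid = price[~train] - A[~train] @ coefs
    ss_tot = np.sum((price[~train] - price[~train].mean()) ** 2)
    return 1 - np.sum(resid ** 2) / ss_tot

selected, best_r2, remaining = [], -np.inf, set(data)
while remaining:
    # Try each remaining predictor and keep the one with the largest contribution.
    r2, best = max((validation_r2(selected + [c]), c) for c in remaining)
    if r2 <= best_r2:
        break                                           # validation RSquare stopped improving
    selected.append(best)
    remaining.remove(best)
    best_r2 = r2
    print("added", best, "-> validation RSquare =", round(best_r2, 4))
```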

Backward elimination (very common method): Start with all of the predictor variables in the model and remove them one by one until you achieve optimal results.
Start with all predictors and successively eliminate the least useful predictor one at a time. P-values change every time you include or exclude a variable.
Stopping Rules:
P-value Threshold: Stop when all remaining predictors have a statistically significant contribution
Max Validation RSquare: Stop when the RSquare on the validation set stops improving when predictors are removed (only available when there is a validation column)
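A minimal sketch of backward elimination using the p-value threshold stopping rule; the data are synthetic and the statsmodels library is assumed to be available:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 300
X = pd.DataFrame({"age": rng.uniform(1, 80, n),
                  "miles": rng.uniform(0, 150_000, n),
                  "doors": rng.integers(2, 6, n).astype(float)})   # noise predictor
price = 20_000 - 134 * X["age"] - 0.02 * X["miles"] + rng.normal(0, 800, n)

alpha = 0.05
predictors = list(X.columns)
while predictors:
    model = sm.OLS(price, sm.add_constant(X[predictors])).fit()
    pvals = model.pvalues.drop("const")     # p-values change every time the model is refit
    worst = pvals.idxmax()
    if pvals[worst] < alpha:
        break                               # all remaining predictors are significant: stop
    predictors.remove(worst)                # eliminate the least useful predictor
    print("removed", worst)

print("final predictors:", predictors)
```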

Mixed Stepwise: Remove one, add another, remove a different one.
Like Forward Selection, except at each step, also consider dropping non-significant predictor variables
Stopping Rules
P-value Threshold: Stop removing when all remaining predictors are significant, and stop adding when no other potential predictor is significant
Understand p-values: the p-value is the probability of getting a result at least as extreme as the one observed if the null hypothesis were true (not the probability that the null hypothesis is true).
Based on statistical hypothesis testing. The starting assumption is that nothing is statistically significant: the null hypothesis is that the coefficient is zero and the variable is not significant, i.e., it has no impact on the target.

When considering predictor variables:
Null Hypothesis
The coefficient for the variable = 0
The variable is not significant
Alternate Hypothesis
The coefficient ≠ 0
The variable is significant
Alpha is the threshold for determining significance (cutoff)
Often set at 0.05.
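A minimal sketch of the decision rule for predictor p-values, using synthetic data, an alpha of 0.05, and the statsmodels library (all assumptions, not from the notes):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 200
X = pd.DataFrame({"age": rng.uniform(1, 80, n),                      # real effect on price
                  "doors": rng.integers(2, 6, n).astype(float)})     # no real effect
price = 20_000 - 134 * X["age"] + rng.normal(0, 800, n)

model = sm.OLS(price, sm.add_constant(X)).fit()
alpha = 0.05
for name, p in model.pvalues.drop("const").items():
    verdict = "significant (reject H0: coefficient = 0)" if p < alpha else "not significant"
    print(f"{name}: p = {p:.4f} -> {verdict}")
```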
