# Machine Learning - Model Evaluation

Updated: 2018-06-30

## Confusion Matrix

| Actual \ Predicted | Positive | Negative |
| --- | --- | --- |
| Positive | True Positive (TP) | False Negative (FN) |
| Negative | False Positive (FP) | True Negative (TN) |
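The four cells can be tallied directly from paired actual/predicted labels. A minimal pure-Python sketch (the `confusion_counts` helper and the sample lists are illustrative, not from the source):

```python
def confusion_counts(actual, predicted):
    """Count TP, FP, TN, FN from paired binary labels (1 = Positive)."""
    tp = fp = tn = fn = 0
    for a, p in zip(actual, predicted):
        if p == 1 and a == 1:
            tp += 1
        elif p == 1 and a == 0:
            fp += 1
        elif p == 0 and a == 1:
            fn += 1
        else:
            tn += 1
    return tp, fp, tn, fn

# Hypothetical example: 8 records, 4 actual positives
actual    = [1, 1, 0, 0, 1, 0, 1, 0]
predicted = [1, 0, 0, 1, 1, 0, 1, 0]
print(confusion_counts(actual, predicted))  # (3, 1, 3, 1)
```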

## Derivations

• Precision
$Precision = {TP \over ActionRecords} = {TP \over TP+FP}$
• True Positive Rate (TPR), Sensitivity, Recall, HitRate
$TPR=Sensitivity=Recall=HitRate= {TP \over AllPos} = {TP \over TP+FN}$
• Specificity
$Specificity= {TN \over AllNeg}= {TN\over TN+FP}$
• False Positive Rate (FPR)
$FPR=1-Specificity= {FP \over AllNeg}= {FP\over TN+FP}$
• ActionRate
$ActionRate = {ActionRecords \over AllRecords} = {TP+FP \over AllRecords}$
• F-measure / F1 Score
$F_1 = 2\cdot {Precision \cdot Recall \over Precision + Recall}$
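The derivations above map one-to-one onto code. A sketch (pure Python; the counts fed in are from a hypothetical confusion matrix, not the source):

```python
def evaluation_metrics(tp, fp, tn, fn):
    """Derive the rates listed above from the four confusion-matrix cells."""
    precision   = tp / (tp + fp)                   # TP / ActionRecords
    recall      = tp / (tp + fn)                   # TPR = Sensitivity = HitRate
    specificity = tn / (tn + fp)
    fpr         = fp / (tn + fp)                   # 1 - Specificity
    action_rate = (tp + fp) / (tp + fp + tn + fn)  # ActionRecords / AllRecords
    f1          = 2 * precision * recall / (precision + recall)
    return {"precision": precision, "recall": recall,
            "specificity": specificity, "fpr": fpr,
            "action_rate": action_rate, "f1": f1}

print(evaluation_metrics(tp=3, fp=1, tn=3, fn=1))
# {'precision': 0.75, 'recall': 0.75, 'specificity': 0.75,
#  'fpr': 0.25, 'action_rate': 0.5, 'f1': 0.75}
```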

### Illustration

$TPR = {TP \over TP+FN}$ $FPR = {FP\over TN+FP}$ $Precision = {TP \over TP+FP}$ $Recall = {TP \over TP+FN}$

## Curves

### ROC

One point in ROC space is superior to another if it lies to the northwest of it (higher TPR and lower FPR).

• x-Axis: FPR
• y-Axis: TPR (CatchRate)
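The ROC curve is traced by sweeping the decision threshold over the model's scores and plotting one (FPR, TPR) point per threshold. A sketch with hypothetical scores and labels:

```python
def roc_points(scores, labels):
    """(FPR, TPR) pairs, one per distinct threshold, highest threshold first."""
    pos = sum(labels)
    neg = len(labels) - pos
    points = []
    for t in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        points.append((fp / neg, tp / pos))
    return points

print(roc_points([0.9, 0.8, 0.6, 0.4], [1, 0, 1, 0]))
# [(0.0, 0.5), (0.5, 0.5), (0.5, 1.0), (1.0, 1.0)]
```

Lowering the threshold moves the operating point toward (1, 1); a better model's curve bows toward the northwest corner.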

### Precision-Recall (PR)

• x-Axis: Recall (HitRate)
• y-Axis: Precision
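The PR curve uses the same threshold sweep but plots (Recall, Precision) instead. A sketch reusing the same hypothetical scores:

```python
def pr_points(scores, labels):
    """(Recall, Precision) pairs, one per distinct threshold, highest first."""
    pos = sum(labels)
    points = []
    for t in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        points.append((tp / pos, tp / (tp + fp)))
    return points

print(pr_points([0.9, 0.8, 0.6, 0.4], [1, 0, 1, 0]))
# [(0.5, 1.0), (0.5, 0.5), (1.0, 0.6666666666666666), (1.0, 0.5)]
```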

### Lift

• x-Axis: ActionRate (% of Total)
• y-Axis: Lift

Random: (AllPositive / Total) * ActionRecords = ((TP + FN) / (TP + FP + TN + FN)) * (TP + FP)

UseModel: TP

Lift = UseModel / Random = TP / (((TP + FN) / (TP + FP + TN + FN)) * (TP + FP))
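Plugging the confusion-matrix cells into the formula above gives the lift at one operating point. A sketch (counts are hypothetical):

```python
def lift(tp, fp, tn, fn):
    """Hits using the model / hits expected at random at the same ActionRate."""
    total = tp + fp + tn + fn
    random_hits = (tp + fn) / total * (tp + fp)  # base positive rate * actioned records
    return tp / random_hits

print(lift(tp=3, fp=1, tn=3, fn=1))  # 1.5
```

A lift of 1.5 means targeting with the model catches 1.5x as many positives as acting on the same number of randomly chosen records.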

### Gain

• x-Axis: ActionRate (% of Total)
• y-Axis: HitRate (% of Positives)
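A gains-chart point at a given threshold is simply (ActionRate, HitRate). A sketch with the same hypothetical counts:

```python
def gain_point(tp, fp, tn, fn):
    """One (ActionRate, HitRate) point on the cumulative gains chart."""
    action_rate = (tp + fp) / (tp + fp + tn + fn)  # fraction of records actioned
    hit_rate = tp / (tp + fn)                      # fraction of positives caught
    return action_rate, hit_rate

print(gain_point(tp=3, fp=1, tn=3, fn=1))  # (0.5, 0.75)
```

Here the model catches 75% of all positives while actioning only 50% of records; the random baseline is the diagonal where HitRate equals ActionRate.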