In part 2, we tried different models to predict passenger survival on the Titanic. We saw that an increase in training accuracy does not always result in an increase in the Kaggle score. Here, we discuss this in more detail.
Overfitting & Cross-validation
In machine learning practice, we are interested in predictive models capable of predicting something in the real world: given all we know about an instance, we'd like to predict something about it. So, our models should generalize to real-world scenarios.
When we check the accuracy of our predictions against the training data and maximize accuracy in this setting, there is a risk of overfitting the training data, meaning we have captured so many of the outliers, exceptions, and particularities of the training data that the resulting model no longer generalizes. This is what happened with our sex.class.age.tree model.
What can be done to avoid overfitting? We can create different versions of the training data, keeping part of it aside as a test set, evaluate our model on that held-out part, and pick the model that performs best on average. This is called cross-validation. Let's see a practical example of cross-validation.
K-fold Cross Validation
One of the popular methods of cross-validation is K-fold cross-validation. In this method, the training data is split into k partitions (folds). Each time, we leave out one of the folds, train the model on the remaining folds, and then measure performance on the fold that was not used for training. The model's performance is then the average performance over all k runs.
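Before wiring this into caret, here is an illustrative look at what such folds contain (a sketch only; titanic.train and its Survived column are assumed names for the training data):

library(caret)

set.seed(100)
# createFolds returns a list; each element holds the row indices held out in one fold
folds <- createFolds(titanic.train$Survived, k = 10)
str(folds)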
set.seed(100)
cvCtrl <- trainControl(method = "cv", number = 10)   # use 10-fold cross-validation
The trainControl function is used to create a 10-fold cross-validation control object. This object is then passed to the train function. Decision trees are controlled by cp (the complexity parameter), which tells the algorithm to stop splitting when the chosen measure (accuracy here) does not improve by at least this factor. So, we are using 10-fold cross-validation to find an appropriate cp.
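A sketch of how this fits together; the data frame name (titanic.train) and the exact formula are assumptions, not the original code:

library(caret)
library(rpart)

set.seed(100)
cvCtrl <- trainControl(method = "cv", number = 10)

# caret evaluates candidate cp values with 10-fold cross-validation and keeps the best one
sex.class.age.tree <- train(Survived ~ Sex + Pclass + Age,
                            data = titanic.train,
                            method = "rpart",
                            trControl = cvCtrl)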
plot.train and print.train show the details of training, i.e. how accuracy changed as cp varied.
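For example, assuming the model object from the sketch above:

print(sex.class.age.tree)     # print.train: cross-validated accuracy for each candidate cp
plot(sex.class.age.tree)      # plot.train: accuracy as a function of cp
sex.class.age.tree$bestTune   # the cp value that was finally selected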
The default value for cp is 0.01, and that's why our tree didn't change compared to what we had at the end of part 2.
Another parameter that controls the training behavior is tuneLength, which tells caret how many candidate values to evaluate for each tuning parameter. The default value for tuneLength is 3, meaning 3 different values will be tried per tuning parameter. Since we only have one tuning parameter (cp), it will try three different trees with different values of cp. Let's change this default to see what happens.
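For instance, asking caret to evaluate five candidate cp values instead of three (again a sketch built on the assumed names above):

set.seed(100)
sex.class.age.tree <- train(Survived ~ Sex + Pclass + Age,
                            data = titanic.train,
                            method = "rpart",
                            trControl = cvCtrl,
                            tuneLength = 5)   # try 5 candidate cp values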
As we see in the output, 5 different trees with different cp values have been tried, and the cp with the highest accuracy has been picked. Because the resulting cp is lower than the default value, we get a different tree this time.
When submitted to Kaggle, our increased training accuracy (85.75%) did not translate into an increased Kaggle score, which by now we might have expected. This is another example of overfitting, where our model could not be generalized to accurately predict survival for the unseen test data.
So, in general it is not advisable to maximize tuneLength: high values may result in overfitting, since we end up trying many very similar trees that do not help. The result in this case is a full-grown tree with unpruned branches and some overly specific nodes.
Putting it together
Let’s bring Fare back with all the tuning in place.
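A sketch of the combined model, assuming the same data frame and cross-validation control as before (the object name sex.class.age.fare.tree is mine, not the original):

set.seed(100)
sex.class.age.fare.tree <- train(Survived ~ Sex + Pclass + Age + Fare,
                                 data = titanic.train,
                                 method = "rpart",
                                 trControl = cvCtrl)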
This time we get better training accuracy than the model without Fare (83.95% compared to 83.05%), and our Kaggle score has improved from 0.75598 to 0.7799.
It should be noted that the best score we have had up to this point is still the one for the model using Sex, Pclass, and Fare. My guess is that this is because of the inherent errors in imputing the missing values for Age. Next time, I will discuss different strategies for imputing the missing values and compare their results. So, see you next time.