k-fold cross-validation is a resampling procedure used to evaluate the performance of machine learning models. The dataset is partitioned into k folds of approximately equal size. In each of k iterations, one fold serves as the test set while the remaining k-1 folds form the training set, so that every fold is used exactly once for testing. The final performance metric is the average of the k per-iteration results, which gives a more reliable estimate of the model's generalization performance than a single train/test split.
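
As a concrete illustration, the sketch below runs the procedure with k = 5 using scikit-learn's KFold splitter. The synthetic dataset, the LogisticRegression estimator, and accuracy as the metric are placeholder choices for the example, not part of the description above; any estimator and scoring function could be substituted.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import KFold

# Synthetic dataset standing in for whatever data is being evaluated.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

k = 5
kf = KFold(n_splits=k, shuffle=True, random_state=0)

fold_scores = []
for fold_idx, (train_idx, test_idx) in enumerate(kf.split(X)):
    # Each fold serves as the test set exactly once;
    # the remaining k-1 folds form the training set.
    X_train, X_test = X[train_idx], X[test_idx]
    y_train, y_test = y[train_idx], y[test_idx]

    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)

    score = accuracy_score(y_test, model.predict(X_test))
    fold_scores.append(score)
    print(f"Fold {fold_idx + 1}: accuracy = {score:.3f}")

# The final estimate is the average of the k per-fold scores.
print(f"Mean accuracy over {k} folds: {np.mean(fold_scores):.3f}")
```

The explicit loop mirrors the steps described above; in practice, scikit-learn's cross_val_score can perform the same split-train-score-average cycle in a single call.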