In the previous part, we discussed what transfer learning is and how we can use it to increase our model's accuracy.
Link to the previous part:
In this part, we will build a model training loop and actually train our model on top of a ResNet-18 model with PyTorch.
Step 5: Creating the training loop
PyTorch does not provide a built-in training loop, so we write our own. The steps I'm taking to train the model are:
1. Setting the model to training mode with model.train()
2. Getting batches from our data loaders (we have separate train and val loaders here)
3. Feeding batches to our model, computing the outputs and loss, backpropagating the gradients, and updating the weights
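The steps above can be sketched as a single-epoch training function. This is a minimal sketch, not the article's exact code; names like `train_one_epoch`, `loader`, `criterion`, and `optimizer` are my own assumptions.

```python
import torch
import torch.nn as nn

def train_one_epoch(model, loader, criterion, optimizer, device="cpu"):
    # Step 1: put the model in training mode (enables dropout, batch-norm updates)
    model.train()
    running_loss, correct, total = 0.0, 0, 0
    # Step 2: iterate over batches from the data loader
    for inputs, labels in loader:
        inputs, labels = inputs.to(device), labels.to(device)
        # Step 3: forward pass, loss, backprop, weight update
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        # track running statistics for reporting
        running_loss += loss.item() * inputs.size(0)
        correct += (outputs.argmax(1) == labels).sum().item()
        total += labels.size(0)
    return running_loss / total, correct / total
```

A second, near-identical loop (with `model.eval()` and `torch.no_grad()`, and without the backward/step calls) would handle the validation data.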
Step 6: Writing code to visualize our model's predictions
For visualizing predictions we use matplotlib.pyplot and some random images from our validation dataset. We will use this after we have finished training our model.
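A minimal sketch of such a helper is below; the function name `visualize_model` matches the one used later in the article, but the body, the `class_names` argument, and the 2×3 grid layout are my own assumptions.

```python
import matplotlib.pyplot as plt
import torch

def visualize_model(model, loader, class_names, num_images=6):
    # eval mode so dropout/batch-norm behave deterministically
    model.eval()
    shown = 0
    fig = plt.figure(figsize=(9, 6))
    with torch.no_grad():
        for inputs, labels in loader:
            preds = model(inputs).argmax(1)
            for i in range(inputs.size(0)):
                shown += 1
                ax = fig.add_subplot(2, 3, shown)
                ax.axis("off")
                ax.set_title(f"pred: {class_names[preds[i].item()]}")
                # CHW tensor -> HWC image; un-normalize here if your
                # transforms normalized the inputs
                ax.imshow(inputs[i].permute(1, 2, 0).clamp(0, 1).numpy())
                if shown == num_images:
                    return fig
    return fig
```

Seeing the predicted label next to each image makes it obvious whether the model is picking up real patterns.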
Step 7: Actual training (setting model parameters and the training step)
We set our base model to ResNet-18, the loss to cross-entropy loss, and the optimizer to Adam.
Here, since we are not freezing the layers of the pre-trained model, training updates all of its weights, overwriting the learned features and making the results quite a bit worse.
Step 8: Training with frozen layers
In this step, we freeze the base layers' parameters so that training cannot change the already-learned pre-trained weights. This model gives higher accuracy because the pre-trained features are not lost while training.
Step 9: Visualizing predictions
The visualize_model()
function from the code above helps us visualize our model's predictions, so we can see whether our model actually learned any patterns or is just guessing 😉