⏱️ 50 min

Transfer Learning

Leverage pre-trained models for better performance

Using Pre-trained Models

Transfer learning allows us to use models trained on large datasets (like ImageNet) and adapt them to our specific tasks.

**Benefits:**

- Faster training
- Better performance with less data
- Access to powerful feature extractors

**Approaches:**

1. **Feature Extraction**: freeze the pre-trained layers and train only the final layers
2. **Fine-tuning**: unfreeze some pre-trained layers and train them with a low learning rate
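Before bringing in a full ResNet, here is a minimal sketch of the freezing mechanics on a toy two-layer network (the layer sizes are arbitrary, chosen only for illustration):

```python
import torch.nn as nn

# Toy "backbone" + "head", standing in for a pre-trained model
model = nn.Sequential(
    nn.Linear(8, 16),  # pretend this layer carries pre-trained weights
    nn.ReLU(),
    nn.Linear(16, 3),  # new task-specific head (3 classes)
)

# Feature extraction: freeze everything, then re-enable only the head
for param in model.parameters():
    param.requires_grad = False
for param in model[2].parameters():
    param.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
frozen = sum(p.numel() for p in model.parameters() if not p.requires_grad)
print(trainable)  # 16*3 + 3 = 51 (head only)
print(frozen)     # 8*16 + 16 = 144 (frozen backbone)
```

The same two steps (freeze all, then unfreeze a chosen subset) are what the ResNet example below applies at scale.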

Transfer Learning with ResNet

Adapt a pre-trained ResNet50 to a 5-class classification task:

python
import torch
import torch.nn as nn
import torchvision.models as models

# Load pre-trained ResNet50 (the old `pretrained=True` flag is
# deprecated in recent torchvision; use the `weights` argument instead)
resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)

# Approach 1: Feature Extraction
# Freeze all layers
for param in resnet.parameters():
    param.requires_grad = False

# Replace final layer for our task (5 classes)
num_features = resnet.fc.in_features
resnet.fc = nn.Linear(num_features, 5)

print("Feature Extraction Model:")
print(f"Trainable parameters: {sum(p.numel() for p in resnet.parameters() if p.requires_grad):,}")
print(f"Frozen parameters: {sum(p.numel() for p in resnet.parameters() if not p.requires_grad):,}")

# Approach 2: Fine-tuning
# Unfreeze the last few layers
resnet_finetune = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
num_features = resnet_finetune.fc.in_features
resnet_finetune.fc = nn.Linear(num_features, 5)

# Freeze early layers, unfreeze later layers
for name, param in resnet_finetune.named_parameters():
    if "layer4" in name or "fc" in name:
        param.requires_grad = True
    else:
        param.requires_grad = False

print("\nFine-tuning Model:")
print(f"Trainable parameters: {sum(p.numel() for p in resnet_finetune.parameters() if p.requires_grad):,}")
print(f"Frozen parameters: {sum(p.numel() for p in resnet_finetune.parameters() if not p.requires_grad):,}")

# Different learning rates for different layers
optimizer = torch.optim.Adam([
    {'params': resnet_finetune.layer4.parameters(), 'lr': 1e-5},
    {'params': resnet_finetune.fc.parameters(), 'lr': 1e-3}
])

print("\nOptimizer configured with different learning rates")
print("  - Layer4: 1e-5 (fine-tune)")
print("  - FC layer: 1e-3 (train from scratch)")
Output:
Feature Extraction Model:
Trainable parameters: 10,245
Frozen parameters: 23,508,032

Fine-tuning Model:
Trainable parameters: 14,974,981
Frozen parameters: 8,543,296

Optimizer configured with different learning rates
  - Layer4: 1e-5 (fine-tune)
  - FC layer: 1e-3 (train from scratch)