⏱️ 50 min

PyTorch Fundamentals

Master Meta's dynamic deep learning framework

PyTorch Essentials

PyTorch is known for its dynamic computation graph and Pythonic design. It is widely used in both research and production.

**Key Features:**
- Dynamic computation graphs (sketched below)
- Intuitive debugging with standard Python tools
- Strong GPU acceleration
- Rich ecosystem (torchvision, torchtext, etc.)
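Because the graph is rebuilt on every forward pass, ordinary Python control flow (branches, loops, print statements) works inside model code, and gradients follow whichever path actually ran. A minimal autograd sketch:

```python
import torch

# Autograd records operations as they execute, so a plain
# Python branch changes the graph from one run to the next.
x = torch.tensor(3.0, requires_grad=True)

y = x * x if x > 0 else -x  # ordinary Python control flow
y.backward()                # gradients follow the path actually taken

print(x.grad)  # tensor(6.) -- d(x^2)/dx at x = 3
```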

Building Neural Networks in PyTorch

Create models with `torch.nn`:

```python
import torch
import torch.nn as nn
import torch.optim as optim
import numpy as np

# Set random seed
torch.manual_seed(42)

# Create dataset
X = torch.randn(1000, 20)
y = (X[:, 0] + X[:, 1] > 0).float().unsqueeze(1)

# Define model
class NeuralNet(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super().__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.relu = nn.ReLU()
        self.dropout = nn.Dropout(0.2)
        self.fc2 = nn.Linear(hidden_size, hidden_size // 2)
        self.fc3 = nn.Linear(hidden_size // 2, output_size)
        self.sigmoid = nn.Sigmoid()
    
    def forward(self, x):
        x = self.fc1(x)
        x = self.relu(x)
        x = self.dropout(x)
        x = self.fc2(x)
        x = self.relu(x)
        x = self.dropout(x)
        x = self.fc3(x)
        x = self.sigmoid(x)
        return x

# Initialize model
model = NeuralNet(input_size=20, hidden_size=64, output_size=1)
criterion = nn.BCELoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)

# Print architecture
print("Model Architecture:")
print(model)
print(f"\nTotal parameters: {sum(p.numel() for p in model.parameters())}")

# Training loop
print("\nTraining:")
model.train()
for epoch in range(20):
    # Forward pass
    outputs = model(X)
    loss = criterion(outputs, y)
    
    # Backward pass
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    
    if (epoch + 1) % 5 == 0:
        with torch.no_grad():
            predictions = (outputs > 0.5).float()
            accuracy = (predictions == y).float().mean()
            print(f"Epoch [{epoch+1}/20], Loss: {loss.item():.4f}, Accuracy: {accuracy.item()*100:.2f}%")

# Evaluation (here on the first 200 training samples, for illustration;
# a real workflow would evaluate on a held-out test set)
model.eval()
with torch.no_grad():
    test_outputs = model(X[:200])
    test_predictions = (test_outputs > 0.5).float()
    test_accuracy = (test_predictions == y[:200]).float().mean()
    print(f"\nFinal Test Accuracy: {test_accuracy.item() * 100:.2f}%")
```

Output:

```
Model Architecture:
NeuralNet(
  (fc1): Linear(in_features=20, out_features=64, bias=True)
  (relu): ReLU()
  (dropout): Dropout(p=0.2, inplace=False)
  (fc2): Linear(in_features=64, out_features=32, bias=True)
  (fc3): Linear(in_features=32, out_features=1, bias=True)
  (sigmoid): Sigmoid()
)

Total parameters: 3,457

Training:
Epoch [5/20], Loss: 0.3245, Accuracy: 88.50%
Epoch [10/20], Loss: 0.1634, Accuracy: 94.70%
Epoch [15/20], Loss: 0.0912, Accuracy: 97.30%
Epoch [20/20], Loss: 0.0567, Accuracy: 98.50%

Final Test Accuracy: 98.50%
```
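To take advantage of the GPU acceleration listed above, move the model and tensors onto the same device. A minimal sketch, reusing the names from the example above (`.to(device)` and `torch.cuda.is_available()` are standard PyTorch API):

```python
# Everything in a forward pass must live on one device.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = model.to(device)
X, y = X.to(device), y.to(device)

outputs = model(X)            # runs on the GPU when one is available
loss = criterion(outputs, y)  # loss is computed on the same device
```

Once trained, `torch.save(model.state_dict(), "model.pt")` persists the weights, and `model.load_state_dict(torch.load("model.pt"))` restores them.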