PyTorch Reference
Getting started
Creating a tensor:

import torch
import torch.nn as nn
import torch.optim as optim

tns = torch.tensor(2500, dtype=torch.int)  # torch.float is another dtype
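For example, a float tensor (the values here are arbitrary):

tns_f = torch.tensor([1.0, 2.5], dtype=torch.float)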
nn.Module is the base class for all neural networks; nn.Linear and nn.Sequential are derived classes of the nn.Module parent class.
nn.Linear(2, 3)  # Initializes with random parameters
# Building a sequential network
model = nn.Sequential(
    nn.Linear(2, 3),
    nn.ReLU(),
    nn.Linear(3, 1))

model(input)  # Feed forward
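A minimal forward-pass sketch (the batch size of 4 is an arbitrary assumption; the model above expects 2 input features):

x = torch.rand(4, 2)  # hypothetical batch: 4 samples, 2 features
y_hat = model(x)      # output shape: (4, 1)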
# Building a custom network
class NN_Regression(nn.Module):
    def __init__(self):
        super(NN_Regression, self).__init__()
        # Initialize components
        self.layer1 = nn.Linear(3, 6)
        self.layer2 = nn.Linear(6, 1)  # input size must match layer1's output size
        self.relu = nn.ReLU()

    def forward(self, x):
        x = self.layer1(x)
        x = self.relu(x)
        x = self.layer2(x)
        return x
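Instantiating and running the custom network (the batch of 5 random samples is an assumption for illustration):

net = NN_Regression()
out = net(torch.rand(5, 3))  # 5 samples, 3 features each -> output shape (5, 1)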
optimizer = optim.Adam(model.parameters(), lr=0.01)
loss = nn.MSELoss()

MSE = loss(model(input), y)
MSE.backward()         # Backward propagation, i.e. gradient calculation
optimizer.step()       # Stepping
optimizer.zero_grad()  # Reset the gradients
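Putting these steps together, a minimal training-loop sketch (X, y, and the epoch count are assumptions for illustration):

X = torch.rand(100, 2)    # hypothetical inputs
y = torch.rand(100, 1)    # hypothetical targets
for epoch in range(200):  # arbitrary number of epochs
    optimizer.zero_grad()        # reset gradients from the previous step
    MSE = loss(model(X), y)      # forward pass + loss
    MSE.backward()               # backward propagation
    optimizer.step()             # update parameters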
# Model evaluation
model.eval()
with torch.no_grad():
    test_MSE = loss(model(X_test), y_test)

# Saving and loading
torch.save(model, 'model.pth')
torch.load('model.pth')
To split a dataset into validation and testing sets, use sklearn.model_selection.train_test_split:
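A minimal sketch (X and y are assumed to be array-like features and targets):

from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)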
PyTorch references and functions
Non-Linear Functions

Name | Function | Notes |
---|---|---|
ReLU | nn.ReLU() | |
Sigmoid | nn.Sigmoid() | |
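Applying these as callables (input values chosen arbitrarily):

nn.ReLU()(torch.tensor([-1.0, 2.0]))  # tensor([0., 2.])
nn.Sigmoid()(torch.tensor([0.0]))     # tensor([0.5000])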
Loss Functions

Name | Function | Notes |
---|---|---|
Mean Absolute Error or L1 norm | torch.nn.functional.l1_loss(a, b) | |
Mean Squared Error or L2 norm | torch.nn.functional.mse_loss(a, b) | |
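A quick numeric check (values chosen arbitrarily):

import torch.nn.functional as F

a = torch.tensor([1.0, 2.0])
b = torch.tensor([1.5, 1.0])
F.l1_loss(a, b)   # tensor(0.7500): mean of |-0.5| and |1.0|
F.mse_loss(a, b)  # tensor(0.6250): mean of 0.25 and 1.0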
Optimizers

Name | Function | Notes |
---|---|---|
SGD | torch.optim.SGD(model.parameters(), lr) | |
Adam | torch.optim.Adam(model.parameters(), lr) | |
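Constructing an optimizer (the learning rate of 0.01 is an arbitrary choice):

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)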
Linear Regression

Name | Function | Notes |
---|---|---|
Linear Regression | nn.Linear(n_inputs, n_outputs) | Computes wx + b: one weight per input, one bias per output |
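Checking the parameter shapes (layer sizes chosen arbitrarily):

lin = nn.Linear(3, 1)  # 3 inputs -> 1 output
lin.weight.shape       # torch.Size([1, 3])
lin.bias.shape         # torch.Size([1])
lin(torch.rand(4, 3))  # x @ w.T + b, output shape (4, 1)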