by Jawad Haider
- What is CUDA?
- How do I install PyTorch for GPU?
- How do I know if I have CUDA available?
- Using GPU and CUDA
- Using CUDA instead of CPU
- Sending Models to GPU
- Convert Tensors to .cuda() tensors
What is CUDA?¶
Most people mistake CUDA for a language, or maybe an API. It is neither.
It’s more than that. CUDA is a parallel computing platform and programming model that makes using a GPU for general-purpose computing simple and elegant. The developer still programs in the familiar C, C++, Fortran, or an ever-expanding list of supported languages, and incorporates extensions of these languages in the form of a few basic keywords.
These keywords let the developer express massive amounts of parallelism and direct the compiler to the portion of the application that maps to the GPU.
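In PyTorch you rarely write those kernels yourself; simply placing tensors on the GPU makes ordinary operations dispatch to CUDA kernels. A minimal sketch, assuming a CUDA-capable machine:

import torch

if torch.cuda.is_available():
    x = torch.rand(1000, 1000, device='cuda')  # tensor allocated in GPU memory
    y = x @ x                                   # matrix multiply runs as a CUDA kernel
    print(y.device)                             # device(type='cuda', index=0)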
How do I install PyTorch for GPU?¶
Refer to the video; installation depends on whether or not you have an NVIDIA GPU card.
How do I know if I have CUDA available?¶
# check that PyTorch can see a CUDA device
torch.cuda.is_available()
True
Using GPU and CUDA¶
We’ve provided two versions of our .yml environment file: a GPU version and a CPU version. To use the GPU version you may need to create the virtual environment manually; please watch the video related to this lecture, since not every computer can run on the GPU. You need CUDA and an NVIDIA GPU.
# ID of the currently selected GPU
torch.cuda.current_device()
0
# name of that device
torch.cuda.get_device_name(0)
'GeForce GTX 1080 Ti'
# Returns the current GPU memory usage by
# tensors in bytes for a given device
torch.cuda.memory_allocated()
0
# Returns the current GPU memory managed by the
# caching allocator in bytes for a given device
torch.cuda.memory_cached()
0
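Note: newer versions of PyTorch rename this function to torch.cuda.memory_reserved().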
Using CUDA instead of CPU¶
# by default, new tensors are created on the CPU
a = torch.FloatTensor([1., 2.])
a
tensor([1., 2.])
a.device
device(type='cpu')
# .cuda() returns a copy of the tensor in GPU memory
a = torch.FloatTensor([1., 2.]).cuda()
a.device
device(type='cuda', index=0)
# the GPU copy now shows up in the allocator
torch.cuda.memory_allocated()
512
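As an aside (not part of the original lecture), a more portable habit is to pick the device once and create tensors on it directly, so the same code runs on CPU-only machines. A minimal sketch:

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
a = torch.tensor([1., 2.], device=device)  # lands on the GPU when one is available
a.device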
Sending Models to GPU¶
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import Dataset, DataLoader
from sklearn.model_selection import train_test_split
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
class Model(nn.Module):
    def __init__(self, in_features=4, h1=8, h2=9, out_features=3):
        super().__init__()
        self.fc1 = nn.Linear(in_features, h1)   # input layer
        self.fc2 = nn.Linear(h1, h2)            # hidden layer
        self.out = nn.Linear(h2, out_features)  # output layer

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.out(x)
        return x
model = Model()
# From the discussions here: discuss.pytorch.org/t/how-to-check-if-model-is-on-cuda
next(model.parameters()).is_cuda
False
# .cuda() moves the model's parameters and buffers into GPU memory
gpumodel = model.cuda()
next(gpumodel.parameters()).is_cuda
True
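Equivalently, a model can be moved with .to(device), which is a no-op on CPU-only machines. A side sketch using the device object from the earlier aside (model2 is purely illustrative):

model2 = Model().to(device)         # portable: works with or without a GPU
next(model2.parameters()).device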
df = pd.read_csv('../Data/iris.csv')
X = df.drop('target',axis=1).values
y = df['target'].values
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.2,random_state=33)
Convert Tensors to .cuda() tensors¶
X_train = torch.FloatTensor(X_train).cuda()
X_test = torch.FloatTensor(X_test).cuda()
y_train = torch.LongTensor(y_train).cuda()
y_test = torch.LongTensor(y_test).cuda()
trainloader = DataLoader(X_train, batch_size=60, shuffle=True)
testloader = DataLoader(X_test, batch_size=60, shuffle=False)
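Note that the loop below trains on the full X_train tensor in a single batch; the loaders above are defined but not used here. The loop also needs a loss function and an optimizer. A minimal setup, assuming cross-entropy loss and the Adam optimizer (the lr=0.01 value is an assumption):

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(gpumodel.parameters(), lr=0.01)  # assumed learning rate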
import time

epochs = 100
losses = []
start = time.time()

for i in range(epochs):
    i += 1
    y_pred = gpumodel.forward(X_train)
    loss = criterion(y_pred, y_train)
    losses.append(loss.item())  # store the Python float, not the graph-attached tensor

    # a neat trick to save screen space:
    if i % 10 == 1:
        print(f'epoch: {i:2} loss: {loss.item():10.8f}')

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f'TOTAL TRAINING TIME: {time.time()-start}')
epoch: 1 loss: 1.15071142
epoch: 11 loss: 0.93773186
epoch: 21 loss: 0.77982736
epoch: 31 loss: 0.60996711
epoch: 41 loss: 0.40083539
epoch: 51 loss: 0.25436994
epoch: 61 loss: 0.15052448
epoch: 71 loss: 0.10086147
epoch: 81 loss: 0.08127660
epoch: 91 loss: 0.07230931
TOTAL TRAINING TIME: 0.4668765068054199
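As a quick follow-up (not shown in the original output), the GPU-trained model can be scored on the held-out CUDA tensors; the accuracy computation stays entirely on the GPU:

with torch.no_grad():                 # gradients are not needed for evaluation
    y_val = gpumodel(X_test)          # forward pass on the test set
    preds = y_val.argmax(dim=1)       # predicted class index per row
    acc = (preds == y_test).float().mean().item()
print(f'test accuracy: {acc:.3f}')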