How to use a free GPU with Google Colab

There is no better time to get into deep learning than lockdown, right? Especially since you can now do it at no extra cost. Google Colab gives you free GPU runtimes of up to 12 hours at a time. While hardware is allocated randomly depending on load, if you get a Tesla-grade GPU that can save you several thousand bucks! Let's do a deep dive into how to do it:

  1. Have a Google account. A Gmail account automatically comes with access to Google Drive, and that's all you need.

  2. Go to https://colab.research.google.com/ and follow the easy steps to start a new Jupyter-like notebook in your browser.

  3. Change the hardware settings of your notebook to GPU acceleration: in the menu, select Runtime > Change runtime type and set "Hardware accelerator" to GPU.

  4. From the notebook itself you can install and remove Python packages just as you would in Jupyter (restart the runtime after swapping TensorFlow versions so the new install is picked up):

!pip uninstall tensorflow

!pip install tensorflow-gpu==1.15

!pip install keras==2.2.4

!pip install numpy==1.15.2
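After the runtime restarts, it's worth confirming that the pins actually took effect. A minimal sketch of such a check (matches_pin is a hypothetical helper; in the notebook you would feed it the real tensorflow.__version__, keras.__version__, and numpy.__version__ strings instead of the literals below):

```python
def matches_pin(installed, pin):
    """True if an installed version string satisfies an exact or prefix pin."""
    return installed == pin or installed.startswith(pin + ".")

# In the notebook, replace the literals with the real __version__ attributes.
print(matches_pin("1.15.0", "1.15"))   # tensorflow-gpu==1.15 -> True
print(matches_pin("1.16.1", "1.15"))   # a mismatched install -> False
```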

5a. See if you were lucky enough to get a GPU:

from tensorflow.python.client import device_lib

device_lib.list_local_devices()

[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 12923367599567010312, name: "/device:XLA_CPU:0"
device_type: "XLA_CPU"
memory_limit: 17179869184
locality {
}
incarnation: 12401479978837918318
physical_device_desc: "device: XLA_CPU device", name: "/device:XLA_GPU:0"
device_type: "XLA_GPU"
memory_limit: 17179869184
locality {
}
incarnation: 232088881652370763
physical_device_desc: "device: XLA_GPU device", name: "/device:GPU:0"
device_type: "GPU"
memory_limit: 14912199066
locality {
bus_id: 1
links {
}
}
incarnation: 10540233397852010914
physical_device_desc: "device: 0, name: Tesla T4, pci bus id: 0000:00:04.0, compute capability: 7.5"]
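If you want to gate your code on this result programmatically, one option is to scan the device list for a plain GPU entry. A minimal sketch (the device names below are hard-coded from the output above; in the notebook you would take them from device_lib.list_local_devices() instead):

```python
def has_gpu(device_names):
    """True if any local device is a plain GPU (ignoring XLA pseudo-devices)."""
    return any(name.startswith("/device:GPU:") for name in device_names)

# Names copied from the listing above; normally:
#   names = [d.name for d in device_lib.list_local_devices()]
names = ["/device:CPU:0", "/device:XLA_CPU:0", "/device:XLA_GPU:0", "/device:GPU:0"]
print(has_gpu(names))  # True when a GPU was allocated
```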

5b. OK, so it says we've got a Tesla T4, but what is this device? Let's find out how much memory we have:

!pip install gputil

!pip install psutil

!pip install humanize

import psutil
import humanize
import os
import GPUtil as GPU

GPUs = GPU.getGPUs()
# XXX: there is only one GPU on Colab, and it isn't guaranteed
gpu = GPUs[0]

def printm():
    process = psutil.Process(os.getpid())
    print("Gen RAM Free: " + humanize.naturalsize(psutil.virtual_memory().available),
          " | Proc size: " + humanize.naturalsize(process.memory_info().rss))
    print("GPU RAM Free: {0:.0f}MB | Used: {1:.0f}MB | Util {2:3.0f}% | Total {3:.0f}MB".format(
        gpu.memoryFree, gpu.memoryUsed, gpu.memoryUtil * 100, gpu.memoryTotal))

printm()

This gives:

Gen RAM Free: 12.7 GB | Proc size: 157.4 MB
GPU RAM Free: 15079MB | Used: 0MB | Util 0% | Total 15079MB
Not bad!
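humanize.naturalsize is doing the byte formatting in the first line above; if you'd rather skip the extra dependency, a minimal pure-Python stand-in looks like this (decimal, 1000-based units, matching humanize's default):

```python
def naturalsize_sketch(num_bytes):
    """Format a byte count with decimal (1000-based) units, like humanize's default."""
    for unit in ("Bytes", "kB", "MB", "GB", "TB"):
        if num_bytes < 1000.0 or unit == "TB":
            break
        num_bytes /= 1000.0
    if unit == "Bytes":
        return "%d %s" % (num_bytes, unit)
    return "%.1f %s" % (num_bytes, unit)

print(naturalsize_sketch(12_700_000_000))  # 12.7 GB
print(naturalsize_sketch(157_400_000))     # 157.4 MB
```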

  6. Now let's check that you have CUDA installed (!nvidia-smi also works and additionally shows the driver version and current GPU utilisation):

!nvcc --version

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Sun_Jul_28_19:07:16_PDT_2019
Cuda compilation tools, release 10.1, V10.1.243

  7. Finally, to load data and models and to save results, you need access to your Google Drive. To gain access you need to mount it like so:

from google.colab import drive

drive.mount('/content/drive')

A dialog will open in a new tab prompting you to allow access to the contents of your drive; confirm by copying the authorization code back into the Colab notebook.
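Once mounted, your files live under /content/drive/My Drive/. A small helper for building paths there can keep notebook code tidy (the base directory is the default Colab mount layout; the "experiments" subfolder is just an example name):

```python
import os

DRIVE_BASE = "/content/drive/My Drive"  # default mount layout on Colab

def drive_path(*parts, base=DRIVE_BASE):
    """Build a path under the mounted Google Drive."""
    return os.path.join(base, *parts)

# e.g. somewhere to save checkpoints; create it once after mounting:
ckpt_dir = drive_path("experiments", "run1")
print(ckpt_dir)  # /content/drive/My Drive/experiments/run1
# os.makedirs(ckpt_dir, exist_ok=True)
```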

The free version of the Google Colab runtime stays connected for up to 12 hours. However, if you are training a rather large network and leave to take a jogging break, your inactivity will disconnect Colab's runtime. One trick recently discussed on the internet is to use the browser's built-in JavaScript console to prevent that from happening. Here's how you can do that:

  1. Start a process in Colab.

  2. While in the Colab window, open your browser's developer tools (F12, or Ctrl+Shift+I on Windows / Cmd+Option+I on macOS).

  3. Switch to the JavaScript console tab.

  4. Invoke the following code in the console:

// keep Google Colab connected
function ClickConnect(){
    console.log("Working");
    document.querySelector("colab-toolbar-button#connect").click()
}
setInterval(ClickConnect, 60000)

NB: there are several versions of the snippet above. Share yours in the comments below :slight_smile: Happy hacking!
