Google Colab vs own GTX 1060


Creation date: 2018-10-12

Tags: ai, neural-nets, machine-learning

This is more or less a continuation of the last post, where I described how to use a free Tesla K80 GPU in the cloud. There I pointed out some problems I ran into while working on a university project.

After searching for a couple of days I was able to find a GTX 1060 6GB for a bit more than 200€, which is about $235. It arrived yesterday, so this is only a short comparison, but it might be useful for you. I wasn't able to find any comparison between Google Colab and your own GPU, only between paid services like AWS and your own hardware. If you're interested: GPU servers for machine learning startups: Cloud vs On-premise?

First of all, a bit about the test I ran, which I did for the university project (I might write about it if there's interest). The problem was to determine, using only satellite imagery, whether a region is below or above the global poverty line of $1.90 per day. I used roughly 150,000 images for training, and for one epoch I used half of that training set. The overall model was a VGG16 where I only trained the classification layers; in other words, the convolutional layers (feature extraction) were frozen.
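For reference, here is a minimal sketch of that kind of setup, assuming the Keras API; the image size and layer sizes are illustrative, not the exact project code.

```python
# Sketch: VGG16 as a frozen feature extractor with a small trainable
# classifier head (binary "below/above the poverty line" output).
# Assumes TensorFlow/Keras; sizes are illustrative.
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the convolutional (feature extraction) layers

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # binary classification
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```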

The differences between Google Colab and my GTX 1060 6GB are:

| Value        | Google Colab | GTX 1060 6GB       |
|--------------|--------------|--------------------|
| Batch size   | max 8 images | 16 (4.6 GB / 6 GB) |
| Time / epoch | 100 min      | 36 min             |

I think not only the GPU is relevant but also the CPU. I'm not able to test its effect separately, but I'll include my values here for comparison, along with what I found out about Google Colab.

| CPU/Memory | Google Colab                   | Own machine                              |
|------------|--------------------------------|------------------------------------------|
| Model      | Intel(R) Xeon(R) CPU @ 2.20GHz | Intel(R) Xeon(R) CPU E5-1650 0 @ 3.20GHz |
| CPUs       | 2                              | 12                                       |
| Cores      | 1                              | 6                                        |
| Memory     | ~14 GB                         | 32 GB                                    |

You can find out more about the specs of Google Colab here; I found that on Stack Overflow.
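If you want to check your own runtime, here is a minimal sketch of one way to query such specs yourself, assuming a Linux environment (Colab or local) where `/proc` and optionally `nvidia-smi` are available:

```python
# Sketch: query CPU model, core count, memory, and GPU on a Linux machine.
import os
import subprocess

def read_first_match(path, key):
    """Return the value of the first line in `path` that starts with `key`."""
    with open(path) as f:
        for line in f:
            if line.startswith(key):
                return line.split(":", 1)[1].strip()
    return "unknown"

print("CPU model   :", read_first_match("/proc/cpuinfo", "model name"))
print("Logical CPUs:", os.cpu_count())
print("Memory      :", read_first_match("/proc/meminfo", "MemTotal"))

try:
    gpu = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
        text=True,
    ).strip()
    print("GPU         :", gpu)
except (FileNotFoundError, subprocess.CalledProcessError):
    print("GPU         : no NVIDIA GPU found")
```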

In general I got a speed-up of about 3x for training (100 min vs. 36 min per epoch), but I think there is something much more valuable: I don't have to install everything and unzip my images every single time. On Google Colab I was happy if I managed to run 2 epochs before the connection was refused for some reason, even though the official limit is 12 hours. On my own machine I was able to run 6 epochs without any problems and improved my accuracy by 2% :D, which didn't happen before because I was too frustrated.
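To give an idea of that per-session overhead, a Colab setup cell typically looks something like the following; the paths are hypothetical, not my actual project layout.

```python
# Hypothetical Colab setup cell: mount Google Drive and unpack the dataset.
# This has to be repeated (and re-authenticated) whenever the runtime resets.
from google.colab import drive
import zipfile

drive.mount("/content/drive")

# Unzipping ~150,000 satellite images onto the ephemeral VM disk takes a
# noticeable amount of time on every new session.
with zipfile.ZipFile("/content/drive/My Drive/satellite_images.zip") as z:
    z.extractall("/content/data")
```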

If you really want to do some deep learning and maybe already have a desktop computer or workstation: invest a bit to make that computer good. For me the 1060 will hopefully be good enough for a while, and $235 isn't that much. I'll also try to sell my old AMD graphics card, which wasn't suitable for machine learning, but for some gamers it might not be too bad.

Thanks for reading and see you next time.

If you enjoy the blog in general please consider a donation via Patreon. You can read my posts earlier than everyone else and keep this blog running.


