First, you need to register for the service. A tip here: fast.ai course students receive a promotional code worth $15. That equates to about 30 hours of GPU usage, so I suggest you start the excellent fast.ai course and get even more out of it.
After signing up, you have access to your console. There are three products to choose from: Gradient, Core, and API. For quick access to GPU resources, I prefer Gradient. More specifically, I use its notebooks.
When you click on Create Notebook +, there is a selection of free containers at your disposal. It is also possible to configure a new container if necessary. For a quick analysis or experiment, the Jupyter Notebook Data Science Stack is perfect. You can find the specifications here. It lets you use both notebooks and terminals, so it's no problem to extend its functionality with additional packages.
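Before installing anything from a terminal, it is handy to check whether a package already ships with the container. A minimal sketch — `ensure_package` is a hypothetical helper, not part of the stack itself:

```python
import importlib.util

def ensure_package(name: str) -> bool:
    """Return True if `name` is importable in the current environment.

    If this returns False, open a terminal in the notebook and run
    `pip install --user <name>` to add the package to the container.
    """
    return importlib.util.find_spec(name) is not None

print(ensure_package("json"))                 # → True (standard library)
print(ensure_package("no_such_package_xyz"))  # → False
```

This avoids reinstalling packages the Data Science Stack already provides.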
In the second stage of creating a notebook instance, you need to decide on the machine you want to use. For example, I use the cheaper CPU option to download data and perform test runs of my deep learning models. This option is called C2 and costs only $0.009 per hour. That's four and a half days of computing power for $1.
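The arithmetic behind that claim is easy to verify:

```python
# C2 instance: $0.009 per hour
rate = 0.009
hours_per_dollar = 1 / rate              # how many hours $1 buys
days_per_dollar = hours_per_dollar / 24  # the same, in days

print(round(hours_per_dollar, 1))  # → 111.1
print(round(days_per_dollar, 1))   # → 4.6
```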
Starting with an economical CPU instance is possible thanks to the integrated storage. There are 200 GB of persistent storage available, which automatically mounts to each new instance as a storage folder. This setup lets you prepare everything first on an economical CPU.
Once you're sure everything works, you can start a new JupyterHub environment, this time with a GPU option. The cheapest GPU starts at $0.51 per hour and goes up to $1.72 per hour. More powerful machines are also available (at up to $20.99 per hour), but you need to upgrade your account to use them. I use Paperspace to support my personal development in deep learning, and for that purpose the $0.51 option has always been sufficient.
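Before launching training on the GPU instance, it is worth confirming that the GPU is actually visible to your framework. A small sketch assuming PyTorch (the same idea works with any framework); `select_device` is a hypothetical helper:

```python
def select_device() -> str:
    """Return "cuda" when PyTorch can see a GPU, "cpu" otherwise.

    On the $0.51/hour GPU machine this should return "cuda";
    on the C2 CPU machine it falls back to "cpu".
    """
    try:
        import torch  # assumed to be preinstalled in a deep learning container
        return "cuda" if torch.cuda.is_available() else "cpu"
    except ImportError:
        return "cpu"

print(select_device())
```

Running this right after the instance boots saves you from paying GPU rates for a model that is silently training on the CPU.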
Once you are satisfied with the results, export the notebooks you used (both the original .ipynb file and an HTML copy) and the trained model to a backup location or to a version control system of your choice.