A Tesla K80 and Ubuntu in a Consumer Motherboard

Samuel Gonzalez
5 min read · Mar 20, 2021


We’re all aware there has been a worldwide shortage of GPUs since the COVID-19 pandemic became a global problem. Most of us were forced to work from home, increasing demand for personal computers. The new NVIDIA RTX 3000 series showed great capabilities for many markets, especially artificial intelligence (AI) related tasks. However, these GPUs have been difficult to find and buy at reasonable prices since release. Sadly, ROCm support is still developing and not all cards are supported, leaving us with fewer options. Luckily, a friend told me that the NVIDIA Tesla K80 started showing up on eBay and other marketplaces at reasonable prices.

The NVIDIA Tesla K80

The Tesla K80 is a processing unit designed by NVIDIA specifically for servers and data centers, featuring a dual-GPU design with 4,992 CUDA cores and 24 GB of GDDR5 memory. Although it is an old card, originally released in 2014, it still packs decent processing power for AI projects like training, inference, and experimenting with YOLO. Read more about the Tesla K80 here.

Figure 1 — NVIDIA Tesla K80

Limitations

NVIDIA licenses the Tesla K80 reference design to manufacturers such as Hewlett Packard and Dell, among others, who build their own custom solutions around the card. This comes with limitations for traditional PC enthusiast consumers like ourselves. You need to take into consideration (1) the space available inside your computer case, (2) whether your power supply unit (PSU) can deliver the sustained power the card requires, and (3) heat dissipation. The card isn't actively cooled with fans; instead, the server manufacturers implement their own cooling solutions. The easiest way, though not necessarily the best one, to cool the card is to use two to three 80 mm fans running at a minimum of 2,000 RPM each. Consider getting better fans capable of producing more airflow if you can afford them, because the card can get toasty. Keep in mind that higher RPM means more power consumption and probably direct power from the PSU.

Hardware Requirements

Figure 2 — Gigabyte Motherboard — Example of 4G Decoding and Resizable BAR settings.
  1. Consumer motherboard with the latest BIOS which supports 4G Decoding and Resizable BAR. Please confirm these requirements before buying a new motherboard.
    I have a Gigabyte Aorus Master Z390 which enabled support for Resizable BAR in recent BIOS updates. See figure 2 for reference.
  2. 2–3 80 mm fans (2,000 RPM each or more). Pay attention to power consumption; you might need to connect them directly to the PSU due to amperage constraints on fan headers.
  3. Zip ties or any other tool/material to place and hold the fans on the card.

Software Requirements

  1. Linux Kernel 5.x
  2. gcc
  3. g++
  4. cmake
  5. Latest NVIDIA CUDA Toolkit (includes driver)

Installation

I'll assume we are working on a fresh install of Ubuntu. Otherwise, you might need to clean up existing driver installations first.

Ubuntu 18.04

Make sure your installation of Ubuntu 18.04 is running the latest available version of the Linux 5.x kernel.

In a terminal, run the command (see figure 3 for reference)

uname -r
Figure 3 — Linux uname -r command output

Otherwise run the following command in a terminal to install the latest 5.x kernel:

sudo apt install --install-recommends linux-generic-hwe-18.04 -y
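To double-check without eyeballing the version string, a small helper like the sketch below can confirm the running kernel is 5.x or newer. The `-ge 5` comparison is my own convention, not an official check:

```shell
# Compare the kernel's major version against 5; any 5.x or newer kernel passes.
major="$(uname -r | cut -d. -f1)"
if [ "$major" -ge 5 ]; then
    echo "kernel OK: $(uname -r)"
else
    echo "kernel too old: $(uname -r)"
fi
```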

Ubuntu 18.04 & 20.04

Confirm that the system assigned memory regions to the Tesla K80 correctly.

In a terminal, run the command (see figure 4 for reference)

lspci -vvv | grep -i -A 20 nvidia
Figure 4 — Example of lspci command output

You might need to review the previous steps and confirm the hardware requirements if the command's output doesn't look similar to figure 4 above.
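As an illustration of what to look for, the key detail is a large 64-bit prefetchable memory region for each GPU, which is exactly what 4G Decoding and Resizable BAR make possible. The sample line below is adapted from my own lspci output; the exact address will differ on your system:

```shell
# Sample lspci line for one K80 GPU once the BIOS settings are enabled;
# the large prefetchable BAR is the thing to confirm.
sample="Memory at 383fe0000000 (64-bit, prefetchable) [size=16G]"
echo "$sample" | grep -q "size=16G" && echo "large BAR mapped"
```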

Install Dependencies

Install CUDA installer dependencies. In a terminal, run the command

sudo apt install gcc g++ cmake
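Before moving on, a quick loop can confirm the toolchain is actually in place. This is just a convenience sketch, not a required step:

```shell
# Report which of the build tools the CUDA installer needs are present.
for tool in gcc g++ cmake; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: found"
    else
        echo "$tool: missing"
    fi
done
```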

NVIDIA CUDA and Driver

This guide uses the runfile installation. Download the latest NVIDIA CUDA Toolkit runfile here. Select OS, Architecture, Distribution, Version, and Installer Type. For our purposes that would be Linux, x86_64, Ubuntu, 18.04 or 20.04, and runfile (local), respectively. You will be provided with a wget command and URL similar to

wget https://developer.download.nvidia.com/compute/cuda/11.2.2/local_installers/cuda_11.2.2_460.32.03_linux.run

Copy and paste the command from the NVIDIA website to your terminal window and run it. The command will download the run file to your computer. The download might take a while depending on your internet connection bandwidth.

Installation Steps (Minimum Steps required to make it work)

If you prefer you could follow the instructions here for the installation of the CUDA Toolkit.

Some commands require root privileges or sudo

  1. Disable the Nouveau drivers:

a. Create a file at /etc/modprobe.d/blacklist-nouveau.conf with the following contents:

blacklist nouveau
options nouveau modeset=0
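One way to write that file from the terminal is with a heredoc. In the sketch below, TARGET is a stand-in variable of my own so you can try it anywhere; the real file must be created at /etc/modprobe.d/blacklist-nouveau.conf as root:

```shell
# Write the Nouveau blacklist; point TARGET at the real path when running as root.
TARGET="${TARGET:-blacklist-nouveau.conf}"
cat > "$TARGET" <<'EOF'
blacklist nouveau
options nouveau modeset=0
EOF
cat "$TARGET"
```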

b. Regenerate the kernel initramfs:

sudo update-initramfs -u

2. Reboot the system:

sudo reboot

3. Run the installer silently to install with the default selections (implies acceptance of the EULA).

sudo sh cuda_<version>_linux.run --silent

If you encounter any error during the toolkit installation check one of the following logs and/or search the internet for the errors:

/var/log/cuda-installer.log
/var/log/nvidia-installer.log
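A guarded tail avoids errors when one of the logs hasn't been created yet. This is a hypothetical helper of mine, not part of the installer:

```shell
# Show the end of each installer log if it exists.
for log in /var/log/cuda-installer.log /var/log/nvidia-installer.log; do
    if [ -f "$log" ]; then
        echo "== $log =="
        tail -n 20 "$log"
    else
        echo "$log not present"
    fi
done
```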

4. Create an xorg.conf file to use the NVIDIA GPU for display:

sudo nvidia-xconfig

5. Reboot the system:

sudo reboot

6. Set up the development environment by modifying the PATH and LD_LIBRARY_PATH variables:

export PATH=/usr/local/cuda-11.2/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-11.2/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
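These exports only last for the current shell session. To make them persistent, one option is appending them to ~/.bashrc so every new shell picks them up (adjust the cuda-11.2 path to the version you installed):

```shell
# Append the CUDA paths to ~/.bashrc, then show what was added.
cat >> ~/.bashrc <<'EOF'
export PATH=/usr/local/cuda-11.2/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-11.2/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
EOF
tail -n 2 ~/.bashrc
```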

Confirmation

Assuming hardware requirements are met, software installed correctly, and errors were fixed (if any), the card should be ready for use.

In a terminal, run the command

nvidia-smi

You should see the Tesla K80 as two independent GPUs and information about them such as power delivery, temperature, and more. See figure 5 for reference.

Figure 5 — Example of nvidia-smi command output.
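Since nvidia-smi output requires the hardware, here is a tiny stand-in that parses abridged sample lines, adapted from my own output, to show why the device count should be two:

```shell
# The K80 is a dual-GPU card, so nvidia-smi lists two Tesla K80 entries.
sample='0  Tesla K80
1  Tesla K80'
echo "$sample" | grep -c "Tesla K80"   # prints 2
```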

Conclusion

I needed a GPU to train a model for a specific requirement, and I couldn't find or buy one of the new GPUs regardless of brand, model, or manufacturer. The Tesla K80 is a card for server-class motherboards, and there isn't much information about installing it in a consumer motherboard; some articles and forums actually recommended against it. Luckily, it is possible to install the Tesla K80 nowadays as long as your motherboard meets the requirements and you provide enough airflow. I wanted to help reduce the struggle for others in need who might consider getting this card if available.
