🔧 Setting up Coretex Node

The most important use case of the Coretex CLI is setting up a machine as a worker Node on Coretex. This Node is used for executing Workflows and Endpoints on Coretex.

Requirements

Docker

Coretex CLI uses Docker under the hood to run the worker Node. If you do not have Docker installed on your system, you can install it by following one of the tutorials for your OS.

Installing Docker Engine directly is supported only on Linux systems.

If your OS does not support Docker Engine installation directly you can install Docker Desktop instead. Docker Desktop uses a Linux VM under the hood so it might require more resources compared to a direct installation of Docker Engine.
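On Linux, a quick way to install Docker Engine is Docker's official convenience script; a sketch, assuming a distribution supported by the script (see Docker's documentation for per-distribution instructions):

```shell
# Download and run Docker's official convenience script (Linux only)
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Verify the installation
docker --version
sudo docker run --rm hello-world
```

Running the script requires root privileges and network access; on production systems you may prefer your distribution's package repository instead.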

NVIDIA Container Toolkit

This step is required only if you want to allow the worker Node to access your GPU.

If you have an NVIDIA GPU available on your machine and you want to use it in your Coretex Node you will need to have a valid GPU driver and NVIDIA Container Toolkit installed.

Installing a driver for your NVIDIA GPU depends on the OS that you are using.

To enable GPU pass-through on Windows you will need to run Docker (and Coretex CLI) through Windows Subsystem for Linux (WSL).
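Once the driver and NVIDIA Container Toolkit are installed, you can verify GPU pass-through before configuring the Node; a sketch, where the CUDA image tag is only an example:

```shell
# Check that the GPU driver is visible (this also works inside WSL)
nvidia-smi

# Check that Docker can pass the GPU through to containers
# (the CUDA image tag below is just an example; any CUDA base image works)
docker run --rm --gpus all nvidia/cuda:12.3.1-base-ubuntu22.04 nvidia-smi
```

If the second command prints the same GPU table as the first, the toolkit is set up correctly.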

Configuring the worker Node

Configuring the Node is done by using this CLI command:

coretex node config

You will be prompted to enter multiple values:

  • Node name

  • Docker image which will be used for running Coretex Node

  • Allow or deny access to the GPU (only available for NVIDIA GPUs)

Node name

This value is used as a unique identifier for the machine which you are connecting to Coretex.

Docker image

You can select either "Official Coretex image" to use an image prebuilt by Coretex, or you can inherit from that image to customize the Coretex Node to fit your needs.
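If you choose to inherit from the official image, a custom image can be built with a short Dockerfile; a minimal sketch, where the base image name "coretexai/coretex-node" and the installed package are placeholders (use the image name shown by the CLI):

```shell
# Build a custom Node image that inherits from the official Coretex image.
# NOTE: "coretexai/coretex-node:latest" is a placeholder base image name.
cat > Dockerfile <<'EOF'
FROM coretexai/coretex-node:latest
# Install any extra dependencies your Workflows need (example package only)
RUN pip install --no-cache-dir opencv-python-headless
EOF

docker build -t my-coretex-node .
```

You would then enter "my-coretex-node" as the image during `coretex node config`.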

GPU access

If you have an NVIDIA GPU and the NVIDIA Container Toolkit set up on your machine, you will be asked to grant access to that GPU during the configuration process. For more info on how to set up your GPU, see the NVIDIA Container Toolkit section above.

Starting the worker Node

Starting the worker Node is done using this command:

coretex node start

If your worker Node is configured properly, it will immediately begin pulling the Node's Docker image (if it is not cached locally) and then start that image as a Docker container.

To stop a running worker Node you can use this command:

coretex node stop

Updating the worker Node

The first time you start the worker Node, automatic updates are enabled for it. The CLI internally uses crontab to schedule a job which runs every 30 minutes and checks if there is a new version of the Docker image for the worker Node.
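You can inspect the scheduled job with crontab. The exact entry installed by the CLI may differ; the commented line below is only an illustration of what a 30-minute schedule looks like:

```shell
# List the current user's cron jobs and look for the Coretex update job
# (|| true keeps the command from failing when no crontab exists yet)
crontab -l 2>/dev/null | grep -i coretex || true

# A cron entry that runs every 30 minutes has this shape (illustrative):
# */30 * * * * coretex node update
```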

If a new version of the worker Node is available, the automatic update is performed only if the Node is not currently executing anything.

You can also manually update the worker Node by running this command:

coretex node update

Automatic updates are not available for Windows if you are running the worker Node outside of WSL.

Advanced Node configuration

The Coretex CLI also offers a more detailed configuration for experienced users. You can start the advanced configuration by adding the "--verbose" flag to the configuration command, like so:

coretex node config --verbose

Advanced configuration options

Node mode

Worker Node can be used for multiple purposes:

  • Running Workflows

  • Serving a single Endpoint (dedicated inference)

  • Serving multiple Endpoints (shared inference)

  • Automatic (Any)

Coretex CLI lets you pick the purpose you want to use your Node for. Picking the "Automatic" option lets Coretex decide what the Node will execute by tracking whether it is assigned to Endpoints or users are starting Workflows on it.

Nodes which serve Endpoints (dedicated or shared) can have a NEAR wallet attached. This allows the Node to have NEAR cryptocurrency (the amount is configurable by the user) transferred to that wallet per Endpoint invocation.

Storage path

A directory on the host machine which will be mounted into the Node's Docker container. The Node automatically caches Datasets, Models, Artifacts, Python environments, and other data when executing Runs or serving Endpoints, and stores them in this directory.

This optimizes the execution time of Runs, since they can reuse the cached environments, Datasets, etc.

Resources

You can configure the amount of hardware resources used by the Node:

  • Number of CPUs

  • RAM

  • Swap memory

  • Shared memory (shm, /dev/shm)

Using large amounts of swap is not advised: swap is disk-backed memory that the OS falls back to when it runs out of physical RAM, and it is much slower than RAM.
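These settings correspond to Docker's standard resource flags. Illustrative only, with example values — the Coretex CLI applies the equivalent limits for you when starting the Node:

```shell
# Docker resource flags corresponding to the options above:
# --cpus:        number of CPUs the container may use
# --memory:      RAM limit
# --memory-swap: total of RAM + swap (16g RAM + 2g swap in this example)
# --shm-size:    shared memory size (/dev/shm)
docker run --cpus 8 --memory 16g --memory-swap 18g --shm-size 2g IMAGE
```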

Docker access

Allows the worker Node to access Docker on the host machine, which enables the Node to run Docker images as a Workflow.

Allowing the Node to access Docker is a security risk. Only allow it if you know what you are doing.

Node init script

Path to a shell (sh) script which is executed inside the Node's Docker container before the Node is started. This allows you to install any additional dependencies which you might need for executing your Workflows.
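A minimal sketch of such an init script, assuming a Debian/Ubuntu-based Node image; the packages installed below are only examples of extra dependencies:

```shell
#!/bin/sh
# Example Node init script — runs inside the Node's container before the
# Node starts. Package names below are placeholders for your dependencies.
set -e

# System-level dependencies (assumes a Debian/Ubuntu-based image)
apt-get update
apt-get install -y --no-install-recommends ffmpeg libgl1

# Python-level dependencies
pip install --no-cache-dir opencv-python-headless
```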

Node secret

A secret passphrase which is used to generate a Master key for the Node. You must configure this if you want to run Workflows from encrypted Projects on this Node (User-Owned AI). You can read more about it here.
