If you're looking for a way to run large-scale experiments quickly and efficiently, then Coretex has exactly what you need. With Coretex, you can easily run tasks both in the cloud and on your local machine, making experimentation easier and more cost-effective.

Plus, Coretex automates the entire workflow, so you don't have to worry about manually setting up your runs. You're free to focus on your results! Furthermore, Coretex allows you to easily share your results with others. You can even collaborate on projects with your team, making the entire process much more efficient. With Coretex, large-scale experimentation has never been easier.

Scaling your runs

When executing your runs, you can choose between two types of computers (called Nodes) to perform them:

  1. Self-managed Node

  2. Coretex Cloud Node

A comparison of both approaches is given below.

Maintenance effort
As a rule of thumb, obtaining and maintaining your own compute infrastructure is a daunting task, usually handled by a team of IT administrators. If you are working solo, or you already have access to a GPU accelerator you wish to use for your tasks, refer to the Coretex CLI page for instructions on how to connect it to the Coretex Platform. Once your local Node is set up, you can use it to run any task on the platform, either from our Web UI or by creating your own Local Datasets and Runs.

For even greater flexibility, download our Official Python Library and dive into fine-grained customization of your runs.

If your runs require a lot of RAM, CPU cores, or GPU memory, you can select one of the available cloud execution queues from the drop-down when running a task.

If you select a cloud queue, you will see a detailed per-hour cost breakdown for the Node you have chosen, so you can estimate your costs.

Each cloud queue is a somewhat elastic resource: if there are no queued runs, the queue turns off all of its cloud Nodes to minimize costs. As soon as a queue receives a request to execute a run, a new Node is spawned automatically to run it. If multiple runs are queued and the queue still has unused capacity, it starts as many Nodes as needed to serve all of them.
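The elastic behavior above can be sketched as a simple scaling policy. This is an illustrative toy model only, not Coretex's actual scheduler, and the `nodes_to_start` function is a hypothetical name introduced for the sketch:

```python
def nodes_to_start(queued_runs: int, running_nodes: int, max_instances: int) -> int:
    """Toy model of an elastic queue's scale-up decision.

    A queue never runs more Nodes than its max-instances limit,
    and never more Nodes than there are queued runs to serve.
    """
    desired = min(queued_runs, max_instances)
    return max(0, desired - running_nodes)

print(nodes_to_start(queued_runs=5, running_nodes=1, max_instances=3))  # 2 more Nodes
print(nodes_to_start(queued_runs=0, running_nodes=2, max_instances=3))  # 0 (idle Nodes shut down)
```

The `max_instances` cap is what makes the queue only "somewhat" elastic: demand beyond the limit simply waits in the queue until a Node frees up.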

You are charged only for the execution time of your runs, billed at one-minute resolution.

The runtime of a run cannot be estimated reliably in advance. Take the guaranteed hourly price into account and monitor your runs so you can stop execution preemptively to control costs.
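As an illustration, the one-minute billing resolution might translate into a cost estimate like this. Both the round-up-to-the-next-minute behavior and the `estimate_cost` helper are assumptions made for this sketch, not a documented Coretex API:

```python
import math

def estimate_cost(hourly_price_usd: float, runtime_seconds: float) -> float:
    """Estimate a run's cost, assuming runtime is rounded up
    to the next whole minute and billed pro rata per hour."""
    billed_minutes = math.ceil(runtime_seconds / 60)
    return round(hourly_price_usd * billed_minutes / 60, 2)

# A 90-minute run on a $1.04/hour queue: 1.04 * 90 / 60
print(estimate_cost(1.04, 90 * 60))  # → 1.56
```

Since runtimes are hard to predict, a practical approach is to estimate with a pessimistic upper bound and stop the run manually once you have the results you need.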

These are the cloud queues Coretex currently supports, along with their specs and limits:

| Queue name | Instance specs | Cost per hour | Max instances |
| --- | --- | --- | --- |
|  |  | $0.00 USD / hour |  |
|  | A100 GPU, 16 GB VRAM, 54 GB RAM, 256 GB SSD | $1.04 USD / hour |  |
|  | A100 GPU, 64 GB VRAM, 256 GB RAM, 1 TB SSD | $10.05 USD / hour |  |
|  | A100 GPU, 740 GB VRAM, 2.2 TB RAM, 8 TB SSD | $40.00 USD / hour |  |

