Algorithm for Container Distribution across Hosts

The platform performs a smart distribution of end-user containers between available hosts based on the load level and anti-affinity rules. In general, the criteria are as follows (refer to the linked sections below for an explanation of how each one works):

  • loadMark - the least loaded host is prioritized
  • anti-affinity - nodes from the same layer should run on separate hosts (if possible)

After familiarizing yourself with how the host selection algorithm works, you can check some of the examples below.

What is LoadMark and How It is Calculated

The platform utilizes a special “loadMark” value to represent the current state of a host and determine its relative availability. With such information provided for each host, the platform can easily choose the least loaded one to allocate a new container.


The platform takes into consideration the usage of various resources to accurately calculate the loadMark: swap space, memory (RAM), disk space, and load average.

loadMark = (swapValue + memValue + diskValue + laValue) * 100

As you can see, each resource contributes to the final loadMark value. Each resource value is calculated as a percentage of the total consumed. However, the resources have different influences on the final load mark of the node: by default, the amount of used swap has the highest weight (coefficient equal to 1), followed by disk space (0.5) and RAM (0.2).

swapValue = (swapUsed / swapTotal) * 1
memValue = (memUsed / memTotal) * 0.2
diskValue = (diskUsed / diskTotal) * 0.5

If needed, the coefficient values can be manually redefined via the platform settings.

The Load Average affects performance in a different way and has no configurable coefficient. It is calculated as the average system load over the last five minutes divided by the number of "safe" threads. Two threads per CPU core are considered safe and shouldn't influence the final load mark value, so the resulting formula is:

laValue = la5min / (coreCount * 2)

If the resulting value is less than or equal to one, it is skipped altogether (i.e. laValue is treated as 0).

Let's try it out on an example host with the following parameters and load:

  • RAM = 38GiB used of the 64GiB total
  • Disk = 1.7TB used of the 2.5TB total
  • Swap = 12GB used of the 32GB total
  • LA5min = 5 (8 cores)

According to the formulas:

swapValue = (12 / 32) * 1 = 0.375
memValue = (38 / 64) * 0.2 = 0.119
diskValue = (1.7 / 2.5) * 0.5 = 0.34
laValue = 5 / (8 * 2) = 0.31, which is <= 1, so laValue = 0
loadMark = (0.375 + 0.119 + 0.34 + 0) * 100 = 83 (rounded to an integer)
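The calculation above can be sketched as a small Python function. This is an illustrative reimplementation of the documented formulas, not the platform's actual code; the function name and parameter names are our own.

```python
def load_mark(swap_used, swap_total, mem_used, mem_total,
              disk_used, disk_total, la_5min, core_count,
              swap_coeff=1.0, mem_coeff=0.2, disk_coeff=0.5):
    """Compute a host's loadMark (rounded to an integer) per the formulas above."""
    swap_value = (swap_used / swap_total) * swap_coeff
    mem_value = (mem_used / mem_total) * mem_coeff
    disk_value = (disk_used / disk_total) * disk_coeff
    # Two threads per CPU core are considered safe; values <= 1 are skipped
    la_value = la_5min / (core_count * 2)
    if la_value <= 1:
        la_value = 0
    return round((swap_value + mem_value + disk_value + la_value) * 100)

# The worked example: 12/32 GB swap, 38/64 GiB RAM, 1.7/2.5 TB disk, LA5=5 on 8 cores
print(load_mark(12, 32, 38, 64, 1.7, 2.5, 5, 8))  # 83
```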

LoadMark Critical Values and Custom Coefficients

The platform provides several jelastic.nodeselector.* system settings to configure how the target host is selected during a node creation.


To redefine the default coefficients for the swap space, memory (RAM), and disk space metrics, use the following settings:

  • jelastic.nodeselector.swap.coefficient (1 by default)
  • jelastic.nodeselector.mem.coefficient (0.2 by default)
  • jelastic.nodeselector.disk.coefficient (0.5 by default)

If needed, you can configure a critical mark for each criterion to automatically exclude hosts with a higher load from the selection:

  • jelastic.nodeselector.criticalMark (i.e. loadMark value, 300 by default)
  • jelastic.nodeselector.criticalMemPercent (80 by default)
  • jelastic.nodeselector.criticalSwapPercent (50 by default)
  • jelastic.nodeselector.criticalDiskPercent (85 by default)
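The exclusion logic can be pictured as a simple eligibility check. This is a hypothetical sketch: the constants mirror the jelastic.nodeselector.* defaults listed above, but the dictionary keys and the exact comparison behavior at the boundary are assumptions, not the platform's actual implementation.

```python
# Default critical thresholds (mirroring the jelastic.nodeselector.* settings)
CRITICAL_MARK = 300        # jelastic.nodeselector.criticalMark
CRITICAL_MEM_PERCENT = 80  # jelastic.nodeselector.criticalMemPercent
CRITICAL_SWAP_PERCENT = 50 # jelastic.nodeselector.criticalSwapPercent
CRITICAL_DISK_PERCENT = 85 # jelastic.nodeselector.criticalDiskPercent

def is_eligible(host):
    """Return True if the host stays in the selection pool.

    `host` is a dict with 'load_mark', 'mem_pct', 'swap_pct', 'disk_pct'.
    A host is excluded as soon as any metric reaches its critical value.
    """
    return (host["load_mark"] < CRITICAL_MARK
            and host["mem_pct"] < CRITICAL_MEM_PERCENT
            and host["swap_pct"] < CRITICAL_SWAP_PERCENT
            and host["disk_pct"] < CRITICAL_DISK_PERCENT)

hosts = [
    {"name": "n1", "load_mark": 83,  "mem_pct": 59, "swap_pct": 37, "disk_pct": 68},
    {"name": "n2", "load_mark": 120, "mem_pct": 45, "swap_pct": 55, "disk_pct": 40},
]
print([h["name"] for h in hosts if is_eligible(h)])  # ['n1'] - n2 exceeds the swap limit
```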

Anti-Affinity Rules

The platform automatically distributes containers from the same layer of the same environment evenly across the available hosts (i.e. physical servers) of the platform to ensure the high availability of end-user applications.


This way, if a single host fails, horizontally scaled applications can sustain the impact and continue working.

The distribution is performed across the hosts of the same Host Group (i.e. environment region) only.

Host Selection Examples

Let us guide you through a couple of examples to understand better how the host selection mechanism works.

  1. Precondition: Creating an environment with a single node.
    Result: The platform selects a host with the least loadMark to provision a container.

  2. Precondition: Creating an environment with a load balancer and two application server nodes.
    Result: The platform selects a host with the least loadMark to provision a load balancer and one application server. The second server will be created on the next least loaded host.

  3. Precondition: Creating an environment with more nodes (in a single layer) than there are hosts within one region.
    Result: The platform sorts the available hosts by loadMark and creates containers on them one by one, starting over from the least loaded host once each host has received an equal number of nodes from the layer.
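Example 3 amounts to a round-robin placement over hosts sorted by loadMark. The sketch below is a simplified illustration of that ordering, assuming hypothetical host names and ignoring the critical-threshold exclusions described earlier.

```python
def distribute(hosts, node_count):
    """Assign same-layer nodes to hosts round-robin, least loaded first.

    `hosts` is a list of (name, load_mark) tuples; returns the host name
    chosen for each of the `node_count` nodes, in creation order.
    """
    ordered = [name for name, _ in sorted(hosts, key=lambda h: h[1])]
    return [ordered[i % len(ordered)] for i in range(node_count)]

# Five same-layer nodes across three hosts (hypothetical names and marks)
hosts = [("hostA", 83), ("hostB", 120), ("hostC", 95)]
print(distribute(hosts, 5))  # ['hostA', 'hostC', 'hostB', 'hostA', 'hostC']
```

Note that hostA and hostC each receive two nodes because the round-robin wraps around after every host has received one.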

What’s next?