Storing Failed Containers
The platform can temporarily store (rather than immediately remove) environment nodes whose creation has failed, including failures during redeploy. Keeping such containers simplifies analysis of the problem, making it easier to fix and to avoid similar issues in the future. The failed containers are properly unbound from the user account, so the client's balance is not charged unfairly.
To enable this functionality on the platform, the following two system settings are used:
- jelastic.node.failed.persist.enabled - enables (true) or disables (false, the default) storing of failed nodes
- jelastic.node.failed.persist.days - sets the number of days (7 by default) to keep failed containers before they are automatically removed by the qjob.delete_illegal_containers.cron_schedule job
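The retention logic these two settings imply can be sketched as follows. This is a hypothetical illustration of the check the cleanup job could perform; the constant names and the `should_delete` helper are assumptions, not actual platform code.

```python
from datetime import datetime, timedelta

# Assumed mirrors of the two system settings described above
FAILED_PERSIST_ENABLED = True   # jelastic.node.failed.persist.enabled
FAILED_PERSIST_DAYS = 7         # jelastic.node.failed.persist.days

def should_delete(failed_at: datetime, now: datetime) -> bool:
    """Return True when a stored failed container is due for removal."""
    if not FAILED_PERSIST_ENABLED:
        # Storing disabled: failed nodes are removed right away
        return True
    return now - failed_at > timedelta(days=FAILED_PERSIST_DAYS)
```

For example, with the default 7-day quota, a container that failed 9 days ago would be removed, while one that failed 5 days ago would still be kept.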
Redeployment Backups (Deprecated)
Redeployment with backups was used in PaaS versions 5.6 - 5.7.6 and has been deprecated since 5.7.7.
In addition, you can keep backups of the original container even after a successful redeploy. If necessary, this allows easily rolling the customer back to the previous version. The following quotas control this behavior:
- redeploy.backup.count - the number of backups per node (1 by default) to keep after a successful redeploy; only the latest backups are saved
- redeploy.backup.persist.days - the number of days (7 by default) to keep backups after a container redeploy; the qjob.delete_illegal_containers.cron_schedule job removes outdated backups hourly
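The interplay of the two quotas above can be sketched as a pruning step: drop backups older than the persistence window, then keep only the newest ones up to the count limit. This is a hedged sketch with assumed names; the platform's actual hourly job may implement this differently.

```python
from datetime import datetime, timedelta

# Assumed mirrors of the two quotas described above
BACKUP_COUNT = 1   # redeploy.backup.count
PERSIST_DAYS = 7   # redeploy.backup.persist.days

def prune(backup_times: list[datetime], now: datetime) -> list[datetime]:
    """Keep only the newest BACKUP_COUNT backups still within PERSIST_DAYS."""
    fresh = [b for b in backup_times if now - b <= timedelta(days=PERSIST_DAYS)]
    # Newest first, truncated to the per-node backup count
    return sorted(fresh, reverse=True)[:BACKUP_COUNT]
```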
To restore a container from a redeployment backup, the following two API methods are used:
- GetBackups - returns a list of backups assigned to the specified node ID
- RestoreBackup - substitutes the specified container with the required backup
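A typical flow would be to list a node's backups with GetBackups and then pass a chosen backup ID to RestoreBackup. The sketch below only builds the HTTP requests; the base URL, endpoint paths, and parameter names (session, nodeId, backupId) are assumptions for illustration, not the documented API signatures.

```python
# Hypothetical API base URL; replace with your platform's actual endpoint
BASE = "https://app.platform.example/1.0/environment/control/rest"

def get_backups_request(session: str, node_id: int) -> tuple[str, dict]:
    """Build the (url, params) pair for a GetBackups call."""
    return f"{BASE}/getbackups", {"session": session, "nodeId": node_id}

def restore_backup_request(session: str, node_id: int, backup_id: int) -> tuple[str, dict]:
    """Build the (url, params) pair for a RestoreBackup call."""
    return f"{BASE}/restorebackup", {
        "session": session,
        "nodeId": node_id,
        "backupId": backup_id,
    }
```

These pairs could then be sent with any HTTP client; the response of GetBackups would supply the backup ID used in the RestoreBackup call.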