We were running e2e tests on Cloud Build. In Cloud Build we use a docker-compose.yml.
I learned that it's possible to launch those containers in GitLab CI using services. This article here was really useful for taking the first steps with this approach.
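As a minimal sketch of the services approach (the job name, image, and the postgres service below are placeholders for illustration, not our actual stack): each service runs as an extra container alongside the job, reachable by its alias.

```yaml
e2e:
  image: node:18
  services:
    - name: postgres:15
      alias: db            # reachable from the job container as "db"
  variables:
    POSTGRES_PASSWORD: example
  script:
    - npm ci
    - npm run e2e
```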
I've failed to complete the task because the worker nodes of our k8s cluster don't have enough memory. Maybe if we had worker nodes with 8Gi of RAM, it would be possible.
TODO: write a convincing message that we should have more powerful machines in our node pool. We could then decrease the number of nodes, so the cost would stay the same.
Things I learned along the way:
- Translating the docker-compose.yml to .gitlab-ci.yml: the .gitlab-ci.yml has its peculiarities for passing arguments.
- I had to change the values.yaml of the gitlab-runner to allow overwriting memory requests/limits.
- I struggled with the Elasticsearch configs, as docker-compose.yml accepts things like:
environment:
  - node.name=elastic
  - discovery.type=single-node
  - ...
But in .gitlab-ci.yml it's not possible to define variable names containing a dot. I found the solution here.
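For illustration, one way to sidestep dotted variable names is to pass the settings as -E flags through the service's command field, which GitLab CI services support (the image path and tag below are illustrative, not necessarily what we used):

```yaml
services:
  - name: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    alias: elastic
    # -E flags replace the dotted environment variables from docker-compose
    command: ["bin/elasticsearch", "-Enode.name=elastic", "-Ediscovery.type=single-node"]
```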
Pay special attention to the very last sentence of that section:
If the max overwrite has not been set for a resource, the variable is ignored.
So, I had to add the *memory_[request|limit]_overwrite_max_allowed settings in the values.yaml of the gitlab-runner (related documentation).
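A sketch of what that section of the gitlab-runner Helm chart values.yaml can look like (the 2Gi caps are made-up values, not our actual configuration):

```yaml
runners:
  config: |
    [[runners]]
      [runners.kubernetes]
        # Maximum values a job may set via KUBERNETES_MEMORY_REQUEST/LIMIT
        memory_request_overwrite_max_allowed = "2Gi"
        memory_limit_overwrite_max_allowed = "2Gi"
        # Same caps for service containers (KUBERNETES_SERVICE_MEMORY_*)
        service_memory_request_overwrite_max_allowed = "2Gi"
        service_memory_limit_overwrite_max_allowed = "2Gi"
```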
With this configuration we can overwrite the memory request/limit in the .gitlab-ci.yml
using variables like:
variables:
KUBERNETES_SERVICE_MEMORY_REQUEST: 400Mi
KUBERNETES_SERVICE_MEMORY_LIMIT: 400Mi
KUBERNETES_MEMORY_REQUEST: 1100Mi
KUBERNETES_MEMORY_LIMIT: 1100Mi
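Putting the pieces together, a hedged sketch of a full job (the job name, job image, Elasticsearch image tag, and script are placeholders; only the variable names and the services/command mechanism come from the steps above):

```yaml
e2e:
  image: node:18
  variables:
    # Per-job memory overwrites; capped by the *_overwrite_max_allowed
    # settings in the runner's values.yaml
    KUBERNETES_SERVICE_MEMORY_REQUEST: 400Mi
    KUBERNETES_SERVICE_MEMORY_LIMIT: 400Mi
    KUBERNETES_MEMORY_REQUEST: 1100Mi
    KUBERNETES_MEMORY_LIMIT: 1100Mi
  services:
    - name: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
      alias: elastic
      command: ["bin/elasticsearch", "-Ediscovery.type=single-node"]
  script:
    - npm run e2e
```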
Even after going through all these 👆 obstacles, in the end I wasn't able to run the e2e tests on the gitlab-runner because the k8s worker node doesn't have enough memory to run this setup 😔