Hi @<1576381444509405184:profile|ManiacalLizard2> , I think this is the env var you're looking for
CLEARML_AGENT_SKIP_PIP_VENV_INSTALL
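A minimal sketch of how it's typically set (the interpreter path is just an example - point it at whichever python the agent should reuse instead of creating a virtualenv):

```shell
# Tell the agent to skip creating a virtualenv and use an existing
# interpreter instead (example path - adjust to your environment)
export CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=/usr/bin/python3
# then start the agent as usual, e.g.:
# clearml-agent daemon --queue default
```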
DilapidatedDucks58 , I think this is what you're looking for
https://github.com/allegroai/clearml/blob/master/docs/clearml.conf#L69
What are the Elasticsearch, MongoDB, and apiserver versions in the docker compose? Backup/restore will only work in this scenario when they are exactly the same between the two systems.
When you say local machine, do you mean you're trying to access the UI / backend from the same machine you're running the server on?
Maybe @<1523701087100473344:profile|SuccessfulKoala55> or @<1523701827080556544:profile|JuicyFox94> might have some insight into this 🙂
Hi @<1691620905270120448:profile|LooseReindeer62> , I would suggest a machine that can run dockers
Hi @<1533620191232004096:profile|NuttyLobster9> , first, you can insert it using the bash startup script.
Also, I think you can add this repo to the requirements using the following format:
git+
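For example, a line in the requirements would look something like this (the repository URL is hypothetical - substitute your own repo):

```
git+https://github.com/example-org/example-repo.git
```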
Hi RattyLouse61 🙂
Are these two different users using two sets of different credentials?
That's the problem. ClearML has to detect the uncommitted changes somehow. This is done while the code itself is running, or when running with execute_remotely(). Otherwise, someone has to do a git diff and push it into the task object (database).
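Roughly, the "do a git diff" step looks like this (a sketch, not ClearML's actual implementation):

```python
import subprocess

# Capture uncommitted changes by diffing the working tree against HEAD.
# In a clean (or non-git) working directory this is simply an empty string.
diff = subprocess.run(
    ["git", "diff", "HEAD"],
    capture_output=True,
    text=True,
).stdout
print(f"captured {len(diff.splitlines())} changed lines")
```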
I would suggest the website 🙂
Can you connect directly to the instance? If so, please check how large /opt/clearml is on the machine and then see the folder distribution
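For example (paths assume the default docker-compose layout under /opt/clearml):

```shell
# Total size of the ClearML data root, then a per-folder breakdown,
# sorted smallest to largest. Falls back gracefully if the path
# doesn't exist on this machine.
du -sh /opt/clearml 2>/dev/null || echo "/opt/clearml not found"
du -sh /opt/clearml/data/* 2>/dev/null | sort -h
```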
Also a small clarification:
ClearML doesn't build the docker image itself. You need to have a docker image already built to be used by ClearML
DilapidatedDucks58 , regarding the internal workings - MongoDB: all experiment objects are saved there. Elasticsearch: console logs, debug samples, and scalars are all saved there. Redis: some stuff regarding agents, I think.
Do you mean you don't have a files server running? You can technically circumvent this by overriding api.files_server in clearml.conf and setting it to your default storage.
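For example, in clearml.conf (the bucket path is just an example - use your own storage):

```
api {
    # Redirect artifact / debug-sample uploads to your own storage
    # instead of a dedicated fileserver
    files_server: "s3://my-bucket/clearml-files"
}
```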
Hi DepravedCoyote18 , can you please elaborate a bit on what the current state is now and how you would like it to be?
Hi @<1529271085315395584:profile|AmusedCat74> , what are you trying to do in code? What version of clearml are you using?
Hi @<1576381444509405184:profile|ManiacalLizard2> , it will be part of the Task object. It should be part of the task.data.runtime attribute.
CluelessElephant89 , Hi!
In the UI, under the execution tab there is a 'Container' section.
There you can configure all of those 🙂
You need the docker image to be available on Docker Hub (or another registry the agent can reach) so the agent will be able to pull it
Hi @<1590514584836378624:profile|AmiableSeaturtle81> , you need to setup your s3 key/secret in clearml.conf
I suggest following this documentation - None
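In clearml.conf it would look something like this (values are placeholders - fill in your own credentials and region):

```
sdk {
    aws {
        s3 {
            # Example placeholders - replace with your own values
            key: "<access-key-id>"
            secret: "<secret-access-key>"
            region: "us-east-1"
        }
    }
}
```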
Hi @<1587615463670550528:profile|DepravedDolphin12> , can you please add the full log?
Hi @<1597762318140182528:profile|EnchantingPenguin77> , can you please add the full log?
In the UI, you can edit the docker image you want to use. You can then choose an image with the needed python pre-installed
Please share the full log of the EC2 instance, like the one you provided earlier, but from an instance launched after you've added the init script I mentioned to the autoscaler (stop the running one, clone it, and make the change)
Hi @<1761199244808556544:profile|SarcasticHare65> , it looks like the failure results from it not finding a queue called 'services'. Try creating one in the web UI
Hi @<1760474471606521856:profile|UptightMoth89> , what if you just run the pipeline without running it locally, and then enqueue it (assuming you have no uncommitted changes)?
Also try adding the following to the bash init script
python -m pip install -U clearml-agent
and please add the log of the machine