JitteryCoyote63 , reproduces on my side as well 🙂
Yeah, this is a lock which is always in our cache. I can't figure out why it's there, but when I delete the lock and the other files, they always reappear when I run a new ClearML task.
Is the lock something that occurs on your machine regardless of ClearML?
Hi @<1590514584836378624:profile|AmiableSeaturtle81> , I think these are the env variables you're looking for:
CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL
CLEARML_AGENT_SKIP_PIP_VENV_INSTALL
CLEARML_AGENT_FORCE_CODE_DIR
You don't need to have the services queue, but you need to enqueue the controller into some queue if not running locally. I think this is what you're looking for.
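For illustration, a minimal sketch of enqueuing a pipeline controller into an arbitrary queue (the project/task names and the "pipelines" queue are placeholders, not anything specific to your setup):
```python
from clearml import PipelineController

# placeholders - replace with your own project / base task names
pipe = PipelineController(name="my_pipeline", project="examples", version="1.0.0")
pipe.add_step(
    name="step_1",
    base_task_project="examples",
    base_task_name="step 1 task",
)

# the controller can be enqueued to any queue an agent is listening on;
# it does not have to be the dedicated services queue
pipe.start(queue="pipelines")
```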
Hi @<1587615463670550528:profile|DepravedDolphin12> , your cache location is defined in your clearml.conf. You can see where it points and delete that folder 🙂
@<1610083503607648256:profile|DiminutiveToad80> , can you give a stand-alone code example for such a pipeline that reproduces the issue? Each task should have its own requirements logged. What is failing, the controller or the individual steps?
Hi SuperiorCockroach75 , can you please elaborate? What is taking long to execute?
CurvedHedgehog15 , isn't the original experiment you selected to run against the basic benchmark?
Hi @<1780043419314294784:profile|LargeHamster21> , are you running multiple instances of the agent on the same machine? If that is the case, can you elaborate on the use case?
Hi @<1523707653782507520:profile|MelancholyElk85> , I assume you're running remotely?
@<1797800418953138176:profile|ScrawnyCrocodile51> , you can edit the hyperparameters when a task is in draft mode
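For example, a minimal sketch (the task id, parameter names and queue are placeholders):
```python
from clearml import Task

# placeholder id - must point to a task that is still in draft mode
task = Task.get_task(task_id="<draft_task_id>")

# overwrite hyperparameters while the task is a draft
task.set_parameters({"General/learning_rate": 0.001, "General/batch_size": 64})

# then enqueue it for execution
Task.enqueue(task, queue_name="default")
```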
Hi @<1639799308809146368:profile|TritePigeon86> , can you please elaborate? What do you mean by external way?
DisturbedElk70 Hi 🙂
Can you elaborate?
SubstantialElk6 , do you mean compiling them into a language or calling certain functions from the wheel?
Hi @<1562610699555835904:profile|VirtuousHedgehong97> , I think you can mount a shared folder between the EC2 instances to use as cache. ClearML hashes data, so it knows whether what it has in its cache is relevant or not.
Are you sure that the file is on the server? Can you access it?
Oh, I misunderstood. You mean you're using app.clear.ml ?
Hi @<1585078763312386048:profile|ArrogantButterfly10> , you can fetch a task using its id. Then, with the task object in hand, you can find the model in the artifacts section. For ease of use I suggest playing with dir(task) in Python
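Something along these lines (the task id is a placeholder; adjust depending on whether the model was registered as an output model or uploaded as an artifact):
```python
from clearml import Task

task = Task.get_task(task_id="<your_task_id>")  # placeholder id

# output models registered on the task
for model in task.models["output"]:
    print(model.name, model.url)

# artifacts uploaded on the task
for name, artifact in task.artifacts.items():
    print(name, artifact.url)

# dir(task) is handy for discovering what else the object exposes
```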
Are you running it inside a docker yourself or is it run via the agent?
In compare view you need to switch to 'Last Values' to see these scalars. Please see screenshot
Hi RoughTiger69 ,
Have you considered maybe cron jobs or using the task scheduler?
Another option is running a dedicated agent just for that - I'm guessing you can make it require very little compute power
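If the task scheduler route fits, a minimal sketch (the task id, queue names and schedule are placeholders; check the TaskScheduler docs for the exact schedule semantics):
```python
from clearml.automation import TaskScheduler

scheduler = TaskScheduler()
scheduler.add_task(
    schedule_task_id="<task_id_to_clone_and_run>",  # placeholder
    queue="default",       # queue the cloned task will be enqueued to
    hour=3, minute=0,      # intended: run daily at 03:00
    recurring=True,
)

# the scheduler itself runs as a lightweight task on an agent
scheduler.start_remotely(queue="services")
```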
Can you please provide a snippet of how the debug images are saved? An example URL would also be useful :)
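For reference, debug images are usually reported through the Logger, roughly like this (the title/series and the random image are just placeholders):
```python
import numpy as np
from clearml import Logger

# placeholder image - in practice this would be your model's debug output
img = np.random.randint(0, 255, (256, 256, 3), dtype=np.uint8)

Logger.current_logger().report_image(
    title="debug samples",
    series="random sample",
    iteration=0,
    image=img,
)
```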
What version of clearml and clearml-agent are you using, what OS? Can you add the line you're running for the agent?
I think it tries to get the latest one. Are you using the agent in docker mode? You can also control this via clearml.conf with agent.cuda_version
Yes, for an enqueued task to run you require an agent to run against the task 🙂
Hi @<1523703397830627328:profile|CrookedMonkey33> , not sure I follow. Can you please elaborate more on the specific use case?
Currently you can add plots to the preview section of a dataset
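For example, roughly like this (the dataset name/project and the sample table are placeholders):
```python
import pandas as pd
from clearml import Dataset

# placeholders - use your own dataset project / name
ds = Dataset.create(dataset_name="my_dataset", dataset_project="examples")
ds.add_files("data/")

# anything reported through the dataset's logger shows up in its preview section
preview = pd.DataFrame({"feature": [1, 2, 3], "label": [0, 1, 0]})
ds.get_logger().report_table(title="Sample rows", series="head", table_plot=preview)

ds.upload()
ds.finalize()
```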