Shouldn't be 🙂
Did you notice any difference?
Hi @<1523722618576834560:profile|ShaggyElk85> , please see here: None
I think these are the ones you're looking for:
CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL
CLEARML_AGENT_SKIP_PIP_VENV_INSTALL
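For example, something along these lines on the agent machine, assuming you start the agent from a small python wrapper (a rough sketch; the queue name and python path are placeholders, and exact values can differ per clearml-agent version, so check the docs):
```
import os
import subprocess

# skip installing a python environment entirely / reuse an existing interpreter
os.environ["CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL"] = "1"
os.environ["CLEARML_AGENT_SKIP_PIP_VENV_INSTALL"] = "/usr/bin/python3"  # python to use instead of a fresh venv

# launch the agent listening on the "default" queue
subprocess.run(["clearml-agent", "daemon", "--queue", "default"], check=True)
```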
Regarding your second question, I think this is possible only when talking about submodules
Hi @<1523704461418041344:profile|EnormousCormorant39> , on the agent. Although I think you could even pass them as env variables if you're running in docker mode
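In docker mode, one way (a sketch, adjust to your setup; the variable value here is a placeholder) is to pass them into the containers via the agent's clearml.conf:
```
agent {
    # extra arguments passed to every docker container the agent spins up
    extra_docker_arguments: ["-e", "CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=/usr/bin/python3"]
}
```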
PanickyMoth78 , if I'm not mistaken that should be the mechanism. I'll look into that 🙂
I think that's about it 🙂
CrookedWalrus33 , Hi 🙂
Can you please provide which packages are missing?
Also, what version of clearml & clearml-agent are you using?
In the open source version you don't have users & groups; user management is done via fixed users - None
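For reference, a rough sketch of what the fixed users setup looks like (usernames/passwords are placeholders; on a docker-compose install this usually goes in the apiserver config, e.g. /opt/clearml/config/apiserver.conf):
```
auth {
    fixed_users {
        enabled: true
        users: [
            { username: "jane", password: "12345678", name: "Jane Doe" },
            { username: "john", password: "12345678", name: "John Doe" }
        ]
    }
}
```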
What errors are you seeing in the apiserver pod?
Hi @<1524560082761682944:profile|MammothParrot39> , can you please elaborate on exactly what you did?
Did you go into the task view of the pipeline step and change its name, but then back in the pipelines view the name didn't update?
AbruptCow41, can you please elaborate? You want to move files around to some common folder and then, at the end, just create the dataset from that folder?
Is there a vital reason why you want to keep the two accounts separate when they run on the same machine?
Also, what if you try aligning all the cache folders for both configuration files to use the same folders?
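For example, pointing both clearml.conf files at the same cache (a sketch; the path is a placeholder):
```
sdk {
    storage {
        cache {
            # same base path in both configuration files
            default_base_path: "/opt/clearml/shared_cache"
        }
    }
}
```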
Is this on app.clear.ml or a self-hosted server?
WackyRabbit7 I don't believe there is currently a 'children' section for a task. You could try keeping track of the child tasks yourself so you can access them later.
One option is add_pipeline_tags(True)
this should mark all the child tasks with a tag of the parent task
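Something along these lines (a sketch; project/task names are placeholders):
```
from clearml import PipelineController

pipe = PipelineController(
    name="my_pipeline",
    project="examples",
    version="1.0.0",
    add_pipeline_tags=True,  # tag every step's task with the parent pipeline's tag
)
pipe.add_step(
    name="step_1",
    base_task_project="examples",
    base_task_name="step 1 task",
)
# start() launches the pipeline (by default the controller runs on the services queue)
pipe.start()
```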
Yeah, I missed that you defined the storage with --storage
Please try adding the port there as well
Can you share a screenshot of the workers page?
Hi SparklingHedgehong28 , can you please elaborate on the steps you take during the process + how you connect your config to the task?
@<1526734383564722176:profile|BoredBat47> , that could indeed be an issue. If the server is still running, things could still be written to the databases, creating conflicts
Might make life easier 🙂
I think there might be some option, let me check if I can find something 🙂
Hi CostlyElephant1, where is the data stored? On the fileserver, an S3 bucket, or some other solution?
Hello MotionlessCoral18 ,
Can you please add a log with the failure?
You can mix and match various buckets in your ~/clearml.conf
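Roughly like this (a sketch; the keys, bucket names and the MinIO host are placeholders):
```
sdk {
    aws {
        s3 {
            # default credentials
            key: "DEFAULT_ACCESS_KEY"
            secret: "DEFAULT_SECRET"
            credentials: [
                {
                    bucket: "first-bucket"
                    key: "KEY_FOR_FIRST_BUCKET"
                    secret: "SECRET_FOR_FIRST_BUCKET"
                },
                {
                    # e.g. a MinIO / non-AWS endpoint
                    host: "minio.example.com:9000"
                    bucket: "second-bucket"
                    key: "KEY_FOR_SECOND_BUCKET"
                    secret: "SECRET_FOR_SECOND_BUCKET"
                    secure: false
                }
            ]
        }
    }
}
```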
Do you have resource monitoring on that machine? Any chance that something ran out of space, memory, or CPU?
AbruptCow41 , you can already do this, just add the entire folder 🙂
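For example (a sketch; the folder and project/dataset names are placeholders):
```
from clearml import Dataset

ds = Dataset.create(dataset_project="examples", dataset_name="my_dataset")
ds.add_files(path="./my_data")  # adds the entire folder recursively
ds.upload()
ds.finalize()
```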
Hi @<1736194481398484992:profile|MoodySeaurchin62> , how are you currently reporting it? Are you reporting iterations?
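In case it helps, explicit scalar reporting with an iteration looks roughly like this (project/task names and values are placeholders):
```
from clearml import Task

task = Task.init(project_name="examples", task_name="scalar reporting")
logger = task.get_logger()

for iteration in range(100):
    loss = 1.0 / (iteration + 1)  # dummy value for illustration
    # the iteration argument is what lets the UI plot the scalar over time
    logger.report_scalar(title="loss", series="train", value=loss, iteration=iteration)
```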
Hi @<1534496192929468416:profile|EagerGiraffe33> , what if you try to put a specific version of pytorch you've tested on your remote environment in the requirements section of the cloned task?
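Something like this, assuming a recent clearml SDK that has Task.set_packages (the task ID, queue name and torch version are placeholders):
```
from clearml import Task

# clone the template task and pin the torch version before enqueueing it
template = Task.get_task(task_id="<source_task_id>")
cloned = Task.clone(source_task=template, name="pinned torch version")
cloned.set_packages(["torch==2.1.0"])  # overrides the requirements / installed packages section
Task.enqueue(cloned, queue_name="default")
```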