AgitatedDove14 This looks awesome! Unfortunately this would require a lot of changes in my current code, for that project I found a workaround 🙂 But I will surely use it for the next pipelines I will build!
Sure, just sent you a screenshot in PM
Now I'm curious, what did you end up doing?
In my repo I maintain a bash script that sets up a separate Python env. Then in my task I spawn a subprocess without passing the parent's environment variables, so that the subprocess properly picks up the separate Python env
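The pattern is roughly this (a minimal sketch; `sys.executable` here is just a stand-in for the separate env's interpreter, which in my setup is created by the bash script):

```python
import subprocess
import sys

# Pass a minimal environment instead of inheriting os.environ, so the child
# does not see PYTHONPATH / VIRTUAL_ENV from the parent and resolves
# packages from its own interpreter's environment instead.
clean_env = {"PATH": "/usr/bin:/bin"}

# sys.executable is a stand-in; point this at the separate env's python
result = subprocess.run(
    [sys.executable, "-c", "import os; print(sorted(os.environ))"],
    env=clean_env,
    capture_output=True,
    text=True,
)
print(result.stdout.strip())
```

The key point is `env=clean_env`: the child gets only what you explicitly pass, nothing inherited.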
Thanks SuccessfulKoala55 for the answer! One followup question:
When I specify:
`agent.package_manager.pip_version: '==20.2.3'`
in the trains.conf, I get:
`trains_agent: ERROR: Failed parsing /home/machine1/trains.conf (ParseException): Expected end of text, found '=' (at char 326), (line:7, col:37)`
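For what it's worth, my guess (an assumption on my side, since trains.conf uses HOCON syntax) is that single quotes aren't valid string delimiters there, so the value would need double quotes:

```
agent.package_manager.pip_version: "==20.2.3"
```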
Nevermind, I just saw `report_matplotlib_figure`
🎉
sorry, the clearml-session. The error is the one I shared at the beginning of this thread
Sure, it’s because of a very annoying bug that I shared in this https://clearml.slack.com/archives/CTK20V944/p1648647503942759 thread, which I haven’t been able to solve so far.
I’m not sure you can downgrade that easily ...
Yeah, that’s what I thought. That’s a bit of a pain for me now; I hope I can find a way to fix the bug somehow
(I didn't have this problem so far because I was using SSH keys globally, but now I want to switch to git auth using a Personal Access Token for security reasons)
you mean to run it on the CI machine?
yes
That should not happen, no? Maybe there is a bug that needs fixing in clearml-agent?
It's just to test that the logic executed in the `if not Task.running_locally():` branch is correct
I see 3 agents in the "Workers" tab
I will probably just use an absolute path everywhere to be robust against different machine user accounts: /home/user/trains.conf
They are, but this doesn’t work - I guess it’s because temporary IAM access credentials come with an extra session token that should be passed as well, but there is no such option in the web UI, right?
The cloning is done in another task, which has the argv parameters I want the cloned task to inherit from
Note: Could be related to https://github.com/allegroai/clearml/issues/790 , not sure
I made sure before deleting the old index that the number of docs matched
I’d like to move to a setup where I don’t need these tricks
Awesome! (Broken link in migration guide, step 3: https://allegro.ai/docs/deploying_trains/trains_server_es7_migration/ )
with my hack yes, without, no
it would be nice if `Task.connect_configuration` could support custom YAML file readers for me
Yes I agree, but I get a strange error when using dataloaders:
`RuntimeError: [enforce fail at context_gpu.cu:323] error == cudaSuccess. 3 vs 0. Error at: /pytorch/caffe2/core/context_gpu.cu:323: initialization error`
only when I use `num_workers > 0`
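For context, that CUDA initialization error typically shows up when worker processes are forked after CUDA has already been initialized in the parent. One workaround I've read about (an assumption on my side, not something confirmed in this thread) is to use the "spawn" start method so each worker starts from a fresh interpreter instead of a forked copy. A generic sketch of the idea:

```python
import multiprocessing as mp

def worker(x):
    # A spawned child starts a fresh interpreter, so it does not inherit a
    # half-initialized CUDA context from the parent the way a forked one does.
    return x * x

if __name__ == "__main__":
    # use 'spawn' instead of the default 'fork' on Linux
    ctx = mp.get_context("spawn")
    with ctx.Pool(2) as pool:
        print(pool.map(worker, [0, 1, 2]))  # prints [0, 1, 4]
```

In PyTorch specifically, `DataLoader` accepts a `multiprocessing_context` argument that can be set to `"spawn"` for the same effect.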
In my github action, I should just have a dummy clearml server and run the task there, connecting to this dummy clearml server
That would be awesome, yes, only from my side I have 0 knowledge of the pip codebase 😄
I checked the server code diffs between 1.1.0 (when it was working) and 1.2.0 (when the bug appeared) and I saw many relevant changes that could have introduced this bug > https://github.com/allegroai/clearml-server/compare/1.1.1...1.2.0
Nice, the preview param will do 🙂 btw, I love the new docs layout!