Try to set this line in your clearml.conf to true:
https://github.com/allegroai/clearml/blob/6e6271fb91f2aeb2aa7a13c6d07d4e635baaa670/docs/clearml.conf#L177
No, I mean actually compare using the UI, maybe the arguments are different, or the "installed packages" differ
Hi @<1542316991337992192:profile|AverageMoth57>
Not sure I follow what you have in mind regarding the Gerrit integration None
Sounds interesting ...
wdyt?
Hi @<1523701066867150848:profile|JitteryCoyote63>
Could you please push the code for that version to GitHub?
oh seems like it is not synced, thank you for noticing (it will be taken care of immediately)
Regarding the issue:
Look at the attached images
None does not contain a specific wheel for CUDA 11.7 on x86, they use the default pip one

It's only active when you have one "main" task.
You can also check the continue_last_task argument in Task.init , it might be a good fit for your scenario
https://allegro.ai/docs/task.html#trains.task.Task.init
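Roughly something like this (a sketch, the project/task names here are just placeholders):
from clearml import Task

# continue_last_task=True resumes reporting into the last task with the same
# project/name instead of creating a new one
task = Task.init(
    project_name="examples",
    task_name="my_experiment",
    continue_last_task=True,
)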
Thanks OutrageousGrasshopper93
I will test it "!".
By the way, is the "!" in the project or the Task name?
I hope you can do this without containers.
I think you should be fine, the only caveat is CUDA drivers, nothing we can do about that ...
I still see things being installed when the experiment starts. Why does that happen?
This only means no new venv is created, it basically means installing into the "default" Python env (usually whatever is preset inside the docker)
Make sense ?
Why would you skip the entire Python env setup? Did you turn on the venvs cache? (basically caching the entire venv, even when running inside a container)
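If not, it is controlled from the agent section of clearml.conf, something like this (a sketch, uncommenting/adjusting the defaults):
agent {
    venvs_cache: {
        max_entries: 10
        free_space_threshold_gb: 2.0
        # set a path to enable the venv cache
        path: ~/.clearml/venvs-cache
    }
}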
I double-checked the code, it's always being passed
well I do not think you set your PyTorch Lightning to use CUDA:
GPU available: True (cuda), used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
/code/.venv/lib/python3.9/site-packages/lightning/pytorch/trainer/setup.py:176: PossibleUserWarning: GPU available but not used. Set `accelerator` and `devices` using `Trainer(accelerator='gpu', devices=1)`.
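Based on that warning, something along these lines should move training onto the GPU (a sketch, assuming the lightning>=2.0 import path shown in your traceback):
from lightning.pytorch import Trainer

# explicitly request a single GPU, as the warning suggests
trainer = Trainer(accelerator="gpu", devices=1)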
It seems something is wrong with the server itself...
I'm helping train my friend on ClearML to assist with his astrophysics research,
if that's the case, what you can do is use the agent inside your sbatch script,
(fully open source). This means the sbatch script becomes "clearml-agent execute --id <task_id_here>", which will set up the environment, monitor the job, and still allow you to launch it from SLURM, wdyt?
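As a rough sketch (the SBATCH directives are placeholders for whatever your cluster needs):
#!/bin/bash
#SBATCH --job-name=clearml-task
#SBATCH --gres=gpu:1

# the agent recreates the environment and runs the already-created task
clearml-agent execute --id <task_id_here>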
Hi MinuteGiraffe30
Thank you so much for your awesome product!
s address 10.68.167.10. I am able to send requests from all other virtual machines on the server to the address 10.68.167.10:8008. However, when I try to do this from my own computer connected to the corporate network via VPN, it fails to connect to 8008.
I'm assuming there is a firewall on the VPN connection itself (i.e. the VPN gateway) that blocks port 8008, as you already tried curl to 8008 is...
What's interesting to me (as a ClearML newbie) is it's clearly compiling that wheel using my host machine (macOS).
Hmm kind of, and kind of not.
If you take a look at the Tasks created (regardless of how they are created: pipeline, manually, etc.), you have a list of Python packages required by the code, as they were detected at runtime (i.e. when the code was first executed, on the development machine). When creating a Pipeline controller (runner), the pipeline Tasks are just lists, ...
Hi PanickyMoth78, an RC with a fix is out, let me know if it works (notice you can now set max_workers from the CLI or the Dataset functions): pip install clearml==1.8.1rc1
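For example (a sketch, assuming the new max_workers argument on Dataset.upload; the dataset/project names are placeholders):
from clearml import Dataset

ds = Dataset.create(dataset_name="my_dataset", dataset_project="examples")
ds.add_files("./data")
ds.upload(max_workers=4)  # limit the number of parallel upload workers
ds.finalize()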
and I have no way to save those as clearml artifacts
You could do (at the end of the code): task.upload_artifact('profiler', Path('./fil-result/')) wdyt?
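A self-contained version of the same idea (assuming ./fil-result/ is where the profiler writes its output):
from pathlib import Path
from clearml import Task

task = Task.current_task()  # or the handle returned by Task.init()
# upload the whole profiler output folder as a single artifact
task.upload_artifact('profiler', Path('./fil-result/'))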