@<1529271085315395584:profile|AmusedCat74> , what happens if you try to run it with clearml 1.8.0?
Hi UnevenDolphin73 , when you say 'pipeline itself', do you mean the controller? The controller is only in charge of orchestrating the components. Let's say you have a pipeline with many parts; if you have a single global environment, it will force a lot of redundant installations throughout the pipeline. What is your use case?
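In case it helps, here is a rough sketch of per-component environments with the PipelineDecorator (the project, step names and package lists are just placeholders):
```python
from clearml import PipelineDecorator

# Each component declares only the packages it actually needs,
# so there is no single global environment forcing redundant installs.
@PipelineDecorator.component(return_values=["data"], packages=["pandas"])
def load_data():
    import pandas as pd
    return pd.DataFrame({"value": [1, 2, 3]})

@PipelineDecorator.component(return_values=["total"], packages=["pandas"])
def sum_values(data):
    return int(data["value"].sum())

@PipelineDecorator.pipeline(name="example pipeline", project="examples", version="0.1")
def run_pipeline():
    data = load_data()
    print(sum_values(data))

if __name__ == "__main__":
    PipelineDecorator.run_locally()
    run_pipeline()
```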
Great to hear, and now you also have the latest version 🙂
By local storage, do you mean you deleted it from the cache folder it was downloaded to?
Hmmm, interesting. Going by the byte count it looks like about 2GB. What type is the file?
Does any exit code appear? What is the status message and status reason in the 'INFO' section?
You can try 🙂 Should work
Hi @<1747428509627715584:profile|CumbersomeDuck6> , are you using a self hosted server?
What version of Python is the agent machine running locally?
Does it support torch == 1.12.1?
Hi @<1558986867771183104:profile|ShakyKangaroo32> , can you please open a GitHub issue to follow up on this? I think a fix should be issued shortly afterwards
Do you have a screenshot of your settings?
Hi SucculentWoodpecker18 , I don't think there is an updated roadmap currently. You can see updates and releases here: https://clearml.slack.com/archives/C03E7MNDG3C
Is there some specific feature you're looking for?
You can take a look at the pipeline examples here:
None
Transferring artifacts between tasks is exactly what they do.
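As a rough sketch of what passing an artifact from one step to the next could look like with a PipelineController (names are placeholders):
```python
from clearml import PipelineController

def make_data():
    return [1, 2, 3]

def use_data(data):
    print("received:", data)

pipe = PipelineController(name="artifact passing example", project="examples", version="0.1")

# The return value of the first step is stored as an artifact and
# referenced by the second step via the ${step.return_name} syntax.
pipe.add_function_step(name="make_data", function=make_data, function_return=["data"])
pipe.add_function_step(
    name="use_data",
    function=use_data,
    function_kwargs=dict(data="${make_data.data}"),
)

pipe.start_locally(run_pipeline_steps_locally=True)
```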
Hi @<1523701868901961728:profile|ReassuredTiger98> , you can select multiple experiments and compare them; this way you can see all the scalars at once.
You can also use the reports feature to create really cool-looking dashboards
None
Are you using a self hosted server? Were the files written to some bucket/storage or directly to the fileserver?
So even if you abort it at the start of the experiment, it will keep running and reporting logs?
If you remove any reference of ClearML from the code on that machine, does it still hang?
Hi ExasperatedCrocodile76 , what do you mean by "I would like to have a feature for training where you just do not use clearML."
Can you please elaborate?
It means that I do not pass pname and tname at all. However, I would like to handle this issue before task.init is called.
Do you mean that after you've cloned a task into a specific project with a specific name, when it is run in the agent it loses its name and...
How are the metrics being reported? Directly via the Logger module, or via the automatic logging of some framework? Also, how are the iterations reported?
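In case the metrics are reported manually, here is a minimal sketch of explicit scalar reporting via the Logger, with the iteration passed in (project/task/series names are placeholders):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="manual scalar reporting")
logger = task.get_logger()

# Report a scalar per iteration so the plots have a proper x-axis
for iteration in range(10):
    logger.report_scalar(title="loss", series="train", value=1.0 / (iteration + 1), iteration=iteration)
```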
BroadCoyote44 , Hi!
This is very strange indeed. Which version of clearml are you using? I'm guessing you tried with clearml-init?
I see. If this happens from your home network, I think you might need to talk to your internet provider, as it looks like something is blocking the communication to the server.
Hi @<1732933002259861504:profile|ComfortableRobin65> , I believe that you would be pulling all 150 files. Why not test it out?
DepravedSheep68 , can you please give a bit more context on the error? Also can you show an example of your usage?
You can try the following:
Configure your ~/clearml.conf with sdk.development.default_output_uri: "s3://<YOUR_BUCKET>" and in code simply use Task.init(..., output_uri=True). Let's see if that setup works.
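Something along these lines, with the bucket and names as placeholders:
```python
from clearml import Task

# Assumes ~/clearml.conf contains:
#   sdk.development.default_output_uri: "s3://<YOUR_BUCKET>"
# output_uri=True tells the task to upload models/artifacts to that default destination.
task = Task.init(
    project_name="examples",      # placeholder
    task_name="s3 output test",   # placeholder
    output_uri=True,
)
```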
MelancholyElk85 , it looks like add_files has the following parameter: dataset_path
Try with it 🙂
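Roughly like this, with placeholder names and paths:
```python
from clearml import Dataset

dataset = Dataset.create(dataset_name="my_dataset", dataset_project="examples")

# dataset_path controls where the files end up inside the dataset,
# independent of their location on the local disk.
dataset.add_files(path="local_data/images", dataset_path="images/")

dataset.upload()
dataset.finalize()
```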
Also, can you copy the contents of your docker-compose file here?
ShinyLobster84 , sorry for the delay, had to look into it 🙂
Please try task.get_reported_scalars()
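For example, something like this (the task ID is a placeholder):
```python
from clearml import Task

task = Task.get_task(task_id="<TASK_ID>")

# Returns a dict of all scalars reported for the task, grouped by title and series
scalars = task.get_reported_scalars()
print(scalars)
```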