ScaryBluewhale66 , please look in:
https://clear.ml/docs/latest/docs/references/sdk/task#taskinit
The relevant section for you is auto_connect_frameworks
The usage would be along these lines: Task.init(..., auto_connect_frameworks={'matplotlib': False})
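So, roughly something like this (a minimal sketch; the project/task names are just placeholders):
```python
from clearml import Task

# Disable only the matplotlib binding; all other frameworks stay auto-logged
task = Task.init(
    project_name="examples",                 # placeholder
    task_name="no matplotlib auto-logging",  # placeholder
    auto_connect_frameworks={'matplotlib': False},
)
```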
Hi @<1638712150060961792:profile|SilkyCrocodile89> , it looks like a connectivity issue. Are you trying to upload data to the files server? Can you share the full log?
AbruptWorm50 , you can send it to me. Also, can you please answer the following two questions? When were they registered? Were you able to view them before?
Also, you mention plots but in the screenshot you show debug samples. Can I assume you're talking about debug samples?
Hi DepressedFox45 ,
For the agent you'll need to run clearml-agent init
Not really, you can even point files_server in clearml.conf
to S3. The files server is there so that some basic storage solution is attached out of the box.
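For example, something along these lines in clearml.conf (the bucket/path is just a placeholder):
```
api {
    # api_server / web_server stay as they are
    files_server: "s3://my-bucket/clearml"
}
```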
"task that reads a message from a queue"
Can you give a specific example?
I'll try to see if it reproduces on my side 🙂
Hi @<1546303293918023680:profile|MiniatureRobin9> , what version of clearml are you using, and can you give an example of what you've tried running?
Hi @<1673501397007470592:profile|RelievedDuck3> , there is some discussion of it in this video None
As I wrote, you need to remove the s3:// from the start of the host section.
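In other words, the credentials entry in clearml.conf should look roughly like this (host/keys are placeholders):
```
aws {
    s3 {
        credentials: [
            {
                # note: plain host:port here, no "s3://" prefix
                host: "my-storage.example.com:9000"
                key: "ACCESS_KEY"
                secret: "SECRET_KEY"
                multipart: false
                secure: false
            }
        ]
    }
}
```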
DilapidatedDucks58 , can you please verify that the machine running the cleanup is properly configured with S3 credentials and that they definitely work (i.e. it can create and delete files)?
Hi @<1631102016807768064:profile|ZanySealion18> , I think this is what you're looking for:
None
How did you set the output URI?
DistressedKoala73 , can you send me a code snippet so I can try to reproduce the issue, please?
FreshKangaroo33 , what do you mean by syntax examples?
I think this should give you some context on usage 🙂
https://github.com/allegroai/clearml/blob/master/examples/reporting/hyper_parameters.py
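The core of it is roughly this (a minimal sketch; names and values are placeholders):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="hyper-parameters example")

# Any dict you connect shows up under the task's hyperparameters in the UI
# and can be overridden when the task is cloned/enqueued
params = {'batch_size': 32, 'learning_rate': 0.001}
params = task.connect(params)
```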
This is part of the Scale/Enterprise versions only
Can you try running the pipeline locally using pipeline.start_locally()?
https://clear.ml/docs/latest/docs/references/sdk/automation_controller_pipelinecontroller#start_locally-1
Also, try connecting a "starter" node and then making it the parent of all the other steps at the start
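Something along these lines (a sketch; project/task names and the step layout are placeholders for whatever you have):
```python
from clearml.automation import PipelineController

pipe = PipelineController(name="my-pipeline", project="examples", version="1.0.0")

# A lightweight "starter" step that all other steps hang off
pipe.add_step(name="starter", base_task_project="examples", base_task_name="noop task")
pipe.add_step(name="step_a", parents=["starter"],
              base_task_project="examples", base_task_name="task a")
pipe.add_step(name="step_b", parents=["starter"],
              base_task_project="examples", base_task_name="task b")

# Run the controller (and its steps) locally instead of enqueuing them
pipe.start_locally(run_pipeline_steps_locally=True)
```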
Hi @<1603198163143888896:profile|LonelyKangaroo55> , you can change the value of files_server in your clearml.conf
to control it as well.
You mean you want the new task created by add_step
to take in certain parameters? Provided where, and by whom?
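If so, parameter_override on add_step is probably what you're after, roughly like this (step/parameter names are placeholders):
```python
from clearml.automation import PipelineController

pipe = PipelineController(name="my-pipeline", project="examples", version="1.0.0")

# parameter_override rewrites values in the cloned base task's hyperparameter sections
pipe.add_step(
    name="train",
    base_task_project="examples",
    base_task_name="training task",
    parameter_override={"General/learning_rate": 0.01},
)
```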
Also, the services agent is not related to regular agent executions
Hi @<1566596983865479168:profile|ZanyCrocodile93> , I'm afraid you cannot move experiments between workspaces in the free/PRO versions
I think that once a version has been finalized you can't add changes to it directly. You could probably hack something by setting it back to running manually via the API, adding the relevant connections, and then moving it back to completed
This is strange. Can you take a look inside the apiserver and webserver docker logs to see if any errors pop up?
Hi @<1572032783335821312:profile|DelightfulBee62> , I think 1 TB should be enough. I would suggest maybe even going with 2 TB, just to be on the safe side
You can certainly do it with the Python APIClient or through the requests library
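With the APIClient it would look roughly like this (what you actually call depends on what you want to do; listing tasks by name is just an illustration):
```python
from clearml.backend_api.session.client import APIClient

client = APIClient()

# e.g. fetch tasks whose name matches a pattern (placeholder filter)
tasks = client.tasks.get_all(name="my experiment")
for t in tasks:
    print(t.id, t.name, t.status)
```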