JitteryCoyote63, I'm afraid that's currently not possible; it's only available in docker mode.
What do you need it for if I may ask?
JitteryCoyote63, if you mean storage secrets (AWS, Azure, etc.) then you can configure them in your ~/clearml.conf 🙂
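For example, a minimal sketch of the relevant section (the credentials are placeholders, swap in your own):
```
sdk {
    aws {
        s3 {
            # placeholder credentials, for illustration only
            key: "MY_ACCESS_KEY"
            secret: "MY_SECRET_KEY"
            region: "us-east-1"
        }
    }
}
```
Azure and GCS have equivalent sections (sdk.azure.storage / sdk.google.storage).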
UnevenDolphin73 , what do you say, did it work?
@<1523701087100473344:profile|SuccessfulKoala55> , what is the intended behavior?
Hi @<1544853695869489152:profile|NonchalantOx99> , can you please add the full log?
You mean you want the new task created by add_step to take in certain parameters? Provided where, and by whom?
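If that's the case, parameter_override on add_step might be what you're after. A rough sketch (project/task names are placeholders):
```
from clearml import PipelineController

pipe = PipelineController(name="my-pipeline", project="examples", version="1.0")
pipe.add_step(
    name="train",
    base_task_project="examples",          # placeholder project
    base_task_name="training base task",   # placeholder task to clone
    # values injected into the cloned task's hyperparameter sections
    parameter_override={"General/learning_rate": 0.01},
)
pipe.start()
```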
Is it possible the machines are running out of memory? Do you get this error on the pipeline controller itself? Does this constantly reproduce?
Hi @<1529633468214939648:profile|CostlyElephant1> , I think this is what you're looking for: CLEARML_AGENT_SKIP_PIP_VENV_INSTALL or CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL
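For example, when launching the agent (a sketch; the interpreter path is a placeholder):
```
# skip creating a pip venv and use this interpreter directly
CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=/usr/bin/python3 clearml-agent daemon --queue default

# or skip the entire python environment setup
CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=1 clearml-agent daemon --queue default
```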
Can you try hitting F12 and seeing if there are any errors in console?
Hi @<1559711623147425792:profile|PlainPelican41> , how are you running the pipeline? Where is the agent running?
From the looks of this example, this should actually be connected automatically:
https://github.com/allegroai/clearml/blob/master/examples/frameworks/hydra/hydra_example.py
Can you try reinstalling clearml-agent?
Hi @<1576381444509405184:profile|ManiacalLizard2> , I would suggest playing with the Task object in Python. You can run dir(<TASK_OBJECT>) to see all of its parameters/attributes.
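Something like this (the task ID is a placeholder):
```
from clearml import Task

task = Task.get_task(task_id="<YOUR_TASK_ID>")  # placeholder ID
print(dir(task))              # list all attributes/methods
print(task.get_parameters())  # e.g. the task's hyperparameters
```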
Which version of clearml are you using?
How about this by the way?
https://clear.ml/docs/latest/docs/references/sdk/model_outputmodel#outputmodelset_default_upload_uri
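Roughly like this (the bucket path is a placeholder):
```
from clearml import OutputModel

# newly created output models will default to uploading here
OutputModel.set_default_upload_uri("s3://my-bucket/models")
```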
Hi SuperiorPanda77, this looks neat!
I could take a look on a Windows machine if it helps 🙂
@<1544853721739956224:profile|QuizzicalFox36> , are you running the steps from the machine whose config you checked?
Hi @<1710827348800049152:profile|ScantChicken68> , I'd suggest first reviewing the onboarding videos on YouTube.
After that, I'd suggest just adding Task.init() to your existing code to see what gets reported. Once you're familiar with the basics, I'd suggest looking into the orchestration/pipelines features 🙂
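A minimal sketch (project/task names are placeholders):
```
from clearml import Task

task = Task.init(project_name="examples", task_name="my first experiment")
# ...your existing training code, unchanged...
```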
Thanks FiercePenguin76
We will update the roadmap and go into details on the next community Talk (in a week from now, I think)
Regarding clearml-serving, yes! We are actively working on it internally, but we would love to get some feedback. I think AnxiousSeal95 would appreciate it 🙂
Can you check the apiserver logs to see if anything happened during this time? Is the agent still reporting?
What about the worker that was running the experiment?
I'm just not sure what error you're getting
MelancholyElk85, you can specify the URI for that in your ~/clearml.conf file under sdk.development.default_output_uri
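i.e. something like this sketch (the bucket is a placeholder):
```
sdk {
    development {
        default_output_uri: "s3://my-bucket/clearml-outputs"
    }
}
```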
Please note that you don't provide target storage for InputModel since it's an input and can only be used as an existing object in the system 🙂
@<1523701132025663488:profile|SlimyElephant79> , it looks like you are right. I think it might be a bug. Could you open a GitHub issue to follow up on this?
As a workaround, you can programmatically set Task.init(output_uri=True); this will make all the experiment outputs upload to whatever is defined as the files_server in clearml.conf.
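i.e. a minimal sketch (project/task names are placeholders):
```
from clearml import Task

# output_uri=True -> outputs go to the files_server defined in clearml.conf
task = Task.init(project_name="examples", task_name="my experiment", output_uri=True)
```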
Hi UnevenDolphin73 , can you please elaborate on what you mean by "what CPU does the queue consume"?
You can pull all machine usage statistics using the API. Is there something specific you're looking for?
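A minimal sketch using the APIClient, assuming you just want to list workers and their latest reported state (for time-series usage data there are endpoints like workers.get_stats):
```
from clearml.backend_api.session.client import APIClient

client = APIClient()
for worker in client.workers.get_all():  # all registered workers
    print(worker.id)
```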