"Task deletion failed: unhashable type: 'dict'"
Hi FlutteringWorm14, trying to figure out where this is coming from, give me a sec
Hi SkinnyPanda43
On your local machine, do not pass output_uri at all, so nothing will be uploaded.
In the agent's configuration file, set default_output_uri to the S3 bucket.
(Notice you can always override it in the UI, see the bottom of the Execution tab.)
https://github.com/allegroai/clearml-agent/blob/e93384b99bdfd72a54cf2b68b3991b145b504b79/docs/clearml.conf#L312
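For reference, a minimal sketch of the relevant clearml.conf entry (the bucket path here is a placeholder):
```
sdk {
    development {
        # placeholder bucket/path, replace with your own
        default_output_uri: "s3://my-bucket/clearml"
    }
}
```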
Hi SucculentBeetle7
The parameters passed to add_step need to contain the section name (maybe we should warn if it is not there, I'll see if we can add it).
So maybe something like: `{'Args/param1': 1}` or `{'General/param1': 1}`. Can you verify it solves the issue?
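For context, a minimal sketch of how that could look with add_step's parameter_override (the project and task names are made up):
```python
from clearml import PipelineController

pipe = PipelineController(name='pipeline demo', project='examples', version='1.0.0')
pipe.add_step(
    name='train_step',
    base_task_project='examples',   # placeholder project
    base_task_name='train',         # placeholder task
    # note the section prefix ('Args/' or 'General/') on every key
    parameter_override={'Args/param1': 1},
)
```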
Yep, that would do it ...
You can disable it with:
`Task.init(..., auto_connect_frameworks={'scikit': False})`
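For completeness, a minimal sketch in context (project and task names are placeholders):
```python
from clearml import Task

# disable only the scikit-learn auto-logging, keep all other frameworks
task = Task.init(
    project_name='examples',
    task_name='no sklearn autolog',
    auto_connect_frameworks={'scikit': False},
)
```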
I can see all the steps like git clone,
git clone has nothing to do with "env setup"; this is bringing in the code, so you cannot skip that one. That said, this is why the git repo itself is cached on the host machine, so it is fast.
... There may be some odd package that needs to be installed because one of our DS is experimenting ... But with all that, we can see what is happening.
Even if everything is preinstalled, it verifies that the packages match, and this might take a long time. It's just pip being ...
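If the repeated pip resolution is the bottleneck, one thing worth trying (a sketch of the agent-side clearml.conf; the path shown is the documented default) is the agent's venv cache, which reuses the entire resolved environment:
```
agent {
    venvs_cache: {
        # enable full virtualenv caching on the agent machine
        path: ~/.clearml/venvs-cache
    }
}
```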
Woot woot, great to hear 🙂
Notice you have to configure the shared drive for Docker, as the volume mount doesn't work without it. https://stackoverflow.com/a/61850413
Hi PanickyMoth78
You mean like another Task? or maybe Slack message?
UnevenDolphin73 you mean the clearml-server helm chart ?
It is the number of calls performed, not what those calls were.
oh, yes this is just a measure of how many API calls are sent.
It does not really matter which ones
ReassuredTiger98 when you look for task "dca2e3ded7fc4c28b342f912395ab9bc" there are no artifacts ?
Could you add some prints? this should have worked...
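For example, a quick sketch of the same check done programmatically:
```python
from clearml import Task

# list whatever artifacts the task actually registered
task = Task.get_task(task_id='dca2e3ded7fc4c28b342f912395ab9bc')
print(list(task.artifacts.keys()))
```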
Yes! Thanks so much for the quick turnaround
My pleasure 🙂
BTW: did you see this (it seems like the same bug?!)
https://github.com/allegroai/clearml-helm-charts/blob/0871e7383130411694482468c228c987b0f47753/charts/clearml-agent/templates/agentk8sglue-configmap.yaml#L14
(only works for PyTorch, because they have different wheels for different CUDA versions)
Is there an option to do this from a pipeline, from within the add_step method? Can you link a reference to cloning and editing a task programmatically?
Hmm, I think there is an open GitHub issue requesting a similar ability , let me check on the progress ...
nope, it works well for the pipeline when I don't choose continue_pipeline
Could you send the full log please?
Yes, in the UI: clone or reset the Task, then you can edit the installed packages section under the Execution tab.
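To do the same programmatically, a minimal sketch (project, task, and parameter names are placeholders):
```python
from clearml import Task

# clone an existing task, edit the clone, then enqueue it for an agent
template = Task.get_task(project_name='examples', task_name='train')
cloned = Task.clone(source_task=template, name='train clone')
cloned.set_parameter('Args/param1', 2)  # hypothetical parameter
Task.enqueue(cloned, queue_name='default')
```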
Thanks SmallDeer34, I think you are correct: the 'output' model is returned properly, but 'input' models are returned as model names, not model objects.
Let me check something
yes ...
What's your use case for passing an empty dict? (meaning, how would one use it later)
Hi @<1730033904972206080:profile|FantasticSeaurchin8>
Does this relate only to this issue:
https://github.com/coqui-ai/Trainer/issues/7
or is it a ClearML SDK issue?
- Could we add a comparison feature directly from the search results (Dashboard view -> search -> highlight some experiments for comparison)?
Totally forgot about the global search feature. Hmm, I'm not sure the webapp is in the correct "state" for that, i.e. I think the selection only works in "table view", which is the "all experiments" flat table.
- Could we add a filter on the project name in the "All Experiments" project?
You mean "filter by project" ?
Could we ad...
GreasyPenguin14 makes total sense.
In that case I would say variants of the accuracy make sense to me. I would suggest (see the sketch below): title='trains', series='accuracy/day' and title='trains', series='accuracy/night'
Regarding hierarchy: from the implementation perspective, a unique identifier is always the combination of title/series (or in other words, metric/variant); introducing another level is a system-wide change.
This means it might be more challenging than expected ...
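A minimal sketch of reporting with that naming (values and names are made up):
```python
from clearml import Task

task = Task.init(project_name='examples', task_name='accuracy variants')
logger = task.get_logger()
# same title, two series variants: they end up on the same plot
logger.report_scalar(title='trains', series='accuracy/day', value=0.92, iteration=0)
logger.report_scalar(title='trains', series='accuracy/night', value=0.88, iteration=0)
```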
This only talks about bug reporting and enhancement suggestions.
I'll make sure this is fixed 🙂
By default the PyTorch Lightning Trainer will output everything to TensorBoard, which we automatically store. But verify that TensorBoard is installed.
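A minimal, self-contained sketch (the model and data are toy placeholders), assuming Task.init is called before the Trainer so the TensorBoard scalars are picked up:
```python
import torch
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset
from clearml import Task

task = Task.init(project_name='examples', task_name='pl tensorboard demo')

class TinyModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(4, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.mse_loss(self.layer(x), y)
        self.log('train_loss', loss)  # goes to the default TensorBoard logger
        return loss

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)

data = DataLoader(TensorDataset(torch.rand(32, 4), torch.rand(32, 1)), batch_size=8)
pl.Trainer(max_epochs=1).fit(TinyModel(), data)
```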