Hi @<1545216070686609408:profile|EnthusiasticCow4> , start_locally() has the run_pipeline_steps_locally parameter for exactly this 🙂
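In case a sketch helps, this is roughly how it looks; the pipeline and step names here are illustrative placeholders, and this assumes `clearml` is installed and configured:

```python
from clearml import PipelineController  # assumes clearml is installed/configured

def step_one():
    print("running step one")

# Illustrative pipeline; project/name values are placeholders
pipe = PipelineController(name="demo-pipeline", project="examples")
pipe.add_function_step(name="step_one", function=step_one)

# run_pipeline_steps_locally=True executes each step in the local
# environment instead of enqueuing it for a remote agent
pipe.start_locally(run_pipeline_steps_locally=True)
```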
Hi @<1585078763312386048:profile|ArrogantButterfly10> , does the controller stay indefinitely in the running state?
Hi @<1524922424720625664:profile|TartLeopard58> , you mean the side bar on the left with projects/datasets/etc... ?
JitteryCoyote63 , are you on a self-hosted server? It seems the issue was solved in the 3.8 release, and I think it should be included in the next self-hosted release
Is it possible the projects were empty? Do you still have projects?
Hi @<1800699527066292224:profile|SucculentKitten7> , I think you're confusing the publish action with deployment. Publishing a model does not deploy it; it simply changes the model's state to published so it cannot be changed anymore, and also publishes the task that created it.
To deploy models you need to either use clearml-serving or the LLM deployment application
Hi @<1655744373268156416:profile|StickyShrimp60> , happy to hear you're enjoying ClearML 🙂
To address your points:
Is there any way to lock the settings of scalar plots? Especially, I have scalars that are easiest to compare on a log scale, but that setting is reverted to the default linear scale with any update of the comparison (e.g. adding/removing experiments to the comparison).
I would suggest opening a GitHub feature request for this
Are there plans of implementing a simple feature t...
Hi @<1797800418953138176:profile|ScrawnyCrocodile51> , not sure I understand. You have some model ID and you want to find its project? Via code or UI?
ExcitedSeaurchin87 , Hi 🙂
I think it's correct behavior - You wouldn't want leftover files flooding your computer.
Regarding preserving the datasets - I'm guessing you're doing the pre-processing & training in the same task, so if the training fails you don't want to re-download the data?
Is something failing? I think that's the suggested method
Hi SubstantialElk6 ,
From a quick glance I don't see any abilities not covered. Is there some specific capability you're looking for?
This is part of the Scale/Enterprise versions only
I don't think you need to mix. For example, if you have a pre-prepared environment, then it should be something like export CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=<PATH_TO_ENV_BINARY>
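As a concrete sketch (the binary path and queue name below are illustrative, not from your setup):

```shell
# Point the agent at an existing python binary instead of creating a new venv
# (path is illustrative - use your environment's actual interpreter path)
export CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=/opt/envs/my_env/bin/python
clearml-agent daemon --queue default
```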
Hi @<1547390438648844288:profile|ScaryJellyfish75> , you have described the HyperDataset feature which is part of the Scale/Enterprise versions. I suggest you contact sales@clear.ml to get a quote for a license 🙂
Hi PricklyRaven28 , can you try with the latest clearml version? 1.7.1
BTW, are you using http://app.clear.ml or a self hosted server?
Try setting it outside of any section. Basically set an environment section by itself
Hi VivaciousBadger56 , can you add the full error here?
You can provide it in the extra configurations sections
DepressedChimpanzee34 , the only way I see currently is to update each parameter manually
For example:
```python
parameters = {
    'float': 2.2,
    'string': 'my string',
}
parameters = task.connect(parameters)
parameters['new_param'] = 'this is new'
parameters['float'] = '9.9'
```
Does this help?
Yes you can set everything on the task level and of course you can also use different docker images for different python versions
I think this is what you're looking for 🙂
https://clear.ml/docs/latest/docs/references/sdk/dataset#datasetlist_datasets
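For reference, a minimal call might look like this (the project name is a placeholder, and this assumes `clearml` is installed and configured against your server):

```python
from clearml import Dataset  # assumes clearml is installed/configured

# List completed datasets under a given project (project name is illustrative)
datasets = Dataset.list_datasets(
    dataset_project="examples",
    only_completed=True,
)
for d in datasets:
    # each entry is a dict describing one dataset version
    print(d["id"], d["name"])
```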
Try removing the region, it might be confusing it
@<1590514584836378624:profile|AmiableSeaturtle81> , it's best to open a GitHub issue in that case to follow up on this 🙂
Hi @<1562973095227035648:profile|ThoughtfulOctopus83> , do you mean apiserver logs? What do you mean in regards to security?
Also, can you share which machine image you're using?
Hi FlutteringWorm14 , what happens if you try to delete those tasks manually?