Thank you SuccessfulKoala55 👍
When we upgrade to the enterprise version, do we get new Docker images for the upgrade, and access to the code?
Currently we have the community version hosted locally 🙏
yes, let me explain better:
I have a task (a script created as a task) that can execute with different configurations.
I want n instances of the task to run with n different configs,
hence my pipeline gets a list of n configs,
and based on n, I'd like the pipeline to have n dynamic steps.
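Something like this sketch is what I have in mind; the project, base task, and hyperparameter names here are placeholders for ours:
```python
from clearml import PipelineController

configs = ["config_a.json", "config_b.json", "config_c.json"]  # the n configs

pipe = PipelineController(
    name="dynamic-pipe", project="myproject", version="0.0.1"
)
pipe.set_default_execution_queue("default")

# one cloned step per config - the DAG is built with a plain Python loop
for i, cfg in enumerate(configs):
    pipe.add_step(
        name=f"run_config_{i}",
        base_task_project="myproject",                # placeholder project
        base_task_name="my_base_task",                # the script registered as a task
        parameter_override={"General/config": cfg},   # placeholder hyperparameter
    )

pipe.start_locally(run_pipeline_steps_locally=False)
```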
thanks @<1523701205467926528:profile|AgitatedDove14>
yes, an argument saying "always create from code" can be helpful.
Also, being able to edit the configuration objects of a pipeline would be beneficial too; we're currently unable to do that from the UI.
hey,
we did delete the old configs and ran `clearml-agent init` on everything, pointing to the new IP
hello AgitatedDove14
thanks for your reply.
yes, the HTTP link is valid; I was able to download it using wget.
I'm not sure if it was an inconsistency;
right now this seems to be solved for me.
previously I was using ${V3M_step.artifacts.Detections}, which returns a dictionary.
On changing it to ${V3M_step.artifacts.Detections.url}, the URL of the artifact is returned, and this seems to have helped.
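For reference, this is roughly how it's wired in the controller; the step, task, artifact, and parameter names below are placeholders for our actual ones:
```python
from clearml import PipelineController

pipe = PipelineController(name="mypipe", project="myproject", version="0.0.1")
pipe.set_default_execution_queue("default")

# upstream step that produces the "Detections" artifact (placeholder names)
pipe.add_step(
    name="V3M_step",
    base_task_project="myproject",
    base_task_name="v3m_detector",
)

# downstream step: pass the artifact's URL (not the whole artifact dict)
pipe.add_step(
    name="postprocess_step",
    parents=["V3M_step"],
    base_task_project="myproject",
    base_task_name="postprocess",
    parameter_override={
        # resolves to the artifact's URL string at runtime
        "General/detections_url": "${V3M_step.artifacts.Detections.url}",
    },
)
```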
@<1523701205467926528:profile|AgitatedDove14> sure, I'll open an issue.
thank you for the briefing; you're right, cloning and editing is feasible. However, the pipeline experiment is not visible in the project's experiment list.. it is hidden, which makes cloning the pipeline troublesome..
yes, but the pipe starts running before we can edit it..
@<1523701205467926528:profile|AgitatedDove14> we (@<1539417873305309184:profile|DangerousMole43> and I) found an issue in the pipeline that may be closely related to this:
- we have a pipeline running perfectly.
- the parent node fails for a valid reason, and the child nodes are skipped.
- but when we try to do a "New Run" from the UI, it tries to follow the DAG of the previous run (the run with all child nodes skipped) and the new run fa...
hello again,
It would be helpful to know why we experience this when running a pipeline:
2022-12-19 15:13:47,884 - clearml - WARNING - Could not retrieve remote configuration named 'RUN_CONFIG'
- how do I add a configuration object to a pipeline? (see the sketch after this list)
- the dictionary is split into multiple values when using it as a param in the pipeline
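For the configuration object question, this is the kind of thing I'm trying; a minimal sketch assuming the code runs inside a ClearML task and 'RUN_CONFIG' holds a plain dict (the contents here are placeholders):
```python
from clearml import Task

run_config = {"threshold": 0.5, "classes": ["car", "person"]}  # placeholder contents

# attach the dict as a configuration object named 'RUN_CONFIG' on the current task
task = Task.current_task()
run_config = task.connect_configuration(run_config, name="RUN_CONFIG")

# read it back as a dict (e.g. from a remote run)
loaded = task.get_configuration_object_as_dict("RUN_CONFIG")
```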
Hi John, I also found the --config-file flag 🙂 Thank you for your help.
yes it is reproducible.
that is amazing! thank you @<1523701070390366208:profile|CostlyOstrich36>
fyi @<1533619716533260288:profile|SmallPigeon24>
Hey CostlyOstrich36
It happens when I try to execute a pipeline remotely:
2022-12-19 15:13:47,884 - clearml - WARNING - Could not retrieve remote configuration named 'RUN_CONFIG' Using default configuration: {...}
It happens in my pipeline, and here is the code:
` from clearml import PipelineController

pipe = PipelineController(
    name="mypipe", project="myproject", version="0.0.1", add_pipeline_tags=False
)
pipe.set_default_execution_queue("default")
my_json = "jsons/my_json.json"
clearml_input_path = "jsons/clearml_input.j...
@<1523701205467926528:profile|AgitatedDove14> , hi, would it be possible for us to configure the "New Run" button so that it always clones from a particular pipeline?
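As a possible code-side workaround (not the UI button itself), a rough sketch of cloning a specific pipeline controller task and enqueuing the clone; the task ID and queue name below are hypothetical:
```python
from clearml import Task

# hypothetical ID of the pipeline controller task we always want to clone
TEMPLATE_PIPELINE_TASK_ID = "abcdef1234567890abcdef1234567890"

# clone the template pipeline and enqueue the clone for execution
cloned = Task.clone(source_task=TEMPLATE_PIPELINE_TASK_ID, name="mypipe (new run)")
Task.enqueue(cloned, queue_name="services")
```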
So I tried it again,
what resolved this was just commenting out one line: # from clearml import Task
we did not use Task anywhere, but that import caused the last node to be skipped for no apparent reason...
Hi all,
I did the move as directed by @<1523701070390366208:profile|CostlyOstrich36>,
and we now have our new ClearML server filled with the data from the old server.
However, the new agents are not able to pull the tasks; all tasks remain pending.
Please let us know what could be the cause of this.
Hey @<1523701087100473344:profile|SuccessfulKoala55> , I assumed so because the worker is named 'worker_name:cpu:0', and I find it very slow! Hence I assumed it was utilising only a single CPU, CPU number 0.