
Hi NuttyLobster9
Hi All, is there a way to clone a pipeline from the web UI like you can with a task?
Right click on the pipeline and select Run (it is basically the same thing as cloning it)
GrievingTurkey78 yes, you are correct on both.
Will the sweep functionality work?
Yes it should; that said, it will not use the trains-agent,
so you are limited to the machine running the sweep.
If you want to do HPO on multi-node, check out this example 🙂
https://github.com/allegroai/trains/blob/master/examples/optimization/hyper-parameter-optimization/hyper_parameter_optimizer.py
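Roughly, this is a minimal sketch of what the linked example does (the base task ID, queue name, parameter name, and metric names below are placeholders; these are the current clearml import paths, in the old trains package the same classes lived under trains.automation):
```python
from clearml import Task
from clearml.automation import HyperParameterOptimizer, UniformParameterRange

# controller task that drives the optimization
task = Task.init(project_name="examples", task_name="HPO controller",
                 task_type=Task.TaskTypes.optimizer)

optimizer = HyperParameterOptimizer(
    base_task_id="<base-task-id>",  # placeholder: the template experiment to clone
    hyper_parameters=[
        UniformParameterRange("General/lr", min_value=1e-4, max_value=1e-1),
    ],
    objective_metric_title="validation",  # placeholder metric
    objective_metric_series="accuracy",
    objective_metric_sign="max",
    # agents pulling from this queue run the trials, i.e. on other nodes
    execution_queue="default",
    max_number_of_concurrent_tasks=4,
)
optimizer.start()
optimizer.wait()
optimizer.stop()
```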
but I can't tell whether the services queue is the only way, or can I experiment with that?
UnevenOstrich23 I'm not sure exactly what the question is, but if you are asking whether this is limited, the answer is no, it is not limited to that use case.
Specifically, you can run as many agents in "services-mode" as you need, pulling from any queue(s), and they can run any Task that is enqueued on those queues. There is no enforced limitation. Did that answer the question?
AstonishingWorm64
You can turn on the venv cache; it will handle its own full env caching 🙂
See here:
https://github.com/allegroai/clearml-agent/blob/4f7407084d1900a79d455570c573e60f40208742/docs/clearml.conf#L100
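For convenience, this is roughly the relevant section of the linked clearml.conf (uncomment the path entry to enable caching; values shown here are from memory, check the file itself for the exact defaults):
```
agent {
    venvs_cache: {
        # maximum number of cached venvs
        max_entries: 10
        # minimum required free space to allow for cache entry,
        # disable by passing 0 or a negative value
        free_space_threshold_gb: 2.0
        # unmark to enable virtual environment caching
        # path: ~/.clearml/venvs-cache
    }
}
```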
Okay, I think I lost you...
DilapidatedDucks58 you mean detect at which "iteration" the max value was reported, and then extract all the other metrics for that iteration?
Hmm, maybe a different numpy version? (numpy==1.22.1; maybe the Task needs a different version?) Can you post the Task log?
If this is the case then the easiest is:
```python
from clearml.backend_api.session.client import APIClient

client = APIClient()
res = client.events.get_task_plots(task="<task-id>")
```
We should definitely have a nice interface 🙂
Hi EmbarrassedSpider34
clearml-init will try to create ~/clearml.conf
I'm assuming that when you execute it under root, it is resolved to /root/clearml.conf
That said, you might be able to override it with:
CLEARML_CONFIG_FILE=$HOME/clearml.conf sudo clearml-init
For now I've come to the conclusion that keeping a requirements.txt and making clearml parse it is the way to go.
Maybe we could just have that as another option?
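If I remember correctly, something close to that option exists already; a minimal sketch (the file name and project/task names are just placeholders):
```python
from clearml import Task

# must be called before Task.init(); tells clearml to use the listed
# requirements file instead of auto-detecting imported packages
Task.force_requirements_env_freeze(requirements_file="requirements.txt")
task = Task.init(project_name="examples", task_name="pinned requirements")
```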
EnviousStarfish54 we just fixed an issue that relates to "installed packages" on Windows.
The RC is due to be released in the upcoming days; I'll keep you posted.
but somewhere along the way, the request actually removes the header
Where are you seeing the returned value?
when I duplicate the experiment and run the clone remotely, the call is ignored and the recorded values are used?
Yes ScantChimpanzee51 exactly.
Think of it as the initial value you want to put on the Task when you are running the code on your machine. Later, when you clone the Task, you can edit the base docker image in the UI (or with the API); of course the new value is used when the agent spins this Task, and to avoid the actual docker (the one you changed in the UI) being overwritten by ...
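As a hedged sketch of that pattern (the image name and the guard are only illustrative, and depending on the clearml version the argument name may differ):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="docker demo")
if task.running_locally():
    # set the initial base docker only on the local run, so a value
    # edited in the UI on a clone is not overwritten by the code
    task.set_base_docker(docker_image="nvidia/cuda:11.8.0-runtime-ubuntu22.04")
```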
Try to upload something to the file server?
or do you mean the machine I ran the experiment on locally?
Yes this one
```
post_optional_packages: ["google-cloud-storage", ]
```
Will install it last (i.e. after all the other packages) but only if you have it in the "Installed packages" list
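If it helps, this setting lives under the agent's package_manager section of clearml.conf; roughly (from memory, check the clearml-agent reference conf for exact placement):
```
agent.package_manager {
    # packages to install last, and only if they already appear
    # in the Task's "Installed packages" list
    post_optional_packages: ["google-cloud-storage", ]
}
```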
Has anyone done this exact use case - updates to datasets triggering pipelines?
Hi TrickySheep9, seems like this is following a different thread, am I missing something?
BTW: there is a full Pipeline class that does everything for you, example here:
https://github.com/allegroai/clearml/tree/master/examples/pipeline
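A minimal sketch of that class (the project/task/queue names below are placeholders):
```python
from clearml import PipelineController

pipe = PipelineController(name="my pipeline", project="examples", version="1.0")
# each step clones an existing Task and enqueues it on an agent queue
pipe.add_step(name="stage_data",
              base_task_project="examples", base_task_name="data prep")
pipe.add_step(name="stage_train", parents=["stage_data"],
              base_task_project="examples", base_task_name="train model")
# the controller itself usually runs on a services-mode agent
pipe.start(queue="services")
```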
SuccessfulKoala55 please post here once the code is available in your pytorch_ignite 🙂
So for this...
Sorry, what exactly is "this"?
Hi AgitatedTurtle16 could you verify you can access the API server with curl?
LOL, okay, I'm not sure we can do something about that one.
You should probably increase the storage on your instance 🙂
BoredHedgehog47 can you provide some logs? This is odd...
Hmm, that is odd; it seems to have missed the fact that this is a Jupyter notebook.
What's the clearml version you are using?
Just to make sure I understand: running locally creates the Args/command correctly, and execute_remotely also records the correct Args/command, but when the agent actually executes the Task it updates Args/command back to a list. Is that a correct description?
wdym 'executed on different machines'?
The assumption is that you have machines (i.e. clearml-agents) connected to clearml, which would be running all the different components of the pipeline. Think out-of-the-box scale-up. Each component becomes a standalone Job, and the data is passed (i.e. stored and loaded) automatically through the clearml-server (which can be configured to use external object storage as well). This means if you have a step that needs a GPU, it will be launched on a GPU machine...
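As an illustrative sketch of that component model (the queue name, return values, and IDs are placeholders, and exact decorator arguments may vary by clearml version):
```python
from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.component(execution_queue="gpu", return_values=["model_path"])
def train(dataset_id):
    # runs as its own Task on an agent pulling from the "gpu" queue;
    # the return value is stored on the server and passed to the next step
    model_path = "<trained-model-uri>"  # placeholder
    return model_path

@PipelineDecorator.pipeline(name="demo pipeline", project="examples", version="0.1")
def run_pipeline(dataset_id):
    model_path = train(dataset_id)

if __name__ == "__main__":
    run_pipeline(dataset_id="<dataset-id>")
```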
BoredHedgehog47 I tried changing the order of imports in the sample code I shared before; it worked in both cases...
PompousParrot44 Enterprise license pricing is usually custom-tailored to the size of the company and based on usage. If you are interested, feel free to leave your details in the "contact us" form on the website, and someone from sales will contact you shortly after.