Hi @<1523702000586330112:profile|FierceHamster54>
I think I'm missing a few details on what is logged, and ref to the git repo?
yes you are correct, I would expect the same.
Can you try manually importing pt, and maybe also moving the Task.init before darts?
But it does make me think, if instead of changing the optimizer I launch a few workers that "pull" enqueued tasks, and then report values for them in such a way that the optimizer is triggered to collect the results? would it be possible?
But this is Exactly how the optimizer works.
Regardless of the optimizer (OptimizerOptuna or OptimizerBOHB) both set the next step based on the scalars reported by the tasks executed by agents (on remote machines), then decide on the next set of para...
Hi @<1715175986749771776:profile|FuzzySeaanemone21>
and then run "clearml-agent daemon --gpus 0 --queue gcp-l4" to start the worker.
I'm assuming the docker service cannot spin a container with GPU access; usually this means you are missing the nvidia docker runtime component
Oh what if the script is in the container already?
Hmm, the idea of clearml is that the container is a "base environment" and code is "injected"; this makes it easy to reuse.
The easiest way is to add an "entry point" script that just calls the existing script inside the container.
You can have this initial Python script on your local machine; then when you call clearml-task
it will upload the local "entry point" script directly to the Task, and then on the remote machin...
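A minimal sketch of such an "entry point" wrapper — the wrapped path `/opt/app/train.py` is a made-up example, the real path depends on what is baked into your container:

```python
import runpy
import sys

def run_wrapped(script_path, argv=None):
    """Run a script that already exists inside the container as if it
    were launched directly, forwarding any CLI arguments to it."""
    sys.argv = [script_path] + list(argv or [])
    runpy.run_path(script_path, run_name="__main__")

# In the real entry point you would call something like:
# run_wrapped("/opt/app/train.py")  # hypothetical path inside the image
```

The wrapper itself is what you pass to clearml-task, so clearml still has a script to version and inject, while the heavy lifting stays in the container.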
Hi @<1720249416255803392:profile|IdealMole15>
I'm assuming you mean on a remote machine with clearml-agent running ?
If you do, then you can either use clearml-task
to create a Task (Job) and specify the container and script, or click on "Create New Experiment" in the UI, fill out the git repo / script, and specify the docker image.
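As a sketch, a clearml-task invocation would look roughly like this — the project/queue names, repo URL, and image are placeholders, and it's worth checking `clearml-task --help` for the exact flags on your version:

```
clearml-task --project my_project --name my_job \
    --repo https://github.com/me/my_repo.git --branch main \
    --script train.py \
    --docker nvidia/cuda:11.8.0-runtime-ubuntu22.04 \
    --queue default
```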
Make sense?
Hi CleanPigeon16
can I make the steps in the pipeline use the latest commit in the branch?
Yes:
manually clone the step's Task (in the UI), then in the Execution section edit the commit to "last commit on branch" and specify the branch name. The same can be done programmatically (as above: clone + edit)
ValueError: Could not parse reference '${run_experiment.models.output.-1.url}', step run_experiment could not be found
Seems like the "run_experiment" step is not defined. Could that be ...
Nice workaround!
RoughTiger69 how do I reproduce this behavior? (I'm still unsure on why exactly the clearml binding broke it, and would like to fix that)
(can you also provide the crash trace, maybe that could help as well)
Hi YummyFish22
Looks like the task does not have "Task.init" call on the main script (or an import of clearml)? could that be the case?
Number of entries in the dataset cache can be controlled via clearml.conf : sdk.storage.cache.default_cache_manager_size
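For reference, that setting sits in your clearml.conf roughly like this (the value 100 is just an illustrative number):

```
sdk {
    storage {
        cache {
            # max number of cached dataset entries kept locally
            default_cache_manager_size: 100
        }
    }
}
```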
- Maybe we should add an option to archive components as well ...
Also, in the same open docker session, can you try:
`$LOCAL_PYTHON -m clearml_agent execute --disable-monitoring --id <task_id_here>`
Where the Task ID is one of the failed executions (just reset it first)
Yes, you are too quick for the resource monitoring 🙂
PanickyMoth78 quick update the fix is already being tested, I'm hoping an RC tomorrow 🙂
PleasantGiraffe85 can you send examples of the different git repo links (one internal one public) ?
TrickyRaccoon92 I didn't know that 🙂
where did you try to add it? did you report a plotly figure or is it with report_???
PlainSquid19 yes, the link is only available in the actual paid product 😞
I don't think they have the documentation open yet...
My recommendation is to fill the contact us form, you'll get a free online tour as well 😉
Hi @<1747428509627715584:profile|CumbersomeDuck6>
but is it possible to use ClearML in Rust, without writing a wrapper.
With the REST API you can...
noticed the API doesn't cover dataset operations but the CLI can.
Yes, the CLI will fetch/create datasets for you,
wdyt?
Oh I do not think this is possible, this is really deep in a background thread.
That said, we can sample the artifacts and re-register the html as debug media:
`url = Task.current_task().artifacts['notebook preview'].url`
`Task.current_task().get_logger().report_media('notebook', 'notebook', iteration=0, url=url)`
Once the html is uploaded, it will keep updating on the same link so no need to keep registering the "debug media". wdyt?
ReassuredTiger98
(for some reason it kind of jumps over PyTorch, but then installs torchvision?!)
Could you run the latest with --debug ?
We just added it, but you will have to install from git:
`pip3 install git+`
Then run with --debug:
`clearml-agent --debug daemon ...`
JitteryCoyote63
Yes, this is extremely annoying. I think it was updated on the community server; let me check if we deployed a new docker with a fix ...
RipeGoose2 you mean to have the preview html on S3 work as expected (i.e. click on it, add credentials, open in a new tab)?
can someone show me an example of how `PipelineController.create_draft`
I think the idea is to store a draft version of the pipeline (not the decorator type, I think, but the one launching pre-executed Tasks).
GiganticTurtle0 I'm not sure I fully understand how / why you are using it, can you expand?
EDIT:
However, my intention is ONLY to create it to be executed later on.
Hmm, so maybe just enqueue it?
Does `Task.connect` send each element of the dictionary as a separate api request? Has anyone else encountered this issue?
Hi SuperiorPanda77
the task.connect ends up as a single call, with all the data sent in a single request.
That said, maybe connect with a dict is not the best solution for a thousand-key dictionary ...
Maybe an artifact or connect_configuration is better suited?
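To illustrate the difference: a thousand-key dict serialized once travels as a single blob, which is roughly the shape of what connect_configuration stores. A runnable sketch with plain json — the clearml calls themselves are shown only as comments, and the key names are made up:

```python
import json

# Hypothetical thousand-key hyperparameter dict (names are invented).
params = {f"param_{i}": i * 0.5 for i in range(1000)}

# One serialized blob -> one payload, instead of a thousand UI parameters.
payload = json.dumps(params)
restored = json.loads(payload)

# With clearml (sketch only, not executed here):
# task.connect_configuration(params, name="hparams")  # single config object
# task.upload_artifact("hparams", params)             # or as an artifact
```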
wdyt?
RobustGoldfish9 I see.
So in theory spinning an experiment on an agent would be: clone code -> build docker -> mount code -> execute code inside docker?
(no need for requirements etc.?)
GiganticTurtle0 fix was just pushed to GitHub 🙂
`pip install git+`
that embed seems to be slightly off with regards to where the link is actually pointing to
I think this is the Slack preview... 😞