I also have task_override that adds a version which changes each run
It's just a tag, so no real difference
Hmm reading this: None
How are you checking the health of the serving pod?
I call Task.init after I import tensorflow (and thus tensorboard?)
That should have worked...
Can you manually add a TB report before calling the opennmt function?
(I want to verify the Task.init is indeed catching the TB calls; my theory is that somewhere inside the opennmt we lose the TB)
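For that check, something like this minimal sketch should do (assuming TF2; the project/task names and log dir are illustrative):
```
import tensorflow as tf
from clearml import Task

# init ClearML after importing TF, so it can hook the TensorBoard calls
task = Task.init(project_name="examples", task_name="tb-sanity-check")  # illustrative

# write a single scalar summary manually, before any opennmt code runs
writer = tf.summary.create_file_writer("./tb_sanity_logs")  # illustrative log dir
with writer.as_default():
    tf.summary.scalar("sanity/check", 0.5, step=0)
writer.flush()
```
If this scalar shows up in the UI, Task.init is catching TB correctly and the issue is inside opennmt.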
Ohhhh, okay, as long as you know; they might fail on memory...
PompousParrot44 I assume the folder structure is something like:
repo_root:
--> test
-----> scripts
If this is the case, make sure the "working directory" is "." which means the repository root.
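If you are creating the Task from code rather than the UI, a minimal sketch of the same idea (assuming the SDK's Task.create; the repo URL, project and script names are illustrative):
```
from clearml import Task

# create a task that runs test/scripts/train.py from the repository root
task = Task.create(
    project_name="examples",                     # illustrative
    task_name="run-from-repo-root",              # illustrative
    repo="https://github.com/me/repo_root.git",  # illustrative repo
    script="test/scripts/train.py",              # path relative to repo root
    working_directory=".",                       # "." == repository root
)
```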
Yes please, just to verify my hunch.
I think that somehow the docker mounts the agent is creating are (for some reason) messing it up.
Basically you can just run the following (it will do everything automatically; replace the <TASK_ID_HERE> with the actual one)
docker run -it --gpus "device=1" -e CLEARML_WORKER_ID=Gandalf:gpu1 -e CLEARML_DOCKER_IMAGE=nvidia/cuda:11.4.0-devel-ubuntu18.04 -v /home/dwhitena/.git-credentials:/root/.git-credentials -v /home/dwhitena/.gitconfig:/root/.gitconfig ...
Hi @<1654294828365647872:profile|GorgeousShrimp11>
can you run a pipeline on a schedule or are schedules only for Tasks?
I think one tiny detail got lost here: Pipelines (the logic driving them) are a type of Task, which means you can clone and enqueue them like any other task
(Task.enqueue / Task.clone)
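Something along these lines (a sketch; the pipeline task ID and queue name are illustrative):
```
from clearml import Task

# clone the pipeline controller task and enqueue the clone
pipeline_task = Task.clone(source_task="<pipeline_task_id>")  # illustrative ID
Task.enqueue(pipeline_task, queue_name="services")            # illustrative queue
```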
Other than that looks good to me, did I miss anything?
Hi SarcasticSparrow10
Is it better to post such questions on Stackoverflow so they benefit everybody?
Yes, I think you are correct, it would; please do!
Try to do reuse_last_task_id='task_id_here' to specify the exact Task to continue (click on the ID button next to the task name in the UI)
If this value is true it will try to continue the last task on the current machine (based on the project/name combination); if the task was executed on another machine, it will just start a ...
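For example, a minimal sketch (the project/task names and the ID are illustrative):
```
from clearml import Task

# continue logging into a specific, previously created task
task = Task.init(
    project_name="examples",              # illustrative
    task_name="my-experiment",            # illustrative
    reuse_last_task_id="<task_id_here>",  # the exact Task ID to continue
    continue_last_task=True,
)
```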
Hmm that is odd. Let me take a look and ask the guys. Thank you for quickly testing the RC! I'm hoping a new RC with a fix will be there tomorrow, if we can quickly replicate
Hi SubstantialElk6
saved in the files_server (indicated in ClearML.conf) instead of the indicated output_uri in the dataset.create argument
What's the clearml SDK version? How are you specifying the output target?
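For reference, this is roughly how I'd expect it to be set (a sketch; the bucket/path is illustrative and the output_uri argument assumes a recent SDK):
```
from clearml import Dataset

ds = Dataset.create(
    dataset_name="my-dataset",             # illustrative
    dataset_project="datasets",            # illustrative
    output_uri="s3://my-bucket/datasets",  # illustrative storage target
)
ds.add_files("./data")
ds.upload()
ds.finalize()
```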
Okay let me check if we can reproduce, definitely not the way it is supposed to work
feature request: tell me what gets passed along each edge of the pipeline graph
Nice! Please feel free to add it to the GH issue
Hmm yes that is odd, let me see if I can reproduce
Hi @<1541954607595393024:profile|BattyCrocodile47>
This looks like a Docker issue running on Mac M2
None
wdyt?
Nested in the UI is not possible I think?
Yes, but the next version will have nested projects, that's something
I mean that it is possible to start the subtask while the main task is still active.
You cannot call another Task.init while a main one is running.
But you can call Task.create and log into it, that said the autologging is not supported on the newly created Task.
Maybe the easiest solution is just to do the "sub-tasks" and close them. That means the main Task i...
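Something like this sketch of the Task.create route (names are illustrative; note there is no auto-logging on the created sub-task):
```
from clearml import Task

# main task with auto-logging
main_task = Task.init(project_name="examples", task_name="main")  # illustrative

# manually created sub-task: we log into it explicitly
sub_task = Task.create(project_name="examples", task_name="sub-task")  # illustrative
sub_task.mark_started()
sub_task.get_logger().report_scalar("metric", "series", value=0.5, iteration=0)
sub_task.mark_stopped()
```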
If I install using
pip install -r ./requirements.txt
then pip installs the packages in the order of the requirements file.
Actually this is not how it works: pip will install in any way it sees fit, and it is not consistent between versions (it has to do with dependency resolution)
However, during the installation process from ClearML, it installs the packages in order UNLESS there's a custom path provided, then it's saved for last
Correct because the custom (I...
BTW: which clearml version are you using ?
(I remember there was a change in the last one, or the one before, making the config loading deferred until accessed)
Correct, the serving Task ID is the clearml serving session. It is the instance that holds all the information of this specific setup and models
Depending on your security restrictions, but generally yes.
Hi RotundHedgehog76
Notice that the "queued" is on the state of the Task, as well as the tag
We tried to enqueue the stopped task at the particular queue and we added the particular tag
What do you mean by specific queue? This will trigger on any queued Task with the 'particular-tag'?
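If the goal is a trigger on tagged tasks, a rough sketch with the SDK's TriggerScheduler (assuming the clearml.automation API; the ID, tag and queue names are illustrative):
```
from clearml.automation import TriggerScheduler

trigger = TriggerScheduler(pooling_frequency_minutes=3)
trigger.add_task_trigger(
    schedule_task_id="<task_id_to_clone_and_run>",  # illustrative
    schedule_queue="default",                       # illustrative queue
    trigger_tags=["particular-tag"],                # fire on tasks with this tag
    trigger_on_status=["queued"],                   # fire when a task turns queued
)
trigger.start()
```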
Hi @<1543766544847212544:profile|SorePelican79>
You want the pipeline configuration itself, not the pipeline component, correct?
from clearml import Task

# get the pipeline (parent) task from inside a component
pipeline = Task.get_task(Task.current_task().parent)
conf_text = pipeline.get_configuration_object(name="config name")
conf_dict = pipeline.get_configuration_object_as_dict(name="config name")
Is this reproducible? I tried to run the same example code on my machine, and it started training ...
Do you have issues with other pytorch examples? Could you try simple reporting example:
https://github.com/allegroai/clearml/blob/master/examples/reporting/scalar_reporting.py
Hi @<1541954607595393024:profile|BattyCrocodile47>
Do you mean to start a remote session instead of the cli directly from the vscode ui and connect to it? If so, that would be awesome!! We have a remote session from the web where it spins up your remote session and launches vscode inside the container so you can work on it in your browser. But a VSCode plugin is a great idea, do you have reference code for similar plugins?
JoyousKoala59 which Trains server version do you have? The link you posted is for upgrading from v0.15 to v0.16, not from Trains to ClearML
Let me check the API reference
https://clear.ml/docs/latest/docs/references/api/endpoints#post-tasksget_all
So not a straight query, but maybe:
https://clear.ml/docs/latest/docs/references/api/endpoints#post-tasksget_all_exall
section might do the trick.
SuccessfulKoala55 any chance you have an idea on what to pass there?
Correct, you can pass it as keys on the "task_filter" argument, e.g.:
Task.get_tasks(..., task_filter={'status': ['failed']})
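Roughly like this in full (the project name is illustrative):
```
from clearml import Task

failed_tasks = Task.get_tasks(
    project_name="examples",             # illustrative
    task_filter={"status": ["failed"]},  # only failed tasks
)
for t in failed_tasks:
    print(t.id, t.name)
```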
Hi JealousParrot68
You mean by artifact names?
Hi @<1529633468214939648:profile|CostlyElephant1>
Is it possible to get user ID of the current user
On the Task.data object itself there should be a field named "user"; that's the user ID of the owner (creator) of the Task.
You can filter based on this ID with
Task.get_tasks(..., task_filter={'user': ["user-id-here"]})
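For example (a sketch; the task ID is illustrative):
```
from clearml import Task

# read the owner's user ID off a specific task
task = Task.get_task(task_id="<task-id-here>")  # illustrative ID
owner_user_id = task.data.user

# find all tasks created by that user
user_tasks = Task.get_tasks(task_filter={"user": [owner_user_id]})
```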
wdyt?