It's more or less here:
https://github.com/allegroai/clearml-session/blob/0dc094c03dabc64b28dcc672b24644ec4151b64b/clearml_session/interactive_session_task.py#L431
I think that just replacing the package would be enough (I mean you could choose hub/lab, which makes sense to me)
Thanks ReassuredTiger98, yes that makes sense.
What's the Python version you are using?
UnsightlySeagull42 the assumption is that the agent has a read-only, all-access user.
At the moment there is no way to configure a different user/pass per repository in the clearml.conf
You can however:
- embed the user/pass in the repository link (not very secure)
- use an ssh-key and have it under .ssh on the host machine
- use .git-credentials and configure them (with per-project user/pass)
Also I would suggest using Task.execute_remotely
https://clear.ml/docs/latest/docs/references/sdk/task#execute_remotely
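A minimal sketch of how that looks in code (the project, task and queue names here are just placeholders):

from clearml import Task

task = Task.init(project_name='examples', task_name='remote run')
# everything above this line runs locally; execute_remotely() stops the local
# run and enqueues the Task for an agent to pick up
task.execute_remotely(queue_name='default')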
Hi SoggyFrog26
Yes, it is stored at ~/.clearml_data.json
Notice you can always change it by passing --id dataset_id
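For example, to operate on a specific dataset rather than the current one (the dataset id and file name are placeholders):

clearml-data add --id <dataset_id> --files ./new_file.csv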
Would this be best if it were executed in the Triton execution environment?
It seems the issue is unrelated to Triton...
Could I use the clearml-agent build command and the Triton serving engine task ID to create a docker container that I could then use interactively to run these tests?
Yep, that should do it 🙂
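Assuming the clearml-agent build flow, something along these lines (the task id and image name are placeholders):

clearml-agent build --id <task_id here> --docker --target new-docker-image
docker run -it new-docker-image bash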
I would start simple, no need to get into the docker itself, it seems like a clearml credentials issue?!
If you passed the correct path it should work (if it fails it would have failed right at the beginning).
BTW: I think it is clearml-agent --config-file <file here> daemon ...
SmugLizard25 are you saying that with the latest version it does not work?
I wonder, does it launch all "step two" instances in parallel?
In theory it should, but in practice, since these are the same "template", I'm not sure what would happen.
One last note: you can call PipelineDecorator.debug_pipeline() to debug the pipeline locally; it will have the exact same behavior, only it will run the steps as subprocesses.
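For example, assuming a pipeline entry function named run_pipeline (the name here is hypothetical):

from clearml.automation.controller import PipelineDecorator

# must be called before the pipeline function is invoked;
# all steps will then execute locally as subprocesses
PipelineDecorator.debug_pipeline()
run_pipeline()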
Hi TightElk12
would like to understand the limitations of Task.current_task()
Basically this will always get you an instance of the current Task. This will work from sub-processes as well as the main process. Is there a specific scenario you have in mind, or a challenge with the use case?
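For example:

from clearml import Task

task = Task.current_task()  # the Task created by Task.init(), or None if there is none
if task:
    task.get_logger().report_text('reporting from a sub-process')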
Hi ExcitedFish86
Of course, this is what it was designed for. Notice in the UI under Execution you can edit this section (Setup Shell Script). You can also set it via task.set_base_docker
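A minimal sketch (the docker image and arguments are just examples):

from clearml import Task

task = Task.init(project_name='examples', task_name='docker base image')
# the agent will use this docker image (and arguments) when running the Task
task.set_base_docker('nvidia/cuda:11.0.3-runtime-ubuntu20.04 --ipc=host')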
I think you are correct, the env variable is not resolved in "time". It might be that it's resolved at import, not at Task.init
That makes total sense. The question was about Mac users and the OS environment in the configuration file, and having that OS environment set in code (this is my assumption, as it seems that at import time it does not exist). What am I missing here?
Thanks for pinging OutrageousGiraffe8
I think I was able to reproduce.
model is saved to clearml as an output model when b is not a dictionary.
How did you make the example work with the automagic?
I see, actually what you should do is a fully custom endpoint,
- preprocessing -> download video
- processing -> extract frames and send them to Triton with gRPC (see below how)
- post processing -> return a human readable answer
Regarding the processing itself, what you need is to take this function (copy paste):
None
have it as internal `_process...
I think there is a bug in the UI that causes series with "." in the name to use only the first part of the series name for the color selection. This means "epsilon 0" and "epsilon 0.1" will always get the same color, which would explain why it works on other graphs.
VirtuousFish83
could it be that "inplace-abn" needs torch while the package is being installed?
but it is not possible to write to a private channel to which the bot was added.
Is this a Slack limitation?
however when I clone or reset said task after completion and then enqueue it again, I get the above error.
This part is somewhat confusing... There is no magic happening behind the scenes; cloning a Task and creating it are basically the same... Do you have a reference to the YOLOv5 code base itself, maybe I can figure out what's the issue?
(I think the GCP is already up, I'll double check)
Just to clarify, where do I run the second command?
Anywhere, just open a python console and import the offline task:
from trains import Task
Task.import_offline_session('./my_task_aaa.zip')
Related, how to I specify in my code the cache_dir where the zip is saved?
This is the Trains cache folder, you can set it in the trains.conf file:
https://github.com/allegroai/trains/blob/10ec4d56fb4a1f933128b35d68c727189310aae8/docs/trains.conf#L24
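The relevant section looks roughly like this (the path is just an example):

sdk {
    storage {
        cache {
            # folder where downloaded artifacts / offline zips are cached
            default_base_dir: "~/.trains/cache"
        }
    }
}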
Hi WickedElephant66
Setting the pipeline controller's pipeline_execution_queue to None
actually launches the pipeline controller on your "dev" machine; not sure why you have two of them?
Hi CleanWhale17, let me see if I can address them all
Email Alert for finished Job (I'm not sure if it's already there).
Slack integration will be public by the end of the weekend 🙂
It is fully customizable / extendable, I'll be happy to help.
DVC
Full dataset tracking is supported using the artifacts and the ability to integrate to any central storage (shared folders/ S3 / GS / Azure etc.)
From my experience, it is easier to work with artifacts from Data-Processing Tasks...
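A rough sketch of that pattern (the project, task and artifact names are placeholders):

from trains import Task

task = Task.init(project_name='data', task_name='preprocessing')
# register the processed dataset as an artifact on the data-processing Task
task.upload_artifact('dataset', artifact_object='./data/train.csv')

# any other Task can later pull it:
data_task = Task.get_task(project_name='data', task_name='preprocessing')
local_copy = data_task.artifacts['dataset'].get_local_copy()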
It can be a different agent.
If inside a docker then:
clearml-agent execute --id <task_id here> --docker
If you need venv do:
clearml-agent execute --id <task_id here>
You can run that on any machine and it will respin and continue your Task
(obviously your code needs to be aware of that and be able to pull its own last model checkpoint from the Task artifacts / models)
Is this what you are after?
RoundMosquito25 do notice the agent is pulling the code from the remote repo, so you do need to push the local commits, but clearml will take care of the uncommitted changes for you. Make sense?
Are you saying this component should pull a specific git repo?
PipelineDecorator.component( ..., )
seems like there is no reference to a specific repo (arguments repo and repo_branch etc. are missing), is that correct?
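For reference, pinning a component to a repo would look roughly like this, assuming a clearml version where component() supports these arguments (the repo URL and step function are placeholders):

from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.component(
    repo='https://github.com/user/repo.git',
    repo_branch='main',
)
def step_two(data):
    # the agent runs this step inside a clone of the specified repo
    return data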