I guess no, hurdles vs. safety is inherently not solvable.
LOL
Point taken, I reserve the option to come back with alternative solutions 🙂
Ok, but it must be somewhere in the bst class
It is the XGBoost callback feature, basically just reporting everything XGBoost reports.
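For reference, a minimal sketch of how this looks (assuming ClearML's automatic framework binding; the dataset and parameters here are just illustrative). Once Task.init is called, every per-iteration metric XGBoost prints for the evals list should show up as reported scalars:

```python
from clearml import Task
import xgboost as xgb
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

# Task.init enables the automatic xgboost binding
task = Task.init(project_name="examples", task_name="xgboost auto reporting")

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
dtrain = xgb.DMatrix(X_train, label=y_train)
dtest = xgb.DMatrix(X_test, label=y_test)

# everything xgboost reports per iteration (train/test metrics here)
# should be captured and logged as scalars
bst = xgb.train(
    {"objective": "multi:softmax", "num_class": 3},
    dtrain,
    num_boost_round=10,
    evals=[(dtrain, "train"), (dtest, "test")],
)
```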
So inside the pipeline logic you can do Task.current_task().id
Or inside a component Task.current_task().parent
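A minimal sketch of both cases (function names are just illustrative):

```python
from clearml import Task

def inside_pipeline_logic():
    # the controller's own task id
    return Task.current_task().id

def inside_component():
    # a component's parent is the pipeline controller task
    return Task.current_task().parent
```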
but we run everything in docker containers. Will it still help?
As long as you are running with clearml-agent(in docker mode), all the cache folders (this one included) are mounted on the host machine for persistency
Yes, that makes sense. If the overhead of the additional packages is not huge, I do not think it is worth the maintenance 🙂
BTW clearml-agent has full venv caching that you can turn on, so when running remotely you are not "paying" for the additional packages being installed:
Un-comment this line 🙂
https://github.com/allegroai/clearml-agent/blob/51eb0a713cc78bd35ca15ed9440ddc92ffe7f37c/docs/clearml.conf#L116
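Roughly, the relevant section of clearml.conf looks like this (field names may differ between agent versions; see the link above):

```
agent {
    venvs_cache: {
        max_entries: 10
        free_space_threshold_gb: 2.0
        # un-comment the `path` line to enable venv caching
        path: ~/.clearml/venvs-cache
    }
}
```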
PompousParrot44 did you manage to get it working ?
how to make sure it will traverse only the current package?
Just making sure there is no bug in the process: if you call Task.init anywhere in your repo (serve/train), do you end up with an "installed packages" section that contains all the required packages for both use cases?
I have separate packages for serving and training in a single repo. I don't want serving requirements to be installed.
Hmm, it cannot "know" which is which, because it doesn't really trace all the import logs (this w...
Hi FiercePenguin76
By default clearml will list only the packages you import directly, and not derivative packages. This means that if you import package X and it imports package Y, only package X will be listed.
The way it should work is by statically analyzing the entire repository, but if you import a local package from a different local folder, and that folder is Not in the same repo, it will not get listed (obviously if you install the external local package, it will be...
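If the static analysis does miss a derivative package you need, a sketch of a workaround (the package name here is hypothetical) is to list it explicitly before calling Task.init:

```python
from clearml import Task

# force "package_y" into the installed packages section,
# even though it is only imported indirectly
Task.add_requirements("package_y")
task = Task.init(project_name="examples", task_name="explicit requirements")
```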
FiercePenguin76 the git repo should detect only clearml as a required python package
Basically the steps are:
- Decide if the initial python entry script is a standalone script (i.e. no local imports) in the git repo (in your example "task_with_deps.py")
- If this is a "standalone script", only look for imports inside the calling python script, and list those packages under "installed packages"
- If this is Not a standalone script, go over All the python files inside the repository, look for "i...
- Could we add a comparison feature directly from the search results (Dashboard view -> search -> highlight some experiments for comparison)?
Totally forgot about the global search feature. Hmm, I'm not sure the webapp is in the correct "state" for that, i.e. I think selection only works in "table view", which is the "all experiments" flat table
- Could we add a filter on the project name in the "All Experiments" project?
You mean "filter by project" ?
Could we ad...
Hi SmallDeer34
Can you try with the latest RC? I think we fixed something with the jupyter/colab/vscode support!
pip install clearml==1.0.3rc1
ColorfulBeetle67 you might need to configure use_credentials_chain
see here:
https://github.com/allegroai/clearml/blob/a9774c3842ea526d222044092172980ae505e24f/docs/clearml.conf#L85
Regarding the Token, I did not find any reference to "AWS_SESSION_TOKEN" in the clearml code; my guess is it is used internally by boto?!
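For reference, a sketch of the setting (assuming the usual clearml.conf layout; see the link above):

```
sdk {
    aws {
        s3 {
            # instead of hard-coding key/secret, let boto3 resolve credentials
            # (env vars, shared credentials file, IAM role, etc.)
            use_credentials_chain: true
        }
    }
}
```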
Hi ShallowCormorant89
Can you verify the http link is valid? Can you download it from code on your machine (i.e. not via an agent)? Maybe port 8081 is blocked from the agent machine to the server?
Hi FiercePenguin76
Artifacts are as you mentioned: you can create as many as you like, but there is no "versioning" on top; it can easily be used this way with name+counter.
In contrast, Models do offer creating multiple entries with the same name, and the version is implied by order. Wdyt?
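A sketch of the name+counter approach (names and payloads are illustrative):

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="artifact naming")

# emulate artifact "versions" by encoding a counter in the name
for version, payload in enumerate([{"acc": 0.91}, {"acc": 0.93}]):
    task.upload_artifact(name=f"results_v{version}", artifact_object=payload)
```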
Hmm, interesting, why would you want that? Is this because some of the packages will fail?
PompousBeetle71 if this is argparse and the type is defined, the trains-agent will pass the equivalent in the same type, with str that amounts to ''. Make sense?
Makes sense. BTW: you can manually add data visualization to a Dataset with dataset.get_logger().report_table(...)
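A sketch (project/dataset names are illustrative):

```python
import pandas as pd
from clearml import Dataset

ds = Dataset.create(dataset_project="examples", dataset_name="demo-dataset")

# attach a small preview table to the dataset's visualization
preview = pd.DataFrame({"feature": [0.1, 0.2, 0.3], "label": [0, 1, 0]})
ds.get_logger().report_table(
    title="Preview", series="head", iteration=0, table_plot=preview
)
```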
Interesting!
Wouldn't Dataset (class) be a good solution ?
BTW: server-side vault is in progress, hopefully will be available in the upcoming releases :)
but this will be invoked before fil-profiler starts generating them
I thought it would flush in the background 🙂
You can however configure the profiler to a specific folder, then mount the folder to the host machine:
In the "base docker args" section add -v /host/folder/for/profiler:/inside/container/profile
No worries, I'll see if I can replicate it anyhow
FiercePenguin76
So running the Task.init from the jupyter-lab works, but running the Task.init from the VSCode notebook does not work?
Hi @<1653207659978952704:profile|LovelyStork78>
I have a docker container with all the dependencies.
Well, I think the main question is: are you using the clearml-agent to launch jobs/experiments? If you do, it makes sense to specify your docker as the "base docker image" (in the UI, look under the Execution tab, Container).
This means the agent will use the pre-installed environment and will add anything that your Task needs on top of it, this of course includes pushing your codebase i...
then will clearml associate that image with my experiment and always use that image with it,
when you say "agent to use my docker image," I'm assuming you mean the configuration file or the --docker argument; in both cases this means the Default container.
That is, if the Task does Not specify a docker, it will use the one you set in the conf/argument; but Tasks can always specify a different docker to use, and the agent will pull the requested docker based on the Task's entry.
Eve...
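A sketch of setting a per-Task container from code (the docker_image argument assumes a recent clearml version; the image name is illustrative):

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="custom container")

# this Task requests its own container; the agent (in docker mode)
# will pull and use it instead of the default one
task.set_base_docker(docker_image="python:3.10-slim")
```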
Hi PanickyAnt52
hi, is there a way to get back the pipeline object when given a pipeline id?
Yes basically this is a specific type of Task, anything you stored on it can be accessed via the Task object, i.e. pipeline_task=Task.get_task(pipeline_id)
I'm curious, how would you use it?
BTW: since pipeline is also a Task you can have a pipeline launch a step that is a pipeline by its own
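A sketch of fetching a pipeline back by its id (the id string is a placeholder):

```python
from clearml import Task

# a pipeline is stored as a regular Task, so it can be fetched by id
pipeline_task = Task.get_task(task_id="<pipeline_id>")

# anything stored on it is available through the normal Task API
print(pipeline_task.status)
print(list(pipeline_task.artifacts.keys()))
```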
I see... In the triton pod, when you run it, it should print the combined pbtxt. Can you print both the before/after ones, so that we could compare?
SubstantialElk6 This seems to be the issue:
cp: failed to access '/root/default_clearml.conf': Permission denied
clearml_agent: ERROR: Could not find task id=024a421c0e174650a1c7ff64af756c26 (for host: )
Notice it seems it just cannot read the clearml.conf, wdyt?
Oh, I was assuming you are passing the entire DB backups to the cloud.
Are you saying you just want the file server on the cloud? If this is the case, I would just use S3