Is there a way to move existing pipelines between projects?
You should be able to. Go to your settings page and turn on "show hidden folders".
Then go to your project, you should see a ".pipeline" sub-project there; right click it and move it to another folder.
Nice SubstantialElk6 !
BTW: you can configure your clearml client to store the diff from the latest pushed commit (and not the default, which is the latest local commit)
see store_code_diff_from_remote in clearml.conf:
https://github.com/allegroai/clearml/blob/9b962bae4b1ccc448e1807e1688fe193454c1da1/docs/clearml.conf#L150
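For reference, a minimal sketch of that setting (placement under sdk.development is taken from the linked default conf; treat the exact nesting as an assumption):
sdk {
    development {
        # store the uncommitted diff against the remote repository's HEAD commit
        # instead of against the latest local commit
        store_code_diff_from_remote: true
    }
}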
Okay let me check if we can reproduce, definitely not the way it is supposed to work
It's always the details... Is the new Task running inside a new subprocess ?
basically there is a difference between:
- a remote task spawning new tasks (as subprocesses, or as jobs on a remote machine), with the remote task still running
- a remote task being replaced by a spawned task (same process?!)
UnevenDolphin73 am I missing a 3rd option? Which of these is your case?
p.s. I have a suspicion that there might be a misuse of "Task" here?! What are you considering a Task? (from the clearml perspective a Task...
Hi StraightCoral86
When I run an experiment using Task.create() ,
Use Task.init
Task.create is meant to create an external Task (i.e. Job) in the system, not to auto-generate a job from the running code. Make sense ?
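A minimal sketch of the difference (project, task and repo names are just illustrative):
from clearml import Task

# Task.init instruments the currently running script and auto-logs it as a Task
task = Task.init(project_name="examples", task_name="my experiment")

# Task.create only registers a new Task entry (a job) in the system from a repo/script,
# it does not capture or execute the current process
external_task = Task.create(
    project_name="examples",
    task_name="external job",
    repo="https://github.com/user/repo.git",
    script="train.py",
)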
GrievingTurkey78 in your clearml.conf, do you have agent.package_manager.type: conda ?
Or
https://github.com/allegroai/clearml-agent/blob/73625bf00fc7b4506554c1df9abd393b49b2a8ed/docs/clearml.conf#L59
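For reference, a sketch of how that looks in the agent section of clearml.conf (mirroring the linked default file):
agent {
    package_manager {
        # use conda (instead of the default pip) when creating the task environment
        type: conda
    }
}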
This means all the components of the pipeline use the exact same packages, and then it will just reuse the venv. Make sense ?
tried it and restarted the agent, but not working properly
What do you mean not working? can you provide logs ?
This really makes little sense to me...
Can you send the full clearml-session --verbose console output ?
Something is not working as it should obviously, console output will be a good starting point
Hi GrittyKangaroo27
How could I turn off model logging when running this training step?
This is a good point! I think we cannot pass these arguments.
Would this make sense to you?
PipelineDecorator.component(..., auto_connect_frameworks)
wdyt?
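A sketch of the proposed usage (hypothetical at the time of that message, since the argument was not yet supported; the framework name is just an example):
from clearml.automation.controller import PipelineDecorator

# proposed: disable automatic model logging for this specific pipeline step
@PipelineDecorator.component(return_values=["model_path"], auto_connect_frameworks={"pytorch": False})
def training_step(dataset_url):
    # ... train the model here without it being auto-registered ...
    return dataset_url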
In the docker bash startup script:
apt-get install poppler-utils
So on the EC2 instance (with the agent running), just install prior to running the agent:
apt-get install poppler-utils
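For the docker case, a sketch of where that command could go in the agent's clearml.conf (assuming the extra_docker_shell_script option from the default conf, and a Debian/Ubuntu base image):
agent {
    # commands executed inside every task container before the task starts
    extra_docker_shell_script: ["apt-get update", "apt-get install -y poppler-utils"]
}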
Hi JealousParrot68
spinning the clearml-agent with docker support (i.e. each experiment is running inside its own container):
https://clear.ml/docs/latest/docs/clearml_agent#docker-mode
Basically you can specify a default docker to use (per agent) and a specific docker container to use per Task (configured in the UI under execution at the bottom)
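For example, a typical way to spin up an agent in docker mode (the queue name and default image here are placeholders):
clearml-agent daemon --queue default --docker nvidia/cuda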
Do we support GPUs in a) docker mode b) k8s glue?
yes on both
Is there a good reference to get started with k8s glue?
A few folks here already set it up, do you have a k8s cluster with GPU support ?
I remember being told that the ClearML.conf on the client will not be used in a remote execution like the above so I think this was the problem.
SubstantialElk6 the configuration should be set on the agent's machine (i.e. clearml.conf that is on the machine running the agent)
- Users have no choice of defining their own repo destination of choice.
In the UI you can specify a different destination for the models/artifacts in the "Execution" tab, under Output "destination". Is this...
Hmm I tested on chromium and it seemed to work, let me see if I can reproduce it...
Ohh no I see, yes that makes sense, and I was able to reproduce, thanks!
Hmm that makes sense, I "think" the enterprise offering has a solution for that as well (i.e. full separation over a static cluster), but probably the best way to pursue this avenue is to talk to Sales (I'm assuming they'll set up a call to discuss the details)
Going back to the open source, I think that adding the credentials as part of the source code might allow the "credentials" to auto-populate as part of the remote execution, wdyt?
Would this be best if it were executed in the Triton execution environment?
It seems the issue is unrelated to the Triton ...
Could I use the clearml-agent build command and the Triton serving engine task ID to create a docker container that I could then use interactively to run these tests?
Yep, that should do it
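A sketch of such a build command (the task ID is a placeholder; check clearml-agent build --help for the exact flags):
clearml-agent build --id <triton_serving_task_id> --docker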
I would start simple, no need to get the docker itself, it seems like a clearml credentials issue?!
If you passed the correct path it should work (if it fails it would have failed right at the beginning).
BTW: I think it is clearml-agent --config-file <file here> daemon ...
(Also can you share the clearml.conf, without actual creds)
Yes this is Triton failing to load the actual model file
Okay, the type is inferred from the default value of the function step itself, which means that both:
data_frame = step_one(pickle_url, extra=1337)
and
data_frame = step_one(pickle_url, 1337)
will pass extra as int.
That said, if the default value of the argument is missing, it will revert to str.
In order to use the type hints as casting hints, we actually need to improve task.connect to support the type casting (they are stored).
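A small sketch of what that implies for a decorated step (names and values are illustrative):
from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.component(return_values=["data_frame"])
def step_one(pickle_url, extra=43):
    # 'extra' has an int default, so a value passed via the pipeline parameters
    # is cast back to int; without a default it would arrive as str
    import pandas as pd  # imports stay inside the step, it runs standalone
    return pd.DataFrame({"url": [pickle_url], "extra": [extra]})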
It's just the print (__repr__) not showing the data
for w in client.workers.get_all(): print(w.data)
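For context, a fuller sketch of the same call (assuming the APIClient from the clearml backend_api package):
from clearml.backend_api.session.client import APIClient

client = APIClient()
# the worker objects' __repr__ hides the payload, so print the underlying data explicitly
for w in client.workers.get_all():
    print(w.data)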
When you say status, what do you mean? Is it active? Running a task?
GiganticTurtle0 quick update, a fix will be pushed, so that casting is based on the actual value passed, not even type hints
(this is only in case there is no default value, otherwise the default value type is used for casting)