Actually, the issue is that the packages are not being detected 🙂
What happens if you do the following?

Task.add_requirements("tensorflow")
task = Task.init(...)
Not sure why, but it seems to be failing to analyze the code, hence the warning and no packages...
Any other hints about your setup that might help us understand the root cause? Maybe a home folder with unicode characters? Python installed in a specific way?
Hi AstonishingWorm64
Is this the same?
https://github.com/allegroai/clearml-serving/issues/1
(I think it was fixed on the latest branch; we are releasing 0.3.2 later today with a fix)
Can you try:

pip install git+
Can you print the actual values you are passing? (i.e. local_file and remote_url)
So now, for it to take effect, you need to enqueue the Task and set an agent to pick it up and run it.
When the agent runs the Task, the new parameter will be passed.
Does that make sense?
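For reference, a minimal sketch of that flow (the task id and queue name here are placeholders):

from clearml import Task

# Clone the original task and enqueue the copy; an agent listening
# on the queue (clearml-agent daemon --queue default) will pick it
# up and run it with the updated parameter.
task = Task.get_task(task_id='<task-id>')
cloned = Task.clone(source_task=task)
Task.enqueue(cloned, queue_name='default')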
Good news: a dedicated class for exactly that will be out in a few days 🙂
Basically a task scheduler and a task trigger scheduler, running as a service, cloning/launching tasks either based on time (cron-like) or based on a trigger.
wdyt?
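To give a taste, the interface should look roughly like this (a sketch of the upcoming class, so the names and arguments are my assumption):

from clearml.automation import TaskScheduler

# Runs as a service: clone + enqueue the given task every day at 07:30
scheduler = TaskScheduler()
scheduler.add_task(schedule_task_id='<task-id>', queue='default', hour=7, minute=30)
scheduler.start_remotely(queue='services')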
Hi RipeGoose2
So the http://app.community.clear.ml already contains it.
Next release of the standalone server (a.k.a clearml-server) will include it as well.
I think the ETA is end of the year (i.e. 2 weeks), but I'm not sure of the exact timeframe.
Sounds good?
Hi CourageousDove78
Not the cleanest, but you can basically pass everything here:
https://allegro.ai/clearml/docs/rst/references/clearml_api_ref/index.html#post--tasks.get_all
The reasoning is that it is passed almost as-is to the server for the actual query.
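For example, through the SDK, forwarding the raw filter (the filter values here are just an illustration):

from clearml import Task

# task_filter is passed almost as-is to tasks.get_all on the server
tasks = Task.get_tasks(
    project_name='examples',
    task_filter={'status': ['completed'], 'order_by': ['-last_update']},
)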
However, that would mean passing back the hostname to the Autoscaler class.
Sorry, my bad: the agent does that automatically in real-time when it starts, no need to pass the hostname, it takes it from the VM (usually they have some random number/id)
Building the pipeline at runtime from external configuration is very cool!!
I think nested components is exactly the correct solution, and it is a great use case.
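As a sketch, building the DAG at runtime could look like this (the config format and step names are assumptions):

import json
from clearml import PipelineController

# Build the pipeline DAG from an external configuration file
with open('pipeline_config.json') as f:
    config = json.load(f)

pipe = PipelineController(name='dynamic-pipeline', project='examples', version='1.0.0')
for step in config['steps']:
    pipe.add_step(
        name=step['name'],
        base_task_project=step['project'],
        base_task_name=step['task'],
        parents=step.get('parents', []),
    )
pipe.start()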
Have you tried a context provider for Task?
I guess that would only make sense inside notebooks?!
Yes, I think the API is probably the easiest:

from clearml.backend_api.session.client import APIClient

client = APIClient()
project_list = client.projects.get_all()
print(project_list)
I can verify the behavior; I think it has to do with the way the subparser was set up.
This was the only way for me to get it to run:

script.py test blah1 blah2 blah3 42

When I passed specific arguments (for example --steps) it ignored them...
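For reference, the kind of subparser setup I mean (a hypothetical repro, not your actual code):

import argparse

parser = argparse.ArgumentParser()
sub = parser.add_subparsers(dest='command')
test = sub.add_parser('test')
test.add_argument('values', nargs=3)  # e.g. blah1 blah2 blah3
test.add_argument('steps', type=int)  # e.g. 42

# script.py test blah1 blah2 blah3 42
args = parser.parse_args()
print(args)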
Hi ClumsyElephant70
What's the clearml version you are using?
(The first error is a byproduct of a Python process.Event being created before a forkserver is created, some internal Python issue. I thought it was solved, let me take a look at the code you attached.)
Where did you add the Task.init call?
Thanks! The new doc site is scheduled for next week; it will also be on GitHub, so PR-ing fixes will be a breeze :)
Hi WickedElephant66
Setting the pipeline controller with pipeline_execution_queue as None actually launches the pipeline controller on your "dev" machine; not sure why you have two of them?
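For clarity, this is the setup I mean (a sketch assuming the decorator interface; the names are placeholders):

from clearml.automation.controller import PipelineDecorator

# pipeline_execution_queue=None keeps the controller logic running
# in the current ("dev") process instead of enqueuing it
@PipelineDecorator.pipeline(
    name='my-pipeline', project='examples', version='1.0.0',
    pipeline_execution_queue=None,
)
def my_pipeline():
    pass

if __name__ == '__main__':
    my_pipeline()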
Meaning the node restarted (or actually moved)
Hi SmallDeer34
Hmm, I'm not sure you can; the code will by default use rglob with the last part of the path as the wildcard selection 🙂
You can of course manually create a zip file...
How would you change the interface to support it?
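For the manual route, something like this should do (paths and names are placeholders):

import shutil
from clearml import Task

task = Task.init(project_name='examples', task_name='zip-artifact')

# Zip the folder ourselves instead of relying on the rglob/wildcard logic
archive = shutil.make_archive('my_data', 'zip', root_dir='data/')
task.upload_artifact(name='my_data', artifact_object=archive)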
That was the idea behind the feature (and BTW any feedback on usability and debugging will be appreciated here, pipelines are notoriously hard to debug 🙂)
the ability to execute without an agent. I was just talking about this functionality the other day in the community channel
What would be the use case? (Actually, the infrastructure now supports it.)
Think I will have to fork and play around with it
NICE! (BTW: if you manage to get it working I'll be more than happy to help push the PR)
Maybe the quickest win is to store just the .py as a model?
Yes it does. I'm assuming each job is launched using a multiprocessing.Pool (which translates into a subprocess). Let me see if I can reproduce this behavior.
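The shape of the repro I have in mind (a sketch, not your exact code):

from multiprocessing import Pool
from clearml import Task

def job(i):
    # each job runs in a subprocess spawned by the pool
    return i * i

if __name__ == '__main__':
    task = Task.init(project_name='examples', task_name='pool-repro')
    with Pool(4) as pool:
        print(pool.map(job, range(8)))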
Oh dear, I think your theory might be correct, and this is just MongoDB preallocating storage.
Which means the entire /opt/trains just disappeared
Should I use update_weights_package?
Yes
BTW, you should pass config.pbtxt when "registering" the endpoint with the CLI
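i.e. something along these lines for the upload itself (a sketch, the paths are placeholders):

from clearml import Task, OutputModel

task = Task.init(project_name='examples', task_name='register-model')
model = OutputModel(task=task)

# Package the whole folder (weights + auxiliary files) as one model archive
model.update_weights_package(weights_path='model_dir/')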
WackyRabbit7 interesting! Are those "local" pipelines all part of the same code repository? Do they need their own environment?
What would be the easiest pipeline interface to run them locally? (I would love it if we could support this workflow; it seems you are not alone in this approach, and of course you can always run them remotely, i.e. clone the pipeline and launch it on an agent)
Also BTW, is this supposed to be a screenshot from the community version?
Hmm, seems like a screenshot from an enterprise version, I'll ask them to update 🙂
I am also not understanding how clearml-serving is doing the versioning for models in Triton.
Basically you have two Tasks, one is the "controller" checking model changes and updating itself.
The other is the engine, which checks the "controller" Task to see which models it needs to download/configure, and replaces them.
This way you can ha...
🙂 no worries
Could it be it was never allocated to begin with?