Can you try reinstalling clearml-agent?
MuddySquid7, I couldn't reproduce case 4.
In all cases it didn't detect sklearn.
Did you put anything inside __init__.py?
Can you please zip up the folder from scenario 4 and post it here?
Hi @<1706116294329241600:profile|MinuteMouse44> , is there any worker listening to the queue?
Hi @<1620955143929335808:profile|PleasantStork44> , can you please elaborate? You mean the info section of the task? Do you mean programmatically?
Like some details about attributes, dataset size, and formats.
Can you elaborate on how exactly you'd be saving this data?
When we define output_uri in Task.init, in which format will the model be saved?
It depends on the framework I guess 🙂
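For example, if the training code uses PyTorch, the auto-logged model is simply the file produced by torch.save(), uploaded as-is to the output_uri. A rough sketch (the bucket name is just a placeholder):
```
from clearml import Task
import torch

# models saved by the framework are uploaded to output_uri in their native format
task = Task.init(
    project_name="examples",
    task_name="model format demo",
    output_uri="s3://my-bucket/models",  # placeholder destination
)

model = torch.nn.Linear(4, 2)
# with PyTorch this produces a .pt checkpoint, which is what ClearML uploads
torch.save(model.state_dict(), "model.pt")
```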
Can you check on the docker containers and see if they're all up and running?
You can create a queue through the UI. You can go into Workers & Queues tab -> Queues -> "New Queue"
You can also create new queues through the API:
https://clear.ml/docs/latest/docs/references/api/queues#post-queuescreate
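For example, with the Python APIClient (the queue name below is just a placeholder):
```
from clearml.backend_api.session.client import APIClient

client = APIClient()
# create a new execution queue via the queues.create endpoint
client.queues.create(name="my_new_queue")
```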
Hi @<1698868530394435584:profile|QuizzicalFlamingo74> , Try compression=False
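Assuming this refers to a dataset upload, a minimal sketch would be something like this (project/dataset names are placeholders):
```
from clearml import Dataset

ds = Dataset.create(dataset_project="examples", dataset_name="my_dataset")
ds.add_files("data/")
# upload the files without zip compression
ds.upload(compression=False)
ds.finalize()
```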
BitterLeopard33, ReassuredTiger98, my bad. I just dug a bit in the Slack history; I think I got the issue mixed up with long file names 😞
Regarding the HTTP chunking issue/solution - I can't find anything either. Maybe open a GitHub issue / feature request (for chunking files).
A workaround could be to set up a local MinIO server or upload to S3 directly; this way there shouldn't be a limit.
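For example, pointing the task output straight at the bucket (endpoint and bucket are placeholders, and the matching credentials go in clearml.conf under sdk.aws.s3):
```
from clearml import Task

# send artifacts/models straight to S3/MinIO instead of the files server
task = Task.init(
    project_name="examples",
    task_name="direct s3 upload",
    output_uri="s3://my-minio-host:9000/my-bucket",  # placeholder endpoint/bucket
)
```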
Hi @<1544128915683938304:profile|DepravedBee6> , the task that created the model would also get published.
About your second question, I think this is what you are looking for - None
I'm not sure I understand. Can you give a more specific example?
How do you bring down agents currently? I usually kill the process or send tasks.abort via the API.
I'm reading up on task.set_credentials at the moment. What exactly are you trying to do?
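For reference, Task.set_credentials() is usually called before Task.init() to configure the server connection programmatically - a minimal sketch (hosts and keys below are placeholders):
```
from clearml import Task

# must run before Task.init(); all values are placeholders
Task.set_credentials(
    api_host="https://api.clear.ml",
    web_host="https://app.clear.ml",
    files_host="https://files.clear.ml",
    key="YOUR_ACCESS_KEY",
    secret="YOUR_SECRET_KEY",
)

task = Task.init(project_name="examples", task_name="credentials demo")
```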
I'm not sure I understand. Can you give a specific example of what you have VS what you'd like it to be?
Hi @<1523701260895653888:profile|QuaintJellyfish58>, can you elaborate on what uv is?
on /data/
What OS are you on?
Regarding your question - I can't recall for sure. I think it still creates a virtualenv
Hi SparklingElephant70, can you please provide a screenshot of the error?
How are you trying to 'target' the file in the code?
CrookedMonkey33, let me take a look and see if I can find an AMI 🙂
Hi @<1675675722045198336:profile|AmusedButterfly47> , what is your code doing? Do you have a snippet that reproduces this?
I assigned both the pipeline controller and the component to this worker. Do I rather need to create two agents, one in services mode for the controller and then another one (not in services mode) for the component (which does training and predictions)? But, this seems to defeat the point of being able to run multiple tasks in services mode...
Yes. Again, the services mode is for special 'system' services if you will. The controller can run on the services agent (although not necessary...
It really depends on how you want to set up your pipeline. I suggest going over the documentation and watching the YouTube videos for a better understanding.
CluelessElephant89, hi 🙂
For ClearML to treat your artifact as a model you'd have to register it as a Model class like here:
https://clear.ml/docs/latest/docs/references/sdk/model_model
I'm guessing you'd want it as an output model, correct?
Do you want to register this artifact as both a model AND an artifact, or would having it only as a model be enough?
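If it's an output model, a minimal sketch would look something like this (framework and file name are placeholders):
```
from clearml import Task, OutputModel

task = Task.init(project_name="examples", task_name="register model")

# register an existing weights file as this task's output model
output_model = OutputModel(task=task, framework="PyTorch")  # placeholder framework
output_model.update_weights(weights_filename="my_model.pt")  # placeholder file
```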
SmallDeer34, great, thanks for the info 🙂
HelplessCrocodile8, I managed to reproduce it, hold tight 🙂
GrievingTurkey78, please try Task.init(auto_resource_monitoring=False, ...)
GrievingTurkey78, can you try disabling the CPU/GPU detection?