Yes, but I'm not sure that they need to have a separate task
Hmm okay I need to check if this can be easily done
(BTW, the downside of that is you can only cache a component, not a sub-component)
Could you manually configure the ~/trains.conf ?
(Just copy paste the section from the UI)
Then try to run: `trains-agent list`
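For reference, the section you copy from the UI usually looks roughly like this (placeholder values, adjust the URLs to your own server):
```
api {
    web_server: http://<your-server>:8080
    api_server: http://<your-server>:8008
    files_server: http://<your-server>:8081
    credentials {
        "access_key" = "YOUR_ACCESS_KEY"
        "secret_key" = "YOUR_SECRET_KEY"
    }
}
```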
Hi DrabCockroach54
This seems like a pip issue trying to install from source, try upgrading the pip version before installing numpy, it should solve it 🤞
I do it to get project name
you can still get it from the task object (even after closing it)
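Something along these lines should work (project/task names here are just placeholders, and double-check `get_project_name()` against the Task API version you have):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="demo")
task.close()
# the handle still exposes the project name after closing
print(task.get_project_name())
```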
Another place I was using it was to see if I am in a pipeline task
Yes that makes sense, this is one of the use cases (to get access to the Task that is currently running). The bug itself will only happen after closing the Task (it needs to clear the OS variable).
You can either upgrade to 1.0.6rc2, or you can hack/fix it with:
`os.environ.pop('CLEARML_PROC_MASTER_ID', None)`
`os.envi...`
Hmm that is odd, but at least we have a workaround 🙂
What's the matplotlib backend ?
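(If you're not sure, `import matplotlib; print(matplotlib.get_backend())` will print it)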
Hi MiniatureCrocodile39
Which packages do you need to run the viewer? I suppose a dicom reader is a must?
Import Error sounds so out of place it should not be a problem :)
Maybe WackyRabbit7's suggestion is a better approach, as you will get a new object (instead of the runtime copy that is being used)
RobustSnake79 this one seems like scalar type graph + summary table, correct?
BTW: I'm not sure how to include the "Recommendation" part 🙂
Can't figure out what made it get to this point
I "think" this has something to do with loading the configuration and setting up the "StorageManager".
(in other words setting the google.storage)... Or maybe it is the lack of the google storage package?!
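If it is the missing package, `pip install google-cloud-storage` should cover it (as far as I remember that's the package used for gs:// URLs).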
Let me check
Hi BurlyRaccoon64
Yes, we did, the latest clearml-agent solves the issue, please try:
`pip3 install -U --pre clearml-agent`
ERROR: torch-1.12.0+cu102-cp38-cp38-linux_x86_64.whl is not a supported wheel on this platform
TartBear70 could it be you are running on a new Mac M1/2 ?
Also a quick question, any chance you can test with the latest RC? `pip3 install clearml-agent==1.3.1rc6`
That would match what `add_dataset_trigger` and `add_model_trigger` already have, so it would be good
Sounds good, any chance you can open a github issue, so that we do not forget?
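For context, the existing model trigger is used roughly like this (a sketch from memory, the trigger name and project here are made up and argument names may differ slightly, so please check the TriggerScheduler docs):
```python
from clearml.automation import TriggerScheduler

def on_new_model(model_id):
    # callback, gets the id of the model that fired the trigger (if I remember correctly)
    print("new model published:", model_id)

trigger = TriggerScheduler(pooling_frequency_minutes=3)
trigger.add_model_trigger(
    name="retrain-on-new-model",   # hypothetical trigger name
    schedule_function=on_new_model,
    trigger_project="examples",    # hypothetical project to watch
    trigger_on_publish=True,
)
trigger.start()
```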
Another parameter for when the task is deleted might also be useful
That actually might be more complicated, because there might be a race condition, basically missing the delete operation...
What would be the use case?
Ohhh I see, yes this is regexp matching. If you want an exact match: `'^{}$'.format(name)`
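i.e. something along these lines (assuming this is the `task_name` argument of `Task.get_tasks`, which is treated as a regexp; the project/name values are placeholders):
```python
from clearml import Task

name = "my experiment"  # the exact name you are looking for
tasks = Task.get_tasks(project_name="examples", task_name='^{}$'.format(name))
```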
However, that would mean passing back the hostname to the Autoscaler class.
Sorry, my bad, the agent does that automatically in real-time when it starts; no need to pass the hostname, it takes it from the VM (usually they have some random number/id)
It seems like you are correct, everything should just work. Are you still getting the error? What's the clearml agent version?
Did you experience any drop in performance using forkserver?
No, seems to be working properly for me.
If yes, did you test the variant suggested in the pytorch issue? If yes, did it solve the speed issue?
I haven't tested it, that said it seems like a generic optimization of the DataLoader
Hi TrickyRaccoon92 , yes the examples folder is a special case, I'm not sure you can directly delete it.
Can you archive individual experiments in it ?
WickedGoat98 is this related to plotly opening a web page when you call the show() method?
You can do: `if not Task.running_locally(): fig.show()`
Have a wrapper over Task to ensure S3 usage, tags, version number etc., and the project name can be skipped since it is picked up from the env var
Cool. Notice that when you clone the Task and the agent executes it, the project is already defined, so this env variable is meaningless, no?
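Just to illustrate, a wrapper along those lines could look something like this (a sketch; the env var names here are made up):
```python
import os
from clearml import Task

def init_task(task_name, project=None, tags=None):
    # fall back to (hypothetical) env vars for the project and the S3 output location
    project = project or os.environ.get("MY_CLEARML_PROJECT", "default-project")
    task = Task.init(
        project_name=project,
        task_name=task_name,
        output_uri=os.environ.get("MY_CLEARML_OUTPUT_URI", "s3://my-bucket/clearml"),
    )
    if tags:
        task.add_tags(tags)
    return task
```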
But essentially Prefect also has agents to run jobs on machines where the processes run (which seems to be exactly the same model as in ClearML),
Yes, it is conceptually very similar
this data is highly regulated data, ...
The main difference is that with ClearML the agents are running on your machines (either local or on your cloud account), so the clearml-server does not actually have access to the data streaming through it.
Does that make sense ?
The versions don't need to match, any combination will work.
Notice that we are using the same version:
https://github.com/allegroai/clearml-serving/blob/d15bfcade54c7bdd8f3765408adc480d5ceb4b45/clearml_serving/engines/triton/Dockerfile#L2
The reason was that the previous version did not support torchscript (similar to the error you reported)
My question is, why don't you use the "allegroai/clearml-serving-triton:latest" container ?
Both are fully implemented in the enterprise version. I remember a few medical use cases, and I think they are working on publishing a blog post on it, not sure. Anyhow I suggest you contact the sales people and I'm sure they will gladly set up a call/demo/PoC.
https://allegro.ai/enterprise/#contact
Hi ArrogantBlackbird16
but it returns a task handle even after the Task has been closed.
It should not ... That is a good point!
Let's fix that 🙂
Would be cool to let it get untracked as well, especially if we want it as an option
How would you decide what should be tracked?
Let me know if I understand you correctly, the main goal is to control the model serving, and deploy to your K8s cluster, is that correct ?