 
@RoundCat60 you set it once, inside the docker-compose itself.. it will affect all docker containers, but to be honest, docker tends to log everything
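for example, something along these lines in the docker-compose.yml (service names here are just placeholders, the anchor is what lets you define it once and reuse it):
```yaml
# define the logging policy once at the top level...
x-logging: &default-logging
  driver: json-file
  options:
    max-size: "10m"   # rotate each container log file at 10MB
    max-file: "3"     # keep at most 3 rotated files per container

services:
  apiserver:          # placeholder service name
    logging: *default-logging
  fileserver:         # ...and point every service at it
    logging: *default-logging
```
that caps how much disk the container logs can eat, instead of letting them grow forever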
To my mind, 'data programmatically' means using python and the functions on the Task object to get this, but I suspect this is not what you mean?
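e.g. a rough sketch of what I mean by that (the task id is a placeholder, and I'm assuming the usual clearml sdk here):
```python
from clearml import Task

# grab an existing experiment by its ID (placeholder, use your own)
task = Task.get_task(task_id="<your-task-id>")

print(task.get_parameters())           # hyperparameters, as a dict
scalars = task.get_reported_scalars()  # every scalar series reported to the task
print(list(scalars.keys()))            # metric titles
```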
that is one of the things I am working away on, even as we speak! If you have any items that you want to see sooner rather than later, please let me know 👍
Hey Leandro, I don't think it will work if you replace Elasticsearch with the AWS managed service 😕
I know the storage can be swapped out to using S3 (obviously)
There is already a pre-built AMI fwiw 🙂
do you have code that you can share?
Hello E.K, do you have any examples handy to show us the difference?
clearml-agent does have a build flag, --install-globally.. that may get you where you want to go
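roughly this (from memory, so double check the flag against clearml-agent build --help; <task-id> is a placeholder):
```
clearml-agent build --id <task-id> --docker --install-globally
```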
I would think that a combination of kubernetes (I believe the preferred way to support multiple users at once, but open to being wrong) and individual queues is probably the solution here.
for example, in kubernetes you could set up an agent to listen to bob-queue and another agent to listen to alice-queue. In the kubernetes dashboard you could assign a certain amount of cpu/memory and, if using taints, gpu or not.
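so, very roughly, each agent pod would run something like this (queue names taken from the example above):
```
clearml-agent daemon --queue bob-queue --docker
clearml-agent daemon --queue alice-queue --docker
```
and the cpu/memory/gpu limits would live in each agent's pod spec, not in clearml itself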
Howdy Jevgeni, that's.. strange. I am using google colab (free edition 🙂) and doing exactly the same as you, but I don't see any uncommitted changes.. hrrm.. can you try this on colab maybe? I am wondering if it's your jupyter notebook's version of python or some other notebook extension maybe
hhrrmm.. in the initial problem, you mentioned that /var/lib/docker/overlay2 was growing large in size.. but 4GB seems "fine" for docker images.. I wonder.. does your nvme0n1p1 ever report something like 85% or 90% used, or do you think that the 4GB is a lot? when you restart the server, does the % used noticeably drop? that would suggest tmp files inside the docker image itself, which is possible with docker (weird, but possible)
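a quick way to sanity check, if you want (standard docker/coreutils commands):
```
df -h /            # how full the disk actually is
docker system df   # what docker itself accounts for: images, containers, volumes, build cache
```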
Ohhh... that makes sense.. use best of breed in areas where we don't overlap.
Hey Federico, since you are doing this from inside python, you could always call get_parameters_as_dict on the Task you have cloned, merge/update whichever ones you want (or not), and then call set_parameters_as_dict.. I believe that should get you where you want to go 🙂
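something like this sketch (the task id, queue name, and parameter names are placeholders, obviously):
```python
from clearml import Task

# clone an existing template task (placeholder ID)
template = Task.get_task(task_id="<template-task-id>")
cloned = Task.clone(source_task=template, name="clone with overrides")

# pull the parameters, tweak what you want, push them back
params = cloned.get_parameters_as_dict()
params.setdefault("Args", {})["lr"] = "0.01"   # hypothetical parameter override
cloned.set_parameters_as_dict(params)

# optionally, send it off to an agent
Task.enqueue(cloned, queue_name="default")
```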
oh .. no worries at all then.. you are free to do whatever you want to with it.. but I don't think it's designed with what you are trying to do in mind sadly
if you manage to get it up and running, I would love to do a deep dive with you to understand how you did it, and share it on the company's blog 🙂 🙂
no pressure 😜
I agree with Martin.B, it appears to be a CUDA mismatch. The version of torch is trying to use CUDA 10.2, but you have
```
agent.default_docker.image = nvidia/cuda:10.1-runtime-ubuntu18.04
```
that should probably be
```
agent.default_docker.image = nvidia/cuda:10.2-runtime-ubuntu18.04
```
howdy Tim, I have tried to stay out of this, because a lot is going over my head (I am not a smart man 🙂) but one thing I wanted to ask: are you doing the swapping in and out of code to do A/B testing with your models?! Is this the reason for doing this? Because if so, I would be vastly more inclined to try and think of a good way to do that. Again, I may be wrong; I am just trying to understand the use case for swapping code in and out. 🙂
I also want to stress that these don't need to be happy-path interviews/results, although those are infinitely nicer to do 🙂 So I hear you with also noting what does not work as much as what did 👍
I would assume, from the sounds of it, that you are using the dockerfile to pip install python libs.. in which case a pip install clearml can also be done at image creation time.. I don't know what other methods you would be using to install python deps.. easy_install?!?
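i.e. somewhere in the dockerfile, next to your other deps (the base image here is just an example):
```
FROM python:3.9-slim
# install clearml at image build time, alongside everything else
RUN pip install --no-cache-dir clearml
```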
the one on the right, for example, has no data points at around the 19 mark
Howdy and Morning @RoundCat60.. when docker is using overlay2, its mount points don't show up in a 'df' btw, they will only appear in a 'df -a', mostly because, since they are simply 'overlays', they don't (technically) consume any space (I mean, the files are still in /var/lib, but not for the space-counting practices used by df)
this is why I was suggesting a find, maybe with a 'du' .. actually.. let me try that here.. 2s
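for reference, this is the kind of thing I mean (the overlay2 path is the default docker root, adjust if yours differs):
```
df -a | grep overlay                                      # overlay mounts only show up with -a
sudo du -sh /var/lib/docker/overlay2/* | sort -h | tail   # real on-disk size per layer dir
```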
SubstantialElk6 I am having a bit of a monday morning (on a wednesday, not good)
since python is running inside a docker/cri-o/containerd container in k8s anyway, what would you gain from using the globally installed python libraries? Any libs would have to be installed at container build time anyway, so.. urm, yeah.
feel free to treat me like an idiot and use small words to explain, I honestly don't mind 🙂 I could be missing something in your use case (more than likely)
huh.. that example animation of automated driving on their homepage https://plotly.com/ is pretty awesome.