if you manage to get it up and running, I would love to do a deep dive with you to understand how you did it and share it on the company's blog 🙂
no pressure 😜
I know the storage can be swapped out to using S3 (obviously)
you can cut steak with a spoon, but that doesn't mean it's a good idea 😉
I also take it you were thinking of Fargate for the clearml-agents.. which would be awesome but.. oi.. not sure how much luck you would have there 🙂
Hey Leandro, I don't think it will work with replacing Elasticsearch with the AWS managed service 😕
There is already a pre-built AMI fwiw 🙂
sorry.. I am not understanding what you mean by 'I want the data programmatically'.
oh it's not a problem.. if you can post the logs of ES after startup, that's probably the next step.. along with the output of 'docker network list' 👍
since this is an enterprise machine, and you don't have sudo/root, I am wondering if there are already other docker networks / docker-compose setups running/in use
one last tiny thing TrickySheep9 .. please do let us know how you get on, good or bad.. and if you bump into anything unexpected then please do scream and let us know 🙂
howdy Tim, I have tried to stay out of this, because a lot is going over my head (I am not a smart man 🙂) but one thing I wanted to ask: are you swapping code in and out to do A/B testing with your models? Is that the reason for doing this? Because if so, I would be vastly more inclined to try and think of a good way to do that. Again, I may be wrong; I am just trying to understand the use case for swapping code in and out. 🙂
To my mind, 'data programmatically' means using Python and the functions on the Task object to get it, but I suspect this is not what you mean?
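fwiw, a minimal sketch of what that could look like with the ClearML SDK (the helper names here are mine, and the task ID would be whatever experiment you're after):

```python
def fetch_scalars(task_id):
    """Fetch the reported scalar metrics of a task via the ClearML SDK."""
    # Imported lazily so the rest of the file works without clearml installed.
    from clearml import Task

    task = Task.get_task(task_id=task_id)
    # Nested dict: {graph_title: {series_name: {"x": [...], "y": [...]}}}
    return task.get_reported_scalars()


def last_values(scalars):
    """Reduce the nested scalar dict to the last reported value per series."""
    return {
        (title, series): points["y"][-1]
        for title, plots in scalars.items()
        for series, points in plots.items()
    }
```

so 'programmatically' in that sense would be: grab the Task, pull the scalars, and post-process them however you like in plain Python.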
honestly.. I think Google are "fine" with it.. there are plenty of other (more egregious) abuses of Colab and they haven't screamed yet.
Sadly, I haven't, but if anyone has then please scream because I would love to pick your brain for (yet another) post/article I am writing 😄
I take it you are wanting to use Airflow to replace/extend an existing Jenkins setup ??
huh.. that example animation of automated driving on their homepage https://plotly.com/ is pretty awesome.
so I am not entirely sure what else you have changed Sir
understood. Are you comfortable with docker? If so, I would probably suggest doing a 'docker run -it <identifier> bash' and seeing if that folder does, indeed, exist in the docker image
Sorry.. we are currently swamped with ODSC East requests and presentations etc 😄
I am guessing it could be but.. I don't feel that k8s is clearml-session's main focus/push
I have a strange theory, that if the code is in classes, then you could include both in one .py file and then ENV["use_model"]="a" or ENV["use_model"]="b" to select between them .. in that way, you would clone the experiment and change the config and re-run
but of course, this is all largely dependent on your code and structure etc
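a tiny sketch of that theory, in case it helps (class and variable names are made up for illustration, not from your code):

```python
import os


# Both implementations live in the same .py file..
class ModelA:
    name = "a"


class ModelB:
    name = "b"


def select_model():
    # ..and an environment variable picks which one runs. Cloning the
    # experiment and flipping USE_MODEL in its config re-runs the same
    # script with the other implementation.
    choice = os.environ.get("USE_MODEL", "a").lower()
    return {"a": ModelA, "b": ModelB}[choice]()
```

then cloning + editing the config in the UI is all the "swap" you need.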
Hey Manoj, I am not sure how clearml-session would know how to set up kube-proxy, if that's your intent.
Personally, I would run the clearml-server and agents on the k8s cluster, and then expose the server endpoints via kube proxy or some other nicer ingress. Then you can run jupyter locally and you should be good. Jupyter session remotely running on k8s would be a logistical nightmare 🙂
The takeaway from the pricing page, I think, is that clearml is free as in speech. If you want super duper support that may cost $ but the folks in the community here do an awesome job in the meantime.
I would assume, from the sounds of it, that you are using the dockerfile to pip install python libs.. In which case a 'pip install clearml' can also be done at image creation time.. I don't know what other methods you would be using to install python deps.. easy_install?!?
SubstantialElk6 I am having a bit of a monday morning (on a wednesday, not good)
since python is running inside a docker/cri-o/containerd in k8s anyway, what would you gain from using the installed global python libraries ?? Any libs would have to be installed at container time anyway so.. urm. yeah.
feel free to treat me like an idiot and use small words to explain, I honestly don't mind 🙂 I could be missing something in your use case (more than likely)
clearml-agent does have an '--install-globally' build flag.. that may get you where you want to go
The way I read that is: if you have exposed your clearml-server via a k8s ingress, then you can, from outside the cluster, tell clearml-session that this is the k8s ingress/IP