
There will be a roadmap for the community up on the blog this Monday. It may not be as detailed as you would like, but I am always happy to yak about specific requests 👍 👍
honestly, I don't think the feature store we have would suit your needs. It is much closer to a data store with some nice-to-haves, rather than a feature store that is missing some bits.
Personally, I have used Feast before with a client, but only because it's a "pip install" to get it into place. It's a much lower barrier to entry than most of the others (again, bear in mind, I am a pythonista)
so yes indeedly ..
sudo find /var/lib/ -type d -exec du -s -x -h {} \; | grep G | more
seems to give saner results.. of course, in your case, you may also want to grep for M to catch the megabyte-sized directories
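If shelling out feels clunky, the same directory survey can be sketched in stdlib Python (function names here are my own, just for illustration):

```python
import os

def dir_sizes(root):
    """Return {directory: total bytes of the files directly inside it}."""
    sizes = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        total = 0
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                total += os.path.getsize(path)
            except OSError:
                pass  # skip files that vanished or are unreadable
        sizes[dirpath] = total
    return sizes

def biggest(root, top=5):
    """Largest directories first, roughly `du | sort -rh | head`."""
    return sorted(dir_sizes(root).items(), key=lambda kv: kv[1], reverse=True)[:top]
```

Unlike the `du` one-liner, this only counts files directly inside each directory rather than rolling subdirectories up into their parents, which can actually make the culprit easier to spot.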
quick question if I may: are you running clearml-agent in --docker mode or without it? And are you running the clearml-agent on the same machine as the path, or does the path exist on another machine entirely?
do you have code that you can share ?
clearml-agent does have a build flag, install-globally.. That may get you where you want to go
since this is an enterprise machine, and you don't have sudo/root, I am wondering if there are already other docker networks/compose setups running/in use
but of course, this is all largely dependent on your code and structure etc
You may also want to brush up on the security and firewalls for AWS.. those always seem to be voodoo as far as I can tell 😄
that's... a very good question. When I was using Feast, more than one person was interested in using the ingested data, so it became that 'single source of truth'. From then on, ClearML was used to do the actual pipeline flow and the training/testing/serving runs, and since it's an all-Python shop, it worked pretty well. We used it offline, since we didn't care about having features available online at inference time. I should probably write something up about this when I have the time.
huh.. that example animation of automated driving on their homepage https://plotly.com/ is pretty awesome.
to be perfectly honest, I think I stopped investigating all the stuff plotly and friends can do these days.. I am sitting here with my mouth wide open.. some of their examples are awesome eye candy 😄
there is a --docker flag for clearml-agent that will build containers :)
the part that I am concerned about is that in the first pair of graphs you showed, the datasets (even just from looking at them) are very different 😕
To my mind, 'data programmatically' means using python and the functions on the Task object to get this, but I suspect this is not what you mean?
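Something along these lines with the clearml python package — a rough sketch from memory, so do check the Task reference for the exact method names:

```python
from clearml import Task

# look up an existing (completed or running) task by its id
task = Task.get_task(task_id="<your task id>")

# hyperparameters as a flat dict of strings
params = task.get_parameters()

# all scalars that were reported during the run
scalars = task.get_reported_scalars()
```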
howdy Tim, I have tried to stay out of this, because a lot is going over my head (I am not a smart man 🙂) but one thing I wanted to ask: are you doing the swapping in and out of code to do a/b testing with your models?! Is this the reason for doing this? Because if so, I would be vastly more inclined to try and think of a good way to do that. Again, I may be wrong, I am just trying to understand the use case for swapping code in and out. 🙂
The way I read that is: if you have exposed your clearml-server via a k8s ingress, then from outside the cluster you can point clearml-session at that ingress/ip
I agree with Martin.B, it appears to be a CUDA mismatch. The version of torch is trying to use CUDA 10.2, but you have agent.default_docker.image = nvidia/cuda:10.1-runtime-ubuntu18.04
that should probably be agent.default_docker.image = nvidia/cuda:10.2-runtime-ubuntu18.04
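i.e. in the agent section of your clearml.conf (assuming that 10.2 image tag is available on Docker Hub for your setup):

```
agent {
    default_docker {
        # match the CUDA version that your torch build expects
        image = "nvidia/cuda:10.2-runtime-ubuntu18.04"
    }
}
```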
I am so used to pip install, I default to there 😄
Hey Leandro, I don't think it will work if you replace the Elasticsearch server with the AWS managed service 😕
you will probably want to find the culprit, so a find should work wonders. I would suspect Elasticsearch first. It tends to go nuts 😕
Stupid question Tim (and I understand that maybe your code is under NDA etc) but can you show the python code that you need to a/b test against?
I would assume, from the sounds of it, that you are using the dockerfile to pip install python libs.. In which case a pip install clearml can also be done at image creation time.. I don't know what other methods you would be using to install python deps.. easy_install?!?
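e.g. one extra line in the Dockerfile — the base image here is just a placeholder, use whatever you are building from:

```
FROM python:3.9-slim
# bake clearml into the image alongside the rest of the python deps
RUN pip install clearml
```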
that is one of the things I am working away on, even as we speak! If you have any items that you want to see sooner rather than later, please let me know 👍
if you see it in the community server, then I believe the answer is "yes" - although don't hold me accountable on this 😄
the one on the right, for example, has no data points at around the 19 mark