The takeaway from the pricing page, I think, is that ClearML is free as in speech. If you want super duper support, that may cost $, but the folks in the community here do an awesome job in the meantime.
hey @RoundCat60 .. did you ever get the problem sorted?
honestly, I don't think the feature store we have would suit your needs. It is much closer to a data store in functionality with some nice-to-haves, rather than a feature store that is missing some bits.
Personally, I have used Feast before with a client, but only because it's a "pip install" to get it into place. It's a much lower barrier to entry than most of the others (again, bear in mind, I am a pythonista)
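to give a flavour of how low the barrier is, here is a minimal sketch of the Feast flow (it assumes a feature repo already exists in the current directory, and the feature ref / entity names are made up for illustration, not from a real repo):

```python
# pip install feast
from feast import FeatureStore

# point at an existing feature repository (where feature_store.yaml lives)
store = FeatureStore(repo_path=".")

# fetch online features for one entity row
# "driver_stats:avg_trips" and "driver_id" are illustrative names only
response = store.get_online_features(
    features=["driver_stats:avg_trips"],
    entity_rows=[{"driver_id": 1001}],
)
print(response.to_dict())
```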
Hey Slava, I don't mean to be "that guy" but I am interested in what you think a feature store means/implies/should do. The term is still (to my mind) very open to interpretation.. so I would honestly love to hear from you (and others)
The enterprise feature store we have should probably be named something more like "data store, but with advanced search/update capabilities".. that's not as nice sounding though.
If you mean feature store as 'data ingestion via a DSL with type checking' then this is no...
Never a problem Tim.. although it does prompt me to try and figure out a/b model testing myself ... I see everything as a "potential blog post" 😄 😄
there is a --docker flag for clearml-agent that will build containers :)
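roughly how that looks on the command line (the task id and target name are placeholders, double check against clearml-agent build --help):

```bash
# build a docker image that replicates a task's execution environment
clearml-agent build --id <task-id> --docker --target my-task-image
```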
There will be a roadmap for the community up and on the blog this Monday.. It may not be as detailed as you would like but I am always happy to yak about specific requests 👍 👍
SubstantialElk6 I am having a bit of a Monday morning (on a Wednesday, not good)
since python is running inside a docker/cri-o/containerd in k8s anyway, what would you gain from using the installed global python libraries ?? Any libs would have to be installed at container time anyway so.. urm. yeah.
feel free to treat me like an idiot and use small words to explain, I honestly don't mind 🙂 I could be missing something in your use case (more than likely)
the brain surface viewer (more Dash than anything) .. just.. wow
you will probably want to find the culprit, so a find should work wonders. I probably suspect elasticsearch first. It tends to go nuts 😕
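something like this usually surfaces the culprit quickly (paths assume the default docker-compose layout under /opt/clearml, adjust to yours):

```bash
# biggest directories under the clearml data root (elasticsearch lives here too)
sudo du -h --max-depth=2 /opt/clearml/data | sort -rh | head -20

# or hunt for individual files over 1GB anywhere on the disk
sudo find / -xdev -type f -size +1G -exec ls -lh {} \; 2>/dev/null
```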
that's... a very good question. When I was using Feast, it was because more than one person was interested in using the ingested data, so it became the 'single source of truth'. From then on, ClearML was used to do the actual pipeline flow and training/testing/serving runs and, since it's an all-python shop, it worked pretty well. We used it offline, since we didn't care about online serving of features at inference time. I should probably write up something about this when I have the time come t...
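for the offline retrieval part, the shape of it was roughly this (the entity dataframe and feature names are illustrative, not from the actual project):

```python
import pandas as pd
from feast import FeatureStore

store = FeatureStore(repo_path=".")

# entity rows with event timestamps, for point-in-time correct joins
entity_df = pd.DataFrame({
    "driver_id": [1001, 1002],
    "event_timestamp": pd.to_datetime(["2021-04-12", "2021-04-12"]),
})

# pull historical (offline) features as a training dataframe
training_df = store.get_historical_features(
    entity_df=entity_df,
    features=["driver_stats:avg_trips"],
).to_df()
```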
do you have code that you can share?
huh.. that example animation of automated driving on their homepage https://plotly.com/ is pretty awesome.
I am so used to pip install, I default to there 😄
I agree with Martin.B, it appears to be a CUDA mismatch. The version of torch is trying to use CUDA 10.2 but you have `agent.default_docker.image = nvidia/cuda:10.1-runtime-ubuntu18.04`; that should probably be `agent.default_docker.image = nvidia/cuda:10.2-runtime-ubuntu18.04`
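a quick way to confirm which CUDA build your torch actually expects, before touching the conf:

```bash
# prints the CUDA version this torch build was compiled against, e.g. 10.2
python -c "import torch; print(torch.version.cuda)"
```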
that is one of the things I am working away on, even as we speak! If you have any items that you want to see sooner rather than later, please let me know 👍
clearml-agent does have a build flag, --install-globally.. that may get you where you want to go
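from memory it slots into the build command like this (task id and image name are placeholders, check clearml-agent build --help to be sure):

```bash
# bake the task's python requirements into the image's global site-packages
clearml-agent build --id <task-id> --docker --install-globally --target my-image
```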
speaking as a google colab/jupyter notebooks person, I know we are missing some tutorials/docs there .. noted on the full blown example/testcase 👍
The way I read that is: if you have exposed your clearml-server via a k8s ingress, then from outside the cluster you can tell clearml-session "this is the k8s ingress/ip"
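if I remember the flag right it's something like this (the ingress address is yours of course, and do check clearml-session --help in case I am misremembering):

```bash
# point clearml-session at the externally reachable k8s ingress
clearml-session --remote-gateway <ingress-host-or-ip>
```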
hhrrmm.. in the initial problem, you mentioned that the /var/lib/docker/overlay2 was growing large in size.. but.. 4GB seems "fine" for docker images.. I wonder .. does your nvme0n1p1 ever report like 85% or 90% used or do you think that the 4GB is a lot ? when you restart the server, does the % used noticeably drop ? that would suggest tmp files inside the docker image itself which.. is possible with docker (weird but, possible)
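worth running these before and after a restart to see where the space actually goes (standard docker commands, nothing clearml specific):

```bash
# summary of space used by images, containers, volumes and build cache
docker system df

# biggest layers under overlay2
sudo du -h --max-depth=1 /var/lib/docker/overlay2 | sort -rh | head -10
```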
Sadly, I haven't, but if anyone has then please scream because I would love to pick your brain for (yet another) post/article I am writing 😄 😄
I take it you are wanting to use Airflow to replace/extend an existing Jenkins setup ??
Shameless plug here: https://clear.ml/blog/jupyter-notebooks-used-as-clearml-workers/
this whole area is a WIP of course, but I am trying to capture some of the really interesting Q and A from here so that they don't just disappear into the void 🙂
aaahhh.. I will wager good money Sir that you are then using ipython in vscode which is probably trying to do something "fancy" with the interpreter
Hey Leandro, I don't think it will work if you replace the elasticsearch server with the AWS managed service type 😕
that's pretty darned awesome!! I didn't know we could do that 😄 😄
To my mind, 'data programmatically' means using python and the functions on the Task object to get this, but I suspect this is not what you mean?
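if that is what you mean, a minimal sketch with the Task API (the task id, scalar call and artifact name are illustrative placeholders):

```python
from clearml import Task

# fetch an existing task by its id (placeholder id here)
task = Task.get_task(task_id="abc123")

# reported scalars (metrics) as a nested dict: title -> series -> values
scalars = task.get_reported_scalars()

# retrieve a registered artifact by name (name is illustrative)
artifact = task.artifacts["my_artifact"].get()
```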
sorry.. I am not understanding what you mean by 'I want the data programmatically'.