
GreasyPenguin14 could you test with the 0.17.5rc4?
Also what's the PyCharm / OS?
ImmensePenguin78
I think the latest RC adds it, should be released later today 🙂
Correct (basically pip freeze results)
Exactly!
Regarding adding a feature store, probably not in the near future; a scalable feature store is quite the project. It is probably more realistic to have a recipe for deploying with Feast
I am creating this user
Please explain, I think this is the culprit ...
Hi Martin, of course not,
Smart!
I was just wondering if it has been patched yet and if not what is the expected timeline for patching it
Yes, I believe the target is a patch version 1.15.1 to be released in a couple of weeks. This is not a major issue but it's always better to have it fixed. (BTW: the enterprise version never had this issue to begin with, because it is of course authenticated, and it also has an additional RBAC layer on top.)
Hi @<1689808977149300736:profile|CharmingKoala14> , let me double check that
Yep, everything (both conda and pip)
I think CostlyOstrich36 managed to reproduce?!
ThickDove42 sorry, it took some time 🙂
import json
from trains.backend_api.session.client import APIClient

client = APIClient()
events = client.events.get_task_plots(task='task_id_here')
table = json.loads(events.plots[0]['plot_str'])
print('column order', table['data'][0]['cells']['values'])
Not the most comfortable way, but at least it is there
the other repos i have are constantly worked on and changing too
Not only will it be cloned automatically, the git diff of the sub-modules is stored as well 🙂
Hi MagnificentSeaurchin79
Could you test with the tensorflow toy example?
https://github.com/allegroai/clearml/blob/master/examples/frameworks/tensorflow/tensorboard_toy.py
the parameter datatypes are not being changed when loading them up.
These are the auto-logged parameters, inside YOLO, correct?
Just to make sure, you can actually see the value None
in the UI, is that correct? (if everything works as expected, you should see empty string there)
No should be fine... Let me see if I can get a windows box 🙂
Gitlab has support for S3 based cache btw.
This might still be considered "slow" compared to local-dist/cluster mount
Would adding support for some sort of post task script help? Is something already there?
Interesting, can you expand on the use case? (currently there is only pre-task script, for setup)
pip cache & git cache & venvs cache
Are all supported, you just need to map the folders.
If you do not want to spin up a PVC with an NFS mount, you can just mount an S3 bucket with s3fs as part of the container extra bash script (see the sketch below):
https://github.com/allegroai/clearml-agent/blob/b39b54bbafab39e6731cb742fdf317bc6dcae54a/docs/clearml.conf#L140
S3 FUSE filesystems:
https://github.com/kahing/goofys
https://github.com/s3fs-fuse/s3fs-fuse
WDYT?
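A minimal sketch of what that extra bash script entry could look like in clearml.conf, assuming s3fs is installable inside the container (the bucket name and mount point are placeholders):
agent {
    # commands executed inside the container before the task starts
    extra_docker_shell_script: [
        "apt-get update && apt-get install -y s3fs",
        "mkdir -p /mnt/s3_cache && s3fs my-cache-bucket /mnt/s3_cache"
    ]
}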
Hi Team, I'm currently trying to install ClearML-Server on a Powerpc server with RedHat7.
You are a brave man LividCrab90 !
Are there Dockerfiles for the ClearML-Server stack somewhere?
The main issue is replacing the DB containers, do you have elastic/mongo/redis for powerpc ?
Hi FunnyTurkey96
Which pip are you using? Basically pip changed the dependency resolver after 20.1
Change: https://github.com/allegroai/clearml-agent/blob/aede6f4bac71c8fc56e7cf982318a48527953a3c/docs/clearml.conf#L57
pip_version: "<20.2"
See if that helps
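For context, a minimal sketch of where that pin sits in clearml.conf (matching the linked default):
agent {
    package_manager {
        # force an older pip so the pre-20.2 dependency resolver is used
        pip_version: "<20.2"
    }
}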
Interesting!
I would also add that the Task name is not unique, and you can use it to describe the "process / goal etc.", which would make it pretty obvious to search / review from the UI.
Regarding models and branches, I would use the Task tags (you can have as many as you like) to tag the specific model type (or dev branch if the algorithm is different), which means you can also easily filter based on the Tags in the UI.
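For example, a minimal sketch of tagging a Task from the SDK (project, task and tag names are just illustrative):
from clearml import Task

# tags can be set when the task is created...
task = Task.init(project_name='examples', task_name='train model', tags=['resnet50', 'dev-branch'])
# ...or added later, and then used to filter tasks in the UI
task.add_tags(['baseline'])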
can you use the Web UI to compare the artifacts from two separate subprojects?
Yes comp...
Were you able to pass the 'clearml-init' configuration? It verifies your credentials against the API server
Hi CharmingShrimp37
Go to Github to your newly forked repo, you should have a green button suggesting to take your branch and making it a PR. It is that simple 🙂
Thanks CharmingShrimp37 !
Could you PR the fix ?
It will be just in time for the 0.16 release 🙂
I just called exit(0) in a notebook and it closed it (the kernel), no exception
SpotlessFish46
1. yes, you can access the entire code in the uncommitted changes, you can test it with:
task = Task.get_task(task_id='aabb')
task_dict = task.export_task()
2. Correct, but then if you need the entire code base you need to clone the repo and apply the uncommitted changes. Basically trains-agent does that when executed with build:
trains-agent build --id aabb --target ~/my_task_env
3. See (2)
Thanks GrievingTurkey78 , this is exactly what I was looking for!
Any chance you can open a GitHub issue (jsonargparse + lightning support)?
I really want to make sure this issue is addressed 🙂
BTW: this is only if jsonargparse is installed:
https://github.com/PyTorchLightning/pytorch-lightning/blob/368ac1c62276dbeb9d8ec0458f98309bdf47ef41/pytorch_lightning/utilities/cli.py#L33
could one also limit the number of CPU cores available?
If you are running in docker mode you can add: --cpus=<value>
see ref here: https://docs.docker.com/config/containers/resource_constraints/
Just add it to extra_docker_arguments
:
https://github.com/allegroai/clearml-agent/blob/2cb452b1c21191f17635bcb6222fa8bfd82afe29/docs/clearml.conf#L142
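For example, a minimal sketch of the matching clearml.conf entry (the value 4 is arbitrary):
agent {
    # passed to docker run, limits the container to 4 CPU cores
    extra_docker_arguments: ["--cpus=4"]
}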