It also happens without hitting F5, after some time (~hours)
The number of documents in the old and the new env is the same though 🤔 I really don't understand where this extra used space comes from
Oh, and also use the colors of the series. That would be a killer feature. Then I'd simply need to match the color of the series to the name to check the tags
So I installed docker, added my user to the group allowed to run docker (so I don't have to run with sudo, otherwise it fails), then ran these two commands and it worked
Awesome! Thanks!
Thanks! 3. I don't know, I never used Highcharts
this is the last line, same as before
There is a pinned GitHub thread at https://github.com/allegroai/clearml/issues/81 , that seems to be the right place?
Ooh, that's cool! I could place torch==1.3.1 there
is there a command / file for that?
Yes I agree, but I get a strange error when using dataloaders:
RuntimeError: [enforce fail at context_gpu.cu:323] error == cudaSuccess. 3 vs 0. Error at: /pytorch/caffe2/core/context_gpu.cu:323: initialization error
only when I use num_workers > 0
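For reference, this error typically means CUDA was initialized in the main process and then re-initialized inside a forked DataLoader worker. A minimal sketch of one common workaround, assuming a standard PyTorch script (the dataset is illustrative):

```python
import torch
import torch.multiprocessing as mp
from torch.utils.data import DataLoader, TensorDataset

if __name__ == "__main__":
    # CUDA contexts cannot survive a fork; "spawn" starts clean workers
    # and avoids the context_gpu.cu initialization error.
    mp.set_start_method("spawn", force=True)

    dataset = TensorDataset(torch.randn(128, 3))  # illustrative dataset
    loader = DataLoader(dataset, batch_size=16, num_workers=2)
    for (batch,) in loader:
        pass  # training step would go here
```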
So it's already there, but commented out; any reason why?
Is it safe to turn off replication while a reindex operation is happening? The reindexing is rather slow and I am wondering if turning off replication will speed up the process
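For context, a minimal sketch of toggling replication around the reindex, assuming Elasticsearch on localhost:9200 and an illustrative index name:

```python
import requests

ES = "http://localhost:9200"
INDEX = "events-index"  # illustrative index name

# Drop replicas before the reindex so documents are only written once...
requests.put(f"{ES}/{INDEX}/_settings",
             json={"index": {"number_of_replicas": 0}})

# ... run the reindex here ...

# ...then restore the replicas once the reindex has finished.
requests.put(f"{ES}/{INDEX}/_settings",
             json={"index": {"number_of_replicas": 1}})
```

Replicas are rebuilt from the primary afterwards, so no data is lost, but there is no redundancy while they are off.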
Hi SuccessfulKoala55, will I be able to update all references to the old S3 bucket using this command?
the Deep Learning AMI from NVIDIA (Ubuntu 18.04)
Can I simply set agent.python_binary = path/to/conda/python3.6?
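If it helps, a hedged sketch of what that might look like in the agent section of the configuration file (the conda path is illustrative):

```
agent {
    # point the agent at a specific interpreter instead of the default
    python_binary: "/path/to/conda/envs/myenv/bin/python3.6"
}
```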
Maybe the agent could be adapted to have a max_batch_size parameter?
On clearml or clearml-server?
Nevertheless, there might still be some value in that, because it would reduce the startup time by removing the initial setup of the agent and the download of the data to the instance. But the gain would be smaller than I initially described, if stopped instances are bound to the same capacity limitations as newly launched ones
Hi, yes, you can use trains_agent.backend_api.session.client.APIClient.queues.get_all()
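A minimal usage sketch, assuming trains_agent is installed and configured; the attributes printed on the queue objects are an assumption:

```python
from trains_agent.backend_api.session.client import APIClient

client = APIClient()              # picks up credentials from the local config
queues = client.queues.get_all()  # list the registered queues
for queue in queues:
    print(queue.name, queue.id)   # assumed attributes on the returned objects
```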
Isn't it overkill to run a whole Ubuntu 18.04 image just to run a dead simple controller task?
Sorry both of you, my problem was actually lying somewhere else (both buckets are in the same region) - thanks for your time!
It worked like a charm! Awesome, thanks AgitatedDove14!
TimelyPenguin76 That sounds amazing! Will there be a fallback mechanism as well? p3.2xlarge instances are often in shortage, so it would be nice to define one resource requirement as first choice (e.g. p3.2xlarge), and if it's not available, fall back to another one (e.g. g4dn)
That would be amazing!
This allows me to inject YAML files into other YAML files
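For anyone curious, a minimal sketch of one way to achieve this with PyYAML, using a hypothetical !include tag (not necessarily the mechanism used here):

```python
import os
import yaml

class IncludeLoader(yaml.SafeLoader):
    """SafeLoader extended with a hypothetical `!include` tag."""

def _include(loader, node):
    # Resolve the included file relative to the including file.
    base = os.path.dirname(getattr(loader, "name", "."))
    path = os.path.join(base, loader.construct_scalar(node))
    with open(path) as f:
        return yaml.load(f, IncludeLoader)

IncludeLoader.add_constructor("!include", _include)

# main.yaml could then contain a line like:  model: !include model.yaml
with open("main.yaml") as f:
    config = yaml.load(f, IncludeLoader)
```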