Not really - it will just show the string. A preview would be more like a low-res version of the uploaded image or similar.
So some UI that shows the contents of users.get_all ?
The idea is that the features would be copied/accessed by the server, so we can transition slowly and not use the available storage manager for data monitoring
On it! Should I include the additional user filters described above?
Don't even need to specify json=... 😉 Thanks!
I can scroll sideways, but if I open any of the comparison items, I can pretty much only see one experiment's values
A follow-up question (instead of opening a new thread): is there a way I could signal some files/directories to be copied to the execute_remotely task?
Where do I import this APIClient from, AgitatedDove14? In the meantime I edited it directly in mongo, but editing a db directly on a Friday is a big no-no
The logs are on the bucket, yes.
The default file server is also set to s3://ip:9000/clearml
Bump SuccessfulKoala55 ?
It's of course not an MLOps issue, so I understand it's not high on the priority list, but it would be kinda cool to just have a simple view presenting the content of users.get_all 😄
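To make it concrete, something like this is all I'm after - a rough sketch, assuming APIClient can be imported from clearml.backend_api.session.client and exposes users.get_all (worth verifying against the installed clearml version):

```python
# Rough sketch only - the import path and the exact shape of the
# returned user objects should be checked against your clearml version.
from clearml.backend_api.session.client import APIClient

client = APIClient()
users = client.users.get_all()  # list of registered users
for user in users:
    print(user.id, user.name)
```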
I am indeed
IIRC, get_local_copy() downloads a local copy and returns the path to the downloaded file. So you might be interested in e.g. local_csv = pd.read_csv(a_task.artifacts['train_data'].get_local_copy())
With the models, you're looking for get_weights(). It acts the same as get_local_copy(), so it returns a path.
EDIT: I think also get_local_copy() for a model should work 👍
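Putting those together, roughly (a sketch - the task ID, artifact name, and model handling are placeholders, not exact API guarantees):

```python
import pandas as pd
from clearml import Task

# Placeholder task ID - substitute a real one
a_task = Task.get_task(task_id="<task-id>")

# get_local_copy() downloads the artifact and returns the local path
local_csv = pd.read_csv(a_task.artifacts["train_data"].get_local_copy())

# For models, get_weights() (or get_local_copy()) likewise returns a local path
output_models = a_task.models["output"]
if output_models:
    weights_path = output_models[0].get_weights()
```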
Sorry, not necessarily RBAC (although that is tempting 😉 ), but for now was just wondering if an average joe user has access to see the list of "registered users"?
I guess following the example https://github.com/allegroai/clearml/blob/master/examples/advanced/execute_remotely_example.py , it's not clear to me how the server has access to the data loaders' location when it hits execute_remotely
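For reference, the pattern I'd expect to need here - a sketch, assuming the data is uploaded as an artifact before execute_remotely() so the remote machine can fetch it rather than relying on local paths (queue and file names are placeholders):

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="remote run")

if task.running_locally():
    # Upload whatever the data loaders need; plain local file paths
    # won't exist on the machine the agent runs on.
    task.upload_artifact(name="train_data", artifact_object="data/train.csv")

# Everything below runs on the agent that pulls the task from the queue
task.execute_remotely(queue_name="default", clone=False, exit_process=True)

# Remote side: fetch the artifact back to a local path
local_path = task.artifacts["train_data"].get_local_copy()
```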
No, it doesn't - the agent has its own clearml.conf file.
I'm not too familiar with clearml on docker, but I do remember there are config options to pass some environment variables to docker.
You can then set your environment variables in any way you'd like before the container starts
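From the task side this can look roughly like the following - a sketch, assuming a clearml version where set_base_docker() accepts docker_arguments (otherwise the equivalent settings live in the agent's clearml.conf):

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="docker env example")

# Ask the agent to start the container with extra docker arguments,
# e.g. forwarding environment variables into it.
task.set_base_docker(
    docker_image="python:3.8",
    docker_arguments=["-e", "MY_VAR=my_value"],
)
```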
So no direct page to see e.g. how many people have registered and/or if someone accidentally made two (or more) accounts, or somewhere to just delete users, etc
SuccessfulKoala55 CostlyOstrich36 actually it is the import statement, just finally got around to the traceback:
  File "/home/.../ccmlp/configs/mlops.py", line 4, in <module>
    from clearml import Task
  File "/home/.../.venv/lib/python3.8/site-packages/clearml/__init__.py", line 4, in <module>
    from .task import Task
  File "/home/.../.venv/lib/python3.8/site-packages/clearml/task.py", line 31, in <module>
    from .backend_interface.metrics import Metrics
  File "/home/......
If I set the following:
"extra_clearml_conf": "sdk.aws.s3.credentials = [\n{\nhost: 'ip:9000'\nkey: 'xxx'\nsecret: 'xxx'\nmultipart: false\nsecure: false\n},\n{\nhost: 'ip2:9000'\nkey: 'xxx'\nsecret: 'xxx'\nmultipart: false\nsecure: false\n}\n]"
I run into a weird furl error:
ValueError: Invalid port '9000''.
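For readability, this is the same value laid out as I'd build it in Python before it goes into the autoscaler configuration (hosts and credentials are placeholders; this only shows the formatting, not a claim about what triggers the furl error):

```python
# Same content as above, just built as a readable multi-line string
extra_clearml_conf = """
sdk.aws.s3.credentials = [
    {
        host: "ip:9000"
        key: "xxx"
        secret: "xxx"
        multipart: false
        secure: false
    },
    {
        host: "ip2:9000"
        key: "xxx"
        secret: "xxx"
        multipart: false
        secure: false
    }
]
"""

autoscaler_config = {"extra_clearml_conf": extra_clearml_conf}
```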
Yup! Seems to have been some brief unavailability for some reason
That will come at a later stage
I see, okay that already clarifies some stuff, I'll dig a bit more into this then! Thanks!
I guess? 🤔 I mean the same filter option one has for e.g. tags in the table view. In the "all experiments" project I think it would make sense for one to be able to select the projects of interest, or even filter for textual matches.
Sorry I meant the cards indeed :)
Also (sorry for all of these!) - could be nice to have a direct "task comparison" link in the UI somewhere, that would open a comparison with no tasks and the user can add them manually using the "add experiments" button. :)
So basically what I'm looking for and what I have now is something like the following:
(Local) I have a well-defined aws_autoscaler.yaml that is used to run the AWS autoscaler. That same autoscaler is also run with CLEARML_CONFIG_FILE=....
(Remotely) The autoscaler launches, listens to the predefined queue, and is able to launch instances as needed. I would run a remote execution task object that's appended to the autoscaler queue. The autoscaler picks it up, launches a new instanc...
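On the local side, the hand-off I have in mind looks roughly like this (a sketch - the queue name is just whatever the autoscaler is configured to monitor):

```python
from clearml import Task

# Local side: define the task and hand it off to the queue
# that the remotely running autoscaler is monitoring.
task = Task.init(project_name="examples", task_name="remote training")
task.execute_remotely(queue_name="aws_queue", exit_process=True)

# From here on, the code runs on whichever instance the autoscaler
# spins up to serve the "aws_queue" queue.
```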