
GreasyPenguin66 Nice!!!
Very cool setup, and kudos on making it work with multiple users!
Quick question, shouldn't the JUPYTERHUB_API_TOKEN env variable be enough to gain access to the server? Why did you need to add it to the 'nbserver-x.json' as well?
CooperativeFox72 this is indeed sad news 😞
When you have the time, please see if you can send a code snippet to reproduce the issue. I'd like to have it fixed
Hi WorriedParrot51
So I think what you need is to map your external code into the docker, is that correct?
Also you want to always set the PYTHONPATH.
You can achieve both by configuring the trains.conf:
Here you can always add a predefined environment and mount point, regardless of the docker image or other docker arguments:
https://github.com/allegroai/trains-agent/blob/master/docs/trains.conf#L98
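For example, a minimal sketch of that section (the host path and PYTHONPATH value here are assumptions, adjust to your setup):

agent {
    # assumption: mount the host folder holding your external code into the container,
    # and export PYTHONPATH so the code is always importable
    extra_docker_arguments: [
        "-v", "/home/user/external_code:/external_code",
        "-e", "PYTHONPATH=/external_code",
    ]
}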
Will this solve the issue?
CourageousLizard33 Are you using the docker-compose to setup the trains-server?
https://www.geeksforgeeks.org/invalid-decimal-literal-in-python/
This is the warning, hence my question.
Hi CooperativeFox72 trains 0.16 is out, did it solve this issue? (btw: you can upgrade trains to 0.16 without upgrading the trains-server)
Assuming you are using docker-compose, the console output is a good start
Hi JitteryCoyote63 you can, but obviously you should be careful: they might both try to allocate more GPU memory than the HW actually has.
TRAINS_WORKER_NAME=machine_gpu0A trains-agent daemon --gpus 0 --queue default --detached
TRAINS_WORKER_NAME=machine_gpu0B trains-agent daemon --gpus 0 --queue default --detached
post_optional_packages: ["google-cloud-storage", ]
This will install it last (i.e. after all the other packages), but only if you have it in the "Installed packages" list.
I can definitely feel you!
(I think the implementation is not trivial: metrics data size is collected and stored as a cumulative value on the account, so going over it per Task is actually quite taxing for the backend. Maybe it should be an async request? e.g. "get me a list of the X largest Tasks"? How would the UI present it? FYI, keeping some sort of bookkeeping per task is not trivial either, hence the main issue.)
PlainSquid19 yes, the link is available in the actual paid product 🙂
I don't think they have the documentation open yet...
My recommendation is to fill out the contact-us form; you'll get a free online tour as well 🙂
Yep it should :)
I assume you add the previous iteration somewhere else, and this is the cause for the issue?
You mean why you have two processes ?
DilapidatedDucks58 trains-agent adds the artifactory URL as --extra-index-url. Are you sure you are getting the correct torch version in the container? The torch html is not an artifactory html, it is a list of links, and I just want to make sure you are getting the correct version, because otherwise it can default to the CPU version, which we don't want 🙂 Anyhow, you can use the direct link in the "installed packages" and just put there https://download.pytorch.org/whl/nightly/cu101...
can configuration objects refer to one-another internally in ClearML?
Interesting, please explain?
BTW: the new documentation should contain a full search over the docstring
Ohh, not really 😞 this is really low level, editing the DB directly.
You might be able to forcefully edit the links (i.e. artifacts) on the Dataset (task)
Check if this works
from clearml.backend_api.session.client import APIClient

c = APIClient()
t = c.tasks.get_by_id("DATASET_UUID_HERE")
# you might need to loop over the artifacts
t.data.execution.artifacts[0].uri = "NEW_URI_HERE"  # put the new artifact link here
c.tasks.edit(task=t.id, execution=t.data.execution, force=True)
Just to clarify, where do I run the second command?
Anywhere, just open a python console and import the offline task:
from trains import Task
Task.import_offline_session('./my_task_aaa.zip')
Related, how do I specify in my code the cache_dir where the zip is saved?
This is the Trains cache folder, you can set it in the trains.conf file:
https://github.com/allegroai/trains/blob/10ec4d56fb4a1f933128b35d68c727189310aae8/docs/trains.conf#L24
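A minimal sketch of the relevant section (key name per the linked trains.conf; the path is just an example):

sdk {
    storage {
        cache {
            # base folder used for cached / downloaded files
            default_base_dir: "~/.trains/cache"
        }
    }
}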
Hi @<1624941407783358464:profile|GrievingTiger47>
I think you should try to contact the sales guys here: None
Alright, I have a follow-up question then: I used the param --user-folder "~/projects/my-project", but any change I do is not reflected in this folder. I guess I am in the docker space, but this folder is not linked to the folder on my machine. Is it possible to do so?
Yes you must make sure the docker can mount a persistent folder for you to work on.
Let me check what's the easiest way to do that
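For reference, a rough sketch of the kind of mount that is needed if the container were launched manually (image name and paths are assumptions):

docker run -it -v /home/me/projects/my-project:/root/projects/my-project <your-image>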
How do I tell from the ClearML UI which dataset version am I using?
Hi SubstantialElk6, what exactly do you mean by "ClearML UI which datasets am I using"? Do you mean, is there auto magic adding the dataset ID when you call Dataset.get() in your code? (Because if you are, I specifically remember discussing adding this feature a few days ago, and you just bumped its priority 🙂)
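For context, a minimal sketch of the call in question (project and dataset names are made up):

from clearml import Dataset

# fetch a dataset by project/name; ds.id is the dataset ID that would
# ideally be registered automatically on the consuming Task
ds = Dataset.get(dataset_project="my_project", dataset_name="my_dataset")
print(ds.id)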
UnevenOstrich23
but interesting that the auto-reload config is not working as I expected.
Unfortunately the trains-agent does not support auto reloading the config file yet. If you think this would be a great feature, please feel free to open a GitHub feature request issue 🙂
Could you send the "installed packages" section of the Task that was created in the notebook ?
What I'd really want is the same behaviour in the console (one smooth progress bar) and one line per epoch in the logs; high hopes, right?
I think they send some "odd" character instead of CR, otherwise I cannot explain the difference.
Can you point to a toy example demonstrating the same issue ?
Also, I just tried the pytorch-lightning RichProgressBar (not yet released) instead of the default (which is unfortunately based on tqdm) and it works great.
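For reference, a minimal sketch of wiring it in (API as it later landed in pytorch-lightning; the pre-release import path may differ):

from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import RichProgressBar

# swap the default tqdm-based progress bar for the rich-based one
trainer = Trainer(callbacks=[RichProgressBar()])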
Yey!
Hmm that is a good question, are you mounting the clearml.conf somehow ?
Hi MagnificentSeaurchin79
Unfortunately there is currently no way to reorder the plots, but you have a valid point. May I suggest opening a GitHub UX issue?
Regarding the debug samples, the difference is that the confusion matrix report is actually metadata; you can get these numbers via the API or the download, but the debug samples are static images ...
BTW: you can try to produce an interactive side-by-side confusion matrix with plotly, and use report_plotly_figure
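A minimal sketch of that idea (the reporting method here is Logger.report_plotly as in current clearml, which may be what the name above refers to; the matrices are made up):

import plotly.graph_objects as go
from plotly.subplots import make_subplots
from clearml import Task

task = Task.init(project_name="examples", task_name="side by side confusion matrix")

# two made-up 2x2 confusion matrices rendered side by side
fig = make_subplots(rows=1, cols=2, subplot_titles=("model A", "model B"))
fig.add_trace(go.Heatmap(z=[[50, 3], [5, 42]]), row=1, col=1)
fig.add_trace(go.Heatmap(z=[[48, 5], [7, 40]]), row=1, col=2)

task.get_logger().report_plotly(
    title="confusion matrix", series="side by side", iteration=0, figure=fig
)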
from what I gather there is a lightly documented concept
Yes ... 🙂 the reason for it is that actually one could do:
@PipelineDecorator.pipeline(...)
def pipeline(i):
    ...

if __name__ == '__main__':
    pipeline(0)
    pipeline(1)
    pipeline(2)
Basically rerunning the pipeline 3 times.
This support was added as some users found a use case for it, but I think this would be a rare one