It was installed by 'pip install kwcoco' while my conda env was active.
Well I guess my question is, how does conda know where to install it from if this is not on the public channels? Is there a specific conda channel you added (or preconfigured)?
ColossalDeer61 btw, it turns out the docker-compose services were misconfigured in the copy on GitHub 🙂 I suggest you get the latest copy of it:
curl -o docker-compose.yml
Hmm, so what is the difference ?
Nice!
script, and the kwcoco not imported directly (but from within another package).
fyi: usually the assumption is that clearml will only list the directly imported packages, as these will pull in the respective required packages when the agent installs them ... (meaning that if the repository never actually imports kwcoco directly, it will not be listed; the package that you do import directly, the one you mentioned that is importing kwcoco, will be listed). I hope this ...
Okay, there is some odd stuff going on in the backend, I'll check with the backend guys tomorrow and update 🙂
There is no dataset.close() 🙂
Can you test with the latest RC:
pip install clearml==1.0.3rc0
Exporter would be nice, I agree; not sure it is on the roadmap at the moment 🙂
Should not be very complicated to implement if you want to take a stab at it.
Hi JitteryCoyote63
I would like to switch to using a single auth token.
What is the rationale behind that?
Ohh, if this is the case then it kind of makes sense to store it on the Task itself. Which means the Task object will have to store it, and then the UI will display it :(
I think the actual solution is a vault, per user, which would allow users to keep their credentials on the server, and the agent would pass those to the Task when it spins it up, based on the user. Unfortunately the vault feature is only available in the paid/enterprise version (with RBAC etc.).
Does that make sense?
SoggyBeetle95 you can configure the credentials in the clearml.conf
running on the agent machines:
https://github.com/allegroai/clearml-agent/blob/a5a797ec5e5e3e90b115213c0411a516cab60e83/docs/clearml.conf#L320
(I'm assuming these are storage credentials)
If you need general purpose env variables, you can add them here:
https://github.com/allegroai/clearml-agent/blob/a5a797ec5e5e3e90b115213c0411a516cab60e83/docs/clearml.conf#L149
with ["-e", "MY_VAR=MY_VALUE"]
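As a sketch (the key name follows the linked clearml.conf; the variable names and values here are hypothetical examples), the agent section would look something like:

```
agent {
    # extra arguments passed to "docker run" as-is;
    # each env variable is a "-e" flag followed by NAME=VALUE
    extra_docker_arguments: ["-e", "MY_VAR=MY_VALUE", "-e", "OTHER_VAR=other_value"]
}
```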
wouldn't it be possible to store this information in the clearml server so that it can be implicitly added to the requirements?
I think you are correct, and if we detect that we are using pandas to upload an artifact, we should try and make sure it is listed in the requirements
(obviously this is easier said than done)
And if instead I want to force "get()" to return me the path (e.g. I want to read the csv with a library that is not pandas) do we have an option for that?
Yes, c...
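For illustration only, here is a minimal local sketch (plain pandas, no ClearML calls, file name hypothetical) of the two retrieval modes being discussed: one deserializes the artifact back into an object, the other hands you the file path so any reader, not just pandas, can open it.

```python
import tempfile
from pathlib import Path

import pandas as pd

# what upload_artifact would store on the server: a serialized csv
df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
path = Path(tempfile.mkdtemp()) / "artifact.csv"
df.to_csv(path, index=False)

# mode 1: deserialize back into a DataFrame (like artifact.get())
restored = pd.read_csv(path)

# mode 2: just get the local file path (like artifact.get_local_copy()),
# which you can pass to any csv reader
local_path = str(path)

print(restored.equals(df))
```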
SoggyBeetle95 maybe it makes sense to configure the agent with an access-all credentials? Wdyt
Hi QuaintJellyfish58
like we want to set it using the UI but with limited options in a dropdown list
You mean like limit the ability of users to choose specific values? (so they do not mess things up?)
For that I need more info, what exactly do you need (or trying to achieve) ?
yes, I see no more than 114 plots in the list on the left side in full screen mode; just checked, and the behavior exists on Safari and Chrome
Let me check with the front-end guys 🙂
ColossalDeer61 FYI all is fixed now 🙂
Hi @<1546303269423288320:profile|MinuteStork43>
Failed uploading: cannot schedule new futures after interpreter shutdown
Failed uploading: cannot schedule new futures after interpreter shutdown
This is odd where / when exactly are you trying to upload it?
hmm interesting use case, why do you need to add the "--no-binary"
BTW: what would be a reason to go back to self-hosted? (not sure about the SaaS cost, but I remember it was relatively cheap)
CrookedWalrus33 from the log it seems the code is trying to use "kwcoco" but it is not listed under any "Installed packages" nor do you see any attempt to install it. Can you confirm ?
Hi SmarmyDolphin68
I see this in between my training epochs, what could be causing this?
This is basically saying we are saving a second model on the same Task and even though both are logged, only the last is stored on the Task itself.
This will change, as in the next version a Task will be able to hold references to multiple models in the artifactory 🙂
Of course, I used "localhost"
Do not use "localhost" use your IP then it would be registered with a URL that points to the IP and then it will work
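For example (the IP is a placeholder for your server's routable address; the ports are the clearml-server defaults), the api section of clearml.conf would look like:

```
api {
    # use the machine's routable IP, not "localhost",
    # so registered URLs resolve from other hosts
    web_server: http://192.168.1.2:8080
    api_server: http://192.168.1.2:8008
    files_server: http://192.168.1.2:8081
}
```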
LOL yes 🙂
just make sure it won't be part of the uncommitted changes of the AWS autoscaler 🙂
If I add the bucket to that ....
Oh no ... you should also turn SSL off for the connection, but I think this is only in the clearml.conf:
https://github.com/allegroai/clearml/blob/fd2d6c6f5d46cad3e406e88eeb4d805455b5b3d8/docs/clearml.conf#L101
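As a sketch of what that section might look like for an S3-compatible endpoint (host, key and secret below are placeholders):

```
sdk {
    aws {
        s3 {
            credentials: [
                {
                    # hypothetical non-AWS endpoint; adjust to your bucket host
                    host: "my-storage-host:9000"
                    key: "access_key"
                    secret: "secret_key"
                    multipart: false
                    secure: false  # SSL off for this connection
                }
            ]
        }
    }
}
```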