hmm this might help:
https://pip.pypa.io/en/stable/topics/configuration/#environment-variables
basically you might be able to define: PIP_NO_USE_PEP517=1
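For instance (a minimal sketch; assuming you want it set for whatever process ends up calling pip, e.g. the agent):
```bash
# set it in the environment of the process that runs pip (e.g. before starting the agent)
export PIP_NO_USE_PEP517=1

# or, if the agent runs in docker mode, pass it into the container via clearml.conf, e.g.:
# agent.extra_docker_arguments: ["-e", "PIP_NO_USE_PEP517=1"]
```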
works seamlessly throughout and in our current on-premise servers...
I'm assuming via something close to what I suggested above with .netrc?
JitteryCoyote63 what am I missing?
What are the errors you are getting (with / without the envs)?
Can you verify by adding the following to your extra_docker_shell_script:
https://github.com/allegroai/clearml-agent/blob/a5a797ec5e5e3e90b115213c0411a516cab60e83/docs/clearml.conf#L152
extra_docker_shell_script: ["echo machine example.com > ~/.netrc", "echo login MY_USERNAME >> ~/.netrc", "echo password MY_PASSWORD >> ~/.netrc"]
No worries, glad to hear you found it 🙂
when you say use
Task.current_task()
you mean for logging? which I'm guessing the fastai binding should do, right?
Right, this is a fancy way to say: make sure the actual sub-process is initializing ClearML so all the automagic kicks in. Since this is not "forked" but a whole new process, calling Task.current_task() is the equivalent of calling Task.init() with the same arguments (which you can also do, I'm not sure which one is more straightforward, wdyt?)
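Something along these lines (a minimal sketch; the project/task names are just placeholders):
```python
from multiprocessing import Process

from clearml import Task


def worker():
    # the sub-process is spawned (not forked), so re-attach to the experiment here;
    # per the above, this is equivalent to calling Task.init with the same arguments
    task = Task.current_task()
    task.get_logger().report_text("reporting from the sub-process")


if __name__ == "__main__":
    Task.init(project_name="examples", task_name="subprocess automagic")
    p = Process(target=worker)
    p.start()
    p.join()
```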
default is clearml data server
Yes, the default is the clearml files server, what did you configure it to? (e.g. it should be something like None )
No worries 🙂 glad to hear it worked out 🙂
Now I'm curious what's the workaround ?
But it should work out of the box ...
Yes it should ....
The user and personal access token are used as-is and propagate down to the submodules, since those are simply other git repositories.
Can you manually run the following successfully:
git clone --recursive https://user:token@github.com/company/repo_with_submodules
Hi @<1610083503607648256:profile|DiminutiveToad80>
This depends on how you configure the agents in your clearml.conf
You can do https if user/pass are configured, and you can force SSH and it will auto-mount your host SSH folder into the container and use it.
https://github.com/allegroai/clearml-agent/blob/0254279ed5987fbc69cebae245efaea33aec1ff2/docs/cl...
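As a sketch, the relevant bits in the agent's clearml.conf look roughly like this (key names as in the default clearml.conf; values are placeholders):
```
agent {
    # force cloning over SSH; in docker mode the host ~/.ssh folder is mounted into the container
    force_git_ssh_protocol: true

    # or, for https, configure a user / personal access token instead:
    # git_user: "my_user"
    # git_pass: "my_personal_access_token"
}
```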
GrievingTurkey78
maybe since the package is not directly imported in my code it is possible to get a different version to what I have locally (?).
If these are derivative packages (i.e. imported by other packages) they are not automatically logged when executing the Task manually (in order to keep the "installed packages" as lean as possible on the one hand, while still specifying the important packages for you).
That said, when the "trains-agent" executes the task it will store back...
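If you want such an indirectly-imported package pinned anyway, you can add it explicitly (a sketch; the package name and version here are hypothetical):
```python
from clearml import Task

# must be called before Task.init so it ends up in the "installed packages" section
Task.add_requirements("some_indirect_dependency", "==1.2.3")
task = Task.init(project_name="examples", task_name="pin indirect requirement")
```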
And how is the endpoint registered?
Thanks StaleKangaroo85 , bug is verified. Let me check to see where exactly the bug is.
Two points:
- Notice that x_labels should be the size of the histogram.
- It seems that you have to pass the labels as well (otherwise you get trace-0), so if you add labels=['random histogram'] and labels=['random histogram2'] , you'll get the correct legend.
Anyhow I'll make sure we also fix it in code so that labels automatically default to [series] if not specified, thanks!
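For example, the workaround looks roughly like this (a sketch; titles and values are made up):
```python
import numpy as np
from clearml import Task

task = Task.init(project_name="examples", task_name="histogram legend")
logger = task.get_logger()

xlabels = ["a", "b", "c", "d"]  # should match the histogram size
# passing labels=[series] gives the correct legend instead of "trace 0"
logger.report_histogram(
    title="histograms", series="random histogram", iteration=0,
    values=np.random.randint(10, size=len(xlabels)),
    xlabels=xlabels, labels=["random histogram"],
)
logger.report_histogram(
    title="histograms", series="random histogram2", iteration=0,
    values=np.random.randint(10, size=len(xlabels)),
    xlabels=xlabels, labels=["random histogram2"],
)
```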
Hi @<1734020162731905024:profile|RattyBluewhale45>
What's the clearml agent version? And could you verify with the latest RC?
Lastly, how are you running the agent, docker mode? What's the base container?
Pretty confusing that neither
services
StickyLizard47 basically this is how a services queue agent should be spun up:
https://github.com/allegroai/clearml-server/blob/9b108740da21f25407bd2c59583ca1c86f8e1faa/docker/docker-compose.yml#L123
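Outside of docker-compose, that boils down to roughly this command (a sketch; the exact entrypoint in the compose file may differ, and the queue name is just the default):
```bash
clearml-agent daemon --services-mode --queue services --docker --cpu-only --detached
```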
When spinning on a k8s cluster, this is a bit more complicated, as it needs to work with the clearml-k8s-glue.
See here how to spin it on k8s
https://github.com/allegroai/clearml-agent/tree/master/docker/k8s-glue
GrievingTurkey78 short answer no 🙂
Long answer: the files are stored as differential sets (think change sets from the previous version(s)). The collection of files is then compressed and stored as a single zip. The zip itself can be stored on Google, but on their object storage (GCS), not on GDrive. Notice that the default storage for clearml-data is the clearml-server; that said, you can always mix and match (even between versions).
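For example, storing the dataset archive on GCS instead of the clearml-server would look roughly like this (bucket and paths are hypothetical):
```python
from clearml import Dataset

ds = Dataset.create(dataset_name="my_dataset", dataset_project="datasets")
ds.add_files("/path/to/local/files")
# the compressed zip(s) go to the object storage, not GDrive
ds.upload(output_url="gs://my-bucket/datasets")
ds.finalize()
```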
Hi GreasyPenguin14
- Did using auto_connect_frameworks={'pytorch': False} solve the issue? (I imagine it did)
- Maybe we should have the option of wildcard support so it only auto-logs based on filename. Basically, using auto_connect_frameworks={'pytorch': "model*.pt"} would only auto-log the matching model files saved/logged, wdyt? See the sketch below.
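To illustrate (the False switch exists today; the wildcard form is only the proposal above, and the names are placeholders):
```python
from clearml import Task

# current workaround: disable automatic pytorch model logging entirely
task = Task.init(
    project_name="examples", task_name="manual model logging",
    auto_connect_frameworks={"pytorch": False},
)

# proposed: only auto-log checkpoints whose filename matches the wildcard
# task = Task.init(..., auto_connect_frameworks={"pytorch": "model*.pt"})
```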
AstonishingSeaturtle47 How would the code run without the sub-modules? And what is the problem we are trying to solve? (Because unfortunately there is no switch to disable it)
so moving b into a won't work if some subfolders are already there
I thought that if they are already there you would merge / overwrite, isn't that what you need? a/b/c/2.txt seems like the result of moving b from Dataset B into folder b in Dataset A, what am I missing?
(My assumption is that you have both datasets locally on the same machine and that you can just copy the files from folder b of Dataset B into folder b of Dataset A)
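Roughly like this (a sketch under that assumption; dataset names, projects and paths are placeholders):
```python
import shutil

from clearml import Dataset

# mutable working copy of Dataset A, plus a read-only local copy of Dataset B
local_a = Dataset.get(dataset_name="A", dataset_project="datasets").get_mutable_local_copy("/tmp/dataset_a")
local_b = Dataset.get(dataset_name="B", dataset_project="datasets").get_local_copy()

# merge/overwrite: copy folder "b" of Dataset B into folder "b" of Dataset A
shutil.copytree(f"{local_b}/b", f"{local_a}/b", dirs_exist_ok=True)  # python 3.8+
```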
DefeatedCrab47 if TB has it as image, you should find it under "debug_samples" as image.
Can you locate it there ?
Hi @<1687643893996195840:profile|RoundCat60>
anyone with access to the server
Is that a thing? If you have access to the server, I'm not sure how "protected" you are, even if using a key ring...
(unfortunately I do not think we support anything else, but what did you have in mind?)
DeterminedCrab71 that is a good point, how does plotly adjust for NaNs on graphs?
I would suggest deleting them immediately when they're no longer needed,
This is the idea for the next RC, it will delete them after it is done using them 🙂
DilapidatedDucks58 You might be able to. Check the links, they might be embedded into the docker image, so you can map a different png file from the host 🙂
BTW: what would you change the icons to?
Let me know if I understand you correctly, the main goal is to control the model serving, and deploy to your K8s cluster, is that correct ?
Do you have a roadmap which includes resolving things like this
Security, SSO, etc. are usually out of scope for the open-source platform as they really make the entire thing a lot harder to install and manage. That said, I know that on the Enterprise solution they do have SSO and LDAP support and probably way more security features. I hope it helps 🙂
Assuming it was hashed, the seed would be stored on the same server, so knowing both would allow me the same access, no?
If that's the case you have two options:
- Create a Dataset from local/nfs and upload it to the S3-compatible NetApp storage (notice this creates an immutable copy of the data)
- Create a Dataset and add "external links" to either the S3 storage with None :port/bucket/... or a direct file link file:///mnt/nfs/path . Notice that in this example the system does not manage the data, meaning that if someone deletes/moves the data you are unaware of it (see the sketch below). And of course you can...
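A sketch of the second option (the endpoint, bucket and paths are placeholders):
```python
from clearml import Dataset

ds = Dataset.create(dataset_name="external_links", dataset_project="datasets")
# register links to files already sitting on the S3-compatible storage; the data itself
# is not copied, so moving/deleting it outside ClearML goes unnoticed
ds.add_external_files(source_url="s3://netapp-endpoint:9000/bucket/path/")
# or a direct file link on the shared nfs mount
ds.add_external_files(source_url="file:///mnt/nfs/path/")
ds.upload()
ds.finalize()
```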