Thanks @<1671689437261598720:profile|FranticWhale40> !
I was able to locate the issue; a fix should be released later today (or, worst case, tomorrow)
That is correct.
Obviously once it is in the system, you can just clone/edit/enqueue it.
Running it once is a means to populate the trains-server.
Makes sense?
Hmm, so is the problem having the git user inside the code, or the k8s_glue print?
Command-line arguments for the arg parser should be passed via the "Args" section in the Configuration tab.
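As a minimal sketch (the argument names here are made up), whatever you define in argparse shows up under "Args" and can be overridden there:

```python
import argparse

# Hypothetical script arguments; when the task runs through ClearML,
# these become editable fields in the Configuration tab's "Args" section.
parser = argparse.ArgumentParser()
parser.add_argument("--lr", type=float, default=0.01)
parser.add_argument("--epochs", type=int, default=10)
args = parser.parse_args([])  # empty list so the sketch runs without a real CLI

print(args.lr, args.epochs)
```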
What is the working directory on the experiment ?
Hi @<1715900788393381888:profile|BitingSpider17>
Notice that you need __ (double underscore) for converting "." in the clearml.conf file,
this means agent.docker_internal_mounts.sdk_cache
will be CLEARML_AGENT__AGENT__DOCKER_INTERNAL_MOUNTS__SDK_CACHE
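The naming rule can be sketched as a tiny helper (just an illustration of the conversion, not a real ClearML API):

```python
def conf_key_to_env_var(key: str) -> str:
    # clearml.conf key -> agent env-var override:
    # prefix with CLEARML_AGENT, uppercase, replace "." with "__"
    return "CLEARML_AGENT__" + key.upper().replace(".", "__")

print(conf_key_to_env_var("agent.docker_internal_mounts.sdk_cache"))
# -> CLEARML_AGENT__AGENT__DOCKER_INTERNAL_MOUNTS__SDK_CACHE
```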
Hmm interesting...
of course you can do: `dataset._task.connect(...)`
But maybe it should be public?!
How are you using that (I mean in the context of a Dataset)?
GiganticTurtle0 is it just --stop that throws this error ?
btw: if you add `--queue default`
to the command line, I assume it will work. The thing is, without `--queue` it will look for any queue with the "default" tag on it; since there are none, we get the error.
Regardless, that should not happen with `--stop`
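To illustrate the fallback behavior described above (a simplified sketch of the resolution logic, not the actual agent code):

```python
def resolve_queue(queues, requested=None):
    # With an explicit --queue NAME, match by queue name;
    # without one, fall back to any queue carrying the "default" tag.
    if requested is not None:
        return next((q for q in queues if q["name"] == requested), None)
    return next((q for q in queues if "default" in q.get("tags", [])), None)

queues = [{"name": "default", "tags": []}]
print(resolve_queue(queues, "default"))  # found by name
print(resolve_queue(queues))  # None: no queue is *tagged* "default"
```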
I will make sure we fix it
Just so we do not forget, can you please open an issue on the clearml-agent GitHub?
DepressedChimpanzee34
I might have an idea, based on the log: you are getting LazyCompletionHelp
instead of str.
Could it be you installed hydra bash completion?
https://github.com/facebookresearch/hydra/blob/3f74e8fced2ae62f2098b701e7fdabc1eed3cbb6/hydra/_internal/utils.py#L483
I wonder if I just need to join 2 docker-compose files to run everything in one session
Actually that could also work
But for reference, when I said IP i meant the actual host network IP not the 127.0.0.1 (which is the same as localhost)
with a remote machine where the code actually runs (you know, the PyCharm Pro remote interpreter).
Are you using the pycharm plugin ? (to sync the local git changes with clearml)
https://github.com/allegroai/clearml-pycharm-plugin
Task.add_requirements does not handle it (traceback in the thread). Any suggestions?
Hmm that is a good point, maybe we should fix that
I'm assuming someone already created this module? Or is it part of the repository?
(if it is, then assume this is executed from the git root)
understood, can you try `Task.add_requirements("-e path/to/folder/")`
BTW: CloudyHamster42 I think this issue was discussed on GitHub, and the final "verdict" was we should have an option to split/combine graphs on the UI side (i.e. similar to the "smoothing" or wall-time axis etc.)
Is it not possible to serve a model with preprocessing pipeline from scikit-learn using clearml-serving?
of course it is, did you first try the example here: None
If you need to run your own LogisticRegression call, you can use this example:
None
Notice this is where the custom endpoint actually calls the prediction: [None](https...
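For reference, a custom-endpoint class follows roughly this shape (simplified from the clearml-serving examples; the method signatures are abridged and the stand-in linear "model" is made up so the sketch stays self-contained — see the linked example for the real interface):

```python
# Simplified sketch of a clearml-serving custom endpoint class.
# The real example loads a scikit-learn model; here "process" applies
# a hard-coded linear rule purely for illustration.
class Preprocess:
    def __init__(self):
        # in the real example the serving engine loads the model for you
        self.coef, self.intercept = 2.0, 1.0

    def preprocess(self, body: dict, *args, **kwargs):
        # turn the request body into model input
        return body["x"]

    def process(self, data, *args, **kwargs):
        # this is where the custom endpoint actually calls the prediction
        return [self.coef * v + self.intercept for v in data]

    def postprocess(self, data, *args, **kwargs):
        # shape the prediction into the response body
        return {"y": data}


p = Preprocess()
print(p.postprocess(p.process(p.preprocess({"x": [1.0, 2.0]}))))
# -> {'y': [3.0, 5.0]}
```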
Hmm I assume it is not running from the code directory...
(I'm still amazed it worked the first time)
Are you actually using "." ?
Added -v /home/uname/.ssh:/root/.ssh and it resolved the issue. I assume this is some sort of a bug then?
That is supposed to be mounted automatically; having SSH_AUTH_SOCK defined means the agent has to mount the SSH_AUTH_SOCK socket so that the container can access it.
Try running with SSH_AUTH_SOCK undefined while keeping force_git_ssh_protocol
(no need to manually add the .ssh mount; it will do that for you)
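In other words (assuming the standard clearml.conf layout; this is an illustrative fragment, not a full config):

```
# clearml.conf (agent side)
agent {
    # rewrite git https:// URLs to ssh:// so the mounted SSH keys are used
    force_git_ssh_protocol: true
}
```

and make sure SSH_AUTH_SOCK is not set in the shell that launches the agent.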
Yes I think the difference is running conda install with arguments vs conda install with env file...
Nice! So out of curiosity why didn't it work this time and you had to do it manually?
Try to upload something to the file server ?
GreasyPenguin14 you mean the artifacts/models ?
NastySeahorse61 it might be that the frequency at which it tests the metric storage is only once a day (or maybe every half day); let me see if I can ask around
(just making sure you can still login to the platform?)
It should have been: `output_uri="s3://company-clearml/artifacts/bethan/sales_journeys/artifacts/examples/load_artifacts.f0f4d1cd5eb54795b11508dd1e739145/artifacts/filename.csv.gz/filename.csv.gz"`
Try to add '--network host' to the docker args on the task you are launching
I think this issue was fixed in clearml-server 1.3.0 (released after the weekend),
Let me check
JuicyDog96 Yes please!
Let me check the status of the docs repository, and I'll get back to you soon
But these changes haven't necessarily been merged into main. The correct behavior would be to use the forked repo.
So I would expect the agent to pull from your fork; is that correct? Is that what you want to happen?