Actually it is better to leave it as is; it will just automatically mount the .ssh folder into the container. I will make sure the docs point to this option first.
FierceHamster54 are you sure you have write permissions?
Or am I forced to do a get, check if the latest version is finalized,
A Dataset must be finalized before using it. The only situation where it is not is when you are still in the "upload" state.
, then increment the version and create my new version?
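For reference, a minimal sketch of that get → check-finalized → create-new-version flow with the Dataset API (project, name and path values are placeholders):

from clearml import Dataset

# Fetch the latest version of the dataset
latest = Dataset.get(dataset_project="examples", dataset_name="my_dataset")

# Only a finalized dataset should be used as a parent
if latest.is_final():
    # Create a new version on top of the latest one
    new_version = Dataset.create(
        dataset_project="examples",
        dataset_name="my_dataset",
        parent_datasets=[latest.id],
    )
    new_version.add_files(path="/path/to/new_data")
    new_version.upload()
    new_version.finalize()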
I'm assuming there is a data processing pipeline pushing new data?! How do you know you have new data to push?
Bummer... that seems like a bit of an oversight tbh.
There is never a solution for those, unless the helm chart "knows" something about the server before spinning it up the first time, which basically means a predefined access-key; I do not think we want that 🙂
I'm checking the possibility of our firewall between the clearml-agent machine and the local computer running the session
Maybe... the thing is, how come the session creates a Task and pushes it into the queue, but the Task itself is empty?
Hence my request for the clearml-session console log, i.e. an actual copy-paste of what you have in the terminal, not the Task log from the UI
DepressedChimpanzee34
so parsing back is done via a YAML reader:
https://github.com/allegroai/clearml/blob/49fcbd7bbf3236f4175cdff29fa951847b0923cc/clearml/backend_interface/task/args.py#L506
We could add an extra test here, checking for \ in the string; that should solve it and will be backwards compatible (I think)
https://github.com/allegroai/clearml/blob/49fcbd7bbf3236f4175cdff29fa951847b0923cc/clearml/backend_interface/task/task.py#L935
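A rough sketch of the kind of check being suggested (just an illustration, not the actual clearml code):

import yaml

def parse_value(value: str):
    # If the string contains a backslash, keep it as a plain string
    # instead of letting the YAML reader mangle the escape sequences
    if "\\" in value:
        return value
    try:
        return yaml.safe_load(value)
    except yaml.YAMLError:
        # Fall back to the raw string if YAML parsing fails
        return value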
Hi WickedGoat98
A few background notions:
Docker containers do not store their state, so if you install something inside a docker, the moment you leave it is gone, and the next time you start the same docker you start from the same initial setup (this is a great feature of Dockers). It seems the docker you are using is missing wget. You could build a new docker (see the Docker website for more details on how to use a Dockerfile). The way trains-agent works in dockers is it installs everything you ne...
Regarding the YAML, how would you pass data? Like in the pipeline-from-tasks example?
SpotlessFish46 So the expected behavior is to have the single script inside the diff, but you get an empty string?
Thanks OutrageousGrasshopper93
I will test it with "!".
By the way, is the "!" in the project or the Task name?
1633204289496 clearml-services DEBUG docker: invalid reference format.
This is the strange message; it looks like the execution command is not valid...
And is the step actually "queued", or is it "queued" only in the pipeline state (i.e. the visualization did not update)?
the question remains though: why won't docker containers launch on services?
Maybe something with the way it was launched in the docker-compose?
(I'm assuming it will fail on any docker container regardless, right?!)
I will create a minimal example.
Many thanks ReassuredTiger98 !
I assume ClearML has some period of time after which it shows this message. Am I right?
Yes you are 🙂
is this configurable?
It is 🙂
task.set_resource_monitor_iteration_timeout(seconds_from_start=1800)
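In context, a minimal sketch (project/task names are placeholders; 1800 is the value from the snippet above):

from clearml import Task

task = Task.init(project_name="examples", task_name="resource monitor timeout")
# Allow 30 minutes from start before the resource monitor shows that message
task.set_resource_monitor_iteration_timeout(seconds_from_start=1800)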
Could you verify the Task.init call is inside the main function and Not the global scope? We have noticed some issues with global scope calls in some cases
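For example, something along these lines (a generic sketch of the recommended layout, names are placeholders):

from clearml import Task

def main():
    # Task.init lives inside main(), not at module (global) scope
    task = Task.init(project_name="examples", task_name="my experiment")
    # ... training / processing code ...

if __name__ == "__main__":
    main()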
You can put a breakpoint here, and see what you are sending:
https://github.com/allegroai/trains/blob/17f7d51a93deb52a0e7d6cdd59da7038b0e2dd0a/trains/backend_api/session/session.py#L220
Hi UnevenDolphin73
Maybe. When the container spins, are there any identifiers regarding the task etc available?
You mean at the container level or at the clearml level?
I create a folder on the bucket per python train.py execution, so that the environment variables file doesn't get overwritten if two users execute almost simultaneously
Nice 🙂 I have an idea, how about per user ID? Then they can access their "secrets" based on the owner of the Task:
task.data.user
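A minimal sketch of that idea (the bucket path and file name are placeholders):

from clearml import Task

task = Task.current_task()
user_id = task.data.user  # ID of the user who owns the Task

# Build a per-user location on the bucket for the environment variables file
env_file_url = f"s3://my-bucket/user-secrets/{user_id}/.env"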
Hmm can you run:
docker run -it allegroai/clearml-agent-services:latest
CLEARML_AGENT_GIT_USER is your git user (on whatever git host/server you are using: GitHub/GitLab/BitBucket etc.)
Does adding external files not upload them to the dataset output_uri?
CooperativeOtter46 If you are adding the links with add_external_files, these files are not re-uploaded
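For example (project, name and link are placeholders):

from clearml import Dataset

ds = Dataset.create(dataset_project="examples", dataset_name="external_links")
# Register the remote files by link only; the data itself is not re-uploaded
ds.add_external_files(source_url="s3://my-bucket/raw-data/")
ds.upload()
ds.finalize()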
OutrageousGrasshopper93 tensorflow-gpu is not needed; it will convert tensorflow to tensorflow-gpu based on the detected CUDA version (you can see it in the summary configuration when the experiment spins inside the docker)
How can I set the base Python version for the newly created conda env?
You mean inside the docker?
StorageManager is what you need if you want to download/upload files to any server (it is a utility class that takes care of the download/upload and adds caching); StorageHelper is used internally
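A minimal sketch (the URLs are placeholders):

from clearml import StorageManager

# Download with local caching (re-downloaded only if the remote file changed)
local_path = StorageManager.get_local_copy(remote_url="s3://my-bucket/models/model.pt")

# Upload a local file to any supported storage (S3 / GS / Azure / fileserver)
uploaded_url = StorageManager.upload_file(
    local_file="results.csv",
    remote_url="s3://my-bucket/results/results.csv",
)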