In standard docker, TimelyPenguin76, the quoting you mentioned is wrong, since the whole argument is being passed - hence the tricky double quotation I posted above
TimelyPenguin76 I think our problem is that the agent is not using this environment, and I'm not sure which one it does use... Is there a way to hard-code the agent environment?
even though I apply append
but using that code - how would I edit fields?
btw my site packages is false - should it be true? You pasted that, but I'm not sure what it should be: in the paste it's false, yet you are asking about true
I only found Project ID, and I'm not sure what it refers to - I have the project name
The only way to change it is to convert apiserver_conf to a dictionary object (as_plain_ordered_dict()) and edit it
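Something along these lines (just a sketch, assuming the config is a pyhocon ConfigTree - the file path and key name below are placeholders):
```python
# Sketch only: convert a pyhocon ConfigTree to a plain dict, edit a field,
# and serialize it back to HOCON. Path and key names are illustrative.
from pyhocon import ConfigFactory, HOCONConverter

apiserver_conf = ConfigFactory.parse_file("apiserver.conf")  # placeholder path
conf_dict = apiserver_conf.as_plain_ordered_dict()           # now a regular OrderedDict

conf_dict["some_field"] = "new value"                        # edit whatever field you need

new_conf = ConfigFactory.from_dict(conf_dict)                # rebuild a ConfigTree
print(HOCONConverter.to_hocon(new_conf))                     # dump it back as HOCON text
```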
AgitatedDove14 permanent. I want to start with a CLI interface that allows me to add users to the trains server
Good, so if I'm templating something using clearml-task (without a queue, so the task is in draft mode), it will use this task? Even though it never executed?
yeah, but I see it gets enqueued to the default queue, and I don't know what that is connected to
If I execute this task using python .....py, will it execute on the machine I executed it on?
Yep, the trains server is basically a docker-compose based service. All you have to do is change the ports in the docker-compose.yml file. If you followed the instructions in the docs you should find that file at /opt/trains/docker-compose.yml. In it you will see multiple services (apiserver, elasticsearch, redis, etc.), and in each there might be a section called ports which states the mapping of the ports. The number on the left is the port exposed on the host machine (the one you would change); the number on the right is the internal port inside the container.
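If it helps, a quick sketch for listing those mappings (it assumes the default path above and that PyYAML is installed):
```python
# Print the port mappings for every service in the server's docker-compose.yml.
# An entry such as "8080:8008" maps host port 8080 to container port 8008.
import yaml

with open("/opt/trains/docker-compose.yml") as f:
    compose = yaml.safe_load(f)

for name, service in compose.get("services", {}).items():
    for mapping in service.get("ports", []):
        print(f"{name}: {mapping}")
```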
Can you lend a few words about how the non-pip-freeze mechanism of detecting packages works?
and in the UI configuration I didn't understand where permission management comes into play
AgitatedDove14 sorry for the late reply,
It's right after executing all the steps. So we have the following block which determines whether we run locally or remotely
```python
if not arguments.enqueue:
    pipe.start_locally(run_pipeline_steps_locally=True)
else:
    pipe.start(queue=arguments.enqueue)
```
And right after we have a method that calls Task.current_task(), which returns None
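For context, here is a self-contained sketch of what we do (the pipeline/project names and the argparse flag are placeholders for our real ones):
```python
# Sketch of the pattern described above; names are placeholders.
import argparse
from clearml import Task
from clearml.automation import PipelineController

parser = argparse.ArgumentParser()
parser.add_argument("--enqueue", default=None, help="queue name; omit to run locally")
arguments = parser.parse_args()

pipe = PipelineController(name="example-pipeline", project="examples", version="1.0")
# ... steps are added here with pipe.add_step(...) / pipe.add_function_step(...) ...

if not arguments.enqueue:
    pipe.start_locally(run_pipeline_steps_locally=True)
else:
    pipe.start(queue=arguments.enqueue)

# this is where Task.current_task() comes back as None for us
print(Task.current_task())
```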
I also ran it without $(pwd) in the Create ClearML task templates section; I added it because of CostlyOstrich36's comments, but it didn't help
Cool, now I understand the auto detection better
the level of configurability in this thing is one of the best I've seen
anyway, my ultimate goal is to create templates for other tasks... Is that possible in any other way through the CLI?
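I imagine something like this from the SDK could cover it, if that's a reasonable alternative to the CLI (the repo URL, script, and queue names are placeholders):
```python
# Sketch only: create a draft "template" task programmatically, then clone and
# enqueue the clone later. The repo URL, names, and queue are placeholders.
from clearml import Task

template = Task.create(
    project_name="examples",
    task_name="my-template",
    repo="https://github.com/example/repo.git",
    script="train.py",
)

# the template stays in draft mode and is never executed; runs are clones of it
clone = Task.clone(source_task=template, name="my-template run")
Task.enqueue(clone, queue_name="default")
```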
and also in the extra_vm_bash_script variables, I have them under export TRAINS_API_ACCESS_KEY and export TRAINS_API_SECRET_KEY
Worth mentioning: nothing changed before we executed this - it worked before, and now after the update it breaks
Committing that notebook with changes solved it, but I wonder why it failed
What do you mean by submodules?
She did not push; I told her she does not have to push before executing, since trains figures out the diffs.
When she pushes - it works