I was thinking about sending the parameters programmatically. We have different pipelines that can generate tasks, and I would like to be able to tell Trains which user started the pipeline.
And for some reason this clone is marked as completed. Not sure why, since it failed
Hooray! That works AND the feature works!
Quick follow up question, is there any way to abort a pipeline and all of the tasks it ran?
AgitatedDove14 is there any update on the open issue you talked about before? I think it's this one: https://github.com/allegroai/clearml/issues/214
legit, I was thinking only about task tracking, less about user based credentials. good point
cool! just to verify - I'll still need to have the credentials created in the server, right?
yeah, maybe as an option in the  Task.init
what about using ENV variables? is it possible to override the config file's credentials?
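For reference, ClearML does read credentials from environment variables, which take precedence over clearml.conf (variable names per the ClearML configuration docs; the key values below are placeholders, not real credentials):

```shell
# Override the config file's credentials via environment variables.
# Key values here are placeholders.
export CLEARML_API_HOST="https://api.clear.ml"
export CLEARML_API_ACCESS_KEY="PLACEHOLDER_ACCESS_KEY"
export CLEARML_API_SECRET_KEY="PLACEHOLDER_SECRET_KEY"
# python my_pipeline.py  # Task.init() would now use these credentials
```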
I looked there, but couldn't find it. I'm currently experimenting with your free hosted server
nope, only port 22 is open for SSH. Is there any way to set that as the port for clearml-session?
Is there an option to do this from a pipeline, from within the  add_step  method? Can you link a reference to cloning and editing a task programmatically? nope, it works well for the pipeline when I don't choose to continue_pipeline
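For anyone following along, this is roughly the clone-and-edit flow with the ClearML SDK (Task.clone / set_parameters / Task.enqueue). The task ID, parameter name, and queue name below are placeholders, and it isn't executed here since it needs a live server:

```python
# Hedged sketch: clone a base task, edit it while in draft mode, and enqueue it.
# All IDs, parameter names, and queue names are illustrative placeholders.
def clone_edit_and_enqueue(base_task_id: str, queue_name: str):
    from clearml import Task  # imported lazily so the sketch stays self-contained

    base = Task.get_task(task_id=base_task_id)
    cloned = Task.clone(source_task=base, name=base.name + " (clone)")
    # The clone starts in draft mode, so its parameters can still be edited
    cloned.set_parameters({"Args/param": "new value"})
    Task.enqueue(cloned, queue_name=queue_name)
    return cloned
```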
yup, it's there in draft mode so I can get the latest git commit when it's used as a base task
ok, hopefully last question on this subject  🙂
I want to use Jenkins for some pipelines. What I would like to do is have one set of credentials saved on Jenkins. Then whenever a user triggers a pipeline - this is the user that will be marked as the task's user.
If I understand the options you suggested, I'll currently need either to (1) have some mapping between users and their credentials and have all the credentials saved on Jenkins; or, (2) have each user manually add 2 environment varia...
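Option (1) could be sketched like this: Jenkins keeps a mapping from the triggering user to that user's ClearML credentials and injects them into the pipeline's environment. Everything here (the mapping, the key values, the function name) is hypothetical:

```python
# Hypothetical sketch of option (1): map the Jenkins-triggering user to their
# ClearML credentials and export them so the task is attributed to that user.
import os

# Illustrative mapping; in practice these would come from a secrets store
USER_CREDENTIALS = {
    "alice": {"access": "ALICE_ACCESS_KEY", "secret": "ALICE_SECRET_KEY"},
    "bob": {"access": "BOB_ACCESS_KEY", "secret": "BOB_SECRET_KEY"},
}

def launch_env_for(user: str) -> dict:
    """Build the environment for the pipeline process for a given user."""
    creds = USER_CREDENTIALS[user]
    env = dict(os.environ)
    env["CLEARML_API_ACCESS_KEY"] = creds["access"]
    env["CLEARML_API_SECRET_KEY"] = creds["secret"]
    return env

env = launch_env_for("alice")
print(env["CLEARML_API_ACCESS_KEY"])  # ALICE_ACCESS_KEY
```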
```
Exception in thread Thread-5:
Traceback (most recent call last):
  File "/opt/pyenv/versions/3.6.8/lib/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/opt/pyenv/versions/3.6.8/lib/python3.6/threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "/root/.clearml/venvs-builds/3.6/lib/python3.6/site-packages/clearml/automation/controller.py", line 615, in _daemon
    if self._launch_node(self._nodes[name]):
  File "/root/.clearml/venvs-builds/3.6/lib/pyt...
```
right, of course  🙂  so just to make sure I'm running it correctly. I ran  python aws_autoscaler.py --run  on my laptop and I see the Task on ClearML. Then took a completed task, cloned it and enqueued to the queue defined on the autoscaler. That should spin up an instance, right? (it currently doesn't, and I'm not sure where to debug)
so no magic "username" key? 😛
I just want to use auth0 (which we already use in the company) in order to manage the users...
it's in the docker image, doesn't the git clone command run in the container?
Thanks! A followup question - can I make the steps in the pipeline use the latest commit in the branch?
Sounds promising, any ETA for the next version?
Sure, redacted most of the params as they are sensitive:
```
run_experiment {
  base_task_id = "478cfdae5ed249c18818f1c50864b83c"
  queue = null
  parents = []
  timeout = null
  parameters {
    # Redacted the parameters
  }
  executed = "d1d361d1059c4f0981200f59d7683773"
}
segment_slides {
  base_task_id = "ae13cc979855482683474e9d435895bb"
  queue = null
  parents = ["run_experiment"]
  timeout = null
  parameters {
    Args/param = """
    [
    #...
```
something needs to run the autoscaler, I thought it would be the machine that runs the services queue, no?
yeah, totally. Are there any services OOB like this?
when I ran the script it autogenerated the YAML, so I should manually copy it to the remote services agents?
python -m script.as.a.module first_arg second_arg --named_arg value   <- something like that
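The `-m` form runs a module from the import path as a script (useful when the script uses package-relative imports). A quick stand-in demo with the stdlib `json.tool` module, since the actual module path above is just a placeholder:

```shell
# Run a module as a script with -m; json.tool pretty-prints JSON from stdin
out=$(echo '{"a": 1}' | python3 -m json.tool)
echo "$out"
```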