Still have no clue. Something seems to go wrong when reading the file, maybe due to a certain encoding? Due to Windows? Or maybe Python?
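For what it's worth, one common culprit on Windows is the default locale encoding: `open()` without an explicit `encoding` falls back to the locale codec (often cp1252), so a UTF-8 file can raise `UnicodeDecodeError` or come back mangled. A minimal sketch (the filename is made up):

```python
# Pinning encoding="utf-8" makes file reads behave the same on every
# platform, instead of depending on the Windows locale codec.
from pathlib import Path

path = Path("example_utf8.txt")
path.write_text("caf\u00e9", encoding="utf-8")  # non-ASCII content

# Explicit encoding: safe on Windows, Linux, and macOS alike.
text = path.read_text(encoding="utf-8")
print(text)  # café
path.unlink()
```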
Not yet, working on running the autoscaler for now, and picking this up again later 🙂
Happens to all! Importing of local packages in these decorated pipelines hasn't really worked yet (except when running via PyCharm, which seems to make sure that the location of the original code is always in the path)
Thanks to both! Unfortunately the same error occurs with the following code snippet. (Jean, instead of the component parameter you mean packages, right? I could not find the former 🙂)

```python
@PipelineDecorator.component(..., packages=['someutils'])
def step_one():
    from someutils import someutilfunc
    someutilfunc(32)
```
Yes, also present in the git repo (hosted on gitlab and seemed to correctly retrieve it, couldn’t find any errors about this in the logs)
In any case, I’m happy it’s fully running now 🙂
Personally I’ve found this (sort-of hacky) approach to work, by passing your git credentials as environment variables to the agent’s docker and cloning the repo in the code. You’ll have to make sure you have the right packages installed though.
```python
import os
from subprocess import call

if 'GIT_USER' in os.environ:
    git_name, git_pass = os.environ['GIT_USER'], os.environ['GIT_PASS']
    call(f'git clone https://{git_name}:{git_pass}@gitlab.com/myuser/myrepo', shell=True)
    global myrepo
    from myrepo import func
elif local_re...
```
If I use the PipelineDecorator.debug_pipeline()
everything works as expected
So you think maybe this is functionality that only works when running with an agent? Interesting
Oh yup, that seems very possible since I run it with the run_locally()
and then clone this task in the UI
I’m curious what the opinions are on this! I asked myself the same question. In my limited experience, going through a workflow with SageMaker was a painful process, and one that required a ton of AWS-specific code and configuration. Compared to this, ClearML was easy and quick to set up, and provides a dashboard where everything from experiments to models to output is organised, queryable and comparable. Way less hassle for way more benefits.
I’m also not sure but it seems like the slack trial renews from time to time in this workspace, which eventually gives access to those older threads
Switched off the Windows Defender firewall, no load balancer present, still not working 😕
Would this then be possible by cloning the task (which is a pipeline) and accessing the right subtask (the component which should be changed)?
Hi Mark! Do you set any of the decorator parameters using variables? That was my issue, and instead of using Python variables, I hardcoded one potential value, and then used the get and set methods to change them when cloning programmatically, which should be the same as changing them in the configuration tab when cloning with the UI. Hope this helps 🙂
It seems to be working now, by running the Pipeline locally with PipelineDecorator.run_locally()
and running the script using the following command: `PYTHONPATH="fill_in_your_current_dir" python pipeline.py`
Cloning this in the UI and enqueueing now also allows remote execution.
Calling the script without the PipelineDecorator.run_locally()
i.e. running the pipeline remotely still gives the ModuleNotFoundError: No module named
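The PYTHONPATH workaround fits how Python resolves bare imports: it walks `sys.path`, and PYTHONPATH entries are prepended to that list at interpreter startup. A small in-process sketch of the same mechanism (the `someutils` package from the earlier snippet is recreated in a temp dir purely for illustration):

```python
# Simulating what PYTHONPATH=<project_dir> does: make a "local folder
# with scripts" importable by putting its parent directory on sys.path.
import os
import sys
import tempfile

with tempfile.TemporaryDirectory() as project_dir:
    # A local package, like the module the pipeline components import.
    pkg = os.path.join(project_dir, "someutils")
    os.makedirs(pkg)
    with open(os.path.join(pkg, "__init__.py"), "w") as f:
        f.write("def someutilfunc(x):\n    return x * 2\n")

    sys.path.insert(0, project_dir)  # equivalent of PYTHONPATH at startup
    from someutils import someutilfunc

    print(someutilfunc(21))  # 42
```

Without that `sys.path` entry, the same `from someutils import ...` line raises the `ModuleNotFoundError` seen above.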
Just checked and it's not there, even for the successfully remotely-run pipeline. Do note that the needed module is just a local folder with scripts. The differences between the successful pipeline (run locally and cloned in the UI) and the errored pipeline (run remotely) are also very hard for me to spot; they have the exact same Installed Packages and execution details
This is run by using the UI's 'Run' button without the 'Advanced configuration'
Reporting back: this example worked, but unfortunately did not run successfully when cloned in the UI, with an error of `base_task_id is empty`, akin to this previous slack thread: https://clearml.slack.com/archives/CTK20V944/p1662954750025219 . By editing the configuration object as mentioned above (programmatically also possible with the get and set configuration object methods), the pipeline also worked when cloned 🙂
Thanks a lot! I’m still in the process of setting up, so running on a remote worker has not been successful yet, but I’ll report back on this issue if that fixes it!
Good to know! Thanks 🙌
I found this thread https://clearml.slack.com/archives/CTK20V944/p1662456550871399?thread_ts=1662453504.528979&cid=CTK20V944 but unsure how this is applicable when running the pipeline locally
Hi Jake! The clearml.conf file content is exactly the api section that is given by our clearml server, copied using the copy button, something like
```
api {
    web_server: http:// .. :8080
    api_server: http:// .. :8008
    files_server: http:// .. :8081
    credentials {
        "access_key" = "KEY"
        "secret_key" = "SECRET"
    }
}
```
clearml version 1.9.0
The strange thing is that the configuration works perfectly on my machine. My coworker’s machine does have a different p...
So we got it! Still don’t understand it though.
I generated the credentials on the web ui and sent them to my coworker, they did not work at all.
He generated his own credentials and they work!
Awesome! Really simple and clever, love it. Thanks Eugen!
I’ve seen that you can change the branch of a cloned task like so https://github.com/allegroai/clearml-actions-train-model/blob/7f47f16b438a4b05b91537f88e8813182f39f1fe/train_model.py#L14
I think I got it! I found that the branch for the component is specified in the UI in the component's configuration object under the pipeline's configuration tab. In theory I should be able to clone the pipeline task, use the get_configuration_object method, change the branch, set it using set_configuration_object, and finally enqueue! Going to test this out
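That plan can be sketched roughly like this. The component name, queue name, and the `branch: <name>` layout inside the configuration text are my assumptions; `Task.clone`, `get_configuration_object`, `set_configuration_object`, and `Task.enqueue` are the ClearML SDK calls involved, and need a configured server to actually run:

```python
import re

def set_branch(config_text: str, new_branch: str) -> str:
    """Rewrite the git branch inside a component's configuration text.
    Assumes the config contains a `branch: <name>` (or `branch = <name>`) key."""
    return re.sub(r"(branch\s*[:=]\s*)\S+", r"\g<1>" + new_branch, config_text)

def clone_with_branch(pipeline_task_id: str, component: str, branch: str):
    # Requires a running ClearML server; task ID and names are placeholders.
    from clearml import Task
    cloned = Task.clone(source_task=pipeline_task_id, name=f"pipeline on {branch}")
    cfg = cloned.get_configuration_object(component)    # read the config text
    cloned.set_configuration_object(component, config_text=set_branch(cfg, branch))
    Task.enqueue(cloned, queue_name="default")          # run the edited clone
    return cloned

print(set_branch("branch: main", "dev"))  # branch: dev
```

This is the programmatic twin of editing the configuration object in the UI's configuration tab before enqueueing the clone.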
Have you opened the ports 8080, 8008, 8081? I think I had the same thing when setting up, and still had to add some inbound rules to open these ports via the cloud platform
The web ui is hosted at :8080, so make sure to add that to the end of the url as well 🙂