Still not supported 😞
Hi @<1691620877822595072:profile|FlutteringMouse14>
Yes, feast has been integrated by at least a couple of users, if I remember correctly.
Basically there are two kinds of feature transformation: offline and online. For offline, your pipeline is exactly what would be recommended. The main difference is online transformation, where I think feast is a great start.
Hi GrittyKangaroo27
How could I turn off model logging when running this training step?
This is a good point! I think we cannot pass these arguments.
Would this make sense to you?
```
PipelineDecorator.component(..., auto_connect_frameworks=False)
```
wdyt?
Here you go:
```
@PipelineDecorator.pipeline(name='training', project='kgraph', version='1.2')
def pipeline(...):
    return

if __name__ == '__main__':
    Task.force_requirements_env_freeze(requirements_file="./requirements.txt")
    pipeline(...)
```
If you need anything for the pipeline component you can do:
```
@PipelineDecorator.component(packages="./requirements.txt")
def step(data):
    # some stuff
    ...
```
So assuming they are all on the same LB IP, you should do:
LB 8080 (https) -> instance 8080
LB 8008 (https) -> instance 8008
LB 8081 (https) -> instance 8081
It might also work with:
LB 443 (https) -> instance 8080
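If you end up terminating TLS yourself instead of on the ELB, the same port mapping can be sketched as an nginx config. This is only a sketch: the certificate paths and the `clearml-instance` upstream hostname are placeholders, adjust them to your deployment.

```
# Sketch: nginx terminating https in front of the ClearML server ports.
# Cert paths and upstream hostname are placeholders.
server {
    listen 8080 ssl;
    ssl_certificate     /etc/nginx/certs/clearml.crt;
    ssl_certificate_key /etc/nginx/certs/clearml.key;
    location / { proxy_pass http://clearml-instance:8080; }
}
server {
    listen 8008 ssl;
    ssl_certificate     /etc/nginx/certs/clearml.crt;
    ssl_certificate_key /etc/nginx/certs/clearml.key;
    location / { proxy_pass http://clearml-instance:8008; }
}
server {
    listen 8081 ssl;
    ssl_certificate     /etc/nginx/certs/clearml.crt;
    ssl_certificate_key /etc/nginx/certs/clearml.key;
    location / { proxy_pass http://clearml-instance:8081; }
}
```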
@<1523710674990010368:profile|GreasyPenguin14> what do you mean "but I do I get the... " ?
Configuring git user/pass will allow you to launch Tasks from private repositories on the services queue (the agent is part of the docker-compose).
That said, this is not a must, worst case you'll get an error when git fails to clone your repo :)
Make sense 🙂
Just make sure you configure the git user/pass in the docker-compose so the agent has your credentials for the repo clone.
None
Change to:
`CLEARML_AGENT_GIT_USER: ${CLEARML_AGENT_GIT_USER:-my_git_user_here}`
and the same for the password.
You can also just set the environment variables before launching docker-compose, whatever is more convenient for you
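For example, the environment-variable route would look something like this (the credential values are placeholders, use your own):

```shell
# Placeholder credentials -- replace with your own git user / token.
export CLEARML_AGENT_GIT_USER=my_git_user_here
export CLEARML_AGENT_GIT_PASS=my_git_pass_here

# docker-compose substitutes these into the services agent's
# environment when you bring the stack up, e.g.:
#   docker-compose -f docker-compose.yml up -d
```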
@<1523710674990010368:profile|GreasyPenguin14> If I understand correctly you can use tokens as user/pass (it's basically the same interface from the git client perspective), meaning from ClearML:
```
git_user = gitlab-ci-token
git_pass = <the_actual_token>
```
WDYT?
@<1523710674990010368:profile|GreasyPenguin14> make sure it uses https not ssh: edit `~/clearml.conf` and set
```
force_git_ssh_protocol: false
```
and make sure you have both `git_user` & `git_pass` set in your clearml.conf
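Putting it together, the relevant part of `~/clearml.conf` would look roughly like this (values are placeholders):

```
agent {
    # clone over https instead of ssh
    force_git_ssh_protocol: false
    # credentials the agent uses to clone the repo
    git_user: "my_git_user"
    git_pass: "my_git_pass_or_token"
}
```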
(Go to the profile page, and click "Disable HiDPI browser scale override" see if that helps)
Nice 🙂
@<1523710674990010368:profile|GreasyPenguin14> for future reference, the `agent` part in the clearml.conf is only created when you call `clearml-agent init` (no need for it for the python SDK). Full default configuration is here:
None
@<1687643893996195840:profile|RoundCat60> I'm assuming we are still talking about the S3 credentials, sadly no 😞
Are you familiar with boto and IAM roles ?
None
So this is the only place we need to change to support it, do you feel like messing around with it and adding IAM roles ?
As we can’t create keys in our AWS due to infosec requirements
Hmmm
I suggest a bump in the GitHub issue
@<1523707653782507520:profile|MelancholyElk85>
What's the `clearml` version you are using?
Just making sure... base_task_id has to point to a Task that is in "draft" mode, for the pipeline to use it
@<1523707653782507520:profile|MelancholyElk85> I just run a single step pipeline and it seemed to use the "base_task_id" without cloning it...
Any insight on how to reproduce ?
I ended up using `task_overrides` for every change, and this way I only need 2 tasks (a base task and a step task, thus I use `clone_base_task=True` and it works as expected - yay!)
Very cool!
BTW: you can also provide a function to create the entire Task, see the `base_task_factory` argument in `add_step`
I think it's still an issue, not critical though, because we have another way to do it and it works
I could not reproduce it, I think the issue w...
HI @<1687643893996195840:profile|RoundCat60>
Are you running on AWS ?
@<1523701323046850560:profile|OutrageousSheep60> the assumption is that you have "pre_installations.sh" locally (i.e. when you are calling `clearml-task`). What will happen is that this bash script will be put on top of the Task and executed before everything else inside the container.
Does that make sense?
We're not using a load balancer at the moment.
The easiest way is to add ELB and have amazon add the httpS on top (basically a few clicks on their console)
I'm trying to get a task to run using a specific docker image and to source a bash script before execution of the python script.
Are you running an agent in docker mode? If so, you should be able to see the output of your bash script first thing in the log
(and it will appear in the docker CMD)
- but the `pytorch/main.py` file doesn't run.
What do you have on the Task itself? is this the correct script ?
Any chance you can send a full log ? (you can DM it if it helps)
@<1687643893996195840:profile|RoundCat60> can you access the web UI over https ?