Can you add a log?
Hi @<1523701079223570432:profile|ReassuredOwl55> , how are you saving them? Are they saved as artifacts?
@<1556812486840160256:profile|SuccessfulRaven86> , did you install poetry inside the EC2 instance or inside the docker? Basically, where did you put the poetry installation bash script - in the autoscaler's 'init script' section, or in the task's 'setup shell script' in the execution tab (this is the script that runs inside the docker)?
It sounds like you're installing poetry on the EC2 instance itself, but the experiment runs inside a docker container
It can be changed with this env var for the apiserver:
CLEARML__hosts__elastic__events__args__timeout=<new number>
That said, a better way to handle it would be either to increase the Elasticsearch capacity (memory and CPU) or to decrease the load (send events in smaller batches)
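For example, if you deployed the server with docker-compose, the variable could go under the apiserver service's environment section - a sketch, assuming the standard clearml-server compose layout (the value 120 is just an illustration; I believe it's passed to the Elasticsearch client as seconds, but check your deployment):
```yaml
services:
  apiserver:
    environment:
      # assumption: timeout forwarded to the Elasticsearch client, in seconds
      CLEARML__hosts__elastic__events__args__timeout: 120
```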
Hi @<1529271085315395584:profile|AmusedCat74> , what are you trying to do in code? What version of clearml are you using?
Hi @<1575656665519230976:profile|SkinnyBat30> , what version of ClearML are you using? Are you uploading datasets from the same machine also to GCS?
no, it's an environment variable
No, no - I mean you need to be logged into your GCS account in the same browser you use to visit the webserver
Hi UpsetTurkey67 ,
Is this what you're looking for?
https://clear.ml/docs/latest/docs/references/sdk/trigger#add_model_trigger
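For reference, a minimal sketch of wiring it up (the trigger name and project are placeholders; see the docs link above for the full argument list):
```python
from clearml.automation import TriggerScheduler

def on_model_event(model_id):
    # called with the id of the model that fired the trigger
    print(f"triggered by model {model_id}")

scheduler = TriggerScheduler(pooling_frequency_minutes=3)
scheduler.add_model_trigger(
    schedule_function=on_model_event,  # run this function when the trigger fires
    name="my-model-trigger",           # hypothetical trigger name
    trigger_project="my project",      # hypothetical project to watch
    trigger_on_publish=True,           # fire when a model is published
)
scheduler.start()  # blocks and polls the server
```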
From the looks of it, it's failing to recreate the environment - something about numpy. Are you trying to run on two different operating systems or different Python versions? My best suggestion would be to try running inside docker
@<1533619716533260288:profile|SmallPigeon24> , did you try debugging your pipeline locally (steps included)?
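If it helps, for decorator-based pipelines you can force everything to run in the local process - a sketch (for a PipelineController, start_locally(run_pipeline_steps_locally=True) does the same):
```python
from clearml.automation.controller import PipelineDecorator

# run the controller and all steps in this process so you can attach a debugger;
# PipelineDecorator.debug_pipeline() goes further and runs steps as plain functions
PipelineDecorator.run_locally()
# then call your pipeline function as usual, e.g. my_pipeline(...)
```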
Hi @<1546303254386708480:profile|DisgustedBear75> , there are a few reasons remote execution can fail. Can you please describe what you were trying to do and please add logs?
Hi @<1543766544847212544:profile|SorePelican79> , I don't think you can track the data inside the dataset. Maybe @<1523701087100473344:profile|SuccessfulKoala55> might have an idea
I think this is what you're looking for
I think you also might find this video useful:
Hi @<1535069219354316800:profile|PerplexedRaccoon19> , you can do it if you run in docker mode
Hi @<1545216070686609408:profile|EnthusiasticCow4> , generally speaking, pipelines are a special type of task. When you write steps using decorators, you don't have to add the Task.init() call yourself. However, you can also build pipelines from existing tasks in the system, and those were created with Task.init()
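To illustrate the decorator flavor (a minimal sketch; the project and names are placeholders):
```python
from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.component(return_values=["doubled"])
def step_one(x):
    # each component becomes its own task - no Task.init() needed
    return x * 2

@PipelineDecorator.pipeline(name="demo pipeline", project="examples", version="0.1")
def pipeline_logic(x):
    doubled = step_one(x)
    print(doubled)

if __name__ == "__main__":
    PipelineDecorator.run_locally()  # remove to run via an agent instead
    pipeline_logic(x=3)
```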
If you go to settings, the versions should appear at the bottom right
I see. Leave the files_server section as it was by default. Then in the CLI specify the --output-uri flag
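For example, assuming you're uploading with the clearml-data CLI (the project, name, and bucket are placeholders):
```
clearml-data create --project "my project" --name "my dataset" --output-uri "gs://my-bucket"
```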
Hi @<1558986867771183104:profile|ShakyKangaroo32> , can you please open a GitHub issue to follow up on this? I think a fix should be issued shortly afterwards
Hi SwankySeaurchin41 ,
Did you run any pipelines? You can see some examples here:
https://github.com/allegroai/clearml/tree/master/examples/pipeline
Are you using a self deployed server?
Hi @<1582179652133195776:profile|LudicrousPanda17> , I suggest applying similar filtering in the UI with the dev tools open (F12) and seeing what the web UI sends 🙂
Hi @<1570220858075516928:profile|SlipperySheep79> , you can use pre & post execute callback functions that run on the controller. Is that what you're looking for?
TimelyPenguin76 , MammothGoat53 , I think you shouldn't call Task.init() more than once inside a script
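If you really do need two tasks in the same script, close the current one before initializing the next (a sketch; project and task names are placeholders):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="first")
# ... do some work ...
task.close()  # close before starting another task in the same process

task2 = Task.init(project_name="examples", task_name="second")
```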
Check the pre_execute_callback and post_execute_callback arguments of the component
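A rough sketch of wiring them on a function step (signatures as I recall them from the SDK docs - returning False from the pre callback skips the step; the step function here is hypothetical):
```python
from clearml import PipelineController

def my_function():
    # hypothetical step body
    return 42

def before_step(pipeline, node, param_override):
    print(f"about to run {node.name}")
    return True  # return False to skip the step

def after_step(pipeline, node):
    print(f"finished {node.name}")

pipe = PipelineController(name="demo", project="examples", version="0.1")
pipe.add_function_step(
    name="step_one",
    function=my_function,
    pre_execute_callback=before_step,
    post_execute_callback=after_step,
)
```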
Hi PreciousParrot26 ,
Why are you running from a GitLab runner - are you interested in specific action triggers?