can you share the local run log?
Hi WickedBee96,
Are you running a standalone script, or code that is part of a git repository?
Hi FierceHamster54,
I think the
Task.force_store_standalone_script()
option is causing the issue: you are storing the entire script as a standalone, without any git information, so once you try to import other parts of the git repository it fails. BTW, any specific reason for using it in your pipeline?
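For reference, a minimal sketch of how that option is typically enabled (the project and task names here are just placeholders):
from clearml import Task

# Forces the whole script to be stored as a standalone snippet, without git repository/diff info
Task.force_store_standalone_script()
task = Task.init(project_name="examples", task_name="standalone script")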
You can always clone a "template" task and change everything (it will be in draft
mode). What is your use case? Maybe we already have a solution for it.
Hi ShallowKitten67,
Can you send the logs? Can you share the machine monitoring (from the scalars section)?
Great, so if you have an image with the ClearML agent installed, it should solve it 🙂
Which ClearML agent version are you running?
Hi DeliciousBluewhale87,
Do you have your credentials for the S3 bucket in your ~/clearml.conf
file?
https://github.com/allegroai/clearml/blob/master/docs/clearml.conf#L76
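As a rough sketch, the S3 credentials go under the sdk.aws.s3 section of that file, something like this (the key, secret and region values are placeholders):
sdk {
    aws {
        s3 {
            key: "my-access-key"
            secret: "my-secret-key"
            region: "us-east-1"
        }
    }
}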
Hi UnevenDolphin73, so everything works now? With the multiple credentials?
Hi JitteryCoyote63, so only the print output is duplicated?
Hi WackyRabbit7,
When calling Task.init()
, you can provide the output_uri
parameter. This allows you to specify the location in which model snapshots will be stored.
Allegro-Trains supports shared folders, S3 buckets, Google Cloud Storage and Azure Storage.
For example (with S3):
Task.init(project_name="My project", task_name="S3 storage", output_uri="s3://bucket/folder")
You will need to add your storage credentials in the ~/trains.conf
file (you will need to add your AWS credentials there).
Hi MysteriousBee56,
What trains-agent version are you running? Do you run it in docker mode (e.g. trains-agent daemon --queue <your queue name> --docker
)?
FierceFly22, like Elior wrote, you can use Task.execute_remotely
, you just need to supply the queue name 🙂
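A minimal sketch of what that could look like (the queue name here is just a placeholder):
from clearml import Task

task = Task.init(project_name="examples", task_name="remote run")
# Stops local execution and enqueues this task to run on an agent listening on the given queue
task.execute_remotely(queue_name="default")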
Hi CheekyToad28,
None of the options at https://clear.ml/docs/latest/docs/deploying_clearml/clearml_server#deployment work for you?
Hi JitteryCoyote63,
Sure you can, there are many examples at https://allegro.ai/docs/use_cases/trains_agent_use_case_examples/ , just pick the one you need 🙂
Hi BattyLizard6,
Do you have a toy example so I can check this issue on my side?
After the task is cloned, the task is in a draft state. In this state every field is editable, so you can just double click the BASE DOCKER IMAGE section and change it to your image. If you delete the value from this section, the ClearML agent will use the docker image you configured in the clearml.conf file (dockerrepo/mydocker:custom).
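For reference, that default docker image is set in the agent section of clearml.conf, roughly like this (using the image name from your example):
agent {
    default_docker {
        image: "dockerrepo/mydocker:custom"
    }
}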
Hi UnevenDolphin73,
If the EC2 instance is up and running but no clearml-agent is running, something in the user data script failed.
Can you share the logs from the instance (you can send in DM if you like)?
Hi ImmensePenguin78,
You can get all the console outputs using task.get_reported_console_output()
. Can this do the trick?
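A quick sketch of how that might look (the task ID here is a placeholder):
from clearml import Task

task = Task.get_task(task_id="<your task id>")
# Returns the console log entries reported for this task
for entry in task.get_reported_console_output(number_of_reports=10):
    print(entry)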
Hi GiganticTurtle0,
You have all the tasks that are part of the pipeline in an execution table (with links) under the plots
section, does that help?