Not sure about the Other, but maybe adding some metadata to the artifact can do the trick?
You can get all the artifacts with task.artifacts , then iterate over them and filter by the metadata, wdyt?
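Something along these lines (a rough sketch; the metadata keys and the task lookup are just placeholders):
```
from clearml import Task

task = Task.get_task(task_id='<task id>')  # or Task.current_task()

# upload the artifact together with a metadata dict
task.upload_artifact('results', artifact_object='/path/to/file.json',
                     metadata={'kind': 'validation'})

# later: iterate over all the artifacts and keep only the matching ones
filtered = {
    name: art for name, art in task.artifacts.items()
    if art.metadata and art.metadata.get('kind') == 'validation'
}
```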
Hi SmarmySeaurchin8
You can set the TRAINS_CONFIG_FILE
env var to point at the conf file you want to run with. Can this do the trick?
Are those all? You can copy the whole section from the UI and hide the internal details.
Hi WackyRabbit7
You can configure a default one in your trains.conf
file under sdk.development.default_output_uri
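For example, something like this in the conf file (the bucket path here is just a placeholder):
```
sdk {
    development {
        # default destination for models and artifacts
        default_output_uri: "s3://my-bucket/clearml"
    }
}
```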
👍
So the diff header isn’t related to line 13, but the error is. Can you try adding a space to this line, or even deleting it if you don’t need it? (just for checking)
Can you share the configs/2.2.2_from_scratch.yaml
file with me? The error points to line 13, anything special in that line?
Hi TrickySheep9 , I didn't get the idea. Are you using clearml-data ? Do you just want to upload a local folder to S3?
yes -
task.upload_artifact('local json file', artifact_object="/path/to/json/file.json")
Hi SubstantialElk6 ,
Can you add a screenshot of it? What do you have as MODEL URL ?
The AWS autoscaler isn’t related to other tasks; you can think of it as a service running in your Trains system.
and are configured in the auto scale task?
Didn’t get that 😕
OK, I think I missed something along the way then.
you need to have some diffs, because
Applying uncommitted changes Executing: ('git', 'apply', '--unidiff-zero'): b"<stdin>:11: trailing whitespace.\n task = Task.init(project_name='MNIST', \n<stdin>:12: trailing whitespace.\n task_name='Pytorch Standard', \nwarning: 2 lines add whitespace errors.\n"
can you re-run this task from your local machine again? you shouldn’t have anything under UNCOMMITTED CHANGES
this time (as we ...
Hi WackyRabbit7 ,
Did you try setting sdk.development.detect_with_pip_freeze
to true
in your ~/clearml.conf
file? It should capture the same environment as the one you are running from.
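The relevant part of the file would look roughly like this:
```
sdk {
    development {
        # use pip freeze of the local environment instead of analyzing imports
        detect_with_pip_freeze: true
    }
}
```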
Hi ReassuredTiger98 ,
Can you try TRAINS_LOG_ENVIRONMENT=MYENVVAR
instead of TRAINS_LOG_ENVIRONMENT="MYENVVAR"
?
So once I enqueue it, it is up?
If the trains-agent
is listening to this queue (services mode), yes.
The docs say I can configure the queues that the auto scaler listens to in order to spin up instances, inside the auto scale task. I wanted to make sure that this config has nothing to do with where the auto scale task itself was enqueued.
You are right, the queues configured in the auto scale task have nothing to do with where the auto scale task itself is enqueued.
Hi EagerOtter28 ,
The integration with cloud backing worked out of the box so that was a smooth experience so far
Great to read 🙂
When I create a dataset with 10 files and have it uploaded to e.g. S3 and then create a new dataset with the same files in a different folder structure, all files are reuploaded
For a few .csv files, it does not matter, but we have datasets in the 100GB-2TB range.
Any specific reason for uploading the same dataset twice? ` clearml-da...
Hi WackyRabbit7 ,
You can find all the API docs at https://clear.ml/docs/latest/docs/references/api/index , and for task.get_all
https://clear.ml/docs/latest/docs/references/api/tasks#post-tasksget_all 🙂
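If you prefer calling it from python, a quick sketch using the APIClient (the filters here are just examples):
```
from clearml.backend_api.session.client import APIClient

client = APIClient()
tasks = client.tasks.get_all(
    status=['completed'],        # only completed tasks
    order_by=['-last_update'],   # newest first
    page=0,
    page_size=10,
)
for t in tasks:
    print(t.id, t.name)
```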
btw my site packages is false - should it be true? You pasted that, but I'm not sure what it should be; in the paste it is false but you are asking about true.
It is false
by default; when you change it to true
it should use the system packages. Do you have this package installed in the system? What do you have under installed packages for this task?
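If you mean the agent setting, I assume it is this one in the conf file (just a guess on my side):
```
agent {
    package_manager {
        # when true, the virtualenv created by the agent inherits the system packages
        system_site_packages: true
    }
}
```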
Hi GrittyKangaroo27
did you also close the dataset?
Can you attach the commands you ran, in order, for all the datasets?
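Just to make sure we are talking about the same flow, a rough sketch of the order I'd expect (all the names are placeholders):
```
from clearml import Dataset

parent = Dataset.get(dataset_project='my_project', dataset_name='base_dataset')

child = Dataset.create(
    dataset_project='my_project',
    dataset_name='base_dataset_v2',
    parent_datasets=[parent.id],
)
child.add_files('/path/to/new/files')
child.upload()
child.finalize()   # "close" the dataset so it can be used later as a parent
```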
do you have all your AWS credentials in your ~/clearml.conf
file?
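It should be something like this (placeholders, of course):
```
sdk {
    aws {
        s3 {
            key: "<aws access key>"
            secret: "<aws secret key>"
            region: "<region>"
        }
    }
}
```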
NonchalantDeer14 thanks for the logs, do you maybe have some toy example I can run to reproduce this issue my side?
SpotlessFish46 You can change the models and artifacts destination per experiment with output_uri
https://github.com/allegroai/trains/blob/b644ec810060fb3b0bd45ff3bd0bce87f292971b/trains/task.py#L283 , can this work for you?
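For example (the bucket path is just a placeholder):
```
from trains import Task   # same with the clearml package

task = Task.init(
    project_name='my_project',
    task_name='my_experiment',
    # models and artifacts for this experiment will be uploaded here
    output_uri='s3://my-bucket/models',
)
```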
Hi CleanPigeon16 ,
Currently, only argparse arguments are supported (list of arg=val
).
How do you use the args in your script?
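Just so we're on the same page, this is the kind of usage that gets picked up automatically (a minimal sketch, the arguments are made up):
```
import argparse
from clearml import Task

task = Task.init(project_name='examples', task_name='argparse demo')

parser = argparse.ArgumentParser()
parser.add_argument('--batch_size', type=int, default=32)
parser.add_argument('--lr', type=float, default=0.001)
args = parser.parse_args()   # these show up under the task's hyperparameters
```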
Hi PanickyMoth78 ,
Note that if I change the component to return a regular meaningless string -
"mock_path"
, the pipeline completes rather quickly and the dataset is not uploaded. (edited)
I think it will use the cache on the second run; it should be much, much quicker (nothing to download).
The files server is the default destination for saving all the artifacts; you can change this default with an env var ( CLEARML_DEFAULT_OUTPUT_URI
) or the config file ( ` sdk.development...
Hi MoodyCentipede68 , do you have the repo as part of the https://clear.ml/docs/latest/docs/references/sdk/automation_controller_pipelinecontroller#pipelinedecoratorcomponent ? you can specify for each step which repo to use
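Roughly like this (the repo URL and branch are placeholders, and the exact decorator arguments depend on your clearml version):
```
from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.component(
    repo='https://github.com/<org>/<repo>.git',  # run this step from this repo
    repo_branch='main',
    packages=['pandas'],
)
def preprocess(csv_path: str):
    import pandas as pd
    return pd.read_csv(csv_path).dropna()
```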
Hi SmallDeer34 👋
The dataset task will download the whole dataset when using a clearml-data
task. Do you have both in the same one?
ColossalAnt7 are all the ports open? (8080, 8008, 8081)
Hi ItchyHippopotamus18 ,
It seems the request does not reach the Trains File Server (port 8081, on the same machine running the Trains Server). Can you reach it?
Correct 🙂 polling_interval_time_min
= the autoscaler's interval for checking tasks in the queue