what do you have under installed packages for this task?
can you share the configs/2.2.2_from_scratch.yaml file with me? The error points to line 13, is there anything special in this line?
In the installed packages you have trains==0.16.4, do you import it somewhere in your script?
back to the main subject, can you try adding it and re-running?
Do you have a space at the end of the - line? Can you try editing it and adding a space to the - line if it doesn’t have one?
this is the one from the original (template) task? I can’t see the package that raises the error, can you try adding it and re-running? do you have the imports analysis?
BTW you have both trains and clearml, can you try with clearml only? it should support all the trains imports
thanks for the answer. So, for example (to make sure I understand), with the example you gave above, when I print the config I’ll see the new edited parameters?
Correct
What about the second part of the question, would it be parsed according to the type hinting?
It should
👍
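To make the type-hinting part of the exchange concrete, here is a minimal plain-Python sketch (illustrative only, not ClearML’s actual implementation) of how string values edited in a UI could be cast back according to a function’s type hints:

```python
from typing import get_type_hints

def train(batch_size: int = 32, lr: float = 0.001, use_cuda: bool = True):
    return batch_size, lr, use_cuda

def cast_overrides(func, overrides):
    """Cast string overrides (e.g. values edited in a UI) back to the hinted types."""
    hints = get_type_hints(func)
    casted = {}
    for name, raw in overrides.items():
        target = hints.get(name, str)
        if target is bool:
            # bool("False") is truthy, so parse the text explicitly
            casted[name] = raw.strip().lower() in ("true", "1", "yes")
        else:
            casted[name] = target(raw)
    return casted

params = cast_overrides(train, {"batch_size": "64", "lr": "0.01", "use_cuda": "false"})
print(params)  # {'batch_size': 64, 'lr': 0.01, 'use_cuda': False}
```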
I cannot see the google package, can you try cloning the task and adding it manually? You can always add any package you like to any task with Task.add_requirements('package name', 'package version')
Do you have a toy example so I can reproduce it on my side (using google.cloud while the package is not listed in the task)?
btw my site packages is false - should it be true? You pasted that, but I’m not sure what it should be: in the paste it is false, but you are asking about true
false by default; when you change it to true it should use the system packages. Do you have this package installed in the system? What do you have under installed packages for this task?
Can you check that you have a space at the end of the diff file?
Hi HealthyStarfish45
If you are running the task via docker, we don’t auto-detect the image and docker command, but you have more than one way to set those: you can set the docker manually like you suggested; you can configure the docker image + commands in your ~/trains.conf
https://github.com/allegroai/trains-agent/blob/master/docs/trains.conf#L130 (on the machine running the agent); you can start the agent with the image you want to run with; or you can change the base docker image...
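For the ~/trains.conf route, the relevant section looks roughly like this (a sketch based on the linked template; the image name and arguments are just placeholders):

```
agent {
    # default docker image used when the agent runs in docker mode
    default_docker: {
        image: "nvidia/cuda:10.1-runtime-ubuntu18.04"
        # optional arguments passed to the docker command
        arguments: ["--ipc=host"]
    }
}
```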
Is this the only empty line in the file?
hi DepressedChimpanzee34. Once you change the parameters in the cloned task from the UI, those will be the parameters your task will use when running with the ClearML agent.
The configuration you see in the UI will be the actual running configuration for the task
Hi JitteryCoyote63 , Did you edit the diff part?
Hi DepressedChimpanzee34 ,
Hydra should be auto patched, did you try this example?
https://github.com/allegroai/clearml/blob/master/examples/frameworks/hydra/hydra_example.py
From the UI, clone the task you have, and then hit the edit in the uncommitted changes section (if you can send this file it would be great 🙂 )
are those all? you can copy the whole section from the UI and hide the internal details
Not in your current environment, in the one the clearml-agent creates. In the installed packages you have trains==0.16.4, so I guess you are using it in your code (if not, you can check in your base task, under installed packages, the imports analysis and get the information about where this import is coming from)
About the import issue, did you try adding the missing package and re-running the task?
👍
So the diff header isn’t related to line 13, but the error is; can you try adding a space to this line, or even deleting it if you don’t need it? (just for checking)
With the same error?
Hi PompousBeetle71 ,
Can you please share with me some more information? Where can you see the tags in the server? Do you mean in the web-app? Do you see the tags under the task or the model?
They should be copied, I just want to verify they are.
If so, can you send the logs of the failed task?
yep, but you need to have some initial run for the task to get into the UI (even in remote execution mode)
When uploading the files, a hash is calculated for every entry, and this is done on the local files, so currently clearml-data supports local files.
What would you like to do with the dataset? Why not use it directly from S3?
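As an aside, the per-entry hashing mentioned above can be pictured with a plain-Python sketch (illustrative only, not clearml-data’s actual code); computing the hash requires reading the file’s bytes, which is why local files are needed:

```python
import hashlib
import tempfile
from pathlib import Path

def file_sha256(path, chunk_size=65536):
    """Hash a local file in chunks so large files don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Create a sample local file and hash it, as a dataset tool might per entry:
with tempfile.TemporaryDirectory() as tmp:
    sample = Path(tmp) / "entry.txt"
    sample.write_bytes(b"hello dataset")
    entry_hash = file_sha256(sample)
print(entry_hash)
```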
according to this part
Applying uncommitted changes Executing: ('git', 'apply', '--unidiff-zero'): b"<stdin>:11: trailing whitespace.\n task = Task.init(project_name='MNIST', \n<stdin>:12: trailing whitespace.\n task_name='Pytorch Standard', \nwarning: 2 lines add whitespace errors.\n"
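The git apply output above is complaining about trailing whitespace in the stored diff; a quick stdlib check like this (illustrative, not part of ClearML) finds which lines of a patch end with a space or tab:

```python
def trailing_whitespace_lines(diff_text):
    """Return 1-based line numbers whose content ends with a space or tab."""
    return [
        i
        for i, line in enumerate(diff_text.splitlines(), start=1)
        if line != line.rstrip(" \t")
    ]

# The two lines from the warning above, both ending with a trailing space:
patch = "task = Task.init(project_name='MNIST', \n    task_name='Pytorch Standard', \n"
print(trailing_whitespace_lines(patch))  # [1, 2]
```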
I don’t see the requirements change; let’s try without the cache, can you clear it (the ClearML cache dir is located at ~/.clearml)?
The non-pip-freeze list will have the packages you are using in your script, not the whole env, according to imports and usage
Hi SubstantialElk6 , does the task have a docker image too (you can check it in the UI)?
ok, I think I missed something on the way then.
you need to have some diffs, because
Applying uncommitted changes Executing: ('git', 'apply', '--unidiff-zero'): b"<stdin>:11: trailing whitespace.\n task = Task.init(project_name='MNIST', \n<stdin>:12: trailing whitespace.\n task_name='Pytorch Standard', \nwarning: 2 lines add whitespace errors.\n"
can you re-run this task from your local machine again? you shouldn’t have anything under UNCOMMITTED CHANGES this time (as we ...
no, just run the wizard and start the service (enqueue it or run it locally)
Hi WackyRabbit7
The DevOps project is the project for the AWS autoscaler to run as a service. If you have another project that you run services from, you can change the name 🙂
Hi GrittyKangaroo27
did you also close the dataset?
Can you attach the commands, in order, for all the datasets?