Hi SubstantialElk6 , does the task have a docker image too (you can check it in the UI)?
can you build your own docker image with clearml-agent installed in it?
From the UI, clone the task you have, and then hit edit in the uncommitted changes section (if you can send this file it would be great 🙂)
Hi VexedCat68, what is the dataset task status?
Hi EcstaticBaldeagle77 ,
The comment says “Connecting ClearML with the current process, from here on everything is logged automatically.”
this comment means that every framework is now patched and will report to ClearML too; this can be configured (per task) with the auto_connect_frameworks argument in your Task.init call (an example can be found here - https://clear.ml/docs/latest/docs/faq#experiments )
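Something like this should do it (a minimal sketch; the framework names and flags below are just examples of what you can toggle):
from clearml import Task

# disable automatic logging for specific frameworks, keep the rest on
task = Task.init(
    project_name="my project",
    task_name="my task",
    auto_connect_frameworks={"matplotlib": False, "tensorboard": True, "pytorch": True},
)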
Q2: Can I dump these logged keys & values as local files?
Not sure ...
do you have all your AWS credentials in your ~/clearml.conf file?
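For reference, this is roughly where they go (a sketch of the sdk.aws.s3 section; the key, secret and region values are placeholders):
# ~/clearml.conf
sdk {
    aws {
        s3 {
            key: "YOUR_ACCESS_KEY"
            secret: "YOUR_SECRET_KEY"
            region: "us-east-1"
        }
    }
}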
Hi ElegantDeer55 ,
Are you referring to https://github.com/allegroai/trains-pycharm-plugin ? If so, it should sync your .git folder to the remote machine so the task will log the git info.
Basically I am confused if “remote debugging” should work / kick in automatically when running in docker mode and starting a task like this:
from trains import Task
task = Task.init(project_name="my project", task_name="my task")
task.execute_remotely()
When you are running this code from your P...
Hi DefiantShark80 ,
task.report_scalar() # does not always work
what do you mean? Is report_scalar not sending the info, or is it raising an error?
They should be copied, I just want to verify they are.
If so, can you send the logs of the failed task?
you need to run it, but not actually execute it. You can execute it on the ClearML agent with task.execute_remotely(queue_name='YOUR QUEUE NAME', exit_process=True).
With this, the task won't actually run from your local machine but will just register in the ClearML app and will run with the ClearML agent listening to 'YOUR QUEUE NAME'.
TrickySheep9 you can also add the queue to execute this task:
task.execute_remotely(queue_name="default")
So it will enqueue it too 🙂
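Putting it together, a minimal sketch (project, task and queue names are placeholders):
from clearml import Task

# register the task, then stop local execution and hand it off to the agent
task = Task.init(project_name="my project", task_name="my task")
task.execute_remotely(queue_name="default", exit_process=True)

# everything below this line only runs on the agent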
btw. why do I need to give my git name/pass to run it if I serve an agent from local?
The main idea is that you can run the agent on any machine (local or cloud) and everything should work out of the box.
If your code is running as part of a git repository, the ClearML agent will have to clone it, and to do so it will use credentials.
Git name and pass are one way to do it, but you can also use ssh - if you don't have the git name and password in the configuration, the clearml-agent will t...
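For the first option, this is roughly what it looks like in the agent section of clearml.conf (a sketch; the values are placeholders, and force_git_ssh_protocol is the switch for the ssh route):
# ~/clearml.conf
agent {
    git_user: "YOUR_GIT_USERNAME"
    git_pass: "YOUR_GIT_PASSWORD_OR_TOKEN"
    # or clone over ssh instead of https
    force_git_ssh_protocol: false
}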
btw my site packages is false - should it be true? You pasted that, but I'm not sure what it should be; in the paste it's false but you are asking about true
false by default; when you change it to true it should use the system packages. Do you have this package installed in the system? What do you have under installed packages for this task?
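That flag lives here (a sketch of the relevant clearml.conf entry):
# ~/clearml.conf
agent {
    package_manager {
        # when true, the created venv can see the system / docker python packages
        system_site_packages: true
    }
}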
Can you try installing the package on the docker's python but not in the venv?
In that case you will only get the changes, but you can upload the script as an artifact - can this do the trick?
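Something like this (a minimal sketch; the artifact name and file path are just examples):
from clearml import Task

task = Task.init(project_name="my project", task_name="my task")
# store the full script file itself as an artifact of the task
task.upload_artifact(name="training_script", artifact_object="train.py")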
or do you mean this by your note? i.e. leaving the execution parts empty.
yep 🙂
Currently, when ClearML detects a .git file, it will store your running script as part of your git repo. The workaround to store the whole script as a standalone is just like you did (or to run it outside of the repo).
We currently don’t have such an option to store the script as a standalone, but this could be a useful feature.
Can you add a new issue at https://github.com/allegroai/clearml/issues ...
For this you don’t really need the output_uri, you can just do it as is.
Hi GreasyWalrus57, sorry but I didn't get that.
You want to register the data? You can do it with clearml-data and then use this task to connect between tasks and data
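For example, from the command line (a sketch; the project / dataset names and files path are placeholders):
clearml-data create --project "my project" --name "my dataset"
clearml-data add --files ./data
clearml-data close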
👍 what do you get in the UI under EXECUTION -> SOURCE CODE?
somehow the uncommitted changes (full script in this case) weren't detected
can you share the local run log?
Hi WackyRabbit7 ,
If you only want to get the artifact object, you can use:
task_artifact = Task.get_task(task_id=<YOUR TASK ID>).artifacts[<YOUR ARTIFACT NAME>].get()
Hi WackyRabbit7
You can configure a default one in your trains.conf file under sdk.development.default_output_uri
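It looks roughly like this (a sketch; the bucket URI is just an example destination):
# ~/trains.conf
sdk {
    development {
        default_output_uri: "s3://my-bucket/models"
    }
}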
where task is the value returned from your Task.init call,
task = Task.init(project_name=<YOUR PROJECT NAME>, task_name=<YOUR TASK NAME>)
Hi EnviousStarfish54 ,
You can add environment vars in your code, and trains will use those (no configuration file is needed)
import os
os.environ["TRAINS_API_HOST"] = "YOUR API HOST"
os.environ["TRAINS_WEB_HOST"] = "YOUR WEB HOST"
os.environ["TRAINS_FILES_HOST"] = "YOUR FILES HOST"
Can this do the trick?
https://clear.ml/docs/latest/docs/clearml_agent#allocating-resources
you can specify GPUs to use for each running agent
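For example (a sketch; the queue name and GPU indexes are placeholders):
# agent that only uses GPUs 0 and 1
clearml-agent daemon --queue default --gpus 0,1

# a second agent on the same machine using GPUs 2 and 3
clearml-agent daemon --queue default --gpus 2,3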