The report_scalar feature creates a plot of a single data point (or single iteration).
UnevenDolphin73 that's how I would use it. With it you can compare results between tasks. You can also add it to the project view and filter with it too.
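For reference, a minimal sketch of what reporting a single scalar looks like (the project, title and values here are illustrative):

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="scalar reporting")
logger = task.get_logger()

# a single data point at a single iteration shows up as a one-point plot
logger.report_scalar(title="accuracy", series="test", value=0.91, iteration=0)
```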
Hi ReassuredTiger98,
try:
```python
from clearml.config import running_remotely

if running_remotely():
    ...
```
You need to run it, but not actually execute it. You can execute it on the ClearML agent with `task.execute_remotely(queue_name='YOUR QUEUE NAME', exit_process=True)`.
With this, the task won't actually run on your local machine; it will just register in the ClearML app and run with the ClearML agent listening to 'YOUR QUEUE NAME'.
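Putting it together, a minimal sketch (the queue name and `train_model` are illustrative):

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="remote run")

# registers the task in the ClearML app and exits the local process;
# the agent listening to this queue picks the task up and runs it there
task.execute_remotely(queue_name='default', exit_process=True)

# everything below this line only executes on the agent
train_model()  # hypothetical training entry point
```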
Are you referring to the docker image? The same as before, with `task.set_base_docker("dockerrepo/mydocker:custom --env GIT_SSL_NO_VERIFY=true")`
the controller task? same as here - https://github.com/allegroai/clearml/blob/master/examples/pipeline/pipeline_controller.py
Hi JitteryCoyote63,
You can get some stats (for the last month) under the Workers section in your app; clicking a specific worker will give you some more options.
Those don't include stats per training, only per worker.
I can help you with that 🙂
`task.set_base_docker("dockerrepo/mydocker:custom --env GIT_SSL_NO_VERIFY=true")`
Hi JitteryCoyote63, did you edit the diff part?
Is this the only empty line in the file?
Is it possible to write a text file and see it in the results of the experiment?
You can upload any file as an artifact of your task, try:
`task.upload_artifact(name="results", artifact_object="text_file.txt")`
Notice the max preview for an artifact is 65k, and it is suggested to have one file like this (not one per iteration, for example).
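An end-to-end sketch of that flow (file name and contents are illustrative):

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="text artifact")

# write some results to a local text file
with open("text_file.txt", "w") as f:
    f.write("accuracy: 0.91\n")

# upload it as an artifact; the UI preview is capped at 65k
task.upload_artifact(name="results", artifact_object="text_file.txt")
```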
BTW, why use the API calls and not the ClearML SDK?
Can you try upgrading to the latest? `pip install clearml-agent==0.17.2`
The fileserver will store the debug samples (if you have any).
You'll have cache too.
Not sure about the Other, but maybe adding some metadata to the artifact can do the trick?
You can get all the artifacts with `task.artifacts`, go over them, and filter with the metadata, wdyt?
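A sketch of what that could look like (the metadata keys are illustrative):

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="artifact metadata")

# attach metadata when uploading the artifact
task.upload_artifact(
    name="results",
    artifact_object="text_file.txt",
    metadata={"split": "test"},
)

# later, go over the task's artifacts and filter by that metadata
for name, artifact in task.artifacts.items():
    if artifact.metadata and artifact.metadata.get("split") == "test":
        print(name, artifact.url)
```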
Hi CleanPigeon16.
Do you get anything in the UI regarding this failure (in the RESULTS -> CONSOLE section)?
Can you check whether you have a space at the end of the diff file?
When you run `git diff` in your terminal for this git repo, do you get the requirements changes too? Or the same as in:
```
Applying uncommitted changes
Executing: ('git', 'apply', '--unidiff-zero'): b"<stdin>:11: trailing whitespace.\n task = Task.init(project_name='MNIST', \n<stdin>:12: trailing whitespace.\n task_name='Pytorch Standard', \nwarning: 2 lines add whitespace errors.\n"
```
Hi MinuteWalrus85,
Do you have tensorboard installed too?
I installed trains, fastai, tensorboard and tensorboardx and ran a simple example; it can be viewed at this link:
https://demoapp.trains.allegro.ai/projects/bf5c5ffa40304b2dbef7bfcf915a7496/experiments/e0b68d0fe80a4ff6be332690c0d968be/execution
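For context, a minimal sketch of that kind of example, relying on trains auto-capturing tensorboard output once Task.init is called (project and scalar names are illustrative):

```python
from trains import Task
from torch.utils.tensorboard import SummaryWriter

# Task.init hooks into tensorboard, so SummaryWriter output is logged automatically
task = Task.init(project_name="examples", task_name="tensorboard test")

writer = SummaryWriter()
for step in range(10):
    writer.add_scalar("loss", 1.0 / (step + 1), step)
writer.close()
```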
Hi NuttyOctopus69,
I'm getting the same; I suspect it's some PyPI server issue (see https://status.python.org/ ).
You can install the latest version from GitHub with `pip install git+`
This is a nice issue to open in https://github.com/allegroai/trains :)
Hi DefiantShark80,
`task.report_scalar() # does not always work`
What do you mean? Is report_scalar not sending the info, or is it raising an error?
Hi SubstantialElk6,
You can configure S3 credentials in your ~/clearml.conf file, or with environment variables:
```python
import os

os.environ['AWS_ACCESS_KEY_ID'] = "***"
os.environ['AWS_SECRET_ACCESS_KEY'] = "***"
os.environ['AWS_DEFAULT_REGION'] = "***"
```
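For the ~/clearml.conf route, a sketch of what the relevant section typically looks like (assuming the standard sdk.aws.s3 layout; fill in your own values):

```
sdk {
    aws {
        s3 {
            key: "***"
            secret: "***"
            region: "***"
        }
    }
}
```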
Hi SubstantialElk6,
Which clearml version did you use? And which Python version?
Can you write a small example of how to reproduce this issue?