Hi @<1687643893996195840:profile|RoundCat60>
I think the best way will be to configure a default output_uri to be used by all tasks: under default_output_uri in your clearml.conf, just write your bucket path.
When using S3 / Google Storage / Azure, you will also need to add your credentials in the storage section of clearml.conf (s3 in ...
Same credentials configuration for the ClearML-Agent.
Notice that when a task is created, you can find (and change, if you like) the output destination in the UI, under the EXECUTION tab.
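For reference, a minimal sketch of what that part of clearml.conf could look like, assuming an S3 bucket (the bucket path and credentials below are placeholders):
```
sdk {
    development {
        # placeholder bucket path - replace with your own
        default_output_uri: "s3://my-bucket/clearml-outputs"
    }
    aws {
        s3 {
            # credentials used for uploading artifacts/models to the bucket
            key: "<AWS_ACCESS_KEY>"
            secret: "<AWS_SECRET_KEY>"
            region: "<AWS_REGION>"
        }
    }
}
```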
Hi JitteryCoyote63 , what commit and branch do you see in the UI?
Hi EnviousStarfish54 ,
Do you get any message regarding the repository detection at the end of the task? Does it log Waiting for repository detection and full package requirement analysis?
Do you get the repository detection, or is this section empty too?
Hi VexedCat68
Is it possible to write text file and see it in results of the experiment?
You can upload any file as an artifact to your task, try:
task.upload_artifact(name="results", artifact_object="text_file.txt")
I want to use it to version data as in keeping a track of what images have been trained on. Or is there a better way of data versioning in ClearML?
You can use https://clear.ml/docs/latest/docs/clearml_data/ for making the data accessible from every machine...
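If you go that route, a minimal sketch of versioning an image folder with ClearML Data (the project/dataset names and paths below are placeholders):
```python
from clearml import Dataset

# create a new dataset version (names are placeholders)
dataset = Dataset.create(dataset_project="my_project", dataset_name="training_images")

# add the local image folder, upload it to the default storage, and lock the version
dataset.add_files(path="./images")
dataset.upload()
dataset.finalize()

# later, from any machine, get a local read-only copy of that version
local_path = Dataset.get(dataset_project="my_project", dataset_name="training_images").get_local_copy()
```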
You can always clone a “template” task and change everything (it will be in draft mode). What is your use case? Maybe we already have a solution for it
From the ClearML UI you can just change the value under the BASE DOCKER IMAGE section to your image
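If you prefer setting it from code rather than the UI, a hedged sketch (the image name below is just an example):
```python
from clearml import Task

task = Task.init(project_name="my_project", task_name="my_task")
# same effect as editing BASE DOCKER IMAGE in the UI (image name is a placeholder)
task.set_base_docker("nvidia/cuda:11.4.2-runtime-ubuntu20.04")
```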
Hi SubstantialElk6
If you'd like a new task, you can clone it as HugePelican43 suggested.
You can also continue reporting to your task with the continue_last_task parameter in your Task.init call:
from clearml import Task
task = Task.init(project_name="YOUR PROJECT NAME", task_name="YOUR TASK NAME", continue_last_task=True)
You can also specify the task id to continue (from the docs - https://allegro.ai/clearml/docs/rst/references/clearml_python_ref/task_module/task_task.html?hig...
Hi OutrageousSheep60, I think connect_configuration is your solution for this one (or connect)
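A minimal sketch of both options (file name and parameter values below are just examples):
```python
from clearml import Task

task = Task.init(project_name="my_project", task_name="my_task")

# connect_configuration: attach a configuration file (or dict) to the task;
# when running remotely, the returned path/dict reflects the values edited in the UI
config_path = task.connect_configuration(configuration="config.yaml", name="my config")

# connect: attach a plain dict of parameters that becomes editable in the UI
params = {"batch_size": 32, "lr": 0.001}
task.connect(params)
```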
Hi SquareFish25 ,
Which section would you like to modify? Can env vars do the trick? e.g.
os.environ["CLEARML_API_HOST"] = "***" os.environ["CLEARML_WEB_HOST"] = "***" os.environ["CLEARML_FILES_HOST"] = "***" os.environ["CLEARML_API_ACCESS_KEY"] = "***" os.environ["CLEARML_API_SECRET_KEY"] = "***"
Hi SquareFish25, what about AWS_DEFAULT_REGION, did you add it too? Can you try with it if not?
What version of ClearML are you using?
SquareFish25 Will try to reproduce it
Hi SquareFish25,
I tried the following and succeeded in uploading the file:
```
import os

os.environ['AWS_ACCESS_KEY_ID'] = "***"
os.environ['AWS_SECRET_ACCESS_KEY'] = "***"
os.environ['AWS_DEFAULT_REGION'] = "***"

from clearml import StorageManager

remote_file = StorageManager.upload_file(<file to upload>, 's3://bucket_name/inner_folder/file_name')
```
Can you try it and update if it works for you?
SquareFish25 Still on it 🤞
SquareFish25 do you have a way to try with access key and secret, without a token? Just for checking
I suspect that's the issue.
I will try to generate a new token for myself and reproduce it with it
ArrogantBlackbird16 can you send a toy example so I can reproduce it on my side?
Why not use it directly from S3?
You can download it with the StorageManager: https://allegro.ai/clearml/docs/docs/examples/examples_storagehelper.html#downloading-a-file
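For example, a minimal sketch of downloading a file from S3 with the StorageManager (the bucket path below is a placeholder):
```python
from clearml import StorageManager

# downloads the remote object to the local cache and returns the local path
local_path = StorageManager.get_local_copy(remote_url="s3://bucket_name/inner_folder/file_name")
print(local_path)
```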
Hi MotionlessMonkey27 ,
first, I’m getting a warning:
ClearML Monitor: Could not detect iteration reporting, falling back to iterations as seconds-from-start
This simply indicates your task has not started reporting metrics to the server yet. Once reporting starts, it will switch back to iteration-based reporting.
Also, ClearML is not detecting the scalars, which are being logged as follows:
tf.summary.image('output', output_image, step=self._optimizer.iterations.numpy())
or
for key, value in...
Hi MotionlessSeagull22 ,
You can upload files as artifacts with task.upload_artifact('text file', artifact_object=<path to your file>)
You can find more artifacts examples in https://github.com/allegroai/trains/blob/master/examples/reporting/artifacts.py
pip install clearml works for me now, if you'd like to try…
Hi SmugTurtle78, can you share your configuration (without the secrets)?
- Are you working in a VPC? Did you try configuring only one of the params?
👍 great, so if you have an image with clearml agent, it should solve it 😀
Try to clone the task (right click on the task and choose “clone”) and you will get a new task in draft mode, that you can configure ( https://clear.ml/docs/latest/docs/getting_started/mlops/mlops_first_steps#clone-an-experiment )
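If you'd rather do the same from code, a hedged sketch using Task.clone and Task.enqueue (the task id and queue name below are placeholders):
```python
from clearml import Task

# clone an existing task into a new draft task
template = Task.get_task(task_id="<TEMPLATE_TASK_ID>")
cloned = Task.clone(source_task=template, name="cloned from template")

# optionally send the draft to an agent queue for execution
Task.enqueue(cloned, queue_name="default")
```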
can you try with the latest? pip install clearml==1.1.4
The report_scalar feature creates a plot of a single data point (or single iteration).
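For example, a minimal sketch of reporting a scalar explicitly (the title/series/values below are placeholders):
```python
from clearml import Task

task = Task.init(project_name="my_project", task_name="my_task")
logger = task.get_logger()

# each call adds one data point; reporting over multiple iterations produces a curve
logger.report_scalar(title="accuracy", series="validation", value=0.91, iteration=0)
```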
UnevenDolphin73 that's how I would use it. With it you can compare results between tasks. You can also add it to the project view and filter by it too.
Hi PanickyMoth78 ,
Can you try with pip install clearml==1.8.1rc0? It should include a fix for this issue
Hi NonchalantDeer14 ,
Can you share the env you are running with?
I think this still has to do with the ports. Can you check that?
- Web application on port 8080
- API service on port 8008
- File storage service on port 8081