Yea the "-e ." seems to fit this problem the best.
👍
It seems like whatever I add to `docker_bash_setup_script` is having no effect.
If this is running with the k8s glue, the console output of the `docker_bash_setup_script` is currently not logged into the Task (this bug will be solved in the next version), but the code is being executed. You can see the full logs with kubectl, or test with a simple export:
```
docker_bash_setup_script
export MY...
```
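To test end-to-end, here is a minimal sketch, assuming the `Task.set_base_docker` API; the image, queue name, and `MY_TEST_VAR` are hypothetical:
```python
import os
from clearml import Task

task = Task.init(project_name="examples", task_name="docker setup test")
# hypothetical setup script: export a variable we can look for inside the job
task.set_base_docker(
    docker_image="python:3.9",
    docker_setup_bash_script=["export MY_TEST_VAR=it_ran"],
)
task.execute_remotely(queue_name="default")

# this part runs remotely; if the setup script executed, the variable is visible
print("MY_TEST_VAR =", os.environ.get("MY_TEST_VAR"))
```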
AdventurousRabbit79 are you passing `cache_executed_step=False` to the PipelineController?
https://github.com/allegroai/clearml/blob/332ceab3eadef4997e897d171957975a247a6dc1/clearml/automation/controller.py#L129
Could you send a usage example ?
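For reference, a minimal sketch of where that flag goes, assuming the `add_step` API (project/task names are hypothetical):
```python
from clearml.automation import PipelineController

pipe = PipelineController(name="my pipeline", project="examples", version="1.0")
pipe.add_step(
    name="stage_one",
    base_task_project="examples",         # hypothetical project
    base_task_name="stage one template",  # hypothetical template task
    cache_executed_step=False,            # always re-execute, skip the step cache
)
pipe.start()
```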
my pipeline controller always updates to the latest git commit id
This will only happen if the Task the pipeline creates has no specific commit ID, and instead just uses the latest from the git repo. Is this the case ?
Hi QuaintJellyfish58
You can always set it inside the function with `Task.current_task().output_uri = "s3://"`
I have to ask, I would assume the agents are pre-configured with "default_output_uri" in the clearml.conf, why would you need to set it manually?
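For reference, a sketch of that agent-side setting (the bucket path is hypothetical):
```
# clearml.conf on the agent machine
sdk {
    development {
        # every task executed by this agent uploads its output here
        default_output_uri: "s3://my-bucket/clearml-outputs"
    }
}
```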
I figured out the problem...
Nice!
Unfortunately, the hyperparameters in the configuration object seem to take precedence over the hyperparameters in the Hyperparameters section
Hmm, what do you mean by that? How did you construct the code itself? (you should be able to "prioritize" one over the other)
@<1523701083040387072:profile|UnevenDolphin73> it's looking for any of the files:
None
and what are their names? `worker:0`, `worker:1`, etc.?
Thanks VivaciousPenguin66 !
BTW: if you are running the local code with conda, you can set the agent to use conda as well (notice that if you are running locally with pip, the agent's conda env will use pip to install the packages to avoid version mismatch)
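For reference, a sketch of the relevant clearml.conf setting on the agent machine:
```
agent {
    package_manager {
        # build the execution environment with conda instead of pip/venv
        type: conda
    }
}
```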
Hi PompousBeetle71
Could you test the latest RC? I think the warnings were fixed: `pip install trains==0.16.2rc0`
Let me know...
Hi CurvedHedgehog15
I would like to optimize hparams saved in Configuration objects.
Yes, this is a tough one.
Basically the easiest way to optimize is with hyperparameter sections, as they are basically key/value pairs you can control from the outside (see the HPO process).
Configuration objects are, well, blobs of data that "someone" has to parse. There is no real restriction on them, since there are many standards for storing them (YAML, JSON, INI, dot notation, etc.)
The quickest way is to add...
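For context, a minimal sketch of the hyperparameter-section approach (parameter names and values are hypothetical):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="hpo base")
# connecting a dict surfaces it in the Hyperparameters section (under "General"),
# where the HPO process can override individual values from the outside
params = {"lr": 0.001, "batch_size": 32}
params = task.connect(params)
print(params["lr"])  # reflects any externally overridden value
```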
I see, let me check something 🙂
Add '/', like you would with a file system:
`Task.init(project_name='main_project/sub_project', task_name='test')`
FrothyShark37 any chance you can share snippet to reproduce?
Hi DeliciousBluewhale87
You mean per Task? Is it reporting? Is it like the project overview?
Hi GrittyHawk31
but it could not connect to the grafana dashboard through port 3000, is there any particular reason for that? I may have missed something.
Did you run the full docker-compose.yml ?
Are you able to curl to the endpoints ?
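If it helps, a quick reachability check from the host (assuming Grafana is local; `/api/health` is Grafana's standard health endpoint):
```bash
curl -sS http://localhost:3000/api/health
```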
do you have a git repo link in the Execution section of the experiment?
2021-07-11 19:17:32,822 - clearml.Task - INFO - Waiting to finish uploads
I'm assuming very large uncommitted changes 🙂
Hi @<1569496075083976704:profile|SweetShells3>
Are you using the standard docker-compose? Are you using the default elastic container?
What exactly changed ?
BTW: dockerhub is free and relatively cheap to upgrade 🙂
(GitHub also offers docker registry)
Generally speaking, for that exact reason: if you are passing a list of files or a folder, it will actually zip them and upload the zip file. Specifically for pipelines it should be similar. BTW, I think you can change the number of parallel upload threads in StorageManager, but as you mentioned, it is faster to zip into one file. Make sense?
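For illustration, a minimal sketch (the folder path is hypothetical):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="artifact upload")
# passing a folder: the SDK zips its contents and uploads a single archive
task.upload_artifact(name="dataset", artifact_object="./data_folder/")
```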
Should work out of the box, as long as the task was started. You can forcefully start the task with: `task.mark_started()`
Thanks EnviousStarfish54
Let me check if I can reproduce it
I have installed a Python environment with the virtualenv tool, let's say
/home/frank/env
and the python binary is
/home/frank/env/bin/python3.
How can I reuse this virtualenv by configuring the clearml agent?
So the agent is already caching the entire venv for you, nothing to worry about. Just make sure you have this line in clearml.conf:
https://github.com/allegroai/clearml-agent/blob/249b51a31bee97d63f41c6d5542e657962008b68/docs/clearml.conf#L131
No need to provide it an existing...
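That link points into the venv-cache section of the sample conf; a sketch of what enabling it looks like (values taken from the sample defaults, to the best of my recollection):
```
agent {
    venvs_cache: {
        # maximum number of cached venvs
        max_entries: 10
        # minimum free space (GB) required to add a cache entry
        free_space_threshold_gb: 2.0
        # setting the path enables the cache
        path: ~/.clearml/venvs-cache
    }
}
```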
Thanks SmallDeer34, I think you are correct: the 'output' model is returned properly, but 'input' models are returned as model names, not model objects.
Let me check something
SubstantialElk6
The CA is picked up automatically by urllib; check the OS environment variables you need to configure it:
https://stackoverflow.com/questions/27835619/urllib-and-ssl-certificate-verify-failed-error
`SSL_CERT_FILE`, `REQUESTS_CA_BUNDLE`
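For example (the bundle path is hypothetical):
```bash
# point both variables at your CA bundle before starting the process
export SSL_CERT_FILE=/etc/ssl/certs/my-ca-bundle.pem
export REQUESTS_CA_BUNDLE=/etc/ssl/certs/my-ca-bundle.pem
```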
Sure thing, hopefully I'll remember to ping tomorrow once GitHub is synced, I'd appreciate it if you could verify the fix works 🙂
ZanyPig66 this should have worked. Any chance you can send the full execution log (in the UI: Results -> Console, download full log) and attach it here? (you can also DM it so it is not public)
Well (yes, I think). The environment section is used mostly for logging; the next version of the clearml-agent will have full support (due next week), and the next release of clearml-server will add bash-script support.