I have a preprocessing task that is CPU-parallelized, but I wanted to create a ClearML task for each instance since it reports some parameters that I want to aggregate later. Running this quickly requires 10-30 agents listening on the same queue
CostlyOstrich36 ok, the issue is not the clearml-agent version, it is the conda environment I'm trying to run with. I usually run my agents from one conda environment, and when I pass CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=1, it just uses the conda interpreter and all is good. When I try to do this with a different conda env, it doesn't work and forcibly tries to create a new venv. Any idea why?
Edit: if it is any help, the conda env that works is installed in the conda install dir, and the n...
SuccessfulKoala55 - Ok, new question that might focus this down:
I want to be able to query the fileserver using HTTP URLs whose responses include an "Accept-Ranges" header. I exposed this in apiserver.conf, but that does not affect the fileserver, only the web server. Any chance to add a fileserver.conf to enable this functionality?
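For anyone unsure what behaviour is being asked for here: a minimal sketch of HTTP byte-range support, using only the standard library. This is not the ClearML fileserver code (that is a separate codebase); the local stand-in server below only illustrates what "Accept-Ranges: bytes" plus a 206 Partial Content response look like on the wire, which is what range-consuming clients (e.g. .bag playback tools) need.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

PAYLOAD = b"0123456789" * 10  # 100 bytes of demo data


class RangeHandler(BaseHTTPRequestHandler):
    """Toy handler that honours simple 'bytes=start-end' Range requests."""

    def do_GET(self):
        rng = self.headers.get("Range")
        if rng and rng.startswith("bytes="):
            start_s, _, end_s = rng[len("bytes="):].partition("-")
            start = int(start_s)
            end = int(end_s) if end_s else len(PAYLOAD) - 1
            chunk = PAYLOAD[start:end + 1]
            self.send_response(206)  # Partial Content
            self.send_header(
                "Content-Range", f"bytes {start}-{end}/{len(PAYLOAD)}"
            )
        else:
            chunk = PAYLOAD
            self.send_response(200)
        # Advertise range support so clients know they can seek:
        self.send_header("Accept-Ranges", "bytes")
        self.send_header("Content-Length", str(len(chunk)))
        self.end_headers()
        self.wfile.write(chunk)

    def log_message(self, *args):  # keep the demo quiet
        pass


server = HTTPServer(("127.0.0.1", 0), RangeHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Ask for bytes 10..19 only:
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/demo.bag",
    headers={"Range": "bytes=10-19"},
)
with urllib.request.urlopen(req, timeout=5) as resp:
    status, body = resp.status, resp.read()
print(status, body)  # 206 b'0123456789'
server.shutdown()
```

A server without this behaviour would answer the same request with 200 and the full body, which is why exposing the setting only on the web server (and not the fileserver) makes range-based playback fail.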
CostlyOstrich36 it worked! thank you so much :grinning_face_with_star_eyes:
AgitatedDove14 Is there a way to change the colors of embedded plots, just like in the UI? Some default colors are hard to view in dark mode. Also, the color of an embedded plot is the same regardless of the color of the original plot
Hi! Is there any example/documentation to the new ClearML Reports feature?
edit: never mind. Apparently it is just an embedded Markdown editor... :man-facepalming:
AnxiousSeal95 looking forward to it!
AgitatedDove14 Ok, that's more like what I was hoping for. Thanks for the reply! You might want to consider adding documentation for new features to the release page on GitHub. I searched for this, and since "Report" is already used in several other contexts, I missed it among the other reporting features of ClearML
Thanks SuccessfulKoala55 - I will take a look. It is worth adding this information somewhere in the documentation though 🤓
UPDATE: The issue was in clearml.conf
In the API settings, the server address used an alias which was not defined in the docker. Once that was replaced with the explicit IP address, everything worked as expected
@<1523701087100473344:profile|SuccessfulKoala55> they're running in the background. Is there an easy way to tail the logs or something?
@<1523701205467926528:profile|AgitatedDove14> I'm having a similar issue. I'm trying to run to get a task to run using a specific docker image and to source a bash script before execution of the python script.
I'm using docker_bash_setup_script
but there is no indication that it was ever run.
Any idea how to get this to work?
@<1523701205467926528:profile|AgitatedDove14> actually no. I was unaware that was needed. I will try that and let you know
GaudyPig83 AgitatedDove14 I'm experiencing the same issue. was this resolved?
BTW, I'm running a self-hosted server with latest versions of server, agent and clearml
I did. Running clearml==1.8.0, clearml-agent==1.4.1 and clearml-server==1.7.0
edit: I reran the script from terminal instead of the GUI and it works. thanks AgitatedDove14 !
AgitatedDove14 thanks for the tip. I tried it but it still compresses the data to zip format. I suspect it is since ZIP_STORED is a constant that equals 0:
This is problematic due to line 689 in dataset.py ( https://github.com/allegroai/clearml/blob/d17903d4e9f404593ffc1bdb7b4e710baae54662/clearml/datasets/dataset.py#L689 ):
compression=compression or ZIP_DEFLATED
Since ZIP_DEFLATED=8, passing compression=0 still causes data to be compressed
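The falsy-zero problem can be reproduced with nothing but the stdlib zipfile constants, independent of ClearML itself. This sketch shows the bug and one possible fix via an explicit None check (a slight variation on the one-liner suggested in this thread):

```python
from zipfile import ZIP_DEFLATED, ZIP_STORED

# The relevant constants: "stored" (no compression) is literally 0.
assert ZIP_STORED == 0 and ZIP_DEFLATED == 8

# The expression from dataset.py: since ZIP_STORED is 0 and therefore
# falsy, asking for "no compression" silently falls back to ZIP_DEFLATED.
requested = ZIP_STORED
effective = requested or ZIP_DEFLATED
print(effective)  # 8 -> files get deflated anyway

# An explicit None check preserves the caller's choice and still
# defaults to ZIP_DEFLATED when nothing was passed:
effective_fixed = ZIP_DEFLATED if requested is None else requested
print(effective_fixed)  # 0 -> stored uncompressed
```

The general lesson is that `x or default` is only safe when every legitimate value of `x` is truthy; for integer enums that include 0, an `is None` test is required.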
It takes around 3-5 minutes to upload 100-500k plain text files. I just assumed that the added size includes the entire dataset plus metadata, SHA2 hashes and other things required for dataset functionality
AgitatedDove14 I have a suggested fix, should I open an issue on GitHub, or can I directly open a PR?
compression=compression or ZIP_DEFLATED if compression else ZIP_STORED
OutrageousSheep60 passing None means using default compression. You need to pass compression=0
sure, that's slightly more elegant. I'll open a PR now
CostlyOstrich36 from what I can tell, the user_id argument is new and is indeed missing from the enqueue call
CostlyOstrich36 this is the error that I see in dev tools:
well maybe, but that kinda ruins the whole point of auto logging. The scalars are already logged, it is just a matter of plotting so I assumed it could be done in the webapp
CostlyOstrich36 any news? I currently resort to reporting these scalars manually in order to get the desired result (which makes auto logging redundant)
I put it there since I tried using clearml fileserver URLs as input to foxglove studio (playback of remote .bag files recorded in ROS). Foxglove required these CORS settings, but we since then pivoted to a different solution so it is no longer needed.
For future reference - I managed to solve the issue.
I found that I have an old CORS configuration that I no longer need in /opt/clearml/config/ (for both API and file servers). Once I removed the cors segment from the config and restarted the server it worked.
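For anyone hitting the same symptom, the leftover override looked roughly like this. Note this is a hypothetical reconstruction: the exact keys depend on what was originally added, but it lived in a config override file under /opt/clearml/config/ (one each for the API server and the fileserver):

```
# Hypothetical stale CORS override, e.g. /opt/clearml/config/apiserver.conf
# (and a similar block in the fileserver override). Deleting the block and
# restarting the server restored normal behaviour.
cors {
  origins: ["*"]
  supports_credentials: true
}
```

Override files in that directory are merged into the server's defaults on startup, so a forgotten snippet keeps applying across upgrades until it is removed.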
@<1533257407419912192:profile|DilapidatedDucks58> I had the same issue and solved it by removing the server's CORS config (from the config files under /opt/clearml/config).
see this thread for details: None
@<1523701070390366208:profile|CostlyOstrich36> @<1523701087100473344:profile|SuccessfulKoala55> any idea how to fix this?
UPDATE:
Tried to kill the services mode agent and start a new one instead and I get a similar error:
CLEARML_WORKER_NAME=pls_work clearml-agent daemon -d --services-mode --queue foo_bar
clearml_agent: ERROR: create.<locals>.Validator.__init__() got an unexpected keyword argument 'types'
makes sense, just wanted to make sure I'm not missing anything obvious. I'll probably just avoid the pipeline grouping view for now. Thanks anyway