UPDATE #2:
Updating clearml_agent to 1.5.2 solved the issue. Not sure why though...
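(For reference, assuming a pip-based install, the upgrade amounts to:)
```
pip install clearml-agent==1.5.2
```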
GaudyPig83 AgitatedDove14 I'm experiencing the same issue. Was this resolved?
BTW, I'm running a self-hosted server with the latest versions of the server, agent and clearml.
UPDATE: The issue was in clearml.conf.
In the API settings, the server address used an alias that was not defined in the docker. Once that was replaced with the explicit IP address, everything worked as expected.
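For anyone hitting the same symptom, a minimal sketch of the relevant clearml.conf section, with a placeholder IP standing in for the alias (the ports are the server defaults):
```
api {
    # placeholder address -- use your server's explicit IP, not an alias
    web_server: http://192.168.1.10:8080
    api_server: http://192.168.1.10:8008
    files_server: http://192.168.1.10:8081
}
```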
AnxiousSeal95 looking forward to it!
sure, that's slightly more elegant. I'll open a PR now
OutrageousSheep60 passing `None` means using the default compression. You need to pass `compression=0`
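A minimal sketch of what that looks like (dataset/project names and the path are placeholders):
```python
from zipfile import ZIP_STORED
from clearml import Dataset

# placeholder dataset name, project and files path
ds = Dataset.create(dataset_name="my_dataset", dataset_project="my_project")
ds.add_files("/path/to/files")
# ZIP_STORED == 0 stores the files in the zip without compressing them;
# leaving compression=None falls back to the default (ZIP_DEFLATED)
ds.upload(compression=ZIP_STORED)
ds.finalize()
```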
Thanks SuccessfulKoala55 - I will take a look. It is worth adding this information somewhere in the documentation though 🤓
@<1523701087100473344:profile|SuccessfulKoala55> they're running in the background. Is there an easy way to tail the logs or something?
@<1523701205467926528:profile|AgitatedDove14> actually no. I was unaware that was needed. I will try that and let you know
@<1523701087100473344:profile|SuccessfulKoala55> the docker user is root, but it does not have the "sudo" group if that's what you mean. Is that required?
EDIT: I just ran `sudo ls` and it returned with no issues, so I guess I do have sudo permissions 🤷‍♂️
@<1523701087100473344:profile|SuccessfulKoala55> any update on how to solve this?
I did. Running clearml==1.8.0, clearml-agent==1.4.1 and clearml-server==1.7.0
edit: I reran the script from the terminal instead of the GUI and it works. Thanks AgitatedDove14!
AgitatedDove14 Ok, that's more like what I was hoping for. Thanks for the reply! You might want to consider adding the documentation of a new feature to the release page on GitHub. I searched for this, and since "Report" is already used in several other contexts, I missed it among the other reporting features of ClearML
CostlyOstrich36 ok, the issue is not the clearml-agent version, it is the conda environment I'm trying to run with. I usually run my agents from one conda environment, and when I pass CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=1, it just uses the conda interpreter and all is good. When I try to do this with a different conda env, it does not work and forcibly tries to create a new venv. Any idea why?
Edit: if it is any help, the conda env that works is installed in the conda install dir, and the n...
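A sketch of how the working setup is launched, assuming a hypothetical env and queue name (my_env / my_queue):
```
# activate the conda env whose interpreter the agent should reuse
conda activate my_env
# skip venv creation so the agent runs tasks with the active interpreter
CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=1 clearml-agent daemon --queue my_queue
```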
AgitatedDove14 I have a suggested fix; should I open an issue on GitHub, or can I directly open a PR?
```python
compression=compression or ZIP_DEFLATED if compression else ZIP_STORED
```
CostlyOstrich36 any news? I currently resort to reporting these scalars manually in order to get the desired result (which makes auto-logging redundant)
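The manual workaround looks roughly like this (project/task names and values are placeholders):
```python
from clearml import Task

# placeholder project and task names
task = Task.init(project_name="my_project", task_name="manual_scalars")
logger = task.get_logger()
for iteration, value in enumerate([0.9, 0.5, 0.3]):
    # title groups curves onto one plot; series selects the curve within it
    logger.report_scalar(title="loss", series="train", value=value, iteration=iteration)
```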
AgitatedDove14 thanks for the tip. I tried it, but it still compresses the data to zip format. I suspect this is because ZIP_STORED is a constant that equals 0:
This is problematic due to line 689 in dataset.py (https://github.com/allegroai/clearml/blob/d17903d4e9f404593ffc1bdb7b4e710baae54662/clearml/datasets/dataset.py#L689):
```python
compression=compression or ZIP_DEFLATED
```
Since ZIP_DEFLATED == 8, passing compression=0 still causes the data to be compressed.
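A tiny self-contained demonstration of the truthiness pitfall:
```python
from zipfile import ZIP_DEFLATED, ZIP_STORED

# ZIP_STORED == 0, so "compression or ZIP_DEFLATED" silently discards it
compression = ZIP_STORED
effective = compression or ZIP_DEFLATED
print(effective)  # prints 8 (ZIP_DEFLATED) -- the explicit "no compression" is lost
```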
UPDATE:
Tried to kill the services-mode agent and start a new one instead, and I get a similar error:
```
CLEARML_WORKER_NAME=pls_work clearml-agent daemon -d --services-mode --queue foo_bar
clearml_agent: ERROR: create.<locals>.Validator.__init__() got an unexpected keyword argument 'types'
```
CostlyOstrich36 it worked! thank you so much 🤩
@<1523701205467926528:profile|AgitatedDove14> exactly what I was looking for. Thanks!
these are the mounts I add:
```
-v /home/some_username/workspace/:/root/workspace -v /software:/software -v /images:/images -v /data:/data -v /processedData:/processedData -v /disk1:/disk1 -v /disk2:/disk2 -v /disk3:/disk3 -v /disk4:/disk4 -v /disk5:/disk5 -v /disk6:/disk6 -v /disk8:/disk8
```
None of these seem problematic to me. The only issue I can think of is that /home is an external mount on the host machine (outside of docker). Should I mount it somewhere?
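One way to persist mounts like these, assuming the agent runs in docker mode, is the agent.extra_docker_arguments setting in clearml.conf (a sketch with two of the paths from the message above):
```
agent {
    # passed verbatim to every container the agent starts
    extra_docker_arguments: [
        "-v", "/home/some_username/workspace/:/root/workspace",
        "-v", "/data:/data"
    ]
}
```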
SuccessfulKoala55 - Ok, new question that might narrow this down:
I want to be able to query the fileserver using HTTP URLs that return "Accept-Ranges" in the header. I exposed this in apiserver.conf, but it does not affect the fileserver, only the webserver. Any chance to add a fileserver.conf to enable this functionality?
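A quick way to check whether the fileserver currently honors range requests, using a placeholder URL:
```python
import requests

# placeholder URL pointing at an artifact served by the clearml fileserver
resp = requests.head("http://my-clearml-server:8081/some/artifact.bag")
# "bytes" here means byte-range (partial) requests are supported
print(resp.headers.get("Accept-Ranges"))
```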
@<1533257407419912192:profile|DilapidatedDucks58> I had the same issue and solved it by removing the server's CORS config (from any config file under /opt/clearml/config).
see this thread for details: None
@<1523701070390366208:profile|CostlyOstrich36> @<1523701087100473344:profile|SuccessfulKoala55> any idea how to fix this?
makes sense, just wanted to make sure I'm not missing anything obvious. I'll probably just avoid the pipeline grouping view for now. Thanks anyway
well maybe, but that kinda ruins the whole point of auto-logging. The scalars are already logged; it is just a matter of plotting, so I assumed it could be done in the webapp
It takes around 3-5 minutes to upload 100-500k plain text files. I just assumed that the added size includes the entire dataset, including metadata, SHA2 hashes and other stuff required for dataset functionality
I put it there since I tried using clearml fileserver URLs as input to Foxglove Studio (playback of remote .bag files recorded in ROS). Foxglove required these CORS settings, but we have since pivoted to a different solution, so it is no longer needed.
For future reference - I managed to solve the issue.
I found that I had an old CORS configuration that I no longer needed in /opt/clearml/config/ (for both the API and file servers). Once I removed the cors segment from the config and restarted the server, it worked.
AgitatedDove14 Is there a way to change the colors of an embedded plot just like in the UI? Some default colors make it hard to view in dark mode. Also, the color of embedded plots is the same regardless of the color of the original plot
I have a preprocessing task that is CPU-parallelized, but I wanted to create a ClearML task for each instance, since it reports some parameters that I want to aggregate later. Running this quickly requires 10-30 agents listening on the same queue
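A minimal sketch for spawning several agents on one queue (worker names, count and queue name are placeholders):
```
# start 10 detached agents, all pulling from the same "preprocessing" queue
for i in $(seq 1 10); do
  CLEARML_WORKER_NAME="preproc_worker_$i" clearml-agent daemon --detached --queue preprocessing
done
```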