
I put it there since I tried using ClearML fileserver URLs as input to Foxglove Studio (playback of remote .bag files recorded in ROS). Foxglove required these CORS settings, but we have since pivoted to a different solution, so it is no longer needed.
@<1523701205467926528:profile|AgitatedDove14> actually no. I was unaware that was needed. I will try that and let you know
For future reference - I managed to solve the issue.
I found an old CORS configuration in /opt/clearml/config/ (for both the API and file servers) that I no longer need. Once I removed the cors segment from the config and restarted the server, it worked.
Ideally, I would've wanted to use the GUI to send shutdown/restart signals to the worker itself, similar to what I currently do with tasks. From what I understand, this is not possible, so I would settle for an easier way to find what command to run to kill a certain worker. Even a simple button in the workers/queues GUI to copy a worker's ID and name would make things easier.
well maybe, but that kinda ruins the whole point of auto logging. The scalars are already logged, it is just a matter of plotting so I assumed it could be done in the webapp
makes sense, just wanted to make sure I'm not missing anything obvious. I'll probably just avoid the pipeline grouping view for now. Thanks anyway
AgitatedDove14 I have a suggested fix; should I open an issue on GitHub, or can I open a PR directly? The fix: `compression=compression or ZIP_DEFLATED if compression else ZIP_STORED`
CostlyOstrich36 ok, the issue is not the clearml-agent version, it is the conda environment I'm trying to run with. I usually run my agents from one conda environment, and when I pass CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=1 it just uses the conda interpreter and all is good. When I try to do this with a different conda env it does not work and forcibly tries to create a new venv. Any idea why?
Edit: if it is any help, the conda env that works is installed in the conda install dir, and the n...
AgitatedDove14 thanks for the tip. I tried it but it still compresses the data to zip format. I suspect it is because ZIP_STORED is a constant that equals 0:
This is problematic due to line 689 in dataset.py ( https://github.com/allegroai/clearml/blob/d17903d4e9f404593ffc1bdb7b4e710baae54662/clearml/datasets/dataset.py#L689 ): `compression=compression or ZIP_DEFLATED`
Since `ZIP_DEFLATED == 8`, passing `compression=0` still causes the data to be compressed
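A minimal snippet (not from the thread, just to illustrate the truthiness issue) showing why passing 0 falls through to ZIP_DEFLATED:

```python
from zipfile import ZIP_DEFLATED, ZIP_STORED

compression = ZIP_STORED             # ZIP_STORED == 0, which is falsy
print(compression or ZIP_DEFLATED)   # prints 8 (ZIP_DEFLATED), so the archive still gets compressed
```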
@<1523701205467926528:profile|AgitatedDove14> exactly what I was looking for. Thanks!
Thanks SuccessfulKoala55 - I will take a look. It is worth adding this information somewhere in the documentation though 🤓
@<1523701205467926528:profile|AgitatedDove14> I'm having a similar issue. I'm trying to get a task to run using a specific docker image and to source a bash script before the python script executes.
I'm using `docker_bash_setup_script` but there is no indication that it was ever run.
Any idea how to get this to work?
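Roughly the kind of call I mean (a sketch only; project/task names, repo, script, docker image and the setup-script line are placeholders):

```python
from clearml import Task

# sketch: all names below are placeholders, not the actual project
task = Task.create(
    project_name="examples",
    task_name="docker-bash-setup-test",
    repo="https://github.com/some_org/some_repo.git",    # hypothetical repo
    script="train.py",                                    # hypothetical entry point
    docker="nvidia/cuda:11.8.0-runtime-ubuntu22.04",      # the specific image I want
    docker_bash_setup_script="source /opt/setup_env.sh",  # the script I expect to be sourced
)
Task.enqueue(task, queue_name="default")
```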
OutrageousSheep60 passing `None` means using default compression. You need to pass `compression=0`
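i.e. something along these lines (a sketch; dataset name, project and path are placeholders):

```python
from zipfile import ZIP_STORED
from clearml import Dataset

ds = Dataset.create(dataset_name="my_dataset", dataset_project="examples")  # placeholder names
ds.add_files("/path/to/files")       # placeholder path
ds.upload(compression=ZIP_STORED)    # ZIP_STORED == 0, i.e. store the files uncompressed
ds.finalize()
```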
It takes around 3-5 minutes to upload 100-500k plain text files. I just assumed that the added size includes the entire dataset, including metadata, SHA2 hashes, and other stuff required for dataset functionality
these are the mounts I add: `-v /home/some_username/workspace/:/root/workspace -v /software:/software -v /images:/images -v /data:/data -v /processedData:/processedData -v /disk1:/disk1 -v /disk2:/disk2 -v /disk3:/disk3 -v /disk4:/disk4 -v /disk5:/disk5 -v /disk6:/disk6 -v /disk8:/disk8`
None of these seem problematic to me. The only issue I can think of is that /home is an external mount on the host machine (outside of docker). Should I mount it somewhere?
CostlyOstrich36 any news? I currently resort to reporting these scalars manually in order to get the desired result (which makes auto logging redundant)
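The manual workaround looks roughly like this (a sketch; the title/series names and values are placeholders for the scalars that are already auto-logged):

```python
from clearml import Task

task = Task.current_task()  # assumes a task is already initialized
logger = task.get_logger()
# placeholder title/series/values; one call per point I want plotted
logger.report_scalar(title="loss", series="train", value=0.123, iteration=42)
```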
I did. Running clearml==1.8.0, clearml-agent==1.4.1 and clearml-server==1.7.0
edit: I reran the script from terminal instead of the GUI and it works. thanks AgitatedDove14 !
@<1523701087100473344:profile|SuccessfulKoala55> they're running in the background. Is there an easy way to tail the logs or something?
CostlyOstrich36 this is the error that I see in dev tools:
sure, that's slightly more elegant. I'll open a PR now
@<1533257407419912192:profile|DilapidatedDucks58> I had the same issue and solved it by removing the server's CORS config (from any config file under /opt/clearml/config).
see this thread for details: None
@<1523701070390366208:profile|CostlyOstrich36> @<1523701087100473344:profile|SuccessfulKoala55> any idea how to fix this?
GaudyPig83 AgitatedDove14 I'm experiencing the same issue. was this resolved?
BTW, I'm running a self-hosted server with latest versions of server, agent and clearml
UPDATE: The issue was in clearml.conf.
In the API settings, the server address used an alias which was not defined inside the docker container. Once that was replaced with the explicit IP address, everything worked as expected
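For reference, the api section of clearml.conf ended up looking roughly like this (IP and ports are placeholders; the point is using an address the container can resolve instead of the alias):

```
api {
    web_server: http://192.0.2.10:8080
    api_server: http://192.0.2.10:8008
    files_server: http://192.0.2.10:8081
}
```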
yes.
I checked the base docker image I use and noticed that a $HOME/.gitconfig file already exists. would that be an issue given that the .gitconfig of the current user is mounted to that path once the clearml task is run?
CostlyOstrich36 from what I can tell, the `user_id` argument is new and is indeed missing from the enqueue call
UPDATE #2:
Updating clearml_agent to 1.5.2 solved the issue. Not sure why though...
AnxiousSeal95 looking forward to it!
that's an interesting use case for services mode. I'll try it out. thanks!
SuccessfulKoala55 - Ok, new question that might focus this down:
I want to be able to query the fileserver using HTTP URLs that return an "Accept-Ranges" header. I exposed this in apiserver.conf, but it does not affect the fileserver, only the web server. Any chance to add a fileserver.conf to enable this functionality?