Yes. I've also checked the configs on all of my machines just in case.
@<1523701070390366208:profile|CostlyOstrich36> Hi! Sorry for not responding for a long time. I couldn't reproduce my issue until today. Usually I don't need to use StorageManager, as I get the contents of the parameter directly. However, for some reason, once again instead of the contents of the dataframe I got a PosixPath to the artifact, or a string describing a dict with a URL to the artifact. I've implemented an if statement to catch such cases, but again I can't access the artifact. I've tried S...
http://<host>:8081/lp_veh_detect_train_pipeline/.pipelines/vids_pipe/detect_frames.4a80b274007d4e969b71dd03c69d504c/artifacts/videos_df/videos_df.csv.gz
(the <host> contains the correct hostname)
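The fallback I was trying is roughly this (a minimal sketch, not my exact code; the artifact name and the gzipped CSV come from the URL above, everything else is simplified):
```python
from pathlib import Path

import pandas as pd
from clearml import StorageManager


def ensure_dataframe(videos_df):
    """Return a DataFrame whether we got the object itself, a remote URL, or a local path."""
    if isinstance(videos_df, pd.DataFrame):
        return videos_df
    if isinstance(videos_df, (str, Path)) and str(videos_df).startswith(("http://", "https://", "s3://")):
        # Remote artifact URL -> download it first (assumes credentials/server are configured)
        local_path = StorageManager.get_local_copy(remote_url=str(videos_df))
        return pd.read_csv(local_path, compression="gzip")
    # Otherwise assume it's a local PosixPath to the serialized artifact
    return pd.read_csv(str(videos_df), compression="gzip")
```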
I didn't understand how to use it. Everything I've tried failed. Could you give me an example?
here's a log of the failing step
@<1523701205467926528:profile|AgitatedDove14> Hi, I have no idea, as I don't upload the file to the artifacts myself. I return the df from a function in the previous pipeline step and then pass it as a parameter to this step.
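For context, the structure is roughly like this (a minimal sketch, not the actual pipeline code; the step and artifact names follow the URL above):
```python
from clearml import PipelineDecorator


@PipelineDecorator.component(return_values=["videos_df"])
def detect_frames():
    # Previous step: builds and returns the dataframe; ClearML stores it as an artifact
    import pandas as pd
    return pd.DataFrame({"video": ["a.mp4", "b.mp4"], "frames": [100, 200]})


@PipelineDecorator.component()
def process_videos(videos_df):
    # This step: expects the dataframe itself, not a path/URL to the artifact
    print(videos_df.head())


@PipelineDecorator.pipeline(name="vids_pipe", project="lp_veh_detect_train_pipeline", version="1.0")
def pipeline():
    df = detect_frames()
    process_videos(videos_df=df)
```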
@<1523701205467926528:profile|AgitatedDove14> I could test it, but I just recently fixed this issue by caching the previous step that this artifact comes from. Now I'm getting the dataframe itself instead of a link to the artifact.
I don't know if we should waste time on this. However, it's very interesting why enabling caching for the step changes the artifact behavior.
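For reference, the only change that fixed it on my side was roughly this (a sketch; cache is the component decorator's caching flag):
```python
from clearml import PipelineDecorator


# Enabling caching on the producing step is what made the consuming step
# receive the dataframe itself again instead of a link to the artifact.
@PipelineDecorator.component(return_values=["videos_df"], cache=True)
def detect_frames():
    import pandas as pd
    return pd.DataFrame({"video": ["a.mp4", "b.mp4"], "frames": [100, 200]})
```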
@<1523701205467926528:profile|AgitatedDove14> I've checked my configs and it's all good, no / is missing
Here is a log of the pipeline
Hi! Could anyone please take a look at my issue? It's still relevant for me.
@<1523701087100473344:profile|SuccessfulKoala55> 1.9.0
Yes, they are. I was using the same service account from the same code base with the Python SDK for download, delete, upload, and so on.
@<1523701205467926528:profile|AgitatedDove14> it is as expected - a dataframe
@<1523701205467926528:profile|AgitatedDove14> I've found it in the docker compose, thanks!!
Thank you. I was considering this possibility, but I hoped for another solution, because it would require having multiple agents running on the same machine but in different docker containers for different tasks. I couldn't find how to set up clearml.conf parameters so that multiple agents could run on the same machine with different base images.
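What I was hoping for was something like being able to set the image per task rather than per agent, roughly like this (a sketch; I haven't verified the exact keyword arguments across SDK versions, and the image name is just an example):
```python
from clearml import Task

task = Task.init(project_name="lp_veh_detect_train_pipeline", task_name="detect_frames")

# Ask a docker-mode agent to run this particular task in a specific image,
# so a single agent could serve tasks with different base images.
# (keyword name assumed from recent SDK versions; older versions take a single docker command string)
task.set_base_docker(docker_image="nvcr.io/nvidia/pytorch:23.04-py3")
```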
Also, I've checked whether StorageManager upload works for the same bucket and can confirm that it does, so the issue shouldn't be with permissions.
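The check was essentially this round trip (a sketch; the bucket and paths are placeholders):
```python
from clearml import StorageManager

# Upload to the same bucket and pull it back to rule out permission issues
remote_url = StorageManager.upload_file(
    local_file="videos_df.csv.gz",
    remote_url="s3://my-bucket/debug/videos_df.csv.gz",  # placeholder bucket/path
)
local_copy = StorageManager.get_local_copy(remote_url=remote_url)
print(local_copy)
```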
is it in clearml.conf api.files_server?
yes, that's right. So do you have any ideas what might be wrong?
@<1523701070390366208:profile|CostlyOstrich36> good to know. I hope you or someone else will be able to take a look at my issue at some point. Thanks!
Have you tried accessing the artifact similarly to this example: None ?
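In case the link doesn't render, the pattern I mean is roughly this (a sketch; the task ID and artifact name are copied from the URL you posted):
```python
from clearml import Task

# Fetch the producing step's task and pull the artifact back into memory
producer = Task.get_task(task_id="4a80b274007d4e969b71dd03c69d504c")
videos_df = producer.artifacts["videos_df"].get()  # deserializes the stored object
# or: local_path = producer.artifacts["videos_df"].get_local_copy()
print(videos_df.head())
```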
I'm no expert, but from your example it seems that you aren't running in docker mode. Or is it just an example? Services mode is supported in docker mode only.