The package on my index is called data-service-client, see the log below:

```
Looking in indexes:
WARNING: The repository located at unicorn is not a trusted or secure host and is being ignored. If this repository is available via HTTPS we recommend you use HTTPS instead, otherwise you may silence this warning and allow it anyway with '--trusted-host unicorn'.
ERROR: Could not find a version that satisfies the requirement data-service-client==1.0.0 (from -r /tmp/cached-reqs1...
```
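For reference, a minimal sketch of the configuration the warning itself suggests, assuming the index really is only reachable over plain HTTP. The host name `unicorn` comes from the log above; the `/simple` path is an assumption about the index layout. Note that `trusted-host` disables TLS verification for that host, so HTTPS is preferable whenever it is available.

```ini
# Hypothetical pip.conf sketch -- host taken from the log, /simple path assumed.
[global]
index-url = http://unicorn/simple
trusted-host = unicorn
```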
Same thing SuccessfulKoala55 😞
I am tagging AgitatedDove14 since I sort of need an answer asap...!
Ah, so you're saying I can write a callback for stuff like `train_loss`, `val_loss`, etc. And then you'll hook it
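For context, a minimal sketch of what such a callback could look like. The `report_scalar(title, series, value, iteration)` call matches ClearML's `Logger` API, but the callback class and the stub logger here are hypothetical, so the sketch runs without a ClearML server:

```python
# Hypothetical sketch: a callback that forwards epoch metrics to a logger.
# report_scalar(title, series, value, iteration) mirrors ClearML's Logger API;
# a stub stands in for it so this runs standalone.

class EpochMetricsCallback:
    def __init__(self, logger):
        self.logger = logger

    def on_epoch_end(self, epoch, metrics):
        # one scalar report per metric; the series name separates train vs val
        for name, value in metrics.items():
            self.logger.report_scalar(
                title="loss", series=name, value=value, iteration=epoch
            )


class StubLogger:
    def __init__(self):
        self.reports = []

    def report_scalar(self, title, series, value, iteration):
        self.reports.append((title, series, value, iteration))


logger = StubLogger()
cb = EpochMetricsCallback(logger)
cb.on_epoch_end(0, {"train_loss": 0.9, "val_loss": 1.1})
```

With a real `Task`, you would pass `task.get_logger()` instead of the stub.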
Sure. Removing the `task.connect(args_)` call does not fix my situation
Absolutely, I could try but I'm not sure what it entails...
In particular, I ran the agent with `clearml-agent daemon --queue test-concurrency --create-queue --services-mode 2 --docker "ubuntu:20.04" --detached` and enqueued 4 tasks to it that each sleep 15 minutes.
I can see all 4 tasks running, see
I can do `curl http://localhost:8080`, but it's a remote server, so unless I do X forwarding I can't browse it
I expected to see 2 tasks running, and then when those completed the remaining 2 could start. Is this not the expected behavior?
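As an illustration of that expectation (not of the agent's internals), a semaphore of size 2 produces exactly the "2 running, 2 waiting" pattern described: with 4 submitted tasks, at most 2 execute at once and the rest only start as slots free up.

```python
import threading
import time

# Illustration only: the expected "--services-mode 2" behavior,
# modeled as a concurrency limit of 2 over 4 queued tasks.
limit = threading.Semaphore(2)
lock = threading.Lock()
running = 0
peak = 0


def task(seconds):
    global running, peak
    with limit:  # blocks while two tasks are already running
        with lock:
            running += 1
            peak = max(peak, running)
        time.sleep(seconds)
        with lock:
            running -= 1


threads = [threading.Thread(target=task, args=(0.05,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("peak concurrency:", peak)
```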
Hi SuccessfulKoala55 I am having some issues with this. I have put a concurrency limit of 2 and I can see 3 workers running
Is this a possible future feature? I have used cometML before and they have this. I'm not sure how they do it though...
I understand! this is my sysadmin message:
"if nothing else, they could publish a new elasticsearch image of 7.6.2 (ex. 7.6.2-1) which uses a newer patched version of JDK (1.13.x but newer than 1.13.0_2)"
Hey AgitatedDove14 , did you get a chance to look at this?
"this means the elasticsearch feature set remains the same. and JDK versions are usually drop-in replacements when on the same feature level (ex. 1.13.0_2 can be replaced by 1.13.2)"
Yeah, that's fair enough. Is it possible to assign CPU cores? I wasn't aware
@<1523701087100473344:profile|SuccessfulKoala55> hey Jake, how do I check how many envs it caches? Doing `ls -la .clearml/venvs-cache` gives me two folders
```python
logger.report_media(
    title=name_title,
    series="Nan",
    iteration=0,
    local_path=fig_nan,
    delete_after_upload=delete_after_upload,
)
clearml_task.upload_artifact(
    name=name_title,
    artifact_object=fig_nan,
    wait_on_upload=True,
)
```
so it tries to find it under `/usr/bin/python/`, I assume?
When an agent launches a task, it builds a venv, copies the code, runs it, etc. In my case, the code writes files (such as downloaded data, model files, etc.) into subfolders, and I'm interested in recovering the entire folder structure.
This is because if I run a different task, everything from the previous task is overwritten.
Furthermore, I need the folder structure for other things downstream.
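If it helps, `Task.upload_artifact` also accepts a folder path, in which case the directory is packed into a single zip artifact, which preserves the internal layout. A self-contained sketch of that pack/unpack round trip (all file and folder names here are made up for illustration):

```python
import shutil
import tempfile
from pathlib import Path

# Illustration only: preserving a task's output folder structure by archiving
# the whole tree, similar to what uploading a folder as one artifact does.
root = Path(tempfile.mkdtemp())
outputs = root / "outputs"
(outputs / "data").mkdir(parents=True)
(outputs / "models").mkdir(parents=True)
(outputs / "data" / "train.csv").write_text("a,b\n")
(outputs / "models" / "model.bin").write_text("weights")

# pack the whole tree into one archive...
archive = shutil.make_archive(str(root / "outputs-artifact"), "zip", outputs)

# ...and extracting it restores the exact subfolder layout
restore = root / "restored"
shutil.unpack_archive(archive, str(restore))
```

Downstream code can then unpack the artifact once and work with the original subfolder paths unchanged.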
Can you elaborate a bit on the token side? I'm not sure exactly what would be a bad practice here