I have a related question: I read here that 4GB is an HTTP limitation and that ClearML will not chunk single files. I take from that that ClearML hasn't wanted or needed to implement its own solution so far. But what about models that are larger than 4GB?
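Since ClearML will not chunk for you, one manual workaround is to split the file yourself and upload each part as a separate artifact. A sketch, not an official feature; the 1 GiB chunk size and part naming are arbitrary choices:

```python
# Workaround sketch: split a >4GB file into chunks and upload each
# chunk as its own artifact. Chunk size and names are arbitrary.
from pathlib import Path
from clearml import Task

CHUNK_SIZE = 1 * 1024**3  # 1 GiB per chunk

def upload_in_chunks(task: Task, model_path: str):
    data = Path(model_path)
    with data.open("rb") as f:
        index = 0
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            part = Path(f"{data.name}.part{index:03d}")
            part.write_bytes(chunk)
            # upload_artifact with a file path stores the file on the fileserver
            task.upload_artifact(name=part.name, artifact_object=str(part))
            index += 1

task = Task.init(project_name="examples", task_name="chunked upload")
upload_in_chunks(task, "big_model.bin")
```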
Is this really working for you guys? I have no clue what's wrong. It seems so unlikely that my code works with artifacts and datasets, but not with logging...
I see a python3 fileserver.py process running on a single thread with 100% load.
So deleting from the client (e.g. a dataset with clearml-data) does actually work.
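For anyone searching later, the client-side deletion I mean looks like this (the ID is a placeholder, and availability of the delete subcommand depends on your clearml version):

```bash
# Remove a dataset through the clearml-data CLI; <dataset_id> is a placeholder.
clearml-data delete --id <dataset_id>
```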
It shows some logs, but nothing of relevance, I think. Only infos and warnings about deprecated stuff that is still used ;D ...
481.2130692792125 seconds
Done
I have my development machine where I develop for multiple projects. I want to configure ClearML differently based on the project, similar to .vscode, .git, or .idea at the project level.
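One way to approximate per-project behavior is the CLEARML_CONFIG_FILE environment variable, which points the SDK at an alternative config file. A sketch; the paths and file name are examples:

```bash
# Keep a project-local config and point ClearML at it before running;
# the file name and location are arbitrary choices.
cd ~/projects/project-a
export CLEARML_CONFIG_FILE="$PWD/.clearml.conf"
python train.py
```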
No, it is just a pain to find files that have been deleted by a user but are actually not deleted on the fileserver/S3 🙂
But no worries, nothing crucial.
I created this issue today, which can alleviate the pain temporarily: https://github.com/allegroai/clearml-server/issues/133
Thanks a lot. But even as a user, I cannot set a default for all projects, right?
Seems more like a bug or something is not properly configured on my side.
You suggested this fix earlier, but I am not sure why it didn't work then.
There is no way to create an artifact/model/dataset without a task, right? So just always inherit from the parent task, and if the task is cloned, change the user to the one who did the clone.
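For illustration, a minimal sketch of the task-first flow (project and task names are placeholders):

```python
from clearml import Task

# Artifacts always hang off a task, so create or reuse one first.
task = Task.init(project_name="examples", task_name="artifact owner")
task.upload_artifact(name="stats", artifact_object={"accuracy": 0.91})
```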
(Just out of my own interest: how much does the enterprise version diverge from the open-source version? Is it just extended, or are there core changes to the enterprise version?)
Perfect, thank you 🙂
You can add and remove clearml-agents to/from the clearml-server anytime.
Thank you very much, good to know!
Thank you. The reports feature is super cool! Greetings to the team. One of the best features for educational use!
In the WebUI, the loading bar runs for a while and then it just shows that an error happened.
I tried to delete the same tasks again and this time, it instantly confirmed deletion and the tasks are gone.
Thanks, that makes sense. Can you also explain what task_log_buffer_capacity does?
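For reference, a sketch of where the setting sits in clearml.conf; the default value and the interpretation (how many log records are buffered before being flushed to the server) are my assumptions:

```
# clearml.conf (SDK side) -- location per the default config shipped
# with the SDK; the buffering interpretation is an assumption.
sdk {
    log {
        task_log_buffer_capacity: 66
    }
}
```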
At least when you use Docker containers, the agent will reuse the existing Python environment.
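For context, a sketch of starting an agent in docker mode; the queue name and image are examples, not from this thread:

```bash
# Start a worker that executes each queued task inside a docker container;
# the image below is just an example default.
clearml-agent daemon --queue default --docker nvidia/cuda:11.8.0-runtime-ubuntu22.04
```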
For now I can tell you that with conda_freeze: true it fails, but with conda_freeze: false it works!
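For anyone else hitting this, the switch lives in clearml.conf on the agent machine under agent.package_manager; a sketch with the two values compared above:

```
# clearml.conf on the agent machine
agent {
    package_manager {
        type: conda
        conda_freeze: false   # true made the run fail for me; false works
    }
}
```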
Local execution output:
ClearML Task: created new task id=855948f5d73c47e2ae37bb821385e15b
======> WARNING! Git diff too large to store (2190kb), skipping uncommitted changes <======
ClearML results page:
uploading artifact
done uploading artifact
2021-02-05 16:24:56,112 - clearml.Task - INFO - Waiting to finish uploads
2021-02-05 16:24:58,499 - clearml.Task - INFO - Finished uploading
I installed it as instructed on pytorch.org: pip3 install --pre torch torchvision torchaudio --index-url
None
Yeah. Not using the config file does not seem like a good long-term solution to me. However, I still have no idea why this error happens. But enough for today. Thank you a lot for your help!
The default behavior mimics Python's assert statement: validation is on by default, but is disabled if Python is run in optimized mode (via python -O). Validation may be expensive, so you may want to disable it once a model is working.
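This matches the torch.distributions documentation; a minimal sketch of the toggle it describes, assuming that is the API in question:

```python
from torch.distributions import Distribution, Normal

# With validation on (the default), invalid arguments raise immediately.
try:
    Normal(loc=0.0, scale=-1.0)
except ValueError as err:
    print("caught by validation:", err)

# Disable validation globally once the model is working, mirroring
# what running under `python -O` does.
Distribution.set_default_validate_args(False)
Normal(loc=0.0, scale=-1.0)  # constructs without the check now
```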
Is it possible to set extra-index-url on a per-task basis? Just asking because of the way you wrote it with the two dashes 🙂
Hi KindChimpanzee37, I was more asking about the general idea of making these settings task-specific, but thank you for the suggestion anyway, I will definitely apply it.
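For reference, the agent-wide setting (as opposed to a per-task one) sits in clearml.conf; a sketch with a placeholder URL:

```
# clearml.conf on the agent machine; applies to every task this agent
# runs, not per-task. The URL is a placeholder.
agent {
    package_manager {
        extra_index_url: ["https://my.private.index/simple"]
    }
}
```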