CloudyHamster42
RC probably in a few days, but note that it will only remove the warnings; I still can't reproduce the double axis issue.
It would be helpful if you could send a small script that reproduces the problem.
Maybe this example code can help ? https://github.com/allegroai/trains/blob/master/examples/manual_reporting.py
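Something along these lines would be enough (a minimal sketch; project/task names and the metric values are placeholders):
from clearml import Task

task = Task.init(project_name="debug", task_name="double axis repro")  # placeholder names
logger = task.get_logger()
for i in range(10):
    # a single scalar series; if the double axis shows up here, this is enough to reproduce it
    logger.report_scalar(title="metric", series="series_a", value=i * 0.1, iteration=i)
task.close()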
VexedCat68
But what's happening is that I only publish a dataset once, but every time it polls,
this seems wrong (i.e. a bug?!). How do you set up the trigger? Is the Trigger Task constantly running, or are you re-launching it?
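For reference, this is roughly the trigger setup I'd expect (a sketch; the task id, queue, project and trigger names are placeholders):
from clearml.automation import TriggerScheduler

trigger = TriggerScheduler(pooling_frequency_minutes=3.0)
trigger.add_dataset_trigger(
    name="on-dataset-publish",             # placeholder trigger name
    schedule_task_id="<task_to_launch>",   # placeholder: task cloned + enqueued when the trigger fires
    schedule_queue="default",
    trigger_project="my_datasets",         # placeholder: only watch datasets in this project
    trigger_on_publish=True,               # fire only on publish, so one publish == one trigger
)
trigger.start()  # keeps polling; it should fire once per published dataset version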
It will automatically switch to docker mode
Yes, experiments are standalone as they do not have to have any connecting thread.
When would you say a new "run" vs a new "experiment" ? when you change a parameter ? change data ? change code ?
If you want to "bucket them" use projects 🙂 it is probably the easiest now that we have support for nested projects.
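i.e. something like this (a sketch; project/task names are placeholders):
from clearml import Task

# "/" in the project name creates a nested (sub) project, which acts as the bucket
task = Task.init(project_name="research/experiment_group_a", task_name="run_001")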
SmarmySeaurchin8 it could be a switch, the problem is that when you have automatic stopping flows, they will abort a task, which is legitimate (i.e. it should not be considered failed)
How come you have aborted tasks in the pipeline? If you want to abort the pipeline, you need to first abort the pipeline Task and then the tasks themselves.
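Roughly like this (a sketch; the controller id is a placeholder, and it assumes force-marking the tasks as stopped is acceptable for your flow):
from clearml import Task

pipeline = Task.get_task(task_id="<pipeline_controller_id>")  # placeholder id
pipeline.mark_stopped(force=True)  # abort the pipeline controller first
# then abort the step tasks the controller spawned
for step in Task.get_tasks(task_filter={"parent": pipeline.id}):
    step.mark_stopped(force=True)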
Make sure you follow all the steps:
https://clear.ml/docs/latest/docs/deploying_clearml/upgrade_server_linux_mac
(basically make sure you get the latest docker-compose.yml and then pull it)
curl -o /opt/clearml/docker-compose.yml https://raw.githubusercontent.com/allegroai/clearml-server/master/docker/docker-compose.yml
docker-compose -f /opt/clearml/docker-compose.yml pull
docker-compose -f /opt/clearml/docker-compose.yml up -d
Hi UnsightlySeagull42
does anyone know how this works with git ssh credentials?
These will be taken from the host ~/.ssh folder
oh that makes sense.
I would add to your Task's docker startup script the following:
ls -la /.ssh
ls -la ~/.ssh
cat ~/.ssh/id_rsa
Let's see what you get
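If you prefer setting it from code, this is roughly what I mean (a sketch; the docker image name is a placeholder, and it assumes your clearml version supports docker_setup_bash_script):
from clearml import Task

task = Task.init(project_name="debug", task_name="ssh check")  # placeholder names
task.set_base_docker(
    "nvidia/cuda:11.8.0-runtime-ubuntu22.04",  # placeholder image
    docker_setup_bash_script=[
        # these just echo the mounted SSH credentials into the task log for debugging
        "ls -la /.ssh",
        "ls -la ~/.ssh",
        "cat ~/.ssh/id_rsa",
    ],
)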
Is there any way to see datasets uploaded to ClearML Data without downloading them using ClearML Data?
Hi VexedCat68
Currently when you create datasets with clearml-data it has to repackage your files, i.e. upload them. That said we have received numerous requests on "registering data", and we are looking into it.
Here are the main technical hurdles we are facing, and I would love to get your perspective:
If the data is not available locally, we cannot calculate the hash of the conten...
(We should probably better state it in the GitHub readme)
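BTW, on the original question: to just see what is inside a dataset version without downloading it, something like this should work (the dataset id is a placeholder):
from clearml import Dataset

ds = Dataset.get(dataset_id="<dataset_id>")  # placeholder id
print(ds.list_files())        # relative paths of every file registered in this version
print(ds.list_added_files())  # only the files added on top of the parent version(s)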
Thanks! Let me check if we can reproduce it. BTW what's your clearml package version?
AdorableFrog70 taking another look at the MLFlow exporter, it would not be complicated to convert it into an MLFlow-to-ClearML exporter, which would also be cool
😞 It's working as expected for me...
That said, I tested on Linux & pip.
Any specific requirements to test with? From the log I see this is conda on Windows; are you using the base conda env or a venv inside conda?
IdealPanda97 Hmm I see...
Well, unfortunately, Trains is all about free access to all 🙂
That said, the Enterprise edition does add permissions and data management on top of Trains. You can get in touch through https://allegro.ai/enterprise/#contact , I'm sure someone will get back to you soon.
What's the OS / Python version?
No, clearml uses boto; this is an internal boto error, which points to a bucket size limit, see the error itself
CheerfulGorilla72 as I understand there were some delays with the current release, so it is going to be out this week. The one after that includes this feature and as far as I understand would be mid Dec.
ohh, could it be a 32bit version of python ?
Hmmm, are you running inside pycharm, or similar ?
Wait, how did you end up with
clearml_task_id = os.environ['CLEARML_TASK_ID']
printing "01b77a220869442d80af42efce82c617" ?
This means you are running under an agent?!
I guess we should have obfuscated the name better 😄
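For context, the agent exports CLEARML_TASK_ID for the process it launches, so a defensive read would look something like this (project/task names are placeholders):
import os
from clearml import Task

task_id = os.environ.get("CLEARML_TASK_ID")
if task_id:
    task = Task.get_task(task_id=task_id)  # running under a clearml-agent
else:
    task = Task.init(project_name="debug", task_name="local run")  # plain local run, placeholder names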
@<1734020162731905024:profile|RattyBluewhale45> could you attach the full Task log? Also what do you have under "installed packages" in the original manual execution that works for you?
I want to download an exact folder/batch of the dataset to my local machine to check the data out, without downloading the whole dataset.
TeenyBeetle18 the closest you can get is to download only one part of the dataset, if this is a multi-part dataset (i.e. the dataset version is larger than the default 500MB, so you have multiple zip files, and you just want to download one of them, not all of them).
This can actually be achieved with:
Dataset.get_local_copy(..., part=0)
https://githu...
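i.e. roughly (a sketch; the dataset id is a placeholder):
from clearml import Dataset

ds = Dataset.get(dataset_id="<multi_part_dataset_id>")  # placeholder id
local_folder = ds.get_local_copy(part=0)  # download only the first zip part, not the whole dataset
print(local_folder)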
Hi ColossalDeer61 ,
Xxx is the module where my main experiment script resides.
So I think there are two options,
Assuming you have a similar folder structure:
-main_folder
--package_folder
--script_folder
---script.py
Then if you set the "working directory" in the execution section to "." and the entry point to "script_folder/script.py", your code could do:
from package_folder import ABC
2. After cloning the original experiment, you can edit the "installed packages", and ad...
Ok, so it doesn't follow the exact same rules as Task.init ?
Correct
I was afraid all the logs and outputs of a hyperparameter optimization task would be deleted just because no artifacts were created.
Should not happen 🙂
SolidSealion72 EcstaticGoat95 I'm hoping the issue is now resolved 🤞
can you verify with?
pip install git+