Again, I'm telling you, please look at the documentation and what it says specifically about MinIO-like solutions.
The host should be host: "our-host.com:<PORT>"
and NOT host: "s3.our-host.com"
Maybe you don't require a port (I don't know your setup), but as I said, you need to remove the s3. prefix from the host setting, as that is reserved for AWS S3 only.
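For reference, a minimal sketch of the relevant clearml.conf section, assuming MinIO listening on port 9000 and placeholder credentials:
```
sdk {
    aws {
        s3 {
            credentials: [
                {
                    # MinIO-style endpoint: plain host (and port), no s3. prefix
                    host: "our-host.com:9000"
                    key: "<ACCESS_KEY>"
                    secret: "<SECRET_KEY>"
                    multipart: false
                    secure: false  # set to true if the endpoint is served over https
                }
            ]
        }
    }
}
```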
You can also use Task.force_requirements_env_freeze to freeze an exact copy of your environment.
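A minimal sketch, assuming you call it before Task.init so the frozen requirements are attached to the task (project/task names are placeholders):
```python
from clearml import Task

# Freeze the exact pip environment instead of letting ClearML analyze imports;
# must be called before Task.init()
Task.force_requirements_env_freeze(force=True)

task = Task.init(project_name="examples", task_name="frozen-env")
```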
Hi @<1625303806923247616:profile|ItchyCow80> , can you please describe how you're running it? Is it inside a Jupyter notebook? Do you have a code sample?
What do you mean by a drop of many GB? Can you please elaborate on what happens exactly?
I know that Elasticsearch can sometimes suffer disk corruption and requires regular backups.
I think the issue is that the message isn't informative enough. I would suggest opening a GitHub issue requesting a better message. Regarding confirming - I'm not sure, but this is the default behavior of Optuna. You can run a random or grid search optimization instead, and then you won't see those messages.
What do you think?
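If it helps, a minimal sketch of switching strategies with the optimizer, assuming a placeholder base task ID and a General/lr hyperparameter:
```python
from clearml.automation import (
    DiscreteParameterRange,
    HyperParameterOptimizer,
    RandomSearch,  # swap in GridSearch for a grid search instead
)

optimizer = HyperParameterOptimizer(
    base_task_id="<base_task_id>",  # placeholder: the task to clone and optimize
    hyper_parameters=[
        DiscreteParameterRange("General/lr", values=[0.001, 0.01, 0.1]),
    ],
    objective_metric_title="validation",
    objective_metric_series="loss",
    objective_metric_sign="min",
    optimizer_class=RandomSearch,
    max_number_of_concurrent_tasks=2,
)
optimizer.start_locally()
optimizer.wait()
optimizer.stop()
```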
We all do eventually 😛
I'll try to see if it reproduces on my side 🙂
Hi TrickySheep9 , can you be a bit more specific?
I usually use: https://clear.ml/docs/latest/docs/references/api/index
Also, it's quite useful to use the UI as a reference. You can hit F12 in the browser and see all the API calls; I use that to figure out the call structure.
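For programmatic access, a minimal sketch using the APIClient (the project ID is a placeholder):
```python
from clearml.backend_api.session.client import APIClient

# Uses the credentials from ~/clearml.conf
client = APIClient()

# Example: list the five most recently updated tasks in a project
tasks = client.tasks.get_all(
    project=["<project_id>"],
    order_by=["-last_update"],
    page_size=5,
)
for t in tasks:
    print(t.id, t.name, t.status)
```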
Sounds like some issue with queueing the experiment. Can you provide a log of the pipeline?
MuddySquid7 , I couldn't reproduce case 4.
In all cases it didn't detect sklearn.
Did you put anything inside __init__.py ?
Can you please zip up the folder from scenario 4 and post it here?
OddShrimp85 , Hi 🙂
I'm afraid that the only way to load the contents of setup A into setup B is to perform a data merge.
This process basically requires merging the databases (MongoDB, Elasticsearch, files, etc.). I think it's something that can be done in the paid version as a service, but not in the open one.
If you want to access them as artifacts via code (or via the UI) you'll have to register them via code and retrieve them the same way.
Use the following:
https://clear.ml/docs/latest/docs/references/sdk/task#register_artifact
https://clear.ml/docs/latest/docs/references/sdk/task#get_registered_artifacts
Also please note the difference between reporting those tables as data via the logger and as artifacts, since the logger saves things as events (plots, scalars, debug samples).
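A minimal sketch of the register/retrieve flow, assuming a pandas DataFrame and placeholder names:
```python
import pandas as pd
from clearml import Task

task = Task.init(project_name="examples", task_name="artifact-registration")

# Register a DataFrame; ClearML monitors it and re-uploads it when it changes
df = pd.DataFrame({"epoch": [1, 2], "loss": [0.9, 0.7]})
task.register_artifact(name="results", artifact=df)

# Later in the same run, fetch the registered artifact back
registered = task.get_registered_artifacts()["results"]
print(registered)
```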
StickyCoyote36 , I'm looking into a solution.
Please hold on 🙂
Hi @<1673501397007470592:profile|RelievedDuck3> , there is some discussion of it in this video None
Hi @<1600661428556009472:profile|HighCoyote66> , ClearML Agent will try to find the Python version dynamically and then revert to the most basic one it can find. My suggestion is to run everything in docker mode ( --docker ) so the Python version can be set by the docker image.
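For example, a sketch of launching the agent in docker mode (the queue name and image are placeholders):
```
clearml-agent daemon --queue default --docker python:3.10
```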
Hi @<1562610703553007616:profile|CloudyCat50> , can you provide some code examples?
Hi @<1523701491863392256:profile|VastShells9> , the GCP autoscaler is not available in the open source version I'm afraid, only in PRO licenses and up.
Regarding the GitHub issue - can you send the docker-compose you used for 1.9 that works and for 1.10 that doesn't work for you?
Can you check the machine status? Is the storage running low?
Hi MoodySheep3 ,
Can you please provide screenshots from the experiment - what the configuration looks like?
Hi @<1673501397007470592:profile|RelievedDuck3> , no you don't. The basics can be run with a docker compose 🙂
Hi, I know that this is a known issue and is supposed to have a hotfix coming really soon.
Regarding your question, this is what I found - None
Hi @<1523702786867335168:profile|AdventurousButterfly15> , you need to create a new dataset and specify the previous one as a parent 🙂
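A minimal sketch, assuming placeholder project/dataset names:
```python
from clearml import Dataset

# Look up the previous dataset version
parent = Dataset.get(dataset_project="examples", dataset_name="my-dataset")

# Create the new version with the previous one as its parent
child = Dataset.create(
    dataset_project="examples",
    dataset_name="my-dataset",
    parent_datasets=[parent.id],
)
child.add_files(path="new_data/")  # add only the new/changed files
child.upload()
child.finalize()
```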
Hi SarcasticSquirrel56 ,
How are the agents running? On top of K8s or bare metal?
Also, can you do a diff between the ~/clearml.conf of your local machine and the one on the agent?
Hi NervousFrog58 , version 1.1.1 seems to be quite old. I would suggest upgrading your server. Please note that since then there have been a couple of DB migrations, so make sure to follow all the steps 🙂
Hi @<1649946171692552192:profile|EnchantingDolphin84> , it's not a must but it would be the suggested approach 🙂