Hi @<1524560082761682944:profile|MammothParrot39> , I think you need to run the pipeline at least once (at least the first step should start) for it to "catch" the configs. I suggest you run once with pipe.start_locally(run_pipeline_steps_locally=True)
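A minimal sketch of that first local run, assuming a function-step pipeline; the project, step, and parameter names here are hypothetical:

```python
def step_one(url: str) -> str:
    # First pipeline step: trivially passes the url through for illustration
    return url

def build_pipeline():
    # Imported inside the function so the sketch can be read without a
    # ClearML setup; all names (project, step, parameter) are hypothetical
    from clearml import PipelineController

    pipe = PipelineController(name="demo-pipeline", project="examples", version="1.0.0")
    pipe.add_parameter(name="url", default="https://example.com/data.csv")
    pipe.add_function_step(
        name="step_one",
        function=step_one,
        function_kwargs={"url": "${pipeline.url}"},
        function_return=["data"],
    )
    return pipe

if __name__ == "__main__":
    # One full local run lets the controller "catch" the step configurations
    build_pipeline().start_locally(run_pipeline_steps_locally=True)
```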
Hi @<1636537816684957696:profile|CooperativeGoat65> , you can change the api.files_server section of the configuration file to point to your s3 bucket
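For illustration, a sketch of the relevant section in clearml.conf, with a hypothetical bucket name:

```
api {
    # Point the files server at your own S3 bucket
    # ("my-clearml-bucket" is a hypothetical name)
    files_server: "s3://my-clearml-bucket/artifacts"
}
```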
ExuberantParrot61 , I'm not sure I understand the entire setup. Can you please elaborate?
ReassuredTiger98 , I played with it myself a little bit - it looks like this happens for me when an experiment is running and reporting images, and changing the metric does the trick - i.e. it reproduces the issue. Maybe open a GitHub issue to follow up on this 🙂 ?
Hi @<1734744933908090880:profile|WorriedShells95> , I suggest going through the documentation - None
Hi @<1625303791509180416:profile|ExasperatedGoldfish33> , I would suggest trying pipelines from decorators. This way you can have very easy access to the code.
None
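A short sketch of a pipeline built from decorators, so the step code sits right next to the pipeline logic; the project and step names are hypothetical:

```python
def main():
    # Imported inside main() so the sketch stays readable without ClearML
    # installed; all names below are hypothetical
    from clearml import PipelineDecorator

    @PipelineDecorator.component(return_values=["data"], cache=True)
    def load(url: str):
        # Step code lives right here, so it is easy to read and version
        return url

    @PipelineDecorator.pipeline(name="decorator-demo", project="examples", version="1.0.0")
    def pipeline(url: str = "https://example.com/data.csv"):
        return load(url)

    # Debug everything in the current process before running remotely
    PipelineDecorator.run_locally()
    pipeline()

if __name__ == "__main__":
    main()
```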
Hi @<1673501397007470592:profile|RelievedDuck3> , no you don't. The basics can be run with a docker compose 🙂
I think the 3rd one, let me know what worked for you
Hi @<1570220844972511232:profile|ObnoxiousBluewhale25> , I think the API server can currently delete things only from the files server. However, the SDK certainly has the capability to delete remote files
I think you can set this in code as well - https://clear.ml/docs/latest/docs/references/sdk/task#taskforce_requirements_env_freeze
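A sketch of using it in code, assuming it is called before Task.init(); the project and task names are hypothetical:

```python
def freeze_requirements():
    # Imported here so the sketch is readable without a ClearML setup
    from clearml import Task

    # Must be called before Task.init(); the agent will then install the
    # exact frozen pip environment instead of resolving packages from imports
    Task.force_requirements_env_freeze()

    # Hypothetical project/task names for illustration
    return Task.init(project_name="examples", task_name="frozen-env")
```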
Hi MistakenDragonfly51 , regarding your questions:
ClearML has a model repository built in. You can load an input model using the InputModel module ( https://clear.ml/docs/latest/docs/references/sdk/model_inputmodel ). You can also fetch the models of an experiment using Task.get_models() - https://clear.ml/docs/latest/docs/references/sdk/task#get_models
Can you elaborate on how this config looks in the UI when you view it?
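A sketch of both approaches; the task id and model id below are placeholders:

```python
def fetch_models(task_id: str):
    # Imported here so the sketch is readable without a ClearML setup;
    # task_id and the model id below are placeholders
    from clearml import InputModel, Task

    # Fetch the models attached to an existing experiment
    task = Task.get_task(task_id=task_id)
    models = task.get_models()   # dict-like, with "input" and "output" lists
    input_models = models["input"]

    # Or load a known model from the repository directly by its id
    model = InputModel(model_id="<model-id>")  # placeholder id
    return input_models, model
```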
GiganticTurtle0 , does it pose some sort of problem? What version are you using?
Hi @<1566596960691949568:profile|UpsetWalrus59> , I think this basically means you have an existing model and it's using it as the starting point.
JitteryCoyote63 , reproduces on my side as well 🙂
AbruptWorm50 , can you confirm it works for you as well?
Looks like you're having issues connecting to the server through the SDK. Are you able to access the webUI? Is it a self hosted server?
The project should have a system tag called 'hidden'. If you remove the tag via the API ( None ) that should solve the issue.
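A sketch of removing the tag programmatically, assuming the ClearML APIClient; the project id is a placeholder, and clearing system_tags wholesale is an assumption on my part:

```python
def unhide_project(project_id: str):
    # Imported here so the sketch is readable without a ClearML setup
    from clearml.backend_api.session.client import APIClient

    client = APIClient()
    # Assumption: clearing system_tags drops the 'hidden' tag so the
    # project shows up in the UI again (project_id is a placeholder)
    client.projects.update(project=project_id, system_tags=[])
```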
How was the project turned to hidden?
Are you seeing any errors in the webserver container?
I think this is because you're working on a "local" dataset. Only after finalizing does the dataset close up. Can you describe your scenario and what your expected behavior was?
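A sketch of the dataset lifecycle; the dataset/project names are hypothetical, and the dataset stays local and mutable until finalize():

```python
def build_dataset(folder: str):
    # Imported here so the sketch is readable without a ClearML setup;
    # dataset/project names are hypothetical
    from clearml import Dataset

    ds = Dataset.create(dataset_name="demo-data", dataset_project="examples")
    ds.add_files(path=folder)
    ds.upload()     # push the files to storage
    ds.finalize()   # only now is the dataset closed and immutable
    return ds.id
```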
Hi JitteryCoyote63 , you can get around it using the auto_connect_frameworks parameter in Task.init()
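For illustration, a sketch that disables automatic logging for one framework while leaving the rest enabled; the project/task names are hypothetical:

```python
def init_without_framework_logging():
    # Imported here so the sketch is readable without a ClearML setup;
    # project/task names are hypothetical
    from clearml import Task

    return Task.init(
        project_name="examples",
        task_name="manual-logging",
        # Disable auto-logging for one framework only;
        # pass False instead of a dict to disable all frameworks
        auto_connect_frameworks={"pytorch": False},
    )
```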
Can you please open a GitHub issue to follow up on this issue?
I see, thanks for the input!
What version of clearml / clearml-agent are you using? Are you running in docker mode? Can you add your agent command here?
Just adding this here for easier readability
```
ClearML results page: https:/xxxxt/projects/xxx/experimentsxxx
2022-11-21 11:02:07.590338: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX_VNNI FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-11-21 11:02:07.733169: I tensor...
```
In your ~/clearml.conf you can force the model to upload by setting sdk.development.default_output_uri
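For illustration, a sketch of that setting in ~/clearml.conf, with a hypothetical bucket name:

```
sdk {
    development {
        # Hypothetical bucket; new model snapshots will be uploaded here
        default_output_uri: "s3://my-clearml-bucket/models"
    }
}
```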
Can you access the model in the UI and see the uri there?
VexedCat68 , it looks like it is being saved locally. Are you running all from the same machine?