Hi @<1657556312684236800:profile|ManiacalSeaturtle63> , can you please elaborate specifically on the actions you took? Step by step
You need to also spin up the ClearML server...
None
CrookedWalrus33, Hi 🙂
Can you please provide which packages are missing?
ContemplativeGoat37, I'm curious about your implementation. Which approach worked better for you: changing the config file or using an environment variable? Or are both useful?
Hi @<1595225628804648960:profile|TroubledLion34> , I'm afraid you can't upload via the API, since the uploading is done by the SDK/CLI. However, you can upload the files from your Java application and then register the dataset via the API
Does that make sense?
Hi @<1523702786867335168:profile|AdventurousButterfly15> , are the models logged in the artifacts section?
Hey ItchySeahorse94 , I think this might be what you're looking for 🙂
https://github.com/allegroai/clearml-serving
Hi ShakyOstrich31 ,
Can you verify that you did push the updated code into your repository?
From my understanding, ClearML uses the Apache-2.0 license, so it depends on whether that covers it or not
Hi @<1529271098653282304:profile|WorriedRabbit94> , are you still unable to log in? This is on app.clear.ml, right?
Regarding the GitHub issue: can you send the docker-compose file you used for 1.9 (which works) and the one for 1.10 (which doesn't)?
CostlyFox64, Hi 🙂
All 3 databases are requirements for the backend. Redis is used for caching, so it's fairly lightly loaded and doesn't need many resources. Mongo stores artifacts, system info, and some metadata. Elastic stores events and logs; this one might require more resources depending on your usage.
What is the scope of your usage?
I don't think so. However you can use the API as well 🙂
And when you run it again under exactly the same circumstances it works fine?
UnevenDolphin73 , can you provide a small snippet of exactly what you were running? Are you certain you can see the task in the UI? Is it archived?
Also, can you please specify all the versions of agent/sdk/backend you're using?
ThankfulHedgehong21, server 1.6.0 is available. Can you try with it as well?
@<1541954607595393024:profile|BattyCrocodile47> , that is indeed the suggested method, although make sure that the server is down while doing this
Hi AbruptWorm50 ,
You can use a stand alone file, this way the file will be saved to the backend and used every time without needing to clone the repo. What do you think?
What version of clearml are you using? Can you try in a clean python virtual env?
Maybe @<1523701087100473344:profile|SuccessfulKoala55> has more insight into this 🙂
SarcasticSquirrel56, you're right. I think you can use the following setting in ~/clearml.conf: sdk.development.default_output_uri: <S3_BUCKET>. Tell me if that works
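For reference, here's a minimal sketch of how that setting would look inside ~/clearml.conf (HOCON format; the bucket name is a placeholder):

```
# ~/clearml.conf - route task outputs (models, artifacts) to your bucket
sdk {
    development {
        default_output_uri: "s3://<S3_BUCKET>"
    }
}
```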
Yes. Run all the pipelines examples and see how the parameters are added via code to the controller.
For example:
None
I'm not versed in the pricing 😛
Runs perfectly with Minio too 🙂
Hi @<1523701868901961728:profile|ReassuredTiger98> , how are you currently uploading? You can use the max_workers parameter to use multiple threads
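Just to illustrate what max_workers does: it caps the number of parallel upload threads, same idea as in Python's concurrent.futures. A toy sketch with the standard library (the upload function and chunk names here are placeholders, not ClearML APIs):

```python
from concurrent.futures import ThreadPoolExecutor

def upload_chunk(name):
    # placeholder for a real chunk upload call
    return f"uploaded {name}"

chunks = [f"chunk-{i}" for i in range(4)]

# max_workers=2 means at most 2 chunks are uploaded concurrently
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(upload_chunk, chunks))
```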
Hi @<1548839979558375424:profile|DelightfulFrog42> , you can use tasks.set_requirements to provide specific packages or a requirements.txt:
None
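A rough sketch of the request body you'd send to that endpoint. This is an assumption about the schema (the task id is hypothetical), so please double-check it against the ClearML API reference:

```python
import json

# Hypothetical payload for tasks.set_requirements:
# "pip" holds the requirements in requirements.txt format
payload = {
    "task": "abc123",  # hypothetical task id
    "requirements": {"pip": "pandas==2.0.0\nrequests>=2.28"},
}
body = json.dumps(payload)
```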
Hi @<1523701977094033408:profile|FriendlyElk26> , let's say you have a table, which you report. How would you suggest comparing between two tables?
Hi @<1753589101044436992:profile|ThankfulSeaturtle1> , you can use the same credentials for different notebooks. What are you trying to do?