Hi @<1743079861380976640:profile|HighKitten20> , can you please provide the log of the job itself in ClearML (Console section of the experiment)?
Hi BoredBluewhale23 ,
How did you configure the apiserver when you set up the EKS K8s cluster?
From my understanding ClearML uses the Apache-2.0 license, so it depends on whether that covers your use case or not
Hi @<1751777178984386560:profile|ConfusedGoat3> , I think you might need to run a migration script on the database, basically updating the artifact paths registered there so they point to the new IP
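Something like this rough sketch (untested - the database/collection/field names and the list-shaped artifacts structure below are assumptions based on a default deployment and may differ between server versions, so verify them on your own setup and back up MongoDB first):
```
# Untested sketch: rewrite artifact URIs from the old fileserver address to the new one.
# The db/collection/field names ("backend", "task", execution.artifacts[].uri) are
# assumptions - check your deployment and back up MongoDB before running anything.
from pymongo import MongoClient

OLD_HOST = "http://10.0.0.1:8081"   # hypothetical old fileserver address
NEW_HOST = "http://10.0.0.2:8081"   # hypothetical new fileserver address

client = MongoClient("mongodb://localhost:27017")
tasks = client["backend"]["task"]

for task_doc in tasks.find({"execution.artifacts.uri": {"$regex": OLD_HOST}}):
    artifacts = task_doc.get("execution", {}).get("artifacts", [])
    for artifact in artifacts:
        uri = artifact.get("uri", "")
        if uri.startswith(OLD_HOST):
            artifact["uri"] = uri.replace(OLD_HOST, NEW_HOST, 1)
    tasks.update_one(
        {"_id": task_doc["_id"]},
        {"$set": {"execution.artifacts": artifacts}},
    )
```
Depending on what you registered you may need the same treatment for model entries as well, so treat this only as a starting point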
YummyLion54 hi!
Are you referring to the PARAMETERS section or to the CONFIGURATION OBJECTS?
I think you would need to report this manually. You can extract all of the data using the API and then create custom plots/scalars that you can push into reports for custom dashboards 🙂
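For example, something along these lines (a minimal sketch - the task id and the project/task names are placeholders):
```
# Pull the reported scalars from an existing experiment and push a summary
# of them into a separate "dashboard" task.
from clearml import Task

SOURCE_TASK_ID = "<source-task-id>"  # placeholder

source = Task.get_task(task_id=SOURCE_TASK_ID)
scalars = source.get_reported_scalars()  # {title: {series: {"x": [...], "y": [...]}}}

dashboard = Task.init(project_name="Dashboards", task_name="custom-dashboard")
logger = dashboard.get_logger()

for title, series_map in scalars.items():
    for series, points in series_map.items():
        if points.get("y"):
            # report only the last value of every series as a custom scalar
            logger.report_scalar(
                title="summary/" + title,
                series=series,
                value=points["y"][-1],
                iteration=0,
            )
```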
Hi @<1539417873305309184:profile|DangerousMole43> , I'm afraid this is not configurable currently. What is your use case?
Can you paste the output up to the point where it gets stuck? Sounds very strange. Does it work when it's not enqueued? Also, what versions of clearml-agent & server are you on?
Hi @<1715900788393381888:profile|BitingSpider17> , I think this is what you're looking for - None
VictoriousPenguin97 , can you please try with the latest version? 1.1.3 🙂
Discussion moved to internal channels
At 1 call per second for 12 hours you'll get to numbers close to that. I think you could try increasing the flush threshold - None
What about the network? Does something return a 400 or something of the sort?
If you want to sometimes run with docker and sometimes without, yes.
Hi @<1593413673383104512:profile|MiniatureDragonfly17> , no. The assumption is that serving runs on a dedicated machine. Of course, you can edit the docker-compose file to use different ports
Hi StraightParrot3 , as SuccessfulKoala55 suggested you could maybe use tags for this as well.
In regards to creating views - if you predefine a certain view locally in your browser (with the extra column), I think you can just copy-paste the URL and it should include the custom column for anyone using that URL
You can use the CLEARML_LOG_LEVEL env variable for this - None
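For example (you can also just export it in the shell before running the script):
```
import os

# set it before importing clearml so it is picked up when the SDK initializes its logging
os.environ["CLEARML_LOG_LEVEL"] = "DEBUG"

from clearml import Task

task = Task.init(project_name="examples", task_name="verbose-logging")
```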
I don't think there is any out of the box method for this. You can extract everything using the API from one workspace and repopulate it in another workspace also using the APIs.
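As a rough sketch of the extraction side (assuming your clearml.conf / CLEARML_API_* credentials point at the source workspace - re-creating things in the target workspace would be the same client pointed at the other workspace's credentials):
```
from clearml.backend_api.session.client import APIClient

client = APIClient()

# list every project in the workspace the credentials point at
projects = client.projects.get_all()
for project in projects:
    # and every task under each project
    project_tasks = client.tasks.get_all(project=[project.id])
    print(project.name, "->", len(project_tasks), "tasks")
```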
Can you add the full log & the dependencies detected in original code? How are you building the pipeline?
Hi @<1547028074090991616:profile|ShaggySwan64> , can you please provide minimal sample code that reproduces this? The local imports - are they from the private repo?
Did this happen suddenly or with some version upgrade?
Hmmmmm do you have a specific use case in mind? I think pipelines are created only through the SDK, but I might be wrong
VexedCat68 , what if you simply add pip.stop()? Does it not stop the pipeline? Can you maybe add a print to verify that during the run the value is indeed -1? Also, looking at your code it seems you're comparing the 'merged_dataset_id' to -1
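Something like this is what I had in mind (just a sketch - `pipe` stands in for your PipelineController instance and `merged_dataset_id` for the value returned by the previous step):
```
from clearml import PipelineController

# stand-ins for the objects in your pipeline code
pipe = PipelineController(name="my-pipeline", project="examples", version="1.0")
# ... pipeline steps added here ...

merged_dataset_id = -1  # placeholder for the value returned by the previous step

# print the actual value at runtime so you can verify it really is -1
print("merged_dataset_id =", repr(merged_dataset_id))
if merged_dataset_id == -1:
    pipe.stop()  # stop the pipeline controller instead of continuing
```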
Hi JumpyDragonfly13 , can you try going to http://localhost:8080/login ? What happens when you open developer tools (F12) while browsing currently?
Looks decent, give it a try and update us if it's working 🙂
Hi VastShells9 , can you add the full log of the execution?
Hi @<1544853721739956224:profile|QuizzicalFox36> , currently there is no SDK option for this, however you can automate this using the API. I suggest opening developer tools (F12) to see what the UI sends when creating/editing reports and that way you can automate it
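For example, something like this (illustrative only - the service/action name and the request fields are assumptions, so copy the actual endpoint and payload you see the UI send in the Network tab and mirror those):
```
# Illustrative sketch only: the service/action ("reports" / "create") and the
# request fields are assumptions - take the real endpoint and payload from the
# request the UI sends (developer tools -> Network tab) and mirror it here.
from clearml.backend_api.session.client import APIClient

client = APIClient()
response = client.session.send_request(
    service="reports",
    action="create",
    method="post",
    json={"name": "my-automated-report", "project": "<project-id>"},
)
print(response.status_code, response.text)
```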
SarcasticSparrow10 , it seems you are right. At which point in the instructions are you getting errors - from which step to which?
Did you check permissions?