Hi ConvolutedSealion94, can you please elaborate on what exactly you're trying to do? Also, I'm not sure Preprocess is part of ClearML
Hi @<1529633475710160896:profile|ThickChicken87> , I would suggest opening the developer tools (F12) and observing which API calls go out when browsing the experiment object. This way you can replicate the API calls to pull all the relevant data. I'd suggest reading more here - None
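For example, a minimal sketch using the APIClient wrapper (the task ID is a placeholder, and I'm assuming the get_all filter by id here):
```python
from clearml.backend_api.session.client import APIClient

# Replicate the same backend calls the UI makes; the task ID is a placeholder
client = APIClient()
tasks = client.tasks.get_all(id=["<your-task-id>"])
for t in tasks:
    print(t.id, t.name, t.status)
```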
WackyRabbit7 Hey, sorry for the delay 🙂 Hopefully I'll have an answer in a couple of hours
You mean that you have 30 jobs, each in a separate queue, and you'd like to move all of them to top priority in each queue?
Hi @<1523701601770934272:profile|GiganticMole91> , I think for binaries and not just the model files themselves you would need to do a bit of tweaking
That's an interesting question. I think it's possible. Let me check 🙂
Is it hosted by you or is it app.clear.ml?
If you're running on a Windows machine, syntax such as export won't work. I'd suggest checking how to manipulate environment variables in Windows
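For example, in cmd the equivalent is set VAR=value and in PowerShell it's $env:VAR = "value", or you can just set it from Python before ClearML reads it (the path below is a placeholder):
```python
import os

# Cross-platform alternative to `export`: set the variable from Python
# before the library reads it
os.environ["CLEARML_CONFIG_FILE"] = r"C:\Users\me\clearml.conf"  # placeholder path
```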
Can you maybe provide a snippet I can play with?
Also, what GPUs are you running on that machine?
I have tried
task.upload_artifact('/text/temp', 'temp.txt')
but it's not working (I can access the task, but as soon as I click the Artifacts tab, it shows a 404 error).
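By the way, note that upload_artifact takes the artifact name first and the object/file path second, so something along these lines might be closer to what you want (paths are placeholders):
```python
from clearml import Task

# upload_artifact: name first, then the object or local file path to upload
task = Task.init(project_name="debug", task_name="artifact-test")
task.upload_artifact(name="temp", artifact_object="/text/temp/temp.txt")
task.flush(wait_for_uploads=True)  # make sure the upload actually finishes
```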
Can you please elaborate on this and share a screenshot?
My guess is that the other agents are sitting on different machines. Did you verify that the credentials are the same between the different clearml.conf files? Maybe @<1523701087100473344:profile|SuccessfulKoala55> might have an idea
Do any of these API calls have a "Dataset Content" field anywhere in the "configuration" section?
Hi @<1787653555927126016:profile|SoggyDuck67> , can you please provide the full log of the run? Also, can you please add a screenshot of the 'execution' tab of the experiment? I assume the original experiment was run on Python 3.10?
Just to make sure, does Backblaze support the boto3 SDK?
I doubt that would be possible because it looks like the autoscaler versions are global
As a quick workaround, you can launch the open source autoscaler until the no-docker capability is available again.
Hi @<1702130048917573632:profile|BlushingHedgehong95> , I would suggest the following few tests:
- Run some mock task that uploads an artifact to the files server (see the sketch after this list). Once done, verify you can download the artifact via the web UI - there should be a link to it. Save that link, then delete the task and mark it to delete all artifacts. Test the link again to verify it no longer works, i.e. the artifact was actually deleted
- Please repeat the same with a dataset
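A minimal sketch of the first test, assuming the artifact link is available once the upload completes:
```python
from clearml import Task

# Upload a small artifact and print its files-server link for the deletion test
task = Task.init(project_name="debug", task_name="files-server-delete-test")
task.upload_artifact(name="sample", artifact_object={"hello": "world"})
task.flush(wait_for_uploads=True)
print(task.artifacts["sample"].url)  # save this link before deleting the task
task.close()
```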
Hi GrittyCormorant73 , can you please provide a standalone snippet with instructions to play with?
Hi @<1845635622748819456:profile|PetiteBat98> , metrics/scalars/console logs are not stored on the files server; they are all stored in Elastic/Mongo, so the files server is not required. Setting default_output_uri will point all artifacts to your Azure blob storage
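For example, something like this, where the storage account and container names are placeholders:
```python
from clearml import Task

# Per-task equivalent of setting this in clearml.conf:
#   sdk.development.default_output_uri: "azure://<account>.blob.core.windows.net/<container>"
task = Task.init(
    project_name="examples",
    task_name="azure-output",
    output_uri="azure://<account>.blob.core.windows.net/<container>",  # placeholder
)
```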
Hi, I think this is the default behavior, but I think you can probably edit the source code (the output_uri parameter of Task.init would be a good lead).
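Something along these lines, with a placeholder bucket:
```python
from clearml import Task

# output_uri overrides the destination for this task's models/artifacts;
# output_uri=True would use the configured default instead
task = Task.init(
    project_name="examples",
    task_name="custom-destination",
    output_uri="s3://my-bucket/models",  # placeholder bucket
)
```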
In what format would you like it saved?
Hi ScantChimpanzee51, I think you can get it via the API; it sits on task.data.output.destination. Retrieve the task object via the API and play with it a bit to see where this sits 🙂
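For example (the task ID is a placeholder):
```python
from clearml import Task

# Fetch the task and check where its output is routed
task = Task.get_task(task_id="<your-task-id>")
print(task.data.output.destination)
```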
AbruptWorm50, I also see the HPO app is missing; I'm told this is under investigation.
You should set it on the machine running the agent
AbruptWorm50, can you try deleting your cookies/site data in your browser to see if you manage to load the debug samples? I think this might be related: https://github.com/allegroai/clearml/issues/637
AbruptWorm50, you can send it to me. Also, can you please answer the following two questions: When were they registered? Were you able to view them before?
Also, you mention plots but in the screenshot you show debug samples. Can I assume you're talking about debug samples?