I think this is because you're working on a "local" dataset. The dataset only closes up after you finalize it. Can you describe your scenario and what your expected behavior was?
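For example - a minimal sketch with placeholder project/file names:
```python
from clearml import Dataset

# Create a local dataset, add files and upload them
ds = Dataset.create(dataset_project="examples", dataset_name="demo")
ds.add_files(path="data/")
ds.upload()

# Only after finalize() does the dataset close up (become immutable)
ds.finalize()
```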
Hi @<1706116294329241600:profile|MinuteMouse44> , is there any worker listening to the queue?
DeliciousBluewhale87 , Hi 🙂
You mean you created a dataset task on a certain server and you want to move that dataset task to another server?
CurvedHedgehog15 , isn't the original experiment you selected to run against the basic benchmark?
Hi @<1752139552044093440:profile|UptightPenguin12> , for that you would need to use the API and use the mark_completed call with the force flag on
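A minimal sketch of what that looks like via the SDK (the task ID is a placeholder):
```python
from clearml import Task

# Fetch the task by its ID and force-mark it as completed
task = Task.get_task(task_id="<TASK_ID>")
task.mark_completed(force=True)
```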
@<1523701295830011904:profile|CluelessFlamingo93> , just so I understand - you want to upload a string as the artifact?
Hi OutrageousSheep60 , can you elaborate on how/when this happens?
whenever previewing the dataset (which is in a parquet tabular format) the browser automatically downloads a copy of the preview file as a text file
Yes, you can message me directly 🙂
Hi JitteryCoyote63 , I think you can click one of the debug samples to enlarge it. Then you will have a scroll bar to get to the iteration you need. Does that help?
ClearML has a built-in model repository, so together I think they make a "feature store". Again, it really depends on your definition
I think the pipeline runs from start to end, starting when the first step starts
That's strange, you don't have a create new credentials button?
VexedCat68 , in the screenshot you provided it looks like the location is being printed. Did you check to see if something is there?
I think the set_default_upload_uri is for all output models, while set_upload_destination is for a specific model/file
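Something along these lines - a sketch assuming both methods are exposed on OutputModel as named above, with placeholder bucket paths:
```python
from clearml import Task, OutputModel

task = Task.init(project_name="examples", task_name="upload-destinations")

# Default destination for all output models (assumed classmethod, per the message above)
OutputModel.set_default_upload_uri("s3://my-bucket/models")

# Destination for one specific model only
model = OutputModel(task=task)
model.set_upload_destination("s3://my-bucket/special-models")
```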
default_output_uri is for artifacts & models while files_server is for debug samples and plots (if they are files)
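In clearml.conf terms that maps to something like this (placeholder values):
```
api {
    # Debug samples and file-based plots are uploaded here
    files_server: "https://files.example.com"
}
sdk {
    development {
        # Artifacts and models are uploaded here by default
        default_output_uri: "s3://my-bucket/outputs"
    }
}
```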
Just to make sure, run the code on the machine itself to verify that python can actually detect the driver
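For example, assuming a PyTorch setup (otherwise running nvidia-smi is an equivalent sanity check):
```python
import torch

# Both of these should succeed if python can see the driver
print(torch.cuda.is_available())      # expect True
print(torch.cuda.get_device_name(0))  # the GPU as reported by the driver
```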
Wait I might be missing something. Are you running a self hosted server?
Can you look in the UI if the execution parameters were logged?
Hi @<1702492411105644544:profile|YummyGrasshopper29> , it looks like the controller is running, but is there any agent listening to where the tasks are being pushed?
Hi @<1669152726245707776:profile|ManiacalParrot65> , is this a specific task or the controller?
Yes. However, I don't think you need to back up Redis, as it only holds data related to the currently running agents.
Hi @<1570583237065969664:profile|AdorableCrocodile14> , how did you upload the image?
How are you saving your models? torch.save(model, "<MODEL_NAME>") ?
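If so, ClearML should pick it up automatically once a task is initialized - a minimal sketch with placeholder names:
```python
import torch
from clearml import Task

task = Task.init(project_name="examples", task_name="torch-save-demo")

model = torch.nn.Linear(4, 2)
# ClearML's automatic framework logging hooks torch.save,
# so this checkpoint should be registered as an output model
torch.save(model.state_dict(), "model.pt")
```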
I don't think there is such an option currently but it does make sense. Please open a GitHub feature request for this 🙂
Hi CrookedMonkey33 ,
Can you please open developer tools (F12) and see what is returned when you navigate to the 'projects' page (when you see the 41 experiments)?
Also go into 'Settings' -> 'Configuration' and verify that you have 'Show Hidden Projects' enabled
Hi @<1739093605621960704:profile|LovelySparrow29> , there is no such schema currently. Out of curiosity, what is the use case?
Hi GentleSwallow91 ,
- When using jupyter notebooks it's best to call `task.close()` - it will have the same effect you're interested in
- If you would like to upload to the server you need to add the `output_uri` parameter to your `Task.init()` call. You can read more here - https://clear.ml/docs/latest/docs/references/sdk/task#taskinit

You can either set it to `True` or provide a path to a bucket. The simplest usage would be `Task.init(..., output_uri=True)`, as in the sketch below.
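(Project/task names and the bucket path are placeholders.)
```python
from clearml import Task

# Upload output models/artifacts to the ClearML files server
task = Task.init(project_name="examples", task_name="demo", output_uri=True)

# ...or point it at your own bucket instead:
# task = Task.init(project_name="examples", task_name="demo",
#                  output_uri="s3://my-bucket/outputs")
```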
