BTW: you should probably update the server, you're missing out on a lot of cool features 🙂
However, when 'extra' is a positional argument, it is transformed to 'str'
Hmm... okay let me check something
Should I only do MongoDB?
No, you should do all 3 DBs: ELK, Mongo, Redis
Hi @<1546303269423288320:profile|MinuteStork43>
Failed uploading: cannot schedule new futures after interpreter shutdown
This is odd. Where / when exactly are you trying to upload it?
task.mark_completed()
You have that at the bottom of the script; never call it on your own process, it will kill the actual process.
So what is going on: you are marking your own process for termination, then it terminates itself, leaving the interpreter, and this is the reason for the errors you are seeing.
The idea of mark_* is to mark an external Task, forcefully.
By just completing your process with exit code 0 (i.e. no error), the Task will be marked as completed anyhow, no need to call...
cannot schedule new futures after interpreter shutdown
This implies the process is shutting down.
Where are you uploading the model? What is the clearml version you are using? Can you check with the latest version (1.10)?
Hmm, so what is the difference ?
btw: what's the OS and python version?
I'm trying to figure if this is reproducible...
If this is the case, then we do not change the matplotlib backend
Also
I've attempted converting the mpl image to PIL and using report_image to push the image, to no avail.
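For reference, the mpl-to-PIL conversion itself can be done entirely in memory. A minimal sketch (the reporting call is commented out, since it assumes an initialized clearml Task):

```python
import io

import matplotlib
matplotlib.use("Agg")  # headless backend, no display required
import matplotlib.pyplot as plt
from PIL import Image

# Render the matplotlib figure to an in-memory PNG and load it with PIL
fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 4])
buf = io.BytesIO()
fig.savefig(buf, format="png")
buf.seek(0)
pil_img = Image.open(buf)

# Assumed reporting step (requires Task.init() to have run first):
# from clearml import Logger
# Logger.current_logger().report_image("debug", "figure", iteration=0, image=pil_img)
```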
What are you getting? An error / exception?
Switching to a process Pool might be a bit of overkill here (I think)
wdyt?
@<1532532498972545024:profile|LittleReindeer37> nice!!! 🙂
Do you want to PR? It will be relatively easy to merge and test, and I think that they might even push it to the next version (or worst case a quick RC)
right now I can't figure out how to get the session in order to get the notebook path
you mean the code that fires "HTTPConnectionPool" ?
Hmm, and you are getting an empty list for this one:
server_info['url'] = f"http://{server_info['hostname']}:{server_info['port']}/"
@<1523701079223570432:profile|ReassuredOwl55>
Hey, here's a quickie: is it possible to specify different "types" of input parameters ("Args/…") such that they are handled nicely on the front end?
You mean cast / checked in the UI?
Yes, the one you create manually is not really of the same "type" as the one you create online, which is why you do not see it there 🙂
For me it sounds like the starting of the service is completed, but I don't really see if the autoscaler is actually running. Also, I don't see any output in the console of the autoscaler.
Do notice the autoscaler code itself needs to run somewhere; by default it will be running on your machine, or on a remote agent.
That experiment says it's completed, does it mean that the autoscaler is running or not?
Not running; it will be "running" if it is actually being executed
Sure, go to "All Projects" and filter by Task Type: application / service
Hi @<1523701260895653888:profile|QuaintJellyfish58>
Based on the docs
None
I think this should have worked. Are you running the actual task_scheduler on your machine? On the services queue? What's the console output you see there?
My current experience is that there is only print output in the console, but no training graph.
Yes, Nvidia TLT needs to actually use TensorBoard for clearml to catch the training graphs and display them.
I think that in the latest version they added that. TimelyPenguin76 might know more
Command-line arguments to the arg parser should be passed via the "Args" section in the Configuration tab.
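As a minimal sketch (the argument names here are hypothetical): a standard argparse parser is what gets surfaced under that "Args" section when the script runs with a clearml Task initialized:

```python
import argparse

# Hypothetical training arguments; when a clearml Task is initialized,
# the parser is auto-connected and these values show up in the UI under
# Configuration -> Args, where they can also be overridden for remote runs.
parser = argparse.ArgumentParser()
parser.add_argument("--epochs", type=int, default=10)
parser.add_argument("--lr", type=float, default=0.001)
args = parser.parse_args([])  # empty list -> use the defaults
```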
What is the working directory on the experiment ?
I hope it can run in the same day too.
Fix should be in the next RC 🙂
Will the new fix avoid this issue, and does it still require the incremental flag?
It will avoid the issue, meaning even when incremental is not specified, it will work
That said, the issue is that any other logger will be cleared as well, so it's just good practice ...
From the logging documentation ...
Hmmm, so I guess Kedro should not use dictConfig?! I'm not sure of the exact use case, but just clearing all loggers seems like a harsh approach
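A small stdlib-only sketch of the incremental behaviour discussed above: with `"incremental": True`, dictConfig only processes level/propagate updates and leaves existing handlers in place:

```python
import logging
import logging.config

# Attach a handler first, then apply an incremental dictConfig
root = logging.getLogger()
handler = logging.StreamHandler()
root.addHandler(handler)

logging.config.dictConfig({
    "version": 1,
    "incremental": True,           # only level/propagate are processed
    "root": {"level": "INFO"},
})

# The pre-existing handler survives; a non-incremental config (the default)
# may disable or replace existing logger state instead.
assert handler in root.handlers
```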
Hi JuicyFox94
I think you are correct, this bug will explain the entire thing.
Basically what happens is that remote_execute stops the local run before the configuration is set on the Task. Then, running remotely, the code pulls the configuration, sees that it is empty, and does nothing.
Let me see if I can reproduce it...
I pass my dataset as a parameter of the pipeline:
@<1523704757024198656:profile|MysteriousWalrus11> I think you were expecting the dataset_df dataframe to be automatically serialized and passed, is that correct?
If you are using add_step, all arguments are simple types (i.e. str, int etc.)
If you want to pass complex types, your code should be able to upload it as an artifact and then you can pass the artifact url (or name) for the next step.
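A runnable sketch of that hand-off pattern. A plain dict stands in for the artifact store so the example runs standalone; in a real pipeline the marked lines would be `task.upload_artifact(...)` on the producing step and `Task.get_task(...).artifacts[name].get()` on the consuming step:

```python
import pandas as pd

# Stand-in for the clearml artifact store, so this sketch runs without a server
artifact_store = {}

def step_a() -> str:
    df = pd.DataFrame({"x": [1, 2, 3], "y": [4, 5, 6]})
    # Real pipeline: task.upload_artifact("dataset_df", df)
    artifact_store["dataset_df"] = df
    return "dataset_df"  # pass the artifact *name* (a simple str) to the next step

def step_b(artifact_name: str) -> int:
    # Real pipeline: Task.get_task(task_id=...).artifacts[artifact_name].get()
    df = artifact_store[artifact_name]
    return len(df)

rows = step_b(step_a())
```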
Another option is to use pipeline from dec...