However, SNPE performs quantization with a precompiled CLI binary instead of a Python library (which also needs to be installed). What would the pipeline look like in this case?
I would imagine a container with the SNPE compiler/quantizer preinstalled, and a Python script triggering the process?
One more question: when the quantization process is triggered, will it be considered a separate task?
I think this makes sense, since you probably want a container with the SNPE environment, m...
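If I understand the flow correctly, a minimal sketch of that could look like the following, assuming the step's docker image has the SNPE SDK preinstalled and using its snpe-dlc-quantize CLI (the image name, paths, and flags here are illustrative):

import subprocess

from clearml.automation.controller import PipelineDecorator


# hypothetical docker image with the SNPE SDK preinstalled
@PipelineDecorator.component(docker='my-registry/snpe:latest')
def quantize(input_dlc: str, input_list: str) -> str:
    output_dlc = 'model_quantized.dlc'
    # trigger the precompiled CLI quantizer from Python
    subprocess.run(
        ['snpe-dlc-quantize',
         '--input_dlc', input_dlc,    # float model to quantize
         '--input_list', input_list,  # calibration inputs
         '--output_dlc', output_dlc],
        check=True,  # fail the step if the quantizer fails
    )
    return output_dlc

And yes, each pipeline component is executed as its own Task, so the quantization step would indeed show up as a separate task.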
That is odd, can you send the full Task log? (Maybe some oddity with conda/pip?!)
Awesome! Thank you so much!
1.0.2 will be out in an hour
And having a PDF is easier/better than sharing a link to the results page?
So what is the difference?!
(BTW: you can disable the auto-logging feature of joblib)
Task.init(..., auto_connect_frameworks={'scikit': False})
Then try to add the missing apt packages:
extra_docker_shell_script: ["apt-get install -y ???"]
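For context, that setting lives in the agent section of clearml.conf (the package name below is just a placeholder):

agent {
    # shell commands executed inside the docker before the task environment is set up
    extra_docker_shell_script: ["apt-get update", "apt-get install -y <missing-package>"]
}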
Task.connect is "automagic", i.e. it sends values to the server in manual mode and pulls them from the server in agent mode;
set_parameter is one-way only and should be used to set an external Task's parameters.
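A quick sketch to illustrate the difference (the task id below is a placeholder):

from clearml import Task

task = Task.init(project_name='examples', task_name='params demo')

# Task.connect is two-way: in a manual run the dict is logged to the server;
# when an agent re-runs the task, the values are pulled back from the server.
params = {'lr': 0.001, 'batch_size': 32}
params = task.connect(params)

# set_parameter is one-way: push a value onto a (possibly external) Task.
other_task = Task.get_task(task_id='aabbccdd...')  # placeholder id
other_task.set_parameter('General/lr', 0.01)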
Hi @<1561885921379356672:profile|GorgeousPuppy74>
- Could you copy the 3 messages here into your original message? It helps keep things tidy and nice (press the 3-dot menu and select edit)
- what do you mean by "currently its not executing in queue-01"? You changed it so it should be pushed to queue-02, no? Also notice that you can run the entire pipeline as sub-processes for debugging, just call:
pipe.start_locally(run_pipeline_steps_locally=True)
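For debugging, the whole flow is roughly (a sketch; names are placeholders):

from clearml.automation.controller import PipelineController

pipe = PipelineController(name='pipeline demo', project='examples', version='0.0.1')
pipe.add_step(name='stage_one', base_task_project='examples',
              base_task_name='step 1')  # placeholder step
# run the controller and every step as local sub-processes
pipe.start_locally(run_pipeline_steps_locally=True)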
You also need an agent on the ser...
GrievingTurkey78 notice that when enqueuing an aborted Task, the agent will not delete the previously reported metrics/logs
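If you want a clean re-run, one option (a sketch; the id and queue name are placeholders) is to reset the Task before enqueuing it:

from clearml import Task

task = Task.get_task(task_id='aabbccdd...')  # the aborted task, placeholder id
task.reset()  # clears the previous outputs so old metrics/logs do not linger
Task.enqueue(task, queue_name='default')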
VexedCat68 are you manually creating the OutputModel object?
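For reference, manually creating one usually looks something like this (the weights filename is a placeholder):

from clearml import Task, OutputModel

task = Task.init(project_name='examples', task_name='manual model logging')
# attach a model to the task explicitly instead of relying on framework auto-logging
output_model = OutputModel(task=task, framework='PyTorch')
output_model.update_weights(weights_filename='model.pt')  # placeholder weights file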
is there a built-in programmatic way to adjust development.default_output_uri?
How about passing it in your Task.init(output_uri='...')?
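i.e. something along these lines (the bucket URI is a placeholder):

from clearml import Task

# output_uri overrides development.default_output_uri for this task's models/artifacts
task = Task.init(project_name='examples', task_name='training',
                 output_uri='s3://my-bucket/models')  # placeholder destination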
And you are seeing a bunch of the GS SSL errors?
It does not upload; the default behavior is to log the artifact (so you know where it is stored, without enforcing unnecessary uploads).
If you were to change:
task = Task.init(project_name='examples', task_name='Keras with TensorBoard example')
to:
task = Task.init(project_name='examples', task_name='Keras with TensorBoard example', output_uri=" ")
it would also upload the model.
RoundMosquito25 do notice the agent is pulling the code from the remote repo, so you do need to push the local commits; the uncommitted changes, however, ClearML will handle for you. Make sense?
Hi @<1523701523954012160:profile|ShallowCormorant89>
This is generally based on the number of agents, or am I missing something? Also, is it based on Tasks or on decorated functions?
Could it be the model storing? Could the peak be at the end of the epoch?
And if I create myself a Pro account?
Then you have the UI and implementation of both AWS & GCP autoscalers, am I missing something?
Yes, I mean use the helm chart to deploy the server, but manually deploy the agent glue.
wdyt?
Oh found it:
temp.linux-aarch64-cpython-39
this is ARM?!
ReassuredTiger98 after 20 hours, was it done uploading ?
What do you see in the Task resource monitoring? (notice there is a network_tx_mbs metric that, according to this, should be 0.152)
It looks like the tag being used is hardcoded to 1.24-18. Was this issue identified and fixed in later versions?
BoredHedgehog47 what do you mean by "hardcoded 1.24-18"? A tag for what? I think I lost context here.
Good question 🙂
from clearml import Task
Task.init('examples', 'test')
Hi ColossalAnt7, I think we ran into it on a few dockers; I believe the bug was fixed in the latest trains-agent RC. Could you verify, please?
CheerfulGorilla72 my guess is the Slack token does not have credentials for the private channel, could that be?