But where do you manually set the name of each task in this code? the .component has a name argument you can provide
I am curious about the updates on version 1.0.0, where can I see some info regarding this?
Passing state information from pre to postprocessing and the dynamic preprocessing code thing, for example
can you elaborate a bit on the token side? i'm not sure exactly what would be a bad practice here
In fact I just did that yesterday. I'll let you know how it goes
sure. Removing the task.connect(args_) does not fix my situation
ah.. agent was on a different machine..
so it tries to find it under /usr/bin/python/ I assume?
This is a minimal Comet example. I'm afraid I don't know what it does under the hood.. There are no callbacks on the metrics tracked in model.fit, and yet if you check out your project on the website, your training and validation losses are tracked automatically, live.
using ClearML Agent
instead of, say, the binary the task was launched with
platform: "tensorflow_savedmodel"
input [
  {
    name: "dense_input"
    data_type: TYPE_FP32
    dims: [ -1, 784 ]
  }
]
output [
  {
    name: "activation_2"
    data_type: TYPE_FP32
    dims: [ -1, 10 ]
  }
]
i'm not sure how to double check this is the case when it happens... usually we have all requirements specified in the git repo
but it's been that way for over 1 hour.. I remember I can force the task to wait for the upload. how do i do this?
hi SuccessfulKoala55 ! has the docker compose been updated with this?
"this means the elasticsearch feature set remains the same. and JDK versions are usually drop-in replacements when on the same feature level (ex. 1.13.0_2 can be replaced by 1.13.2)"
Hi SuccessfulKoala55 , do you have an update on this?
I understand! this is my sysadmin message:
"if nothing else, they could publish a new elasticsearch image of 7.6.2 (ex. 7.6.2-1) which uses a newer patched version of JDK (1.13.x but newer than 1.13.0_2)"
So it still looks like it's using port 8080? I'm not really sure
i'm guessing the cleanup_period_in_days can only actually run every day or whatever if the script is enqueued to services
i'm just interested in actually running a prediction with the serving engine and all