The issue only arises when sending images (numpy, matplotlib, and PIL alike).
BTW: they should appear under the Debug Samples tab in the results
Are you doing `from keras import ...` or `from tensorflow.keras import ...` ?
Perhaps it is the imports at the start of the script only being assigned to the first task that is created?
Correct!
However, when I split the experiment task out completely, it seems to have built the cloned task correctly.
Nice!!
Now I am passing it the same way you have mentioned, but my code still gets stuck as in above screenshot.
The screenshot shows a warning from pyplot (matplotlib), not ClearML, or am I missing something ?
My guess is that it can't resolve credentials. It also does not give me any popup to log in.
If it fails, you will get an error; there will never be a popup from code 🙂
... We need a more permanent place to store data
FYI you can store the "Dataset" itself on GS (instead of...
HurtWoodpecker30 in order to have the venv cache activated, the agent uses the full `pip freeze` it stores in the "installed packages"; this means that when you clone a Task that was already executed, you will see it is using the cached venv.
(BTW: the packages themselves are cached locally, meaning no time is spent on downloading, just on installing, but installing is also time-consuming, hence the full venv cache feature).
Make sense ?
Hi GrievingTurkey78
I think it is already fixed with 0.17.5, no?
Try this one 🙂 `HyperParameterOptimizer.start_locally(...)`
https://clear.ml/docs/latest/docs/references/sdk/hpo_optimization_hyperparameteroptimizer#start_locally
Can you print the actual values you are passing? (i.e. `local_file` and `remote_url`)
Hmm, can you test with the latest RC? `pip install clearml==0.17.6rc1`
WackyRabbit7 How do I reproduce it ?
Why can I only call `import_model` ?
`import_model` actually creates a new Model object in the system
`InputModel(id)` will "load" an existing model based on the model id
Make sense ?
gm folks, really liking ClearML so far as my top choice (after looking at dvc, mlflow), and thank you for your help here!
Thanks HurtWoodpecker30 !
Is there a recommended workflow to be able to “drop into” the exact env (code, venv, data) of a previous experiment (which may have been several commits ago), to reproduce that experiment?
You can use clearml-agent on your local machine to build the env of any Task:
`clearml-agent build --id <ta...`
I think it's inside the container since it's after the worker pulls the image
Oh, that makes more sense. I mean it should not build from source, but that makes sense.
To avoid building from source, add the following line to the "Additional ClearML Configuration" section: `agent.package_manager.pip_version: "<21"`
You can also turn on venv caching
Add the following line to the "Additional ClearML Configuration" section: `agent.venvs_cache.path: ~/.clearml/venvs-cache`
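For context, these "Additional ClearML Configuration" lines end up merged into the agent's `clearml.conf`. A minimal sketch of the equivalent config section, combining both of the settings mentioned above (values are illustrative, adjust to your setup):

```
# Sketch of the agent section in clearml.conf (illustrative)
agent {
    package_manager {
        # pin pip below version 21 to avoid building packages from source
        pip_version: "<21"
    }
    venvs_cache {
        # enable the full-venv cache so cloned Tasks reuse prebuilt envs
        path: ~/.clearml/venvs-cache
    }
}
```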
I will make sure w...
instead of terminating them once they are inactive, so that they could be available immediately when they are needed.
JitteryCoyote63 I think you can increase the IDLE timeout on the autoscaler and achieve the same behavior, no ?
The reasoning is that most likely simultaneous processes will fail on GPU due to memory limit
Hi @<1627478122452488192:profile|AdorableDeer85>
I'm sorry I'm a bit confused here, any chance you can share the entire notebook ?
Also any reason why this is pointing to "localhost" and not IP/host of the clearml-server ? is the agent running on the same machine ?
I basically just mean having a date input like you would in Excel, where it brings up a calendar (and a clock if it’s a time), and defaults to “now”
I would love that as well, but I kind of suspect the frontend people will say these things tend to start small and grow into a huge effort. At the moment the UI is basically plain text, and the casting is done on the SDK side.
You can however provide type information and help (you can see it when you hover over the arguments on th...
Hi @<1691620877822595072:profile|FlutteringMouse14>
Yes, feast has been integrated by at least a couple of users, if I remember correctly.
Basically there are two modes, offline and online feature transformation. For offline, your pipeline is exactly what would be recommended. The main difference is online transformation, where I think feast is a great start.
Are hparams saved in the hyperparameters section superior to hparams saved in configuration objects?
Well, I'm not sure about "superior", but they are structured, as opposed to a configuration object, which is as generic as could be
Can you provide some further explanation, please? Sorry, I am a beginner.
My bad, I was thinking out loud about improving the HPO process and allowing users to modify the `configuration_object`, not just the hyperparameters
Nice! I'll see if we can have better error handling for it, or solve it altogether 🙂
(I think it is the empty config file)
Hi @<1526371965655322624:profile|NuttyCamel41>
. I do that because I do not know how to get the pickle file into the docker container
What would the pickle file do?
and load the MinMaxScaler within the script, as the sklearn dependency is missing
what do you mean by that? are you getting an error when loading your model ?
I ended up using `task_overrides` for every change, and this way I only need 2 tasks (a base task and a step task, thus I use `clone_base_task=True` and it works as expected - yay!)
Very cool!
BTW: you can also provide a function to create the entire Task, see the `base_task_factory` argument in `add_step`
I think it's still an issue, not critical though, because we have another way to do it and it works
I could not reproduce it, I think the issue w...
but I can't figure out whether the services queue is the only way to do this, or whether I can experiment with that?
UnevenOstrich23 I'm not sure what exactly the question is, but if you are asking whether this is limited, the answer is no, it is not limited to that use case.
Specifically you can run as many agents in "services-mode" pulling from any queue/s that you need, and they can run any Task that is enqueued on those queues. There is no enforced limitation. Did that answer the question ?
Create a new version of the dataset by choosing what increment in the SEMVER standard I would like to add for this version number (major/minor/patch), and upload
Oh, this is already there:
```
cur_ds = Dataset.get(dataset_project="project", dataset_name="name")
# if version is not given, it will auto-increase based on semantic versioning,
# incrementing the last number: 1.2.3 -> 1.2.4
new_ds = Dataset.create(dataset_project="project", dataset_name="name", parents=[cur_ds.id])
```
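Just to illustrate what "auto increase based on semantic versions" means, here is a tiny plain-Python sketch of a patch-level increment. This is illustrative only, not ClearML's actual implementation (ClearML computes the next version internally when no version is passed):

```python
# Illustrative helper: semantic-version patch auto-increment
# (not ClearML code; shows the 1.2.3 -> 1.2.4 behavior described above)
def next_patch(version: str) -> str:
    major, minor, patch = (int(p) for p in version.split("."))
    return f"{major}.{minor}.{patch + 1}"

print(next_patch("1.2.3"))  # prints 1.2.4
```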