Martin, thank you very much for your time and dedication, I really appreciate it
My pleasure 🙂
Yes, I have the latest version (1.0.5) now, and it gives the same result in the UI as the previous version I used
Hmm, are you saying the auto Hydra connection doesn't work? Is it the folder structure?
When is Task.init called?
See example here:
https://github.com/allegroai/clearml/blob/master/examples/frameworks/hydra/hydra_example.py
Wait, it shows "hydra==2.5", not "hydra-core==x.y"?
How does `deferred_init` affect the process?
It defers all the networking and setup to the background (usually the part that might slow down the Task initialization process)
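For context, a minimal sketch of what that looks like, assuming a recent clearml SDK (the project/task names are illustrative):
```
from clearml import Task

# Task.init returns almost immediately; the backend communication
# runs in a background thread instead of blocking the caller
task = Task.init(
    project_name='examples',   # illustrative
    task_name='fast startup',  # illustrative
    deferred_init=True,
)

# the first real use of the task (e.g. logging) waits for the
# deferred initialization to complete
task.get_logger().report_text('hello')
```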
Also, is there a way of specifying a blacklist instead of a whitelist of features?
BurlyPig26 you can whitelist per framework and file name, for example: `task = Task.init(..., auto_connect_frameworks={'pytorch': '*.pt', 'tensorflow': ['*.h5', '*.hdf5']})`
What am I missing?
Just curious about the timeout, was it configured by ClearML or by GCS? Can we customize it?
I'm assuming this is GCS; in the end the actual upload is done by the GCS Python package.
Maybe there is an env variable ... Let me google it
Failing when passing the diff to the git command...
Yes, the same will work with artifacts, just pass the full URL as the artifact_object and it should register it as-is.
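A minimal sketch, assuming an already-initialized Task and a hypothetical remote URL:
```
from clearml import Task

task = Task.init(project_name='examples', task_name='artifact link')

# passing a full URL string registers the link as-is,
# without copying or re-uploading the object
task.upload_artifact(name='raw_data', artifact_object='s3://my-bucket/data.csv')
```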
VictoriousPenguin97 I'm not sure there is an easy solution, basically you have to edit both MongoDB (artifacts) and Elastic (think debug samples) 😞
Could you run your code not from inside the git repository?
I have a theory: you never actually added the entry point file to the git repo, so the agent never actually installed it and just did nothing (it should have reported an error, I'll look into it)
WDYT?
The Overview panel would be extremely well suited to selecting a number of projects in order to compare them.
Could you elaborate?
Another useful feature would be to allow adding information (e.g. metrics or metadata) to the tooltip.
You mean... are we still talking about the "Overview" tab?
based on this:
https://clear.ml/docs/latest/docs/references/api/endpoints#post-debugping
" http://localhost:8080/debug.ping ”
btw: What's the usage scenario?
if I want to run the experiment the first time without creating the template?
You mean without manually executing it once?
Will they get ordered ascending or descending?
Good point, I'll check the docs... but I think they do not specify:
https://clear.ml/docs/latest/docs/references/sdk/task#taskget_tasks
From the code it seems the order is not guaranteed.
You can however pass '-last_update' in order_by, which will give you the latest updated first:
```
from clearml import Task

# title/series are the scalar metric's title and series names
task_filter = {
    'page_size': 2,
    'page': 0,
    'order_by': ['last_metrics.{}.{}'.format(title, series), '-last_update'],
}
tasks = Task.get_tasks(task_filter=task_filter)
```
So it sounds as if, for some reason, calling Task.init inside a notebook on your JupyterHub is not detecting the notebook.
Is there anything special about the JupyterHub deployment? How is it deployed? Is it password protected? Is this reproducible?
For local testing, we have added a
ScantChimpanzee51 there is already an environment variable for that, you can just set CLEARML_OFFLINE_MODE
🙂
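A minimal sketch, assuming offline mode is enabled before Task.init is called:
```
import os

# either set the environment variable before initializing ...
os.environ['CLEARML_OFFLINE_MODE'] = '1'

from clearml import Task
# ... or enable it programmatically:
# Task.set_offline(offline_mode=True)

task = Task.init(project_name='examples', task_name='offline run')
```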
By the way, if we don't wrap other calls in is_offline() we get errors like "DateTime object is not serializable", but that's a secondary issue.
I think this was fixed, can you verify with the latest RC, 1.7.3rc0? If this still happens can you share the code?
However, this results in the process getti...
The confusion matrix shows under debug samples, but the image is empty, is that correct?
However, are you thinking of including these callback features in the new pipelines as well?
Can you see a good use case? (I mean the infrastructure supports it, but sometimes too many arguments are just confusing, no?!)
because it should have detected it...
Did you see "Repository and package analysis timed out ..."?
Hi OutrageousGiraffe8
when I save a model using tf.keras.save_model
This should create a new Model in the system (not an artifact); models have their own entity and UID.
Are you creating the Task with output_uri="gs://bucket/folder"?
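For reference, a minimal sketch (the bucket path is a placeholder, and GCS credentials are assumed to be configured in clearml.conf):
```
from clearml import Task

task = Task.init(
    project_name='examples',
    task_name='keras training',
    # auto-captured models (e.g. from tf.keras save calls)
    # will be uploaded under this destination
    output_uri='gs://bucket/folder',
)
```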
Is there a quicker way to abort all running experiments in a project? I have over a thousand running anonymous data tasks in a specific project and I want to abort them before debugging them.
We are adding "select all" in the next UI version to make that as quick as possible 🙂
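In the meantime, a hedged sketch of doing it from the SDK (the project name and status filter are assumptions):
```
from clearml import Task

# fetch all currently running tasks in the project
running = Task.get_tasks(
    project_name='examples',
    task_filter={'status': ['in_progress']},
)

for t in running:
    t.mark_stopped()  # abort the task
```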
default is clearml data server
Yes, the default is the ClearML files server, what did you configure it to? (e.g. it should be something like None )
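For reference, a sketch of the relevant section in clearml.conf (these are the stock local-server defaults, shown for illustration):
```
api {
    web_server: http://localhost:8080
    api_server: http://localhost:8008
    # artifacts and debug samples are uploaded here by default
    files_server: http://localhost:8081
}
```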
Hi @<1578555761724755968:profile|GrievingKoala83>
Is it possible to override the parameters through the configuration file when restarting the pipeline from the UI?
The parameters of the Pipeline are overridden from the UI, not the pipeline components';
you can use the pipeline parameters as-is as the pipeline component parameters.
Is your pipeline built from Tasks, or from decorators over functions?
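If it is Task-based, a minimal sketch of exposing a pipeline parameter and feeding it into a component (the project, task names, and parameter are assumptions):
```
from clearml import PipelineController

pipe = PipelineController(name='my pipeline', project='examples', version='1.0')

# this parameter shows up in the UI and can be overridden when relaunching
pipe.add_parameter(name='dataset_url', default='s3://bucket/data.csv')

pipe.add_step(
    name='process',
    base_task_project='examples',
    base_task_name='process data',  # hypothetical template task
    # forward the pipeline-level parameter into the component
    parameter_override={'Args/dataset_url': '${pipeline.dataset_url}'},
)

pipe.start()
```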
Hmm, that is odd, can you send an email to support@clear.ml ?
understood, Trains does not have auto versioning
What do you mean by auto versioning?
Task name is not unique, task ID is unique; you can have multiple tasks with the same name, and you can edit the name post execution
Actually, dumb question: how do I set the setup script for a task?
When you clone/edit the Task in the UI, under Execution / Container you should have it
After you edit it, just push it into execution with the autoscaler and wait 🙂
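The same can also be set from code before pushing the Task; a hedged sketch, assuming a recent SDK where set_base_docker accepts a setup script:
```
from clearml import Task

task = Task.init(project_name='examples', task_name='with setup script')

# equivalent to filling in Execution / Container in the UI
task.set_base_docker(
    docker_image='python:3.9',
    docker_setup_bash_script=['apt-get update', 'apt-get install -y git'],
)
```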
JitteryCoyote63 instead of _update_requirements, call the following before Task.init:
`Task.add_requirements('torch', '1.3.1')`
`Task.add_requirements('git+ ')`
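Put together, a minimal sketch (the git URL is hypothetical, standing in for the truncated one above):
```
from clearml import Task

# must be called before Task.init so the agent installs these
Task.add_requirements('torch', '1.3.1')
Task.add_requirements('git+https://github.com/user/repo.git')  # hypothetical URL

task = Task.init(project_name='examples', task_name='custom requirements')
```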