Or did you mean I can couple a short "mini config" with the package and redirect clearml to use this local one (instead of the one at ~/clearml.conf)?
Actually yes, you can ship a "fixed" config and point to it with an ENV variable, then set up just the access/secret per user.
wdyt?
(I was also pointing to the fact that you do not have to use clearml-init; you can create a simple partial config template and let the user just fill in the missing "key"/"secret", see the sketch below)
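For example, a minimal sketch of such a partial template (the server URLs are placeholders, adjust to your deployment):
```
# mini clearml.conf shipped with the package
api {
    web_server: https://app.clear.ml
    api_server: https://api.clear.ml
    files_server: https://files.clear.ml
    credentials {
        access_key: ""
        secret_key: ""
    }
}
```
Then point the SDK at it with CLEARML_CONFIG_FILE=/path/to/mini.conf, and each user only supplies their own credentials, either by filling in the template or via the CLEARML_API_ACCESS_KEY / CLEARML_API_SECRET_KEY environment variables.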
Hi @<1541954607595393024:profile|BattyCrocodile47>
It seems to me that instead of implementing webhooks to react to things like adding a tag to a model
Did you look at this example?
None
Can we straightforwardly stream ALL ClearML events to another system?
what would you consider an event?
The "basic" object type is Task, a state in task is changed via an api call, would that be an e...
Hi HappyLion37
It seems that you are "reusing" the Tasks, which means the second time you open them you are essentially resetting the old run and starting all over.
Try to do:
```python
from clearml import Task

task1 = Task.init('examples', 'step one', reuse_last_task_id=False)
print('do stuff')
task1.close()

task2 = Task.init('examples', 'step two', reuse_last_task_id=False)
print('do some more stuff')
task2.close()
```
I thought about the fact that maybe we need to write everything in one place
It will be in the same place, under the main Task
Should work out of the box
Can you post the actual line here? Seems like we can fix it to also support this scenario (if we could test it)
PompousBeetle71 Check the beginning of the log, it should print the configuration, including the access key (excluding the secret); see if it makes sense...
we made two TB versions per task and wrote in parallel.
And I wanted to know if it is possible here as well.
Basically you will have different series (based on the TB log file) on the same graph so you can compare 🙂 all automatically
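Something like this (a minimal sketch; the task/writer names and log dirs are made up, and it assumes ClearML's TB binding splits series per log file as described):
```python
from clearml import Task
from torch.utils.tensorboard import SummaryWriter

task = Task.init(project_name="examples", task_name="two tb writers")  # hypothetical names

# two writers, each with its own TB log file
writer_a = SummaryWriter(log_dir="runs/variant_a")
writer_b = SummaryWriter(log_dir="runs/variant_b")

for step in range(100):
    # same scalar name from both writers -> two series on the same ClearML graph
    writer_a.add_scalar("loss", 1.0 / (step + 1), step)
    writer_b.add_scalar("loss", 0.8 / (step + 1), step)

writer_a.close()
writer_b.close()
task.close()
```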
Woot woot! 🤩
Yes, because when a container is executed, the agent creates a new venv and inherits from the system-wide installed packages, but it cannot inherit from or "understand" that there is an existing venv, or where it is.
do I still need to specify an OutputModel
No need, only if you want to upload a local model file (but I assume in this case, no new model is created)
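For the case where you do want to upload a local file, a minimal sketch (task/file names are hypothetical):
```python
from clearml import Task, OutputModel

task = Task.init(project_name="examples", task_name="manual model upload")  # hypothetical names
output_model = OutputModel(task=task)
# register and upload a local weights file as the task's output model
output_model.update_weights(weights_filename="model.pt")
```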
EnviousStarfish54 Sure, see scatter2d
https://allegro.ai/docs/examples/reporting/scatter_hist_confusion_mat_reporting/#2d-scatter-plots
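Roughly what the example does (a minimal sketch, names and values made up):
```python
import numpy as np
from clearml import Task, Logger

task = Task.init(project_name="examples", task_name="2d scatter")  # hypothetical names
scatter = np.random.randint(10, size=(10, 2))  # (x, y) pairs
Logger.current_logger().report_scatter2d(
    title="example_scatter",
    series="series_xy",
    iteration=0,
    scatter=scatter,
    xaxis="x",
    yaxis="y",
    mode="markers",
)
```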
tasks.add_or_update_artifacts/v2.10 (Invalid task status: expected=created, status=completed)>
Hi UpsetCrow72
How come you are trying to sync a "completed" (finalized) dataset?
Hi @<1523701304709353472:profile|OddShrimp85>
Do you mean Dataset.get_local_copy()?
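i.e. something along these lines (project/name are hypothetical):
```python
from clearml import Dataset

ds = Dataset.get(dataset_project="my_project", dataset_name="my_dataset")
local_path = ds.get_local_copy()  # cached, read-only local copy of the dataset
print(local_path)
```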
@<1523703080200179712:profile|NastySeahorse61> / @<1523702868694011904:profile|AbruptCow41>
Is there a way to avoid each task creating a new environment?
You can just define CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=1 and it will just use whatever you have there (notice it will totally ignore requirements.txt and "installed packages" on the Task)
BTW I would recommend turning on the venv caching, this is per docker/python/packages caching, so the next time you are using the exact requi...
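For reference, a sketch of the relevant clearml.conf section on the agent machine (values shown are the usual defaults, adjust as needed):
```
agent {
    venvs_cache: {
        max_entries: 10
        free_space_threshold_gb: 2.0
        # setting the path enables venv caching
        path: ~/.clearml/venvs-cache
    }
}
```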
Hi DeliciousKoala34
I am using PyCharm and I have set up the ClearML plugin, but it still doesn't work.
Did you provide the key/secret to the plugin? I think this is a must for it to actually work
I'm trying to figure if this is reproducible...
Hi NonchalantGiraffe17
You mean this documentation?
https://clear.ml/docs/latest/docs/references/api/tasks#post-tasksclone
CleanPigeon16 Can you also send the "Configuration Object" "Pipeline" section?
main clearml repo?
Yep that sounds right 🙂 thank you!
@<1560074028276781056:profile|HealthyDove84> if you want you can PR a fix, it should be very simple basically:
None
```python
# suggested addition to the numpy dtype -> type-string mapping:
elif np_dtype == str:
    return "STRING"
elif np_dtype == np.object_ or np_dtype.type == np.bytes_:
    return "BYTES"
return None
```
Hmm, yes this fits the message, which basically says that it gave up on analyzing the code because it ran out of time. Is the execution very short? Or the repo very large?
GloriousPanda26 wouldn't it make more sense that multi-run would create multiple experiments?
Hmm that makes sense to me, any chance you can open a GitHub issue so we do not forget? (I do not think it should be very complicated to fix)
let me check a sec
AdventurousRabbit79 are you passing cache_executed_step=False to the PipelineController?
https://github.com/allegroai/clearml/blob/332ceab3eadef4997e897d171957975a247a6dc1/clearml/automation/controller.py#L129
Could you send a usage example?
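For reference, a minimal sketch of what I mean (names are hypothetical, assuming the standard add_step signature):
```python
from clearml import PipelineController

pipe = PipelineController(name="my pipeline", project="examples", version="1.0")
pipe.add_step(
    name="step_one",
    base_task_project="examples",
    base_task_name="step one",
    cache_executed_step=False,  # force re-execution instead of reusing a cached step
)
pipe.start()
```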
my pipeline controller always updates to the latest git commit id
This will only happen if the Task the pipeline creates has no specific commit ID, and instead just uses the latest from the git repo. Is this the case?
But PyTorch has no specific backend, it uses TB.
No?! Can you point me to an example? What I mostly find is how to calc metrics, not a standard way to then store them...
PompousBeetle71 If this is argparse and the type is defined, the trains-agent will pass the equivalent in the same type; with str that amounts to ''. Make sense?
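A minimal sketch of that behavior (the argument name is made up):
```python
import argparse

parser = argparse.ArgumentParser()
# the type is explicitly declared, so the agent casts the stored value back to str;
# an "empty" value therefore arrives as '' rather than None
parser.add_argument("--output-dir", type=str, default="")
args = parser.parse_args()

if args.output_dir == "":
    print("empty string passed, not None")
```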