so that you can get the latest artifacts of that experiment
what do you mean by "the latest artifacts"? do you have multiple artifacts on the same Task, or is it the latest Task holding a specific artifact?
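If it's the latter, here is a minimal sketch of how you could grab it (project and artifact names are placeholders):
from clearml import Task

# take the most recently updated Task in the project, then fetch one of its artifacts
task = Task.get_tasks(
    project_name='examples',
    task_filter={'order_by': ['-last_update']},
)[0]
local_path = task.artifacts['my_artifact'].get_local_copy()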
We are always looking for additional talented people 😉 DM me...
Hi @<1536518770577641472:profile|HighElk97>
Is there a way to change the smoothing algorithm?
Just like with TB, this is front-end, not really something you can control ...
That said you can report a smoothed value (i.e. via python) as additional series, wdyt ?
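For example (a minimal sketch; the metric values, names, and the alpha value are placeholders):
from clearml import Task, Logger

task = Task.init(project_name='examples', task_name='smoothing demo')
logger = Logger.current_logger()

raw_values = [0.9, 0.7, 0.8, 0.5, 0.6]  # stand-in for your real metric
alpha = 0.9  # same idea as the TB smoothing slider
smoothed = None
for i, v in enumerate(raw_values):
    # exponential moving average, reported as an extra series next to the raw one
    smoothed = v if smoothed is None else alpha * smoothed + (1 - alpha) * v
    logger.report_scalar(title='loss', series='raw', value=v, iteration=i)
    logger.report_scalar(title='loss', series='smoothed', value=smoothed, iteration=i)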
Hi HandsomeCrow5 hmm interesting use case,
we have seen html reports as artifacts, then you can press "download" and it should open in another tab, what would you expect on "debug samples" ?
This is a horrible setup, it means no authentication will pass, it will literally break every JWT authentication scheme
Wait, @<1686547375457308672:profile|VastLobster56>, per your config the file server host is clearml-fileserver.
Who sets this domain name? Could it be that it only resolves on your host machine? You can quickly test by running any docker on your machine and running ping clearml-fileserver
from inside the docker itself.
Also, your log showed "could not download None ...", and I would not expect it to be None, no?
Follow up: I see that if I move an Experiment to a new project, it does not copy the associated model files; those must be moved manually. Once I moved the models to the new project, the query works as expected.
Correct 🙂
Nice catch!
And you are calling Task.init? And the scalars show under scalars and the images are not under debug samples?
One more question: in the second log, the trains agent is configured with Conda, while in the first it is configured with pip, or at least that is what it looks like. Can you confirm?
Can I log new lines to an old dataframe plot? Any other suggestions?
Hi ChubbyLouse32
you mean to an already reported Table? or an artifact ? or a dataset ?
Hi GrievingTurkey78
Turning off pytorch auto-logging:
Task.init(..., auto_connect_frameworks={'pytorch': False})
To manually log a model:
from clearml import OutputModel
OutputModel().update_weights('my_best_model.pt')
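Putting the two together, a minimal end-to-end sketch (assuming pytorch; project/task names and the model are placeholders):
import torch
from clearml import Task, OutputModel

task = Task.init(
    project_name='examples', task_name='manual model logging',
    auto_connect_frameworks={'pytorch': False},  # disable automatic checkpoint capture
)
model = torch.nn.Linear(4, 2)  # stand-in for your real model
# ... training loop ...
torch.save(model.state_dict(), 'my_best_model.pt')
# register only the checkpoint you actually care about
OutputModel(task=task).update_weights(weights_filename='my_best_model.pt')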
Can you copy the "Installed Packages" here, and point to the package causing the issue?
...instance to stop
you mean spin the instance down?
Epochs are still round numbers ...
Multiply by 2?! 😅
GiganticTurtle0
I think that what you are looking for is:
param_dict = {'key': 1234}
task.connect(param_dict, name='general')
Notice that when this code runs manually (i.e. not by the agent), the dict is stored on "general" parameter section of the Task.
But when the code is executed by the Agent, the opposite happens: the parameters from the "general" section of the Task are put back into param_dict, and the casting is done based on the type of the original values.
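To make the two directions concrete, a minimal sketch (project/task names are placeholders):
from clearml import Task

task = Task.init(project_name='examples', task_name='connect demo')
param_dict = {'key': 1234}
task.connect(param_dict, name='general')
# manual run: 'general/key' now shows as 1234 in the UI
# agent run: whatever value is set in the UI (e.g. edited to 5678) is written
# back into param_dict here, cast to int to match the original value's type
print(param_dict['key'])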
Generall...
Hi ConvolutedChicken69
assuming you are running the agent in venv mode you can do something like:
$ CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=1 clearml-agent daemon --queue default
This will basically only clone the code and use the default python the clearml-agent itself is using.
Does that help?
BTW:
it gets an error as it can't find it with pip.
What's the error? how come the package cannot be installed ?
It should be autodetected, and listed in the installed packages with something like:
keras-contrib @ git+https://www.github.com/keras-team/keras-contrib.git
Is this what you are seeing?
If not you can add it manually with:
Task.add_requirements('git+ ')
Task.init(...)
Notice that add_requirements must be called before Task.init.
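i.e. something like (reusing the keras-contrib URL from above; project/task names are placeholders):
from clearml import Task

# must run before Task.init so the requirement is recorded on the Task
Task.add_requirements('git+https://www.github.com/keras-team/keras-contrib.git')
task = Task.init(project_name='examples', task_name='train')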
I can read them programmatically using tensorboard and then log them using the clearml logger,
StaleButterfly40 this will be a great script to put somewhere (I'm sure you are not the only one with this problem). Maybe put it as a GitHub issue ? wdyt ?
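A rough sketch of such a sync script (the logdir path and names are placeholders, and it assumes the tensorboard package is installed):
from clearml import Task
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

task = Task.init(project_name='examples', task_name='tb import')
logger = task.get_logger()

# load the tensorboard event files and replay every scalar into clearml
ea = EventAccumulator('path/to/tb/logdir')
ea.Reload()
for tag in ea.Tags().get('scalars', []):
    for event in ea.Scalars(tag):
        logger.report_scalar(title=tag, series=tag, value=event.value, iteration=event.step)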
Hi StaleButterfly40
but if I sync more than once I get a duplication of each line in the log
Hmm.. let me check if we can "force" overwriting (it might require you to have a more stateful code for the sync process)
sometimes we resume training
How would that work in offline mode? The offline process cannot sync with the backend... Are you saying you would like to get a new capability, "continue-offline-session" ?
StaleButterfly40 just making sure I understand, are we trying to solve the "import offline zip file/folder" issue, where we create multiple Tasks (i.e. Task per import)? Or are you suggesting the Actual task (the one running in offline mode) needs support for continue-previous execution ?
I did see this “publish” option, but only for models, not for pipelines. Is this a new feature?
Kind of hidden in the UI (not sure if on purpose), but if you click on the pipeline then go to details, in the new tab (of the pipeline Task) you can publish the Task (aka the pipeline)
In this example:
https://github.com/allegroai/clearml-actions-train-model/blob/7f47f16b438a4b05b91537f88e8813182f39f1fe/train_model.py#L14
replace with something like:
task = Task.get_tasks(project_name="pipel...
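In full it would look roughly like this (my guess at the complete call; the project name and filter details are assumptions):
from clearml import Task

# latest published, non-archived task in the pipelines project
task = Task.get_tasks(
    project_name='pipelines',
    task_filter={'status': ['published'], 'order_by': ['-last_update']},
    allow_archived=False,
)[0]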
have a CI/CD (e.g. GitHub Actions) that updates my “production” pipeline in the ClearML UI,
I think this is the easiest way: basically the CI/CD launches a pipeline (which under the hood is another type of Task) by querying the latest "Published" pipeline that is also not archived, then cloning it and pushing it to an execution queue.
In the UI when you want to "upgrade" the production pipeline you just right click "Publish" on the pipeline you want to launch. Another way is to do the same with Tags...
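The CI/CD side could then look roughly like this (a sketch reusing the same query; project and queue names are assumptions):
from clearml import Task

# find the latest published, non-archived pipeline, clone it, and enqueue the clone
latest = Task.get_tasks(
    project_name='pipelines',
    task_filter={'status': ['published'], 'order_by': ['-last_update']},
    allow_archived=False,
)[0]
cloned = Task.clone(source_task=latest)
Task.enqueue(cloned, queue_name='services')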
Hi IrritableGiraffe81
Can you share a code snippet ?
Generally I would try:
task = Task.init(..., auto_connect_frameworks={'pytorch': False, 'tensorflow': False})
Thank you @<1523701949617147904:profile|PricklyRaven28> !!!
Let me see if we can reproduce and how to solve it
DrabCockroach54 notice here there is no aarch64 wheel for anything other than python 3.5...
(and in both cases only py 3.5/3.6 builds, everything else will be built from code)
https://pypi.org/project/pycryptodome/#files
AttractiveCockroach17 I verified this is an issue with hyperparameters containing "." or section names containing ".", thank you for noticing!
I will make sure I pass it along, should be part of the next version (ETA a week) 🙂
Back to the feature request: if this is taken care of (both adding a missed package, and the S3 upload), do you still believe there is room for this kind of feature?