Package detection is done when running the code on your laptop; that is when the packages and versions are first logged. Following that, what do you have on your laptop? OS/Conda/Python?
GiganticTurtle0 is it in the same repository ?
If it is, it should have detected that it needs to analyze the entire repository (not just the standalone script) and then discovered tensorflow.
is number of calls performed, not what those calls were.
oh, yes, this is just a measure of how many API calls are sent.
It does not really matter which ones.
hi ElegantCoyote26
but I can't see any documentation or examples about the updates done in version 1.0.0
So actually the docs are only for 1.0... https://clear.ml/docs/latest/docs/clearml_serving/clearml_serving
Hi there, are there any plans to add better documentation/example
Yes, this is work in progress; the first item on the list is a custom model serving example (kind of like this one https://github.com/allegroai/clearml-serving/tree/main/examples/pipeline )
about...
BTW:
======> WARNING! Git diff to large to store (1327kb), skipping uncommitted changes <======
This means all your git changes are stored as an artifact, which is consistent with the "wait for upload" message.
Hi CleanWhale17 let me see if I can address them all
Email alert for finished jobs (I'm not sure if it's already there).
Slack integration will be public by the end of the weekend 🙂
It is fully customizable / extendable, I'll be happy to help.
DVC
Full dataset tracking is supported using artifacts and the ability to integrate with any central storage (shared folders / S3 / GS / Azure etc.)
From my experience, it is easier to work with artifacts from data-processing tasks...
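To illustrate (a minimal sketch; the names and the task ID are placeholders):
```
from clearml import Task

# producer: a data-processing task registers its output as an artifact
task = Task.init(project_name='examples', task_name='prepare data')
task.upload_artifact('dataset', artifact_object='/path/to/data.csv')

# consumer: any other task fetches it by the producer's task ID
producer = Task.get_task(task_id='<producer_task_id>')
local_csv = producer.artifacts['dataset'].get_local_copy()
```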
Hi SteadyFox10
I promised to mention here once we start working on ignite integration, you can check it here:
https://github.com/jkhenning/ignite/tree/trains-integration
Feel free to provide insights / requests 🙂
As for the model upload: the default behavior is that torch.save() calls will only be logged, nothing more. But if you pass the output_uri field to Task.init, then all your models will be uploaded automatically. For example (the destination is a placeholder):
`task = Task.init('examples', 'model upload test', output_uri='s3://my-bucket/models')`
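Continuing that sketch, any later torch.save() call is then picked up automatically (the model here is just a stand-in):
```
import torch

# assuming `task` was created with output_uri as above, this checkpoint
# is logged as an output model and its weights are uploaded automatically
model = torch.nn.Linear(4, 2)
torch.save(model.state_dict(), 'model.pt')
```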
Hi RipeGoose2
I think it "should" take of uploading the artifacts as well (they are included in the zip file created by the offline package)
Notice that the "default_output_uri" on the remote machine is meaningless as it stored them locally anyhow. It will only have an effect on the machine that actually imports the offline session.
Make sense ?
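For reference, a sketch of the import side (the zip path is a placeholder):
```
from clearml import Task

# run this on the machine that imports the session; this is where
# default_output_uri / output_uri actually takes effect
Task.import_offline_session('/path/to/offline_session.zip')
```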
suppose I have an S3 bucket where my data is stored and I wish to transfer it to ClearML file server.
Then you first have to download the entire bucket locally, then register the local copy.
Basically:
StorageManager.download_folder("
", "/target/folder")
# now register the local "/target/folder" with Dataset.add_files
Notice both arguments need to be str
btw, if you need the entire folder just use StorageManager.upload_folder
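For completeness, registering the local copy once the download above finishes (project/dataset names are placeholders):
```
from clearml import Dataset

ds = Dataset.create(dataset_project="examples", dataset_name="my dataset")
ds.add_files("/target/folder")
ds.upload()    # by default stores the files on the ClearML file server
ds.finalize()
```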
Hi MelancholyElk85
So the way datasets now work is that they are actually an entity (a folder) inside a project, all under the hidden .datasets sub-project.
This is so all data and tasks are in the same project, but at the same time will not intersect with subprojects of the same name. Does that make sense?
Seems like it is working (including seaborn)
but when I run the same task again it does not map the keys..
SparklingElephant70 what do you mean by "map the keys" ?
yes, that makes sense to me.
What is your specific use case, meaning when/how do you stop / launch the hpo?
Would it make sense to continue from a previous execution and just provide the Task ID? Wdyt?
I remember there were some issues with it ...
I hope not 🙂 Anyhow, the only thing that matters is the auto_connect arguments (meaning if you want to disable some, you should pass them when calling Task.init)
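For example, a sketch of disabling a couple of them (project/task names are made up):
```
from clearml import Task

task = Task.init(
    project_name='examples',
    task_name='partial auto connect',
    auto_connect_arg_parser=False,                  # skip argparse logging
    auto_connect_frameworks={'tensorflow': False},  # disable a single framework
)
```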
I'm assuming you are building for x86
It looks like the tag being used is hardcoded to 1.24-18. Was this issue identified and fixed in later versions?
BoredHedgehog47 what do you mean by "hardcoded 1.24-18"? A tag to what? I think I lost context here
The remaining problem is that this way, they are visible in the ClearML web UI which is potentially unsafe / bad practice, see screenshot below.
Ohhh that makes sense now, thank you 🙂
Assuming these are one-time credentials for every agent, you can add these arguments in the "extra_docker_arguments" in clearml.conf
Then make sure they are also listed in: hide_docker_command_env_vars which should cover the console log as well
https://github.com/allegroai/clearml-agent/blob/26e6...
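Something along these lines in the agent section of clearml.conf (the env var name/value are placeholders):
```
agent {
    # pass the one-time credentials into every container
    extra_docker_arguments: ["-e", "MY_SECRET_TOKEN=<token>"]
    # and mask them in the console log / docker command
    hide_docker_command_env_vars {
        enabled: true
        extra_keys: ["MY_SECRET_TOKEN"]
    }
}
```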
Hi JitteryCoyote63
I change the project.default_output_destination? I tried setting it to None but it is not updated
How did you try to change it? And where do you see the effect?
Well, if the "video" from TB is not in mp4/gif format then someone will have to encode it.
I was just pointing out that for the encoding part we might need an additional package.
WickedElephant66 is this issue the same as this one?
https://clearml.slack.com/archives/CTK20V944/p1656537337804619?thread_ts=1656446563.854059&cid=CTK20V944
Depending on your security restrictions, but generally yes.
I suppose one way to perform this is with a ... that kicks ...
Yes, that was my thinking.
It seems more efficient to support a triggered response to task fail.
Not sure I follow this one; I mean, the pipeline logic itself monitors the execution. If I'm not mistaken, a try/except will catch a step that fails, and a global one will catch the entire pipeline. Am I missing something?
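A sketch of what I mean, assuming the decorator-based pipeline (names are made up):
```
from clearml import PipelineDecorator

@PipelineDecorator.component(return_values=['result'])
def flaky_step():
    # step logic that may raise
    return 42

@PipelineDecorator.pipeline(name='example pipeline', project='examples', version='1.0')
def logic():
    try:
        print(flaky_step())    # a try/except catches a single failing step
    except Exception as ex:
        print(f'step failed: {ex}')

try:
    logic()                    # a global try/except catches the entire pipeline
except Exception as ex:
    print(f'pipeline failed: {ex}')
```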
yup! that's what I was wondering if you'd help me find a way to change the timings of. Is there an option I can override to make the retry more aggressive?
you mean wait for less?
add to your clearml.conf:
api.http.retries.backoff_factor = 0.1
apologies WobblyFrog79, somehow I missed your reply,
My workflow is based around executing code that lives in the same repository, so it's cumbersome having to specify repository information all over the place, and changing commit hash as I add new code.
It automatically infers the repo, as long as the pipeline code itself (by that I mean the pipeline logic) is inside the repo when you run it the first time (think development etc.), if it s...
they are just neighboring modules to the function I am importing.
So I think that if you specify the repo, on the remote machine you will end up with the code of the component sitting at the root folder of the repo; from there I assume you can import the rest, and the root git path should be part of your PYTHONPATH automatically.
wdyt?
BTW: CloudyHamster42 I think this issue was discussed on GitHub, and the final "verdict" was that we should have an option to split/combine graphs on the UI side (i.e. similar to the "smoothing" or wall-time axis etc.)