But they are all running inside the same pod, correct ?
For now we've monkey-patched it to our use case:
LOL, that's a cool hack
That gives us the benefit of creating "local datasets" (confined to the scope of the project; they do not appear in the Datasets tab, but appear as normal tasks within the project)
So what would be a "perfect" solution here?
I think I'm missing the point on why it became an issue in the first place.
Notice that in new versions Datasets will be registered on the Tasks that use them (they are already...
Ohh, sorry 🙂
:param run_pipeline_steps_locally: (default False) If True, run the pipeline steps themselves locally as a subprocess (use for debugging the pipeline locally; notice the pipeline code is expected to be available on the local machine)
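For context, a minimal sketch of using that flag, assuming a PipelineController instance (the pipeline name/project and the steps themselves are placeholders):
```python
from clearml import PipelineController

pipe = PipelineController(name="debug-pipeline", project="examples", version="0.1")
# ... add pipeline steps here ...

# run the controller locally AND execute each step as a local subprocess,
# instead of enqueuing the steps for an agent
pipe.start_locally(run_pipeline_steps_locally=True)
```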
A definite maybe, they may or may not be used, but we'd like to keep that option
The precursor to the question is the idea of storing local files as "input artifacts" on the Task, which means that if the Task is cloned the links go with it. Let's assume for a second that this is the case: how would you upload these artifacts in the first place?
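For illustration, a minimal sketch of uploading a local file as a Task artifact with the existing API (the project, task, and file names are made up):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="artifact upload")

# store a local file on the Task; if the Task is later cloned,
# the artifact reference travels with the clone
task.upload_artifact(name="input_data", artifact_object="data/input.csv")
```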
@<1556812486840160256:profile|SuccessfulRaven86> is the issue with flask reproducible? If so, could you open a GitHub issue so we do not forget to look into it?
SuperiorDucks36 you mean to manually set an experiment (and the dummy Task is just a way to have an entry to configure), do I understand you correctly ?
Following up on that, we are thinking of doing it all for you with a CLI that will basically create a task from code/a repo you already have on your machine. What do you think?
Ok, just my ignorance then?
LOL, no it is just that with a single discrete parameter the strategy makes less sense 🙂
Hi GrotesqueOctopus42 ,
BTW: is it better to post the long error message on a reply to avoid polluting the channel?
Yes, that is appreciated 🙂
Basically logs in the thread of the initial message.
To fix this I had to spin up the agent using the --cpu-only flag (--docker --cpu-only)
Yes, if you do not specify --cpu-only it will default to trying to access the GPUs
Nice!
This seems to only work for a single file (weights_path implies a single file, not multiple ones). Is that the case?
See update_weights_package , which actually packages an entire folder as a zip and will do the extraction when you get it back (check the function docstring; I think you can also specify wildcards etc. if needed)
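A hedged sketch of packaging a whole checkpoint folder this way (the folder path, project, and task names are hypothetical):
```python
from clearml import Task, OutputModel

task = Task.init(project_name="examples", task_name="model packaging")
output_model = OutputModel(task=task, framework="PyTorch")

# packages the entire folder as a single zip and uploads it as the model weights;
# when retrieved on the consumer side, the package is extracted back to files
output_model.update_weights_package(weights_path="./checkpoints/")
```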
Why do you see this as preferred to the dataset method we have now?
So it answers a few requirements that you raised
It is fully visible as part of the project and se...
Sure SharpDove45 :
from clearml import Model
model = Model('model_id_aabbcc')
model.system_tags += ['archived']
Thank you @<1523720500038078464:profile|MotionlessSeagull22> always great to hear 🙂
BTW, if you feel like sharing your thoughts with us, consider filling out our survey; it should not take more than 5 min
Could it be that this is the callback that causes it?
None
Hi RipeGoose2
Any logs on the console ?
Could you test with a dummy example on the demoserver ?
This, however, requires that I slightly modify the clearml helm chart with the aws-autoscaler deployment, right?
Correct 🙂
file and redirect the public URL to the k8s DNS URL?
Yes! that would work, Nice!
You can add it into the extra_docker_shell_script
it will be executed in any pod the clearml-glue will spin up (obviously this needs to be configured on the pod running the clearml k8s glue)
https://github.com/allegroai/clearml-agent/blob/ba2db4e727b90e595df2b13f458d9580659bf12e/docs/clearml.conf#L152
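For reference, a rough sketch of what that entry could look like in clearml.conf (the shell commands themselves are just placeholders):
```
agent {
    # executed inside the container/pod before the task starts
    extra_docker_shell_script: [
        "apt-get install -y libsm6 libxext6",
        "echo 'extra setup done'",
    ]
}
```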
Regarding the demo app, this is just a default server that allows you to start playing around with ClearML without needing to set up any servers of your own or sign up
That said, I would recommend signing up (totally free) on the community server
https://app.community.clear.ml/
Hi WhimsicalLion91
You can always explicitly send a value:
from trains import Logger
Logger.current_logger().report_scalar("title", "series", iteration=0, value=1337)
A full example can be found here:
https://github.com/allegroai/trains/blob/master/examples/reporting/scalar_reporting.py
Hi DepressedFish57
In my case downloading each part takes ~5 seconds, and unzipping ~15.
We ran into that, and the new version will employ a multithreading approach for the unzip (meaning the unzipping will happen in the background)
ReassuredTiger98
Okay, but you should have had the prints "uploading artifact" and "done uploading artifact"
So I suspect something is going on with the agent.
Did you manage to run any experiment on this agent ?
EDIT: Can you try with artifacts example we have on the repo:
https://github.com/allegroai/clearml/blob/master/examples/reporting/artifacts.py
using the cleanup service
Wait FlutteringWorm14, the cleanup service or a task.delete call? (these are not the same)
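For context, a minimal sketch of the explicit call (the task ID is made up), as opposed to the scheduled cleanup service:
```python
from clearml import Task

task = Task.get_task(task_id="aabbccdd11223344")
# permanently deletes the task and, by default, its artifacts and models
task.delete()
```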
you are correct, I was referring to the template experiment
Hi PleasantGiraffe85
Did you set git_host to only point to your host? Do you expect all the git clones to use SSH? What does the requirements.txt git link look like?
https://github.com/allegroai/clearml-agent/blob/bf07b7f76d3236c1118b81730c6d9718705a795a/docs/clearml.conf#L22
instead of terminating them once they are inactive, so that they could be available immediately when they are needed.
JitteryCoyote63 I think you can increase the IDLE timeout on the autoscaler, and achieve the same behavior, no?
No worries, and I hope you manage to get that backup.
right now I can't figure out how to get the session in order to get the notebook path
you mean the code that fires "HTTPConnectionPool" ?
Hi @<1578193378640662528:profile|MoodySeaurchin4>
but is it possible to log some metrics too, like rmse or the likes? If so, how would you do it?
Sure, I'm assuming this is part of the output? If not, this means this is part of your code, and if this is the case then yes, you should use collect_custom_statistics_fn
None
`collect_custom_statistics_fn({'rmse'...
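A rough sketch of the idea, assuming a clearml-serving Preprocess class; the "predictions"/"targets" keys and the RMSE computation are placeholders for whatever your code actually produces:
```python
import numpy as np

# preprocess.py for clearml-serving (sketch, not the full class)
class Preprocess:
    def postprocess(self, data, state, collect_custom_statistics_fn=None):
        # assume 'data' already holds model predictions and ground-truth targets
        preds = np.asarray(data.get("predictions", []))
        targets = np.asarray(data.get("targets", []))
        if collect_custom_statistics_fn and preds.size:
            rmse = float(np.sqrt(np.mean((preds - targets) ** 2)))
            # report a custom statistic so it shows up in the serving monitoring
            collect_custom_statistics_fn({"rmse": rmse})
        return data
```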
Great ascii tree 🙂
GrittyKangaroo27 assuming you are doing:
@PipelineDecorator.component(..., repo='.')
def my_component():
    ...
The function my_component will be running in the repository root, so in theory it could access the packages 1/2
(I'm assuming here directory "project" is the repository root)
Does that make sense ?
BTW: when you pass repo='.' to @PipelineDecorator.component it takes the current repository that exists on the local machine running the pipel...
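A hedged sketch of that pattern; the local package name ("project") and the function bodies are made up:
```python
from clearml import PipelineDecorator

@PipelineDecorator.component(repo=".", packages=["pandas"])
def my_component():
    # because repo='.' points at the repository root, local packages that
    # live in the repo (e.g. ./project/utils.py) can be imported directly
    from project import utils  # hypothetical local package
    return utils.do_something()

@PipelineDecorator.pipeline(name="example pipeline", project="examples", version="0.1")
def my_pipeline():
    my_component()

if __name__ == "__main__":
    PipelineDecorator.run_locally()
    my_pipeline()
```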
but when the dependencies are installed, the git creds are not taken into account
I have to admit, we missed that use case 😞
A quick fix will be to use git ssh, which is system wide.
but I want now to switch to git auth using a Personal Access Token for security reasons
Smart move 😉
As for the git repo credentials, we can always add them when you are using user/pass. I guess that would be the behavior you are expecting, unless the domain is different......
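For reference, a rough sketch of where user/pass (or a Personal Access Token used as the password) would go in clearml.conf; the values are placeholders:
```
agent {
    # credentials used by the agent when cloning over https
    git_user: "my-git-username"
    git_pass: "my-personal-access-token"
    # optionally restrict credentials to a specific git host
    git_host: "github.com"
}
```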