It seems like the configuration is cached, even when you change the CLI parameters.
@<1523704461418041344:profile|EnormousCormorant39> nice!
Yes, the configuration is cached so that after you set it once you can just call `clearml-session` again without all the arguments
What was the actual issue ? Should we add something to the printout?
Then by default this is the home folder (`~/.clearml`) that is running out of free space
Thanks @<1523701713440083968:profile|PanickyMoth78> for pinging, let me check if I can find something in the commit log, I think there was a fix there...
ReassuredTiger98 if this user passes the following to the task as docker args, it might work:
'-e CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=1'
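For context, a minimal sketch of setting that from code (the project/task names are placeholders, and it assumes a clearml version where `Task.set_base_docker` accepts `docker_arguments`):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="skip-env-install")
# pass the env var as a docker argument so the agent skips the pip install step
task.set_base_docker(
    docker_image="python:3.10",
    docker_arguments="-e CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=1",
)
```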
Or maybe you could bundle some parameters that belong to PipelineDecorator.component into a high-level configuration variable (something like PipelineDecorator.global_config ?)
So in the PipelineController we have a per-step callback and generic callbacks (i.e. for all the steps), is this what you are referring to?
Well, I can see the difference here. With the new pipeline generation, the user has the flexibility to play with the returned values of each step.
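For illustration, a minimal sketch of passing return values between decorated steps (project and step names here are made up):
```python
from clearml import PipelineDecorator

@PipelineDecorator.component(return_values=["data"])
def step_one():
    # each step runs as its own task; the return value is handed to the next step
    return [1, 2, 3]

@PipelineDecorator.component(return_values=["total"])
def step_two(data):
    return sum(data)

@PipelineDecorator.pipeline(name="demo", project="examples", version="0.1")
def pipeline_logic():
    # returned values flow between steps like regular python variables
    data = step_one()
    print(step_two(data))

if __name__ == "__main__":
    PipelineDecorator.run_locally()  # debug mode: run all steps in this process
    pipeline_logic()
```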
Yep 🙂
We...
SmarmySeaurchin8
args = parse.parse()
task = Task.init(project_name=args.project or None, task_name=args.task or None)
You should probably look at the docstring 😉
:param str project_name: The name of the project in which the experiment will be created. If the project does not exist, it is created. If `project_name` is `None`, the repository name is used. (Optional)
:param str task_name: The name of Task (experiment). If `task_name` is `None`, the Python experiment ...
So this is an additional config file with enterprise?
Extension to the "clearml.conf" capabilities
Is this new config file deployable via helm charts?
Yes, you can also set it company/user wide using the clearml Vault feature (again enterprise, sorry 😞 )
The issue itself is the name of the function (bottom line, it has to be unique for every call). So the only very ugly hack is to copy-paste the function X times?! 😞
(I'll see if we can push the fix to GitHub sooner)
Good news a dedicated class for exactly that will be out in a few days 🙂
Basically a task scheduler and a task trigger scheduler, running as a service, cloning/launching tasks either based on time (cron-like) or based on a trigger.
wdyt?
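Once it is out, usage could look roughly like this (a sketch assuming the eventual TaskScheduler interface; the task ID and queue are placeholders):
```python
from clearml.automation import TaskScheduler

scheduler = TaskScheduler()
# clone task 'aabbcc' and enqueue it every day at 02:30 (cron-like)
scheduler.add_task(
    schedule_task_id="aabbcc",
    queue="default",
    minute=30,
    hour=2,
)
scheduler.start()  # blocks; run it as a service to keep it alive
```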
SmarmyDolphin68 okay, what's happening is that the process exits before the actual data is sent (report_matplotlib_figure is an async call, and the data is sent in the background)
Basically you should just wait for all the events to be flushed: task.flush(wait_for_uploads=True)
That said, quickly testing it, it seems it does not wait properly (again I think this is due to the fact we do not have a main Task here, I'll continue debugging)
In the meantime you can just do sleep(3.0)
And it wil...
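Putting the workaround together, a minimal sketch (project/task names are placeholders):
```python
from time import sleep
import matplotlib.pyplot as plt
from clearml import Task

task = Task.init(project_name="examples", task_name="flush-demo")
fig = plt.figure()
plt.plot([1, 2, 3])
# async call: the figure is uploaded in the background
task.get_logger().report_matplotlib_figure("fig", "series", figure=fig, iteration=0)
task.flush(wait_for_uploads=True)  # wait for all events to be flushed
sleep(3.0)  # temporary safety margin while flush does not wait properly here
```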
Hi @<1657918706052763648:profile|SillyRobin38>
I have included some print statements, specifically within the containers where the inferencing occurs.
you should see those under the Task of the inference instance.
You can also do:
import clearml
...
def preprocess(...):
    clearml.Logger.current_logger().report_text(...)
    clearml.Logger.current_logger().report_scalar(...)
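For a fuller picture, a hypothetical sketch of a serving preprocess module with explicit reporting (the class/method layout follows the clearml-serving Preprocess convention; titles and values are made up):
```python
from clearml import Logger

class Preprocess(object):
    def preprocess(self, body, state, collect_custom_statistics_fn=None):
        # report_text ends up in the console log of the inference Task
        Logger.current_logger().report_text(f"received request: {body}")
        # scalars need title/series/value/iteration
        Logger.current_logger().report_scalar(
            title="requests", series="count", value=1, iteration=0
        )
        return body
```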
it might be that fastapi is capturing the prints...
https://github.com/tiangolo/uvicor...
Hi @<1528908687685455872:profile|MassiveBat21>
However, no useful template is created for downstream executions - the source code template is all messed up,
Interesting, could you provide the code that is "created", or even better some way to reproduce it? It sounds like sort of a bug, or maybe missing support for a feature.
My question is - what is a best practice in this case to be able to run exported scripts (python code not made availa...
I had no idea it was going to do that and sent your servers over 1.4M API hits unintentionally
Yeah, that is way too much, I think it relates to the frequency at which it updates the console 😞
Hi @<1576381444509405184:profile|ManiacalLizard2>
If you make sure all server access is via a host name (i.e. instead of IP:port, use host_address:port), you should be able to replace it with cloud host on the same port
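For example, the endpoints in `clearml.conf` would then read something like (host names are placeholders; the ports are the defaults):
```
api {
    web_server: "http://clearml-web.example.com:8080"
    api_server: "http://clearml-api.example.com:8008"
    files_server: "http://clearml-files.example.com:8081"
}
```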
I think you are correct, and the first time you spin the server it is not possible (I mean you need it up to get the access/secret key, and only then can you insert them into the helm values) ... 😞
Yes you have to spin the server in order to generate the access/secret key...
(Not sure it actually has that information)
Bummer... that seems like a bit of an oversight tbh.
There is never a solution for those, unless the helm chart "knows" something about the server before spinning it the first time, which basically means a predefined access-key, I do not think we want that 😉
I have to admit, I'm not sure...
Let me talk to backend guys, in theory you are correct the "initial secret" can be injected via the helm env var, but I'm not sure how that would work in this specific case
I think that by default the zipped package files are 0.5GB
(you can control it, look for --chunk-size)
I think the missing part of the api is understanding which chunk your specific file is stored in.
You can do something like:
from clearml import Dataset

ds = Dataset.get(...)
the_artifact_chunk_I_need = ds.file_entries_dict["my/file/here"].artifact_name
wdyt?
maybe worth adding an interface?
I see, give me a minute to check what would be the easiest
WackyRabbit7 if this is a single script running without a git repo, you will actually get the entire code in the uncommitted changes section.
Do you mean get the code from the git repo itself ?
Thank you @<1523701949617147904:profile|PricklyRaven28> !!!
Let me see if we can reproduce and how to solve it
I would like to start off by saying that I absolutely love clearml.
@<1547028031053238272:profile|MassiveGoldfish6> thank you for saying that! 😍
Is it possible to download individual files from a dataset without downloading the entire dataset? If so, how do you do that?
Well by default files are packaged into multiple zip files, you can control the size of the zip file for finer granularity, but at the end when you download, you are downloading the entire packaged ...
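If chunk-level granularity is enough, here is a sketch of pulling a single chunk instead of everything (dataset names are placeholders; it assumes a clearml version where `get_local_copy` supports `part`/`num_parts`):
```python
from clearml import Dataset

ds = Dataset.get(dataset_project="examples", dataset_name="my_dataset")
# fetch only the first chunk instead of the whole packaged dataset
local_path = ds.get_local_copy(part=0, num_parts=ds.get_num_chunks())
```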
And having a pdf is easier/better than sharing a link to the results page ?
I think this is the main issue, is this reproducible ? How can we test that?
t = Task.get_task('aabbcc')
t.update_task(task_data={'task_type': 'optimizer'})