where is the port? why https ?
Hi JitteryCoyote63,
These properties are usually not available in the UI and are used internally, hence the lack of documentation. Regarding the parent
property, it will hold a parent Task.id (str); that said, it has no real effect on the Task itself. You can, however, search for Tasks with a specific parent ID (for example, this is how the hyperparameter class uses this property)
The only downside is that you cannot see it in the UI (or edit it).
You can now do:
data = {'datatask': 'idhere'}
task.connect(data, 'DataSection')
This will create another section named "DataSection" in the configuration tab. Then you will be able to see/edit the input Task.id
JitteryCoyote63 what do you think?
JitteryCoyote63 I meant to store the parent ID as another "hyper-parameter" (under its own section name), not the data itself.
Makes sense?
Hi @<1729309120315527168:profile|ShallowLion60>
Clearml in our case installed on k8s using helm chart (version: 7.11.0)
It should be done "automatically", I think there is a configuration var in the helm chart to configure that.
What URLs are you seeing now, and what should be there?
now realise that the ignite events callbacks seem to not be fired
So this is an ignite issue ?
Amazing!
Let me know how we can help!
CrookedWalrus33
Force SSH git authentication; the agent will auto-mount the .ssh folder from the host into the docker container:
https://github.com/allegroai/clearml-agent/blob/6c5087e425bcc9911c78751e2a6ae3e1c0640180/docs/clearml.conf#L25
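For reference, the relevant setting in that clearml.conf is roughly (a minimal fragment):

```
agent {
    # force git over SSH instead of HTTPS; in docker mode the agent
    # mounts the host's ~/.ssh into the container
    force_git_ssh_protocol: true
}
```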
If I call explicitly task.get_logger().report_scalar("test", str(parse_args.local_rank), 1., 0), this will log as expected one value per process, so reporting works
JitteryCoyote63 and do prints get logged as well (from all processes)?
Why do you ask? Is your server sluggish?
Notice that the StorageManager has default configuration here:
https://github.com/allegroai/trains/blob/f27aed767cb3aa3ea83d8f273e48460dd79a90df/docs/trains.conf#L76
Then a per-bucket credentials list, with details:
https://github.com/allegroai/trains/blob/f27aed767cb3aa3ea83d8f273e48460dd79a90df/docs/trains.conf#L81
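A rough sketch of that per-bucket section in trains.conf (all values here are placeholders):

```
sdk {
    aws {
        s3 {
            # default credentials, used unless a bucket-specific entry matches
            key: "default_access_key"
            secret: "default_secret"

            credentials: [
                {
                    # per-bucket override
                    bucket: "my-bucket"
                    key: "bucket_access_key"
                    secret: "bucket_secret"
                }
            ]
        }
    }
}
```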
Hi VexedCat68
Could it be the python version is not the same? (this is the only reason not to find a specific python package version)
. but when we try to do a "New Run" from UI, it tries to follow the DAG of previous run (the run with all child nodes skipped) and the new run fails too.
This is odd, is this reproducible ? what's the clearml python package version ?
This is a part of a bigger process which takes quite some time and resources; I hope I can try this soon, if it will help get to the bottom of this
No worries. If you have another handle on how/why/when we lose the current Task, please share
Ohh sorry, you will also need to fix the
def _patched_task_function
The parameter order is important as the partial call relies on it.
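To illustrate why the order matters, a pure-Python sketch (not the actual clearml internals; the signature and names are hypothetical):

```python
from functools import partial

def _patched_task_function(task_id, config, *args):
    # the partial below binds arguments positionally, so the parameter
    # order of this signature must match the order at the call site
    return (task_id, config, args)

# bind the first positional argument now, supply the rest later
bound = partial(_patched_task_function, "task-123")
result = bound({"lr": 0.1}, "extra")
# swapping task_id/config in the signature would silently shift every argument
```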
My bad, no need for that
Thanks @<1715900788393381888:profile|BitingSpider17> for attaching the log, it really helps!
Notice from the log:
'-v', '/home/clearml/.clearml/cache:/clearml_agent_cache'
and as expected we also get:
sdk.storage.cache.default_base_dir = /clearml_agent_cache
Yet I can see the error you pointed out:
FileNotFoundError: [Errno 2] No such file or directory: '/clearml_agent_cache/storage_manager/datasets'
Now, could it be that the same folder is used for both root and...
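For context, the cache location in question comes from this clearml.conf setting (a minimal fragment; the path is just an example):

```
sdk {
    storage {
        cache {
            # mounted into the container as /clearml_agent_cache in the log above
            default_base_dir: "~/.clearml/cache"
        }
    }
}
```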
You can make reports on experiments with interactive graphs
Yes, I can totally see how this is a selling point. The closest is the Project Overview (full markdown capabilities, with the ability to embed links to specific experiments). You can also add a "leader metric", so you can track the project performance/progress over time.
I have to admit that creating a better reporting tool is always pushed down in priority as I think this is a good selling point to management but the actual ...
Hey WickedGoat98
I found the bug: it is due to the fact that the numpy data (passed to plotly) contains both datetime and NaN values, and plotly.js does not like that. I'll make sure this is fixed; in the meantime you can just remove the first row (it contains the NaN):
df = pd.concat([tickerDf.Close, tickerDf_Change.Close_pcent], axis=1)
df = df[1:]
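To make the fix concrete, a small self-contained sketch (synthetic data standing in for tickerDf): pct_change() leaves a NaN in the first row, which is what trips up plotly.js:

```python
import pandas as pd

# synthetic stand-in for tickerDf.Close
close = pd.Series([100.0, 102.0, 101.0], name="Close")
close_pcent = close.pct_change().rename("Close_pcent")  # first row becomes NaN

df = pd.concat([close, close_pcent], axis=1)
df = df[1:]  # drop the NaN row before handing the frame to plotly
```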
Yeah, but I still need to update the links in the clearml server
yes... how many are we talking about here?
Runtime, every time the add_step needs to create a New Task to be enqueued
Anyhow, from your response, is it safe to assume that mixing in clearml code with the core ML task code has not occurred to you as something problematic to start with?
Correct! Actually, we believe it makes it easier: worst case scenario you can always run clearml in "offline" mode without the need for the backend, and later, if needed, you can import that run.
That said, regarding (3), the "mid" interaction is always the challenge; clearml will do the auto tracking/upload of the mod...