And the Slack thing is actually a good workaround for this, since people can just comment easily
Any reference for a similar integration between Slack and other platforms?
I'm thinking maybe the easiest way is a Slack bot where you can @ a task ID?
so that you can get the latest artifacts of that experiment
What do you mean by "the latest artifacts"? Do you have multiple artifacts on the same Task, or is it the latest Task holding a specific artifact?
DisturbedWorm66 it does, I think there is an example here:
https://github.com/allegroai/nvidia-clearml-integration/tree/main/tlt
My current experience is that there is only a printout in the console but no training graph
Yes, Nvidia TLT needs to actually use TensorBoard for ClearML to catch and display it.
I think that in the latest version they added that. TimelyPenguin76 might know more
JumpyPig73 you should be able to find it at the bottom of the page, try scrolling down (it should be after the installed packages)
Hi SmoggyGoat53
What do you mean by "feature store" ? (These days the definition is quite broad, hence my question)
I'm assuming you are looking for the AWS autoscaler, spinning EC2 instances up/down and running daemons on them.
https://github.com/allegroai/clearml/blob/master/examples/services/aws-autoscaler/aws_autoscaler.py
https://clear.ml/docs/latest/docs/guides/services/aws_autoscaler
Hi SarcasticSparrow10
I think the default search is any partial match, let me check if there is a way to do some regexp / wildcard
Hi SarcasticSparrow10 ,
So the bad news is the UI actually escapes the query, so you cannot search with a regexp from the UI. The good news: you can achieve that from Python:
from trains import Task
tasks = Task._query_tasks(task_name='exp.*i1')
Usually in the /tmp folder under a temp filename (it is generated automatically when spun up)
In case of the services, this will be inside the docker itself
btw:
If you need to access it, just bash into the running docker:
docker exec -it <container_name> /bin/bash
SarcasticSparrow10 LOL there is a hack around it 🙂
Run your code with python -O
Which basically skips over all assertion checks
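For instance, a minimal sketch of what -O does to asserts (the checked_div name is mine, not from the thread):

```python
# Demonstrates how the -O flag disables assert statements.
# Under `python script.py` the guard fires; under `python -O script.py` it is skipped.

def checked_div(a, b):
    assert b != 0, "division by zero guard"
    return a / b

if __debug__:
    # __debug__ is True in normal runs and False under -O,
    # mirroring exactly when asserts are active
    print("asserts are active")
else:
    print("asserts are skipped (-O)")

print(checked_div(10, 2))
```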
Hi SarcasticSparrow10 , so yes it does, this is more efficient when using pytorch loaders, and in some other situations.
To disable it, add to your clearml.conf:
sdk.development.report_use_subprocess = false
2. interesting error, maybe we can revert to "thread mode" if running under a daemon. (I have to admit, I'm not sure why python has this limitation, let me check it...)
Yes it does. I'm assuming each job is launched using a multiprocessing.Pool (which translates into a sub process). Let me see if I can reproduce this behavior.
Okay, I was able to reproduce; this will only happen if you are running from a daemon process (like in the case of a process pool). Python is sometimes very picky when it comes to multi-threading/processes. I'll check what we can do 🙂
This is odd I was running the example code from:
https://github.com/allegroai/clearml/blob/master/examples/pipeline/pipeline_from_decorator.py
It is stored inside a repo, but the steps that are created (i.e. checking the Task that is created) do not have any repo linked to them.
What's the difference ?
I look forward to your response on Github.
Great, I would like to make this discussion a bit more open and accessible so GitHub is probably better
I'd like to start contributing to the project...
That will be awesome!
For example, store inference results, explanations, etc. and then use them in a different process. I currently use a separate database for this.
You can use artifacts for complex data, then retrieve them programmatically.
Or you can manually report scalars / plots etc. with the Logger
class; you can also retrieve them with task.get_last_scalar_metrics
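A small sketch of that flow (assumes a configured ClearML server; store_results / load_results and the "inference_results" artifact name are my own illustrative choices, not from the thread):

```python
# Sketch: persisting inference results on a Task and reading them back
# from a different process. Nothing below touches the server until called.
from typing import Any

def store_results(task, results: dict) -> None:
    # upload_artifact serializes the object and attaches it to the task
    task.upload_artifact(name="inference_results", artifact_object=results)

def load_results(task_id: str) -> Any:
    from clearml import Task
    task = Task.get_task(task_id=task_id)
    # .get() downloads the artifact and deserializes it back into a Python object
    return task.artifacts["inference_results"].get()
```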
I see that you guys have made a lot of progress in the last two months! I'm excited to dig in
Thank you!
You can further di...
Hi RoughTiger69
unfortunately, the model was serialized with a different module structure - it was originally placed in a (root) module called
model
....
Is this like a pickle issue?
Unfortunately, this doesn't work inside clear.ml since there is some mechanism that overrides the import mechanism using
import_bind
.
__patched_import3
What error are you getting? (meaning why isn't it working)
Nice workaround!
RoughTiger69 how do I reproduce this behavior? (I'm still unsure on why exactly the clearml binding broke it, and would like to fix that)
(can you also provide the crash trace, maybe that could help as well)
Hi SarcasticSparrow10
which database services are used to...
Mongo & Elastic
You can query everything using ClearML interface, or talk directly with the databases.
Full RestAPI is here:
https://clear.ml/docs/latest/docs/references/api/endpoints
You can use the APIClient for easier pythonic interface:
See example here
https://github.com/allegroai/clearml/blob/master/examples/services/cleanup/cleanup_service.py
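A minimal sketch of the APIClient pattern used in that example (needs a configured ClearML server; list_recent_tasks is my own wrapper name):

```python
# Sketch: querying tasks through the pythonic APIClient instead of raw REST.
# The client reads server credentials from clearml.conf, so nothing
# below runs until the function is actually called.

def list_recent_tasks(page_size: int = 10):
    from clearml.backend_api.session.client import APIClient
    client = APIClient()
    # get_all supports server-side sorting/paging, same as in the cleanup example
    return client.tasks.get_all(order_by=["-last_update"], page_size=page_size)
```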
What is the exact use case you have in mind?
Ohh, if this is the case, and this is a stream of constant inference results, then yes, you should push it to some stream-supported DB.
Simple SQL tables would work, but for actual scale I would push into a Kafka stream, then pull it (serially) somewhere else and push into a DB
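A toy sketch of that stream-then-drain pattern, with an in-memory queue standing in for the Kafka topic and SQLite standing in for the real DB (all names here are mine):

```python
# Toy version of: producers push to a stream, one serial consumer
# drains it and writes to a DB. queue.Queue stands in for Kafka,
# sqlite3 stands in for the real database.
import json
import queue
import sqlite3

topic = queue.Queue()  # stand-in for a Kafka topic
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE results (task_id TEXT, payload TEXT)")

# Producers push inference results onto the stream
for i in range(3):
    topic.put({"task_id": f"task-{i}", "score": i / 10})

# A single serial consumer drains the stream into the DB
while not topic.empty():
    msg = topic.get()
    db.execute(
        "INSERT INTO results VALUES (?, ?)",
        (msg["task_id"], json.dumps(msg)),
    )
db.commit()

rows = db.execute("SELECT COUNT(*) FROM results").fetchone()[0]
print(rows)  # 3 rows inserted
```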
The problem is of course filling in all the configuration details, so that they are viewable.
Other than that, check out:
https://allegro.ai/docs/task.html#trains.task.Task.export_task
https://allegro.ai/docs/task.html#trains.task.Task.import_task
Sounds good?
Hmm, makes sense. Then I would call export_task once (kind of the easiest way to get the entire Task object description pre-filled for you); with that, you can just create as many as needed by calling import_task.
Would that help?
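Roughly like this (clone_task_data and make_clones are hypothetical helper names of mine; only make_clones touches the server, so it needs a running trains/clearml server):

```python
# Sketch: create many task copies via export_task / import_task.

def clone_task_data(exported: dict, new_name: str) -> dict:
    """Copy an exported task description, rename it, and drop the old id."""
    data = dict(exported)
    data["name"] = new_name
    data.pop("id", None)  # let the server assign a fresh id on import
    return data

def make_clones(task, names):
    # task.export_task() returns the full description dict once;
    # Task.import_task() creates a new task from each modified copy.
    from trains import Task
    exported = task.export_task()
    return [Task.import_task(clone_task_data(exported, name)) for name in names]
```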
. Could you clarify the question for me, please?
...
Could you please point me to the piece of ClearML code related to the downloading process?
I think I mean this part:
https://github.com/allegroai/clearml/blob/e3547cd89770c6d73f92d9a05696018957c3fd62/clearml/datasets/dataset.py#L2134