Hi @<1619867994005966848:profile|HungryTurtle13>
I'm using Python's joblib library and the Parallel class to run an experiment in multiple parallel threads.
I believe joblib creates subprocesses, not threads, but yes, you are correct.
Basically, once Task.init is called, every forked/spawned process will be automatically logged to the main process Task (you can, and probably should, call either Task.init or Task.current_task() from the forked processes, but this is just a detail)
The mai...
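A rough sketch of what this could look like with joblib (the project/task names, the run_trial function and the reported values are illustrative, not from the original question; whether the worker picks up the parent Task automatically can also depend on the joblib backend):

from clearml import Task
from joblib import Parallel, delayed

def run_trial(i):
    # inside the forked/spawned worker, attach to the Task created in the parent process
    worker_task = Task.current_task()
    worker_task.get_logger().report_scalar("trial", "value", value=i * 2, iteration=i)
    return i * 2

if __name__ == "__main__":
    task = Task.init(project_name="examples", task_name="joblib demo")
    results = Parallel(n_jobs=4)(delayed(run_trial)(i) for i in range(8))
    print(results)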
no, at least not yet, someone definitely needs to do that though haha
Currently all the unit tests are internal (the hardest part is providing a server they can run against to verify the results, hence the challenge)
For example, if ClearML offered a TestSession that is local and does not communicate with any backend
Offline mode? It stores everything in a folder, then zips it; you can access the target folder or the zip file and verify all the data/state
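For example, a minimal sketch of offline mode (names and the zip path are illustrative):

from clearml import Task

Task.set_offline(offline_mode=True)   # nothing is sent to a backend, everything goes to a local session folder
task = Task.init(project_name="examples", task_name="offline test")
task.get_logger().report_scalar("metric", "variant", value=1.0, iteration=0)
task.close()                          # the session folder is zipped when the Task is closed

# later, the zip can be inspected directly, or imported into a server:
# Task.import_offline_session("/path/to/offline_session.zip")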
I think so. When you say "clearml (bash script..." you basically mean "put my code + packages + and run it", correct?
Hi @<1610808279263350784:profile|FriendlyShrimp96>
Is there a way to get a list of variants given a metric, or even just a full list of metrics and variants for a given task id?
Try this
from clearml.backend_api.session.client import APIClient

c = APIClient()
# list the metrics reported for the given task(s), filtered by event type
metrics = c.events.get_task_metrics(tasks=["TASK_ID_HERE"], event_type="training_debug_image")
print(metrics)
I think API ...
Do you think the local agent will be supported someday in the future?
We can take this code sample and extend it. I can't see any harm in that.
It would make it very easy to run "sweeps" without any "real agent" installed.
I'm thinking of rolling out multiple experiments at once
You mean as multiple subprocesses? Sure, if you have the memory for it
Or can I enable the agent in this kind of local mode?
You just built a local agent
let me check a sec
Yeah, the ultimate goal I'm trying to achieve is to run tasks flexibly; for example, before running, a task could declare how many resources it needs, and the agent would run it as soon as it finds there are enough resources
Check out Task.execute_remotely()
You can put it anywhere in your code; when execution reaches it, if you are running without an agent it will stop the process and re-enqueue the Task to be executed remotely. On the remote machine the call itself becomes a no-op.
I...
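A minimal sketch of how execute_remotely() is typically placed (the queue name and project/task names are assumptions):

from clearml import Task

task = Task.init(project_name="examples", task_name="remote execution demo")

# running locally without an agent: this stops the local process and enqueues the Task;
# running under an agent on the remote machine: this call is a no-op and execution continues
task.execute_remotely(queue_name="default", exit_process=True)

# everything below only runs on the machine that actually executes the Task
print("training starts here")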
Sorry, typo: client.task. should be client.tasks.
from clearml.backend_api.session.client import APIClient

client = APIClient()
result = client.queues.get_next_task(queue='queue_ID_here')
Seems to work for me (latest RC 1.1.5rc2)
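If you want to go one step further towards a "local agent", something along these lines could work (the queue ID, the response field access, and the use of clearml-agent execute are assumptions, treat it as a sketch):

import subprocess
import time

from clearml.backend_api.session.client import APIClient

client = APIClient()
while True:
    result = client.queues.get_next_task(queue='queue_ID_here')
    entry = getattr(result, 'entry', None)
    if entry:
        # run the dequeued Task in the current environment, no daemonized agent needed
        subprocess.run(['clearml-agent', 'execute', '--id', entry.task])
    else:
        time.sleep(10)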
Because it lives behind a VPN and github workers don’t have access to it
makes sense
If this is the case, I have to admit that combining offline-mode and remote execution makes sense, no?
I guess the thing that's missing from offline execution is being able to load an offline task without uploading it to the backend.
UnevenDolphin73 you mean, as in getting the Task object from it?
(This might be doable, the main issue would be the metrics / logs loading)
What would be the use case for the testing ?
ClearML maintains a github action that sets up a dummy clearml-server,
You have one, it's the http://app.clear.ml (not a dummy one, but for this purpose it will work)
thoughts ?
Hi SplendidToad10
In order to run a pipeline you first have to create the steps (i.e. Tasks).
This is usually done by running the code once (basically, running any code with a Task.init call will create a Task for that specific code, including the environment definition the Agent needs to reproduce it)
BTW: there is a full Pipeline class that does everything for you, example here:
https://github.com/allegroai/clearml/tree/master/examples/pipeline
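A minimal sketch of the PipelineController usage (project, step and Task names are illustrative and assume the step Tasks already exist):

from clearml import PipelineController

pipe = PipelineController(name="pipeline demo", project="examples", version="0.0.1")

# each step references an existing Task (created by running the step code once with Task.init)
pipe.add_step(name="stage_data", base_task_project="examples", base_task_name="data task")
pipe.add_step(
    name="stage_train",
    parents=["stage_data"],
    base_task_project="examples",
    base_task_name="train task",
)

pipe.start_locally(run_pipeline_steps_locally=True)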
FrustratingWalrus87 if you need an interactive one, I think there is currently no alternative to TB's tSNE 🙂 it is truly great 🙂
That said you can use plotly for the graph:
https://plotly.com/python/t-sne-and-umap-projections/#project-data-into-3d-with-tsne-and-pxscatter3d
and report it to ClearML with Logger.report_plotly:
https://github.com/allegroai/clearml/blob/e9f8fc949db7f82b6a6f1c1ca64f94347196f4c0/examples/reporting/plotly_reporting.py#L20
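Something along these lines (the random data below stands in for a pre-computed 2D t-SNE/UMAP projection; names are illustrative):

import numpy as np
import plotly.express as px
from clearml import Task

task = Task.init(project_name="examples", task_name="tsne plot")

# stand-in for a real (n_samples, 2) embedding and its per-sample labels
embedding = np.random.rand(100, 2)
labels = np.random.randint(0, 3, size=100).astype(str)

fig = px.scatter(x=embedding[:, 0], y=embedding[:, 1], color=labels)
task.get_logger().report_plotly(title="t-SNE", series="projection", iteration=0, figure=fig)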
Just run once (from your python console / pycharm etc.):
https://github.com/allegroai/clearml/blob/master/examples/automation/toy_base_task.py
Hi ConvincingSwan15
A few background questions:
Where is the code that we want to optimize? Do you already have a Task created from executing that code?
"find my learning script"
Could you elaborate? Is this connected to the first question?
Hmm, maybe the original Task was executed with an older version? (before the section names were introduced)
Let's try: DiscreteParameterRange('epochs', values=[30])
Does that give a warning?
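For context, a minimal sketch of how that parameter range plugs into the optimizer (the base task ID, objective metric names and queue are assumptions; whether the parameter is 'epochs' or 'Args/epochs' depends on how the base Task reported it):

from clearml.automation import DiscreteParameterRange, HyperParameterOptimizer, RandomSearch

optimizer = HyperParameterOptimizer(
    base_task_id="BASE_TASK_ID_HERE",
    hyper_parameters=[DiscreteParameterRange("epochs", values=[30])],
    objective_metric_title="validation",
    objective_metric_series="loss",
    objective_metric_sign="min",
    optimizer_class=RandomSearch,
    execution_queue="default",
)
optimizer.start_locally()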
Hi PricklyJellyfish35
My apologies this thread was forgotten 😞
What's the current status with the OmegaConf? (I'm not sure I understand what you mean by resolve=False)
PricklyJellyfish35 yes that's kind of what I was thinking 🙂
I still wonder if we should configure it or just have both.
Could I ask you to open a GitHub issue on this feature request? I'd love to get some input on what would make more sense to implement. Regardless, it is not a major change and should be very quick to implement
PricklyJellyfish35
Do you mean the original OmegaConf, before the overrides ? or the configuration files used to create the OmegaConf ?
Most likely yes, but I don't see how clearml would have an impact here, I am more inclined to think it would be a pytorch dataloader issue, although I don't see why
These are most certainly dataloader processes. But clearml-agent, when killing the process, should also kill all subprocesses, and it might be that something is preventing it from killing the subprocesses ...
Is this easily reproducible ? Can you verify it is still the case with the latest RC of clearml-agent ?
As I understand, providing this param at Task.init() inside the subtask is too late, because the step has already started.
If you are running the task on an agent (which I assume you do), then one way would be to configure the "default_output_uri" in the agent's clearml.conf file.
The other option is to change the task at creation time: task.storage_uri = 's3://...'
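On the code side, a minimal sketch (the bucket path and names are assumptions):

from clearml import Task

# option 1: set the destination when the Task is created
task = Task.init(
    project_name="examples",
    task_name="output destination demo",
    output_uri="s3://my-bucket/clearml-artifacts",
)

# option 2: set it on an existing Task object (e.g. one created/cloned programmatically)
# task.storage_uri = "s3://my-bucket/clearml-artifacts"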
Hi SpotlessLeopard9
I got many tasks that just hang at the end of the script without ...
I remember this exact issue was fixed with 1.1.5rc0, see here:
https://clearml.slack.com/archives/CTK20V944/p1634910855059900
Can you verify with the latest RC?
pip install clearml==1.1.5rc3
Oh no 😞 I wonder if this is connected to:
Any chance the logger is running (or you have) from a subprocess ?
Hi ShallowArcticwolf27
Does the clearml-task CLI command currently support remote repositories that are intended to be used with SSH?
It does 🙂
but the git@ prefix used for GitLab's SSH seems to default to looking for the repository locally
git@ is always the prefix for SSH repositories (it does not necessarily mean SSH is used; it's what git returns when asked for the origin of the repository). The agent knows (if SSH credentials ...
Hi ConvincingSwan15
For the train.py, do I need a setup.py file in my repo to work correctly with the agent? For now it is just the path to train.py
I'm assuming the train.py is part of the repository, no?
If it is, how come the agent after cloning the repository cannot find it ?
Could it be it was accidentally not added to the git repo ?
Hmm ConvincingSwan15
WARNING - Could not find requested hyper-parameters ['Args/patch_size', 'Args/nb_conv', 'Args/nb_fmaps', 'Args/epochs'] on base task
Is this correct? Can you see these arguments on the original Task in the UI (i.e. the Args section, parameter epochs)?