But what I really want to achieve is to share this code:
You mean to share the code between them? Unless this is a "preinstalled" package in the container, each endpoint has its own separate set of modules / files
(this is on purpose, so you could actually change them; just imagine different versions of the same common.py file)
If you have a requirements file then you can specify it:
```
Task.force_requirements_env_freeze(requirements_file='requirements.txt')
```
If you just want the pip freeze output to be shown in your "Installed Packages" section then use:
```
Task.force_requirements_env_freeze()
```
Notice that in both cases you should call the function before you call Task.init()
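For example, a minimal sketch (the project and task names are illustrative):
```
from clearml import Task

# must be called before Task.init()
Task.force_requirements_env_freeze(requirements_file='requirements.txt')

task = Task.init(project_name='examples', task_name='freeze requirements')
```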
btw, what do you mean by "Packages will be installed from projects requirements file" ?
Thanks!
I think this one will cover both cases (the issue is with files on the root of the dataset):
```
if not (fnmatch(k, path) and fnmatch(k if '/' in k else '/{}'.format(k), '*/' + wildcard))}
```
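A self-contained sketch of that matching logic (the file names and patterns here are made up):
```
from fnmatch import fnmatch

files = {'data.csv': 1, 'images/cat.png': 2}
path, wildcard = '*', '*.csv'

# files at the root of the dataset (no '/' in the key) get a leading '/'
# so the '*/<wildcard>' pattern can still match them
kept = {k: v for k, v in files.items()
        if not (fnmatch(k, path) and
                fnmatch(k if '/' in k else '/{}'.format(k), '*/' + wildcard))}

print(kept)  # {'images/cat.png': 2} - the root-level csv was filtered out
```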
Thanks @<1547028074090991616:profile|ShaggySwan64> !!
Passing to the backend guys to take a look
Hi JealousParrot68
no need for decorators, you can just pass the function to schedule_function=<function goes here>
See scheduler here
https://github.com/allegroai/clearml/blob/8708967a5ef4d8529a1a5ea417672e3ebbb258d7/clearml/automation/scheduler.py#L485
And triggers here:
https://github.com/allegroai/clearml/blob/8708967a5ef4d8529a1a5ea417672e3ebbb258d7/clearml/automation/trigger.py#L193
https://github.com/allegroai/clearml/blob/8708967a5ef4d8529a1a5ea417672e3ebbb258d7/clea...
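Roughly, a sketch of the schedule_function usage (the function body is made up; see the linked scheduler.py for the exact semantics of the minute/hour/day arguments):
```
from clearml.automation import TaskScheduler

def my_scheduled_function():
    # hypothetical payload; replace with whatever should run on schedule
    print('running scheduled work')

scheduler = TaskScheduler()
# recurrence is controlled by the minute/hour/day keyword arguments
scheduler.add_task(schedule_function=my_scheduled_function, minute=30, hour=7)
scheduler.start()
```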
Hi ThankfulOwl72, check out the TrainsJob object. It should essentially do what you need:
https://github.com/allegroai/trains/blob/master/trains/automation/job.py#L14
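Something along these lines should work (a sketch only; the base task id, parameter name, and queue are placeholders, and the exact method names are in the linked job.py):
```
from trains.automation.job import TrainsJob

# clone an existing task, override a parameter, and push it into a queue
job = TrainsJob(base_task_id='<base_task_id>',
                parameter_override={'learning_rate': '0.001'})
job.launch(queue_name='default')
job.wait()  # block until the job finishes
print(job.is_completed())
```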
Disable automatic model uploads
Disable the auto upload:
```
task = Task.init(..., auto_connect_frameworks={'pytorch': False})
```
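For example (a minimal sketch; the project and task names are illustrative):
```
from clearml import Task

# PyTorch checkpoints will no longer be uploaded automatically;
# all other framework integrations stay enabled
task = Task.init(project_name='examples', task_name='no auto model upload',
                 auto_connect_frameworks={'pytorch': False})
```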
Ohh I see, okay the next pipeline version (coming very very soon) will have the option of a function as a Task, would that be better for your use case?
(Also, in the case of local execution, and I can totally see why this is important, how would you specify where the current code base is? Are you expecting it to be local?)
I'm all for trying to help with debugging pipelines, because this is really challenging.
BTW: you can run your code as if it is executed from an agent (including the param ove...
SoreDragonfly16 as SmallDeer34 mentioned, you can iterate over the Tasks, pull metrics (with either task.get_last_scalar_metrics or task.get_reported_scalars), then report them on the Task that runs the loop itself with the Logger.
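Something like this sketch (the project name and summary task are made up):
```
from clearml import Task

tasks = Task.get_tasks(project_name='examples')

summary_task = Task.init(project_name='examples', task_name='metrics summary loop')
logger = summary_task.get_logger()

for t in tasks:
    # get_last_scalar_metrics() returns
    # {title: {series: {'last': v, 'min': v, 'max': v}}}
    for title, series_dict in t.get_last_scalar_metrics().items():
        for series, values in series_dict.items():
            logger.report_scalar(title=title, series=series,
                                 value=values['last'], iteration=0)
```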
wdyt?
Which clearml version are you using ?
- There is a workaround for fastai.launch, probably similar to this one:
I think you can do the launching "manually", something like:
https://github.com/allegroai/clearml/blob/fd2d6c6f5d46cad3e406e88eeb4d805455b5b3d8/examples/frameworks/pytorch/pytorch_distributed_example.py#L160
At least until we understand how to fix it automatically
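The "manual" launch boils down to something like this sketch (the worker body is a placeholder; see the linked example for the full integration):
```
import torch.multiprocessing as mp

def worker(rank, world_size):
    # placeholder for the per-process training logic
    print('worker %d of %d started' % (rank, world_size))

if __name__ == '__main__':
    world_size = 2
    # spawn one process per worker instead of relying on fastai.launch
    mp.spawn(worker, args=(world_size,), nprocs=world_size)
```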
Hi @<1541954607595393024:profile|BattyCrocodile47>
Has anyone used ClearML for this use case?
you mean as experiment management / model registry / data? I think this is the bread and butter of clearml
regarding the other options in the list, I think most of them are alternatives to metaflow, not covering the parts you mentioned, no?
Hi @<1720249416255803392:profile|IdealMole15>
I'm assuming you mean on a remote machine with clearml-agent running ?
If you do, then you either use clearml-task to create a Task (Job) and specify the container and script, or click on "Create New Experiment" in the UI and fill out the git repo / script and specify the docker image.
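For the CLI route, something like this (the repo, script, queue, and image are placeholders):
```
clearml-task --project examples --name remote-run \
  --repo https://github.com/user/repo.git --script train.py \
  --queue default --docker python:3.10
```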
Make sense?
I simplified the code, just so I could test it, this one seems to work, feel free to add the missing argparser parts :)
```
from argparse import ArgumentParser
from trains import Task

model_snapshots_path = 'mnt/trains'

task = Task.init(project_name='examples', task_name='test argparser', output_uri=model_snapshots_path)
logger = task.get_logger()


def main(args):
    print('Got args: %s' % args)


if __name__ == '__main__':
    parent_parser = ArgumentParser(add_help=False)
    parent_parser....
```
I can verify the behavior, I think it has to do with the way the subparser was set up.
This was the only way for me to get it to run:
```
script.py test blah1 blah2 blah3 42
```
When I passed specific arguments (for example --steps) it ignored them...
I'm looking into the savefig issue; meanwhile you can disable the popup by adding the following at the top of your code:
```
import matplotlib
matplotlib.rcParams['backend'] = 'agg'
import matplotlib.pyplot
matplotlib.pyplot.switch_backend('agg')
```
Can you test with the latest RC:
```
pip install clearml==1.0.3rc0
```
IrateBee40
Check the first steps here:
https://clear.ml/docs/latest/docs/getting_started/ds/ds_first_steps
(Basically you have to generate credentials / configure your machine so it knows where the server is and how to access it)
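The resulting ~/clearml.conf then contains a section roughly like this (the server URLs are for the hosted service; the keys are placeholders):
```
api {
    web_server: https://app.clear.ml
    api_server: https://api.clear.ml
    files_server: https://files.clear.ml
    credentials {
        "access_key" = "<your-access-key>"
        "secret_key" = "<your-secret-key>"
    }
}
```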
Make sense ?
Hi SubstantialElk6
I'm not sure what you are asking.
Basically the clearml-agent will pull a Task from an execution queue and execute it (based on the definition on the Task, i.e. git repo, python packages, docker image, etc.)
how to make sure it will traverse only current package?
Just making sure there is no bug in the process: if you call Task.init in your entire repo (serve/train), do you end up with an "installed packages" section that contains all the required packages for both use cases?
I have separate packages for serving and training in a single repo. I don't want serving requirements to be installed.
Hmm, it cannot "know" which is which, because it doesn't really trace all the import logs (this w...
I would clone the first experiment, then in the cloned experiment, I would change the initial weights (assuming there is a parameter storing that) to point to the latest checkpoint, i.e. provide the full path/link. Then enqueue it for execution. The downside is that the iteration counter will start from 0 and not the previous run.
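In code that would look roughly like this (the task id, parameter name, and checkpoint link are placeholders):
```
from clearml import Task

template = Task.get_task(task_id='<original_task_id>')
cloned = Task.clone(source_task=template, name='continue from checkpoint')

# point the initial-weights parameter at the latest checkpoint
cloned.set_parameter('Args/initial_weights', '<link/to/last_checkpoint.pt>')

Task.enqueue(task=cloned, queue_name='default')
```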
@<1671689437261598720:profile|FranticWhale40> could you test the fix? just pull & run
allegroai/clearml-serving-triton:1.3.1
allegroai/clearml-serving-inference:1.3.1
Hi @<1523715429694967808:profile|ThickCrow29>
clearml.automation.auto_scaler.AutoScaler which runs smoothly (kudos!!).
NICE!
The only thing I am missing is in the clearml dashboard/orchestration --> Is there a way to make it
hmm kind of needs backend support for that
For now, I can just see the log of the ClearML task to monitor what's happening
Or is this restricted to Pro users?
Yeah, the GCP and AWS autoscaler dashboards are a paid tier feature. But...
Sure:
```
task = Task.init(..., auto_connect_arg_parser={'arg_not_to_log': False})
```
This will cause all argparse arguments to be automatically logged (and later editable), with the exception of the argument arg_not_to_log.
Notice that if you have --arg-something, to exclude it add 'arg_something': False to the dict.
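Putting it together (a minimal sketch; the argument names are made up):
```
from argparse import ArgumentParser
from clearml import Task

parser = ArgumentParser()
parser.add_argument('--learning-rate', type=float, default=0.01)
parser.add_argument('--secret-token', type=str, default='')

# everything argparse sees is auto-logged except 'secret_token'
# (note the dash -> underscore conversion)
task = Task.init(project_name='examples', task_name='argparse exclusion',
                 auto_connect_arg_parser={'secret_token': False})
args = parser.parse_args()
```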
You need to mount it to ~/clearml.conf (i.e. /root/clearml.conf)
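For example (the image name is just a placeholder):
```
docker run -v $HOME/clearml.conf:/root/clearml.conf <your-image>
```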
So in theory you can clone the Task two extra times and push the clones into an execution queue, but the issue might be actually making sure the resources are available. What did you have in mind?
Is there a way to move existing pipelines between projects?
You should be able to. Go to your settings page and turn on "show hidden folders".
Then go to your project; you should see a ".pipeline" sub-project there. Right-click it and move it to another folder.