Yes.
Obviously when you import the offline session, you will need to set it to point to your server with the correct credentials
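For reference, a minimal sketch of what importing an offline session against your own server could look like (the hosts, credentials, and zip path below are placeholders):
```
from clearml import Task

# Point the SDK at your server (all values here are placeholders)
Task.set_credentials(
    api_host="https://api.your-clearml-server.com",
    web_host="https://app.your-clearml-server.com",
    files_host="https://files.your-clearml-server.com",
    key="YOUR_ACCESS_KEY",
    secret="YOUR_SECRET_KEY",
)

# Import the offline session that was recorded with Task.set_offline(True)
# (per my understanding this returns the id of the imported task)
task_id = Task.import_offline_session("/path/to/offline_session.zip")
print(task_id)
```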
Change it to `add_missing_installed_packages=False` here, and see if you end up with the git diff:
https://github.com/allegroai/clearml/blob/1f82b0c4010799be6157f5c845c7f6ac48e71c0c/clearml/backend_interface/task/populate.py#L158
Unfortunately this sounds a classic case of RBAC (role based access control), and only the enterprise version has that feature (I think there is a contact us button on the website for those queries).
The easiest way to support the use case you describe is to share on a Task level.
Depends on what you want to do. What do you want to do?
Full markdown edit on the project so you can create your own reports and share them (you can also put links to the experiments themselves inside the markdown). Notice this is not per experiment reporting (we kind of assumed maintaining a per experiment report is not realistic)
ideally, I want to hardcode, e.g. use_staging = True, enqueue it; and then start the second instance via clone / edit user properties / enqueue in the UI
Oh I see!
Actually the easiest would be to use a Section:
```
task = Task.init(...)
my_params = {'use_staging': True}
task.connect(my_params, name="General")
if my_params['use_staging']:
    # do something
    scheduler = TaskScheduler(...)
```
wdyt?
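If you later want to do the clone / edit / enqueue flow from code instead of the UI, a rough sketch (project, task, and queue names below are placeholders):
```
from clearml import Task

# Clone the template task that has use_staging=True hardcoded (names are placeholders)
template = Task.get_task(project_name="examples", task_name="staging_job")
cloned = Task.clone(source_task=template, name="staging_job (staging off)")

# Override the connected parameter; it lives under the "General" section
cloned.set_parameter("General/use_staging", False)

# Send the clone to an agent queue
Task.enqueue(cloned, queue_name="default")
```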
So could you re-explain assuming my pipeline object is created by
pipeline = PipelineController(...)
?
```
pipe.add_step(
    name='stage_train',
    parents=['stage_process', ],
    monitor_artifact=['my_created_artifact'],
    base_task_project='examples',
    base_task_name='pipeline step 3 train model',
    parameter_override={'General/dataset_task_id': '${stage_process.id}'},
)
```
This will put the artifact named "my_created_artifact" from the step Tas...
Basically it gives it direct access to the host, this is why it is considered less safe (access on other levels as well, like network)
All the 3 steps can be found here:
https://github.com/allegroai/trains/tree/master/examples/pipeline
Hi @<1546665666675740672:profile|AttractiveFrog67>
- Make sure you stored the model's checkpoint (either pass `output_uri=True` in `Task.init`, or manually upload it)
- When you call `Task.init`, pass `continue_last_task=True`
- Now you can do `last_checkpoint = task.models["output"][-1].get_local_copy()` and all you need is to load `last_checkpoint`
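Putting it together, a minimal sketch (project/task names are placeholders, and the actual checkpoint loading depends on your framework; PyTorch is just an assumption here):
```
from clearml import Task

# Resume the previous run; assumes the original run uploaded its checkpoints (output_uri=True)
task = Task.init(
    project_name="examples",      # placeholder
    task_name="training",         # placeholder
    continue_last_task=True,
    output_uri=True,
)

# Fetch a local copy of the last registered output model
last_checkpoint = task.models["output"][-1].get_local_copy()

# Load it with your framework of choice, e.g. PyTorch (assumption):
# import torch
# model.load_state_dict(torch.load(last_checkpoint))
```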
Hi CheekyAnt38
However, now I would like to evaluate my machine learning model directly via API requests, directly over ClearML. Is it possible?
This basically means serving the model, is this what you mean?
Hi @<1601386194774528000:profile|AmusedPanda8>
I think the project name is ./model_training/trained_models/yolov8n-TEST_OKTODELETE/
and for some reason you have "." as a project?
(notice nested projects are automatically created based on the project name, with '/' as the separator)
I called task.wait_for_status() to make sure the task is done
This is the issue. I will make sure wait_for_status() calls reload() at the end, so when the function returns you have the updated object.
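In the meantime, a small workaround sketch: reload the task object yourself after waiting (the task id is a placeholder):
```
from clearml import Task

task = Task.get_task(task_id="...")  # placeholder id
task.wait_for_status()   # blocks until the task reaches a final status
task.reload()            # refresh the local object so outputs/artifacts are up to date
print(task.get_status())
print(list(task.artifacts.keys()))
```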
WickedGoat98 is this related to plotly opening a web page when you call the show() method?
You can do:
```
if not Task.running_locally():
    fig.show()
```
Calling the script without the `PipelineDecorator.run_locally()`, i.e. running the pipeline remotely, still gives the `ModuleNotFoundError: No module named`
Do you have the needed module listed on the pipeline controller Task? (press the details link, then go to the Execution tab / "Installed Packages")
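If it is missing from the detected requirements, one option (just a sketch, the package name below is a placeholder) is to declare it explicitly before the controller Task is created:
```
from clearml import Task

# Make sure the module ends up in the recorded "Installed Packages";
# call this before Task.init / before the pipeline is started
Task.add_requirements("my_missing_module")            # placeholder package name
# Task.add_requirements("my_missing_module", "1.2")   # optionally pin a version
```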
Hi EnchantingWorm39
Great question!
Regarding the data management, I know the enterprise edition has full support for unstructured data, and we plan to soon have a solution for structured data as part of the open source (soon = hopefully within a month's time).
Regarding model serving, I know you can integrate with TFServing or Seldon with very little effort (usually the challenge is creating triggers etc., but in most cases this is custom code anyhow)
I do not have experience with Cortex/B...
. I was just wondering if instead of using local subprocesses, several agents could serve the same purpose (running several pipelines concurrently)
Wouldn't --services-mode (read as multiple simultaneous Tasks on the same agent) solve the issue?
(BTW: if you set the pipeline component target queue to "services", this is exactly what will happen)
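For that last point, a small sketch of pointing a pipeline component at the "services" queue (using the decorator API; the parameter name is per current clearml versions, adjust to your pipeline style):
```
from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.component(execution_queue="services")
def step_one(data):
    # this component runs as its own Task on whatever agent serves the "services" queue
    return data
```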
JitteryCoyote63 S3 should work. You can go to your profile page and check whether you have some old credentials already there; maybe this is the issue.
Hi RoughTiger69
How about using the pipeline decorator as a way to run this logic?
https://github.com/allegroai/clearml/blob/master/examples/pipeline/pipeline_from_decorator.py
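Roughly, the decorator approach looks like this (names and values are placeholders, see the linked example for the complete version):
```
from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.component(return_values=["processed"])
def process(value):
    # each component becomes its own Task when the pipeline runs remotely
    return value * 2

@PipelineDecorator.pipeline(name="example pipeline", project="examples", version="0.1")
def run_logic(value=1):
    processed = process(value)
    print(processed)

if __name__ == "__main__":
    PipelineDecorator.run_locally()  # drop this line to run the steps on agents
    run_logic(value=3)
```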
I think I'm missing the context of where the code is executed....
btw: you can now set the configuration_objects directly when calling add_step
https://clearml.slack.com/archives/CTK20V944/p1633355990256600?thread_ts=1633344527.224300&cid=CTK20V944
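Something along these lines (the exact argument name is my assumption, check the add_step docstring for your clearml version):
```
pipe.add_step(
    name='stage_train',
    base_task_project='examples',
    base_task_name='pipeline step 3 train model',
    # override a configuration object stored on the base task (assumed parameter name)
    configuration_overrides={'OmegaConf': 'lr: 0.001\nepochs: 10'},
)
```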
multiple machines and reporting to the same task.
Out of curiosity, how do you launch it on multiple machines?
reporting to the same task.
So the "funny" think is, they all report on on top (overwriting) the other...
In order for them to report individually, it might be that you need multiple Tasks (i.e. one per machine)
Maybe we could somehow prefix the cpu/network etc. with the rank?! Or should it be a different "title", wdyt?
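For example, each machine could prefix its own reports with its rank, something like this sketch (how you obtain the rank and the shared task id is up to your launcher; the env vars here are assumptions):
```
import os
from clearml import Task

rank = int(os.environ.get("RANK", 0))                       # assumption: rank comes from the launcher
task = Task.get_task(task_id=os.environ["SHARED_TASK_ID"])  # assumption: shared task id passed in

# Prefix the title with the rank so machines do not overwrite each other's reports
task.get_logger().report_scalar(
    title=f"rank_{rank}/cpu_usage",
    series="cpu",
    value=42.0,
    iteration=1,
)
```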
Pseudo-ish code:
1. create pipeline
```
pipeline = Task.create(..., task_type="controller")
pipeline.mark_started()
print(pipeline.id)
```
2. launch step A (pass arguments via command line argument / os environment)
```
task = Task.init(...)
pipeline_id = os.environ['MY_MAIN_PIPELINE']
pipeline_task = Task.get_task(task_id=pipeline_id)
# send some metrics / reports etc.
pipeline_task.get_logger().report_scalar(...)
pipeline_task.get_logger().report_text(...)
```
wdyt? (obviously you need to somehow pass th...
Right, you need to pass "repo" and direct it to the repository path
(BTW, what's the clearml version?)
DeliciousBluewhale87 and is it working?
Hi JitteryCoyote63
The new pipeline is almost ready for release (0.16.2);
it actually contains support for this exact scenario.
Check out the example, and let me know if it fits what you are looking for:
https://github.com/allegroai/trains/blob/master/examples/pipeline/pipeline_controller.py
WackyRabbit7 hmmm, seems like a non-regular character inside the diff.
Let me check something
MagnificentPig49 I was not aware of jsonargparse;
from what I understand it's a nicer way to parse json configuration files, with an argparse-like interface. Did I get that correctly?
Regarding the missing argparse arguments, you are correct, the auto-magic is not working since jsonargparse
is calling an internal ArgumentParser function and not the external one (hence we miss it).
The quickest fix is adding the following line before you call parse_args():
`task.connect(parent_parser)`
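i.e. something like this sketch (assuming parent_parser is your jsonargparse/argparse parser; project and task names are placeholders):
```
from clearml import Task

task = Task.init(project_name="examples", task_name="jsonargparse test")  # placeholders

# Manually connect the parser so its arguments are logged and overridable,
# since the automatic argparse binding misses jsonargparse's internal call
task.connect(parent_parser)

args = parent_parser.parse_args()
```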
On my to do list, but will have to wait for later this week (feel free to ping on this thread to remind me).
Regarding the issue at hand, let me check the requirements it is using.