Is there a specific reason you would want them executed on the same machine? Cache?
@<1545216070686609408:profile|EnthusiasticCow4> , I think add_files
always generates a new version. I mean, you add files to your dataset, so the version has changed. Does that make sense?
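To illustrate the versioning flow (a minimal sketch assuming the standard clearml Dataset API; it needs a configured ClearML server to actually run, and the project, dataset, and folder names are hypothetical):

```python
from clearml import Dataset

# Each Dataset.create() call opens a NEW version; add_files() stages files into it.
ds = Dataset.create(
    dataset_project="my_project",             # hypothetical project name
    dataset_name="my_dataset",                # hypothetical dataset name
    parent_datasets=["<PARENT_DATASET_ID>"],  # previous version, if any
)
ds.add_files(path="data/")  # hypothetical local folder
ds.upload()
ds.finalize()  # closes this version; further changes require creating a new one
```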
Hi RoughTiger69 , you can specify a queue per step with execution_queue
parameter in add_function_step
https://clear.ml/docs/latest/docs/references/sdk/automation_controller_pipelinecontroller
Same goes for the docker image - the docker
parameter in add_function_step
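A minimal sketch of both parameters together (assuming the standard clearml SDK; it needs a configured ClearML server to actually run, and the step, queue, and image names are hypothetical):

```python
from clearml import PipelineController

def preprocess(source_url):  # hypothetical step function
    return source_url

pipe = PipelineController(name="my_pipeline", project="my_project", version="1.0.0")
pipe.add_function_step(
    name="preprocess",
    function=preprocess,
    function_kwargs=dict(source_url="s3://bucket/data"),  # hypothetical input
    execution_queue="gpu_queue",  # per-step execution queue
    docker="nvidia/cuda:11.8.0-runtime-ubuntu22.04",  # per-step docker image
)
pipe.start(queue="services")
```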
Hi ShallowGoldfish8 , can you elaborate please? You mean train with different data?
Hi @<1523701062857396224:profile|AttractiveShrimp45> , I think this is currently by design. How would you suggest doing multiple metric optimization - priority between metrics after certain threshold is met?
Can you please elaborate what AWS Lambda is and what your use case is with it? When running in a regular state does this error occur?
I think you need to do latest_dataset = Dataset.get(dataset_id=<DATASET_ID>)
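For example (a sketch assuming the standard clearml SDK; it needs a configured ClearML server, and the ID placeholder must be filled in with your dataset's ID):

```python
from clearml import Dataset

latest_dataset = Dataset.get(dataset_id="<DATASET_ID>")
local_copy = latest_dataset.get_local_copy()  # read-only cached copy on disk
print(local_copy)
```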
UnevenDolphin73 , can you please provide a screenshot of the window, message and the URL in sight?
Is there any specific reason you're not running in docker mode? Running in docker would simplify things
This can be a bit of a problem as well, since not all packages for 3.8 have the same versions available for 3.6, for example. It's recommended to run on the same Python version, OR have the required Python version installed on the remote machine
TartSeal39 , Hi 🙂
Do I understand correctly that you want to push parameters for Task.create() from a .yml file?
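A minimal sketch of that flow (the file name and parameter keys are hypothetical; a tiny "key: value" parse is used here to avoid a PyYAML dependency, but for real .yml files yaml.safe_load() from PyYAML is the usual choice):

```python
# Write a small example file (normally this would already exist on disk).
with open("task_params.yml", "w") as f:
    f.write("project_name: my_project\ntask_name: my_task\n")

# Parse the file into a dict of parameters.
params = {}
with open("task_params.yml") as f:
    for line in f:
        key, _, value = line.partition(":")
        if key.strip():
            params[key.strip()] = value.strip()

print(params)  # {'project_name': 'my_project', 'task_name': 'my_task'}

# The loaded dict can then be unpacked into Task.create():
# from clearml import Task
# task = Task.create(**params)  # requires a configured ClearML server
```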
WittyOwl57 , It determines the user that created the object. What is the sign in method that you and your team are using?
Hi @<1539780258050347008:profile|CheerfulKoala77> , it seems that you're trying to use the same 'Workers Prefix' setup for two different autoscalers, workers prefix must be unique between autoscalers
Hi ClumsyElephant70 ,
What about:
# pip cache folder mapped into docker, used for python package caching
docker_pip_cache = ~/.clearml/pip-cache
# apt cache folder mapped into docker, used for ubuntu package caching
docker_apt_cache = ~/.clearml/apt-cache
SubstantialElk6 ,
We were trying with 'from task' at the moment. But the question applies to all methods.
You can specify this using add_function_step(..., execution_queue="<QUEUE>")
Make certain tasks in the pipeline run in the same container session, instead of spawning new container sessions? (To improve efficiency)
I'm not sure this is possible currently. This could be a nice feature request. Maybe open a GitHub issue?
Can you please add a screenshot of how the hyper params show in the UI for you?
SubstantialElk6 , can you please verify that you have all the required packages installed locally? Also, in your ~/clearml.conf,
what is the setting of agent.package_manager.system_site_packages?
Setup shell script works in docker mode
WackyRabbit7 I don't believe there is currently a 'children' section for a task. You could try managing the children to access them later.
One option is add_pipeline_tags(True)
this should mark all the child tasks with a tag of the parent task
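For example (a sketch assuming add_pipeline_tags is the argument of the PipelineController constructor; it needs a configured ClearML server, and the names are hypothetical):

```python
from clearml import PipelineController

pipe = PipelineController(
    name="my_pipeline",
    project="my_project",
    add_pipeline_tags=True,  # tags every child task with the parent pipeline's tag
)
```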
Hi @<1671689469218000896:profile|PleasantWalrus77> , is this AWS S3 or something like minio?
I am not very familiar with KubeFlow but as far as I know it is mainly for orchestration whereas ClearML offers a full E2E solution 🙂
I would advise using ClearML 😄
Hi @<1828241063677005824:profile|ReassuredAlligator91> , I think as long as you have access to the email account that signed up to the original workspace, you should be OK. Just pass down the credentials to the account to the relevant people (yourself for example) and manage it from there.
WDYT?
What do you mean by organization? In Enterprise, you have users, roles & access controls based on those roles.
Looks like it's not running in docker mode 🙂
Otherwise you'd have the 'docker run' command at the start
Yep, although I'm quite sure you could build some logic on top of that to manage proper queueing
Hi @<1545216070686609408:profile|EnthusiasticCow4> , I suggest you try ClearML-Serving
Hi RattyLouse61 ,
Do you have an example of the parameters you're trying to connect?