Quite hard for me to try this right.
How do I reproduce it?
I pass my dataset as a parameter of the pipeline:
@<1523704757024198656:profile|MysteriousWalrus11> I think you were expecting the dataset_df dataframe to be automatically serialized and passed, is that correct?
If you are using add_step, all arguments must be simple types (str, int, etc.).
If you want to pass complex types, your code should upload the object as an artifact, and then you can pass the artifact URL (or name) to the next step.
Another option is to use pipeline from dec...
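A minimal sketch of that upload-then-pass pattern, using pickle and a temp file as the serialization mechanism (the helper names are my own, and the actual clearml artifact calls are only indicated in comments):

```python
import os
import pickle
import tempfile

def save_for_next_step(obj, name):
    """Serialize a complex object (e.g. a DataFrame) to a local file.
    Inside the pipeline step you would then upload it as an artifact, e.g.:
        task.upload_artifact(name, artifact_object=path)
    and pass `name` (a plain str) to the next step via add_step parameters."""
    path = os.path.join(tempfile.mkdtemp(), name + ".pkl")
    with open(path, "wb") as f:
        pickle.dump(obj, f)
    return path

def load_in_next_step(path):
    """In the next step, resolve the artifact back to a local file, e.g.:
        path = Task.get_task(task_id).artifacts[name].get_local_copy()
    and deserialize it."""
    with open(path, "rb") as f:
        return pickle.load(f)
```

So only a simple string (the artifact name or URL) crosses the step boundary; the object itself travels through storage.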
Try uploading something to the file server?
Thank you so much!!
I would say 4 vCPUs and 512GB storage, but it really depends on the load you will put on it.
Hi @<1603198134261911552:profile|ColossalReindeer77>
Hello! Does anyone know how to do HPO when your parameters are in a Hydra config?
Basically hydra parameters are overridden with "Hydra/param"
(this is equivalent to the "override" option of hydra in CLI)
OmegaConf is the configuration; the overrides are in the Hyperparameters "Hydra" section.
try Hydra/trainer.params.batch_size
hydra separates nesting with "."
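To make the naming rule concrete, here is a small helper (my own illustration, not a ClearML API) that turns a nested Hydra config into the flat "Hydra/a.b.c" parameter names used for overrides:

```python
def hydra_param_names(cfg, prefix="Hydra/"):
    """Flatten a nested config dict into ClearML-style Hydra override names,
    joining nesting levels with "." and prefixing the "Hydra/" section."""
    flat = {}
    def walk(node, path):
        for key, value in node.items():
            dotted = f"{path}.{key}" if path else key
            if isinstance(value, dict):
                walk(value, dotted)
            else:
                flat[prefix + dotted] = value
    walk(cfg, "")
    return flat
```

For example, `{"trainer": {"params": {"batch_size": 32}}}` flattens to `{"Hydra/trainer.params.batch_size": 32}`, matching the `Hydra/trainer.params.batch_size` name above.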
Hi @<1673501379764686848:profile|VirtuousSeaturtle4>
What I don't get is that the example does not refer to a bucket path. What bucket path should I specify?
you mean to store data?
Hi @<1523704667563888640:profile|CooperativeOtter46>
Is there a way to set the name/path of the requirements.txt file the agent uses to install packages?
When the agent is installing packages, it takes them from the "Installed Packages" section of the Task. Only if that section is empty will it fall back to the "requirements.txt" from the git repository.
That said, you can add the following to your "Installed Packages":
-r my_other_requirements.txt
And the agent will `my_...
The docker crashes and I want to be able to debug it exactly as it is run by the agent.
On your machine (any machine)
pip install clearml-agent
clearml-agent build --id <taskID> --docker "local_mydocker_name"
docker run -it local_mydocker_name bash
And is "requirements-dev.txt" in your git root folder?
What is your clearml-agent version?
Does adding external files not upload them to the dataset output_uri?
@<1523704667563888640:profile|CooperativeOtter46> If you are adding the links with add_external_files, these files are NOT re-uploaded.
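A sketch of the external-files flow (bucket path, project and dataset names are made up; the clearml import is deferred so the snippet stands alone). The point is that add_external_files only records the link, so the bytes stay in the source bucket rather than being copied to the output_uri:

```python
def register_external_links(source_url, dataset_project, dataset_name):
    """Create a dataset version that references files by link (not re-uploaded)."""
    from clearml import Dataset  # lazy import: sketch is importable without clearml

    ds = Dataset.create(dataset_project=dataset_project, dataset_name=dataset_name)
    # Only the link + metadata are stored; the file bytes stay at source_url.
    ds.add_external_files(source_url=source_url)
    ds.upload()    # external entries add metadata only, nothing is re-uploaded
    ds.finalize()
    return ds.id

# Hypothetical usage:
# register_external_links("s3://my-bucket/raw/", "my_project", "raw_links_v1")
```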
Hi @<1523701295830011904:profile|CluelessFlamingo93>
from your log:
ImportError: cannot import name 'packaging' from 'pkg_resources' (/home/bat/.clearml/venvs-builds/3.9/lib/python3.9/site-packages/pkg_resources/__init__.py)
I'm guessing yolox/setuptools
Try adding to the "Installed packages"
setuptools==69.5.1
(Something about the `setup...
using the cleanup service
Wait FlutteringWorm14, the cleanup service, or a task.delete call? (These are not the same.)
Hi IrritableGiraffe81
Yes it deploys all ClearML (including web).
ClearML-serving unfortunately is a bit more complicated to spin, as it needs actual compute nodes.
That said, we are working on making it a lot easier.
IrritableGiraffe81 could it be the pipeline component is not importing pandas inside the function? Notice that a function decorated as a pipeline component becomes stand-alone; this means that if you need pandas you need to import it inside the function. The same goes for all the rest of the packages used.
When you are running with run_locally or debug_pipeline you are using your local env, as opposed to the actual pipeline, where a new env is created inside the repo.
Can you send the Entire p...
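A sketch of the import-inside-the-component pattern (the decorator line is commented out so the snippet stands alone; the function, argument, and return names are my own):

```python
# from clearml.automation.controller import PipelineDecorator

# @PipelineDecorator.component(return_values=["n_rows"])
def load_and_count(csv_path):
    # Import pandas *inside* the function: the component runs as a
    # stand-alone task in its own environment, so module-level imports
    # from the surrounding script are not available when it executes.
    import pandas as pd

    df = pd.read_csv(csv_path)
    return len(df)
```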
Hi IrritableGiraffe81
PipelineDecorator.debug_pipeline() runs everything as regular python functions, but PipelineDecorator.run_locally() is actually simulating all the steps on the same local machine (so that it is easier to debug the "real" pipeline running on multiple machines).
What I think is happening is that the casting of the arguments passed to the component fail.
Basically the type hints are currently ignored (we are working on using them for casting in the next version)
but righ...
Hi IrritableGiraffe81
Can you share a code snippet ?
Generally I would try: task = Task.init(..., auto_connect_frameworks={'pytorch': False, 'tensorflow': False})
I have installed a python environment with the virtualenv tool, let's say /home/frank/env, and python is /home/frank/env/bin/python3. How do I reuse the virtualenv by configuring the clearml agent?
So the agent is already caching the entire venv for you, nothing to worry about. Just make sure you have this line in your clearml.conf:
https://github.com/allegroai/clearml-agent/blob/249b51a31bee97d63f41c6d5542e657962008b68/docs/clearml.conf#L131
No need to provide it an existing...
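For reference, that line sits in the agent's venv-cache section of clearml.conf; a sketch of the section (the exact defaults may differ between versions):

```
agent {
    venvs_cache: {
        max_entries: 10
        free_space_threshold_gb: 2.0
        # uncommenting `path` is what enables the venv cache
        path: ~/.clearml/venvs-cache
    }
}
```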
Hi AttractiveShrimp45
Well, I would use Task.connect to add a section with any configuration you are using, for example:
Task.current_task().connect(my_dict_with_conf_for_data, name="dataset51")
wdyt?
Correct!
btw: my_dict_with_conf_for_data can be any object, not just a dict. It will list all the properties of the object (as long as they do not start with _).
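For example (class and attribute names here are hypothetical, and the clearml call is commented out so the sketch runs standalone), connect can take a plain object and only its public attributes are picked up:

```python
class DatasetConf:
    """Hypothetical configuration object."""
    def __init__(self):
        self.path = "s3://my-bucket/data"  # public attribute: logged
        self.version = 3                   # public attribute: logged
        self._cache = {}                   # leading underscore: ignored

conf = DatasetConf()

# What connect() would list -- public properties only:
logged = {k: v for k, v in vars(conf).items() if not k.startswith("_")}

# from clearml import Task
# Task.current_task().connect(conf, name="dataset51")
```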
Would I be able to add customized columns like I am able to in task.connect? Same question applies for parallel coordinates and all kinds of comparisons.
No to both.
Hi SmarmySeaurchin8
Could you open a bug on GitHub, so this is not lost? Let's assume 'a' is tracked, how would one change 'a' in the UI?
SmarmySeaurchin8 what do you think?
https://github.com/allegroai/trains/issues/265#issuecomment-748543102
task.connect_configuration
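A sketch of the connect_configuration flow (project and task names are made up; the import is deferred so the snippet stands alone). When the task is later executed by an agent, the returned dict reflects any edits made in the UI:

```python
def get_config():
    """Attach a config dict so it shows up (and is editable) in the UI."""
    from clearml import Task  # lazy import: sketch is importable without clearml

    task = Task.init(project_name="examples", task_name="config demo")
    config = {"a": 1, "lr": 0.001}
    # When run by an agent, values edited in the UI override these defaults,
    # so always use the returned object rather than the original dict.
    return task.connect_configuration(config, name="my config")
```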