Thx for your reply
from the example -
since the `mp_handler` runs
import os
import subprocess
import sys

# re-launch this script as a subprocess with a decremented counter
cmd = [sys.executable, sys.argv[0],
       '--counter', str(counter - 1),
       '--num_workers', str(args.num_workers),
       '--use-subprocess' if args.subprocess else '--no-subprocess']
p = subprocess.Popen(cmd, cwd=os.getcwd())
can I run another subprocess in the mp_worker ?
Hi @<1523701205467926528:profile|AgitatedDove14>
I'm having a similar issue.
Also notice the clearml-agent will not change the entry point of the docker, meaning if the entry point does not end with plain bash, it will not actually run anything
Not sure I understand how to run a docker_bash_setup_script and then run a python script - Do you have an example? I could not find one.
Here is our CLI command
clearml-task --name <TASK NAME> \
--project <PRJ NAME> \
--repo git@gi...
Hi HugeArcticwolf77
I've run the following code - which uploads the files with compression, although compression=None
ds.upload(show_progress=True, verbose=True, output_url='...', compression=None)
ds.finalize(verbose=True, auto_upload=True)
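For reference, a minimal sketch of the surrounding flow, assuming the Dataset is created with the SDK (the dataset name, project, local path and output URL below are placeholders):

from clearml import Dataset

# create a dataset version and attach local files (names/paths are placeholders)
ds = Dataset.create(dataset_name='my_dataset', dataset_project='my_project')
ds.add_files(path='./data')

# upload with compression explicitly disabled, then finalize
ds.upload(show_progress=True, verbose=True,
          output_url='gs://my-bucket/datasets', compression=None)
ds.finalize(verbose=True)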
Any idea why?
Btw - after updating clearml.conf do I need to restart the agent?
I can't see the additional tab under https://clearml.slack.com/archives/CTK20V944/p1658199530781499?thread_ts=1658166689.168039&cid=CTK20V944 , and I reran the task and got the same error
not sure I understand
running `clearml-agent list` I get
`
workers:
- company:
    id: d1bd92...1e52b
    name: clearml
  id: clearml-server-...wdh:0
  ip: x.x.x.x
...
`
Thx CostlyOstrich36 for your reply
Can't see the reference to parquet. We are currently using the above functionality, but the pd.DataFrame is only saved as csv compressed by gz
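As a possible workaround (just a sketch, not a confirmed ClearML feature - the task handle, dataframe and file name below are assumptions), the DataFrame could be written to parquet locally and the file uploaded as an artifact:

import pandas as pd
from clearml import Task

task = Task.current_task()  # assumes a task is already initialized in this process
df = pd.DataFrame({'a': [1, 2, 3]})  # placeholder data

# write parquet ourselves, then upload the file instead of the raw DataFrame
df.to_parquet('data.parquet')
task.upload_artifact(name='data', artifact_object='data.parquet')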
clearml-3.5.0
Hi
you will have to configure the credentials there (in a local clearml.conf or using environment variables)
This is the part that confuses me - is there a way to configure clearml.conf from the values.yaml? I would like GKE to bring up the cluster with the correct credentials without logging into the pods and manually updating the clearml.conf file
ClearML key/secret provided to the agent
When is this provided? Is this during the build?
google.storage {
    credentials = [
        {
            bucket: "clearml-storage"
            project: "my-project"
            credentials_json: "/path/to/creds.json"
        },
    ]
}
No - just emulating - it is more of /home/.../creds.json
Hi AnxiousSeal95 ,
Is there an estimate when the above feature will be available?
Are we supposed to use the "Extra Configurations" from the https://clear.ml/docs/latest/assets/images/ClearML_Server_Diagram-7ea19db8e22a7737f062cce207befe38.png ?
https://docs.google.com/drawings/d/11f-AWVmIq7P0e8bP5OnMUz0hguXm2T_Xqq7iNMA-ANA/edit?usp=sharing
add the google.storage parameters to the conf settings
sdk {
    google.storage {
        credentials = [
            {
                bucket: "clearml-storage"
                project: "dev"
                credentials_json: /path/to/SA/creds/user.json
            },
        ]
    }
}
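If it helps, a quick way to sanity-check that the sdk.google.storage credentials are being picked up (a sketch only - the bucket matches the snippet above and the file names are placeholders):

from clearml import StorageManager

# upload a small test file and fetch it back through the configured credentials
StorageManager.upload_file(local_file='test.txt',
                           remote_url='gs://clearml-storage/test.txt')
local_copy = StorageManager.get_local_copy(remote_url='gs://clearml-storage/test.txt')
print(local_copy)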
Strange
I ran `clearml-agent daemon --stop` and after 10 min I ran `clearml-agent list` and I still see a worker
updated the clearml.conf with empty worker_id/name, ran
clearml-agent daemon --stop
top | grep clearml
killed the pids, ran
clearml-agent list
still both of the workers are listed
we reinstalled the clearml-agent
$ clearml-agent --version
CLEARML-AGENT version 1.2.3
running `top | grep clearml` we can see the agent running
running `clearml-agent list` we can see 2 workers
before running `clearml-agent daemon --stop` we updated the clearml.conf and updated the worker_id and worker_name with the relevant name/id that we can see from `clearml-agent list`
and we get
Could not find a running clearml-agent instance with worker_name=<clearml_worker_na...
yes - the agent is running with --docker
Great - where do I define the volume mount?
Should I build a base image that runs on the server and then use it as the base image in the container?
is this running from the same linux user on which you checked the git ssh clone on that machine?
yes
The only thing that could account for this issue is somehow the agent is not getting the right info from the ~/.ssh folder
maybe -
Question - if we change the clearml.conf do we need to stop and start the daemon?
Still trying to understand what this default worker is.
I've removed clearml.conf and reinstalled clearml-agent
then running `clearml-agent list` gets the following error
Using built-in ClearML default key/secret
clearml_agent: ERROR: Could not find host server definition (missing ~/clearml.conf or Environment CLEARML_API_HOST)
To get started with ClearML: setup your own clearml-server, or create a free account at and run clearml-agent init Then returning the...
not sure I understand
we are running the daemon in a detached mode
clearml-agent daemon --queue <execution_queue_to_pull_from> --detached
Hi,
You may want to consider doing the visualization while creating the Datasets - see https://github.com/thepycoder/asteroid_example/blob/main/get_data.py#L34 which logs the head() of the dataframe
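For instance, a minimal sketch along those lines, assuming a Dataset built with the SDK (the csv path, dataset name and project below are placeholders):

import pandas as pd
from clearml import Dataset

df = pd.read_csv('data.csv')  # placeholder path

ds = Dataset.create(dataset_name='my_dataset', dataset_project='my_project')
ds.add_files(path='data.csv')

# attach a preview of the dataframe head as a table on the dataset
ds.get_logger().report_table(title='data preview', series='head',
                             iteration=0, table_plot=df.head())

ds.upload()
ds.finalize()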