I'm checking the possibility that it's our firewall between the clearml-agent machine and the local computer running the session
Hi SmugDolphin23
Do you have a timeline for fixing this https://clearml.slack.com/archives/CTK20V944/p1661260956007059?thread_ts=1661256295.774349&cid=CTK20V944
Hi HugeArcticwolf77
I've run the following code - it uploads the files with compression, even though compression=None
ds.upload(show_progress=True, verbose=True, output_url='...', compression=None)
ds.finalize(verbose=True, auto_upload=True)
Any idea why?
I'm guessing .1
is because there were datasets that I could not see - but actually they were there (as sub-projects), so everything is related
I'm looking for the bucket URI
I think my workflow needs to change.
Get the data into the bucket and then create the Dataset using add_external_file,
and then be able to consume the data locally or stream it. And then I can use link_entries - roughly like the sketch below.
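Roughly what I have in mind (just a sketch - the bucket path, project and dataset names are placeholders, and I'm assuming the method is add_external_files (plural) as in recent SDK versions):
from clearml import Dataset

# reference objects that already live in the bucket instead of re-uploading them
ds = Dataset.create(dataset_name='my_dataset', dataset_project='my_project')
ds.add_external_files(source_url='gs://my-bucket/raw-data/')
ds.upload()      # uploads the dataset state (the file list), not the data itself
ds.finalize()

# later, consume it locally - the external links are fetched on demand
local_root = Dataset.get(dataset_name='my_dataset', dataset_project='my_project').get_local_copy()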
I've updated the configuration and now I'm able to see sub-projects that I didn't see before.
As I can see - each dataset is a separate sub-project - is that correct?
Well - that will convert it to a binary pickle format, not parquet -
and since the artifact will be accessed from other platforms, we want to use parquet
Thx CostlyOstrich36 for your reply
Can't see the reference to parquet. We are currently using the above functionality, but the pd.DataFrame is only saved as CSV compressed with gzip.
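A possible workaround I'm considering (a rough sketch, not verified - the project/task names and the file path are placeholders): write the DataFrame to parquet ourselves and upload the file path, so the artifact is stored as-is rather than going through the csv.gz serialization.
import pandas as pd
from clearml import Task

task = Task.init(project_name='my_project', task_name='parquet_artifact')  # placeholder names

df = pd.DataFrame({'a': [1, 2, 3]})
parquet_path = 'data.parquet'
df.to_parquet(parquet_path)  # needs pyarrow or fastparquet installed

# passing a file path uploads the file as-is, so it stays parquet
task.upload_artifact(name='data', artifact_object=parquet_path)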
I found the task in the UI - and in the UNCOMMITTED CHANGES section of the EXECUTION tab it says:
No changes logged
Any other suggestions?
Here is the screenshot - we deleted all the workers except for the one that we couldn't
Hi SuccessfulKoala55
Thx again for your help
In the case of Google Colab, the values can be provided as environment variables.
We still need to run the code in a Colab environment (or a remote client).
Do you have any example of setting the environment variables?
For a general environment variable there is an example: export MPLBACKEND=TkAgg
But what would it be for clearml.conf?
For retrieving we can use config_obj.get('sdk.google'), but how would setting it work? We did ...
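For the api section at least, something like this in the Colab notebook is what I'm thinking of (a sketch - the hosts and keys are placeholders, and the GOOGLE_APPLICATION_CREDENTIALS part is my assumption of how the Google client picks up the SA key, not something I've verified against sdk.google):
import os

# ClearML server credentials - set these before importing/initializing Task
os.environ['CLEARML_API_HOST'] = 'https://api.clear.ml'
os.environ['CLEARML_WEB_HOST'] = 'https://app.clear.ml'
os.environ['CLEARML_FILES_HOST'] = 'https://files.clear.ml'
os.environ['CLEARML_API_ACCESS_KEY'] = '<access_key>'
os.environ['CLEARML_API_SECRET_KEY'] = '<secret_key>'

# assumption: the standard GCP variable pointing at the uploaded SA key
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = '/content/credentials.json'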
Hi SweetBadger76 -
Am I misunderstanding how this test worker runs?
Well, it seems that we have a similar issue: https://github.com/allegroai/clearml-agent/issues/86
Currently we are just creating a new worker on a separate queue.
Possibly - thinking more of https://github.com/pytorch/data/blob/main/examples/vision/caltech256.py - using a ClearML dataset as the root path.
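Something along these lines (a sketch; the dataset name/project are placeholders and ImageFolder is just a stand-in for the caltech256-style dataset class):
from clearml import Dataset
from torchvision.datasets import ImageFolder

# fetch a local copy of the ClearML dataset and use it as the root path
root = Dataset.get(dataset_name='caltech256', dataset_project='datasets').get_local_copy()
train_set = ImageFolder(root=root)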
shape -> tuple([int],[int])
I decided to use
._task.upload_artifact(name='metadata', artifact_object=metadata)
where metadata is a dict
metadata = {**metadata, **{"name":f"{Path(file_tmp_path).name}", "shape": f"{df.shape}"}}
Not sure I understand.
We are running the daemon in detached mode:
clearml-agent daemon --queue <execution_queue_to_pull_from> --detached
will do
A workaround that worked for me is to explicitly complete the task; it seems like the flush has some bug:
task = Task.get_task('...')
task.close()
task.mark_completed()
ds.is_final()
True
SmugDolphin23 Where can I check the latest RC? I was not able to find it in the clearml GitHub repo.
I think I have a lead.
Looking at the list of workers from clearml-agent list
e.g. https://clearml.slack.com/archives/CTK20V944/p1657174280006479?thread_ts=1657117193.653579&cid=CTK20V944
Is there a way to find the worker_name?
In the above example the worker_id is clearml-server-agent-group-cpu-agent-5df4476cfc-j54gh:0
but I'm not able to stop this worker using the command clearml-agent daemon --stop
since this orphan worker has no corresponding clearml.conf
Just for the record - for whoever will be searching for a similar setup with Colab:
Prerequisites:
- create a dedicated Service Account (I was not able to authenticate with regular User credentials, only with an SA)
- get the SA key (credentials.json)
- upload the json to an ephemeral location (e.g. the root of the Colab instance)
- log into the ClearML Web UI and create an access key for the user - https://clear.ml/docs/latest/docs/webapp/webapp_profile#creating-clearml-credentials
Prepare credentials:
%%bash
export api=`ca...
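The truncated bash part aside, the same idea in Python would look roughly like this (all hosts, keys and the google.storage section are placeholders / my assumption of the clearml.conf layout - double-check against your own conf):
import os, pathlib

conf = """
api {
    web_server: https://app.clear.ml
    api_server: https://api.clear.ml
    files_server: https://files.clear.ml
    credentials {
        access_key: "<access_key>"
        secret_key: "<secret_key>"
    }
}
sdk {
    google.storage {
        project: "<gcp-project-id>"
        credentials_json: "/content/credentials.json"
    }
}
"""
# write a minimal clearml.conf into the Colab home directory
pathlib.Path(os.path.expanduser('~/clearml.conf')).write_text(conf)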
Is this running as the same Linux user with which you checked the git ssh clone on that machine?
yes
The only thing that could account for this issue is that somehow the agent is not getting the right info from the ~/.ssh folder
maybe -
Question - if we change the clearml.conf
do we need to stop and start the daemon?
We need to convert it to a DataFrame, since:
Displaying metadata in the UI is only supported for pandas Dataframes for now. Skipping!
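So the workaround on our side is roughly this (a sketch - the project/task names and the metadata values are placeholders): wrap the dict in a one-row DataFrame before uploading.
import pandas as pd
from clearml import Task

task = Task.init(project_name='my_project', task_name='metadata_upload')  # placeholders
metadata = {'name': 'file.parquet', 'shape': '(100, 12)'}                  # example values

# a one-row DataFrame renders in the UI preview, a plain dict does not
task.upload_artifact(name='metadata', artifact_object=pd.DataFrame([metadata]))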
Hi @<1523701205467926528:profile|AgitatedDove14>
I'm having a similar issue.
Also notice the clearml-agent will not change the entry point of the docker, meaning if the entry point does not end with plain bash, it will not actually run anything
Not sure I understand how to run a docker_bash_setup_script
and then run a python script - Do you have an example? I could not find one.
Here is our CLI command
clearml-task --name <TASK NAME> \
--project <PRJ NAME> \
--repo git@gi...
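For the docker_bash_setup_script part, this is the shape of command I'd expect (flag names are from the clearml-task help as I remember it, so treat them as an assumption; the repo, script, queue and setup.sh are placeholders):
clearml-task --name docker-setup-test \
    --project my_project \
    --repo git@github.com:org/repo.git --branch main \
    --script src/train.py \
    --queue default \
    --docker python:3.9 \
    --docker_bash_setup_script setup.sh
Where setup.sh is a bash script that installs whatever system packages the Python script needs before it runs.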