PanickyMoth78
Moderator
34 Questions, 167 Answers
Active since 10 January 2023
Last activity 5 months ago

Reputation: 0

Badges: 1
166 × Eureka!
0 Hi. I'm Running This Little Pipeline:

Note that if I change the component to return a regular, meaningless string ("mock_path"), the pipeline completes rather quickly and the dataset is not uploaded.

2 years ago
0 Hi. I'm Running This Little Pipeline:

I found that instead of returning some_returned_url (which triggers zipping and saving of the files under that URL), I can wrap it in a dict: {"the url": some_returned_url}, which lets me pass the URL back to the pipeline so that only the dict gets uploaded (e.g. {'run_datasets_path': Path('/data/my_datasets_path/run_id_1')}). I can divert all the files that I do want uploaded and tracked by ClearML to gs:// by adding at the start of the task function: ` Logger.current_logger().se...
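A minimal sketch of the dict-wrapping workaround described above (the component name, parameter, and paths are hypothetical; the decorator-based API is just one way to define a step):

```
from pathlib import Path

from clearml import PipelineDecorator

@PipelineDecorator.component(return_values=["run_info"])
def make_datasets(run_id: str) -> dict:
    # Hypothetical output location for the files produced by this step.
    datasets_path = Path("/data/my_datasets_path") / run_id
    # Returning the Path itself would trigger zipping and uploading the
    # folder as an artifact; wrapping it in a dict passes only the dict
    # back to the pipeline and leaves the files on disk untouched.
    return {"run_datasets_path": str(datasets_path)}
```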

2 years ago
2 years ago
0 Hi. I'd Like To Try The GCP Autoscaler.

I'll do a clean relaunch of everything (scaler and pipeline)

2 years ago
0 Hi. I'd Like To Try The GCP Autoscaler.

So...
I restarted the autoscaler with this configuration object:
[{"resource_name": "cpu_default", "machine_type": "n1-standard-1", "cpu_only": true, "gpu_type": null, "gpu_count": 1, "preemptible": false, "num_instances": 5, "queue_name": "default", "source_image": "projects/ubuntu-os-cloud/global/images/ubuntu-1804-bionic-v20220131", "disk_size_gb": 100}, {"resource_name": "cpu_services", "machine_type": "n1-standard-1", "cpu_only": true, "gpu_type": null, "gpu_count": 1, "preemptible": fa...

2 years ago
0 Hi. First Time User Here

In case anyone else is interested, we found two alternative solutions:
Repeating the first steps, but from within a Docker container (docker run -it --rm python:3.9 bash), worked for me.
Alternatively: the example tasks (or at least those I've tried) that appear in the ClearML examples within a new workspace have clearml==0.17.5 (an old clearml version) listed in "INSTALLED PACKAGES". Updating the clearml package within the task to 1.5.0 let me run the clearml-agent daemon lo...
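If you'd rather pin the newer version from code than edit "INSTALLED PACKAGES" in the UI, a sketch along these lines may work (assuming Task.add_requirements; the project and task names are hypothetical):

```
from clearml import Task

# Must be called before Task.init() so the requirement is recorded
# for remote execution; the version pin mirrors the one mentioned above.
Task.add_requirements("clearml", "1.5.0")
task = Task.init(project_name="examples", task_name="my_task")
```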

2 years ago
0 Hi. I'd Like To Try The GCP Autoscaler.

TimelyPenguin76, CostlyOstrich36, thanks again for trying to work through this.

How about we change approach to make things easier?

Can you give me instructions on how to start a GCP autoscaler of your choice that would work with the ClearML pipeline example, such as the one I shared earlier: https://clearml.slack.com/files/U03JT5JNS9M/F03PX2FSTK2/pipe_script.py ?

At this point, I just want to see an autoscaler that actually works (I'd need resources for the two queues, default and ...

2 years ago
0 Hi (Again... Sorry For Asking So Many Questions) Question About Using Google Cloud Storage In A ClearML Agent Running In AWS EC2 Instance. My

My local environment has clearml version 1.6.3rc0,
and the agents in AWS were started with the AWS Autoscaler, which has no explicit place for Google credentials.

I see a place for Additional ClearML Configuration in the AWS autoscaler UI, which I suspect may help, but I don't see how I can pass a secrets file along with my agent.

2 years ago
0 Hi There. I'm Trying To Switch Pipeline Code From A Local Run Using

Thanks for the fix and the mock HPO example code!
Pipeline behaviour with the fix is looking good.
I see the point about changes to data inside the controller possibly creating dependencies for step 3 (or, at least, making it harder for the interpreter to know).

2 years ago
0 Hi. First Time User Here

Thanks, I'll DM.

2 years ago
0 Hi. Question About Dataset Upload Errors: When Uploading A

> If Dataset.upload() does not crash or return a success value that I can check and ...

> Are you saying that with this error showing, the upload does not crash? (edited)

Unfortunately that is correct. It continues as if nothing happened!

To replicate this on Linux (even with max_workers=1):
Use https://averagelinuxuser.com/limit-bandwidth-linux/ to throttle your connection: sudo apt-get install wondershaper
Throttle your connection to 1mb/s with somethin...
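For context, a minimal sketch of the upload flow under discussion (the project, dataset name, and source folder are hypothetical):

```
from clearml import Dataset

dataset = Dataset.create(dataset_project="my_project", dataset_name="my_dataset")
dataset.add_files("/data/my_datasets_path/run_id_1")
# The issue reported above: on a throttled or failing connection this
# call neither raises nor returns a failure status.
dataset.upload(max_workers=1)
dataset.finalize()
```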

2 years ago
0 Hi. I'd Like To Try The GCP Autoscaler.

On the bright side, we started off with agents failing to run on VMs so this is progress 🙂

2 years ago
0 Bug?

Hmm, this isn't supported though:
dataset_args = dataset.connect(dataset_args)
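For contrast, the analogous call that does exist is on Task, not Dataset (a hedged illustration; the argument dict and names are hypothetical):

```
from clearml import Task

task = Task.init(project_name="my_project", task_name="my_task")
dataset_args = {"dataset_name": "my_dataset", "version": "1.0.0"}
# Task.connect() registers (and, on remote runs, can override) a parameter
# dict; Dataset exposes no equivalent connect() method.
dataset_args = task.connect(dataset_args)
```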

2 years ago
0 Hi There. I'm Trying To Switch Pipeline Code From A Local Run Using

First, thanks for having these discussions. I appreciate that this kind of support is an effort 🙏
Yes, I perfectly understand that once a pipeline job (or a task) is sent off in this manner, it executes separately (and, most likely, on a different machine) from the process that instantiated it.
I still feel strongly that such a command should not be thought of as a fire-and-exit operation. I can think of several scenarios where continued execution of the instantiating process is desired:
I ...

2 years ago
0 Bug?

> Oh sure, use ... they will be visible on the Dataset page on the version in question

That sounds simple enough.
Though I imagine I'd need to explicitly report every figure. Correct?

2 years ago
0 Hi (Again... Sorry For Asking So Many Questions) Question About Using Google Cloud Storage In A ClearML Agent Running In AWS EC2 Instance. My

For anyone following: you can "inject" a credentials JSON file for a Google Cloud service account, so as to get access to your Google Cloud Storage from agents on AWS EC2 instances managed by the AWS autoscaler, by providing the following in the ADDITIONAL CLEARML CONFIGURATION when starting the autoscaler:
sdk.google.storage.credentials_json: "/root/gs.cred"
sdk.google.storage.project: "<my-gcp-project-id>"
files {
  gsc {
    contents: """<copy-paste the contents of yo...
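A hedged sketch of the presumable overall shape of that files section, following the clearml.conf files-section convention (the entry name and path are taken from the snippet above; treat the details as assumptions):

```
files {
  gsc {
    # Service-account credentials pasted inline (assumed triple-quoted HOCON string)
    contents: """<service account credentials JSON>"""
    # Where the agent writes the file; should match
    # sdk.google.storage.credentials_json above
    path: "/root/gs.cred"
  }
}
```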

2 years ago
0 Hi. I'd Like To Try The GCP Autoscaler.

I noticed that the base docker image does not appear in the autoscaler task's configuration_object,
which is:
[{"resource_name": "cpu_default", "machine_type": "n1-standard-1", "cpu_only": true, "gpu_type": "", "gpu_count": 1, "preemptible": false, "num_instances": 5, "queue_name": "default", "source_image": "projects/ubuntu-os-cloud/global/images/ubuntu-1804-bionic-v20220131", "disk_size_gb": 100}, {"resource_name": "cpu_services", "machine_type": "n1-standard-1", "cpu_only": true, "gp...

2 years ago
0 Hi. I'd Like To Try The GCP Autoscaler.

I can try switching to GPU-enabled machines just to see if that path can be made to work, but the services queue shouldn't need a GPU, so I hope we can figure out running the pipeline task on CPU nodes.

2 years ago
0 Another Question On The Topic Of How A Remote Execution Of A Pipeline Kills The Calling Process (Previously Discussed

I've also not figured out how to modify the examples above to wait for one pipeline to end before the next begins.
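A hedged sketch of one way this could look, assuming PipelineController.wait() and assuming start() returns control to the caller (which, as this thread notes, may not happen with remote execution); all names are hypothetical:

```
from clearml import PipelineController

for run_id in ("run_1", "run_2"):
    pipe = PipelineController(name=f"pipe_{run_id}", project="my_project")
    # ... add pipeline steps here ...
    pipe.start(queue="services")  # enqueue the controller for remote execution
    pipe.wait()                   # block until this pipeline run finishes
```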

2 years ago
0 Hi. I'd Like To Try The GCP Autoscaler.

I'll give it a try.
And if I wanted to support GPU in the default queue, are you saying that I'd need a different machine type from the n1-standard-1?

2 years ago
0 Hi. I'd Like To Try The GCP Autoscaler.

That's strange because, opening the currently running autoscaler config, I see this:

2 years ago
0 Hi. I Have A Few Questions About The Snippet Attached

Perhaps anecdotal, but just calling random.seed() with no argument will set the seed from the system time (or an OS randomness source, when available) for you:
https://docs.python.org/3/library/random.html#random.seed
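For illustration (standard library only):

```
import random

random.seed()  # no argument: seeds from an OS randomness source if available, else system time
print(random.random())
```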

2 years ago
0 Hi There. I'm Trying To Switch Pipeline Code From A Local Run Using

Hmm, interesting. So, like a callback?!

Like https://github.com/allegroai/clearml/blob/bca9a6de3095f411ae5b766d00967535a13e8401/examples/pipeline/pipeline_from_tasks.py#L54-L55 pipe-step-level callbacks? I guess that mechanism could serve. Where do these callbacks run? In the instantiating process? If so, that would work (since the callback function can be any code I wish, right?)

I might want to dispatch other jobs from within the same process.

This is actually something t...

2 years ago