AgitatedDove14
Moderator
48 Questions, 8049 Answers
Active since 10 January 2023
Last activity 6 months ago

0 Thank You

Thanks BroadSeaturtle49
I think I was able to locate the issue: != breaks the pytorch lookup.
I will make sure we fix it asap and release an RC.
BTW: how come 0.13.x has no linux x64 support? And the same for 0.12.x:
https://download.pytorch.org/whl/cu111/torch_stable.html

one year ago
0 If I'M Using A Drive Mapping To Save Files Is There Any Easy Method/Hack That Would Allow Me Having Different Base Mappings On Different Machines?

HelplessCrocodile8

Basically the file URI might be different on a different machine (out of my control) but they point to the same artifact storage location

We might have thought of that...
In your clearml.conf file:
sdk {
  storage {
    path_substitution = [
      # Replace registered links with local prefixes,
      # solve mapping issues, and allow for external resource caching.
      {
        registered_prefix = "file:///mnt/data/..."
        local_prefix = "file:///local/mount/data"  # example value: the machine-specific mount point
      }
    ]
  }
}

2 years ago
0 Hi

HugeArcticwolf77 changing the color is definitely a feature we will have in the next version; right now I think you cannot 😞 It is randomly chosen based on the title/series, and I think your example is a great failure case of that randomness 😅

one year ago
0 Hi, I'M Trying To Run Task.Init Inside A Jupyter Notebook For The First Time (Used It A Lot Before In Normal Python Scripts), And I Get A Warning-

Nice! I'll see if we can have better error handling for it, or solve it altogether 🙂

3 years ago
0 Hi, I'M Looking At

JumpyPig73 you should be able to find it at the bottom of the page, try scrolling down (it should be after the installed packages)

2 years ago
0 Is There A Way To Generate Usage Stats And Reports For Queues? For Example, How Often Is A Queue Used, How Much Cpu Does

We're wondering how many on-premise machines we'd like to deprecate.

I think you can see that in the queues tab, no?

one year ago
0 Hey, Don'T Really Understand Why The Clearml Worker Needs To Pull The Repository Where My Pipeline (Defined With Decorators) Is Written Is Since Apparently A Temporary Python File (Containing At Least The Code And Imports For The Executed Component) Seems

Oh I see, the pipeline controller itself (not the components) is the one with the repo.
To fix that, add the following at the top of the script:

from clearml import Task

Task.force_store_standalone_script()

@PipelineDecorator.pipeline(...)

That should do the trick.
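For context, here is a minimal sketch of where that call sits in a decorator-based pipeline script; the component, pipeline, project and version names below are made up for illustration:

from clearml import Task
from clearml.automation.controller import PipelineDecorator

# store this script as a standalone snippet so the agent does not need to clone the repo
Task.force_store_standalone_script()

@PipelineDecorator.component(return_values=['value'])
def step_one():
    # hypothetical component, replace with real logic
    return 42

@PipelineDecorator.pipeline(name='standalone example', project='examples', version='0.0.1')
def my_pipeline():
    print(step_one())

if __name__ == '__main__':
    # run the controller locally for this sketch instead of enqueuing it
    PipelineDecorator.run_locally()
    my_pipeline()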

one year ago
0 Would Appreciate Some Help. Getting This Error. Valueerror: Node Train_Model, Parameter '${Split_Dataset.Split_Dataset_Id}', Input Type 'Split_Dataset_Id' Is Invalid

Hi VexedCat68
So if I understand correctly, the issue is this argument:
parameter_override={'Args/dataset_id': '${split_dataset.split_dataset_id}', 'Args/model_id': '${get_latest_model_id.clearml_model_id}'}
I think what is missing is telling it this is an artifact:
parameter_override={'Args/dataset_id': '${split_dataset.artifacts.split_dataset_id.url}', 'Args/model_id': '${get_latest_model_id.clearml_model_id}'}
You can see the example here:
https://clear.ml/docs/latest/docs/ref...
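For reference, a rough sketch of how that parameter_override might sit in the controller definition; the project, task and queue names here are placeholders rather than values from the original question:

from clearml.automation import PipelineController

pipe = PipelineController(name='dataset pipeline', project='examples', version='0.0.1')
pipe.add_step(
    name='split_dataset',
    base_task_project='examples',        # placeholder
    base_task_name='split dataset',      # placeholder
)
pipe.add_step(
    name='train_model',
    parents=['split_dataset'],
    base_task_project='examples',        # placeholder
    base_task_name='train model',        # placeholder
    # reference the artifact of the previous step, not a raw parameter name
    parameter_override={'Args/dataset_id': '${split_dataset.artifacts.split_dataset_id.url}'},
)
pipe.start(queue='services')             # placeholder queue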

2 years ago
0 Hello Again, How Can I Use The

Hi JumpyDragonfly13

1. Is "10.19.20.15" accessible from your machine (i.e. can you ping it)?
2. Can you manually SSH to 10.19.20.15 on port 10022?

3 years ago
0 Hi Today I'M Suddenly Getting This

I think there was an issue with the entire .ml domain name (at least for some DNS providers)

one year ago
0 Please See Screenshot Of Clearml-Agent Readme From The Github Page. In This Section, It Is Detailed That Clearml-Agent Picks Up Pytorch Version Automatically Based On The Cuda Version. I Would Like To Bypass This Behavior Because My Code Has A Need For A

I would like to bypass this behavior because my code has a need for a specific version of PyTorch.

DilapidatedCow43 you will get exactly the pytorch version you need, but compiled for the CUDA version that is installed (the pytorch people actually maintain multiple builds for different CUDA versions)

one year ago
0 Hi, We Have A Use Case That We Would Like To Upload A Local Folder Into The Cloud

OutrageousSheep60 so if this is the case, I think you need to add "external links", i.e. upload the individual files to GCS, then register the links to GCS. Does that make sense?
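For example, a minimal sketch of registering external links with the Dataset API (the bucket path and dataset names are placeholders):

from clearml import Dataset

# create a dataset version and register links to files that already live in GCS,
# instead of uploading the local copies through the file server
ds = Dataset.create(dataset_name='my_dataset', dataset_project='examples')  # placeholder names
ds.add_external_files(source_url='gs://my-bucket/my-folder/')               # placeholder bucket
ds.upload()    # uploads the dataset state (the files themselves stay in GCS)
ds.finalize()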

one year ago
0 I'M A Little Confused As To How Force_Requirements_Env_Freeze Works When No Requirements File Is Supplied. Is It Supposed To Store The Full Reqs Of The Environment That Calls It?

If you have a requirements file then you can specify it:
Task.force_requirements_env_freeze(requirements_file='requirements.txt')
If you just want pip freeze output to be shown in your "Installed Packages" section then use:
Task.force_requirements_env_freeze()
Notice that in both cases you should call the function Before you call Task.init()
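For example, a minimal sketch of the call order (the project, task and file names are placeholders):

from clearml import Task

# Option A: snapshot a specific requirements file
Task.force_requirements_env_freeze(requirements_file='requirements.txt')
# Option B: snapshot the full `pip freeze` of the current environment
# Task.force_requirements_env_freeze()

# either way, call it before Task.init()
task = Task.init(project_name='examples', task_name='freeze example')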
btw, what do you mean by "Packages will be installed from projects requirements file" ?

2 years ago
0 Hi, Does Anyone Use Mlflow / Weight & Biases /

Hmmm, can you view the settings? That's the only thing I can think of at the moment that would be different between your setup and the working one...

Also, is there a way for you to have the trains-server behind https (on your GCP)?

4 years ago
0 Typo: Was Going Crazy For A Short Amount Of Time Yelling To Myself: I Just Installed Clear-Agent Init!

Was going crazy for a short amount of time yelling to myself: I just installed clear-agent init!

oh noooooooooooooooooo
I can relate so much; it happens to me too often that copy-pasting into bash just uses the unicode character instead of the regular ascii one.
I'll let the front-end guys know, so we do not make ppl go crazy 😉

3 years ago
0 Typo: Was Going Crazy For A Short Amount Of Time Yelling To Myself: I Just Installed Clear-Agent Init!

BTW: is this on the community server or self-hosted (aka docker-compose)?

3 years ago
0 When I Do

The problem is due to tight security on this k8s cluster; the k8s pod cannot reach the public file server URL which is associated with the dataset.

Understood, that makes sense. If this is the case then the path_substitution feature is exactly what you are looking for.

one year ago
0 I Have A Questions About Queue Priorities With Clearml-Agent. I Have Two Queues,

Hi ReassuredTiger98
An agent's queue priority translates to the order in which the agent pulls jobs from its queues.
Now let's assume we have two agents with priorities A,B for one and B,A for the other. If we only push a Task to queue A, and both agents are idle (implying queue B is empty), there is no guarantee which one will pull the job.
Does that make sense ?
What is the use-case you are trying to solve/optimize for ?

3 years ago
0 I Have A Questions About Queue Priorities With Clearml-Agent. I Have Two Queues,

but it is not optimal if one of the agents is only able to handle tasks of a single queue (e.g. if the second agent can only work on tasks of type B).

How so?

3 years ago
0 I Have A Questions About Queue Priorities With Clearml-Agent. I Have Two Queues,

Sure thing 🙂
BTW: ReassuredTiger98 this is definitely an interesting use case, and I think you can actually write some code to solve it if you like.
Basically, let's follow up on your setup:
Machine X: agent listening to queues A, B_machine_a (notice we have two agents here)
Machine Y: agent listening to queue B_machine_b
Now we (the users) will push our jobs into queues A and B.
Now we have a service that does the following:
see if we have a job in queue B
check if machine Y is working...
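A very rough sketch of what such a service could look like with the APIClient; the queue and worker names are placeholders, and the routing policy shown is only one possible choice:

from clearml import Task
from clearml.backend_api.session.client import APIClient

client = APIClient()

def pending_entries(queue_name):
    # task entries currently waiting in the given queue
    queues = client.queues.get_all(name=queue_name)
    queue = next(q for q in queues if q.name == queue_name)
    return client.queues.get_by_id(queue=queue.id).entries or []

def machine_busy(worker_prefix):
    # True if a worker whose id starts with the prefix is currently running a task
    for worker in client.workers.get_all():
        if worker.id.startswith(worker_prefix) and getattr(worker, 'task', None):
            return True
    return False

entries = pending_entries('B')
if entries:
    # route the oldest job in B to machine X's private queue if machine Y is busy,
    # otherwise to machine Y's private queue
    # (note: the task may also need to be removed from queue B first)
    target = 'B_machine_a' if machine_busy('machine_y') else 'B_machine_b'
    Task.enqueue(task=entries[0].task, queue_name=target)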

3 years ago
0 Hi, I Try To Optimize My Hyperparamters With

Hmm ConvincingSwan15

WARNING - Could not find requested hyper-parameters ['Args/patch_size', 'Args/nb_conv', 'Args/nb_fmaps', 'Args/epochs'] on base task

Is this correct? Can you see these arguments on the original Task in the UI (i.e. the Args section, parameter epochs)?

3 years ago
0 Hi, I Try To Optimize My Hyperparamters With

Hi ConvincingSwan15
A few background questions:

Where is the code that we want to optimize? Do you already have a Task of that code executed?

"find my learning script"

Could you elaborate? Is this connected to the first question?

3 years ago
0 Hi, I Try To Optimize My Hyperparamters With

Hmm, maybe the original Task was executed with older versions? (before the section names were introduced)
Let's try:
DiscreteParameterRange('epochs', values=[30])
Does that give a warning?
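For context, a rough sketch of how that range plugs into the optimizer; the base task ID, metric names and queue below are placeholders:

from clearml.automation import HyperParameterOptimizer, DiscreteParameterRange

optimizer = HyperParameterOptimizer(
    base_task_id='<base task id>',                       # placeholder
    hyper_parameters=[
        # older Tasks: parameter name without a section prefix
        DiscreteParameterRange('epochs', values=[30]),
        # newer Tasks: include the section, e.g. DiscreteParameterRange('Args/epochs', values=[30])
    ],
    objective_metric_title='validation',                 # placeholder metric
    objective_metric_series='loss',
    objective_metric_sign='min',
    execution_queue='default',                           # placeholder queue
)
optimizer.start()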

3 years ago
0 Trying To Access The Csv File Uploaded On The Clearml Dataset In My Local Device Is Giving Me Some Errors

You put it there 🙂 so the assumption is you know what you are looking for, or use glob? wdyt?

2 years ago