AgitatedDove14
Moderator
48 Questions, 8049 Answers
  Active since 10 January 2023
  Last activity 6 months ago

0 Hi. After Upgrading Clearml To Latest Version, Got This Error From My Pipeline (Windows10, Configured And Running Tensorflowod For Tf 2.3.):

BattyLion34 is this running with an agent ?
What's the comparison with a previously working Task (in terms of python packages) ?

3 years ago
0 Hello Friends! I Am Trying To Play Around With The Configs For

Hi @<1547028116780617728:profile|TimelyRabbit96>
You are absolutely correct, we need to allow overriding the configuration
The code you want to change is here:
None
You can try:

channel = self._ext_grpc.aio.insecure_channel(triton_server_address, options=dict([('grpc.max_send_message_length', 512 * 1024 * 1024),  ('grpc.max_receive_message_len...
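For reference, a standalone sketch of the same idea with the standard gRPC channel arguments spelled out (the server address and the 512 MB limits below are illustrative values, not clearml-serving defaults):

import grpc.aio

triton_server_address = "127.0.0.1:8001"  # illustrative address

# Raise the send/receive message-size limits to 512 MB via standard gRPC channel options
channel = grpc.aio.insecure_channel(
    triton_server_address,
    options=[
        ("grpc.max_send_message_length", 512 * 1024 * 1024),
        ("grpc.max_receive_message_length", 512 * 1024 * 1024),
    ],
)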
one year ago
0 Hello, I Am Trying To Run Some Algorithm In My Docker Container With Clearml Task . But The Algorithm Uses Ros, So I Need Somehow To Setup Environment Before Run It And Launch

@<1523701323046850560:profile|OutrageousSheep60> the assumption is that you have "pre_installations.sh" locally (i.e. when you are calling clearml-task). What will happen is that this bash script will be put on top of the Task and executed before everything else inside the container
does that make sense ?
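A rough sketch of that flow using the SDK equivalent of clearml-task; the project, repo and image names are placeholders, and it assumes Task.create's docker_bash_setup_script argument takes the script content (the clearml-task CLI exposes the same idea as the --docker_bash_setup_script flag pointing at the local file):

from clearml import Task

# read the local bash script so it is stored with the Task (assumption: content, not path)
with open("pre_installations.sh") as f:
    setup_script = f.read()

task = Task.create(
    project_name="examples",                    # placeholder
    task_name="ros-algorithm",                  # placeholder
    repo="https://github.com/org/repo.git",     # placeholder repo
    script="run_algorithm.py",                  # placeholder entry point
    docker="ros:noetic",                        # placeholder container image
    docker_bash_setup_script=setup_script,      # executed inside the container before anything else
)
Task.enqueue(task, queue_name="default")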

one year ago
0 It Is A Good Practice To Call A Function Decorated By

I assume the task is being launched sequentially. I'm going to prepare a more elaborate example to see what happens.

Let me know if you can produce a mock test, I would love to make sure we support the use case, this is a great example of using pipeline logic 🙂
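A minimal mock along those lines, in case it helps frame the test (names and project are placeholders):

from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.component(return_values=["doubled"])
def double(x: int):
    # each call to this function becomes a pipeline step (its own Task when run by agents)
    return x * 2

@PipelineDecorator.pipeline(name="mock-pipeline", project="examples", version="0.0.1")
def pipeline_logic(start: int = 1):
    # plain pipeline logic: steps are launched as the decorated functions are called
    a = double(start)
    b = double(a)
    print("final value:", b)

if __name__ == "__main__":
    # debug everything on this machine instead of launching remote Tasks
    PipelineDecorator.run_locally()
    pipeline_logic(start=3)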

3 years ago
0 Hello Clearml Community, Does Anyone Have An Idea How I Could Integrate/Manager Carla (

ReassuredTiger98 I ❤ the DAG in ASCII!!!

port = task_carla_server.get_parameter("General/port")

This looks great! and will achieve exactly what you are after.
BTW: when you are done you can do:
task_carla_server.mark_aborted(force=True)
And it will shut down the Carla Task 🙂
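Putting those two calls together, a small sketch of driving the server Task from the experiment side (the task ID is a placeholder):

from clearml import Task

# grab the already-running Carla server Task by its ID (placeholder)
task_carla_server = Task.get_task(task_id="<carla_server_task_id>")

# read back the port the server Task reported under its "General" parameter section
port = task_carla_server.get_parameter("General/port")
print("Carla server port:", port)

# once the experiment is done, abort the server Task
task_carla_server.mark_aborted(force=True)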

2 years ago
0 Hi, I Noted That Clearml-Serving Does Not Support Spacy Models Out Of The Box And That Clearml-Serving Only Supports Following;

Hi SubstantialElk6

noted that clearml-serving does not support Spacy models out of the box and

So this is a good point.

To add any missing packages to the preprocessing docker you can just add them in the following environment variable here: https://github.com/allegroai/clearml-serving/blob/d15bfcade54c7bdd8f3765408adc480d5ceb4b45/docker/docker-compose.yml#L83
EXTRA_PYTHON_PACKAGES="spacy>1"
Regarding a custom engine, basically this is supported with --engine custom
you c...
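For the custom engine, a very rough illustration of what the user-supplied preprocessing class could look like for a spaCy model; the method names follow the clearml-serving custom-engine example, but the exact signatures here are assumptions:

from typing import Any

class Preprocess(object):
    def __init__(self):
        self._model = None

    def load(self, local_file_name: str) -> Any:
        # called once with the downloaded model path; spacy itself would be added via EXTRA_PYTHON_PACKAGES
        import spacy
        self._model = spacy.load(local_file_name)
        return self._model

    def process(self, data: Any, state: dict, collect_custom_statistics_fn=None) -> Any:
        # with --engine custom this method runs the actual inference on the request body
        doc = self._model(data["text"])
        return {"entities": [(ent.text, ent.label_) for ent in doc.ents]}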

2 years ago
0 For The Clearml-Server Component, Can The Clearml File Server Be Configured To Any Kind Of Storage ? Example Hdfs Or Even A Database Etc..

can the ClearML File server be configured to any kind of storage ? Example hdfs or even a database etc..

DeliciousBluewhale87 long story short, no 🙂 the file server will just store/retrieve/delete files from a local/mounted folder

Is there any way we can scale this file server when our data volume explodes. Maybe it wouldn't be an issue in the K8s environment anyways. Or can it also be configured such that all data is stored in the hdfs (which helps with scalability). I would su...

2 years ago
0 Hi There Trains Riders, Is There A Built-In Way To Send Notifications Upon Completed/Failed Experiment? I Have Seen The Slack_Alerts Code Sample, Where The Monitor Is Implemented By Code. Nice. My Question Is About Existing Monitors In The Trains-Server (

Hi ColossalDeer61 ,

My question is about existing monitors in the trains-server (preferably the web UI)

So the idea is you run the code once, it creates a Task in the system and verifies the Slack credentials are working. Then you can enqueue it in the "services" queue, and voila, you have a monitoring service running that you can control from the UI and that creates alerts to Slack. Unfortunately there is no built-in way to achieve that in the UI, but it should not take more than a few minute...
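A hedged sketch of the "run once, then enqueue into services" step (project and task names are placeholders for the monitor Task that the slack-alerts script created):

from clearml import Task

# the monitoring script was already run once locally, creating this Task in the system
monitor_task = Task.get_task(project_name="DevOps", task_name="Slack Alerts")  # placeholder names

# push it into the "services" queue so the server keeps the monitor running
Task.enqueue(monitor_task, queue_name="services")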

4 years ago
0 Hi Everyone, I Have A Question About Using

The other order (with custom decorator above pipeline fails - just for your info)

This is on "purpose": the pipeline decorator has to be the top decorator.
Glad it works!
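In code, the working ordering would look roughly like this (the custom decorator here is a hypothetical example):

from functools import wraps
from clearml.automation.controller import PipelineDecorator

def my_custom_decorator(func):
    # hypothetical user decorator
    @wraps(func)
    def wrapper(*args, **kwargs):
        print("entering pipeline logic")
        return func(*args, **kwargs)
    return wrapper

# works: the pipeline decorator is the top (outermost) decorator
@PipelineDecorator.pipeline(name="example", project="examples", version="0.0.1")
@my_custom_decorator
def pipeline_logic():
    pass

# placing @my_custom_decorator above @PipelineDecorator.pipeline is the order that fails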

9 months ago
0 I See That In The Default Setup, This Command Is Part Of The Docker Bash Setup Script:

I do expect it to pip install though, which doesn't need root access I think

Correct, it is installed in a venv (exactly for that).
It will not fail if the apt-get fails (only warnings)
Let me know if it worked

3 years ago
0 [Clearml Serving] Hi Everyone! I Am Trying To Automatically Generate An Online Endpoint For Inference When Manually Adding Tag

Hi @<1636175432829112320:profile|PlainSealion45>

  1. I used this initial model to create the endpoint with the model add command.

I think that the initial model needs to be added with model auto-update, not with model add
basically do not call model add - this is static, always using the model ID specified (you can deploy new models by manually calling model add on the same endpoint and specifying a different model ID, but again manual)

To Automatically have the m...

10 months ago
0 Hey All. Another Question - How Are Private Packages Handled/Installed So That Clearml-Agent Can Execute A Task? I Have A Bunch Of Private Repos For Communicating With The Data Warehouse. I Could Do A System-Wide Installation For It On The Clearml-Agent I

I'm guessing the extra index URL can be a URL to the github repo of interest?

The extra index URL is exactly what you would be passing to pip install, meaning it has to comply with the pypi artifactory api.
Make sense ?

3 years ago
0 Hi, I Have A Question About Clearml-Data. Clearml-Data Probably Does Well On Data Versioning, But When It Comes To Actual Loading Of Data, Are There Examples Of How It Can Make Use Of Advanced Features Such That Those In

Hi SubstantialElk6
ClearML-Data doesn't actually "load" the data, it brings it locally and returns a folder with all your data files; from that point onward, it's up to your code to load it into the framework. Make sense ?
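In practice that usually boils down to something like the following (dataset project and name are placeholders):

from clearml import Dataset

# fetch a dataset version: the files are brought to a local cached folder and the path is returned
dataset_folder = Dataset.get(
    dataset_project="examples", dataset_name="my_dataset"  # placeholders
).get_local_copy()

# from here on it is regular code, e.g. point your DataLoader / tf.data pipeline at dataset_folder
print("data files are under:", dataset_folder)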

3 years ago
0 Getting This Error At

BTW:
TrickySheep9 what's the jupyter version / python version / OS ?

3 years ago
0 Hi! Regarding The

Hi GrievingTurkey78
the artifacts are downloaded to the cache folder (and by default the last 100 accessed artifacts are maintained there).

node executes the task all the info will be erased or does this have to be done explicitly?

Are you referring to the trains-agent running a docker?
By default the cache is persistent between execution (i.e. saving time on multiple downloads between experiments)
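For example, pulling an artifact from a previous Task goes through that same cache, so repeated runs reuse the downloaded copy (task ID and artifact name below are placeholders):

from clearml import Task

source_task = Task.get_task(task_id="<source_task_id>")        # placeholder ID
local_path = source_task.artifacts["data"].get_local_copy()    # "data" is a placeholder artifact name
print("artifact available at:", local_path)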

3 years ago
0 Hi There, I Have A Pipeline That Query Data From A Neo4J Database. When I Run It Using

Hi IrritableGiraffe81
PipelineDecorator.debug_pipeline() runs everything as regular python functions, but "PipelineDecorator.run_locally()" is actually simulating all the steps on the same local machine (so that it is easier to debug the "real" pipeline running on multiple machines)
What I think is happening is that the casting of the arguments passed to the component fails.
Basically the type hints are currently ignored (we are working on using them for casting in the next version)
but righ...
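For completeness, the two debug modes referenced above are selected before calling the decorated pipeline function, roughly like this:

from clearml.automation.controller import PipelineDecorator

# run every step as a plain python function call (easiest to step through in a debugger)
PipelineDecorator.debug_pipeline()

# or: simulate the full pipeline locally, steps executed on this machine
# PipelineDecorator.run_locally()

# then just invoke your @PipelineDecorator.pipeline decorated function as usual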

2 years ago
0 Multiprocessing.Pool.Remotetraceback: """ Traceback (Most Recent Call Last): File "/Usr/Lib/Python3.6/Multiprocessing/Pool.Py", Line 119, In Worker Result = (True, Func(*Args, **Kwds)) File "/Usr/Lib/Python3.6/Multiprocessing/Pool.Py", Line 44, I

GreasyPenguin14 what's the clearml version you are using, OS & Python ?
Notice this happens on the "connect_configuration" that seems to be called after the Task was closed, could that be the case ?

3 years ago