HugeArcticwolf77 from the CLI you cannot control it (but we could probably add that); from code you can:
https://github.com/allegroai/clearml/blob/d17903d4e9f404593ffc1bdb7b4e710baae54662/clearml/datasets/dataset.py#L646
pass compression=ZIP_STORED
Which clearml version are you using?
Hi MassiveBat21
CLEARML_AGENT_GIT_USER actually takes a git personal access token.
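A minimal sketch, assuming a read-only bot account and a GitHub-style personal access token; CLEARML_AGENT_GIT_PASS holding the token itself is my assumption here, not something stated above:

```shell
# Hypothetical read-only account; the PAT is used in place of a password
export CLEARML_AGENT_GIT_USER="readonly-bot"
export CLEARML_AGENT_GIT_PASS="ghp_example_token"  # placeholder token
# then start the agent, e.g.:
# clearml-agent daemon --queue default
```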
The easiest is to have a read only user/token for all the projects.
Another option is to use the ClearML vault (unfortunately not part of the open source) to automatically apply these configurations on a per-user basis.
wdyt?
Can you test with the latest RC?
pip install clearml==1.0.3rc0
Hi VivaciousWalrus99
Could you attach the log of the run?
By default it will use the python it is running with.
Any chance the original experiment was executed with python2 ?
VivaciousWalrus99
Yes this is odd:
1608392232071 spectralab:gpu0 DEBUG New python executable in /cs/usr/gal.hyams/.trains/venvs-builds/3.7/bin/python2
So it thinks it has python v3.7 but it is using python2 in the venv...
In your trains.conf file, set agent.python_binary to the python3.7 binary. It should be something like:
agent.python_binary=/path/to/python/python3.7
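In HOCON form, the setting sits under the agent section of trains.conf (the interpreter path below is hypothetical):

```
agent {
    # point the agent at an explicit interpreter
    python_binary: "/usr/bin/python3.7"
}
```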
Nice!
script, and kwcoco is not imported directly (but from within another package).
FYI: usually the assumption is that ClearML will only list the directly imported packages, since these pull in their required packages when the agent installs them ... (meaning that if in the repository you never actually import kwcoco directly, it will not be listed; the package you do import directly, which you mentioned imports kwcoco, will be listed). I hope this ...
clearml.conf is the file that clearml-init is supposed to create, right?
Correct, specifically ~/clearml.conf
BTW: see if this works:
$ CLEARML_API_HOST_VERIFY_CERT=0 clearml-init
Yeah I think this kind of makes sense to me, any chance you can open a GH issue on this feature request?
Lol yeah Hydra is great. Notice you still have the ability to override Hydra from the UI, so you really have the best of both worlds.
Hi SmarmySeaurchin8 , you can point to any configuration file by setting the environment variable:
TRAINS_CONFIG_FILE=/home/user/my_trains.conf
Jupyter Notebook is fully supported.
Could you try and restart the notebook kernel?
@<1569496075083976704:profile|SweetShells3> remove these from your pbtext:
name: "conformer_encoder"
platform: "onnxruntime_onnx"
default_model_filename: "model.bin"
Second, what do you have in your preprocess_encoder.py ?
And where are you getting the error? (is it from the Triton container, or from the REST request?)
And how is the endpoint registered?
Okay that makes sense. best_diabetes_detection is different from your example:
curl -X POST " None "
Notice best_mage_diabetes_detection ?
Remove this from your startup script:
#!/bin/bash
There is no need for that; it actually "marks out" the entire thing.
Hi WickedGoat98
but is there also a way to delete them, or wipe complete projects?
https://github.com/allegroai/trains/issues/16
Auto cleanup service here:
https://github.com/allegroai/trains/blob/master/examples/services/cleanup/cleanup_service.py
Hi @<1697419082875277312:profile|OutrageousReindeer5>
Is the NetApp S3 protocol enabled, or are you referring to NFS mounts?
this is very odd, can you post the log?
Woo, what a doozy.
yeah those "broken" pip versions are making our life hard ...
If that's the case you have two options:
- Create a Dataset from local/NFS and upload it to the S3-compatible NetApp storage (notice this creates an immutable copy of the data)
- Create a Dataset and add "external links", either to the S3 storage with None :port/bucket/... or a direct file link file:///mnt/nfs/path . Notice that in this case the system does not manage the data, meaning that if someone deletes/moves the data you are unaware of it. And of course you can...
Hi @<1610083503607648256:profile|DiminutiveToad80>
This sounds like the wrong container? I think we need some more context here.
Hi @<1562973083189383168:profile|GrievingDuck15>
Thanks for noticing; yes, the API is always versioned, we should make that clear in the docs. Also, if you need the latest one, use version 999 and it will default to the latest one it can support.
Correct, which makes sense if you have a stochastic process and you are looking for the best model snapshot. That said I guess the default use case would be min/max (and not the global variant)
Okay, I think this might be a bit of an overkill, but I'll entertain the idea 🙂
Try passing the user as key, and password as secret?
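i.e. something like this in clearml.conf (the endpoint host is hypothetical; for an S3-compatible store the user goes in key and the password in secret):

```
sdk {
  aws {
    s3 {
      credentials: [
        {
          host: "netapp.example.com:9000"  # hypothetical S3-compatible endpoint
          key: "my-user"                   # user as key
          secret: "my-password"            # password as secret
          secure: true
        }
      ]
    }
  }
}
```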
UnevenDolphin73 following the discussion https://clearml.slack.com/archives/CTK20V944/p1643731949324449 , I suggest this change in the pseudo code:

```
# task code
task = Task.init(...)
if not task.running_locally() and task.is_main_task():
    # pre-init stage
    StorageManager.download_folder(...)  # Prepare local files for execution
else:
    StorageManager.upload_file(...)  # Repeated for many files needed
task.execute_remotely(...)
```

Now when I look at it, it kind of makes sense to h...
Oh dear 🙂 if that's the case I think you should open an issue on pypa/pip; I'm not sure what we can do other than that ...