Uninstall the current clearml-agent and reinstall this wheel. I hacked it to use ==, let's see if that works
@<1523701868901961728:profile|ReassuredTiger98> what are you getting with:
nvidia-smi
And here:
ls -la /usr/local/
Hi SarcasticSparrow10
I think the default search is any partial match, let me check if there is a way to do some regexp / wildcard
I'm thinking of a few plots in my current in-house tooling which are slightly different than the standard charts we look at. For example a custom parallel coordinate chart that can use aggregations, categorical variables, etc.
This can be done by comparing experiments, then checking the Hyper-Parameters tab and selecting "graph" from the drop-down at the top
So my question in general is whether I would need to get better at Javascript if I were to make those changes. My guess is ...
Hey SarcasticSparrow10 see here 🙂
https://allegro.ai/clearml/docs/docs/deploying_clearml/clearml_server_linux_mac.html#upgrading
I think EmbarrassedSpider34 is correct.
When you pass the requirements to clearml-task, it is actually the agent that does the installation, depending on how it was configured (conda / pip).
That said, maybe it is worth adding support to provide the env.yml in the CLI ?
(Notice that adding specific channels needs to be configured on the agent, they are not stored per Task)
AlertCamel57 wdyt?
using the cleanup service
Wait FlutteringWorm14, the cleanup service, or the task.delete call? (these are not the same)
Let me take a look, what's the clearml-server version and clearml python version?
FlutteringWorm14 any insight on the Task that it fails to delete? Or a way to reproduce?
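(For reference, the task.delete call mentioned above is the SDK method on a Task object; a minimal sketch, assuming you already know the task ID, which is a placeholder here:)
```
from clearml import Task

# Fetch the task by its ID (placeholder) and delete it;
# by default this also removes its artifacts and models from the backend
task = Task.get_task(task_id="<task_id>")
task.delete()
```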
Thanks for pinging OutrageousGiraffe8
I think I was able to reproduce.
model is saved to clearml as an output model when `b` is not a dictionary.
How did you make the example work with the automagic ?
Hi OutrageousGiraffe8
Does anybody know why this is happening, and is there any workaround, e.g. how to manually report a model?
What exactly is the error you are getting? And which clearml version are you using?
Regarding manual model reporting:
https://clear.ml/docs/latest/docs/fundamentals/artifacts#manual-model-logging
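A minimal sketch of manual model reporting via OutputModel (the project/task names and the weights path are placeholders):
```
from clearml import Task, OutputModel

task = Task.init(project_name="examples", task_name="manual model reporting")

# Register a weights file as an output model of this task
output_model = OutputModel(task=task, framework="PyTorch")
output_model.update_weights(weights_filename="model.pt")
```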
Thanks! I think I was able to locate the issue, but I wanted to verify 🙂
TroubledHedgehog16 generally speaking you can expect about 10 API calls per minute if you have many reports, and about 3 per minute with low reporting. We just optimized the SDK so that when there are lots of consecutive reports they are batched better; I would recommend the latest RC
Hi IrritableGiraffe81
Can you share a code snippet ?
Generally I would try: task = Task.init(..., auto_connect_frameworks={'pytorch': False, 'tensorflow': False})
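A slightly fuller sketch of that call, just to show the intended syntax (project/task names are placeholders):
```
from clearml import Task

# Disable only the PyTorch and TensorFlow automagic bindings;
# other frameworks keep their default auto-logging behavior
task = Task.init(
    project_name="examples",
    task_name="no framework automagic",
    auto_connect_frameworks={"pytorch": False, "tensorflow": False},
)
```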
Hi @<1619867971730018304:profile|WhimsicalGorilla67>
No 😞 only the "admin" (owner) of the workspace has access to it
Hi DrabOwl94
I think, if I understand you correctly, you have a lot of chunks (which translates to a lot of links to small 1MB files, because this is how you set up the chunk size). Now apparently you have reached the maximum number of chunks per specific Dataset version (in the end this metadata is stored in a document with limited size, specifically 16MB).
How many chunks do you have there?
(In other words what's the size of the entire dataset in MBs)
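If the tiny chunk size is the root cause, one option (a sketch, assuming you can re-create the dataset version; the names and paths are placeholders) is to upload with a larger chunk size so far fewer chunks are produced:
```
from clearml import Dataset

ds = Dataset.create(dataset_name="my-dataset", dataset_project="datasets")
ds.add_files(path="/data/my_dataset")
# chunk_size is in MB; larger chunks mean fewer links stored in the version's metadata
ds.upload(chunk_size=512)
ds.finalize()
```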
MoodyCentipede68 could it be that the model is on one account (workspace) and your credentials (the ones provided to the docker compose) are from another workspace?
The error itself points to the Triton helper failing to get the model ID from the backend. The models are uploaded to a specific workspace, and it looks like a mismatch (i.e. the model ID is nowhere to be found). wdyt?
Before exposing your IP to the world, I suggest going over the security advisory in the docs: None
As a general note, do not expose your server; the open source version is not designed for it. Just put it inside your VPN and it will be fine
Hi @<1673501379764686848:profile|VirtuousSeaturtle4>
What I don't get is that the example does not refer to a bucket path. What bucket path should I specify?
you mean to store data?
Hi @<1523704667563888640:profile|CooperativeOtter46>
Is there a way to set the name/path of the `requirements.txt` file the agent uses to install packages?
When the agent is installing packages it takes them from the "Installed Packages" section of the Task. Only if it is empty will it revert to the "requirements.txt" from the git repository.
That said, you can add the following to your "Installed Packages":
-r my_other_requirements.txt
And the agent will install from `my_other_requirements.txt` as well.
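If I remember correctly, there is also a programmatic way to point the Task at a different requirements file (a sketch; call it before Task.init, and the file path / names are placeholders):
```
from clearml import Task

# Adds the contents of this requirements file to the Task's "Installed Packages"
Task.add_requirements("my_other_requirements.txt")

task = Task.init(project_name="examples", task_name="custom requirements")
```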
@<1540142651142049792:profile|BurlyHorse22> do you mean the one referred to in the video? (I think this is the raw data on Kaggle)
CooperativeFox72
Could you try to run the docker and then inside the docker try to do: `su root` and then `whoami`
BeefyCow3 see this https://allegroai-trains.slack.com/archives/CTK20V944/p1593077204051100 :)
MelancholyElk85 notice there is the pipeline controller queue (i.e. which agent will run the logic of the pipeline), and the default queue for the pipeline steps (i.e. the actual steps of the pipeline).
The default queue for the pipeline logic itself is `services`. You can change it with pipeline.start(..., queue='another_q')
Make sense ?
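A minimal sketch of the two queue settings (queue names and step details are placeholders):
```
from clearml import PipelineController

pipe = PipelineController(name="my pipeline", project="examples", version="1.0.0")

# Default queue for the pipeline *steps*
pipe.set_default_execution_queue("default")
pipe.add_step(name="step_1", base_task_project="examples", base_task_name="step 1 task")

# Queue for the pipeline *logic* (the controller itself); defaults to "services"
pipe.start(queue="services")
```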
Yeah.. that should have worked ...
What's the exact error you are getting ?