
Hi @<1661542579272945664:profile|SaltySpider22> I'm not sure I understand the answer to my parallel question
Can I assume that if we have two agents spinning the same experiment, your code will take it from there?
Is this true?
Hi @<1533257278776414208:profile|SuperiorCockroach75>
ModuleNotFoundError: No module named 'statsmodels'
seems like this package is missing from the Task
Either import it manually: import statsmodels
(so the automagic logs it)
Or add before task init:
Task.add_requirements("statsmodels")
task = Task.init(...)
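Putting it together, a minimal runnable sketch of that suggestion (project/task names are placeholders):

```python
from clearml import Task

# register the missing package so the agent installs it in the remote env;
# this must be called before Task.init
Task.add_requirements("statsmodels")

task = Task.init(project_name="examples", task_name="statsmodels run")
```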
ps: no need to @ so many people ...
but this would be still part of the clearml.conf right?
You can pass it per Task; you can also configure the agent to always pass it by adding this env:
https://github.com/allegroai/clearml-agent/blob/5a080798cb4292e198948fbe16cba70136cb6bdf/docs/clearml.conf#L137
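If it helps, a hedged sketch of the agent side of clearml.conf (extra_docker_arguments is a real agent setting from the linked default config; the env var name is a placeholder):

```
agent {
    # pass an environment variable into every container the agent spins
    extra_docker_arguments: ["-e", "MY_ENV_VAR=my_value"]
}
```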
I suspect it's the localhost - and the trains-agent is trying too hard to access the port, but for some reason does not report an error ...
@<1571308003204796416:profile|HollowPeacock58> seems like an internal issue copying this object config.model
This is a complex object, and it seems that for some reason it cannot be copied.
As a workaround, just do not connect this object; it seems you cannot pickle it / copy it (see the GH issue)
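A rough sketch of that workaround, assuming the failing call was task.connect on the complex object (field names below are placeholders):

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="config workaround")

# instead of connecting the complex, non-picklable config.model object,
# connect only a plain serializable view of its fields
model_params = {"name": "resnet50", "lr": 0.001}  # placeholder fields
task.connect(model_params, name="model")
```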
LazyTurkey38 notice the assumption is that the docker entry-point ends with bash, and only then does the agent take charge. I'm assuming this is not the case, hence the agent spins the docker, then the docker just ends. Could that be?
Hi SkinnyPanda43
Are you trying to access the same Task or an external one?
I was using clearml == 0.17.5 and I also had this issue
I think it was introduced when we moved to subprocess reporting, with 0.17.5
You can disable it with the following in clearml.conf:
sdk.development.report_use_subprocess = false
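For context, that setting sits under the sdk.development section of clearml.conf:

```
sdk {
    development {
        # disable the subprocess-based reporting added in 0.17.5
        report_use_subprocess: false
    }
}
```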
Correct (copied == uploaded)
Yes, let's assume we have a task with id aabbcc
On two different machines you can do the following:
trains-agent execute --docker --id aabbcc
This means you manually spin two simultaneous copies of the same experiment; once they are up and running, will your code be able to make the connection between them (e.g. OpenMPI, torch.distributed, etc.)?
SparklingElephant70, let me make sure I understand: the idea is to make sure the pipeline will launch a specific commit/branch, and that you can control it? Also, are you using the pipeline add_step function, or are you decorating a function with PipelineDecorator?
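For reference, a minimal sketch of the add_step style (project/task names are placeholders); the alternative is decorating functions with PipelineDecorator.pipeline / PipelineDecorator.component:

```python
from clearml import PipelineController

pipe = PipelineController(name="my-pipeline", project="examples", version="1.0")
# add_step clones an existing base task; the commit/branch it executes is
# taken from that base task's recorded execution section
pipe.add_step(
    name="stage_one",
    base_task_project="examples",
    base_task_name="template task",  # placeholder base task
)
pipe.start()
```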
LazyTurkey38 I think this is caused by new versions of pip reporting the wrong link:
https://github.com/bwoodsend/pip/commit/f533671b0ca9689855b7bdda67f44108387fe2a9
Just making sure, the original code was executed on python 3?
JitteryCoyote63 I think I failed explaining myself.
- I think the problem with the controller is that you are interacting (aka changing hyperparameters) with a Task created using a new SDK version, from an older SDK version. Specifically, we added section names to the hyperparameters, and only the new version of the SDK is aware of them.
Make sense? - Regarding the actual problem: it seems like this is somehow related to the first one, the task at runtime is using an older SDK version, and I t...
using the cleanup service
Wait FlutteringWorm14, the cleanup service, or the task.delete call? (these are not the same)
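To make the distinction concrete, a minimal sketch of the programmatic call (the task id is a placeholder):

```python
from clearml import Task

# task.delete removes a single task on demand; the cleanup service is a
# separately scheduled task that deletes old experiments in bulk
task = Task.get_task(task_id="aabbcc")
task.delete()
```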
GrievingTurkey78 MagnificentSeaurchin79 do you guys want to start a PR branch we can all work on?
Hi GrievingTurkey78
How can I check the server dashboard to make sure everything is working? I have tried to access the external ip through https but the browser is not able to connect.
What do you mean by the server dashboard?
Regarding (2), see here: https://allegro.ai/docs/faq/faq/#web-auth
diff line by line is probably not useful for my data config
You could request a better configuration diff feature 🙂 Feel free to add to GitHub
But this also means I have to load all the configuration into a dictionary first.
Yes 😞
Hi @<1587615463670550528:profile|DepravedDolphin12>
Is there any way to get the id of the pipeline using the pipeline name?
In the UI, the top-right "details" panel should have the Pipeline ID
Is this what you are looking for?
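Programmatically, since a pipeline run is itself a Task, something along these lines should also work (the ".pipelines" sub-project layout is an assumption, verify it against your UI):

```python
from clearml import Task

# pipeline runs are usually stored under a ".pipelines" sub-project
runs = Task.get_tasks(
    project_name="examples/.pipelines/my-pipeline",  # assumed layout
    task_name="my-pipeline",
)
print([t.id for t in runs])
```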
ok, yes, but this will install the package of the branch specified there.
Correct
So if I'm working on my own branch and want to run an experiment, I would have to manually put my current branch name in the git path.
When you say your own branch you mean local (i.e. not pushed to remote git repo) ?
Hi @<1697056701116583936:profile|JealousArcticwolf24>
Awesome deployment 🤩
Yes, if you need to scale model serving further you can just run another instance of the clearml-serving-inference container:
https://github.com/allegroai/clearml-serving/blob/7ba356efc97a6ae2159283d198d981b3c1ab85e6/docker/docker-compose.yml#L77
So you end up with two of them, one per models environ...
Hi @<1697056701116583936:profile|JealousArcticwolf24> just saw the reply
Does the image look okay?! And what is the query? Basically I'm trying to understand if Grafana is connected to Prometheus, and if Prometheus has any data in it
Secondly, just to make sure, the kafka service should be able to connect directly to the container running the actual inference
MuddySquid7 I might have found something, and this is very, very odd: it seems it will not upload any new images past the history size, which is very odd considering the number of users actively using this feature...
Do you want to try a hack to see if it solves your issue?
GiddyTurkey39 Okay, can I assume "Installed packages" contains the packages you need?
If so, you can set up trains-agent on a machine (see instructions on the GitHub repo)
And then clone the experiment, and enqueue it into the "default" queue (or any other queue your agent is connected to)
https://github.com/allegroai/trains-agent
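If you prefer doing the clone + enqueue from code instead of the UI, a minimal sketch (task id and queue name are placeholders):

```python
from clearml import Task

# clone the template experiment and push the copy into the agent's queue
template = Task.get_task(task_id="aabbcc")
cloned = Task.clone(source_task=template, name="cloned experiment")
Task.enqueue(cloned, queue_name="default")
```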
just got the pipeline to run
Nice!
using the default queue okay?
Using the default queue is fine. The other queue is the "services" queue; by default the trains-server runs an agent that pulls jobs from it.
In "services" mode, an agent pulls jobs one right after the other (not waiting for the previous job to finish), as opposed to a regular queue (any other), where the trains-agent pulls a job only after the previous one has completed.
And do you need to run your code inside a docker, or is venv enough?
I have the agent configured to force install requirements.txt
what do you mean by that?
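If the question is about the agent option that forces installing from the repo's requirements.txt instead of the task's logged packages, a hedged guess at the relevant clearml.conf setting:

```
agent {
    package_manager {
        # install from the repository's requirements.txt, ignoring the
        # task's logged "Installed packages"
        force_repo_requirements_txt: true
    }
}
```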