AgitatedDove14 Unfortunately not, the queues tab shows only the number of tasks, but not the resources used in the queue. I can toggle between the different workers but then I don't get the full picture.
We just do task.close() and then start a new Task.init() manually, so our "pipelines" are self-controlled
It's a small snippet that ensures identically named projects are still unique'd with a running number.
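Roughly, the flow looks like the sketch below (illustrative only; the function name and numbering scheme are placeholders, not the actual snippet):
```
from clearml import Task


def start_next_task(base_project: str, task_name: str, run_index: int) -> Task:
    """Close the currently running task and manually start the next one,
    suffixing the project name with a running number so identically named
    projects stay unique."""
    current = Task.current_task()
    if current is not None:
        current.close()

    return Task.init(
        project_name=f"{base_project} #{run_index}",
        task_name=task_name,
    )
```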
I'm using some old agent I fear, since our infra person decided to use chart 3.3.0
I'll try with the env var too. Do you personally recommend docker over the simple AMI + virtual environment?
More complete log does not add much information:
Cloning into '/root/.clearml/venvs-builds/3.10/task_repository/xxx/xxx'...
fatal: could not read Username for ' ': terminal prompts disabled
fatal: clone of ' ' into submodule path '/root/.clearml/venvs-builds/3.10/task_repository/...
It seems that the agent uses the remote repository's lock file. We've removed and renamed the file locally (caught under local changes), but it still installs from the remote lock file
My current workaround is to use poetry and tell users to delete poetry.lock if they want their environment copied verbatim
Then perhaps mac treats missing environment variables as empty and linux just crashes? Anyway, the config loading should be deferred, shouldn't it?
Created this for follow up, SuccessfulKoala55; I'm really stumped. Spent the entire day on this.
https://github.com/allegroai/clearml-agent/issues/134
So basically I'm wondering if it's possible to add some kind of small hierarchy in the artifacts, be it sections, groupings, tabs, folders, whatever.
Hurrah! Added `git config --system credential.helper 'store --file /root/.git-credentials'` to the extra_vm_bash_script and now it works
(logs the given git credentials in the store file, which can then be used immediately for the recursive calls)
I can see the task in the UI, it is not archived, and that's pretty much the snippet, but in full I do e.g.
Of course. We'd like to use S3 backends anyway, I couldn't spot exactly where to configure this in the chart (so it's defined in the individual agent's configuration)
In any case @<1537605940121964544:profile|EnthusiasticShrimp49> this seems like a good approach, but it's not quite there yet. For example, even if I'd provide a simple def run_step(...) function, I'd still need to pass the instance to the function. Passing it along in the kwargs for create_function_task does not seem to work, so now I need to also upload the inputs, etc. -- I'm bringing this up because the pipelines already do this for you.
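For reference, the pattern I was trying looked roughly like the sketch below (the placeholder step class and all names/values are illustrative, not the real code):
```
from clearml import Task


class DataTransformationStep:
    """Stand-in for the real step class (illustrative only)."""
    def run(self, **inputs):
        return inputs


def run_step(step_instance, **inputs):
    # The step function needs the instance to do its work.
    return step_instance.run(**inputs)


task = Task.init(project_name="examples", task_name="controller")
step = DataTransformationStep()

# Attempted pattern: pass the instance (plus any inputs) as kwargs of
# create_function_task -- this is the part that did not seem to work for me.
step_task = task.create_function_task(
    func=run_step,
    func_name="run_step",
    task_name="data transformation step",
    step_instance=step,
)
```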
Scaling to zero, copying the mongodb data, and scaling back up worked like a charm.
Thanks @<1523701827080556544:profile|JuicyFox94> !
That's weird -- the concept of a "root directory" is relative to a bucket. There is no "root dir" in S3, is there? It's only within a bucket itself.
And since the documentation states:
If we have a remote file
then StorageManager.download_folder('…', '~/folder/') will create ~/folder/sub/file.ext
Then I would have expected the same outcome from MinIO as I do with S3, or Azure, or any other blob container
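In other words, the behaviour I expected, sketched with a made-up MinIO endpoint and bucket (the URL and paths below are placeholders):
```
from clearml import StorageManager

# Remote layout (hypothetical): s3://my-minio:9000/bucket/sub/file.ext
# Expectation: downloading the folder recreates the sub/ structure locally,
# i.e. ~/folder/sub/file.ext, whether the backend is MinIO, AWS S3, or Azure.
StorageManager.download_folder(
    remote_url="s3://my-minio:9000/bucket/",
    local_folder="~/folder/",
)
```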
Well the individual tasks do not seem to have the expected environment.
Hey AgitatedDove14!
Finally managed; you keep saying "all projects" but you meant the "All Experiments" project instead. That's a good start. Thanks!
Couple of thoughts from this experience:
- Could we add a comparison feature directly from the search results (Dashboard view -> search -> highlight some experiments for comparison)?
- Could we add a filter on the project name in the "All Experiments" project?
- Could we add the project for each of the search results? (see above pictur...
Hey @<1537605940121964544:profile|EnthusiasticShrimp49>! You're mostly correct. The Step classes will be predefined (of course developers are encouraged to add/modify as needed), but as in the DataTransformationStep, there may be user-defined functions specified. That's not a problem though, I can provide these functions with the helper_functions argument (see the sketch after this message).
- The .add_function_step is indeed a failing point. I can't really create a task from the notebook because calling `Ta...
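For completeness, the direction I was exploring with add_function_step and helper_functions looked roughly like this (a sketch; the step name, functions, and values are illustrative):
```
from clearml import PipelineController


def normalize(value):
    # Illustrative user-defined helper used inside the step.
    return value / 100.0


def transform_step(raw_value):
    # The step body can call the helper because it is shipped alongside it.
    return normalize(raw_value)


pipe = PipelineController(name="pipeline demo", project="examples", version="0.1")
pipe.add_function_step(
    name="data_transformation",
    function=transform_step,
    function_kwargs={"raw_value": 42},
    function_return=["normalized"],
    helper_functions=[normalize],  # ship the user-defined helpers with the step
)
pipe.start_locally(run_pipeline_steps_locally=True)
```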
Yes, I've found that too (as mentioned, I'm familiar with the repository). My issue is still that there is no documentation as to what this actually offers.
Is this simply a helm chart to run an agent on a single pod? Does it scale in any way? Basically - is it a simple agent (similar to on-premise agents, running in the background, but here on K8s), or is it a more advanced one that offers scaling features? What is it intended for, and how does it work?
The official documentation is very spa...
Is there a way to accomplish this right now FrothyDog40?
But it does work on Linux. I'm using it right now and the environment variables are not defined in the terminal, only in the .env
I'll try upgrading to 1.1.5, one moment
(the extra_vm_bash_script is what you're after)
You can use logger.report_scalar and pass a single value.
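For example (the title, series, and iteration below are just placeholders):
```
from clearml import Task

task = Task.init(project_name="examples", task_name="single value report")
logger = task.get_logger()

# Report one scalar value under an illustrative title/series at iteration 0.
logger.report_scalar(title="metrics", series="accuracy", value=0.93, iteration=0)
```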
I was thinking of using the --volume settings in clearml.conf to mount the relevant directories for each user (so it's somewhat customizable). Would that work?
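Concretely, I had something like this in mind for each user's clearml.conf (a sketch only; the exact key may differ between agent versions, and the paths are placeholders):
```
agent {
    # Extra docker arguments so each user's relevant directories are mounted
    # into the container (host:container paths are placeholders).
    extra_docker_arguments: ["--volume", "/home/alice/datasets:/datasets"]
}
```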
It would be amazing if one can specify specific local dependencies for remote execution, and those would be uploaded to the file server and downloaded before the code starts executing