It seems that the agent uses the remote repository's lock file. We've removed and renamed the file locally (caught under local changes), but it still installs from the remote lock file 🤔
My current workaround is to use poetry and tell users to delete poetry.lock if they want their environment copied verbatim
Then perhaps macOS treats missing environment variables as empty and Linux just crashes? Anyway, the config loading should be deferred, shouldn't it?
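Something like this is what I mean by deferring (just a generic sketch, not the actual loading code; the variable name is a placeholder):
```python
import os

# Generic sketch: read the environment variable lazily, at call time,
# instead of at import time, and treat a missing value as empty so the
# behaviour is the same on macOS and Linux. "MY_SETTING" is a placeholder.
def get_setting(name: str = "MY_SETTING", default: str = "") -> str:
    return os.environ.get(name, default)
```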
So basically I'm wondering if it's possible to add some kind of small hierarchy in the artifacts, be it sections, groupings, tabs, folders, whatever.
I can see the task in the UI and it is not archived. That's pretty much the snippet, but in full I do e.g.
Of course. We'd like to use S3 backends anyway; I couldn't spot exactly where to configure this in the chart (so it's defined in each individual agent's configuration)
In any case @<1537605940121964544:profile|EnthusiasticShrimp49> this seems like a good approach, but it's not quite there yet. For example, even if I provide a simple def run_step(…) function, I'd still need to pass the instance to the function. Passing it along in the kwargs for create_function_task does not seem to work, so now I also need to upload the inputs, etc. I'm bringing this up because the pipelines already do this for you.
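Roughly what I'm trying, as a minimal sketch (the function body and values are made up; only create_function_task and its kwargs forwarding are the real API):
```python
from clearml import Task

def run_step(instance_state: dict, multiplier: int = 2):
    # In my real code this needs the class instance; here I pass a plain
    # dict as a stand-in, since the live instance doesn't survive the trip.
    return instance_state["value"] * multiplier

task = Task.init(project_name="demo", task_name="parent")

# kwargs passed here are forwarded to run_step when the child task executes
child = task.create_function_task(
    func=run_step,
    func_name="run_step",
    task_name="run_step task",
    instance_state={"value": 21},
    multiplier=2,
)
```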
Scaling to zero, copying the MongoDB data, and scaling back up worked like a charm.
Thanks @<1523701827080556544:profile|JuicyFox94> !
That's weird -- the concept of a "root directory" is only defined relative to a bucket. There is no "root dir" in S3, is there? It only exists within a bucket itself.
And since the documentation states:
If we have a remote file `s3://bucket/sub/file.ext`, then `StorageManager.download_folder('s3://bucket/', '~/folder/')` will create `~/folder/sub/file.ext`
Then I would have expected the same outcome from MinIO as I do with S3, or Azure, or any other blob container
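i.e. I'd expect something like this to work (host/bucket names are placeholders; the MinIO credentials are assumed to be configured in clearml.conf under sdk.aws.s3):
```python
from clearml import StorageManager

# Placeholder host/bucket; credentials for the MinIO endpoint are assumed
# to be set in clearml.conf (sdk.aws.s3.credentials with host/key/secret).
local_copy = StorageManager.download_folder(
    remote_url="s3://my-minio:9000/bucket/sub/",
    local_folder="~/folder/",
)
# Expected to mirror the S3/Azure behaviour: ~/folder/sub/file.ext
```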
Well the individual tasks do not seem to have the expected environment.
Hey AgitatedDove14 🙂
Finally managed it; you kept saying "all projects", but you meant the "All Experiments" project. That's a good start 👍 Thanks!
Couple of thoughts from this experience:
- Could we add a comparison feature directly from the search results (Dashboard view -> search -> highlight some experiments for comparison)?
- Could we add a filter on the project name in the "All Experiments" project?
- Could we add the project for each of the search results? (see above pictur...
Yes, I've found that too (as mentioned, I'm familiar with the repository). My issue is still that there is no documentation as to what this actually offers.
Is this simply a Helm chart to run an agent on a single pod? Does it scale in any way? Basically - is it a simple agent (similar to on-premise agents, running in the background, but here on K8s), or is it a more advanced one that offers scaling features? What is it intended for, and how does it work?
The official documentation is very sparse...
Is there a way to accomplish this right now FrothyDog40 ? 🤔
But it does work on Linux 🤔 I'm using it right now and the environment variables are not defined in the terminal, only in the .env 🤔
I'll try upgrading to 1.1.5, one moment
(the extra_vm_bash_script is what you're after)
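Something like this, as a sketch of where it sits (the key name follows the clearml AWS autoscaler example; the values are placeholders):
```python
# Placeholder values; only the extra_vm_bash_script key itself is the point.
autoscaler_config = {
    "extra_vm_bash_script": "\n".join([
        "export MY_ENV_VAR=value",         # e.g. export environment variables
        "pip install some-extra-package",  # or install extra dependencies on the VM
    ]),
}
```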
You can use logger.report_scalar and pass a single value.
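For example (a minimal sketch; the title/series names are placeholders):
```python
from clearml import Task

task = Task.init(project_name="demo", task_name="single value")
logger = task.get_logger()

# Report a single value; with a fixed iteration it shows up as one point.
logger.report_scalar(title="summary", series="final_accuracy", value=0.93, iteration=0)
```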
Thanks David! I appreciate that, it would be very nice to have a consistent pattern in this!
Added the following line under `volumes` for `apiserver`, `fileserver`, and `agent-services`: `- /data/clearml:/data/clearml`
I thought so too - so I added flush calls just in case, but nothing's changed.
This is somewhat weird since it always happens in the above scenario (Ray + ClearML), and always in the last task/job from Ray
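For reference, the flush calls I added look roughly like this (a sketch; placing them at the very end of the Ray job is my assumption):
```python
from clearml import Task

task = Task.current_task()
if task is not None:
    # Flush pending console output / metric reports before the process exits
    task.get_logger().flush()
    task.flush(wait_for_uploads=True)
```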
Thought it might be via docker, thanks!
ClearML 1.1.4, Matplotlib 3.3.0 (it's not the latest as we have some backward compatibility issues)
For now this is okay - no data lost, really - but I'd like to make sure we're not missing any steps in the next upgrade
Note that it would succeed if run with e.g. `pytest -s`