Is there a way to accomplish this right now FrothyDog40 ? 🤔
Each user creates a `.env` file for their needs or exports the variables in the shell running the Python code. Currently I copy the environment variables to an S3 bucket and download them from there.
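For anyone reading later, a minimal sketch of the `.env` approach, assuming the python-dotenv package (the variable name is just an illustration):
```python
# Load per-user variables from a local .env file into the process environment
# (assumes the python-dotenv package is installed)
from dotenv import load_dotenv
import os

load_dotenv(".env")  # reads KEY=VALUE lines from the file
aws_key = os.environ["AWS_ACCESS_KEY_ID"]  # illustrative variable name
```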
One way to circumvent this, btw, would be to also add/use the `--python` flag for virtualenv
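Something like this, if I'm not mistaken (the interpreter path is just an example):
```bash
# Create the virtualenv against an explicit interpreter
virtualenv --python=/usr/bin/python3.10 .venv
```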
This seems to be fine for now. If any future lookup finds this thread, btw: `with mock.patch('clearml.datasets.dataset.Dataset.create'): ...`
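A slightly fuller sketch of that pattern, in case it helps (`run_code_under_test` is a hypothetical stand-in for whatever calls `Dataset.create`):
```python
# Stub out Dataset.create in a test so no real dataset is created
from unittest import mock

with mock.patch("clearml.datasets.dataset.Dataset.create") as mock_create:
    run_code_under_test()  # hypothetical: whatever code path calls Dataset.create
    mock_create.assert_called_once()
```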
Well you can install the binary in the additional startup commands.
Matter of fact, you can just include the ECR login in the "startup steps" offered by the scaler, so no need for this repository. I was thinking these are local instances.
Not sure if ClearML has any built-in support, but we used the above for a similar issue with Prefect2 :)
(the `extra_vm_bash_script` is what you're after)
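For reference, the ECR login line that could go into `extra_vm_bash_script` (region and account ID are placeholders):
```bash
# Log Docker into a private ECR registry at instance startup
# (region and account ID are placeholders)
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
```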
That's up and running and is perfectly fine.
Thought it might be via docker, thanks!
I would expect the service to actually implicitly inject it into new instances prior to applying the user's extra configuration 🤔
Either, honestly, would be great. I meant even just a link to a blank comparison and one can then add the experiments from that view
-
I guess? 🤔 I mean the same filter option one has for e.g. tags in the table view. In the "all experiments" project I think it would make sense for one to be able to select the projects of interest, or even filter for textual matches.
-
Sorry I meant the cards indeed :)
Managed now 🙂 Thank you for your patience!
I edited the previous post with some suggestions/thoughts
Also (sorry for all of these!) - could be nice to have a direct "task comparison" link in the UI somewhere, that would open a comparison with no tasks and the user can add them manually using the "add experiments" button. :)
For example, can't interact with these two tasks from this view (got here from searching in the dashboard view; they're in different projects):
I'm saying it's a bug
For now this is okay - no data lost, really - but I'd like to make sure we're not missing any steps in the next upgrade
Removing the PVC is just setting the state to absent AFAIK
Yes exactly, but I guess I could've googled for that 😅
Copy the uncommitted changes captured by ClearML using the UI, write them to `changes.patch`, then run `git apply changes.patch`
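Roughly this, assuming you're in the repo at the right base commit:
```bash
# Paste the diff copied from the ClearML UI, then press Ctrl-D to save
cat > changes.patch
# Apply the uncommitted changes on top of the checked-out commit
git apply changes.patch
```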
👍
Anyway sounds good! 🙂
Does that make sense SmugDolphin23 ?
It also happens when `use_current_task=False`, though. So the current best approach would be to not combine the task and the dataset?
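For context, a minimal sketch of the call in question (dataset name and project are placeholders):
```python
# Create a dataset decoupled from the currently running task
from clearml import Dataset

dataset = Dataset.create(
    dataset_name="my-dataset",      # placeholder
    dataset_project="my-project",   # placeholder
    use_current_task=False,         # don't reuse the running task for the dataset
)
```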
How or why is this the issue? I guess something is getting lost in translation :D
On the local machine, we have all the packages needed. The code gets sent for remote execution, and all the local packages are frozen correctly with pip.
The pipeline controller task is then generated and executed remotely, and it has all the relevant packages.
Each component it launches, however, is missing the internal packages available earlier :(
Pinging about this still, unresolved 🤔
ClearML does not capture our internal libraries and so our functions (pipeline steps) crash with missing modules.
Alternatively, it would be good to be able to specify some requirements manually and still have the rest auto-detected 🤔
Hey @<1523701205467926528:profile|AgitatedDove14> , thanks for the reply!
We would like to avoid dockerizing all our repositories. And for the time being we have not used the decorators, but we can do that too.
The pipeline is instead built dynamically at the moment.
The issue is that the components do not have their dependencies available. For example:
```python
def step_one(...):
    from internal.repo import private
    # do stuff
```
When `step_one` is added as a component to the pipeline, it does ...
Exactly, it should have auto-detected the package.
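In the meantime, a possible workaround, assuming the pipeline is built via `PipelineController.add_function_step` (the package spec below is a placeholder for your internal library), is to pin the step's requirements explicitly:
```python
# Explicitly list the step's requirements so the remote run installs them
from clearml import PipelineController

pipe = PipelineController(name="my-pipeline", project="my-project", version="1.0.0")
pipe.add_function_step(
    name="step_one",
    function=step_one,
    packages=["internal-repo==1.0.0"],  # placeholder for the internal package
)
```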