I'll make a PR for it now, but the long story is that while you have the full log, the virtualenv version is not logged anywhere (the usual output from virtualenv just says which Python version is used, etc.)
We can change the project names of course, if there’s a suggestion/guide that will make them see past the namespace…
I’d like to refrain from manually specifying the dependencies, since it adds a lot of overhead to extend
I've also tried e.g. setting agent.package_manager.priority_packages = ["poetry"] and/or agent.package_manager.poetry_version = ">1.2.0", and other flags, but these affect only the main /clearml_agent_venv environment, and not the one actually generated by the clearml-agent when executing the task
I'm not too worried about the dataset appearing (or not) in the Datasets tab. I would like it (the original task) to not disappear from the original project I assigned it to
I can also do this via Mongo directly, but I was hoping to skip the K8S interaction there.
I'm not entirely sure I understand the flow but I'll give it a go. I have two final questions:
This seems to only work for a single file (weights_path implies a single file, not multiple ones). Is that the case? Why do you see this as preferable to the dataset method we have now? 🤔
It's not exactly "debugging", but rather a description of the generated model/framework (generated with pygraphviz).
I guess it does not do so for all settings, but only those that come from Session()
Opened a matching feature request issue for this -> https://github.com/allegroai/clearml/issues/418
I'm not sure how the decorators achieve that; from the available examples and trials I've done, it seems that:
- Components need to be available anyway when you define the pipeline controller/decorator, i.e. in the same codebase
- The component code still needs to be self-contained (or, a function component can also be quite complex)
- Decorators do not allow any dynamic build, because you must know how the components are connected at decoration time
With that said, it could be that the provided example...
Also, creating from functions allows dynamic pipeline creation without requiring the tasks to pre-exist in ClearML, which is IMO the strongest point to make about it
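For example, something along these lines is what I mean by a dynamic build (the step function, names and arguments are made up, and the exact add_function_step arguments may differ between ClearML versions):

```python
from clearml import PipelineController


def preprocess(n: int):
    # Placeholder step - must be self-contained (imports inside if needed)
    return list(range(n))


# The pipeline structure is decided at runtime, and the step Tasks
# do not need to pre-exist in ClearML
pipe = PipelineController(name="dynamic-pipeline", project="examples", version="0.0.1")

for i in range(3):  # number of steps is not known at "decoration time"
    pipe.add_function_step(
        name=f"preprocess_{i}",
        parents=[f"preprocess_{i - 1}"] if i else None,  # connect steps on the fly
        function=preprocess,
        function_kwargs=dict(n=i + 1),
        function_return=["items"],
    )

pipe.start_locally(run_pipeline_steps_locally=True)
```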
Yes, using this extra_clearml_conf parameter you can add configuration
This is again exposing the environment variables on the WebUI for everyone to see.
The idea was to specify just the names of the environment variables, and that those would be exposed automatically to the EC2 instance, without specifying what values they should have (the value is taken from the agent running the scaler)
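For the sake of illustration, this is roughly the direction I had in mind today, building that string from the environment of the machine running the autoscaler (the helper, the variable names and the config keys inside the string are just my guess, not a documented schema):

```python
import os


def build_extra_conf(names):
    # Hypothetical helper: given only variable *names*, take the *values* from
    # the environment of the machine running the autoscaler
    lines = []
    for name in names:
        if name in os.environ:
            # Section/key layout below is illustrative only
            lines.append(f'environment.{name}: "{os.environ[name]}"')
    return "\n".join(lines)


extra_clearml_conf = build_extra_conf(["AWS_DEFAULT_REGION", "MY_INTERNAL_TOKEN"])
# Whatever ends up in this string (including secret values) is currently visible
# to everyone in the autoscaler's configuration in the WebUI - which is the problem.
```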
Yes, I’ve found that too (as mentioned, I’m familiar with the repository). My issue is still that there is no documentation as to what this actually offers.
Is this simply a helm chart to run an agent on a single pod? Does it scale in any way? Basically - is it a simple agent (similar to on-premise agents, running in the background, but here on K8s), or is it a more advanced one that offers scaling features? What is it intended for, and how does it work?
The official documentation is very spa...
Note that it would succeed if e.g. run with pytest -s
i.e.
ERROR Fetching experiments failed. Reason: Backend timeout (600s)
ERROR Fetching experiments failed. Reason: Invalid project ID
Much much appreciated 🙏
(in the current version, that is, we’d very much like to use them obviously :D)
The new task is not running inside a new subprocess. Our platform trains several models, and we'd like each of them to be tracked in its own Task. When running locally, this works "out of the box", as we can init and close a task before and after each model.
When running remotely, one cannot close the main task (since it is what orchestrates everything), and so this workaround was needed.
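Roughly, this is the local flow (project and model names are placeholders):

```python
from clearml import Task


def train_model_a():  # placeholder for one of our models
    pass


def train_model_b():  # placeholder for another one
    pass


for name, train_fn in {"model_a": train_model_a, "model_b": train_model_b}.items():
    # Locally each model gets its own Task: init before training, close after
    task = Task.init(project_name="our-platform", task_name=name)
    train_fn()
    task.close()
```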
I believe that a Pipeline should have the system tags (pipeline, maybe hidden), even if it is created in a running Task.
Looks good! Why is it using an OutputModel and an InputModel?
Ultimately we're trying to avoid Docker in the AWS autoscaler (virtualization on top of virtualization seems redundant), and instead we maintain an AMI for a faster boot sequence.
We had no issues when we used pip, but now when trying to work with poetry all these issues came up.
The way I understand poetry to work is that it expects a single system-wide installation that is used for virtual environment creation and manipulation. So at least it may be desired that the ...
Why not give ClearML read-only access credentials to the repository?
I guess the big question is how I can transfer local environment variables to a new Task
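For reference, this is the kind of thing I had in mind (not sure it is the intended approach; the allow-list prefix and the parameter section name are mine, nothing official):

```python
import os
from clearml import Task

task = Task.init(project_name="our-platform", task_name="remote-run")

# Hypothetical allow-list: only forward variables with this prefix
forwarded_env = {k: v for k, v in os.environ.items() if k.startswith("MYAPP_")}

# Store them as task parameters so they travel with the Task; when executed
# remotely, connect() returns the values stored/edited on the Task instead
forwarded_env = task.connect(forwarded_env, name="forwarded_env")

# Re-apply them before the actual work starts
os.environ.update(forwarded_env)
```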