Also (sorry for all of these!) - could be nice to have a direct "task comparison" link in the UI somewhere, that would open a comparison with no tasks and the user can add them manually using the "add experiments" button. :)
For example, can't interact with these two tasks from this view (got here from searching in the dashboard view; they're in different projects):
I'm saying it's a bug
Hey @<1537605940121964544:profile|EnthusiasticShrimp49> ! You’re mostly correct. The Step classes will be predefined (of course developers are encouraged to add/modify as needed), but as in the DataTransformationStep, there may be user-defined functions specified. That’s not a problem though, I can provide these functions with the helper_functions argument.
- The .add_function_step is indeed a failing point. I can’t really create a task from the notebook because calling `Ta...
For the former (static-ish environment variables), just add:
environment {
  VAR1: value1
  VAR2: value2
}
to the agent’s clearml.conf
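Inside the remotely executed task these then show up as ordinary process environment variables (a tiny sketch, emulating locally what the agent's environment{} block injects; VAR1/value1 mirror the snippet above):

```python
import os

# In a task run by an agent whose clearml.conf has the environment{} block,
# the variables arrive as plain environment variables. Emulated locally:
os.environ.setdefault("VAR1", "value1")
print(os.environ["VAR1"])  # -> value1
```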
For now this is okay - no data lost, really - but I'd like to make sure we're not missing any steps in the next upgrade
Removing the PVC is just setting the state to absent AFAIK
There's a specific fig[1].set_title(title) call.
I see! The Hyper Datasets don't really fit our use case - it seems really focused on CNNs and image-based data, but lacking support for database-oriented tabular data.
So for now we mainly work with parquet and CSV files, and I was hoping there'd be an easy way to view those... I'll make a workaround with a "Datasets" project I suppose!
Yes exactly, but I guess I could've googled for that 😅
Copy the uncommitted changes captured by ClearML using the UI, write them to changes.patch, then run git apply changes.patch
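That workflow can be sketched end-to-end with a throwaway repository (all names and paths here are made up for the demo; in practice the patch content is what you copy from the "Uncommitted changes" section in the ClearML UI):

```shell
set -e
# Demo repo standing in for the task's repository (hypothetical names).
git init -q demo
echo "original" > demo/app.py
git -C demo add app.py
git -C demo -c user.email=a@b -c user.name=demo commit -qm "init"
# Emulate the uncommitted changes ClearML would have captured:
echo "modified" > demo/app.py
git -C demo diff > demo/changes.patch   # this is what you copy from the UI
git -C demo checkout -- app.py          # back to the clean commit
# On the machine where you want to reproduce the task's state:
git -C demo apply changes.patch
cat demo/app.py                         # -> modified
```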
👍
Without knowing anything, I'm assuming maybe ClearML patches plt.title and not Axes.set_title?
Anyway sounds good! 🙂
Does that make sense, SmugDolphin23?
It also happens when use_current_task=False though. So the current best approach would be to not combine the task and the dataset?
Hmmm maybe 🤔 I thought that was expected behavior from poetry side actually
How or why is this the issue? I guess something is getting lost in translation :D
On the local machine, we have all the packages needed. The code gets sent for remote execution, and all the local packages are frozen correctly with pip.
The pipeline controller task is then generated and executed remotely, and it has all the relevant packages.
Each component it launches, however, is missing the internal packages available earlier :(
I have no idea what’s the difference, but it does not log the internal repository 😞 If I knew why, I would be able to solve it myself… hehe
Pinging about this still, unresolved 🤔
ClearML does not capture our internal libraries and so our functions (pipeline steps) crash with missing modules.
Alternatively, it would be good to specify both some requirements and auto-detect 🤔
Either one would be nice to have. I kinda like the instant search option, but could live with an ENTER to search.
I opened this meanwhile - https://github.com/allegroai/clearml-server/issues/138
Generally, it would also be good if the pop-up presented some hints about what went wrong with fetching the experiments. Here, I know the pattern is incomplete and invalid. A less advanced user might not understand what's up.
Hey @<1523701205467926528:profile|AgitatedDove14> , thanks for the reply!
We would like to avoid dockerizing all our repositories. And for the time being we have not used the decorators, but we can do that too.
The pipeline is instead built dynamically at the moment.
The issue is that the components do not have their dependency. For example:
def step_one(...):
    from internal.repo import private
    # do stuff
When step_one is added as a component to the pipeline, it does ...
I’d like to refrain from manually specifying the dependencies, since it adds a lot of overhead to extend
Exactly, it should have auto-detected the package.
It is. In what format should I specify it? Would this enforce that package on various components? Would it then no longer capture import statements?
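For reference, this is roughly what I mean (a hedged sketch only: it assumes the packages parameter of PipelineController.add_function_step, and internal-repo, the project names, and the S3 path are made-up placeholders):

```python
def build_pipeline():
    # Deferred import: actually building the pipeline requires clearml to be
    # installed and a server to talk to, so this sketch only defines a function.
    from clearml import PipelineController

    def step_one(data_path):
        from internal.repo import private  # the import that fails remotely
        return private.process(data_path)

    pipe = PipelineController(name="demo-pipeline", project="demo", version="1.0.0")
    pipe.add_function_step(
        name="step_one",
        function=step_one,
        function_kwargs={"data_path": "s3://bucket/data.parquet"},
        # Explicit pin for the package auto-detection misses (hypothetical name):
        packages=["internal-repo==1.0.0"],
    )
    return pipe
```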
Still crashing, I think that may not be the correct virtual environment to edit 🤔
It's the one created later down the line
I'll try that in a bit (that requires some access control changes). Any idea how can I modify the dynamically created virtualenv?
Poetry Enabled: Ignoring requested python packages, using repository poetry lock file!
The currently activated Python version 3.10.6 is not supported by the project (~3.8.0).
Trying to find and use a compatible version.
Using python3.8 (3.8.16)
Creating virtualenv ... in /root/.clearml/venvs-builds/3.10/task_repository/...git/.venv
Installing dependencies from ...
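For context, the "(~3.8.0)" constraint in that log would come from something like this in the repository's pyproject.toml (a guess at the relevant fragment, not the actual file):

```toml
[tool.poetry.dependencies]
# "~3.8.0" allows 3.8.x only, so the agent's system Python 3.10.6 is
# rejected and poetry falls back to a discovered python3.8 interpreter.
python = "~3.8.0"
```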
Ultimately we're trying to avoid docker in AWS autoscaler (virtualization on top of virtualization seems redundant), and instead we maintain an AMI for a faster boot sequence.
We had no issues when we used pip, but now when trying to work with poetry all these issues came up.
The way I understand poetry to work is that it is expected there is one system-wide installation that is used for virtual environment creation and manipulation. So at least it may be desired that the ...