Also, I appreciate the time you're taking to answer AgitatedDove14 and CostlyOstrich36. I know Fridays are not working days in Israel, so thank you!
@<1523704157695905792:profile|VivaciousBadger56> It seems like whatever you pickled in the zip file relies on some additional files that are not pickled.
Okay, this was a deep dive into the clearml-agent code.
It took a long time to figure out that one specific Python version had an old virtualenv (Python 3.6.9 and Python 3.8 had the latest virtualenv, but Python 3.7.5 had an old one).
The task then requested Python 3.7, and that old virtualenv version was broken.
As a result -> could the agent maybe also output the virtualenv version used when setting up the environment for the first time?
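For what it's worth, this is roughly how I ended up checking the versions by hand; the interpreter names below are just what happens to be on our machine:

import subprocess

# Print the virtualenv version each interpreter would use (interpreter names are examples)
for python in ("python3.6", "python3.7", "python3.8"):
    try:
        result = subprocess.run(
            [python, "-m", "virtualenv", "--version"],
            capture_output=True, text=True, check=True,
        )
        print(python, "->", result.stdout.strip())
    except (FileNotFoundError, subprocess.CalledProcessError) as exc:
        print(python, "-> not available:", exc)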
No, I have no running agents listening to that queue. It's as if it's retained in some memory somewhere and the server keeps creating it.
SmugDolphin23 we've been working with this for 2 weeks now, and it creates a lot of junk in our UI. Is there any way to have better control over this?
SmugDolphin23 I think you can simply change not (type(deferred_init) == int and deferred_init == 0) to deferred_init is True?
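Just to spell out the difference I mean, a standalone sketch (not the actual clearml source, and the example value is made up):

deferred_init = True  # example value; in practice this is the argument passed to Task.init

# Current check: anything other than the literal int 0 counts as deferred
# (True, 1, a string, ... all pass)
use_deferred = not (type(deferred_init) == int and deferred_init == 0)

# Proposed check: only an explicit True counts as deferred
use_deferred = deferred_init is True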
So some UI that shows the contents of users.get_all?
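Something like the output of this, but as a page in the WebUI (a rough sketch using APIClient against our self-hosted server, assuming credentials are already configured in clearml.conf):

from clearml.backend_api.session.client import APIClient

client = APIClient()
# List all users known to the server, i.e. the contents of users.get_all
for user in client.users.get_all():
    print(user.id, user.name)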
Indeed. I'll open an issue, sure!
Hm, this didn't happen until now; I'd be happy to try again with a new version, but something with 1.4.0 broke our StorageManager, so we reverted to 1.3.2
There's no decorator, just e.g.:

from typing import Any, Optional

def helper(foo: Optional[Any] = None):
    return foo

def step_one(...):
    # stuff
Then the type hints are not removed from helper, and the code immediately crashes when run.
That's what I thought @<1523701087100473344:profile|SuccessfulKoala55> , but the server URL is correct (and WebUI is functional and responsive).
In part of our code, we look for projects with a given name, and pull all tasks in that project. That's the crash point, and it seems to be related to having running tasks in that project.
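For context, the lookup is roughly this (a simplified sketch; we wrap it a bit differently, and the project name is a placeholder):

from clearml import Task

# Pull all tasks in a project by name; this is where it crashes for us
tasks = Task.get_tasks(project_name="our-project")  # placeholder project name
for task in tasks:
    print(task.id, task.name)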
The S3 bucket credentials are defined on the agent, as the bucket is also running locally on the same machine - but I would love for the code to download and apply the file automatically!
I understand, but then the TOML file needs to be parsed to check whether Poetry is used; it's just a tool entry in pyproject.toml (something like the sketch below).
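A minimal sketch of that check, assuming Python 3.11+ for the built-in tomllib (older interpreters would need the tomli package instead):

import tomllib

# Check whether pyproject.toml declares a [tool.poetry] section
with open("pyproject.toml", "rb") as f:
    pyproject = tomllib.load(f)

uses_poetry = "poetry" in pyproject.get("tool", {})
print("poetry project:", uses_poetry)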
On an unrelated note, when cloning an experiment via the WebUI, shouldn't the cloned experiment have the original experiment as a parent? It seems to be empty
We're using a self-hosted account.
Sure! It's a bit intricate since it accommodates many of our different plotting functionalities, but these are the important bits (I realize we have some bad naming here, but fig[0] is actually a Figure object, and fig[1] is an Axes object):
import matplotlib.pyplot as plt
import seaborn as sns

plt.switch_backend('agg')
sns.set_theme(...)
fig = plt.subplots(...)  # fig[0] is the Figure, fig[1] is the Axes
sns.histplot(data, ax=fig[1], ...)
fig[1].set_xlim(...)
fig[1].set_ylim(...)
fig[1].legend(loc='best')
fig[1].set_xlabel(xlabel)
fig[1].set_ylabel(ylabel)
fig[1].set_...
Not sure I understand your comment - why not let the user start with an empty comparison page and add experiments from the "Add Experiment" button as well?
I'm not sure what you mean by "entity", but honestly anything works. We're already monkey-patching our way around it.
Thanks! I'll wait for the release note/docs update.
Uhhh, not really, unfortunately :white_frowning_face:. I have ~20 tasks happening in a single file, and it's quite random if/when this happens. I've just noticed that it tends to happen with the shorter tasks.
I'm guessing that's not on PyPI yet?
Task.init is called at a later stage of the process, so I think this relates again to the whole setup process we've been discussing both here and in #340... I promise to try ;)
What's new in 1.1.6rc0?
Any updates @<1523701087100473344:profile|SuccessfulKoala55>?
Dynamic pipelines in a notebook, so I don't have to recreate a pipeline every time a step is changed.
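Roughly the kind of flow I have in mind, built from functions inside the notebook (just a sketch; the names and the example step are placeholders rather than our real pipeline):

from clearml import PipelineController

def step_one(data_url: str):
    # placeholder step body
    return data_url

pipe = PipelineController(name="notebook-pipeline", project="examples", version="0.0.1")
pipe.add_function_step(
    name="step_one",
    function=step_one,
    function_kwargs=dict(data_url="http://example.com/data.csv"),
    function_return=["data_url"],
)
# Run everything in the notebook process so steps can be edited and re-run quickly
pipe.start_locally(run_pipeline_steps_locally=True)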