FYI @SuccessfulKoala55 (or I might be doing something wrong), but it seems the Python migration code comes with carriage returns, so it fails on Linux by default (one has to run `tr -d '\r'` on it to use it)
EDIT: And also it defaults to `/opt/allegro/data` rather than `/opt/clearml/data`, which is the location recommended when installing the server 🤔
It was really easy with the attached code, really 👍
I would only maybe suggest adding a note in the documentation that if one uses the default recommended install location, the script can be run without any command-line arguments.
I had to momentarily look at the code to see that the default paths match my own (though I could've also looked at the `--help` default values 😛)
And agent too, I hope..?
I'd be happy to join a #releases channel just for these!
Just randomly decided to check and saw there's a server 1.4 ready 🎉
The `Task.init` is called at a later stage of the process, so I think this relates again to the whole setup process we've been discussing both here and in #340... I promise to try ;)
I... did not, ashamed to admit. The documentation says only boolean values.
And last but not least, for dictionaries for example, it would be really cool if one could do:
my_config = task.connect_configuration(my_config, name=name)
my_other_config = task.connect_configuration(my_other_config, name=other_name)
my_other_config['bar'] = my_config  # Creates the link automatically between the dictionaries
Running a self-hosted server indeed. It's part of code that simply adds or uploads an artifact 🤔
https://clear.ml/docs/latest/docs/references/sdk/services_monitor
Then you can run this as a task, see also this example https://clear.ml/docs/latest/docs/guides/services/slack_alerts
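As a rough sketch of the "run it as a task" idea (this is not the Monitor helper used in the linked example; the project name and polling interval below are just placeholders):
```python
from time import sleep

from clearml import Task

# Placeholder project to watch; replace with your own
PROJECT_NAME = "examples"

while True:
    # Task.get_tasks is part of the public SDK; here we poll for failed tasks
    failed_tasks = Task.get_tasks(
        project_name=PROJECT_NAME,
        task_filter={"status": ["failed"], "order_by": ["-last_update"]},
    )
    for t in failed_tasks:
        print("Task failed:", t.id, t.name)
    sleep(60)
```
Running something like this with `Task.init` at the top and enqueuing it to a services queue is, as far as I understand, roughly what the linked slack_alerts example does in a more structured way.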
So basically I'm wondering if it's possible to add some kind of small hierarchy in the artifacts, be it sections, groupings, tabs, folders, whatever.
I can navigate through the projects, but if I select one task in one project, then navigate to another project and select a different task, there is no option to compare the tasks.
On the projects page, if I show all, I just see the projects. If I search for a task with a similar name, I get results, but I can't compare them via the UI.
The only way I've managed so far was to create a pseudo-comparison between unrelated tasks in the same project, then remove one task from the comparison, and u...
Latest (1.5.1 I believe?), full log incoming, but it's like what I've posted elsewhere already 🤔
It just sets up the environment and immediately crashes when trying to run the code.
The setup itself is done correctly.
Another example - trying to validate dataset interactions ends with
```python
        else:
            self._created_task = True
            dataset_project, parent_project = self._build_hidden_project_name(dataset_project, dataset_name)
            task = Task.create(
                project_name=dataset_project, task_name=dataset_name, task_type=Task.TaskTypes.data_processing)
            if bool(Session.check_min_api_server_version(Dataset.__min_api_version)):
                get_or_create_proje...
```
You mean at the container level or at clearml?
Yes, the container level (when these docker shell scripts run).
The per-user ID would be nice, except I upload the `.env` file before the `Task` is created (it's only available really early in the code).
Interesting, why won’t it be possible? It's quite easy to get the source code using e.g. `dill`.
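For illustration, a minimal sketch of what I mean with dill (the function here is just a placeholder):
```python
import dill.source

def my_func(x):
    # placeholder function defined locally / interactively
    return x * 2

# dill.source.getsource returns the source code of a live object,
# which is handy when there is no script file to point at
print(dill.source.getsource(my_func))
```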
I think also the script path in the created task will cause some issues, but let’s see…
After the task was initialized? 🤔
I have seen this quite frequently as well tbh!
Anything else you’d recommend paying attention to when setting the clearml-agent helm chart?
We’re using karpenter (more magic keywords for me), so my understanding is that it will manage the scaling part.
Much much appreciated 🙏
What do you mean? 😄 Using `logging.config.dictConfig(...)`
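i.e. something along these lines (a trimmed-down placeholder of our real config):
```python
import logging
import logging.config

# Placeholder logging configuration; the real dictionary is larger
LOGGING_CONFIG = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "default": {"format": "%(asctime)s %(levelname)s %(name)s: %(message)s"},
    },
    "handlers": {
        "console": {"class": "logging.StreamHandler", "formatter": "default"},
    },
    "root": {"handlers": ["console"], "level": "INFO"},
}

logging.config.dictConfig(LOGGING_CONFIG)
logging.getLogger(__name__).info("logging configured")
```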
I'll try it out, but I would not like to rewrite that code myself and maintain it, that's my point 😅
Or are you suggesting I use `Task.import_offline_session`?
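i.e. something like this, assuming a zip produced by an earlier offline run (the path below is a placeholder):
```python
from clearml import Task

# Zip created by a run executed with Task.set_offline(True); placeholder path
offline_zip = "/tmp/offline-session-abc123.zip"

# Imports the offline session into the server; the return value should be
# the id of the imported task
new_task_id = Task.import_offline_session(offline_zip)
print(new_task_id)
```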
I'd like to set up both with and without GPUs. I can use any region, preferably some EU one.
CostlyOstrich36 That looks promising, but I don't see any documentation on the returned schema (i.e. `workers.worker_stats` is not specified anywhere?)
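In the meantime, a rough way to discover the schema is to just print what comes back (sketch below; I'm not sure what fields the server actually returns):
```python
from clearml.backend_api.session.client import APIClient

client = APIClient()

# workers.get_all lists the registered workers; printing the returned objects
# (or calling dir() on them) exposes whatever fields the server sends back
for worker in client.workers.get_all():
    print(worker.id)
    print(worker)
```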
We have an internal mono-repo and some of its packages are required - they’re all available correctly for the controller, and only some are required for the individual tasks, but the “magic” doesn’t happen 😞
That is, the controller does not identify them as requirements, so they’re not installed in the tasks' environment.
It’s just that for the `packages` argument, ClearML says:
If not provided, packages are automatically added based on the imports used inside the wrapped function.
So… 🤔
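To make it concrete, this is the kind of thing I'd fall back to - listing our internal packages explicitly (package and function names below are placeholders, assuming the decorator-based pipeline):
```python
from clearml.automation.controller import PipelineDecorator

# "our_internal_package" stands in for a mono-repo package that the automatic
# import analysis does not pick up
@PipelineDecorator.component(
    return_values=["result"],
    packages=["our_internal_package>=1.0"],
)
def step_one(value: int):
    from our_internal_package import do_something  # hypothetical import
    return do_something(value)
```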
I can also do this via Mongo directly, but I was hoping to skip the K8S interaction there.
Any follow-up thoughts, SuccessfulKoala55 or CostlyOstrich36?