Am I doing something differently from you?
Oh!!! Sorry 🙂
So...basically it's none of them.
All of these are hosted tiers. The self-hosted option is our open source server, which you can find at https://github.com/allegroai/clearml-server
The README there explains how to install it and some of the options available to you.
Looking at our pricing page, I can see how it's not trivial to get from there to the github page...I'll try to improve that! 😄
Hi anton, the self-hosted ClearML provides all the features you get from the hosted version, so you're not losing anything. You can deploy it either with docker-compose or on a K8s cluster with helm charts.
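If it helps, here's a rough sketch of both routes. The exact docker-compose file path and the helm repo/chart names are from memory, so treat them as assumptions and verify against the clearml-server README and the clearml-helm-charts repo before running anything:

```shell
# docker-compose route (file path is an assumption; check the repo README)
curl -o docker-compose.yml \
  https://raw.githubusercontent.com/allegroai/clearml-server/master/docker/docker-compose.yml
docker-compose up -d

# Helm route on a K8s cluster (repo/chart names are assumptions; verify first)
helm repo add allegroai https://allegroai.github.io/clearml-helm-charts
helm install clearml allegroai/clearml
```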
We plan to expand our model object to have searchable key:value dicts associated with it, and maybe metric graphs. What you're asking is for us to also add artifacts to it. Are these artifacts going to be datasets (or something else)? If I understand correctly, a key:value would be enough, since you're not saving the data itself, only links to where it is. Am I right?
Hmm, I actually think there isn't a way. Once you have more projects in the system, this project will be pushed down and you won't see it on the front page. Is there any specific reason why you want it removed?
You get all the features that are available for the hosted version such as experiment management, orchestration (with ClearML agent), data management (with ClearML Data), model serving (with ClearML serving) and more 🙂
Does that answer your question?
I'll check with R&D if this is the plan or we have something else we planned to introduce and update you
Hi CourageousKoala93 , not 100% sure I understand what graphmode is, I see it's a legacy option maybe from TF1? If you can put a small snippet so I can try it on my side that'll be helpful!
GiganticTurtle0 So 🙂 I had a short chat with one of our R&D guys. ATM, what you're looking for isn't there. What you can do is use OutputModel().update_weights_package(folder_here)
and the folder will be saved with EVERYTHING in it. I don't think it would work well for you (I assume you want to download the model all the time, but the artifacts only sometimes, and you don't want to download everything every time), but it's a hack.
Another option is to use the model design field to save links to a...
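A rough sketch of the hack, assuming a configured ClearML SDK (the model name and folder are illustrative, and `package_model_with_artifacts` is a helper I made up for this example). The local helper at the bottom just shows what would end up in the package:

```python
from pathlib import Path


def package_model_with_artifacts(folder: str):
    """Upload `folder` (weights + extra files) as one model package.

    Caveat from the thread: consumers will download EVERYTHING in it.
    Requires clearml installed and a configured server.
    """
    from clearml import OutputModel  # lazy import; needs clearml + a server

    model = OutputModel(name="model-with-artifacts")  # illustrative name
    model.update_weights_package(weights_path=folder)
    return model


def list_package_contents(folder: str):
    """Purely local helper: the files that would go into the package."""
    return sorted(p.name for p in Path(folder).iterdir())
```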
Hi GrittyHawk31 , you can use Dataset.get(). If you're using a file you can call Dataset.get_local_copy() to download it.
You can check out the https://clear.ml/docs/latest/docs/clearml_data/data_management_examples/data_man_python#data-ingestion documentation or an example https://github.com/allegroai/clearml/blob/master/examples/datasets/data_ingestion.py that uses it
Hi EnviousStarfish54, if you don't want to send info to the server, I suggest setting an environment variable; this way, as long as the machine has this envvar set, it won't send anything to the server
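For example, here's the pattern of gating on such a variable. `CLEARML_OFFLINE_MODE` is my best guess at the variable the SDK honors (check the docs for your SDK version); the helper itself is just plain Python you can adapt:

```python
import os

# Sketch: decide whether to report based on an env var.
# CLEARML_OFFLINE_MODE is an assumption -- verify the exact name in the docs.


def reporting_enabled() -> bool:
    # Any of "1"/"true"/"yes" means: do NOT send anything to the server
    return os.environ.get("CLEARML_OFFLINE_MODE", "").strip().lower() not in (
        "1", "true", "yes",
    )


os.environ["CLEARML_OFFLINE_MODE"] = "1"
print(reporting_enabled())  # False -> nothing is sent
```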
That's true 🙂 Our SDK is Python based, so your code needs to be Python code for us to integrate with it
Hmm, can you give a small code snippet of the save code? Are you using wandb-specific code? If so, it makes sense that we don't save it, as we only intercept torch.save() and not wandb function calls
pipe.add_step(
    name='stage_process',
    parents=['stage_data'],
    base_task_project='examples',
    base_task_name='pipeline step 2 process dataset',
    parameter_override={
        'General/dataset_url': '${stage_data.artifacts.dataset.url}',
        'General/test_size': 0.25,
    },
    pre_execute_callback=pre_execute_callback_example,
    post_execute_callback=post_execute_callback_example,
)
Why not add the extra_index_url to the installed packages part of the script? Worked for me 😄
If a pre_execute_callback returns False (or 0, not 100% sure 🙂), the step just won't run.
Makes sense?
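To make that concrete, here's a minimal sketch of such a callback. The skip condition and parameter name are made up, and the exact callback signature should be verified against your SDK version:

```python
# Sketch of a pre_execute_callback that skips a step by returning False.
# ClearML calls it right before the step launches; the (pipeline, node,
# param_override) signature is an assumption -- check your SDK's docs.


def skip_small_splits(pipeline, node, param_override):
    # Hypothetical condition: skip the step when test_size is 0
    if float(param_override.get("General/test_size", 0)) == 0:
        return False  # step will not run
    return True  # proceed normally


print(skip_small_splits(None, None, {"General/test_size": 0}))     # False
print(skip_small_splits(None, None, {"General/test_size": 0.25}))  # True
```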
Let me circle this back to the UI folks and see if I can get some sort of date attached to this 🙂
ReassuredTiger98 Nice digging and Ouch...that isn't fun. Let me see how quickly I can get eyes on this 🙂
Hey, AFAIK, SDK version 1.1.0 disabled the demo server by default (still accessible by setting an envvar).
https://github.com/allegroai/clearml/releases/tag/1.1.0
Is this still an issue even in this version?
Hey Tim! What natanM gave you is a fair place to start (albeit probably not up to date on either side). There are a few product overviews online (but they tend to be outdated a month after they're written, so...)
As for pricing, we are going to release our new website with updated pricing that will make it more transparent AND easier to compare 🙂
Yeah! I think maybe we don't parse the build number... let me try 🙂
In the installed packages I got:
- 'torch==1.14.0.dev20221205 # https://download.pytorch.org/whl/nightly/cu117/torch-1.14.0.dev20221205%2Bcu117-cp38-cp38-linux_x86_64.whl '
- torchtriton==2.0.0+0d7e753227
- 'torchvision==0.15.0.dev20221205 # https://download.pytorch.org/whl/nightly/cu117/torchvision-0.15.0.dev20221205%2Bcpu-cp38-cp38-linux_x86_64.whl '
If you're using method decorators like https://github.com/allegroai/clearml/blob/master/examples/pipeline/pipeline_from_decorator.py , calling the steps is just like calling functions (the pipeline code translates them into tasks). The pipeline itself is logic you write on your own, so you can add whatever logic is needed. Makes sense?
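A toy illustration of that idea (this is NOT the ClearML API, just a plain decorator): decorated steps stay ordinary functions, and the pipeline function calls them directly, which is exactly why you can mix in any control flow you like:

```python
# Toy decorator, not the ClearML API: it shows why decorated steps can be
# called like plain functions -- the wrapper is free to run the function
# locally or (as ClearML does) turn the call into a task.


def component(fn):
    def wrapper(*args, **kwargs):
        # ClearML would create/enqueue a task here; the toy just calls through
        return fn(*args, **kwargs)
    return wrapper


@component
def load_data():
    return [1, 2, 3]


@component
def process(data):
    return [x * 2 for x in data]


def pipeline():
    # Your own logic lives here: branch, loop, whatever you need
    return process(load_data())


print(pipeline())  # [2, 4, 6]
```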