PanickyAnt52 when the docker is loaded, it will search for the highest python version to use for the agent. Then when it launches the Task itself, it will first try to match the python version requested by the Task. It does so by looking for "python3.7",
what are you getting when running `which python3.7` inside the docker? Could it be you have a venv inside the docker with a different python version?
`Task.add_requirements('.')` should work
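Something along these lines, as a rough sketch (assuming the clearml package; project/task names here are placeholders):
```
from clearml import Task

# Note: add_requirements() should be called before Task.init() so the agent picks it up.
# Passing '.' (as suggested above) points it at the local package/repo requirements
# instead of the automatically detected imports.
Task.add_requirements('.')

task = Task.init(project_name='examples', task_name='requirements-from-local-package')
```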
I made a custom image for the VMSS nodes, which is based on Ubuntu and has multiple CUDA versions installed, as well as conda and docker pre-installed.
This is very cool, any reason for not using dockers for the multiple CUDA versions?
BTW: Full RestAPI reference here
https://allegro.ai/clearml/docs/rst/references/clearml_api_ref/index.html
I think this is the discussion you are after:
https://clearml.slack.com/archives/C01H5VAUZ8R/p1612452197004900?thread_ts=1612273112.002400&cid=C01H5VAUZ8R
Hi QuizzicalDove0
I guess the reason is that the idea is that integration is literally 2 lines, and it will take less time to execute the code on a system with a working env (we assume there is one) than to configure all the git, python packages, arguments, etc...
All that said, you can create an experiment from code, using Task.import_task https://allegro.ai/docs/task.html#trains.task.Task.import_task
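A rough sketch of that flow (names are placeholders, and I'm assuming the export_task() / import_task() calls documented in the link above):
```
from trains import Task  # or `from clearml import Task` in newer versions

# Export an existing task's full definition to a plain dict
source = Task.get_task(project_name='examples', task_name='my experiment')
task_data = source.export_task()

# Edit whatever you need (name, parameters, etc.) ...
task_data['name'] = 'my experiment (created from code)'

# Create a brand-new task in the system from the edited definition
new_task = Task.import_task(task_data)
```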
ShallowCat10 try something similar to this one, do notice that it might take a while to get all the task objects, so I would start with a single one 🙂
```
from trains import Task

tasks = Task.get_tasks(project_name='my_project')
for task in tasks:
    scalars = task.get_reported_scalars()
    # 'title' / 'original_series' are placeholders for your own plot title and series name
    for x, y in zip(scalars['title']['original_series']['x'], scalars['title']['original_series']['y']):
        task.get_logger().report_scalar(title='title', series='new_series', value=y, iteration=x)
```
PompousBeetle71 is this an ArgParser argument or a connected dictionary?
I have one agent running on the machine. I also have only one task running. This only happens to us when we use pipelines
@<1724960468822396928:profile|CumbersomeSealion22> notice that when you are launching a pipeline you are actually running two Tasks: one is the "pipeline" itself (i.e. the logic) and the other is the component in the pipeline (i.e. the step)
If you have one agent, I'm assuming what happens is the pipeline itself (the one that you launch on your machine)...
and I have no way to save those as clearml artifacts
You could do (at the end of the code): `task.upload_artifact('profiler', Path('./fil-result/'))` wdyt?
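For example, a minimal sketch (project/task names are placeholders, and `./fil-result/` is the profiler output folder mentioned above):
```
from pathlib import Path
from clearml import Task

task = Task.init(project_name='examples', task_name='profiling run')

# ... run the profiled code here, writing its output into ./fil-result/ ...

# At the end of the run, upload the whole result folder as a single artifact
task.upload_artifact('profiler', artifact_object=Path('./fil-result/'))
```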
GiganticTurtle0 notice that when you spin an agent with --services-mode, you basically let it run many Tasks at once (this is in contrast to the default behavior, where an agent runs a single Task at a time).
That is a good point, when the UI returns a popup error you have all the info, but on "toaster" messages I guess it does not?!
As long as it does not make the toaster message too large, seems like a good idea to add
We abuse the object description here to store the desired file path.
LOL, yep that would work, I'm assuming you have some infrastructure library that does this hack for you, but really cool way around it 🙂
And last but not least, for dictionary for example, it would be really cool if one could do:
Hmm, what you will end up with now is the following behaviour: `my_other_config['bar']` will hold a copy of `my_config`; if you clone the Task and change "my_config" it will hav...
ConvolutedSealion94 if you run in bash: `cd ~/work/repo/code/` and then `git status`, what are you getting?
... transformed to 'str' when passed to a function decorated with PipelineDecorator.component at the time of calling it in the pipeline itself. Again, is this something intentional?
Are you sure about that? Notice the example code specifies int as well...
Interesting!
I would also add that the Task name is not unique, and you can use it to describe the "process / goal etc.", which would make it pretty obvious to search / review from the UI.
Regarding models and branches, I would use the Task tags (you can have as many as you like) to tag the specific model type (or dev branch if the algorithm is different), which means you can also easily filter based on the Tags in the UI.
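For example, a rough sketch of tagging and then filtering (project/tag names are placeholders; the `tags` argument of `get_tasks` assumes a reasonably recent clearml version):
```
from clearml import Task

task = Task.init(project_name='examples', task_name='train resnet')

# Tag the run with the model type / dev branch so it is easy to filter later
task.add_tags(['resnet', 'dev-branch'])

# Later, fetch only the runs carrying a specific tag (the same filter is available in the UI)
tagged_tasks = Task.get_tasks(project_name='examples', tags=['resnet'])
```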
can you use the Web UI to compare the artifacts from two separate subprojects?
Yes comp...
Hi StaleHippopotamus38
I imagine I could make the changes specified in the warning to /etc/security/limits.conf
Yep, seems like an elastic memory issue, but I think the helm chart takes care of it,
You can see a reference in the docker compose:
https://github.com/allegroai/clearml-server/blob/09ab2af34cbf9a38f317e15d17454a2eb4c7efd0/docker/docker-compose.yml#L41
It seems that I solved the problem by moving all of the local code (local repos) imports to after the Task.init
PunyPigeon71 I'm confused, how did that solve the issue on the remote machine?
With pleasure, I'll make sure we officially release RC1 soon :)
PompousParrot44 the venv created in the docker always inherits from the docker system-wide packages, so in essence if you are using the same set of python packages, nothing will get reinstalled.
Thanks SmallDeer34, I think you are correct, the 'output' model is returned properly, but 'input' models are returned as the model name, not the model object.
Let me check something
Yep 🙂 but only in RC (or github)
Hi @<1691258563357315072:profile|ColorfulKitten60>
I think we need some context for this question 🙂
I don't know whether you have access to the backend,
Creepy, no I do not 🙂
I can't make anything appear in the console part of the ui
`clearml_task.logger.report_text("some text")` should work
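i.e. something like this minimal sketch (equivalently via `get_logger()`; project/task names are placeholders):
```
from clearml import Task

task = Task.init(project_name='examples', task_name='console logging')

# The reported text shows up in the task's CONSOLE section in the UI
task.get_logger().report_text("some text")
```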
Hi DrabCockroach54
I think the Kubernetes integration (k8s glue) is not part of the open-source features, and is only available as an enterprise feature 🙂
Oh, then no, you should probably do the opposite 🙂
What is the flow like now? (meaning what are you using kubeflow for and how)
Does adding external files not upload them to the dataset output_uri?
@<1523704667563888640:profile|CooperativeOtter46> If you are adding the links with `add_external_files`, these files are not re-uploaded
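A minimal sketch of registering files by link only (the bucket path and dataset names are placeholders):
```
from clearml import Dataset

# Create a new dataset version
ds = Dataset.create(dataset_project='examples', dataset_name='external-links')

# Register the files as external links - they stay where they are (s3/gs/azure/http)
# and are NOT copied to the dataset's output_uri
ds.add_external_files(source_url='s3://my-bucket/training-data/')

ds.upload()    # uploads only locally-added files and the dataset state
ds.finalize()
```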