Of course. Here it is
https://github.com/allegroai/clearml/issues/684
I'll keep you updated
can you also check that you can access the servers?
try to curl http://<my server>:port
for your different servers, and share the results 🙂
Hello Sergios,
We are working on reproducing your issue. We will update you asap
hi VexedKoala41
Your agent is running in a docker container that may have a different version of python installed. It tries to install a version of the package that doesn't exist for that python version.
Try specifying the latest matching version: Task.add_requirements('ipython', '7.16.3')
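For example, a minimal sketch (the project/task names here are just placeholders) - note that add_requirements has to be called before Task.init:
```python
from clearml import Task

# must be called before Task.init so the pinned requirement is recorded on the task
Task.add_requirements('ipython', '7.16.3')

task = Task.init(project_name='examples', task_name='my task')
```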
yes it could be worth it, i will submit, thanks. This is the same for Task.get_task(): either id or project_name/task_name
🙂
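In case it helps, both call styles look like this (the id and names are placeholders):
```python
from clearml import Task

# by task id
task = Task.get_task(task_id='<task-id>')

# or by project name + task name
task = Task.get_task(project_name='<project>', task_name='<task name>')
```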
Hi RobustRat47
Is your issue solved ? 🙂
it is basically auto-generated when you do clearml-init
there are a bunch of optional configurations that are not in the auto-generated file though.
Have a look here it is pretty detailed https://clear.ml/docs/latest/docs/configs/clearml_conf
hi WickedElephant66
you can log your models as artifacts on the pipeline task, from any pipeline step. Have a look here:
https://clear.ml/docs/latest/docs/pipelines/pipelines_sdk_tasks#models-artifacts-and-metrics
I am trying to find you some example, hold on 🙂
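In the meantime, here is a rough sketch from inside a step (assuming the step's parent task is the pipeline controller, and that the model was already saved to model.pkl):
```python
from clearml import Task

# inside a pipeline step: get the step's task and its parent (the pipeline controller task)
step_task = Task.current_task()
pipeline_task = Task.get_task(task_id=step_task.parent)

# upload the saved model file as an artifact on the pipeline task itself
pipeline_task.upload_artifact(name='trained model', artifact_object='model.pkl')
```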
Could you please give me some details about what you need to achieve? It would also help if you could explain to me what you mean by: When I use Task.create
it works?
A screenshot would be welcome here 🙂
DepravedSheep68 you could also try to add the port to your uri:
output_uri: "s3://......:port"
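For example, when initializing the task (the host, port and bucket below are placeholders):
```python
from clearml import Task

# point the task's output to an s3/minio endpoint listening on a non-default port
task = Task.init(
    project_name='examples',
    task_name='s3 output',
    output_uri='s3://my-minio-host:9000/my-bucket',
)
```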
This means that the function will create a directory structure under local_folder that mirrors the structure on the minio server. That is to say, it will create one directory per bucket it finds there - hence your clearml directory, which is the bucket the function found in the server root.
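Assuming you are using StorageManager.download_folder, the call would look something like this (the url and local path are placeholders):
```python
from clearml import StorageManager

# downloads everything under the remote root into ./local_folder,
# recreating the bucket directory structure locally
StorageManager.download_folder(
    remote_url='s3://my-minio-host:9000/',
    local_folder='./local_folder',
)
```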
yes it is 🙂 did you manage to upgrade?
We also brought a lot of new dataset features in version 1.6.2!
i suggest using a docker image that has the same python version as your local one, in order to avoid such requirements errors
you might have a proxy error or a firewall blocking somewhere
Hey TartSeagull57
We have released a version that fixes the bug. It is an RC but it is stable. The version number is 1.4.2rc1
hope it will help. keep me informed 🙂
Hello Ofir,
generally speaking, the agent parses the script and finds all the imports through an intelligent analysis (it installs the ones you use/need).
It then builds an env where it will install them and run (docker, venv/pip, etc.).
You can also force a package / package version.
For the pipelines (and the different ways to implement them), it is a bit different.
In order to answer you precisely, we would need a bit more detail about what you need to achieve:
Is it a pipeline that ...
hi RattyLouse61
here is a code example, i hope it will help you to understand the backend_api better.
```python
from clearml import Task
from clearml.backend_api import Session
from clearml.backend_api.services import events

# retrieve the task (by id, or by project name + task name)
task = Task.get_task('xxxxx', 'xxxx')

# title/series of the debug image you want to fetch (placeholders)
title = 'my image title'
series = 'my image series'

session = Session()
res = session.send(events.GetDebugImageSampleRequest(
    task=task.id,
    metric=title,
    variant=series,
))
print(res.response_data)
```
Hi CheerfulGorilla72
You have an example of implementation here :
https://github.com/allegroai/clearml/tree/master/examples/services/monitoring
Hope it will help 🙂
I see some points that you should fix:
- in the train step, you return 2 items but you have only one in its decorator: add mock (see the sketch below)
- do you really need to init a task in the pipeline controller? you will automatically get one when executing the pipeline
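Something along these lines for the decorator (assuming the second returned item is called mock):
```python
from clearml import PipelineDecorator

# declare both returned objects in the component decorator
@PipelineDecorator.component(return_values=['model', 'mock'])
def train(data):
    model = ...  # your actual training code
    mock = ...   # the second object you return
    return model, mock
```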
can you try again after having upgraded to 3.6.2 ?
Hey UnevenDolphin73
Is there any particular reason why not to create the dataset? I mean, you need to use it in different tasks, so it could make sense to create it, for it to exist on its own, and then to use it at will in any task, by simply retrieving its id (using Dataset.get)
Makes sense ?
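For reference, retrieving it later would look something like this (the id / project / name are placeholders):
```python
from clearml import Dataset

# get the dataset by id ...
dataset = Dataset.get(dataset_id='<dataset-id>')

# ... or by project + name (returns the latest version)
dataset = Dataset.get(dataset_project='<project>', dataset_name='<dataset name>')

# download a local copy to use inside the task
local_path = dataset.get_local_copy()
```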
hi AbruptHedgehog21
clearml-serving will use your clearml.conf file
Configure it to access your s3 bucket - that is the place for bucket, host etc
Hi Alon
This is indeed a known bug, we are currently working on a fix.
Hi WickedElephant66
When you are in the Projects section of the WebApp (second icon on the left), enter either "All Experiments" or any project you want to access. At the top center is the Models section. You can find the url the model can be downloaded from in its details section.
hi AbruptHedgehog21
which s3 service provider will you use ?
do you have a precise list of the vars you need to add to the configuration to access your bucket? 🙂
hey ZanyPig66
Have you set development.default_output_uri in the configuration file? Then, when you init your task, add the parameter output_uri=True.
You can also bind a local volume to the docker container and make the output_uri point to it
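A minimal sketch of the task side (the project/task names are placeholders):
```python
from clearml import Task

# with development.default_output_uri set in clearml.conf,
# output_uri=True uploads model snapshots to that default destination
task = Task.init(
    project_name='examples',
    task_name='training with uploaded models',
    output_uri=True,
)
```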
Do you mean from within a pipeline? Do you manually report the model? It might point to a local file, especially if it has been auto-logged. That is what happens when you save your model to the local file system from your script.