have you tried to add the requirements using Task.add_requirements(local_packages) in your main file ?
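something like this - a minimal sketch, assuming the package names are just placeholders for your local packages :
from clearml import Task

# add_requirements must be called *before* Task.init
# so the packages end up in the task's installed-packages list
Task.add_requirements("my_local_package")   # hypothetical package name
Task.add_requirements("pandas", "1.4.2")    # optionally pin a version

task = Task.init(project_name="demo", task_name="requirements demo")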
hey WhoppingMole85
Do you want to initiate a task and link it to a dataset, or simply create a dataset ?
yes, it could be worth it, i will submit, thanks. This is the same for Task.get_task() : either id or project_name/task_name
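for reference, a quick sketch of both lookups (the id and names are placeholders) :
from clearml import Task

# by id
task = Task.get_task(task_id="<task_id>")

# or by project name AND task name together
task = Task.get_task(project_name="demo", task_name="my experiment")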
🙂
those are the credentials you got from your self-hosted server ?
what about the logs before the error ? I think it's relevant to have them all. I'm trying to isolate the error, and to understand whether it comes from the credentials, the server addresses, a file error or a network error
hey SmugSnake6
Can you give some more details about your configuration please ? (clearml, agent, server versions)
Also, if you have some example code to share it could help us reproduce the issue and thus help you a lot faster 🙂 (script, command line for firing your agent)
I see some points that you should fix :
- in the train step, you return 2 items but you have only one in its decorator: add mock (see the sketch below)
- do you really need to init a task in the pipeline controller ? you will automatically get one when executing the pipeline
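for the first point, a minimal sketch of what I mean (function and variable names are just illustrative) :
from clearml import PipelineDecorator

# the decorator must declare one name per returned object
@PipelineDecorator.component(return_values=["model", "mock"])
def train(data):
    model = "trained-model"   # placeholder
    mock = "mock-object"      # placeholder
    return model, mock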
Of course. Here it is
https://github.com/allegroai/clearml/issues/684
I'll keep you updated
hey WhoppingMole85 good morning !
try to pip it : !pip install clearml -U
and then check with pip show clearml
Hey
I'll play a bit with what you sent, because reproducing the issue helps a lot in solving it. I'll keep you updated 😊
can you also check that you can access the servers ?
try to do curl http://<my server>:port
for your different servers ? and share the results 🙂
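if it is a default self-hosted setup, I would expect something like this (adjust host and ports to your config) :
curl http://<my server>:8080   # web server
curl http://<my server>:8008   # api server
curl http://<my server>:8081   # files server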
We have released a lot of versions since that one 🙂
Can you please try to upgrade to the latest clearml (1.6.2) and try again ?
Hi AverageRabbit65
You are using a Pipeline from Tasks.
The steps in this case are existing clearml tasks, so the tasks you specify when you add each step (parameters base_task_project and base_task_name) refer to pre-existing tasks.
To make this example work, you first have to create them :
project_name = 'rudolf'
Task.init(project_name=project_name, task_name="Pipeline step 1 process dataset")
Task.init(project_name=project_name, task_name="Pipeline step 2 train model")
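then you can reference them from the controller - a rough sketch, assuming a PipelineController setup (names reused from above, step names illustrative) :
from clearml import PipelineController

pipe = PipelineController(name="pipeline demo", project="rudolf", version="0.0.1")
pipe.add_step(
    name="stage_process",
    base_task_project="rudolf",
    base_task_name="Pipeline step 1 process dataset",
)
pipe.add_step(
    name="stage_train",
    parents=["stage_process"],
    base_task_project="rudolf",
    base_task_name="Pipeline step 2 train model",
)
pipe.start()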
You can initiate your task as usual. When a dataset is used in it - for example it could start by retrieving one using Dataset.get - the dataset will be registered in the Info section (check it in the UI) 😊
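something like this - a minimal sketch with placeholder names :
from clearml import Task, Dataset

task = Task.init(project_name="demo", task_name="uses a dataset")

# retrieving the dataset inside the task is enough to register it
# (it will then show up in the task's Info section in the UI)
ds = Dataset.get(dataset_project="demo", dataset_name="my_dataset")
local_path = ds.get_local_copy()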
Hi Alon
This is indeed a known bug, we are currently working on a fix.
yes it is 🙂 did you manage to upgrade ?
We also brought a lot of new features to datasets in version 1.6.2 !
you are in a regular execution - I mean not a local one. So the different pipeline tasks have been enqueued. You simply need to fire an agent to pull the enqueued tasks. I would advise you to specify the queue in the steps (parameter execution_queue).
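e.g. for a pipeline built from tasks - the step and queue names are just examples :
# assuming pipe is your PipelineController
pipe.add_step(
    name="stage_train",
    base_task_project="rudolf",
    base_task_name="Pipeline step 2 train model",
    execution_queue="my_queue",
)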
You then fire your agent :
clearml-agent daemon --queue my_queue
check that your tasks are enqueued in the queue the agent is listening to.
from the webUI, in your step's task, check the default_queue in the configuration section.
when you fire the agent you should have a log that also specifies which queue the agent is assigned to
finally, in the webApp, you can check the Workers & Queues section. There you can see the agent(s), the queue they are listening to, and what tasks are enqueued in which queue
Hi CheerfulGorilla72
You have an example of implementation here :
https://github.com/allegroai/clearml/tree/master/examples/services/monitoring
Hope it will help 🙂
hey
You have 2 options to retrieve a dataset : by its id, or by the project_name AND dataset_name - those two work together, you need to pass both of them !
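i.e. - the id and names are placeholders :
from clearml import Dataset

# option 1: by id
ds = Dataset.get(dataset_id="<dataset_id>")

# option 2: by project AND name, both are required
ds = Dataset.get(dataset_project="demo", dataset_name="my_dataset")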
An agent is a process that pulls tasks from a queue and assigns resources (workers) to them. In a pipeline, when not run locally, the steps are enqueued tasks
Hi SmugSnake6
I might have found you a solution 🎉 I answered on the GH thread https://github.com/allegroai/clearml-agent/issues/111
can you tell me what your clearml and clearml server versions are please ?
When the pipeline or any step is executed, a task is created, and its name will be taken from the decorator parameters. Additionally, for a step, the name parameter is optional : if not provided, the function name will be used instead.
It seems to me that your script fails to create the pipeline controller task because it fails to pull the name parameter. Which is weird ... weird because in the last error line, we can see that name !
I don't know if it will help, but here is what I would test (rough sketch below) :
- temporarily remove the task init in the controller
- use name and project parameters that don't have spaces in them
- don't use services as the default queue
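something like this - all values are illustrative :
from clearml import PipelineDecorator

# no explicit Task.init in the controller, no spaces in name/project,
# and a default queue other than "services"
@PipelineDecorator.pipeline(
    name="my_pipeline",
    project="my_project",
    version="0.1",
    default_queue="my_queue",
)
def pipeline_logic():
    ...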