For now, docker compose down && docker compose up -d helps
Nothing at all. There are only 2 logs from this day, and both were at 2am
I succeeded with your instructions, so thank you!
However, we concluded that we don't want to run it through ClearML after all, so we ran it standalone.
But I'll update you if we ever run it with ClearML, so you could provide it as well
Ok great. We were writing ClearML triggers and they didn't work with "aborted". 😅
I would kindly suggest perhaps adding a list of all possible statuses to the docs
trigger.add_task_trigger(name='export', schedule_task_id=SCHEDULE_ID, task_overrides={...})
I would like to override the commit hash of the SCHEDULE_ID task with task_overrides
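For context, roughly what I'm trying (a sketch only; the IDs, queue names, trigger condition and the dotted 'script.version_num' path are my assumptions about the task_overrides format):
```python
from clearml.automation import TriggerScheduler

SCHEDULE_ID = "<task-id-to-clone-and-run>"  # placeholder

trigger = TriggerScheduler(pooling_frequency_minutes=3)
trigger.add_task_trigger(
    name='export',
    schedule_task_id=SCHEDULE_ID,
    schedule_queue='default',
    # trigger condition used in this sketch; ours fires on completed tasks
    trigger_on_status=['completed'],
    # dotted path into the task's script section -- my assumption for how
    # to override the commit hash
    task_overrides={'script.version_num': '<commit-hash>'},
)
# run the trigger controller itself on the services queue
trigger.start_remotely(queue='services')
```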
Is the trigger controller running on the services queue ?
Yes, yes it is
MelancholyElk85 thank you, however I am not sure where to put that label?
Okay, thank you for the suggestions, we'll try it out
I don't think I expressed myself well 😅
My problem is I don't know how to run a JupyterHub Task. Basically what I want is a clearml-session but with a docker container running JupyterHub instead of JupyterLab.
Do I write a Python script? If yes, how can I approach writing it? If not, what are the alternatives?
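To make it concrete, this is roughly the kind of script I was imagining (a sketch only; the docker image and project name are placeholders, and I don't know if this is the right approach):
```python
# Sketch: run JupyterHub as a ClearML Task inside a docker container.
# The docker image and project name are placeholders, not a known recipe.
import subprocess

from clearml import Task

task = Task.init(project_name="DevOps", task_name="jupyterhub-server")
# ask the agent (docker mode) to execute this task inside a JupyterHub image
task.set_base_docker("jupyterhub/jupyterhub:latest")

# when the agent runs the task, start the hub on port 8000;
# exposing/forwarding that port is exactly the part I'm unsure about
subprocess.run(["jupyterhub", "--ip", "0.0.0.0", "--port", "8000"], check=True)
```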
Yes, thank you. That's exactly what I'm referring to.
The agent is deployed on our on-premise machines
I think I know why though.
ClearML tries to install a package using pip, and pip cannot find it because it's not on PyPI; it's only listed on the PyTorch download page
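If that's the case, I guess we'd point the agent's pip at the PyTorch wheel index, something like this in the agent's clearml.conf (the cu118 suffix is just an example; it would have to match the CUDA version on the machine):
```
agent {
    package_manager {
        # let pip also look at the PyTorch wheel index, not only PyPI
        extra_index_url: ["https://download.pytorch.org/whl/cu118"]
    }
}
```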
I think you're right, the default elastic values do not seem to work for us
You are not missing anything; it is what we would like to have, to allow multiple people to have their own notebook servers. We have multiple people doing different experiments, and JupyterHub would be their "playground" environment
To answer myself on the first part: task.get_parameters() retrieves all the arguments that can be set. The syntax seems to be Args/{argparse destination}
However, this does not return the commit hash :((
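For reference, roughly what I ran (the task ID is a placeholder, and reading the commit from task.data.script.version_num is my guess at where it lives, not something I've verified):
```python
from clearml import Task

task = Task.get_task(task_id="<SCHEDULE_ID>")  # placeholder ID

# flat dict of editable parameters, e.g. {"Args/batch_size": "32", ...}
print(task.get_parameters())

# the commit hash is not among them; my guess is it sits in the script
# section of the task data, e.g. task.data.script.version_num
print(task.data.script.version_num)
```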
We've successfully deployed it without helm with a custom-made docker-compose and makefiles 😄
Errors pop up occasionally in the Web UI. All we see is a dialog with the text "Error"
We didn't change a thing from the defaults that are in your GitHub 😄 so it's 500M?
CostlyOstrich36 JupyterHub is a multi-user server, which allows many users to log in and spawn their own JupyterLab instances (with custom dependencies, data etc) for running notebooks
AgitatedDove14 no errors, because I don't know how to start 😅 I am just exploring if anyone did this before I get my hands dirty
AgitatedDove14 Well, we have gotten relatively close to the goal, I suppose you wouldn't have to do a lot of work to support it natively
I tried to build allegroai/clearml-agent-services on my laptop with ubuntu:22.04 and it failed
I haven't looked, I'll let you know next time it happens
That's only part of a solution.
You'd also have to allow specifying jupyterhub_config.py, mounting it inside the container in the right place, mounting the docker socket in a secure manner to allow spawning user containers, connecting them to the correct network ( --host won't work), persisting the user database and user data...
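For example, the kind of jupyterhub_config.py we'd need to be able to mount (a sketch; the image, network and volume names are illustrative, and it assumes the dockerspawner package is installed):
```python
# jupyterhub_config.py sketch; `c` is provided by JupyterHub when it loads the file.
c.JupyterHub.spawner_class = "dockerspawner.DockerSpawner"
c.DockerSpawner.image = "jupyter/base-notebook:latest"

# attach spawned user containers to a dedicated docker network so the hub
# can reach them without --network host
c.DockerSpawner.network_name = "jupyterhub-net"
c.JupyterHub.hub_ip = "0.0.0.0"

# clean up containers on stop, but persist each user's work in a named volume
c.DockerSpawner.remove = True
c.DockerSpawner.volumes = {"jupyterhub-user-{username}": "/home/jovyan/work"}
```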
This was actually a reset (of one experiment), not a delete
Mostly the configurability of clearml-session and how it was designed. JupyterHub spawns a process on :8000 which we had to port-forward by hand, but spawning new docker containers using DockerSpawner and connecting them to the correct network (the hub should talk to them without --network host ) seems too difficult or even impossible.
Oh, and there was no JupyterHub stdout in the console output on the ClearML server; it shows JupyterLab's output by default