
SuccessfulKoala55 sorry for the bump, what's the status of the fix?
we didn't change a thing from the defaults that are in your GitHub 😄 so it's 500M?
No errors in logs, but that's because I restarted the deployment :(
I guess I'll let you know the next time this happens haha
Okay, thank you for the suggestions, we'll try it out
I think you're right, the default Elastic values do not seem to work for us
This means that an agent only ever spins up one particular image? I'd like to define different container images for different tasks, possibly even build them in the process of starting a task. Is such a thing possible?
CostlyOstrich36 this sounds great. How do I accomplish that?
By language, I meant the syntax. What is Args, and what is batch, in Args/batch? And what other values exist 😀
By commit hash, I mean the hash of the commit a task was run from. I wish to refer to that commit hash, in code, in another task (started with a TriggerScheduler)
To answer myself on the first part: task.get_parameters()
retrieves a list of all the arguments which can be set. The syntax seems to be Args/{argparse destination}
However, this does not return the commit hash :((
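For reference, this is roughly what I ran to list the parameters; a minimal sketch, where SCHEDULE_ID is just a placeholder for the scheduled task's ID:
from clearml import Task
task = Task.get_task(task_id=SCHEDULE_ID)
params = task.get_parameters()  # dict with keys like "Args/batch"
for name, value in params.items():
    print(name, value)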
trigger.add_task_trigger(name='export', schedule_task_id=SCHEDULE_ID, task_overrides={...})
I would like to override the commit hash of the SCHEDULE_ID task with task_overrides
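Something like this is what I'm after; I'm only guessing that the commit hash lives under script.version_num, so treat it as a sketch:
trigger.add_task_trigger(
    name='export',
    schedule_task_id=SCHEDULE_ID,
    task_overrides={
        # guessed field path for the commit hash
        'script.version_num': COMMIT_HASH_FROM_OTHER_TASK,
    },
)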
Haha we manage our own deployment without k8s, so no dice there
But, it turns out we are using nginx as a reverse proxy, so putting a client_max_body_size
inside our nginx.conf solved it for us. Thanks :))
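For anyone else hitting this, the change was roughly this one directive in the block that proxies to the ClearML server (pick a limit that fits your artifact sizes; the value here is just an example):
client_max_body_size 512m;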
MelancholyElk85 thank you, however I am not sure where to put that label?
Yeah, sorry, I typoed 😅 I was supposed to say "newer than 18.04"
What I meant was that we rebuilt them with 22.04
Thank you, I understand now :D
CostlyOstrich36 JupyterHub is a multi-user server, which allows many users to log in and spawn their own JupyterLab instances (with custom dependencies, data etc) for running notebooks
AgitatedDove14 no errors, because I don't know how to start 😅 I am just exploring if anyone did this before I get my hands dirty
Ok great. We were writing clearml triggers and they didn't work with "aborted". 😅
I would kindly suggest perhaps adding a list of all possible statuses to the docs
I think I know why though.
ClearML tries to install a package using pip, and pip cannot find it because it's not on PyPI but only listed on the PyTorch download page
SOLVED: It was an expired service account key in a clearml config
It is likely you have mismatched CUDA versions. I presume you have cu113 locally but cu114 remotely. Have you run any updates lately?
The log suggests there is no cu113 installation either:
Warning, could not locate PyTorch torch==1.12.1 matching CUDA version 113
Yup, absolutely. Otherwise it cannot run your code haha
When installing locally you told pip to look for packages at that page, but you don't say that to the remote pip
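I think the fix is to tell the agent's pip about that index as well, something along these lines in the agent's clearml.conf (a sketch, assuming the cu113 wheel index):
agent {
    package_manager {
        # extra index so the remote pip can also find the cu113 torch wheels
        extra_index_url: ["https://download.pytorch.org/whl/cu113"]
    }
}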
I don't think I expressed myself well 😅
My problem is that I don't know how to run a JupyterHub Task. Basically what I want is a clearml-session
but with a docker container running JupyterHub instead of JupyterLab.
Do I write a Python script? If yes, how can I approach writing it? If not, what are the alternatives?
Mostly the configurability of clearml-session
and how it was designed. JupyterHub spawns a process at :8000 which we had to port forward by hand, but spawning new docker containers using JupyterHub's DockerSpawner
and connecting them to the correct network (the hub should talk to them without --network host
) seem too difficult or even impossible.
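For context, this is roughly how I understand the DockerSpawner side would look; all names here are made up and I never actually got it working:
# jupyterhub_config.py (sketch)
c.JupyterHub.spawner_class = 'dockerspawner.DockerSpawner'
c.DockerSpawner.image = 'jupyter/base-notebook:latest'  # made-up per-user image
c.DockerSpawner.network_name = 'jupyterhub-net'  # made-up docker network instead of --network host
c.JupyterHub.hub_connect_ip = 'jupyterhub'  # so spawned containers can reach the hub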
Oh, and there was no JupyterHub stdout in the console output on the ClearML server; it shows JupyterLab's output by default
I succeeded with your instructions, so thank you!
However, we concluded that we don't want to run it through ClearML after all, so we ran it standalone.
But I'll update you if we ever run it with ClearML, so you could provide it as well