I'm not sure if Subprojects will work for that - can you use the Web UI to compare the artifacts from two separate subprojects?
but one possible workaround is to try to figure out if it's running in a gateway and then find the only notebook running on that server
and  cat /var/log/studio/kernel_gateway.log | grep ipynb  comes up empty
environ{'PYTHONNOUSERSITE': '0',
        'HOSTNAME': 'gfp-science-ml-t3-medium-d579233e8c4b53bc5ad626f2b385',
        'AWS_CONTAINER_CREDENTIALS_RELATIVE_URI': '/_sagemaker-instance-credentials/xxx',
        'JUPYTER_PATH': '/usr/share/jupyter/',
        'SAGEMAKER_LOG_FILE': '/var/log/studio/kernel_gateway.log',
        'PATH': '/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tmp/miniconda3/condabin:/tmp/anaconda3/condabin:/tmp/miniconda2/condabin:/tmp/anaconda2/condabin'...
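based on that dump, one rough way to detect "am I inside a Studio kernel gateway?" - the variable names come straight from the environ above, but the heuristic itself is just my guess:

```python
import os

def in_studio_kernel_gateway(env=None):
    """Heuristic: SageMaker Studio kernel containers set SAGEMAKER_LOG_FILE
    (pointing at kernel_gateway.log) and a container-credentials URI;
    a plain Jupyter server sets neither."""
    env = os.environ if env is None else env
    return ("SAGEMAKER_LOG_FILE" in env
            or "AWS_CONTAINER_CREDENTIALS_RELATIVE_URI" in env)

# quick check against a dict mimicking the dump above
print(in_studio_kernel_gateway(
    {"SAGEMAKER_LOG_FILE": "/var/log/studio/kernel_gateway.log"}))  # → True
```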
as best I can tell it'll only have one .ipynb in  $HOME  with this setup, which may work...
but even then the sessions endpoint is still empty
if I change it to 0.0.0.0 it works
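for reference, this is roughly how I'm hitting the sessions endpoint - the 0.0.0.0 host and port 8888 are assumptions from my setup, not anything SageMaker documents:

```python
import json
from urllib.request import urlopen

def list_sessions(base="http://0.0.0.0:8888"):
    """Return the Jupyter sessions list, or None if nothing is listening."""
    try:
        with urlopen(f"{base}/api/sessions", timeout=2) as resp:
            return json.load(resp)  # [] when the gateway tracks no sessions
    except OSError:
        return None  # connection refused / no server on that address

print(list_sessions())
```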
awesome, I'll test it out - thanks for the tips!
I think it just ends up in  /home/sagemaker-user/{notebook}.ipynb  every time
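if that holds, grabbing the notebook is just a glob over the home directory - sketch below, with a temp dir standing in for /home/sagemaker-user so it's self-contained:

```python
import tempfile
from pathlib import Path

def find_single_notebook(home: Path):
    """Return the lone .ipynb in `home`, or None if zero or several exist."""
    notebooks = sorted(home.glob("*.ipynb"))
    return notebooks[0] if len(notebooks) == 1 else None

# demo on a throwaway directory standing in for /home/sagemaker-user
with tempfile.TemporaryDirectory() as d:
    (Path(d) / "analysis.ipynb").touch()
    print(find_single_notebook(Path(d)).name)  # → analysis.ipynb
```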
and $QUEUE and $NUM_WORKERS are particular to my setup, but they just give the name of the queue and how many copies of the agent to run
lots of things like  {"__timestamp__": "2023-02-23T23:49:23.285946Z", "__schema__": "sagemaker.kg.request.schema", "__schema_version__": 1, "__metadata_version__": 1, "account_id": "", "duration": 0.0007679462432861328, "method": "GET", "uri": "/api/kernels/6ba227af-ff2c-4b20-89ac-86dcac95e2b2", "status": 200}
But we're also testing out new models all the time, which are typically implemented as git branches - they run on the same set of inputs but don't output their results into production
at least in 2018 it returned sessions! None
And then we want to compare backtests or just this week's estimates across multiple of those models/branches
which I looked at previously to see if I could import sagemaker.kg or kernelgateway or something, but no luck
curious whether it impacts anything besides sagemaker. I'm thinking it's generically a kernel gateway issue, but I'm not sure if other platforms are using that yet
and is there any way to capture Hydra from a notebook as a Configuration? you don't use the typical  @hydra.main()  but rather call the  compose API , and so far in my testing that doesn't capture the OmegaConf in ClearML
the key point is you just loop through the number of workers, set a unique CLEARML_WORKER_ID for each, and then run it in the background
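in Python form the same loop looks something like this - the `clearml-agent` launch is guarded so the sketch runs even where the agent isn't installed, and the QUEUE/NUM_WORKERS defaults are placeholders:

```python
import os
import shutil
import subprocess

queue = os.environ.get("QUEUE", "default")             # your queue name
num_workers = int(os.environ.get("NUM_WORKERS", "2"))  # copies of the agent

launched = []
for i in range(num_workers):
    # each copy needs a unique worker id so the server sees distinct agents
    env = dict(os.environ, CLEARML_WORKER_ID=f"agent-{i}")
    cmd = ["clearml-agent", "daemon", "--queue", queue]
    launched.append(cmd)
    if shutil.which("clearml-agent"):  # skip the launch if it's not installed
        subprocess.Popen(cmd, env=env)  # runs in the background

print(len(launched))
```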
I additionally tried using a SageMaker  Notebook  instance, to see if it was the kernel dockerization that  Studio  uses that was messing things up. But it seems to actually log  less  information from a Notebook instance vs Studio.
directly inside the notebook I can manually cause it to log the Hydra config; it just doesn't seem to autodetect it if you're doing a manual call to Hydra Compose
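for context, the manual call is roughly this - the config_path/config_name are placeholders for your own Hydra layout, and I've guarded it so it no-ops unless hydra/clearml are installed and you opt in with a RUN_CLEARML_DEMO env var (my addition, to avoid accidentally contacting a server):

```python
import importlib.util
import os

deps = all(importlib.util.find_spec(m) for m in ("hydra", "omegaconf", "clearml"))
if deps and os.environ.get("RUN_CLEARML_DEMO"):
    from hydra import compose, initialize
    from omegaconf import OmegaConf
    from clearml import Task

    # compose API instead of @hydra.main(); "conf"/"config" are placeholders
    with initialize(version_base=None, config_path="conf"):
        cfg = compose(config_name="config")

    task = Task.current_task()  # the task ClearML auto-created for the notebook
    # connect_configuration wants a plain dict, not an OmegaConf object
    task.connect_configuration(OmegaConf.to_container(cfg, resolve=True),
                               name="hydra")
    print("config attached")
else:
    print("skipped")
```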
looks like the same as in  server_info
As in, which tab when I'm viewing the Experiment should I see it on? Should it be code, an artifact, or something else?
I can get it to run up to here: None