What sort of integration is possible with ClearML and SageMaker?

What sort of integration is possible with ClearML and SageMaker? On the page describing ClearML Remote it says:

Create a remote development environment (e.g. AWS SageMaker, GCP CoLab, etc.) on any on-prem machine or any cloud.

But the only mention of SageMaker I see in the docs is the release notes for 0.13 saying "Add support for SageMaker".

I have SageMaker Studio up and running with access to my ClearML server and it's successfully able to log plots and scalars from experiments, but in terms of code it just logs the code used to launch the kernel:

"""Entry point for launching an IPython kernel.
This is separate from the ipykernel package so we can avoid doing imports until
after removing the cwd from sys.path.
"""
import sys

if __name__ == '__main__':
    # Remove the CWD from sys.path while we load stuff.
    # This is added back by InteractiveShellApp.init_path()
    if sys.path[0] == '':
        del sys.path[0]
    from ipykernel import kernelapp as app
    app.launch_new_instance()

Is it possible to capture more than that while using SageMaker?

  
  
Posted one year ago

Answers 77


poking around a little bit, and clearml.backend_interface.task.repo.scriptinfo.ScriptInfo._get_jupyter_notebook_filename() returns None

  
  
Posted one year ago

but the only exception handler is for requests.exceptions.SSLError

  
  
Posted one year ago

Hi @<1532532498972545024:profile|LittleReindeer37>
Yes, you are correct, it should capture the entire Jupyter notebook in SageMaker Studio.
Just verifying this is the use case, correct?

  
  
Posted one year ago

and cat /var/log/studio/kernel_gateway.log | grep ipynb comes up empty

  
  
Posted one year ago

the server_info is

[{'base_url': '/jupyter/default/',
  'hostname': '0.0.0.0',
  'password': False,
  'pid': 9,
  'port': 8888,
  'root_dir': '/home/sagemaker-user',
  'secure': False,
  'sock': '',
  'token': '',
  'url': '',
  'version': '1.23.2'}]
  
  
Posted one year ago

As another test I ran Jupyter Lab locally using the same custom Docker container that we're using for SageMaker Studio, and it works great there, just like the native local Jupyter Lab. So it's seemingly not the image; maybe it's something to do with how Studio runs it as a kernel.

  
  
Posted one year ago

Try to add here:
None

server_info['url'] = f"http://{server_info['hostname']}:{server_info['port']}/"
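For reference, this is pure string formatting, so it can be checked in isolation (a minimal sketch; the dict below just copies the hostname and port values from the server_info output posted earlier in the thread):

```python
# Sketch: reproduce the URL fix using the server_info fields shown above
server_info = {'hostname': '0.0.0.0', 'port': 8888, 'base_url': '/jupyter/default/'}

server_info['url'] = f"http://{server_info['hostname']}:{server_info['port']}/"
print(server_info['url'])  # http://0.0.0.0:8888/
```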
  
  
Posted one year ago

weird that it won't return that single session

  
  
Posted one year ago

so notebook path is empty

  
  
Posted one year ago

but the call to jupyter_server.serverapp.list_running_servers() does return the server

  
  
Posted one year ago

(screenshot omitted)

  
  
Posted one year ago

Nice

  
  
Posted one year ago

I can get it to run up to here: None

  
  
Posted one year ago

if I add the base_url it's not found

  
  
Posted one year ago

which I looked at previously to see if I could import sagemaker.kg or kernelgateway or something, but no luck

  
  
Posted one year ago

At the top there should be the URL of the notebook (I think)

  
  
Posted one year ago

@<1532532498972545024:profile|LittleReindeer37> nice!!! 😍
Do you want to open a PR? It will be relatively easy to merge and test, and I think they might even push it to the next version (or worst case a quick RC)

  
  
Posted one year ago

curious whether it impacts anything besides SageMaker. I'm thinking it's generically a kernel gateway issue, but I'm not sure if other platforms are using that yet

  
  
Posted one year ago

and that requests.get() throws an exception:

ConnectionError: HTTPConnectionPool(host='default', port=8888): Max retries exceeded with url: /jupyter/default/api/sessions (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f7ba9cadc30>: Failed to establish a new connection: [Errno -2] Name or service not known'))
  
  
Posted one year ago

This is very odd ... let me check something

  
  
Posted one year ago

This is strange, let me see if we can get around it, because I'm sure it worked 🙂

  
  
Posted one year ago

as best I can tell it'll only have one .ipynb in $HOME with this setup, which may work...

  
  
Posted one year ago

if I instead change the request url to f"http://{server_info['hostname']}:{server_info['port']}/api/sessions" then it gets a 200 response... however, the response is an empty list

  
  
Posted one year ago

Just ran the same notebook in a local Jupyter Lab session and it worked as I expected it might, saving a copy to Artifacts

  
  
Posted one year ago

I think it just ends up in /home/sagemaker-user/{notebook}.ipynb every time

  
  
Posted one year ago

but one possible workaround is to try to figure out if it's running in a gateway and then find the only notebook running on that server
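That fallback could look something like this (a sketch only; `find_single_notebook` is a hypothetical helper, not part of clearml, and the demo uses a temp dir standing in for /home/sagemaker-user):

```python
import glob
import os
import tempfile

def find_single_notebook(home_dir):
    """If exactly one .ipynb exists in the home dir, assume it is the
    running notebook; otherwise give up and return None."""
    notebooks = glob.glob(os.path.join(home_dir, "*.ipynb"))
    return notebooks[0] if len(notebooks) == 1 else None

# demo: a temp dir standing in for /home/sagemaker-user
with tempfile.TemporaryDirectory() as home:
    open(os.path.join(home, "experiment.ipynb"), "w").close()
    print(find_single_notebook(home))  # .../experiment.ipynb
```

The obvious caveat is that it breaks as soon as a user keeps more than one notebook in $HOME, which is why it is only a workaround.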

  
  
Posted one year ago

and this

server_info['url'] = f"http://{server_info['hostname']}:{server_info['port']}/{server_info['base_url']}/"
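One thing to watch with that variant: base_url (as shown in the server_info output, '/jupyter/default/') already starts and ends with a slash, so the format string above produces doubled slashes. A join that normalizes them might look like this (a sketch, reusing the hostname/port values from the earlier output):

```python
server_info = {'hostname': '0.0.0.0', 'port': 8888, 'base_url': '/jupyter/default/'}

# naive join duplicates slashes because base_url already carries its own
naive = f"http://{server_info['hostname']}:{server_info['port']}/{server_info['base_url']}/"
print(naive)  # http://0.0.0.0:8888//jupyter/default//

# stripping before joining keeps exactly one slash at each boundary
url = "http://{}:{}/{}/".format(
    server_info['hostname'], server_info['port'],
    server_info['base_url'].strip('/'))
print(url)  # http://0.0.0.0:8888/jupyter/default/
```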
  
  
Posted one year ago

Hi @<1532532498972545024:profile|LittleReindeer37> @<1523701205467926528:profile|AgitatedDove14>
I got the session with a bit of "hacking".
See this script:

import boto3, requests, json
from urllib.parse import urlparse

def get_notebook_data():
    # Studio writes the DomainId / UserProfileName to this metadata file
    log_path = "/opt/ml/metadata/resource-metadata.json"
    with open(log_path, "r") as logs:
        _logs = json.load(logs)
    return _logs

notebook_data = get_notebook_data()
client = boto3.client("sagemaker")
response = client.create_presigned_domain_url(
    DomainId=notebook_data["DomainId"],
    UserProfileName=notebook_data["UserProfileName"]
)
authorized_url = response["AuthorizedUrl"]
authorized_url_parsed = urlparse(authorized_url)
unauthorized_url = authorized_url_parsed.scheme + "://" + authorized_url_parsed.netloc
with requests.Session() as s:
    # visiting the presigned URL once sets the auth cookies on the session
    s.get(authorized_url)
    print(s.get(unauthorized_url + "/jupyter/default/api/sessions").content)

Basically, we can get the session directly from AWS, but we need to be authenticated.
One way I found was to create a presigned URL through boto3, by getting the domain id and profile name from a resource-metadata file that is found on the machine None.
Then use that to get the session...
Maybe there are other (safer) ways to do this, but this is a good start. We know it's possible
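The scheme+netloc step from that script can be illustrated without AWS access (a sketch only; the URL below is a made-up example of the presigned-URL shape, the real one comes from create_presigned_domain_url):

```python
from urllib.parse import urlparse

# hypothetical presigned URL; the token query string must not be reused
authorized_url = "https://d-example.studio.us-east-1.sagemaker.aws/auth?token=SECRET"

parsed = urlparse(authorized_url)
# keep only scheme + host, dropping the path and the one-time token
unauthorized_url = parsed.scheme + "://" + parsed.netloc
print(unauthorized_url)  # https://d-example.studio.us-east-1.sagemaker.aws
```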

  
  
Posted one year ago

print(requests.get(url='

  
  
Posted one year ago

I'm thinking it's generically a kernel gateway issue, but I'm not sure if other platforms are using that yet

The odd thing is that you can access the notebook, but it returns zero kernels...

  
  
Posted one year ago