Migration ClearML 1.5.0 -> 1.8? -> 1.9?
Good day to all. I am currently on ClearML Server 1.5.0.
Everything seems to be just fine, maybe some minor issues in the web UI.
What would be the recommended version to update the server to - 1.8 or 1.9?
And is the migration as simple as pulling the new version of the Docker images and firing them up?
Have there been any API changes since then?
Thanks!

  
  
Posted one year ago

Answers 17


Hi GentleSwallow91, I would highly recommend upgrading to 1.9, as it also brings a new major feature (as well as minor bug fixes). I'm not sure about DB migrations - there might be one or two. I suggest taking a look at the versions in between 🙂

  
  
Posted one year ago

Hi GentleSwallow91, no major migration between either of these versions - it should be fully automatic. Just make sure you're using the latest docker-compose.yaml file, pull the images, and you should be good to go 🙂
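For a standard Linux deployment the upgrade flow looks roughly like this (a sketch only - the /opt/clearml paths and the download URL follow the ClearML Server deployment docs and may differ for your setup, so double-check them):

```bash
# Stop the running server (default install location assumed)
docker-compose -f /opt/clearml/docker-compose.yml down

# Back up your data directory before upgrading (recommended)
sudo cp -R /opt/clearml/data /opt/clearml/data.bak

# Fetch the latest docker-compose.yml (verify the URL against the current docs)
sudo curl -L -o /opt/clearml/docker-compose.yml \
  https://raw.githubusercontent.com/allegroai/clearml-server/master/docker/docker-compose.yml

# Pull the new images and bring the server back up
docker-compose -f /opt/clearml/docker-compose.yml pull
docker-compose -f /opt/clearml/docker-compose.yml up -d
```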

  
  
Posted one year ago

Thanks @<1523701087100473344:profile|SuccessfulKoala55>
I've looked into the docker-compose file and found a new async_delete image.
Not sure what it does and whether I should include it in the upgraded installation.
If I do, there is a parameter: CLEARML__services__async_urls_delete__fileserver__url_prefixes: "[${CLEARML_FILES_HOST:-}]"
I guess I should set it to fileserver in the case of a single docker-compose?
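The relevant fragment looks roughly like this (a trimmed sketch - only the service name and the parameter above are taken from the actual file, the image name and depends_on entries are illustrative and should be checked against the file you downloaded):

```yaml
  async_delete:
    image: allegroai/clearml:latest   # illustrative - the stock file reuses the apiserver image
    restart: unless-stopped
    depends_on:
      - apiserver
      - fileserver
    environment:
      # URL prefixes the async delete job is allowed to remove files for.
      # In a single docker-compose deployment this defaults to the fileserver
      # address taken from CLEARML_FILES_HOST.
      CLEARML__services__async_urls_delete__fileserver__url_prefixes: "[${CLEARML_FILES_HOST:-}]"
```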

  
  
Posted one year ago

Made an upgrade from 1.5 to the latest version and have stumbled upon an issue with the webserver:
I am saving all artifacts to a custom S3 server. It used to work fine - saving and downloading them from the webserver. Now I cannot download anything that resides on S3 - I'm getting the following errors in the browser console:
Unable to parse "https None " as a whatwg URL.
ERROR EndpointError: Custom endpoint https None//storage.yandexcloud.net was not a valid URI

Back on 1.5 there was an issue with custom S3 endpoints - to treat them as custom, one needed to specify a port at the end, even if it is 443 - maybe that's the case?
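The port-at-the-end workaround I mean looks roughly like this on the SDK side in clearml.conf (a sketch - key and secret are placeholders):

```
sdk {
  aws {
    s3 {
      credentials: [
        {
          # Explicit port marks the host as a custom (non-AWS) endpoint
          host: "storage.yandexcloud.net:443"
          key: "<access-key>"
          secret: "<secret-key>"
          secure: true
        }
      ]
    }
  }
}
```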

  
  
Posted one year ago

Thanks @<1523703436166565888:profile|DeterminedCrab71>
I've tried that - it does not work. I have a valid endpoint in the settings, but a missing colon in the JS console.
Waiting for a fix 🙏
image

  
  
Posted one year ago

I was just informed that the images for 1.9.2 were already released on Thursday. Please test whether the new version solves the issue for you and let us know.

  
  
Posted one year ago

Hi! Any update on that fix? @<1523703436166565888:profile|DeterminedCrab71>
Maybe it is not present in 1.8 and I will just use that version?

  
  
Posted one year ago

1.9.2 with a fix should be released - I think today. Let me check.

  
  
Posted one year ago

ClearML is awesome - all works fine now! Will test the rest.

  
  
Posted one year ago

> there is a parameter: CLEARML__services__async_urls_delete__fileserver__url_prefixes: "[${CLEARML_FILES_HOST:-}]"
> I guess I should set it to fileserver in the case of a single docker-compose?

I think you can just leave it as-is

  
  
Posted one year ago

The webapp error seems to be coming from the AWS JS packages used for accessing s3-stored resources... Can you share an example of the s3 file references? Obviously these would not start with https ...?

  
  
Posted one year ago

Hi @<1523701366600503296:profile|GentleSwallow91>
"https None" is not a valid URL.
You're missing a colon (:) after https.

  
  
Posted one year ago

image

  
  
Posted one year ago

it might be a bug we already fixed, let me check

  
  
Posted one year ago

thanks 😊

  
  
Posted one year ago

Cool, will report shortly!

  
  
Posted one year ago

The bug occurs during creation of the S3 bucket credentials. You can manually fix the URL in Settings -> Configuration:
under WEB APP CLOUD ACCESS, find the right Host (Endpoint) field and add the missing colon.
We will release a fix for 1.9 soon.

  
  
Posted one year ago