Hi SweetHippopotamus84 🙂
What do you mean by different? Entirely unrelated packages, or entirely different versions of the same packages? Could it be that those packages are dependencies of your requirements?
Can you provide some examples/screenies?
Can you copy-paste the error you got?
I'll take a large snippet too 😛
Do you have any idea what the source of this is?
TypeError: __init__() got an unexpected keyword argument 'configurations'
ExtensiveCamel16, hi! Hope you give us a shout out 🙂
Is it hosted by you or is it app.clear.ml?
Hi @<1523702786867335168:profile|AdventurousButterfly15>, I think this is what you're looking for - None
@<1556812486840160256:profile|SuccessfulRaven86>, you can specify different containers in clearml.conf
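For reference, a minimal sketch of what that could look like in the agent section of clearml.conf - the image and arguments here are just placeholders:
```
agent {
    default_docker {
        # container used when the task itself does not specify one (placeholder image)
        image: "nvidia/cuda:11.8.0-cudnn8-runtime-ubuntu22.04"
        # optional extra arguments passed to docker run
        arguments: ["--ipc=host"]
    }
}
```
Individual experiments can still override this default with their own container image from the UI.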
I'm afraid not, as it would still require a data merge.
That's the controller. I would guess if you fetch the controller you can get its id as well
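Something like this should do it, assuming the controller shows up as a regular task you can look up by project/name (the names below are hypothetical):
```python
from clearml import Task

# fetch the pipeline controller task by project/name (hypothetical values)
controller = Task.get_task(project_name="Pipelines/my pipeline", task_name="my pipeline")
print(controller.id)  # the controller's task id
```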
Hi UnevenDolphin73, it does run that danger; however, it will spin down after the idle timeout if there is nothing for it to pick up from the queue
I think if you copy all the data from the original server and stick it in the new server, it should transfer everything. Otherwise I think you would need to extract it through the API or copy the MongoDB documents
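If you go the API/SDK route, I believe something along these lines works for moving individual tasks - treat it as a sketch: run the export with clearml.conf pointing at the old server and the import with it pointing at the new one, and note it moves the task definition, not artifacts stored elsewhere:
```python
from clearml import Task

# Step 1 (clearml.conf pointing at the old server): serialize a task definition
task_data = Task.get_task(task_id="<task-id>").export_task()

# Step 2 (clearml.conf pointing at the new server): recreate the task from that data
new_task = Task.import_task(task_data)
print(new_task.id)
```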
AgitatedDove14, I think git diff wasn't run. I think Laszlo ran git status manually, not git diff
Can you please open a GitHub issue so we can follow up on this?
@<1718799873618219008:profile|FunnyPeacock68>, it appears that reading a YAML file isn't supported - currently only requirements.txt is. I'd suggest opening a GitHub feature request for this capability!
Hi @<1590514584836378624:profile|AmiableSeaturtle81>, you can just list the packages like any other package and add the --extra-index-url in the agent's clearml.conf
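For example, something like this in that clearml.conf (the URL is a placeholder):
```
agent {
    package_manager {
        # additional pip index searched on top of PyPI (placeholder URL)
        extra_index_url: ["https://my.private.pypi/simple"]
    }
}
```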
Hi @<1719524669695987712:profile|ClearHippopotamus36>, what if you manually add these two packages to the Installed Packages section in the experiment's Execution tab?
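If editing the UI by hand gets tedious, I believe you can also force the packages from code before Task.init - a rough sketch with placeholder package names and versions:
```python
from clearml import Task

# make sure these packages end up in the task's installed packages (placeholder names/versions)
Task.add_requirements("some_package", "1.2.3")
Task.add_requirements("another_package")

task = Task.init(project_name="examples", task_name="my experiment")
```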
It isn't a bug; you have to add the previews manually through reporting. For example:
from clearml import Dataset
ds = Dataset.create(...)
ds.add_files(...)
ds.get_logger().report_media(...)
It's worth a try 🙂
Basically the same capabilities that are offered for unstructured data - the ability to register files, keep track of and manage them with links, and the ability to query all of their metadata and then connect it to the experiment as a query on the metadata across different versions - essentially giving you a feature store.
I am of course oversimplifying, as the HyperDatasets feature is an extremely powerful tool for managing unstructured data.
Hi @<1535069219354316800:profile|PerplexedRaccoon19>, HyperDatasets are built mainly for unstructured data, since that problem is the harder one, but all of the features can also be applied to tabular data. Is there something specific you're looking for?
SwankySeaurchin41, what do you mean? Can you give a specific example?
JitteryCoyote63, I think so.
# assuming: from omegaconf import OmegaConf, and train_task is your ClearML Task
config = OmegaConf.load(train_task.connect_configuration(config_path))
Should work
BoredPigeon26, are images from previous iterations still showing?
Hi @<1523704157695905792:profile|VivaciousBadger56>, can you provide some screenshots of what you're seeing?
Hi @<1564060263047499776:profile|ThoughtfulCentipede62>, I think the problem is still there. Can you please open a GitHub issue to track it so we can make sure it gets resolved?