GrittyStarfish67
I do not wish for data duplication. Any idea how to do this with the clearml-data CLI/GUI/Python?
At least in theory creating a new version with parents from multiple Datasets should just work out of the box.
wdyt?
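Untested sketch with the clearml-data CLI (project/name and the dataset IDs here are placeholders): creating a child version that lists both existing datasets as parents should reference their files rather than copy them:

```shell
# create a new version whose parents are both existing datasets
clearml-data create --project "data" --name "merged" --parents <dataset_a_id> <dataset_b_id>
# optionally add new files here, then finalize the version
clearml-data close
```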
Glue machine or K8S Worker machine?
The K8s worker machine.
You could also configure an ingest service as part of the template, so they always have an external port mapped into the pod.
GrievingTurkey78 notice that when enqueuing an aborted Task, the agent will not delete the previously reported metrics/logs
Thanks GrievingTurkey78
Sure, just PR it (should work with any Python/Hydra version):
```
kwargs['config'] = config
kwargs['task_function'] = partial(PatchHydra._patched_task_function, task_function,)
result = PatchHydra._original_run_job(*args, **kwargs)
```
Ohh sorry, you will also need to fix the def _patched_task_function
The parameter order is important as the partial call relies on it.
GrievingTurkey78 short answer: no 🙂
Long answer: the files are stored as differential sets (think changesets relative to the previous version(s)). The collection of files is then compressed and stored as a single zip. The zip itself can be stored on Google, but on their object storage (GCS), not on GDrive. Notice that the default storage for clearml-data is the clearml-server; that said, you can always mix and match (even between versions).
Hi NastyFox63
This seems like most of the reports are converted to pngs (which is what the automagic does if it fails to convert the matplotlib into interactive plot).
no more than 114 plots are shown in the plots tab.
Are you saying there is a 114-plot limit?
Is this true for "full screen" mode as well (i.e. not in the experiments table, but switching to the full detailed view)?
-e
:user/private_package.git@57f382f51d124299788544b3e7afa11c4cba2d1f#egg=private_package
Is this the correct link to the repo and a valid commit id ?
Can you post a few more lines from the agent's log ?
Something is failing to install, I'm just not sure what
I'll check if I can wrap the code in something that calls Task.delete when debugging
Whatever you think works best for you, I was genuinely curious 🙂
To me (personally) it is helpful to have a log even while debugging (comparing to previous runs etc, trying to see what went wrong even on a console output level). When I'm done I just search for everything I worked on select all, and archive them. Then a cleanup service in the background clears all the archived Tasks once they ar...
I want to be able to access the data just avoid reporting the experiment results
Yes, you are correct 🙂
If you just want to skip the logging, you can always add an if around the Task.init call?!
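For example (a sketch; the environment-variable name here is just an illustration):

```python
import os

def maybe_init_task(project="examples", name="my_experiment"):
    """Create a ClearML task only when we are not in a debug run."""
    if os.environ.get("MY_DEBUG_RUN") == "1":
        return None  # debugging: nothing is created or reported
    from clearml import Task  # imported lazily, only when actually logging
    return Task.init(project_name=project, task_name=name)
```

Call maybe_init_task() at the top of the script; with MY_DEBUG_RUN=1 nothing ever touches the server.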
Correct (copied == uploaded)
We just don't want to pollute the server when debugging.
Why not ?
You can always remove it later (with Task.delete)?
It will store everything locally, later you can import it back to the server, if you want.
GrievingTurkey78 did you open ports 8008 / 8080 / 8081 on your GCP instance? (I have to admit I can't remember exactly where in the admin panel you do that, but I can assure you it is there :) )
can you tell me what the serving example is in terms of the explanation above, and what the triton serving engine is?
Great idea!
This line actually creates the control Task (2):
```
clearml-serving triton --project "serving" --name "serving example"
```
This line configures the control Task (the idea is that you can do that even when the control Task is already running, but in this case it is still in draft mode).
Notice the actual model serving configuration is already stored on the crea...
Hi @<1523701337353621504:profile|FlutteringSheep58>
are you asking how to convert a worker IP into a DNS-resolved host name?
Thanks StrongHorse8
Where do you think would be a good place to put a more advanced setup? Maybe we should add an entry for DevOps? Wdyt?
GrievingTurkey78 where do you see this message? Can you send the full server log?
Hi ContemplativePuppy11
This is really interesting point.
Maybe you can provide a pseudo-class abstract of your current pipeline design; this will help us understand what you are trying to achieve and how to make it easier to get there
MagnificentSeaurchin79 let's make sure the basics work first.
Can you see the 3D plots under the Plot section ?
Regarding the tensors, could you provide a toy example for us to test?
Hi QuaintPelican38
Assuming you have opened the default SSH port 10022 on the EC2 instance (and assuming the AWS permissions are set so that you can access it), you need to use the --public-ip
flag when running clearml-session. Otherwise it "thinks" it is running on a local network and registers itself with the local IP. With the flag on, it gets the public IP of the machine, and then the clearml-session running on your machine can connect to it.
Make sense ?
Hi CleanPigeon16
Put the specific git into the "installed packages" section
It should look like:... git+
...
(No need for the specific commit, you can just take the latest)
Ohh AbruptHedgehog21, if this is the case, why don't you store the model with torch.jit.save
and use Triton to run the model ?
See example:
https://github.com/allegroai/clearml-serving/tree/main/examples/pytorch
(BTW: if you want a full custom model serve, in this case you would need to add torch to the list of python packages)
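For instance, a minimal TorchScript export could look like this (the model here is just a stand-in):

```python
import torch

class TinyModel(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x)

# Script the model and save it as a TorchScript file that
# Triton's PyTorch backend can load directly.
scripted = torch.jit.script(TinyModel().eval())
torch.jit.save(scripted, "model.pt")
```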
Funny, it's the "h5" extension; it takes a different execution path inside keras...
Let me see what can be done 🙂
Yey @ https://app.slack.com/team/U01CJ43KX2N this one does not work!
Give me a minute I'll
This is what I just used:
```
import os
from argparse import ArgumentParser

from tensorflow.keras import utils as np_utils
from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Activation, Dense, Softmax
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import ModelCheckpoint

from clearml import Task

parser = ArgumentParser()
parser.add_argument('--output-uri', type=str, required=False)
args =...
```
GrievingTurkey78 I have to admit I can't see the difference, can you help me out 🙂