Hi @<1523701168822292480:profile|ExuberantBat52>
What do you mean by:
- dataset_1 -> script_2 -> dataset_2
a dataset creates a script?
It seems stuck somewhere in the python path... Can you check at runtime what os.environ['PYTHONPATH'] is?
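A quick way to check at runtime (plain stdlib, nothing ClearML-specific):

```python
import os
import sys

# Show what the process actually sees; an empty or wrong PYTHONPATH is the
# usual reason a package resolves locally but not inside the agent's run.
print("PYTHONPATH =", os.environ.get("PYTHONPATH", "<not set>"))
for entry in sys.path:
    print("  sys.path:", entry)
```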
As a hack you can try DEFAULT_VERSION
(it's just a flag and should basically do Store)
EDIT: sorry, that won't work 🙂
So if everything works you should see "my_package" package in the "installed packages"
the assumption is that if you do:
pip install "my_package"
It will set "pandas" as one of its dependencies, and pip will automatically pull pandas as well.
That way we do not list the entire venv you are running on, just the packages/versions you are using, and we let pip sort the dependencies when installing with the agent
Make sense ?
Or you want to generate it from a previously executed run?
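To illustrate the dependency point above: assuming my_package is packaged with a standard pyproject.toml (the name and version here are made up), declaring pandas there is what lets pip pull it in automatically:

```toml
[project]
name = "my_package"
version = "0.1.0"
dependencies = ["pandas"]
```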
How can I ensure that additional tasks aren't created for a notebook unless I really want to?
TrickySheep9 are you saying two Tasks are created in the same notebook without you closing one of them ?
(Also, how is the git diff warning there with the latest clearml, I think there was some fix related to that)
Hi, I changed it to 1.13.0, but it still threw the same error.
This is odd, just so we can make the agent better, any chance you can send the Task log ?
However, the pipeline experiment is not visible in the project experiment list.
I mean press on the "full details" in the pipeline page
Oh, and good job starting your reference with an author that goes early in the alphabetical ordering, lol:
LOL, worst case it would have been C ... 🙂
Hi @<1699955693882183680:profile|UpsetSeaturtle37>
What's your clearml-session version? where is the remote machine ?
And yes, if the network connection is bad we have seen this behavior; you can try with --keepalive=true
Notice that these are SSH networking issues, not something to do with the clearml-session layer; the --keepalive is trying to automatically detect these disconnects and make sure it reconnects for you.
models been trained stored ...
mongodb will store url links, the upload itself is controlled via the "output_uri" argument to the Task
If None is provided, Trains logs the locally stored model (i.e. a link to where you stored your model); if you provide one, Trains will automatically upload the model (into a new subfolder) and store the link to that subfolder.
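A minimal sketch of that (the helper name, project/task names, and bucket are all made up; Task.init and output_uri are the real API):

```python
def init_training_task(output_uri, project_name="examples", task_name="training"):
    """Create a Task; models saved during the run are uploaded to output_uri.

    If output_uri is None, only the local model path is logged; otherwise
    ClearML uploads the model file and stores the remote link.
    """
    from clearml import Task  # lazy import so the sketch reads standalone
    return Task.init(
        project_name=project_name,
        task_name=task_name,
        output_uri=output_uri,  # e.g. "s3://my-bucket/models"
    )
```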
- how can I enable tensorboard and have the graphs stored in trains?
Basically if you call Task.init all your...
I use Yaml config for data and model. each of them would be a nested yaml (could be more than 2 layers), so it won't be a flexible solution and I need to manually flatten the dictionary
Yes, you are correct, the recommended option would be to store it with task.connect_configuration
Its goal is to store these types of configuration files/objects.
You can also store the yaml file itself directly; just pass a Path object instead of a dict/string
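For example, a hypothetical helper (the name is mine; connect_configuration is the real API) that attaches the YAML file as-is:

```python
from pathlib import Path

def attach_yaml_config(task, config_path, name="model_config"):
    # Passing a Path stores the raw YAML file as-is, so nested (2+ level)
    # structures survive without any manual flattening.
    return task.connect_configuration(Path(config_path), name=name)
```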
Yep, this will run the pipeline controller itself on the clearml-server (or any other machine running clearml-agent services mode)
you can also check
https://clear.ml/docs/latest/docs/references/sdk/task#execute_remotely
Which will stop a local execution of a Task and re-launch it on a remote machine
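A sketch of that flow (the helper name and queue name are assumptions; execute_remotely is the real call):

```python
def launch_on_agent(task, queue_name="default"):
    # Stops the local process and enqueues this Task on the given queue;
    # a clearml-agent listening there re-launches it on its machine.
    task.execute_remotely(queue_name=queue_name, exit_process=True)
```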
Hi ReassuredTiger98
So let's assume we call:
logger.report_image(title='training', series='sample_1', iteration=1, ...)
And we report every iteration (keeping the same title.series names). Then in the UI we could iterate back on the last 100 images (back in time) for this title / series.
We could also report a second image with:
logger.report_image(title='training', series='sample_2', iteration=1, ...)
which means that for each one we will have 100 past images to review ( i.e. same ti...
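Putting both together, a hedged sketch (report_samples is a made-up helper; Logger.report_image is the real call):

```python
def report_samples(logger, images, iteration):
    # Keeping the same title/series every iteration means the UI keeps
    # the last N iterations per series, so you can scroll back in time.
    for idx, img in enumerate(images, start=1):
        logger.report_image(
            title="training",
            series=f"sample_{idx}",
            iteration=iteration,
            image=img,
        )
```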
But do consider a sort of a designer's press kit on your page haha
That is a great idea!
Also you can use:
https://2928env351k1ylhds3wjks41-wpengine.netdna-ssl.com/wp-content/uploads/2019/11/Clear_ml_white_logo.svg
In your trains.conf, change the value:
files_server: "s3://ip:port/bucket"
Could you please add it, I really do not want to miss it 🙂
WorriedParrot51 trains should support subparsers etc.
Even if your code calls the parsing before trains.
The only thing you need is to import the package when argparser is called (not to initialize it, that can happen later)
It should (hopefully) solve the issue.
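A sketch of that ordering, assuming stock argparse with subparsers (project/task names are made up; the point is only that the package is imported before parse_args() runs):

```python
import argparse

def build_parser():
    # Subparsers are supported; the only requirement is that the trains
    # package is already imported by the time parse_args() runs.
    parser = argparse.ArgumentParser(prog="train")
    sub = parser.add_subparsers(dest="command")
    fit = sub.add_parser("fit")
    fit.add_argument("--epochs", type=int, default=10)
    return parser

def main(argv=None):
    import trains  # noqa: F401 -- imported before parsing; Task.init can come later
    args = build_parser().parse_args(argv)
    from trains import Task
    task = Task.init(project_name="examples", task_name=args.command or "train")
    return args
```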
Would love to just cap it at a fixed amount for a month for API calls.
Try the timeout configuration, I think this should solve all your issues, and it will be fairly easy to set for everyone
Hi @<1657918706052763648:profile|SillyRobin38>
I have included some print statements, specifically within the containers where the inferencing occurs.
you should see those under the Task of the inference instance.
You can also do:
import clearml
...
def preprocess(...):
    clearml.Logger.current_logger().report_text(...)
    clearml.Logger.current_logger().report_scalar(...)
it might be that fastapi is capturing the prints...
[None](https://github.com/tiangolo/uvicor...
I understand, but then the toml file needs to be parsed to ensure poetry is used. It's just a tool entry in the pyproject.toml.
Probably too much for the agent... and specifically it seems poetry actually managed to parse it?! what are you getting in the log?
Which one of those? the 3d ball dots or the 3d face mesh?
2023-02-15 12:49:22,813 - clearml - WARNING - Could not retrieve remote configuration named 'SSH'
This is fine, it means it uses the default identity keys
The thing is - when I try to connect with normal SSH there are no issues
Now I'm lost, so when exactly do you see the issue ?
Only the dictionary keys are returned as the raw nested dictionary, but the values remain casted.
Using which function? task.get_parameters_as_dict does not cast the values (the values themselves are stored as strings on the backend); only task.connect will cast the values automatically
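Since get_parameters_as_dict hands everything back as strings, you end up casting yourself; a tiny hypothetical helper (not part of the SDK) to illustrate what task.connect otherwise does for you:

```python
def restore_types(raw_params, template):
    # Hypothetical helper mimicking task.connect's casting: convert the
    # backend's string values back to the types found in the template dict.
    return {
        key: type(template[key])(value) if key in template else value
        for key, value in raw_params.items()
    }
```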
clearml doesn't change the matplotlib backend under the hood, right? Just making sure
if the agent is running it, of course it does 🙂 otherwise where is the automagic? It would break the moment you run it on a remote machine