
AgitatedDove14 Thanks! This seems to be a more elegant solution.
I mean, once I add an environment variable, can trains.conf override it? I am guessing environment variables take higher precedence.
The thing I want to achieve is:
Block users from accessing the public server. If they configure trains.conf, then it's fine.
import os
os.environ["TRAINS_API_HOST"] = "YOUR API HOST"
os.environ["TRAINS_WEB_HOST"] = "YOUR WEB HOST"
os.environ["TRAINS_FILES_HOST"] = "YOUR FILES HOST"
I need this as I want to write a wrapper for internal use.
I need to block the default behavior that links to the public server automatically when the user has no configuration file.
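A minimal sketch of such a wrapper guard, assuming the hosts shown are placeholders for an internal deployment (the file path and URLs are illustrative, not real values):

```python
import os

# Hypothetical wrapper: if the user has no trains.conf, point the SDK at an
# internal server via environment variables *before* importing trains, so it
# never falls back to the public demo server. Using setdefault() means any
# environment variables the user already set still win.
CONF_FILE = os.path.expanduser("~/trains.conf")

if not os.path.isfile(CONF_FILE):
    os.environ.setdefault("TRAINS_API_HOST", "http://internal-api:8008")
    os.environ.setdefault("TRAINS_WEB_HOST", "http://internal-web:8080")
    os.environ.setdefault("TRAINS_FILES_HOST", "http://internal-files:8081")

# import trains  # import only after the environment is prepared
```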
I don't think it is running in a subprocess; stdout/stderr is output to the terminal. If I use print() it actually gets logged, but the logger output is missing.
SuccessfulKoala55 Where can I find the related documentation? I wasn't aware I could configure this; I would like to create users myself.
Digest: sha256:407714e5459e82157f7c64e95bf2d6ececa751cca983fdc94cb797d9adccbb2f
Status: Downloaded newer image for nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04
docker: Error response from daemon: OCI runtime create failed: container_linux.go:370: starting container process caused: process_linux.go:459: container init caused: Running hook #0:: error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: initialization error: driver error: failed to process request: unknown.
I am abusing the "hyperparameters" to have a "summary" dictionary to store my key metrics, due to the nicer behaviour of diff-ing across experiments.
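A sketch of that "summary dict as hyperparameters" trick (the project/task names are placeholders; the import is guarded so the sketch runs even without the library installed):

```python
# Store key metrics in a flat dict and connect it as hyperparameters, so the
# experiment-comparison view diffs them key by key across experiments.
summary = {"val_accuracy": 0.91, "val_loss": 0.27, "epochs": 20}

try:
    from trains import Task  # or: from clearml import Task

    task = Task.init(project_name="demo", task_name="summary-trick")
    task.connect(summary)  # appears under the task's hyperparameters
except ImportError:
    pass  # library not installed; summary above is what would be connected
```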
I am not sure what those example/1/2/3 are; I only have one chart.
This will cause a redundant Trains session, I guess.
repository detection is fine
Could you give me some pointers to where ClearML auto-captures logs/stdout? I suspect that since Kedro has its own logging configuration, ClearML somehow fails to catch it.
I don't want to mess with the standard setup.
Is it possible to override it if trains.conf does exist?
Great, as long as it continues to work with S3 (MinIO), it's good for me. I am already using MinIO with Trains (an older version).
I was planning to do an upgrade soon.
It's for additional filtering only, right? My use case is to prevent users from accidentally querying the entire database.
I want to achieve something similar to what we would do in SQL:
select * from user_query limit 100;
Currently I do it in a hacky way: I call trains.backend_api Session and check whether 'demoapp' is in the web server URL.
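A self-contained version of that hacky guard (the function names are illustrative; in the real wrapper the web host would come from the trains.backend_api Session / loaded configuration rather than being passed in):

```python
def is_public_demo_server(web_host: str) -> bool:
    """Return True if the configured web host looks like the public demo server."""
    return "demoapp" in web_host


def check_server(web_host: str) -> None:
    """Refuse to proceed when the wrapper would talk to the public demo server."""
    if is_public_demo_server(web_host):
        raise RuntimeError(
            "Refusing to run against the public demo server; "
            "please configure trains.conf for the internal server."
        )


check_server("http://internal-web:8080")  # internal host: passes silently
```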
So I found that if I change this plot, the change seems to apply across all the experiments too? How is this setting cached? Could it be shared among users, or is it per user, or is it actually cached by the browser only?
Is it possible to set the sampling frequency?
i.e. some files in a shared drive, then someone silently updates the files, all the experiments become invalid, and no one knows when that happened.
I also get this in the logs:
TRAINS Monitor: GPU monitoring failed getting GPU reading, switching off GPU monitoring
Yup, I am only really familiar with the experiment tracking part, so I don't know whether I can have a good understanding before I have reasonable knowledge of the entire ClearML system.
VivaciousPenguin66 How are you using the dataset tool? Love to hear more about that.
In this case, I would rather use task.connect(); a line-by-line diff is probably not useful for my data config. As shown in the example, shifting one line would result in all remaining lines being different.
But this also means I have to load all the configuration into a dictionary first.
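The load-into-a-dictionary step could look like this; the flatten helper and dotted-key convention are illustrative assumptions, not necessarily ClearML's exact behaviour, and the inline JSON stands in for a real config file:

```python
import json

# Load the data config into a plain dictionary so it can be passed to
# task.connect() and diffed key by key instead of line by line.
config_text = '{"train": {"batch_size": 32, "lr": 0.001}, "seed": 42}'
config = json.loads(config_text)


def flatten(d, prefix=""):
    """Flatten nested dicts into dotted keys, so each value diffs independently."""
    out = {}
    for key, value in d.items():
        full_key = f"{prefix}{key}"
        if isinstance(value, dict):
            out.update(flatten(value, prefix=full_key + "."))
        else:
            out[full_key] = value
    return out


flat = flatten(config)
# task.connect(flat)  # with a real trains/clearml Task, registers the params
```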