It's for additional filtering only, right? My use case is to prevent users from accidentally querying the entire database.
I want to achieve something similar to what we would do in SQL:
select * from user_query limit 100;
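Something like this sketch is what I'm after — assuming the trains APIClient's tasks.get_all supports page/page_size (I haven't verified the exact signature):
from trains.backend_api.session.client import APIClient

client = APIClient()
# fetch at most 100 tasks instead of pulling the whole database
tasks = client.tasks.get_all(page=0, page_size=100)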
as I have wandb docker set up on the same VM for testing
really appreciate the help along the way... I have taken way too much of your time
I am abusing the "hyperparameters" section to store a "summary" dictionary of my key metrics, because it diffs more nicely across experiments.
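Roughly like this (a minimal sketch; the "summary" dict and its keys are just my convention):
from trains import Task

task = Task.init(project_name="my project", task_name="my task")
summary = {"best_val_acc": 0.0, "best_epoch": 0}
# connect() registers the dict under hyperparameters, so the web UI
# can diff these values across experiments
task.connect(summary)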
currently I do it in a hacky way: I create a trains.backend_api Session and check if 'demoapp' is in the web server URL.
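The hack is essentially this (assuming Session.get_app_server_host() is the right accessor — that part I'm less sure about):
from trains.backend_api import Session

# hacky check: the free demo server has 'demoapp' in its web URL
is_demo = 'demoapp' in Session.get_app_server_host()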
that can't be done easily, I have no control over that
AgitatedDove14 Git is fine, I just create a local repository for this. The code is two lines:
from trains import Task
task = Task.init(project_name="my project", task_name="my task3")
Great discussion, I agree with you both. For me, we are not using clearml-data, so I am a bit curious how a "published experiment" locks everything (including inputs? I assume someone can still just go into the S3 bucket and delete the file without ClearML noticing).
From my experience, absolute reproducibility is code + data + parameters + execution sequence. For example, a random seed or some parallelism can cause different results and can be tricky to deal with sometimes. We did bu...
SuccessfulKoala55 Where can I find the related documentation? I wasn't aware that I could configure this; I would like to create users myself.
oh, this is a bit different from my expectation. I thought I could use artifacts for dataset or model version control.
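What I had in mind was something along these lines (a sketch; 'dataset.csv' and the artifact name are placeholders):
from trains import Task

task = Task.init(project_name="my project", task_name="upload data")
# what I expected: treat the artifact as a versioned dataset
task.upload_artifact('dataset', artifact_object='dataset.csv')

# ...and later pull it back from another run
prev = Task.get_task(task_id=task.id)
local_copy = prev.artifacts['dataset'].get_local_copy()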
seems like not all settings are stored? for example if I add a custom column in hyperparameters and do a refresh
hmmmm, maybe I missed some UI element, but I can't locate any ID
but somewhere along the way, the request actually removes the header
Sorry, let me get back to you tomorrow. Maybe I did something wrong; now the entire UI crashes
AgitatedDove14
The core of Kedro is the pipeline (multiple nodes), where you can stitch different pipelines together. For the data part, they use something called the DataCatalog, which is a YAML file defining how your files are saved/loaded and where they live. Kedro also resolves the DAG of your pipeline, so you don't actually define the order of execution (it's determined by the input/output dependencies). The default is a SequentialRunner; optionally, you can use a ParallelRunner where...
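For a flavour of the DAG resolution, a minimal sketch with Kedro's Pipeline/node API (the function and dataset names are made up):
from kedro.pipeline import Pipeline, node

def clean(raw_data):
    return raw_data    # placeholder preprocessing step

def train(clean_data):
    return "model"     # placeholder training step

# Declaration order doesn't matter: Kedro infers that clean() must
# run before train() because train's input is clean's output.
pipeline = Pipeline([
    node(train, inputs="clean_data", outputs="model"),
    node(clean, inputs="raw_data", outputs="clean_data"),
])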
as a workaround, I wrote a function to recursively cast my config dictionary values into strings where needed.
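Something like this (a rough sketch of the idea):
def stringify_config(cfg):
    # recursively walk the config; leave primitives alone,
    # cast anything else (Path, Enum, ...) to str
    if isinstance(cfg, dict):
        return {k: stringify_config(v) for k, v in cfg.items()}
    if isinstance(cfg, list):
        return [stringify_config(v) for v in cfg]
    if isinstance(cfg, (str, int, float, bool)) or cfg is None:
        return cfg
    return str(cfg)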