Here is the network tab when trying to load the projects page
Oh, I've opened 8080, 8081 in my security group and NOT 8008.
That was it! I had not added 8008.
try:
from trains import Task

Task.init('examples', 'training', continue_last_task='<previous_task_id_here>')
Just tried this and it works. Thanks! Really appreciate the great response!
Thanks, I appreciate the answer!
So not the latter. I can always log metrics during training and visualize them.
I'm thinking of a few plots in my current in-house tooling which are slightly different than the standard charts we look at. For example a custom parallel coordinate chart that can use aggregations, categorical variables, etc.
To move over to trains, I'd like to have all these custom plots in my dashboard. I haven't tried to do them in trains yet (I'm just starting).
So my quest...
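As a hedged sketch of how a custom parallel-coordinates chart might be pushed into the trains dashboard: trains' logger can report a Plotly figure, and a figure can be built as a plain Plotly-JSON dict, so no JavaScript is needed on the user's side. The metric names and values below are made up for illustration, and the `report_plotly` call is only a sketch of the reporting step (it assumes a reachable trains server, so it is wrapped in a function rather than executed).

```python
# Build a parallel-coordinates chart as a Plotly-JSON dict.
# The hyper-parameter names and values are hypothetical examples.
figure = {
    "data": [{
        "type": "parcoords",
        "dimensions": [
            {"label": "learning_rate", "values": [0.1, 0.01, 0.001]},
            {"label": "batch_size",    "values": [32, 64, 128]},
            {"label": "val_accuracy",  "values": [0.81, 0.86, 0.84]},
        ],
    }],
    "layout": {"title": "HP sweep (custom parallel coordinates)"},
}


def report_figure(fig):
    """Sketch of the reporting step -- call this only when a trains
    server is reachable; otherwise Task.init will try to connect."""
    from trains import Task
    task = Task.init(project_name="examples", task_name="custom plots")
    task.get_logger().report_plotly(
        title="parallel coordinates", series="hp sweep",
        iteration=0, figure=fig)
```

Because the figure is a dict in Plotly's JSON schema, categorical dimensions and pre-aggregated values can be computed in Python first and only the final chart is shipped to the dashboard.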
AMI : allegroai-trains-server-0.15.1-366-248-c5c210e4-5094-4eb9-a613-a32c0378de31-ami-0bc20623da659a8cd.4
Hmm, I'm using 0.15.1 which I guess is an old version. I just created a new one with 0.16.1 and will test it out
Mostly they are a set of user defined hyper-parameters. I've been reading about hyper-param optimization since posting this. It seems like I would have to use hyper-param opt to achieve that.
Got it. Thanks for the help and explanation!
How did you figure out that there was no communication between the server and the web-app?
Hi AgitatedDove14
Thanks for the quick response.
I will try that out. Great! This is a great tool and I will start contributing. Why use both and not just one of them? What does one offer that the other doesn't?
Also, I would like to add some other plots to the dashboard. I see the plotting is done using Plotly Javascript. I'm a Python developer and don't know much Javascript. Do you have any suggestions on how to go about that or I should just get going with Javascript?
By 'the same name' you mean the names of the metrics and not the experiment name, right?
And yes, I'm logging different metrics
I understand regarding not opening up the ports to the entire world. I'm just testing the setup 🙂
Great, yes that makes sense.
Here is when I try to load the profile page
I just tried 0.16.1 and am seeing the same behavior.
Here is the AMI id: allegroai-trains-server-0.16.1-320-273-c5c210e4-5094-4eb9-a613-a32c0378de31-ami-06f5e9f4dfa499dca.4
I agree it would be better to have it fully configurable. But if every marginal feature adds complexity, we might have to think about how applicable it is to the general use case. I'm thinking of examples in my domain which might not be useful in other domains. Maybe if that becomes an issue, there could be a domain-specific feature base?!
I haven't fully compared all the things that I am currently doing with the in-house tool and what we can do with trains. I think I will have more concrete id...
how are you thinking of running those HP tests?
I'm not sure if I completely understand the question. Here is what I do presently. This may be achieved more efficiently in trains (which is why I'm trying to move to trains).
Example:
I have a set of 10 user defined HPs. I have a scheduler that runs them independently in parallel. Once the training is complete, I run inference on the test set for these experiments. The data for both training and inference is logged under the respective exp...
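The "scheduler that runs them independently in parallel" step above can be sketched in plain Python. Everything here is a hypothetical stand-in: `train_and_eval`, the HP sets, and the dummy metric are placeholders for the real training and test-set inference, not the in-house tooling itself.

```python
from concurrent.futures import ThreadPoolExecutor


def train_and_eval(hp):
    # Hypothetical stand-in for one training run followed by
    # inference on the test set; returns a dummy metric.
    return {"hp": hp, "test_score": 1.0 / (1.0 + hp["lr"])}


# A small grid of user-defined HP sets (6 here, standing in for the 10).
hp_sets = [{"lr": lr, "batch_size": bs}
           for lr in (0.1, 0.01)
           for bs in (32, 64, 128)]

# Run the experiments independently in parallel, like the scheduler does.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(train_and_eval, hp_sets))
```

In trains this fan-out would instead be handled by enqueuing one experiment per HP set and letting agents execute them, so the scheduler loop above disappears.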
Ok, cool. Thanks. This clears up things. I need to read more about the trains agent then. I have another question, I'll post it as a separate thread.
Yes, every run is logged as a new experiment (with its own set of HPs). Do notice that the execution itself is done by the trains-agent. Meaning the HPO process creates experiments with new sets of HPs and puts them into the execution queue, then trains-agent pulls them from the queue and starts executing them. You can have multiple trains-agent instances on as many machines as you like, with specific GPUs etc. Each one will pull a single experiment and execute it, once...
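The pull model described above can be illustrated with a toy, pure-Python simulation. To be clear, this is not the trains-agent internals; the queue, the "experiments" (plain HP dicts), and the worker threads are all stand-ins chosen to show the shape of the pattern: a producer enqueues experiments, and each agent pulls one at a time until the queue is empty.

```python
import queue
import threading

execution_queue = queue.Queue()
completed = []
lock = threading.Lock()

# The HPO process enqueues experiments (here just hypothetical HP dicts).
for task_id, lr in enumerate((0.1, 0.03, 0.01, 0.003)):
    execution_queue.put({"task_id": task_id, "lr": lr})


def agent(name):
    # Each "agent" pulls a single experiment and executes it, then
    # goes back for the next one until the queue is drained.
    while True:
        try:
            exp = execution_queue.get_nowait()
        except queue.Empty:
            return
        result = (exp["task_id"], name)  # stand-in for actually training
        with lock:
            completed.append(result)
        execution_queue.task_done()


workers = [threading.Thread(target=agent, args=(f"agent-{n}",))
           for n in range(2)]
for w in workers:
    w.start()
for w in workers:
    w.join()
```

Two workers drain four queued experiments here; adding more workers (or more machines, in the real system) only changes who pulls what, not the queue contract.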
For HPO (hyper-param opt), are all experiments which are part of the optimization process logged? I understand the HPO process takes a base experiment and runs subsequent experiments with the new HPs. Are these experiments logged too (with the train-valid curves, etc)?