I assume we are talking about the IP I would find here, right?
https://www.whatismyip.com/
the link to the manual model registry doesn't work
When you are inside a project, the search bar searches for experiments
so if you want to search inside a specific project, go to that project and use the search bar; if you want to search across all projects, go to the project called "All Experiments" and search there
Thanks Alon
but using that code - how would I edit fields?
anyway, my ultimate goal is to create templates for other tasks... Is that possible in any other way through the CLI?
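to make it concrete, something like this is what I have in mind - just a rough sketch, assuming Task.clone / set_parameters are the right calls, and the project/task names are placeholders:
```
from clearml import Task

# pull an existing task to use as the template (placeholder names)
template = Task.get_task(project_name='my_project', task_name='base_task')

# clone it and edit fields on the clone
new_task = Task.clone(source_task=template, name='templated_task')
new_task.set_parameters({'General/learning_rate': 0.01})
new_task.set_comment('created from a template via the SDK')
```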
CostlyOstrich36 so why 1000:1000? My user and group are not that, and neither are the other files I have under /opt/clearml
From the examples, I figured this would appear as a scatter plot with X and Y axes and only one point... Does it avoid that?
I manually deleted the allegroai/trains:latest image, but that didn't help either
why not use my user and group?
I double-checked the credentials in the configuration, and they have full EC2 access
```
import clearml

# define pipeline
pipe = clearml.PipelineController(
    name=TASK_NAME,
    project=PROJECT_NAME,
    version='0.0.1',
    add_pipeline_tags=False,
)
pipe.set_default_execution_queue('default')

# Adding steps
pipe.add_step(name=f'{start_date_train}_{end_date_train}_choose_best',
              base_task_project=CHOOSE_PROJECT_NAME,
              base_task_name=CHOOSE_TASK_NAME,
              parameter_override=params_override,
              ...
```
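and then, once that add_step call is filled in, I launch it like this (following the official pipeline example; not sure whether start() or start_locally() is the right one here):
```
# pipe.start_locally() can be used instead for local debugging
# start the pipeline (the controller itself runs in the background)
pipe.start()
```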
I think you are talking about separate problems - the "WARNING DIFF IS TOO LARGE" is only a UI issue, meaning you can't see the diff in the UI - correct me if I'm wrong about this
Maria seems to be saying that the execution FAILS when she has uncommitted changes, which is not the expected behavior - am I right, Maria?
(I'm working with Maria)
essentially, what Maria is saying is that when she has a script with uncommitted changes and executes it remotely, the script that actually runs on the remote machine is without the uncommitted changes
e.g.:
Her git status is clean, she makes some changes to script.py and executes it remotely. What gets executed remotely is the original script.py and not the modified version she has locally
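to put it in code, the flow is roughly this (a sketch only - project/task/queue names are placeholders):
```
# script.py -- placeholder repro of the flow
from clearml import Task

task = Task.init(project_name='examples', task_name='uncommitted-diff-repro')

print('this line exists only as an uncommitted local change')

# enqueue this script to run on an agent; what we observe is that the run on
# the agent executes the committed script.py, without the uncommitted edit above
task.execute_remotely(queue_name='default')
```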
So regarding 1, I'm not really sure what the difference is
When running in docker mode, what is different from the regular mode? Nowhere in the instructions is nvidia-docker listed as a prerequisite, so how exactly will tasks get executed on GPU?
I feel I don't understand enough of the mechanism to (1) understand the difference between docker mode and not, and (2) know what the use case for each is
I set it to true and restarted my agent
Anyway I checked the base task, and this is what it has in installed packages (seems like it doesn't list all the real packages in the environment)
What do you mean by submodules?
She did not push, I told her she does not have to push before executing as trains figures out the diffs.
When she pushes - it works
and then how would I register the final artifact to the pipeline? AgitatedDove14 ⬆
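what I mean by registering the final artifact - inside the last step's script I'd expect to do something like this (the artifact name and object are placeholders), but I'm not sure how that gets surfaced on the pipeline itself:
```
from clearml import Task

# inside the final step's script: attach the result to that step's task
Task.current_task().upload_artifact(
    name='final_result',               # placeholder artifact name
    artifact_object='final_model.pkl'  # placeholder: a file path or a python object
)
```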
pgrep -af trains shows that there is nothing running with that name
btw, my site packages is false - should it be true? You pasted that, but I'm not sure what it should be; in the paste it is false, but you are asking about true
it seems that only the packages that are used in the script are getting installed
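for reference, this is roughly how I'd expect to force the full environment to be captured instead of only what the script analysis finds - just a sketch, assuming force_requirements_env_freeze is the relevant call here:
```
from clearml import Task

# pip-freeze the whole local environment instead of only the packages detected
# from the script's imports; has to be called before Task.init
Task.force_requirements_env_freeze(force=True)

task = Task.init(project_name='my_project', task_name='my_task')  # placeholder names
```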
