Hi WickedStarfish97
As a result, I don't want the Agent to parse what imports are being used / install dependencies whatsoever
Nothing to worry about here; even if the agent detects the python packages, they are installed on top of the preexisting packages inside the docker. That said, if you want to override it, you can also pass packages=[]
Funny enough I'm running into a new issue now.
Sorry, my bad, I should have known 🙂 yes, it probably should be packages=["clearml==1.1.6"]
BTW: do you have any imports inside the pipeline function itself? If you do not, then there is no need to pass "packages" at all, it will just add clearml
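For reference, roughly what that looks like (a sketch; the component function and pinned version are placeholders, not from this thread):

```
from clearml import PipelineDecorator

# passing "packages" explicitly skips the agent's import parsing for this component;
# packages=[] would mean relying only on the preexisting packages inside the docker
@PipelineDecorator.component(packages=["clearml==1.1.6"])
def step_one(n: int):
    # imports done inside the function are what the agent would otherwise parse
    import math
    return math.sqrt(n)
```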
JitteryCoyote63
Should be added before the `if __name__ == "__main__":` ?
Yes, it should.
From your code I understand it is not?
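Something like this (a sketch; project/task names are placeholders):

```
from clearml import Task

# Task.init at module level, before the __main__ guard
task = Task.init(project_name="examples", task_name="my experiment")

def main():
    ...

if __name__ == "__main__":
    main()
```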
What's the clearml version you are using?
JitteryCoyote63 no you should not (unless you already have the Task.init call in your code); clearml-data will add the Task.init call at the beginning of the code in the entry point.
This means you should be able to call Task.current_task() and get back the object.
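For example (a sketch, assuming Task.init was already called earlier in the process):

```
from clearml import Task

task = Task.current_task()  # returns the running Task object, or None
if task is not None:
    task.get_logger().report_text("got the current task object")
```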
What do you have under the "uncommitted changes" on the Task that was created?
UnevenDolphin73 clearml.config.get_remote_task_id() will return the Task ID, not the Task object. In order to get automagic to work, one h...
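To make the ID vs. object distinction concrete (a sketch, not from this thread):

```
from clearml import Task
from clearml.config import get_remote_task_id

task_id = get_remote_task_id()         # just the Task ID string
task = Task.get_task(task_id=task_id)  # fetch the Task object by ID
# note: this only gives you the object; it does not enable the automagic logging
```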
I remember there were some issues with it ...
I hope not 🙂 Anyhow, the only thing that matters is the auto_connect arguments (meaning if you want to disable some of the automagic, you should pass them when calling Task.init)
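For example (a sketch; only the auto_connect_* arguments are the point here):

```
from clearml import Task

task = Task.init(
    project_name="examples",
    task_name="selective automagic",
    auto_connect_frameworks=False,  # disable framework auto-logging
    auto_connect_arg_parser=False,  # disable argparse capture
)
```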
JitteryCoyote63 I think I found the bug in clearml-task: it adds the Task.init call at the end instead of before everything else.
Hi BitterStarfish58
What's the clearml version you are using ?
dataset upload both work fine
Artifacts / Datasets are uploaded correctly ?
Can you test if it works if you change "http://files.community.clear.ml" to "http://files.clear.ml"?
So are you saying the large file download is the issue? (i.e. network issues)
BitterStarfish58 I would suspect the upload was corrupted (I think this explains the discrepancy between the logged file size and the actual uploaded file size)
Hi BitterStarfish58
Where are you uploading it to?
Hmm, maybe we should add a test once the download is done, comparing the expected file size with the actual file size, and if they differ, re-download?
BitterStarfish58 could you open a GitHub issue on it? I really want to make sure we support it (and I think it should not be very difficult)
(currently I think the implementation expects that if the download completed, it was successful)
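A hypothetical sketch of such a check (not current behavior, as noted above; names are made up):

```
import os

def verify_download(local_path: str, expected_size: int) -> bool:
    # compare the expected file size with what actually landed on disk
    if os.path.getsize(local_path) != expected_size:
        os.remove(local_path)  # drop the partial/corrupted file
        return False           # caller should trigger a re-download
    return True
```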
It might be that the file upload was broken?
Hmm BitterStarfish58 what's the error you are getting ?
Any chance you are over the free tier quota ?
I'm not sure the files-server supports "continue" from last position...
StaleButterfly40 just making sure I understand, are we trying to solve the "import offline zip file/folder" issue, where we create multiple Tasks (i.e. Task per import)? Or are you suggesting the Actual task (the one running in offline mode) needs support for continue-previous execution ?
Hi StaleButterfly40
but if I sync more than once I get a duplication of each line in the log
Hmm.. let me check if we can "force" overwriting (it might require you to have a more stateful code for the sync process)
sometimes we resume training
How would that work in offline mode? The offline process cannot sync with the backend... Are you saying you would like to get a new capability, "continue-offline-session" ?
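For context, the current import flow looks roughly like this (a sketch; the zip path is a placeholder), and each import creates a new Task:

```
from clearml import Task

# importing an offline session zip creates a new Task in the backend
task_id = Task.import_offline_session("/path/to/offline_session.zip")
```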
Is there a solution for that?
Hi DisturbedElk70
Well, assuming you mount/sync the "temp" folder of the offline experiment to a storage solution, and then have another process (on the other side) syncing these folders, it will work and you will get "real-time" updates 🙂
Offline folder: `get_cache_dir() / 'offline' / task_id`
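In code, that folder comes from offline mode roughly like so (a sketch; project/task names are placeholders):

```
from clearml import Task

Task.set_offline(True)  # everything is written locally instead of to the backend
task = Task.init(project_name="examples", task_name="offline run")
print(task.get_offline_mode_folder())  # -> get_cache_dir() / 'offline' / task_id
```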
The azure section:
https://github.com/allegroai/trains/blob/master/docs/trains.conf#L117
NICE! MoodyCentipede68 this is awesome 🙂
WickedElephant66 is this issue the same as this one?
https://clearml.slack.com/archives/CTK20V944/p1656537337804619?thread_ts=1656446563.854059&cid=CTK20V944
MoodyCentipede68 could it be that the model is on one account (workspace) and your credentials (the ones provided to the docker compose) are from another workspace?
The error itself points to the triton helper failing to get the model ID from the backend. The models are uploaded to a specific workspace, and it looks like a mismatch (i.e. the model ID is nowhere to be found), wdyt?
MoodyCentipede68 from your log
clearml-serving-triton | E0620 03:08:27.822945 41 model_repository_manager.cc:1234] failed to load 'test_model_lstm2' version 1: Invalid argument: unexpected inference output 'dense', allowed outputs are: time_distributed
This seems to be the main issue: triton failing to load the model.
Does that make sense to you? how did you configure the endpoint model?
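For reference, registering the endpoint with the clearml-serving CLI looks roughly like this (a sketch; the service ID, input name, sizes, and types are placeholders): the point is that --output-name has to match what the model actually exposes, i.e. "time_distributed" rather than "dense":

```
clearml-serving --id <service-id> model add \
  --engine triton \
  --endpoint "test_model_lstm2" \
  --input-name "input" --input-size 1 -1 --input-type float32 \
  --output-name "time_distributed" --output-size -1 --output-type float32
```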
can I mount the s3 bucket as file system on place where
you need to mount it where the fileserver is storing its files, correct (notice: not the DBs, just the fileserver)
Check the links that are generated in the ui when you upload an artifact or model
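A hypothetical docker-compose override along these lines (the host path is an assumption; the stock compose file maps a local folder to /mnt/fileserver inside the fileserver container):

```
services:
  fileserver:
    volumes:
      # replace the local data folder with an s3fs-mounted bucket on the host
      - /mnt/my-s3-bucket:/mnt/fileserver
```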
assuming you have hparams.my_param
my suggestion is:
```
import hydra
from omegaconf import DictConfig
from clearml import Task

@hydra.main(config_path="solver/config", config_name="config")
def train(hparams: DictConfig):
    task = Task.init(hparams.task_name, hparams.tag)
    overrides = {'my_param': hparams.value}
    task.connect(overrides, name='overrides')
    # when running remotely this will print the value we put in "overrides/my_param"
    print(overrides['my_param'])
```