Hi @<1590514584836378624:profile|AmiableSeaturtle81> ! We have someone investigating the UI issue (I mainly work on the sdk). They will get back to you once they find something...
What if you add images to the dataset? Can you see them being previewed? @<1523701168822292480:profile|ExuberantBat52>
what about this script? (replace with your creds, comment out the creds in clearml.conf for now)
from clearml import Task
from clearml.storage.helper import StorageHelper

task = Task.init("test", "test")
task.setup_aws_upload(
    bucket="bucket1",
    host="localhost:9000",
    key="",
    secret="",
    profile=None,
    secure=True,
)
helper = StorageHelper.get("...")
Hi @<1670964662520254464:profile|LonelyFly70> ! FrameGroups are part of the enterprise sdk, thus they can only be imported from allegroai
Regarding number 2, that is indeed a bug and we will try to fix it as soon as possible.
Btw, to specify a custom package, add the path to that package to your requirements.txt (the path can also be a github link, for example).
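A hypothetical requirements.txt showing both forms (package and repo names are made up):
clearml
./packages/my_custom_package
git+https://github.com/example/my_custom_package.git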
[package_manager.force_repo_requirements_txt=true] Skipping requirements, using repository "requirements.txt"
Try adding clearml to the requirements
Hi PanickyMoth78 ! This will likely not make it into 1.9.0 (this will be the next version we release, most likely before Christmas). We will try to get the fix out in 1.9.1
Hi @<1546303293918023680:profile|MiniatureRobin9> ! When it comes to pipelines from functions/other tasks, this is not really supported. You could however cut the execution short when your step is being run by evaluating the return values from other steps.
Note that you should however be able to skip steps if you are using pipeline from decorators
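A minimal sketch of what I mean with decorators (step names and return values are made up); branching on a previous step's return value simply skips the later step:
from clearml import PipelineDecorator

@PipelineDecorator.component(return_values=["should_continue"])
def step_one():
    # pretend some check failed
    return False

@PipelineDecorator.component(return_values=["result"])
def step_two():
    return 42

@PipelineDecorator.pipeline(name="conditional pipeline", project="examples", version="0.0.1")
def pipeline_logic():
    should_continue = step_one()
    if should_continue:
        # step_two is never scheduled when the condition is False
        print(step_two())

if __name__ == "__main__":
    PipelineDecorator.run_locally()
    pipeline_logic()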
Hi PanickyMoth78 ! I ran the script and yes, it does take a lot more memory than it should. There is likely a memory leak somewhere in our code. We will keep you updated
Regarding 1, are you trying to delete the project from the UI? (I can't see an attached image in your message)
Hi @<1523711002288328704:profile|YummyLion54> ! By default, we don't upload the models to our file server, so in the remote run we will try to pull the file from your local machine, which will fail most of the time. Specify the upload_uri to the api.files_server entry in your clearml.conf if you want to upload it to the clearml server, or to any s3/gs/azure link if you prefer a cloud provider.
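A quick sketch of one way to do this (assuming the output_uri argument of Task.init is what you want; project/task names are placeholders):
from clearml import Task

task = Task.init(
    project_name="examples",
    task_name="remote model upload",
    output_uri=True,  # True uploads to api.files_server; an explicit "s3://my-bucket/models" also works
)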
Hi @<1702492411105644544:profile|YummyGrasshopper29> ! Parameters can belong to different sections. You should prepend the section name to some_parameter. You likely want ${step2.parameters.kwargs/some_parameter}
Also, do you need to close the task? It will close automatically when the program exits
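To illustrate the parameter reference, a hedged sketch (step and base task names are made up, assuming a PipelineController with steps from tasks):
from clearml import PipelineController

pipe = PipelineController(name="example pipeline", project="examples", version="0.0.1")
pipe.add_step(name="step2", base_task_project="examples", base_task_name="step2 base task")
pipe.add_step(
    name="step3",
    parents=["step2"],
    base_task_project="examples",
    base_task_name="step3 base task",
    parameter_override={
        # the full section path (kwargs/...) is required
        "kwargs/some_parameter": "${step2.parameters.kwargs/some_parameter}",
    },
)
pipe.start_locally(run_pipeline_steps_locally=True)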
ShinyPuppy47 Try this: use task = Task.init(...) (not Task.create), then call task.set_base_docker. For example:
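A minimal sketch (the docker image is just an example choice):
from clearml import Task

task = Task.init(project_name="examples", task_name="set base docker")
task.set_base_docker("nvidia/cuda:11.8.0-runtime-ubuntu22.04")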
@<1590514584836378624:profile|AmiableSeaturtle81> if you wish for your debug samples to be uploaded to s3 you have 2 options: you either call the logger's set_default_upload_destination, or you can change the api.files_server entry to your s3 bucket in clearml.conf. This way you wouldn't need to call set_default_upload_destination every time you run a new script.
Also, in clearml.conf, you can change `sdk.deve...
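A sketch of the first option (the bucket path is a placeholder):
from clearml import Task

task = Task.init(project_name="examples", task_name="debug samples to s3")
task.get_logger().set_default_upload_destination("s3://my-bucket/debug-samples")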
This only affects single files; if you wish to add directories (with wildcards as well), you should be able to
@<1545216070686609408:profile|EnthusiasticCow4> a PR would be greatly appreciated. If the problem lies in _query_tasks, then it should be addressed there.
Hi @<1545216070686609408:profile|EnthusiasticCow4> ! Note that the Datasets section is created only if you get the dataset with an alias. Are you sure that number_of_datasets_on_remote != 0?
If so, can you provide a short snippet that would help us reproduce? The code you posted looks fine to me, not sure what the problem could be.
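For reference, this is the kind of call I mean (dataset names are placeholders); passing alias is what makes the Datasets section appear on the task:
from clearml import Dataset

ds = Dataset.get(
    dataset_project="examples",
    dataset_name="my dataset",
    alias="my_dataset_alias",  # without an alias the section is not created
)
local_path = ds.get_local_copy()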
Hi @<1679661969365274624:profile|UnevenSquirrel80> ! Pipeline projects are hidden. You can try to pass task_filter={"search_hidden": True, "_allow_extra_fields_": True} to the query_tasks function to fetch the tasks from hidden projects.
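Something along these lines (the project name is a placeholder, assuming Task.query_tasks is the function you're using):
from clearml import Task

task_ids = Task.query_tasks(
    project_name="my pipeline project",
    task_filter={"search_hidden": True, "_allow_extra_fields_": True},
)
print(task_ids)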
Hi @<1643060801088524288:profile|HarebrainedOstrich43> ! Could you please share some code that could help us reproduce the issue? I tried cloning, changing parameters and running a decorated pipeline, but the whole process worked as expected for me.
Hi @<1678212417663799296:profile|JitteryOwl13> ! Are you trying to return the logger from a step?
Hi @<1654294820488744960:profile|DrabAlligator92> ! The way chunk size works is: the upload will try to obtain zips that are smaller than the chunk size, so it will continuously add files to the same zip until the chunk size is exceeded. If the chunk size is exceeded, a new chunk (zip) is created. The initial file in this new chunk is the one that caused the previous size to be exceeded (regardless of the fact that the file itself might exceed the size).
So in your case: an empty chunk is creat...
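For context, this is where the chunk size comes in (names are placeholders; chunk_size is in MB):
from clearml import Dataset

ds = Dataset.create(dataset_name="chunked dataset", dataset_project="examples")
ds.add_files("data/")
ds.upload(chunk_size=512)  # aim for ~512 MB zips; a single larger file still goes into one chunk
ds.finalize()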
@<1545216070686609408:profile|EnthusiasticCow4>
This:
parent = self.clearml_dataset = Dataset.get(
    dataset_name="[LTV] Dataset",
    dataset_project="[LTV] Lifetime Value Model",
)
# generate the local dataset
dataset = Dataset.create(
    dataset_name="[LTV] Dataset",
    parent_datasets=[parent],
    dataset_project="[LTV] Lifetime Value Model",
)
should l...
Yes, passing custom objects between steps should be possible. The only condition is for the objects to be pickleable. What are you returning exactly from init_experiment?
in the meantime, we should have fixed this. I will ping you when 1.9.1 is out to try it out!
ShinyPuppy47 does add_task_init_call help your case? https://clear.ml/docs/latest/docs/references/sdk/task/#taskcreate
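Roughly like this (repo and script are placeholders):
from clearml import Task

task = Task.create(
    project_name="examples",
    task_name="created task",
    repo="https://github.com/example/repo.git",
    script="train.py",
    add_task_init_call=True,  # adds a Task.init() call to the task's entry script
)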
There are only 2 chunks because we don't split large files into multiple chunks