Hmm yes, @<1570220858075516928:profile|SlipperySheep79> I think you are right, in your case it makes sense to add this option.
Could you open a GH issue with the feature request? It should be fairly easy to add, and we use GH to make sure we track those requests.
wdyt?
https://github.com/allegroai/clearml/blob/master/clearml/automation/trigger.py
Example coming soon, with docs :)
Hi RoughTiger69
One quirk I found was that even with this flag on, the agent decides to install whatever is in the requirements.txt
What's the clearml-agent version you are using?
I just noticed that even when I clear the list of installed packages in the UI, upon startup, clearml agent still picks up the requirements.txt (after checking out the code) and tries to install it.
It can also just skip the entire Python installation with:
`CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=1`
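If you launch the agent from a wrapper script, a minimal sketch of setting it (the queue name is just a placeholder):
```python
import os
import subprocess

# Launch the agent with the env var set, so it skips creating/installing
# the Python virtual environment entirely ("default" is a placeholder queue)
env = dict(os.environ, CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL="1")
subprocess.run(["clearml-agent", "daemon", "--queue", "default"], env=env)
```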
Hi CheerfulGorilla72
the "installed packages" section is used as "requirements.txt for the agent.
Are you saying the autodetection fails to detect all packages? You can specify, in "manual execution" (i.e. not when the agent is running the code), to just take the local requirements.txt:
```python
# notice: this call must be executed before Task.init
Task.force_requirements_env_freeze(requirements_file="./requirements.txt")
task = Task.init(...)
```
3. If you clear all the "installed packages" se...
SteadyFox10 I suspect you are correct 🙂
CourageousLizard33 see also section (4) here:
https://github.com/allegroai/trains-server/blob/master/docs/install_linux_mac.md#launching-the-trains-server-docker-in-linux-or-macos
Any chance you can PR a fix to the docs?
Let's assume the host has a folder for all users for persistent storage, for example '/mnt/user_data/', and you have a user named 'myuser' and a matching subfolder '/mnt/user_data/myuser'.
Then we can do:
```
clearml-session ... --docker "my_docker_image -v /mnt/user_data/:/host_mount/" --user-folder "/host_mount/myuser"
```
BTW: the next time you call clearml-session, these will become the default parameters, so no need to change anything 🙂
So it sounds as if, for some reason, calling Task.init inside a notebook on your JupyterHub is not detecting the notebook.
Is there anything special about the JupyterHub deployment? How is it deployed? Is it password protected? Is this reproducible?
I put two models in the same endpoint, then only one was running,
Without providing a version number, you are overriding the models (because it is the same endpoint).
I started another docker container having a different port number and then the curls with the new model endpoint (with the new port) started working
Seems like a misconfiguration of the first one?
, which apparently I can't specify when I establish the model endpoint but I need to re compose the docker container by...
@<1535793988726951936:profile|YummyElephant76>
Whenever I create any task the "uncommitted changes" are the contents of `ipykernel_launcher.py`, is there a way to make ClearML recognize that I'm running inside a venv?
This sounds like a bug, it should have the entire notebook there, no?
Hi @<1523701295830011904:profile|CluelessFlamingo93>
from your log:
`ImportError: cannot import name 'packaging' from 'pkg_resources' (/home/bat/.clearml/venvs-builds/3.9/lib/python3.9/site-packages/pkg_resources/__init__.py)`
I'm guessing yolox/setuptools
Try adding to the "Installed packages": `setuptools==69.5.1`
(Something about the `setup...
The current implementation (since 1.6.3 I think) creates the issues in the linked comment (with images to visualize).
Understood, basically the moment we add nested project view to the dataset (and pipelines for that matter, and both are already being worked on), it should solve everything. Is that correct?
PompousBeetle71 Could you check with 0.14.3 that just released?
Are you saying that in the UI you do not see the "confusion matrix" at all, only in the GS bucket?
Hmmm that sounds like a good direction to follow, I'll see if I can come up with something as well. Let me know if you have a better handle on the issue...
@<1687643893996195840:profile|RoundCat60> I'm assuming we are still talking about the S3 credentials, sadly no 😞
Are you familiar with boto and IAM roles?
The issue I want to avoid is aborting of the dataset task that these regular tasks update.
HelpfulHare30 could you post pseudo code of the dataset update?
(My point is, I'm not sure the Dataset actually supports updating, as it would need to re-upload the previous delta snapshot.) Wouldn't it be easier to add another child dataset and then use dataset.squash (like one would do in git)?
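For example, a minimal sketch of the child-dataset + squash flow (names, IDs, and paths here are hypothetical):
```python
from clearml import Dataset

# Create a child dataset holding only the delta on top of the existing one
child = Dataset.create(
    dataset_name="my_dataset_v2",      # hypothetical name
    dataset_project="datasets",        # hypothetical project
    parent_datasets=["<parent_dataset_id>"],
)
child.add_files("/path/to/new_or_changed_files")
child.upload()
child.finalize()

# Later, collapse the whole lineage into a single standalone dataset
squashed = Dataset.squash(dataset_name="my_dataset_squashed", dataset_ids=[child.id])
```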
This is odd, can you send the full log of the failed Task and, if possible, the code?
SmarmySeaurchin8 what do you think?
https://github.com/allegroai/trains/issues/265#issuecomment-748543102
`task.connect_configuration`
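For reference, a minimal sketch of connecting a configuration (the project/task names and values are placeholders):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="config demo")  # placeholder names

# Connect a config dict (a path to a config file also works); when an agent
# re-runs the task, the values are taken from the UI instead of the code
config = {"learning_rate": 0.1, "batch_size": 32}
config = task.connect_configuration(config, name="my config")
```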
Was going crazy for a short amount of time yelling to myself: I just installed clear-agent init!
oh noooooooooooooooooo
I can relate so much, happens to me too often that copy pasting into bash just uses the unicode character instead of the regular ascii one
I'll let the front-end guys know, so we do not make ppl go crazy 😉
sets up the venv correctly, prints `Starting Task Execution:` and then does nothing
Can you provide a log?
Do you see the code/git reference in the Pipeline Task details - Execution Tab ?
In theory this would be doable, but wouldn't it be a bit confusing? Also, why not always use containers if the host supports it? There is no real downside, just set the default docker image to something that is a good starting point.
In the agent, no, it pipes stdout/stderr of the container and logs everything 😞
to get a json or something like that?
There is an API to get all the console logs, is this what you are after?
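For instance, a minimal sketch using the SDK (the task ID is a placeholder):
```python
from clearml import Task

task = Task.get_task(task_id="abc123")  # placeholder task ID
# Returns the latest console output reports as a list of strings
for chunk in task.get_reported_console_output(number_of_reports=10):
    print(chunk)
```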
Do you mean it recently became part of the enterprise version?
I do not think so, but it seems the support for the open-source version is more like a PoC
https://github.com/allegroai/clearml-agent/blob/master/examples/k8s_glue_example.py
And when you retrieve just this file, is it working?
(Maybe for some reason the file is corrupted?)
You mean for running a worker? (I think plain vanilla python / ubuntu works)
The only change would be pip install clearml / clearml-agent ...
MelancholyElk85
How do I add files without uploading them anywhere?
The files themselves need to be packaged into a zip file (so we have an immutable copy of the dataset). This means you cannot "register" existing files (in your example, files on your S3 bucket?!). The idea is to make sure your dataset is protected against changes on the one hand, but on the other to allow you to change it, and only store the changeset.
Does that make sense?
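A minimal sketch of that flow (the names, path, and bucket are hypothetical):
```python
from clearml import Dataset

ds = Dataset.create(dataset_name="my_dataset", dataset_project="datasets")  # placeholder names
ds.add_files("/path/to/local/files")
# The files are packaged into an immutable (zipped) snapshot and uploaded,
# e.g. to your own S3 bucket
ds.upload(output_url="s3://my-bucket/datasets")  # hypothetical bucket
ds.finalize()
```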
Hi SpotlessFish46,
Is the artifact already in S3?
Is S3 configured as the default files_server in the trains.conf?
You can always use the StorageManager to upload to wherever you want and register the URL on the artifacts.
You can also programmatically change the artifact destination server to S3, then upload the artifact as usual.
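A minimal sketch of both options (the bucket, names, and paths are placeholders):
```python
from clearml import Task, StorageManager

task = Task.init(
    project_name="examples", task_name="artifact demo",  # placeholder names
    output_uri="s3://my-bucket/artifacts",  # artifacts now default to this S3 destination
)

# Option 1: upload as usual; the artifact lands on the S3 output_uri
task.upload_artifact("my_data", artifact_object="/path/to/file.csv")

# Option 2: upload yourself with StorageManager and register the resulting URL
url = StorageManager.upload_file("/path/to/file.csv", "s3://my-bucket/files/file.csv")
task.upload_artifact("my_data_url", artifact_object=url)  # stores the URL as the artifact value
```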
What would be the best match for you?