PompousBeetle71 I think what you saw as tags in the previous version was actually system tags; now we also have user tags (i.e. `.tags`). If you still want to access the system tags, can you try:
`InputModel('aabbcc')._get_base_model().data.system_tags`
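For example, a minimal sketch (the 'aabbcc' model ID is a placeholder, and note that `_get_base_model()` is an internal accessor, so it may change between versions):
```python
from clearml import InputModel

# 'aabbcc' is a placeholder model ID
model = InputModel('aabbcc')
# _get_base_model() returns the backend model object, which still carries the system tags
print(model._get_base_model().data.system_tags)
```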
DeliciousBluewhale87 you can try:
```python
import sqlite3

import pandas as pd

# open the SQLite database and load the whole table into a DataFrame
conn = sqlite3.connect('test_database')
sql_query = pd.read_sql_query('''
    SELECT *
    FROM products
''', conn)
# then write it out as CSV
sql_query.to_csv(...)
```
`report_text` does not; this is very weird.
Okay this seems to be the issue.
Just making sure: the Task status is "running" and `task.get_logger().report_text("something")` does not report a thing?
Do you see it on your screen?
Can you test without the "Task.debug_simulate_remote_task / init"?
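For reference, a minimal sketch of the plain flow that should work (project/task names are placeholders, no `debug_simulate_remote_task` involved):
```python
from clearml import Task

# placeholder project/task names
task = Task.init(project_name="examples", task_name="report_text test")
# this line should show up in the task's console log in the UI
task.get_logger().report_text("something")
```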
I just set
`agent.enable_git_ask_pass: true`
in the config of the clearml agent (v1.5.1), and the task is still stuck asking for a username when trying to fetch the private dependency.
Hmm, that should not happen. Could you delete the cache and retry?
DefeatedOstrich93 many thanks! I was able to reproduce it (basically, newly added files caused `git apply` to fail).
Fix will be part of the next clearml-agent RC
Maybe different API version...
What's the trains-server version?
Hi GiddyTurkey39
First, yes, you can just edit the "installed packages" section and add any missing package (it is equivalent to requirements.txt).
I wonder why trains failed to detect the "bigquery" package in the first place... Any thoughts?
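Since that section uses requirements.txt syntax, adding a line like the following should do it (assuming the PyPI package is google-cloud-bigquery; the version pin here is just an illustration):
```
google-cloud-bigquery==2.34.4
```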
`time.sleep(time_sleep)`
You should not call `time.sleep` in async functions; it should be `asyncio.sleep`.
See if that makes a difference
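For example, a minimal sketch of the difference (the function name is made up for illustration):
```python
import asyncio

async def poll_status():
    # time.sleep() would block the entire event loop;
    # asyncio.sleep() yields control so other coroutines keep running
    await asyncio.sleep(5)
    print("done waiting")

asyncio.run(poll_status())
```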
BTW, is it cheaper than an EC2 instance? Why not use the AWS autoscaler?
Sure thing!
BTW: not sure if it helps, but the SaaS version integrates with Genesis Cloud. I know they provide cheap GPUs; might be worth checking.
Hi WearyLeopard29
Yes 🙂 this is exactly how it should work
Hi AttractiveCockroach17
> Many of these experiments appear with status running on clearml even though they have finished running
Could it be their process just terminated (i.e. was not properly shut down)?
How are you running these multiple experiments?
BTW: if the server does not see any change in a Task for a while (I think the default is 2 hours), it will automatically mark the Task as aborted.
Hi BlandPuppy7, is this Trains related? Are you trying to integrate it and need help?
Hi @<1561885921379356672:profile|GorgeousPuppy74>
- Could you copy the 3 messages here into your original message? It helps keep things tidy (press the 3-dot menu and select edit).
- What do you mean by "currently its not executing in queue-01"? You changed it, so it should be pushed to queue-02, no? Also notice that you can run the entire pipeline as sub-processes for debugging; just call `pipe.start_locally(run_pipeline_steps_locally=True)`, as in the sketch below.
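A minimal sketch of that debug flow (all project/task/step names here are placeholders):
```python
from clearml import PipelineController

pipe = PipelineController(name="debug-pipeline", project="examples", version="1.0.0")
# reuse an existing task as a pipeline step (placeholder names)
pipe.add_step(
    name="step_one",
    base_task_project="examples",
    base_task_name="step 1 task",
)
# run the controller and every step as local sub-processes instead of enqueuing them
pipe.start_locally(run_pipeline_steps_locally=True)
```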
You also need an agent on the ser...
Hmm, good question. I'm actually not sure if you can pass 24GB (this is not a limit on the GPU memory; it affects the memblock size, I think).
`--docker`, or in clearml.conf: https://github.com/allegroai/clearml-agent/blob/21c4857795e6392a848b296ceb5480aca5f98e4b/docs/clearml.conf#L153
Hi @<1542316991337992192:profile|AverageMoth57>
Not sure I follow what you have in mind regarding the Gerrit integration.
Sounds interesting ...
wdyt?
Well, I guess you can say this is definitely not a self-explanatory line 😉 but it is actually asking whether we should extract the code. Think of it as:
```python
if extract_archive and cached_file:
    return cls._extract_to_cache(cached_file, name)
```
Hi @<1523701181375844352:profile|ExasperatedCrocodile76>
The docker containers should get the host IP, not the internal docker IP. What am I missing?
QuaintJellyfish58 notice it tries to access AWS, not your minio.
This seems like a bug?! Can you quickly verify with the previous version?
Also notice you have to provide the minio section in the clearml.conf so it knows how to access the endpoint:
https://github.com/allegroai/clearml/blob/bd53d44d71fb85435f6ce8677fbfe74bf81f7b18/docs/clearml.conf#L113
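Something along these lines (host/key/secret are placeholders; `secure: false` assumes a plain-HTTP minio endpoint):
```
sdk {
    aws {
        s3 {
            credentials: [
                {
                    host: "my-minio-host:9000"
                    key: "minio-access-key"
                    secret: "minio-secret-key"
                    multipart: false
                    secure: false
                }
            ]
        }
    }
}
```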
GrittyKangaroo27 any chance you can open a GitHub issue so this is not forgotten ?
(BTW: I think 1.1.6 is going to be released later today; then we will have a few RCs with improvements on the pipeline, and I will make sure we add that as well)
FranticCormorant35
See here https://github.com/allegroai/trains/blob/master/examples/manual_reporting.py#L42
That makes total sense. The question was about Mac users and the OS environment in the configuration file, and having that OS environment set in code (this is my assumption, as it seems that at import time it does not exist). What am I missing here?
Yes, but I'm not sure that they need to have a separate task.
Hmm okay I need to check if this can be easily done
(BTW, the downside of that, you can only cache a component, not a sub-component)
Could you manually configure the ~/trains.conf?
(Just copy-paste the section from the UI)
Then try to run: `trains-agent list`
Hi DrabCockroach54
This seems like a pip issue trying to install from source. Try upgrading the pip version before installing numpy; it should solve it 🤞
> I do it to get the project name
You can still get it from the task object (even after closing it).
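For example, something like this sketch (assuming `get_project_name()` is available on your SDK's Task object):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="demo")  # placeholder names
task.close()
# the task object still holds its metadata after close()
print(task.get_project_name())
```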
> Another place I was using it was to see if I am in a pipeline task
Yes, that makes sense; this is one of the use cases (to get access to the Task that is currently running). The bug itself will only happen after closing the Task (it needs to clear the OS variable).
You can either upgrade to the 1.0.6rc2 or you can hack/fix it with:
```python
os.environ.pop('CLEARML_PROC_MASTER_ID', None)
os.envi...
```