macOS script not creating a monthly log file
Closed this issue · 13 comments
Each `bash` script for `backup` and `optimize` needs to `touch` the file before adding to it via `echo`.
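A minimal sketch of that pattern, with a placeholder log path rather than the one the scripts actually use:

```bash
# Create the monthly log file if it doesn't exist yet, then append to it.
LOG="$HOME/logs/backup-$(date +%Y-%m).log"
mkdir -p "$(dirname "$LOG")"
touch "$LOG"
echo "$(date): backup started" >> "$LOG"
```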
I think maybe the `~` for a relative path is causing the log file to be put into a "phantom" root user account when the script is being run with `sudo`. Going to try `$HOME` instead to clean everything up.
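A quick diagnostic I can drop into the script to compare the two (nothing here is from the actual scripts):

```bash
# Print where ~ and $HOME point; run once normally and once under sudo to compare.
echo "tilde expands to: $(echo ~)"
echo "HOME is set to:   $HOME"
echo "SUDO_USER is:     ${SUDO_USER:-not set}"   # sudo sets this to the invoking user
```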
Got the log file to be created with the proper permissions, but for some reason the `bash` scripts that are being created from the heredocs are not writing to the log file. Still trying to figure this out.
Something's wrong with this line:
Also this line:
Also, it doesn't matter if `$HOME` or `~` is used. The log file is not being written to, so there's no confirmation that the "backup" or "optimize" commands are running properly.
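For anyone following along, a generic illustration of the heredoc pattern in question (placeholder names, not the actual lines referenced above); the usual gotcha is which variables get expanded when the script is generated versus when it runs:

```bash
# Unquoted EOF: $LOG is baked in at generation time; the escaped \$(date)
# survives into the generated script and runs at execution time.
LOG="$HOME/logs/backup-$(date +%Y-%m).log"
cat > "$HOME/backup.sh" <<EOF
#!/bin/bash
touch "$LOG"
echo "\$(date): backup ran" >> "$LOG"
EOF
chmod +x "$HOME/backup.sh"
```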
The plot thickens.
Manually running the one `echo` command generated from the heredoc works fine.
Maybe this is just some permissions problem.
Maybe it's the fact that the `logs` folder only had permissions of `755` here:
Trying `777` instead.
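What that experiment looks like, with the path assumed:

```bash
# Loosen the logs directory so whatever user runs the scripts can write to it,
# then confirm the mode actually changed.
chmod 777 "$HOME/logs"
ls -ld "$HOME/logs"
```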
Soliciting help from Reddit: https://www.reddit.com/r/bash/comments/8qtg6e/trying_to_figure_out_why_echo_wont_write_to_a_log/
Given the `&&`, I'm thinking that perhaps the previous commands aren't returning exit codes of 0.
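Here's the shape of the failure I suspect, sketched with placeholder paths rather than the actual script:

```bash
mkdir -p "$HOME/backups" "$HOME/logs"
# If pg_dump exits non-zero, the && short-circuits and the echo never runs,
# so nothing ever reaches the log file. Capture the status to see it directly.
pg_dump -U postgres testworkflow10 > "$HOME/backups/testworkflow10.sql"
status=$?
echo "pg_dump exit code: $status"
[ "$status" -eq 0 ] && echo "$(date): backup finished" >> "$HOME/logs/backup.log"
```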
Yes, the commands aren't returning exit codes of 0. There's some issue with the lack of the password, so maybe the `.pgpass` file isn't configured properly.
For `pg_dump`, I'm getting the following error: `pg_dump: [archiver (db)] connection to database "testworkflow10" failed: fe_sendauth: no password supplied`.
For `reindexdb`, I get: `reindexdb: could not connect to database testworkflow10: fe_sendauth: no password supplied`.
For `vacuumdb`, I get: `vacuumdb: could not connect to database testworkflow10: fe_sendauth: no password supplied`.
So, the fact that it's an `&&` operator is properly showing that the earlier commands didn't successfully execute.
Now I need to figure out how to make sure that the `.pgpass` file is properly configured so that the commands don't actually need a password to execute.
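For reference, the whole setup is only a few lines (the password is a placeholder here, not the real one); one easy thing to miss is that libpq silently ignores the file unless its permissions are 0600:

```bash
# Format of each line: hostname:port:database:username:password
cat > "$HOME/.pgpass" <<'EOF'
localhost:5432:*:postgres:YOUR_PASSWORD_HERE
EOF
chmod 600 "$HOME/.pgpass"   # required, or the file is ignored
```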
Very strange.
I've gone ahead and completely nuked my installation of PostgreSQL on macOS 10.12.6 as per Dwaine's instructions.
Now, with a totally fresh PostgreSQL installation, I try to run the macOS script, but the `.pgpass` file gets rejected, and neither the `backup` nor `optimize` scripts will run.
I think perhaps the `.pgpass` file needs to live in the `postgres` user's home directory. Going to try to put it there instead.
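Roughly what I mean (hedged: `~postgres` expands to whatever home directory the postgres system user actually has on this machine):

```bash
sudo cp "$HOME/.pgpass" ~postgres/.pgpass
sudo chown postgres ~postgres/.pgpass
sudo chmod 600 ~postgres/.pgpass
```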
No, it's not the `.pgpass` file.
I think the syntax is wrong. `-U` can be followed by a space, but `--username` needs an equals sign with the value: `--username=postgres`.
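For reference, a sketch of the form I'm standardizing on across the three tools, using the database name from the errors above:

```bash
pg_dump --username=postgres testworkflow10 > testworkflow10.sql
reindexdb --username=postgres testworkflow10
vacuumdb --username=postgres testworkflow10
```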
Going to go through all the code and see if cleaning this up resolves everything.
If I might intrude with a suggestion:
This was something one of our devs sent a customer inquiring about scripting postgres a long time ago. Seems like it may be appropriate here?
Add the following line at the end of the pgpass file:
localhost:5432:*:postgres:DaVinci
@Dwainem Yup, my `.pgpass` file already looks like that. The heredoc creates it that way, so that's all good.
I think I actually just figured this out though!
For whatever reason, when I changed all three of these lines in the `pg_hba.conf` file to the `trust` method of identification, all the scripts and `launchd` user agents were able to run perfectly:
local all all trust
host all all 127.0.0.1/32 trust
host all all ::1/128 trust
I'm not clear about the full security implications of this though--my thought is that since this is just on a local network, this is acceptable. It's not a PostgreSQL server being used out on the Web.
It doesn't seem like there's anything in the macOS script to modify--I'll just need to add a few lines to the documentation so that people know to modify their `pg_hba.conf` accordingly.
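One more thing worth putting in the documentation: `pg_hba.conf` edits only take effect after the server reloads its configuration, so either restart PostgreSQL however it was started or ask the running server to re-read it:

```bash
psql -U postgres -c "SELECT pg_reload_conf();"
```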
There might be some nice and convenient `bash` commands to have the script go and update the `pg_hba.conf` file for the user, but I'd have to look into that. That'll be for another day.
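If I ever get to it, a rough sketch of what that automation might look like; this assumes a Homebrew-style data directory and the stock `pg_hba.conf` layout, so treat it as a starting point only:

```bash
# Back up pg_hba.conf, then switch the method on the local and loopback lines to trust.
PG_HBA="/usr/local/var/postgres/pg_hba.conf"   # assumed location; adjust for your install
sudo cp "$PG_HBA" "$PG_HBA.bak"
sudo sed -i '' -E \
  -e 's|^(local[[:space:]]+all[[:space:]]+all[[:space:]]+)[a-z0-9-]+$|\1trust|' \
  -e 's|^(host[[:space:]]+all[[:space:]]+all[[:space:]]+127\.0\.0\.1/32[[:space:]]+)[a-z0-9-]+$|\1trust|' \
  -e 's|^(host[[:space:]]+all[[:space:]]+all[[:space:]]+::1/128[[:space:]]+)[a-z0-9-]+$|\1trust|' \
  "$PG_HBA"
```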
This has been resolved: #14