PostgreSQL is an open source object-relational database system.
The PostgreSQL monitoring extension captures metrics from a PostgreSQL database and displays them in the AppDynamics Metric Browser.
Before installing the extension, make sure the prerequisites mentioned here are met. Do not proceed with the installation until they are.
- Unzip the contents of "PostgreSQLMonitor.zip" as "PostgreSQLMonitor" and copy the "PostgreSQLMonitor" directory to <MachineAgentHome>/monitors/
- Configure the extension by referring to the section below.
- Configure the path to the config.yml file by editing the task-arguments in the monitor.xml file:
  <task-arguments>
      <argument name="config-file" is-required="true" default-value="monitors/PostgreSQLMonitor/config.yml" />
  </task-arguments>
- Restart the machine agent.
Note: Please make sure not to use tabs (\t) while editing YAML files. You can validate the file using a YAML validator.
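As a quick sanity check before restarting the agent, the sketch below (illustrative only, not part of the extension) scans YAML text for tab characters, which break YAML indentation; the sample content is hypothetical:

```python
def find_tab_lines(text: str) -> list[int]:
    """Return 1-based line numbers that contain a tab character."""
    return [i for i, line in enumerate(text.splitlines(), 1) if "\t" in line]

# Hypothetical config.yml content with a tab on the third line
sample = 'servers:\n  - displayName: "Local cluster"\n\thost: "127.0.0.1"\n'
print(find_tab_lines(sample))  # [3]
```

Run it against your own config.yml content; an empty list means no tabs were found.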
Configure the extension by editing the config.yml file in <MachineAgentHome>/monitors/PostgreSQLMonitor/. The metricPrefix of the extension has to be configured as specified here. Please make sure the right metricPrefix is chosen for your machine agent deployment; otherwise metrics may not be visible in the controller.
- Configure the PostgreSQL cluster/server properties under servers by specifying displayName (required), host (required), port (required), user (required), password (only if authentication is enabled), and encryptedPassword (only if password encryption is required). Also specify the databases to be monitored. You can specify multiple servers in the same config.yml file. When using an IPv6 address, set useIpv6: "true":

  servers:
    - displayName: "Local cluster"
      host: "127.0.0.1"
      useIpv6: "false"
      port: "5432"
      user: ""
      password: ""
      encryptedPassword: ""
      # optionalConnectionProperties:
      #   connectTimeout: 100
      #   tcpKeepAlive: true
      databases: ...
    - displayName: "Local cluster"
      host: "::1"
      useIpv6: "true"
      port: "5432"
      user: ""
      password: ""
      encryptedPassword: ""
      databases: ...

  Additional connection properties can be set using optionalConnectionProperties; you can refer here for all connection parameters.
- Configure the databases under each server; at least one database is required under a server in order to configure queries and fetch metrics. Configure a database by providing dbName and configuring queries as explained in the next section. You can configure multiple databases.
databases:
  - dbName: "test"
    queries: ...
Only queries that start with SELECT are allowed. The extension supports reading values from multiple columns at once, but it only pulls metrics from the latest row returned.
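The SELECT-only rule can be expressed as a simple check; this sketch is illustrative (not the extension's actual validation code):

```python
def is_allowed_query(stmt: str) -> bool:
    """Only read-only statements that start with SELECT are accepted."""
    return stmt.lstrip().upper().startswith("SELECT")

print(is_allowed_query("SELECT datname FROM pg_stat_database"))  # True
print(is_allowed_query("DELETE FROM pg_stat_database"))          # False
```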
The name of the metric displayed on the Metric Browser will be the "name" value that is specified in columns.
queries : You can add multiple queries under this field. Each configured query consists of the following:
- name : The name you would like to give to the metrics produced by this query.
- serverLvlQuery : Set this to true only if the query returns stats for the databases under the current server.
- queryStmt : The SQL query that will be used to query the database.
- columns : List all the columns you are trying to get values from.
  - name : The name of the column as you would like to see it on the Metric Browser.
  - type : Defines whether the value returned from the column is used for the metric path or as the metric value.
    - metricPath : The value returned is appended to the metric path for the metric.
    - metricValue : The value returned becomes the metric value, reported under the name specified above.
For example, consider the below query for the server Local cluster:
databases:
  - dbName: "test"
    queries:
      # Add where clauses to the query to filter databases
      - name: "Database Stats"
        serverLvlQuery: "true"
        queryStmt: "SELECT datname, numbackends FROM pg_stat_database"
        # the columns are the metrics to be extracted
        columns:
          - name: "datname"
            type: "metricPath"
          - name: "numbackends"
            type: "metricValue"
            properties:
              alias: "Number of connections"
              aggregationType: "OBSERVATION"
              timeRollUpType: "AVERAGE"
              clusterRollUpType: "INDIVIDUAL"
The above query will produce metrics with the metric path Custom Metrics|Local Cluster|Database Stats|<datname>|Number of connections. Since the query has serverLvlQuery: "true", dbName is not part of the metric path.
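To make the mapping concrete, here is a small Python sketch (illustrative only, not the extension's actual code) of how a metricPath column value and a metricValue column combine into the metric path from the example above; the row contents are hypothetical:

```python
# Names mirror the "Database Stats" example; the row is a hypothetical query result.
metric_prefix = "Custom Metrics|Local Cluster"
query_name = "Database Stats"
row = {"datname": "postgres", "numbackends": 5}

# Columns typed "metricPath" contribute their value to the path;
# columns typed "metricValue" become the metric, reported under the alias.
metric_path = "|".join([metric_prefix, query_name, row["datname"], "Number of connections"])
print(metric_path, "=", row["numbackends"])
# Custom Metrics|Local Cluster|Database Stats|postgres|Number of connections = 5
```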
Consider the below query for server Local Cluster
databases:
  - dbName: "test"
    queries:
      - name: "Table Stats"
        serverLvlQuery: "false"
        # add where clause to the query to filter tables
        queryStmt: "SELECT relname, seq_scan, seq_tup_read FROM pg_stat_user_tables where relname = 'myTable'"
        columns:
          - name: "relname"
            type: "metricPath"
          - name: "seq_scan"
            type: "metricValue"
            properties:
              alias: "Sequential Scans"
              delta: "true"
              aggregationType: "OBSERVATION"
              timeRollUpType: "AVERAGE"
              clusterRollUpType: "INDIVIDUAL"
          - name: "seq_tup_read"
            type: "metricValue"
            properties:
              alias: "Tuples fetched by Sequential Scans"
              delta: "true"
              aggregationType: "OBSERVATION"
              timeRollUpType: "AVERAGE"
              clusterRollUpType: "INDIVIDUAL"
Assume that this query returns -
relname | seq_scan | seq_tup_read |
---|---|---|
myTable | 10 | 200 |
The above query will return 2 metrics:
Custom Metrics|Local Cluster|test|Table Stats|myTable|Sequential Scans = 10
Custom Metrics|Local Cluster|test|Table Stats|myTable|Tuples fetched by Sequential Scans = 200
Use the following formula for calculating numberOfThreads:
numberOfThreads = sum over all servers of (1 + number_of(databases)). For example, if you have 1 server and 2 databases, then numberOfThreads = 1 + 2 = 3.
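The formula above can be sketched in Python; the server/database layout below is a hypothetical example matching the 1-server, 2-database case:

```python
# One thread per server plus one per configured database.
servers = [
    {"displayName": "Local cluster", "databases": ["test", "metrics"]},  # hypothetical config
]

number_of_threads = sum(1 + len(s["databases"]) for s in servers)
print(number_of_threads)  # 1 + 2 = 3
```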
Please visit this page to get detailed instructions on configuring Metric Path Character sequence replacements in Extensions.
Please visit this page to get detailed instructions on automatic dashboard upload with extension.
Please visit this page to get detailed instructions on password encryption. The steps in this document will guide you through the whole process.
Workbench is a built-in feature provided with each extension to help you fine-tune the extension setup before you actually deploy it on the controller. Please review the following document on how to use the Extensions WorkBench.
Please follow the steps listed in the extensions troubleshooting document to troubleshoot your issue. It covers common issues customers have faced while installing the extension. If these don't solve your issue, follow the last step in the troubleshooting document to contact the support team.
If after going through the Troubleshooting Document you have not been able to get your extension working, please file a ticket with the following information so we can assist you better:
- config.yml & monitor.xml (<MachineAgentHome>/monitors/PostgreSQLMonitor/)
- controller-info.xml (<MachineAgentHome>/conf/controller-info.xml)
- Enable Machine Agent DEBUG logging by changing the level values of the following logger elements from INFO to DEBUG in <MachineAgentHome>/conf/logging/log4j.xml:
  <logger name="com.singularity">
  <logger name="com.appdynamics">
- After letting the Machine Agent run for 10-15 minutes, attach the complete <MachineAgentHome>/logs/ directory.
For any support related questions, you can also contact help@appdynamics.com.
Always feel free to fork and contribute any changes directly via GitHub.
Name | Version |
---|---|
Extension Version | 3.0.0 |
Controller Compatibility | 4.3.x or Later |
Postgres Version Support | 9.4 or later |
Last Update | 05/13/2019 |
Changes list | ChangeLog |