# Useful Commands
Run psql on a server
ssh <server>
sudo -u postgres psql
list databases
\l
\l+ (extra details like size)
list tables from a database
\c <database-name>
\dt
\dt mdl_reen* (list tables whose names start with "mdl_reen")
list column names for a table
\d+ <table-name>
find the size of the table including indexes
SELECT pg_size_pretty(pg_total_relation_size('<table-name>'));
for eg, SELECT pg_size_pretty(pg_total_relation_size('mdl_logstore_standard_log'));
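Related built-in size functions let you break the total down further; a sketch using the same example table:

```sql
-- Heap only (excludes indexes and TOAST)
SELECT pg_size_pretty(pg_relation_size('mdl_logstore_standard_log'));
-- Indexes only
SELECT pg_size_pretty(pg_indexes_size('mdl_logstore_standard_log'));
-- Size of the whole current database
SELECT pg_size_pretty(pg_database_size(current_database()));
```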
convert timestamp to date
select id, modelid, evaluationmode, timesplitting, to_timestamp(timecreated) from mdl_analytics_models_log;
select id, modelid, evaluationmode, timesplitting, to_timestamp(timecreated)::date from mdl_analytics_models_log;
select id, userid, fromaddress, subject, to_timestamp(timesent)::date from mdl_mail_log order by timesent desc;
Begin / Rollback / Commit
When making changes to a database on a server, start with 'BEGIN' and then run the change (e.g. a postgres UPDATE query)
If you then type 'ROLLBACK', the change is undone
If you type 'COMMIT', the change is made permanent; until you commit, nothing is permanent, and disconnecting without committing discards the change
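A minimal sketch of the pattern (the id and email values are made up; mdl_user is a standard Moodle table):

```sql
BEGIN;
UPDATE mdl_user SET email = 'new@example.com' WHERE id = 42;
-- inspect the result before deciding
SELECT id, email FROM mdl_user WHERE id = 42;
ROLLBACK;  -- or COMMIT; to keep the change
```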
# Clone or copy a database
CREATE DATABASE "mysite-testing-totara1" WITH TEMPLATE "mysite-testing-totara" OWNER "mysite-testing-totara";
# Download and Restore
# Download table structure and data from server to local / Download table into csv
ssh <server>
sudo -u postgres psql
COPY (SELECT * FROM foo) TO '/tmp/test.csv' WITH CSV DELIMITER ',' HEADER;
exit
Go to local machine
scp learn-xxxxxxx-xxxxxxx-db2:/tmp/mdl_ucdw_studentattributes.csv ~/development
# Download database structure and data from server to local
# Getting database from the server
scp <server-name>:/var/backups/pg/pg-9.1-main/current/pg-{clientname}-{env}-{sitetype}-{timestamp}.pgdump <local/path>
Say you need to get the database backup for the site catalystdemo-xxxxxxx-totara
eldb -s catalystdemo-xxxxxxx-totara cluster show --servers
ssh learn-shared20-porwal-testing-db1
sudo -u postgres ls /var/backups/pg/pg-12-main/current | grep catalystdemo-testing-totara
scp sumaiyajaved@learn-shared20-porwal-testing-db1:/var/backups/pg/pg-12-main/current/pg-catalystdemo-testing-totara-2021-12-28-220256.pgdump ~/development/backup
sudo -u postgres chmod 777 testcatalystdemo.pgdump (to change the file permissions in case they are blocking the scp above)
or
rsync
server=learn-shared20-porwal-testing-db2
db=ucace-testing-moodle
pg=12
rsync --rsync-path="sudo -u postgres rsync" --include="pg-$db-*.pgdump" --exclude="*" --progress -avz $server:/var/backups/pg/pg-$pg-main/current/ .
or
rsync -av --progress --rsync-path "sudo -u postgres rsync" <db-server>:<dbdumpfile> <path-local-directory>
rsync -av --progress --rsync-path "sudo -u postgres rsync" learn-shared20-porwal-testing-db1:/var/backups/pg/pg-12-main/current/pg-catalystdemo-testing-totara-2021-12-28-220256.pgdump ~/development/backup
# Restoring on local
Restoring database onto local site which resides in a docker container (moodle-docker-environment)
copy the dump file that you downloaded above into the docker container being used for your site
docker cp </path/to/dump/in/host> <container_name>:<path_to_volume>
docker cp ~/development/backup/testcatalystdemo.pgdump moodle-docker-environment_db_1:/var/backups
docker exec <container-name> pg_restore -U <database-owner> -d <database-name> <path-of-the-file-in-container> (this assumes the database already exists; create it first, e.g. with createdb, if it doesn't)
docker exec 97a87498eb3a_newstack_db_1 pg_restore -U moodle -d totaratxp13demo8 /var/backups/testcatalystdemo.pgdump
# Possible Errors
# Error: "pg_restore: [archiver] unsupported version (1.14) in file header"
This happens when the version of the dump file is newer than the postgres in the container. In the above case of catalystdemo-prod-totara, the dump is from postgres 12 while the container runs postgres 11.
The solution is to rebuild the container with postgres 12.
To do this update the postgres to 12 in base.yml file in moodle-docker-environment (image: postgres:12)
Rebuild container
moodle-docker-compose down
moodle-docker-compose up -d
you might need to remove the docker volume: docker volume rm <volume-name>
docker volume rm moodle-docker-environment_pgdata
mget-database <trident> <server> in local moodle-docker-environment; but need to update the pg version to 12
or
mrestore-database <file-path-computer> (it will use the db initialised in the current container; you can check with moodle-docker-compose top)
mrestore-database ~/development/backup/testcatalystdemo.pgdump
# Subselect in Join
Use a SELECT inside a JOIN statement to run a query within a query. It resembles having a main SQL query in a PHP function followed by another query inside a for-loop.
For example, I wanted to get results from a table that was joined via an intermediate table. This caused a problem because the relationship between the middle table and the first table was one-to-many.
* This query led to duplicates
select c.id, c.userid, gm.userid, c.course, g.courseid, g.idnumber, g.name, g.id, gm.id from mdl_course_completions c LEFT JOIN mdl_groups_members gm on gm.userid=c.userid LEFT JOIN mdl_groups g on c.course=g.courseid and gm.groupid=g.id where course=17 order by c.userid;
* Solution: Select inside a JOIN statement
select c.id, c.userid, z.name, z.idnumber, c.course, z.courseid from mdl_course_completions c LEFT JOIN (select gm.userid,g.name,g.idnumber, g.courseid from mdl_groups_members gm left join mdl_groups g on gm.groupid = g.id) as z on z.userid=c.userid and z.courseid = c.course where c.course=17 and c.userid=387 order by c.userid;
* The structures of the two tables (screenshots not included here).
# Search and replace
If you want to replace the domain of a user's email address
UPDATE mdl_user SET email = REPLACE (email, 'sumaiyajave.com', 'example.com' );
# database missing
After restoring the database from the server, I noticed that the website was asking for installation again. This was because the restored database's table prefix did not match the prefix ($CFG->prefix) that the container's site was configured to use.
# selfjoin statement
Something like the below is useful for getting unique values from the same table. For example, the following gets the unique count of assessments per userid/assignment, as a user can assess the same assignment more than once.
SELECT count(distinct(g1.userid)) FROM {assignfeedback_apt_grades} g1 LEFT JOIN {assignfeedback_apt_grades} g2
on g1.id = g2.id where g1.grader = :grader AND g1.assignment = g2.assignment AND g1.userid = g2.userid
# Useful links
https://simkimsia.com/how-to-restore-database-dumps-for-postgres-in-docker-container/
# Find the largest table in a db
SELECT relname AS "table", reltuples AS est_rows, pg_total_relation_size(oid) / (1000*1000) AS total_mb, pg_total_relation_size(oid) / (1024*1024) AS total_mib, (reltuples::int / relpages::int) AS rows_per_page FROM pg_class WHERE relkind = 'r' AND relpages > 0 ORDER BY 3 DESC;
# Access db via psql
psql -h <domain> <db-name>
And then run a db query
If on a prod server, open a transaction first, which means
BEGIN;
db query ...
COMMIT;
Nothing you run takes effect permanently until you COMMIT; if you don't like the look of it you can run ROLLBACK; and it's as if you never did it. You may already know, but these are called transactions.
# Date differences
Show time difference in hours
select date_part('hour',to_timestamp(timefinish) - to_timestamp(timestart)) as duration from mdl_facetoface_sessions_dates limit 10;
select date_part('hour',to_timestamp(timestart) - now() + interval '3 days') as duration from mdl_facetoface_sessions_dates limit 10;
select id, sessionid, to_char(to_timestamp(timestart) AT TIME ZONE 'NZT', 'dd Mon YYYY, HH12:MI AM ') as timestart, date_part('day',to_timestamp(timestart) - now()) as daystosession from mdl_facetoface_sessions_dates where sessionid=153;
id | sessionid | timestart | daystosession
-----+-----------+------------------------+---------------
170 | 153 | 15 May 2023, 09:00 AM | 9
171 | 153 | 05 May 2023, 09:00 AM | 0
172 | 153 | 08 May 2023, 09:00 AM | 2
https://www.postgresql.org/docs/9.2/functions-datetime.html
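Note that date_part('hour', interval) returns only the hours field of the interval (so '2 days 03:00' gives 3, not 51). For the total elapsed hours, extract the epoch and divide; a sketch against the same table:

```sql
-- total elapsed hours rather than just the hours field of the interval
SELECT round(extract(epoch FROM (to_timestamp(timefinish) - to_timestamp(timestart))) / 3600) AS duration_hours
FROM mdl_facetoface_sessions_dates LIMIT 10;
```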
# Export table in csv
\copy (select * from <table>) to '/tmp/samplecsv.csv' CSV HEADER
rsync -av --progress --rsync-path "sudo -u postgres rsync" <server>:/tmp/samplecsv.csv <local-machine-location>
# Moodle / Totara Performance Issues / Query plan
If the db has good statistics about the data distribution in the tables being queried, it will choose a good plan; if the statistics are out of date, it may choose a very bad plan. I was attempting to make the query less sensitive to statistics changes.
It may be that my customisation is not required and that'd be great if that's the case.
The statistics are updated by the postgres autovacuum processes; by default a table's statistics are only refreshed after roughly 10% of its data has changed, and when a table has a lot of rows it can be a long time before 10% has changed.
The map tables are often rebuilt by truncating them and re-inserting, which is really bad for the stats. In some instances Totara runs an ANALYZE query right after the rebuild to try to mitigate this.
# See the tables that are getting populated when a script is running on a webserver
select * from pg_stat_activity where datname = '<dbname>' and state != 'idle';
follow that with \watch (e.g. \watch 5 to re-run it every 5 seconds)
This was useful to me when I was running an upgrade via the CLI or playbook and there was no output on screen.
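A variation I find useful for spotting long-running statements, using standard pg_stat_activity columns:

```sql
SELECT pid, usename, state, wait_event_type,
       now() - query_start AS runtime,
       left(query, 80) AS query
FROM pg_stat_activity
WHERE datname = '<dbname>' AND state != 'idle'
ORDER BY runtime DESC;
```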
# Visualise PG query
You can visualise pg explain plans with https://www.pgexplain.dev
Paste the EXPLAIN query and the resulting plan and you get something like
https://www.pgexplain.dev/plan/79db687b-12f4-4c61-acf3-0dffc6730b19#plan/node/47
# Vacuum
# Check the current vacuum cycle of suspected tables
select relname, reloptions, pg_namespace.nspname
from pg_class
join pg_namespace on pg_namespace.oid = pg_class.relnamespace where relname in ('mdl_user', 'mdl_lwm_grade');
relname | reloptions | nspname
---------------+-----------------------------------------------------------------------+---------
mdl_lwm_grade | {autovacuum_analyze_scale_factor=0,autovacuum_analyze_threshold=1000} | public
mdl_user | {autovacuum_analyze_scale_factor=0,autovacuum_analyze_threshold=1000} | public
(2 rows)
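The reloptions shown above are set per table with ALTER TABLE; a sketch matching that output:

```sql
-- analyze after every ~1000 changed rows, regardless of table size
ALTER TABLE mdl_user SET (autovacuum_analyze_scale_factor = 0, autovacuum_analyze_threshold = 1000);
-- revert to the cluster defaults
ALTER TABLE mdl_user RESET (autovacuum_analyze_scale_factor, autovacuum_analyze_threshold);
```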
# Distinct record based on two columns
You can use DISTINCT ON (col1, col2)
For example, SELECT DISTINCT ON (userid, offlinereviewid) * FROM {offlinereview_completion}
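DISTINCT ON keeps the first row of each (col1, col2) group, so pair it with an ORDER BY that starts with those columns to control which row wins. A sketch using the same table (timemodified is a hypothetical tiebreaker column):

```sql
-- keeps the most recently modified row per (userid, offlinereviewid)
SELECT DISTINCT ON (userid, offlinereviewid) *
FROM {offlinereview_completion}
ORDER BY userid, offlinereviewid, timemodified DESC;
```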
# Replace to extract substring from a string
If an assignment has idnumber SUT4002A_M00085A_01074 and the course has idnumber SUT4002A_M00085A, extract 01074 from it:
REPLACE(g.idnumber, (c.idnumber || '_'), '') as assessmentid, a.attemptnumber,
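The same idea as a standalone check, using the literal values from the example above:

```sql
SELECT REPLACE('SUT4002A_M00085A_01074', 'SUT4002A_M00085A' || '_', '');
-- returns 01074
```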
# Search or find fulltext in a pgdump
pg_restore /var/backups/pg/pg-16-main/current/pg-soe-prod-moodle-2025-07-08-000213.pgdump -f - | grep -o 'https://eddy.au.panopto.com/Panopto/Pages/Embed.aspx' | wc -l