SyndrDB supports PostgreSQL-style server-side cursors for processing large result sets without loading all data into memory at once. Cursors allow you to declare a query, then fetch results in controlled batches.
Declares a named server-side cursor for a SELECT query. The query is planned but not executed until the first FETCH.
DECLARE cursor_name CURSOR FOR SELECT ...
-- Declare a cursor for all users
DECLARE all_users CURSOR FOR SELECT * FROM "users"
-- Declare a cursor with filtering
DECLARE active_orders CURSOR FOR SELECT * FROM "orders" WHERE status == "active"
-- Declare a cursor with sorting
DECLARE sorted_products CURSOR FOR SELECT name, price FROM "products" ORDER BY price
Note: Cursor names are identifiers (not quoted strings). They must be unique within the current session.
Retrieves rows from an open cursor. Each FETCH advances the cursor position.
FETCH 100 FROM cursor_name
Returns up to 100 rows from the cursor. If fewer than 100 rows remain, returns only the remaining rows.
FETCH NEXT FROM cursor_name
Returns a single row and advances the cursor by one position.
FETCH ALL FROM cursor_name
Returns all remaining rows from the cursor. Use with caution on large result sets.
End of Data: When a FETCH returns zero rows, the cursor has been fully consumed. You can close it explicitly, or it will be closed automatically when the transaction ends.
Explicitly closes an open cursor and releases its resources.
CLOSE cursor_name
After closing, the cursor name can be reused for a new DECLARE statement.
Cursors in SyndrDB follow PostgreSQL semantics for lifecycle management:
- DECLARE creates the cursor and plans the query
- FETCH executes the query and streams results in batches
- CLOSE releases resources (or resources are released automatically at transaction end)

Cursors are automatically closed in the following situations:

- The enclosing transaction ends (COMMIT or ROLLBACK)
- The cursor exceeds its idle timeout (see CursorIdleTimeoutSeconds below)
| Setting | Default | Description |
|---|---|---|
| `MaxOpenCursorsPerSession` | 64 | Maximum number of cursors that can be open simultaneously in a single session |
| `CursorIdleTimeoutSeconds` | 300 | Seconds of inactivity before a cursor is automatically closed (5 minutes) |
| `StreamingChunkSize` | 256 | Number of documents per internal streaming chunk |
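To illustrate how `StreamingChunkSize` interacts with FETCH sizes: a single FETCH of N rows would be delivered in roughly ceil(N / StreamingChunkSize) internal chunks. This is a back-of-the-envelope sketch based on the defaults above, not a documented formula:

```python
import math

def streaming_chunks(fetch_size, chunk_size=256):
    """Estimate the internal chunks needed to stream one FETCH of
    `fetch_size` rows, assuming each chunk is filled up to
    StreamingChunkSize documents."""
    return math.ceil(fetch_size / chunk_size)

streaming_chunks(100)  # -> 1 (a small FETCH fits in one chunk)
streaming_chunks(500)  # -> 2 (with the default chunk size of 256)
```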
-- Process a large dataset in batches of 500
BEGIN TRANSACTION
DECLARE batch_cursor CURSOR FOR
SELECT * FROM "events" WHERE created_at > "2025-01-01T00:00:00Z"
-- Fetch first batch
FETCH 500 FROM batch_cursor
-- Fetch next batch (application processes each batch)
FETCH 500 FROM batch_cursor
-- Continue fetching until no more rows are returned...
FETCH 500 FROM batch_cursor
-- Clean up
CLOSE batch_cursor
COMMIT
BEGIN TRANSACTION
-- Create cursor for sorted results
DECLARE page_cursor CURSOR FOR
SELECT name, email, created_at FROM "users"
WHERE active == TRUE
ORDER BY created_at
-- Page 1: first 25 results
FETCH 25 FROM page_cursor
-- Page 2: next 25 results
FETCH 25 FROM page_cursor
-- Page 3: next 25 results
FETCH 25 FROM page_cursor
CLOSE page_cursor
COMMIT
BEGIN TRANSACTION
DECLARE row_cursor CURSOR FOR
SELECT order_id, total FROM "orders" WHERE status == "pending"
-- Process one row at a time
FETCH NEXT FROM row_cursor
-- ... process row ...
FETCH NEXT FROM row_cursor
-- ... process row ...
-- When FETCH returns 0 rows, all rows have been processed
CLOSE row_cursor
COMMIT
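Row-at-a-time processing like this maps naturally onto an iterator on the client side. A sketch, again with a stand-in for the real connection (`fetch_next` is a hypothetical helper, not part of SyndrDB's API):

```python
def iter_rows(fetch_next):
    """Yield one row per FETCH NEXT until the cursor is exhausted.

    `fetch_next` stands in for `FETCH NEXT FROM cursor`: it returns
    a single row, or None once the cursor is fully consumed.
    """
    while True:
        row = fetch_next()
        if row is None:
            break
        yield row

# Simulated pending orders backing the cursor.
_pending = [("ord-1", 19.99), ("ord-2", 5.00), ("ord-3", 42.50)]
_it = iter(_pending)

def fake_fetch_next():
    return next(_it, None)

totals = [total for _, total in iter_rows(fake_fetch_next)]
# totals == [19.99, 5.0, 42.5]
```

Wrapping the cursor in an iterator keeps the FETCH/end-of-data bookkeeping in one place, so application code can use an ordinary for-loop.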
BEGIN TRANSACTION
DECLARE small_cursor CURSOR FOR
SELECT name, role FROM "staff" WHERE department == "engineering"
-- Get everything at once (suitable for smaller result sets)
FETCH ALL FROM small_cursor
CLOSE small_cursor
COMMIT
| Command | Description |
|---|---|
| `DECLARE name CURSOR FOR SELECT ...` | Create a cursor for a query |
| `FETCH N FROM name` | Retrieve N rows from the cursor |
| `FETCH NEXT FROM name` | Retrieve the next single row |
| `FETCH ALL FROM name` | Retrieve all remaining rows |
| `CLOSE name` | Close the cursor and release resources |
Last updated: March 2026