the life and times of J T Frey


Here at the University of Delaware we were looking for a way to lock down access to our campus license server from off-campus IPs. The particular way that we run FlexLM leaves all of the vendor sub-daemons registered on random TCP ports, making a strict, static software firewall setup impractical.

Getting a large population of end users to install and begin using our VPN from off-campus presented quite a challenge. And since the VPN funnels all of a user's traffic from the user's ISP through the University, it also seemed excessive to increase the VPN load strictly for the sake of passing a couple of packets to the license server.

In looking for another solution I came across the address pool and “auth” features in IPF. The address pool would allow us to manage a dynamic list of off-campus IPs we've authorized to communicate with the license server. Two drawbacks to using the pool feature would be

  • The address list has the potential to become very large; IPF pools are resident in memory

  • Maintaining the pool (adding/removing IPs) requires rewriting and reloading a configuration file periodically and/or using the ippool CLI; the former means changes take time to propagate to the server, the latter requires locking/sequencing

Both of these properties would be handled quite easily by a transactional database: the address list can be arbitrarily large and the database itself would handle the locking/sequencing. Considering the fact that whatever application we created to allow our end users to authorize specific off-campus IPs would store its data in a database anyway, it would be wonderful to be able to interface IPF to that very same database. That's exactly what the “auth” IPF feature is meant to do: provide a means for a userland program to programmatically block/pass packets!

About pgipfauth...

The IPF firewall software has a little-used feature that allows one to write rules that will attempt to build a packet disposition by passing the packet (headers only or headers and payload) to a program running outside the kernel. The userland program must open the /dev/ipauth device and perform ioctl() calls to wait for a packet and to pass back the disposition. An example program exists in the official IPF source distributions; this program holds lists of authorized IPs in memory. The goal of pgipfauth is to use a Postgres database to hold a persistent, possibly large list of authorized IP addresses and consult that list as needed. Authorizations are cached by pgipfauth to attempt to decrease the number of database queries which must be performed.

Combined with the connection state table of IPF, the typical TCP connection profile is handled quite efficiently:

  1. Initial TCP packet triggers a call to pgipfauth

  2. pgipfauth queries the database with the originating IP, decides to BLOCK or PASS, and caches the IP + disposition

  3. IPF adds the TCP session to its state table

  4. Subsequent packets in the TCP session are passed or blocked based solely on the IPF state table

For the lifetime of the cache record added in step 2, the primary difference in the connection profile is that the database is never queried: pgipfauth merely returns the cached disposition.


The pgipfauth daemon accepts the following command line options:


/usr/local/pgipfauth/current/bin/pgipfauth {options}


  --help/-h                 this info
  --quiet/-q                don't print anything except critical information
  --annoying/-a             print so much that the sysadmin will go crazy trying to read
                            our log files
  --daemon/-d               run as a daemon (not in the foreground)
  --invalidator/-i [path]   use [path] as the FIFO we should watch for cache invalidation
                            requests; default is /usr/local/pgipfauth/0.1/etc/cache-invalidate
  --config/-c [path]        use [path] as the configuration file; default configuration file
                            is at: /usr/local/pgipfauth/0.1/etc/pgipfauth.conf


The daemon responds to the following signals:

  HUP                 force the daemon to dump its cache, close the database connection,
                      and re-read the configuration file
  USR1                write current info for the cache and database to the daemon's stdout
  USR2                force the daemon to purge its cache
  TERM,ABRT,QUIT,INT  terminate the daemon gracefully

The --daemon option just means that the process forks off a child and exits (the usual daemon behavior). Cache coherency issues and their relation to the USR2 signal and --invalidator CLI option are covered later in this document.

The Configuration File

An XML configuration file is used to provide the majority of the startup parameters to the pgipfauth daemon:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE pgipfauth-conf PUBLIC "-//UDEL//DTD pgipfauth configuration 1.0//EN">
<pgipfauth-conf version="1.0" authoritative="yes" ipf-logging="yes"
                ipf-keep-state="yes" ipf-return-reset="yes">
  <cache enabled="yes" size="256" ttl="600" honor-ip-port="no">
    <search method="stateful"/>
    <adaptive enabled="yes" grow-by="64" critical-fraction="0.10"/>
  </cache>
</pgipfauth-conf>

The version attribute MUST be included in the pgipfauth-conf tag; the tag can also have the following attributes:

  • authoritative: whether or not we're to be authoritative in our passing/blocking of IPs
  • ipf-logging: whether or not we should always OR the FR_LOG option into the returned packet disposition (as well as FR_LOGFIRST for packets that are not blocked)
  • ipf-keep-state: whether or not a PASSed packet should be added to the IPF state table
  • ipf-return-reset: whether or not the blocked packet should have a TCP RESET returned to the remote host

In the configuration above, pgipfauth is instructed to return the following dispositions back to IPF:

  • For PASS, also set the QUICK, KEEP STATE, and LOG flags in the disposition
  • For BLOCK, also set the QUICK, LOG, and RETURN-RST flags in the disposition

The database element provides the connection information pgipfauth needs in order to connect to the database for authorization queries. The following sub-elements are used when connecting to the database:

  • host: Database server hostname — honestly, the database should live on the IPF host itself and be accessed via a file socket, so don't even add this element! It's there in case it's needed for testing, etc.
  • hostaddr: Database server IP address. Same story as for host: you probably don't need this one. It's used if the database host has no DNS name, for example
  • port: Used in conjunction with host or hostaddr to specify a non-standard TCP port on which the database server is listening
  • user: Database user to connect as
  • password: Password for the specified database user. Can be an explicit password or a file path from which pgipfauth should read the password. Use the type="inline" attribute for the former and type="external" for the latter.
  • dbname: Name of the database to connect to

There are two additional sub-elements that configure the nature of the database queries:

  • schema: Use this sub-element if the IP authorization SQL lives in a schema other than public in the database
  • host-group: If the database maintains authorization data for multiple systems, then the value of this sub-element is the “name” that identifies only those authorizations meant for this IPF host

The nature of the authorization SQL and host-groups will be covered in the next chapter.

By default, no caching is done by pgipfauth. The cache is configured by the cache element; this element has the following attributes:

  • enabled: “yes” or “no”, default is “no”
  • size: initial number of cache lines available
  • ttl: the number of seconds a cache line remains valid
  • honor-ip-port: “yes” or “no”, default is “yes”. If “no” then the cache will not store distinct lines for multiple inbound IP ports that are hit from the same remote IP address – in other words, once one IP+inbound port has been PASSed/BLOCKed, the cache will return the same disposition for all inbound ports for that IP.

The honor-ip-port option is available to conserve cache lines in instances where the inbound port is just not important. An example is the application for which pgipfauth was created: a license daemon that listens on a random TCP port needs a large port range to be “open,” but access still needs to be controlled to keep unauthorized users from grabbing licenses. In this case, the connection profile dictates that the TCP port is not integral in authorizing a connection.

The inbound IP port is always passed from pgipfauth to the SQL authorization functions (see next section). You must write your SQL authorization functions in such a way that they treat the port the same way you configure the cache to treat the port!
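For instance, if the cache were configured with honor-ip-port="yes" and each IP should only be authorized for specific inbound ports, a port-aware variant of the authorization function might look like the following sketch (the validIPPorts table is hypothetical, and the BOOLEAN return type is assumed, not mandated by pgipfauth):

```sql
-- Hypothetical port-aware schema: an IP is authorized per inbound port.
CREATE TABLE validIPPorts (
  hostAddress   INET NOT NULL,
  serverPort    INTEGER NOT NULL,
  UNIQUE (hostAddress, serverPort)
);

CREATE OR REPLACE FUNCTION pgipfAuthorize(INET, INTEGER) RETURNS BOOLEAN AS $$
DECLARE
  remoteIP      ALIAS FOR $1;
  inPort        ALIAS FOR $2;
  aRow          RECORD;
BEGIN
  -- Unlike a port-agnostic lookup, the inbound port participates here:
  SELECT * INTO aRow FROM validIPPorts
    WHERE hostAddress = remoteIP AND serverPort = inPort;
  RETURN FOUND;
END;
$$ LANGUAGE plpgsql;
```

Note the alias is named inPort rather than serverPort so that it cannot shadow the serverPort column in the WHERE clause.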

A search sub-element specifies which algorithm should be used when searching the cache for an IP. The algorithm is selected by providing the method attribute, which may have the values:

  • default: use whatever pgipfauth chooses
  • oldest-first: search from the head of the cache (oldest authorizations first)
  • newest-first: search from the tail of the cache (most recent authorizations first)
  • stateful: search from the cache line at which the last search stopped; the search proceeds simultaneously forward and backward through the cache lines from this point

Finally, the adaptive sub-element is used to enable/disable pgipfauth's ability to automatically add more cache lines if the cache is full and the miss ratio reaches some critical value:

  • enabled: “yes” or “no”, default is “no”
  • grow-by: number of cache lines to add
  • critical-fraction: A real number between 0 and 1; once the cache is full and this percentage of cache lookups are cache misses, increase the cache size by the value of the grow-by attribute

The configuration file is stored by default in an etc directory inside the install directory of pgipfauth. An alternate configuration can be passed to the daemon by use of the --config command-line option:

$ pgipfauth --config /etc/pgipfauth.conf

Authorization SQL & Host Groups

IP authorization uses the following SQL statement:

SELECT pgipfAuthorize($1,$2)

A sample SQL table and pgipfAuthorize function are worth a thousand words of prose description:

CREATE TABLE validIPAddresses (
  hostAddress       INET UNIQUE NOT NULL
);

CREATE FUNCTION pgipfAuthorize(INET, INTEGER) RETURNS BOOLEAN AS $$
DECLARE
  aRow          RECORD;
  remoteIP      ALIAS FOR $1;
  serverPort    ALIAS FOR $2;
BEGIN
  SELECT * INTO aRow FROM validIPAddresses WHERE hostAddress = remoteIP;
  IF FOUND THEN
    RETURN TRUE;
  END IF;
  RETURN FALSE;
END;
$$ LANGUAGE plpgsql;

The SQL function is given the IP address not as a string but as a Postgres INET type. The table used for the authorized IPs should store them as this type as well, or the function must type-cast $1 accordingly. Using INET allows the address to be passed to Postgres far more quickly, since a call to inet_ntoa is avoided. The inbound port (second argument) is passed as a Postgres INTEGER type.
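Given those argument types, a quick sanity check of the function from psql might look like this (the address and port values are purely illustrative):

```sql
-- Illustrative only: exercise the authorization function the same way
-- pgipfauth would, with an INET address and an INTEGER inbound port.
PREPARE ipfauth (INET, INTEGER) AS SELECT pgipfAuthorize($1, $2);
EXECUTE ipfauth('10.20.30.40'::INET, 27000);
DEALLOCATE ipfauth;
```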

Host groups should be set up by creating a table named hostGroup. The table must at least include the following fields:

CREATE TABLE hostGroup (
  hgId          SERIAL PRIMARY KEY,
  name          TEXT UNIQUE NOT NULL
);

The hgId must be an integral field, and the auto-incrementing SERIAL works nicely. The name field must be a textual type: a CHARACTER VARYING(32) UNIQUE NOT NULL would work equally well. To use host groups, a pgipfAuthorizeWithHostGroup function must be created and the validIPAddresses table might be redeclared as

CREATE TABLE validIPAddresses (
  hostGroup         INTEGER REFERENCES hostGroup(hgId) ON DELETE CASCADE,
  hostAddress       INET UNIQUE NOT NULL
);

CREATE FUNCTION pgipfAuthorizeWithHostGroup(INTEGER, INET, INTEGER) RETURNS BOOLEAN AS $$
DECLARE
  aRow          RECORD;
  hgId          ALIAS FOR $1;
  remoteIP      ALIAS FOR $2;
  serverPort    ALIAS FOR $3;
BEGIN
  SELECT * INTO aRow FROM validIPAddresses WHERE hostAddress = remoteIP AND hostGroup = hgId;
  IF FOUND THEN
    RETURN TRUE;
  END IF;
  RETURN FALSE;
END;
$$ LANGUAGE plpgsql;

The first argument to pgipfAuthorizeWithHostGroup is the host group integral identifier, passed as an SQL INTEGER type.
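For illustration, a host group might be registered like so (the group name here is made up; it would correspond to the host-group value in the configuration file):

```sql
-- Illustrative: register a host group for this IPF host.
INSERT INTO hostGroup (name) VALUES ('license-server');

-- The hgId generated here is the integral identifier that winds up being
-- passed as the first argument to pgipfAuthorizeWithHostGroup:
SELECT hgId FROM hostGroup WHERE name = 'license-server';
```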

The reason an SQL function is used for authorization is quite simply the fact that it affords the greatest amount of flexibility in how the actual lookup should be accomplished. For example, if we wanted to generate some database usage statistics for pgipfauth we could make the following modifications within the database:

CREATE TABLE lookupLog (
  hostAddress       INET NOT NULL,
  hostGroup         INTEGER REFERENCES hostGroup(hgId) ON DELETE CASCADE,
  allow             BOOLEAN,
  lookupWhen        TIMESTAMP WITH TIME ZONE DEFAULT now()
);

CREATE OR REPLACE FUNCTION pgipfAuthorizeWithHostGroup(INTEGER, INET, INTEGER) RETURNS BOOLEAN AS $$
DECLARE
  aRow          RECORD;
  hgId          ALIAS FOR $1;
  remoteIP      ALIAS FOR $2;
  serverPort    ALIAS FOR $3;
BEGIN
  SELECT * INTO aRow FROM validIPAddresses WHERE hostAddress = remoteIP AND hostGroup = hgId;
  IF FOUND THEN
    INSERT INTO lookupLog (hostAddress,hostGroup,allow) VALUES (remoteIP,hgId,TRUE);
    RETURN TRUE;
  END IF;
  INSERT INTO lookupLog (hostAddress,hostGroup,allow) VALUES (remoteIP,hgId,FALSE);
  RETURN FALSE;
END;
$$ LANGUAGE plpgsql;

Each time the database is queried for an IP authorization, the IP, host group, and result of the query are added to the lookupLog table with a timestamp for the request. This data can then be mined for database lookups per minute, etc.
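As a sketch of what that mining might look like, a query along these lines would tally lookups per minute, split by disposition:

```sql
-- Illustrative statistics query against the lookupLog table:
-- lookups per minute, broken out by PASS (TRUE) vs. BLOCK (FALSE).
SELECT date_trunc('minute', lookupWhen) AS minute,
       allow,
       count(*) AS lookups
  FROM lookupLog
 GROUP BY minute, allow
 ORDER BY minute;
```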

Cache Coherence

It is quite possible that the following situation could arise:

  1. A user attempts to connect to the IPF host and is blocked since his/her IP is not in the authorization table
  2. User adds his/her IP address to the authorization table
  3. User attempts to connect again, and is still blocked; user must wait at most the cache TTL for his/her traffic to pass

There are two solutions to this problem. The first is to drop the cache TTL to a relatively short time (say, 30 seconds); of course, this decreases the usefulness of the cache, since more database lookups will be performed than with a longer TTL. The second is to provide for on-the-fly eviction of IPs from the cache. There are two cache eviction methods available in pgipfauth:

  • Evict all lines from the cache
  • Evict lines matching a specific IP address

Purging the cache in its entirety is done by sending the USR2 signal to the running instance of pgipfauth. Unless you used the --quiet command line option, you'll see the signal processed in the program's stdout:

Fri Feb 27 14:04:41 2009 [6857] : [PACKET] =>   PASS  (00020512)  [CACHE HIT]
Fri Feb 27 14:04:56 2009 [6857] : [PACKET] =>  BLOCK (00001111)
Fri Feb 27 14:04:59 2009 [6857] : [PACKET] =>  BLOCK (00001111)  [CACHE HIT]
Fri Feb 27 14:05:00 2009 [6857] : [PACKET] =>  BLOCK (00001111)  [CACHE HIT]
Fri Feb 27 14:05:32 2009 [6857] : [NOTICE] the internal authorization cache has been purged
Fri Feb 27 14:05:36 2009 [6857] : [PACKET] =>  BLOCK (00001111)
Fri Feb 27 14:05:38 2009 [6857] : [PACKET] =>   PASS  (00020512)

To evict specific IP addresses, the addresses must be written (in textual form, separated by whitespace and/or newlines) to a FIFO which pgipfauth opens. By default, this FIFO is available in the etc directory inside the install directory of pgipfauth and is named cache-invalidate. The FIFO must at least be readable by the user under which pgipfauth is running:

prw-rw----   1 root     staff          0 Feb 27 13:51 cache-invalidate

If I wished to invalidate the two IP addresses observed in the stdout output above:

% echo "" >> cache-invalidate

Watching the stdout for pgipfauth:

Fri Feb 27 14:23:19 2009 [6857] : [NOTICE] invalidating in cache
Fri Feb 27 14:23:19 2009 [6857] : [NOTICE] invalidating in cache
Fri Feb 27 14:23:27 2009 [6857] : [PACKET] =>   PASS  (00020512)
Fri Feb 27 14:23:29 2009 [6857] : [PACKET] =>   PASS  (00020512)  [CACHE HIT]

In short, the program written to add or remove IP addresses from the authorization table (through a web interface, etc.) should also utilize the selective eviction FIFO to keep the cache in sync with those database changes. For example, a Postgres trigger function could be written such that any changes to the table are accompanied by the function's writing to the invalidation FIFO:

#include "postgres.h"
#include "executor/spi.h"       /* this is what you need to work with SPI */
#include "commands/trigger.h"   /* ... and triggers */

PG_FUNCTION_INFO_V1(ipfcacheinvalidate);

extern Datum ipfcacheinvalidate(PG_FUNCTION_ARGS);

Datum
ipfcacheinvalidate(PG_FUNCTION_ARGS)
{
  TriggerData   *triggerData = (TriggerData *) fcinfo->context;
  TupleDesc     tupleDesc;
  HeapTuple     resultTuple;
  int           fnum;

  /* make sure it's called as a trigger */
  if (!CALLED_AS_TRIGGER(fcinfo))
    elog(ERROR, "ipfcacheinvalidate: not called by trigger manager");

  tupleDesc = triggerData->tg_relation->rd_att;
  resultTuple = triggerData->tg_trigtuple;

  /* Find the appropriate field: */
  fnum = SPI_fnumber(tupleDesc, "hostaddress");
  if ( fnum >= 0 ) {
    char*       ipAddress = SPI_getvalue(resultTuple, tupleDesc, fnum);

    if ( ipAddress && strlen(ipAddress) ) {
      FILE*     cacheFIFO = fopen("/usr/local/pgipfauth/current/etc/cache-invalidate", "w");

      if ( cacheFIFO ) {
        /* On UPDATE, let's do BOTH ipAddresses: */
        if ( TRIGGER_FIRED_BY_UPDATE(triggerData->tg_event) ) {
          char* newIPAddress = SPI_getvalue(triggerData->tg_newtuple, tupleDesc, fnum);

          fprintf(cacheFIFO, "%s %s\n", ipAddress, (newIPAddress && strlen(newIPAddress) ? newIPAddress : "") );
        } else {
          fprintf(cacheFIFO, "%s\n", ipAddress);
        }
        fclose(cacheFIFO);
      } else {
        elog(INFO, "ipfcacheinvalidate: unable to write to cache FIFO");
      }
    }
  }

  /* All done: */
  return PointerGetDatum(resultTuple);
}

This chunk of code is compiled and linked as a shared object which can be dynamically loaded into Postgres. Within Postgres:

CREATE FUNCTION ipfcacheinvalidate() RETURNS TRIGGER AS '/usr/local/pgipfauth/0.1/src/' LANGUAGE C;
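The function then has to be attached to the table as a row-level trigger; something along these lines should do it (the trigger name is illustrative):

```sql
-- Fire ipfcacheinvalidate() for every row inserted, updated, or deleted:
CREATE TRIGGER validIPAddressesInvalidate
  AFTER INSERT OR UPDATE OR DELETE ON validIPAddresses
  FOR EACH ROW EXECUTE PROCEDURE ipfcacheinvalidate();
```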

Now, when a row is added, deleted, or modified in the validIPAddresses table, Postgres will automagically write the IP address (or IP addresses, in the case of an UPDATE) to the cache invalidation FIFO! Does it work, though?

ipfauth=# delete from validIPAddress;
ipfauth=# insert into validIPAddress (hostAddress) values ('');
ipfauth=# insert into validIPAddress (hostAddress) values ('');
ipfauth=# select * from validIPAddress;
(2 rows)
ipfauth=# update validIPAddress set hostAddress = '' where hostAddress = '';

with the following pgipfauth output:

Fri Feb 27 15:12:35 2009 [6857] : [NOTICE] invalidating in cache
Fri Feb 27 15:12:35 2009 [6857] : [NOTICE] invalidating in cache
Fri Feb 27 15:12:35 2009 [6857] : [NOTICE] invalidating in cache
Fri Feb 27 15:12:42 2009 [6857] : [NOTICE] invalidating in cache
Fri Feb 27 15:16:22 2009 [6857] : [NOTICE] invalidating in cache
Fri Feb 27 15:23:42 2009 [6857] : [NOTICE] invalidating in cache
Fri Feb 27 15:23:42 2009 [6857] : [NOTICE] invalidating in cache


Written by Jeff Frey on Tuesday March 15, 2016