I'm running a Python script which drip-feeds data slowly into a PostgreSQL database with the PostGIS extension. I'm using autocommit and committing one row at a time. Horrendously slow, but I need to do it this way for a good reason :)
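For concreteness, a minimal sketch of such a drip-feed loop, assuming psycopg2 and the `randomised3` table described in the answer below (the DSN, EWKT rows, and delay are placeholders, not the original script):

```python
import time

def build_insert(table):
    # Parameterised INSERT; the geometry arrives as EWKT text and PostGIS parses it
    return "INSERT INTO {} (way) VALUES (ST_GeomFromEWKT(%s))".format(table)

def drip_feed(rows, dsn, table="randomised3", delay=1.0):
    import psycopg2  # imported lazily so the sketch parses without the driver installed
    conn = psycopg2.connect(dsn)
    conn.autocommit = True            # every execute() commits immediately, one row per transaction
    try:
        with conn.cursor() as cur:
            for ewkt in rows:         # e.g. "SRID=27571;POINT(600000 1200000)"
                cur.execute(build_insert(table), (ewkt,))
                time.sleep(delay)     # deliberate slow drip, one row at a time
    finally:
        conn.close()
```

With autocommit enabled, each inserted row is visible to other clients (including QGIS) as soon as `execute()` returns.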
Once I add a postgres layer, QGIS seems to poll the database every so often and the number of features increases. This is great, and gives me visual feedback that my script is working.
If I stop my script, TRUNCATE the table in pgAdmin III, and restart my script, QGIS correctly clears the display (it notices that there are no features). However, it doesn't seem to track subsequent changes to the database: the feature count sticks at the pre-truncate row count rather than starting again from 0. I have to add the Postgres layer again, which can take a while.
Is this a bug, a feature, or am I doing something wrong?
(Environment: QGIS 2.12.1 Pisa, PostgreSQL 9.3.10, PostGIS 2.1.2, Ubuntu Trusty Tahr, 32-bit)
It seems that once the feature count exceeds the pre-truncate row count, QGIS starts tracking changes again.
I found a workaround: apply a filter to the layer so that it fetches all features.
In my case, the table contained only a geometry, with no id field, so I recreated it with a bigserial field called unid:
CREATE TABLE randomised3 ( unid BIGSERIAL PRIMARY KEY, way geometry(Geometry,27571) ) WITH ( OIDS=FALSE );
Next, I added a filter on the layer; any filter that matches all rows will do.
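The same filter can also be applied from the QGIS Python console. This is a sketch under assumptions: the exact filter expression is hypothetical (any trivially true condition on the new unid column should work), and `layer` is the loaded PostGIS layer:

```python
# Sketch for the QGIS Python console, where the loaded layer object is at hand.
# The filter expression is an assumption: any condition matching every row works.
ALWAYS_TRUE_FILTER = '"unid" IS NOT NULL'  # unid is the bigserial PK added above

def apply_refresh_filter(layer, subset=ALWAYS_TRUE_FILTER):
    # setSubsetString() pushes the filter down to the PostGIS provider,
    # which makes QGIS re-query instead of trusting its cached row estimate
    if not layer.setSubsetString(subset):
        raise RuntimeError("provider rejected the filter: " + subset)
    layer.updateExtents()   # recompute the layer extent from live data
    layer.triggerRepaint()  # redraw the canvas with the fresh feature set
```

Calling `apply_refresh_filter(iface.activeLayer())` from the console would apply the filter to the currently selected layer.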
Now the feature count resets to 0 when I refresh after restarting with a truncated table. Clicking 'Test' in the filter dialog seems to help too.
It seems that without a filter in place, QGIS uses a cached estimate of the row count; adding the filter forces it to recalculate.