OSM - OpenStreetMap XML and PBF

(GDAL/OGR >= 1.10.0)

This driver reads OpenStreetMap files, in .osm (XML based) and .pbf (optimized binary) formats.

The driver is available if GDAL is built with SQLite support and, for .osm XML files, with Expat support.

Filenames must end with the .osm or .pbf extension.

The driver will categorize features into 5 layers:

- points : nodes that have significant tags attached
- lines : ways that are recognized as non-area
- multilinestrings : relations that form a multilinestring (type=multilinestring or type=route)
- multipolygons : relations that form a multipolygon (type=multipolygon or type=boundary), and closed ways
- other_relations : relations that are not in the above two layers
In the data folder of the GDAL distribution, you can find a osmconf.ini file that can be customized to fit your needs. You can also define an alternate path with the OSM_CONFIG_FILE configuration option.

The customization is essentially which OSM attributes and keys should be translated into OGR layer fields.

Starting with GDAL 2.0, fields can be computed with SQL expressions (evaluated by the SQLite engine) from other fields/tags, for example to compute the z_order attribute.
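A minimal sketch of what such a customization can look like in osmconf.ini. The attribute names mirror the default file shipped with GDAL, but the SQL expression shown here is a simplified illustration, not the full z_order expression of the default configuration:

```ini
[lines]
osm_id=yes
; OSM keys reported as dedicated OGR fields
attributes=name,highway,waterway,barrier
; declare a computed attribute and its type (GDAL >= 2.0)
computed_attributes=z_order
z_order_type=Integer
; SQL evaluated by the SQLite engine; [field] refers to other fields/tags
z_order_sql="SELECT (CASE [highway] WHEN 'motorway' THEN 9 WHEN 'trunk' THEN 8 ELSE 0 END) + (CASE WHEN [bridge] IN ('yes','true','1') THEN 10 ELSE 0 END)"
```

A customized copy can then be selected at run time with the OSM_CONFIG_FILE configuration option, e.g. OSM_CONFIG_FILE=/path/to/my_osmconf.ini.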

"other_tags" field

When keys are not strictly identified in the osmconf.ini file, the key/value pairs are appended to an "other_tags" field, with a syntax compatible with the PostgreSQL HSTORE type. See the COLUMN_TYPES layer creation option of the PG driver.

For example:

ogr2ogr -f PostgreSQL "PG:dbname=osm" test.pbf -lco COLUMN_TYPES=other_tags=hstore

"all_tags" field

(OGR >= 1.11)

Similar to "other_tags", except that it contains both keys specifically identified to be reported as dedicated fields, as well as other keys.

"all_tags" is disabled by default, and when enabled, it is exclusive with "other_tags".

Internal working and performance tweaking

The driver will use an internal SQLite database to resolve geometries. If that database remains under 100 MB, it will reside in RAM. If it grows above that threshold, it will be written to a temporary file on disk. By default, this file is written in the current directory, unless you define the CPL_TMPDIR configuration option. The 100 MB default threshold can be adjusted with the OSM_MAX_TMPFILE_SIZE configuration option (value in MB).
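Both knobs are ordinary GDAL configuration options and can be set as environment variables; a sketch with illustrative values and paths (out.sqlite and input.pbf are placeholders):

```shell
# Allow the internal geometry database to grow to 1 GB in RAM before
# spilling to disk, and direct any spill file to /tmp
# (values and paths are illustrative)
OSM_MAX_TMPFILE_SIZE=1000 CPL_TMPDIR=/tmp \
  ogr2ogr -f SQLite out.sqlite input.pbf
```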

For indexing of nodes, a custom mechanism not relying on SQLite is used by default (indexing of ways to resolve relations still relies on SQLite). It can speed up operations significantly. However, in some situations (non-increasing node ids, or node ids not in the expected range), it might not work, and the driver will output an error message suggesting to relaunch with the OSM_USE_CUSTOM_INDEXING configuration option set to NO.

When custom indexing is used (the default case), the OSM_COMPRESS_NODES configuration option can be set to YES (the default is NO). This option can be turned on to improve performance when I/O access is the limiting factor (typically with rotational disks). It is mostly effective for country-sized OSM extracts, where the compression rate can reach a factor of 3 or 4 and helps keep the node database at a size that fits in the OS I/O caches. For a whole-planet file, the effect of this option is smaller. This option consumes an additional 60 MB of RAM.
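A sketch of a typical invocation for a country-sized extract (the filename is illustrative):

```shell
# Enable node compression in the custom indexing mechanism;
# most useful on rotational disks with country-sized extracts
OSM_COMPRESS_NODES=YES ogr2ogr -f SQLite france.sqlite france-latest.osm.pbf
```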

Interleaved reading

Due to the nature of OSM files and how the driver works internally, the default per-layer reading mode might not work correctly, because too many features accumulate in the layers before being consumed by the user application.

Starting with GDAL 2.2, applications should use the GDALDataset::GetNextFeature() API to iterate over features in the order they are produced.
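A sketch of that GDAL >= 2.2 pattern (dataset opening and error handling elided; assumes a dataset opened with GDALOpenEx):

```cpp
#include "gdal_priv.h"
#include "ogrsf_frmts.h"

void ReadAllFeatures(GDALDataset* poDS)
{
    OGRLayer* poBelongingLayer = nullptr;
    OGRFeature* poFeature;
    // GetNextFeature() returns features in the order the driver produces
    // them, switching layers as needed; poBelongingLayer is set to the
    // layer the current feature belongs to.
    while( (poFeature = poDS->GetNextFeature(&poBelongingLayer,
                                             nullptr, nullptr, nullptr)) != nullptr )
    {
        // ... process poFeature, e.g. dispatch on poBelongingLayer->GetName() ...
        OGRFeature::DestroyFeature(poFeature);
    }
}
```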

For earlier versions and large files, applications should set the OGR_INTERLEAVED_READING=YES configuration option to turn on a special reading mode where the following reading pattern must be used:

    bool bHasLayersNonEmpty;
    do
    {
        bHasLayersNonEmpty = false;

        for( int iLayer = 0; iLayer < poDS->GetLayerCount(); iLayer++ )
        {
            OGRLayer *poLayer = poDS->GetLayer(iLayer);

            OGRFeature* poFeature;
            while( (poFeature = poLayer->GetNextFeature()) != NULL )
            {
                bHasLayersNonEmpty = true;
                OGRFeature::DestroyFeature(poFeature);
            }
        }
    }
    while( bHasLayersNonEmpty );

Note: the ogr2ogr application has been modified to use this OGR_INTERLEAVED_READING mode automatically, without any particular user action.

Spatial filtering

Due to the way .osm and .pbf files are structured and parsed, for efficiency reasons, a spatial filter applied on the points layer will also affect other layers. This may result in lines or polygons with missing vertices.

To mitigate this, one possibility is to use a larger spatial filter with some buffer for the points layer, and then post-process the output to apply the desired filter. This will not work, however, if a polygon has vertices very far away from the area of interest; in that case a full conversion of the file to another format, followed by filtering of the resulting lines or polygons layers, is needed.
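A sketch of the buffered-filter approach with ogr2ogr (the coordinates are an illustrative bounding box, not tied to any real extract):

```shell
# Step 1: convert with a spatial filter enlarged by a buffer
# (-spat takes xmin ymin xmax ymax)
ogr2ogr -f GPKG buffered.gpkg input.pbf -spat 2.0 48.5 2.7 49.2

# Step 2: clip the result down to the actual area of interest
ogr2ogr -f GPKG final.gpkg buffered.gpkg -clipsrc 2.2 48.7 2.5 49.0
```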

Reading .osm.bz2 files and/or online files

.osm.bz2 files are not natively recognized; however, you can process them (on Unix) with the following command:
bzcat my.osm.bz2 | ogr2ogr -f SQLite my.sqlite /vsistdin/
You can convert a .osm or .pbf file without downloading it first:
wget -O - http://www.example.com/some.pbf | ogr2ogr -f SQLite my.sqlite /vsistdin/

or, using the /vsicurl_streaming/ virtual file system:
ogr2ogr -f SQLite my.sqlite /vsicurl_streaming/http://www.example.com/some.pbf -progress

And to combine the above steps (remote .osm.bz2 file):
wget -O - http://www.example.com/some.osm.bz2 | bzcat | ogr2ogr -f SQLite my.sqlite /vsistdin/
