PostgreSQL security releases out

PostgreSQL has just released new updated versions, which include security fixes. They also contain other critical bug fixes, so even if you are not directly affected by the security issues, plan an upgrade as soon as possible.

One of the security issues that have been patched deals with NUL prefixes in SSL certificate names, a vulnerability that is basically the same one that has surfaced in a lot of different products this autumn, for example in the Mozilla suite of products. There isn't really enough space in the release notes to properly discuss the implications this has in a PostgreSQL environment, so I'll try to elaborate a bit here - given that I wrote the fix for it.

First of all, a quick explanation of what the problem is. PostgreSQL uses OpenSSL to deal with certificates. Prior to the fixed version, we just asked OpenSSL for the name on the certificate, got back a string, and used it. Now, if you know C coding, you know that a string is terminated by a NUL character. The bug in PostgreSQL is that we did not compare the length OpenSSL reported for the field with the length of the string we got back. This means that somebody could embed a NUL byte in the certificate name, and we would incorrectly parse and validate only the part that was before the NUL. For example, if someone managed to get a certificate with the common name set to "postgresql.bank.com\0attacker.com", PostgreSQL would match this certificate against "postgresql.bank.com" (or "*.bank.com"), which is not correct. With the fix, such a certificate is rejected completely.
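The truncation effect is easy to demonstrate from a shell. This is a toy illustration of C-style string handling, not PostgreSQL's actual code:

```shell
# A common-name field with an embedded NUL byte after the "trusted" part.
# Counting the raw bytes shows the real length of the field:
printf 'postgresql.bank.com\0attacker.com' | wc -c
# the field is really 32 bytes long...

# ...but strlen()-style code stops at the NUL and sees only the prefix.
# Converting the NUL to a newline and taking the first line simulates that:
printf 'postgresql.bank.com\0attacker.com' | tr '\0' '\n' | head -n 1
# prints "postgresql.bank.com"
```

The length comparison in the fix catches exactly this mismatch: 32 bytes reported by OpenSSL versus 19 bytes of visible string.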

It is important to know that in order to make use of this vulnerability, the attacker needs to convince a trusted CA to sign such a certificate - one that is quite obviously malicious. If the attacker cannot get the CA to hand it out, PostgreSQL will reject the certificate before we even get this far. It is arguably also a bug in the CA handling (technical or procedural) to even hand out such a certificate, and that bug needs to be exploited before the one in PostgreSQL can be.

In the vast majority of cases, if not all, where PostgreSQL is deployed and actually using certificate validation, the certificates will be handed out by a trusted local CA, in which case exploiting this vulnerability becomes much harder. This scenario is significantly different from the one this bug was originally discovered in, which is the web browser. In the web browser case, the browser already trusts a large number of external CAs by default. PostgreSQL trusts no CAs by default (unless you are doing a Debian install, in which case some default CAs are put in there - another reason why that is a really bad idea from a security perspective). PostgreSQL also does not prompt the user with a potentially incorrect name field on the certificate asking if this is ok or not - it will just reject the certificate if it doesn't match (correctly or incorrectly), closing another attack vector. So the bug is really only significant if you can't trust your CA - but the whole point of the CA is that it is a trusted entity...

PostgreSQL 8.4 is the first version to properly support certificate name validation, and also the first version to support client certificate authentication, both of which are vulnerable to this bug, neither of which is enabled by default. However, previous versions are also indirectly vulnerable, because they exposed the CN field of the certificate to the application for further validation. So you could have a stored procedure checking the client certificate, or just the libpq application checking the server certificate, even in earlier versions. And given the API structure, there was no way for these outside processes to know if they were being fooled or not. So if you are using an application that makes use of this on previous versions of PostgreSQL, you still need the patch - there is no way to fix the bug from the application.

The summary of this post is that this vulnerability is a lot less serious in PostgreSQL than in many other systems that had the issue. That doesn't mean it isn't there, or that it shouldn't be (and hasn't been) fixed. But it means that this vulnerability alone is likely not reason enough to rush an upgrade on your production systems - most likely you're not affected by it. On the PostgreSQL security page it is tagged with classification A, which is the highest. This is more an indication that the system we're using for classification really doesn't take these things into consideration - something we will look into for the future.

Transferring slonified databases without slony

If you back up your Slony database with pg_dump, and try to reload it on a different machine (say transfer from a production system to a testing or benchmarking system), you've probably come across this problem more than once.

The dump will include all the Slony objects - functions and triggers - that you simply cannot reload on a different machine, unless that machine also has Slony installed, and in the same way. A common workaround is to just do the restore and ignore the errors - but if your database has lots of objects in it, that makes it very hard to spot actual errors - I always prefer to run with -1 and/or -e.

The first step to fix the problem is to exclude the Slony schema when dumping or restoring. That gets rid of most of the problem, but not all. There are still triggers in the main schemas that reference functions in the Slony schema, and now they will fail. Luckily, pg_restore has functionality to generate a table of contents from a dump, and then you can edit this table of contents file to exclude the triggers specifically. If your database isn't too complicated, you can easily script this.

Which brings me to the point of this post. It's actually very simple to script this, as long as the name of your Slony schema (including the leading underscore) doesn't conflict with other objects in your database. This is something that I know a lot of people keep doing manually (judging by the number of questions I hear about it when I say you should always use -e when restoring, for example). So here is a small semi-generic script that will do this for you - sed to the rescue. It simply comments out all the references to your Slony schema.
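Here is roughly what such a script can look like - a sketch of the approach, not the exact script from the post, assuming a custom-format dump and a Slony cluster named "mycluster" (so the schema is "_mycluster"); adjust the names to your setup:

```shell
# Comment out all TOC entries that reference the Slony schema
# "_mycluster", then restore using the edited table of contents.

# 1. Dump in custom format (required for pg_restore TOC editing):
pg_dump -Fc -f mydb.dump mydb

# 2. Build a table of contents, prefixing every line that mentions the
#    Slony schema with ";" (the TOC comment character) - sed to the rescue:
pg_restore -l mydb.dump | sed 's/^\(.*_mycluster.*\)$/;\1/' > mydb.toc

# 3. Restore with the edited TOC; -1 wraps it in a single transaction,
#    -e stops at the first real error:
pg_restore -L mydb.toc -1 -e -d newdb mydb.dump
```

The sed expression is deliberately crude - it comments out any TOC line containing the schema name, which is why the schema name must not collide with anything else in your database.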


Feedback from pgday.eu

I've finally had the time to summarize the feedback we received from pgday.eu.

We received feedback from about 35 people, which is obviously way less than we were hoping for. Ideas for how to improve this for next time are very welcome! This also means that the figures we have are not very exact - but they should give a general hint about what our attendees thought.

I just sent out the individual session feedback summaries to each individual speaker. These will not be published - it's of course fine for each speaker to publish his own feedback if he wants to, but the conference organizers will not publish the detailed per-session data.

The statistics we do have show that most of our speakers did a very good job, and that the attendees were in general very happy with the sessions. We have also received a fairly large amount of comments - both to the conference and the speakers - which will help us improve specific points for next year!

I'll show a couple of graphs here with the total across all sessions and speakers. In these graphs, 5 is the highest score and 1 is the lowest.

The attendees also seemed to be very happy with our speakers, which is something I'm very happy to hear about. It's also good to see that almost nobody felt the speakers didn't know very well what they were talking about - always a worry with a conference that has so many experienced community people attending.

Actually trying to figure out which speaker is best using this data is very difficult. But here's a list of the top speakers based on speaker quality, who had more than 5 ratings on their talks. The list includes all speakers with an average score of at least 3.5. There are a lot more hovering around that line, but there has to be a cutoff somewhere... Again note that there are still not that many ratings to consider, so values are pretty unstable. I've included the standard deviation as well to make sure this is visible.

Place | Speaker            | Score | Stddev | Num
------+--------------------+-------+--------+----
    1 | Gavin M. Roy       |  4.9  |  0.5   | 18
    2 | Guillaume Lelarge  |  4.9  |  0.4   |  7
    3 | Robert Hodges      |  4.8  |  0.4   | 13
    4 | Magnus Hagander    |  4.8  |  0.4   | 20
    5 | Jean-Paul Argudo   |  4.8  |  0.5   |  8
    6 | Joshua D. Drake    |  4.6  |  0.7   |  9
    7 | Simon Riggs        |  4.6  |  0.6   | 17
    8 | Dimitri Fontaine   |  4.5  |  0.5   | 14
    9 | Greg Stark         |  4.3  |  0.5   |  8
   10 | Vincent Moreau     |  4.1  |  0.6   |  8
   11 | Mark Cave-Ayland   |  4.0  |  0.6   | 11
   12 | David Fetter       |  3.9  |  1.1   |  9
   13 | Gabriele Bartolini |  3.7  |  1.0   | 15
   14 | Heikki Linnakangas |  3.6  |  0.7   |  9

All of these are clearly very good numbers.

So once again, a big thanks to our speakers for their good work. And also a very big thanks to those who did fill out the session feedback forms - your input is very valuable!

Update: Yes, these graphs were made with a python script calling the Google Charts API. Does anybody know of a native python library that will generate good-looking charts without having to call a remote web service?

My pgday.eu pictures are up

I've finally gotten around to uploading my pictures from PGDay.EU 2009 to my smugmug gallery.

Clearly the conference was tiring, and we all needed a rest... (yes, this was during JD's talk)

And as the picture says, don't forget to submit your feedback - the site is still open for that!

PGDay.EU 2009 - it's a wrap

I'm currently sitting on my flight home from Paris CDG, after a couple of very hectic days. It's going to be a couple of days (which in reality is going to drag out into a couple of weeks due to other work engagements and then travel for the JPUG conference) before it'll be possible to completely evaluate the conference and things around it, but here's what I have so far.

I'm going to leave the evaluation of the talks themselves to somebody else. There were many others of the "regular PostgreSQL bloggers" present at the conference and we've already seen some posts around it. Hopefully there will be more, both in French and English. If you are blogging about this and your blog isn't already up on Planet PostgreSQL, please consider adding it so that the community at large gets notification of your posts.


PGDay.EU day one

So, day one is almost finished, when it comes to the conference itself - and then it's off to the EnterpriseDB evening party. A quick summary of the day: awesome.

Going into a little more detail, the day started with us actually getting up painfully early. Got to ParisTech in the morning, right as they opened up our room. I have to say the facilities at ParisTech have been great - the rooms are in great shape, perfect size, and all the A/V equipment is working perfectly. (Yes, there is a slight flicker on one of the projectors, but it's not bad).

I did the intro section with Jean-Paul, and there's really not much to say about that. We (well, I) forgot to add a slide about the feedback - oops. I bet that's one reason we don't have as much feedback entered yet as we'd like.

Simon took over with a keynote, which was very good. Simon is a very good speaker, and he found a good balance between technical and non-technical content. It was a nice way to kick off the conference. At this time, somewhere between 125 and 150 people had shown up, which is definitely not bad.

I half-followed the English track after that, with talks about PostGIS and Data Warehousing, which were both very good. Also spent some time on the organizational side, which has really worked pretty smoothly. We've had a few minor issues, but they were all solved quickly.

Lunch was fantastic. Many thanks to our great caterers who served us an amazing lunch, with good organization, and more than enough food. Couldn't be better!

Watched Gavin's talk on scalability after lunch, which is always a good one. After that I had my own talk, which went at least OK, though I did finish a bit early. Then it was off to a pgAdmin developer meeting, which is where I am now.

So I'd better go now, so I don't miss out on the activities.

80 pages about PostgreSQL!

I was just told that the latest edition of GNU/Linux Magazine in France is dedicated to PostgreSQL. A full 80 pages about your favorite RDBMS! I'm told all the articles are written by our own Guillaume Lelarge. Well done, Guillaume!

Unfortunately for those of us who don't speak the language, the whole thing is in French. But if you do speak French, it's probably well worth checking out. There's a preview available online, and the magazine should be available in stores. Guillaume has also told me the contents will be available for download later on, but not for a few months.

PGDay.EU open for business

Yesterday we announced the schedule for PGDay.EU 2009. The Friday will have one track in English and one in French, and the Saturday will have two tracks in English and one in French. There are a lot of good talks scheduled - I wish I could trust my French enough to go see a couple of those as well...

We are also now open for registration. The cost of the conference starts at €60 for a full-price two-day entry, with discounts for single-day entry and for students. See the registration page for details. While we expect to be able to accommodate everybody interested, if we cannot, those who register first will obviously be the ones we can take. We also prefer that you register as soon as you know you're coming, since that makes our planning much easier.

Testing PostgreSQL patches on Windows using Amazon EC2

Many people who develop patches for PostgreSQL don't have access to Windows machines to test their patches on - particularly not with complete build environments for the MSVC build on them. The net result is that a fair number of patches are never tested on Windows until after they are committed. For most patches this doesn't actually matter, since they are changes that don't deal with anything platform specific beyond what is already taken care of by our build system. But Windows is not POSIX, so the platform differences are generally larger than between the different Unix platforms PostgreSQL builds on, and in MSVC the build system is completely different. In a non-trivial number of cases this ends up breaking the buildfarm until somebody with access to a Windows build environment can fix it. Luckily, we have a number of Windows machines running in the buildfarm, so we do catch these things long before release.

There are a couple of reasons why it's not easy for developers to have a Windows machine ready for testing, even a virtual one. For one, it requires a Windows license. Here the same availability problem exists for other proprietary platforms such as Mac OS X, but it's different from all the free Linux/Unix platforms available. Second, setting up the build environment is quite complex - not at all as easy as on the most common Linux platforms, for example. This second point is particularly difficult for those not used to Windows.

A third reason I noticed myself was that running the builds, and the regression tests, is very, very slow - at least on my laptop using VirtualBox. It works, but it takes ages. For this reason, a while back I started investigating using Amazon EC2 for my Windows builds. It turns out this was a very good solution to my problem - the time for a complete rebuild on a typical EC2 instance is around 7 minutes, whereas it can easily take over 45 minutes on my laptop.
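For reference, the build-and-test cycle being timed here is the standard MSVC one, run from src\tools\msvc in the PostgreSQL source tree. These are the commands documented for the MSVC build in general - the environment just has to provide Visual Studio, Perl, and the other dependencies:

```shell
# Run from src\tools\msvc in a PostgreSQL source tree, inside the
# Windows build environment:
perl build.pl          # full build of the server and client programs
perl vcregress.pl check   # run the main regression test suite
```

On the EC2 instance this whole cycle is what finishes in roughly 7 minutes; the same commands are what take the better part of an hour in a laptop VM.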

Now, EC2 provides a pretty nice way to create what's called an AMI (Amazon Machine Image) that can be shared. Using these facilities, I have created an AMI that contains Windows plus a complete PostgreSQL build environment. Since this AMI has been made public, anybody who wants to can boot up an instance of it to run tests. Each of these instances is completely independent of the others - the AMI only provides a common starting point.

I usually run these on a medium-sized Amazon instance. The cost for such an instance is currently $0.30 per hour that the instance is running. The big advantage here is that this includes the Windows license, which makes it a very cost-effective way to do quick builds and tests on Windows.

Read on for a full step-by-step instruction on how to get started with this AMI (screenshot overload warning).


Some planet updates

I unexpectedly found myself with a day at home with nothing but boring chores to do, so I figured a good way to get out of doing those would be to work on the backlog of things I've been planning to do for planet.postgresql.org. I realize that my blog is turning into release-notes-for-planet lately, since I haven't had much time to blog about other things. So I may as well confess right away that one reason to post is to make sure the updates I deployed actually work...

This round of updates has centered on the twitter integration:

  • Since it turned out that a lot of people didn't actually know there was a twitter integration for planet, it is now linked clearly from the planet frontpage.
  • The twitter integration scripts (originally by Selena) have been rewritten to work directly with our database of posts instead of pulling the just-generated RSS feed back in, and also to keep the status of posts in the database. With luck, this will fix the very rare case where posts sometimes got dropped, and it made the code a lot simpler.
  • The posts made by the system will refer to the twitter username of the blog owner, if it's registered. For your own blogs, you can see what username is registered by going to the registration site. We've added some of the twitter usernames we know about - if yours is not listed, please let us know at planet@postgresql.org what twitter username to connect with what blog url.
  • The system has been prepared to pull out some usage statistics, but nothing is actually done with that yet.
