
Database or schema

A couple of weeks back I helped a guy out with several things on a fairly large PostgreSQL installation. One of them prompted me to write this post. A fairly common question is "should I use databases or schemas to isolate my users?". A fairly common answer is "schemas are the same as databases in other products", which isn't actually true (at least not for all of them). Schemas in PostgreSQL can be very flexible - much more so than in, say, SQL Server, where a schema and a user are pretty much the same thing. But there are reasons for using databases, and reasons for using schemas. This was a textbook example of why to use schemas instead of databases.

The system they run has the database hosted on their server, and a bunch of clients of different kinds (Windows fat clients connecting directly, and mobile devices connecting through an appserver). The system was set up with one database for each client, all connecting using the postgres superuser account (oops - but that's a different story).

An interesting point was that apart from the dedicated information for each client, there was a shared address table with about 300Mb (IIRC) of data in it. This data was 99.9% the same between all clients, but each client had the ability to add extra rows to the table.

One of the things I ended up doing with them was to change the model to a single database shared by all clients, with one schema for each client. By giving each client their own account instead of using the superuser account, almost no changes were necessary in the client application - they would automatically see the tables in their own schema (since it's named the same as the account), but have no access to the data from other clients. There were several advantages gained by this:

  • Less overhead, since there were far fewer databases.
  • The ability to do cross-schema queries when needed, to perform cross-client statistical calculations (not by the clients themselves, but from the provider)
  • Fewer databases to remember to back up ;)
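The per-client setup can be sketched roughly like this (client_a is of course a hypothetical name; the key point is that the role and schema names match, so the default search_path of "$user", public resolves to the client's own schema first):

```sql
-- One login role and one schema per client, with matching names.
-- CREATE SCHEMA AUTHORIZATION creates a schema named after the role.
CREATE ROLE client_a LOGIN PASSWORD 'secret';
CREATE SCHEMA AUTHORIZATION client_a;

-- client_a now automatically sees tables in schema client_a,
-- and has no access to other clients' schemas.
```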

With that step done, the next step was very logical. We moved the address table to the public schema, and replaced it with a view in each of the client schemas. This view was a UNION ALL of the address table in the public schema and a table called address_local that lived in each client's schema. This table was populated with all the entries the client had added to their local copy. A couple of RULEs on the view redirected modifications to the view to the local table (since the clients only had SELECT permission on the shared table, and they really shouldn't modify it). Later on we changed the application to write directly to the address_local table, purely for performance reasons. This brought several more advantages:

  • There is now only one copy of this large, shared table in the database. That greatly increases the probability of it staying in the cache, as well as reduces the risk that a scan on this table will push other tables out of the cache.
  • The address data is updated occasionally from an external source. Now it only has to be updated once, and the update affects all clients.
  • Backups are a lot smaller since the address data is only backed up once.
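A minimal sketch of the shared/local split for one client could look like this (the client_a names and the table layout are hypothetical; the real tables obviously had more columns):

```sql
-- The shared copy lives in public; each client gets a local table
-- for its own additions, and a view that unions the two.
CREATE VIEW client_a.address AS
  SELECT * FROM public.address
  UNION ALL
  SELECT * FROM client_a.address_local;

-- Redirect writes on the view into the local table - the client
-- only has SELECT permission on public.address.
CREATE RULE address_ins AS ON INSERT TO client_a.address
  DO INSTEAD INSERT INTO client_a.address_local VALUES (NEW.*);
```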

These are all very good arguments for using schemas instead of databases. But what are the downsides?

  • You can't use pg_hba.conf to restrict database access. In this case, that wasn't a reason, since they were all using the same account before. But it's a feature that will no longer work.
  • LISTEN/NOTIFY is per database, so the names of the NOTIFYs must change to include the name of the client.
  • Restoring a single client can be somewhat more complex
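For the LISTEN/NOTIFY point, the workaround is simply to embed the client name in the channel name (client_a and the channel names here are hypothetical; note that in 8.x NOTIFY takes no payload):

```sql
-- Instead of one shared "orders" channel per database:
LISTEN client_a_orders;
NOTIFY client_a_orders;
```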

I'm sure there are other arguments on both the positive and the negative side, but those are the ones we came across.

How's that for saving memory?

PostgreSQL on Windows has suffered from running out of "Desktop Heap" since day one. Things got worse with 8.3beta1, but Dave got that fixed for beta2. However, someone on the list actually managed to run into a different memory scalability problem - running out of address space in the postmaster.

So how could this happen? Because there were too many threads running. Hold on, you think, PostgreSQL isn't threaded?! Which is correct - PostgreSQL on Windows uses the same process-based model as it does on Unix. The reason for this is to make sure the database works the same way on all platforms, which cuts down on platform-specific bugs and makes maintenance easier. However, a couple of win32-specific functions are implemented as separate threads. In the postmaster, for example, we start one thread for each child process spawned. The only purpose of this thread is to wait until the child process dies and then fire off a SIGCHLD signal - since Windows doesn't provide this functionality built in like Unix does.

So with a lot of backends, this leaves us with a whole lot of threads running in the postmaster that don't really do anything. This method was originally chosen for 8.0 with the motivation that the code is trivial and that the overhead of starting a backend process is so big that starting a secondary thread won't be noticed at all. Which is correct in all aspects except one - the thread requires 4Mb of address space for its stack. Most of this is never put into real memory (only the required initial 64Kb are), but it counts against the address space and the commit charge. This gives two possible resources that can run out - room in the pagefile (for the commit charge) and address space in the process (max 2Gb can be addressed by a 32-bit Windows process by default). And at 4Mb per thread you get up there fairly quickly - 300 backends alone eat 1.2Gb of the 2Gb address space - something nobody thought about at the time.

So for 8.3beta2, I have rewritten the handling of SIGCHLD from scratch. Instead of creating a thread for each backend, we now use an operating system thread pool to do the waiting, and then queue up the child process exits on an I/O completion port. The second part, switching to an I/O completion port, actually simplified and reduced the amount of code necessary pretty substantially, which is always nice.

So did this help? Some pgbench figures from Dave show:

Without patch:
VM: 1,322,792K
Idle threads: 6
Peak threads: 306

With patch:
VM: 98,088K
Idle threads: 3
Peak threads: 7

So yes, it makes a huge difference. Yay! Apart from fixing the address space issue, this should also decrease the pagefile requirements for any environments with a lot of backends. You're still going to see significantly worse performance than on Unix with lots of backends - shared memory access is still slow, etc. But it does represent a fairly big step forward.

Not even the fact that mingw is once again missing the required functions (thus causing breakage on the buildfarm) makes that difference smaller. And a patch is in the works to fix that...

Thanks to Trevor and Dave for helping me track this down and test the resulting patch.

Integrated Security in PostgreSQL 8.3

An often requested feature for users migrating from Microsoft SQL Server to PostgreSQL is being able to use what Microsoft calls "Integrated Security". In a Microsoft environment, this means automatically using your Windows account to log in to the database. It does not mean using the same password - the login has to be fully transparent. (I believe at least one of the other major databases can do this already, but most requests come from MSSQL users.)

Prior to PostgreSQL 8.3, this was possible to do using Kerberos 5 authentication, but it only worked if your PostgreSQL server was running on Unix (clients could be Unix or Windows). PostgreSQL 8.3 brings this to the native Windows version, and also makes things a lot easier to configure and use in a pure Windows environment. The core of the functionality for this is the addition of GSSAPI and SSPI authentication. GSSAPI is an open standard for authentication, typically used for Kerberos authentication. SSPI is Microsoft's implementation of it, which is wire compatible with GSSAPI (it's not API compatible, unfortunately). Native Kerberos 5 will still be supported in 8.3 for backwards compatibility, but it's deprecated in favor of GSSAPI in all installations that don't need to be compatible with 8.2 or earlier.

PostgreSQL 8.3 adds support for both these authentication methods. GSSAPI is supported on both Unix and Windows, and SSPI is supported only on Windows. The Windows build will always build with SSPI support. GSSAPI support requires the MIT Kerberos for Windows package to build (the same library used for the krb5 authentication used in pre-8.3 versions). The binary distribution will be shipped with GSSAPI enabled, but if you do your own build from source it's off by default.

There are a couple of ways to use this:

To achieve the same functionality as in pre-8.3 but using GSSAPI instead, configure the Unix server the exact same way as for Kerberos authentication. In pg_hba.conf, specify authentication method gss instead of krb5. Unix clients with GSSAPI support and Windows clients with GSSAPI or SSPI support will then be able to authenticate. Note that a Windows SSPI client will be able to authenticate to the Unix server without needing the GSSAPI library.

To run the server on Windows and be able to authenticate to it from both Windows and Unix (or just Unix), configure things the same way as above. This setup requires the GSSAPI library to be linked in on the server.

To run the server on Windows with all clients on Windows, it's trivial to set up. In pg_hba.conf on the server, just specify sspi as the authentication method, and you're done. All users connecting from the local machine, your domain, or a trusted domain will be automatically authenticated using the SSPI-configured authentication (you can enable/disable things like NTLMv2 or LM using Group Policy - it's a Windows configuration, not a PostgreSQL one). You still need to create the login role in PostgreSQL, but that's it. Note that the domain is not verified at all, only the username. So the user Administrator in your primary domain and in a trusted domain will be considered the same user if they try to connect to PostgreSQL. Note that this method is not compatible with Unix clients.
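As a sketch, the pg_hba.conf lines for the pure-Windows and the mixed cases could look something like this (the address and the database/user columns are examples only, not a recommendation):

```
# Windows-only clients, authenticated transparently via SSPI:
host    all    all    192.168.0.0/24    sspi

# Mixed Unix/Windows clients, authenticated via GSSAPI/Kerberos:
host    all    all    192.168.0.0/24    gss
```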

In almost no case do you actually need to configure and use the MIT Kerberos library on the client. The only case when this is needed is if you specifically don't want to use the Active Directory login, but instead use a standalone Kerberos login based on MIT Kerberos. If you need this, specify the connection parameter gsslib=gssapi (or use the environment variable PGGSSLIB). Otherwise, the client will default to using the SSPI library.

Speeding up NOT IN

Sometimes, old tricks still work. I had a case the other day where I needed to insert into a table all rows "missing" compared to a different table - just the id value, letting all other columns get their default values. The query to do this the simple way is:

INSERT INTO table2 (itemid)
  SELECT id FROM table1
  WHERE id NOT IN (
    SELECT itemid FROM table2
  )

This would be fine, except there are a lot of rows in the tables. Specifically, table2 contains about 10 million and table1 contains 15 million. So the insert should be about 5 million rows. It's not huge, but it's enough to matter.

I left this query running for hours. And hours. And hours. The data fit fine in RAM, so it spent all that time at 100% CPU with no I/O. After 12+ hours it still wasn't finished, so it was time to try something else. I rewrote the query to:

INSERT INTO table2 (itemid)
  SELECT id FROM table1
  WHERE NOT EXISTS (
    SELECT * FROM table2 WHERE table2.itemid=table1.id
  )

But no luck - I terminated this one after 7 hours. The query plan for both of these was the same:

Seq Scan on table1  (cost=0.00..3902265020963.13 rows=7539834 width=4)
  Filter: (NOT (subplan))
  SubPlan
    ->  Seq Scan on table2 s2  (cost=0.00..258776.46 rows=1 width=38)
          Filter: (itemid = $0)

So it's time to bring out the dirty tricks. Specifically something that I recall at least old versions of Access used to do - back when they simply didn't support NOT IN subqueries (yes, I'm slightly embarrassed, but I did hack around in Access a bit back when I was forced to). The new code is (it uses a temporary table, but that's not directly related to the speed change):

INSERT INTO loader (id)
  SELECT id FROM table1
  LEFT JOIN table2 ON table1.id=table2.itemid
  WHERE table2.itemid IS NULL

And wham - the query completes in less than 30 minutes. The INSERT into the actual table is very quick - a single-digit number of minutes, I didn't get an actual measurement for it. The new query plan is:

Hash Left Join  (cost=435127.62..2902887.99 rows=1 width=4)
  Hash Cond: (table1.id = table2.itemid)
  Filter: (table2.itemid IS NULL)
  ->  Seq Scan on table1  (cost=0.00..1880248.68 rows=15079668 width=4)
  ->  Hash  (cost=229150.50..229150.50 rows=11849450 width=4)
        ->  Seq Scan on table2  (cost=0.00..229150.50 rows=11849450 width=4)

For some reason the row estimates look different - but it's the same tables, and nothing happened in between. Either way - it's slightly hackish, but it worked.

For those who already asked

The person on the frontpage of www.postgresql.org is David Fetter, and the picture is taken late afternoon on the day after pgday (which means a rather late evening the day before and a long day of sightseeing in Florence). If you're not properly tired after several days (and nights) of community activity, you didn't try enough!

pgDay is over

After two days of great talks, pgday is over. The event also contained a lot of community networking, which to me was more valuable than the talks themselves. But that's mostly because I knew most of the people giving talks in English and what they had to say, and I didn't understand the ones in Italian. The translation wasn't good enough for me in most of the cases really, but I could tell the talks were interesting for those who fully understood them.

Today was sightseeing day in Florence with Dave, Simon, Greg, Josh, David, Andreas, Stefan and Susanne (and of course our hosts Gabriele and Andrea - and anybody else I forgot about). After a couple of days in there, we also ran into Ishikawa. A bit too hot for my taste really, but an interesting day. Florence is a nice city, and it mixes well with PostgreSQL conversations.

After dinner, we almost missed the train home. Actually, the train was replaced with a bus, which we almost missed. But it all worked out in the end, and now we're all (well, that's me, David, Josh, Stefan and Andreas) in David Fetters room cheering him and Ads on while they are writing up the PostgreSQL weekly news.

Pictures from this day and the previous ones are available on other people's blogs, for example Andreas's.

On a closing note, Stefan just informed me that the benchmarks previously referred to are now posted.

And finally, a great big thanks to the people organizing this great event, and all the volunteers helping out with it. I'm impressed with the result! When I leave tomorrow, I'm certainly very happy having been here!

Another day at pgday

So now the second day of pgday has started. Yesterday continued in the same way it started, with several more good talks. Andreas did a good talk on community relations. In the late afternoon Susanne showed up to do a talk on locales and encoding - that's another name checked off the list of PostgreSQL community people to meet.

After the conference itself, it was off to a restaurant for proper Italian pizza. In what appears to be standard Italian ways, the restaurant hadn't actually opened when we arrived, so we had to walk around the city for a while before they'd let us in and serve us food. But once we got there, it was all good.

The day finished at the PLUG bar (I can't stop being impressed that the usergroup actually has their own bar! If that's not a good driver for membership, I don't know what is...). This time they even had a band with live music!

Got back home around 2, did the mandatory email checks, and off to bed. Actually made it in time for JoshB's keynote today. For those of us well inside the development process right now, much of it wasn't news, since it was focused on what's coming up in 8.3. However, one interesting part was that Sun and EnterpriseDB actually published commercial benchmarks on PostgreSQL! They should be going up on the website sometime early next week (that's Sun's website, but I'm sure there will be links from the news section on ours), but the exec summary is that PostgreSQL is very fast and very cheap (same speed, 1/3 the cost of Commercial Vendor One). Which we already knew, of course, but it certainly helps the PHBs to have nice graphs and official numbers. So kudos to Sun and EnterpriseDB for getting that done!

pgday is under way

I arrived in Prato, Italy, yesterday and even got picked up at the airport. We then went to Monash University, where the conference would be. After some time waiting to help out, we (me, Josh Berkus and David Fetter) gave up on that and went for a walk around town instead. We picked up Andreas (ads) and Stefan at the hotel. Several hours later we met the Italian guys again for dinner and drinks, and it seems they somehow got things sorted without us, because they were ready to go. We also picked up the guys from Dalibo (France) at the same time. Finally, after we'd all had our food and drinks, the edb guys from the UK (Dave, Simon, Greg) arrived and we just had to go out for another couple of beers...

Anyway. Today the actual pgday has started (it's actually two days, regardless of what the name implies). Talks are translated in realtime, which works reasonably well but not perfectly (as always). The amount of merchandise they have is impressive - t-shirts, notepads, pens, more t-shirts, pins etc. And Andreas brought mugs and those cool plush elephants he's been blogging about, so they're really well set. And they have an impressive number of volunteers, so things are going very smoothly. Even the wireless network works today...

There's a big crowd here, so from what I can tell this is so far a big success. There's of course a large amount of Italian people here, but there's also a lot of people from abroad - as far as the US and Japan, and a good number from all around Europe. Let's hope the rest is as good as the start!

Oh, and if you're not here, there's live audio streaming available!

pgcon midtime report

So it's been a couple of days since I arrived for pgcon in Ottawa, so it's obviously time to blog. At least Devrim keeps telling me to...

The trip in was not too bad, though 4 hours at Heathrow airport is not exactly fun. I got here Monday evening and tried to coordinate with the EnterpriseDB UK team, who arrived an hour or so after me. After some problems (ahem, phones that don't work abroad and such - hello Dave), we did manage to hook up and had dinner and some beers in the evening. Plus, of course, PostgreSQL chatting!

Tuesday was mostly spent walking around the city and playing tourist along with Dave. Simon and Heikki hadn't done their homework properly, so they had to update talks and finish off patches. In the afternoon we headed over to the Royal Oak pub where the registration was, and spent the rest of the day chatting with the other pg people who had arrived.

Since I didn't go to any tutorials, the conference itself started for me on Wednesday, with Bruce's keynote. Somehow I managed to get mentioned in that one again, I just don't know how that keeps happening... An interesting look back at some of the things that brought us to where we are now, and a good way to kick off the conference.

Next talk for me was PostgreSQL-IE, which is a system for doing content-based image search in PostgreSQL, and it looks really, really interesting. They currently use it in the medical field, but it seems like it could be used for several other things as well with fairly small changes.

After that, Robert Treat did a good overview on partitioning. I haven't really looked into the partitioning stuff as much as I've wanted, so this was a really good way to get started on that.

After lunch, I went to a presentation on migrating from SQL Server (primarily) to PostgreSQL. But I'll have to admit I spent a lot of that time working on my own slides, so I can't really comment on the talk.

Denis gave a talk on why the EnterpriseDB team picked PostgreSQL as the foundation for building the EnterpriseDB product and company. Most of the reasons were the ones you'd already have guessed, but he brought some nice details about the decision process.

Getting to the last actual talk, there was that weird PostgreSQL-on-Windows guy giving his talk. I think it was fairly well received, even though Dave had to point out I forgot to say that he had actually fixed the installer so that it'll work on Windows Vista out of the box on 8.3. Oops, sorry about that Dave! We'll see what the eval forms tell me for a more objective view on how well it went.

The final part of the actual conference was the BOF. The discussion we had first turned into pretty much a repeat of things that were already said on the mailinglists, so I'm not sure it was very productive. We ended it with a keysigning event, which worked better.

After that it was off to the EnterpriseDB sponsored drinks and dinner party. We got there hours after the other guys (who skipped the BOFs), but we got there before the food was served, so the damage wasn't that big. Food and beer and good discussions were had. When the place closed we moved on to the bar that's in the hotel where the EDB guys stayed for another beer, but we didn't end up staying there very long.

Just in for the start of the last day now, also worrying about the fact that the airline for my trip home has gone on strike, so I have no way to get home right now. But I'm sure it'll work out some way, so I'll just focus on the conference for now.

pgcon, pgday and blogging

I just got booked for going to pgday in Prato, Italy. Looks like it's going to be a great gathering of the European people in the PostgreSQL community. Really looking forward to meeting those from the EU group that I haven't already had a chance to meet. I haven't decided if I'm going to submit a talk yet though, but it seems I have a few more weeks to decide.

Closer in time, we're rapidly coming up on pgcon next week. I really need to polish my slides soon. But I guess that's what long flights are for after all... Between these two events, there isn't that much of the PostgreSQL community that I won't get to meet over the next couple of months. Great times ahead!

Oh, and what has happened to planetpostgresql.org? The post-rate has really dropped drastically (and yes, I realize I'm definitely part to blame for this). There are as usual interesting exceptions, but it seems the number of posts have gone down. People, get back to blogging interesting stuff!
