The call for papers for PGConf.EU 2011 in Amsterdam will close at the end of this week. Now is the time to get your talk submissions in!
We are interested in all kinds of talks - from deep technical ones, to novice-oriented advice and case studies of interesting things done with PostgreSQL. We expect a wide range of skill sets amongst our visitors, so we want a good spread of talk topics as well!
Of course, all speakers get free entrance to the conference on all days (training sessions not included).
If you have any questions for us, don't hesitate to contact us.
So, there is nothing to wait for. Head over to the call for papers site and submit your ideas! And please help us spread the word to potential speakers in other communities as well, who may not have seen our posts yet!
PostgreSQL Conference Europe is now accepting registrations for conference attendance.
The Early Bird special price will be available until September 5th, but that's no reason not to get your registration in early! Should you for some reason want to register for just a part of the conference, single day rates are also available at this time.
If you are planning to attend one of our training sessions, the schedule for those has not yet been published, and it is therefore not yet possible to register for trainings. However, do not worry: the early bird rate will be available for all attendees who register for the trainings - all the way until right before the conference.
And don't forget - the call for papers is still open! If you have already submitted a talk, or are planning to submit one, we suggest you wait to register until you have received confirmation of whether your talk has been accepted. The early bird rate will be available long enough for you to register after you have received this notification - and if your talk is accepted, attendance is of course free!
As usual, if you have any questions, don't hesitate to contact us.
PostgreSQL Conference Europe 2011 will be held on October 18-21 in the Casa 400 Hotel in Amsterdam, The Netherlands. It will cover topics for PostgreSQL users, developers and contributors, as well as decision and policy makers. For more information about the conference, please see the website at http://2011.pgconf.eu/.
We are now accepting proposals for talks. Please note that we are looking for talks in English, Dutch, German and French.
Each session will last 45 minutes, and may be on any topic related to PostgreSQL. Suggested topic areas include:
Of course, we're happy to receive proposals for talks on other PostgreSQL related topics as well.
We also have a limited number of longer, 90-minute, slots available. Please indicate clearly in your submission if you wish to make a 90-minute talk.
Finally, there will be a session of five-minute lightning talks. A separate call for proposals will be made for those later on.
The submission deadline is August 21st, 2011. Selected speakers will be notified before September 5th, 2011.
Please submit your proposals by going to http://2011.pgconf.eu/callforpapers and following the instructions.
If your proposal is in a language other than English, please also include a single-sentence description of the presentation in English in the submission notes field.
The proposals will be considered by a committee, which will produce a schedule to be published nearer the conference date. You will be informed by email within two weeks of the submission deadline whether your proposal has been accepted.
This call for papers is also available on the web at http://2011.pgconf.eu/callforpapers
We look forward to hearing from you, and seeing you in Amsterdam in October!
One thing I've really disliked is the fact that contrib modules had installation scripts that forced the installation into the public schema - in my opinion, for no useful reason at all.
For example, I often install the pgcrypto contrib module. And I install it in the pgcrypto schema, which I then either add to the search_path variable or just use explicitly in my queries, with things like pgcrypto.crypt('foobar','barfoo'). For versions prior to 9.1, being able to do this required me to manually edit the installed pgcrypto.sql file, to remove the SET search_path = public; command.
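For reference, here's a minimal sketch of those two usage patterns, assuming pgcrypto is already installed in its own pgcrypto schema:

-- either put the schema on the search_path for the session...
SET search_path = pgcrypto, public;
SELECT crypt('foobar', gen_salt('bf'));

-- ...or schema-qualify the calls explicitly
SELECT pgcrypto.crypt('foobar', pgcrypto.gen_salt('bf'));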
Extensions in 9.1 make this so much nicer. To get pgcrypto into its own schema, I now just need to do:
postgres=# CREATE SCHEMA pgcrypto;
CREATE SCHEMA
postgres=# CREATE EXTENSION pgcrypto SCHEMA pgcrypto;
CREATE EXTENSION
If I happen to create it in public by mistake, I can even move it after the fact!
postgres=# ALTER EXTENSION pgcrypto SET SCHEMA pgcrypto;
ALTER EXTENSION
You still need to create the schema manually - in theory we could auto-create it - but the work is still a lot easier than before. And fully supported!
PostgreSQL 9.1 beta1 is now available - now is a great time to start testing it and trying out all the great new features.
There have always been a number of ways to read your PostgreSQL logs remotely, over a libpq connection. For example, you can use the pg_read_file() function - which is what pgAdmin does. PostgreSQL 9.1 adds a new and (in some ways) more convenient way to do this - using SQL/MED.
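For the record, the old approach looks something like this (a sketch - the exact file name depends on your logging configuration, and the function requires superuser privileges):

-- read the first 10000 bytes of a server log file; the path is relative to the data directory
SELECT pg_read_file('pg_log/postgresql-2011-05-12.log', 0, 10000);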
PostgreSQL 9.1 comes with SQL-standard SQL/MED functionality. The MED is short for "Management of External Data", and as the name suggests, it's about accessing data that's external to the PostgreSQL server. The SQL/MED functionality is not (yet) complete, but it's already very useful in its current state.
In SQL/MED, there is something called a Foreign Data Wrapper, which can be compared to a driver. Using this FDW, we can create one or more Foreign Servers, each of which defines how to connect to a specific instance of the service - if any. Finally, we can create one or more Foreign Tables on each of the Foreign Servers, giving us direct access to the remote data using SQL.
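To tie this back to reading the logs: here is a sketch of what that can look like using the file_fdw foreign data wrapper from contrib. The column list follows the 9.1 csvlog format, and the filename is just an example - it assumes logging_collector is on, log_destination includes csvlog, and the data directory is /var/lib/pgsql/data:

CREATE EXTENSION file_fdw;

CREATE SERVER pglog_server FOREIGN DATA WRAPPER file_fdw;

-- the filename below is an assumption; point it at one of your actual csvlog files
CREATE FOREIGN TABLE pglog (
    log_time timestamp(3) with time zone,
    user_name text,
    database_name text,
    process_id integer,
    connection_from text,
    session_id text,
    session_line_num bigint,
    command_tag text,
    session_start_time timestamp with time zone,
    virtual_transaction_id text,
    transaction_id bigint,
    error_severity text,
    sql_state_code text,
    message text,
    detail text,
    hint text,
    internal_query text,
    internal_query_pos integer,
    context text,
    query text,
    query_pos integer,
    location text,
    application_name text
) SERVER pglog_server
OPTIONS (filename '/var/lib/pgsql/data/pg_log/postgresql-2011-05-12.csv', format 'csv');

-- and then query the log just like any other table
SELECT log_time, error_severity, message
FROM pglog
ORDER BY log_time DESC
LIMIT 10;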
As has just been announced here, I was recently invited to join pgsql-core, and have accepted.
I guess the guys currently on it finally got tired of all my complaints, and figured out the way to make me stop was to suck me into the organization. The future will tell if their strategy will be successful or not...
For those who don't know (this hopefully doesn't include my readers on Planet PostgreSQL), pgsql-core is the "steering committee" for the PostgreSQL project. Exactly what they do seems to be somewhat up for debate both outside and inside the group itself, but it at least has something to do with the leadership of the project...
Anyway, I'd like to thank the guys in the group for showing this trust in me, and shall do my best not to screw it up!
It's time to mark your calendars: PostgreSQL Conference Europe 2011 (formerly known as PGDay.EU) will be held on October 18-21 at the Casa400 Hotel in Amsterdam, The Netherlands.
Like last year, the conference will be held in a hotel venue, combining both the conference rooms and guest rooms, so you don't have to waste any time finding your way around the city. As in previous years, the conference will include fully catered coffee breaks and lunches, to make the most of the time. The first day of the conference will be a training day, and the following three days will be regular conference tracks. The conference will accept talks in English, Dutch, German and French, to benefit those attendees who prefer talks in their native language.
We are just starting our search for sponsors - if you are interested in sponsoring the conference, or know someone who is, please take a look at our sponsorship opportunities and don't hesitate to contact us if you have any questions or would like to propose an alternative arrangement.
We will also follow up with a call for papers later, and in due course open registration and post a conference schedule. For now, mark the dates, and follow the news on our website and on our Twitter stream @pgconfeu.
Next week it's time for PgEast: 2011, this time in New York City.
I've already outlined why the East part of "PostgreSQL Conference East" (as it was called at the time) is incorrect: as is obvious to anybody with a basic knowledge of geography, the conference is to the west. From what I can tell, it's approximately 74 degrees west of zero, which means it's more than 20% of the world to the west.
In expanding this scope, it seems JD has this year decided to get the rest of the name wrong as well, in a bid to get more people. Just like it's 20% of the world wrong in location, it's no longer a PostgreSQL conference. Instead it's more of a cross-database conference, with an entire track dedicated to MongoDB (incidentally, approximately 20% of the tracks, it seems). Is that bad? Absolutely not - I'm looking forward to sneaking in on one or two of those MongoDB talks. But I think it means we have to go back to the proper name for the conference - JDCon-East!
And I'm sorry JD, but whatever numbers you get, you will not be the biggest PostgreSQL conference around. We are going to have to leave that title where it belongs - with the Brazilians (for now).
This year, the conference is also running a full seven parallel training sessions on the day before the actual conference. As part of this, I'm giving a half-day training on Streaming Replication and Hot Standby. If you haven't registered for it already, there are still seats open! And tell your friends - since this is how my trip there gets funded, I'd really like to get a full session...
I will also be giving a talk during the regular conference, Data Driven Cache Invalidation.
There's plenty of PostgreSQL - and MongoDB - around for everybody at this conference, so if you're anywhere near New York City, there is no reason not to be there!
The PostgreSQL mailing lists are currently down - and along with them, a few other services.
From what we can tell, what has happened is that the datacenter in Panama where hub.org hosts most of their servers dropped completely off the Internet several hours ago. The PostgreSQL mailing lists are managed by hub.org and are tied into their main infrastructure. For this reason, there is nothing the rest of the sysadmin team can do other than wait for the situation to resolve, and we unfortunately have no way to bring up backup servers anywhere else.
As an added unfortunate bonus, it seems at least one of the hub.org nameservers is still serving an incorrectly configured DNS zone file. This means that while this server is geographically hosted elsewhere, as it should be, email will get delivered to that host and then bounce with a message saying that the postgresql.org domain does not exist. This is incorrect - the domain itself exists and works perfectly well, and if it weren't for this incorrect zone file, mail would be queued up and delivered once the main datacenter is back up.
Along with the lists, a few other services hosted with hub.org are currently unavailable - pgfoundry.org, pugs.postgresql.org, the developer documentation, jdbc.postgresql.org and possibly some other minor services.
All other infrastructure services are operating properly, including the website and the download mirrors.
Please be patient as we wait for hub.org to resolve this issue. For any up-to-date status information your best bet is the #postgresql IRC channel on FreeNode - but people are unlikely to be able to provide any information beyond "it's down, and we're waiting for hub.org".
Today I committed the first version of a new PostgreSQL tool, pg_basebackup. The backend support was committed a couple of weeks back, but this is the first actual frontend.
The goal of this tool is to make base backups easier to create, because they are unnecessarily complex in a lot of cases. Base backups are also used as the foundation for setting up streaming replication slaves in PostgreSQL, so the tool will be quite useful there as well. The most common way of taking a base backup today is something like (don't run this straight off, it's not tested, there are likely typos):
psql -U postgres -c "SELECT pg_start_backup('base backup')"
if [ "$?" != "0" ]; then
echo Broken
exit 1
fi
tar cfz /some/where/base.tar.gz /var/lib/pgsql/data --exclude "*pg_xlog*"
if [ "$?" != "0" ]; then
echo Broken
psql -U postgres -c "SELECT pg_stop_backup()"
exit 1
fi
psql -U postgres -c "SELECT pg_stop_backup()"
if [ "$?" != "0" ]; then
echo Broken
exit 1
fi
And when you're setting up a replication slave, it might look something like this:
psql -U postgres -h masterserver -c "SELECT pg_start_backup('replication base', 't')"
if [ "$?" != "0" ]; then
echo Broken
exit 1
fi
rsync -avz --delete --progress postgres@masterserver:/var/lib/pgsql/data /var/lib/pgsql
if [ "$?" != "0" ]; then
echo Broken
psql -U postgres -h masterserver -c "SELECT pg_stop_backup()"
exit 1
fi
psql -U postgres -h masterserver -c "SELECT pg_stop_backup()"
if [ "$?" != "0" ]; then
echo Broken
exit 1
fi
There are obvious variations - for example, I come across a lot of cases where people don't bother checking exit codes. Particularly for the backups, this is really dangerous.
Now, with the new tool, both these cases become a lot simpler:
pg_basebackup -U postgres -D /some/where -Ft -Z9
That simple. -Ft makes the system write the output as a tarfile (actually, multiple tar files if you have multiple tablespaces, something the "old style" examples up top don't take into account). -Z enables gzip compression. The rest should be obvious...
In the second example - replication - you don't want a tarfile, and you don't want it on the same machine. Again, both are easily handled:
pg_basebackup -U postgres -h masterserver -D /var/lib/pgsql/data
That's it. You can also add -P to get a progress report (which you normally cannot get out of tar or rsync, except on a per-file basis), and a host of other options.
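For example, a progress-reporting variant of the replication-style invocation above (a sketch - adjust the connection options and target directory to your environment):

pg_basebackup -U postgres -h masterserver -D /var/lib/pgsql/data -P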
This is not going to be a tool that suits everybody. The current method is complex, but it is also fantastically flexible, letting you set things up in very environment-specific ways. That is why we are absolutely not removing any of the old ways - this is just an additional way to do it.
If you grab a current snapshot, you will have the tool available in the bin directory, and it will of course also be included in the next alpha version of 9.1. Testing and feedback are much appreciated!
There are obviously things left to do to make this even better. A few of the things being worked on are:

* Ability to run multiple parallel base backups. Currently, only one is allowed, but this is mainly a restriction based on the old method. Heikki Linnakangas has already written a patch that does this; it's just pending some more review.
* Ability to include all the required xlog files in the dump, in order to create a complete "full backup". Currently, you still need to set up log archiving for full Point In Time Recovery, even if you don't really need it. We hope to get rid of this requirement before 9.1.
* Another option is to stream the required transaction logs during the backup, removing the need to include them in the archive at all. This is less likely to hit until 9.2.
* The ability to switch WAL level as necessary. For PITR or replication to work, wal_level must be set to archive or hot_standby, and changing this requires a restart of the server. The hope is to eventually be able to bump this from the default (minimal) at the start of the backup, and turn it back down when the backup is done. This is definitely not on the radar until 9.2 though.