
Oracle JDBC memory settings

Many applications use ancient versions of Oracle’s JDBC drivers, as the drivers are downloaded manually and seldom upgraded. That is a pity, as the newer drivers offer much better performance. However, some of the performance gains are bought with increased memory consumption, and that can be a problem.

We ran into an issue with the connection pool in JBoss. It hands out connections round-robin, so as long as there is some load the pool tends to stay at peak size. Each connection normally keeps a buffer cache, and it may cache other things as well. The memory retained by each connection therefore grows over time, eventually consuming most of the heap.

A JBoss-specific workaround is to use the CLI to close the idle connections (off-peak), where DS is the name of the data source:


/subsystem=datasources/xa-data-source=DS/:flush-idle-connection-in-pool
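
For example, the command can be run non-interactively with the management CLI (the installation path and management port may differ per installation):

$JBOSS_HOME/bin/jboss-cli.sh --connect --command="/subsystem=datasources/xa-data-source=DS/:flush-idle-connection-in-pool"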

A better approach may be to set the system property oracle.jdbc.useThreadLocalBufferCache to true. That moves the buffer cache from the connections to thread locals. Depending on how the application behaves, that may be better, as the memory is reclaimed when a thread dies. In addition, a given thread may be more likely to issue the same SQL multiple times and can thus benefit more from the cache.
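
For example, assuming a standard JBoss installation, the property can be added to JAVA_OPTS in bin/standalone.conf so it is in effect before the driver classes are loaded (file name and location vary by platform and version):

JAVA_OPTS="$JAVA_OPTS -Doracle.jdbc.useThreadLocalBufferCache=true"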

It may also be useful to limit the maximum buffer size with oracle.jdbc.maxCachedBufferSize. The implicit statement cache is normally off, but if it is enabled, its size can be controlled with oracle.jdbc.implicitStatementCacheSize, and oracle.jdbc.freeMemoryOnEnterImplicitCache can force the buffers allocated for a statement to be released when the statement is put into the cache. In most cases it is best to leave those options alone.
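
As an illustration only (the values are made up and the right settings depend on the workload), the properties can be combined on the JVM command line. Note that some driver versions interpret small maxCachedBufferSize values as a base-2 exponent rather than a byte count, so check the white paper for the semantics of your driver version:

java -Doracle.jdbc.useThreadLocalBufferCache=true -Doracle.jdbc.maxCachedBufferSize=21 -jar application.jar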

See Oracle JDBC Memory Management for the whole story!

Categories: Java, Oracle, Performance

Beware of ActiveMQ with JDBC persistence

ActiveMQ (and Red Hat’s AMQ derivative) is gaining ground. Many companies that need high availability already have highly available databases, so it makes sense to use ActiveMQ in a master/slave configuration with a JDBC message store. A bit slower than KahaDB or LevelDB, but simple to configure and very safe. Or is it? Well, it depends.

ActiveMQ really has two different message stores: one for normal messages and one for scheduled messages. For normal persistent messages it is possible to use KahaDB, LevelDB or JDBC. For scheduled messages there are only two alternatives: KahaDB and in-memory. So, if you have a highly available master/slave configuration with a JDBC backend for normal messages, all scheduled messages will be lost when the master dies, as they are stored in the local file system or in memory.
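
A minimal sketch of the combination in question, assuming a data source bean named oracle-ds defined elsewhere in the broker XML (names are illustrative):

<broker xmlns="http://activemq.apache.org/schema/core" brokerName="broker1" schedulerSupport="true">
  <persistenceAdapter>
    <!-- normal persistent messages go to the shared, highly available database -->
    <jdbcPersistenceAdapter dataSource="#oracle-ds"/>
  </persistenceAdapter>
  <!-- no jobSchedulerStore is configured, so scheduled messages land in a local file-based store under the broker's data directory -->
</broker>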

Scheduled messages are seldom used, so perhaps this is trivial? Unfortunately not. Most applications want to redeliver failed messages a few times before relegating them to a DLQ. ActiveMQ supports broker-side redelivery, but under the hood it is implemented using scheduled messages. In other words, all messages waiting for redelivery are lost at failover unless the job scheduler store is highly available, which means a highly available KahaDB store. Installations that picked JDBC typically did so precisely because they do not run a highly available KahaDB store.
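
For reference, broker-side redelivery is enabled with a plugin along these lines (the policy values are illustrative, and the broker must have schedulerSupport="true"). Each delayed redelivery it performs is a scheduled message and therefore lives in the job scheduler store, not in the JDBC store:

<plugins>
  <redeliveryPlugin fallbackToDeadLetter="true" sendToDlqIfMaxRetriesExceeded="true">
    <redeliveryPolicyMap>
      <redeliveryPolicyMap>
        <defaultEntry>
          <!-- retry a few times with exponential backoff before giving up and using the DLQ -->
          <redeliveryPolicy maximumRedeliveries="4" initialRedeliveryDelay="5000" useExponentialBackOff="true" backOffMultiplier="2"/>
        </defaultEntry>
      </redeliveryPolicyMap>
    </redeliveryPolicyMap>
  </redeliveryPlugin>
</plugins>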

In other words, take care when using ActiveMQ or AMQ with a JDBC backend! Normal messages are safe, but scheduled messages and messages waiting for redelivery are likely to be lost at failover. Is that acceptable?

A documentation bug has been filed for this.

Categories: Java