AuraDB instances run with a predefined maximum number of concurrent connections.
This limit caps the memory dedicated to the network resources (threads) needed to handle individual connections, protecting the AuraDB instance. The limit corresponds to the following Bolt thread pool setting:
Neo4j 4.4:
dbms.connector.bolt.thread_pool_max_size=400

Neo4j 5.x:
server.bolt.thread_pool_max_size=400
If your application runs in an environment or framework designed for horizontal scaling, consider this value carefully.
More importantly, when it comes to implementation it is better to open multiple driver sessions than to create multiple instances of the driver object.
If you do need multiple drivers for architectural or technical reasons, reduce the connection pool size in each driver's configuration so that together they do not overwhelm your AuraDB instance.
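As a minimal sketch of the preferred pattern (the URI and credentials are illustrative placeholders): one driver is created for the whole application and each unit of work gets its own short-lived session.

import org.neo4j.driver.AuthTokens;
import org.neo4j.driver.Driver;
import org.neo4j.driver.GraphDatabase;
import org.neo4j.driver.Session;

public class SharedDriverExample {
    // One driver per application, shared by all threads; it manages
    // the connection pool internally
    private static final Driver DRIVER = GraphDatabase.driver(
            "neo4j+s://<instance-id>.databases.neo4j.io",
            AuthTokens.basic("neo4j", "<password>"));

    public static void main(String[] args) {
        // Sessions are lightweight: open one per unit of work, then close it
        try (Session session = DRIVER.session()) {
            session.run("RETURN 1").consume();
        }
        DRIVER.close();
    }
}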
Clustering considerations
AuraDB Professional and AuraDB Enterprise run 3-node clusters (AuraDS and Aura Free run on a single node). In a Neo4j cluster, each node acts either as the Leader (exactly one at any point in time) or as a Follower (all the others).
The Leader handles all WRITE queries and can also serve READ queries; Followers serve READ queries only. It is therefore essential to mark queries explicitly as READ in your code so they can be routed to Followers, keeping the Leader's resources free for writes. See How to explicitly mark queries as READ while using the official Neo4j drivers?
Thus with 3 nodes the total number of connections can extend to 3 x 400 = 1,200, and to leverage this maximum you need to mark queries as READ explicitly, as in the sketch below.
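A hedged sketch of what that looks like with the Java driver (the Cypher query and the Person label are illustrative; executeRead is the 5.x API, the 4.4 equivalent is readTransaction):

import java.util.List;
import org.neo4j.driver.Driver;
import org.neo4j.driver.Session;

static List<String> readNames(Driver driver) {
    try (Session session = driver.session()) {
        // executeRead marks the transaction as READ so it can be
        // routed to a Follower instead of the Leader
        return session.executeRead(tx ->
                tx.run("MATCH (p:Person) RETURN p.name AS name")
                  .list(record -> record.get("name").asString()));
    }
}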
Example
With a scale factor of 16 (for example, 16 application instances each running its own driver) and 400 connections per node to share across a 3-node cluster, each driver should use at most 400 / 16 = 25 connections per node, i.e. 25 x 3 = 75 connections in total. You also have to balance the workload so that roughly 2/3 of the connections serve READ queries and 1/3 serve WRITE queries, mirroring the cluster's two Followers and one Leader.
import java.util.concurrent.TimeUnit;
import org.neo4j.driver.*;

// Cap this driver at 25 connections per cluster member (400 / 16)
Driver driver = GraphDatabase.driver(
        uri,
        AuthTokens.basic(username, password),
        Config.builder()
                .withMaxConnectionPoolSize(25)
                .withConnectionLivenessCheckTimeout(2, TimeUnit.MINUTES)
                .build());
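Writes can be marked explicitly in the same way; another illustrative sketch (executeWrite is the 5.x API, writeTransaction in 4.4, and the Cypher statement is hypothetical):

import java.util.Map;
import org.neo4j.driver.Driver;
import org.neo4j.driver.Session;

static void createPerson(Driver driver, String name) {
    try (Session session = driver.session()) {
        // executeWrite marks the transaction as WRITE, routing it to the Leader
        session.executeWrite(tx ->
                tx.run("CREATE (:Person {name: $name})", Map.of("name", name))
                  .consume());
    }
}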