During regular operation of your Aura instance, you may occasionally see some of your queries fail with errors similar to the following:
org.neo4j.memory.MemoryLimitExceededException: The allocation of an extra 8.3 MiB would use more than the limit 278.0 MiB. Currently using 275.1 MiB. dbms.memory.transaction.global_max_size threshold reached
TransactionExecutionLimit: Timeout after 4 attempts, last error: Neo4jError: Neo.TransientError.General.MemoryPoolOutOfMemoryError (The allocation of an extra 8.3 MiB would use more than the limit 278.0 MiB. Currently using 278.0 MiB. dbms.memory.transaction.global_max_size threshold reached)
This error should be handled by your application code as it may be intermittent.
As the configuration setting name suggests, this acts as a safeguard by limiting the amount of memory allocated to all transactions, preserving the regular operation of the AuraDB instance.
The measured heap usage of all transactions is only an estimate, and the actual heap utilization may differ from the estimated value.
In some cases, the estimation algorithm cannot detect objects that are shared at deeper levels of the memory graph. Because it takes a conservative approach, relying on aggregated memory-usage estimations in which the identities of all contributing objects are not known, such objects cannot be assumed to be shared, which can lead to overestimation.
Overestimation is most likely when you use UNWIND on a very large list, or when you expand a variable-length or shortest-path pattern in which many relationships are shared between the computed result paths.
If this is the case, you may want to test whether the same query can run without a sorting or deduplication operation such as ORDER BY or DISTINCT, and, if possible, handle the ordering or uniqueness in your application, as in the sketch below.
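For illustration, here is a minimal sketch of this kind of rewrite. The labels, relationship type, and parameter (Station, CONNECTED_TO, $from) are hypothetical; the point is only to move the deduplication and sorting out of the query:

```cypher
// Original shape: DISTINCT and ORDER BY buffer and deduplicate the full
// result set on the server, which adds to the transaction's memory usage.
MATCH (a:Station {name: $from})-[:CONNECTED_TO*1..5]->(b:Station)
RETURN DISTINCT b.name AS destination
ORDER BY destination;

// Alternative shape: return the raw rows and let the application
// deduplicate and sort them instead.
MATCH (a:Station {name: $from})-[:CONNECTED_TO*1..5]->(b:Station)
RETURN b.name AS destination;
```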
The setting mentioned in the error message, dbms.memory.transaction.global_max_size, aims to protect the AuraDB instance from OutOfMemory (OOM) errors and to increase resiliency; it is enabled in Aura and cannot be disabled. See the further reading section below for more details.
If removing the ORDER BY or DISTINCT clause does not address the issue, the primary mitigation for this error is to do one or more of the following:
- Handle this exception in your application code and be prepared to retry when the error is intermittent and the query can otherwise succeed.
- Rework the relevant query to reduce its memory footprint; use EXPLAIN or PROFILE to review the execution plan (for more details, see query tuning).
  - You can check the overall memory footprint of a query by running it with PROFILE in cypher-shell.
  - The PROFILE output includes the memory consumption, along with the query's results (if any) and the execution plan.
  - The plan reports the memory consumed in bytes (for example, 11,080 bytes); see the sketch after this list.
- Increase the instance size of your Aura deployment to get more resources.
- Reduce the concurrency of resource-heavy queries to give them a better chance of succeeding.
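As a minimal sketch of profiling in cypher-shell (the Person label, KNOWS relationship type, and name property are hypothetical, and the memory figure reported will depend on your query and data):

```cypher
// Prefixing the query with PROFILE executes it and records runtime statistics.
// cypher-shell prints the result rows followed by the execution plan, which
// includes a Memory (Bytes) column for operators that track heap usage and
// an overall memory figure for the query.
PROFILE
MATCH (p:Person)-[:KNOWS]->(friend:Person)
RETURN friend.name AS name
ORDER BY name;
```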
If you face this error while loading data from CSV files, use apoc.periodic.iterate to import the data, with a relatively small value for the batchSize parameter, as in the sketch below. For more details, refer to the Using apoc to conditional loading large scale data set from JSON or CSV files article.
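For example, batched CSV loading could look like the following minimal sketch; the file URL, label, and property names are placeholders, and a batchSize of 1,000 is only a starting point to tune down if the error persists:

```cypher
CALL apoc.periodic.iterate(
  // Outer statement: streams one row per CSV line.
  "LOAD CSV WITH HEADERS FROM 'https://example.com/people.csv' AS row RETURN row",
  // Inner statement: runs against each batch of rows in its own transaction,
  // keeping the memory footprint of any single transaction small.
  "MERGE (p:Person {id: row.id}) SET p.name = row.name",
  {batchSize: 1000, parallel: false}
);
```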
Further reading on memory management:
See the Limit transaction memory usage recommendation at https://neo4j.com/docs/operations-manual/current/performance/memory-configuration/#memory-configuration-considerations.