
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory #3388

Open
haroldboom opened this issue Nov 3, 2024 · 4 comments

Comments

@haroldboom
Contributor

Issue description

One of my nodes is stuck in a reboot loop: as soon as it is up and running, it hits a JavaScript heap out of memory error and crashes.

Expected behavior

Steps to reproduce the problem

  1. Fresh install of the node; it starts and then crashes shortly after coming up

Specifications

  • Node version: OriginTrail Node v8.1.1+beta.3
  • Platform: Ubuntu 22
  • Node wallet: 0xdA2DE0AD0731Ea28e1d71586798C23701F5375a9
  • Node libp2p identity: 28

Error logs

Nov 03 05:54:03 ottest3 node[210159]: [2024-11-03 05:54:03] USERLVL: [getCleanerCommand] level-change
Nov 03 05:54:03 ottest3 node[210159]: [2024-11-03 05:54:03] USERLVL: [commandsCleanerCommand] level-change
Nov 03 05:54:03 ottest3 node[210159]: [2024-11-03 05:54:03] USERLVL: [publishResponseCleanerCommand] level-change
Nov 03 05:54:03 ottest3 node[210159]: [2024-11-03 05:54:03] USERLVL: [getResponseCleanerCommand] level-change
Nov 03 05:54:03 ottest3 node[210159]: [2024-11-03 05:54:03] INFO: Replay pending/started commands from the database...
Nov 03 05:54:03 ottest3 node[210159]: [2024-11-03 05:54:03] INFO: Sharding Table Service initialized successfully
Nov 03 05:54:03 ottest3 node[210159]: [2024-11-03 05:54:03] INFO: Initializing blockchain event listener for blockchain base:84532, handling missed events
Nov 03 05:54:06 ottest3 node[210159]: [2024-11-03 05:54:06] INFO: Event listener initialized for blockchain: 'base:84532'.
Nov 03 05:54:06 ottest3 node[210159]: [2024-11-03 05:54:06] INFO: Event Listener Service initialized successfully
Nov 03 05:54:06 ottest3 node[210159]: [2024-11-03 05:54:06] INFO: Initializing http api and rpc router
Nov 03 05:54:06 ottest3 node[210159]: [2024-11-03 05:54:06] INFO: Enabling network protocol: /store/1.0.0
Nov 03 05:54:06 ottest3 node[210159]: [2024-11-03 05:54:06] INFO: Enabling network protocol: /update/1.0.0
Nov 03 05:54:06 ottest3 node[210159]: [2024-11-03 05:54:06] INFO: Enabling network protocol: /get/1.0.0
Nov 03 05:54:06 ottest3 node[210159]: [2024-11-03 05:54:06] INFO: Node listening on port: 8900
Nov 03 05:54:06 ottest3 node[210159]: [2024-11-03 05:54:06] INFO: Routers initialized successfully
Nov 03 05:54:06 ottest3 node[210159]: [2024-11-03 05:54:06] INFO: Network ID is QmWtC2g85qP7UuukZPonZrjS7TpZ8APyibRzAD3eU3hywa, connection port is 9000
Nov 03 05:54:06 ottest3 node[210159]: [2024-11-03 05:54:06] INFO: Node is up and running!
Nov 03 05:54:08 ottest3 node[210159]: [2024-11-03 05:54:08] TRACE: [dialPeersCommand] Node connected to QmNcjL4P5Dzq5hn6fkncYEsTcRU2g7g4N5rE2KCvHSPDav, updating sharding table last seen and last dialed.

Nov 03 05:54:27 ottest3 node[210159]: <--- Last few GCs --->
Nov 03 05:54:27 ottest3 node[210159]: [210159:0x69b5f70] 84435 ms: Scavenge (reduce) 2046.0 (2082.8) -> 2045.4 (2082.8) MB, 7.90 / 0.00 ms (average mu = 0.304, current mu = 0.262) allocation failure;
Nov 03 05:54:27 ottest3 node[210159]: [210159:0x69b5f70] 84502 ms: Scavenge (reduce) 2046.3 (2082.8) -> 2045.7 (2083.0) MB, 6.91 / 0.00 ms (average mu = 0.304, current mu = 0.262) allocation failure;
Nov 03 05:54:27 ottest3 node[210159]: [210159:0x69b5f70] 84571 ms: Scavenge (reduce) 2046.5 (2083.0) -> 2045.9 (2083.3) MB, 6.30 / 0.00 ms (average mu = 0.304, current mu = 0.262) allocation failure;
Nov 03 05:54:27 ottest3 node[210159]: <--- JS stacktrace --->
Nov 03 05:54:27 ottest3 node[210159]: FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
Nov 03 05:54:27 ottest3 node[210159]: ----- Native stack trace -----
Nov 03 05:54:27 ottest3 node[210159]: 1: 0xb8ced1 node::OOMErrorHandler(char const*, v8::OOMDetails const&) [/usr/bin/node]
Nov 03 05:54:27 ottest3 node[210159]: 2: 0xf06460 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [/usr/bin/node]
Nov 03 05:54:27 ottest3 node[210159]: 3: 0xf06747 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [/usr/bin/node]
Nov 03 05:54:27 ottest3 node[210159]: 4: 0x11182e5 [/usr/bin/node]
Nov 03 05:54:27 ottest3 node[210159]: 5: 0x1118874 v8::internal::Heap::RecomputeLimits(v8::internal::GarbageCollector) [/usr/bin/node]
Nov 03 05:54:27 ottest3 node[210159]: 6: 0x112f764 v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::internal::GarbageCollectionReason, char const*) [/usr/bin/node]
Nov 03 05:54:27 ottest3 node[210159]: 7: 0x112ff7c v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/usr/bin/node]
Nov 03 05:54:27 ottest3 node[210159]: 8: 0x11320da v8::internal::Heap::HandleGCRequest() [/usr/bin/node]
Nov 03 05:54:27 ottest3 node[210159]: 9: 0x109d747 v8::internal::StackGuard::HandleInterrupts() [/usr/bin/node]
Nov 03 05:54:27 ottest3 node[210159]: 10: 0x1540042 v8::internal::Runtime_StackGuardWithGap(int, unsigned long*, v8::internal::Isolate*) [/usr/bin/node]
Nov 03 05:54:27 ottest3 node[210159]: 11: 0x7fe71e699ef6
Nov 03 05:54:27 ottest3 systemd[1]: otnode.service: Main process exited, code=dumped, status=6/ABRT
Nov 03 05:54:27 ottest3 systemd[1]: otnode.service: Failed with result 'core-dump'.
Nov 03 05:54:27 ottest3 systemd[1]: otnode.service: Consumed 58.114s CPU time.
Nov 03 05:54:28 ottest3 systemd[1]: otnode.service: Scheduled restart job, restart counter is at 528.
Nov 03 05:54:28 ottest3 systemd[1]: Stopped OriginTrail V8 Node.
Nov 03 05:54:28 ottest3 systemd[1]: otnode.service: Consumed 58.114s CPU time.
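Note: the GC lines above show the heap stalling around 2046 MB, which matches the default ~2 GB old-space limit of 64-bit Node.js rather than the machine's physical RAM. A minimal way to check what heap limit the node process is actually running with, assuming plain Node.js tooling is available on the box:

node -e "console.log(require('v8').getHeapStatistics().heap_size_limit / 1024 / 1024, 'MB')"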

Disclaimer

Please be aware that the issue reported on a public repository allows everyone to see your node logs, node details, and contact details. If you have any sensitive information, feel free to share it by sending an email to [email protected].

@Larsk97

Larsk97 commented Nov 4, 2024

I have the same issue on my main node. I have given ot-node up to 4-5 GB of RAM to work with, rather than the default 2 GB.

<--- Last few GCs --->
[1327991:0x59233a0]    68273 ms: Mark-Compact (reduce) 2046.6 (2083.9) -> 2045.9 (2084.2) MB, 1548.60 / 0.00 ms  (avera>
[1327991:0x59233a0]    69680 ms: Mark-Compact (reduce) 2046.9 (2084.2) -> 2046.2 (2084.7) MB, 1402.67 / 0.00 ms  (avera>
<--- JS stacktrace --->
FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
----- Native stack trace -----
 1: 0xb8ced1 node::OOMErrorHandler(char const*, v8::OOMDetails const&) [node]
 2: 0xf06460 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [node]
 3: 0xf06747 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [nod>
 4: 0x11182e5  [node]
 5: 0x1130168 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, >
 6: 0x1106281 v8::internal::HeapAllocator::AllocateRawWithLightRetrySlowPath(int, v8::internal::AllocationType, v8::int>
 7: 0x1107415 v8::internal::HeapAllocator::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::in>
 8: 0x10e4a66 v8::internal::Factory::NewFillerObject(int, v8::internal::AllocationAlignment, v8::internal::AllocationTy>
 9: 0x1540896 v8::internal::Runtime_AllocateInYoungGeneration(int, unsigned long*, v8::internal::Isolate*) [node]
10: 0x7210b28d9ef6
otnode.service: Main process exited, code=dumped, status=6/ABRT
otnode.service: Failed with result 'core-dump'.
otnode.service: Consumed 2min 6.141s CPU time.
otnode.service: Scheduled restart job, restart counter is at 1566.
Started otnode.service - OriginTrail V6 Node.
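
For what it's worth, giving the machine more RAM does not by itself raise the V8 heap limit; the extra memory has to be passed to the Node.js process. A minimal sketch, assuming the node is managed by systemd as otnode.service (as in the logs above); if the unit's ExecStart already passes its own --max-old-space-size flag, adjust that value there instead:

# open a drop-in override for the unit
sudo systemctl edit otnode.service

# add these lines to the override file, then save:
[Service]
Environment=NODE_OPTIONS=--max-old-space-size=4096

# apply the override and restart the node
sudo systemctl daemon-reload
sudo systemctl restart otnode.service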

@haroldboom
Contributor Author

Mine has 8 GB and still does this

@botnumberseven

Starting with beta.3 I don't see it anymore if I allocate 4 GB; it does not work well with lower values.

@haroldboom try reducing the number of paranets you are syncing (see the config sketch below)
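
For illustration only: the idea is to trim the paranet sync list in the node's config file (e.g. .origintrail_noderc) down to the paranets you actually need. The key names below are hypothetical, so check the ot-node documentation for the real schema:

{
  "assetSync": {
    "syncParanets": [
      "<UAL of a paranet you want to keep syncing>"
    ]
  }
}

Fewer synced paranets presumably means fewer knowledge assets being processed in memory at once, which is why this can ease the heap pressure.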

@Larsk97

Larsk97 commented Nov 8, 2024

> starting beta.3 i don't see it anymore if I allocate 4GB, it does not work well with lower values
>
> @haroldboom try to reduce number of paranets you are syncing

I see it with 5 GB across my nodes, with 8 paranets.
