Enormous memory usage #15

Open
opened 2021-03-04 08:22:41 +00:00 by thomas · 5 comments
Member

Hi,

just noticed that my activitypub-instance consumes more than 14 GB of RAM... which seems very unusual to me. I guess there might be some room for memory optimization? ;-)

The relay has been running for less than two weeks. There are 81 instances registered. At the time I was using version 83d56cb5707199c6ccd91fa3d2779d986bb4b17c.

![2021-03-04-091709_grim](/attachments/8749e514-7cb9-4d87-9c28-c554b54c949a)
Member

Hey, I know this issue has been open for forever, but I was just reading it and noticed that you have (had) a similar number of instances subscribed as mine (yours: 81, mine: 72) and I'm not seeing any resource issues. Are you still having issues, or even still running this relay?

Author
Member

Unfortunately the issue does still persist with version 0.1.0. I suspect that a high number of unavailable instances and failing connections might trigger this behavior. In the meantime the number of connected instances has increased to ~200, and 30 of them are no longer available, plus there are a couple of instances with failing SSL.

Maybe there's something wrong with handling such numbers of failing connections...
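One way failing connections can cause unbounded growth is if messages for dead instances keep getting queued or retried forever. As a minimal sketch of the mitigation idea (this is hypothetical illustration code, not the relay's actual implementation — `DeliveryTracker` and `MAX_FAILURES` are invented names):

```python
from collections import defaultdict

MAX_FAILURES = 10  # hypothetical threshold before giving up on an inbox


class DeliveryTracker:
    """Track consecutive delivery failures per instance inbox and stop
    queueing messages for inboxes that look permanently dead, so their
    backlog can't grow without bound."""

    def __init__(self, max_failures: int = MAX_FAILURES):
        self.max_failures = max_failures
        self.failures = defaultdict(int)

    def record(self, inbox: str, ok: bool) -> None:
        if ok:
            # A successful delivery resets the counter for that inbox.
            self.failures.pop(inbox, None)
        else:
            self.failures[inbox] += 1

    def should_deliver(self, inbox: str) -> bool:
        # Skip inboxes that have failed too many times in a row.
        return self.failures[inbox] < self.max_failures
```

If the relay instead retains every pending request or response object for unreachable instances, ~30 dead instances plus SSL failures could plausibly account for steady memory growth.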
Member

Aiohttp's client seems to have [bad memory issues](https://github.com/aio-libs/aiohttp/issues?q=is%3Aissue+memory+leak) for quite a few people. Try upgrading python, aiohttp, or both.
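For anyone following the upgrade suggestion, something like this checks the current versions and pulls in the latest aiohttp (commands are a sketch; adjust for your environment, e.g. a virtualenv):

```shell
# Report the interpreter and aiohttp versions currently in use
python3 --version
python3 -c 'import aiohttp; print(aiohttp.__version__)'

# Upgrade aiohttp to the newest release pip can resolve
pip3 install --upgrade aiohttp
```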
Member

I have 2 systems that have relays on them - one of them is publicly used, the other is new and comparatively very quiet. The quiet system is running the relay under python3.7 and uses very little RAM even after being up for days.

The public system has >150 instances on it and it's under load all the time - using the relay under python3.7 seems to hold the RAM usage in check.

I've just gone through running it under several versions. I shouldn't comment on 3.8, as the relay didn't seem to work entirely properly under that version, but under python3.10 and 3.11 the RAM usage grows extremely aggressively - it just doesn't seem to stop, growing by roughly 1MB per second.

This is with all the python modules required up to date, including aiohttp. I see aiohttp still has memory leak bug reports, but just from this casual observation it seems like something else may be going on with the newer version of python.

@izalia If there's anything I could run on my system that could help identify the issue, please let me know.
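One low-effort thing that could be run on an affected system is the standard-library `tracemalloc` tracer, which attributes live allocations to source lines and would show whether the growth comes from aiohttp, the relay, or the interpreter itself. A minimal sketch (the `snapshot_top` helper is an invented name, not part of the relay):

```python
import tracemalloc

# Keep 25 stack frames per allocation so the reported sites are attributable.
tracemalloc.start(25)


def snapshot_top(n: int = 10) -> list:
    """Return the n largest allocation sites seen since tracing started."""
    snap = tracemalloc.take_snapshot()
    return [str(stat) for stat in snap.statistics("lineno")[:n]]


# Example: allocate something measurable, then inspect the top sites.
held = [bytes(1024) for _ in range(1000)]
for line in snapshot_top(5):
    print(line)
```

Dropping a call like `snapshot_top()` into a periodic task (or a debug endpoint) in the running relay and comparing the output an hour apart should point at the leaking allocation site.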
Member

Because I chimed in to this Issue yesterday, I should update with further findings:

Today I rebuilt python 3.11 with lto and pgo. Fired up the relay under python 3.11 and memory utilization is stable.

I'm a little surprised, but, I'll stick with running the relay under python3.11 for now as it seems to be running as smoothly as it ever has.
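For reference, a CPython 3.11 built with PGO and LTO as described above looks roughly like this (a sketch of a from-source build; paths and privileges depend on your system):

```shell
# From an unpacked CPython 3.11 source tree:
# --enable-optimizations turns on profile-guided optimization (PGO),
# --with-lto enables link-time optimization.
./configure --enable-optimizations --with-lto
make -j"$(nproc)"

# altinstall installs as python3.11 without overwriting the system python3
sudo make altinstall
```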
Reference
pleroma/relay#15