OpenReplay Pods Failing After One Day on t3.large EC2 Instance: Resource Issue or Configuration Problem?

Hello,
I installed OpenReplay following the installation guide, and it worked fine for a day.

However, when I checked the next day, I found that dozens of pods had failed.
The EC2 instance type is t3.large, as recommended in the guide, and it doesn't look like CPU or memory was actually the bottleneck.
I tried to check the logs of the failed pods, but I couldn't retrieve them properly.
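
For reference, these were roughly the commands I tried (assuming the default "app" namespace the installer uses; the pod name is a placeholder):

  # list pods and their states
  kubectl get pods -n app
  # logs of the previous, crashed container instance
  kubectl logs <pod-name> -n app --previous
  # events and the reason the pod failed or was evicted
  kubectl describe pod <pod-name> -n app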

  • Could this be a resource issue? If so, what should I do?
  • Why are the details of dozens of failed pods still present when I run openreplay -s? Could the fact that they remain be causing resource issues?

Has anyone encountered a similar situation or can provide assistance?


After expanding the /dev/nvme0n1p1 partition and removing unnecessary resources, everything is working fine again. Is 50GB of disk space sufficient?

[INFO] Disk
Filesystem      Size  Used  Avail  Use%  Mounted on
/dev/nvme0n1p1   50G  8.1G    42G   17%
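
For anyone hitting the same thing, these are roughly the steps I took (the device name matches my instance; I'm assuming the default "app" namespace and an ext4 root filesystem, so adjust as needed):

  # grow the partition after enlarging the EBS volume in the AWS console
  sudo growpart /dev/nvme0n1 1
  # grow the filesystem (use xfs_growfs instead if the root filesystem is XFS)
  sudo resize2fs /dev/nvme0n1p1
  # clean up the leftover Failed pod records
  kubectl delete pods -n app --field-selector=status.phase=Failed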

It really depends on how many replays you’re capturing. 50GB is the minimum.
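
If you want to keep an eye on it going forward, a couple of generic checks (not OpenReplay-specific) are usually enough:

  # overall disk usage on the node
  df -h
  # persistent volume claims and their requested sizes
  kubectl get pvc -A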