Hello everyone,
I have a cluster running the latest GlusterFS on Debian 12 with three nodes,
but when I run a simple du -sh on /var/www the node goes into the "N" state
and doesn't come back to "Y", so I have to restart the glusterd daemon
manually.

If I don't run backup, rsync, du, etc. the cluster works well. I also used
vmstat to check RAM, CPU and disk; each node has 4 vCPUs at about 70% and
8GB of RAM, with free RAM roughly 300MB, used about 1.5GB and buffer/cache
about 5GB.

I read on the internet that AWS starts from 16GB of RAM for GlusterFS, and
other documents say to use 12GB of RAM. Do you have any experience with
this?
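In case it helps, this is more or less how I'm watching it while the du
runs; the volume name "www-vol" here is just a placeholder for my real one:

    # one terminal: watch the Online (Y/N) column for the bricks
    watch -n 5 'gluster volume status www-vol'

    # another terminal: memory and I/O every 5 seconds
    vmstat 5

    # and then the command that triggers the problem
    du -sh /var/www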
^Bart
On 5/07/2025 4:41 pm, ^Bart wrote:
> Hello everyone,
>
> I have a cluster running the latest GlusterFS on Debian 12 with three
> nodes, but when I run a simple du -sh on /var/www the node goes into the
> "N" state and doesn't come back to "Y", so I have to restart the glusterd
> daemon manually.
>
> If I don't run backup, rsync, du, etc. the cluster works well. I also used
> vmstat to check RAM, CPU and disk; each node has 4 vCPUs at about 70% and
> 8GB of RAM, with free RAM roughly 300MB, used about 1.5GB and buffer/cache
> about 5GB.
Free Ram 300MB approx
Used Ram 1.5GB
Buffer/cache 5GB approx
Total 8GB approx
Available Ram 8GB approx
.... so are you all full up??
> I read on the internet that AWS starts from 16GB of RAM for GlusterFS, and
> other documents say to use 12GB of RAM. Do you have any experience with
> this?
>
> ^Bart
If you clear your Buffer/cache, might things run better??
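If you want to test that, something along these lines (run as root) should
flush the page cache; it only drops clean cached data, so it's safe, just
slower until the cache warms up again:

    sync
    echo 3 > /proc/sys/vm/drop_caches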
On Sat, 5 Jul 2025 20:08:15 +1000, Daniel70
<daniel47@eternal-september.org> wrote in <104atij$1e1pj$1@dont-email.me>:
> On 5/07/2025 4:41 pm, ^Bart wrote:
>> Hello everyone,
>>
>> I have a cluster running the latest GlusterFS on Debian 12 with three
>> nodes, but when I run a simple du -sh on /var/www the node goes into the
>> "N" state and doesn't come back to "Y", so I have to restart the glusterd
>> daemon manually.
>>
>> If I don't run backup, rsync, du, etc. the cluster works well. I also used
>> vmstat to check RAM, CPU and disk; each node has 4 vCPUs at about 70% and
>> 8GB of RAM, with free RAM roughly 300MB, used about 1.5GB and buffer/cache
>> about 5GB.
>
> Free Ram 300MB approx
> Used Ram 1.5GB
> Buffer/cache 5GB approx
> Total 8GB approx
> Available Ram 8GB approx
>
> .... so are you all full up??
>
>> I read on the internet that AWS starts from 16GB of RAM for GlusterFS, and
>> other documents say to use 12GB of RAM. Do you have any experience with
>> this?
>>
>> ^Bart
>
> If you clear your Buffer/cache, might things run better??
Hi Daniel,
The way Linux works, free memory gets used as buffer/cache. As more memory
is allocated, it is pulled back out of the buffer/cache. It's basically part
of the "free" memory, just being "borrowed" by the OS for better
performance.
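You can see that in free itself; the "available" column already accounts for
the reclaimable cache, so that's the number worth watching rather than
"free":

    # with your numbers, "available" should show a few GB,
    # even though "free" only shows ~300MB
    free -h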
Have you seen anything in the logs? Maybe check
/var/log/glusterfs/glusterd.log.

It can be a lock that hasn't been released; sadly, the only fix is to
restart glusterd.
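Something like this is where I'd start; as far as I remember the service is
simply called glusterd on Debian:

    # follow the management daemon log while reproducing the du
    tail -f /var/log/glusterfs/glusterd.log

    # if a node is stuck in "N", restart only the management daemon
    systemctl restart glusterd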
It's quite a few years since I used GlusterFS, but back then at work we had
quite large Dell servers (64GB of RAM, 2 CPUs with 8 cores each) with
SAN-based storage as nodes, and the system still degraded under heavy
read/write. In the end those were replaced by standard NFS servers, which
gave more stability, with replication from one SAN to another; sure, not a
fully HA solution.

gluster.org does write, for basic nodes: 2 CPUs, 4GB of RAM each, 1 Gigabit
network.
> Have you seen anything in the logs? Maybe check
> /var/log/glusterfs/glusterd.log.
There's nothing wrong in normal operation, but when the system starts the
backup of some databases, more or less six of them, the node changes from Y
to N. It only happens with the most important db (1.9GB); it works fine with
the other dbs up to 1.4GB, and in the gluster logs I can read something like
"the node is disconnected" because it can't reach the other peers.
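This is roughly how I'm looking for it in the logs; the exact wording of the
message may differ on your version:

    # look for disconnect / peer messages around the backup window
    grep -iE 'disconnect|peer' /var/log/glusterfs/glusterd.log | tail -n 50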
> It can be a lock that hasn't been released; sadly, the only fix is to
> restart glusterd.
It's very sad that the only fix for the "N" is restarting the daemon :\ but
I could try to add more RAM, another 2GB, going from 8GB to 10GB.
> It's quite a few years since I used GlusterFS, but back then at work we had
> quite large Dell servers (64GB of RAM, 2 CPUs with 8 cores each) with
> SAN-based storage as nodes, and the system still degraded under heavy
> read/write. In the end those were replaced by standard NFS servers, which
> gave more stability, with replication from one SAN to another; sure, not a
> fully HA solution.

I think GlusterFS needs more than 8GB of RAM to avoid spikes when the system
does backups; even without the backup job there isn't a lot of free memory
(300-400MB), but at least there are no down nodes!

I'm watching CephFS, but on the internet I read it needs more RAM than what
I use now, so... as I wrote above, I think for now I could try to upgrade
the RAM and run tests on GlusterFS, because changing a "production cluster"
is not as easy as it sounds. I also know there are no future plans for
Gluster and I heard it will be discontinued, so... CephFS will be the only
alternative.
> gluster.org does write, for basic nodes: 2 CPUs, 4GB of RAM each, 1 Gigabit
> network.
I read that, but in a real production environment I think the CPU and RAM
requirements are a little bit different...