Hi,
Yesterday I moved my NFS server from the 10.0.0.0/24 network into the 10.0.1.0/24 network. Since then, no client can mount the shares anymore:
This is what it looks like on the server:
Code:
storage:~# cat /etc/exports | grep -v ^#
/srv/archive/iso *(rw,async,no_root_squash,no_subtree_check)
storage:~# exportfs -v
/srv/archive/iso
<world>(rw,async,wdelay,no_root_squash,no_subtree_check)
storage:~# netstat -tulpen
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State User Inode PID/Program name
tcp 0 0 0.0.0.0:2049 0.0.0.0:* LISTEN 0 8342 -
tcp 0 0 0.0.0.0:49805 0.0.0.0:* LISTEN 0 7871 3535/rpc.statd
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 0 7595 3359/portmap
tcp 0 0 0.0.0.0:50195 0.0.0.0:* LISTEN 0 8406 3779/rpc.mountd
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 0 6147 2571/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 0 6694 3062/exim4
tcp 0 0 0.0.0.0:34108 0.0.0.0:* LISTEN 0 8359 -
tcp 0 0 0.0.0.0:3260 0.0.0.0:* LISTEN 0 6749 3079/ietd
tcp6 0 0 :::22 :::* LISTEN 0 6145 2571/sshd
tcp6 0 0 :::3260 :::* LISTEN 0 6748 3079/ietd
udp 0 0 0.0.0.0:2049 0.0.0.0:* 0 8341 -
udp 0 0 0.0.0.0:50953 0.0.0.0:* 0 7868 3535/rpc.statd
udp 0 0 0.0.0.0:59856 0.0.0.0:* 0 8401 3779/rpc.mountd
udp 0 0 0.0.0.0:743 0.0.0.0:* 0 7861 3535/rpc.statd
udp 0 0 0.0.0.0:46958 0.0.0.0:* 0 8352 -
udp 0 0 0.0.0.0:111 0.0.0.0:* 0 7594 3359/portmap
storage:~# ifconfig eth0
eth0 Link encap:Ethernet HWaddr 00:0c:29:7b:d5:7b
inet addr:10.0.1.23 Bcast:10.0.1.255 Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fe7b:d57b/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:1334 errors:0 dropped:0 overruns:0 frame:0
TX packets:762 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:118862 (116.0 KiB) TX bytes:114235 (111.5 KiB)
storage:~#
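For reference, the portmapper registrations can also be listed directly on the server (plain rpcinfo/showmount usage, not run above; output depends on the setup):

```shell
# List the RPC programs currently registered with portmap,
# together with their protocol and port (nfs, mountd, status):
rpcinfo -p

# Ask mountd for the export list as the server itself sees it:
showmount -e localhost
```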
From the client, it looks like this:
Code:
mathias@portal:~$ nmap storage
Starting Nmap 4.62 ( http://nmap.org ) at 2010-02-21 17:29 CET
Interesting ports on storage.mathias-ewald.invalid (10.0.1.23):
Not shown: 1712 closed ports
PORT STATE SERVICE
22/tcp open ssh
111/tcp open rpcbind
2049/tcp open nfs
Nmap done: 1 IP address (1 host up) scanned in 0.094 seconds
mathias@portal:~$ sudo mount -t nfs storage:/srv/archive/iso tmp/
mount: wrong fs type, bad option, bad superblock on storage:/srv/archive/iso,
missing codepage or helper program, or other error
(for several filesystems (e.g. nfs, cifs) you might
need a /sbin/mount.<type> helper program)
In some cases useful info is found in syslog - try
dmesg | tail or so
mathias@portal:~$
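The RPC side can be probed from the client as well (standard rpcinfo/showmount calls; "storage" is the server from above, so the output depends on the setup):

```shell
# Check which RPC services the server's portmapper reports
# as registered, and on which ports:
rpcinfo -p storage

# Query mountd on the server for its export list; if this hangs
# or errors out, mountd is not reachable from this client:
showmount -e storage
```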
When I try to mount the share, nothing shows up directly in the logs on the server.
Code:
storage:~# tail -f /var/log/messages
Feb 21 17:24:32 storage kernel: [ 58.460298] Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
Feb 21 17:24:33 storage kernel: [ 58.497518] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
Feb 21 17:24:33 storage kernel: [ 58.497532] NFSD: starting 90-second grace period
Feb 21 17:25:07 storage kernel: [ 93.054170] nfsd: last server has exited
Feb 21 17:25:07 storage kernel: [ 93.054173] nfsd: unexporting all filesystems
Feb 21 17:25:08 storage kernel: [ 94.191476] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
Feb 21 17:25:08 storage kernel: [ 94.191490] NFSD: starting 90-second grace period
Feb 21 17:28:51 storage kernel: [ 316.960036] nfsd: peername failed (err 107)!
Feb 21 17:28:59 storage kernel: [ 325.413402] nfsd: peername failed (err 107)!
Feb 21 17:29:02 storage kernel: [ 328.385905] nfsd: peername failed (err 107)!
^C
storage:~# dmesg | tail
[ 58.460298] Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
[ 58.497518] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ 58.497532] NFSD: starting 90-second grace period
[ 93.054170] nfsd: last server has exited
[ 93.054173] nfsd: unexporting all filesystems
[ 94.191476] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ 94.191490] NFSD: starting 90-second grace period
[ 316.960036] nfsd: peername failed (err 107)!
[ 325.413402] nfsd: peername failed (err 107)!
[ 328.385905] nfsd: peername failed (err 107)!
storage:~# tail -f /var/log/daemon.log
Feb 21 11:59:35 storage mdadm[3238]: NewArray event detected on md device /dev/md0
Feb 21 12:10:58 storage mountd[3228]: Caught signal 15, un-registering and exiting.
Feb 21 16:54:33 storage mountd[3396]: Caught signal 15, un-registering and exiting.
Feb 21 17:20:12 storage mountd[4643]: Caught signal 15, un-registering and exiting.
Feb 21 17:21:59 storage mountd[4895]: Caught signal 15, un-registering and exiting.
Feb 21 17:22:10 storage rpc.statd[2378]: Caught signal 15, un-registering and exiting.
Feb 21 17:23:13 storage init: Switching to runlevel: 6
Feb 21 17:23:51 storage mdadm[3093]: NewArray event detected on md device /dev/md0
Feb 21 17:24:31 storage rpc.statd[3535]: Version 1.1.2 Starting
Feb 21 17:25:07 storage mountd[3743]: Caught signal 15, un-registering and exiting.
^C
storage:~#
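The "err 107" in those nfsd messages is a raw errno value; on Linux it can be looked up in the glibc/kernel headers (assuming the headers are installed at the usual path):

```shell
# errno 107 on Linux is ENOTCONN ("Transport endpoint is not
# connected"); confirm against the asm-generic errno header:
grep -w 107 /usr/include/asm-generic/errno.h
```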
Does anyone have an idea what's going on here?
cu
serow