
IPtables and NFS (RHCE)

The portmapper assigns each NFS service a port dynamically at service startup time. How do I allow legitimate NFS clients to access the NFS server through an iptables firewall on RHEL / Fedora / CentOS Linux 5.x?

You need to open the following ports:
a] TCP/UDP 111 - RPC 4.0 portmapper

b] TCP/UDP 2049 - NFSD (nfs server)

c] Portmap static ports - Various TCP/UDP ports defined in the /etc/sysconfig/nfs file (a quick way to check the current RPC port assignments is shown below).
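
To see how the RPC services are registered before making any changes, you can query the portmapper directly; a quick check, assuming the rpcinfo utility that ships with the portmap package:

# rpcinfo -p localhost

The output lists portmapper on 111 and nfs on 2049, while mountd, status (statd) and nlockmgr (lockd) typically show up on high, dynamically assigned ports - which is exactly why they need to be pinned down before firewalling.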

Configure NFS Services to Use Fixed Ports

NFS and portmap are fairly complex protocols. Firewalling should be done at each host and at the border firewalls to protect the NFS daemons from remote access, since NFS servers should never be reachable from outside the organization. By default, however, the portmapper assigns each NFS service a port dynamically at service startup time, and dynamic ports cannot be protected by port-filtering firewalls such as iptables. So the first step is to configure the NFS services to use fixed ports. Open /etc/sysconfig/nfs, enter:
# vi /etc/sysconfig/nfs
Modify the config directives as follows, choosing unused TCP/UDP ports:

# TCP port rpc.lockd should listen on.
LOCKD_TCPPORT=lockd-port-number
# UDP port rpc.lockd should listen on.
LOCKD_UDPPORT=lockd-port-number
# Port rpc.mountd should listen on.
MOUNTD_PORT=mountd-port-number
# Port rquotad should listen on.
RQUOTAD_PORT=rquotad-port-number
# Port rpc.statd should listen on.
STATD_PORT=statd-port-number
# Outgoing port statd should use. The default is a random port.
STATD_OUTGOING_PORT=statd-outgoing-port-number
Here is a sample listing from one of my production NFS servers:

LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
MOUNTD_PORT=892
RQUOTAD_PORT=875
STATD_PORT=662
STATD_OUTGOING_PORT=2020
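
If you prefer to script this change rather than edit the file by hand, the same values can be pinned with sed; a minimal sketch, assuming the stock RHEL 5 /etc/sysconfig/nfs where these directives ship commented out (back up the file first, and substitute your own port numbers):

# cp /etc/sysconfig/nfs /etc/sysconfig/nfs.bak
# sed -i -e 's/^#\?LOCKD_TCPPORT=.*/LOCKD_TCPPORT=32803/' \
  -e 's/^#\?LOCKD_UDPPORT=.*/LOCKD_UDPPORT=32769/' \
  -e 's/^#\?MOUNTD_PORT=.*/MOUNTD_PORT=892/' \
  -e 's/^#\?RQUOTAD_PORT=.*/RQUOTAD_PORT=875/' \
  -e 's/^#\?STATD_PORT=.*/STATD_PORT=662/' \
  -e 's/^#\?STATD_OUTGOING_PORT=.*/STATD_OUTGOING_PORT=2020/' /etc/sysconfig/nfs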
Save and close the file. Restart the portmap and NFS services:
# service portmap restart
# service nfs restart
# service rpcsvcgssd restart
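
To confirm that the daemons picked up the fixed ports, query the portmapper again and check the listening sockets; a quick sanity check (the grep pattern uses the example port numbers above - adjust it to your own values):

# rpcinfo -p localhost
# netstat -tulpn | grep -E ':(111|2049|32803|32769|892|875|662)'

mountd, status, nlockmgr and rquotad should now appear on the ports you configured instead of random high ports.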

Update the /etc/sysconfig/iptables file

Open /etc/sysconfig/iptables, enter:
# vi /etc/sysconfig/iptables
Add the following lines, ensuring that they appear before the final LOG and DROP lines for the RH-Firewall-1-INPUT chain:

-A RH-Firewall-1-INPUT -s 192.168.1.0/24 -m state --state NEW -p udp --dport 111 -j ACCEPT
-A RH-Firewall-1-INPUT -s 192.168.1.0/24 -m state --state NEW -p tcp --dport 111 -j ACCEPT
-A RH-Firewall-1-INPUT -s 192.168.1.0/24 -m state --state NEW -p tcp --dport 2049 -j ACCEPT
-A RH-Firewall-1-INPUT -s 192.168.1.0/24 -m state --state NEW -p tcp --dport 32803 -j ACCEPT
-A RH-Firewall-1-INPUT -s 192.168.1.0/24 -m state --state NEW -p udp --dport 32769 -j ACCEPT
-A RH-Firewall-1-INPUT -s 192.168.1.0/24 -m state --state NEW -p tcp --dport 892 -j ACCEPT
-A RH-Firewall-1-INPUT -s 192.168.1.0/24 -m state --state NEW -p udp --dport 892 -j ACCEPT
-A RH-Firewall-1-INPUT -s 192.168.1.0/24 -m state --state NEW -p tcp --dport 875 -j ACCEPT
-A RH-Firewall-1-INPUT -s 192.168.1.0/24 -m state --state NEW -p udp --dport 875 -j ACCEPT
-A RH-Firewall-1-INPUT -s 192.168.1.0/24 -m state --state NEW -p tcp --dport 662 -j ACCEPT
-A RH-Firewall-1-INPUT -s 192.168.1.0/24 -m state --state NEW -p udp --dport 662 -j ACCEPT
Save and close the file. Replace 192.168.1.0/24 with your actual LAN subnet/mask combination, and make sure the port numbers match the static values you defined in the /etc/sysconfig/nfs config file. Restart the iptables service:
# service iptables restart
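
Finally, verify that the rules were loaded and test access from a client on the allowed subnet; a rough check (the hostname nfsserver and the export path are placeholders - substitute your own):

# iptables -L RH-Firewall-1-INPUT -n --line-numbers

Then, from a client in 192.168.1.0/24:

# showmount -e nfsserver
# mount -t nfs nfsserver:/export /mnt

If showmount hangs or the mount times out, re-check that the port numbers in the iptables rules match the values in /etc/sysconfig/nfs.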
