Unless otherwise specified, eight servers per CPU for UDP transport are started.
The following options are available:
-r      Register the NFS service with rpcbind(8) without creating any servers. This option can be used along with the -u or -t options to re-register NFS if the rpcbind server is restarted.
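        For instance, if rpcbind(8) has been restarted, a running NFS service might be re-registered with an invocation along these lines:

              # Re-register both transports with rpcbind without starting new servers.
              nfsd -r -t -u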
-d      Unregister the NFS service with rpcbind(8) without creating any servers.
-V virtual_hostname
        Specifies a hostname to be used as a principal name, instead of the default hostname.
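        For instance, a server answering under a service alias rather than its machine hostname might use (the name is a placeholder):

              # Use nfs-alias.example.com, not the machine hostname, as the principal name.
              nfsd -t -u -V nfs-alias.example.com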
-n number
        Specifies how many servers to create. This option is equivalent to specifying --maxthreads and --minthreads with number as their argument.
--maxthreads threads
        Specifies the maximum number of servers that will be kept around to service requests.
--minthreads threads
        Specifies the minimum number of servers that will be kept around to service requests.
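        As an illustration, the following starts servers whose thread pool floats between 8 and 64 depending on load (the bounds are arbitrary):

              # Serve TCP and UDP; let the server pool grow from 8 up to 64 threads.
              nfsd -t -u --minthreads 8 --maxthreads 64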
-h bindip
        Specifies which IP address or hostname to bind to on the local host. This option is recommended when a host has multiple interfaces. Multiple -h options may be specified.
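        For example, a multi-homed server might answer NFS requests on only two of its addresses (the addresses shown are placeholders):

              # Bind the NFS service to these two local addresses only.
              nfsd -t -u -h 192.0.2.10 -h 192.0.2.11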
-a      Specifies that nfsd should bind to the wildcard IP address. This is the default if no -h options are given.
-p pnfs_setup
        Enables pNFS support in the server and specifies the information that the daemon needs to start it. This option can only be used on one server and specifies that this server will be the MetaData Server (MDS) for the pNFS service. This can only be done if there is at least one FreeBSD system configured as a Data Server (DS) for it to use.

        The pnfs_setup string is a set of fields separated by ',' characters:
        Each of these fields specifies one DS. It consists of a server hostname, followed by a ':' and the directory path where the DS's data storage file system is mounted on this MDS server. This can optionally be followed by a '#' and the mds_path, which is the directory path for an exported file system on this MDS. If this is specified, it means that this DS is to be used to store data files for this mds_path file system only. If this optional component does not exist, the DS will be used to store data files for all exported MDS file systems. The DS storage file systems must be mounted on this system before the nfsd is started with this option specified.
        For example:
              nfsv4-data0:/data0,nfsv4-data1:/data1
        would specify two DS servers called nfsv4-data0 and nfsv4-data1 that comprise the data storage component of the pNFS service. These two DSs would be used to store data files for all exported file systems on this MDS. The directories "/data0" and "/data1" are where the data storage servers' exported storage directories are mounted on this system (which will act as the MDS).
        Whereas, for the example:
              nfsv4-data0:/data0#/export1,nfsv4-data1:/data1#/export2
        would specify two DSs as above; however, nfsv4-data0 will be used to store data files for "/export1" and nfsv4-data1 will be used to store data files for "/export2".
        When using IPv6 addresses for DSs, be wary of using link-local addresses. The IPv6 address for the DS is sent to the client and there is no scope zone in it. As such, a link-local address may not work for a pNFS client to DS TCP connection. When parsed, nfsd will only use a link-local address if it is the only address returned by getaddrinfo(3) for the DS hostname.
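        In practice these flags are usually set through rc.conf(5) rather than by running nfsd by hand; a minimal sketch for an MDS, reusing the hostnames and mount paths from the first example above (the -n value is arbitrary), might look like:

              # /etc/rc.conf fragment -- illustrative values
              nfs_server_enable="YES"
              nfsv4_server_enable="YES"
              nfs_server_flags="-u -t -n 128 -p nfsv4-data0:/data0,nfsv4-data1:/data1"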
-m mirror_level
        This option is only meaningful when used with the -p option and specifies how many DSs will keep a copy of each data storage file. If mirroring is enabled, the server must use the Flexible File layout. If mirroring is not enabled, the server will use the File layout by default, but this default can be changed to the Flexible File layout if the sysctl(1) vfs.nfsd.default_flexfile is set non-zero.
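        As a sketch, a mirrored variant of the earlier two-DS setup might be started as follows, and a non-mirrored server could be switched to the Flexible File layout with the sysctl named above (both lines are illustrative):

              # Keep a copy of each data storage file on both DSs.
              nfsd -t -u -p nfsv4-data0:/data0,nfsv4-data1:/data1 -m 2
              # Optional: prefer the Flexible File layout even without mirroring.
              sysctl vfs.nfsd.default_flexfile=1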
-t      Serve TCP NFS clients.

-u      Serve UDP NFS clients.

-e      Ignored; included for backward compatibility.
For example, "nfsd -u -t -n 6" serves UDP and TCP transports using six daemons.
A server should run enough daemons to handle the maximum level of concurrency from its clients, typically four to six.
The nfsd utility listens for service requests at the port indicated in the NFS server specification; see Network File System Protocol Specification, RFC1094, NFS: Network File System Version 3 Protocol Specification, RFC1813, Network File System (NFS) Version 4 Protocol, RFC3530 and Network File System (NFS) Version 4 Minor Version 1 Protocol, RFC5661.
If nfsd detects that NFS is not loaded in the running kernel, it will attempt to load a loadable kernel module containing NFS support using kldload(2). If this fails, or no NFS KLD is available, nfsd will exit with an error.
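To avoid the runtime load attempt, the kernel module can be loaded explicitly beforehand; assuming the standard module name, something like:

      # Load the NFS server module now; preload it at boot via loader.conf(5) instead if preferred.
      kldload nfsd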
If nfsd is to be run on a host with multiple interfaces or interface aliases, use of the -h option is recommended.
If the server has stopped servicing clients and has generated a console message like "nfsd server cache flooded...", the value for vfs.nfsd.tcphighwater needs to be increased. This should allow the server to again handle requests without a reboot. Also, you may want to consider decreasing vfs.nfsd.tcpcachetimeo from 12 hours to several minutes (the value is specified in seconds) when this occurs.
Unfortunately, making vfs.nfsd.tcphighwater too large can result in the mbuf limit being reached, as indicated by a console message like "kern.ipc.nmbufs limit reached". If you cannot find settings for the above sysctls that work, you can disable the DRC cache for TCP by setting vfs.nfsd.cachetcp to 0.
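As an illustration only (the limit value is arbitrary and workload dependent), the tuning described above might look like:

      # Raise the DRC limit and shorten the TCP cache timeout to 5 minutes.
      sysctl vfs.nfsd.tcphighwater=100000
      sysctl vfs.nfsd.tcpcachetimeo=300
      # Last resort: disable the DRC for TCP entirely.
      sysctl vfs.nfsd.cachetcp=0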
The nfsd utility has to be terminated with SIGUSR1 and cannot be killed with SIGTERM or SIGQUIT. The nfsd utility needs to ignore these signals in order to stay alive as long as possible during a shutdown, otherwise loopback mounts will not be able to unmount. If you have to kill nfsd, just do a "kill -USR1 <PID of master nfsd>".
If mirroring is enabled via the -m option, the pNFS service can continue to function when a mirrored DS fails; see pnfsserver(4) for details on taking a failed DS offline and recovering it.
NFSD(8)                      February 14, 2019