When it comes to computer networks, you can often hear the mention of NFS. What does this acronym mean?

It is a distributed file system protocol originally developed by Sun Microsystems in 1984 that allows a user on a client computer to access files over a network much as if they were on local storage. NFS, like many other protocols, is built on the Open Network Computing Remote Procedure Call (ONC RPC) system.

In other words, what is NFS? It is an open standard, defined in Request for Comments (RFC) documents, so anyone can implement the protocol.

Versions and variations

Sun used the first version only for in-house experimental purposes. When the development team added substantial changes to the original NFS and released the new version outside of Sun, they designated it v2, so that interoperability between implementations could be tested and a fallback version would exist.

NFS v2

Version 2 originally worked only over the User Datagram Protocol (UDP). Its designers meant to keep the server side stateless, with locking implemented outside of the core protocol.

The virtual file system interface allows a modular implementation, reflected in a simple protocol. By February 1986, implementations were demonstrated for operating systems such as System V release 2, DOS, and VAX/VMS (using Eunice). Due to 32-bit limitations, NFS v2 allowed only the first 2 GB of a file to be read.

NFS v3

The first proposal to develop NFS version 3 at Sun Microsystems was announced shortly after the release of version 2. The main motivation was to mitigate the performance problem of synchronous writes. By July 1992, practical improvements had resolved many of the shortcomings of NFS version 2, leaving only the lack of large-file support (64-bit file sizes and offsets) as a pressing issue. Version 3 brought:

  • support for 64-bit file sizes and offsets, to handle files larger than 2 gigabytes (GB);
  • support for asynchronous writes on the server, to improve performance;
  • additional file attributes in many replies, to avoid having to fetch them again;
  • a READDIRPLUS operation, to get file handles and attributes along with file names when scanning a directory;
  • many other improvements.

During the introduction of version 3, support for TCP as a transport-layer protocol began to grow. Using TCP as a transport made NFS over a WAN more practical and allowed larger read and write transfer sizes, overcoming the 8 KB limit imposed by the User Datagram Protocol (UDP).

What is NFS v4?

Version 4, influenced by the Andrew File System (AFS) and Server Message Block (SMB, also called CIFS), includes performance improvements, better security, and a stateful protocol.

Version 4 was the first version developed by the Internet Engineering Task Force (IETF) after Sun Microsystems handed over development of the protocol.

NFS version 4.1 aims to provide protocol support for leveraging clustered server deployments, including the ability to provide scalable concurrent file access across multiple servers (pNFS extension).

The newest file system protocol, NFS 4.2 (RFC 7862), was officially released in November 2016.

Other extensions

As the standard developed, appropriate tools appeared for working with it. For example, WebNFS, an extension for versions 2 and 3, allows the Network File System to integrate more easily into web browsers and to work through firewalls.

Various third-party protocols have also become associated with NFS. The best known of them are:

  • Network Lock Manager (NLM) with byte-range locking support (added to support the UNIX System V file-locking API);
  • remote quota (RQUOTAD), which allows NFS users to view storage quotas on NFS servers;
  • NFS over RDMA - an adaptation of NFS that uses remote direct memory access (RDMA) as the transport;
  • NFS-Ganesha - a user-space NFS server that supports CephFS through its FSAL (File System Abstraction Layer) using libcephfs.

Platforms

The Network File System is often used with Unix operating systems (such as Solaris, AIX, HP-UX), Apple's macOS, and Unix-like operating systems (such as Linux and FreeBSD).

It is also available for platforms such as Acorn RISC OS, OpenVMS, MS-DOS, Microsoft Windows, Novell NetWare, and IBM AS/400.

Alternative remote file access protocols include Server Message Block (SMB, also called CIFS), Apple Filing Protocol (AFP), NetWare Core Protocol (NCP), and the OS/400 Server File System (QFileSvr.400).

This is because the requirements and assumptions of NFS are oriented mostly toward Unix-like systems.

At the same time, the SMB and NetWare (NCP) protocols are used more often than NFS on systems running Microsoft Windows. AFP is most common on Apple Macintosh platforms, and QFileSvr.400 is most common on OS/400.

Typical implementation

Assuming a typical Unix-style scenario where one computer (client) needs access to data stored on another (NFS server):

  • The server implements Network File System processes, started by default as nfsd, to make its data available to clients. The server administrator determines which directories to export and with what options, usually via the /etc/exports configuration file and the exportfs command.
  • Administration of server security ensures that the server can recognize and approve a verified client, and its network configuration ensures that the appropriate clients can negotiate with it through any firewall.
  • The client machine requests access to the exported data, usually by issuing a mount command. It queries rpcbind on the server for the port used by NFS and then connects to it.
  • If all goes well, users on the client machine can view and interact with the mounted file systems on the server, within the allowed options.

It should be noted that this Network File System workflow can also be automated, perhaps using /etc/fstab and/or other similar means.
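A minimal sketch of this flow, assuming a hypothetical server host named server exporting a /srv/share directory to the 192.168.1.0/24 network:

# on the server: export the directory and publish the export table
echo '/srv/share 192.168.1.0/24(ro,sync)' | sudo tee -a /etc/exports
sudo exportfs -a

# on the client: mount the export at /mnt
sudo mount -t nfs server:/srv/share /mnt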

Development to date

By the 21st century, the rival protocols DFS and AFS had not achieved any major commercial success over the Network File System. IBM, which had previously acquired all commercial rights to these technologies, donated most of the AFS source code to the free software community in 2000; the OpenAFS project still exists today. In early 2005, IBM announced the end of sales of AFS and DFS.

In turn, in January 2010, Panasas introduced NFS v4.1, a technology that improves concurrent data access. The Network File System v4.1 protocol defines a method for separating file system metadata from the location of the file data, so it goes beyond simple name/data separation.

What is NFS of this version in practice? The feature above distinguishes it from the traditional protocol, in which file names and file data are bound to one and the same server. With Network File System v4.1, some files can be distributed across multiple servers, but the client's participation in separating metadata from data is limited.

In the v4.1 implementation of the NFS protocol, the server is a set of server resources or components, which are assumed to be controlled by the metadata server.

The client still contacts a single metadata server (MDS) to traverse or interact with the namespace. When it moves files to and from the server, it can interact directly with the set of servers holding the file data.

NFS, or Network File System, is a popular network file system protocol that allows users to mount remote network directories on their machine and transfer files between servers. You can use disk space on another machine for your files and work with files located on other servers. In effect, it is an alternative to Windows shared folders for Linux; unlike Samba, it is implemented at the kernel level and works more stably.

This article will walk you through installing NFS on Ubuntu 16.04. We will cover installing all the necessary components, setting up a shared folder, and connecting network folders.

As already mentioned, NFS is a network file system. To work, you need a server that hosts the shared folder, and clients that can mount the network folder like a regular disk in the system. Unlike other protocols, NFS provides transparent access to remote files: programs see the files as if they were in a regular local file system and work with them as with local files. NFS returns only the requested part of a file instead of the whole file, so this file system works well on systems with a fast local network or Internet connection.

Installing NFS Components

Before we can work with NFS, we need to install a few programs. On the machine that will be the server, you need to install the nfs-kernel-server package, which serves NFS shares on Ubuntu 16.04. To do this, run:

sudo apt install nfs-kernel-server

Now let's check whether the server was installed correctly. The NFS service listens for connections over both TCP and UDP on port 2049. You can check whether these ports are actually in use with the command:

rpcinfo -p | grep nfs
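If the server registered itself correctly, the output should contain lines similar to these (the exact versions and transports may differ):

100003  3  tcp  2049  nfs
100003  3  udp  2049  nfs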

It is also important to check whether NFS is supported at the kernel level:

cat /proc/filesystems | grep nfs

If nfs appears in the output, everything works; if not, you need to load the nfs kernel module manually:

sudo modprobe nfs

Let's add nfs to startup as well:

sudo systemctl enable nfs-kernel-server

On the client computer, you need to install the nfs-common package to be able to work with this file system. You do not need to install the server components; this package alone is enough:

sudo apt install nfs-common
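With nfs-common installed, you can also check from the client which directories a server exports before mounting anything (substitute your server's address for 127.0.0.1):

showmount -e 127.0.0.1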

Setting up an NFS server on Ubuntu

We can open NFS access to any folder, but let's create a new one for this purpose:

sudo mkdir /var/nfs

All exported directories are described in the /etc/exports file, one per line, in the following format:

folder_address client(options)

The folder address is the folder you want to make available over the network. Client is the IP address or network address from which this folder can be accessed. The options are a little more complicated; let's consider some of them:

  • rw - allow reading and writing in this folder
  • ro - allow reading only
  • sync - reply to requests only after the data has been written to disk (default)
  • async - do not block connections while data is being written to disk
  • secure - use only ports below 1024 for the connection
  • insecure - use any ports
  • nohide - do not hide subdirectories when sharing several directories
  • root_squash - map requests from root to the anonymous user
  • all_squash - make all requests anonymous
  • anonuid and anongid - set the uid and gid of the anonymous user

For example, for our folder, this line might look like this:

/var/nfs 127.0.0.1(rw,sync,no_subtree_check)

When everything is set up, it remains to update the NFS export table:

sudo exportfs -a

That's it, sharing folders over NFS on Ubuntu 16.04 is complete. Now let's configure the client and try to mount the share.

NFS connection

We will not dwell on this topic in detail in today's article; it is a fairly large subject that deserves a separate article. But I will say a few words.

To mount a network folder you don't need any special Ubuntu NFS client; just use the mount command:

sudo mount 127.0.0.1:/var/nfs/ /mnt/

Now you can try to create a file in the mounted directory, for example:

touch /mnt/test

We'll also look at the mounted filesystems with df:

127.0.0.1:/var/nfs  30G  6.7G  22G  24%  /mnt

To unmount this file system, just use the standard umount:

sudo umount /mnt/

Conclusions

This article looked at setting up NFS on Ubuntu 16.04. As you can see, everything is done very simply and transparently: connecting NFS shares takes just a few standard commands, and sharing folders over NFS is not much more difficult than mounting them. If you have any questions, write in the comments!



What is an NFS file?

The NFS filename suffix is mainly used for temporary files created by the Network File System. The NFS file format is found on the Linux system platform and belongs to the Misc Files category. For managing NFS files, software supporting the Network File System is recommended.

Programs which support NFS file extension

Files with the NFS extension, like any other file format, can be found on any operating system. Such files can be transferred to other devices, whether mobile or stationary, but not all systems are able to process them properly.

How to open an NFS file?

NFS access problems can be caused by various reasons. Fortunately, the most common problems with NFS files can be solved without deep knowledge of IT, and most importantly, in a matter of minutes. Below is a list of guidelines to help you identify and resolve file-related problems.

Step 1. Install Network File System software

Problems with opening and working with NFS files most probably stem from the lack of software compatible with NFS files on your machine. The solution to this problem is simple: install software that supports the Network File System on your device. One of the safest ways to obtain software is to use the official distributor's links and download the installer from there.

Step 2. Update Network File System to the latest version

You still cannot access NFS files even though Network File System software is installed on your system? Make sure the software is up to date. Software creators sometimes add compatibility with newer file formats when updating their applications, which may be one reason why your NFS files will not open. All file formats handled by previous versions of a program should also open in the latest version.

Step 3. Associate Network File System temporary files with the appropriate software

If you have installed the latest version of the software and the problem persists, select it as the default program for opening NFS files on your device. The method is fairly simple and varies little across operating systems.

Windows

  • Right-click the NFS file to open a menu, then select the Open with option
  • Select Choose another app → More apps
  • To finish the process, select Look for another app on this PC and use Explorer to select the appropriate program. Check Always use this app to open NFS files and click OK.

Changing the default app on Mac OS

  • Right-click the selected NFS file, open the file menu, and choose Get Info.
  • Find the Open with option; click the title if it is hidden.
  • Select the appropriate program from the list and confirm by pressing Change All.
  • A window should appear with a message that this change will be applied to all files with the NFS extension. By clicking Continue, you confirm your choice.

Step 4. Make sure NFS is not faulty

If you followed the instructions in the previous steps and the issue is still not resolved, then you should check the NFS file in question. Problems with opening a file can arise for various reasons.

1. The NFS file may be infected with malware - make sure to scan it with an antivirus tool.

If the NFS file is indeed infected, malware may be blocking it from opening. Scan your system for viruses and malware as soon as possible, or use an online antivirus scanner. If the scanner finds the NFS file unsafe, follow the antivirus software's directions to neutralize the threat.

2. Make sure the structure of the NFS file is intact

If you received the problematic NFS file from a third party, ask them to provide another copy. The file may not have been copied to storage properly, leaving it incomplete and therefore impossible to open. This can happen if the download of the file was interrupted and the file data became corrupted. Download the file again from the same source.

3. Make sure you have the appropriate access rights

Sometimes a user needs administrator rights to access files. Sign out of your current account and sign in to an account with sufficient access rights, then open the Network File System temporary file.

4. Make sure your device meets the requirements to be able to open the Network File System

If the system has insufficient resources to open NFS files, try closing all currently running applications and try again.

5. Make sure that your operating system and drivers are updated

A regularly updated system, drivers, and programs keep your computer safe. This can also prevent problems with Network File System temporary files. It is possible that one of the available system or driver updates will solve the problems with NFS files affecting older versions of the software.

Details of the .nfs extension answer questions such as:

  • What is a .nfs file?
  • What software do I need to open a .nfs file?
  • How can a .nfs file be opened, edited, or printed?
  • How can .nfs files be converted to a different format?

We hope you find this page a useful and valuable resource!



Can't open a .nfs file?

When you double-click a file to open it, Windows checks the file name extension. If Windows recognizes the file name extension, the file opens in the program associated with that file name extension. When Windows does not recognize the file name extension, the following message appears:

Windows cannot open this file:

Example.nfs

To open this file, Windows needs to know which program you want to use to open it ...

If you don't know how to set up file associations for .nfs, see Step 3 above.

Can I change the file extension?

Changing the filename extension is not a good idea. When you change the extension, you change the way programs on your computer read the file, but changing the extension does not change the file format itself.



Good day, readers and guests. There was a very long break between posts, but I'm back in action). In today's article I will look at how the NFS protocol works, as well as at configuring an NFS server and an NFS client on Linux.

An introduction to NFS

NFS (Network File System) is, in my opinion, an ideal solution for a local network where fast data exchange is needed (faster than SAMBA and less resource-intensive than encrypted remote file systems such as sshfs or SFTP) and where the security of the transmitted information is not paramount. The NFS protocol lets you mount remote file systems over the network into the local directory tree as if they were mounted disk file systems, so local applications can work with the remote file system as with a local one. But you need to be careful(!) with NFS configuration, because with a certain configuration it is possible to hang the client's operating system waiting for endless I/O. The NFS protocol is based on the RPC protocol, which does not yet lend itself to my understanding)), so the material in this article will be a little vague... Before you can use NFS, be it a server or a client, you must make sure that your kernel has support for the NFS file system. You can check this by looking for the corresponding lines in the file /proc/filesystems:

ARCHIV ~ # grep nfs /proc/filesystems
nodev   nfs
nodev   nfs4
nodev   nfsd

If the specified lines are missing from /proc/filesystems, you need to install the packages described below. This will most likely also install the dependent kernel modules to support the desired file systems. If, after installing the packages, NFS support is still not shown in this file, you will have to rebuild the kernel with this function enabled.

History of the Network File System

The NFS protocol was developed by Sun Microsystems and has 4 versions in its history. NFSv1 was developed in 1989 and was experimental, running over the UDP protocol; version 1 is described in RFC 1094. NFSv2 was released in the same year, 1989, described by the same RFC 1094 and also based on UDP, while allowing no more than 2 GB of a file to be read. NFSv3 was finalized in 1995 and is described in RFC 1813. The main innovations of the third version were support for large files, support for the TCP protocol and large TCP packets, which significantly sped up the technology. NFSv4 was finalized in 2000 and described in RFC 3010, then revised in 2003 and described in RFC 3530. The fourth version included performance improvements, support for various authentication mechanisms (in particular, Kerberos and LIPKEY via the RPCSEC GSS protocol) and access control lists (of both POSIX and Windows types). NFSv4.1 was approved by the IESG in 2010 and received the number RFC 5661. An important innovation of version 4.1 is the pNFS (Parallel NFS) specification, a mechanism for parallel access of an NFS client to the data of many distributed NFS servers. The presence of such a mechanism in the network file system standard will help build distributed "cloud" storage and information systems.

NFS server

Since NFS is a network file system, a working network is necessary (you can also read the article on network configuration). Next, the necessary packages must be installed. On Debian these are the packages nfs-kernel-server and nfs-common; in RedHat it is the package nfs-utils. And also, you need to enable the launch of the daemon at the required OS runlevels (the command in RedHat is /sbin/chkconfig nfs on, in Debian - /usr/sbin/update-rc.d nfs-kernel-server defaults).

Installed packages in Debian are run in the following order:

ARCHIV ~ # ls -la /etc/rc2.d/ | grep nfs
lrwxrwxrwx 1 root root 20 Oct 18 15:02 S15nfs-common -> ../init.d/nfs-common
lrwxrwxrwx 1 root root 27 Oct 22 01:23 S16nfs-kernel-server -> ../init.d/nfs-kernel-server

That is, nfs-common starts first, then the server itself, nfs-kernel-server. In RedHat the situation is similar, with the only exception that the first script is called nfslock and the server is simply called nfs. About nfs-common, the Debian site literally tells us the following: common files for NFS client and server; this package must be installed on a machine that will act as an NFS client or server. The package includes the programs lockd, statd, showmount, nfsstat, gssd and idmapd. By viewing the contents of the startup script /etc/init.d/nfs-common you can trace the following sequence of work: the script checks for the presence of the executable binary /sbin/rpc.statd; checks the files /etc/default/nfs-common, /etc/fstab and /etc/exports for parameters that require starting the daemons idmapd and gssd; starts the daemon /sbin/rpc.statd; then, before starting /usr/sbin/rpc.idmapd and /usr/sbin/rpc.gssd, checks for the presence of those executable binaries; then, for the daemon /usr/sbin/rpc.idmapd, it checks for the sunrpc, nfs and nfsd kernel modules, as well as support for the rpc_pipefs file system in the kernel (that is, its presence in /proc/filesystems); if everything succeeds, it starts /usr/sbin/rpc.idmapd. Additionally, for the daemon /usr/sbin/rpc.gssd, it checks for the rpcsec_gss_krb5 kernel module and starts the daemon.

If you view the contents of the NFS server startup script on Debian (/etc/init.d/nfs-kernel-server), you can follow this sequence: at startup, the script checks for the existence of the file /etc/exports, the availability of nfsd, and the availability of NFS file system support in the kernel (that is, in /proc/filesystems); if everything is in place, the daemon /usr/sbin/rpc.nfsd is started; the script then checks whether the parameter NEED_SVCGSSD is set (in the server settings file /etc/default/nfs-kernel-server) and, if it is, starts the daemon /usr/sbin/rpc.svcgssd; the last to start is the daemon /usr/sbin/rpc.mountd. From this script you can see that NFS server operation consists of the daemons rpc.nfsd and rpc.mountd, plus, if Kerberos authentication is used, the daemon rpc.svcgssd. In RedHat, the daemons rpc.rquotad and nfslogd are also started (for some reason I did not find information about this daemon in Debian, or the reasons for its absence; apparently it was removed...).

From this it becomes clear that the Network File System server consists of the following processes (read: daemons), located in the /sbin and /usr/sbin directories:

  • rpc.nfsd (RPC program number 100003) - the main server daemon, which serves client requests;
  • rpc.mountd (100005) - the mount daemon, which handles client requests to mount exported hierarchies;
  • rpc.statd (100024) - the status monitoring daemon;
  • lockd - the lock manager, registered with RPC as nlockmgr (100021);
  • rpc.rquotad - the remote quota daemon (started in RedHat, see above);
  • rpc.idmapd - the NFSv4 name/ID mapping daemon.

In NFSv4, when using Kerberos, daemons are additionally started:

  • rpc.gssd - the NFSv4 daemon that provides authentication methods through the GSS-API (Kerberos authentication). Runs on client and server.
  • rpc.svcgssd - the NFSv4 server daemon that provides server-side client authentication.

portmap and RPC protocol (Sun RPC)

In addition to the above packages, NFSv2 and v3 require the additional package portmap (in newer distributions replaced by the renamed rpcbind) to work correctly. This package is usually installed automatically with NFS as a dependency and implements the RPC server, that is, it is responsible for the dynamic assignment of ports for the services registered with the RPC server. Literally, according to the documentation, it is a server that converts Remote Procedure Call (RPC) program numbers into TCP/UDP port numbers. portmap operates on several entities: RPC calls or requests, TCP/UDP ports, protocol versions (tcp or udp), program numbers and program versions. The portmap daemon is started by the /etc/init.d/portmap script before the NFS services start.

In short, the job of an RPC (Remote Procedure Call) server is to process RPC calls (aka RPC procedures) from local and remote processes. Using RPC calls, services register themselves with (or remove themselves from) the port mapper (aka portmap, aka portmapper, aka rpcbind in newer versions), and clients use RPC calls directed at the port mapper to get the information they need. The user-friendly names of program services and their corresponding numbers are defined in the file /etc/rpc. As soon as a service has sent the corresponding request and registered itself with the RPC server in the port mapper, the RPC server assigns the service the TCP and UDP ports on which the service was started, and stores in the kernel the corresponding information about the running service (its name), the unique service number (in accordance with /etc/rpc), the protocol and port on which the service runs, and the service version, and provides this information to clients upon request. The port converter itself has program number 100000, version number 2, TCP port 111 and UDP port 111. Above, when listing the NFS server daemons, I indicated the main RPC program numbers. I have probably confused you a little with this paragraph, so I will say the main phrase that should make things clear: the main function of the port mapper is to return to the client the port on which the requested program is running. Accordingly, if a client needs to access RPC with a specific program number, it must first contact the portmap process on the server machine and determine the port number for communicating with the RPC service it needs.

The operation of an RPC server can be represented by the following steps:

  1. The port converter must be started first, usually at system boot. This creates a TCP endpoint and opens TCP port 111. It also creates a UDP endpoint that waits for a UDP datagram to arrive on UDP port 111.
  2. At startup, a program running through an RPC server creates a TCP endpoint and a UDP endpoint for each supported version of the program. (An RPC server can support multiple versions. The client specifies the required version when making an RPC call.) A dynamically assigned port number is assigned to each version of the service. The server registers each program, version, protocol, and port number by making the appropriate RPC call.
  3. When the RPC client program needs the information it needs, it calls a port mapper routine to obtain a dynamically assigned port number for a given program, version, and protocol.
  4. In response to this request, the server returns the port number.
  5. The client sends an RPC request message to the port number obtained in step 4. If UDP is used, the client simply sends a UDP datagram containing the RPC call message to the UDP port number on which the requested service is running. In response, the service sends a UDP datagram containing an RPC response message. If TCP is in use, the client does an active open to the TCP port number of the requested service and then sends an RPC call message over the established connection. The server responds with an RPC response message over the connection.

To obtain information from the RPC server, use the rpcinfo utility. When given the parameters -p host, the program lists all RPC programs registered on the host. Without a host, the program displays the services on localhost. Example:

ARCHIV ~ # rpcinfo -p
   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  59451  status
    100024    1   tcp  60872  status
    100021    1   udp  44310  nlockmgr
    100021    3   udp  44310  nlockmgr
    100021    4   udp  44310  nlockmgr
    100021    1   tcp  44851  nlockmgr
    100021    3   tcp  44851  nlockmgr
    100021    4   tcp  44851  nlockmgr
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100005    1   udp  51306  mountd
    100005    1   tcp  41405  mountd
    100005    2   udp  51306  mountd
    100005    2   tcp  41405  mountd
    100005    3   udp  51306  mountd
    100005    3   tcp  41405  mountd

As you can see, rpcinfo displays (in columns, left to right) the registered program number, version, protocol, port and name. You can use rpcinfo to unregister a program or to get information about an individual RPC service (see man rpcinfo for more options). As you can see, the portmapper version 2 daemons are registered on udp and tcp ports, rpc.statd version 1 on udp and tcp ports, the NFS lock manager versions 1, 3 and 4, the NFS server daemon versions 2, 3 and 4, and the mount daemon versions 1, 2 and 3.

The NFS server (more precisely, the rpc.nfsd daemon) receives requests from the client in the form of UDP datagrams on port 2049. Although NFS works with a port mapper, which would allow the server to use dynamically assigned ports, in most implementations UDP port 2049 is hardcoded for NFS.

Network File System Protocol Operation

Mount remote NFS

The process of mounting a remote NFS file system can be represented by the following steps:

  1. An RPC server is started on the server and client (usually at boot), serviced by the portmapper process and registered on the tcp / 111 and udp / 111 ports.
  2. Services are started (rpc.nfsd, rpc.statd, etc.), which register on the RPC server and register on arbitrary network ports (unless a static port is specified in the service settings).
  3. The mount command on the client computer sends the kernel a request to mount a network directory, specifying the file system type, the host and the directory itself; the kernel sends an RPC request to the portmap process on the NFS server on port udp/111 (unless the client is configured to work over tcp).
  4. portmap on the NFS server checks for the presence of the rpc.mountd daemon and returns to the client kernel the network port on which the daemon is running.
  5. mount sends an RPC request to the port on which rpc.mountd is running. Now the NFS server can validate the client based on its IP address and port number, to decide whether the client may mount the specified file system.
  6. The mount daemon returns a handle (description) of the requested file system.
  7. The client's mount command issues the mount system call to bind the file handle obtained in step 6 to the local mount point on the client host. The file handle is stored in the NFS client code, and from this point on, any access by user processes to files on the server's file system will use that handle as a starting point.

Communication between client and NFS server

Typical access to a file located on the NFS server can be described by the following steps:

  1. The client (user process) doesn't care if it gets access to a local file or an NFS file. The kernel deals with the interaction with the hardware through kernel modules or built-in system calls.
  2. The kernel module kernel/fs/nfs/nfs.ko acts as an NFS client and sends RPC requests to the NFS server through the TCP/IP module. NFS traditionally uses UDP, but newer implementations can use TCP.
  3. The NFS server receives requests from the client as UDP datagrams on port 2049. Although NFS can work with a port mapper, which would allow the server to use dynamically assigned ports, in most implementations UDP port 2049 is hardcoded for NFS.
  4. When the NFS server receives a request from a client, it is passed to the local file access routine, which provides access to the local disk on the server.
  5. The result of the disk access is returned to the client.

Setting up an NFS server

Setting up the server generally consists of specifying, in the file /etc/exports, the local directories that remote systems are allowed to mount. This action is called exporting a directory hierarchy. The main sources of information about exported directories are the following files:

  • /etc/exports - the main configuration file, storing the configuration of the exported directories. Used when starting NFS and by the exportfs utility.
  • /var/lib/nfs/xtab - contains a list of directories mounted by remote clients. Used by the rpc.mountd daemon when a client tries to mount a hierarchy (a mount record is created).
  • /var/lib/nfs/etab - a list of directories that can be mounted by remote systems, with all the parameters of the exported directories.
  • /var/lib/nfs/rmtab - a list of directories that are not exported at the moment.
  • /proc/fs/nfsd - a special file system (kernel 2.6) for managing the NFS server.
    • exports - a list of active exported hierarchies and the clients they were exported to, with parameters. The kernel gets this information from /var/lib/nfs/xtab.
    • threads - contains the number of threads (can also be changed)
    • filehandle - lets you obtain a pointer to a file
    • etc...
  • /proc/net/rpc - contains "raw" statistics, which can be read with nfsstat, as well as various caches.
  • /var/run/portmap_mapping - information about services registered with RPC

Note: in general, there are many interpretations and formulations of the purpose of the xtab, etab and rmtab files on the Internet, and I do not know whom to believe. Even on http://nfs.sourceforge.net/ the interpretation is not unambiguous.

Configuring the / etc / exports file

In the simplest case, the / etc / exports file is the only file that needs editing to configure the NFS server. This file controls the following aspects:

  • which clients can access files on the server
  • which directory hierarchies on the server each client can access
  • how client user names are mapped to local user names

Each line in the exports file has the following format:

export_point client1(options) [client2(options) ...]

Here export_point is the absolute path of the exported directory hierarchy; client1..n are the names or IP addresses of one or more clients, separated by spaces, that are allowed to mount export_point; and options describe the mount rules for the client listed before them.

Here is a typical example of an exports file configuration:

ARCHIV ~ # cat /etc/exports
/archiv1  files(rw,sync)  10.0.0.1(ro,sync)  10.0.230.1/24(ro,sync)

In this example, the computers files and 10.0.0.1 are allowed access to the export point /archiv1: the host files has read/write access, while the host 10.0.0.1 and the subnet 10.0.230.1/24 have read-only access.

Host descriptions in / etc / exports are allowed in the following format:

  • The names of individual nodes are given as files or files.DOMAIN.local.
  • Domain wildcards are given in the following format: *.DOMAIN.local matches all nodes of the DOMAIN.local domain.
  • Subnets are specified as IP address/mask pairs. For example, 10.0.0.0/255.255.255.0 includes all nodes whose addresses start with 10.0.0.
  • The name of a netgroup with access to the resource can be given as @myclients (when using an NIS server). An example line combining these forms follows below.
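For illustration, a single /etc/exports line combining these client forms might look like this (the host, domain and netgroup names are hypothetical):

/archiv1 files(rw,sync) *.DOMAIN.local(ro,sync) 10.0.0.0/255.255.255.0(ro,sync) @myclients(ro,sync)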

General export options for directory hierarchies

The exports file uses the following general options (the options used by default on most systems are listed first, with the non-default ones in brackets):

  • auth_nlm (no_auth_nlm) or secure_locks (insecure_locks) - specifies that the server should require authentication of lock requests (using the NFS Lock Manager protocol).
  • nohide (hide) - if the server exports two directory hierarchies, one nested (mounted) inside the other, the client needs to explicitly mount the second (child) hierarchy, otherwise its mount point will appear as an empty directory. The nohide option makes the second hierarchy visible without an explicit mount. (Note: I could not get this option to work...)
  • ro (rw) - allows only read (or also write) requests. (Ultimately, whether reading or writing is possible is determined by file system permissions; the server cannot distinguish a file read request from an execute request, so it allows reading if the user has read or execute permission.)
  • secure (insecure) - requires NFS requests to come from secure ports (below 1024), so that a program without root privileges cannot mount the directory hierarchy.
  • subtree_check (no_subtree_check) - if a subdirectory of a file system is exported, but not the entire file system, the server checks whether the requested file is in the exported subdirectory. Disabling the check decreases security but increases data transfer speed.
  • sync (async) - indicates that the server should reply to requests only after the changes made by those requests have been written to disk. The async option tells the server not to wait for data to be written to disk, which improves performance but decreases reliability, since data may be lost if the connection is dropped or equipment fails.
  • wdelay (no_wdelay) - tells the server to delay write requests if a subsequent write request is pending, writing the data in larger blocks. This improves performance with large write queues. no_wdelay tells the server not to postpone write commands, which can be useful if the server receives a large number of unrelated commands.

Exporting symbolic links and device files. When exporting a directory hierarchy that contains symbolic links, the link target must be accessible to the client (remote) system; that is, one of the following rules must hold: the link target exists on the client's file system, or the link target hierarchy is itself exported and mounted.

A device file refers to an interface; when you export a device file, that interface is exported. If the client system does not have a device of the same type, the exported device will not work. On the client system, when mounting NFS objects, you can use the nodev option so that device files in the mounted directories are not used.

The default options may vary between systems; they can be seen in the /var/lib/nfs/etab file. After describing an exported directory in /etc/exports and restarting the NFS server, all missing (read: default) options will be reflected in /var/lib/nfs/etab.
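For example, after exporting /archiv1 as above and restarting the server, the corresponding etab entry might look something like this (the exact set of filled-in defaults varies by distribution):

ARCHIV ~ # cat /var/lib/nfs/etab
/archiv1 files(rw,sync,wdelay,hide,secure,root_squash,no_all_squash,subtree_check,secure_locks,anonuid=65534,anongid=65534)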

User ID display (match) options

For a better understanding of what follows, I would advise you to read up on how Linux manages users. Each Linux user has its own UID and primary GID, described in the files /etc/passwd and /etc/group. The NFS server assumes that the operating system of the remote host has authenticated the users and assigned them the correct UIDs and GIDs. Exporting files gives users on the client system the same access to those files as if they were logged in directly on the server. Accordingly, when an NFS client sends a request to the server, the server uses the UID and GID to identify the user as on the local system, which can lead to some problems:

  • a user may not have the same identifiers on both systems and, accordingly, may be able to access another user's files;
  • since the root user's identifier is always 0, root is mapped to a local user depending on the specified options.

The following options define the rules for mapping remote users to local ones:

  • root_squash (no_root_squash) - with root_squash set, requests from the root user are mapped to the anonymous uid/gid, or to the user specified in the anonuid/anongid parameters.
  • no_all_squash (all_squash) - does not change the UID/GID of the connecting user. The all_squash option maps ALL users (not just root) as anonymous, or as specified in the anonuid/anongid parameters.
  • anonuid=UID and anongid=GID - explicitly set the UID/GID of the anonymous user.
  • map_static=/etc/file_maps_users - specifies a file in which the mapping of remote UIDs/GIDs to local UIDs/GIDs can be set.

An example of using a user mapping file:

ARCHIV ~ # cat /etc/file_maps_users
# Mapping users
# remote    local   comment
uid  0-50   1002    # mapping remote UIDs 0-50 to local UID 1002
gid  0-50   1002    # mapping remote GIDs 0-50 to local GID 1002

NFS Server Management

The NFS server is managed using the following utilities:

  • nfsstat
  • showmount
  • exportfs

nfsstat: NFS and RPC statistics

The nfsstat utility allows you to view the statistics of RPC and NFS servers. Command options can be viewed in man nfsstat.

showmount: display NFS status information

The showmount utility queries the rpc.mountd daemon on a remote host about mounted file systems. By default, a sorted list of clients is returned. Keys:

  • --all - displays a list of clients and mount points, indicating where each client mounted each directory. This information may not be reliable.
  • --directories - displays a list of mount points
  • --exports - displays the list of exported file systems from nfsd's point of view

If you run showmount without arguments, the console will display information about the systems that are allowed to mount local directories. For example, the ARCHIV host provides us with a list of exported directories with the IP addresses of the hosts that are allowed to mount the specified directories:

FILES ~ # showmount --exports archiv
Export list for archiv:
/archiv-big    10.0.0.2
/archiv-small  10.0.0.2

If you specify the hostname / IP in the argument, information about this host will be displayed:

ARCHIV ~ # showmount files
clnt_create: RPC: Program not registered    # this message tells us that the NFSd daemon is not running on the FILES host

exportfs: manage exported directories

This command serves the exported directories listed in the file /etc/exports; more precisely, it does not serve them but synchronizes them with the file /var/lib/nfs/xtab and removes nonexistent entries from xtab. exportfs runs when the nfsd daemon is started with the -r argument. With 2.6 kernels the exportfs utility communicates with the rpc.mountd daemon through the files in the /var/lib/nfs/ directory and does not talk to the kernel directly. Without parameters, it lists the currently exported file systems.

Exportfs options:

  • [client:directory-name] - add or remove the specified file system for the specified client
  • -v - display more information
  • -r - re-export all directories (synchronize /etc/exports and /var/lib/nfs/xtab)
  • -u - remove from the list of exported directories
  • -a - add or remove all file systems
  • -o - options, separated by commas (similar to the options used in /etc/exports; this also lets you change the options of file systems that are already exported)
  • -i - do not use /etc/exports when adding; use only the current command-line parameters
  • -f - reset the list of exported systems in the 2.6 kernel
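A few hypothetical invocations, reusing the export from the earlier example: re-export everything after editing /etc/exports, temporarily export /archiv1 read-only to one more host, then withdraw it again:

ARCHIV ~ # exportfs -r -v
ARCHIV ~ # exportfs -o ro,sync 10.0.0.2:/archiv1
ARCHIV ~ # exportfs -u 10.0.0.2:/archiv1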

NFS client

Before accessing a file on a remote file system, the client (the client OS) must mount it and obtain a handle to it from the server. NFS can be mounted with the mount command or with the help of one of the proliferating automatic mounters (amd, autofs, automount, supermount, superpupermount). The mounting process was demonstrated step by step above.

On NFS clients, no daemons need to be started; the client functions are performed by the kernel module kernel/fs/nfs/nfs.ko, which is used when mounting a remote file system. Exported directories from the server can be mounted on the client in the following ways:

  • manually using the mount command
  • automatically at boot, when mounting filesystems described in / etc / fstab
  • automatically using the autofs daemon

I will not consider the third method, autofs, in this article, due to the volume of information involved. Perhaps there will be a separate description in future articles.

Mounting the Network File System with the mount command

An example of using the mount command was given in an earlier post. Here is an example of using mount to mount an NFS file system:

FILES ~ # mount -t nfs archiv:/archiv-small /archivs/archiv-small
FILES ~ # mount -t nfs -o ro archiv:/archiv-big /archivs/archiv-big
FILES ~ # mount
.......
archiv:/archiv-small on /archivs/archiv-small type nfs (rw,addr=10.0.0.6)
archiv:/archiv-big on /archivs/archiv-big type nfs (ro,addr=10.0.0.6)

The first command mounts the exported directory /archiv-small on the server archiv to the local mount point /archivs/archiv-small with default options (i.e., read/write). Although the mount command in recent distributions can figure out which file system type is being used even without the type being specified, it is still desirable to give the -t nfs parameter. The second command mounts the exported directory /archiv-big on the server archiv to the local directory /archivs/archiv-big read-only (ro). The mount command without parameters clearly shows us the mount result. In addition to the read-only option (ro), other basic options can be specified when mounting NFS:

  • nosuid - this option prohibits the use of setuid programs from the mounted directory.
  • nodev (no device) - this option prohibits the use of character and block special files as devices.
  • lock (nolock) - enables NFS locking (default). nolock disables NFS locking (does not start the lockd daemon) and is useful for older servers that do not support NFS locking.
  • mounthost=name - the name of the host on which the NFS mount daemon, mountd, is running.
  • mountport=n - the port used by the mountd daemon.
  • port=n - the port used to connect to the NFS server (2049 by default if the rpc.nfsd daemon is not registered on the RPC server). If n=0 (the default), NFS queries the portmap on the server to determine the port.
  • rsize=n (read block size) - the number of bytes read at one time from the NFS server. Default: 4096.
  • wsize=n (write block size) - the number of bytes written at one time to the NFS server. Default: 4096.
  • tcp or udp - mount NFS via the TCP or UDP protocol, respectively.
  • bg - if access to the server is lost, retry in the background, so as not to block the system boot process.
  • fg - if access to the server is lost, retry in the foreground. This option can block the system boot process with repeated mount attempts; for this reason, fg is used primarily for debugging.
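To illustrate (reusing the hypothetical hosts and paths from above), a client mount that forbids setuid programs and device files and forces TCP with larger transfer blocks might look like this:

FILES ~ # mount -t nfs -o ro,nosuid,nodev,rsize=8192,wsize=8192,tcp archiv:/archiv-big /archivs/archiv-big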

Options Affecting Attribute Caching When NFS Mounts

File attributes, stored in inodes, such as modification time, size, hard links and owner, usually change rarely for regular files and even more rarely for directories. Many programs, such as ls, access files read-only and do not change file attributes or content, yet waste system resources on expensive network operations. To avoid this waste, these attributes can be cached. The kernel uses a file's modification time to determine whether the cache is stale, comparing the modification time in the cache with the modification time of the file itself. The attribute cache is periodically refreshed according to the given parameters:

  • ac (noac) (attribute cache) - enables attribute caching (default). Although the noac option slows down the server, it avoids stale attributes when several clients are actively writing to a shared hierarchy.
  • acdirmax=n (attribute cache directory maximum) - the maximum number of seconds NFS waits before refreshing directory attributes (default: 60 seconds)
  • acdirmin=n (attribute cache directory minimum) - the minimum number of seconds NFS waits before refreshing directory attributes (default: 30 seconds)
  • acregmax=n (attribute cache regular file maximum) - the maximum number of seconds NFS waits before refreshing the attributes of a regular file (default: 60 seconds)
  • acregmin=n (attribute cache regular file minimum) - the minimum number of seconds NFS waits before refreshing the attributes of a regular file (default: 3 seconds)
  • actimeo=n (attribute cache timeout) - overrides the values of all the options above. If actimeo is not specified, the values above take their defaults.

NFS error handling options

The following options control how NFS behaves when there is no response from the server or when I / O errors occur:

  • fg (bg) (foreground/background) - try to mount a failed NFS mount in the foreground/background.
  • hard (soft) - with hard, prints "server not responding" to the console when a timeout is reached and continues the mount attempts. With soft, a timeout reports an I/O error to the program that called the operation. (It is advised not to use the soft option.)
  • nointr (intr) (no interrupt) - prevents signals from interrupting file operations in a hard-mounted directory hierarchy when a long timeout is reached. intr enables interruption.
  • retrans=n (retransmission value) - after n minor timeouts, NFS generates a major timeout (default: 3). A major timeout stops operations or prints "server not responding" to the console, depending on whether hard or soft is specified.
  • retry=n (retry value) - the number of minutes the NFS service retries mount operations before giving up (default: 10000).
  • timeo=n (timeout value) - the number of tenths of a second the NFS service waits before retransmitting after an RPC or minor timeout (default: 7). The value increases with each timeout up to a maximum of 60 seconds, or until a major timeout occurs. On a busy network, with a slow server, or when the request passes through several routers or gateways, raising this value can improve performance.
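For instance, an interruptible hard mount that retries in the background and tolerates a slow link could be requested like this (the values are purely illustrative):

FILES ~ # mount -t nfs -o bg,hard,intr,timeo=30,retrans=5 archiv:/archiv-small /archivs/archiv-small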

Automatic NFS mount at boot (describing file systems in /etc/fstab)
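For example, to mount the archiv:/archiv-small export from the examples above automatically at boot, an /etc/fstab line might look like this (the options are illustrative):

archiv:/archiv-small  /archivs/archiv-small  nfs  rw,bg,hard,intr  0  0

The last two fields (the dump flag and the fsck order) are normally 0 for network file systems.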

To find the optimal timeo for a given transfer size (the rsize/wsize values), use the ping command:

FILES ~ # ping -s 32768 archiv
PING archiv.DOMAIN.local (10.0.0.6) 32768(32796) bytes of data.
32776 bytes from archiv.domain.local (10.0.0.6): icmp_req=1 ttl=64 time=0.931 ms
32776 bytes from archiv.domain.local (10.0.0.6): icmp_req=2 ttl=64 time=0.958 ms
32776 bytes from archiv.domain.local (10.0.0.6): icmp_req=3 ttl=64 time=1.03 ms
32776 bytes from archiv.domain.local (10.0.0.6): icmp_req=4 ttl=64 time=1.00 ms
32776 bytes from archiv.domain.local (10.0.0.6): icmp_req=5 ttl=64 time=1.08 ms
^C
--- archiv.DOMAIN.local ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4006ms
rtt min/avg/max/mdev = 0.931/1.002/1.083/0.061 ms

As you can see, when sending a packet of 32768 bytes (32 KB), its round-trip time from the client to the server hovers around 1 millisecond. If this time goes beyond, say, 200 ms, you should think about raising the timeo value so that it exceeds the observed exchange time by three to four times. Accordingly, it is advisable to run this test during heavy network load.

Starting NFS and Configuring Firewall

This note is copied from the blog http://bog.pp.ru/work/NFS.html, for which many thanks to its author!!!

Starting the NFS server and the mount, lock, quota and status daemons with "correct" (fixed) ports (for a firewall)

  • it is advisable to first unmount all resources on the clients
  • stop and disable rpcidmapd from starting if you do not plan to use NFSv4:
    chkconfig --level 345 rpcidmapd off
    service rpcidmapd stop
  • if necessary, enable the portmap, nfs and nfslock services to start:
    chkconfig --levels 345 portmap on      # or rpcbind
    chkconfig --levels 345 nfs on
    chkconfig --levels 345 nfslock on
  • if necessary, stop the nfslock and nfs services, start portmap/rpcbind and unload the modules:
    service nfslock stop
    service nfs stop
    service portmap start      # service rpcbind start
    umount /proc/fs/nfsd
    service rpcidmapd stop
    rmmod nfsd
    service autofs stop      # it must be started again later
    rmmod nfs
    rmmod nfs_acl
    rmmod lockd
  • open the following ports in the firewall:
    • for RPC: UDP/111, TCP/111
    • for NFS: UDP/2049, TCP/2049
    • for rpc.statd: UDP/4000, TCP/4000
    • for lockd: UDP/4001, TCP/4001
    • for mountd: UDP/4002, TCP/4002
    • for rpc.rquota: UDP/4003, TCP/4003
  • for the rpc.nfsd server, add the line RPCNFSDARGS="--port 2049" to /etc/sysconfig/nfs
  • for the mount server, add the line MOUNTD_PORT=4002 to /etc/sysconfig/nfs
  • to configure rpc.rquota, for new versions add the line RQUOTAD_PORT=4003 to /etc/sysconfig/nfs
  • to configure rpc.rquota on older versions (you must nevertheless have the quota package version 3.08 or newer), add to /etc/services: rquotad 4003/tcp and rquotad 4003/udp
  • check the adequacy of /etc/exports
  • start the services rpc.nfsd, mountd and rpc.rquota (rpcsvcgssd and rpc.idmapd are started at the same time, if you did not forget to remove them): service nfsd start, or in new versions service nfs start
  • for the lock server, on new systems add the lines LOCKD_TCPPORT=4001 and LOCKD_UDPPORT=4001 to /etc/sysconfig/nfs
  • for the lock server on legacy systems, add directly to /etc/modprobe[.conf]: options lockd nlm_udpport=4001 nlm_tcpport=4001
  • bind the status server rpc.statd to port 4000 with STATD_PORT=4000 (on old systems, run rpc.statd with the -p 4000 switch in /etc/init.d/nfslock)
  • start the lockd and rpc.statd services: service nfslock start
  • make sure all ports are bound properly using "lsof -i -n -P" and "netstat -a -n" (some of the ports are used by kernel modules, which lsof does not see)
  • if the server was used by clients before the "rebuild" and they could not be unmounted, you will have to restart the automatic mount services on the clients (am-utils, autofs)
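As a sketch, with iptables and the static ports chosen above, the firewall openings could look like this (adapt the chain and source restrictions to your own policy):

iptables -A INPUT -p tcp --dport 111 -j ACCEPT
iptables -A INPUT -p udp --dport 111 -j ACCEPT
iptables -A INPUT -p tcp --dport 2049 -j ACCEPT
iptables -A INPUT -p udp --dport 2049 -j ACCEPT
iptables -A INPUT -p tcp --dport 4000:4003 -j ACCEPT
iptables -A INPUT -p udp --dport 4000:4003 -j ACCEPT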

Example NFS Server and Client Configuration

Server config

If you want to make your NFS exported directory open and writable, you can use the all_squash option in combination with the anonuid and anongid options. For example, to set the rights of the user "nobody" in the group "nobody", you can do the following:

ARCHIV ~ # cat /etc/exports
# Read/write access for the client at 192.168.0.100, with rw access mapped to user 99 with gid 99
/files 192.168.0.100(rw,sync,all_squash,anonuid=99,anongid=99)

This also means that if you want access to the specified directory to work, nobody.nobody must be the owner of the exported directory.
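A minimal sketch of the remaining steps, assuming the /files export above (the client mount point /mnt/files is hypothetical):

ARCHIV ~ # chown -R nobody:nobody /files
FILES ~ # mount -t nfs archiv:/files /mnt/files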

man mount
man exports
http://publib.boulder.ibm.com/infocenter/pseries/v5r3/index.jsp?topic=/com.ibm.aix.prftungd/doc/prftungd/nfs_perf.htm - NFS performance from IBM.

Regards, Mc.Sim!