Managing Remote Mounts for Containerized Masking¶
This section describes how to mount an NFS mountpoint inside the Containerized Masking Engine. For information on file mountpoints for Virtual Machine Masking, please refer to Managing File Mounts.
In Containerized Masking, the administrator has much more control at the kubernetes layer. That advantage is used to simplify file system mounts for Containerized Masking. This document describes the process using NFS as an example mount point type.
Restriction
Filesystem mount points must be mounted as a subdirectory of /var/delphix/masking/remote-mounts/.
Restriction
In order for kubernetes to utilize a particular network filesystem, the underlying host will typically need to support that filesystem. In this example, to support mounting NFS filesystems, the underlying OS needs to be able to perform an NFS mount. This is typically enabled by installing the nfs-client package. For example, if the kubernetes cluster runs on top of a Debian-type Linux distro, the package would need to be installed using `apt install nfs-client` on each node to ensure all nodes have the necessary utilities to mount NFS filesystems.
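As a sketch, the node preparation might look like the following. Package names vary by distribution (for example, Debian and Ubuntu ship the NFS mount helper in nfs-common, while RHEL-family distros use nfs-utils), so treat the exact package names as assumptions to verify against your distro.

```shell
# Run on every kubernetes node (not inside a pod).

# Debian/Ubuntu:
sudo apt-get update && sudo apt-get install -y nfs-common

# RHEL/CentOS/Fedora (alternative):
# sudo yum install -y nfs-utils

# Verify the NFS mount helper is available:
mount.nfs -V
```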
Creating the Mountpoint Connection in kubernetes¶
To establish a remote mount using NFS, the first step is creating the NFS connection to the remote NFS host. This is accomplished using a special NFS persistent volume. It can be added to the beginning of the kubernetes-config.yaml file or created as separate config files just for this purpose. If separate config files are created, they must be applied before the main Pod config is applied.
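If you keep the PV and PVC in separate files, the apply order might look like the following sketch (the file names nfs-pv.yaml and nfs-pvc.yaml are assumptions; only kubernetes-config.yaml is named in this document):

```shell
# Apply the PV and PVC definitions first, then the main Pod config.
kubectl apply -f nfs-pv.yaml
kubectl apply -f nfs-pvc.yaml
kubectl apply -f kubernetes-config.yaml
```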
Both a Persistent Volume (PV) and a Persistent Volume Claim (PVC) are necessary, and the YAML for each looks like the following snippets.
NFS Persistent Volume YAML

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 500Mi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  storageClassName: nfs-storage
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    server: <your NFS server host>
    path: <the exported directory on the NFS server, for example /var/tmp/masking-mount>
```
NFS Persistent Volume Claim YAML

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  storageClassName: nfs-storage
  resources:
    requests:
      storage: 500Mi # change corresponding to actual requirements
```
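Once applied, you can check that the claim has bound to the volume before starting the Pod. This is a generic verification step, not something the original configuration requires:

```shell
# STATUS should show "Bound" once the claim has matched the volume.
kubectl get pv nfs-pv
kubectl get pvc nfs-pvc
```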
Using the Mountpoint in the Pod Configuration¶
Next, the newly created NFS PVC must be mounted into the application container. This is achieved by editing the existing Pod config YAML and adding two objects: first, attaching the PVC to the pod as a volume; second, linking that volume into the application container. This is demonstrated in the excerpt below.
Excerpt of kubernetes-config.yaml to show support for NFS volumes
```yaml
#
# Example of volume definition per Persistent Volume Claim
#
volumes:
  - name: nfs-pv-storage
    persistentVolumeClaim:
      claimName: nfs-pvc
containers:
  - image: delphix-masking-app:6.0.16.0-c1
    name: app
    ports:
      - containerPort: 8284
        name: http
    volumeMounts:
      - name: masking-persistent-storage
        mountPath: /var/delphix/masking
        subPath: masking
      - name: masking-persistent-storage
        mountPath: /var/delphix/postgresql
        subPath: postgresql
      #
      # Example of mounting an external volume
      #
      # Mount path is the directory on the `app` container to be mounted to the
      # remotely provided Persistent Volume.
      # It should always start with `/var/delphix/masking/remote-mounts`
      # and be followed by a customer-named sub-directory per mount.
      # That sub-directory will automatically be created on the Masking Engine `app` container.
      #
      - name: nfs-pv-storage
        mountPath: /var/delphix/masking/remote-mounts/nfs_example
```
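After the Pod starts, a quick way to confirm the NFS volume is mounted where expected is to inspect the path from inside the `app` container. The pod name below is a placeholder; substitute the actual name from `kubectl get pods`:

```shell
# Confirm the NFS export is mounted at the expected sub-directory.
kubectl exec <masking-pod-name> -c app -- df -h /var/delphix/masking/remote-mounts/nfs_example
kubectl exec <masking-pod-name> -c app -- ls /var/delphix/masking/remote-mounts/nfs_example
```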
Using the Mountpoint in the UI¶
Once a properly configured Pod is started, the configured NFS filesystem can be accessed in the UI using the same process that was previously used for non-containerized instances documented in Managing Remote Mounts for VM Masking Engines. The one sticking point is that these mount points (in the dropdown list) by default are named "mountpoint_1", "mountpoint_2", etc.
It is possible to rename the default mount point names to something friendlier. This is done via the PUT /mount-filesystem/{mountID} API endpoint. Please note: with Containerized Masking, the only attribute that can be altered is the mount point name. An attempt to alter any other attribute via the API endpoint will result in an error.
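A rename call might look like the sketch below. The engine host, the API token, the mount ID of 1, the versioned API base path, and the `mountName` payload field are all assumptions; consult the engine's API client documentation for the exact base path and request body.

```shell
# Rename mount point 1 to "nfs_example" (all placeholder values are assumptions).
curl -X PUT \
  -H 'Content-Type: application/json' \
  -H 'Authorization: <your-api-token>' \
  -d '{"mountName": "nfs_example"}' \
  'http://<masking-engine-host>/masking/api/mount-filesystem/1'
```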
Note
The /mount-filesystem API has a large set of functionality used to manage filesystem mounts in the Virtual Machine deployment of the Masking Engine. For Containerized Masking, most of that functionality is handled by kubernetes itself, rendering those API tasks unnecessary; they are therefore disabled. The only functionality available in Containerized Masking is the endpoint that updates an existing mount, and only to update its name.
Other Types of Filesystem Mountpoint¶
This example used NFS, but it is possible to mount any filesystem that kubernetes supports. Mounting CIFS or another supported remote filesystem is possible as long as the same general procedure is followed, including:
- creating the various kubernetes objects (such as the PV and PVC)
- mounting it under the required `/var/delphix/masking/remote-mounts/` path