Authorized Source IP for OpenShift Project Identification
Avi Vantage can securely identify OpenShift projects using source IP addresses for traffic initiated from within the OpenShift cluster to outside applications.
Use Case For Authorized Source IP
In some deployments, traffic must be identified by source IP address so that it can receive differential treatment based on the application. For example, in DMZ deployments there may be firewall, security, visibility, or other requirements that mandate validating clients before their traffic is passed on to an application. Such deployments use the source IP to validate the client.
Traffic initiated from within OpenShift clusters to outside applications is masqueraded. The actual source of this traffic is lost to the remote application.
In the following example, source IP 10.10.10.10 securely identifies Project Green and source IP 10.10.10.11 securely identifies Project Blue.
Avi Vantage network security policies prevent pods belonging to projects other than Green from using source IP 10.10.10.10, so the remote application or firewall can securely identify Project Green by the source IP 10.10.10.10.
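For illustration only, a perimeter firewall in front of the application could then admit Project Green's traffic by source IP. A minimal iptables sketch, with 203.0.113.50 as a hypothetical application IP (not from this document):

# Hypothetical firewall rules: admit only Project Green's authorized
# source IP and drop other cluster-originated traffic to the application.
iptables -A FORWARD -s 10.10.10.10 -d 203.0.113.50 -p tcp --dport 80 -j ACCEPT
iptables -A FORWARD -d 203.0.113.50 -j DROP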
Configuring an Authorized Source IP Instance
Prerequisites
- Avi Vantage should be providing east-west services for the cluster. Refer to the east-west services section of the OpenShift installation guide for how to configure this.
- secure_egress_mode in the Avi OpenShift cloud configuration must be enabled, as shown below:
[admin:controller]: cloud> oshiftk8s_configuration
[admin:controller]: cloud:oshiftk8s_configuration> secure_egress_mode
Overwriting the previously entered value for secure_egress_mode
[admin:controller]: cloud:oshiftk8s_configuration> save
[admin:controller]: cloud> save
- Authentication credentials for access to the OpenShift cluster must have cluster-admin privileges (should be able to create SecurityContextConstraints and ServiceAccounts in all projects). Certificates or user-account tokens with such privileges are required to enable this feature.
- Avi Vantage needs credentials with cluster role and privileges as shown below:
apiVersion: v1
kind: ClusterRole
metadata:
  creationTimestamp: 2017-04-19T22:55:04Z
  name: avirole
  resourceVersion: "1328057"
  selfLink: /oapi/v1/clusterroles/avirole
  uid: 3a57c7ca-2553-11e7-887a-005056b0c674
rules:
- apiGroups:
  - ""
  attributeRestrictions: null
  resources:
  - pods
  - projectrequests
  - projects
  - replicationcontrollers
  - securitycontextconstraints
  - serviceaccounts
  - services
  verbs:
  - '*'
- apiGroups:
  - ""
  attributeRestrictions: null
  resources:
  - '*'
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  attributeRestrictions: null
  resources:
  - routes/status
  verbs:
  - patch
  - update
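For example, the role can be created and granted with the OpenShift client. This is a sketch assuming the role definition is saved as avirole.yaml and the Avi Controller authenticates as a user named avi (both names are examples):

# Create the cluster role and bind it to the user whose token or
# certificate is configured in the Avi cloud:
oc create -f avirole.yaml
oc adm policy add-cluster-role-to-user avirole avi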
Workflow Overview
The user is expected to create one egress service per authorized source IP; authorizing multiple source IPs requires creating the same number of egress services within OpenShift. Avi Vantage creates a ServiceAccount for every project in OpenShift and adds it to a SecurityContextConstraint so that pods can be created in privileged mode. The following code samples depict the order of configuration when a new project is created in OpenShift.
# oc describe serviceaccount avivantage-bfff9603-1ffd-4bcc-aef6-118268e5f2b5
Name:                avivantage-bfff9603-1ffd-4bcc-aef6-118268e5f2b5
Namespace:           default
Labels:              <none>
Image pull secrets:  avivantage-bfff9603-1ffd-4bcc-aef6-118268e5f2b5-dockercfg-2j07a
Mountable secrets:   avivantage-bfff9603-1ffd-4bcc-aef6-118268e5f2b5-token-7huln
                     avivantage-bfff9603-1ffd-4bcc-aef6-118268e5f2b5-dockercfg-2j07a
Tokens:              avivantage-bfff9603-1ffd-4bcc-aef6-118268e5f2b5-token-7huln
                     avivantage-bfff9603-1ffd-4bcc-aef6-118268e5f2b5-token-zxi5t

# oc describe scc avivantage-scc-bfff9603-1ffd-4bcc-aef6-118268e5f2b5
Name:      avivantage-scc-bfff9603-1ffd-4bcc-aef6-118268e5f2b5
Priority:  <none>
Access:
  Users:   system:serviceaccount:default:avivantage-bfff9603-1ffd-4bcc-aef6-118268e5f2b5
  Groups:  <none>
Settings:
  Allow Privileged:            true
  Default Add Capabilities:    <none>
  Required Drop Capabilities:  <none>
  Allowed Capabilities:        <none>
  Allowed Volume Types:        *
  Allow Host Network:          true
  Allow Host Ports:            true
  Allow Host PID:              false
  Allow Host IPC:              false
  Read Only Root Filesystem:   false
  Run As User Strategy: RunAsAny
    UID:            <none>
    UID Range Min:  <none>
    UID Range Max:  <none>
  SELinux Context Strategy: RunAsAny
    User:   <none>
    Role:   <none>
    Type:   <none>
    Level:  <none>
  FSGroup Strategy: RunAsAny
    Ranges:  <none>
  Supplemental Groups Strategy: RunAsAny
    Ranges:  <none>
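These per-project objects can be verified from the OpenShift side with the standard client, for example:

# List the ServiceAccounts and SecurityContextConstraints created by Avi:
oc get serviceaccounts --all-namespaces | grep avivantage
oc get scc | grep avivantage-scc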
Configuring the egress pod involves creating a secure service with the necessary parameters provided in annotations. Avi Vantage uses these annotations for the following three purposes, in the order specified below:
- Allocating an IP address from the host network in the OpenShift cluster, as determined by the north-south IPAM configured in the Avi Vantage OpenShift cloud. This IP is used as the EGRESS_SOURCE IP for the egress pod (explained later).
- Creating an egress ReplicationController with exactly one replica and the right parameters, picked up from the annotations below.
- Updating the service selector of the secure service to point to the egress pod newly created in step 2 above.
Create a Secure Egress Service
Service definition for a secure east-west service (secure-egress-service.json)
Note: networksecuritypolicy is optional. If using the Avi egress solution with kube-proxy, the noavi_proxy label is required in the sample configuration below for the egress service.
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "secure-egress-service",
    "labels": {
      "svc": "secure-egress-service"
    },
    "annotations": {
      "avi_proxy": "{\"networksecuritypolicy\": {\"rules\": [{\"index\": 1000, \"enable\": true, \"name\": \"allowtenant\", \"action\": \"NETWORK_SECURITY_POLICY_ACTION_TYPE_ALLOW\", \"match\": {\"microservice\": {\"match_criteria\": \"IS_IN\", \"group_ref\": \"/api/microservicegroup/?name=default-avi-microservicegroup\"}}}, {\"index\": 2000, \"enable\": true, \"name\": \"defaultdeny\", \"action\": \"NETWORK_SECURITY_POLICY_ACTION_TYPE_DENY\", \"match\": {\"client_ip\": {\"match_criteria\": \"IS_IN\", \"prefixes\": [{\"ip_addr\": {\"type\": \"V4\", \"addr\": \"0.0.0.0\"}, \"mask\": 0}]}}}]}}",
      "egress_pod": "{\"destination_ip\": \"10.10.10.200\"}"
    }
  },
  "spec": {
    "ports": [
      {
        "name": "foo",
        "port": 80
      }
    ],
    "type": "LoadBalancer"
  }
}
“egress_pod” is the annotation used to create the corresponding egress pod. “destination_ip” is the destination IP address of the application outside the cluster. Avi Vantage automatically creates a pod named secure-egress-service-avi-egress-pod, formed by appending the suffix avi-egress-pod to the secure service name.
Note: selector is deliberately omitted from the secure service definition above, as Avi Vantage will update the secure service's configuration once the egress pod is created successfully.
CUSTOMIZATION (health monitor port): “egress_pod”: "{\"hm_port\": \"1000\", \"destination_ip\": \"10.10.10.200\"}"
CUSTOMIZATION (Docker image): “egress_pod”: "{\"image\": \"private-repo:5000/avi-egress-router\", \"destination_ip\": \"10.10.10.200\"}"
CUSTOMIZATION (node selection): There are use cases where only certain nodes in the OpenShift/K8S cluster have access to the north-south external network, and egress pods therefore need to be restricted to those nodes. This can be achieved by specifying the “nodeSelector” attribute, as described in https://kubernetes.io/docs/concepts/configuration/assign-pod-node/, in the egress pod annotation, for example “egress_pod”: "{\"nodeSelector\": {\"external-accessible-node\": \"true\"}, \"destination_ip\": \"10.10.10.200\"}". The matching label must be present on those nodes, as shown in the sketch below.
Avi Vantage automatically creates and maintains a microservice group per project that reflects all the current pods in that project. In the above policy, the first rule allows the microservice group default-avi-microservicegroup, which matches all pods in the default project. The second rule denies all other pods from accessing the service. The net effect is that only pods in the default project are allowed to access this service.
Create the Service Using OpenShift Client
# oc create -f secure-egress-service.json
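The resulting objects can then be checked, for example:

# Confirm the service and the egress ReplicationController were created:
oc get service secure-egress-service
oc get rc secure-egress-service-avi-egress-pod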
Post Secure Service Creation
Creating a secure service will trigger the following actions from the Avi Controller:
Action 1
An egress ReplicationController named service-name-avi-egress-pod (in this case, secure-egress-service-avi-egress-pod) is created with the configuration below.
Comments inserted into the code sample below appear as # [NOTE: comment text]
apiVersion: v1
kind: ReplicationController
metadata:
  creationTimestamp: 2017-05-05T20:06:40Z
  generation: 1
  labels:
    name: secure-egress-service-avi-egress-pod
  name: secure-egress-service-avi-egress-pod
  namespace: default
  resourceVersion: "2058613"
  selfLink: /api/v1/namespaces/default/replicationcontrollers/secure-egress-service-avi-egress-pod
  uid: 5a38d5dc-31ce-11e7-887a-005056b0c674
spec:
  replicas: 1
  selector:
    name: secure-egress-service-avi-egress-pod
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: secure-egress-service-avi-egress-pod
      name: secure-egress-service-avi-egress-pod
    spec:
      containers:
      - env:
        - name: EGRESS_SOURCE
          value: 10.70.112.155   # [NOTE: Source IP allocated by Avi Vantage IPAM]
        - name: EGRESS_DESTINATION
          value: 10.10.24.85     # [NOTE: "destination_ip" from annotation]
        - name: BRIDGE_IP_ADDR
          value: 172.18.0.1
        - name: BRIDGE_NETMASK
          value: "16"
        - name: TCP_HM_PORT
          value: "4"             # [NOTE: "hm_port" from annotation]
        image: avinetworks/avi-egress-router   # [NOTE: "image" from annotation, defaults to "avinetworks/avi-egress-router"]
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          initialDelaySeconds: 10
          periodSeconds: 3
          successThreshold: 1
          tcpSocket:
            port: 4
          timeoutSeconds: 1
        name: avi-egress-router
        resources: {}
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        volumeMounts:
        - mountPath: /hostroot/var/run
          name: run
        - mountPath: /hostroot/proc/1/ns/net
          name: ns1
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      securityContext: {}
      serviceAccount: avivantage-bfff9603-1ffd-4bcc-aef6-118268e5f2b5
      serviceAccountName: avivantage-bfff9603-1ffd-4bcc-aef6-118268e5f2b5   # [NOTE: ServiceAccount created by Avi for every project]
      terminationGracePeriodSeconds: 30
      volumes:
      - hostPath:
          path: /var/run
        name: run
      - hostPath:
          path: /proc/1/ns/net
        name: ns1
status:
  fullyLabeledReplicas: 1
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
Pod from the RC:
# oc describe pod secure-egress-service-avi-egress-pod-s1uhm
Name:            secure-egress-service-avi-egress-pod-s1uhm
Namespace:       default
Security Policy: avivantage-scc-bfff9603-1ffd-4bcc-aef6-118268e5f2b5
Node:            10.70.112.61/10.70.112.61
Start Time:      Fri, 05 May 2017 13:06:40 -0700
Labels:          name=secure-egress-service-avi-egress-pod
Status:          Running
IP:              10.129.0.229
Controllers:     ReplicationController/secure-egress-service-avi-egress-pod
Containers:
  avi-egress-router:
    Container ID:   docker://0bce263bcb1f7c0afacca23c999f0b154d416bd3c9fdbc3d0774dd868a95be7d
    Image:          avinetworks/avi-egress-router
    Image ID:       docker-pullable://docker.io/avinetworks/avi-egress-router@sha256:57907a14f6164167ae71866116c0a1cf7a73cc7070de5694f5184a63958f0883
    Ports:
    State:          Running
      Started:      Fri, 05 May 2017 13:06:42 -0700
    Ready:          True
    Restart Count:  0
    Liveness:       tcp-socket :4 delay=10s timeout=1s period=3s #success=1 #failure=3
    Volume Mounts:
      /hostroot/proc/1/ns/net from ns1 (rw)
      /hostroot/var/run from run (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from avivantage-bfff9603-1ffd-4bcc-aef6-118268e5f2b5-token-7huln (ro)
    Environment Variables:
      EGRESS_SOURCE:      10.70.112.155
      EGRESS_DESTINATION: 10.10.24.85
      BRIDGE_IP_ADDR:     172.18.0.1
      BRIDGE_NETMASK:     16
      TCP_HM_PORT:        4
Conditions:
  Type          Status
  Initialized   True
  Ready         True
  PodScheduled  True
Volumes:
  run:
    Type: HostPath (bare host directory volume)
    Path: /var/run
  ns1:
    Type: HostPath (bare host directory volume)
    Path: /proc/1/ns/net
  avivantage-bfff9603-1ffd-4bcc-aef6-118268e5f2b5-token-7huln:
    Type:       Secret (a volume populated by a Secret)
    SecretName: avivantage-bfff9603-1ffd-4bcc-aef6-118268e5f2b5-token-7huln
QoS Class:   BestEffort
Tolerations: <none>
No events.
EGRESS_SOURCE uniquely identifies the project. The IP address is auto-allocated by Avi from the network in the north-south IPAM Profile.
EGRESS_DESTINATION is the destination IP address of the application outside the cluster.
BRIDGE_IP_ADDR is the IP address of the Avi bridge; the default is 172.18.0.1. This address is configurable via the avi_bridge_subnet field in the OpenShift cloud object.
BRIDGE_NETMASK is the netmask bits for the Avi bridge, the default for which is 16.
TCP_HM_PORT is the port used for TCP health monitoring; it defaults to port 4 if not set. If set to a different value, change the port field in the livenessProbe section above to match this port value.
The Avi egress pod runs a TCP listener on port TCP_HM_PORT, which the pod's livenessProbe uses for health monitoring.
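As a conceptual sketch only (this is not the actual avinetworks/avi-egress-router implementation), the environment variables above translate into NAT behavior roughly equivalent to the following, with the interface name and prefix length illustrative:

# Conceptual sketch, not the real avi-egress-router code: claim the
# authorized source IP (EGRESS_SOURCE), DNAT traffic arriving at the pod
# to the external destination (EGRESS_DESTINATION), and SNAT outbound
# traffic to the authorized source IP.
ip addr add 10.70.112.155/24 dev eth0
iptables -t nat -A PREROUTING -i eth0 -j DNAT --to-destination 10.10.24.85
iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to-source 10.70.112.155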
Action 2
The secure-egress-service service selector configuration is updated to reflect and use the newly created egress pod, as shown below:
# oc describe service secure-egress-service
Name: secure-egress-service
Namespace: default
Labels: svc=secure-egress-service
Selector: name=secure-egress-service-avi-egress-pod
Type: LoadBalancer
IP: 172.30.212.151
External IPs: 172.46.161.187
LoadBalancer Ingress: 172.46.161.187
Port: foo 80/TCP
NodePort: foo 30289/TCP
Endpoints: 10.129.0.229:80
Session Affinity: None
No events.
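The service endpoints can be cross-checked against the egress pod IP, for example:

# The endpoint should be the egress pod IP (10.129.0.229 in this example):
oc get endpoints secure-egress-service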
Action 3
In Avi Vantage, the secure service should be UP, with the egress pod as its single pool member, as shown below:
# oc get pod secure-egress-service-avi-egress-pod-s1uhm -o wide
NAME READY STATUS RESTARTS AGE IP NODE
secure-egress-service-avi-egress-pod-s1uhm 1/1 Running 0 5h 10.129.0.229 10.70.112.61
Deleting the Egress Pod
The egress pod lifecycle is tied to the lifecycle of the secure service. Avi Vantage scales down the ReplicationController to 0 replicas and deletes the ReplicationController for the egress pod when the secure egress service is deleted.
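Deleting the secure service is therefore sufficient to clean up the egress pod, for example:

# Deleting the secure service causes Avi to remove the egress
# ReplicationController and its pod:
oc delete service secure-egress-service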
Deleting ServiceAccounts and SecurityContextConstraints
Service accounts created by Avi Vantage for every project are automatically deleted when the project is deleted in OpenShift or when the OpenShift configuration is removed from Avi Vantage. The SecurityContextConstraint is removed from OpenShift only when the associated cloud configuration is removed from Avi Vantage.
Service Usage
Pods in the default project can access the external application using the name secure-egress-service.default.sub-domain.
- Avi DNS will resolve secure-egress-service.default.sub-domain to the service virtual IP on port 80 or any other port specified in the service definition.
- Access to the virtual IP will be proxied to the secure egress Avi pod by the local Avi Service Engine.
- The secure egress Avi pod will source-NAT the traffic (using the EGRESS_SOURCE IP address) to the remote application and use a destination IP address of EGRESS_DESTINATION.
The remote application will see traffic with a source IP address of EGRESS_SOURCE and a destination IP address of EGRESS_DESTINATION on port 80.
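For example, from a pod in the default project (the pod name is a placeholder, and sub-domain stands for the cluster's configured DNS sub-domain):

# Access the external application through the secure egress service:
oc exec some-app-pod -- curl http://secure-egress-service.default.sub-domain/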
Access Patterns
Source | Destination | Comments
Pod in “default” project | Service virtual IP | Allowed
Pod in “default” project | Secure egress Avi pod | Allowed
Pod in a different project | Service virtual IP | Denied by Avi
Pod in a different project | Secure egress Avi pod | Denied by OpenShift SDN
High Availability
When a secure Avi egress pod restarts or its host goes down, OpenShift starts another instance of the pod, and the service virtual IP always proxies to the right pod IP address.
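This can be exercised by deleting the egress pod (using the pod name from the example above) and watching the ReplicationController recreate it:

# The ReplicationController maintains exactly one replica, so a deleted
# egress pod is recreated automatically:
oc delete pod secure-egress-service-avi-egress-pod-s1uhm
oc get pods -l name=secure-egress-service-avi-egress-pod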