Kubernetes nginx ingress controller service (version 0.9.0) refusing connections
2018-01-05 · nginx · kubernetes
I have attempted to follow this tutorial to play around with an nginx ingress controller. I changed some details while trying to get it to work: only one backend service instead of two, different port numbers, and everything runs in the default namespace. I have a Kubernetes master and 3 minions on CentOS Linux release 7.4.1708 VMs.
The backend and default backend are both accessible within the cluster through their respective service endpoints. The nginx status page is available externally (MasterHostIP:32000/nginx_status). The issue is that HTTP requests to the backend app are refused, both through the external path and from within the cluster against the nginx-ingress-controller service endpoints. Hopefully someone out there can spot something obvious that I'm missing, or has had similar issues and knows how to overcome them.
[[email protected] ~]# kubectl get endpoints
NAME              ENDPOINTS                                          AGE
appsvc1           10.244.1.2:80,10.244.3.4:80                        3h
default-backend   10.244.1.3:8080,10.244.2.3:8080,10.244.3.5:8080    14d
kubernetes        10.134.45.136:6443                                 15d
nginx-ingress     10.244.2.5:18080,10.244.2.5:9999                   2h
[[email protected] ~]# wget 10.244.2.5:9999
--2018-01-05 12:10:56--  http://10.244.2.5:9999/
Connecting to 10.244.2.5:9999... failed: Connection refused.
[[email protected] ~]# wget 10.244.2.5:18080
--2018-01-05 12:12:52--  http://10.244.2.5:18080/
Connecting to 10.244.2.5:18080... connected.
HTTP request sent, awaiting response... 404 Not Found
2018-01-05 12:12:52 ERROR 404: Not Found.
Requests to appsvc1 endpoints behave as expected, returning static html with "Hello app1!".
Backend app deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app1
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: app1
    spec:
      containers:
      - name: app1
        image: dockersamples/static-site
        env:
        - name: AUTHOR
          value: app1
        ports:
        - containerPort: 80
apiVersion: v1
kind: Service
metadata:
  name: appsvc1
spec:
  ports:
  - port: 9999
    protocol: TCP
    targetPort: 80
  selector:
    app: app1
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/rewrite-target: /
  name: app-ingress
spec:
  rules:
  - host: test.com
    http:
      paths:
      - backend:
          serviceName: appsvc1
          servicePort: 9999
        path: /app1
nginx ingress controller deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  replicas: 1
  revisionHistoryLimit: 3
  template:
    metadata:
      labels:
        app: nginx-ingress-lb
    spec:
      terminationGracePeriodSeconds: 60
      serviceAccount: nginx
      containers:
      - name: nginx-ingress-controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0
        imagePullPolicy: Always
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 5
        args:
        - /nginx-ingress-controller
        - '--default-backend-service=$(POD_NAMESPACE)/default-backend'
        - '--configmap=$(POD_NAMESPACE)/nginx-ingress-controller-conf'
        - --v=6
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - containerPort: 80
        - containerPort: 9999
        - containerPort: 18080
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
spec:
  type: NodePort
  ports:
  - port: 9999
    nodePort: 30000
    name: http
  - port: 18080
    nodePort: 32000
    name: http-mgmt
  selector:
    app: nginx-ingress-lb
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: test.com
    http:
      paths:
      - backend:
          serviceName: nginx-ingress
          servicePort: 18080
Update: it looks like port 9999 is not open in the ingress controller pod. Can anyone suggest why port 18080 gets opened but not 9999?
[[email protected] ~]# kubectl get pods
NAME                                       READY     STATUS    RESTARTS   AGE
app1-54cf69ff86-l7kp4                      1/1       Running   0          17d
app1-54cf69ff86-qkksw                      1/1       Running   0          17d
app2-7bc7498cbf-459vd                      1/1       Running   0          2d
app2-7bc7498cbf-8x9st                      1/1       Running   0          2d
default-backend-78484f94cf-fv6v4           1/1       Running   0          17d
default-backend-78484f94cf-vzp8l           1/1       Running   0          17d
default-backend-78484f94cf-wmjqh           1/1       Running   0          17d
nginx-ingress-controller-cfb567f76-wbck5   1/1       Running   0          15h
[[email protected] ~]# kubectl exec nginx-ingress-controller-cfb567f76-wbck5 -it bash
[email protected]:/# netstat -tlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address    Foreign Address   State    PID/Program name
tcp        0      0 0.0.0.0:http     0.0.0.0:*         LISTEN   14/nginx: master pr
tcp        0      0 0.0.0.0:http     0.0.0.0:*         LISTEN   14/nginx: master pr
tcp        0      0 0.0.0.0:https    0.0.0.0:*         LISTEN   14/nginx: master pr
tcp        0      0 0.0.0.0:https    0.0.0.0:*         LISTEN   14/nginx: master pr
tcp        0      0 0.0.0.0:18080    0.0.0.0:*         LISTEN   14/nginx: master pr
tcp        0      0 0.0.0.0:18080    0.0.0.0:*         LISTEN   14/nginx: master pr
tcp6       0      0 [::]:http        [::]:*            LISTEN   14/nginx: master pr
tcp6       0      0 [::]:http        [::]:*            LISTEN   14/nginx: master pr
tcp6       0      0 [::]:https       [::]:*            LISTEN   14/nginx: master pr
tcp6       0      0 [::]:https       [::]:*            LISTEN   14/nginx: master pr
tcp6       0      0 [::]:18080       [::]:*            LISTEN   14/nginx: master pr
tcp6       0      0 [::]:18080       [::]:*            LISTEN   14/nginx: master pr
tcp6       0      0 [::]:10254       [::]:*            LISTEN   5/nginx-ingress-con
The 10.x addresses are internal, so the 404s are expected. The ingress controller doesn't suddenly make your internal services external; its job is to proxy requests to deployed services through a single address. Since you deployed the controller's service as a NodePort, try making a request to a node's IP on port 30000 with the Host header set to test.com; you should get your app. Every service you expose through an Ingress will be available via the ingress IP. The Host header is set by HTTP clients, and the ingress controller fans out requests based on it (as well as the path and whatever else you configure). So really it only works if you pay for domain names: I assume you don't own test.com, and asking users to fake the request header is not a reasonable interface.
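Concretely, the suggested request looks like the sketch below. The node address used here is a placeholder (substitute any minion's IP, since a NodePort is opened on every node); the curl flag shown is the standard way to override the Host header:

```shell
# Placeholder node address: substitute one of your minions' IPs.
NODE_IP=10.134.45.137

# The TCP connection targets the NodePort (30000); the Host header carries
# the virtual hostname that the ingress controller routes on:
#   curl -H 'Host: test.com' "http://${NODE_IP}:30000/app1"

# On the wire, that request looks like this:
printf 'GET /app1 HTTP/1.1\r\nHost: test.com\r\n\r\n'
```

The point is that the destination address and the routed hostname are independent: the controller never sees "test.com" in DNS, only in the Host header of the request it receives.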
Also, since you have minion nodes (plural), you should really change the controller's service type from NodePort to LoadBalancer. NodePort is used in tutorials because it's cheaper: LoadBalancer spins up a cloud load balancer that you would have to pay for. NodePort is OK while you're getting situated, but it's not something to rely on later on. I really wish people would stop putting it in tutorials without any explanation.
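The change being suggested is only the service type. A minimal sketch of the same service as a LoadBalancer, assuming the cluster runs on a cloud provider that can actually provision one (the nodePort fields become optional and are dropped here):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
spec:
  type: LoadBalancer   # was NodePort; requires a cloud provider to back it
  ports:
  - port: 9999
    name: http
  - port: 18080
    name: http-mgmt
  selector:
    app: nginx-ingress-lb
```

On bare-metal VMs like the CentOS setup described above, no cloud provider is present, so a LoadBalancer service would sit in "pending" state; NodePort remains the workable option there.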