Operating system information
Cloud virtual machine, Ubuntu 22.04, 4 vCPUs / 16 GB RAM
Kubernetes version information
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.17", GitCommit:"953be8927218ec8067e1af2641e540238ffd7576", GitTreeState:"clean", BuildDate:"2023-02-22T13:34:27Z", GoVersion:"go1.19.6", Compiler:"gc", Platform:"linux/amd64"}
Container runtime
Docker Engine 24.0.6
KubeSphere version
v3.4.1, installed online using kk.
What is the problem?
Service CIDR: 10.188.0.0/18 (IP range 10.188.0.0 - 10.188.63.255)
Initial Pod CIDR: 10.188.192.0/18 (IP range 10.188.192.0 - 10.188.255.255)
The Pod IP pool feature was enabled after installation, and a new IP pool named ippool-dev was created in the console.
ippool-dev CIDR: 10.189.0.0/20 (IP range 10.189.0.0 - 10.189.15.255)
The Pod IP pools page now shows two pools: default-ipv4-ippool and ippool-dev.
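For context, a Calico IPPool manifest generally looks like the sketch below; the values here are illustrative examples, not a dump of my actual ippool-dev, but `natOutgoing` and `ipipMode` are the fields I would expect to differ between two pools with different SNAT behavior:

```yaml
# Illustrative Calico IPPool manifest (example values, not my real pool).
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: ippool-dev
spec:
  cidr: 10.189.0.0/20
  ipipMode: Always        # IP-in-IP tunnel mode: Never / Always / CrossSubnet
  natOutgoing: true       # masquerade traffic leaving all Calico IP pools
  nodeSelector: all()
```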
Pod A calls Pod B through a Service:
Pod A → Service → Pod B
Pod B runs nginx. Checking its access log shows the following:
If Pod A is assigned an IP from default-ipv4-ippool, the access log shows Pod A's container IP, as expected.
If Pod A is assigned an IP from the newly created ippool-dev, the access log shows one of two source addresses:
1. The node's own eth0 IP (when Pod A and Pod B are on the same node)
2. The IP of Calico's virtual tunnel device (tunl0) (when Pod A and Pod B are on different nodes)
In both cases the container's source IP is lost because the traffic is SNATed.
This contradicts the usual expectation: Service traffic within the cluster should not be SNATed.
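One hypothesis (my assumption, not confirmed): kube-proxy masquerades Service traffic whose source address lies outside its configured cluster CIDR, and addresses from the new pool fall outside the initial Pod CIDR while addresses from the default pool fall inside it. The containment check itself is easy to verify:

```python
import ipaddress

# Initial Pod CIDR from the cluster configuration.
cluster_cidr = ipaddress.ip_network("10.188.192.0/18")

# One Pod IP from the default pool and one from the new ippool-dev pool
# (both addresses are illustrative examples).
default_pool_ip = ipaddress.ip_address("10.188.200.10")
dev_pool_ip = ipaddress.ip_address("10.189.0.10")

print(default_pool_ip in cluster_cidr)  # True: inside the initial Pod CIDR
print(dev_pool_ip in cluster_cidr)      # False: outside, so it may be SNATed
```

If this hypothesis holds, it would explain why only Pods from ippool-dev lose their source IP.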
And why does everything work correctly with the default default-ipv4-ippool pool?
I tried placing both Pods in the same project (namespace), but the problem persists.
Where is my configuration wrong, and how can I change it so that ippool-dev behaves the same as default-ipv4-ippool? Thank you.
service:
```yaml
kind: Service
apiVersion: v1
metadata:
  name: font
  namespace: dev
  labels:
    app: font
  annotations:
    kubesphere.io/creator: admin
spec:
  ports:
    - name: tcp-80
      protocol: TCP
      port: 80
      targetPort: 80
  selector:
    app: font
  clusterIP: 10.188.21.35
  clusterIPs:
    - 10.188.21.35
  type: ClusterIP
  sessionAffinity: None
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  internalTrafficPolicy: Cluster
```
IP pool:
kk config part:
Any help would be greatly appreciated.