Would like ClusterCIDR to be fetchable by pods #46508
/sig network

Related I guess: kubernetes/community#662

I work on BYOIP and explicit networks, so am no fan of having a configured
@MikeSpreitzer it flourishes pretty well 🙂 But there are three independent settings at present, so let me take them one by one:

- Weave Net has a CIDR: it allocates IP addresses to pods within that range, and it creates a route for that CIDR inside each pod (the whole Weave network is one hop at L3). As far as I can think, BYOIP would replace the first but not the second - we need to know how to route to other pods.
- kube-proxy has a `--cluster-cidr` flag.
- kube-controller-manager also has a `--cluster-cidr` flag: without it, it logs "clusterCIDR not specified, unable to distinguish between internal and external traffic"
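The "internal vs external traffic" distinction mentioned in that log line boils down to a CIDR membership check. A minimal sketch in Python (the CIDR value here is just an example, not a default):

```python
import ipaddress

def is_cluster_internal(ip: str, cluster_cidr: str) -> bool:
    """Return True if ip falls inside the configured pod CIDR."""
    return ipaddress.ip_address(ip) in ipaddress.ip_network(cluster_cidr)

# A pod IP inside an example 10.32.0.0/12 pod range is "internal";
# an address outside it is treated as external traffic.
print(is_cluster_internal("10.32.5.7", "10.32.0.0/12"))     # True
print(is_cluster_internal("192.168.1.10", "10.32.0.0/12"))  # False
```

Without the configured range, a component simply has no way to make this check, which is why the flag currently has to be duplicated everywhere.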
See related #25533

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/remove-lifecycle stale

Still an issue (and affects other networking plugins, such as Calico).

@fasaxc I'd like to help with this, do you have any thoughts on how this could be solved?
@thockin said in #25533 (comment):

I believe he is referring to ComponentConfig, and there has been significant progress on this. See:

kubernetes/pkg/apis/componentconfig/types.go (lines 163 to 363 in f072871)
kubernetes/pkg/proxy/apis/kubeproxyconfig/types.go (lines 97 to 153 in f072871)

More specifically:

kubernetes/pkg/apis/componentconfig/types.go (lines 302 to 303 in f072871)
kubernetes/pkg/proxy/apis/kubeproxyconfig/types.go (lines 121 to 124 in f072871)

I believe it should already be possible to read these using the API, but I think it's alpha and needs to be enabled explicitly.
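For reference, a minimal KubeProxyConfiguration fragment carrying this value might look like the following (the alpha apiVersion and the CIDR value shown are illustrative, not recommendations):

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# clusterCIDR corresponds to the --cluster-cidr flag; kube-proxy uses it
# to tell in-cluster (pod) traffic apart from external traffic.
clusterCIDR: "10.32.0.0/12"
```

This is the kube-proxy half only; kube-controller-manager carries its own copy of the same value, which is exactly the duplication this issue complains about.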
/remove-lifecycle rotten

This is still an issue for users of Weave Net.

The ComponentConfig work looks promising and is probably enough to close this issue once it's beta/GA, though it's still somewhat awkward that this value would need to be configured in two locations (both for the controller manager and kube-proxy).
/remove-lifecycle stale

Yes, that's the real ask (a proper IPAM API).
I think it would be good to also ask whether this should be part of a broader network configuration API. Aside from being able to find out what CIDRs are being used for pods and services, I think it would be very useful to be able to tell at least the name of the network provider in the given cluster. Of course, beyond that it would also be nice to tell if e.g. there are pods in a certain namespace that are responsible for the network. Happy to elaborate on use-cases if there is no general objection to this. It's something I've been thinking of for a while, and it seems reasonable to bring up for discussion if we are to add some APIs.
No objections, per se, but even "which network provider" can vary from node
to node, as long as the transport is compatible.
There is a KEP, "Removing Knowledge of pod cluster CIDR from iptables rules", implemented by #87748 et seq.
xref: #57130 "Node podCIDR should be EOL'ed"
Yeah node.spec.podCIDR is another symptom of the lack of an IPAM API.
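For context, the per-node allocation mentioned here is at least visible on the Node object today. Assuming a node named `node-1` (a hypothetical name for illustration):

```shell
# Read the pod CIDR allocated to a single node
kubectl get node node-1 -o jsonpath='{.spec.podCIDR}'
```

This only exposes one node's slice, though, not the cluster-wide range this issue is asking for, and it is empty when Kubernetes isn't doing IPAM at all.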
/remove-lifecycle rotten

Discussed at sig-network meeting on March 18, 2021.

@bridgetkromhout I can see from the notes that it was discussed, but little info was logged... any chance someone could summarise the SIG's verdict, if something was indeed decided?
No decision.

There's some work that feels like it could be adjacent - allowing multi-CIDR for per-node IPAM, which pretty much demands an API to expose the CIDRs available. If we do that, this issue gets solved for free EXCEPT...

in-cluster IPAM is *not* used everywhere, so an open question remains - is this new API required (net-new for cloud providers) or optional, and if optional, does that satisfy this request?
An optional API would satisfy my original request, so long as, where available, it matches what is actually in use. In my use-case Kubernetes shouldn't be doing IPAM.
/area kube-proxy (just testing)

/remove-area kube-proxy
OK. Let's consider kubernetes/enhancements#2593 the plan-of-record (PoR) fix to this.
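For illustration, the enhancement referenced above introduces discoverable ClusterCIDR objects. A sketch, assuming the v1alpha1 shape that work proposes and example values throughout:

```yaml
apiVersion: networking.k8s.io/v1alpha1
kind: ClusterCIDR
metadata:
  name: example-cidr
spec:
  # Each matching node is allocated a slice of the range below,
  # sized by perNodeHostBits (8 host bits => a /24 per node for IPv4).
  perNodeHostBits: 8
  ipv4: "10.0.0.0/16"
  nodeSelector:
    nodeSelectorTerms:
      - matchExpressions:
          - key: kubernetes.io/os
            operator: In
            values: ["linux"]
```

An object like this would make the pod ranges fetchable through the API rather than buried in component flags, which is the original ask of this issue.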
FEATURE REQUEST

Currently, `kube-proxy` and `kube-controller-manager` each have a command-line parameter `--cluster-cidr`, described as "CIDR range of pods in the cluster". There is no API to fetch this parameter.

I work on a network add-on, Weave Net, which would benefit from being able to see this value. Currently we have our own setting for the range of IP addresses to give to pods, and it is frustrating to receive issue reports caused by the values being out of sync.

Could we have `ClusterCIDR` as a top-level value on the cluster, which is then read by `kube-proxy` and `kube-controller-manager` and anyone else who wants it? And/or supply it via the downward API.
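As a present-day workaround (not an API): on kubeadm-provisioned clusters the kube-proxy configuration happens to live in a ConfigMap, so anything with RBAC access to it can read the value out. A sketch, assuming the kubeadm-created `kube-proxy` ConfigMap in `kube-system`:

```shell
# Extract the clusterCIDR line from kube-proxy's ComponentConfig
kubectl -n kube-system get configmap kube-proxy \
  -o jsonpath='{.data.config\.conf}' | grep clusterCIDR
```

This is fragile (it depends on how the cluster was provisioned), which is why a first-class API is being asked for.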