Heads up: breaking changes in 0.13.x #1473
Please provide a parameter to pass address-pools along through Helm values, and throw an error if
Yep, we were discussing that with @gclawes. Not sure we can intercept
What about
? Just with a more descriptive error?
And at least for
Works with values like
(tested in my homelab)
Everything went smoothly for the most part. I just had to make some adjustments so Flux would not deploy the CRs before the chart is deployed. It did take a couple of attempts to get the CRs loaded into the cluster after the pods were running.
Hmm, I'm definitely getting spammed with alerts every time Flux reconciles my git repo because of that validating webhook.
Can you expand a bit? Are you getting the errors only after the initial metallb deployment (while the webhooks are being set up) or also after that? If the latter, would you mind filing a separate issue?
Yeah, it's after. I'll open a new issue regarding that.
Thanks! It would help to have the controller logs with verbosity raised to debug via the -loglevel flag.
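With the Helm chart, raising verbosity can usually be done through values rather than editing the Deployment by hand. A sketch under the assumption that the chart exposes a `controller.logLevel` key (the release and repo names here are illustrative; check `helm show values` for the real key):

```shell
# Assumed chart value: controller.logLevel. Verify with:
#   helm show values metallb/metallb
helm upgrade metallb metallb/metallb -n metallb-system \
  --set controller.logLevel=debug

# Then follow the controller logs while reproducing the problem.
kubectl logs -n metallb-system deploy/metallb-controller -f
```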
First of all, thank you for your work on MetalLB, I am really happy to see progress here! 💯 I have mixed feelings about this issue and wanted to give feedback:

Versioning

This release introduces breaking changes, but it is only a minor release. Especially with dependency-management tools like Renovate, or GitOps operators' image-update mechanisms that rely on semantic versioning (and trust that dependencies use it correctly), this release will cause broken clusters. What's your take on semantic versioning within the project?

Maturity/Backwards compatibility

While looking for an answer to the question above, I read about the project maturity, where it's stated that:

That is clearly not the case with this release.
There has been a pretty long discussion together with the previous (now retired) maintainers (@rata, @johananl and @gclawes). MetalLB respects semantic versioning. See the points:
We are going 1.0.0 sooner or later, right after the new CRD-based API stabilizes.
That was (and always has been) the case with changes within the configmap, and it will probably be the case with the new API from now on, unless we hit something terrible that needs to be changed. On the other hand, making the old configmap and the new configuration co-exist would have resulted in a more complex codebase and in corner cases to handle properly. Because of that, and based on the considerations above, we thought that providing a tool for converting the old configuration format to the new one was a good compromise between ease of transition and the complexity of MetalLB. I really hope this won't overshadow all the good will and intentions we have been putting into MetalLB since we (I) started maintaining the project.
Thanks for clarifying.
Don't get me wrong - just pointing out what I've observed.
I had a very simple setup using Helm that stopped working with v0.13.x. The error I get is:

My Ansible setup is the following:
When I try the converter tool, all I get is an empty resources.yaml, e.g.
Can someone help me with translating the address pool to Helm values.yaml? Is that still possible, or do I need to apply custom Kubernetes YAML? Is there any new documentation for it?
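For reference, an old layer2 address-pool maps onto two custom resources in 0.13.x: an IPAddressPool plus an L2Advertisement (the same shapes appear later in this thread as converter output). A minimal sketch, with an illustrative name, namespace and address range:

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default             # illustrative; use your old pool name
  namespace: metallb-system # default install namespace; adjust if different
spec:
  addresses:
    - 192.168.1.200-192.168.1.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
    - default               # must match the IPAddressPool name above
```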
I am hitting the same problem. I guess I am going to downgrade the versions.
That's because that's not a valid ConfigMap; it's just the section in the values.yaml that generates the actual ConfigMap. You need to kubectl get it from the cluster.
Maybe we can have the migrate step describe that we need to pull the ConfigMap first. The docs currently state:

... In order to use the tool, the container must be run mapping the path with the source file

Maybe this could be changed to (something like):

... In order to use the tool, the following container must be run with the ConfigMap. You'll need to pull a copy of the current ConfigMap out of Kubernetes, then run the converter.

This will produce a file named resources.yaml.
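Putting those steps together, a sketch of the whole migration. The ConfigMap name `config` and namespace `metallb-system` are the install defaults and may differ in your cluster; the converter invocation matches the one quoted later in this thread, with `--rm` swapped in for `-d` so the container cleans up after itself and its output is easy to inspect:

```shell
# 1. Pull the live ConfigMap out of the cluster (not the Helm values section).
kubectl get configmap config -n metallb-system -o yaml > config.yaml

# 2. Run the converter; it writes resources.yaml next to config.yaml.
docker run --rm -v $(pwd):/var/input quay.io/metallb/configmaptocrs \
  -source config.yaml

# 3. Review, then apply the generated custom resources.
kubectl apply -f resources.yaml
```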
On the subject of
Hello @fernferret, thank you for the suggestion on improving the migration (configmaptocrs) description.
May I suggest, as an improvement to the controller to help people, that logging around the config be improved. I was only able to work out that I was missing a config because of this line, after enabling debugging: https://github.com/metallb/metallb/blob/main/controller/main.go#L64 It pointed me in the right direction, but as far as I can tell that's the only place where any mention of a config is made in the logs (and at a deeper debug level). Instead, on startup (and/or configuration/reconfiguration), some logging in the controller about the current state of the config (present, not present, correct, incorrect) could be emitted.
I converted my very simple 0.12.1 config with a single IP to 0.13.x, but it fails with:
The config:
Additionally, I noted that with 0.13.x a metallb-speaker pod is deployed on all nodes instead of only on the worker nodes (as with the old version).
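One way to restore the old scheduling would be to constrain the speaker DaemonSet via Helm values. A hypothetical sketch: `speaker.nodeSelector` follows a common chart convention, but the exact key and the worker-node label are assumptions about your chart version and cluster, so verify against `helm show values` before relying on it:

```yaml
# Hypothetical values.yaml fragment: keep the speaker off control-plane nodes.
# This only works if your worker nodes actually carry the label below.
speaker:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
```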
Hi, I have some issues as well. My original config:
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: metallb
namespace: networking
spec:
interval: 15m
chart:
spec:
chart: metallb
version: 0.12.1
sourceRef:
kind: HelmRepository
name: metallb
namespace: flux-system
install:
createNamespace: true
remediation:
retries: 5
upgrade:
remediation:
retries: 5
values:
configInline:
address-pools:
- name: default
protocol: layer2
addresses:
- "${METALLB_LB_RANGE}"
apiVersion: v1
data:
config: |
address-pools:
- addresses:
- 192.168.1.200-192.168.1.250
name: default
protocol: layer2
kind: ConfigMap
metadata:
annotations:
meta.helm.sh/release-name: metallb
meta.helm.sh/release-namespace: networking
creationTimestamp: "2022-08-19T06:45:38Z"
labels:
app.kubernetes.io/instance: metallb
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: metallb
app.kubernetes.io/version: v0.12.1
helm.sh/chart: metallb-0.12.1
helm.toolkit.fluxcd.io/name: metallb
helm.toolkit.fluxcd.io/namespace: networking
name: metallb
namespace: networking
resourceVersion: "10375"
uid: c4f8af61-6a7d-4834-b3e7-93ea30dc21f7
# This was autogenerated by MetalLB's custom resource generator.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
creationTimestamp: null
name: default
namespace: networking
spec:
addresses:
- 192.168.1.200-192.168.1.250
status: {}
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
creationTimestamp: null
name: l2advertisement1
namespace: networking
spec:
ipAddressPools:
- default
status: {}
---
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- helm-release.yaml
# - resources.yaml
# k describe ipaddresspools.metallb.io
Name: first-pool
Namespace: networking
Labels: kustomize.toolkit.fluxcd.io/name=apps
kustomize.toolkit.fluxcd.io/namespace=flux-system
Annotations: <none>
API Version: metallb.io/v1beta1
Kind: IPAddressPool
Metadata:
Creation Timestamp: 2022-08-27T08:23:25Z
Generation: 1
Managed Fields:
API Version: metallb.io/v1beta1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:labels:
f:kustomize.toolkit.fluxcd.io/name:
f:kustomize.toolkit.fluxcd.io/namespace:
f:spec:
f:addresses:
Manager: kustomize-controller
Operation: Apply
Time: 2022-08-27T08:23:25Z
Resource Version: 5862530
UID: d7a1a3f2-758a-46ed-a568-e1e495658eac
Spec:
Addresses:
192.168.1.200-192.168.1.250
Auto Assign: true
Avoid Buggy I Ps: false
Events: <none>
# k describe l2advertisements.metallb.io
Name: l2advertisement1
Namespace: networking
Labels: kustomize.toolkit.fluxcd.io/name=apps
kustomize.toolkit.fluxcd.io/namespace=flux-system
Annotations: <none>
API Version: metallb.io/v1beta1
Kind: L2Advertisement
Metadata:
Creation Timestamp: 2022-08-27T08:02:49Z
Generation: 2
Managed Fields:
API Version: metallb.io/v1beta1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:labels:
f:kustomize.toolkit.fluxcd.io/name:
f:kustomize.toolkit.fluxcd.io/namespace:
f:spec:
f:ipAddressPools:
Manager: kustomize-controller
Operation: Apply
Time: 2022-08-27T08:23:25Z
Resource Version: 5862532
UID: 978e5816-4306-4cfd-8fe7-09c35b28664c
Spec:
Ip Address Pools:
first-pool
Events: <none>
I am going to close this issue. @djryanj / @zrav / @fabricesemti80, would you mind filing new issues for your comments?
119: metallb: upgrade from 0.12.1 to 0.13.5 r=bfritz a=bfritz

Release notes:
* https://metallb.universe.tf/release-notes/#version-0-13-5
* metallb/metallb#1473
* https://metallb.universe.tf/#backward-compatibility

Configuration was migrated with:

docker run -d -v $(pwd):/var/input quay.io/metallb/configmaptocrs -source config.yaml

and then split between public and private address pools.

Co-authored-by: Brad Fritz <github@bfritz.com>
The latest MetalLB version is now configurable only via CRs.
Please refer to the documentation on how to use the ConfigMap-to-CR converter (https://metallb.universe.tf/#backward-compatibility).