Published on 00/00/0000
Last updated on 00/00/0000
13 min read
If you're looking for a production-ready Istio distribution based on the new Istio 1.8 release, with support for a safe and automated canary upgrade flow, check out the new Backyards 1.5 release.
Caveat: the Istio 1.8.0 release has a few known issues, pointed out in the official change notes. Either make sure you won't be affected by them, or wait a few patch releases before upgrading your cluster, so that these issues are sorted out.

First of all, it's important to point out that the supported Kubernetes versions for Istio 1.8 are 1.16, 1.17, 1.18 and 1.19. If you are running Istio 1.7 in your environment, you should already be on at least Kubernetes 1.16 (as that is also the oldest supported K8s version for Istio 1.7). As a result, you should be able to upgrade to Istio 1.8 without having to upgrade your K8s cluster.

High impact changes (these can cause issues when upgrading the mesh):

Previously, inbound Envoy clusters were named with the format inbound|<service_port_number>|<service_port_name>|<service_hostname>. An example cluster name with this format was inbound|80|http|httpbin.default.svc.cluster.local.

In Istio 1.8.0 this format has changed: the service port name and the service's hostname are now omitted, so the new format looks like inbound|<service_port_number>||. For the example above, the new name is simply inbound|80||.
There were issues when multiple services selected the same container port in the same pod, and this change is an attempt to solve them. The official release note states that: "For most users, this is an implementation detail, and will only impact debugging or tooling that directly interacts with Envoy configuration."
Well, from what we've seen so far, this change causes undesirable behaviour in the following scenario: there are two services which use the same service port number and select the same pod, but target different port numbers inside the pod. Here's an example, where a backend and a frontend container are in the same pod:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: backyards-web
  namespace: backyards-system
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http-web
  selector:
    app.kubernetes.io/name: backyards
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: backyards
  namespace: backyards-system
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http-app
  selector:
    app.kubernetes.io/name: backyards
  type: ClusterIP
```
With this setup, you'll see the following warning in the istiod logs when using the upstream docker.io/istio/pilot:1.8.0 docker image:
```json
"pilot_duplicate_envoy_clusters": {
  "inbound|80||": {
    "proxy": "backyards-8cdfc77b5-4jw44.backyards-system",
    "message": "Duplicate cluster inbound|80|| found while pushing CDS"
  }
}
```
The issue is that when calling either the backyards-web or the backyards service, the same target port will be hit inside the pod in both cases. It therefore becomes impossible to reach the other container port through its service, which is definitely undesired behaviour and seems to be a bug.
We're planning to open an issue upstream with more details to get this sorted out. A workaround for this problem is to use different service port numbers to avoid such conflicts. What we did in Backyards (now Cisco Service Mesh Manager) was to revert this pilot change in our own docker image, so that this issue can never happen.
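To sketch that workaround, the backyards-web service from the earlier example could expose a different service port number than the backyards service, so the inbound cluster names no longer collide (the 8080 value here is illustrative, not what Backyards actually uses):

```yaml
# Workaround sketch: distinct service ports yield distinct inbound
# cluster names (inbound|8080|| vs inbound|80||), avoiding the conflict.
apiVersion: v1
kind: Service
metadata:
  name: backyards-web
  namespace: backyards-system
spec:
  ports:
  - name: http
    port: 8080            # changed from 80; illustrative value
    protocol: TCP
    targetPort: http-web
  selector:
    app.kubernetes.io/name: backyards
  type: ClusterIP
```

Clients of backyards-web would then need to call port 8080 instead of 80, which is why this is only a workaround rather than a fix.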
UPDATE: The upstream issue can be tracked here. Kudos to John Howard, who picked up the issue and fixed it very quickly. The patch should already be in the Istio 1.8.1 release.
Previously, the protocol of a service port could only be declared through the port naming convention name: <protocol>[-<suffix>]. On Kubernetes 1.18+, it can now also be set explicitly with the port's appProtocol field: appProtocol: <protocol>.

```yaml
kind: Service
metadata:
  name: httpbin
spec:
  ports:
  - port: 443
    name: https-web
  - port: 3306
    name: db
    appProtocol: mysql
```
The ipBlocks/notIpBlocks fields on the AuthorizationPolicy resource are now used to allow/deny requests based on the source address of the IP packets that arrive at the sidecar.
If you'd like to allow/deny requests based on their original source IP addresses (either because you use the X-Forwarded-For header or the proxy protocol), update your AuthorizationPolicy resources to use the new remoteIpBlocks/notRemoteIpBlocks fields instead of the ipBlocks/notIpBlocks fields.
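For illustration, a minimal sketch of such a policy, which only allows requests whose original client IP falls within an assumed internal range (the name, namespace and CIDR are hypothetical):

```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-internal-range   # hypothetical name
  namespace: foo               # hypothetical namespace
spec:
  action: ALLOW
  rules:
  - from:
    - source:
        # Matches the original client IP (e.g. taken from X-Forwarded-For),
        # not the immediate peer address of the connection.
        remoteIpBlocks: ["192.168.0.0/16"]
```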
Requests coming from a trust domain that is not covered by the mesh's TrustDomainAliases list are now rejected. If you want to allow traffic into your mesh from different trust domains on Istio 1.8, you need to add them to your TrustDomainAliases list, otherwise that traffic will be rejected.
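As a sketch, the aliases live under meshConfig; here configured through an IstioOperator resource (the alias value is hypothetical):

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    trustDomain: cluster.local
    trustDomainAliases:
    # Trust domain of another mesh whose traffic should be accepted
    - td-old-mesh.example.com   # hypothetical alias
```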
Third-party addons, such as Prometheus, Grafana, Zipkin and Kiali, are no longer installed by istioctl, and it is recommended that they be installed separately.
If you want the easiest way to install all these integrated components - with a production-ready setup and with many additional features - check out Backyards (now Cisco Service Mesh Manager).
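If you still want the demo-grade addon manifests that ship with the upstream release, one way to apply them separately is from the samples/addons directory of the release archive (a sketch, assuming you are in the extracted istio-1.8.0 directory and connected to your cluster):

```shell
# Deploy the demo addon manifests bundled with the Istio 1.8 release
kubectl apply -f samples/addons/prometheus.yaml
kubectl apply -f samples/addons/grafana.yaml
kubectl apply -f samples/addons/kiali.yaml
```

Note that these manifests are meant for demos and quick starts, not for production-grade deployments.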
Istio can now automatically create WorkloadEntry resources when new VMs become available.
The Istio 1.8 release laid the groundwork for a wide variety of features around VM support, which should become apparent in future releases as it matures.
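As an illustration, the new WorkloadGroup resource describes a template from which per-VM WorkloadEntry resources can be auto-registered (the names and network value here are hypothetical):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: WorkloadGroup
metadata:
  name: reviews          # hypothetical workload name
  namespace: bookinfo    # hypothetical namespace
spec:
  metadata:
    labels:
      app: reviews
  template:
    # Applied to each WorkloadEntry auto-created for a joining VM
    serviceAccount: bookinfo-reviews
    network: vm-network  # assumed network name
```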
New INSERT_FIRST, INSERT_BEFORE and INSERT_AFTER operations were added to the EnvoyFilter API, to make it possible to insert filters at a specific position. To be able to override HTTP and network filters, a new REPLACE operation was added as well.
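A minimal sketch of the new operations, inserting a hypothetical Lua filter just before the router filter in the sidecar's inbound HTTP filter chain:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: add-header-lua      # hypothetical name
  namespace: istio-system
spec:
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: SIDECAR_INBOUND
      listener:
        filterChain:
          filter:
            name: envoy.filters.network.http_connection_manager
            subFilter:
              name: envoy.filters.http.router
    patch:
      operation: INSERT_BEFORE   # new in Istio 1.8
      value:
        name: envoy.filters.http.lua
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua
          inlineCode: |
            -- Illustrative filter: tag every inbound request with a header
            function envoy_on_request(handle)
              handle:headers():add("x-example", "true")
            end
```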
A new istioctl bug-report command was added to collect an archive of the Istio cluster's state - mainly for debugging purposes.
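Usage is a single command against the current cluster context (no flags are required; run it wherever your kubeconfig points at the affected cluster):

```shell
# Collect Istio and cluster diagnostics into a local archive
istioctl bug-report
```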
Originally, Istio could be installed with Helm. Then istioctl, and later the official Istio operator, were introduced, and, slowly, the Helm installation method became unsupported. It turned out, however, that there was still demand among Istio users for a way to install and upgrade Istio with Helm (mostly from users who had already deployed all their apps with Helm), and because of that, Helm 3 support was added in Istio 1.8.
As a result, it is possible to install and upgrade Istio with Helm 3, though I would note here that this is NOT the recommended method: only in-place upgrades are supported with Helm, whilst the canary upgrade model is the recommended flow today.
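If you do go the Helm 3 route anyway, the 1.8 docs drive the install from charts inside the downloaded release; a sketch, assuming you are in the extracted istio-1.8.0 directory:

```shell
kubectl create namespace istio-system
# Install the base chart (CRDs and cluster-wide resources) first,
# then the istiod chart for the control plane
helm install istio-base manifests/charts/base -n istio-system
helm install istiod manifests/charts/istio-control/istio-discovery -n istio-system
```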
More and more Istio resources now report their state in a Status field. These fields include, but are not limited to, resource readiness, the number of data plane instances associated with the resource, and validation messages.
In Istio 1.8 a new observed generation field is also present, which, when it matches the generation in the resource's metadata, indicates that all Istio updates have been completed. This is useful for detecting when requested changes to an Istio configuration have been served and are ready to receive traffic.
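One way to check this, sketched with kubectl against a hypothetical VirtualService named reviews (and assuming config status distribution is enabled): when the two numbers printed below match, the requested change has been fully processed.

```shell
# Compare the resource's generation with the generation Istio has observed
kubectl -n default get virtualservice reviews \
  -o jsonpath='{.metadata.generation} {.status.observedGeneration}'
```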
Want to know more? Get in touch with us, or delve into the details of the latest release. Or just take a look at some of the Istio features that Backyards automates and simplifies for you, and which we've already blogged about.