I went to Virtual KubeCon EU almost two weeks ago. I wrote down some notes and thought it might be interesting to share them on my blog, to wake it up from its slumber.
UPDATE 07/12/2021: I realise this blog post is not really coherent, as it grew out of my note-taking during the event. I cleaned it up a bit and I want to publish it, but it is not as polished as I would like. The event is too long ago to recall everything properly, so I decided to spend my time writing new content instead of making this one really great.
Also, realizing I’m typing this just after KubeCon NA is funny to me. I didn’t really pick up any talks from that event.
The keynotes were pretty generic, as expected: a bunch of partner spotlights and high-level stuff.
Quote: “Let’s build for the ultimate end user, the human experience, and make our lives safer, healthier, and happier.”
Peloton had to scale massively during COVID to enable people to work out at home. They used a lot of cloud-native technologies to make this possible.
Red Hat is working on a minimal Kubernetes API which provides multi-tenant support, isolating CRDs between different teams.
Justin Cormack from Docker gave us an overview of all CNCF sandbox projects, of which there are a lot. Tool selection is getting harder.
A presentation from Weaveworks about Flux. Flux is a very nice project on which I am building a proof of concept in my own time. Managing more and more Kubernetes clusters and making sure their config is in sync is getting old quickly. Flux helps by keeping clusters in sync, using a git repository as the single source of truth.
Liz Rice gave an update on how the CNCF SIGs were renamed to TAGs (Technical Advisory Groups) and which ones are available.
CloudEvents wants to standardize how events are formatted. The talk was about the format of this standard and how it is carried over multiple transport methods.
SDKs are available for multiple languages, and you can pick from multiple transports (MQTT, NATS, Kafka, HTTP).
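To give an idea of what the SDKs look like, here is a minimal sketch using the Go SDK over the HTTP transport; the target URL, event type, and payload are made up for this example.

```go
package main

import (
	"context"
	"log"

	cloudevents "github.com/cloudevents/sdk-go/v2"
)

func main() {
	// HTTP is just one of the supported protocol bindings.
	c, err := cloudevents.NewClientHTTP()
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}

	// Build a CloudEvent; the source, type and payload below are made up.
	e := cloudevents.NewEvent()
	e.SetSource("example/blog")
	e.SetType("com.example.demo.sent")
	e.SetData(cloudevents.ApplicationJSON, map[string]string{"message": "hello"})

	// Send it to a (hypothetical) consumer endpoint.
	ctx := cloudevents.ContextWithTarget(context.Background(), "http://localhost:8080/")
	if result := c.Send(ctx, e); cloudevents.IsUndelivered(result) {
		log.Fatalf("failed to send: %v", result)
	}
}
```

Swapping HTTP for Kafka or NATS is mostly a matter of constructing a different protocol binding; the event itself stays the same.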
For consuming these events, a discovery API should be provided. This API should tell you which systems produce events of interest, which events are produced, and how to subscribe to them. CloudEvents wants to deliver an API specification with an HTTP/JSON mapping.
To subscribe to events, a subscription API needs to be provided. In this API, consumers can subscribe to events and tell the producing systems where to deliver them.
Another part is a schema registry, which defines the OpenAPI specifications for all events in an organisation.
The DoD has some interesting challenges
They solved that by:
The way they do governance is really interesting to me!
The session started with some research into how clusters are partitioned: per app, per domain, everything in one cluster, etc.
More clusters means more management and more complexity, which leads to more stuff that can break.
Linkerd provides some help to solve this problem.
This reminds me of the Consul demo I have lying around and need to finish.
Look into Linkerd service mirroring (uses annotations?). What is the advantage over Consul / Istio multi-DC?
Takeaways:
Kubernetes moves to three releases per year (instead of four). This will allow more focus on quality, discussions, and the overall state of the project. It is seen as a quality-of-life improvement.
TODO: more research on this topic, as the presentation was a bit disjointed.
PSP (PodSecurityPolicy) will be deprecated! Move to OPA / Kyverno.
Focus on security, automation, and governance?
Interesting talk by DT about their Kubernetes journey.
Lorenzo from Sysdig shows us how to persist his rogue access in our Kubernetes cluster without us noticing. ;-)
Hiding processes using libprocesshider.
Mitigations:
Awesome demo as always. ;-)
Ellen starts by trying to exploit her dev cluster, beginning with what looks like an investigation and switching to a black hat halfway through the demo. ;-) Fun stuff.
Tabitha starts with a throwback to the AWESOME talk by Brad and Ian from last year.
This talk was an amazing, hilariously told story about security within Kubernetes, and it gave insight into which attack vectors are present.
Problems / challenges with Prometheus
Possible solutions:
Similarities:
Focus on performance, HA, costs, and operational complexity.
M3 uses Prometheus and writes to a horizontally scalable M3DB, managed / coordinated by etcd.
Advantages:
Disadvantages:
Cortex
TODO look into architecture
Prometheus remote-writes to the Cortex distributor -> Cortex ingester. K/V store in Consul or etcd. Writes to Bigtable / Cassandra / DynamoDB, S3, and Memcached.
Advantages:
Disadvantages:
Thanos uses a sidecar next to Prometheus that pushes to object storage. TODO: look into the architecture.
Advantages:
Disadvantages:
In the end it doesn’t really matter which one you pick. The presenter chose Thanos for its simplicity.
Cloud Native & WebAssembly: Better Together - Liam Randall, Founder, Cosmonic & Co-Founder, Wasmcloud & Ralph Squillace, Principal PM, Azure Core Upstream, Microsoft Azure (10:00-10:15)
The talk starts with the decoupling from physical hardware, but decoupling further gets harder. WASM might be the solution, as it builds on the entire cloud-native ecosystem. WASM is a polyglot compilation target for the web.
Meshes are awesome
Layer 7: mostly HTTP traffic
Layer 3: Network Service Mesh (networkservicemesh.io)
Streaming media: github.com/media-streaming-mesh (based on UDP)
Public Health Data Mesh: bit-broker.io
RISC-V is a cool new movement around an open instruction set, essentially an open API to the hardware. Super interesting, and I need to read up on that as well…
Confidential computing is about the guest workload no longer trusting the host’s shared components. Only the tenant can see & modify its data; the infra owner (CSP) no longer needs to be trusted. I did some research on this subject in December 2020 for my current customer engagement, and it was good to refresh on it.
Requirements:
Data in transit can be protected with VPNs and TLS; it’s a solved problem and we know how to do it. Encryption at rest can also be done easily.
Data in use is harder, since we need to encrypt the memory we are using, and our data moves through the CPU (which can be eavesdropped on).
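To make the contrast concrete: encrypting data at rest is a few lines of code in most languages. A minimal Go sketch (AES-GCM, with key management hand-waved away); there is no equivalently simple primitive for data in use:

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
	"io"
)

func main() {
	// In reality the key would come from a KMS/HSM; here it is just random bytes.
	key := make([]byte, 32)
	if _, err := io.ReadFull(rand.Reader, key); err != nil {
		panic(err)
	}

	block, err := aes.NewCipher(key)
	if err != nil {
		panic(err)
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		panic(err)
	}

	nonce := make([]byte, gcm.NonceSize())
	if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
		panic(err)
	}

	// Encrypt before writing to disk: this is "data at rest".
	ciphertext := gcm.Seal(nil, nonce, []byte("some secret state"), nil)
	fmt.Printf("encrypted %d bytes\n", len(ciphertext))

	// The problem: to *use* the data, we have to decrypt it into plain memory,
	// which is exactly what confidential computing tries to protect.
	plaintext, err := gcm.Open(nil, nonce, ciphertext, nil)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(plaintext))
}
```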
Dependencies:
How to apply this to containers
Solution space:
Kata containers <- most natural solution
Firecracker
gVisor
Fully offload to the guest
Mixed:
Attestation service. kata-agent.
Impact on infra operator
We need VMs. For end users, not much should change. This is a work in progress and still needs quite a bit of effort.
OPA wants to unify policy enforcement across the stack. Services can offload policy decisions to OPA by querying it. It is up to the service to enforce the policy; the decision is made by OPA based on the Rego policy.
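As a rough sketch of what that offloading could look like (assuming OPA runs as a sidecar on localhost:8181 and a hypothetical policy package example/authz with an allow rule), the service simply POSTs its input to OPA’s data API and acts on the answer:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// The policy package and the input fields are made up for this example.
const opaURL = "http://localhost:8181/v1/data/example/authz/allow"

type opaResponse struct {
	Result bool `json:"result"`
}

func allowed(user, method, path string) (bool, error) {
	// OPA's data API expects the query input wrapped in an "input" field.
	body, err := json.Marshal(map[string]interface{}{
		"input": map[string]string{"user": user, "method": method, "path": path},
	})
	if err != nil {
		return false, err
	}

	resp, err := http.Post(opaURL, "application/json", bytes.NewReader(body))
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()

	var out opaResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return false, err
	}
	// OPA only answers the question; enforcing the decision is up to the service.
	return out.Result, nil
}

func main() {
	ok, err := allowed("alice", "GET", "/salary/alice")
	if err != nil {
		panic(err)
	}
	fmt.Println("allowed:", ok)
}
```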
OPA is written in Go and runs as a sidecar or a host-level daemon, and it can also be compiled to WASM. It contains a management API for control and visibility, which can be used for offline auditing.
OPA supplies tooling to build, test and debug policies.
Gatekeeper is an extensible admission controller for K8S that uses OPA policies. Gatekeeper is also able to mutate requests, e.g. to inject sidecars or add labels.
Events in K8S can be sent to something that can compose them into spans, so we can correlate them.
weaveworks-experimental/kspan picks up all events, creates spans from them, and sends them into Jaeger. Pretty cool technology for debugging things going wrong in K8S!
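Roughly, the idea looks like the sketch below (this is not kspan’s actual code): list the events with client-go and emit a span per event via the OpenTelemetry API. No exporter is wired up here, so the tracer is a no-op; kspan does the real work of grouping events into traces and shipping them to Jaeger.

```go
package main

import (
	"context"
	"os"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the kubeconfig pointed to by $KUBECONFIG.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx := context.Background()
	events, err := client.CoreV1().Events("default").List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}

	// Without a configured exporter this tracer does nothing; kspan would
	// group these into a trace per owning object and ship them to Jaeger.
	tracer := otel.Tracer("events-to-spans-sketch")
	for _, ev := range events.Items {
		_, span := tracer.Start(ctx, ev.Reason)
		span.SetAttributes(
			attribute.String("object", ev.InvolvedObject.Kind+"/"+ev.InvolvedObject.Name),
			attribute.String("message", ev.Message),
		)
		span.End()
	}
}
```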
I’ve watched some replays of talks as well, as so much is going on at the same time.
Jason DeTiberus came up with a ‘bare-bones’ API server which does not require a full Kubernetes cluster and can be embedded in applications to support CRDs.
Use cases:
The Cluster API is a Kubernetes project to bring declarative, Kubernetes-style APIs to cluster creation, configuration, and management. It provides optional, additive functionality on top of core Kubernetes to manage the lifecycle of a Kubernetes cluster.
Cool demo of Cluster API being used to create Kubernetes clusters.
The SolarWinds attack was a wake-up call for a lot of people to not blindly trust the software running in your supply chain / CI-CD environments. It would be nice if containers, like freight containers, came with a bill of materials describing their contents, and were signed off upon. Signing involves a lot of keys to manage, and it is not something native to registries. Also, people like to keep the container and the signature detached, so it is easy to add signatures afterwards. In the end, people want to be able to validate what they are running on their container platform.
Container signing is not being used a lot. The idea is to make it easier to start using it. Notary is working on the base infrastructure to support this.
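The signing primitive itself is not the hard part; key management and registry support are. Just to make the idea concrete, a generic sketch with plain Go ed25519 over an image digest (this is not the actual Notary format or workflow):

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

func main() {
	// In practice the key pair would live in a KMS, and the public key is what
	// the cluster or admission controller trusts; here we just generate one.
	pub, priv, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		panic(err)
	}

	// Stand-in for an image manifest digest (normally the "sha256:..." from the registry).
	manifest := []byte("example image manifest")
	digest := sha256.Sum256(manifest)

	// The publisher signs the digest and pushes the signature alongside the image.
	sig := ed25519.Sign(priv, digest[:])

	// The consumer recomputes the digest and verifies before running the image.
	if ed25519.Verify(pub, digest[:], sig) {
		fmt.Println("signature valid: the content is what the publisher signed off on")
	} else {
		fmt.Println("signature invalid: do not run this image")
	}
}
```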
A usable signing solution will be shipped by the second half of 2021 with iterative updates. You will also be able to attach metadata in the form of JSON documents to images using the same mechanics.