MicroK8s is a small, fast, single-package Kubernetes for developers, IoT and edge.

Overview

MicroK8s

The smallest, fastest Kubernetes

Single-package fully conformant lightweight Kubernetes that works on 42 flavours of Linux. Perfect for:

  • Developer workstations
  • IoT
  • Edge
  • CI/CD

"Canonical might have assembled the easiest way to provision a single node Kubernetes cluster." - Kelsey Hightower

Why MicroK8s?

  • Small. Developers want the smallest K8s for laptop and workstation development. MicroK8s provides a standalone K8s compatible with Azure AKS, Amazon EKS, and Google GKE when you run it on Ubuntu.

  • Simple. Minimize administration and operations with a single-package install that has no moving parts for simplicity and certainty. All dependencies and batteries included.

  • Secure. Updates are available for all security issues and can be applied immediately or scheduled to suit your maintenance cycle.

  • Current. MicroK8s tracks upstream and releases beta, RC and final bits the same day as upstream K8s. You can track latest K8s or stick to any release version from 1.10 onwards.

  • Comprehensive. MicroK8s includes a curated collection of manifests for common K8s capabilities and services:

    • Service Mesh: Istio, Linkerd
    • Serverless: Knative
    • Monitoring: Fluentd, Prometheus, Grafana, Metrics
    • Ingress, DNS, Dashboard, Clustering
    • Automatic updates to the latest Kubernetes version
    • GPGPU bindings for AI/ML
    • Kubeflow!

Drop us a line at MicroK8s in the Wild if you are doing something fun with MicroK8s!

Quickstart

Install MicroK8s with:

snap install microk8s --classic
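
To track a specific Kubernetes release rather than the latest stable, you can pick a snap channel at install time; a brief sketch, assuming the usual track/risk channel naming (1.26/stable is only an example):

sudo snap install microk8s --classic --channel=1.26/stable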

MicroK8s includes a microk8s kubectl command:

sudo microk8s kubectl get nodes
sudo microk8s kubectl get services

To use MicroK8s with your existing kubectl:

sudo microk8s kubectl config view --raw > $HOME/.kube/config
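
With the config exported, a standalone kubectl (assuming you have one installed) should now reach the MicroK8s cluster; a quick sanity check:

kubectl get nodes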

User access without sudo

The microk8s user group is created during the snap installation. Users in that group are granted access to microk8s commands. To add a user to that group:

sudo usermod -a -G microk8s <username>
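
Group membership only takes effect in a new session. A typical follow-up, per the upstream getting-started steps (the chown is only needed if a kubeconfig was already written under ~/.kube):

sudo chown -f -R <username> ~/.kube
newgrp microk8s   # or log out and log back in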

Kubernetes add-ons

MicroK8s installs a barebones upstream Kubernetes. Additional services like dns and the Kubernetes dashboard can be enabled using the microk8s enable command.

sudo microk8s enable dns dashboard

Use microk8s status to see a list of enabled and available addons. You can find the addon manifests and/or scripts under ${SNAP}/actions/, with ${SNAP} pointing by default to /snap/microk8s/current.
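
For example, to block until the cluster is ready and then turn an addon off again (both commands are part of the stock microk8s CLI):

microk8s status --wait-ready
microk8s disable dashboard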

Documentation

The official docs are maintained in the Kubernetes upstream Discourse.

Take a look at the build instructions if you want to contribute to MicroK8s.

Get it from the Snap Store

Comments
  • microk8s crashes with "FAIL: Service snap.microk8s.daemon-apiserver is not running"

    Hello, I have installed a 3-node microk8s cluster. Everything works great for a couple of days, but then the apiserver fails for no evident reason.

    Below is the microk8s inspect output; the tarball inspection-report-20200925_103006.tar.gz is attached.

    Inspecting Certificates
    Inspecting services
      Service snap.microk8s.daemon-cluster-agent is running
      Service snap.microk8s.daemon-containerd is running
     FAIL:  Service snap.microk8s.daemon-apiserver is not running
    For more details look at: sudo journalctl -u snap.microk8s.daemon-apiserver
      Service snap.microk8s.daemon-apiserver-kicker is running
      Service snap.microk8s.daemon-control-plane-kicker is running
      Service snap.microk8s.daemon-proxy is running
      Service snap.microk8s.daemon-kubelet is running
      Service snap.microk8s.daemon-scheduler is running
      Service snap.microk8s.daemon-controller-manager is running
      Copy service arguments to the final report tarball
    Inspecting AppArmor configuration
    Gathering system information
      Copy processes list to the final report tarball
      Copy snap list to the final report tarball
      Copy VM name (or none) to the final report tarball
      Copy disk usage information to the final report tarball
      Copy memory usage information to the final report tarball
      Copy server uptime to the final report tarball
      Copy current linux distribution to the final report tarball
      Copy openSSL information to the final report tarball
      Copy network configuration to the final report tarball
    Inspecting kubernetes cluster
      Inspect kubernetes cluster
    
    Building the report tarball
      Report tarball is at /var/snap/microk8s/1719/inspection-report-20200925_103006.tar.gz
    

    This is not the first time this has happened. My attempt to deploy a small production cluster based on microk8s is hindered by this problem in the test environment.

    kind/bug inactive 
    opened by raohammad 151
  • Failed to enable kubeflow

    inspection-report-20200728_120622.tar.gz

    • Error Message:
    Couldn't contact api.jujucharms.com
    Please check your network connectivity before enabling Kubeflow.
    Failed to enable kubeflow
    
    • microk8s version:
    Name      Version  Publisher   Notes    Summary
    microk8s  v1.18.6  canonical✓  classic  Lightweight Kubernetes for workstations and appliances
    
    opened by lao-white 84
  • docker disappeared

    Docker disappeared from microk8s:

    # microk8s.docker
    microk8s.docker: command not found
    # ls /snap/bin/
    microk8s.config  microk8s.ctr  microk8s.disable  microk8s.enable  microk8s.inspect  microk8s.istioctl  microk8s.kubectl  microk8s.reset  microk8s.start  microk8s.status  microk8s.stop
    # cat /etc/issue
    Ubuntu 18.04.2 LTS \n \l
    

    Is it still going to be used in the project? Is there an alternative for inspecting what kube is doing in the background?

    If this was a planned change, is there documentation/release notes?

    inactive 
    opened by g00nix 74
  • Failed to enable kubeflow

    inspection-report-20200211_171636.tar.gz

    • Error Message:
    $ microk8s.enable kubeflow
    Enabling dns...
    Enabling storage...
    Enabling dashboard...
    Enabling ingress...
    Enabling rbac...
    Enabling juju...
    Kubeflow could not be enabled:
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
      0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0
     22  116M   22 25.8M    0     0  51231      0  0:39:39  0:08:48  0:30:51 71301
    curl: (18) transfer closed with 94828385 bytes remaining to read
    
    Command '('microk8s-enable.wrapper', 'juju')' returned non-zero exit status 1
    Failed to enable kubeflow
    
    • microk8s version:
    $ sudo snap find microk8s
    Name      Version  Publisher   Notes    Summary
    microk8s  v1.17.2  canonical✓  classic  Kubernetes for workstations and appliances
    

    Maybe this issue is the same as #943, but I'm a microk8s newbie and cannot make a correct judgement about it.

    opened by titsuki 72
  • knative HelloWorld Serving Code Example

    Deploy a clean microk8s snap deployment:

    snap install microk8s --classic

    Enable DNS, Istio and Knative:

    sudo microk8s.enable dns istio knative

    Deploy the HelloWorld Go service example:

    apiVersion: serving.knative.dev/v1alpha1 # Current version of Knative
    kind: Service
    metadata:
      name: helloworld-go # The name of the app
      namespace: default # The namespace the app will use
    spec:
      template:
        spec:
          containers:
            - image: gcr.io/knative-samples/helloworld-go # The URL to the image of the app
              env:
                - name: TARGET # The environment variable printed out by the sample app
                  value: "Go Sample v1"
    

    The pods are created but fail with the error below:

    Readiness probe errored: rpc error: code = Unknown desc = failed to exec in container: failed to start exec "fc02dba843b84f907e3054501f078791474de71dce1d68e37734af3ef30fcf22": OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused "apparmor failed to apply profile: write /proc/self/attr/exec: operation not permitted": unknown

    Result from kubectl get ksvc is "RevisionMissing".

    I'm digging further, but has anyone else experienced this?

    opened by datadot 48
  • Service Endpoints not resolving

    Hi Canonical Team,

    We started facing issues with MicroK8s two days ago: our pods are not able to communicate with each other via the service endpoint URLs, e.g. minio-service.default.svc.cluster.local. When we spun up a dummy DNS pod and ran nslookup from there, we could see the names are not resolving. Even cluster.local is not resolved, although kubernetes.default does resolve.

    microk8s kubectl exec -i -t dnsutils -- nslookup cluster.local
    Server:		10.152.183.10
    Address:	10.152.183.10#53
    
    ** server can't find cluster.local.ec2.internal: SERVFAIL
    
    command terminated with exit code 1
    
    # microk8s kubectl exec -i -t dnsutils -- nslookup kubernetes.default
    Server:		10.152.183.10
    Address:	10.152.183.10#53
    
    Name:	kubernetes.default.svc.cluster.local
    Address: 10.152.183.1
    

    The weird thing is that when I just nslookup svc.ns, it resolves without any issues.

    # microk8s kubectl exec -i -t dnsutils -- nslookup minio-service.default
    Server:		10.152.183.10
    Address:	10.152.183.10#53
    
    Name:	minio-service.default.svc.cluster.local
    Address: 10.152.183.246
    
    # microk8s kubectl exec -i -t dnsutils -- nslookup minio-service.default.svc.cluster.local
    Server:		10.152.183.10
    Address:	10.152.183.10#53
    
    ** server can't find minio-service.default.svc.cluster.local.ec2.internal: SERVFAIL
    
    command terminated with exit code 1
    

    Attaching tarball for reference. inspection-report-20211021_073518.tar.gz core-dns.log

    Also, the DNS entry for ec2.internal comes from /run/systemd/resolve/resolv.conf and not from /etc/resolv.conf, as mentioned in the known issues link. Was there any breaking change, or are we missing something? We only started facing these issues in the past three days.
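
    One way to take the host's stub resolver (and its ec2.internal search domain) out of the picture is to point the dns addon at explicit upstream resolvers; a sketch, assuming the documented dns addon argument syntax and using public resolvers as an example:

    microk8s disable dns
    microk8s enable dns:8.8.8.8,8.8.4.4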

    opened by Sam-Sundar 42
  • Kubeflow dashboard does not show in browser

    What steps did you take and what happened: Based on the ubuntu/microk8s#1698 suggestion, I installed kubeflow using microk8s.

    sudo snap install microk8s --classic --channel=latest/edge
    microk8s enable dns storage gpu istio
    microk8s enable kubeflow
    

    After install, I opened http://localhost (screenshot attached).

    And I got the error below (screenshot attached).

    What did you expect to happen: Enter the kubeflow dashboard.

    Anything else you would like to add: inspection-report-20201123_124046.tar.gz. When I use the following command to set up port forwarding to the Istio gateway,

    export NAMESPACE=istio-system
    microk8s kubectl port-forward -n ${NAMESPACE} svc/istio-ingressgateway 8080:80
    

    I have below error message:

    upstream connect error or disconnect/reset before headers. reset reason: connection failure
    

    When I enter the link http://10.152.183.51:8082/, I can access the Kubeflow dashboard (screenshot attached). However, I cannot access other sections, like Pipelines, Notebook Servers, etc. (screenshot attached). Environment:

    • Kubernetes version: (use kubectl version): v1.19.4-34+68a982ef7f1a98
    • OS (e.g. from /etc/os-release): ubuntu 20.04 LTS
    opened by kosehy 37
  • Can't use localhost:25000 for the cluster agent.py API

    Hello, I've been trying to use the agent.py API in order to get the status of the cluster. However, localhost:25000/cluster/api/v1.0/status doesn't work. In fact, localhost:25000 doesn't work at all. I have attached the inspection tarball. Thanks. inspection-report-20200126_213715.zip

    opened by giorgos-apo 36
  • API Server hanging on Raspberry Pi

    I have a 3-node microk8s cluster on Raspberry Pi running 1.21/edge. At least once daily one of the nodes will go into a Not Ready status, and when I restart with microk8s stop; microk8s start, it hangs just after trying to start the API server.

    inspection-report-20210521_093118.tar.gz

    inspection-report-20210521_093759.tar.gz

    The attached inspection reports are from one such event today which has impacted two of the three nodes.

    Any ideas where to look?

    inactive 
    opened by CharlesGillanders 35
  • Add Rook addon

    Rook (https://rook.io/) is a platform for adding CRDs and Operators to Kubernetes to provision various types of Cloud-Native distributed storage systems.

    It would be awesome if we could enable it in microk8s with a simple script (microk8s.enable rook).

    I think a lot of people would find it very interesting to experiment with various storage systems and DBs on their local machine to then test them in other environments later. I also think that microk8s has a unique position of offering new and useful k8s tools like istio, jaeger, etc so people can discover new things.

    For implementation, it seems like

    1. rbd (RADOS block device), a kernel module needed by Rook, is supported from kernel ~3.10 up, but a CephFS storage backend needs >4.17. All we'd need to do is sudo modprobe rbd.
    2. Rook also needs a permissive PodSecurityPolicy to run properly. This guide seems to have the best information on how to do that with microk8s; we would just need to add these scripts to the core. Section 8 of the guide shows configuring a relaxed PSP for one namespace, which we could do for the Rook namespace we create. A rough sketch follows this list.
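
    A minimal sketch of what such an enable script might do (hypothetical; the pinned version and manifest URLs are assumptions based on the upstream Rook examples):

    #!/bin/sh
    # Hypothetical 'microk8s enable rook' sketch, not a shipped addon.
    set -e

    # Load the kernel module Rook's RBD-based block storage needs.
    sudo modprobe rbd

    # Apply the upstream Rook operator manifests (version pinned; assumed tag).
    ROOK_VERSION=v1.10.0
    microk8s kubectl apply -f "https://raw.githubusercontent.com/rook/rook/${ROOK_VERSION}/deploy/examples/crds.yaml"
    microk8s kubectl apply -f "https://raw.githubusercontent.com/rook/rook/${ROOK_VERSION}/deploy/examples/common.yaml"
    microk8s kubectl apply -f "https://raw.githubusercontent.com/rook/rook/${ROOK_VERSION}/deploy/examples/operator.yaml"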

    microk8s is without a doubt the best local k8s environment of them all, so thanks for all the hard work!

    I'd love any and all feedback on this, and I'd be happy to start working on it if I got a sense that anyone else would find it useful!

    kind/feature inactive 
    opened by alexkreidler 34
  • TaintManagerEviction - Pod gets a new IP one to a few times a day to a few days apart - preceded by Warning Unhealthy pod/calico-kube-controllers

    I've asked on stack overflow and no one can help. So treat this as a request to enhance the document. 25% chance there is a bug in microk8s.

    Microk8s relaunches a pod and gives it a new IP address every 8-24 hours. The pod does not receive or send traffic at the time of being recreated, nor does it generate any logs before microk8s relaunches it. microk8s kubectl logs <podname> --follow does not show any logs after the experiment starts; it just stops in the middle of the night.

    microk8s kubectl logs <podname> -p only shows normal operation before the pod is relaunched by microk8s.

    microk8s kubectl get pods reports an AGE for the pod as if it was not restarted.

    The same docker image runs for months on Docker without any crashes.

    I've searched through the inspect logs and do not understand them or where to focus, so I'm effectively blind. This blind man speculates that IP leases set up by microk8s expire and microk8s relaunches the pod so it uses the new IP.

    Just in case it was related to the Ubuntu 18 PC, I reformatted the drive and installed Ubuntu 20 and microk8s, but the problem persists.

    Edit, Jan 5, 2021: microk8s kubectl get events mostly returns "No resources found in default namespace" but did report TaintManagerEviction; see the Jan 5 updates below.

    inspection-report-20210104_122244.tar.gz

    BTW, microk8s is way cool and easy to set up.

    opened by johngrabner 33
  • Cert-manager webhook is not pingable from the pod

    Summary

    Creating a ClusterIssuer (in a 3-node cluster) times out with the following error: Internal error occurred: failed calling webhook "webhook.cert-manager.io": failed to call webhook: Post "https://cert-manager-webhook.cert-manager.svc:443/mutate?timeout=10s": context deadline exceeded

    What Should Happen Instead?

    The ClusterIssuer request should complete and create an object. This worked in the case of a single-node cluster.

    Reproduction Steps

    1. ...
    2. ...

    Introspection Report

    Can you suggest a fix?

    Nope.

    I wonder if calico connectivity between 10.152.183.0 and 10.1.0.0 nets should be automatically provided on install.

    I further ssh-ed to a node and tried pinging the cert-manager webhook service:

    ping cert-manager-webhook.cert-manager.svc
    PING cert-manager-webhook.cert-manager.svc (10.152.183.186): 56 data bytes

    It hangs.

    Are you interested in contributing with a fix?

    opened by jsemohub 6
  • Pod stuck in `ContainerCreating` and reporting `TLS handshake timeout` when deployed in fresh multi-node cluster

    Summary

    After deploying a two node cluster, pods scheduled on the worker node are stuck in ContainerCreating.

     Events:
       Type     Reason                  Age                From               Message
       ----     ------                  ----               ----               -------
       Normal   Scheduled               102s               default-scheduler  Successfully assigned default/phpmyadmin-6c4dc6967d-k4v6v to kube-worker-1
       Warning  FailedCreatePodSandBox  81s                kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "9705456a848db7e2e12e323195b6eabda47fd5f144def34e156b362b79c236f0": plugin type="calico" failed (add): error getting ClusterInformation: Get "https://10.152.183.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": net/http: TLS handshake timeout
       Normal   SandboxChanged          13s (x4 over 81s)  kubelet            Pod sandbox changed, it will be killed and re-created.
    

    What Should Happen Instead?

    Pods should start running and be ready on the worker node.

    Reproduction Steps

    I use the following Vagrantfile to create two virtual machines for the controller and worker node.

    Vagrant.configure("2") do |config|
    
      config.vm.box = "ubuntu/jammy64"
    
      config.vm.provider "virtualbox" do |vb|
        vb.memory = 4096
        vb.cpus = 2
      end
      
      config.vm.define "kube-controller-1" do |controller|
        controller.vm.hostname = "kube-controller-1"
        controller.vm.network "private_network", ip: "192.168.60.100"
        controller.vm.provider "virtualbox" do |vb|
          vb.name = "kube-controller-1"
        end
      end
      
      config.vm.define "kube-worker-1" do |worker|
        worker.vm.hostname = "kube-worker-1"
        worker.vm.network "private_network", ip: "192.168.60.200"
        worker.vm.provider "virtualbox" do |vb|
          vb.name = "kube-worker-1"
        end
      end
    end
    
    1. Install microk8s on both machines using snap.

      sudo snap install microk8s --classic
      
    2. Enable dns and metallb addons on controller node.

      microk8s enable dns metallb:192.168.60.10-192.168.60.20
      
    3. Join worker node with controller.

      # Run on controller node
      microk8s add-node
      
      # Run on worker node
      microk8s join 192.168.60.100:25000/5f54b10eb07a459de66d40895918053d/2edc51feb413 --worker
      
    4. Enable Bitnami Helm repository on controller node.

      microk8s helm repo add bitnami https://charts.bitnami.com/bitnami
      
    5. Install phpmyadmin Helm chart.

      microk8s helm install phpmyadmin bitnami/phpmyadmin
      

    Introspection Report

    inspection-report-20230105_232440.tar.gz

    Are you interested in contributing with a fix?

    no

    opened by krakowski 0
  • Bump actions/download-artifact from 3.0.1 to 3.0.2

    Bumps actions/download-artifact from 3.0.1 to 3.0.2.

    Release notes

    Sourced from actions/download-artifact's releases.

    v3.0.2

    • Bump @actions/artifact to v1.1.1 - actions/download-artifact#195
    • Fixed a bug in Node16 where if an HTTP download finished too quickly (<1ms, e.g. when it's mocked) we attempt to delete a temp file that has not been created yet actions/toolkit#1278
    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    dependencies 
    opened by dependabot[bot] 0
  • Bump actions/checkout from 3.1.0 to 3.3.0

    Bumps actions/checkout from 3.1.0 to 3.3.0.

    Release notes

    Sourced from actions/checkout's releases.

    v3.3.0

    What's Changed

    New Contributors

    Full Changelog: https://github.com/actions/checkout/compare/v3.2.0...v3.3.0

    v3.2.0

    What's Changed

    New Contributors

    Full Changelog: https://github.com/actions/checkout/compare/v3.1.0...v3.2.0

    Changelog

    Sourced from actions/checkout's changelog.

    Changelog

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    dependencies 
    opened by dependabot[bot] 0
  • worker nodes can't support internalTrafficPolicy: Local

    Summary

    A Service with internalTrafficPolicy: Local does not work on nodes joined using the --worker parameter, even when endpoints for pods running on that same node show up in the service description.

    What Should Happen Instead?

    Pods that run locally on --worker nodes should be routable.

    Reproduction Steps

    1. microk8s join ... --worker
    2. have a service with internalTrafficPolicy: Local
    3. schedule pods to run on node
    4. iptables now shows a kube-proxy rule with the comment "no local endpoints" and a -j DROP

    Re-join the node without --worker, with everything else exactly the same, and it will work as expected.
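
    For reference, a minimal Service of the kind described above (a sketch; name, selector and ports are hypothetical) can be applied like this:

    microk8s kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: example-local            # hypothetical name
    spec:
      selector:
        app: example                 # hypothetical pod label
      internalTrafficPolicy: Local   # only route to endpoints on the receiving node
      ports:
        - port: 80
          targetPort: 8080
    EOF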

    Introspection Report

    Can you suggest a fix?

    Maybe kube-proxy needs a configuration change about "DetectLocalMode"

    opened by balboah 0
  • Added the CloudFormation template and updated tests

    Summary

    Added a CloudFormation template so that the resources required for EKSD testing are handled uniformly and cleanly.

    Changes

    Added a CloudFormation template so that the resources required for EKSD testing are handled uniformly and cleanly.

    Testing

    Manually tested by running test-distro.sh with required parameters.

    Possible Regressions

    Checklist

    • [x] Read the contributions page.
    • [x] Submitted the CLA form, if you are a first time contributor.
    • [x] The introduced changes are covered by unit and/or integration tests.

    Notes

    opened by berkayoz 0
Releases (v1.26)
  • v1.26 (Dec 12, 2022)

    Most important features in this release

    Partner and community addons

    The evolution of the addon ecosystem continues to strengthen MicroK8s. The following addons are new in the 1.26 release under the community repo:

    • ondat: Run stateful workloads at scale.
    • sosivio: Next Generation Kubernetes Security made easy.
    • gopaddle: Provision multi-cloud clusters, dockerise applications, deploy, monitor and build DevOps pipelines in a fraction of the time.
    • KWasm: Tooling for cloud-native WebAssembly.

    Core addons

    Core addons are Kubernetes services shipped with MicroK8s and supported by Canonical Kubernetes. MicroK8s 1.26 extends the core addon ecosystem with the introduction of MinIO: high-performance, S3 compatible object storage.

    Updates in detail

    Most important updates since the last release:

    • Kubernetes core services

    • Usability Improvements

      • Code quality improvements in the ClusterAPI providers thanks to @oscr
      • Removing the Calico interfaces when removing the snap
      • Introducing launch configurations for the strict snap
      • etcd upgraded to v3.5
      • CoreDNS uses the host’s resolv.conf to find the forward DNS servers
      • Fixed the dashboard-proxy command on Windows and macOS, thank you @doggy8088
      • Minor improvements to management of the 'microk8s' group, thank you @barrettj12
      • Improved search for kubelet tokens, thank you @ortegarenzy
    • Addon updates

      • OpenEBS addon updated to 3.3.x, thank you @zacbayhan
      • Improved observability for multi-node clusters, thank you @MrRoundRobin
      • K8s services alerting in the observability addon, thank you @dud225
      • Scheduler and controller prometheus scraping, thank you @plomosits
      • osm-edge version upgrade to v1.1.2 along with a new command microk8s osm, thank you @naqvis
      • Starboard addon renamed to Trivy, thank you @AnaisUrlichs
      • New addon, gopaddle. Try it with microk8s enable gopaddle-lite. Thank you @renugadevi-2613.
      • New minio addon, try it with microk8s enable minio
      • New ondat addon, try it with microk8s enable ondat. Thank you @hubvu
      • KWASM.sh addon, a container runtime for WebAssembly workloads, give it a try with microk8s enable kwasm. Thank you @0xE282B0.
      • New community addon sosivio, try it with microk8s enable sosivio. Courtesy of @DanArlowski and the sosivio team.
      • Istio upgraded to v1.15.3, thank you @Azuna1
      • NVIDIA GPU operator upgraded to 22.9.0

    Users following the latest stable MicroK8s track will be automatically upgraded to v1.26 in the next few days. Those who want to upgrade their existing clusters can follow the instructions in our docs. Remember to call sudo microk8s addons repo update <repo_name> on the addon repositories you would like to fetch updates for.

    For more information on MicroK8s consult the official docs or chat with us on the Kubernetes Slack, in the #microk8s channel!

    Source code (tar.gz)
    Source code (zip)
  • v1.25 (Aug 25, 2022)

    Most important features

    Strict confinement goes into general availability

    MicroK8s is delivered through Snaps and enjoys the benefits of updates and security features. Now we’re stepping even further in that direction by implementing strict confinement as a new availability channel. We’re delighted to now offer our users a confined Kubernetes experience that has restricted host system access and a more restrictive security posture. Try it out with:

    snap install microk8s --channel=1.25-strict/stable
    

    Shrinking snap size

    We’re constantly striving to improve our user experience, and part of this is giving you MicroK8s in the fastest way possible. To that end we’ve reduced our snap size by up to 25% to help you get your Kubernetes up and running faster than ever.

    Addons go from strength to strength

    With the introduction of core & community addon repositories, there are new observability, networking and security addons that are generally available in this release.

    Image side-loading support

    In order to facilitate offline deployments, faster start-up times and local development, we have introduced image side-loading support in MicroK8s 1.25.

    Power9 architecture support

    You asked for it and we listened. For our community that uses Power9-based machines for acceleration, security and data-intensive workloads, MicroK8s can now be operated on these systems.

    Updates since last release

    Most important updates since the last release:

    • Kubernetes core services

      • Kubernetes v1.25
      • Support for new architecture, Power9 (ppc64el)
      • Containerd upgraded to v1.6.6
      • Runc upgraded to v1.1.2
      • CoreDNS upgraded to v1.9.3
      • Dqlite upgraded to v1.11.1
      • CNI tools upgraded to v0.9.1
      • Helm v3.9.1 is now bundled as part of the snap
      • Flannel upgraded to v0.15.1
      • Calico updated to v3.23
      • Streamlined build process, resulting in a reduced size by about 60MB (230MB -> 170MB)
    • Usability Improvements

      • New microk8s images import and microk8s images export-local commands, allowing side-loading of OCI images across the whole cluster.
      • Extend the microk8s CLI with binaries found under $SNAP_COMMON/plugins/
      • New microk8s version command
      • The ingress addon creates an ingress class with name “nginx”, thank you @Orzelius
      • Hostpath provisioner updated to v1.4.0, now allows for setting the reclaim policy, courtesy of @jkosik, as well as specifying StorageClasses to point to configurable host paths, thank you @balchua
      • Support using a custom storage class for the registry addon, thank you @sudeephb
      • The dashboard addon creates a token for accessing it (microk8s-dashboard-token)
      • Check the correct file for AppArmor confinement, thank you @MFAshby
      • Improved kubelet token search, thank you @ortegarenzy
    • Addon updates

      • Prometheus addon is deprecated and replaced with observability addon
      • New addon: kube-ovn, try it with microk8s enable kube-ovn
      • New community addon: nfs, try it with microk8s enable nfs, thank you @jkosik
      • New community addon for open source mesh, try it with microk8s enable osm-edge, thank you @naqvis
      • Dashboard updated to v2.6.0, thank you @dud225
      • Updated tests for inaccel addon, thank you @eliaskoromilas
      • Portainer addon updated, thank you @balasu
      • NVIDIA GPU operator updated to v1.11.0
      • ArgoCD updated to v4.6.3, thank you @jkosik
      • Upgrade Multus CNI to 3.9.0 and support for arm64 architectures, thank you @dud225
      • Registry addon updated to 2.8.1, adding support for s390x and ppc64le architectures.
      • Updated Linkerd to v2.12.0, thank you @balchua
      • Updated Jaeger to v1.36, thank you @balchua
      • Updated Keda to v2.8.0, thank you @balchua
      • Updated MetalLB to v0.13.3, adding support for configuring address pools via CRD, thank you @balchua
      • Updated Knative to v1.6.0 available on arm64, s390x and ppc64el, thank you @csantanapr

    Users following the latest stable MicroK8s track will be automatically upgraded to v1.25 in the next few days. Those who want to upgrade their existing clusters can follow the instructions in our docs. Remember to call sudo microk8s addons repo update <repo_name> on the addon repositories you would like to fetch updates for.

    For more information on MicroK8s consult the official docs, and to contribute to the project, check out the repo at https://github.com/ubuntu/microk8s, or chat with us on the Kubernetes Slack, in the #microk8s channel!

    Source code (tar.gz)
    Source code (zip)
    images-amd64.tar (484.18 MB)
  • v1.24 (May 4, 2022)

    Most important updates since the last release:

    • Kubernetes core services

      • Kubernetes v1.24
      • Containerd v1.5.12, runc v1.0.3
      • Calico upgraded to v3.21.4
      • Read only kubelet port 10255 closed by default
      • Nginx Ingress controller updated to v1.2.0, thank you @balchua
      • CoreDNS updated to v1.9.0, thank you @balchua
      • Dqlite updated to v1.10.0, improved memory management
    • Usability Improvements

      • The control plane will not start automatically in low memory systems (less than 512MB of RAM)
      • Hostname resolution is now checked when nodes join a cluster
      • The microk8s add-node command now has optional yaml or json output
      • Updated LXD profile to work on the latest OS releases. Thank you @caleblloyd
      • Mayastor HA-storage option available with microk8s enable mayastor
      • microk8s reset refactored with improved output
      • Allow repositories with addons to be added at runtime
      • Addons can now be edited before they are enabled
    • Addon updates

      • Helm upgraded to v3.8.0, thank you @balchua
      • KEDA upgraded to v2.6.0, thanks to @balchua
      • Dashboard upgraded to v2.3.0, thank you @hryyan
      • Traefik updated to v2.5, thank you @miro-balaz
      • Install traefik via Helm, thank you @balasu
      • Install portainer via Helm, thank you @balasu
      • Updated hostpath-provisioner version. Please run microk8s disable hostpath-storage and then microk8s enable hostpath-storage if you run an old version of the hostpath provisioner.
        • Remove reliance on selfLink, which has been removed for Kubernetes 1.24+, thank you @chris-hamper
        • Fix non-root containers being unable to write to volumes
        • Ensure NodeAffinity rules are set for all PersistentVolumes
        • Support for s390x architecture
      • The Kubeflow and Juju addons have been removed. To install Kubeflow on MicroK8s, please see the Charmed Kubeflow docs.
      • The Ambassador addon has been removed.
      • New addon: ArgoCD. Try it with microk8s enable community; microk8s enable argocd. Thank you @dirien
      • New addon: StarBoard. Try it with microk8s enable community; microk8s enable starboard. Thank you @AnaisUrlichs

    Users following the latest stable MicroK8s track will be automatically upgraded to v1.24 in the next few days. Those who want to upgrade their existing clusters can follow the instructions in our docs.

    Note: In this release selfLink is removed from upstream Kubernetes, so if you run an old hostpath-provisioner (version 1.0.0) its pod will start crash-looping. If this is the case, please re-enable the hostpath-storage addon with microk8s disable hostpath-storage followed by microk8s enable hostpath-storage. When disabling the addon you will be asked whether you want to keep the already provisioned persistent volume claims.
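
    That is, on an affected node (the exact commands from the note above):

    microk8s disable hostpath-storage
    microk8s enable hostpath-storage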

    For more information on MicroK8s consult the official docs or talk to us on the Kubernetes Slack, in the #microk8s channel!

    Source code (tar.gz)
    Source code (zip)
  • v1.23 (Dec 8, 2021)

    Most important updates since the last release:

    • Kubernetes core services
      • Kubernetes 1.23
      • Kubernetes services profiling disabled by default
      • Events TTL set to 5 minutes
      • Improved dqlite stability and performance
      • For deployments on LXC, conntrack limits are not set, to improve compatibility
    • Usability Improvements
      • Option to add worker-only nodes. Use --worker in the microk8s join command
      • Improved microk8s join output, thanks @gkarthiks
      • Options to format the output of add-node, thanks @jlettman
      • Ignore unroutable DHCP failure addresses, thanks @erulabs
      • Fix warnings in build process and the addons dns and dashboard, thank you @MichaelCduBois
      • Pull introspection report out of the multipass VM when running microk8s inspect on Windows and Mac, thanks @farazmd
      • Registry configuration in containerd configuration now follows the new format described in the upstream docs. Thank you @BabisK
      • Fix typo in the output of MicroK8s installer, thanks @sfstpala
    • Addon updates
      • Nginx Ingress controller updated to v1.0.5
      • Metrics server updated to v0.5.2, thanks @balchua
      • Portainer will maintain its state while enabling/disabling it, thank you @balasu
      • The NVIDIA operator upgraded to v1.8.2, with enhanced MIG support.
      • Local registry updated to the latest upstream
      • Linkerd upgraded to v2.11.1, thanks @tobiasmuehl
      • Keda upgraded to v2.4.0, thanks @balchua
      • Jaeger operator upgrade to v1.28.0, thanks @balchua
      • OpenEBS v3.0 released, thanks @niladrih
    • New addons:
      • microk8s enable dashboard-ingress, thanks @jlettman
      • inaccel addon targeting FPGA acceleration. Thank you @eliaskoromilas

    For more information on MicroK8s consult the official docs or talk to us on the Kubernetes Slack, in the #microk8s channel!

    Source code (tar.gz)
    Source code (zip)
  • v1.22 (Aug 9, 2021)

    Most important updates since the last release:

    • Kubernetes core services
      • Kubernetes v1.22
      • Improve the performance and stability of dqlite
      • S390x support. Check out the 1.22/edge channel.
      • cgroupV2 support, courtesy of @tbertenshaw
      • Upgrade calico to v3.19.1. Thank you @balchua
    • New kata containers addon. Try it with microk8s enable kata.
    • Addon updates:
      • Nvidia operator v1.7.0 can now detect pre-installed drivers.
      • Kube-prometheus upgraded to v0.8.0. Thank you @balchua
      • Kubernetes dashboard upgraded to v2.2.0, thanks to @nbraquart
      • Upgrade linkerd to v2.10.2. Thank you @balchua
      • Upgrade the metrics-server to v0.5.0. Courtesy of @balchua
      • knative updated to v0.24, thanks to @saikiran2603
      • Cilium CNI updated to v1.10
      • Jaeger addon updated to v1.24, thanks @balchua
      • Istio addon updated to v1.10.3
      • New Elasticsearch and Kibana version, v7.10. Thanks @s12v
      • OpenEBS addon for ARM64. Thank you @balchua
    • Usability improvements
      • Use ClusterFirstWithHostNet as DNS policy for Traefik. Thank you @AlexGustafsson
      • Guards in Cilium clustering thanks to @Jorgeewa
      • OpenFaaS bug fixes, thank you @dsbibby
      • MicroK8s status yaml fixes. Thank you @krichter722
      • Improvements in the microk8s wrapper, thank you @shoce
      • Attempt to configure UFW for calico CNI
      • Seamless snap refreshes. Containers do not restart on snap upgrades.

    Users following the latest stable MicroK8s track will be automatically upgraded to v1.22 in the next few days.

    For more information on MicroK8s consult the official docs, and chat with us on the Kubernetes Slack, in the #microk8s channel!

    Source code (tar.gz)
    Source code (zip)
  • v1.21 (Apr 9, 2021)

    Most important updates since the last release:

    • Kubernetes core services
      • Kubernetes v1.21!
      • Major stability and performance dqlite fixes
      • Kubelite, single go binary for all Kubernetes services
      • Containerd updated to v1.4.4, thanks @tbertenshaw
      • CNI plugins updated to v0.8.7, thanks @angelnu
    • Storage support for multi-node clusters
      • New OpenEBS addon, try it with microk8s enable openebs, courtesy of @balchua
      • CSI storage plugins improved support
    • New OpenFaaS addon courtesy of @LucasRoesler. Try it with microk8s enable openfaas
    • Addon updates:
      • GPU support is now offered via the NVIDIA operator; make sure you check out the known issues.
      • Linkerd updated to v2.9.4. Thank you @balchua
      • CoreDNS updated to v1.8.0. Thank you @balchua
      • KEDA updated to v2.1.0. Thank you @balchua
      • Jaeger updated to v1.21.3. Thank you @balchua
      • Prometheus updated to v0.7.0. Thank you @balchua and @tbertenshaw
      • Ingress updated to v0.44.0. Thank you @balchua
      • Fluentd updated to v3.1.0. Thank you @balchua
      • Knative updated to v0.21
      • Helm upgraded to 3.5.0
    • Usability improvements
      • Joining nodes will now verify the peer they contact before forming the cluster
      • microk8s kubectl apply -f now works with local files on Windows and MacOS
    • Other noteworthy enhancements
      • Inspection script detects vxlan.calico UFW rule, thank you @petermetz
      • Fix in traefik RBAC rules, courtesy of @lazyoldbear
      • Update to support distributions with iptables-nft
      • Dashboard and metrics server fixes for multi-os clusters. Thank you @luciimon
      • Remote builds are now supported. Try building the snap with snapcraft remote-build --build-on=amd64,arm64. Thank you @angelnu
      • Improved error messaging and build instructions. Thank you @galgalesh
      • Improvements in the installation path. Thank you @balchua and @barosl

    Users following the latest stable MicroK8s track will be automatically upgraded to 1.21 in the next few days.

    For more information on MicroK8s consult the official docs, and chat with us on the Kubernetes Slack, in the #microk8s channel!

    Source code (tar.gz)
    Source code (zip)
  • v1.20 (Dec 11, 2020)

    Most important updates since the last release:

    • Kubeflow updated to v1.1
    • Make MicroK8s failure domain aware
    • Addons can now use --foo arguments
    • New addon: KEDA. Thank you @balchua
    • New addon: Portainer. Many thanks @balasu
    • Try out Traefik v2.3 ingress with microk8s enable traefik. Thanks @balasu
    • Prometheus monitoring available for ARM64, thank you @balchua
    • Linkerd updated to v2.9.0 and available for ARM64, thank you @balchua
    • Ingress updated to v0.35.0, thank you @balchua
    • Cilium updated to v1.8.3, thank you @balchua and @joestringer
    • Juju updated to 2.8
    • Option to set forward DNS servers when enabling DNS. Thank you @RiyaJohn
    • --help argument in microk8s inspect, thank you @bowers
    • Fix race condition in setting the registry configmap, thank you @nicks

    Users following the latest stable MicroK8s track will be automatically upgraded to 1.20 in the next couple of days.

    For more information on MicroK8s consult the official docs, and to contribute to the project, check out the repo, or chat with us on the Kubernetes Slack, in the #microk8s channel!

    Source code (tar.gz)
    Source code (zip)
  • installer-v2.0.1 (Sep 8, 2020)

  • v1.19 (Aug 28, 2020)

    Most important updates since the last release:

    • High Availability. Try it by clustering at least three nodes.
    • Improved microk8s status output.
    • New ambassador addon, courtesy of @inercia.
    • Multus support via a new addon. Thank you @apnar.
    • New host-access addon to allow you to access host services from pods, courtesy of @iskitsas.
    • The microk8s.dashboard-proxy command makes it easier to access the dashboard.
    • The microk8s.dbctl command allows for backing up the cluster’s datastore.
    • Static token file used for admin authentication.
    • When adding a node you can now provide your own token. You can also set the time a join token expires. Thank you @balchua.
    • You can now set the registry size while enabling the addon, courtesy of @cyril-corbon
    • Addition of the ingress controller ConfigMaps to support ingress of TCP and UDP. Thank you @trulede.
    • Set the TLS certificate when enabling ingress with microk8s.enable ingress:default-ssl-certificate=namespace/secretname. Thank you @marcobellaccini.
    • Ingress images updated to v0.33. Thank you @balchua.
    • “microk8s.ctr” detects the right snapshotter. Thank you @hpidcock.
    • kubelet comes with token auth enabled so prometheus can monitor it. Thank you @double73.
    • Istio updated to v1.5.1, thank you @nepython for your effort here.
    • The dashboard addon deploys only the dashboard v2.0.0 and the metrics server. Thank you @balchua.
    • Containerd updated to v1.3.7. Thank you @balchua.
    • Dashboard image pull policy set to default (ifNotPresent), thank you @biiiipy.
    • Linkerd updated to v2.8.0. Thank you @balchua.
    • MetalLB updated to v0.9.3 and now supports multiple ranges and CIDR notation. Thank you @siddharths2710 and @balchua.
    • Fluentd updated to v3.0.2, courtesy of @balchua.
    • Prometheus updated to v2.20.0 as part of kube-prometheus v0.6.0. Thank you @balchua.
    • Added local registry discovery support, courtesy of @nicks.

    Users following the latest stable MicroK8s track will be automatically upgraded to 1.19 in the next couple of days.

    For more information on MicroK8s consult the official docs, and to contribute to the project, check out the repo at https://github.com/ubuntu/microk8s, or chat with us on the Kubernetes Slack, in the #microk8s channel!

    Source code (tar.gz)
    Source code (zip)
  • v1.18 (Mar 26, 2020)

    Most important updates since the last release:

    • Installers for MacOS and Windows
    • Kubeflow 1.0 addon
    • Added new snap interface enabling other snaps to detect MicroK8s’ presence.
    • CoreDNS addon upgraded to v1.6.6, thank you @balchua
    • New Helm 3 addon, available with microk8s helm3, thanks @qs
    • Ingress RBAC rule to create configmaps, thank you @nonylene
    • Allow microk8s kubectl to use plugins such as krew. Thank you @balchua
    • microk8s reset will attempt to disable add-ons. Thank you @balchua
    • etcd upgraded to 3.4 by @lazzarello
    • Juju has been upgraded to 2.7.3 and is now packaged with the snap
    • On ZFS, the native snapshotter will be used. Thank you @sevangelatos
    • Improved microk8s status output. Thank you @balchua
    • Hostpath can now list events when RBAC is enabled. Thank you @Richard87
    • Certificates are set to have a lifespan of 365 days
    • Linkerd updated to v2.7.0. Thank you @balchua
    • knative updated to v0.13.0.
    • Fix in fetching more stats from cAdvisor. Courtesy of @nonylene
    • Fix enabling add-ons via the rest API. Thank you @giorgos-apo
    • Fix metallb privilege escalation on Xenial. Thank you @davecahill

    Please, consult the official docs at microk8s.io for installation instructions based on your platform or chat with us on the Kubernetes Slack, in the #microk8s channel!

    Source code (tar.gz)
    Source code (zip)